XPages performance: pro tips

The ever-vigilant David Leedy pointed me to a LinkedIn conversation about XPages performance tips this evening that led me down a particularly interesting rabbit hole.

Those of you who’ve been following the XPages story for a while know about my disdain for SSJS. What started as a criticism of performance later evolved into a criticism of programming practices. However, it’s worth noting that in the meantime, one of the principal performance criticisms has been addressed. You see, SSJS is no longer strictly an interpreted language. It has its own version of expression compiling and tree-caching at the application level.

I don’t know what version this was introduced in. It may have been there all along. It’s certainly not something that’s easily discovered. When I found out how the cache is defined, I googled the xsp.properties setting (ibm.jscript.cachesize) and found 3 references to it: 1) The XPages Portable Command Guide; 2) an XPages performance presentation from IBM Connect 2013; and 3) a tweet from Bruce Elgort. And Bruce rightly asks, “how come nobody is talking about this?”

I have your answer, Bruce: nobody really knew about it. Oh, we might have gone to the session, and we might own a copy of the XPCG, but when you add an enhancement like parsed expression caching to what everyone knows is an interpreted language, you don’t put it in Appendix Q of Redbook 13948. You write it in the sky in 40-foot burning letters and fly supersonic jetfighters through it so no one can look away.

So anyway, yeah. The point is that the complaint about SSJS, that each time a ValueBinding is evaluated the expression is re-parsed, is false. I don’t know when it went from being true to false. Maybe it was false all along. But the thing that makes it false has been a closely guarded secret until Domino 9.

Now with 9, you can open the Xsp Properties element in your template, go to the Persistence tab, and change the “Compiled JavaScript cache size” setting. It defaults to 400.

Let us note that it does not say “Compiled SERVER JavaScript cache size.” We have to infer that’s the JavaScript being dealt with here. The tooltip help on the setting is quite interesting. It even brags about the caching using WeakReferences so the memory can be reclaimed if Java needs it.
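For reference, here is what the same setting looks like when expressed directly in a plain xsp.properties file (the property name comes from the references above; 400 is the default shown in Designer):

```properties
# Compiled SSJS expression cache size (default is 400)
ibm.jscript.cachesize=400
```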

It also answers a question I’ve had for a long time: “why does the generated Java class for XPages include the XPath expression for a value binding along with the actual SSJS source code?” Now I suspect that’s the key used for the cache.

So, in theory — and I’d love it if someone with authoritative knowledge chimed in here — each time an SSJS expression is encountered, once it’s parsed, the resulting AST tree is retained in a cache so that the next time the same expression is processed, there is no need to parse it again. The AST itself can simply be executed against updated values. This is a wonderful enhancement and it leads to some interesting conclusions…

  1. If you can write fewer than 400 total SSJS expressions in your application, it’s going to run a lot faster.
  2. If you write more than 400 total SSJS expressions in your application, it’s still going to optimize the ones used the most.
  3. IBM continues to suck at documenting important features. And they even failed at educating IBM champions who globally promote XPages, since out of all the ones I know, only Bruce Elgort has said anything about this publicly.
  4. Nobody external to IBM has published SSJS vs. beans vs. standard EL vs. Java benchmarks that take this concept into consideration. If JavaScript ASTs are cached, then #{javascript: return true;} will be really fast. However #{javascript: var foo = someComplexFunctionCalledFromALibrary(); return foo.someOtherComplicatedFunction();} might not be so fast. We simply don’t know the edges of what’s fast and what’s not.
  5. We also don’t know what happens when you put 10,000 SSJS expressions in an application. Is that a lot? I don’t know. I do know that teamrm9.ntf has 1160 matches for the string “#{javascript:”, and 355 lines of SSJS in libraries. Is this the yardstick that IBM is using?
  6. When you think about what has been tested, it barely scratches the surface. Benchmarking DataSources vs. JavaScript vs. DataContexts vs. Beans vs. static Java methods vs…. the list is endless. We don’t even know whether there are performance differences in EL expressions that discover beans or DataObjects or Maps or Lists. This is a huge knowledge gap.
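To make the theory concrete, here is a toy model of a parse-once expression cache – entirely hypothetical, with a tiny capacity of 4 instead of 400, and `new Function()` standing in for the SSJS compiler building an AST:

```javascript
// Toy model of a parse-once expression cache with LRU-style eviction.
// Entirely illustrative: the real cache lives inside the XPages runtime;
// parse() here is a stand-in for the SSJS compiler producing an AST.

const CACHE_SIZE = 4; // the runtime default is 400

// A Map iterates in insertion order, which makes least-recently-used
// eviction easy: the first key is always the coldest entry.
const cache = new Map();

let parseCount = 0;
function parse(source) {
  parseCount++;
  // Stand-in for compiling to an AST: wrap the source text in a function.
  return new Function("ctx", "with (ctx) { return (" + source + "); }");
}

function evaluate(source, ctx) {
  let compiled = cache.get(source);
  if (compiled) {
    cache.delete(source); // re-inserted below so it becomes most recent
  } else {
    compiled = parse(source);
    if (cache.size >= CACHE_SIZE) {
      cache.delete(cache.keys().next().value); // evict the coldest entry
    }
  }
  cache.set(source, compiled);
  return compiled(ctx);
}
```

Evaluating the same source text twice parses only once; the second hit executes the cached function against new values, which is exactly why identical expressions reused across many pages are cheap.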

Now… all that being said, I’d like to address the title of this post. Because you see, none of what I’ve mentioned so far has anything to do with the performance of XPages applications. Oh sure, if you’ve optimized the snot out of your application and written a brilliant execution of your spec, then yes, I imagine the difference between cached SSJS trees and EL reflection calls against explicit getters will improve your CPU time. But I seriously doubt you’ve dealt with all the other areas in which you can make performance improvements.

That’s not a reflection on you. I haven’t dealt with them either, because it’s really hard to focus on things that matter instead of the things that are easy.

If you want to write fast XPages applications, here’s how you do it…

  1. Minimize I/O between the client and the server. So avoid POSTs when GETs will do; encourage caching of your resources; use Gzip and resource optimizations like CSS sprites and the automatic aggregation in 9.0; be rigorous about the scope of your partial refresh & execs.

    Reducing the amount of network traffic triggered by your application is job #1. The POST vs. GET part is one of my favorite changes.

  2. Minimize I/O between the XPage and the underlying NSF. Java API access to NSF data is slow. Some parts of XPages do it faster than the regular API, but it’s still slow compared to expectations on other platforms. And just like Notes apps, when you can take advantage of the faster parts of NSF, you can make big gains. ViewNavigators tend to be faster than ViewEntryCollections tend to be faster than Documents. If you can do without Views altogether, and use keyed UNIDs and MIMEBean objects, you can save loads of time.

    But even before that, simply putting NSF-based data into scoped variables helps a lot. If you want to display a list of sales territories in a combobox, you probably only need to do your @DbLookup code once and then stick the result in an Application scope variable. If you want to grab the current user’s email address from the Directory, put the result in a Session scope variable so you don’t have to go back to the source data each time.
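    A sketch of that pattern, with hypothetical names – a plain object stands in for applicationScope, and lookupTerritories() is a stub for the real @DbLookup round trip:

```javascript
// Sketch of the scoped-cache pattern. "applicationScope" is a plain object
// standing in for the real XPages scope map, and lookupTerritories() is a
// stub for an expensive @DbLookup against the NSF.

const applicationScope = {};

let lookupCount = 0;
function lookupTerritories() {
  lookupCount++; // in a real app, this is a round trip to the NSF
  return ["East", "West", "North", "South"];
}

function getTerritories() {
  // Hit the back end only once; every later call is a cheap cache read.
  if (!applicationScope.territories) {
    applicationScope.territories = lookupTerritories();
  }
  return applicationScope.territories;
}
```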

  3. Write less code. As it turns out, this is just good general software practice, but it applies to performance as well. The more code you create, the more work the platform has to do to track it and keep it optimized. Much of this has to be done looking down seldom-used paths. If you can accomplish your goals by writing less overall code, you stand a much greater chance of being able to tune that code both manually and through compilers.

    In addition, think about my earlier advice to run XPages apps from a single NSF for the whole server. If you have common images and script files from the template, then you’ll be generating a single, cacheable URL for all your apps. If you have common expressions used all over your apps, you’ll get the advantage of the AST caching. If you have common beans for your apps, you’ll be constructing and initializing them only once.

  4. Learn. Pay attention to what’s happening in the world with regards not just to XPages performance, but with Java as a whole. With NSF. With wireless and cellular networks. With solid state drives. With OpenNTF. With Eclipse and Apache and Google Code and GIT and Linux. Run a local dev server so you can do your own benchmarks. Educate yourself on the state of the art in high-speed persistence patterns. Learn about graph databases. Learn about JSON. Learn about REST services.

    Because ultimately, the best way to make sure your XPages applications go faster is the same way to make sure your racecar goes faster: improve your skills.

21 comments on “XPages performance: pro tips”
  1. When I tested return true over 50,000 rows, calling a function in an external SSJS library was the slowest. It’s a reason I heavily avoid SSJS script libraries. Having said that, would I recommend to a customer that they pay me to rewrite an app I delivered in 8.5.1 using SSJS so that it uses Java because of the performance benefits? I could recommend it, but I’d say there are better uses of their money that would impress them more.

  2. MarkyRoden says:

    To expand on Nathan’s #1 performance tip (which is also a UX tip) – I have noticed that using partialRefresh to get server-side variables into the client is a heavily used and badly abused practice.

    If you need to get a single value from the back end and you HAVE to use a partialRefresh – be judicious with it. The value can be retrieved with one field – not with a whole panel being refreshed.


    consider using a JSONRPC call to get what you need/update what you need – one small post and one small response is also considerably easier on the user experience.

    Learn how to execute an ajax call manually and not rely on the partialRefresh to get what you need.

    Bottom line – if you don’t need to refresh a DOM element on your page – don’t – and plan accordingly when you are planning your application.

  3. MarkyRoden says:

    Another separate performance Tip:

    If you are using XPiNC and connecting to a server to get data – use a REST service.

    xAgents suffer because getNextEntry has to go back and forth from the client to the data source (server) for every record you are looking at – that is SLOW. A REST service sends all the data with one request.

    If your extlib REST service doesn’t give you the data in the format you need – reformat it in the browser using CSJS – not server-side in the xAgent. Or write a custom REST service.

    Bottom line – minimize the number of trips to the server to get the data you need in XPiNC

  4. Mark Barton says:

    Personally with the power of client side frameworks like angularjs I would consider going down the SPA route and just leveraging the Domino Security / Custom REST API route.

    It gives you more options from an architecture point of view and, let’s be honest, the future is looking bright for these frameworks.

  5. V.L. Watson says:

    It was requested that another tip be added.

    If you use other people’s code, make sure they are people who are OCD about performance!!!

  6. Philippe Riand says:

    The JS cache has been there since forever. And the key value of the cache is not the XPath expression of the value binding, but the whole script itself. Thus, the same expression used in multiple pages (even cross-NSF) appears just once in the cache. Well, assuming that it is exactly identical, without a single extra space.
    The cache also uses an MRU algorithm, so the most recently used JS expression will be discarded last. Every time a JS expression is used, it is pushed up the stack.
    By the way, the XPath is for the debugger, so an expression can be localized in the source at runtime. And you’re right, we compile JS just once and store the AST tree in the cache. A while ago, I compared this tree execution against Rhino, which compiles to native byte code. The tree was actually as fast as the byte code, mostly because of the dynamic nature of JS.
    To be complete, the JS libraries are also loaded and compiled once in memory. They stay there until the host app times out. So they are not compiled every time you import them in your pages.
    When we profiled the teamroom using a low level Java profiler, the time consumed by the JS compiler was negligible with the default cache size of 400.
    To your point 2., I would add: use managed beans to cache your data. This way, the managed bean can transparently discard the cache when necessary, and the calling code doesn’t have to worry about it. A great example for this are the userBean/peopleBean from the Extension Library.
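    A minimal sketch of that “the bean manages its own cache” idea – the names and the time-based invalidation policy here are invented for illustration; a real bean might instead discard its cache when the underlying documents change:

```javascript
// Minimal sketch of a self-invalidating cached lookup. The 10-minute TTL
// and all names are invented; the point is that callers never have to know
// when, or whether, the cache was refreshed.

function makeCachedLookup(loader, maxAgeMs) {
  let value;
  let loadedAt = -Infinity; // forces a load on first access
  return function (now) {
    now = now === undefined ? Date.now() : now; // injectable clock
    if (now - loadedAt > maxAgeMs) {
      value = loader(); // transparently refresh stale data
      loadedAt = now;
    }
    return value;
  };
}

let loads = 0;
const getUserList = makeCachedLookup(function () {
  loads++; // stands in for an expensive directory lookup
  return ["alice", "bob"];
}, 10 * 60 * 1000); // 10-minute freshness window
```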

    • thentf says:

      Thanks for stopping by, Phil. It sounds like the cache is keyed from a hash of the source block in that case. Good to know.

      I feel pretty guilty about perpetuating myths regarding SSJS for a long time. I really wish I’d known about this caching thing back in 8.5.1. You’d think you would have corrected me about it while we were presenting together at Lotusphere! πŸ™‚

    • thentf says:

      Oh, and you say it caches across NSFs… does that mean when I increase the value in my xsp.properties on one NSF, it’s actually increasing for the whole server?

      • Philippe Riand says:

        No, the property value can only come from the global xsp.properties. It has no effect when set at the NSF level. At one time, I wanted to add a capability to the XPages Toolbox for monitoring how the cache behaves, but never got the time…

      • thentf says:

        Phil, you MIGHT want to check with the team, then, because as I mentioned above, there’s a control to set it PER NSF in Designer now. It would be a shame if that setting didn’t have any effect. πŸ™‚

  7. I actually wrote the [short] detail in XPCG on those two runtime properties and was totally unaware of your queries in this area – forgive me for not knowing of your “mythical” quest, otherwise an exacting explanation would have been forthcoming earlier – being so familiar, sometimes we assume too much about the finer details and features of the XPages Runtime. Nonetheless, the script caches are low-level and have been there from “day dot”, and do exactly as specified per XPCG – versus a dev’s app code, which is higher-level and always there regardless of version or any possible runtime aids.

    The trade-off between the two is huge, whereby quite frankly misunderstanding or blatantly ignoring the XPages Request Processing Lifecycle in the first instance within custom app dev code has a more profound and detrimental impact on an application, regardless of script cache configuration.

    One should only be concerned about the intricacies of the script caches beyond the default configuration when absolutely sure that optimal use has been made of Partial Refresh AND Partial Execution AND minimal Lifecycle Phase execution, AND minimal JVM retained memory usage has been achieved for a given application. If this is the case, then I highly recommend utilising a tool such as the Eclipse Memory Analyzer to fully understand JVM memory usage periodically over time for a given working set, in order to fully optimise the script cache sizes relative to average stress load.

    • thentf says:

      Tony, can you elaborate on what you mean by this? “…whereby quite frankly misunderstanding or blatantly ignoring the XPages Request Processing Lifecycle in the first instance within custom app dev code…”

      I’m quite familiar with the XPages Request Processing Lifecycle. But I’m not at all clear on how one would “blatantly ignore” it in app code.

      And to be fair, I don’t know anyone, anywhere in the development community who is concerned with *tuning* the ibm.jscript.cachesize parameter. It is simply a revelation for almost all XSP developers that such a cache exists AT ALL. Once we know it’s there and that this is why SSJS is able to run faster than a postman in a coma, then we can make more sensible decisions on when to leverage it and when to avoid it.

      By the way, as long as we’re on the topic of SSJS and the XRPL, how about some SSJS functions to reveal what phase is being executed at the time? πŸ™‚

        • That’s it… you can’t and shouldn’t ignore it! There are lots of apps out there performing poorly and generally feeling clunky due to sub-optimal usage of the XRPL… and in some cases being knowingly left as-is due to cost and time factors to optimize correctly – hence “blatantly ignoring” the real cause whilst getting distracted by other settings etc. As you already pointed out… improving skills is key… and I’d encourage devs to really get to grips with the XRPL above everything else in the first instance πŸ™‚

        On the SSJS XRPL API front… I’d be very reluctant to introduce methods in this highly sensitive area of the XPages Runtime. We want request processing to occur with minimum overhead. The suggested methods would also be extreme fringe cases – not used for 99.9% of application use cases – most likely only for debugging tasks or the like.

        Instead I’ve just posted a custom application level approach to achieving this (especially just for you πŸ™‚ ) on XSnippets:


  8. Hmm, I already have these options in my 8.5.3 Designer:

    • A testament to the obscurity of the jscript and xpath cache settings is the fact that the Designer Application Properties editor gained input fields for these settings back in v8.5.3 (which inadvertently give the impression of per-application usage), but this Designer anomaly has gone undetected since! The Designer team will be removing these two settings from the Application Properties editor in the next applicable release – an SPR has now been logged – thank you for calling this out!

      And just to confirm, both properties are explicitly server-level and should be declared within the global xsp.properties file under the /data/properties directory if values other than the defaults are required. FYI, Phil alluded to a cache monitor capability – this is something I will be prototyping that hopefully could be integrated within the XPages Runtime in a “next” release, and surfaced via the XPages Toolbox. This would give “time/space/occurrence” insight into the behavior of the script cache so fine-tuning would be decision-based.

      Finally, a couple of my own pro tips over and above yours, Nathan:

      #1 – Understand the XPages Request Processing Lifecycle (aka XRPL)… GET vs POST, loaded vs rendered / $/#, broadening and narrowing of rendering and execution using Partial Refresh and Partial Execution, Object Scopes / Persistence, immediate/disableValidators… and so on. In the OpenNTF XPages Masterclass project, I’ve described this major component of the XPages Runtime as the “center of gravity” for every request where everything else (as in other performance/scalability related features) are hinged around the lifecycle. Applying a good understanding of the XRPL can mean the difference between response times of milliseconds vs seconds, and memory consumption of bytes vs megabytes.

      #2 – Use small fragments of inline SSJS or EL calling into SSJS Libs, Custom Java, or Managed Beans (the speed of execution for each approach depends on several factors, including the type of the target object – function, static/dynamic object, parameters, return types etc. – this is extremely low-level and determined by the dynamic runtime type information and compiler… ultimately the differences are nominal for typical use), A) because it makes the code easier to maintain due to loose coupling per an MVC design pattern, and B) according to three simple CPU/memory-based principles:

      SSJS Libs are loaded and compiled, then stored in-memory within the XPages Runtime for subsequent execution by the owning application as and when required until the owning application time-out is exceeded and the application is discarded. Technically this means that SSJS Lib translation/parsing/compilation processes must occur in the first instance before resulting in a compiled representation, therefore using CPU, before using JVM heap space to store the compiled representation for a “non-garbage collector” defined period of time.

      Custom Java classes, regardless of being used directly or from within libraries, are already precompiled, therefore using less CPU during initial loading. JVM heap space will still be used thereafter, but the JVM garbage collector is responsible for discarding objects per its own schedule and reference-claiming policies, as determined by any persisted objects that are keeping such Custom Java objects alive in a retained set. You are responsible for object construction as and when required – which can use more CPU if an object needs to be frequently constructed during application usage.

      Managed Beans, regardless of using direct classes or from within libraries, are also precompiled, therefore using less CPU during initial loading. Again, JVM heap space will still be used thereafter, but the object will be automatically “managed” by the XPages Runtime bean container. This means that the bean will be automatically instantiated by the bean container when the first occurrence of a reference is encountered during a request and persisted into the desired scope accordingly for subsequent use thereafter.

      One note of caution… if you implement Custom Java or Managed Bean objects that cache values and/or other objects, be conscious of JVM heap space usage for your objects, particularly if caching many entries into Collections. For example, for demonstration purposes I deliberately configured a Managed Bean (searchBean) to be in view scope, along with a Persistence setting of “Save the current page in memory”, within the OpenNTF XPages Insights into Big Data project. When this bean performs a search across multiple databases with no limit on maximum documents, it quickly consumes a lot of JVM heap space, as it is caching search results into a Collection of JsonJavaObjects. The solution here is to cache a smaller object (containing only UNID and database key perhaps), but also to use a Persistence setting of “Save Pages to Disk”. This will help make the application more performant and scalable. The point here is generically applicable – use memory sparingly in your custom objects, analyse it using heapdumps, and multiply it by expected numbers of concurrent users etc. – this way you will understand the limits of your application!
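      As a reference point, a view-scoped bean like the searchBean described here is declared in faces-config.xml along these lines (the package and class name below are placeholders):

```xml
<managed-bean>
  <managed-bean-name>searchBean</managed-bean-name>
  <!-- placeholder class name for illustration -->
  <managed-bean-class>com.example.search.SearchBean</managed-bean-class>
  <managed-bean-scope>view</managed-bean-scope>
</managed-bean>
```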

      #3 – Use the Dynamic Content control (and other dynamic type controls InPlace Form etc) whenever possible. A lot of the intricacies of the XRPL are built into these controls to deal with loading / discarding portions of the component tree, along with Partial Refresh and Execution.

      #4 …

  9. Howard says:

    Great post by Nathan and a great discussion so far! Thanks for the enlightenment everyone.

  10. […] I asked some people who are way smarter than I and Nathan Freeman pointed me to his article about XPages Performance Pro Tips and told me he expected that the prototype would actually be at the server level and he was […]

  11. […] Freeman wrote a remarkably thoughtful and well presented article on performance pro tips and I finally determined to write a constructive, helpful, positive on article on how partial […]

