Help

Built with Seam

You can find the full source code for this website in the Seam package in the directory /examples/wiki. It is licensed under the LGPL.

SeamFramework.org runs Seam 2 on JBoss AS. This page is a working document where we collect TODOs and other items related to tuning the performance and scalability of the website. You will, most likely, face similar issues and challenges when you want to scale out your own Seam application in production.

TODOs are marked with LOW, MEDIUM, or HIGH PRIORITY. If you complete a TODO, remove it and add your solution and findings to the text of the section. Feel free to create new TODOs and sections.


Performance of requests, reducing the time it takes to get a response

This section covers what has already been done, and what we can still do, to lower the average response time of a client GET request for a typical wiki page (like this one). This involves several stages of caching as well as optimization of processing and database queries.

Wiki text parser is slow

It's actually not that slow, but if you call it 50 times to render a single page with 49 comments/forum replies at the bottom, most of the processing time is spent here. We also need to use c:forEach to render wiki text iteratively, because we build the component tree dynamically with a custom Facelets handler that understands wiki plugins.

This is now low priority, because we are caching all rendered comments and forum replies with the Seam page fragment cache. This considerably speeds up rendering pages.

  • LOW PRIORITY: Investigate with a profiler whether the time is spent creating the JSF component tree or running the ANTLR parser. If it's the component tree, we need to stop using c:forEach on comments/forum replies and write a version of wiki:formattedText that is not backed by a custom Facelets handler (no plugins for comments then, but those are disabled anyway). If the ANTLR parsing process is slow, uhm, then I don't know what to do :)
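The effect of caching rendered comments can be sketched generically. This is an illustrative model, not Seam's actual page fragment cache API: rendered HTML is keyed by comment id plus version, so the expensive wiki-text parse runs once per comment version instead of once per page view.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of a rendered-fragment cache (not Seam's actual API):
// rendered HTML is keyed by comment id plus version, so the expensive
// wiki-text parse runs once per comment version, not once per page view.
public class RenderedFragmentCache {

    private final Map<String, String> cache = new ConcurrentHashMap<>();
    final AtomicInteger parserCalls = new AtomicInteger(); // instrumentation for this sketch

    // Stand-in for the real ANTLR-based wiki text parser.
    private String parseWikiText(String wikiText) {
        parserCalls.incrementAndGet();
        return "<p>" + wikiText + "</p>"; // fake rendering
    }

    public String render(long commentId, long version, String wikiText) {
        String key = commentId + ":" + version;
        return cache.computeIfAbsent(key, k -> parseWikiText(wikiText));
    }

    public static void main(String[] args) {
        RenderedFragmentCache c = new RenderedFragmentCache();
        for (int i = 0; i < 50; i++) {
            c.render(42L, 1L, "hello world"); // 50 page views of the same comment
        }
        System.out.println("parser calls: " + c.parserCalls.get()); // 1, not 50
    }
}
```

With 49 comments on a page, this turns 50 parser invocations per request into 50 cache lookups after the first render, which matches why the TODO above dropped to low priority.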

Iteration (datatable, repeat) in the UI with value bindings that run through interceptors

Since we don't cache this, a particular backing bean might be called hundreds or thousands of times during rendering of a datatable, if a value binding has to be evaluated for each row. If the backing bean is a Seam component, injection of dependencies will occur for every call, involving potentially thousands of map lookups.

No, compiling EL is not the answer. It doesn't matter if you can reduce the processing time of a single call from 5ms to 3ms; you are still making thousands of calls that shouldn't be made at all.

Already available solutions for this problem involve a DTO-like pattern where values required for iterative rendering are copied onto a special backing bean instance that is marked with @BypassInterceptors. Worst case scenario, if we can't optimize at the JSF/EL level, we need to generalize this pattern and build it into Seam.

  • HIGH PRIORITY: Isolate hot spots of injection (e.g. every time you use h:datatable or ui:repeat like so...) and devise a strategy for optimization at the JSF/EL level. If optimization here is not viable/possible, document patterns for user-level optimizations.
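The DTO-like pattern described above can be sketched as follows. The class names here are made up for illustration; in a real Seam application the snapshot class would additionally be annotated with @BypassInterceptors so that per-row EL value bindings against it skip the interceptor stack entirely.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the snapshot/DTO pattern for iterative rendering.
// In Seam, the row class would carry @BypassInterceptors so that per-row EL
// value bindings do not pay for dependency injection on every call.
public class CommentSnapshots {

    // Stand-in for an entity whose getters would otherwise be reached
    // through an intercepted Seam component on every row.
    public static class Comment {
        final String author, text;
        public Comment(String author, String text) { this.author = author; this.text = text; }
    }

    // Plain, interceptor-free value holder: copy once, render many times.
    public static class CommentRow {
        public final String author, text;
        CommentRow(Comment c) { this.author = c.author; this.text = c.text; }
    }

    // Copy the values needed by the datatable once, before rendering starts,
    // so the thousands of per-row value-binding calls hit a cheap POJO.
    public static List<CommentRow> snapshot(List<Comment> comments) {
        List<CommentRow> rows = new ArrayList<>();
        for (Comment c : comments) rows.add(new CommentRow(c));
        return rows;
    }

    public static void main(String[] args) {
        List<CommentRow> rows = snapshot(List.of(new Comment("gavin", "looks good")));
        System.out.println(rows.get(0).author);
    }
}
```

The design trade-off is staleness: the snapshot reflects the entities at copy time, which is acceptable for a single render pass.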

Optimizing queries

We optimize SQL execution plans for costly queries. All queries have been optimized on MySQL for execution times under 100ms; the most frequent ones run in 10-20ms. Pay special attention to any aggregation queries!

Control client-side caching

Cache headers and conditional request processing for all relevant resources have been implemented.
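The core of conditional request processing is deciding whether to answer 304 Not Modified by comparing the resource's last-modified time against the If-Modified-Since header. HTTP dates carry no milliseconds, so the comparison must truncate to one-second granularity. A minimal sketch of that decision (the class and method names are ours, not the wiki's actual code):

```java
// Sketch of the decision behind a 304 Not Modified response. HTTP dates have
// one-second resolution, so millisecond timestamps are truncated before the
// comparison. A real filter would also set Cache-Control and Last-Modified.
public class ConditionalGet {

    // ifModifiedSince: parsed If-Modified-Since header in epoch millis, or -1 if absent.
    // lastModified: the resource's modification time in epoch millis.
    public static boolean notModified(long ifModifiedSince, long lastModified) {
        if (ifModifiedSince < 0) return false;            // no header: send full response
        return lastModified / 1000 <= ifModifiedSince / 1000;
    }

    public static void main(String[] args) {
        System.out.println(notModified(1_000_000, 1_000_500)); // same second -> true
        System.out.println(notModified(1_000_000, 2_000_000)); // modified later -> false
    }
}
```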

Compression of representations

We currently run with gzip compression enabled in Tomcat. It looks like the output buffer is only flushed to the client when rendering of a representation is complete. In other words, the browser does not partially render the page as it is built on the server (with flushing in batches) but only renders the whole page at once. This might actually be a problem with how JSF works and how it handles its output buffer. Although the total response time is unchanged, users might perceive this all-at-once rendering as slower than progressive page rendering.

  • MEDIUM PRIORITY: Find out how buffers work and how they are flushed in JSF, with gzip output streams enabled and disabled in Tomcat 5.5.

Scalability of the system under load

Scaling up the website to handle higher loads with concurrent requests from many users in many sessions. In other words, maintaining single-request response performance under heavy multi-threaded request load.

Session memory consumption

  • MEDIUM PRIORITY: Analyze and optimize session memory consumption, calculate maximum number of active sessions based on available heap size, etc.
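A back-of-the-envelope version of that calculation: divide the heap space reserved for sessions by the average per-session footprint. All numbers below are assumptions for illustration, not measurements from this wiki.

```java
// Rough ceiling on concurrent sessions: (heap reserved for sessions) divided
// by average per-session footprint. The numbers in main() are assumptions
// for illustration, not measurements from the wiki.
public class SessionCapacity {

    public static long maxSessions(long heapBytes, double sessionFraction, long bytesPerSession) {
        return (long) (heapBytes * sessionFraction) / bytesPerSession;
    }

    public static void main(String[] args) {
        long heap = 1024L * 1024 * 1024;  // assume a 1 GB heap
        long perSession = 200L * 1024;    // assume ~200 KB of session state
        System.out.println(maxSessions(heap, 0.5, perSession)); // -> 2621
    }
}
```

The interesting input is bytesPerSession, which has to come from actual heap-dump analysis of a live session (conversations, cached entities, component state), not from guessing.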

Caching rendered page fragments on the server

This website now uses the Seam page fragment cache extensively. All content that changes rarely is cached after it is rendered for the first time. This is a shared cache (relying on the concurrency control of EHCache in this implementation). One user requests a page, some fragments are cached after they are rendered, and the next user requesting the same fragment gets it from the cache, so we don't have to encode all the JSF components into HTML again. We cache the plain HTML output.

We had to implement some special routines to calculate cache keys (a cache key for a fragment has to include things like the current user's access level). Two things are problematic and prevent effective caching of fragments:

1. The rendered="#{expr}" attribute is evaluated by JSF on any component, potentially in any phase. This means that any child component of <s:cache> might have its isRendered() method called by the JSF engine, even if you expected that all HTML output of these child components would come from the page fragment cache. So make sure that you do not do anything expensive in your rendered attribute expression. There is no solution for this: JSF 2.0 needs to split the rendered attribute into two attributes, one that really means "call encodeBegin() etc. in the RENDER RESPONSE phase" and another one that means "apply the values of this component to the model in the UPDATE MODEL phase". This is all mixed up in JSF 1.x in a single attribute, which is really really bad.

2. The URL encoding rules of the servlet specification conflict with cached content. If, for example, you try to cache the rendered HTML of an <h:outputLink>, you will find that the servlet container might encode the session identifier (;jsessionid=123foo123) into the URL (e.g. on the first request to the page by a particular client). If you now cache this output, some other user will see the same link with the same session identifier. That means someone can hijack the session of the user who rendered the page fragment when it was put into the cache. The only viable solution is to completely disable URL rewriting in the servlet container and rely on cookies only for session identification. Unfortunately, Tomcat has no configuration switch to do that, so you need to implement a custom servlet filter with a response wrapper. This wiki software does that.
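The response wrapper mentioned above essentially makes URL encoding a no-op. Its core can be reduced to a function that returns URLs unchanged and defensively strips an already-embedded ;jsessionid path parameter; the class and method names here are illustrative, not the wiki's actual code, and the real wrapper would extend HttpServletResponseWrapper.

```java
// Core of a no-URL-rewriting response wrapper: session ids must never end up
// in cached markup, so encodeURL() returns the URL unchanged and any
// ;jsessionid=... path parameter is stripped defensively. Names are
// illustrative; the real code wraps HttpServletResponse in a servlet filter.
public class NoRewriteUrls {

    public static String encodeURL(String url) {
        return stripSessionId(url); // never append ;jsessionid=...
    }

    static String stripSessionId(String url) {
        int start = url.indexOf(";jsessionid=");
        if (start < 0) return url;
        // the path parameter ends at the next '?' (query string) or at end of URL
        int end = url.indexOf('?', start);
        return end < 0 ? url.substring(0, start)
                       : url.substring(0, start) + url.substring(end);
    }

    public static void main(String[] args) {
        System.out.println(encodeURL("/wiki/Help;jsessionid=123foo123?x=1")); // /wiki/Help?x=1
        System.out.println(encodeURL("/wiki/Help"));                          // unchanged
    }
}
```

The same treatment applies to encodeRedirectURL(); with both neutralized, cached fragments can never leak one user's session id to another.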

Optimize concurrent database access

Typical Hibernate first-level, second-level, and query cache tuning. Already done for the wiki, although I'm still suspicious of second-level cache inconsistencies I have seen with caching WikiNode (the superclass of all documents, comments, directories, etc.). We are no longer using database-level cascading of delete operations, so that should be solved. Still, we gain most from caching reference data (preferences, roles), non-critical data (feed entries), or data that isn't updated frequently (user accounts). This is in place and working fine. Quite a few (but not too many) aggregation queries are cached with the query cache.

  • MEDIUM PRIORITY: We need to evaluate this test data generator.
  • LOW PRIORITY: Find out why the second-level cache for core read/write entities produces inconsistent data. Difficult and probably not worth the effort.