Tuesday, November 25, 2014

JSF and MVC 1.0, a comparison in code

One of the new specs that will debut in Java EE 8 is MVC 1.0, a second MVC framework alongside the existing MVC framework, JSF.

A lot has been written about this. Discussions have mostly been about the why, whether it isn't introduced too late in the game, and what the advantages (if any) over JSF exactly are. Among the advantages initially mentioned were the ability to use different templating engines, better performance, and the ability to be stateless. Discussions have furthermore been about the name of this new framework.

This name can be somewhat confusing. Namely, using the term MVC to contrast with JSF is perhaps technically not entirely accurate, as both are MVC frameworks. The flavor of MVC intended to be implemented by MVC 1.0 is actually "action-based MVC", best known among Java developers as "MVC the way Spring MVC implements it". The flavor of MVC that JSF implements is "component-based MVC". Alternative terms for this are MVC-push and MVC-pull.

One can argue that JSF since 2.0 has been moving to a more hybrid model; view parameters, the PreRenderView event and view actions have been key elements of this, but the best practice of having a single backing bean back a single view and things like injectable request parameters and eager request scoped beans have been contributing to this as well. The discussion of component-based MVC vs action-based MVC is therefore a little less black and white than it may initially seem, but of course at its core JSF clearly remains a component-based MVC framework.

When people took a closer look at the advantages mentioned above, it quickly became clear they aren't really specific to action-based MVC. JSF most definitely supports additional templating engines; there's a specific plug-in mechanism for that called the VDL (View Declaration Language). Stacked up against action-based MVC frameworks, JSF actually performs rather well, and of course JSF can be used stateless.
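
As an aside, plugging in a custom VDL essentially comes down to decorating the VDL factory and registering that factory in faces-config.xml. The following is merely a sketch of that plug-in point; MyTemplateEngineVdl stands for a hypothetical ViewDeclarationLanguage implementation backed by another templating engine:

import javax.faces.view.ViewDeclarationLanguage;
import javax.faces.view.ViewDeclarationLanguageFactory;

public class MyVdlFactory extends ViewDeclarationLanguageFactory {

    private final ViewDeclarationLanguageFactory wrapped;

    // JSF hands us the previously registered factory, decorator style
    public MyVdlFactory(ViewDeclarationLanguageFactory wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public ViewDeclarationLanguage getViewDeclarationLanguage(String viewId) {
        if (viewId.endsWith(".mytpl")) {
            return new MyTemplateEngineVdl(); // hypothetical engine for .mytpl views, not shown
        }
        return wrapped.getViewDeclarationLanguage(viewId);
    }
}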

So the official motivation for introducing a second MVC framework in Java EE is largely not about a specific advantage that MVC 1.0 will bring to the table, but first and foremost about having a "different" approach. Depending on one's use case, either one of the approaches can be better, or suit one's mental model (perhaps based on experience) better, but very few claims are made about which approach is actually better.

Here we're also not going to investigate which approach is better, but will take a closer look at two actual code examples where the same functionality is implemented by both MVC 1.0 and JSF. Since MVC 1.0 is still in its early stages I took code examples from Spring MVC instead. It's expected that MVC 1.0 will be rather close to Spring MVC, not as to the actual APIs and plumbing used, but with regard to the overall approach and idea.

As I'm not a Spring MVC user myself, I took the examples from a Reddit discussion about this very topic. They are shown and discussed below:

CRUD

The first example is about a typical CRUD use case. The Spring controller is given first, followed by a backing bean in JSF.

Spring MVC

@Named
@RequestMapping("/appointments")
public class AppointmentsController {

    @Inject
    private AppointmentBook appointmentBook;

    @RequestMapping(value="/new", method = RequestMethod.GET)
    public String getNewForm(Model model) {
        model.addAttribute("appointment", new Appointment());
        return "appointment-edit";
    }

    @RequestMapping(value="/new", method = RequestMethod.POST)
    public String add(@Valid Appointment appointment, BindingResult result, RedirectAttributes redirectAttributes) {
        if (result.hasErrors()) {
            return "appointments/new";
        }
        appointmentBook.addAppointment(appointment);
        redirectAttributes.addFlashAttribute("message", "Successfully added " + appointment.getTitle());

        return "redirect:/appointments";
    }

}

JSF

@Named
@ViewScoped
public class NewAppointmentsBacking {

    @Inject
    private AppointmentBook appointmentBook;

    private Appointment appointment = new Appointment();

    public Appointment getAppointment() {
         return appointment;
    }

    public String add() {
        appointmentBook.addAppointment(appointment);
        addFlashMessage("Successfully added " + appointment.getTitle());

        return "/appointments?faces-redirect=true";
    }
}

As can be seen from the two code examples, there are at first glance quite a number of similarities. However, there are also a number of fundamental differences that are perhaps not immediately obvious.

Starting with the similarities, both versions are @Named and have the same service injected via the same @Inject annotation. When a URL is requested (via a GET), both versions instantiate a new Appointment. In the Spring version this happens in getNewForm(), in the JSF version via the instance field initializer. Both versions subsequently make this instance available to the view. In the Spring MVC version this happens by setting it as an attribute of the model object that's passed in, while in the JSF version this happens via a getter.

The view typically contains a form where a user is supposed to edit various properties of the Appointment shown above. When this form is posted back to the server, in both versions an add() method is called where the (edited) Appointment instance is saved via the service that was previously injected and a flash message is set.

Finally both versions return an outcome that redirects the user to a new page (PRG pattern). Spring MVC uses the syntax "redirect:/appointments" for this, while JSF uses "/appointments?faces-redirect=true" to express the same thing.
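
For reference, the JSF view served by this backing bean could look roughly like the following. This markup was not part of the original comparison; it's merely a sketch assuming Appointment has a title property:

<h:form>
    <h:inputText value="#{newAppointmentsBacking.appointment.title}" />
    <h:commandButton value="Add" action="#{newAppointmentsBacking.add}" />
</h:form>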

Despite the large number of similarities as observed above, there is a big fundamental difference between the two; the class shown for Spring MVC represents a controller. It's mapped directly to a URL and it's pretty much the first thing that is invoked. All of the above runs without having determined what the view will be. Values computed here are stored in a contextual object and a view is selected. We can think of this storing as pushing values (the view didn't ask for them, since it's not even selected at this point). Hence the alternative name "MVC push" for this approach.

The class shown for the JSF example is NOT a controller. In JSF the controller is provided by the framework. It selects a view based on the incoming URL and the outcome of the ViewHandler. This will cause a view to execute, and as part of that execution a (backing) bean at some point will be pulled in. Only after this pull has been done will the logic of the class in question start executing. Because of this, the alternative name for this approach is "MVC pull".

On to the concrete differences; in the Spring MVC sample, instantiating the Appointment had to be explicitly mapped to a URL and the view to be rendered afterwards is explicitly defined. In the JSF version, both URL and view are defaulted; it's the view from which the bean is pulled. A backing bean can override the default view to be rendered by using the aforementioned view action, as illustrated below. This gives it some of the "feel" of a controller, but doesn't change the fundamental fact that the backing bean had to be pulled into scope by the initial view first (things like @Eager in OmniFaces do blur the lines further by instantiating beans before a view pulls them in).
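
For illustration, such a view action is declared in the view's metadata. A minimal sketch, where init is a hypothetical method whose non-null String outcome would be used for navigation to a different view:

<f:metadata>
    <f:viewAction action="#{newAppointmentsBacking.init}" />
</f:metadata>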

The post back case shows something similar. In the Spring version the add() method is explicitly mapped to a URL, while in the JSF version it corresponds to an action method of the view that pulled the bean in.

There's another difference with respect to validation. In the Spring MVC example there's an explicit check to see if validation has failed and an explicit selection of a view to display errors. In this case that view is the same one again ("appointments/new"), but it's still provided explicitly. In the JSF example there's no explicit check. Instead, the code relies on the default of staying on the same view and not invoking the action method. In effect, the exact same thing happens in both cases but the mindset to get there is different.

Dynamically loading images

The second example is about a case where a list of images is rendered first and where subsequently the content of those images is dynamically provided by the beans in question. The Spring code is again given first, followed by the JSF code.

Spring MVC

<c:forEach items="${thumbnails}" var="thumbnail">
    <div>
        <div class="thumbnail">
            <img src="/thumbnails/${thumbnail.id}" />
        </div>
        <c:out value="${thumbnail.caption}" />
    </div>
</c:forEach>
@Controller
public class ThumbnailsController {

    @Inject
    private ThumbnailsDAO thumbnailsDAO;

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public ModelAndView images() {
        ModelAndView mv = new ModelAndView("images");
        mv.addObject("thumbnails", thumbnailsDAO.getThumbnails());
        return mv;
    }

    @RequestMapping(value = "/thumbnails/{id}", method = RequestMethod.GET, produces = "image/jpeg")
    public @ResponseBody byte[] thumbnail(@PathVariable("id") long id) {
        return thumbnailsDAO.getThumbnail(id);
    }
}

JSF

<ui:repeat value="#{thumbnails}" var="thumbnail">
    <div>
        <div class="thumbnail">
            <o:graphicImage value="#{thumbnailsBacking.thumbnail(thumbnail.id)}" />
        </div>
        #{thumbnail.caption}
    </div>
</ui:repeat>
@Model
public class ThumbnailsBacking {

    @Inject
    private ThumbnailsDAO thumbnailsDAO;

    @Produces @RequestScoped @Named("thumbnails") 
    public List<Thumbnail> getThumbnails() {
        return thumbnailsDAO.getThumbnails();
    }

    public byte[] thumbnail(Long id) {
        return thumbnailsDAO.getThumbnail(id);
    }
}

Starting with the similarities again, we see that the markup for both views is fairly similar in structure. Both have an iteration tag that takes values from an input list called thumbnails and during each round of the iteration the ID of each individual thumbnail is used to render an image link.

Both the classes for Spring MVC and JSF call getThumbnails() on the injected DAO for the initial GET request, and both have a nearly identical thumbnail() method where getThumbnail(id) is called on the DAO in response to each request for a dynamic image that was rendered before.

Both versions also show that each framework has an alternative way to do what it does. In the Spring MVC example we see that instead of having a Model passed in and returning a String based outcome, there's an alternative version that uses a ModelAndView instance, where the outcome is set on this object.

In the JSF version we see that instead of having an instance field plus getter, there's an alternative version based on a producer. In that variant the data is made available under the EL name "thumbnails", just as in the Spring MVC version.

On to the differences; we see that the Spring MVC version is again using explicit URLs. The otherwise identical thumbnail() method has an extra annotation specifying the URL to which it's mapped. This very URL is the one that's used in the img tag in the view. JSF on the other hand doesn't require the method to be mapped to a URL. Instead, an EL expression is used to point directly to the method that delivers the image content. The component (o:graphicImage here) then generates the URL.

While the producer method that we showed in the JSF example (getThumbnails()) looked like JSF was declaratively pushing a value, it's in fact still a pull. The method will not be called, and therefore no value produced, until the EL variable "thumbnails" is resolved for the first time.

Another difference is that the view in the JSF example contains two components (ui:repeat and o:graphicImage) that adhere to JSF's component model, and that the view uses a templating language (Facelets) that is part of the JSF spec itself. Spring MVC (of course) doesn't specify a component model, and while it could theoretically come with its own templating language, it doesn't have one either. Instead, Spring MVC relies on external templating systems, e.g. JSP or Thymeleaf.

Finally, a remarkable difference is that the two very similar classes ThumbnailsController and ThumbnailsBacking are annotated with @Controller and @Model respectively, two completely opposite responsibilities of the MVC pattern. Indeed, in JSF everything that's referenced by the view (via EL expressions) is officially called the model. ThumbnailsBacking is from JSF's point of view the model. In practice the lines are a bit more blurred, and the backing bean is more akin to a plumbing component that sits between the model, view and controller.

Conclusion

We haven't gone in-depth into what it means to have a component model and what advantages that has, nor have we discussed in any detail what a RESTful architecture brings to the table. In passing we mentioned the concept of state, but did not look at that either. Instead, we mainly focused on code examples for two different use cases and compared and contrasted these. In that comparison we tried as much as possible to refrain from any judgement about which approach is better, component-based MVC or action-oriented MVC (as I'm one of the authors of the JSF utility library OmniFaces and a member of the JSF EG, such a judgement would always be biased of course).

We saw that while the code examples at first glance have remarkable similarities, there are in fact deep fundamental differences between the two approaches. It's an open question whether the future is with either one of those two, with a hybrid approach, or with both living next to each other. Java EE 8 at least will opt for that last option and will have both a component-based MVC framework and an action-oriented one.

Arjan Tijms

Monday, November 24, 2014

OmniFaces 2.0 released!

After a poll regarding the future dependencies of OmniFaces 2.0 and two release candidates we're proud to announce that today we've finally released OmniFaces 2.0.

OmniFaces 2.0 is a direct continuation of OmniFaces 1.x, but has started to build on newer dependencies. We also took the opportunity to do a little refactoring here and there (specifically noticeable in the Events class).

The easiest way to use OmniFaces is via Maven by adding the following to pom.xml:

<dependency>
    <groupId>org.omnifaces</groupId>
    <artifactId>omnifaces</artifactId>
    <version>2.0</version>
</dependency>

A detailed description of the biggest items of this release can be found on the blog of BalusC.

One particular new feature not mentioned there is a new capability that has been added to <o:validateBean>: class level bean validation. While JSF core and OmniFaces both have had a validateBean for some time, one thing it curiously did not do, despite its name, is actually validate a bean. Instead, those existing versions just controlled various aspects of bean validation. Bean validation itself was then only applied to individual properties of a bean, namely those that were bound to input components.

With OmniFaces 2.0 it's now possible to specify that a bean should be validated at the class level. The following gives an example of this:

<h:inputText value="#{bean.product.item}" />
<h:inputText value="#{bean.product.order}" />
 
<o:validateBean value="#{bean.product}" validationGroups="com.example.MyGroup" />

Using the existing bean validation integration of JSF, only product.item and product.order can be validated, since these are the properties that are directly bound to an input component. Using <o:validateBean> the product itself can be validated as well, and this will happen at the right place in the JSF lifecycle, namely the "process validations" phase. True to the way JSF works, if validation fails the actual model will not be updated. In order to prevent this update, class level bean validation is performed on a copy of the actual product (with a plug-in structure to choose between multiple ways to copy the model object).
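
To sketch what such a class level constraint could look like: the @ValidProduct annotation and its validator below are hypothetical, and merely illustrate the kind of constraint that <o:validateBean> now causes to be checked:

// ValidProduct.java (hypothetical class level constraint)
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import javax.validation.Constraint;
import javax.validation.Payload;

@Constraint(validatedBy = ValidProductValidator.class) // validator not shown
@Target(TYPE)
@Retention(RUNTIME)
public @interface ValidProduct {
    String message() default "Invalid product";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

// Product.java, carrying the constraint at the class level
@ValidProduct
public class Product {
    private String item;
    private String order;
    // getters and setters omitted
}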

More information about this class level bean validation can be found on the associated showcase page. A complete overview of all that's new can be found on the what's new page.

Arjan Tijms

Thursday, November 20, 2014

OmniFaces 2.0 RC2 available for testing

After an intense debugging session following the release of OmniFaces 2.0 RC1, we have decided to release one more release candidate: OmniFaces 2.0 RC2.

For RC2 we mostly focused on TomEE 2.0 compatibility. Even though TomEE 2.0 is only available in a SNAPSHOT release, we're happy to see that it passed almost all of our tests and was able to run our showcase application just fine. The only place where it failed was with the viewParamValidationFailed page, but this appeared to be an issue in MyFaces and unrelated to TomEE itself.

To repeat from the RC1 announcement: OmniFaces 2.0 is the first release that will depend on JSF 2.2 and CDI 1.1 from Java EE 7. Our Servlet dependency is now Servlet 3.0 from Java EE 6 (used to be 2.5, although we optionally used 3.0 features before). The minimal Java SE version is now Java 7.

A full list of what's new and changed is available here.

OmniFaces 2.0 RC2 can be tested by adding the following dependency to your pom.xml:

<dependency>
    <groupId>org.omnifaces</groupId>
    <artifactId>omnifaces</artifactId>
    <version>2.0-RC2</version>
</dependency>

Alternatively the jar files can be downloaded directly.

We're currently investigating one last issue, if that's resolved and no other major bugs appear we'd like to release OmniFaces 2.0 at the end of this week.

Arjan Tijms

Sunday, November 16, 2014

Header based stateless token authentication for JAX-RS

Authentication is a topic that comes up often for web applications. The Java EE spec supports authentication for those via the Servlet and JASPIC specs, but doesn't say too much about how to authenticate for JAX-RS.

Luckily JAX-RS is simply layered on top of Servlets, and one can therefore just use JASPIC's authentication modules for the Servlet Container Profile. There's thus not really a need for a separate REST profile, as there is for SOAP web services.

While using the same basic technologies as authentication modules for web applications, the requirements for modules that are to be used for JAX-RS are a bit different.

JAX-RS is often used to implement an API that is used by scripts. Such scripts typically do not engage in an authentication dialog with the server; i.e. it's rare for an API to redirect to a form asking for credentials, let alone asking to log in with a social provider.

An even more fundamental difference is that in web apps it's commonplace to establish a session for, among other things, authentication purposes. While possible to do this for JAX-RS as well, it's not exactly a best practice. RESTful APIs are supposed to be fully stateless.

To prevent the need for going into an arbitrary authentication dialog with the server, it's typical for scripts to send their credentials upfront with a request. For this, BASIC authentication can be used, which actually does initiate a dialog, albeit a standardised one. Another option is to provide a token as either a request parameter or as an HTTP header. It should go without saying that in both these cases all communication should be done exclusively via https.

Preventing a session from being created can be done in several ways as well. One way is to store the authentication data in an encrypted cookie instead of storing that data in the HTTP session. While this surely works, it does feel somewhat weird to "blindly" accept the authenticated identity from what the client provides. If the encryption is strong enough it *should* be okayish, but still. Another method is to quite simply authenticate over again with each request. This however has its own problem, namely the potential for bad performance. An in-memory user store will likely be very fast to authenticate against, but anything involving an external system like a database or LDAP server probably is not.

The performance problem of authenticating with each request can be mitigated though by using an authentication cache. The question is then whether this isn't really the same as creating a session?

While both an (http) session and a cache consume memory at the server, a major difference between the two is that a session is a store for all kinds of data, which includes state, but a cache is only about data locality. A cache is thus by definition never the primary source of data.

What this means is that we can throw data away from a cache at arbitrary times, and the client won't know the difference, except that its next request may be somewhat slower. We can't really do that with session data. Setting a hard limit on the size of a cache is thus a lot easier than it is for a session, and it's not mandatory to replicate a cache across a cluster.

Still, as with many things it's a trade-off; having zero data stored at the server, but having a cookie sent along with the request and needing to decrypt that every time (which for strong encryption can be computationally expensive), or having some data at the server (in a very manageable way), but without the uneasiness of directly accepting an authenticated state from the client.

Here we'll be giving an example for a general stateless auth module that uses header based token authentication and authenticates with each request. This is combined with an application level component that processes the token and maintains a cache. The auth module is implemented using JASPIC, the Java EE standard SPI for authentication. The example uses a utility library that I'm incubating called OmniSecurity. This library is not a security framework itself, but provides several convenience utilities for the existing Java EE security APIs. (like OmniFaces does for JSF and Guava does for Java)

One caveat is that the example assumes CDI is available in an authentication module. In practice this is the case when running on JBoss, but not when running on most other servers. Another caveat is that OmniSecurity is not yet stable or complete. We're working towards an 1.0 version, but the current version 0.6-ALPHA is as the name implies just an alpha version.

The module itself looks as follows:

public class TokenAuthModule extends HttpServerAuthModule {
    
    private final static Pattern tokenPattern = compile("OmniLogin\\s+auth\\s*=\\s*(.*)");
    
    @Override
    public AuthStatus validateHttpRequest(HttpServletRequest request, HttpServletResponse response, HttpMsgContext httpMsgContext) throws AuthException {
        
        String token = getToken(request);
        if (!isEmpty(token)) {
            
            // If a token is present, authenticate with it whether this is strictly required or not.
            
            TokenAuthenticator tokenAuthenticator = getReferenceOrNull(TokenAuthenticator.class);
            if (tokenAuthenticator != null) {
                
                if (tokenAuthenticator.authenticate(token)) {
                    return httpMsgContext.notifyContainerAboutLogin(tokenAuthenticator.getUserName(), tokenAuthenticator.getApplicationRoles());
                }                
            }            
        }
        
        if (httpMsgContext.isProtected()) {
            return httpMsgContext.responseNotFound();
        }
        
        return httpMsgContext.doNothing();
    }
    
    private String getToken(HttpServletRequest request) { 
        String authorizationHeader = request.getHeader("Authorization");
        if (!isEmpty(authorizationHeader)) {
            
            Matcher tokenMatcher = tokenPattern.matcher(authorizationHeader);
            if (tokenMatcher.matches()) {
                return tokenMatcher.group(1);
            }
        }
        
        return null;
    }

}

Below is a quick primer on Java EE's authentication modules:
A server auth module (SAM) is not entirely unlike a servlet filter, albeit one that is called before every other filter. Just as a servlet filter it's called with an HttpServletRequest and HttpServletResponse, is capable of including and forwarding to resources, and can wrap both the request and the response. A key difference is that it also receives an object via which it can pass a username and optionally a series of roles to the container. These will then become the authenticated identity, i.e. the username that is passed to the container here will be what HttpServletRequest.getUserPrincipal().getName() returns. Furthermore, a server auth module doesn't control the continuation of the filter chain by calling or not calling FilterChain.doFilter(), but by returning a status code.

In the example above the authentication module extracts a token from the request. If one is present, it obtains a reference to a TokenAuthenticator, which does the actual authentication of the token and provides a username and roles if the token is valid. It's not strictly necessary to have this separation and the authentication module could just as well contain all required code directly. However, just like the separation of responsibilities in MVC, it's typical in authentication to have a separation between the mechanism and the repository. The first contains the code that does interaction with the environment (aka the authentication dialog, aka authentication messaging), while the latter doesn't know anything about an environment and only keeps a collection of users and roles that are accessed via some set of credentials (e.g. username/password, keys, tokens, etc).
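
Judging purely from the calls the module makes, the TokenAuthenticator interface presumably looks roughly like the sketch below (the actual OmniSecurity definition may differ, and as the implementation example further down shows, it has two more methods that are left empty there):

public interface TokenAuthenticator {

    // Validates the token and, if valid, loads the associated user
    boolean authenticate(String token);

    // Meaningful after a successful authenticate() call
    String getUserName();
    List<String> getApplicationRoles();
}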

If the token is found to be valid, the authentication module retrieves the username and roles from the authenticator and passes these to the container. Whenever an authentication module does this, it's supposed to return the status "SUCCESS". By using the HttpMsgContext this requirement is largely made invisible; the code just returns whatever HttpMsgContext.notifyContainerAboutLogin returns.

If authentication did not happen for whatever reason, what follows depends on whether the resource (URL) that was accessed is protected (requires an authenticated user) or public (does not require an authenticated user). In the first situation we always return a 404 to the client. This is a general security precaution. According to HTTP we should actually return a 403 here, but if we did, users could attempt to guess what the protected resources are. For applications where it's already clear what all the protected resources are, it would make more sense to indeed return that 403. If the resource is a public one, the code "does nothing". Since authentication modules in Java EE need to return something and there's no status code that indicates nothing should happen, doing nothing in fact requires a tiny bit of work. Luckily this work is largely abstracted by HttpMsgContext.doNothing().

Note that the TokenAuthModule as shown above is already implemented in the OmniSecurity library and can be used as is. The TokenAuthenticator however has to be implemented by user code. An example of an implementation is shown below:

@RequestScoped
public class APITokenAuthModule implements TokenAuthenticator {

    @Inject
    private UserService userService;

    @Inject
    private CacheManager cacheManager;
    
    private User user;

    @Override
    public boolean authenticate(String token) {
        try {
            Cache<String, User> usersCache = cacheManager.getDefaultCache();

            User cachedUser = usersCache.get(token);
            if (cachedUser != null) {
                user = cachedUser;
            } else {
                user = userService.getUserByLoginToken(token);
                usersCache.put(token, user);
            }
        } catch (InvalidCredentialsException e) {
            return false;
        }

        return true;
    }

    @Override
    public String getUserName() {
        return user == null ? null : user.getUserName();
    }

    @Override
    public List<String> getApplicationRoles() {
        return user == null ? emptyList() : user.getRoles();
    }

    // (Two empty methods omitted)
}

This TokenAuthenticator implementation is injected with both a service to obtain users from, as well as a cache instance (Infinispan was used here). The code simply checks if a User instance associated with a token is already in the cache, and if it's not, gets it from the service and puts it in the cache. The User instance is subsequently used to provide a user name and roles.

Installing the authentication module can be done during startup of the container via a Servlet context listener as follows:

@WebListener
public class SamRegistrationListener extends BaseServletContextListener {
 
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        Jaspic.registerServerAuthModule(new TokenAuthModule(), sce.getServletContext());
    }
}
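
Jaspic.registerServerAuthModule is an OmniSecurity convenience method. In terms of the raw JASPIC API, such a registration boils down to something along the following lines (a sketch; JASPIC requires the module to be wrapped in an AuthConfigProvider, which is the verbose part not shown here, and TokenAuthConfigProvider is a hypothetical name for such a wrapper):

// The app context id identifies this application; conventionally the (server specific)
// virtual host name followed by a space and the context path
String appContextId = "default-host " + servletContext.getContextPath();

AuthConfigFactory.getFactory().registerConfigProvider(
    new TokenAuthConfigProvider(new TokenAuthModule()), // hypothetical provider wrapping the SAM
    "HttpServlet",                                      // the message layer of the Servlet Container Profile
    appContextId,
    "Token authentication");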

After installing the authentication module as outlined in this article in a JAX-RS application, it can be tested as follows:

curl -vs -H "Authorization: OmniLogin auth=ABCDEFGH123" https://localhost:8080/api/foo

As shown in this article, adding an authentication module for JAX-RS that's fully stateless and doesn't store an authenticated state on the client is relatively straightforward using Java EE authentication modules. Big caveats are that the most straightforward approach uses CDI, which is not always available in authentication modules (in WildFly it's available), and that the example uses the OmniSecurity library to simplify some of JASPIC's arcane native APIs, while OmniSecurity is still only in alpha status.

Arjan Tijms

Saturday, November 8, 2014

OmniFaces 2.0 RC1 available for testing

We are happy to announce that we have just released OmniFaces 2.0 release candidate 1.

OmniFaces 2.0 is the first release that will depend on JSF 2.2 and CDI 1.1 from Java EE 7. Our Servlet dependency is now Servlet 3.0 from Java EE 6 (used to be 2.5, although we optionally used 3.0 features before). The minimal Java SE version is now Java 7.

A full list of what's new and changed is available here.

OmniFaces 2.0 RC1 can be tested by adding the following dependency to your pom.xml:

<dependency>
    <groupId>org.omnifaces</groupId>
    <artifactId>omnifaces</artifactId>
    <version>2.0-RC1</version>
</dependency>

Alternatively the jar files can be downloaded directly.

If no major bugs surface we hope to release OmniFaces 2.0 final in about one week from now.

Arjan Tijms

Friday, October 31, 2014

Java EE process cycles and server availability

When we talk about the Java EE cycle time, we normally mean the time between major revisions of the spec, e.g. the time between Java EE 6 and Java EE 7. While this is indeed the leading cycle time, there are two additional cycles that are of major importance:
  1. The time it takes for vendors to release an initial product that implements the new spec revision
  2. The time it takes vendors to stabilize their product (which incidentally is closely tied to the actual user adoption rate)

In this article we'll take a somewhat closer look at the time it takes vendors to release their initial products. But first let's take a quick look at the time between spec releases. The following table lists the Java EE version history and the delta time between versions:

Java EE delta times between releases

Version | Start date   | Release date | Days since last release  | Days spent on spec
1.2     | -            | 12 Dec, 1999 | -                        | -
1.3     | 18 Feb, 2000 | 24 Sep, 2001 | 653 (1 year, 9 months)   | 584 (1 year, 7 months)
1.4     | 22 Oct, 2001 | 12 Nov, 2003 | 779 (2 years, 1 month)   | 751 (2 years)
5       | 10 May, 2004 | 11 May, 2006 | 911 (2 years, 6 months)  | 731 (2 years)
6       | 16 Jul, 2007 | 10 Dec, 2009 | 1310 (3 years, 7 months) | 878 (2 years, 4 months)
7       | 14 Mar, 2011 | 28 May, 2013 | 1266 (3 years, 5 months) | 806 (2 years, 2 months)

As can be seen, the time between releases has been steadily increasing, but now seems to have stabilized at approximately three and a half years. The plan is to release Java EE 8 at the same pace, meaning we should expect it around the end of 2016.

It may be worth emphasizing that the time between releases is not fully spent on Java EE. Typically there is, with respect to spec work, what one may call The Big Void™ between releases: a period of time in which no spec work is being done at all. This void starts right after the spec is released and the various EGs are disbanded. The time is used differently by everyone, but typically it's used for implementation work, cleaning up and refactoring code, project structures, tests and other artifacts.

After some time (~1 year for Java EE 6, ~5 months for Java EE 7) initial discussions start where just some ideas are pitched and the landscape is explored. After that it still takes some time until the work really kicks off for the front runners (~1 year and 5 months for Java EE 6, ~1 year and 3 months for Java EE 7).

Those numbers are however for the front runners; a bunch of sub-specs of Java EE start even later than this, and some of them even finish well before the release date of the main umbrella spec. So while the time between releases seems like a long time, it's important to realize that by far not all of this time is actually spent on the various specifications. As can be seen in the table above, the time actually spent on the specification has been fairly stable at around two years. 1.3 was a bit below that and 6 a bit above it, but it's all fairly close to these two years. What has been increasing is the time taken up by The Void (or uptake, as some others call it); less than a month between 1.3 and 1.4, to well over a year between 5 and 6, and 6 and 7.

As mentioned previously, finalizing the spec is only one aspect of the entire process. With the exception of GlassFish, the reference implementation (RI) that is made available at the same time that the new spec revision becomes available, the implementation cycle of Java EE starts right after a spec release.

A small complication in tracking Java EE server products is that several of these products are variations of each other or just different versions taken from the same code line. E.g. WASCE is (was) an intermediate release of Geronimo. JBoss AS 6 is obviously just an earlier version of JBoss AS 7, which is itself an earlier version of JBoss EAP 6 (although JBoss markets it as a separate product). NetWeaver is said to be a version of TomEE, etc.

Also complicating the certification and first version story is that a number of vendors chose to have beta or technical preview versions certified. On one occasion a vendor even certified a snapshot version. Obviously those versions are not intended for any practical (production) use. It's perhaps somewhat questionable that servers that, in the eyes of their own vendors, are very far from the stability required by their customers can be certified at all.

The following two tables show how long it took the Java EE 6 Full- and Web Profile to be implemented for each server.

Java EE 6 Full Profile server implementation times

Server                  | Release date      | Days since spec released
GlassFish 3.0           | 10 Dec, 2009      | 0
* JEUS 7 Tech Preview 1 | 15 Jan, 2010      | 36
WebSphere 8.0           | 22 June, 2011     | 559 (1 year, 6 months)
* Geronimo 3.0 BETA 1   | 14 November, 2011 | 704 (1 year, 11 months)
WebLogic 12.1.1         | 1 Dec, 2011       | 721 (1 year, 11 months)
Interstage AS 10.1      | 27 December, 2011 | 747 (2 years)
* JBoss AS 7.1          | 17 Feb, 2012      | 799 (2 years, 2 months)
(u)Cosminexus 9.0       | ~16 April, 2012   | 858 (2 years, 4 months)
JEUS 7.0                | ~1 June, 2012     | 904 (2 years, 5 months)
JBoss EAP 6             | 20 June, 2012     | 923 (2 years, 6 months)
Geronimo 3.0            | 13 July, 2012     | 946 (2 years, 7 months)
WebOTX AS 9.1           | 30 May, 2013      | 1267 (3 years, 5 months)
InforSuite AS 9.1       | ~July, 2014       | ~1664 (4 years, 6 months)

* denotes a server that's a tech preview, community, developer preview, beta, etc. version

Java EE 6 Web Profile server implementation times

Server                    | Release date                 | Days since spec released
* JBoss AS 6.0            | 28 December, 2010            | 322 (10 months)
Resin 4.0.17              | May, 2011                    | 507 (1 year, 4 months)
* JBoss AS 7.0            | 12 July, 2011                | 579 (1 year, 7 months)
* TomEE beta              | 4 Oct, 2011                  | 663 (1 year, 9 months)
TomEE 1.0                 | 08 May, 2012                 | 880 (2 years, 4 months)
* JOnAS 5.3.0-M8-SNAPSHOT | [14 Nov, 2012 ~ 07 Jan 2013] | 1070~1124 (~3 years)
Liberty 8.5.5             | 14 Jun, 2013                 | 1282 (3 years, 6 months)
JOnAS 5.3                 | 04 Oct, 2013                 | 1394 (3 years, 9 months)

* denotes a server that's a tech preview, community, developer preview, beta, etc. version

As we can see here, excluding GlassFish and the tech preview of JEUS, it took 1 year and 6 months for the first production ready (according to the vendor!) Java EE 6 full profile server to appear on the market, while most other servers appeared after around two and a half years.

Do note that "production ready according to the vendor" is a state that can not easily be quantified with respect to quality. What some vendor calls 1.0 Final, may correspond to what another vendor calls 0.5 Beta. From the above table it doesn't mean that say WebLogic 12.1.1 (production ready according to its vendor) is either more or less stable than e.g. JEUS 7 Tech Preview 1 (not production ready according to its vendor).

The Java EE 7 spec was released on 28 May, 2013, which is 522 days (1 year, 5 months) ago at the time of writing. So let's take a look at the current situation with respect to available Java EE 7 servers:

Java EE 7 Full Profile server implementation times

Server                     | Release date  | Days since spec released
GlassFish 4.0              | 28 May, 2013  | 0
* JEUS 8 developer preview | ~26 Aug, 2013 | 90 (2 months, 29 days)
* JBoss WildFly 8.0        | 11 Feb, 2014  | 259 (8 months, 14 days)

* denotes a server that's a tech preview, community, developer preview, beta, etc. version

Although there are just a few entries, those largely follow the same pattern as the Java EE 6 implementation cycle.

GlassFish is by definition the first release, while JEUS is again the second one with a developer preview (a pattern that goes all the way back to J2EE 1.2). There's unfortunately no information available on when the JEUS 8 developer preview was exactly released, but a blog posting about it was published on 26 Aug, 2013, so I took that date.

For JBoss the situation for Java EE 7 compared to EE 6 is not really that much different either. WildFly 8 was released after 259 days (the plan was 167 days), which is not that different from JBoss AS 6, which was released after 322 days. One big difference though is that AS 6 was only certified for the web profile, while in fact practically implementing the full profile. The similarities don't end there; just as with Java EE 6, the eventual production version (JBoss EAP 6) wasn't based on JBoss AS 6.x, but on the major new version JBoss AS 7. This time around it again strongly looks like JBoss EAP 7 will not be based on JBoss WildFly 8.x, but on the major new version JBoss WildFly 9.

If history is anything to go by, we may see one or two additional Java EE 7 implementations in a few months, while after a little more than a year from now most servers should be available in a Java EE 7 flavor. At the moment of writing it looks like Web Profile implementations TomEE 2.0 and Liberty.next (both are actually Web Profile++ or a Full Profile--) indeed aren't that far away.

Arjan Tijms

Sunday, September 14, 2014

Getting the target of value expressions

In the Java EE platform programmers have a way to reference values in beans via textual expressions. These textual expressions are then compiled by the implementation of the Expression Language (EL) spec to instances of ValueExpression.

E.g. the following EL expression can be used to refer to the named bean "foo" and its property "bar":

#{foo.bar}

Expressions can be chains of arbitrary length, and can include method calls as well. E.g.:

#{foo.bar(1).kaz.zak(test)}

An important aspect of these expressions is that they are highly contextual, specifically where it concerns the top level variables. These consist of the object that starts the chain ("foo" here) and any EL variables used as method arguments ("test" here). Because of this, it's not a totally unknown requirement to want to resolve the expression while it's still in context, in order to obtain the so-called final base and the final property/method, the latter including the resolved and bound parameters.

Now the EL API does provide a method to get the final base and property of an expression if there is one, but this one unfortunately only supports properties, not methods. When method invocations were introduced in EL 2.2 for usage in ValueExpressions and chains (which is subtly different from the MethodExpression that existed before that), this seems to have been done in the most minimal way. As a result, a lot of JavaDoc and supporting APIs were seemingly not updated.
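
The method in question is ValueExpression.getValueReference(), itself also introduced in EL 2.2. For a pure property chain it gives us exactly what we want:

// For e.g. #{foo.bar} this yields the "foo" bean instance as base and "bar" as property
ValueReference reference = valueExpression.getValueReference(elContext);
Object base = reference.getBase();
Object property = reference.getProperty();

// For e.g. #{foo.bar(1)} there is however no MethodReference counterpart that
// would hand us the resolved method and its parameters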

For instance, the JavaDoc for ValueExpression still says:

For any of the five methods, the ELResolver.getValue[...] method is used to resolve all properties up to but excluding the last one. This provides the base object.

There is no mention here that ELResolver.invoke is used as well if any of the intermediate nodes in the chain is a method invocation (like bar(1) in #{foo.bar(1).kaz.zak(test)}).

The fact that there's a ValueReference only supporting properties and no corresponding MethodReference is extra curious, since method invocations in chains and ValueExpressions, and the ValueReference type, were both introduced in EL 2.2.

So is there any hope of getting the final base and method if a ValueExpression happens to be pointing to a method? There appears to be a way, but it's a little tricky. The trick in question consists of using a special tracing ELResolver and taking advantage of the fact that some methods on ValueExpression are specified to resolve the expression "up to but excluding the last [node]". Using this we can use the following approach:

  • Instantiate an EL context which contains the special tracing EL resolver
  • Call a method on the ValueExpression that resolves the chain until the next to last node (e.g. getType()) using the special EL context
  • In the tracing EL resolver count each intermediate call, so when getType() returns the length of the chain is known
  • Call a method on the ValueExpression that resolves the entire chain (e.g. getValue()) using the same special EL context instance
  • When the EL resolver reaches the next to last node (determined by counting intermediate calls again), wrap the return value from ElResolver.getValue or ElResolver.invoke
  • If either ElResolver.getValue or ElResolver.invoke is called again later with our special wrapped type, we know this is the final node and can collect all details that we need; the base, property or method name and the resolved method parameters (if any). All of these are simply passed to us by the EL implementation

The return value wrapping of the next to last node (at call count N) may need some extra explanation. After all, why not just wait until we're called the Nth + 1 time? The issue is that this Nth + 1 call may be for resolving variables that are passed as parameters into the final node, if this final node is a method invocation. The number of such parameters is unknown and each parameter can consist of a chain of arbitrary length.

E.g. consider the following expression:

#{foo.bar.kaz(test.a.b.c(x.r), bean.x.y.z(o).p)}

In such a case the first pass of the approach given above will count the calls up until the point of resolving "bar", which is thus at call count N. If "kaz" was a simple property, our EL resolver would be asked to resolve [return value of "bar"]."kaz" at call count N + 1. However, since "kaz" is not a simple property but a complex method invocation with EL variables, the next call after N will be for resolving the base of the first EL variable used in the method invocation ("test" here).

One may also wonder why we don't "simply" get the textual EL representation of an EL expression, chop off the last node using simple string manipulation and resolve that. The reason is twofold. First, it may work for very simple expressions (like #{a.b.c}), but it doesn't work in general for complex ones (e.g. #{empty foo? a.b.c : x.y.z}). A second issue is that a given ValueExpression instance all too often contains state (like an embedded VariableMapper instance), which is lost when we just get the EL string from a ValueExpression and evaluate that.

The approach outlined above has been implemented in OmniFaces 2.0. For completeness the most important part of it, the tracing EL resolver, is given below:

class InspectorElResolver extends ELResolverWrapper {

  private int passOneCallCount;
  private int passTwoCallCount;

  private Object lastBase;
  private Object lastProperty; // Method name in case VE referenced a method, otherwise property name
  private Object[] lastParams; // Actual parameters supplied to a method (if any)

  private boolean subchainResolving;

  // Marker holder via which we can track our last base. This should become
  // the last base in a next iteration. This is needed because if the very last property is a
  // method node with a variable, we can't track resolving that variable anymore since it will not have been processed by the
  // getType() call of the first pass.
  // E.g. a.b.c(var.foo())
  private FinalBaseHolder finalBaseHolder;

  private InspectorPass pass = InspectorPass.PASS1_FIND_NEXT_TO_LAST_NODE;

  public InspectorElResolver(ELResolver elResolver) {
    super(elResolver);
  }

  @Override
  public Object getValue(ELContext context, Object base, Object property) {

    if (base instanceof FinalBaseHolder) {
      // If we get called with a FinalBaseHolder, which was set in the next to last node,
      // we know we're done and can set the base and property as the final ones.
      lastBase = ((FinalBaseHolder) base).getBase();
      lastProperty = property;

      context.setPropertyResolved(true);
      return ValueExpressionType.PROPERTY;
    }

    checkSubchainStarted(base);

    if (subchainResolving) {
      return super.getValue(context, base, property);
    }

    recordCall(base, property);

    return wrapOutcomeIfNeeded(super.getValue(context, base, property));
  }

  @Override
  public Object invoke(ELContext context, Object base, Object method, Class<?>[] paramTypes, Object[] params) {

    if (base instanceof FinalBaseHolder) {
      // If we get called with a FinalBaseHolder, which was set in the next to last node,
      // we know we're done and can set the base, method and params as the final ones.
      lastBase = ((FinalBaseHolder) base).getBase();
      lastProperty = method;
      lastParams = params;

      context.setPropertyResolved(true);
      return ValueExpressionType.METHOD;
    }

    checkSubchainStarted(base);

    if (subchainResolving) {
      return super.invoke(context, base, method, paramTypes, params);
    }

    recordCall(base, method);

    return wrapOutcomeIfNeeded(super.invoke(context, base, method, paramTypes, params));
  }

  @Override
  public Class<?> getType(ELContext context, Object base, Object property) {

    // getType is only called on the last element in the chain (if the EL
    // implementation actually calls this, which might not be the case if the
    // value expression references a method)
    //
    // We thus do know the size of the chain now, and the "lastBase" and "lastProperty"
    // that were set *before* this call are the next to last now.
    //
    // Alternatively, this method is NOT called by the EL implementation, but then
    // "lastBase" and "lastProperty" are still the next to last.
    //
    // Independent of what the EL implementation does, "passOneCallCount" should thus represent
    // the total size of the call chain minus 1. We use this in pass two to capture the
    // final base, property/method and optionally parameters.

    context.setPropertyResolved(true);

    // Special value to signal that getType() has actually been called (this value is
    // not used by the algorithm now, but may be useful when debugging)
    return InspectorElContext.class;
  }

  private boolean isAtNextToLastNode() {
    return passTwoCallCount == passOneCallCount;
  }

  private void checkSubchainStarted(Object base) {
    if (pass == InspectorPass.PASS2_FIND_FINAL_NODE && base == null && isAtNextToLastNode()) {
      // If "base" is null it means a new chain is being resolved.
      // The main expression chain likely has ended with a method that has one or more EL variables
      // as parameters that now need to be resolved.
      // E.g. a.b().c.d(var1)
      subchainResolving = true;
    }
  }

  private void recordCall(Object base, Object property) {

    switch (pass) {
      case PASS1_FIND_NEXT_TO_LAST_NODE:

        // In the first "find next to last" pass, we'll be collecting the next to last element
        // in an expression.
        // E.g. given the expression a.b().c.d, we'll end up with the base returned by b() and "c" as
        // the last property.

        passOneCallCount++;
        lastBase = base;
        lastProperty = property;

        break;

      case PASS2_FIND_FINAL_NODE:

        // In the second "find final node" pass, we'll be collecting the final node
        // in an expression. We need to take care that we're not actually calling / invoking
        // that last element as it may have a side-effect that the user doesn't want to happen
        // twice (like storing something in a DB etc).

        passTwoCallCount++;

        if (passTwoCallCount == passOneCallCount) {

          // We're at the same call count as the first phase ended with.
          // If the chain has resolved the same, we should be dealing with the same base and property now

          if (base != lastBase || property != lastProperty) {
            throw new IllegalStateException(
              "First and second pass of resolver at call #" + passTwoCallCount +
              " resolved to different base or property.");
          }

        }

        break;
    }
  }

  private Object wrapOutcomeIfNeeded(Object outcome) {
    if (pass == InspectorPass.PASS2_FIND_FINAL_NODE && finalBaseHolder == null && isAtNextToLastNode()) {
      // We're at the second pass and at the next to last node in the expression chain.
      // "outcome" which we have just resolved should thus represent our final base.

      // Wrap our final base in a special class that we can recognize when the EL implementation
      // invokes this resolver later again with it.
      finalBaseHolder = new FinalBaseHolder(outcome);
      return finalBaseHolder;
    }

    return outcome;
  }

  public InspectorPass getPass() {
    return pass;
  }

  public void setPass(InspectorPass pass) {
    this.pass = pass;
  }

  public Object getBase() {
    return lastBase;
  }

  public Object getProperty() {
    return lastProperty;
  }

  public Object[] getParams() {
    return lastParams;
  }

}
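
For illustration, driving the two passes could look roughly like this (a simplified sketch; the actual OmniFaces code wires the resolver into the InspectorElContext referenced in the listing above and guards more edge cases):

InspectorElResolver inspector = new InspectorElResolver(elContext.getELResolver());
ELContext inspectorContext = new InspectorElContext(elContext, inspector); // exact wiring assumed here

// Pass 1: resolve up to but excluding the last node; this counts the chain length
valueExpression.getType(inspectorContext);

// Pass 2: resolve the entire chain; the outcome of the next to last node is
// wrapped so the final node is recognized when it's resolved
inspector.setPass(InspectorPass.PASS2_FIND_FINAL_NODE);
valueExpression.getValue(inspectorContext);

Object base = inspector.getBase();                 // the final base
Object propertyOrMethod = inspector.getProperty(); // final property or method name
Object[] params = inspector.getParams();           // resolved method parameters, if any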

As seen, the support for ValueExpressions that point to methods is not optimal in the current EL specification. With some effort we can work around this, but arguably such functionality should be present in the specification itself.

Arjan Tijms