Sunday, October 4, 2015

How Servlet containers all implement identity stores differently

In Java EE security two artefacts play a major role, the authentication mechanism and the identity store.

The authentication mechanism is responsible for interacting with the caller and the environment. E.g. it causes a UI to be rendered that asks for details such as a username and password, and after a postback retrieves these from the request. As such it's roughly equivalent to a controller in the MVC architecture.

Java EE has standardised 4 authentication mechanisms for a Servlet container, as well as a JASPIC API profile to provide a custom authentication mechanism for Servlet (and one for SOAP, but let's ignore that for now). Unfortunately standard custom mechanisms are only required to be supported by a full Java EE server, which means the popular web profile and standalone servlet containers are left in the dark.

Servlet vendors can adopt the standard API if they want and the Servlet spec even encourages this, but in practice few do, so developers can't depend on this. (Spec text is typically quite black and white: *must support* means it's there; anything else like *should*, *is encouraged*, *may*, etc. simply means it's not there.)

The following list enumerates the standard options:

  1. Basic
  2. Digest (encouraged to be supported, not required)
  3. Client-cert
  4. Form
  5. Custom/JASPIC (encouraged for standalone/web profile Servlet containers, required for full profile Servlet containers)

The identity store, in turn, is responsible for providing access to a storage system where caller data and credentials are stored. E.g. when given a valid caller name and password as input, it returns a (possibly different) caller name and zero or more groups associated with the caller. As such it's roughly equivalent to a model in the MVC architecture; the identity store knows nothing about its environment and does not interact with the caller. It only performs the {credentials in, caller data out} function.
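
To make that function concrete, the following is a purely hypothetical sketch (the names are invented for illustration; no container uses this exact interface) of the bare contract that every container discussed below models in its own way:

import java.util.List;

// Hypothetical illustration only; every container discussed below has its own variant of this.
public interface HypotheticalIdentityStore {

    // {credentials in, caller data out}: returns the caller name and groups
    // when the credentials are valid, or null when they are not.
    CallerData validate(String callerName, char[] password);

    class CallerData {
        private final String callerName;
        private final List<String> groups;

        public CallerData(String callerName, List<String> groups) {
            this.callerName = callerName;
            this.groups = groups;
        }

        public String getCallerName() { return callerName; }
        public List<String> getGroups() { return groups; }
    }
}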

Identity stores are somewhat shrouded in mystery, and not without reason. Java EE has not standardised any identity store, nor has it really standardised any API or interface for them. There is a bridge profile for JAAS LoginModules, which are arguably the closest thing to a standard interface, but JAAS LoginModules can not be used in a portable way in Java EE since essential elements of them are not standardised. Furthermore, this bridge profile can only be used for custom authentication mechanisms (using JASPIC), which is itself only guaranteed to be available for Servlet containers that reside within a full Java EE server as mentioned above.

What happens now is that every Servlet container provides a proprietary interface and lookup method for identity stores. Nearly all of them ship with a couple of default implementations for common storage systems that the developer can choose to use. The most common ones are listed below:

  • In-memory (properties file/xml file based)
  • Database (JDBC/DataSource based)
  • LDAP

As a direct result of not being standardised, not only do Servlet containers provide their own implementations, they also each came up with their own names. So far no fewer than 16(!) terms have been discovered for essentially the same thing:

  1. authenticator
  2. authentication provider
  3. authentication repository
  4. authentication realm
  5. authentication store
  6. identity manager
  7. identity provider
  8. identity store
  9. login module
  10. login service
  11. realm
  12. relying party
  13. security policy domain
  14. security domain
  15. service provider
  16. user registry

Following a vote in the EG for the new Java EE security JSR, it was decided to use the term "identity store" going forward. This is therefore also the term used in this article.

To give an impression of how a variety of servlet containers have each implemented the identity store concept we analysed a couple of them. For each one we list the main interface one has to implement for a custom identity store, and if possible an overview of how the container actually uses this interface in an authentication mechanism.

The servlet containers and application servers containing such containers that we've looked at are given in the following list. Each one is described in greater detail below.

  1. Tomcat
  2. Jetty
  3. Undertow
  4. JBoss EAP/WildFly
  5. Resin
  6. GlassFish
  7. Liberty
  8. WebLogic



Tomcat

Tomcat calls its identity store "Realm". It's represented by the interface shown below:

public interface Realm {
    Principal authenticate(String username);
    Principal authenticate(String username, String credentials);
    Principal authenticate(String username, String digest, String nonce, String nc, String cnonce, String qop, String realm, String md5a2);
    Principal authenticate(GSSContext gssContext, boolean storeCreds);
    Principal authenticate(X509Certificate certs[]);

    void backgroundProcess();
    SecurityConstraint [] findSecurityConstraints(Request request, Context context);
    boolean hasResourcePermission(Request request, Response response, SecurityConstraint[] constraint, Context context) throws IOException;
    boolean hasRole(Wrapper wrapper, Principal principal, String role);
    boolean hasUserDataPermission(Request request, Response response, SecurityConstraint[] constraint) throws IOException;

    void addPropertyChangeListener(PropertyChangeListener listener);
    void removePropertyChangeListener(PropertyChangeListener listener);

    Container getContainer();
    void setContainer(Container container);
    CredentialHandler getCredentialHandler();
    void setCredentialHandler(CredentialHandler credentialHandler);
}

According to the documentation, "A Realm [identity store] is a "database" of usernames and passwords that identify valid users of a web application (or set of web applications), plus an enumeration of the list of roles associated with each valid user."

Tomcat's bare identity store interface is rather big as can be seen. In practice though implementations inherit from RealmBase, which is a base class (as its name implies). Somewhat confusingly its JavaDoc says that it's a realm "that reads an XML file to configure the valid users, passwords, and roles".

The only methods that most of Tomcat's identity stores implement are authenticate(String username, String credentials) for the actual authentication, String getName() to return the identity store's name (this would perhaps have been an annotation if this was designed today), and startInternal() to do initialisation (would likely be done via an @PostConstruct annotation today).
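
To illustrate, a minimal custom Tomcat identity store could look roughly like the sketch below. The class name and test data are made up, and it assumes the Tomcat 8 style RealmBase (whose exact abstract methods differ somewhat between Tomcat versions):

import java.security.Principal;
import java.util.Arrays;
import org.apache.catalina.realm.GenericPrincipal;
import org.apache.catalina.realm.RealmBase;

public class InMemoryRealm extends RealmBase {

    @Override
    protected String getName() {
        return "InMemoryRealm";
    }

    @Override
    protected String getPassword(String username) {
        // Expected password for the given caller; RealmBase's default
        // authenticate(username, credentials) compares it with the provided one
        return "test".equals(username) ? "secret" : null;
    }

    @Override
    protected Principal getPrincipal(String username) {
        return new GenericPrincipal(username, getPassword(username), Arrays.asList("architect"));
    }
}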

Example of usage

The code below shows an example of how Tomcat actually uses its identity store. The following shortened fragment is taken from the implementation of the Servlet FORM authentication mechanism in Tomcat.

// Obtain reference to identity store
Realm realm = context.getRealm();

if (characterEncoding != null) {
    request.setCharacterEncoding(characterEncoding);
}

String username = request.getParameter(FORM_USERNAME);
String password = request.getParameter(FORM_PASSWORD);

// Delegating of authentication mechanism to identity store
principal = realm.authenticate(username, password);

if (principal == null) {
    forwardToErrorPage(request, response, config);
    return false;
}

if (session == null) {
    session = request.getSessionInternal(false);
}

// Save the authenticated Principal in our session
session.setNote(FORM_PRINCIPAL_NOTE, principal);

What sets Tomcat apart from most other systems is that the authenticate() call in most cases goes directly to the custom identity store implementation, instead of through many levels of wrappers, bridges, delegators and what have you. This is even true when the provided base class RealmBase is used.



Jetty

Jetty calls its identity store LoginService. It's represented by the interface shown below:

public interface LoginService {
    String getName();
    UserIdentity login(String username, Object credentials, ServletRequest request);
    boolean validate(UserIdentity user);

    IdentityService getIdentityService();
    void setIdentityService(IdentityService service);
    void logout(UserIdentity user);
}

According to its JavaDoc, a "Login service [identity store] provides an abstract mechanism for an [authentication mechanism] to check credentials and to create a UserIdentity using the set [injected] IdentityService".

There are a few things to remark here. The getName() method names the identity store. This would likely be done via an annotation had this interface been designed today.

The essential method of the Jetty identity store is login(). It's username/credentials based, where the credentials are an opaque Object. The ServletRequest is not often used, but a JAAS bridge uses it to provide a RequestParameterCallback to Jetty specific JAAS LoginModules.

validate() is essentially a kind of shortcut method for login() != null, albeit without using the credentials.

A distinguishing aspect of Jetty is that its identity stores get injected with an IdentityService, which the store has to use to create user identities (users) based on a Subject, (caller) Principal and a set of roles. It's not 100% clear what this was intended to accomplish, since the only implementation of this service just returns new DefaultUserIdentity(subject, userPrincipal, roles), where DefaultUserIdentity is mostly just a simple POJO that encapsulates those three data items.

Another remarkable method is logout(). This is remarkable since the identity store typically just returns authentication data and doesn't hold state per user. It's the authentication mechanism that knows about the environment in which this authentication data is used (e.g. knows about the HTTP request and session). Indeed, almost no identity stores make use of this. The only one that does is the special identity store that bridges to JAAS LoginModules. This one isn't stateful, but provides an operation on the passed in user identity. As it appears, the principal returned by this bridge identity store encapsulates the JAAS LoginContext, on which the logout() method is called at this point.

Example of usage

The code below shows an example of how Jetty uses its identity store. The following shortened and 'unfolded' fragment is taken from the implementation of the Servlet FORM authentication mechanism in Jetty.

if (isJSecurityCheck(uri)) {
    String username = request.getParameter(__J_USERNAME);
    String password = request.getParameter(__J_PASSWORD);

    // Delegating of authentication mechanism to identity store
    UserIdentity user = _loginService.login(username, password, request);
    if (user != null) {
        renewSession(request, (request instanceof Request? ((Request)request).getResponse() : null));
        HttpSession session = request.getSession(true);
        session.setAttribute(__J_AUTHENTICATED, new SessionAuthentication(getAuthMethod(), user, password));

        // ...

        base_response.sendRedirect(redirectCode, response.encodeRedirectURL(nuri));
        return form_auth;
    // ...

In Jetty a call to the identity store's login() method will in most cases directly call the installed identity store, and will not go through many layers of delegation, bridges, etc. There is a convenience base class that identity store implementations can use, but this is not required.

If the base class is used, two abstract methods have to be implemented: UserIdentity loadUser(String username) and void loadUsers(), where typically only the former really does something. When this base class is indeed used, the above call to login() goes to the implementation in the base class. This first checks a cache, and if the user is not there calls the subclass via the mentioned loadUser() method.

public UserIdentity login(String username, Object credentials, ServletRequest request) {
    UserIdentity user = _users.get(username);

    if (user == null)
        user = loadUser(username);

    if (user != null) {
        UserPrincipal principal = (UserPrincipal) user.getUserPrincipal();
        if (principal.authenticate(credentials))
            return user;
    }

    return null;
}

The user returned from the sub class has a feature that's a little different from most other servers; it contains a Jetty specific principal that knows how to process the opaque credentials. It delegates this however to a Credential implementation as shown below:

public boolean authenticate(Object credentials) {
    return credential != null && credential.check(credentials);
}

The credential used here is put into the user instance, represents the *expected* credential, and can be of a multitude of types, e.g. Crypt, MD5 or Password. MD5 means the expected password is MD5 hashed, while just Password means the expected password is plain text. The check for the latter looks as follows:

public boolean check(Object credentials) {
    if (this == credentials)
        return true;
    if (credentials instanceof Password)
        return credentials.equals(_pw);
    if (credentials instanceof String)
        return credentials.equals(_pw);
    if (credentials instanceof char[])
        return Arrays.equals(_pw.toCharArray(), (char[]) credentials);
    if (credentials instanceof Credential)
        return ((Credential) credentials).check(_pw);
    return false;
}
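
Putting the pieces together, a custom Jetty identity store based on the convenience base class could look roughly like the sketch below. The class name and test data are invented, and it assumes the Jetty 9.2 era MappedLoginService base class (renamed and reshuffled in later Jetty versions):

import java.io.IOException;
import org.eclipse.jetty.security.MappedLoginService;
import org.eclipse.jetty.server.UserIdentity;
import org.eclipse.jetty.util.security.Password;

public class InMemoryLoginService extends MappedLoginService {

    @Override
    protected UserIdentity loadUser(String username) {
        // Hypothetical lookup; normally this would query a database, LDAP, etc.
        if ("test".equals(username)) {
            // putUser caches the user and wraps it in a UserIdentity for us
            return putUser("test", new Password("secret"), new String[] { "architect" });
        }
        return null;
    }

    @Override
    protected void loadUsers() throws IOException {
        // Optionally pre-load all users into the cache; often left empty
    }
}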



Undertow

Undertow is one of the newest Servlet containers. It was created by Red Hat to replace Tomcat (JBossWeb) in JBoss EAP, and can already be used in WildFly 8/9/10, which are the unsupported precursors of JBoss EAP 7. Undertow can also be used standalone.

The native identity store interface of Undertow is the IdentityManager, which is shown below:

public interface IdentityManager {
    Account verify(Credential credential);
    Account verify(String id, Credential credential);
    Account verify(Account account);
}

Peculiarly enough, there are no direct implementations for actual identity stores shipped with Undertow.
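
A custom implementation could look roughly like the sketch below (class name and test data are invented for illustration only):

import java.security.Principal;
import java.util.Arrays;
import java.util.Collections;
import java.util.Set;
import io.undertow.security.idm.Account;
import io.undertow.security.idm.Credential;
import io.undertow.security.idm.IdentityManager;
import io.undertow.security.idm.PasswordCredential;

public class InMemoryIdentityManager implements IdentityManager {

    @Override
    public Account verify(Account account) {
        return account;
    }

    @Override
    public Account verify(Credential credential) {
        return null; // not used for the username/password case
    }

    @Override
    public Account verify(String id, Credential credential) {
        if ("test".equals(id) && credential instanceof PasswordCredential
                && Arrays.equals("secret".toCharArray(), ((PasswordCredential) credential).getPassword())) {
            return new SimpleAccount(id, Collections.singleton("architect"));
        }
        return null;
    }

    private static class SimpleAccount implements Account {
        private final Principal principal;
        private final Set<String> roles;

        SimpleAccount(String name, Set<String> roles) {
            this.principal = () -> name; // java.security.Principal's single abstract method is getName()
            this.roles = roles;
        }

        @Override
        public Principal getPrincipal() {
            return principal;
        }

        @Override
        public Set<String> getRoles() {
            return roles;
        }
    }
}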

Example of usage

The code below shows an example of how Undertow actually uses its identity store. The following shortened fragment is taken from the implementation of the Servlet FORM authentication mechanism in Undertow.

FormData data = parser.parseBlocking();
FormData.FormValue jUsername = data.getFirst("j_username");
FormData.FormValue jPassword = data.getFirst("j_password");
if (jUsername == null || jPassword == null) {

String userName = jUsername.getValue();
String password = jPassword.getValue();
AuthenticationMechanismOutcome outcome = null;
PasswordCredential credential = new PasswordCredential(password.toCharArray());

// Obtain reference to identity store
IdentityManager identityManager = securityContext.getIdentityManager();

// Delegating of authentication mechanism to identity store
Account account = identityManager.verify(userName, credential);

if (account != null) {
    securityContext.authenticationComplete(account, name, true);
    outcome = AUTHENTICATED;
} else {
    securityContext.authenticationFailed(MESSAGES.authenticationFailed(userName), name);

if (outcome == AUTHENTICATED) {

return outcome != null ? outcome : NOT_AUTHENTICATED;


JBoss EAP/WildFly

JBoss identity stores are based on the JAAS LoginModule, which is shown below:

public interface LoginModule {
    void initialize(Subject subject, CallbackHandler callbackHandler, Map<String,?> sharedState, Map<String,?> options);
    boolean login() throws LoginException;
    boolean commit() throws LoginException;
    boolean abort() throws LoginException;
    boolean logout() throws LoginException;
}

As with most application servers, the JAAS LoginModule interface is used in a highly application server specific way.

It's a big question why this interface is used at all, since you can't just implement it directly. Instead you have to inherit from a credential specific base class. Therefore the LoginModule interface is practically an internal implementation detail here, not something the user actually uses. Despite that, it's not uncommon for users to think "plain" JAAS is being used and that JAAS login modules are universal and portable, but they are anything but.

For the username/password credential the base class to inherit from is UsernamePasswordLoginModule. As per the JavaDoc of this class, there are two methods that need to be implemented: getUsersPassword() and getRoleSets().

getUsersPassword() has to return the actual password for the provided username, so the base code can compare it against the provided password. If those passwords match getRoleSets() is called to retrieve the roles associated with the username. Note that JBoss typically does not map groups to roles, so it returns roles here which are then later on passed into APIs that normally would expect groups. In both methods the username is available via a call to getUsername().

The "real" contract as *hypothetical* interface could be thought of to look as follows:

public interface JBossIdentityStore {
    String getUsersPassword(String username);
    Group[] getRoleSets(String username) throws LoginException;
}
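
In code, a custom JBoss identity store could look roughly like the sketch below. The class name and test data are invented; the "Roles" group name is the JBoss convention for carrying the caller's roles:

import java.security.acl.Group;
import javax.security.auth.login.LoginException;
import org.jboss.security.SimpleGroup;
import org.jboss.security.SimplePrincipal;
import org.jboss.security.auth.spi.UsernamePasswordLoginModule;

public class InMemoryLoginModule extends UsernamePasswordLoginModule {

    @Override
    protected String getUsersPassword() throws LoginException {
        // Expected password for getUsername(); the base class does the comparison
        return "test".equals(getUsername()) ? "secret" : null;
    }

    @Override
    protected Group[] getRoleSets() throws LoginException {
        SimpleGroup roles = new SimpleGroup("Roles");
        roles.addMember(new SimplePrincipal("architect"));
        return new Group[] { roles };
    }
}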

Example of usage

There's no direct usage of the LoginModule in JBoss. JBoss EAP 7/WildFly 8-9-10 directly uses Undertow as its Servlet container, which means the authentication mechanisms shipped with it use the IdentityManager interface exactly as shown above in the Undertow section.

For usage in JBoss there's a bridge implementation of the IdentityManager to the JBoss specific JAAS LoginModule available.

The "identityManager.verify(userName, credential)" call shown above ends up at JAASIdentityManagerImpl#verify. This first wraps the username, but extracts the password from PasswordCredential. Abbreviated it looks as follows:

public Account verify(String id, Credential credential) {
    if (credential instanceof DigestCredential) {
        // ..
    } else if (credential instanceof PasswordCredential) {
        return verifyCredential(
            new AccountImpl(id),
            copyOf(((PasswordCredential) credential).getPassword()));
    }

    return verifyCredential(new AccountImpl(id), credential);
}

The next method called in the "password chain" is somewhat troublesome, as it doesn't just return the account details, but as an unavoidable side-effect also puts the result of the authentication in TLS (thread-local storage). It takes a credential as an Object and delegates further to an isValid() method. This one uses a Subject as an output parameter (meaning it doesn't return the authentication data but puts it inside the Subject that's passed in). The calling method then extracts this authentication data from the subject and puts it into its own type instead.

Abbreviated again this looks as follows:

private Account verifyCredential(AccountImpl account, Object credential) {
    Subject subject = new Subject();
    boolean isValid = securityDomainContext
                          .isValid(account.getOriginalPrincipal(), credential, subject);

    if (isValid) {
        // Stores details in TLS
        securityDomainContext
            .createSubjectInfo(account.getOriginalPrincipal(), credential, subject);

        return new AccountImpl(
            getPrincipal(subject), getRoles(subject),
            credential, account.getOriginalPrincipal());
    }

    return null;
}

The next method being called is isValid() on a type called AuthenticationManager. Via two intermediate methods this ends up calling proceedWithJaasLogin.

This method obtains a LoginContext, which wraps a Subject, which wraps the Principal and roles shown above (yes, there's a lot of wrapping going on). Abbreviated the method looks as follows:

private boolean proceedWithJaasLogin(Principal principal, Object credential, Subject theSubject) {
    try {
        copySubject(defaultLogin(principal, credential).getSubject(), theSubject);
        return true;
    } catch (LoginException e) {
        return false;
    }
}

The defaultLogin() method finally just calls plain Java SE JAAS code, although just before doing that it uses reflection to call a setSecurityInfo() method on the CallbackHandler. It's remarkable that even though this method seems to be required and known in advance, there's no interface used for this. The handler being used here is often of the type JBossCallbackHandler.

Brought back to its essence the method looks like this:

private LoginContext defaultLogin(Principal principal, Object credential) throws LoginException {
    CallbackHandler theHandler = (CallbackHandler) handler.getClass().newInstance();
    setSecurityInfo.invoke(theHandler, new Object[] {principal, credential});

    LoginContext lc = new LoginContext(securityDomain, subject, theHandler);
    lc.login();

    return lc;
}

Via some reflective magic the JAAS code shown here will locate, instantiate and at long last call our custom LoginModule's initialize(), login() and commit() methods, which in turn will call the two methods that we needed to implement in our subclass.



Resin

Resin calls its identity store "Authenticator". It's represented by a single interface shown below:

public interface Authenticator {
    String getAlgorithm(Principal uid);
    Principal authenticate(Principal user, Credentials credentials, Object details);
    boolean isUserInRole(Principal user, String role);
    void logout(Principal user);
}

There are a few things to remark here. The logout() method doesn't seem to make much sense, since it's the authentication mechanism that keeps track of the login state in the overarching server. Indeed, the method does not seem to be called by Resin, and there are no identity stores implementing it except for the AbstractAuthenticator that does nothing there.

isUserInRole() is somewhat remarkable as well. This method is not intended to check the roles of any given user, such as you could for instance use in an admin UI. Instead, it's intended to be used by the HttpServletRequest#isUserInRole call, and therefore only for the *current* user. This is indeed how it's used by Resin. This is remarkable, since most other systems keep the roles in memory. Retrieving them from the identity store every time can be rather heavyweight. To combat this, Resin uses a CachingPrincipal, but an identity store implementation has to opt-in to actually use this.

Example of usage

The code below shows an example of how Resin actually uses its identity store. The following shortened fragment is taken from the implementation of the Servlet FORM authentication mechanism in Resin.

// Obtain reference to identity store
Authenticator auth = getAuthenticator();

// ..

String userName = request.getParameter("j_username");
String passwordString = request.getParameter("j_password");

if (userName == null || passwordString == null)
    return null;

char[] password = passwordString.toCharArray();
BasicPrincipal basicUser = new BasicPrincipal(userName);
Credentials credentials = new PasswordCredentials(password);

// Delegating of authentication mechanism to identity store
user = auth.authenticate(basicUser, credentials, request);

return user;

A nice touch here is that Resin obtains the identity store via CDI injection. A somewhat unknown fact is that Resin has its own CDI implementation, CanDI and uses it internally for a lot of things. Unlike some other servers, the call to authenticate() here goes straight to the identity store. There are no layers of lookup or bridge code in between.

That said, Resin does encourage (but not require) the usage of an abstract base class it provides: AbstractAuthenticator. IFF this base class is indeed used (again, this is not required), then there are a few levels of indirection the flow goes through before reaching one's own code. In that case, the authenticate() call shown above will start with delegating to one of three methods for known credential types. This is shown below:

public Principal authenticate(Principal user, Credentials credentials, Object details) {
    if (credentials instanceof PasswordCredentials)
        return authenticate(user, (PasswordCredentials) credentials, details);
    if (credentials instanceof HttpDigestCredentials)
        return authenticate(user, (HttpDigestCredentials) credentials, details);
    if (credentials instanceof DigestCredentials)
        return authenticate(user, (DigestCredentials) credentials, details);
    return null;
}

Following the password trail, the next level will merely extract the password string:

protected Principal authenticate(Principal principal, PasswordCredentials cred, Object details) {
    return authenticate(principal, cred.getPassword());
}

The next authenticate method will call into a more specialized method that only obtains a User instance from the store. This instance has the expected password embedded, which is then verified against the provided password. Abbreviated it looks as follows:

protected Principal authenticate(Principal principal, char[] password) {
    PasswordUser user = getPasswordUser(principal);

    if (user == null || user.isDisabled() || (!isMatch(principal, password, user.getPassword()) && !user.isAnonymous()))
        return null;

    return user.getPrincipal();
}

The getPasswordUser() method goes through one more level of convenience, where it extracts the caller name that was wrapped by the Principal:

protected PasswordUser getPasswordUser(Principal principal) {
    return getPasswordUser(principal.getName());
}

This last call to getPasswordUser(String) is what typically ends up in our own custom identity store.

Finally, it's interesting to see what data PasswordUser contains. Abbreviated again this is shown below:

public class PasswordUser {
  Principal principal;
  char[] password;
  boolean disabled;
  boolean anonymous;
  String[] roles;
}



GlassFish

GlassFish identity stores are based on the JAAS LoginModule, which is shown below:

public interface LoginModule {
    void initialize(Subject subject, CallbackHandler callbackHandler, Map<String,?> sharedState, Map<String,?> options);
    boolean login() throws LoginException;
    boolean commit() throws LoginException;
    boolean abort() throws LoginException;
    boolean logout() throws LoginException;
}

Just as we saw with JBoss above, the LoginModule interface is again used in a very application server specific way. In practice, you don't just implement a LoginModule but inherit from BasePasswordLoginModule (or its empty subclass AppservPasswordLoginModule) for password based logins, or from the corresponding certificate base class for certificate ones.

As per the JavaDoc of those classes, the only method that needs to be implemented is authenticateUser(). Inside that method the username is available via the protected variable(!) "_username", while the password can be obtained via getPasswordChar(). When a custom identity store is done with its work, commitUserAuthentication() has to be called with an array of groups when authentication succeeded, and a LoginException thrown when it failed. So essentially that's the "real" contract for a custom login module. The fact that the other functionality is in the same class is more a case of using inheritance where aggregation might have made more sense. As we saw with JBoss, the LoginModule interface itself seems more like an implementation detail than something a client can really take advantage of.

The "real" contract as *hypothetical* interface looks as follows:

public interface GlassFishIdentityStore {
    String[] authenticateUser(String username, char[] password) throws LoginException;
}
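
As a rough sketch (class name and test data invented for illustration), a custom GlassFish login module based on the password base class described above could look like this:

import java.util.Arrays;
import javax.security.auth.login.LoginException;
import com.sun.appserv.security.AppservPasswordLoginModule;

public class InMemoryLoginModule extends AppservPasswordLoginModule {

    @Override
    protected void authenticateUser() throws LoginException {
        if (!"test".equals(_username) || !Arrays.equals("secret".toCharArray(), getPasswordChar())) {
            throw new LoginException("Login failed for " + _username);
        }

        // Authentication succeeded; pass the caller's groups on to the runtime
        commitUserAuthentication(new String[] { "architect" });
    }
}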

Even though a LoginModule is specific for a type of identity store (e.g. File, JDBC/database, LDAP, etc), LoginModules in GlassFish are mandated to be paired with another construct called a Realm. While having the same name as the Tomcat equivalent and even a nearly identical description, the type is completely different. In GlassFish it's actually a kind of DAO, albeit one with a rather heavyweight contract.

Most of the methods of this DAO are not actually called by the runtime for authentication, nor are they used by applications themselves. They're likely intended to be used by the GlassFish admin console, so a GlassFish administrator can add and delete users. However, very few actual realms support this, and with good reason; it just doesn't make much sense for many realms. E.g. LDAP and Solaris have their own management UI already, and a JDBC/database realm is typically application specific, so the application already has its own DAOs and services to manage users, and exposes its own UI as well.

A custom LoginModule is not forced to use this Realm, but the base class code will try to instantiate one and grab its name, so one must still be paired to the LoginModule.

The following lists the public and protected methods of this Realm class. Note that the body is left out for the non-abstract methods.

public abstract class Realm implements Comparable {

    public  static synchronized Realm getDefaultInstance();
    public  static synchronized String getDefaultRealm();
    public  static synchronized Enumeration getRealmNames();
    public  static synchronized void getRealmStatsProvier();
    public  static synchronized Realm  getInstance(String);
    public  static synchronized Realm instantiate(String, File);
    public  static synchronized Realm instantiate(String, String, Properties);
    public  static synchronized void setDefaultRealm(String);
    public  static synchronized void unloadInstance(String);
    public  static boolean isValidRealm(String);
    protected static synchronized void updateInstance(Realm, String);

    public  abstract void addUser(String, String, String[]);
    public  abstract User getUser(String);
    public  abstract void updateUser(String, String, String, String[]);
    public  abstract void removeUser(String);

    public  abstract Enumeration getUserNames();
    public  abstract Enumeration getGroupNames();
    public  abstract Enumeration getGroupNames(String);
    public  abstract void persist();
    public  abstract void refresh();

    public  abstract AuthenticationHandler getAuthenticationHandler();
    public  abstract boolean supportsUserManagement();
    public  abstract String getAuthType(); 

    public  int compareTo(Object);
    public  final String getName();
    public  synchronized String getJAASContext();
    public  synchronized String getProperty(String);
    public  synchronized void setProperty(String, String);

    protected  void init(Properties);
    protected  ArrayList<String> getMappedGroupNames(String);
    protected  String[] addAssignGroups(String[]);
    protected  final void setName(String);
    protected  synchronized Properties getProperties();
}

Example of usage

To make matters a bit more complicated, there's no direct usage of the LoginModule in GlassFish either. GlassFish' Servlet container is internally based on Tomcat, and therefore the implementation of the FORM authentication mechanism is a Tomcat class (which strongly resembles the class in Tomcat itself, but has small differences here and there). Confusingly, this uses a class named Realm again, but it's a totally different Realm than the one shown above. This is shown below:

// Obtain reference to identity store
Realm realm = context.getRealm();

String username = hreq.getParameter(FORM_USERNAME);
String pwd = hreq.getParameter(FORM_PASSWORD);
char[] password = ((pwd != null)? pwd.toCharArray() : null);

// Delegating of authentication mechanism to identity store
principal = realm.authenticate(username, password);
if (principal == null) {
    forwardToErrorPage(request, response, config);
    return (false);
}

if (session == null)
    session = getSession(request, true);

session.setNote(FORM_PRINCIPAL_NOTE, principal);

This code is largely identical to the Tomcat version shown above. The Tomcat Realm in this case is not the identity store directly, but an adapter called RealmAdapter. It first calls the following slightly abbreviated method for the password credential:

public Principal authenticate(String username, char[] password) {
    if (authenticate(username, password, null)) {
        return new WebPrincipal(username, password, SecurityContext.getCurrent());
    }

    return null;
}

This in turn calls the following abbreviated method, which handles the two supported types of credentials:

protected boolean authenticate(String username, char[] password, X509Certificate[] certs) {
    try {
        if (certs != null) {
            // ... create subject
            LoginContextDriver.doX500Login(subject, moduleID);
        } else {
            LoginContextDriver.login(username, password, _realmName);
        }
        return true;
    } catch (Exception le) {
    }

    return false;
}

Again (strongly) abbreviated, the login method called looks as follows:

public static void login(String username, char[] password, String realmName) {
    Subject subject = new Subject();
    subject.getPrivateCredentials().add(new PasswordCredential(username, password, realmName));

    LoginContextDriver.login(subject, PasswordCredential.class);
}

This new login method checks for several credential types, which abbreviated looks as follows:

public static void login(Subject subject, Class cls) throws LoginException {
    if (cls.equals(PasswordCredential.class)) {
        doPasswordLogin(subject);
    } else if (cls.equals(X509CertificateCredential.class)) {
        // ...
    } else if (cls.equals(AnonCredential.class)) {
        // ...
    } else if (cls.equals(GSSUPName.class)) {
        // ...
    } else if (cls.equals(X500Name.class)) {
        doX500Login(subject, null);
    } else {
        throw new LoginException("Unknown credential type, cannot login.");
    }
}

As we're following the password trail, we're going to look at the doPasswordLogin() method here, which strongly abbreviated looks as follows:

private static void doPasswordLogin(Subject subject) throws LoginException {
    try {
        new LoginContext(
                Realm.getInstance(
                    getPrivateCredentials(subject, PasswordCredential.class).getRealm())
                .getJAASContext()
                /* Subject and callback handler arguments omitted */)
            .login();
    } catch (Exception e) {
        throw (LoginException) new LoginException("Login failed: " + e.getMessage()).initCause(e);
    }
}

We're now 5 levels deep, and we're about to see our custom login module being called.

At this point it's down to plain Java SE JAAS code. First the name of the realm that was stuffed into a PasswordCredential which was stuffed into a Subject is used to obtain a Realm instance of the type that was shown way above; the GlassFish DAO like type. Via this instance the realm name is mapped to another name; the "JAAS context". This JAAS context name is the name under which our LoginModule has to be registered. The LoginContext does some magic to obtain this LoginModule from a configuration file and initializes it with the Subject among others. The login(), commit() and logout() methods can then make use of this Subject later on.

At long last, the login() method call (via 2 further private helper methods, not shown here) will at 7 levels deep cause the login() method of our LoginModule to be called. This happens via reflective code which looks as follows:

// methodName == "login" here

// find the requested method in the LoginModule
for (mIndex = 0; mIndex < methods.length; mIndex++) {
    if (methods[mIndex].getName().equals(methodName))
        break;
}

// set up the arguments to be passed to the LoginModule method
Object[] args = { };

// invoke the LoginModule method
boolean status = ((Boolean) methods[mIndex].invoke(moduleStack[i].module, args)).booleanValue();

But remember that in GlassFish we didn't directly implement LoginModule#login() but the abstract authenticateUser() method of the BasePasswordLoginModule, so we still have one more level to go. The final call, at level 8, that causes our very own custom method to be called can be seen below:

final public boolean login() throws LoginException {

    // Extract the username, password and realm name from the Subject
    // ...

    // Delegate the actual authentication to subclass (finally!)
    authenticateUser();

    return true;
}



Liberty

Liberty calls its identity store "user registry". It's shown below:

public interface UserRegistry {
    void initialize(Properties props) throws CustomRegistryException, RemoteException;

    String checkPassword(String userSecurityName, String password) throws PasswordCheckFailedException, CustomRegistryException, RemoteException;
    String mapCertificate(X509Certificate[] certs) throws CertificateMapNotSupportedException, CertificateMapFailedException, CustomRegistryException, RemoteException;
    String getRealm() throws CustomRegistryException, RemoteException;

    Result getUsers(String pattern, int limit) throws CustomRegistryException, RemoteException;
    String getUserDisplayName(String userSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException;
    String getUniqueUserId(String userSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException;
    String getUserSecurityName(String uniqueUserId) throws EntryNotFoundException, CustomRegistryException, RemoteException;
    boolean isValidUser(String userSecurityName) throws CustomRegistryException, RemoteException;

    Result getGroups(String pattern, int limit) throws CustomRegistryException, RemoteException;
    String getGroupDisplayName(String groupSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException;
    String getUniqueGroupId(String groupSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException;
    List getUniqueGroupIds(String uniqueUserId) throws EntryNotFoundException, CustomRegistryException, RemoteException;
    String getGroupSecurityName(String uniqueGroupId) throws EntryNotFoundException, CustomRegistryException, RemoteException;
    boolean isValidGroup(String groupSecurityName) throws CustomRegistryException, RemoteException;

    List getGroupsForUser(String groupSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException;
    WSCredential createCredential(String userSecurityName) throws NotImplementedException, EntryNotFoundException, CustomRegistryException, RemoteException;
}

As can be seen, it's clearly one of the most heavyweight interfaces for an identity store that we've seen thus far. As Liberty is closed source we can't exactly see what the server uses all these methods for.

As can be seen though, it has methods to list all users and groups that the identity store manages (getUsers(), getGroups()), as well as methods to get what IBM calls a "display name", "unique ID" and "security name", which are apparently associated with both user and group names. According to the published JavaDoc, display names are optional. It's perhaps worth asking whether the richness that these name mappings potentially allow for is worth the extra complexity seen here.

createCredential() stands out as the JavaDoc mentions it's never been called for at least the 8.5.5 release of Liberty.

The main method that does the actual authentication is checkPassword(). It's clearly username/password based. Failure has to be indicated by throwing an exception; success returns the passed in username again (or optionally any other valid name, which is a bit unlike what most other systems do). There's support for certificates via a separate method, mapCertificate(), which seemingly has to be called first, with the resulting username then passed into checkPassword() again.

Example of usage

Since Liberty is closed source we can't actually see how the server uses its identity store. Some implementation examples are given by IBM and myself.



WebLogic

It's not entirely clear what an identity store in WebLogic is really called. There are many moving parts. The overall term seems to be "security provider", but these are subdivided into authentication providers, identity assertion providers, principal validation providers, authorization providers, adjudication providers and many more providers.

One of the entry points seems to be an "Authentication Provider V2", which is given below:

public interface AuthenticationProviderV2 extends SecurityProvider {

    AppConfigurationEntry getAssertionModuleConfiguration();
    IdentityAsserterV2 getIdentityAsserter();
    AppConfigurationEntry getLoginModuleConfiguration();
    PrincipalValidator getPrincipalValidator();
}

Here it looks like getLoginModuleConfiguration() has to return an AppConfigurationEntry that holds the fully qualified class name of a JAAS LoginModule, which is given below:

public interface LoginModule {
    void initialize(Subject subject, CallbackHandler callbackHandler, Map<String,?> sharedState, Map<String,?> options);
    boolean login() throws LoginException;
    boolean commit() throws LoginException;
    boolean abort() throws LoginException;
    boolean logout() throws LoginException;
}

It seems WebLogic's usage of the LoginModule is not as highly specific to the application server as we saw was the case for JBoss and GlassFish. The user can implement the interface directly, but has to put WebLogic specific principals in the Subject as these are not standardized.

Example of usage

Since WebLogic is closed source it's not possible to see how it actually uses the Authentication Provider V2 and its associated Login Module.



Conclusion

We took a look at how a number of different servlet containers implemented the identity store concept. The variety of ways to accomplish essentially the same thing is nearly endless. Some containers pass two strings for a username and password, others pass a String for the username but a dedicated Credential type for the password, a char[], or even an opaque Object. Two containers pass in a third parameter: the HTTP servlet request.

The return type is varied as well. A (custom) Principal was used a couple of times, but several other representations of "caller data" were seen too, like an "Account" and a "UserIdentity". In one case the container deemed it necessary to modify TLS to set the result.

The number of levels (call depth) needed to go through before reaching the identity store differed as well between containers. In some cases the identity store was called immediately, with absolutely nothing in between, while in other cases up to 10 levels of bridging, adapting and delegating were done before the actual identity store was called.

Taking those intermediate levels into account revealed even more variety. We saw complete LoginContext instances being returned, we saw Subjects being used as output parameters, etc. Likewise, the mechanism to indicate success or failure ranged from an exception being thrown, via a boolean being returned, to a null being returned for groups.

One thing that all containers had in common though was that there's always an authentication mechanism that interacts with the caller and environment and delegates to the identity store. Then, no matter how different the identity store interfaces looked, every one of them had a method to perform the {credentials in, caller data out} function.

It's exactly this bare minimum of functionality that is arguably in most dire need of being standardised in Java EE. As it happens, this is indeed what we're currently looking at in the security EG.

Arjan Tijms

Friday, August 21, 2015

Activating JASPIC in JBoss WildFly

JBoss WildFly has a rather good implementation of JASPIC, the Java EE standard API to build authentication modules.
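
For context, such an authentication module is a ServerAuthModule (SAM). A minimal sketch of one, with an invented class name and hard-coded caller data purely for illustration, looks as follows:

import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.message.AuthException;
import javax.security.auth.message.AuthStatus;
import javax.security.auth.message.MessageInfo;
import javax.security.auth.message.MessagePolicy;
import javax.security.auth.message.callback.CallerPrincipalCallback;
import javax.security.auth.message.callback.GroupPrincipalCallback;
import javax.security.auth.message.module.ServerAuthModule;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class TestServerAuthModule implements ServerAuthModule {

    private CallbackHandler handler;

    @Override
    public void initialize(MessagePolicy requestPolicy, MessagePolicy responsePolicy, CallbackHandler handler, Map options) throws AuthException {
        this.handler = handler;
    }

    @Override
    public Class[] getSupportedMessageTypes() {
        return new Class[] { HttpServletRequest.class, HttpServletResponse.class };
    }

    @Override
    public AuthStatus validateRequest(MessageInfo messageInfo, Subject clientSubject, Subject serviceSubject) throws AuthException {
        try {
            // Establish the caller and a group via the standard JASPIC callbacks
            handler.handle(new Callback[] {
                new CallerPrincipalCallback(clientSubject, "test"),
                new GroupPrincipalCallback(clientSubject, new String[] { "architect" })
            });
        } catch (Exception e) {
            throw (AuthException) new AuthException().initCause(e);
        }

        return AuthStatus.SUCCESS;
    }

    @Override
    public AuthStatus secureResponse(MessageInfo messageInfo, Subject serviceSubject) throws AuthException {
        return AuthStatus.SEND_SUCCESS;
    }

    @Override
    public void cleanSubject(MessageInfo messageInfo, Subject subject) throws AuthException {
    }
}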

Unfortunately there's one big hurdle for using JASPIC on JBoss WildFly; it has to be activated. This activation is somewhat of a hack itself, and is done by putting the following XML in a file called standalone.xml that resides with the installed server:

<security-domain name="jaspitest" cache-type="default">
    <authentication-jaspi>
        <login-module-stack name="dummy">
            <login-module code="Dummy" flag="optional"/>
        </login-module-stack>
        <auth-module code="Dummy"/>
    </authentication-jaspi>
</security-domain>

Subsequently in the application a file called WEB-INF/jboss-web.xml needs to be created that references this (dummy) domain:

<?xml version="1.0"?>
<jboss-web>
    <security-domain>jaspitest</security-domain>
</jboss-web>

While this works it requires the installed server to be modified. For a universal Java EE application that has to run on multiple servers this is a troublesome requirement. While not difficult, it's something that's frequently forgotten and can take weeks if not months to resolve. And when it finally is resolved the entire process of getting someone to add the above XML fragment may have to be repeated all over again when a new version of JBoss is installed.

Clearly having to activate JASPIC using a server configuration file is less than ideal. The best solution would be to not require any kind of activation at all (like is the case for e.g. GlassFish, Geronimo and WebLogic). But this is currently not implemented for JBoss WildFly.

The next best thing is doing this activation from within the application. As it appears this is indeed possible using some reflective magic and the usage of JBoss (Undertow) internal APIs. Here's where the OmniSecurity JASPIC Undertow project comes in. With this project JASPIC can be activated by putting the following in the pom.xml of a Maven project:


The above causes JBoss WildFly/Undertow to load an extension that uses a number of internal APIs. It's not entirely clear why, but some of those are directly available, while other ones have to be declared as available. Luckily this can be done from within the application as well by creating a META-INF/jboss-deployment-structure.xml file with the following content:

<?xml version='1.0' encoding='UTF-8'?>
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.2">
    <deployment>
        <dependencies>
            <module name="org.wildfly.extension.undertow" services="export" export="true" />
        </dependencies>
    </deployment>
</jboss-deployment-structure>

So how does the extension exactly work?

The most important code consists out of two parts. A reflective part to retrieve what JBoss calls the "security domain" (the default is "other") and another part that uses the Undertow internal APIs to activate JASPIC. This is basically the same code Undertow would execute if the dummy domain is put in standalone.xml.

For completeness, the reflective part to retrieve the domain is:

String securityDomain = "other";

IdentityManager identityManager = deploymentInfo.getIdentityManager();
if (identityManager instanceof JAASIdentityManagerImpl) {
    try {
        Field securityDomainContextField = JAASIdentityManagerImpl.class.getDeclaredField("securityDomainContext");
        securityDomainContextField.setAccessible(true);
        SecurityDomainContext securityDomainContext = (SecurityDomainContext) securityDomainContextField.get(identityManager);

        securityDomain = securityDomainContext.getAuthenticationManager().getSecurityDomain();

    } catch (NoSuchFieldException | SecurityException | IllegalArgumentException | IllegalAccessException e) {
        logger.log(Level.SEVERE, "Can't obtain name of security domain, using 'other' now", e);
    }
}

The part that uses Undertow APIs to activate JASPIC is:

ApplicationPolicy applicationPolicy = new ApplicationPolicy(securityDomain);
JASPIAuthenticationInfo authenticationInfo = new JASPIAuthenticationInfo(securityDomain);
applicationPolicy.setAuthenticationInfo(authenticationInfo);
SecurityConfiguration.addApplicationPolicy(applicationPolicy);

deploymentInfo.setJaspiAuthenticationMechanism(new JASPIAuthenticationMechanism(securityDomain, null));
deploymentInfo.setSecurityContextFactory(new JASPICSecurityContextFactory(securityDomain));

The full source can be found on GitHub.


Conclusion

For JBoss WildFly JASPIC needs to be activated. There are two hacks available to do this. One requires a modification to standalone.xml and a jboss-web.xml, while the other requires a jar on the classpath of the application and a jboss-deployment-structure.xml file.

It would be best if such activation was not required at all. Hopefully this will indeed be the case in a future JBoss.

Arjan Tijms

Friday, July 17, 2015

JSF 2.3 new feature: registrable DataModels

Iterating components in JSF such as h:dataTable and ui:repeat have the DataModel class as their native input type. Other datatypes such as List are supported, but these are handled by built-in wrappers; e.g. an application provided List is wrapped into a ListDataModel.

While JSF has steadily expanded the number of built-in wrappers, and JSF 2.3 has provided new ones for Map and Iterable, a long standing request is for users (or libraries) to be able to register their own wrappers.

JSF 2.3 will now (finally) let users do this. The way this is done is by creating a wrapper DataModel for a specific type, just as one may have done years ago when returning data from a backing bean, and then annotating it with the new @FacesDataModel annotation. A “forClass” attribute has to be specified on this annotation that designates the type this wrapper is able to handle.

The following gives an abbreviated example of this:

@FacesDataModel(forClass = MyCollection.class)
public class MyCollectionModel<E> extends DataModel<E> {

    public E getRowData() {
        // access MyCollection here
    }

    public void setWrappedData(Object myCollection) {
        // likely just store myCollection
    }

    // Other methods omitted for brevity
}

Note that there are two types involved here. The “forClass” attribute is the collection or container type that the DataModel wraps, while the generic parameter E concerns the data this collection contains. E.g. suppose we have a MyCollection<User>, then “forClass” would correspond to MyCollection, and E would correspond to User. If set/getWrappedData had been generic the “forClass” attribute might not have been needed, as generic parameters can be read from class definitions, but alas.

With a class definition as given above present, a backing bean can now return a MyCollection as in the following example:

public class MyBacking {
    public MyCollection<User> getUsers() {
        // return myCollection
    }
}

h:dataTable will be able to work with this directly, as shown in the example below:

<h:dataTable value="#{myBacking.users}" var="user">
    ...
</h:dataTable>

There are a few things noteworthy here.

Traditionally JSF artefacts like e.g. ViewHandlers are registered using a JSF specific mechanism, kept internally in a JSF data structure and are looked up using a JSF factory. @FacesDataModel however has none of this and instead fully delegates to CDI for all these concerns. The registration is done automatically by CDI by the simple fact that @FacesDataModel is a CDI qualifier, and lookup happens via the CDI BeanManager (although with a small catch, as explained below).

This is a new direction that JSF is going in. It has already effectively deprecated its own managed bean facility in favour of CDI named beans, but is now also favouring CDI for registration and lookup of the pluggable artefacts it supports. New artefacts will henceforth very likely exclusively use CDI for this, while some existing ones are retrofitted (like e.g. Converters and Validators). Because of the large number of artefacts involved and the subtle changes in behaviour that can occur, not all existing JSF artefacts will however change overnight to registration/lookup via CDI.

Another thing to note concerns the small catch with the CDI lookup that was mentioned above. The thing is that with a direct lookup using the BeanManager we’d get a very specific wrapper type. E.g. suppose there was no built-in wrapper for List and one was provided via @FacesDataModel. Now also suppose the actual data type encountered at runtime is an ArrayList. Clearly, a direct lookup for ArrayList will do us no good as there’s no wrapper available for exactly this type.

This problem is handled via a CDI extension that observes all definitions of @FacesDataModel that are found by CDI during startup and stores the types they handle in a collection. This is afterwards sorted such that for any 2 classes X and Y from this collection, if an object of X is an instanceof an object of Y, X appears in the collection before Y. The collection's sorting is otherwise arbitrary.

With this collection available, the logic behind @FacesDataModel scans this collection of types from beginning to end to find the first match which is assignable from the type that we encountered at runtime. Although it’s an implementation detail, the following shows an example of how the RI implements this:

getDataModelClassesMap(cdi).entrySet().stream()
    .filter(e -> e.getKey().isAssignableFrom(forClass))
    .findFirst()
    .ifPresent(
        e -> dataModel.add(
            cdi.select(
                DataModel.class,
                new FacesDataModelAnnotationLiteral(e.getKey()))
            .get()));

In effect this means we either lookup the wrapper for our exact runtime type, or the closest super type. I.e. following the example above, the wrapper for List is found and used when the runtime type is ArrayList.

Before JSF 2.3 is finalised there are a couple of things that may still change. For instance, Map and Iterable have been added earlier as built-in wrappers, but could be refactored to be based on @FacesDataModel as well. The advantage would be that the runtime would be a client of the new API as well, which in turn means it's easier for the user to comprehend and override.

A more difficult and controversial change is to allow @FacesDataModel wrappers to override built-in wrappers. Currently it’s not possible to provide one's own List wrapper, since List is built in and takes precedence. If @FacesDataModel would take precedence, then a user or library would be able to override this. This by itself is not that bad, since JSF lives and breathes by its ability to let users or libraries override or extend core functionality. However, the fear is that via this particular way of overriding, a user may update one of its libraries that happens to ship with an @FacesDataModel implementation for List, which would then take that user by surprise.

Things get even more complicated when both the new Iterable and Map would be implemented as @FacesDataModel AND @FacesDataModel would take precedence over the built-in types. In that case the Iterable wrapper would always match before the built-in List wrapper, making the latter unreachable. Now logically this would not matter as Iterable handles lists just as well, but in practice this may be a problem for applications that in some subtle way depend on the specific behaviour of a given List wrapper (in all honesty, such applications will likely fail too when switching JSF implementations).

Finally, doing away entirely with the built-in wrappers and depending solely on @FacesDataModel is arguably the best option, but problematic too for reasons of backwards compatibility. This thus poses an interesting challenge between two opposite concerns: “Nothing can ever change, ever” and “Modernise to stay relevant and competitive”.


Conclusion

With @FacesDataModel custom DataModel wrappers can be registered, but those wrappers can not (yet) override any of the built-in types.

Arjan Tijms

Wednesday, June 3, 2015

OmniFaces 2.1 released!

We're proud to announce that today we've released OmniFaces 2.1. OmniFaces is a utility library for JSF that provides a lot of utilities to make working with JSF much easier.

OmniFaces 2.1 is the second release that will depend on JSF 2.2 and CDI 1.1 from Java EE 7. Since Java EE 7 availability remains somewhat scarce, we maintain a no-frills 1.x branch for JSF 2.0 (without CDI) as well.

The easiest way to use OmniFaces 2.1 is via Maven by adding the following to pom.xml:
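
<dependency>
    <groupId>org.omnifaces</groupId>
    <artifactId>omnifaces</artifactId>
    <version>2.1</version>
</dependency>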


Alternatively the jar files can be downloaded directly.

A complete overview of all that's new can be found on the what's new page, and some more details can be found in BalusC's blogpost about this release.

As usual the release contains an assortment of new features, some changes and a bunch of fixes. One particular fix that took some time to get right is getting a CDI availability check to work correctly with Tomcat + OpenWebBeans (OWB). After a long discussion we finally got this to work, with special thanks to Mark Struberg and Ludovic Pénet.

One point worth noting is that since we joined the JSF EG, our time has to be shared between that and working on OmniFaces. In addition some code that's now in OmniFaces might move to JSF core (such as already happened for the IterableDataModel in order to support the Iterable interface in UIData and UIRepeat). For the OmniFaces 2.x line this will have no effect though, but for OmniFaces 3.x (which will focus on JSF 2.3) it may.

We will start planning soon for OmniFaces 2.2. Feature requests are always welcome ;)

Arjan Tijms

Tuesday, May 12, 2015

NEC's WebOTX - a commercial GlassFish derivative

In a previous article we took a look at an obscure Java EE application server that's only known in Korea and virtually unknown everywhere else. Korea is not the only country that has a national application server though. Japan is the other country. In fact, it has not one, but three obscure application servers.

These Japanese servers, the so-called obscure 3, are so unknown outside of Japan that major news events like a Java EE 7 certification simply just don't make it out here.

Those servers are the following:

  1. NEC WebOTX
  2. Hitachi Application Server
  3. Fujitsu Interstage AS

In this article we're going to take a quick look at the first one of this list: NEC WebOTX.

While NEC does have an international English page where a trial can be downloaded, it only contains a very old version of WebOTX: 8.4, which implements Java EE 5. This file is called otx84_win32bitE.exe and is about 92MB in size.

As with pretty much all of the Asian application servers, the native language pages contain much more and much newer versions. In this case the Japanese page contains a recent version of WebOTX: 9.2, which implements Java EE 6. This file is called OTXEXP92.exe and is about 111MB in size. A bit of research revealed that an OTXEXP91.exe also once existed, but no other versions were found.

The file is a Windows installer that presents several dialogs in Japanese. If you can't read Japanese it's a bit difficult to follow. Luckily, there are English instructions for the older WebOTX 8.4 available that still apply to the WebOTX 9.2 installer process as well. Installation takes a while as several scripts seem to start running, and it even wants to reboot the computer (a far cry from "download & unzip, start server"), but after a while WebOTX was installed in e:\webotx.

Jar and file comparison

One of the first things I often do after installing a new server is browse a little through the folders of the installation. This gives me some general idea about how the server is structured, and quite often will reveal what implementation components a particular server is using.

Surprisingly, the folder structure somewhat resembled that of GlassFish, but with some extra directories. E.g.

(Screenshot: GlassFish main dir vs. WebOTX 9.2 main dir)


Looking at the modules directory made it clear that WebOTX is in fact strongly based on GlassFish:

(Screenshot: GlassFish modules dir vs. WebOTX 9.2 modules dir)


The jar files are largely identical in the part shown, although WebOTX does have an extra jar here and there. It's a somewhat different story when it comes to the glassfish-* and gf-* jars. None of these are present in WebOTX, although for many of them similar ones are present, just prefixed by webotx-, as shown below:

(Screenshot: glassfish- prefixed jars vs. webotx- prefixed jars)


When actually looking inside one of the jars with a matching name except for the prefix, e.g. glassfish.jar vs webotx.jar, it becomes clear that at least the file names are largely the same again, except for the renamed package. See below:



Curiously a few jars with similar names have internally renamed package names. This is for instance the case for the well known Jersey (JAX-RS) jar, but for some reason not for Mojarra (JSF). See below:

(Screenshot: glassfish jersey-core.jar vs. webotx jersey-core.jar)


Besides the differences shown above, name changes occur at a number of other places. For instance, well known GlassFish environment variables have been renamed to corresponding WebOTX ones, and pom.xml as well as MANIFEST.MF files in jar files have some renamed elements as well. For instance, the embedded pom.xml for the mojarra jar contains this:

    <!-- upds start 20121122 org.glassfish to -->
    <!-- upds end   20121122 org.glassfish to -->
        Oracle's implementation of the JSF 2.1 specification.
        This is the master POM file for Oracle's Implementation of the JSF 2.1 Specification.
With the MANIFEST.MF containing this:
Implementation-Title: Mojarra
Implementation-Version: 9.2.1
Tool: Bnd-0.0.249
DSTAMP: 20131217
TODAY: December 17 2013
Bundle-Name: Mojarra JSF Implementation 9.2.1 (20131217-1350) https://
TSTAMP: 1350
DocName: Mojarra Implementation Javadoc
Implementation-Vendor: Oracle America, Inc.


Trying out the server

Rather peculiar, to say the least for a workstation, is that WebOTX automatically starts when the computer is rebooted. Unlike most other Java EE servers the default HTTP port after installation is 80. There's no default application installed and requesting http://localhost results in the following screen:

The admin interface is present on port 5858. For some reason the initial login screen asks for very specific browser versions though:

After logging in with username "admin", password "adminadmin", we're presented with a colorful admin console:

As is often the case with admin consoles for Java EE servers, there's a lot of ancient J2EE stuff there. Options for generating stubs for EJB CMP beans are happily being shown to the user. In a way this is not so strange. Modern Java EE doesn't mandate a whole lot of things to be configured via a console, thanks to the ongoing standardization and simplification efforts, so what's left is often old J2EE stuff.

I tried to upload a .war file of the OmniFaces showcase, but unfortunately this part of the admin console was still really stuck in ancient J2EE times as it politely told me it only accepted .ear files:

After zipping the .war file into a second zip file and then renaming it to .ear (a rather senseless exercise), the result was accepted and after requesting http://localhost again the OmniFaces showcase home screen was displayed:

As we can see, it's powered by Mojarra 9.2.1. Now we all know that Mojarra moves at an amazing pace, but last time I looked it was still at 2.3 m2. Either NEC travelled some time into the future and got its Mojarra version there, or the renaming in MANIFEST.MF as shown above was a little bit too eagerly done ;)

At any rate, all of the functionality in the showcase seemed to work, but as it had been tested on GlassFish 3 before, this wasn't really surprising.


We took a short look at NEC's WebOTX and discovered it's a GlassFish derivative. This is perhaps a rather interesting thing. Since Oracle stopped commercial support for GlassFish a while ago, many wondered if the code base wouldn't wither at least a little when potentially fewer people would use it in production. However, if a large and well known company such as NEC offers a commercial offering based on GlassFish then this means that next to Payara there remains more interest in the GlassFish code beyond being "merely" an example for other vendors.

While we mainly looked at the similarities with respect to the jar files in the installed product, we didn't look at exactly what value NEC added to GlassFish. From a very quick glance it seems that at least some of it is related to management and monitoring, but to be really sure a more in depth study would be needed.

It remains remarkable though that while the company NEC is well known outside Japan for many products, it has its own certified Java EE server that's virtually unheard of outside of Japan.

Arjan Tijms

Monday, May 4, 2015

OmniFaces 2.1-RC1 has been released!

We are proud to announce that OmniFaces 2.1 release candidate 1 has been made available for testing.

OmniFaces 2.1 is the second release that will depend on JSF 2.2 and CDI 1.1 from Java EE 7. Since Java EE 7 availability remains somewhat scarce, we maintain a no-frills 1.x branch for JSF 2.0 (without CDI). For this branch we've simultaneously released a release candidate as well: 1.11-RC1.

A full list of what's new and changed is available here.

OmniFaces 2.1 RC1 can be tested by adding the following dependency to your pom.xml:
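A minimal sketch of that dependency, assuming the usual org.omnifaces:omnifaces coordinates and that the release candidate is published under version 2.1-RC1:

<dependency>
    <groupId>org.omnifaces</groupId>
    <artifactId>omnifaces</artifactId>
    <version>2.1-RC1</version>
</dependency>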


Alternatively the jar files can be downloaded directly.

For the 1.x branch the coordinates are:
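Assuming the same groupId and artifactId, a sketch for the 1.x release candidate:

<dependency>
    <groupId>org.omnifaces</groupId>
    <artifactId>omnifaces</artifactId>
    <version>1.11-RC1</version>
</dependency>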

This one too can be downloaded directly.

If no major bugs surface we hope to release OmniFaces 2.1 final soon.

Arjan Tijms

Wednesday, April 22, 2015

Testing JASPIC 1.1 on IBM Liberty EE 7 beta

In this article we take a look at the latest April 2015 beta version of IBM's Liberty server, and specifically look at how well it implements the Java EE authentication standard JASPIC.

The initial version of Liberty implemented only a seemingly random assortment of Java EE APIs, but the second version that we looked at last year officially implemented the (Java EE 6) web profile. This year however the third incarnation is well on target to implement the full profile of Java EE 7.

This means IBM's newer and much lighter Liberty (abbreviated WLP) will be a true alternative for the older and incredibly obese WebSphere (abbreviated WAS) where it purely concerns the standard Java EE APIs. From having by far the most heavyweight server on the market (weighing in at well over 2GB), IBM can now offer a server that's as light and small as various offerings from its competition.

For this article we'll be specifically looking at how well JASPIC works on Liberty. Please take into account that the EE 7 version of Liberty is still a beta, so this only concerns an early look. Bugs and missing functionality are basically expected.

We started by downloading Liberty from the beta download page. The download page initially looked a little confusing, but it's constantly improving and by the time that this article was written it was already a lot clearer. Just like the GlassFish download page, IBM now offers a very straightforward Java EE Web profile download and a Java EE full profile one.

For old-time WebSphere users who were used to installers that were themselves 200MB in size and only ran on specific operating systems, and that then happily downloaded 2GB of data representing the actual server, it beggars belief that Liberty is now just an archive that you unzip. While the last release of Liberty already greatly improved matters by having an executable jar as download, effectively a self-extracting archive, nothing beats the ultimate simplicity of an "install" that solely consists of an archive that you unzip. This represents the pure zen of installing, shaving every non-essential component off it and leaving just the bare essentials. GlassFish has an unzip install, JBoss has it, TomEE and Tomcat have it, even the JDK has it these days, and now finally IBM has one too :)

We downloaded the Java EE 7 archive, weighing in at a very reasonable 100MB, which is about the same size as the latest beta of JBoss (WildFly 9.0 beta2). Like last year there is no required registration or anything. A license has to be accepted (just like e.g. the JDK), but that's it. The experience up to this point is as perfect as can be.

A small disappointment is that the download page lists a weird extra step that supposedly needs to be performed. It says something called a "server" needs to be created after the unzip, but luckily this appeared not to be the case. After unzipping, Liberty can be started directly on OS X by pointing Eclipse to the directory where Liberty was extracted, or by running the command "./server start" from its "./bin" directory. Why this unnecessary step is listed is not clear. Hopefully it's just a remainder of some early alpha version. On Linux (we tried Ubuntu 14.10) there's an extra bug. The file permissions of the unzipped archive are wrong, and a "chmod +x ./bin/server" is needed to get Liberty to start using either Eclipse or the commandline.

(UPDATE: IBM responded right away by removing the redundant step mentioned by the download page)

A bigger disappointment is that the Java EE full profile archive is by default configured to be only a JSP/Servlet container. Java EE 7 has to be "activated" by manually editing a vendor specific XML file called "server.xml" and finding out that in its "featureManager" section one needs to type <feature>javaee-7.0</feature>. For some reason or another this doesn't include JASPIC and JACC. Even though they really are part of Java EE (7), they have to be activated separately. In the case of JASPIC this means adding the following as well: <feature>jaspic-1.1</feature>. Hopefully these two issues are just packaging errors and will be resolved in the next beta or at least in the final version.
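Putting those two together, the featureManager section in server.xml ends up looking roughly like this:

    <featureManager>
        <feature>javaee-7.0</feature>
        <feature>jaspic-1.1</feature>
    </featureManager>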

On to trying out JASPIC, we unfortunately learned that by default JASPIC doesn't really work as it should. Liberty inherited a spec compliance issue from WebSphere 8.x where the runtime insists that usernames and groups that an auth module wishes to set as the authenticated identity also exist in an IBM specific server internal identity store that IBM calls "user registry". This is however not the intent of JASPIC, and existing JASPIC modules will not take this somewhat strange requirement into account, which means they will therefore not work on WebSphere and now Liberty. We'll be looking at a hack to work around this below.

Another issue is that Liberty still mandates so-called group to role mapping, even when such mapping is not needed. Unlike some other servers that also mandate this by default, there's currently no option to switch this requirement off, but there's an open issue for this in IBM's tracker. Another problem is that the group to role mapping file can only be supplied by the application when using an EAR archive. With lighter weight applications a war archive is often the initial choice, but when security is needed and you don't want to or can't pollute the server itself with (meaningless) application specific data, then the current beta of Liberty forces the EAR archive upon you. Here too however there's already an issue filed to remedy this.

One way to work around the spec compliance issue mentioned above is by implementing a custom user registry that effectively does nothing. IBM has some documentation on how to do this, but unfortunately it doesn't give exact instructions and merely outlines the process. The structure is also not entirely logical.

For instance, step 1 says "Implement the custom user registry (". But in what kind of project? Where should the dependencies come from? Then step 2 says: "Creating an OSGi bundle with Bundle Activation. [...] Import the file". Why not create the bundle project right away and then create the mentioned file inside that bundle project? Step 4 says "Register the services", but gives no information on how to do this. Which services are we even talking about, and should they be put in an XML file or so, and if so, which one and with what syntax? Step 3.4 asks to install the feature into Liberty using Eclipse (this works very nicely), but then steps 4 and 5 are totally redundant, since they explain another, more manual method to install the feature.

Even though it's outdated, IBM's general documentation on how to create a Liberty feature is much clearer. With those two articles side by side and cross checking it with the source code of the example used in the first article, I was able to build a working NOOP user registry. I had to Google for the example's source code though as the link in the article resulted in a 404. A good thing to realize is that the .esa file that's contained in the example .jar is also an archive that once unzipped contains the actual source code. Probably a trivial bit of knowledge for OSGi users, but myself being an OSGi n00b completely overlooked this and spent quite some time looking for the .java files.

The source code of the actual user registry is as follows:

package noopregistrybundle;

import static java.util.Collections.emptyList;

import java.rmi.RemoteException;
import java.security.cert.X509Certificate;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

// The types below come from the com.ibm.websphere.security packages of
// Liberty's custom user registry SPI
import com.ibm.websphere.security.CertificateMapFailedException;
import com.ibm.websphere.security.CertificateMapNotSupportedException;
import com.ibm.websphere.security.CustomRegistryException;
import com.ibm.websphere.security.EntryNotFoundException;
import com.ibm.websphere.security.NotImplementedException;
import com.ibm.websphere.security.PasswordCheckFailedException;
import com.ibm.websphere.security.Result;
import com.ibm.websphere.security.UserRegistry;
import com.ibm.websphere.security.cred.WSCredential;

public class NoopUserRegistry implements UserRegistry {

    public void initialize(Properties props) throws CustomRegistryException, RemoteException {
        // Nothing to initialize
    }

    public String checkPassword(String userSecurityName, String password) throws PasswordCheckFailedException, CustomRegistryException, RemoteException {
        return userSecurityName;
    }

    public String mapCertificate(X509Certificate[] certs) throws CertificateMapNotSupportedException, CertificateMapFailedException, CustomRegistryException, RemoteException {
        try {
            for (X509Certificate cert : certs) {
                for (Rdn rdn : new LdapName(cert.getSubjectX500Principal().getName()).getRdns()) {
                    if (rdn.getType().equalsIgnoreCase("CN")) {
                        return rdn.getValue().toString();
                    }
                }
            }
        } catch (InvalidNameException e) {
            // Ignore and fall through to the exception below
        }

        throw new CertificateMapFailedException("No valid CN in any certificate");
    }

    public String getRealm() throws CustomRegistryException, RemoteException {
        return "customRealm"; // documentation says can be null, but should really be non-null!
    }

    public Result getUsers(String pattern, int limit) throws CustomRegistryException, RemoteException {
        return emptyResult();
    }

    public String getUserDisplayName(String userSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return userSecurityName;
    }

    public String getUniqueUserId(String userSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return userSecurityName;
    }

    public String getUserSecurityName(String uniqueUserId) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return uniqueUserId;
    }

    public boolean isValidUser(String userSecurityName) throws CustomRegistryException, RemoteException {
        return true;
    }

    public Result getGroups(String pattern, int limit) throws CustomRegistryException, RemoteException {
        return emptyResult();
    }

    public String getGroupDisplayName(String groupSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return groupSecurityName;
    }

    public String getUniqueGroupId(String groupSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return groupSecurityName;
    }

    public List<String> getUniqueGroupIds(String uniqueUserId) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return new ArrayList<>(); // Apparently needs to be mutable
    }

    public String getGroupSecurityName(String uniqueGroupId) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return uniqueGroupId;
    }

    public boolean isValidGroup(String groupSecurityName) throws CustomRegistryException, RemoteException {
        return true;
    }

    public List<String> getGroupsForUser(String groupSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return emptyList();
    }

    public Result getUsersForGroup(String paramString, int paramInt) throws NotImplementedException, EntryNotFoundException, CustomRegistryException, RemoteException {
        return emptyResult();
    }

    public WSCredential createCredential(String userSecurityName) throws NotImplementedException, EntryNotFoundException, CustomRegistryException, RemoteException {
        return null;
    }

    private Result emptyResult() {
        Result result = new Result();
        return result;
    }
}

There were two small caveats here. The first is that the documentation for getRealm says it may return null and that "customRealm" will be used as the default then. But when you actually return null authentication will fail with many null pointer exceptions appearing in the log. The second is that getUniqueGroupIds() has to return a mutable collection. If Collections#emptyList is returned it will throw an exception that no element can be inserted. Likely IBM merges the list of groups this method returns with those that are being provided by the JASPIC auth module, and directly uses this collection for that merging.

The Activator class that's mentioned in the article referenced above looks as follows:

package noopregistrybundle;

import static org.osgi.framework.Constants.SERVICE_PID;

import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedService;

import com.ibm.websphere.security.UserRegistry;

public class Activator extends NoopUserRegistry implements BundleActivator, ManagedService {

    private static final String CONFIG_PID = "noopUserRegistry";

    private ServiceRegistration<ManagedService> managedServiceRegistration;
    private ServiceRegistration<UserRegistry> userRegistryRegistration;

    @SuppressWarnings({ "rawtypes", "unchecked" })
    Hashtable getDefaults() {
        Hashtable defaults = new Hashtable();
        defaults.put(SERVICE_PID, CONFIG_PID);
        return defaults;
    }

    @SuppressWarnings("unchecked")
    public void start(BundleContext context) throws Exception {
        managedServiceRegistration = context.registerService(ManagedService.class, this, getDefaults());
        userRegistryRegistration = context.registerService(UserRegistry.class, this, getDefaults());
    }

    public void updated(Dictionary<String, ?> properties) throws ConfigurationException {
        // No configuration to process
    }

    public void stop(BundleContext context) throws Exception {
        // Unregister the services again when the bundle stops
        if (managedServiceRegistration != null) {
            managedServiceRegistration.unregister();
            managedServiceRegistration = null;
        }
        if (userRegistryRegistration != null) {
            userRegistryRegistration.unregister();
            userRegistryRegistration = null;
        }
    }
}
Here we learned what that cryptic "Register the services" instruction from the article meant: it's the two calls to context.registerService seen here. Surely something that's easy to guess, isn't it?

Finally a MANIFEST.MF file had to be created. The Eclipse tooling should normally help here, but in our case it worked badly. The "Analyze code and add dependencies to the MANIFEST.MF" command in the manifest editor (under the Dependencies tab) didn't work at all, and "" couldn't be chosen from the Imported Packages -> Add dialog. Since this import is actually used (and OSGi requires you to list each and every import used by your code) I added this manually. The completed file looks as follows:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: NoopRegistryBundle
Bundle-SymbolicName: NoopRegistryBundle
Bundle-Version: 1.0.0.qualifier
Bundle-Activator: noopregistrybundle.Activator
Bundle-RequiredExecutionEnvironment: JavaSE-1.7
Export-Package: noopregistrybundle

Creating yet another project for the so-called feature, importing this OSGi bundle there and installing the built feature into Liberty was all pretty straightforward when following the above mentioned articles.

The final step consisted of adding the noop user registry to Liberty's server.xml, which then looked roughly as follows:

<?xml version="1.0" encoding="UTF-8"?>
<server description="new server">

    <featureManager>
        <feature>javaee-7.0</feature>
        <feature>jaspic-1.1</feature>
        <!-- plus the user feature that contains the noop user registry -->
    </featureManager>

    <httpEndpoint httpPort="9080" httpsPort="9443" id="defaultHttpEndpoint"/>

</server>
With this in place, JASPIC indeed worked on Liberty, which is absolutely great! To do some more thorough testing of how compatible Liberty exactly is we used the JASPIC tests that I contributed to the Java EE 7 samples project. These tests have been used by various other server vendors already and give a basic impression of what things work and do not work.

The tests had to be adjusted for Liberty because of its requirement to add an EAR wrapper that hosts the mandated group to role mapping.
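For reference, a minimal sketch of such a mapping, assuming Liberty's ibm-application-bnd.xml format placed in the EAR's META-INF; the role and group names used here ("architect") are purely illustrative:

<?xml version="1.0" encoding="UTF-8"?>
<application-bnd>
    <!-- Maps the application role "architect" to the group "architect"
         that the auth module sets for the authenticated caller -->
    <security-role name="architect">
        <group name="architect" />
    </security-role>
</application-bnd>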

After running the tests, the following failures were reported:

  • testPublicPageNotRememberLogin (org.javaee7.jaspic.basicauthentication.BasicAuthenticationPublicTest)
  • testPublicPageLoggedin (org.javaee7.jaspic.basicauthentication.BasicAuthenticationPublicTest)
  • testProtectedAccessIsStateless (org.javaee7.jaspic.basicauthentication.BasicAuthenticationStatelessTest)
  • testPublicServletWithLoginCallingEJB (org.javaee7.jaspic.ejbpropagation.ProtectedEJBPropagationTest)
  • testProtectedServletWithLoginCallingEJB (org.javaee7.jaspic.ejbpropagation.PublicEJBPropagationLogoutTest)
  • testProtectedServletWithLoginCallingEJB (org.javaee7.jaspic.ejbpropagation.PublicEJBPropagationTest)
  • testLogout (org.javaee7.jaspic.lifecycle.AuthModuleMethodInvocationTest): SAM method cleanSubject not called, but should have been
  • testJoinSessionIsOptional (org.javaee7.jaspic.registersession.RegisterSessionTest)
  • testRemembersSession (org.javaee7.jaspic.registersession.RegisterSessionTest)
  • testResponseWrapping (org.javaee7.jaspic.wrapping.WrappingTest): Response wrapped by SAM did not arrive in Servlet
  • testRequestWrapping (org.javaee7.jaspic.wrapping.WrappingTest): Request wrapped by SAM did not arrive in Servlet

Specifically the EJB, "logout calls cleanSubject" & register session (both new JASPIC 1.1 features) and request/response wrapper tests failed.

Two of those are new JASPIC 1.1 features and likely IBM just hasn't implemented those yet for the beta. The request/response wrapping failure is a known problem from JASPIC 1.0 times. Although most servers implement it now, curiously not a single JASPIC implementation did so back in the Java EE 6 time frame (even though it was a feature required by the spec).

First Java EE 7 production ready server?

At the time of writing, which is 694 days (1 year, ~10 months) after the Java EE 7 spec was finalized, there are 3 certified Java EE servers, but none of them is deemed "production ready" by its vendor. With the implementation cycle of Java EE 6 we saw that IBM was the first vendor to release a production ready server after 559 days (1 year, 6 months), with Oracle following suit at 721 days (1 year, 11 months).

Oracle (perhaps unfortunately) doesn't do public beta releases and is a little tight-lipped about their upcoming Java EE 7 WebLogic 12.2.1 release, but it's not difficult to guess that they are working hard on it (I have it on good authority that they indeed are). Meanwhile IBM has just released a beta that starts to look very complete. Looking at the amount of time it took both vendors last time around it might be a tight race between the two for releasing the first production ready Java EE 7 server. Although JBoss' WildFly 8.x is certified, a production ready and supported release is likely still at least a full year ahead when looking at the current state of the WildFly branch and if history is anything to go by (it took JBoss 923 days (2 years, 6 months) last time).


Despite a few bugs in the packaging of the full and web profile servers, IBM's latest beta shows incredible promise. The continued effort in making its application server yet again simpler to install for developers is nothing but laudable. IBM clearly meant it when they started the Liberty project a few years ago and said their mission was to optimize the developer experience.

There are a few small bugs and one somewhat larger violation in its JASPIC implementation, but we have to realize it's just a beta. In fact, IBM engineers are already looking at the JASPIC issues.

To summarize the good and not so good points:

Good

  • Runs on all operating systems (no special IBM JDK required)
  • Monthly betas of EE 7 server
  • Liberty to support Java EE 7 full profile
  • Possibly on its way to become the first production ready EE 7 server
  • Public download page without required registration
  • Very good file size for full profile (100MB)
  • Extremely easy "download - unzip - ./server start" experience

Not (yet) so good

  • Download page lists totally unnecessary step asking to "create a server" (update: now fixed by IBM)
  • Wrong file permissions in archive for usage on Linux; executable attribute missing on bin/server (update: now fixed by IBM)
  • Wrong configuration of server.xml; both web and full profile by default configured as JSP/Servlet only
  • "javaee-7.0" feature in server.xml doesn't imply JASPIC and JACC, while both are part of Java EE (update: now fixed by IBM)
  • JASPIC runtime tries to validate usernames/groups in internal identity store (violation of JASPIC spec)
  • Mandatory group to role mapping, even when this is not needed
  • Mandatory usage of EAR archive when group to role mapping has to be provided by the application
  • Not all JASPIC features implemented yet (but remember that we looked at a beta version)

Arjan Tijms