Thursday, January 22, 2015

The most popular upcoming Java EE 8 technologies according to ZEEF users

I maintain a page on zeef.com about the upcoming Java EE 8 specification. On this page I collect all interesting links about the various sub-specs that will be updated or newly introduced in EE 8. The page has been up since April last year and therefore currently has almost 10 months' worth of data (at the moment 8.7k views and 5k clicks).

While there still aren't any discussions, and thus no links, available for quite a few specs, it does give us some early insight into what's popular. At the moment the ranking is as follows:

  1. Java EE 8 roadmap [png] (Java EE 8 overall)
  2. JSF MVC discussion (JSF 2.3)
  3. Let's get started on JSF 2.3 (JSF 2.3)
  4. Servlet 4.0 (Servlet 4.0)
  5. Java EE 8 Takes Off! (Java EE 8 overall)
  6. An MVC action-based framework in Java EE 8 (MVC 1.0)
  7. JSF and MVC 1.0, a comparison in code (MVC 1.0)
  8. Let's get started on Servlet 4.0 (Servlet 4.0)
  9. JavaOne Replay: 'Java EE 8 Overview' by Linda DeMichiel (Java EE 8 overall)
  10. A CDI 2 Wish List (CDI 2.0)

If we look at the single highest ranking link for each spec, we'll get to the following global ranking:

  1. Java EE 8 overall
  2. JSF 2.3
  3. Servlet 4.0
  4. MVC 1.0
  5. CDI 2.0
  6. JAX-RS 2.1
  7. JSON-B 1.0
  8. JCache 1.0
  9. JMS 2.1
  10. Java (EE) Configuration
  11. Java EE Security API 1.0
  12. JCA.next
  13. Java EE Management API 2.0
  14. JSON-P 1.1
Interestingly, when we don't look at the single highest clicked link per spec but aggregate the clicks for all top links, we get a somewhat different ranking, as shown below (the relative position compared to the first ranking is shown behind each spec):

  1. Java EE 8 overall (=)
  2. MVC 1.0 (+2)
  3. JSF 2.3 (-1)
  4. CDI 2.0 (+1)
  5. Servlet 4.0 (-2)
  6. JCache 1.0 (+2)
  7. Java (EE) Configuration (+3)
  8. JAX-RS 2.1 (-2)
  9. JMS 2.1 (=)
  10. JSON-B 1.0 (-3)
  11. Java EE Security API 1.0 (=)
  12. JCA.next (=)
  13. Java EE Management API 2.0 (=)
  14. JSON-P 1.1 (=)
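For clarity, the relative positions shown behind each spec are simply the difference between a spec's index in the two rankings. A quick sketch of that computation (top 5 only; the class and method names are mine, purely illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class RankDelta {

    // Difference between a spec's position in the old and the new ranking;
    // a positive value means the spec moved up in the new ranking.
    static int delta(List<String> oldRanking, List<String> newRanking, String spec) {
        return oldRanking.indexOf(spec) - newRanking.indexOf(spec);
    }

    public static void main(String[] args) {
        // Ranking by the single highest clicked link per spec
        List<String> bySingleLink = Arrays.asList("Java EE 8", "JSF 2.3", "Servlet 4.0", "MVC 1.0", "CDI 2.0");
        // Ranking by aggregated clicks over all top links
        List<String> byAggregate  = Arrays.asList("Java EE 8", "MVC 1.0", "JSF 2.3", "CDI 2.0", "Servlet 4.0");

        for (String spec : byAggregate) {
            int d = delta(bySingleLink, byAggregate, spec);
            System.out.printf("%s (%s)%n", spec, d == 0 ? "=" : String.format("%+d", d));
        }
    }
}
```

Running this prints the second ranking with the same (=), (+n) and (-n) annotations used in the lists above.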

As we can see, the specs that occupy the top 5 are still the same, but whereas JSF 2.3 was the most popular sub-spec when considering a single link, looking at all links together it's now MVC 1.0. The umbrella spec Java EE, however, is still firmly on top. The bottom segment is even exactly the same, but for most of those specs very little information is available, so a whole block contains basically the same as a single link. Specifically, for the Java EE Management API and JSON-P 1.1 there's no information available at all beyond a single announcement that the initial JSR was posted.

While the above ranking does give us some data points, we have to take into account that it's not just about the technologies themselves but also about a number of other factors. E.g. the position on the page does influence clicks. The Java EE 8 block is on the top left of the page and will be seen first by most visitors. Then again, CDI 2.0 is at a pretty good position at the top middle of the page, but got relatively few clicks. JSF 2.3 and especially MVC 1.0 are at a less ideal position at the middle left of the page, below the so-called "fold" of many screens (meaning, you have to scroll to see it). Yet, both of them received the most clicks after the umbrella spec.

The observant reader may notice that some key Java EE technologies such as JPA, EJB, Bean Validation and Expression Language are missing. It's likely that these specs will either not be updated at all for Java EE 8, or will only receive a very small update (called an MR, or Maintenance Release, in the JCP process).

Oracle has indicated on multiple occasions that this is almost entirely due to resource issues. Apparently there just aren't enough resources available to update all specs. Even though there are, e.g., dozens of JPA JIRA issues filed, and persistence is arguably one of the most important aspects of the majority of (web) applications, it's just not possible to have a major update for it, unfortunately.

Conclusion

In general we can say that for this particular data point the web technologies gather the most interest, while the back-end/business and supporting technologies are a little less popular. It will be interesting to see if, and if so how, the numbers change when more information becomes available. Java EE Management API 2.0 for one seems really unpopular now, but there simply isn't much to measure yet.

Arjan Tijms

Tuesday, January 6, 2015

Java EE authorization - JACC revisited part II

This is the second part of a series where we revisit JACC after taking an initial look at it last year. In the first part we somewhat rectified a few of the disadvantages that were initially discovered and looked at various role mapping strategies.

In this second part we'll take an in-depth look at obtaining the container specific role mapper and the container specific way of how a JACC provider is deployed. In the next and final part we'll be bringing it all together and present a fully working JACC provider.

Container specifics

The way to obtain the role mapper, and what data it exactly provides, differs greatly per container and is something that containers don't really document either. Also, although the two system properties that specify the two JACC artifacts are standardized, it's often not at all clear how the jar file containing the JACC provider implementation classes has to be added to the container's class path.
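For reference, these are the two standardized system properties themselves; they name the Policy and PolicyConfigurationFactory implementation classes of the JACC provider (the bracketed class names below are placeholders):

```
-Djavax.security.jacc.policy.provider=[your Policy implementation class]
-Djavax.security.jacc.PolicyConfigurationFactory.provider=[your PolicyConfigurationFactory implementation class]
```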

After much research I obtained the details on how to do this for the following servers:

  • GlassFish 4.1
  • WebLogic 12.1.3
  • Geronimo 3.0.1
This list is admittedly limited, but as it turned out, the process of finding out these details can be rather time-consuming and frankly maddening. Given the amount of time that has already gone into this research I decided to leave it at these three, but I hope to look into additional servers at a later date.

The JACC provider that we'll present in the next part will use a RoleMapper class that at runtime tries to obtain the native mapper from each known server using reflection (so as to avoid compile-time dependencies). Whatever the native role mapper returns is transformed into a group-to-roles map first (see part I for more details on the various mappings). In the sections below the specific reflective code for each server is given first. The full RoleMapper class is given afterwards.
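To make the target format concrete: the transformation boils down to filling a plain group-to-roles map, which can then be queried with a caller's groups. A minimal, self-contained sketch of that idea (the names here are mine, not the actual provider code):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupToRolesSketch {

    // The uniform format: each group maps to the roles it has been assigned
    private final Map<String, List<String>> groupToRoles = new HashMap<>();

    // Record that the given group maps to the given role
    public void addMapping(String group, String role) {
        List<String> roles = groupToRoles.get(group);
        if (roles == null) {
            roles = new ArrayList<>();
            groupToRoles.put(group, roles);
        }
        roles.add(role);
    }

    // Collect all roles mapped to any of the caller's groups
    public List<String> rolesFor(Collection<String> groups) {
        List<String> roles = new ArrayList<>();
        for (String group : groups) {
            roles.addAll(groupToRoles.getOrDefault(group, Collections.<String>emptyList()));
        }
        return roles;
    }
}
```

Each server-specific method shown below does essentially the `addMapping` part, just with the group and role names coming from the container's native mapper.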

GlassFish

The one server where the role mapper was simple to obtain was GlassFish. The code for this is clearly visible in the in-memory example JACC provider that ships with GlassFish. One slightly confusing thing is that the example class and its interface contain many methods that aren't actually used. Based on this example, the reflective code and mapping became as follows:

private boolean tryGlassFish(String contextID, Collection<String> allDeclaredRoles) {

    try {
        Class<?> SecurityRoleMapperFactoryClass = Class.forName("org.glassfish.deployment.common.SecurityRoleMapperFactory");

        Object factoryInstance = Class.forName("org.glassfish.internal.api.Globals")
                                      .getMethod("get", SecurityRoleMapperFactoryClass.getClass())
                                      .invoke(null, SecurityRoleMapperFactoryClass);

        Object securityRoleMapperInstance = SecurityRoleMapperFactoryClass.getMethod("getRoleMapper", String.class)
                                                                          .invoke(factoryInstance, contextID);

        @SuppressWarnings("unchecked")
        Map<String, Subject> roleToSubjectMap = (Map<String, Subject>) Class.forName("org.glassfish.deployment.common.SecurityRoleMapper")
                                                                            .getMethod("getRoleToSubjectMapping")
                                                                            .invoke(securityRoleMapperInstance);

        for (String role : allDeclaredRoles) {
            if (roleToSubjectMap.containsKey(role)) {
                Set<Principal> principals = roleToSubjectMap.get(role).getPrincipals();

                List<String> groups = getGroupsFromPrincipals(principals);
                for (String group : groups) {
                    if (!groupToRoles.containsKey(group)) {
                        groupToRoles.put(group, new ArrayList<String>());
                    }
                    groupToRoles.get(group).add(role);
                }

                if ("**".equals(role) && !groups.isEmpty()) {
                    // JACC spec 3.2 states:
                    //
                    // "For the any "authenticated user role", "**", and unless an application specific mapping has
                    // been established for this role,
                    // the provider must ensure that all permissions added to the role are granted to any
                    // authenticated user."
                    //
                    // Here we check for the "unless" part mentioned above. If we're dealing with the "**" role here
                    // and groups is not
                    // empty, then there's an application specific mapping and "**" maps only to those groups, not
                    // to any authenticated user.
                    anyAuthenticatedUserRoleMapped = true;
                }
            }
        }

        return true;

    } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
            | InvocationTargetException e) {
        return false;
    }
}

Finding out how to install the JACC provider took a bit more time. For some reason the documentation doesn't mention it, but the location to put the mentioned jar file is simply:

[glassfish_home]/glassfish/lib
GlassFish has a convenience mechanism to put a named JACC configuration in the following file:
[glassfish_home]/glassfish/domains/domain1/config/domain.xml
This name has to be added to the security-config element and a jacc-provider element that specifies both the policy and factory classes as follows:
<security-service jacc="test">
    <!-- Other elements here -->
    <jacc-provider policy-provider="test.TestPolicy" name="test" policy-configuration-factory-provider="test.TestPolicyConfigurationFactory"></jacc-provider>
</security-service>

WebLogic

WebLogic turned out to be a great deal more difficult than GlassFish. It being closed source, you can't just look into any default JACC provider, but as it happens the WebLogic documentation mentions (actually, requires) a pluggable role mapper:

-Dweblogic.security.jacc.RoleMapperFactory.provider=weblogic.security.jacc.simpleprovider.RoleMapperFactoryImpl
Unfortunately, even though an option for a role mapper factory class is used, there's no documentation on what one's own role mapper factory should do (which interfaces it should implement, which interfaces the actual role mapper it returns should implement, etc.).

After a fair amount of Googling I did eventually find that what appears to be a super class is documented. Furthermore, the interface of a type called RoleMapper is documented as well.

Unfortunately that last interface does not contain any of the actual methods to do role mapping, so you can't use an implementation of just this interface. All this was really surprising; WebLogic gives the option to specify a role mapper factory, but key details are missing. Still, the above gave just enough hints for some reflective experiments, and after a lot of trial and error I came to the following code, which seemed to do the trick:

private boolean tryWebLogic(String contextID, Collection<String> allDeclaredRoles) {

    try {

        // See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html
        Class<?> roleMapperFactoryClass = Class.forName("weblogic.security.jacc.RoleMapperFactory");

        // RoleMapperFactory implementation class always seems to be the value of what is passed on the commandline
        // via the -Dweblogic.security.jacc.RoleMapperFactory.provider option.
        // See http://docs.oracle.com/cd/E57014_01/wls/SCPRG/server_prot.htm
        Object roleMapperFactoryInstance = roleMapperFactoryClass.getMethod("getRoleMapperFactory")
                                                                 .invoke(null);

        // See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html#getRoleMapperForContextID(java.lang.String)
        Object roleMapperInstance = roleMapperFactoryClass.getMethod("getRoleMapperForContextID", String.class)
                                                          .invoke(roleMapperFactoryInstance, contextID);

        // This seems really awkward; the Map contains BOTH group names and user names, without ANY way to
        // distinguish between the two.
        // If a user now has a name that happens to be a role as well, we have an issue :X
        @SuppressWarnings("unchecked")
        Map<String, String[]> roleToPrincipalNamesMap = (Map<String, String[]>) Class.forName("weblogic.security.jacc.simpleprovider.RoleMapperImpl")
                                                                                     .getMethod("getRolesToPrincipalNames")
                                                                                     .invoke(roleMapperInstance);

        for (String role : allDeclaredRoles) {
            if (roleToPrincipalNamesMap.containsKey(role)) {

                List<String> groupsOrUserNames = asList(roleToPrincipalNamesMap.get(role));

                for (String groupOrUserName : groupsOrUserNames) {
                    // Ignore the fact that the collection also contains user names and hope
                    // that there are no user names in the application with the same name as a group
                    if (!groupToRoles.containsKey(groupOrUserName)) {
                        groupToRoles.put(groupOrUserName, new ArrayList<String>());
                    }
                    groupToRoles.get(groupOrUserName).add(role);
                }

                if ("**".equals(role) && !groupsOrUserNames.isEmpty()) {
                    // JACC spec 3.2 states: [...]
                    anyAuthenticatedUserRoleMapped = true;
                }
            }
        }

        return true;

    } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
            | InvocationTargetException e) {
        return false;
    }
}

Adding the two standard system properties for WebLogic appeared to be done most conveniently in the file:

[wls_home]/user_projects/domains/mydomain/bin/setDomainEnv.sh
There's a comment in the file that says to uncomment a section to use JACC, but that is completely wrong. If you do indeed uncomment it, the server will not start: the section consists of a few -D options, each at the beginning of a line, but at that point in the file you can't specify -D options that way. Furthermore it suggests that it's required to activate the Java SE security manager, but LUCKILY this is NOT the case. From WebLogic 12.1.3 onwards the security manager is no longer required (which is a huge win for working with JACC on WebLogic). The following does work though for our own JACC provider:
JACC_PROPERTIES="-Djavax.security.jacc.policy.provider=test.TestPolicy  -Djavax.security.jacc.PolicyConfigurationFactory.provider=test.TestPolicyConfigurationFactory -Dweblogic.security.jacc.RoleMapperFactory.provider=weblogic.security.jacc.simpleprovider.RoleMapperFactoryImpl "

JAVA_PROPERTIES="${JAVA_PROPERTIES} ${EXTRA_JAVA_PROPERTIES} ${JACC_PROPERTIES}"
export JAVA_PROPERTIES
For completeness and future reference, the following definition for JACC_PROPERTIES activates the provided JACC provider:
# JACC_PROPERTIES="-Djavax.security.jacc.policy.provider=weblogic.security.jacc.simpleprovider.SimpleJACCPolicy -Djavax.security.jacc.PolicyConfigurationFactory.provider=weblogic.security.jacc.simpleprovider.PolicyConfigurationFactoryImpl -Dweblogic.security.jacc.RoleMapperFactory.provider=weblogic.security.jacc.simpleprovider.RoleMapperFactoryImpl "
(Do note that WebLogic violates the Java EE spec here. Such activation should NOT be needed, as a JACC provider should be active by default.)

The location of where to put the JACC provider jar was not as straightforward. I tried the [wls_home]/user_projects/domains/mydomain/lib folder, and although WebLogic did seem to detect "something" there, as it would log during startup that it encountered a library and was adding it, it would not actually work and class not found exceptions followed. After some fiddling I got around this by adding the following at the point where the CLASSPATH variable is exported:

CLASSPATH="${DOMAIN_HOME}/lib/jacctest-0.0.1-SNAPSHOT.jar:${CLASSPATH}"
export CLASSPATH
I'm not sure if this is the recommended approach, but it seemed to do the trick.

Geronimo

Where WebLogic was a great deal more difficult than GlassFish, Geronimo unfortunately was more difficult still, by a wide margin. In two decades of working with a variety of platforms and languages, I think getting this to work ranks pretty high on the list of downright bizarre things required to get something working. The only thing that comes close is getting some obscure undocumented ActiveX control to work in a C++ Windows app around 1997.

The role mapper in Geronimo is not directly accessible via some factory or service as in GlassFish and WebLogic; instead there's a map containing the mapping, which is injected into a Geronimo-specific JACC provider that extends something and implements many interfaces. As we obviously don't have, or want to have, a Geronimo-specific provider, I tried to find out how this injection exactly works.

Things start with a class called GeronimoSecurityBuilderImpl that parses the XML that expresses the role mapping. Nothing too obscure here. This class then registers a so-called GBean (a kind of Geronimo-specific JMX bean) to which it passes the previously mentioned Map, and then registers a second GBean that gets a reference to this first GBean. Meanwhile, the Geronimo-specific policy configuration factory, called GeronimoPolicyConfigurationFactory, "registers" itself via a static method on one of the GBeans mentioned before. Those GBeans at some point start running, use the factory that was set via the static method to get a Geronimo-specific policy configuration, and then call a method on that to pass the Map containing the role mapping.

Now this scheme is not only rather convoluted, to say the least; there's also no way to get to this map from anywhere else without resorting to very ugly hacks, using reflection to break into private instance variables. It was possible to programmatically obtain a GBean, but the one we're after has many instances, and it didn't prove easy to get the one that applies to the current web app. There seemed to be an option if you know the Maven-like coordinates of your own app, but I didn't want to hardcode these and didn't find an API to obtain them programmatically. Via the source I noticed another way, via some metadata about a GBean, but there was no API available to obtain this either.

After spending far more hours than I'm willing to admit, I finally came to the following code to obtain the Map I was after:

private void tryGeronimoAlternative() {
    Kernel kernel = KernelRegistry.getSingleKernel();
    
    try {
        ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();
        
        Field registryField = kernel.getClass().getDeclaredField("registry");
        registryField.setAccessible(true);
        BasicRegistry registry = (BasicRegistry) registryField.get(kernel);
        
        Set<GBeanInstance> instances = registry.listGBeans(new AbstractNameQuery(null, Collections.EMPTY_MAP, ApplicationPrincipalRoleConfigurationManager.class.getName()));
        
        Map<Principal, Set<String>> principalRoleMap = null;
        for (GBeanInstance instance : instances) {
            
            Field classLoaderField = instance.getClass().getDeclaredField("classLoader");
            classLoaderField.setAccessible(true);
            ClassLoader gBeanClassLoader = (ClassLoader) classLoaderField.get(instance);
            
            if (gBeanClassLoader.equals(contextClassLoader)) {
                
                ApplicationPrincipalRoleConfigurationManager manager = (ApplicationPrincipalRoleConfigurationManager) instance.getTarget();
                Field principalRoleMapField = manager.getClass().getDeclaredField("principalRoleMap");
                principalRoleMapField.setAccessible(true);
                
                principalRoleMap = (Map<Principal, Set<String>>) principalRoleMapField.get(manager);
                break;
                
            }
        }
        
        // process principalRoleMap here
        
    } catch (InternalKernelException | IllegalStateException | NoSuchFieldException | SecurityException | IllegalArgumentException | IllegalAccessException e1) {
        // Ignore
    }
}
Note that this is the "raw" code, not yet converted to be fully reflection based like the GlassFish and WebLogic examples, and not yet converting the principalRoleMap to the uniform format we use.

In order to install the custom JACC provider I looked for a config file or startup script, but there didn't seem to be an obvious one. So I just supplied the standardized options directly on the command line as follows:

-Djavax.security.jacc.policy.provider=test.TestPolicy  
-Djavax.security.jacc.PolicyConfigurationFactory.provider=test.TestPolicyConfigurationFactory
I then tried to find a place to put the jar again, but simply couldn't find one. There just doesn't seem to be any mechanism to extend Geronimo's class path for the entire server, which is (perhaps unfortunately) what JACC needs. There were some options for individual deployments, but this cannot work for JACC, since the Policy instance is called at a very low level and for everything that is deployed on the server. Geronimo by default deploys about 10 applications for all kinds of things; mucking about with each and every one of them just isn't feasible.

What I eventually did is perhaps one of the biggest hacks ever: I injected the required classes directly into the Geronimo library that contains the default JACC provider. After all, this provider is already used, so surely Geronimo has to be able to load my custom provider from THIS location :X

All libraries in Geronimo are OSGi bundles, so in addition to injecting my classes I also had to adjust the MANIFEST, but after doing that Geronimo was FINALLY able to find my custom JACC provider. The MANIFEST was updated by copying the existing one from the jar and adding the following to it:

test;uses:="org.apa
 che.geronimo.security.jaspi,javax.security.auth,org.apache.geronimo.s
 ecurity,org.apache.geronimo.security.realm.providers,org.apache.geron
 imo.security.jaas,javax.security.auth.callback,javax.security.auth.lo
 gin,javax.security.auth.message.callback"
And then running the zip command as follows:
zip /test/geronimo-tomcat7-javaee6-3.0.1/repository/org/apache/geronimo/framework/geronimo-security/3.0.1/geronimo-security-3.0.1.jar META-INF/MANIFEST.MF 
From the root directory where my compiled classes live I executed the following command to inject them:
jar uf /test/geronimo-tomcat7-javaee6-3.0.1/repository/org/apache/geronimo/framework/geronimo-security/3.0.1/geronimo-security-3.0.1.jar test/*
I happily admit it's pretty insane to do it like this. Hopefully this is not really the way to do it, and there's a sane way that I just happened to miss, or that someone with deep Geronimo knowledge would "just know".

Much to my dismay, the absurdity didn't end there. As it appeared, the previously mentioned GBeans act as a kind of protection mechanism to ensure only Geronimo-specific JACC providers are installed. Since the entire purpose of the exercise is to install a general, universal JACC provider, turning it into a Geronimo-specific one obviously wasn't an option. The scarce documentation vaguely hints at replacing some of these GBeans or the security builder specifically for your application, but since JACC is installed for the entire server this just isn't feasible.

Eventually I tricked Geronimo into thinking a Geronimo-specific JACC provider was installed by instantiating (via reflection) a dummy Geronimo policy provider factory, and putting intercepting proxies into it to prevent an NPE that would otherwise ensue. As a side effect of this hack to beat Geronimo's "protection", I could capture the map I previously grabbed via reflective hacks somewhat more easily.

The code to install the dummy factory:

try {
    // Geronimo 3.0.1 contains a protection mechanism to ensure only a Geronimo policy provider is installed.
    // This protection can be beat by creating an instance of GeronimoPolicyConfigurationFactory once. This instance
    // will statically register itself with an internal Geronimo class
    geronimoPolicyConfigurationFactoryInstance = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory").newInstance();
    geronimoContextToRoleMapping = new ConcurrentHashMap<>();
} catch (Exception e) {
    // ignore
}
The code to put the capturing policy configurations in place:
// Are we dealing with Geronimo?
if (geronimoPolicyConfigurationFactoryInstance != null) {
    
    // PrincipalRoleConfiguration
    
    try {
        Class<?> geronimoPolicyConfigurationClass = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfiguration");
        
        Object geronimoPolicyConfigurationProxy = Proxy.newProxyInstance(TestRoleMapper.class.getClassLoader(), new Class[] {geronimoPolicyConfigurationClass}, new InvocationHandler() {
            
            @SuppressWarnings("unchecked")
            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {

                // Take special action on the following method:
                
                // void setPrincipalRoleMapping(Map<Principal, Set<String>> principalRoleMap) throws PolicyContextException;
                if (method.getName().equals("setPrincipalRoleMapping")) {
                    
                    geronimoContextToRoleMapping.put(contextID, (Map<Principal, Set<String>>) args[0]);
                    
                }
                return null;
            }
        });
        
        // Set the proxy on the GeronimoPolicyConfigurationFactory so it will call us back later with the role mapping via the following method:
        
        // public void setPolicyConfiguration(String contextID, GeronimoPolicyConfiguration configuration) {
        Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory")
             .getMethod("setPolicyConfiguration", String.class, geronimoPolicyConfigurationClass)
             .invoke(geronimoPolicyConfigurationFactoryInstance, contextID, geronimoPolicyConfigurationProxy);
        
        
    } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
        // Ignore
    }
}
And finally the code to transform the map into our uniform target map:
private boolean tryGeronimo(String contextID, Collection<String> allDeclaredRoles) {
    if (geronimoContextToRoleMapping != null) {
        
        if (geronimoContextToRoleMapping.containsKey(contextID)) {
            Map<Principal, Set<String>> principalsToRoles = geronimoContextToRoleMapping.get(contextID);
            
            for (Map.Entry<Principal, Set<String>> entry : principalsToRoles.entrySet()) {
                
                // Convert the principal that's used as the key in the Map to a list of zero or more groups.
                // (for Geronimo we know that using the default role mapper it's always zero or one group)
                for (String group : principalToGroups(entry.getKey())) {
                    if (!groupToRoles.containsKey(group)) {
                        groupToRoles.put(group, new ArrayList<String>());
                    }
                    groupToRoles.get(group).addAll(entry.getValue());
                    
                    if (entry.getValue().contains("**")) {
                        // JACC spec 3.2 states: [...]
                        anyAuthenticatedUserRoleMapped = true;
                    }
                }
            }
        }
        
        return true;
    }
    
    return false;
}
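The principalToGroups helper used above isn't shown in this snippet; the real version has to recognize Geronimo's specific group principal class. Purely as an illustration of its contract (a Principal in, zero or more group names out), here's a hypothetical stand-in that simply treats any principal whose class name contains "Group" as a group. This is NOT the actual implementation, just a sketch:

```java
import java.security.Principal;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PrincipalToGroupsSketch {

    // Hypothetical version of principalToGroups: convert the principal used as
    // key in Geronimo's map to a list of zero or more group names. The real
    // implementation checks for Geronimo's group principal type; here we crudely
    // key off the class name instead.
    static List<String> principalToGroups(Principal principal) {
        if (principal.getClass().getSimpleName().contains("Group")) {
            return Collections.singletonList(principal.getName());
        }
        return new ArrayList<String>();
    }
}
```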

The role mapper class

After having taken a look at the code for each individual server in isolation above, it's now time to show the full code for the RoleMapper class. This is the class that the JACC provider we'll present in the next part will use as the universal way to obtain the server's role mapping, as if this was already standardized:

package test;

import static java.util.Arrays.asList;
import static java.util.Collections.list;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.security.Principal;
import java.security.acl.Group;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.security.auth.Subject;

public class TestRoleMapper {
    
    private static Object geronimoPolicyConfigurationFactoryInstance;
    private static ConcurrentMap<String, Map<Principal, Set<String>>> geronimoContextToRoleMapping;
    
    private Map<String, List<String>> groupToRoles = new HashMap<>();

    private boolean oneToOneMapping;
    private boolean anyAuthenticatedUserRoleMapped = false;
    
    public static void onFactoryCreated() {
        tryInitGeronimo();
    }
    
    private static void tryInitGeronimo() {
        try {
            // Geronimo 3.0.1 contains a protection mechanism to ensure only a Geronimo policy provider is installed.
            // This protection can be beat by creating an instance of GeronimoPolicyConfigurationFactory once. This instance
            // will statically register itself with an internal Geronimo class
            geronimoPolicyConfigurationFactoryInstance = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory").newInstance();
            geronimoContextToRoleMapping = new ConcurrentHashMap<>();
        } catch (Exception e) {
            // ignore
        }
    }
    
    public static void onPolicyConfigurationCreated(final String contextID) {
        
        // Are we dealing with Geronimo?
        if (geronimoPolicyConfigurationFactoryInstance != null) {
            
            // PrincipalRoleConfiguration
            
            try {
                Class<?> geronimoPolicyConfigurationClass = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfiguration");
                
                Object geronimoPolicyConfigurationProxy = Proxy.newProxyInstance(TestRoleMapper.class.getClassLoader(), new Class[] {geronimoPolicyConfigurationClass}, new InvocationHandler() {
                    
                    @SuppressWarnings("unchecked")
                    @Override
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {

                        // Take special action on the following method:
                        
                        // void setPrincipalRoleMapping(Map<Principal, Set<String>> principalRoleMap) throws PolicyContextException;
                        if (method.getName().equals("setPrincipalRoleMapping")) {
                            
                            geronimoContextToRoleMapping.put(contextID, (Map<Principal, Set<String>>) args[0]);
                            
                        }
                        return null;
                    }
                });
                
                // Set the proxy on the GeronimoPolicyConfigurationFactory so it will call us back later with the role mapping via the following method:
                
                // public void setPolicyConfiguration(String contextID, GeronimoPolicyConfiguration configuration) {
                Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory")
                     .getMethod("setPolicyConfiguration", String.class, geronimoPolicyConfigurationClass)
                     .invoke(geronimoPolicyConfigurationFactoryInstance, contextID, geronimoPolicyConfigurationProxy);
                
                
            } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
                // Ignore
            }
        }
    }
    

    public TestRoleMapper(String contextID, Collection<String> allDeclaredRoles) {
        // Initialize the groupToRoles map

        // Try to get a hold of the proprietary role mapper of each known
        // AS. Sad that this is needed :(
        if (tryGlassFish(contextID, allDeclaredRoles)) {
            return;
        } else if (tryWebLogic(contextID, allDeclaredRoles)) {
            return;
        } else if (tryGeronimo(contextID, allDeclaredRoles)) {
            return;
        } else {
            oneToOneMapping = true;
        }
    }

    public List<String> getMappedRolesFromPrincipals(Principal[] principals) {
        return getMappedRolesFromPrincipals(asList(principals));
    }

    public boolean isAnyAuthenticatedUserRoleMapped() {
        return anyAuthenticatedUserRoleMapped;
    }

    public List<String> getMappedRolesFromPrincipals(Iterable<Principal> principals) {

        // Extract the list of groups from the principals. This collection typically contains
        // different kinds of principals: some represent groups, some represent other things.
        // The group principal types are unfortunately vendor specific.
        List<String> groups = getGroupsFromPrincipals(principals);

        // Map the groups to roles. E.g. map "admin" to "administrator". Some servers require this.
        return mapGroupsToRoles(groups);
    }

    private List<String> mapGroupsToRoles(List<String> groups) {

        if (oneToOneMapping) {
            // There is no mapping used, groups directly represent roles.
            return groups;
        }

        List<String> roles = new ArrayList<>();

        for (String group : groups) {
            if (groupToRoles.containsKey(group)) {
                roles.addAll(groupToRoles.get(group));
            }
        }

        return roles;
    }

    private boolean tryGlassFish(String contextID, Collection<String> allDeclaredRoles) {

        try {
            Class<?> SecurityRoleMapperFactoryClass = Class.forName("org.glassfish.deployment.common.SecurityRoleMapperFactory");

            Object factoryInstance = Class.forName("org.glassfish.internal.api.Globals")
                                          .getMethod("get", SecurityRoleMapperFactoryClass.getClass())
                                          .invoke(null, SecurityRoleMapperFactoryClass);

            Object securityRoleMapperInstance = SecurityRoleMapperFactoryClass.getMethod("getRoleMapper", String.class)
                                                                              .invoke(factoryInstance, contextID);

            @SuppressWarnings("unchecked")
            Map<String, Subject> roleToSubjectMap = (Map<String, Subject>) Class.forName("org.glassfish.deployment.common.SecurityRoleMapper")
                                                                                .getMethod("getRoleToSubjectMapping")
                                                                                .invoke(securityRoleMapperInstance);

            for (String role : allDeclaredRoles) {
                if (roleToSubjectMap.containsKey(role)) {
                    Set<Principal> principals = roleToSubjectMap.get(role).getPrincipals();

                    List<String> groups = getGroupsFromPrincipals(principals);
                    for (String group : groups) {
                        if (!groupToRoles.containsKey(group)) {
                            groupToRoles.put(group, new ArrayList<String>());
                        }
                        groupToRoles.get(group).add(role);
                    }

                    if ("**".equals(role) && !groups.isEmpty()) {
                        // JACC spec 3.2 states:
                        //
                        // "For the any "authenticated user role", "**", and unless an application specific mapping has
                        // been established for this role,
                        // the provider must ensure that all permissions added to the role are granted to any
                        // authenticated user."
                        //
                        // Here we check for the "unless" part mentioned above. If we're dealing with the "**" role here
                        // and groups is not
                        // empty, then there's an application specific mapping and "**" maps only to those groups, not
                        // to any authenticated user.
                        anyAuthenticatedUserRoleMapped = true;
                    }
                }
            }

            return true;

        } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
                | InvocationTargetException e) {
            return false;
        }
    }

    private boolean tryWebLogic(String contextID, Collection<String> allDeclaredRoles) {

        try {

            // See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html
            Class<?> roleMapperFactoryClass = Class.forName("weblogic.security.jacc.RoleMapperFactory");

            // RoleMapperFactory implementation class always seems to be the value of what is passed on the commandline
            // via the -Dweblogic.security.jacc.RoleMapperFactory.provider option.
            // See http://docs.oracle.com/cd/E57014_01/wls/SCPRG/server_prot.htm
            Object roleMapperFactoryInstance = roleMapperFactoryClass.getMethod("getRoleMapperFactory")
                                                                     .invoke(null);

            // See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html#getRoleMapperForContextID(java.lang.String)
            Object roleMapperInstance = roleMapperFactoryClass.getMethod("getRoleMapperForContextID", String.class)
                                                              .invoke(roleMapperFactoryInstance, contextID);

            // This seems really awkward; the Map contains BOTH group names and user names, without ANY way to
            // distinguish between the two.
            // If a user now has a name that happens to be a group as well, we have an issue :X
            @SuppressWarnings("unchecked")
            Map<String, String[]> roleToPrincipalNamesMap = (Map<String, String[]>) Class.forName("weblogic.security.jacc.simpleprovider.RoleMapperImpl")
                                                                                         .getMethod("getRolesToPrincipalNames")
                                                                                         .invoke(roleMapperInstance);

            for (String role : allDeclaredRoles) {
                if (roleToPrincipalNamesMap.containsKey(role)) {

                    List<String> groupsOrUserNames = asList(roleToPrincipalNamesMap.get(role));

                    for (String groupOrUserName : groupsOrUserNames) {
                        // Ignore the fact that the collection also contains user names and hope
                        // that there are no user names in the application with the same name as a group
                        if (!groupToRoles.containsKey(groupOrUserName)) {
                            groupToRoles.put(groupOrUserName, new ArrayList<String>());
                        }
                        groupToRoles.get(groupOrUserName).add(role);
                    }

                    if ("**".equals(role) && !groupsOrUserNames.isEmpty()) {
                        // JACC spec 3.2 states: [...]
                        anyAuthenticatedUserRoleMapped = true;
                    }
                }
            }

            return true;

        } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
                | InvocationTargetException e) {
            return false;
        }
    }
    
    private boolean tryGeronimo(String contextID, Collection<String> allDeclaredRoles) {
        if (geronimoContextToRoleMapping != null) {
            
            if (geronimoContextToRoleMapping.containsKey(contextID)) {
                Map<Principal, Set<String>> principalsToRoles = geronimoContextToRoleMapping.get(contextID);
                
                for (Map.Entry<Principal, Set<String>> entry : principalsToRoles.entrySet()) {
                    
                    // Convert the principal that's used as the key in the Map to a list of zero or more groups.
                    // (for Geronimo we know that using the default role mapper it's always zero or one group)
                    for (String group : principalToGroups(entry.getKey())) {
                        if (!groupToRoles.containsKey(group)) {
                            groupToRoles.put(group, new ArrayList<String>());
                        }
                        groupToRoles.get(group).addAll(entry.getValue());
                        
                        if (entry.getValue().contains("**")) {
                            // JACC spec 3.2 states: [...]
                            anyAuthenticatedUserRoleMapped = true;
                        }
                    }
                }
            }
            
            return true;
        }
        
        return false;
    }

    /**
     * Extracts the groups from the vendor specific principals. SAD that this is needed :(
     * 
     * @param principals the collection of principals, typically obtained from a Subject
     * @return a list of the (unmapped) groups found among the given principals
     */
    public List<String> getGroupsFromPrincipals(Iterable<Principal> principals) {
        List<String> groups = new ArrayList<>();

        for (Principal principal : principals) {
            if (principalToGroups(principal, groups)) {
                // return value of true means we're done early. This can be used
                // when we know there's only 1 principal holding all the groups
                return groups;
            }
        }

        return groups;
    }
    
    public List<String> principalToGroups(Principal principal) {
        List<String> groups = new ArrayList<>();
        principalToGroups(principal, groups);
        return groups;
    }
    
    public boolean principalToGroups(Principal principal, List<String> groups) {
        switch (principal.getClass().getName()) {

            case "org.glassfish.security.common.Group": // GlassFish
            case "org.apache.geronimo.security.realm.providers.GeronimoGroupPrincipal": // Geronimo
            case "weblogic.security.principal.WLSGroupImpl": // WebLogic
            case "jeus.security.resource.GroupPrincipalImpl": // JEUS
                groups.add(principal.getName());
                break;
    
            case "org.jboss.security.SimpleGroup": // JBoss
                if (principal.getName().equals("Roles") && principal instanceof Group) {
                    Group rolesGroup = (Group) principal;
                    for (Principal groupPrincipal : list(rolesGroup.members())) {
                        groups.add(groupPrincipal.getName());
                    }
    
                    // Should only be one group holding the roles, so can exit the loop
                    // early
                    return true;
                }
                break;
        }

        return false;
    }

}

Server mapping overview

Each server essentially provides the same core data; a role to group mapping, but each server puts this data in a different format. The table below summarizes this:

Role to group format per server:

  • GlassFish 4.1: Map<String, Subject>
    Key: role name. Value: Subject containing Principals representing groups and users (different class type for each).
  • WebLogic 12.1.3: Map<String, String[]>
    Key: role name. Value: groups and user names (impossible to distinguish which is which).
  • Geronimo 3.0.1: Map<Principal, Set<String>>
    Key: Principal representing a group or user (different class type for each). Value: role names.

As we can see above, GlassFish and WebLogic both use a "role name to groups and users" format. In the case of GlassFish the groups and users are for some reason wrapped in a Subject; a Map<String, Set<Principal>> would perhaps have been more logical here. WebLogic unfortunately uses a plain String to represent both group and user names, meaning there's no way to know whether a given name represents a group or a user. One can only guess at what the idea behind this design decision must have been.

Geronimo, finally, does the mapping exactly the other way around; it has a "group or user to role names" format. After all the insanity we saw with Geronimo earlier, this actually is a fairly sane mapping.

Conclusion

As we saw, obtaining the container specific role mapping for a universal JACC provider is no easy feat. Finding out how to deploy a JACC provider also appeared to be surprisingly difficult, and in the case of Geronimo even nearly impossible. It's hard to say what can be done to improve this. Should JACC define an extra standardized property via which one can provide the path to a jar file? E.g. something like
-Djavax.security.jacc.provider.jar=/usr/lib/myprovider.jar
At least for testing, and probably for regular usage as well, it would be extremely convenient if JACC providers could additionally be registered from within an application archive.

Arjan Tijms

Sunday, December 28, 2014

Java EE authorization - JACC revisited part I

A while ago we took a look at container authorization in Java EE, which we saw was taken care of by a specification called JACC.

We saw that JACC offered a clear standardized hook into what's often seen as a completely opaque and container specific process, but that it also had a number of disadvantages. Furthermore we provided a partial (non-working) implementation of a JACC provider to illustrate the idea.

In this part of the article we'll revisit JACC by taking a closer look at some of the mentioned disadvantages and dive a little deeper in the concept of role mapping. In part II we'll be looking at the first element of a more complete implementation of the JACC provider that was shown before.

To refresh our memory, the following were the disadvantages that we previously discovered:

  • Arcane & verbose API
  • No portable way to see what the groups/roles are in a collection of Principals
  • No portable way to use the container's role to group mapper
  • No default implementation of a JACC provider active or even available
  • Mixing Java SE and EE permissions (which protect against totally different things) when security manager is used
  • JACC provider has to be installed for the entire AS; can not be registered from or for a single application

As it later on appeared though, there's a little more to say about a few of these items.

Role mapping

While it's indeed the case that there's no portable way to get to either the groups or the container's role to group mapper, it appeared there was something called the primary use case for which JACC was originally conceived.

For this primary use case the idea was that a custom JACC provider would be coupled with a (custom) authentication module that only provided a caller principal (containing the user name). The JACC provider would then contact an (external) authorization system to fetch authorization data based on this single caller principal. This authorization data can be a collection of roles, anything that the JACC provider can locally map to roles, or something to which it can map the permissions that a PolicyConfiguration initially collects. For this use case it's indeed not necessary to have portable access to groups or to a role to groups mapper.

Building on this primary use case, it also appears that JASPIC auth modules in fact do have a means to put a specific implementation of a caller principal into the subject. JASPIC being JASPIC, with its bare minimum of TCK tests, this of course didn't work on all containers, and there's still a gap where the container is allowed to "map" that principal (whatever that means), but the basic idea is there. A JACC provider that knows about the auth module being used can then unambiguously pick out the caller principal from the set of principals in a subject. All of this would be so much simpler though if the caller principal had simply been standardized in the first place, but alas.

To illustrate the basic process for a custom JACC provider according to this primary use case:

Auth module  ——provides——►  Caller Principal (name = "someuser")

JACC provider  ——contacts—with—"someuser"——►  Authorization System

Authorization System  ——returns——►  roles ["admin", "architect"] 

JACC provider  ——indexes—with—"admin"——►  rolesToPermissions
JACC provider  ——indexes—with—"architect"——►  rolesToPermissions

As can be seen above there is no need for role mapping in this primary use case.
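The indexing step from this flow can be sketched in plain Java. The sketch below is purely illustrative: PrimaryUseCaseSketch and its stubbed external authorization lookup are invented names, not part of the JACC API; a real provider would collect permissions via PolicyConfiguration and resolve roles by actually contacting the external system.

```java
import java.security.Permission;
import java.security.Permissions;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of the "primary use case": roles fetched for the caller
// principal from a (here stubbed) external authorization system are used to
// index the roles-to-permissions collection. Not an actual JACC provider.
public class PrimaryUseCaseSketch {

    // Permissions per role, as a real provider would collect them via
    // PolicyConfiguration.addToRole()
    private final Map<String, Permissions> rolesToPermissions = new HashMap<>();

    // Stub for the external authorization system: caller name -> roles
    private final Map<String, Set<String>> externalUserToRoles = new HashMap<>();

    public void addToRole(String role, Permission permission) {
        rolesToPermissions.computeIfAbsent(role, r -> new Permissions()).add(permission);
    }

    public void registerExternalRoles(String callerName, Set<String> roles) {
        externalUserToRoles.put(callerName, roles);
    }

    // The access decision: index rolesToPermissions with each role that the
    // external system returned for the caller principal's name
    public boolean implies(String callerName, Permission permission) {
        for (String role : externalUserToRoles.getOrDefault(callerName, Collections.<String>emptySet())) {
            Permissions permissions = rolesToPermissions.get(role);
            if (permissions != null && permissions.implies(permission)) {
                return true;
            }
        }
        return false;
    }
}
```

Note that groups never appear here: the only principal the sketch cares about is the caller, which is exactly why the primary use case doesn't need the role mapper.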

For the default implementation of a proprietary JACC provider that ships with a Java EE container the basic process is a little bit different as shown next:

Role to group mapping in place
Role           Groups
"admin"        ["admin-group"]
"architect"    ["architect-group"]
"expert"       ["expert-group"]

JACC provider  ——calls—with—["admin", "architect", "expert"]——►  Role Mapper

Role mapper  ——returns——►  ["admin-group", "architect-group", "expert-group"]

Auth module  ——provides——►  Caller Principal (name = "someuser")
Auth module  ——provides——►  Group Principal (name = "admin-group", name = "architect-group")

JACC provider maps "admin-group" to "admin"
JACC provider maps "architect-group" to "architect"

JACC provider  ——indexes—with—"admin"——►  rolesToPermissions
JACC provider  ——indexes—with—"architect"——►  rolesToPermissions

In the second use case the role mapper and possibly knowledge of which principals represent groups is needed, but since this JACC provider is the one that ships with a Java EE container it's arguably "allowed" to use proprietary techniques.

Do note that the mapping technique shown maps a subject's groups to roles, and uses that to check permissions. While this may conceptually be the most straightforward approach, it's not the only way.

Groups to permission mapping

An alternative approach is to remap the roles-to-permission collection to a groups-to-permission collection using the information from the role mapper. This is what both GlassFish and WebLogic implicitly do when they write out their granted.policy file.

The following is an illustration of this process. Suppose we have a role to permissions map as shown in the following table:

Role-to-permissions
Role       Permissions
"admin"    [WebResourcePermission("/protected/*" GET)]

This means a user that's in the logical application role "admin" is allowed to do a GET request for resources in the /protected folder. Now suppose the role mapper gave us the following role to group mapping:

Role-to-groups
Role       Groups
"admin"    ["admin-group", "adm"]

This means the logical application role "admin" is mapped to the groups "admin-group" and "adm". What we can now do is first reverse the last mapping into a group-to-roles map as shown in the following table:

Group-to-roles
Group            Roles
"admin-group"    ["admin"]
"adm"            ["admin"]

Subsequently we can then iterate over this new map and look up the permissions associated with each role in the existing role to permissions map to create our target group to permissions map. This is shown in the table below:

Group-to-permissions
Group            Permissions
"admin-group"    [WebResourcePermission("/protected/*" GET)]
"adm"            [WebResourcePermission("/protected/*" GET)]

Finally, consider a current subject with principals as shown in the next table:

Subject's principals
Type                                  Name
com.somevendor.CallerPrincipalImpl    "someuser"
com.somevendor.GroupPrincipalImpl     "admin-group"
com.somevendor.GroupPrincipalImpl     "architect-group"

Given the above shown group to permissions map and subject's principals, a JACC provider can now iterate over the group principals that belong to this subject and via the map check each such group against the permissions for that group. Note that the JACC provider does have to know that com.somevendor.GroupPrincipalImpl is the principal type that represents groups.

Principal to permission mapping

Yet another alternative approach is to remap the roles-to-permission collection to a principals-to-permission collection, again using the information from the role mapper. This is what both Geronimo and GlassFish's optional SimplePolicyProvider do.

Principal to permission mapping basically works like group to permission mapping, except that the JACC provider doesn't need to have knowledge of the principals involved. For the JACC provider those principals are pretty much opaque then, and it doesn't matter if they represent groups, callers, or something else entirely. All the JACC provider does is compare (using equals() or implies()) principals in the map against those in the subject.

The following code fragment taken from Geronimo 3.0.1 demonstrates the mapping algorithm:

for (Map.Entry<Principal, Set<String>> principalEntry : principalRoleMapping.entrySet()) {
    Principal principal = principalEntry.getKey();
    Permissions principalPermissions = principalPermissionsMap.get(principal);

    if (principalPermissions == null) {
        principalPermissions = new Permissions();
        principalPermissionsMap.put(principal, principalPermissions);
    }

    Set<String> roleSet = principalEntry.getValue();
    for (String role : roleSet) {
        Permissions permissions = rolePermissionsMap.get(role);
        if (permissions == null) continue;
        for (Enumeration<Permission> rolePermissions = permissions.elements(); rolePermissions.hasMoreElements();) {
            principalPermissions.add(rolePermissions.nextElement());
        }
    }

}

In the code fragment above, rolePermissionsMap is the map the provider created before the mapping, principalRoleMapping is the mapping from the role mapper, and principalPermissionsMap is the final map that's used for access decisions.
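
The access decision that follows this remapping could then look like the sketch below. This is not Geronimo's actual checking code; the class name is invented, and the lookup uses plain equals() semantics (a real provider might additionally support implies() on the principals):

```java
import java.security.Permission;
import java.security.Permissions;
import java.security.Principal;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of a principal-to-permissions access decision: the
// provider treats the principals as opaque and simply looks each subject
// principal up in the map that was built during the remapping
public class PrincipalPermissionChecker {

    private final Map<Principal, Permissions> principalPermissionsMap;

    public PrincipalPermissionChecker(Map<Principal, Permissions> principalPermissionsMap) {
        this.principalPermissionsMap = principalPermissionsMap;
    }

    public boolean implies(Set<Principal> subjectPrincipals, Permission permission) {
        for (Principal principal : subjectPrincipals) {
            // equals()-based lookup; no knowledge needed of whether the
            // principal represents a group, a caller, or something else
            Permissions permissions = principalPermissionsMap.get(principal);
            if (permissions != null && permissions.implies(permission)) {
                return true;
            }
        }
        return false;
    }
}
```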

Default JACC provider

Several full Java EE implementations do not ship with an activated JACC provider, which makes it extremely troublesome for portable Java EE applications to just make use of JACC for things like asking if a user will be allowed to access say a URL.

As it appears, Java EE implementations are actually required to ship with an activated JACC provider and are even required to use it for access decisions. Clearly there's no TCK test for this, so just as we saw with JASPIC, vendors take different approaches in the absence of such a test. In the end it doesn't matter so much what the spec says, as the TCK has the final word on compatibility certification. In this case the TCK clearly says it's NOT required, while as mentioned the spec says it is. Why both JASPIC and JACC have historically been tested so little is still not entirely clear, but I have it on good authority (no pun ;)) that the situation is going to be improved.

So while this is theoretically not a spec issue, it is still very much a practical issue. I looked at 6 Java EE implementations and found the following:

JACC default providers
Server            JACC provider present   JACC provider activated   Vendor discourages activating JACC
JBoss EAP 6.3     Yes                     Yes                       No
GlassFish 4.1     Yes                     Yes                       No
Geronimo 3.0.1    Yes                     Yes                       No
WebLogic 12.1.3   Yes                     No                        Yes
JEUS 8 preview    Yes                     No                        Yes
WebSphere 8.5     No                      No                        - (no provider present, so nothing to discourage)

As can be seen, only half of the servers investigated actually have JACC enabled. WebLogic 12.1.3 and the JEUS 8 preview both ship with a JACC policy provider, but it has to be enabled explicitly. Both WebLogic and JEUS 8 somewhat advise against using JACC in their documentation. TmaxSoft warns in its JEUS 7 security manual (there isn't one for JEUS 8 yet) that the default JACC provider that will be activated is mainly meant for testing, and advises against using it in production.

WebSphere does not even ship with any default JACC policy provider, at least not that I could find. There's only a Tivoli Access Manager client, for which you have to install a separate external authorization server.

I haven't yet investigated Interstage AS, Cosminexus and WebOTX, but I hope to be able to look at them at a later stage.

Conclusion

Given the historical background of JACC it's a little more understandable why access to the role mapper was never standardized. Still, it is something that's needed for use cases other than the historical primary one, so even after all this time it would be welcome to have. Another huge disadvantage of JACC, the fact that it's simply not always there in Java EE, appeared to be yet another case of incomplete TCK coverage.

Continue reading at part II.

Arjan Tijms

Tuesday, November 25, 2014

JSF and MVC 1.0, a comparison in code

One of the new specs that will debut in Java EE 8 will be MVC 1.0, a second MVC framework alongside the existing MVC framework JSF.

A lot has been written about this. Discussions have mostly been about the why, whether it isn't introduced too late in the game, and what the advantages (if any) over JSF exactly are. Among the advantages initially mentioned were the ability to use different templating engines, better performance, and the ability to be stateless. Discussions have furthermore also been about the name of this new framework.

This name can be somewhat confusing. Namely, using the term MVC to contrast with JSF is perhaps technically not entirely accurate, as both are MVC frameworks. The flavor of MVC intended to be implemented by MVC 1.0 is actually "action-based MVC", best known among Java developers as "MVC the way Spring MVC implements it". The flavor of MVC that JSF implements is "component-based MVC". Alternative terms for these are MVC-push and MVC-pull.

One can argue that JSF has been moving to a more hybrid model since 2.0; view parameters, the PreRenderView event and view actions have been key elements of this, and the best practice of having a single backing bean back a single view, along with things like injectable request parameters and eager request scoped beans, has contributed as well. The discussion of component-based MVC vs action-based MVC is therefore a little less black and white than it may initially seem, but at its core JSF clearly remains a component-based MVC framework.

When people took a closer look at the advantages mentioned above, it quickly became clear they weren't quite specific to action-based MVC. JSF most definitely supports additional templating engines; there's a specific plug-in mechanism for that called the VDL (View Declaration Language). Stacked up against an action-based MVC framework, JSF actually performs rather well, and of course JSF can be used stateless.

So the official motivation for introducing a second MVC framework in Java EE is largely not about a specific advantage that MVC 1.0 will bring to the table, but first and foremost about having a "different" approach. Depending on one's use case, either one of the approaches can be better, or suit one's mental model (perhaps based on experience) better, but very few claims are made about which approach is actually better.

Here we're also not going to investigate which approach is better, but will take a closer look at two actual code examples where the same functionality is implemented by both MVC 1.0 and JSF. Since MVC 1.0 is still in its early stages I took code examples from Spring MVC instead. It's expected that MVC 1.0 will be rather close to Spring MVC, not as to the actual APIs and plumbing used, but with regard to the overall approach and idea.

As I'm not a Spring MVC user myself, I took the examples from a Reddit discussion about this very topic. They are shown and discussed below:

CRUD

The first example is about a typical CRUD use case. The Spring controller is given first, followed by a backing bean in JSF.

Spring MVC

@Named
@RequestMapping("/appointments")
public class AppointmentsController {

    @Inject
    private AppointmentBook appointmentBook;

    @RequestMapping(value="/new", method = RequestMethod.GET)
    public String getNewForm(Model model) {
        model.addAttribute("appointment", new Appointment());
        return "appointment-edit";
    }

    @RequestMapping(value="/new", method = RequestMethod.POST)
    public String add(@Valid Appointment appointment, BindingResult result, RedirectAttributes redirectAttributes) {
        if (result.hasErrors()) {
            return "appointments/new";
        }
        appointmentBook.addAppointment(appointment);
        redirectAttributes.addFlashAttribute("message", "Successfully added " + appointment.getTitle());

        return "redirect:/appointments";
    }

}

JSF

@Named
@ViewScoped
public class NewAppointmentsBacking {

    @Inject
    private AppointmentBook appointmentBook;

    private Appointment appointment = new Appointment();

    public Appointment getAppointment() {
         return appointment;
    }

    public String add() {
        appointmentBook.addAppointment(appointment);
        addFlashMessage("Successfully added " + appointment.getTitle());

        return "/appointments?faces-redirect=true";
    }
}

As can be seen from the two code examples, there are at a first glance quite a number of similarities. However there are also a number of fundamental differences that are perhaps not immediately obvious.

Starting with the similarities, both versions are @Named and have the same service injected via the same @Inject annotation. When a URL is requested (via a GET) then in both versions there's a new Appointment instantiated. In the Spring version this happens in getNewForm(), in the JSF version this happens via the instance field initializer. Both versions subsequently make this instance available to the view. In the Spring MVC version this happens by setting it as an attribute of the model object that's passed in, while in the JSF version this happens via a getter.

The view typically contains a form where a user is supposed to edit various properties of the Appointment shown above. When this form is posted back to the server, in both versions an add() method is called where the (edited) Appointment instance is saved via the service that was previously injected and a flash message is set.

Finally both versions return an outcome that redirects the user to a new page (PRG pattern). Spring MVC uses the syntax "redirect:/appointments" for this, while JSF uses "/appointments?faces-redirect=true" to express the same thing.

Despite the large number of similarities as observed above, there is a big fundamental difference between the two; the class shown for Spring MVC represents a controller. It's mapped directly to a URL and it's pretty much the first thing that is invoked. All of the above runs without having determined what the view will be. Values computed here will be stored in a contextual object and a view is selected. We can think of this store as pushing values (the view didn't ask for it, since it's not even selected at this point). Hence the alternative name "MVC push" for this approach.

The class shown for the JSF example is NOT a controller. In JSF the controller is provided by the framework. It selects a view based on the incoming URL and the outcome of a ResourceHandler. This will cause a view to execute, and as part of that execution a (backing) bean at some point will be pulled in. Only after this pull has been done will the logic of the class in question start executing. Because of this the alternative name for this approach is "MVC pull".

Over to the concrete differences; in the Spring MVC sample instantiating the Appointment had to be explicitly mapped to a URL and the view to be rendered afterwards is explicitly defined. In the JSF version, both URL and view are defaulted; it's the view from which the bean is pulled. A backing bean can override the default view to be rendered by using the aforementioned view action. This gives it some of the "feel" of a controller, but doesn't change the fundamental fact that the backing bean had to be pulled into scope by the initial view first (things like @Eager in OmniFaces do blur the lines further by instantiating beans before a view pulls them in).
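As a brief sketch of such an override (the bean and method names here are hypothetical, not taken from the examples above), a view action is declared in the view's metadata and invokes a backing bean method whose outcome can navigate away from the default view:

```xml
<f:metadata>
    <!-- Invoked after the view has pulled the backing bean in;
         a non-null outcome overrides the view to be rendered -->
    <f:viewAction action="#{appointmentBacking.redirectIfExpired}" />
</f:metadata>
```

Note that this still happens only after the initial view has been selected and has pulled the bean in, which is what distinguishes it from a true controller.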

The post back case shows something similar. In the Spring version the add() method is explicitly mapped to a URL, while in the JSF version it corresponds to an action method of the view that pulled the bean in.

There's another difference with respect to validation. In the Spring MVC example there's an explicit check to see if validation has failed and an explicit selection of a view to display errors. In this case that view is the same one again ("appointments/new"), but it's still provided explicitly. In the JSF example there's no explicit check. Instead, the code relies on the default of staying on the same view and not invoking the action method. In effect, the exact same thing happens in both cases but the mindset to get there is different.

Dynamically loading images

The second example is about a case where a list of images is rendered first and where subsequently the content of those images is dynamically provided by the beans in question. The Spring code is again given first, followed by the JSF code.

Spring MVC

<c:forEach items="${thumbnails}" var="thumbnail">
    <div>
        <div class="thumbnail">
            <img src="/thumbnails/${thumbnail.id}" />
        </div>
        <c:out value="${thumbnail.caption}" />
    </div>
</c:forEach>
@Controller
public class ThumbnailsController {

    @Inject
    private ThumbnailsDAO thumbnailsDAO;

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public ModelAndView images() {
        ModelAndView mv = new ModelAndView("images");
        mv.addObject("thumbnails", thumbnailsDAO.getThumbnails());
        return mv;
    }

    @RequestMapping(value = "/thumbnails/{id}", method = RequestMethod.GET, produces = "image/jpeg")
    public @ResponseBody byte[] thumbnail(@PathVariable("id") long id) {
        return thumbnailsDAO.getThumbnail(id);
    }
}

JSF

<ui:repeat value="#{thumbnails}" var="thumbnail">
    <div>
        <div class="thumbnail">
            <o:graphicImage value="#{thumbnailsBacking.thumbnail(thumbnail.id)}" />
        </div>
        #{thumbnail.caption}
    </div>
</ui:repeat>
@Model
public class ThumbnailsBacking {

    @Inject
    private ThumbnailsDAO thumbnailsDAO;

    @Produces @RequestScoped @Named("thumbnails") 
    public List<Thumbnail> getThumbnails() {
        return thumbnailsDAO.getThumbnails();
    }

    public byte[] thumbnail(Long id) {
        return thumbnailsDAO.getThumbnail(id);
    }
}

Starting with the similarities again, we see that the markup for both views is fairly similar in structure. Both have an iteration tag that takes values from an input list called thumbnails and during each round of the iteration the ID of each individual thumbnail is used to render an image link.

Both the classes for Spring MVC and JSF call getThumbnails() on the injected DAO for the initial GET request, and both have a nearly identical thumbnail() method where getThumbnail(id) is called on the DAO in response to each request for a dynamic image that was rendered before.

Both versions also show that each framework has an alternative way to do what they do. In the Spring MVC example we see that instead of having a Model passed in and returning a String based outcome, there's an alternative version that uses a ModelAndView instance, where the outcome is set on this object.

In the JSF version we see that instead of having an instance field + getter, there's an alternative version based on a producer. In that variant the data is made available under the EL name "thumbnails", just as in the Spring MVC version.

On to the differences, we see that the Spring MVC version is again using explicit URLs. The otherwise identical thumbnail() method has an extra annotation for specifying the URL to which it's mapped. This very URL is the one that's used in the img tag in the view. JSF on the other hand doesn't ask to map the method to a URL. Instead, there's an EL expression used to point directly to the method that delivers the image content. The component (o:graphicImage here) then generates the URL.

While the producer method that we showed in the JSF example (getThumbnails()) looked like JSF was declaratively pushing a value, it's in fact still a pull. The method will not be called, and therefore a value not produced, until the EL variable "thumbnails" is resolved for the first time.

Another difference is that the view in the JSF example contains two components (ui:repeat and o:graphicImage) that adhere to JSF's component model, and that the view uses a templating language (Facelets) that is part of the JSF spec itself. Spring MVC (of course) doesn't specify a component model, and while it could theoretically come with its own templating language it doesn't have that one either. Instead, Spring MVC relies on external templating systems, e.g. JSP or Thymeleaf.

Finally, a remarkable difference is that the two very similar classes ThumbnailsController and ThumbnailsBacking are annotated with @Controller and @Model respectively, two completely opposite responsibilities of the MVC pattern. Indeed, in JSF everything that's referenced by the view (via EL expressions) is officially called the model. ThumbnailsBacking is, from JSF's point of view, the model. In practice the lines are a bit more blurred, and the backing bean is more akin to a plumbing component that sits between the model, view and controller.

Conclusion

We haven't gone in-depth into what it means to have a component model and what advantages that has, nor have we discussed in any detail what a RESTful architecture brings to the table. In passing we mentioned the concept of state, but did not look at that either. Instead, we mainly focused on code examples for two different use cases and compared and contrasted these. In that comparison we tried as much as possible to refrain from any judgement about which approach is better, component based MVC or action-oriented MVC (as I'm one of the authors of the JSF utility library OmniFaces and a member of the JSF EG, such a judgement would always be biased of course).

We saw that while the code examples at first glance have remarkable similarities there are in fact deep fundamental differences between the two approaches. It's an open question whether the future is with either one of those two, with a hybrid approach of them, or with both living next to each other. Java EE 8 at least will opt for that last option and will have both a component based MVC framework and an action-oriented one.

Arjan Tijms

Monday, November 24, 2014

OmniFaces 2.0 released!

After a poll regarding the future dependencies of OmniFaces 2.0 and two release candidates we're proud to announce that today we've finally released OmniFaces 2.0.

OmniFaces 2.0 is a direct continuation of OmniFaces 1.x, but has started to build on newer dependencies. We also took the opportunity to do a little refactoring here and there (specifically noticeable in the Events class).

The easiest way to use OmniFaces is via Maven by adding the following to pom.xml:

<dependency>
    <groupId>org.omnifaces</groupId>
    <artifactId>omnifaces</artifactId>
    <version>2.0</version>
</dependency>

A detailed description of the biggest items of this release can be found on the blog of BalusC.

One particular new feature not mentioned there is a new capability that has been added to <o:validateBean>: class level bean validation. While JSF core and OmniFaces both have had a validateBean for some time, one thing it curiously did not do despite its name is actually validate a bean. Instead, those existing versions just controlled various aspects of bean validation. Bean validation itself was then only applied to individual properties of a bean, namely those that were bound to input components.

With OmniFaces 2.0 it's now possible to specify that a bean should be validated at the class level. The following gives an example of this:

<h:inputText value="#{bean.product.item}" />
<h:inputText value="#{bean.product.order}" />
 
<o:validateBean value="#{bean.product}" validationGroups="com.example.MyGroup" />

Using the existing bean validation integration of JSF, only product.item and product.order can be validated, since these are the properties that are directly bound to an input component. Using <o:validateBean> the product itself can be validated as well, and this will happen at the right place in the JSF lifecycle. The right place in the lifecycle means that it will be in the "process validations" phase. True to the way JSF works, if validation fails the actual model will not be updated. In order to prevent this update, class level bean validation will be performed on a copy of the actual product (with a plug-in structure to choose between multiple ways to copy the model object).

More information about this class level bean validation can be found on the associated showcase page. A complete overview of all that's new can be found on the what's new page.

Arjan Tijms

Thursday, November 20, 2014

OmniFaces 2.0 RC2 available for testing

After an intense debugging session following the release of OmniFaces 2.0, we have decided to release one more release candidate; OmniFaces 2.0 RC2.

For RC2 we mostly focused on TomEE 2.0 compatibility. Even though TomEE 2.0 is only available in a SNAPSHOT release, we're happy to see that it passed almost all of our tests and was able to run our showcase application just fine. The only place where it failed was with the viewParamValidationFailed page, but this appeared to be an issue in MyFaces and unrelated to TomEE itself.

To repeat from the RC1 announcement: OmniFaces 2.0 is the first release that will depend on JSF 2.2 and CDI 1.1 from Java EE 7. Our Servlet dependency is now Servlet 3.0 from Java EE 6 (used to be 2.5, although we optionally used 3.0 features before). The minimal Java SE version is now Java 7.

A full list of what's new and changed is available here.

OmniFaces 2.0 RC2 can be tested by adding the following dependency to your pom.xml:

<dependency>
    <groupId>org.omnifaces</groupId>
    <artifactId>omnifaces</artifactId>
    <version>2.0-RC2</version>
</dependency>

Alternatively the jar files can be downloaded directly.

We're currently investigating one last issue, if that's resolved and no other major bugs appear we'd like to release OmniFaces 2.0 at the end of this week.

Arjan Tijms

Sunday, November 16, 2014

Header based stateless token authentication for JAX-RS

Authentication is a topic that comes up often for web applications. The Java EE spec supports authentication for those via the Servlet and JASPIC specs, but doesn't say too much about how to authenticate for JAX-RS.

Luckily JAX-RS is simply layered on top of Servlets, and one can therefore just use JASPIC's authentication modules for the Servlet Container Profile. There's thus not really a need for a separate REST profile, as there is for SOAP web services.

While using the same basic technologies as authentication modules for web applications, the requirements for modules that are to be used for JAX-RS are a bit different.

JAX-RS is often used to implement an API that is used by scripts. Such scripts typically do not engage in an authentication dialog with the server, i.e. it's rare for an API to redirect to a form asking for credentials, let alone asking to log in with a social provider.

An even more fundamental difference is that in web apps it's commonplace to establish a session for, among other things, authentication purposes. While possible to do this for JAX-RS as well, it's not exactly a best practice. RESTful APIs are supposed to be fully stateless.

To prevent the need for going into an arbitrary authentication dialog with the server, it's typical for scripts to send their credentials upfront with a request. For this BASIC authentication can be used, which does actually initiate a dialog, albeit a standardised one. Another option is to provide a token as either a request parameter or as an HTTP header. It should go without saying that in both these cases all communication should be done exclusively via HTTPS.
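As a small illustration of the first option, a script can pre-emptively construct the BASIC Authorization header itself (the class and method names here are just for illustration and not part of any API discussed in this article):

```java
import java.util.Base64;

public class BasicAuthHeader {

    // BASIC authentication: "username:password", Base64 encoded,
    // sent upfront with the request so no dialog is needed
    public static String basicAuthHeader(String username, String password) {
        String credentials = username + ":" + password;
        return "Basic " + Base64.getEncoder()
                                .encodeToString(credentials.getBytes());
    }
}
```

A script would then simply send this value as the Authorization header with every request, over HTTPS.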

Preventing a session from being created can be done in several ways as well. One way is to store the authentication data in an encrypted cookie instead of storing that data in the HTTP session. While this surely works, it does feel somewhat weird to "blindly" accept the authenticated identity from what the client provides. If the encryption is strong enough it *should* be okayish, but still. Another method is to quite simply authenticate again with each request. This however has its own problem, namely the potential for bad performance. An in-memory user store will likely be very fast to authenticate against, but anything involving an external system like a database or LDAP server probably is not.

The performance problem of authenticating with each request can be mitigated though by using an authentication cache. The question is then whether this isn't really the same as creating a session?

While both an (http) session and a cache consume memory at the server, a major difference between the two is that a session is a store for all kinds of data, which includes state, but a cache is only about data locality. A cache is thus by definition never the primary source of data.

What this means is that we can throw data away from a cache at arbitrary times, and the client won't know the difference except for the fact that its next request may be somewhat slower. We can't really do that with session data. Setting a hard limit on the size is thus a lot easier for a cache than it is for a session, and it's not mandatory to replicate a cache across a cluster.
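A hard size limit of this kind can be sketched with nothing more than a LinkedHashMap in access order. This is purely an illustration of the principle, not how any of the caching libraries mentioned in this article implement it:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A tiny bounded cache: once the limit is reached, the least recently
// used entry is silently evicted. Clients can't tell the difference,
// except that their next lookup for an evicted key is a bit slower.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order (LRU behaviour)
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

Evicting from such a cache is always safe precisely because the cache is never the primary source of data; a request for an evicted entry simply falls through to the backing store again.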

Still, as with many things it's a trade-off; having zero data stored at the server, but having a cookie sent along with each request and needing to decrypt it every time (which for strong encryption can be computationally expensive), or having some data at the server (in a very manageable way), but without the uneasiness of directly accepting an authenticated state from the client.

Here we'll be giving an example for a general stateless auth module that uses header based token authentication and authenticates with each request. This is combined with an application level component that processes the token and maintains a cache. The auth module is implemented using JASPIC, the Java EE standard SPI for authentication. The example uses a utility library that I'm incubating called OmniSecurity. This library is not a security framework itself, but provides several convenience utilities for the existing Java EE security APIs. (like OmniFaces does for JSF and Guava does for Java)

One caveat is that the example assumes CDI is available in an authentication module. In practice this is the case when running on JBoss, but not when running on most other servers. Another caveat is that OmniSecurity is not yet stable or complete. We're working towards a 1.0 version, but the current version 0.6-ALPHA is, as the name implies, just an alpha version.

The module itself looks as follows:

public class TokenAuthModule extends HttpServerAuthModule {
    
    private final static Pattern tokenPattern = compile("OmniLogin\\s+auth\\s*=\\s*(.*)");
    
    @Override
    public AuthStatus validateHttpRequest(HttpServletRequest request, HttpServletResponse response, HttpMsgContext httpMsgContext) throws AuthException {
        
        String token = getToken(request);
        if (!isEmpty(token)) {
            
            // If a token is present, authenticate with it whether this is strictly required or not.
            
            TokenAuthenticator tokenAuthenticator = getReferenceOrNull(TokenAuthenticator.class);
            if (tokenAuthenticator != null) {
                
                if (tokenAuthenticator.authenticate(token)) {
                    return httpMsgContext.notifyContainerAboutLogin(tokenAuthenticator.getUserName(), tokenAuthenticator.getApplicationRoles());
                }                
            }            
        }
        
        if (httpMsgContext.isProtected()) {
            return httpMsgContext.responseNotFound();
        }
        
        return httpMsgContext.doNothing();
    }
    
    private String getToken(HttpServletRequest request) { 
        String authorizationHeader = request.getHeader("Authorization");
        if (!isEmpty(authorizationHeader)) {
            
            Matcher tokenMatcher = tokenPattern.matcher(authorizationHeader);
            if (tokenMatcher.matches()) {
                return tokenMatcher.group(1);
            }
        }
        
        return null;
    }

}
Below is a quick primer on Java EE's authentication modules:
A server auth module (SAM) is not entirely unlike a servlet filter, albeit one that is called before every other filter. Just as a servlet filter, it's called with an HttpServletRequest and HttpServletResponse, is capable of including and forwarding to resources, and can wrap both the request and the response. A key difference is that it also receives an object via which it can pass a username and optionally a series of roles to the container. These will then become the authenticated identity, i.e. the username that is passed to the container here will be what HttpServletRequest.getUserPrincipal().getName() returns. Furthermore, a server auth module doesn't control the continuation of the filter chain by calling or not calling FilterChain.doFilter(), but by returning a status code.

In the example above the authentication module extracts a token from the request. If one is present, it obtains a reference to a TokenAuthenticator, which does the actual authentication of the token and provides a username and roles if the token is valid. It's not strictly necessary to have this separation and the authentication module could just as well contain all required code directly. However, just like the separation of responsibilities in MVC, it's typical in authentication to have a separation between the mechanism and the repository. The first contains the code that does interaction with the environment (aka the authentication dialog, aka authentication messaging), while the latter doesn't know anything about an environment and only keeps a collection of users and roles that are accessed via some set of credentials (e.g. username/password, keys, tokens, etc).

If the token is found to be valid, the authentication module retrieves the username and roles from the authenticator and passes these to the container. Whenever an authentication module does this, it's supposed to return the status "SUCCESS". By using the HttpMsgContext this requirement is largely made invisible; the code just returns whatever HttpMsgContext.notifyContainerAboutLogin returns.

If authentication did not happen for whatever reason, it depends on whether the resource (URL) that was accessed is protected (requires an authenticated user) or is public (does not require an authenticated user). In the first situation we always return a 404 to the client. This is a general security precaution. According to HTTP we should actually return a 403 here, but if we did, users could attempt to guess what the protected resources are. For applications where it's already clear what all the protected resources are, it would make more sense to indeed return that 403 here. If the resource is a public one, the code "does nothing". Since authentication modules in Java EE need to return something and there's no status code that indicates nothing should happen, in fact doing nothing requires a tiny bit of work. Luckily this work is largely abstracted by HttpMsgContext.doNothing().

Note that the TokenAuthModule as shown above is already implemented in the OmniSecurity library and can be used as is. The TokenAuthenticator however has to be implemented by user code. An example of an implementation is shown below:

@RequestScoped
public class APITokenAuthModule implements TokenAuthenticator {

    @Inject
    private UserService userService;

    @Inject
    private CacheManager cacheManager;
    
    private User user;

    @Override
    public boolean authenticate(String token) {
        try {
            Cache<String, User> usersCache = cacheManager.getDefaultCache();

            User cachedUser = usersCache.get(token);
            if (cachedUser != null) {
                user = cachedUser;
            } else {
                user = userService.getUserByLoginToken(token);
                usersCache.put(token, user);
            }
        } catch (InvalidCredentialsException e) {
            return false;
        }

        return true;
    }

    @Override
    public String getUserName() {
        return user == null ? null : user.getUserName();
    }

    @Override
    public List<String> getApplicationRoles() {
        return user == null ? emptyList() : user.getRoles();
    }

    // (Two empty methods omitted)
}
This TokenAuthenticator implementation is injected with both a service to obtain users from, as well as a cache instance (InfiniSpan was used here). The code simply checks if a User instance associated with a token is already in the cache, and if it's not, gets it from the service and puts it in the cache. The User instance is subsequently used to provide a user name and roles.

Installing the authentication module can be done during startup of the container via a Servlet context listener as follows:

@WebListener
public class SamRegistrationListener extends BaseServletContextListener {
 
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        Jaspic.registerServerAuthModule(new TokenAuthModule(), sce.getServletContext());
    }
}
After installing the authentication module as outlined in this article in a JAX-RS application, it can be tested as follows:
curl -vs -H "Authorization: OmniLogin auth=ABCDEFGH123" https://localhost:8080/api/foo 
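The Authorization header in the curl command above is matched by the same regular expression that TokenAuthModule uses internally. A minimal standalone version of that extraction (the TokenExtractor class name is just for this illustration) looks as follows:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenExtractor {

    // Same pattern as in TokenAuthModule: the "OmniLogin" scheme,
    // followed by "auth = <token>" with flexible whitespace
    private static final Pattern TOKEN_PATTERN =
        Pattern.compile("OmniLogin\\s+auth\\s*=\\s*(.*)");

    public static String extractToken(String authorizationHeader) {
        if (authorizationHeader != null) {
            Matcher matcher = TOKEN_PATTERN.matcher(authorizationHeader);
            if (matcher.matches()) {
                return matcher.group(1); // the raw token value
            }
        }
        return null; // no token present; module falls through to 404/doNothing
    }
}
```

For the curl command shown, this yields the token "ABCDEFGH123", which is then handed to the TokenAuthenticator.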

As shown in this article, adding an authentication module for JAX-RS that's fully stateless and doesn't store an authenticated state on the client is relatively straightforward using Java EE authentication modules. Big caveats are that the most straightforward approach uses CDI which is not always available in authentication modules (in WildFly it's available), and that the example uses the OmniSecurity library to simplify some of JASPIC's arcane native APIs, but OmniSecurity is still only in an alpha status.

Arjan Tijms