How to Load-Balance a GlassFish Cluster with the Apache Loadbalancer
Since GlassFish V1, it has been possible to front-end a GlassFish instance with Apache's httpd web server by following a few simple configuration steps, which include defining the com.sun.enterprise.web.connector.enableJK system property on the GlassFish instance and specifying the port number of the mod_jk listener on the GlassFish instance as its value. When this system property is specified, the mod_jk connector, which comes standard with GlassFish (minus the JAR files that need to be copied from a Tomcat installation as per the configuration steps referenced above), is started automatically and listens on the specified port for any traffic sent by the httpd front-end over the AJP protocol. (Note that when you follow the configuration steps referenced above, you must use the tomcat-ajp.jar from Tomcat 5.5.23. Using the tomcat-ajp.jar bundled with a more recent Tomcat release will not work.)
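For a standalone (non-clustered) instance, the setup boils down to something like the following sketch; the port number 8009 is an illustrative assumption, not mandated by the configuration steps above:

  # Enable the mod_jk connector on a standalone GlassFish instance,
  # listening for AJP traffic on port 8009 (port chosen for illustration)
  asadmin create-jvm-options "-Dcom.sun.enterprise.web.connector.enableJK=8009"

  # Restart the server so the new JVM option takes effect
  asadmin stop-domain
  asadmin start-domain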
A common use case for front-ending GlassFish with httpd is to have httpd serve any requests for static resources, while any requests for dynamic resources, such as servlets and JavaServer(TM) Pages (JSPs), are forwarded to, and handled by, the GlassFish backend instance.
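In httpd.conf, that split might look like the following sketch. The worker name worker1 and the URL patterns are illustrative assumptions (worker1 would have to be defined in workers.properties); static content is served from httpd's DocumentRoot as usual:

  # Forward JSP and servlet requests to the GlassFish backend via mod_jk;
  # everything else (images, CSS, plain HTML) is served by httpd itself
  JkMount /*.jsp worker1
  JkMount /servlet/* worker1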
However, up until now, support for Apache's httpd has been limited to a single GlassFish instance. There has been great interest on the GlassFish user forum in having an entire cluster of GlassFish instances load-balanced by Apache, which would allow users to transition from an Apache-loadbalanced cluster of Tomcat instances to an Apache-loadbalanced cluster of GlassFish instances and take advantage of the in-memory session replication feature introduced in GlassFish V2.
We have listened to the GlassFish user community and added the requested feature to the upcoming SJSAS 9.1 UR1 release: it will be possible to load-balance a cluster of GlassFish instances with Apache.
In order to support stickiness, Apache's loadbalancer relies on a jvmRoute being included in any JSESSIONID it receives. The jvmRoute, which is separated from the session id by ".", and whose value is configured via a system property of the same name, identifies the cluster instance on which the HTTP session was generated, or on which it was last resumed. This means that every GlassFish instance in a cluster that is front-ended by Apache's loadbalancer must be configured with a jvmRoute system property whose value is unique within the cluster.
For example, if an HTTP session was generated on a cluster instance whose jvmRoute system property equals instance1, the JSESSIONID returned to the client (via an HTTP cookie or URL rewriting) will contain the session id with the string .instance1 appended to it. A subsequent request intercepted by the Apache loadbalancer will include the same JSESSIONID value that was returned to the client, and from its jvmRoute suffix the Apache loadbalancer can determine the instance on which the HTTP session was last served, and direct the request to it. Should that instance have failed in the meantime, the Apache loadbalancer will select a different instance from the remaining healthy instances and fail the request over to it. For example, if the request fails over to an instance whose jvmRoute system property equals instance2, the response generated by that instance will include a JSESSIONID containing the session id with .instance2 (instead of .instance1) appended to it.
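At the HTTP level, the exchange just described might look like this (the session id value is made up for illustration):

  # Session created on instance1:
  Set-Cookie: JSESSIONID=0A1B2C3D4E5F.instance1

  # Subsequent request, routed back to instance1 by the loadbalancer:
  Cookie: JSESSIONID=0A1B2C3D4E5F.instance1

  # After instance1 fails and the request fails over to instance2:
  Set-Cookie: JSESSIONID=0A1B2C3D4E5F.instance2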
The challenge we faced when adding support for the jvmRoute feature to GlassFish was that while the Apache loadbalancer expects the jvmRoute, whose value may change over the lifetime of its associated HTTP session, to be part of the JSESSIONID, we had to shield the session management in GlassFish from the jvmRoute, in order to preserve the invariant (from the session management's perspective) that session ids are immutable and remain constant over the lifetime of a session.
We addressed this challenge by having the web container strip any jvmRoute off an incoming JSESSIONID (and use the remainder as the id of the session to be resumed), and append a jvmRoute to the session id when forming a JSESSIONID. Of course, the web container processes a JSESSIONID in this way only if the jvmRoute system property has been set.
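In other words, from the container's point of view (session id value again made up for illustration):

  incoming:  JSESSIONID=0A1B2C3D4E5F.instance1  ->  session id 0A1B2C3D4E5F (jvmRoute stripped)
  outgoing:  session id 0A1B2C3D4E5F            ->  JSESSIONID=0A1B2C3D4E5F.instance2 (local jvmRoute appended)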
One side effect of this change is that, since a jvmRoute is dynamic, the web container now adds a JSESSIONID cookie to every response, regardless of whether an HTTP session was created or resumed by the corresponding request, provided that the jvmRoute system property has been set.
The remainder of this blog covers important configuration aspects.
In order to load-balance a GlassFish cluster via Apache, follow these steps:
- Define the jvmRoute and com.sun.enterprise.web.connector.enableJK system properties at the GlassFish cluster level. For example, in the case of a cluster named "cluster1", run these commands:

  asadmin create-jvm-options --target cluster1 "-DjvmRoute=\${AJP_INSTANCE_NAME}"
  asadmin create-jvm-options --target cluster1 "-Dcom.sun.enterprise.web.connector.enableJK=\${AJP_PORT}"
- Configure the above system properties for each instance in the cluster. For example, for a cluster instance named "instance9", run these commands:

  asadmin create-system-properties --target instance9 AJP_INSTANCE_NAME=instance9
  asadmin create-system-properties --target instance9 AJP_PORT=8020

  Notice how the port number (8020) specified for the mod_jk connector on "instance9" matches the value of the corresponding worker.instance9.port in the sample workers.properties below.
- List each GlassFish instance, including the port number of its mod_jk connector, in Apache's workers.properties configuration file. Make sure that the name of each worker equals the value of the jvmRoute system property of the GlassFish instance to which the worker connects. This convention makes it possible for an HTTP session to remain sticky to the GlassFish instance on which the session was created, or on which the session was last resumed.
- The following sample workers.properties configuration file is used to load-balance a 9-instance GlassFish cluster, in which the instances are spread over three physical server machines: my.domain1.com, my.domain2.com, and my.domain3.com:

  # Define 1 real worker using ajp13
  worker.list=loadbalancer

  # Set properties for instance1
  worker.instance1.type=ajp13
  worker.instance1.host=my.domain1.com
  worker.instance1.port=8012
  worker.instance1.lbfactor=50
  worker.instance1.cachesize=10
  worker.instance1.cache_timeout=600
  worker.instance1.socket_keepalive=1
  worker.instance1.socket_timeout=300

  # Set properties for instance4
  worker.instance4.type=ajp13
  worker.instance4.host=my.domain1.com
  worker.instance4.port=8015
  worker.instance4.lbfactor=50
  worker.instance4.cachesize=10
  worker.instance4.cache_timeout=600
  worker.instance4.socket_keepalive=1
  worker.instance4.socket_timeout=300

  # Set properties for instance7
  worker.instance7.type=ajp13
  worker.instance7.host=my.domain1.com
  worker.instance7.port=8018
  worker.instance7.lbfactor=50
  worker.instance7.cachesize=10
  worker.instance7.cache_timeout=600
  worker.instance7.socket_keepalive=1
  worker.instance7.socket_timeout=300

  # Set properties for instance2
  worker.instance2.type=ajp13
  worker.instance2.host=my.domain2.com
  worker.instance2.port=8013
  worker.instance2.lbfactor=50
  worker.instance2.cachesize=10
  worker.instance2.cache_timeout=600
  worker.instance2.socket_keepalive=1
  worker.instance2.socket_timeout=300

  # Set properties for instance5
  worker.instance5.type=ajp13
  worker.instance5.host=my.domain2.com
  worker.instance5.port=8016
  worker.instance5.lbfactor=50
  worker.instance5.cachesize=10
  worker.instance5.cache_timeout=600
  worker.instance5.socket_keepalive=1
  worker.instance5.socket_timeout=300

  # Set properties for instance8
  worker.instance8.type=ajp13
  worker.instance8.host=my.domain2.com
  worker.instance8.port=8019
  worker.instance8.lbfactor=50
  worker.instance8.cachesize=10
  worker.instance8.cache_timeout=600
  worker.instance8.socket_keepalive=1
  worker.instance8.socket_timeout=300

  # Set properties for instance3
  worker.instance3.type=ajp13
  worker.instance3.host=my.domain3.com
  worker.instance3.port=8014
  worker.instance3.lbfactor=50
  worker.instance3.cachesize=10
  worker.instance3.cache_timeout=600
  worker.instance3.socket_keepalive=1
  worker.instance3.socket_timeout=300

  # Set properties for instance6
  worker.instance6.type=ajp13
  worker.instance6.host=my.domain3.com
  worker.instance6.port=8017
  worker.instance6.lbfactor=50
  worker.instance6.cachesize=10
  worker.instance6.cache_timeout=600
  worker.instance6.socket_keepalive=1
  worker.instance6.socket_timeout=300

  # Set properties for instance9
  worker.instance9.type=ajp13
  worker.instance9.host=my.domain3.com
  worker.instance9.port=8020
  worker.instance9.lbfactor=50
  worker.instance9.cachesize=10
  worker.instance9.cache_timeout=600
  worker.instance9.socket_keepalive=1
  worker.instance9.socket_timeout=300

  worker.loadbalancer.type=lb
  worker.loadbalancer.balance_workers=instance1,instance2,instance3,instance4,instance5,instance6,instance7,instance8,instance9
- Reference the loadbalancer worker specified in your workers.properties file from your httpd.conf. The following snippet from httpd.conf causes any JSP requests to be load-balanced over the GlassFish cluster configured in the above workers.properties file (a fuller httpd.conf sketch follows this list):

  JkWorkersFile workers.properties
  # Loadbalance all JSP requests over GlassFish cluster
  JkMount /*.jsp loadbalancer
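For completeness, here is a sketch of how these directives might sit in a full httpd.conf. The module and file paths are illustrative assumptions (they depend on how and where mod_jk was installed):

  # Load the mod_jk module (path relative to ServerRoot)
  LoadModule jk_module modules/mod_jk.so

  # Point mod_jk at the worker definitions and a log file
  JkWorkersFile conf/workers.properties
  JkLogFile logs/mod_jk.log
  JkLogLevel info

  # Loadbalance all JSP requests over the GlassFish cluster
  JkMount /*.jsp loadbalancer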
As soon as the cluster instance to which an HTTP session has been sticky fails, the loadbalancer will route any subsequent requests for the same HTTP session to a different instance. This instance will be able to load and resume the requested session using the in-memory session replication feature that has been available since GlassFish V2. The in-memory session replication feature is enabled only for those web applications that have been marked as distributable in their web.xml deployment descriptor, and that have been deployed to the cluster with the --availabilityenabled option of the asadmin deploy command set to true (the default is false).
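For example, marking a web application as distributable and deploying it to the cluster with availability enabled might look like this (the application name myapp.war is an assumption):

  <!-- web.xml: mark the web application as distributable -->
  <web-app>
    <distributable/>
  </web-app>

  # Deploy to the cluster with in-memory session replication enabled
  asadmin deploy --target cluster1 --availabilityenabled=true myapp.war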