The clustering guide describes concerns and configuration guidelines for deploying CAS in a high availability (HA) environment.
Overview

Clustering is essential if your CAS instance is to be "highly available," or HA in manager-speak. Since CAS is a stateful application, each CAS instance must have a way to know what the other CAS instances have done. It would be nice to use just one CAS instance (and one instance on appropriate hardware can probably handle your login needs easily), but if that instance fails, you do not want all of your users to have to log in again.

As mentioned above, CAS is stateful, and stateful in more than one way: it keeps track of users in the application's session, and it keeps track of the services the user visits and the tickets used to visit those services. Although service and proxy tickets are only stored in memory for a brief time, if you are load balancing and clustering CAS, each instance must know about those tickets immediately; if it does not, CAS simply will not work most of the time. You may think that load-balancer sticky sessions will save you, but they won't! Sticky sessions are good for sending the user (via a web browser) back to the same CAS instance, but they do not solve the problem that applications also talk to CAS, and the load balancer may have already pinned a particular application to a different CAS instance via its own sticky session! So, several things need to be done for clustering to work: tickets must be unique across nodes, the login session must be replicated, the ticket cache must be replicated, and the ticket granting ticket cookie must be visible to every node. Each is covered below.
Since CAS is a Java application (and based on Spring at that), there are many ways to do clustering. Furthermore, there is no easy "on/off" switch for clustering, hence this document. The CAS clustering described here takes advantage of the Spring underpinnings of CAS and implements clustering purely via XML configuration! (Of course, we do use Java classes that have already been written by the CAS team.)

Assumptions

This HOW-TO makes the following assumptions:
Clustering

Guaranteeing Ticket Uniqueness

If you are using CAS 3.2.x, feel free to skip this step; it is already part of your implementation. Since all tickets need to be unique across JVMs, we will configure this part first, and it is the easiest part to do, too. The first problem you need to solve is what unique identifier to use. I chose the hostname of the server from which CAS is being served. Because this is Java and we do everything via XML configuration rather than Java code, we will solve this problem with a Spring property placeholder. By default, CAS gets vital host-specific configuration properties from the cas.properties file that is packed in the WAR file. Place that file in a convenient filesystem location that is accessible to the Java process running the servlet container, e.g., /apps/local/share/etc/cas.properties
The contents of cas.properties should be exactly the same as those distributed with the CAS distribution:
cas.properties
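The essential entry for clustering is the host.name property referenced below; a representative sketch (the hostname value is illustrative):

```properties
# Unique per-node identifier appended to generated ticket ids.
# On each cluster node, set this to that node's own hostname.
host.name=cas01
```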
In order for CAS to load properties from the filesystem instead of the classpath of the unpacked WAR file, you must modify the file
applicationContext.xml
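A sketch of the change, assuming Spring's PropertyPlaceholderConfigurer is the mechanism (the bean id is illustrative; the path matches the example location above):

```xml
<!-- Load cas.properties from the filesystem instead of the classpath. -->
<bean id="placeholderConfig"
      class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="file:/apps/local/share/etc/cas.properties"/>
</bean>
```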
The host.name property placeholder is used by ticket generators to tag tickets issued by a particular cluster node:
uniqueIdGenerators.xml
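As a hedged sketch, each generator bean takes the placeholder as a constructor argument, so every ticket id it produces ends with the node's hostname (the bean id and the numeric length argument are illustrative; check the class and constructor against your CAS version):

```xml
<!-- ${host.name} is resolved from cas.properties and suffixes every
     ticket id, guaranteeing uniqueness across cluster nodes. -->
<bean id="ticketGrantingTicketUniqueIdGenerator"
      class="org.jasig.cas.util.DefaultUniqueTicketIdGenerator">
    <constructor-arg index="0" value="50"/>
    <constructor-arg index="1" value="${host.name}"/>
</bean>
```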
This creates tickets like the following:

TGT-2-Lj1aIVkEqGDCSLaXwXVQlIcYQcyyqcI0tuR-<hostname of your server>
Tomcat Session Replication

Since CAS stores the login information in the application session [2], we need to set up session replication between our Tomcat instances. The first thing you need to do is tell CAS (the application) that it is distributable [3]. So, in the CAS web.xml file, you need to add the <distributable/> element.

CAS 3.0.x
CAS 3.1.x & CAS 3.2.x
In this file, I put the <distributable/> element near the top:
web.xml
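A sketch of the relevant fragment (only the <distributable/> element is the actual change; the rest stands in for the stock deployment descriptor):

```xml
<web-app>
    <!-- Tell the servlet container that sessions for this application
         may be replicated across the cluster. -->
    <distributable/>

    <!-- ... existing context-params, filters, and servlets ... -->
</web-app>
```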
Now you need to tell Tomcat to replicate the session information by adding a <Cluster> element to server.xml:
Tomcat 5.5.x server.xml
Tomcat 6.x server.xml
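A minimal Tomcat 6 sketch: placing the element inside <Engine> (or <Host>) with no attributes enables the defaults, i.e. the DeltaManager with multicast membership on 228.0.0.4:45564:

```xml
<Engine name="Catalina" defaultHost="localhost">
    <!-- Minimal clustering: all-to-all session replication with defaults. -->
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

    <Host name="localhost" appBase="webapps"/>
</Engine>
```

For Tomcat 5.5.x, the class is org.apache.catalina.cluster.tcp.SimpleTcpCluster (as the log output below shows), and its attributes are typically spelled out explicitly.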
See http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html and http://tomcat.apache.org/tomcat-6.0-doc/config/cluster.html for more information on Tomcat 6 clustering.

Note 1: Again, please check with your network administrator before turning this on.

Note 2: You will see a lot of references to the

Note 3: If your Tomcat cluster doesn't work (one Tomcat instance not seeing the other members), perhaps you must change

Note 4: If your Tomcat cluster still doesn't work, ensure that the TCP and UDP ports on the servers are not being blocked by a host-based firewall, that your network interface has multicast enabled, and that it has the appropriate routes for multicast.

Note 5: If you see a large stack trace in the cas.log file that ends with a root cause of "java.net.BindException: Cannot assign requested address", it is likely due to the JVM trying to use IPv6 sockets while your system is using IPv4. Set the JVM to prefer IPv4 with the Java system property -Djava.net.preferIPv4Stack=true. You can set the CATALINA_OPTS environment variable so Tomcat picks it up automatically:

export CATALINA_OPTS=-Djava.net.preferIPv4Stack=true
Now start up your two (or more) Tomcat instances (on separate hosts!) and you should see something like the following in the Tomcat log:

May 22, 2007 4:25:54 PM org.apache.catalina.cluster.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added:org.apache.catalina.cluster.mcast.McastMember
[tcp://128.32.143.78:4001,catalina,128.32.143.78,4001, alive=5]
Conversely, in the other instance's log:

May 22, 2007 4:27:13 PM org.apache.catalina.cluster.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added:org.apache.catalina.cluster.mcast.McastMember
[tcp://128.32.143.79:4001,catalina,128.32.143.79,4001, alive=5]
Excellent, you now have clustering of the user's login information for CAS. Test it out by logging into CAS, stopping Tomcat on the server you logged in at, and then hitting the login page again; CAS should show you the "you are already logged in" page.

Ticket Cache Replication

Now we need to set up the ticket cache replication, using the JBossCache-backed ticket registry configured in the following file:
applicationContext.xml
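A sketch of the two beans involved, assuming the JBossCacheTicketRegistry that ships with CAS (the configLocation file name is the conventional one; property names may vary by release). JBossCacheFactoryBean is the class that appears in the startup logs later in this guide:

```xml
<!-- Ticket registry backed by a replicated JBossCache TreeCache. -->
<bean id="ticketRegistry"
      class="org.jasig.cas.ticket.registry.JBossCacheTicketRegistry">
    <property name="cache" ref="cache"/>
</bean>

<!-- Builds the TreeCache from the replication config file on the classpath. -->
<bean id="cache" class="org.jasig.cas.util.JBossCacheFactoryBean">
    <property name="configLocation"
              value="classpath:jbossTicketCacheReplicationConfig.xml"/>
</bean>
```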
Note 1: No space between

In the following file (the location depends on your CAS version):
CAS 3.0.x
CAS 3.1.x & CAS 3.2.x
Open this file up and get ready for some editing. I discovered that the default file did not work in my installation, as some others noted on the CAS mailing list. Scott Battaglia sent an edited version to the list [5]. You have to comment out the following lines:
and:
Next, you have to edit the
Now that you have edited this file, you have to get it onto your classpath.
For JBOSS, this is a good location:
If you know of a better way to get it onto the classpath, please share it.

Now, the hard part: rounding up the ten jars needed to make JBossCache work! JBossCache for CAS requires the following jars:

concurrent.jar
jboss-cache-jdk50.jar
jboss-common.jar
jboss-j2ee.jar
jboss-jmx.jar
jboss-minimal.jar
jboss-serialization.jar
jboss-system.jar
jgroups.jar
trove.jar
CAS 3.0.x

You can get all of these jar files from the JBossCache distribution [7]. Once you have them, put them in your cas-distribution/localPlugins/lib directory.
CAS 3.1.x

Using Maven 2, this is not as hard as in the CAS 3.0.x branch. Add the following dependency to the pom.xml file located in the cas-server-webapp folder, and it will include the JBossCache libraries in cas.war. Remark: the dependency is needed if you are NOT using JBoss Application Server.
pom.xml
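A sketch of the dependency (the coordinates follow the cas-server-integration-jboss module naming; pin the version to your CAS release):

```xml
<!-- Pulls JBossCache and its transitive jars into cas.war. -->
<dependency>
    <groupId>org.jasig.cas</groupId>
    <artifactId>cas-server-integration-jboss</artifactId>
    <version>${project.version}</version>
</dependency>
```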
CAS 3.2.x on JBoss (or probably any CAS implementation on JBoss)

You need to exclude some jars from the deployment; otherwise they will conflict with those JBoss already provides.
pom.xml
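A sketch of the same dependency with exclusions, assuming the clashing jars are ones JBoss AS already ships (the exclusion list is illustrative; extend it to whichever jars actually conflict in your deployment):

```xml
<dependency>
    <groupId>org.jasig.cas</groupId>
    <artifactId>cas-server-integration-jboss</artifactId>
    <version>${project.version}</version>
    <exclusions>
        <!-- JBoss AS provides these itself; shipping copies inside
             cas.war causes classloader conflicts. -->
        <exclusion>
            <groupId>jboss</groupId>
            <artifactId>jboss-common</artifactId>
        </exclusion>
        <exclusion>
            <groupId>jboss</groupId>
            <artifactId>jboss-system</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```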
OK, now let's test this thing! Build cas.war and redeploy it to your two (or more) Tomcat instances, and you should see the following in the log:

2007-05-23 16:59:34,486 INFO [org.jasig.cas.util.JBossCacheFactoryBean] - <Starting TreeCache service.>
-------------------------------------------------------
GMS: address is 128.32.143.78:51052
-------------------------------------------------------
In the other instance's log:

2007-05-23 17:01:22,113 INFO [org.jasig.cas.util.JBossCacheFactoryBean] - <Starting TreeCache service.>
-------------------------------------------------------
GMS: address is 128.32.143.79:56023
-------------------------------------------------------
If you see this, and no Java exceptions, you are doing well! If you see Java exceptions, they are probably related to Tomcat not being able to find the jars listed above.

Ensuring Ticket Granting Ticket Cookie Visibility

The last step before you can test whether CAS is set up to be clustered correctly is to ensure that the ticket granting ticket (TGT) cookie set in the users' browsers is visible to all of the nodes in the CAS cluster. Using your favorite text editor (shameless plug for vim), open the cas-servlet.xml file and look for the warnCookieGenerator and ticketGrantingTicketCookieGenerator beans. Both of these beans need to have the cookieDomain property set to the domain where the TGT cookie should be visible. Edit the bean declarations based on the following example (substitute your domain as necessary):
warnCookieGenerator.xml
ticketGrantingCookieGenerator.xml
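A sketch of the two declarations (the CookieRetrievingCookieGenerator class and the CASPRIVACY/CASTGC cookie names match CAS 3.1+; substitute your own cookieDomain value):

```xml
<bean id="warnCookieGenerator"
      class="org.jasig.cas.web.support.CookieRetrievingCookieGenerator">
    <property name="cookieSecure" value="true"/>
    <property name="cookieName" value="CASPRIVACY"/>
    <property name="cookiePath" value="/cas"/>
    <!-- Make the cookie visible to every CAS node in the domain. -->
    <property name="cookieDomain" value="example.edu"/>
</bean>

<bean id="ticketGrantingTicketCookieGenerator"
      class="org.jasig.cas.web.support.CookieRetrievingCookieGenerator">
    <property name="cookieSecure" value="true"/>
    <property name="cookieName" value="CASTGC"/>
    <property name="cookiePath" value="/cas"/>
    <property name="cookieDomain" value="example.edu"/>
</bean>
```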
Verification

JBoss

You will need to deploy CAS as a .war file into JBoss's farm directory at:
After you have started your cluster servers, ensure you have a cluster by checking the JBoss DefaultPartition. The CurrentView should show all the IPs of your cluster. If not, you will need to research why your cluster is not finding the other nodes.
Service Management

If you use the service management feature to restrict access to the CAS server based on CAS client service URLs/URL patterns, a Quartz job like the following must be added to one of your Spring contexts. The purpose of the job is to notify the other nodes of service changes by reloading the services from the backing store. A service registry implementation that supports clustering, e.g. JpaServiceRegistryDaoImpl or LdapServiceRegistryDao, is required for proper clustering support. Both the Service Manager Reload Job and its Trigger should be added to ticketRegistry.xml.
Service Manager Reload Job
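A sketch using Spring's Quartz integration, assuming the services manager bean is named servicesManager and exposes a reload method (verify the bean id and method name against your CAS version):

```xml
<!-- Reloads registered services from the backing store so this node
     sees changes made through another node's service management UI. -->
<bean id="serviceRegistryReloaderJobDetail"
      class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
    <property name="targetObject" ref="servicesManager"/>
    <property name="targetMethod" value="reload"/>
</bean>
```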
In order for the above job to fire, the trigger must be added to the Quartz scheduler bean as follows:
ticketRegistry.xml
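A sketch of the trigger and scheduler wiring, assuming the reload job detail bean is named serviceRegistryReloaderJobDetail (the two-minute intervals are illustrative):

```xml
<!-- Fire the reload job shortly after startup and every two minutes after. -->
<bean id="periodicReloaderTrigger"
      class="org.springframework.scheduling.quartz.SimpleTriggerBean">
    <property name="jobDetail" ref="serviceRegistryReloaderJobDetail"/>
    <property name="startDelay" value="120000"/>
    <property name="repeatInterval" value="120000"/>
</bean>

<bean id="scheduler"
      class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
    <property name="triggers">
        <list>
            <ref bean="periodicReloaderTrigger"/>
        </list>
    </property>
</bean>
```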
References

The following references are used in this document:
https://wiki.jasig.org/display/CASUM/Clustering+CAS