1. Key points for Tomcat cluster configuration:
///begin///
<1> All your session attributes must implement java.io.Serializable
==> Every object stored in the session must be serializable, i.e. implement java.io.Serializable;
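A minimal sketch of such a session attribute (the class name CartItem and its fields are hypothetical). The main method round-trips the object through a byte stream, which is roughly what Tomcat's replication manager does when it ships a session to another node:

```java
import java.io.*;

// Hypothetical session attribute: any object passed to
// HttpSession#setAttribute must be serializable for replication to work.
public class CartItem implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String sku;
    private final int quantity;

    public CartItem(String sku, int quantity) {
        this.sku = sku;
        this.quantity = quantity;
    }

    public String getSku() { return sku; }
    public int getQuantity() { return quantity; }

    public static void main(String[] args) throws Exception {
        CartItem original = new CartItem("SKU-42", 3);

        // Serialize to a byte array...
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(original);
        }

        // ...and deserialize it back, as the receiving node would.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            CartItem copy = (CartItem) in.readObject();
            System.out.println(copy.getSku() + ":" + copy.getQuantity());
            // prints "SKU-42:3"
        }
    }
}
```

If a non-serializable object (e.g. a raw database connection) is stored in the session, replication fails with a NotSerializableException in the logs.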
<2> Uncomment the Cluster element in server.xml
==> Enable the Cluster element in server.xml;
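For Tomcat 6/7 the documented minimal form is a single line; by default it uses the DeltaManager with multicast membership discovery:

```xml
<!-- Place inside <Engine> or <Host> in server.xml. -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
```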
<3> If you have defined custom cluster valves, make sure you have the ReplicationValve defined as well under the Cluster element in server.xml
==> If you have defined custom cluster Valves, make sure the ReplicationValve is also defined under the Cluster element in server.xml;
<4> If your Tomcat instances are running on the same machine, make sure the tcpListenPort attribute is unique for each instance;
in most cases Tomcat is smart enough to resolve this on its own by autodetecting available ports in the range 4000-4100
==> If the Tomcat instances run on the same host, make sure their TCP listen ports do not collide; besides the cluster receiver port (autodetected in 4000-4100), the standard ports 8080, 8009, 8443 and 8005 must also be unique per instance.
<5> Make sure your web.xml has the <distributable/> element
==> Make sure your application's web.xml contains the <distributable/> element; it is best placed directly under the root element;
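A sketch of where the marker goes (namespace and version are illustrative for a Servlet 2.5 application):

```xml
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
    <!-- Empty marker element: declares the app safe for session replication -->
    <distributable/>
    <!-- servlets, filters, listeners, ... -->
</web-app>
```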
<6> If you are using mod_jk, make sure the jvmRoute attribute is set on your Engine, <Engine name="Catalina" jvmRoute="node01" >,
and that the jvmRoute attribute value matches your worker name in workers.properties
==> If you use mod_jk, make sure the jvmRoute attribute of <Engine name="Catalina" jvmRoute="node01" > in server.xml matches the corresponding worker name in the load balancer's workers.properties;
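A workers.properties sketch under that assumption (the node02 worker, hosts and ports are illustrative; the worker names node01/node02 must equal the jvmRoute of each node):

```properties
# Load balancer worker exposed to mod_jk
worker.list=loadbalancer
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node01,node02

# worker name "node01" matches jvmRoute="node01" in that node's server.xml
worker.node01.type=ajp13
worker.node01.host=10.88.147.148
worker.node01.port=8009

worker.node02.type=ajp13
worker.node02.host=10.88.147.148
worker.node02.port=9009
```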
<7> Make sure that all nodes have the same time and sync with NTP service!
==> Make sure all nodes agree on the time and are synchronized via an NTP service;
<8> Make sure that your loadbalancer is configured for sticky session mode.
==> Enable sticky sessions on the load balancer;
<9> Note: do not uncomment the <Manager pathname="" /> element in Tomcat's context.xml, i.e. keep it as:
///begin///
<!-- Uncomment this to disable session persistence across Tomcat restarts -->
<!--
<Manager pathname="" />
-->
///end///
<10> Note: with multiple nodes, wait until the previous node has started completely before starting the next one;
///end///
<11> Check the Tomcat logs (Tomcat 6 and Tomcat 7) for messages similar to the following:
///begin///
Cluster is about to start ==> the cluster is starting
Receiver Server Socket bound to:/10.88.147.205:4000
Setting cluster mcast soTimeout to 500
JvmRouteBinderValve started
Register manager /bar to cluster element Host with name localhost ==> the managed application is registered with the cluster; related to <distributable/>;
Starting clustering manager at /bar
Manager [/bar]: session state send at 8/28/14 4:22 PM received in 875 ms. ==> indicates whether the session state was synchronized;
Received memberDisappeared[org.apache.catalina.tribes.membership.MemberImpl[tcp://{10, 88, 147, 148}:5000,{10, 88, 147, 148},5000, alive=111615,id={49 -1 97 74 27 -115 72 -25 -118 -35 -115 84 -3 -57 77 -61 }, payload={}, command={}, domain={}, ]] message. Will verify.
org.apache.catalina.tribes.group.interceptors.TcpFailureDetector memberDisappeared
Verification complete. Member already disappeared[org.apache.catalina.tribes.membership.MemberImpl[tcp://{10, 88, 147, 148}:5000,{10, 88, 147, 148},5000, alive=111615,id={49 -1 97 74 27 -115 72 -25 -118 -35 -115 84 -3 -57 77 -61 }, payload={}, command={}, domain={}, ]]
///end///
2. Sample load-balancing configuration for Apache; AJP is used here instead of HTTP, because proxying over plain HTTP can lead to problems with the backend's real address showing through;
///begin///
Listen 80
<VirtualHost *:80>
    ProxyRequests Off
    ProxyPass / balancer://test/ lbmethod=byrequests stickysession=JSESSIONID nofailover=Off timeout=5 maxattempts=3
    #ProxyPassReverse / balancer://test/
    <Proxy balancer://test>
        BalancerMember ajp://10.88.147.148:8009 route=jvm1
        BalancerMember ajp://10.88.147.148:9009 route=jvm2
    </Proxy>
    SetEnv proxy-nokeepalive 0
</VirtualHost>
///end///
3. Sample load-balancing configuration for Nginx:
///begin///
upstream 10.88.147.205 {
    # enable session stickiness
    ip_hash;
    server 10.88.112.165:8080 max_fails=3 fail_timeout=5s;
    server 10.88.112.165:80 max_fails=3 fail_timeout=5s;
}
location / {
    root html;
    index index.html index.htm;
    proxy_pass http://10.88.147.205;
    proxy_redirect default;
    proxy_connect_timeout 10;
}
Note on hierarchy: the http block is the outermost element and contains the upstream and server blocks; location blocks live inside server, not directly inside http.
///end///
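Putting the hierarchy note into a full skeleton (the upstream name and addresses reuse the example values above; an upstream may be given any identifier):

```nginx
http {
    upstream 10.88.147.205 {
        ip_hash;                      # sticky sessions keyed by client IP
        server 10.88.112.165:8080 max_fails=3 fail_timeout=5s;
        server 10.88.112.165:80   max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://10.88.147.205;   # must match the upstream name
        }
    }
}
```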