4: Tomcat Session Persistence Implementations
4.1: Lab Environment
- Servers:
- node104: 192.168.1.104, load balancer (Nginx/httpd);
- node105: 192.168.1.105, Tomcat 8, JDK 8, MSM (Memcached, Redis);
- node106: 192.168.1.106, Tomcat 8, JDK 8, MSM (Memcached);
- Hostname resolution:
[root@node104 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.104 node104.yqc.com node104
192.168.1.105 node105.yqc.com node105
192.168.1.106 node106.yqc.com node106
- Copy it to every server:
[root@node104 ~]# scp /etc/hosts node105:/etc/hosts
[root@node104 ~]# scp /etc/hosts node106:/etc/hosts
4.2: Preparing the Tomcat Applications
4.2.1:node105
- Tomcat environment variables:
[root@node105 ~]# vim /etc/profile.d/tomcat.sh
export CATALINA_HOME=/usr/local/tomcat
export PATH=$CATALINA_HOME/bin:$PATH
[root@node105 ~]# source /etc/profile.d/tomcat.sh
- Create the application directory:
[root@node105 ~]# mkdir -pv /data/webapps/ROOT/
- Write a test JSP page:
[root@node105 ~]# vim /data/webapps/ROOT/index.jsp
<%@ page import="java.util.*" %>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>lbjsptest</title>
</head>
<body>
<div>On <%=request.getServerName() %></div>
<div><%=request.getLocalAddr() + ":" + request.getLocalPort() %></div>
<div>SessionID = <span style="color:blue"><%=session.getId() %></span></div>
<%=new Date()%>
</body>
</html>
- Virtual host configuration:
[root@node105 ~]# vim /usr/local/tomcat/conf/server.xml
<Engine name="Catalina" defaultHost="node105.yqc.com">
<Host name="node105.yqc.com" appBase="/data/webapps"
unpackWARs="true" autoDeploy="true" />
</Engine>
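Once the vhost is up, a quick curl check confirms the test page renders and a session is issued. A minimal sketch, assuming Tomcat is already started on node105; `extract_session_id` is a hypothetical helper written against the index.jsp markup above.

```shell
# Helper: read the test page's HTML on stdin and print the SessionID value
# from the <span> that index.jsp emits. (Hypothetical helper name.)
extract_session_id() {
  sed -n 's|.*SessionID = <span[^>]*>\([^<]*\)</span>.*|\1|p'
}

# Against the live node this would be:
#   curl -s http://node105.yqc.com:8080/ | extract_session_id
# Offline demo with a canned line from the page:
echo '<div>SessionID = <span style="color:blue">ABC123</span></div>' | extract_session_id
```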
4.2.2:node106
- Tomcat environment variables:
[root@node106 ~]# vim /etc/profile.d/tomcat.sh
export CATALINA_HOME=/usr/local/tomcat
export PATH=$CATALINA_HOME/bin:$PATH
[root@node106 ~]# source /etc/profile.d/tomcat.sh
- Create the application directory:
[root@node106 ~]# mkdir -pv /data/webapps/ROOT/
- Write a test JSP page:
[root@node106 ~]# vim /data/webapps/ROOT/index.jsp
<%@ page import="java.util.*" %>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>lbjsptest</title>
</head>
<body>
<div>On <%=request.getServerName() %></div>
<div><%=request.getLocalAddr() + ":" + request.getLocalPort() %></div>
<div>SessionID = <span style="color:blue"><%=session.getId() %></span></div>
<%=new Date()%>
</body>
</html>
- Virtual host configuration:
[root@node106 ~]# vim /usr/local/tomcat/conf/server.xml
<Engine name="Catalina" defaultHost="node106.yqc.com">
<Host name="node106.yqc.com" appBase="/data/webapps"
unpackWARs="true" autoDeploy="true" />
</Engine>
4.3: Session Sticky Lab
This approach is rarely relied on for session persistence by itself: if a backend server fails, clients are issued new SessionIDs by another server during the outage, so even if the failed server persisted its sessions, that session data is useless by the time it comes back online;
4.3.1: Load Balancing with Nginx
Configuring round-robin scheduling in Nginx
- Define the backend server group and proxy requests to it; leave ip_hash commented out for now so the default round-robin scheduling applies (the http { } wrapper below is shown for context only; files under conf.d are already included inside the http block):
[root@node104 ~]# vim /etc/nginx/conf.d/tomcat.conf
http {
……
upstream tomcat-servers {
#ip_hash;
server node105.yqc.com:8080;
server node106.yqc.com:8080;
}
server {
……
location / {
proxy_pass http://tomcat-servers;
}
……
}
……
}
- Check the configuration and reload Nginx:
[root@node104 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@node104 ~]# systemctl reload nginx
- Access test: requests alternate between the two backends (round-robin);
Configuring session stickiness in Nginx
Use ip_hash to implement session stickiness: requests from the same client address are always forwarded to the same backend;
- Enable ip_hash:
[root@node104 ~]# vim /etc/nginx/conf.d/tomcat.conf
http {
……
upstream tomcat-servers {
ip_hash;
server node105.yqc.com:8080;
server node106.yqc.com:8080;
}
server {
……
location / {
proxy_pass http://tomcat-servers;
}
……
}
……
}
- Check the configuration and reload Nginx:
[root@node104 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@node104 ~]# systemctl reload nginx
- Access test: repeated requests all land on the same backend, and the SessionID stays unchanged;
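To make the stickiness check repeatable from a shell, the sketch below compares SessionIDs across several requests. The `all_same` helper is my own addition, not part of the lab; with ip_hash enabled, the live loop should print identical IDs only.

```shell
# Helper: succeed (exit 0) only if every line on stdin is identical.
all_same() { [ "$(sort -u | wc -l)" -eq 1 ]; }

# Live check against the balancer (ip_hash enabled): every request from
# this client IP should carry the same SessionID, so this should succeed:
#   for i in 1 2 3 4; do
#     curl -s http://node104.yqc.com/ | sed -n 's|.*<span[^>]*>\([^<]*\)</span>.*|\1|p'
#   done | all_same && echo "sticky OK"

# Offline demo:
printf 'S1\nS1\nS1\n' | all_same && echo "same"
printf 'S1\nS2\n' | all_same || echo "different"
```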
4.3.2: Load Balancing with httpd
Tomcat configuration
- Add a jvmRoute attribute to the Engine element in server.xml:
[root@node105 ~]# vim /usr/local/tomcat/conf/server.xml
<Engine name="Catalina" defaultHost="node105.yqc.com" jvmRoute="node105">
[root@node106 ~]# vim /usr/local/tomcat/conf/server.xml
<Engine name="Catalina" defaultHost="node106.yqc.com" jvmRoute="node106">
- Restart Tomcat
Configuring round-robin scheduling in httpd
- Start without session stickiness:
[root@node104 ~]# vim /etc/httpd/conf.d/tomcat.conf
<VirtualHost *:80>
ProxyRequests Off
ProxyVia On
ProxyPreserveHost On
ProxyPass / balancer://lbtomcats/
ProxyPassReverse / balancer://lbtomcats/
</VirtualHost>
<Proxy balancer://lbtomcats>
BalancerMember http://node105.yqc.com:8080 loadfactor=1
BalancerMember http://node106.yqc.com:8080 loadfactor=1
</Proxy>
- Check the configuration and reload httpd:
[root@node104 conf.d]# httpd -t
Syntax OK
[root@node104 conf.d]# systemctl reload httpd
- Access test: with both BalancerMembers at loadfactor=1, requests are scheduled round-robin at a 1:1 ratio;
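The 1:1 split can be verified by tallying which backend served each request. A sketch that relies on the "On <hostname>" line from the test page above; `tally_backends` is a name I introduce here.

```shell
# Helper: read test pages on stdin, count requests per backend hostname.
tally_backends() {
  sed -n 's|.*<div>On \(.*\)</div>.*|\1|p' | sort | uniq -c
}

# Live: for i in $(seq 1 10); do curl -s http://node104.yqc.com/; done | tally_backends
# With loadfactor=1 on both members, the two counts should be about equal.

# Offline demo:
printf '<div>On node105.yqc.com</div>\n<div>On node106.yqc.com</div>\n<div>On node105.yqc.com</div>\n' | tally_backends
```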
Configuring session stickiness in httpd
- Edit the configuration file:
[root@node104 ~]# vim /etc/httpd/conf.d/tomcat.conf
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/"
<VirtualHost *:80>
ProxyRequests Off
ProxyVia On
ProxyPreserveHost On
ProxyPass / balancer://lbtomcats/
ProxyPassReverse / balancer://lbtomcats/
</VirtualHost>
<Proxy balancer://lbtomcats>
BalancerMember http://node105.yqc.com:8080 loadfactor=1 route=node105
BalancerMember http://node106.yqc.com:8080 loadfactor=1 route=node106
ProxySet stickysession=ROUTEID
</Proxy>
- Check the configuration and reload httpd:
[root@node104 conf.d]# httpd -t
Syntax OK
[root@node104 conf.d]# systemctl reload httpd
- Access test: requests are always scheduled to the same backend;
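Stickiness here rides on the ROUTEID cookie set by the Header directive above, so it can be checked by capturing curl's cookie jar. The `route_from_jar` helper is hypothetical; it parses the Netscape cookie-jar format curl writes (cookie name in field 6, value in field 7).

```shell
# Helper: print the ROUTEID value from a curl cookie jar on stdin.
route_from_jar() { awk '$6 == "ROUTEID" { print $7 }'; }

# Live: store the jar on the first request, replay it afterwards:
#   curl -s -c /tmp/lb.jar http://node104.yqc.com/ -o /dev/null
#   route_from_jar < /tmp/lb.jar          # e.g. .node105
#   curl -s -b /tmp/lb.jar http://node104.yqc.com/ | grep '<div>On '

# Offline demo with a canned jar line (tab-separated fields):
printf 'node104.yqc.com\tFALSE\t/\tFALSE\t0\tROUTEID\t.node105\n' | route_from_jar
```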
4.4: Session Cluster Lab
Reference: https://tomcat.apache.org/tomcat-8.5-doc/cluster-howto.html
Tomcat's Session Cluster configuration can be placed in either of two scopes:
- inside a Host: it applies only to that Host, i.e. session replication is enabled only for requests to that Host;
- inside the Engine: it applies to all Hosts;
Things to note:
- the clocks of all servers in the Session Cluster must be kept in sync;
- mind the firewall rules; in a lab, disabling the firewall may be simplest;
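The two notes above translate into a few commands. A sketch assuming CentOS 7 with chrony and firewalld; the ports are the ones used in the <Cluster> configuration that follows (multicast membership on 45564/udp, the NioReceiver on 4000/tcp).

```shell
# Keep cluster nodes time-synced; DeltaManager replication assumes it:
#   yum install -y chrony && systemctl enable --now chronyd
# Open the cluster ports (or simply stop firewalld in a lab):
#   firewall-cmd --permanent --add-port=4000/tcp
#   firewall-cmd --permanent --add-port=45564/udp
#   firewall-cmd --reload
# Quick sanity check: print UTC time on each node and compare by eye.
date -u +%FT%TZ
```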
4.4.1: node105 Configuration
- Add a Cluster configuration to the node105.yqc.com virtual host on node105:
[root@node105 tomcat]# vim conf/server.xml
<Host name="node105.yqc.com" appBase="/data/webapps"
unpackWARs="true" autoDeploy="true" >
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="8">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.4"
port="45564"
frequency="500"
dropTime="3000"/>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="192.168.1.105"
port="4000"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/tmp/war-temp/"
deployDir="/tmp/war-deploy/"
watchDir="/tmp/war-listen/"
watchEnabled="false"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
</Host>
- Give the application its own web.xml; the <distributable/> element enables session replication for this webapp:
[root@node105 ~]# vim /data/webapps/ROOT/WEB-INF/web.xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<web-app>
<distributable/>
</web-app>
- Restart Tomcat and check the listening port:
[root@node105 tomcat]# ss -tnlp | grep 4000
LISTEN 0 50 ::ffff:192.168.1.105:4000 :::* users:(("java",pid=14879,fd=66))
4.4.2: node106 Configuration
- Add a Cluster configuration to the node106.yqc.com virtual host on node106:
[root@node106 tomcat]# vim conf/server.xml
<Host name="node106.yqc.com" appBase="/data/webapps"
unpackWARs="true" autoDeploy="true" >
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="8">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.4"
port="45564"
frequency="500"
dropTime="3000"/>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="192.168.1.106"
port="4000"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/tmp/war-temp/"
deployDir="/tmp/war-deploy/"
watchDir="/tmp/war-listen/"
watchEnabled="false"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
</Host>
- Give the application its own web.xml; the <distributable/> element enables session replication for this webapp:
[root@node106 ~]# vim /data/webapps/ROOT/WEB-INF/web.xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<web-app>
<distributable/>
</web-app>
- Restart Tomcat and check the listening port:
[root@node106 tomcat]# ss -tnlp | grep 4000
LISTEN 0 50 ::ffff:192.168.1.106:4000 :::* users:(("java",pid=15836,fd=66))
4.4.3: Re-enabling Nginx Round-Robin Scheduling
- On node104, switch back to Nginx as the front-end load balancer, with ip_hash commented out so round-robin scheduling is used:
[root@node104 ~]# vim /etc/nginx/conf.d/tomcat.conf
http {
……
upstream tomcat-servers {
#ip_hash;
server node105.yqc.com:8080;
server node106.yqc.com:8080;
}
server {
……
location / {
proxy_pass http://tomcat-servers;
}
……
}
……
}
4.4.4: Access Test
- The SessionID stays the same; only the backend server changes:
This shows that node105 and node106 both hold the session 81C1AA40452DB7E0D955836AD6BDF7D7;
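This behavior can be captured in one loop: the backend line changes while the session line stays fixed. A sketch; `page_summary` is a helper I introduce against the index.jsp markup, and a cookie jar is needed so the same client session is replayed on every request.

```shell
# Helper: print the backend and SessionID from a test page on stdin.
page_summary() {
  sed -n -e 's|.*<div>On \(.*\)</div>.*|backend: \1|p' \
         -e 's|.*<span[^>]*>\([^<]*\)</span>.*|session: \1|p'
}

# Live (expect "backend:" to alternate and "session:" to repeat):
#   jar=$(mktemp)
#   for i in 1 2 3; do curl -s -b "$jar" -c "$jar" http://node104.yqc.com/ | page_summary; done

# Offline demo:
printf '<div>On node106.yqc.com</div>\n' | page_summary
```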
4.5: Session Server Lab
MSM (Memcached Session Manager)
MSM (memcached session manager) stores Tomcat sessions in memcached or Redis, providing session high availability.
It currently supports Tomcat 6.x, 7.x, 8.x and 9.x.
GitHub project:
https://github.com/magro/memcached-session-manager
Configuration documentation:
https://github.com/magro/memcached-session-manager/wiki/SetupAndConfiguration
4.5.1: MSM Sticky Deployment
Before deploying MSM, remove the Tomcat Cluster configuration from server.xml and delete the cluster web.xml: rm -f /data/webapps/ROOT/WEB-INF/web.xml;
sticky sessions + kryo
- Deployment diagram:
<t105> <t106>
. \ / .
. X .
. / \ .
<m105> <m106>
Importing the jar files
- Put the memcached-session-manager, spymemcached, and kryo-related jar files into $CATALINA_HOME/lib/:
[root@node106 msm]# ll
total 1852
-rw-r--r-- 1 root root 53259 May 31 2020 asm-5.2.jar
-rw-r--r-- 1 root root 586620 May 31 2020 jedis-3.0.0.jar
-rw-r--r-- 1 root root 285211 May 31 2020 kryo-3.0.3.jar
-rw-r--r-- 1 root root 126366 May 31 2020 kryo-serializers-0.45.jar
-rw-r--r-- 1 root root 167294 May 31 2020 memcached-session-manager-2.3.2.jar
-rw-r--r-- 1 root root 10826 May 31 2020 memcached-session-manager-tc8-2.3.2.jar
-rw-r--r-- 1 root root 5923 May 31 2020 minlog-1.3.1.jar
-rw-r--r-- 1 root root 38372 May 31 2020 msm-kryo-serializer-2.3.2.jar
-rw-r--r-- 1 root root 55684 May 31 2020 objenesis-2.6.jar
-rw-r--r-- 1 root root 72265 May 31 2020 reflectasm-1.11.9.jar
-rw-r--r-- 1 root root 473774 May 31 2020 spymemcached-2.12.3.jar
[root@node106 msm]# cp ./* /usr/local/tomcat/lib/
Editing Tomcat's context.xml
- Edit node105's $CATALINA_HOME/conf/context.xml
Note that failoverNodes is set to the local node: in sticky mode the primary session lives in node105's Tomcat memory and is backed up to node106's memcached (m106); the local m105 is listed as a failover node, so it is only used if the other memcached node is unavailable;
[root@node105 ~]# vim /usr/local/tomcat/conf/context.xml
<Context>
...
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="m105:node105.yqc.com:11211,m106:node106.yqc.com:11211"
failoverNodes="m105"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
</Context>
- Edit node106's $CATALINA_HOME/conf/context.xml
Symmetrically: node106's Tomcat memory holds the primary session, node105's memcached (m105) holds the backup, and the local m106 is the failover node;
[root@node106 ~]# vim /usr/local/tomcat/conf/context.xml
<Context>
...
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="m105:node105.yqc.com:11211,m106:node106.yqc.com:11211"
failoverNodes="m106"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
</Context>
Starting Tomcat and Memcached
- Start the services on both nodes:
~]# systemctl start memcached
tomcat]# bin/startup.sh
- Check the MSM messages in catalina.out:
# node105
14-Dec-2020 11:42:51.048 INFO [node105.yqc.com-startStop-1] de.javakaffee.web.msm.MemcachedSessionService.startInternal --------
- finished initialization:
- sticky: true
- operation timeout: 1000
- node ids: [m106]
- failover node ids: [m105]
- storage key prefix: null
- locking mode: null (expiration: 5s)
--------
# node106
14-Dec-2020 11:25:27.230 INFO [node106.yqc.com-startStop-1] de.javakaffee.web.msm.MemcachedSessionService.startInternal --------
- finished initialization:
- sticky: true
- operation timeout: 1000
- node ids: [m105]
- failover node ids: [m106]
- storage key prefix: null
- locking mode: null (expiration: 5s)
--------
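Rather than scrolling through catalina.out, the key MSM init lines can be filtered out. A small sketch; the grep pattern is mine, matching the log excerpt above.

```shell
# Helper: keep only the MSM init lines that matter for this lab.
msm_summary() { grep -E '^- (sticky|node ids|failover node ids):'; }

# Live: msm_summary < /usr/local/tomcat/logs/catalina.out

# Offline demo with lines from the excerpt above:
printf -- '- sticky: true\n- operation timeout: 1000\n- node ids: [m106]\n' | msm_summary
```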
Access test
Access the front-end load balancer node104.yqc.com (round-robin) and check whether the SessionID stays the same when requests are scheduled to different backend Tomcats;
- The SessionID stays unchanged:
4.5.2: MSM Non-Sticky Deployment
Editing Tomcat's context.xml
- Edit node105's $CATALINA_HOME/conf/context.xml:
<Context>
...
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="m105:node105.yqc.com:11211,m106:node106.yqc.com:11211"
sticky="false"
sessionBackupAsync="false"
lockingMode="uriPattern:/path1|/path2"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
</Context>
- Edit node106's $CATALINA_HOME/conf/context.xml, identical to node105's:
<Context>
...
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="m105:node105.yqc.com:11211,m106:node106.yqc.com:11211"
sticky="false"
sessionBackupAsync="false"
lockingMode="uriPattern:/path1|/path2"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
</Context>
Restarting Tomcat
- Restart Tomcat on both nodes:
[root@node105 tomcat]# bin/shutdown.sh
[root@node105 tomcat]# bin/startup.sh
[root@node106 tomcat]# bin/shutdown.sh
[root@node106 tomcat]# bin/startup.sh
- Check the MSM messages in catalina.out:
# node105
14-Dec-2020 15:09:37.518 INFO [node105.yqc.com-startStop-1] de.javakaffee.web.msm.MemcachedSessionService.startInternal --------
- finished initialization:
- sticky: false
- operation timeout: 1000
- node ids: [m105, m106]
- failover node ids: []
- storage key prefix: null
- locking mode: uriPattern:/path1|/path2 (expiration: 5s)
--------
#node106
14-Dec-2020 15:09:41.522 INFO [node106.yqc.com-startStop-1] de.javakaffee.web.msm.MemcachedSessionService.startInternal --------
- finished initialization:
- sticky: false
- operation timeout: 1000
- node ids: [m105, m106]
- failover node ids: []
- storage key prefix: null
- locking mode: uriPattern:/path1|/path2 (expiration: 5s)
--------
Access test
- Access node104.yqc.com again:
Whichever Tomcat the request is scheduled to, the SessionID for the page stays the same;
4.5.3: MSM Non-Sticky Deployment (Redis)
When deploying a session server with MSM + Redis, connections to multiple Redis nodes are not supported, nor is failoverNodes-based failover; for high availability, build a Redis cluster instead. From the MSM wiki: "The built-in support for Redis does currently not allow connections to multiple Redis nodes, nor does it support the failoverNodes property. However, with Redis, automatic failover could be implemented directly in Redis by building a Redis cluster."
(Note: Redis Desktop Manager 0.88 is the last free version; it is used below to inspect the stored sessions.)
Installing and starting Redis
In this lab, Redis is deployed on node105;
- Install Redis on node105:
[root@node105 ~]# yum install redis -y
- Set the Redis listen address:
[root@node105 tomcat]# vim /etc/redis.conf
bind 192.168.1.105
- Stop Memcached and start Redis:
[root@node105 tomcat]# systemctl stop memcached
[root@node105 tomcat]# systemctl disable memcached
[root@node105 tomcat]# systemctl start redis
[root@node105 tomcat]# systemctl enable redis
[root@node106 tomcat]# systemctl stop memcached
[root@node106 tomcat]# systemctl disable memcached
Editing Tomcat's context.xml
- Edit node105's $CATALINA_HOME/conf/context.xml
If Redis listens on the default port 6379, the port may be omitted from memcachedNodes;
<Context>
...
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="redis://node105.yqc.com:6379"
sticky="false"
sessionBackupAsync="false"
lockingMode="uriPattern:/path1|/path2"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
</Context>
- Edit node106's $CATALINA_HOME/conf/context.xml
It points at the same Redis instance on node105;
<Context>
...
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="redis://node105.yqc.com:6379"
sticky="false"
sessionBackupAsync="false"
lockingMode="uriPattern:/path1|/path2"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
</Context>
Restarting Tomcat
- Restart Tomcat on both nodes:
[root@node105 tomcat]# bin/shutdown.sh
[root@node105 tomcat]# bin/startup.sh
[root@node106 tomcat]# bin/shutdown.sh
[root@node106 tomcat]# bin/startup.sh
Access test
- Access node104.yqc.com; the SessionID for the page stays the same across backends:
Redis Desktop Manager
- Use Redis Desktop Manager to view the sessions stored in Redis:
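As a CLI alternative to Redis Desktop Manager, redis-cli on node105 can list what MSM stored. The `redis_args` helper below is hypothetical: it just turns the redis:// URL from memcachedNodes in context.xml into redis-cli host/port arguments.

```shell
# Helper: convert "redis://host:port" into redis-cli arguments.
redis_args() {
  hp=${1#redis://}                      # strip the scheme
  printf -- '-h %s -p %s\n' "${hp%%:*}" "${hp##*:}"
}

# Live inspection (assumes redis-cli is installed on node105):
#   redis-cli $(redis_args redis://node105.yqc.com:6379) dbsize
#   redis-cli $(redis_args redis://node105.yqc.com:6379) --scan

# Offline demo:
redis_args redis://node105.yqc.com:6379
```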