Nginx+keepalived+Tomcat: highly available Tomcat load balancing

Reposted December 4, 2013, 09:47:23
Test environment: 
CentOS5.4、pcre-8.12、nginx-upstream-jvm-route-0.1、nginx-1.0.10、apache-tomcat-7.0.23 、keepalived-1.1.17.tar.gz、jdk-7u2-linux-x64.tar.gz
Primary Nginx server: 10.29.9.200 
Backup Nginx server: 10.29.9.201 
tomcat1: 10.29.9.202 
tomcat2: 10.29.9.203
VIP: 10.29.9.188
Topology (Figure 1; diagram not reproduced here)
 
1. Install Nginx on both 10.29.9.200 and 10.29.9.201
tar zxf pcre-8.12.tar.gz 
cd pcre-8.12 
./configure 
make;make install

 
Download and apply the patch below; otherwise Nginx cannot recognize the jvmRoute set in Tomcat, and sticky sessions / session replication will not work as intended.
wget http://friendly.sinaapp.com/LinuxSoft/nginx-upstream-jvm-route-0.1.tar.gz 
tar xzf nginx-upstream-jvm-route-0.1.tar.gz 
tar xzf nginx-1.0.10.tar.gz 
cd nginx-1.0.10 
patch -p0 <../nginx_upstream_jvm_route/jvm_route.patch 
./configure --prefix=/usr/local/nginx --with-http_stub_status_module --with-pcre=/root/pcre-8.12 --add-module=../nginx_upstream_jvm_route/ 
# --with-pcre= points to the pcre source directory 
make;make install
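After the build, one way to confirm that the jvm_route module and pcre were compiled in is to print the configure arguments (a quick check, not part of the original walkthrough):

/usr/local/nginx/sbin/nginx -V
# The output lists the configure arguments and should include
# --add-module=../nginx_upstream_jvm_route/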
2. Configure Nginx
vim /usr/local/nginx/conf/nginx.conf
user www www;
worker_processes 4;
error_log /home/wwwlogs/nginx_error.log crit;
pid /usr/local/nginx/logs/nginx.pid;
#Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 51200;
events         
        {      
                use epoll;
                worker_connections 51200;
        }      
http
        {
   upstream backend {
        server 10.29.9.202:8080 srun_id=tomcat1;
        server 10.29.9.203:8080 srun_id=tomcat2;
        jvm_route $cookie_JSESSIONID|sessionid reverse;
        }
include       mime.types;
default_type application/octet-stream;
server_names_hash_bucket_size 128;
client_header_buffer_size 32k;
large_client_header_buffers 4 32k;
client_max_body_size 50m;
sendfile on;
tcp_nopush     on;
keepalive_timeout 60;
tcp_nodelay on;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 256k;
gzip on;
gzip_min_length 1k;
gzip_buffers     4 16k;
charset UTF-8;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types       text/plain application/x-javascript text/css application/xml;
gzip_vary on;
#limit_zone crawler $binary_remote_addr 10m;
 
server
   {
   listen       80;
   server_name www.8090u.com;
   index index.jsp index.htm index.html;
   root /home/wwwroot/;
location / {
     proxy_pass http://backend;
     proxy_redirect    off;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_set_header X-Real-IP $remote_addr;
     proxy_set_header Host $http_host;
     }
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
           {
        expires      30d;
     }
location ~ .*\.(js|css)?$
   {
        expires      1h;
   }
location /Nginxstatus {
       stub_status on;
       access_log   off;
   }
 log_format access '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for';
access_log /home/wwwlogs/access.log access;
        }
include vhost/*.conf;
}
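Before starting Nginx it is worth validating the configuration first (standard nginx commands; paths follow the --prefix used above):

/usr/local/nginx/sbin/nginx -t    # test the configuration syntax
/usr/local/nginx/sbin/nginx       # start nginx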
 
3. Install keepalived on both Nginx servers
tar zxvf keepalived-1.1.17.tar.gz
cd keepalived-1.1.17
./configure --prefix=/usr/local/keepalived
make && make install
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
mkdir /etc/keepalived
cd /etc/keepalived/
 
Master keepalived configuration (both nodes track /opt/nginx_pid.sh; a sketch of that script follows the backup configuration below)
vim keepalived.conf
vrrp_script chk_http_port {
                script "/opt/nginx_pid.sh"
                interval 2
                weight 2
}
vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        mcast_src_ip 10.29.9.200
        priority 150
        authentication {
 
                     auth_type PASS
                     auth_pass 1111
        }
        track_script {
                chk_http_port
        }
        virtual_ipaddress {
             10.29.9.188
        }
}

Backup keepalived configuration
vrrp_script chk_http_port {
                script "/opt/nginx_pid.sh"
                interval 2
                weight 2
}
vrrp_instance VI_1 {
        state BACKUP
        interface eth0
        virtual_router_id 51
        mcast_src_ip 10.29.9.201
        priority 100
        authentication {
                     auth_type PASS
                     auth_pass 1111
        }
        track_script {
                chk_http_port
        }
        virtual_ipaddress {
                 10.29.9.188
        }
}
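Both configurations track /opt/nginx_pid.sh, which the article does not show. A minimal sketch of such a check script (the restart behavior and paths are assumptions; remember to make it executable with chmod +x):

#!/bin/bash
# /opt/nginx_pid.sh -- keepalived health check for nginx (sketch)
# If nginx has died, try to restart it once; if it still is not running,
# stop keepalived so the VIP fails over to the backup node.
if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
    /usr/local/nginx/sbin/nginx
    sleep 3
    if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
        /etc/init.d/keepalived stop
    fi
fi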
Start keepalived and check that the virtual IP is bound. On the master keepalived node:
[root@xenvps0 ~]# /etc/init.d/keepalived start
Starting keepalived:                                          [  OK  ]
[root@xenvps0 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:36:68:a4:fc brd ff:ff:ff:ff:ff:ff
    inet 10.29.9.200/24 brd 10.29.9.255 scope global eth0
inet 10.29.9.188/32 scope global eth0
On eth0 we can see that the virtual IP 10.29.9.188 has been bound successfully.
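To confirm failover, one can stop keepalived on the master and verify that the VIP moves to the backup (a quick check using the same commands as above, not part of the original walkthrough):

# On the master (10.29.9.200):
/etc/init.d/keepalived stop
# On the backup (10.29.9.201):
ip a | grep 10.29.9.188    # the VIP should now be bound here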
 
4. Install Tomcat 
1) Install tomcat_1 
tar zxvf apache-tomcat-7.0.23.tar.gz 
mv apache-tomcat-7.0.23 /usr/local/tomcat 
2) Install tomcat_2 following the same steps as 1)
5. Install the JDK on both Tomcat servers
tar zxvf  jdk-7u2-linux-x64.tar.gz
mv jdk1.7.0_02 /usr/local/jdk1.7.0_02
cat >>/etc/profile <<'EOF'
export JAVA_HOME=/usr/local/jdk1.7.0_02
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib 
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
EOF
source /etc/profile   # make the environment variables take effect immediately
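A quick sanity check that the JDK is now on the PATH (exact output depends on the build):

java -version    # should report java version "1.7.0_02"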
6. Tomcat cluster configuration 
tomcat1 configuration: 
Edit the conf/server.xml configuration file:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1"> 
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" 
channelSendOptions="8"> 
<Manager className="org.apache.catalina.ha.session.DeltaManager" 
expireSessionsOnShutdown="false" 
notifyListenersOnReplication="true"/> 
<Channel className="org.apache.catalina.tribes.group.GroupChannel"> 
<Membership className="org.apache.catalina.tribes.membership.McastService" 
address="224.0.0.4" 
port="45564" 
frequency="500" 
dropTime="3000"/> 
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver" 
address="10.29.9.202" //tomcat1 所在服务器的IP地址
port="4000" //端口号
autoBind="100" 
selectorTimeout="5000" 
maxThreads="6"/> 
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter"> 
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender" /> 
</Sender> 
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/> 
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/> 
<Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
</Channel> 
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" 
filter=""/> 
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/> 
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer" 
tempDir="/tmp/war-temp/" 
deployDir="/tmp/war-deploy/" 
watchDir="/tmp/war-listen/" 
watchEnabled="false"/> 
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/> 
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/> 
</Cluster> 

Add the following line inside <Host>…</Host>:
<Context path="" docBase="/opt/project" reloadable="false" crossContext="true" />
 
tomcat2 configuration: 
Edit the conf/server.xml configuration file:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2"> 
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" 
channelSendOptions="8"> 
<Manager className="org.apache.catalina.ha.session.DeltaManager" 
expireSessionsOnShutdown="false" 
notifyListenersOnReplication="true"/> 
<Channel className="org.apache.catalina.tribes.group.GroupChannel"> 
<Membership className="org.apache.catalina.tribes.membership.McastService" 
address="224.0.0.4" 
port="45564" 
frequency="500" 
dropTime="3000"/> 
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver" 
address="10.29.9.203" //tomcat2所在服务器IP
port="4001" //端口号不能和tomcat1重复
autoBind="100" 
selectorTimeout="5000" 
maxThreads="6"/> 
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter"> 
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender" /> 
</Sender> 
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/> 
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/> 
<Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
</Channel> 
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" 
filter=""/> 
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer" 
tempDir="/tmp/war-temp/" 
deployDir="/tmp/war-deploy/" 
watchDir="/tmp/war-listen/" 
watchEnabled="false"/> 
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/> 
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/> 
</Cluster>
Add the following line inside <Host>…</Host>:
<Context path="" docBase="/opt/project" reloadable="false" crossContext="true" />
 
7. Session configuration
Edit the web.xml file under the web application's WEB-INF directory and add the tag 
<distributable/> 
directly before the closing </web-app> tag, as in the excerpt below.
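A minimal web.xml excerpt showing the placement (the namespace/version header is a typical Servlet 3.0 declaration and may differ in your application):

<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <!-- existing servlet, filter and mapping definitions ... -->
    <distributable/>
</web-app>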
Enable multicast on the network interface: 
route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
8. Create JSP test pages 
On tomcat1 (10.29.9.202):
mkdir /opt/project
cd /opt/project 
vi index.jsp 
<html> 
<title> 
tomcat1 jsp 
</title> 
<% 
String showMessage="Hello,This is 10.29.9.202 server"; 
out.print(showMessage); 
%> 
</html> 
---------------------------- 
On tomcat2 (10.29.9.203):
mkdir /opt/project
cd /opt/project 
vi index.jsp 
<html> 
<title> 
tomcat2 jsp 
</title> 
<% 
String showMessage=" Hello,This is 10.29.9.203 server"; 
out.print(showMessage); 
%> 
</html>
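After starting both Tomcats (bin/startup.sh) and Nginx, the setup can be exercised through the VIP; a sketch of a quick check (the Host header matches the server_name configured above):

curl -i -H "Host: www.8090u.com" http://10.29.9.188/
# The response body shows which Tomcat answered; the Set-Cookie header should
# carry a JSESSIONID ending in .tomcat1 or .tomcat2 (the jvmRoute).
# Shutting down the Tomcat that answered and repeating the request should
# switch to the other node without losing the session.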
