Nginx+keepalived+tomcat: high-availability load balancing for Tomcat

Reposted 2013-12-04 09:47:23
Test environment:
CentOS 5.4, pcre-8.12, nginx-upstream-jvm-route-0.1, nginx-1.0.10, apache-tomcat-7.0.23, keepalived-1.1.17.tar.gz, jdk-7u2-linux-x64.tar.gz
Primary nginx server: 10.29.9.200
Secondary nginx server: 10.29.9.201
tomcat1: 10.29.9.202
tomcat2: 10.29.9.203
VIP: 10.29.9.188
The topology is as follows (diagram omitted from the original post): the VIP 10.29.9.188 fronts the two nginx servers, which proxy to the two Tomcat backends.
1. Install nginx on both 10.29.9.200 and 10.29.9.201
tar zxf pcre-8.12.tar.gz
cd pcre-8.12
./configure
make && make install

 
Download and install the patch below; without it nginx cannot recognize the jvmRoute that Tomcat appends to session IDs, so requests are not kept on the node that owns the session and the session replication set up later has no effect.
wget http://friendly.sinaapp.com/LinuxSoft/nginx-upstream-jvm-route-0.1.tar.gz 
tar xzf nginx-upstream-jvm-route-0.1.tar.gz 
tar xzf nginx-1.0.10.tar.gz 
cd nginx-1.0.10 
patch -p0 < ../nginx_upstream_jvm_route/jvm_route.patch
./configure --prefix=/usr/local/nginx --with-http_stub_status_module --with-pcre=/root/pcre-8.12 --add-module=../nginx_upstream_jvm_route/
# --with-pcre= points at the pcre source directory
make && make install
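After installation it is worth confirming that the jvm_route module actually made it into the binary; if the patch or --add-module step was skipped, the srun_id and jvm_route directives used below will abort nginx at startup. On the build host:

```shell
# Show the compiled-in configure arguments and look for the module path
/usr/local/nginx/sbin/nginx -V 2>&1 | grep 'nginx_upstream_jvm_route'
```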
2. Configure nginx
vim /usr/local/nginx/conf/nginx.conf
user www www;
worker_processes 4;
error_log /home/wwwlogs/nginx_error.log crit;
pid /usr/local/nginx/logs/nginx.pid;
#Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 51200;
events         
        {      
                use epoll;
                worker_connections 51200;
        }      
http
        {
   upstream backend {
        server 10.29.9.202:8080 srun_id=tomcat1;
        server 10.29.9.203:8080 srun_id=tomcat2;
        jvm_route $cookie_JSESSIONID|sessionid reverse;
        }
include       mime.types;
default_type application/octet-stream;
server_names_hash_bucket_size 128;
client_header_buffer_size 32k;
large_client_header_buffers 4 32k;
client_max_body_size 50m;
sendfile on;
tcp_nopush     on;
keepalive_timeout 60;
tcp_nodelay on;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 256k;
gzip on;
gzip_min_length 1k;
gzip_buffers     4 16k;
charset UTF-8;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types       text/plain application/x-javascript text/css application/xml;
gzip_vary on;
#limit_zone crawler $binary_remote_addr 10m;
 
server
   {
   listen       80;
   server_name www.8090u.com;
   index index.jsp index.htm index.html;
   root /home/wwwroot/;
location / {
     proxy_pass http://backend;
     proxy_redirect    off;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_set_header X-Real-IP $remote_addr;
     proxy_set_header Host $http_host;
     }
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
           {
        expires      30d;
     }
location ~ .*\.(js|css)?$
   {
        expires      1h;
   }
location /Nginxstatus {
       stub_status on;
       access_log   off;
   }
 log_format access '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for';
access_log /home/wwwlogs/access.log access;
        }
include vhost/*.conf;
}
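Before relying on the upstream block, it helps to see what jvm_route actually matches: Tomcat appends ".&lt;jvmRoute&gt;" to every JSESSIONID, and the module routes each request to the upstream server whose srun_id equals that suffix. A hypothetical shell helper (route_for is not part of nginx, purely an illustration of the matching rule):

```shell
# Hypothetical helper mirroring jvm_route's rule: the backend is chosen
# by the suffix of JSESSIONID after the last '.', which Tomcat appends
# from its jvmRoute attribute.
route_for() {
    cookie="$1"
    case "$cookie" in
        *.*) echo "${cookie##*.}" ;;  # suffix after the last dot = srun_id
        *)   echo "round-robin" ;;    # no route suffix yet: normal balancing
    esac
}

route_for "0AF3B9D1.tomcat1"   # prints tomcat1
route_for "0AF3B9D1"           # prints round-robin
```

You can also validate the finished file with /usr/local/nginx/sbin/nginx -t before starting nginx.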
 
3. Install keepalived on both nginx servers
tar zxvf keepalived-1.1.17.tar.gz
cd keepalived-1.1.17
./configure --prefix=/usr/local/keepalived
make && make install
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
mkdir /etc/keepalived
cd /etc/keepalived/
 
Master keepalived configuration
vim keepalived.conf
vrrp_script chk_http_port {
                script "/opt/nginx_pid.sh"
                interval 2
                weight 2
}
vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        mcast_src_ip 10.29.9.200
        priority 150
        authentication {
 
                     auth_type PASS
                     auth_pass 1111
        }
        track_script {
                chk_http_port
        }
        virtual_ipaddress {
             10.29.9.188
        }
}

Backup keepalived configuration
vrrp_script chk_http_port {
                script "/opt/nginx_pid.sh"
                interval 2
                weight 2
}
vrrp_instance VI_1 {
        state BACKUP
        interface eth0
        virtual_router_id 51
        mcast_src_ip 10.29.9.201
        priority 100
        authentication {
                     auth_type PASS
                     auth_pass 1111
        }
        track_script {
                chk_http_port
        }
        virtual_ipaddress {
                 10.29.9.188
        }
}
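Both configurations point vrrp_script at /opt/nginx_pid.sh, which the original post never shows. A minimal sketch of such a check script (an assumption, not the author's original; adjust the nginx path to your install, and chmod +x it on both servers):

```shell
#!/bin/bash
# Health check run by keepalived every 2 seconds (see "interval 2").
# If nginx has died, try to restart it once; if it is still down,
# stop keepalived so the VIP fails over to the other node.
if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
    /usr/local/nginx/sbin/nginx
    sleep 3
    if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
        /etc/init.d/keepalived stop
    fi
fi
```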
Start keepalived and check that the virtual IP is bound. On the master:
[root@xenvps0 ~]# /etc/init.d/keepalived start
Starting keepalived:                                      [  OK  ]
[root@xenvps0 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:36:68:a4:fc brd ff:ff:ff:ff:ff:ff
    inet 10.29.9.200/24 brd 10.29.9.255 scope global eth0
inet 10.29.9.188/32 scope global eth0
The virtual IP 10.29.9.188 has been bound successfully on eth0.
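Failover can be verified end to end at this point (illustrative commands; run each one on the host named in the comment):

```shell
# On the master (10.29.9.200): simulate a failure
/etc/init.d/keepalived stop
# On the backup (10.29.9.201): the VIP should appear within a few seconds
ip a | grep 10.29.9.188
# Restart the master; with priority 150 > 100 it takes the VIP back
/etc/init.d/keepalived start
```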
 
4. Install Tomcat
1) Install tomcat_1
tar zxvf apache-tomcat-7.0.23.tar.gz
mv apache-tomcat-7.0.23 /usr/local/tomcat
2) Install tomcat_2 following the same steps as 1)
5. Install the JDK on both Tomcat servers
tar zxvf  jdk-7u2-linux-x64.tar.gz
mv jdk1.7.0_02 /usr/local/jdk1.7.0_02
cat >>/etc/profile <<'EOF'
export JAVA_HOME=/usr/local/jdk1.7.0_02
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
EOF
source /etc/profile   # make the new environment variables take effect immediately
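One pitfall with cat &lt;&lt;EOF here: if the delimiter is unquoted, the shell expands $JAVA_HOME and $CLASSPATH while writing, so the file receives their current (likely empty) values instead of the literal text. Quoting the delimiter ('EOF') keeps the lines verbatim; a small self-contained demo (using a temp file rather than /etc/profile):

```shell
# Demo: a quoted heredoc delimiter ('EOF') suppresses expansion, so the
# literal "$JAVA_HOME" text reaches the file (temp file used for safety).
snippet=$(mktemp)
cat >"$snippet" <<'EOF'
export JAVA_HOME=/usr/local/jdk1.7.0_02
export PATH=$JAVA_HOME/bin:$PATH
EOF
grep -c 'JAVA_HOME' "$snippet"   # prints 2
```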
6. Tomcat cluster configuration
tomcat1 configuration:
Edit the conf/server.xml configuration file (the Receiver address below is tomcat1's own IP, and its port must be unique per node):
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1"> 
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" 
channelSendOptions="8"> 
<Manager className="org.apache.catalina.ha.session.DeltaManager" 
expireSessionsOnShutdown="false" 
notifyListenersOnReplication="true"/> 
<Channel className="org.apache.catalina.tribes.group.GroupChannel"> 
<Membership className="org.apache.catalina.tribes.membership.McastService" 
address="224.0.0.4" 
port="45564" 
frequency="500" 
dropTime="3000"/> 
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver" 
address="10.29.9.202" 
port="4000" 
autoBind="100" 
selectorTimeout="5000" 
maxThreads="6"/> 
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter"> 
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender" /> 
</Sender> 
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/> 
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/> 
<Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
</Channel> 
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" 
filter=""/> 
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/> 
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer" 
tempDir="/tmp/war-temp/" 
deployDir="/tmp/war-deploy/" 
watchDir="/tmp/war-listen/" 
watchEnabled="false"/> 
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/> 
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/> 
</Cluster> 

Add the following line inside <Host>…</Host>:
<Context path="" docBase="/opt/project" reloadable="false" crossContext="true" />
 
tomcat2 configuration:
Edit the conf/server.xml configuration file (the Receiver address is tomcat2's own IP; its port must differ from tomcat1's):
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2"> 
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" 
channelSendOptions="8"> 
<Manager className="org.apache.catalina.ha.session.DeltaManager" 
expireSessionsOnShutdown="false" 
notifyListenersOnReplication="true"/> 
<Channel className="org.apache.catalina.tribes.group.GroupChannel"> 
<Membership className="org.apache.catalina.tribes.membership.McastService" 
address="224.0.0.4" 
port="45564" 
frequency="500" 
dropTime="3000"/> 
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver" 
address="10.29.9.203" 
port="4001" 
autoBind="100" 
selectorTimeout="5000" 
maxThreads="6"/> 
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter"> 
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender" /> 
</Sender> 
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/> 
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/> 
<Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
</Channel> 
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" 
filter=""/> 
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer" 
tempDir="/tmp/war-temp/" 
deployDir="/tmp/war-deploy/" 
watchDir="/tmp/war-listen/" 
watchEnabled="false"/> 
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/> 
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/> 
</Cluster>
Add the following line inside <Host>…</Host>:
<Context path="" docBase="/opt/project" reloadable="false" crossContext="true" />
 
7. Session configuration
Add the <distributable/> tag to the web application's WEB-INF/web.xml, immediately before </web-app>.
Enable multicast on the network interface:
route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
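For reference, a minimal web.xml carrying the tag might look like this (the skeleton is illustrative; keep your application's existing servlet and filter declarations):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
    <!-- existing servlet/filter declarations go here -->
    <distributable/>
</web-app>
```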
8. Create JSP test pages
On tomcat1:
mkdir /opt/project
cd /opt/project
vi index.jsp
<html> 
<title> 
tomcat1 jsp 
</title> 
<% 
String showMessage="Hello, this is the 10.29.9.202 server"; 
out.print(showMessage); 
%> 
</html> 
----------------------------
On tomcat2:
mkdir /opt/project
cd /opt/project
vi index.jsp
<html> 
<title> 
tomcat2 jsp 
</title> 
<% 
String showMessage="Hello, this is the 10.29.9.203 server"; 
out.print(showMessage); 
%> 
</html>
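Finally, sticky sessions and replication can be checked from any client against the VIP (illustrative commands; the cookie jar keeps the JSESSIONID between requests):

```shell
# The first request sets JSESSIONID; repeats should hit the same Tomcat
curl -c /tmp/cj -b /tmp/cj http://10.29.9.188/
curl -c /tmp/cj -b /tmp/cj http://10.29.9.188/
# Now stop the Tomcat that answered and repeat: DeltaManager replication
# should keep the session alive on the surviving node
```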
