Tomcat LB cluster

Deployment environment: two Tomcat servers, with one Nginx server as the web front end.

IP address                          Role
10.5.100.183 (CentOS 7 -> node3)    tomcat1
10.5.100.146 (CentOS 7 -> node4)    tomcat2
10.5.100.119 (CentOS 6 -> node2)    web front end

I: Build the two Tomcat application servers.

[root@node3 local]# vi /etc/hosts   edit the hosts file so the three hosts can resolve one another.
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.5.100.183 node3.magedu.com node3
10.5.100.146 node4.magedu.com node4
10.5.100.119 node2.magedu.com node2

[root@node3 ~]# cd /etc/yum.repos.d/
[root@node3 yum.repos.d]# ls
CentOS-Base.repo  docker-ce.repo  epel.repo  kubernetes.repo  zabbix.repo
[root@node3 yum.repos.d]# cd
[root@node3 ~]# 
[root@node3 ~]# ls
anaconda-ks.cfg  apache-tomcat-9.0.36.tar.gz  harbor  key
[root@node3 ~]# yum list all | grep jdk 
[root@node3 ~]# yum install java-11-openjdk.x86_64 -y   install the JDK.
 ...output omitted
[root@node3 ~]# tar -xf apache-tomcat-9.0.36.tar.gz -C /usr/local  extract the binary Tomcat release (downloaded from the official site) into the target directory.
[root@node3 ~]# cd /usr/local/
[root@node3 local]# ls
apache-tomcat-9.0.36  bin  etc  games  harbor  include  lib  lib64  libexec  sbin  share  src  tomcat
[root@node3 local]# ln -sv apache-tomcat-9.0.36/ tomcat
‘tomcat’ -> ‘apache-tomcat-9.0.36/’
[root@node3 ~]# mkdir -pv /data/webapps/ROOT/{lib,classes,META-INF,WEB-INF}   create an application instance.
mkdir: created directory ‘/data/webapps/ROOT/lib’
mkdir: created directory ‘/data/webapps/ROOT/classes’
mkdir: created directory ‘/data/webapps/ROOT/META-INF’
mkdir: created directory ‘/data/webapps/ROOT/WEB-INF’
[root@node3 ~]# cd /data/webapps/ROOT/
[root@node3 ROOT]# ls
classes  lib  META-INF  WEB-INF
[root@node3 ROOT]# vi index.jsp
<%@ page language="java" %>
<%@ page import="java.util.*" %>
<html>
        <head>
                <title>JSP Test Page</title>
        </head>
        <body>
                <% out.println("node3.magedu.com."); %>
        </body>
</html>
[root@node3 ~]# cd /usr/local/tomcat/conf/
[root@node3 conf]# ls
catalina.policy      context.xml           jaspic-providers.xsd  server.xml       tomcat-users.xml  web.xml
catalina.properties  jaspic-providers.xml  logging.properties    server.xml.back  tomcat-users.xsd
[root@node3 conf]# vi server.xml   edit the main configuration file and define a virtual host to serve the dynamic web requests.
 <Host name="node3.magedu.com" appBase="/data/webapps" autoDeploy="true">
        <Context path="" docBase="ROOT" reloadable="true"/>
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="/data/logs"
               prefix="node3_access_log" suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>
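
For reference, the Host element above sits inside the existing Engine element of server.xml. A minimal sketch of the surrounding structure (defaultHost is assumed to have been switched to the new virtual host, which section IV later confirms):
<Engine name="Catalina" defaultHost="node3.magedu.com">
      <Host name="node3.magedu.com" appBase="/data/webapps" autoDeploy="true">
        ...
      </Host>
</Engine>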

[root@node3 conf]# catalina.sh configtest   validate the configuration file.
Using CATALINA_BASE:   /usr/local/tomcat
Using CATALINA_HOME:   /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
Jul 13, 2020 3:41:48 AM org.apache.tomcat.util.digester.SetPropertiesRule begin
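
The start step itself is not shown in the transcript; assuming the layout created above, Tomcat is started with the bundled script before checking the listener:
[root@node3 ~]# /usr/local/tomcat/bin/catalina.sh start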
[root@node3 ~]# ss -tnlp | grep "8080"
LISTEN     0      100         :::8080                    :::*                   users:(("java",pid=25093,fd=55))

II: Build the node4 Tomcat application server. The steps are the same as above.

[root@node4 local]# vi /etc/hosts   edit the hosts file so the three hosts can resolve one another.
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.5.100.183 node3.magedu.com node3
10.5.100.146 node4.magedu.com node4
10.5.100.119 node2.magedu.com node2

[root@node4 ~]# cd /etc/yum.repos.d/
[root@node4 yum.repos.d]# ls
CentOS-Base.repo  docker-ce.repo  epel.repo  kubernetes.repo  zabbix.repo
[root@node4 yum.repos.d]# cd
[root@node4 ~]# 
[root@node4 ~]# ls
anaconda-ks.cfg  apache-tomcat-9.0.36.tar.gz  harbor  key
[root@node4 ~]# yum list all | grep jdk 
[root@node4 ~]# yum install java-11-openjdk.x86_64 -y   install the JDK.
 ...output omitted
[root@node4 ~]# tar -xf apache-tomcat-9.0.36.tar.gz -C /usr/local  extract the binary Tomcat release (downloaded from the official site) into the target directory.
[root@node4 ~]# cd /usr/local/
[root@node4 local]# ls
apache-tomcat-9.0.36  bin  etc  games  harbor  include  lib  lib64  libexec  sbin  share  src  tomcat
[root@node4 local]# ln -sv apache-tomcat-9.0.36/ tomcat
‘tomcat’ -> ‘apache-tomcat-9.0.36/’
[root@node4 ~]# mkdir -pv /data/webapps/ROOT/{lib,classes,META-INF,WEB-INF}   create an application instance.
mkdir: created directory ‘/data/webapps/ROOT/lib’
mkdir: created directory ‘/data/webapps/ROOT/classes’
mkdir: created directory ‘/data/webapps/ROOT/META-INF’
mkdir: created directory ‘/data/webapps/ROOT/WEB-INF’
[root@node4 ~]# cd /data/webapps/ROOT/
[root@node4 ROOT]# ls
classes  lib  META-INF  WEB-INF
[root@node4 ROOT]# vi index.jsp
<%@ page language="java" %>
<%@ page import="java.util.*" %>
<html>
        <head>
                <title>JSP Test Page</title>
        </head>
        <body>
                <% out.println("node4.magedu.com."); %>
        </body>
</html>
[root@node4 ~]# cd /usr/local/tomcat/conf/
[root@node4 conf]# ls
catalina.policy      context.xml           jaspic-providers.xsd  server.xml       tomcat-users.xml  web.xml
catalina.properties  jaspic-providers.xml  logging.properties    server.xml.back  tomcat-users.xsd
[root@node4 conf]# vi server.xml   edit the main configuration file and define a virtual host to serve the dynamic web requests.
 <Host name="node4.magedu.com" appBase="/data/webapps" autoDeploy="true">
        <Context path="" docBase="ROOT" reloadable="true"/>
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="/data/logs"
               prefix="node4_access_log" suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>

[root@node4 conf]# catalina.sh configtest   validate the configuration file.
Using CATALINA_BASE:   /usr/local/tomcat
Using CATALINA_HOME:   /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
Jul 13, 2020 3:41:48 AM org.apache.tomcat.util.digester.SetPropertiesRule begin
[root@node4 ~]# ss -tnlp | grep "8080"
LISTEN     0      100         :::8080                    :::*                   users:(("java",pid=25093,fd=55))

III: Build the nginx web server and implement nginx load balancing for Tomcat.

[root@node2 yum.repos.d]# yum install nginx -y
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package nginx.x86_64 0:1.18.0-1.el6.ngx will be installed
--> Finished Dependency Resolution
[root@node2 nginx]# vi nginx.conf   edit the nginx configuration file.
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;

    upstream tcserver {      define the backend servers for load balancing
        server node3.magedu.com:8080;
        server node4.magedu.com:8080;
}
}

server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
    location ~* \.(jsp|do)$ {      match rule: hand .jsp and .do requests to the backend Tomcat servers for processing.
        proxy_pass http://tcserver;
    }
}
[root@node2 ~]# curl 10.5.100.119/index.jsp   send a client request; the default load-balancing algorithm is round-robin, so the responses alternate between the two nodes.


<html>
        <head>
                <title>JSP Test Page</title>
        </head>
        <body>
                node4.magedu.com.

        </body>
</html>
[root@node2 ~]# curl 10.5.100.119/index.jsp


<html>
        <head>
                <title>JSP Test Page</title>
        </head>
        <body>
                node3.magedu.com.

        </body>
</html>
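
If session stickiness were needed at the nginx layer (it is not configured here), the upstream could use ip_hash so that requests from the same client IP always land on the same Tomcat. A sketch of the modified upstream:
    upstream tcserver {
        ip_hash;
        server node3.magedu.com:8080;
        server node4.magedu.com:8080;
    }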

IV: Build LAMT and implement httpd load balancing for Tomcat.

Load balancing based on mod_proxy
[root@node2 ~]# rpm -qa httpd  check the installed httpd.
httpd-2.2.15-29.el6.centos.x86_64
[root@node2 conf.d]# vi ../conf/httpd.conf  comment out DocumentRoot.
#DocumentRoot "/var/www/html"
[root@node2 conf.d]# pwd
/etc/httpd/conf.d
[root@node2 conf.d]# vi vhost.conf  create a virtual host instance.
<Proxy balancer://lbcluster1>    define the backend hosts being proxied; lbcluster1 is the name of the balancer (cluster).
   BalancerMember http://10.5.100.183:8080 loadfactor=10 route=TomcatA  route: the backend Tomcat identifier (matches jvmRoute); loadfactor: the weight.
   BalancerMember http://10.5.100.146:8080 loadfactor=10 route=TomcatB
</Proxy>

<VirtualHost *:80>
    ProxyVia on
    ProxyRequests Off   disable forward proxying
    ProxyPreserveHost On
    <Proxy *>
      Allow from all
    </Proxy>
    ProxyPass / balancer://lbcluster1/
    ProxyPassReverse / balancer://lbcluster1/
    <Location />
      Allow from all
    </Location>
</VirtualHost>
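
Before restarting httpd, it is worth confirming that the proxy modules this configuration relies on are loaded (on CentOS 6 the stock /etc/httpd/conf/httpd.conf normally loads them); a quick check:
[root@node2 conf.d]# httpd -M | grep -i proxy
The list should include proxy_module, proxy_http_module and proxy_balancer_module.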

Notes on the Apache directives used above:
ProxyPreserveHost {on|off}: when enabled, the proxy passes the Host header from the client request to the backend server instead of the hostname given in ProxyPass. Enable it if name-based virtual hosts must work behind the reverse proxy; otherwise it can stay off.

ProxyVia {on|off|full|block}: controls handling of the Via HTTP header, mainly to track how requests flow through chained proxies. The default is off (no Via processing); on adds a Via header to each request and response; full additionally appends this Apache server's version to each Via line; block removes Via headers from proxied requests.

ProxyRequests {on|off}: whether to enable Apache's forward-proxy function. Proxying HTTP requires mod_proxy_http to be loaded. When ProxyPass is configured for reverse proxying, this must be set to off.

ProxyPass [path] !|url [key=value key=value ...]: maps a URL on a backend server to a virtual path on this server, making it the path through which the service is provided.
path is a virtual path on the local server and url is the corresponding URL on the backend server; ProxyRequests must be set to off when this directive is used. Note that if path ends with "/", the corresponding url must also end with "/".
In addition, since httpd 2.1 mod_proxy supports connection pooling to the backend; the pool size and other settings are defined with key=value pairs on ProxyPass.
Commonly used keys:
min: minimum size of the connection pool
max: maximum size of the connection pool
loadfactor: in a load-balancing configuration, the weight of the corresponding backend server, in the range 1-100
retry: after receiving an error response from a backend server, how many seconds Apache waits before retrying it
ProxyPassReverse: makes Apache rewrite the URLs in the Location, Content-Location, and URI headers of HTTP redirect responses; in a reverse-proxy setup this directive is required so that redirects do not bypass the proxy.

If the ProxyPass target starts with balancer://, i.e. it refers to a load-balancing cluster, some additional parameters are accepted:
(1) lbmethod: the scheduling method Apache uses for load balancing. The default is byrequests, weighted scheduling based on request counts; bytraffic performs weighted scheduling based on traffic; bybusyness schedules based on each backend's current load.

(2) stickysession: the name of the balancer's sticky-session identifier; depending on the web application this is typically JSESSIONID or PHPSESSIONID.
Besides being set on the balancer:// URL or on ProxyPass, these parameters can also be set directly with the ProxySet directive, for example:
<Proxy balancer://lbcluster1>
   BalancerMember http://10.5.100.183:8080 loadfactor=10 route=TomcatA
   BalancerMember http://10.5.100.146:8080 loadfactor=10 route=TomcatB
   ProxySet lbmethod=bytraffic
</Proxy>
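
mod_proxy_balancer also provides a built-in status page. A sketch of exposing it on this httpd 2.2 setup (the /balancer-manager path and the Allow rule below are illustrative; restrict access to trusted addresses):
<Location /balancer-manager>
    SetHandler balancer-manager
    Order deny,allow
    Deny from all
    Allow from 10.5.100.0/24
</Location>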



[root@node2 conf.d]# httpd -t   check the configuration syntax.
Syntax OK 
[root@node2 conf.d]# service httpd start

Configure the two backend Tomcat hosts.
[root@node3 ~]# vi /usr/local/tomcat/conf/server.xml
<Engine name="Catalina" defaultHost="node3.magedu.com" jvmRoute="TomcatA">  add the jvmRoute identifier.
The virtual host inside the Engine stays unchanged.
[root@node3 ~]# vi /data/webapps/ROOT/index.jsp   modify the test page so the two nodes are easy to tell apart.
<%@ page language="java" %> 
<html>
   <head><title>TomcatA</title></head>
   <body>
     <h1><font color="blue">TomcatA.magedu.com</font></h1>
     <table align="centre" border="1">
       <tr>
        <td>session ID</td>
     <% session.setAttribute("magedu.com","magedu.com"); %>
        <td><%= session.getId() %></td>
       </tr>
       <tr>
        <td>created on</td>
        <td><%= session.getCreationTime() %></td>
       </tr>
     </table>
   </body>
</html>

On node4:
[root@node4 ~]# vi /usr/local/tomcat/conf/server.xml
<Engine name="Catalina" defaultHost="node4.magedu.com" jvmRoute="TomcatB">  add the jvmRoute identifier.
The virtual host inside the Engine stays unchanged.
[root@node4 ~]# vi /data/webapps/ROOT/index.jsp   modify the test page so the two nodes are easy to tell apart.
<%@ page language="java" %> 
<html>
   <head><title>TomcatB</title></head>
   <body>
     <h1><font color="blue">TomcatB.magedu.com</font></h1>
     <table align="centre" border="1">
       <tr>
        <td>session ID</td>
     <% session.setAttribute("magedu.com","magedu.com"); %>
        <td><%= session.getId() %></td>
       </tr>
       <tr>
        <td>created on</td>
        <td><%= session.getCreationTime() %></td>
       </tr>
     </table>
   </body>
</html>

Verify from the command line:
[root@node2 conf.d]# curl http://10.5.100.119/index.jsp

<html>
   <head><title>TomcatB</title></head>
   <body>
     <h1><font color="red">TomcatB.magedu.com</font></h1>
     <table align="centre" border="1">
       <tr>
        <td>session ID</td>
     
        <td>34D523D00C6CD8EEA921600011636DED</td>
       </tr>
       <tr>
        <td>created on</td>
        <td>1594719367462</td>
       </tr>
     </table>
   </body>
</html>
[root@node2 conf.d]# curl http://10.5.100.119/index.jsp

<html>
   <head><title>TomcatA</title></head>
   <body>
     <h1><font color="blue">TomcatA.magedu.com</font></h1>
     <table align="centre" border="1">
       <tr>
        <td>session ID</td>
     
        <td>204CA2B752B9B915E94815F1765CA5C0.TomcatA</td>
       </tr>
       <tr>
        <td>created on</td>
        <td>1594690661074</td>
       </tr>
     </table>
   </body>
</html>

[root@node2 conf.d]# 

V: Errors encountered while building LAMT

(1) [root@node2 conf.d]# cat /var/log/httpd/error_log  when the error log shows the entries below, the problem is an authentication-related configuration error.
[Tue Jul 14 07:02:45 2020] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0
[Tue Jul 14 07:02:45 2020] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Jul 14 07:02:45 2020] [notice] Digest: generating secret for digest authentication ...
[Tue Jul 14 07:02:45 2020] [notice] Digest: done
[Tue Jul 14 07:02:45 2020] [notice] Apache/2.2.15 (Unix) DAV/2 configured -- resuming normal operations
[Tue Jul 14 07:03:07 2020] [crit] [client 10.5.100.48] configuration error:  couldn't perform authentication. AuthType not set!: /
[Tue Jul 14 07:04:25 2020] [crit] [client 10.5.100.48] configuration error:  couldn't perform authentication. AuthType not set!: /index.jsp
[Tue Jul 14 07:04:29 2020] [crit] [client 10.5.100.48] configuration error:  couldn't perform authentication. AuthType not set!: /index.jsp

Cause: the virtual host definition used the access-control directive Require all granted; it simply needs to be changed to Allow from all:
<VirtualHost *:80>
    ProxyVia on
    ProxyRequests Off
    ProxyPreserveHost On
    <Proxy *>
      Allow from all
    </Proxy>
    ProxyPass / balancer://lbcluster1/
    ProxyPassReverse / balancer://lbcluster1/
    <Location />
      Allow from all
    </Location>
</VirtualHost>

[root@node2 conf.d]# httpd -v  check the httpd version with -v; versions below 2.4 must use Allow from all and cannot use Require all granted.
Server version: Apache/2.2.15 (Unix)
Server built:   Aug 13 2013 17:29:28

(2)
[root@node2 conf.d]# cat /var/log/httpd/error_log
[Tue Jul 14 07:37:04 2020] [error] (13)Permission denied: proxy: HTTP: attempt to connect to 10.5.100.183:8080 (10.5.100.183) failed
This means httpd was denied permission to connect to the backend, so the connection failed.

Fix:
[root@node2 conf.d]# /usr/sbin/setsebool httpd_can_network_connect 1   run this command (the denial comes from SELinux).
What it does: it allows httpd to make network connections to the proxied backends. This is a runtime setting only and is lost after a reboot.
setsebool -P httpd_can_network_connect 1   with -P the setting is written to the policy and survives reboots.
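The current value can be read back to confirm it took effect:
[root@node2 conf.d]# getsebool httpd_can_network_connect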

VI: Reverse proxying with mod_jk. Note: mod_jk must be compiled from source; download the source package from http://tomcat.apache.org/download-connectors.cgi.

[root@node2 ~]# cd tomcat.tar/
[root@node2 tomcat.tar]# ls
apache-tomcat-9.0.36.tar.gz  tomcat-connectors-1.2.48-src.tar.gz
[root@node2 tomcat.tar]# tar -xf tomcat-connectors-1.2.48-src.tar.gz
[root@node2 tomcat.tar]# ls
apache-tomcat-9.0.36.tar.gz  tomcat-connectors-1.2.48-src  tomcat-connectors-1.2.48-src.tar.gz
[root@node2 ~]# yum install httpd-devel gcc -y  install the httpd development package and the gcc build environment.
[root@node2 ~]# yum grouplist  check whether the required development tool group (Development tools) is installed.
[root@node2 tomcat.tar]# cd tomcat-connectors-1.2.48-src
[root@node2 tomcat-connectors-1.2.48-src]# ls
conf  docs  HOWTO-RELEASE.txt  jkstatus  LICENSE  native  NOTICE  README.txt  support  tools  xdocs
[root@node2 tomcat-connectors-1.2.48-src]# cd native/   the build directory.
[root@node2 native]# which apxs  find the path to the apxs tool.
/usr/sbin/apxs
[root@node2 native]# 
[root@node2 native]# ./configure --with-apxs=/usr/sbin/apxs  run configure, pointing it at apxs so the module is built against the installed httpd.
[root@node2 native]# make && make install
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
   - add LIBDIR to the `LD_LIBRARY_PATH' environment variable
     during execution
   - add LIBDIR to the `LD_RUN_PATH' environment variable
     during linking
   - use the `-Wl,-rpath -Wl,LIBDIR' linker flag
   - have your system administrator add LIBDIR to `/etc/ld.so.conf'

See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
chmod 755 /usr/lib64/httpd/modules/mod_jk.so

Please be sure to arrange /etc/httpd/conf/httpd.conf...

make[1]: Leaving directory `/root/tomcat.tar/tomcat-connectors-1.2.48-src/native/apache-2.0'
make[1]: Entering directory `/root/tomcat.tar/tomcat-connectors-1.2.48-src/native'
make[2]: Entering directory `/root/tomcat.tar/tomcat-connectors-1.2.48-src/native'
make[2]: Nothing to be done for `install-exec-am'.
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/root/tomcat.tar/tomcat-connectors-1.2.48-src/native'
make[1]: Leaving directory `/root/tomcat.tar/tomcat-connectors-1.2.48-src/native'
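
A quick check that the module landed where httpd expects it (the path matches the chmod line in the install output above):
[root@node2 native]# ls -l /usr/lib64/httpd/modules/mod_jk.so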

For Apache to use the mod_jk connector, the module has to be loaded at startup. To keep the mod_jk-related directives manageable, they are kept in a dedicated configuration file, /etc/httpd/conf.d/mod_jk.conf, with the following content:
[root@node2 conf.d]# vim mod_jk.conf 
LoadModule jk_module modules/mod_jk.so   load the module.
JkWorkersFile /etc/httpd/conf.d/workers.properties  point to the workers file that defines the backend Tomcat worker(s).
JkLogFile logs/mod_jk.log
JkLogLevel debug
JkMount /* TomcatA   send every URL under / to TomcatA; here TomcatA is a worker name.
JkMount /status/ stat1

[root@node2 conf.d]# vim workers.properties  define the worker; only a single backend worker is written here, so there is no load balancing yet.
worker.list=TomcatA,stat1     the worker name must match the jvmRoute value configured in Tomcat.
worker.TomcatA.port=8009
worker.TomcatA.host=10.5.100.183
worker.TomcatA.type=ajp13
worker.TomcatA.lbfactor=1
worker.stat1.type = status

At this point the configuration that lets mod_jk talk to the backend worker named TomcatA is complete; restart httpd for it to take effect.

The setup above dispatches everything to one fixed Tomcat instance. The following shows how to do
load balancing with mod_jk, using the AJP connector.

1. To keep users from reaching the backend Tomcat instances directly and defeating the load balancing, it is advisable to disable the HTTP/1.1 connector on each Tomcat instance.
2. Add the jvmRoute attribute to each Tomcat instance's Engine to give it a unique identifier,
for example: <Engine name="Catalina" defaultHost="node3.magedu.com" jvmRoute="TomcatA">
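
One Tomcat-side detail worth checking: in Tomcat 9.0.36 the AJP connector is commented out in server.xml by default and, since 9.0.31, it binds only to localhost with secretRequired="true". For mod_jk to reach port 8009 on each node, the connector has to be enabled along these lines (a sketch; relaxing secretRequired is only reasonable on a trusted internal network, otherwise configure a shared secret here and as worker.TomcatA.secret in workers.properties):
<Connector protocol="AJP/1.3" address="0.0.0.0" port="8009" secretRequired="false" redirectPort="8443" />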

Edit the Apache configuration, /etc/httpd/conf.d/mod_jk.conf:
[root@node2 conf.d]# vim mod_jk.conf
LoadModule jk_module modules/mod_jk.so   load the module.
JkWorkersFile /etc/httpd/conf.d/workers.properties  point to the workers file that defines the backend Tomcat workers.
JkLogFile logs/mod_jk.log
JkLogLevel debug
JkMount /* lbcluster1   send every URL under / to the cluster lbcluster1.
JkMount /status/ stat1

[root@node2 conf.d]# vim workers.properties  define the backend workers and the load-balancer worker.
worker.list=lbcluster1,stat1
worker.TomcatB.port=8009
worker.TomcatB.host=10.5.100.146
worker.TomcatB.type=ajp13
worker.TomcatB.lbfactor=5        load-balancing weight
worker.stat1.type = status
worker.TomcatA.port=8009
worker.TomcatA.host=10.5.100.183
worker.TomcatA.type=ajp13
worker.TomcatA.lbfactor=5
worker.lbcluster1.type = lb      lb marks this worker as the load balancer for the cluster
worker.lbcluster1.sticky_session = 1
worker.lbcluster1.balance_workers = TomcatA,TomcatB   the workers that belong to the cluster
[root@node2 ~]# httpd -t
Syntax OK
[root@node2 ~]# service httpd start

Open a browser and go directly to 10.5.100.119.
The status page is available at 10.5.100.119/status/.
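
A command-line spot check can complement the browser test (a sketch; because sticky_session = 1 keys on the jvmRoute suffix of the JSESSIONID cookie, repeated requests that reuse the cookie jar should keep landing on the same worker):
[root@node2 ~]# curl -s -c /tmp/jk.cookie -b /tmp/jk.cookie http://10.5.100.119/index.jsp | grep -Eo 'Tomcat[AB].magedu.com'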