LVS + Keepalived Deployment
I. Project Preparation
Prepare four virtual machines: two as directors (proxy servers) and two as real servers (the real servers are only used to serve the test web pages).
1. Use the two LVS servers for keepalived (one master, one backup).
The real servers provide the actual web services (RS1 via tomcat, RS2 via nginx).
2. Install keepalived on both LVS servers to build the high-availability pair and generate the VIP.
Prepare four machines:
(master) master: 192.168.182.144, VIP: 192.168.182.100
(backup) backup: 192.168.182.150
(real server) RS1: 192.168.182.143
(real server) RS2: 192.168.182.151
Disable the firewall and SELinux on all machines:
# systemctl stop firewalld && setenforce 0
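The commands above stop firewalld and set SELinux to permissive only until the next reboot. A minimal sketch of making the SELinux change permanent; the edit is demonstrated on a temporary copy of the file so the snippet is safe to run anywhere, but on the real machines you would edit /etc/selinux/config in place (and also run systemctl disable --now firewalld):

```shell
# Temporary stand-in for /etc/selinux/config so this sketch does not
# touch the real system configuration.
tmpcfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmpcfg"

# The same substitution you would apply to /etc/selinux/config to keep
# SELinux off across reboots:
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$tmpcfg"

grep '^SELINUX=' "$tmpcfg"    # SELINUX=disabled
```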
3. Configure tomcat + Java on RS1 for a single-host multi-instance deployment
1. Set up the JDK
Download: https://www.oracle.com/cn/java/technologies/javase/javase-jdk8-downloads.html //choose linux-x64.tar.gz
(1) Extract the package:
# tar xvzf jdk-8u271-linux-x64.tar.gz -C /usr/local/
(2) Configure the environment variables:
# vim /etc/profile.d/java.sh
JAVA_HOME=/usr/local/jdk1.8.0_271
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME PATH
(3) Load the environment variables:
# source /etc/profile.d/java.sh
(4) Verify the installation:
# java -version
[If this reports OpenJDK, a yum-installed JDK is taking precedence; remove it and delete /usr/bin/java]
java version "1.8.0_271"
Java(TM) SE Runtime Environment (build 1.8.0_271-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.271-b09, mixed mode)
2. Deploy tomcat
Download: https://tomcat.apache.org/download-80.cgi //choose the Core distribution
(1) Extract the package:
# tar xvzf apache-tomcat-8.5.57.tar.gz -C /usr/local/
(2) Copy tomcat into three instances named tomcat_qf1, tomcat_qf2 and tomcat_qf3 in the same directory:
# cd /usr/local
# mv apache-tomcat-8.5.57 tomcat_qf1
# cp -r tomcat_qf1 tomcat_qf2
# cp -r tomcat_qf1 tomcat_qf3
(3) Change the ports (each instance needs its own shutdown and HTTP ports):
# sed -i 's#8005#8011#;s#8080#8081#' tomcat_qf2/conf/server.xml
# sed -i 's#8005#8012#;s#8080#8082#' tomcat_qf3/conf/server.xml
(4) Start the tomcat instances:
# /usr/local/tomcat_qf1/bin/startup.sh
# /usr/local/tomcat_qf2/bin/startup.sh
# /usr/local/tomcat_qf3/bin/startup.sh
(5) Visit each instance in a browser:
http://ip:8080 //(web1)
http://ip:8081 //(web2)
http://ip:8082 //(web3)
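To see exactly what step (3)'s sed rewrite does, here is a minimal sketch that applies the same command to a trimmed stand-in for server.xml instead of a real tomcat tree, so it can be run anywhere:

```shell
# Trimmed stand-in for tomcat_qf2/conf/server.xml containing only the
# two port attributes that the sed command targets.
sample=$(mktemp)
cat > "$sample" <<'EOF'
<Server port="8005" shutdown="SHUTDOWN">
  <Connector port="8080" protocol="HTTP/1.1" />
</Server>
EOF

# Rewrite for the second instance: shutdown port 8005 -> 8011,
# HTTP connector port 8080 -> 8081.
sed -i 's#8005#8011#;s#8080#8081#' "$sample"

grep -o 'port="[0-9]*"' "$sample"   # port="8011" and port="8081"
```

The third instance is the same idea with 8012/8082; as long as the shutdown and HTTP ports are unique per copy, the three instances can run side by side.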
4. Configure the nginx service on RS2
# yum install epel-release -y //configure the EPEL repository
# yum install -y nginx //install nginx via yum
Deploy the single-host multi-instance nginx setup:
# vim /etc/nginx/nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 80;
        server_name _;
        location / {
            root html;
            index index.html;
        }
    }
    server {
        listen 81;
        server_name _;
        location / {
            root /usr/share/nginx/html/81;
            index index.html;
        }
    }
    server {
        listen 82;
        server_name _;
        location / {
            root /usr/share/nginx/html/82;
            index index.html;
        }
    }
}
Create the 81 and 82 document roots with distinct pages so you can tell the instances apart:
# mkdir /usr/share/nginx/html/8{1,2}
# echo 8181 > /usr/share/nginx/html/81/index.html
# echo 8282 > /usr/share/nginx/html/82/index.html
# nginx -t //check the configuration for errors
# systemctl start nginx //start nginx (use "nginx -s reload" if it is already running)
Visit ip:80, ip:81 and ip:82 in a browser.
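The document-root and marker-page steps above can be sketched under a temporary directory instead of /usr/share/nginx/html, so the layout can be tried on any machine without touching a real nginx installation:

```shell
# The 81/82 document roots and marker pages from the steps above,
# created under a temporary root so the sketch is safe to run anywhere.
root=$(mktemp -d)
mkdir -p "$root"/html/8{1,2}

# Distinct marker pages make it obvious which vhost answered a request.
echo 8181 > "$root/html/81/index.html"
echo 8282 > "$root/html/82/index.html"

cat "$root/html/81/index.html" "$root/html/82/index.html"
```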
5. Install and configure software on the master/backup directors
Master:
# yum -y install ipvsadm keepalived
# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak #back up the original
# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id lvs-master #just a name; the backup node uses lvs-backup (the two must differ)
}
vrrp_instance VI_1 {
    state MASTER #master or backup role; the backup node uses BACKUP
    interface ens32 #interface the VIP binds to
    virtual_router_id 80 #must be identical on every director in the cluster
    priority 100 #priority; the backup uses 50 (leave a gap of 50)
    advert_int 1 #advertisement interval, default 1s
    authentication {
        auth_type PASS #master/backup authentication
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.182.100 #the VIP (from your own subnet)
    }
}
virtual_server 192.168.182.100 80 { #LVS configuration
    delay_loop 6 #health-check interval, in seconds
    lb_algo rr #LVS scheduling algorithm
    lb_kind DR #LVS cluster mode (direct routing)
    nat_mask 255.255.255.0
    protocol TCP #protocol used by the health check
    real_server 192.168.182.144 80 {
        weight 1
        inhibit_on_failure #on failure, set the weight to 0 instead of removing the node from IPVS
        TCP_CHECK { #health check
            connect_port 80 #port to check
            connect_timeout 3 #connection timeout
        }
    }
}
Backup:
# yum -y install ipvsadm keepalived
# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak #back up the original
# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id lvs-backup #must differ from the master's router_id
}
vrrp_instance VI_1 {
    state BACKUP #backup role
    interface ens32 #interface the VIP binds to
    virtual_router_id 80 #same value as the master (same cluster)
    priority 50 #lower than the master's 100
    advert_int 1 #advertisement interval, default 1s
    authentication {
        auth_type PASS #master/backup authentication
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.182.100 #the VIP (from your own subnet)
    }
}
virtual_server 192.168.182.100 80 { #LVS configuration
    delay_loop 6 #health-check interval, in seconds
    lb_algo rr #LVS scheduling algorithm
    lb_kind DR #LVS cluster mode (direct routing)
    nat_mask 255.255.255.0
    protocol TCP #protocol used by the health check
    real_server 192.168.182.150 80 {
        weight 1
        inhibit_on_failure #on failure, set the weight to 0 instead of removing the node from IPVS
        TCP_CHECK { #health check
            connect_port 80 #port to check
            connect_timeout 3 #connection timeout
        }
    }
}
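Only four fields differ between the two keepalived.conf files: router_id, state, priority, and the real_server address. The backup's file can therefore be derived from the master's with a few substitutions. A sketch, shown on a trimmed copy containing just the differing fields rather than the real /etc/keepalived/keepalived.conf:

```shell
# Trimmed master config with only the fields that differ per node.
master_cfg=$(mktemp)
cat > "$master_cfg" <<'EOF'
router_id lvs-master
state MASTER
priority 100
real_server 192.168.182.144 80
EOF

# Derive the backup's version of each differing field.
backup_cfg=$(mktemp)
sed -e 's/lvs-master/lvs-backup/' \
    -e 's/state MASTER/state BACKUP/' \
    -e 's/priority 100/priority 50/' \
    -e 's/192\.168\.182\.144/192.168.182.150/' \
    "$master_cfg" > "$backup_cfg"

cat "$backup_cfg"
```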
6. Configure reverse proxying + load balancing on the master/backup directors
The master and backup use the same configuration:
# vim /etc/nginx/nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    upstream myweb {
        server 192.168.182.143:8080 weight=1;
        server 192.168.182.143:8081 weight=2;
        server 192.168.182.143:8082 weight=3;
        server 192.168.182.151:80 weight=3;
        server 192.168.182.151:81 weight=2;
        server 192.168.182.151:82 weight=1;
    }
    server {
        listen 80;
        server_name _;
        location / {
            proxy_pass http://myweb/;
        }
    }
}
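With these weights, nginx's weighted round-robin gives each backend roughly weight / sum(weights) of the traffic over many requests. A small sketch computing the expected long-run share for each server in the "myweb" upstream above:

```shell
# Each entry pairs a backend from the "myweb" upstream with its weight.
weights="192.168.182.143:8080=1 192.168.182.143:8081=2 192.168.182.143:8082=3 \
192.168.182.151:80=3 192.168.182.151:81=2 192.168.182.151:82=1"

# Sum the weights: 1+2+3+3+2+1 = 12.
total=0
for w in $weights; do
  total=$(( total + ${w#*=} ))
done
echo "total weight: $total"

# Integer percentage of requests each backend should receive long-run.
for w in $weights; do
  echo "${w%=*} -> $(( 100 * ${w#*=} / total ))% of requests"
done
```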
7. Start keepalived (on both master and backup)
Master:
# systemctl start keepalived
Backup:
# systemctl start keepalived
Master:
# systemctl enable keepalived //start at boot (optional)
Backup:
# systemctl enable keepalived
Master:
# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.182.100:80 rr
  -> 192.168.182.144:80           Route   1      0          0
Backup:
# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.182.100:80 rr
  -> 192.168.182.150:80           Route   1      0          0
Testing
Visit http://<VIP> (http://192.168.182.100) in a browser.
If the master server goes down (for testing, simply stop the keepalived service), the VIP automatically floats over to the backup node, and users accessing the site notice nothing.
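The failover test above can be written down as a repeatable drill. A sketch, defined as a function so the steps can be reviewed before running them; the interface name ens32 and all addresses follow the configurations earlier in this document, and since the drill stops the live director, run it only in a test window:

```shell
# Failover drill for the setup above; run on the master node.
failover_test() {
  # 1. Confirm the master currently holds the VIP on ens32.
  ip addr show ens32 | grep 192.168.182.100

  # 2. Simulate a master failure.
  systemctl stop keepalived

  # 3. On the backup, the VIP should appear within a few advertisement
  #    intervals (advert_int is 1s in the configs above); check with:
  #      ip addr show ens32 | grep 192.168.182.100
  #    Keep refreshing http://192.168.182.100 meanwhile; the site
  #    should stay reachable the whole time.

  # 4. Restore the master; with priority 100 > 50 it takes the VIP
  #    back once keepalived is running again.
  systemctl start keepalived
}
```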