LVS has no built-in health checking, so it has no idea what state the back ends are in.
[root@server2 ~]# systemctl stop httpd.service
httpd on back end server2 is now stopped.
[root@chihao Desktop]# curl 172.25.254.100
curl: (7) Failed to connect to 172.25.254.100 port 80: Connection refused
[root@chihao Desktop]# curl 172.25.254.100
server3
[root@chihao Desktop]# curl 172.25.254.100
curl: (7) Failed to connect to 172.25.254.100 port 80: Connection refused
[root@chihao Desktop]# curl 172.25.254.100
server3
[root@chihao Desktop]# curl 172.25.254.100
curl: (7) Failed to connect to 172.25.254.100 port 80: Connection refused
[root@chihao Desktop]# curl 172.25.254.100
server3
curl from the host already shows that server2 is unreachable.
[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:80 rr
-> 172.25.254.12:80 Route 1 0 3
-> 172.25.254.13:80 Route 1 0 3
But on the director, ipvsadm still round-robins across both real servers.
How do we give LVS back-end health checks?
keepalived is the companion application for LVS, and the two fit together very well.
[root@server1 ~]# yum install keepalived -y
[root@server1 ~]# cd /etc/keepalived/
[root@server1 keepalived]# ls
keepalived.conf
[root@server1 keepalived]# vim keepalived.conf
Edit the keepalived configuration file:
line 5: change it so that status mail is sent to the local host
line 8: change the IP to the local machine's IP
line 12: comment it out, otherwise all traffic gets dropped
line 21: the priority
line 27: change it to your own VIP
Those are the edits made: the VIP and real-server entries were updated, and the health check was set to TCP_CHECK.
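For reference, the edited file might look roughly like this (IPs taken from this lab; the exact line numbers and defaults vary with the keepalived version, and the virtual_router_id and auth_pass values below are the stock sample defaults, so treat this as a sketch rather than the actual file):

```
global_defs {
   notification_email {
     root@localhost            # line 5: mail reports to the local host
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1       # line 8: local IP
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   # vrrp_strict               # line 12: commented out, or all traffic is dropped
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100                # line 21: priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100          # line 27: the VIP
    }
}

virtual_server 172.25.254.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 172.25.254.12 80 {
        weight 1
        TCP_CHECK {             # replaces the sample HTTP_GET check
            connect_timeout 3
        }
    }
    real_server 172.25.254.13 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
```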
[root@server1 keepalived]# ip addr del 172.25.254.100/24 dev eth0
Delete the VIP that was added by hand earlier, to avoid a conflict.
There was a mistake in the configuration file at first, which kept server3 from being detected; it has since been fixed. Pay close attention to the format.
[root@server1 keepalived]# ipvsadm -C
Clear all existing rules, then start keepalived:
[root@server1 keepalived]# systemctl restart keepalived.service
[root@server1 keepalived]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:80 rr
-> 172.25.254.12:80 Route 1 0 0
-> 172.25.254.13:80 Route 1 0 0
**Do not install iptables-services in this setup.
[root@server1 ~]# vim /etc/postfix/main.cf
[root@server1 ~]# systemctl start postfix
[root@server1 ~]# netstat -antlp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3198/sshd
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 5002/master
tcp 0 0 172.25.254.11:22 172.25.254.1:48804 ESTABLISHED 4767/sshd: root@pts
tcp6 0 0 :::22 :::* LISTEN 3198/sshd
tcp6 0 0 :::25 :::* LISTEN 5002/master
To receive mail on the director, postfix must be running: edit its configuration file, then start the service. netstat shows that port 25, which handles mail, is now listening.
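The original screenshot of the main.cf edit is not reproduced here; since netstat shows smtpd listening on 0.0.0.0:25, the change was probably along these lines (a guess, not the verified file):

```
# /etc/postfix/main.cf -- likely edits
inet_interfaces = all       # listen on all interfaces (netstat shows 0.0.0.0:25)
inet_protocols = all
```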
Now:
[root@server2 ~]# systemctl stop httpd
[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:80 rr
[root@server1 ~]# mail
Heirloom Mail version 12.5 7/5/10. Type ? for help.
"/var/spool/mail/root": 2 messages 1 new 2 unread
U 1 keepalived@localhost Fri Jul 16 12:38 18/641 "[LVS_DEVEL] Realserver [172.25.254.13]:80 - DOWN"
>N 2 keepalived@localhost Fri Jul 16 12:44 17/631 "[LVS_DEVEL] Realserver [172.25.254.12]:80 - DOWN"
& 2
Message 2:
From keepalived@localhost.localdomain Fri Jul 16 12:44:33 2021
Return-Path: <keepalived@localhost.localdomain>
X-Original-To: root@localhost
Delivered-To: root@localhost.localdomain
Date: Fri, 16 Jul 2021 12:44:33 +0800
From: keepalived@localhost.localdomain
Subject: [LVS_DEVEL] Realserver [172.25.254.12]:80 - DOWN
X-Mailer: Keepalived
To: root@localhost.localdomain
Status: R
=> TCP CHECK failed on service <=
With httpd stopped on both server2 and server3, the director can be seen receiving the failure mails.
[root@server2 ~]# systemctl start httpd
Once server2 and server3 are started again, new mail arrives:
[root@server1 ~]# mail
Heirloom Mail version 12.5 7/5/10. Type ? for help.
"/var/spool/mail/root": 4 messages 2 new 3 unread
U 1 keepalived@localhost Fri Jul 16 12:38 18/641 "[LVS_DEVEL] Realserver [172.25.254.13]:80 - DOWN"
2 keepalived@localhost Fri Jul 16 12:44 18/642 "[LVS_DEVEL] Realserver [172.25.254.12]:80 - DOWN"
>N 3 keepalived@localhost Fri Jul 16 12:46 17/630 "[LVS_DEVEL] Realserver [172.25.254.12]:80 - UP"
N 4 keepalived@localhost Fri Jul 16 12:46 17/630 "[LVS_DEVEL] Realserver [172.25.254.13]:80 - UP"
& q
Held 4 messages in /var/spool/mail/root
That completes health checking for LVS.
Next step: make LVS itself highly available.
Prepare server4 to pair with server1 for high availability.
[root@server4 ~]# yum install keepalived ipvsadm -y
[root@server1 ~]# scp /etc/keepalived/keepalived.conf server4:/etc/keepalived/
server1 copies its keepalived configuration file straight to server4 with scp; then edit it on server4:
set it to BACKUP state; its priority just needs to be lower than server1's.
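On server4 the copied file needs only two changes in the vrrp_instance block (the priority value here is illustrative, anything below server1's works):

```
vrrp_instance VI_1 {
    state BACKUP              # MASTER on server1, BACKUP on server4
    interface eth0
    virtual_router_id 51
    priority 50               # lower than server1's priority
    ...
}
```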
[root@server4 keepalived]# systemctl start keepalived.service
Jul 14 01:55:19 server1 Keepalived_vrrp[23847]: VRRP_Instance(VI_1) Entering BACKUP STATE
The log shows it is already in BACKUP state.
Test it:
[root@chihao Desktop]# curl 172.25.254.100
server3
[root@chihao Desktop]# curl 172.25.254.100
server2
[root@chihao Desktop]# curl 172.25.254.100
server3
[root@chihao Desktop]# curl 172.25.254.100
server2
[root@chihao Desktop]# curl 172.25.254.100
server3
[root@chihao Desktop]# curl 172.25.254.100
server2
[root@chihao Desktop]# curl 172.25.254.100
server3
Client access to the real servers works fine.
[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:80 rr
-> 172.25.254.12:80 Route 1 0 3
-> 172.25.254.13:80 Route 1 0 3
Scheduling on server1 is fine.
[root@server4 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:80 rr
-> 172.25.254.12:80 Route 1 0 0
-> 172.25.254.13:80 Route 1 0 0
server4 is on standby for now.
[root@server1 ~]# systemctl stop keepalived.service
server1 goes down:
[root@chihao Desktop]# curl 172.25.254.100
server3
[root@chihao Desktop]# curl 172.25.254.100
server2
[root@chihao Desktop]# curl 172.25.254.100
server3
[root@chihao Desktop]# curl 172.25.254.100
server2
[root@chihao Desktop]# curl 172.25.254.100
server3
[root@chihao Desktop]# curl 172.25.254.100
server2
Clients notice nothing and keep accessing the service as before.
Jul 14 02:09:05 server1 Keepalived_vrrp[23847]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jul 14 02:09:06 server1 Keepalived_vrrp[23847]: VRRP_Instance(VI_1) Entering MASTER STATE
server4's log shows that it has transitioned to MASTER and taken over as the director.
[root@server4 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:80 rr
-> 172.25.254.12:80 Route 1 0 3
-> 172.25.254.13:80 Route 1 0 3
[root@server1 ~]# systemctl start keepalived.service
Bring server1 back up.
[root@chihao Desktop]# curl 172.25.254.100
server3
[root@chihao Desktop]# curl 172.25.254.100
server2
[root@chihao Desktop]# curl 172.25.254.100
server3
[root@chihao Desktop]# curl 172.25.254.100
server2
Clients still access the service without interruption.
Jul 14 02:11:17 server1 Keepalived_vrrp[23847]: VRRP_Instance(VI_1) Entering BACKUP STATE
server4's log shows it returning to BACKUP state because its priority is lower.
[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:80 rr
-> 172.25.254.12:80 Route 1 0 2
-> 172.25.254.13:80 Route 1 0 2
server1 has taken scheduling back; together, server1 and server4 provide a highly available load balancer.
Everything so far has used the rr (round-robin) algorithm in DR mode.
How about weighted round robin?
Edit the configuration file: change rr to wrr, set server2's weight to 2, then restart keepalived:
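The relevant edits might look like this (abbreviated; the rest of the virtual_server block is unchanged):

```
virtual_server 172.25.254.100 80 {
    delay_loop 6
    lb_algo wrr               # rr -> wrr
    lb_kind DR
    protocol TCP

    real_server 172.25.254.12 80 {
        weight 2              # server2 now gets twice the traffic
        ...
    }
    real_server 172.25.254.13 80 {
        weight 1
        ...
    }
}
```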
[root@server1 ~]# systemctl restart keepalived.service
[root@chihao Desktop]# curl 172.25.254.100
server2
[root@chihao Desktop]# curl 172.25.254.100
server2
[root@chihao Desktop]# curl 172.25.254.100
server3
[root@chihao Desktop]# curl 172.25.254.100
server2
[root@chihao Desktop]# curl 172.25.254.100
server2
[root@chihao Desktop]# curl 172.25.254.100
server3
[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:80 wrr
-> 172.25.254.12:80 Route 2 0 4
-> 172.25.254.13:80 Route 1 0 2
Now server2 carries twice server3's weight, so the director favors server2 two to one. That is the weighted round-robin scheduling algorithm.
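The pattern seen in the curl output (server2, server2, server3, repeating) can be modeled with a small sketch. This is a toy version, not IPVS's actual code: the kernel's wrr scheduler interleaves picks using the GCD of the weights, but for weights 2 and 1 it happens to produce the same sequence:

```python
from itertools import islice

def wrr(servers):
    """Toy weighted round-robin: each server is yielded `weight` times
    per cycle, so a weight-2 server receives twice the requests."""
    while True:
        for name, weight in servers:
            for _ in range(weight):
                yield name

picks = list(islice(wrr([("server2", 2), ("server3", 1)]), 6))
print(picks)
# ['server2', 'server2', 'server3', 'server2', 'server2', 'server3']
```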