We know that Keepalived was originally designed to provide high availability for LVS clusters, but beyond LVS it can also make other services highly available via vrrp_script and track_script, Nginx among them. This post demonstrates how to use Keepalived to provide high availability for the Nginx service in a dual-instance setup. For more background on vrrp_script and track_script, see the previous post, 《Keepalived学习总结》.
Requirements ==> Use Keepalived's vrrp_script and track_script to provide high availability for Nginx, configured as dual instances; the VIPs of the two VRRP instances are 192.168.10.77 and 192.168.10.78, each configured on an interface alias.
Environment ==> CentOS 7.x
Objective ==> Dual-instance Keepalived high availability for the Nginx service
Hosts ==> 192.168.10.6 (hostname: node1), 192.168.10.8 (hostname: node2)
Prerequisite ==> Time synchronization between the two HA nodes (can be handled with a recurring cron job)
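A minimal sketch of such a cron job on each node, syncing every five minutes (the NTP server address below is a placeholder; substitute one reachable from your network):

```
*/5 * * * * /usr/sbin/ntpdate ntp.example.com &> /dev/null
```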
The steps are as follows.
1. Edit the Keepalived configuration file
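The full configuration is not reproduced in this post, so the following is a sketch of what node1's /etc/keepalived/keepalived.conf might look like for this dual-instance setup. The virtual_router_id values, priorities, and authentication password are assumptions; adjust them to your environment. On node2, swap the MASTER/BACKUP states and priorities for each instance, and use its interface name (ens34) with labels ens34:0 and ens34:1.

```
! Sketch of node1's configuration (assumed values marked in comments)
global_defs {
    router_id node1
}

vrrp_script chk_nginx {
    script "killall -0 nginx"       ! passes while an nginx process exists
    interval 2
    weight -5
}

vrrp_instance VI_1 {                ! VIP 192.168.10.77; node1 is master
    state MASTER
    interface ens33
    virtual_router_id 51            ! assumed; must match on both nodes
    priority 100                    ! node2 would use e.g. 96
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass somepass          ! assumed
    }
    virtual_ipaddress {
        192.168.10.77/32 dev ens33 label ens33:0
    }
    track_script {
        chk_nginx
    }
}

vrrp_instance VI_2 {                ! VIP 192.168.10.78; node2 is master
    state BACKUP
    interface ens33
    virtual_router_id 52            ! assumed; must match on both nodes
    priority 96                     ! node2 would use e.g. 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass somepass          ! assumed
    }
    virtual_ipaddress {
        192.168.10.78/32 dev ens33 label ens33:1
    }
    track_script {
        chk_nginx
    }
}
```

The priorities are chosen so that when chk_nginx fails and weight -5 takes effect, the master's effective priority (100 - 5 = 95) drops below the backup's (96), which is what triggers the failover.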
2. Install Nginx on both nodes with yum and start the web service
(1) On node1 (192.168.10.6)
[root@node1 ~]# yum -y install nginx
[root@node1 ~]# systemctl start nginx.service
[root@node1 ~]# ss -tnl | grep :80
LISTEN     0      128          *:80                       *:*
LISTEN     0      128         :::80                      :::*
(2) On node2 (192.168.10.8)
[root@node2 ~]# yum -y install nginx
[root@node2 ~]# systemctl start nginx.service
[root@node2 ~]# ss -tnl | grep :80
LISTEN     0      128          *:80                       *:*
3. Start the Keepalived service on both nodes
(1) On node1 (192.168.10.6)
[root@node1 ~]# systemctl start keepalived
[root@node1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.6  netmask 255.255.255.0  broadcast 192.168.10.255
        ether 00:0c:29:f7:b3:4e  txqueuelen 1000  (Ethernet)
        RX packets 46064  bytes 21104876 (20.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 43548  bytes 3735943 (3.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.77  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:f7:b3:4e  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 195  bytes 12522 (12.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 195  bytes 12522 (12.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
(2) On node2 (192.168.10.8)
[root@node2 ~]# systemctl start keepalived
[root@node2 ~]# ifconfig
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.8  netmask 255.255.255.0  broadcast 192.168.10.255
        ether 00:0c:29:ef:52:87  txqueuelen 1000  (Ethernet)
        RX packets 60646  bytes 17433464 (16.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 29539  bytes 2636829 (2.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens34:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.78  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:ef:52:87  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 319  bytes 20447 (19.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 319  bytes 20447 (19.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
As expected: node1 is the master of the first VRRP instance (VIP 192.168.10.77), so that VIP is configured on node1, while node2 is the master of the second VRRP instance (VIP 192.168.10.78), so that VIP is configured on node2.
4. Simulate failures and test
(1) Kill the nginx process on node2 and check whether the second VRRP instance's VIP (192.168.10.78) floats to node1.
[root@node2 ~]# killall nginx
# Check the log
[root@node2 ~]# tail /var/log/messages
... (output omitted) ...
Aug 8 16:29:28 node2 Keepalived_vrrp[6832]: VRRP_Script(chk_nginx) failed
Aug 8 16:29:30 node2 Keepalived_vrrp[6832]: VRRP_Instance(VI_2) Received higher prio advert
Aug 8 16:29:30 node2 Keepalived_vrrp[6832]: VRRP_Instance(VI_2) Entering BACKUP STATE
Aug 8 16:29:30 node2 Keepalived_vrrp[6832]: VRRP_Instance(VI_2) removing protocol VIPs.
Aug 8 16:29:30 node2 Keepalived_healthcheckers[6831]: Netlink reflector reports IP 192.168.10.78 removed
# Check the IP addresses on node2
[root@node2 ~]# ifconfig
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.8  netmask 255.255.255.0  broadcast 192.168.10.255
        ether 00:0c:29:ef:52:87  txqueuelen 1000  (Ethernet)
        RX packets 61483  bytes 17487867 (16.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 29994  bytes 2672261 (2.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 323  bytes 20647 (20.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 323  bytes 20647 (20.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
The second VRRP instance's VIP (192.168.10.78) has indeed been removed from node2.
# Check the IP addresses on node1
[root@node1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.6  netmask 255.255.255.0  broadcast 192.168.10.255
        ether 00:0c:29:f7:b3:4e  txqueuelen 1000  (Ethernet)
        RX packets 46700  bytes 21146944 (20.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 44417  bytes 3794925 (3.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.77  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:f7:b3:4e  txqueuelen 1000  (Ethernet)

ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.78  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:f7:b3:4e  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 199  bytes 12722 (12.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 199  bytes 12722 (12.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
At this point node1 holds not only the first VRRP instance's VIP (192.168.10.77) but also the second's (192.168.10.78): it is now the acting master of both instances.
(2) Kill the nginx process on node1 and restart nginx on node2, then check whether both the first VRRP instance's VIP (192.168.10.77) and the second's (192.168.10.78) float to node2.
# First restart nginx on node2; the second VRRP instance's VIP (192.168.10.78) should then float back to node2
[root@node2 ~]# systemctl restart nginx.service
[root@node2 ~]# ss -tnl | grep :80
LISTEN     0      128          *:80                       *:*
[root@node2 ~]# ifconfig
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.8  netmask 255.255.255.0  broadcast 192.168.10.255
        ether 00:0c:29:ef:52:87  txqueuelen 1000  (Ethernet)
        RX packets 62438  bytes 17551143 (16.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 30145  bytes 2688809 (2.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens34:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.78  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:ef:52:87  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 331  bytes 21047 (20.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 331  bytes 21047 (20.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Exactly as expected!
# Kill the nginx process on node1 and check whether the first VRRP instance's VIP (192.168.10.77) also floats to node2
[root@node1 ~]# killall nginx                # kill the nginx process
[root@node1 ~]# tail /var/log/messages       # check the log
Aug 8 16:36:41 node1 Keepalived_vrrp[7026]: VRRP_Instance(VI_2) Sending gratuitous ARPs on ens33 for 192.168.10.78
Aug 8 16:37:10 node1 Keepalived_vrrp[7026]: VRRP_Instance(VI_2) Received higher prio advert
Aug 8 16:37:10 node1 Keepalived_vrrp[7026]: VRRP_Instance(VI_2) Entering BACKUP STATE
Aug 8 16:37:10 node1 Keepalived_vrrp[7026]: VRRP_Instance(VI_2) removing protocol VIPs.
Aug 8 16:37:10 node1 Keepalived_healthcheckers[7025]: Netlink reflector reports IP 192.168.10.78 removed
Aug 8 16:38:44 node1 Keepalived_vrrp[7026]: VRRP_Script(chk_nginx) failed
Aug 8 16:38:45 node1 Keepalived_vrrp[7026]: VRRP_Instance(VI_1) Received higher prio advert
Aug 8 16:38:45 node1 Keepalived_vrrp[7026]: VRRP_Instance(VI_1) Entering BACKUP STATE
Aug 8 16:38:45 node1 Keepalived_vrrp[7026]: VRRP_Instance(VI_1) removing protocol VIPs.
Aug 8 16:38:45 node1 Keepalived_healthcheckers[7025]: Netlink reflector reports IP 192.168.10.77 removed
[root@node1 ~]# ifconfig                     # check IP addresses
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.6  netmask 255.255.255.0  broadcast 192.168.10.255
        ether 00:0c:29:f7:b3:4e  txqueuelen 1000  (Ethernet)
        RX packets 47288  bytes 21191119 (20.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 45193  bytes 3854623 (3.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 215  bytes 13522 (13.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 215  bytes 13522 (13.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Both VRRP instances' VIPs are now gone from node1.
# Check the IP addresses on node2 again
[root@node2 ~]# ifconfig
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.8  netmask 255.255.255.0  broadcast 192.168.10.255
        ether 00:0c:29:ef:52:87  txqueuelen 1000  (Ethernet)
        RX packets 62612  bytes 17562546 (16.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 30521  bytes 2712799 (2.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens34:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.77  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:ef:52:87  txqueuelen 1000  (Ethernet)

ens34:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.78  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:ef:52:87  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 335  bytes 21247 (20.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 335  bytes 21247 (20.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Both VRRP instances' VIPs have floated to node2.
Of course, the vrrp_script used in this configuration example is overly simple. Here is a more complete vrrp_script health-check script, for reference only: it checks whether nginx is running and, if not, tries to restart it up to three times before reporting failure.
[root@node1 ~]# vim /etc/keepalived/vrrp_script.sh
#!/bin/bash
#
server=nginx

restart_server() {
    # Try to restart the service. On success, exit 0 immediately so
    # Keepalived sees the check pass; on failure, wait a second and
    # return non-zero so the caller can retry.
    if ! systemctl restart "$server" &> /dev/null; then
        sleep 1
        return 1
    else
        exit 0
    fi
}

# killall -0 sends no signal; it only tests whether the process exists
if ! killall -0 "$server" &> /dev/null; then
    # Up to three restart attempts; if all fail, report the check as failed
    restart_server || restart_server || restart_server
    exit 1
fi
exit 0
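The retry-then-give-up pattern in the script above can be exercised in isolation. The sketch below replaces the `systemctl restart` call with a stub that succeeds only on the third attempt (the stub and counter are purely illustrative, not part of the real script), just to show how the three chained calls and the final exit status interact:

```shell
#!/bin/bash
# Stub standing in for "systemctl restart nginx": it fails on the
# first two calls and succeeds on the third.
attempts=0
try_restart() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]    # exit status 0 only from the 3rd call on
}

# Same pattern as the health-check script: retry up to three times,
# stopping as soon as one attempt succeeds.
if try_restart || try_restart || try_restart; then
    status=0
else
    status=1
fi
echo "attempts=$attempts status=$status"    # prints: attempts=3 status=0
```

Because `||` short-circuits, a successful restart on the first or second try skips the remaining attempts, and only three consecutive failures yield a non-zero status.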
The script path must be referenced from the keepalived.conf configuration file. With interval 2 the script runs every two seconds, and weight -5 subtracts 5 from the node's priority while the check is failing, which is what lets the peer win the election and take over the VIP.
[root@node1 ~]# vim /etc/keepalived/keepalived.conf
... (other sections omitted) ...
vrrp_script chk_nginx {
    script "/etc/keepalived/vrrp_script.sh"
    interval 2
    weight -5
}
... (other sections omitted) ...
That completes the experiment!
Reposted from: https://blog.51cto.com/xuweitao/1954608