Implementing nginx high availability with keepalived
I. Introduction to keepalived
1. What is keepalived?
- Keepalived was originally written for the LVS load balancer, to manage and monitor the state of each service node in an LVS cluster; VRRP-based high availability was added later. (LVS cannot do health checks on its own; keepalived was first used to manage LVS and is now widely used for high availability.)
- Besides managing LVS, keepalived can also provide high availability for other services, such as Nginx, HAProxy and MySQL.
- Keepalived implements high availability mainly through the VRRP protocol.
- VRRP is short for Virtual Router Redundancy Protocol.
- VRRP was created to eliminate the single point of failure of static routing: when an individual node goes down, traffic fails over to another node and the network keeps running without interruption.
- Keepalived can therefore configure and manage LVS, health-check the nodes behind it, and provide high availability for system network services. ("System network services" here means any critical service: nginx, or an LVS/HAProxy scheduler.)
1.1 keepalived official website
2. Key functions of keepalived
2.1 keepalived has three key functions
- Managing the LVS load-balancing software
- Health-checking the nodes of an LVS cluster
- Providing high availability (failover) for system network services
3. How keepalived failover works
- Failover between keepalived high-availability peers is implemented with VRRP (Virtual Router Redundancy Protocol).
3.1 How failover works:
- While keepalived is working normally, the master node keeps sending heartbeat messages (by multicast) to the backup node to announce that it is still alive.
- When the master node fails, it stops sending heartbeats; the backup node no longer detects them, so it invokes its takeover routine and claims the master's IP resource (the virtual IP, or VIP) and its services.
- When the master node recovers, the backup node releases the IP resource and services it took over during the failure and returns to its original backup role.
3.2 VRRP
- VRRP, in full Virtual Router Redundancy Protocol, exists to eliminate the single point of failure of static routing.
- VRRP hands the routing task to one of the VRRP routers through an election mechanism.
4. How keepalived works
- Through an election mechanism
4.1 keepalived high-availability architecture diagram
4.2 Description of how keepalived works
- A pair of keepalived high-availability nodes communicates over VRRP, so VRRP is the place to start:
- VRRP, in full Virtual Router Redundancy Protocol, was designed to remove the single point of failure of static routing.
- VRRP assigns the routing task to one of the VRRP routers through an election protocol.
- VRRP uses IP multicast (default multicast address 224.0.0.18) for communication between the high-availability peers.
- In operation the master node sends packets and the backup nodes receive them; when a backup stops receiving the master's packets, it starts its takeover routine and claims the master's resources. There can be several backups, elected by priority, but in day-to-day keepalived operations a single pair is the norm.
- VRRP can encrypt its traffic, but the keepalived project still recommends configuring the authentication type and password in plain text.
4.3 How the keepalived service works:
- keepalived high availability communicates over VRRP, and VRRP determines master and backup by election: the master has the higher priority and claims all the resources while working, while the backup waits; when the master dies, the backup takes over the master's resources and serves clients in its place.
- Between keepalived peers, only the node acting as master keeps sending VRRP multicast packets to tell the backup it is alive; the backup will not preempt a live master. When the master becomes unavailable, i.e. the backup stops hearing its multicast packets, the backup starts the relevant services and takes over the resources to keep the business running. Takeover can complete in under a second.
- If no message arrives within 1 s, the peer is considered dead.
- By default, when the master's service dies the backup preempts, and when the master recovers it takes the role back.
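The election rule above can be sketched as a tiny script: among all live nodes, the one with the highest configured priority becomes MASTER. The `elect_master` helper and the `name:priority` notation are illustrative assumptions, not part of keepalived; the names and priorities match the pair configured later in this article.

```shell
#!/bin/bash
# Illustrative sketch of the VRRP election rule:
# the live node with the highest "priority" value becomes MASTER.
elect_master() {
    # Arguments: "name:priority" pairs, one per live node.
    local pair name prio best_name="" best_prio=-1
    for pair in "$@"; do
        name=${pair%%:*}
        prio=${pair##*:}
        if (( prio > best_prio )); then
            best_prio=$prio
            best_name=$name
        fi
    done
    echo "$best_name"
}

elect_master master:100 backup:90   # master
elect_master backup:90              # backup (the master is down)
```

When both nodes are alive, priority 100 beats 90; once the master stops advertising, the backup is the only candidate left and wins.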
II. Implementing nginx high availability with keepalived
- Environment
OS | Hostname | IP |
---|---|---|
RedHat 8.5 | master | 192.168.232.134 |
RedHat 8.5 | backup | 192.168.232.128 |
- The virtual IP (VIP) for this exercise is tentatively set to 192.168.232.250
1. Install nginx on both master and backup
1.1 Install nginx on master
[root@master ~]# dnf module -y install nginx:1.20
[root@master ~]# cd /usr/share/testpage/
[root@master testpage]# ls
index.html
[root@master testpage]# echo '123456 master pkg' > index.html
[root@master testpage]# cat index.html
123456 master pkg
[root@master testpage]# cd
[root@master ~]#
Start nginx
[root@master ~]# systemctl enable --now nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
[root@master ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
[root@master ~]#
Disable the firewall and SELinux
[root@master ~]# vim /etc/selinux/config
[root@master ~]# setenforce 0
[root@master ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master ~]#
1.2 Install nginx on backup
[root@backup ~]# dnf module -y install nginx:1.20
[root@backup ~]# echo '66666 backup' > /usr/share/testpage/index.html
[root@backup ~]# cat /usr/share/testpage/index.html
66666 backup
[root@backup ~]#
Start nginx
[root@backup ~]# systemctl enable --now nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
[root@backup ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
[root@backup ~]#
Disable the firewall and SELinux
[root@backup ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@backup ~]# setenforce 0
[root@backup ~]# vim /etc/selinux/config
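The `vim /etc/selinux/config` step on each host makes the change survive a reboot (`setenforce 0` alone only lasts until then). The edit can be scripted instead of done interactively; this sketch works on a copy under /tmp so it is safe to try anywhere, and the path of the copy is just a demo choice:

```shell
#!/bin/bash
# Persistently disable SELinux by rewriting the SELINUX= line in the config.
# Operates on a copy so the sketch can run safely outside the lab hosts.
conf=/tmp/selinux-config-demo
cp /etc/selinux/config "$conf" 2>/dev/null || printf 'SELINUX=enforcing\n' > "$conf"

sed -ri 's/^SELINUX=.*/SELINUX=disabled/' "$conf"
grep '^SELINUX=' "$conf"   # -> SELINUX=disabled
```

On the real hosts you would point `conf` at /etc/selinux/config itself and reboot (or run `setenforce 0`) for the runtime change.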
- Open the address in a browser to make sure the nginx service on master is reachable
2. Install keepalived
2.1 Install keepalived on master
Install keepalived
[root@master ~]# dnf list all|grep keepalived
Failed to set locale, defaulting to C.UTF-8
keepalived.x86_64 2.1.5-6.el8 AppStream
[root@master ~]# yum -y install keepalived
View the files installed by the package
[root@master ~]# rpm -ql keepalived
/etc/keepalived //configuration directory
/etc/keepalived/keepalived.conf //main configuration file
/etc/sysconfig/keepalived
/usr/bin/genhash
/usr/lib/.build-id
/usr/lib/.build-id/0a
/usr/lib/.build-id/0a/410997e11c666114ca6d785e58ff0cc248744e
/usr/lib/.build-id/6f
/usr/lib/.build-id/6f/ba0d6bad6cb5ff7b074e703849ed93bebf4a0f
/usr/lib/systemd/system/keepalived.service //systemd service unit file
/usr/libexec/keepalived
/usr/sbin/keepalived
/usr/share/doc/keepalived
/usr/share/doc/keepalived/AUTHOR
/usr/share/doc/keepalived/CONTRIBUTORS
/usr/share/doc/keepalived/COPYING
/usr/share/doc/keepalived/ChangeLog
/usr/share/doc/keepalived/README
/usr/share/doc/keepalived/TODO
/usr/share/doc/keepalived/keepalived.conf.HTTP_GET.port
/usr/share/doc/keepalived/keepalived.conf.IPv6
/usr/share/doc/keepalived/keepalived.conf.PING_CHECK
/usr/share/doc/keepalived/keepalived.conf.SMTP_CHECK
/usr/share/doc/keepalived/keepalived.conf.SSL_GET
/usr/share/doc/keepalived/keepalived.conf.SYNOPSIS
/usr/share/doc/keepalived/keepalived.conf.UDP_CHECK
/usr/share/doc/keepalived/keepalived.conf.conditional_conf
/usr/share/doc/keepalived/keepalived.conf.fwmark
/usr/share/doc/keepalived/keepalived.conf.inhibit
/usr/share/doc/keepalived/keepalived.conf.misc_check
/usr/share/doc/keepalived/keepalived.conf.misc_check_arg
/usr/share/doc/keepalived/keepalived.conf.quorum
/usr/share/doc/keepalived/keepalived.conf.sample
/usr/share/doc/keepalived/keepalived.conf.status_code
/usr/share/doc/keepalived/keepalived.conf.track_interface
/usr/share/doc/keepalived/keepalived.conf.virtual_server_group
/usr/share/doc/keepalived/keepalived.conf.virtualhost
/usr/share/doc/keepalived/keepalived.conf.vrrp
/usr/share/doc/keepalived/keepalived.conf.vrrp.localcheck
/usr/share/doc/keepalived/keepalived.conf.vrrp.lvs_syncd
/usr/share/doc/keepalived/keepalived.conf.vrrp.routes
/usr/share/doc/keepalived/keepalived.conf.vrrp.rules
/usr/share/doc/keepalived/keepalived.conf.vrrp.scripts
/usr/share/doc/keepalived/keepalived.conf.vrrp.static_ipaddress
/usr/share/doc/keepalived/keepalived.conf.vrrp.sync
/usr/share/man/man1/genhash.1.gz
/usr/share/man/man5/keepalived.conf.5.gz
/usr/share/man/man8/keepalived.8.gz
/usr/share/snmp/mibs/KEEPALIVED-MIB.txt
/usr/share/snmp/mibs/VRRP-MIB.txt
/usr/share/snmp/mibs/VRRPv3-MIB.txt
[root@master ~]#
2.2 Install keepalived on the backup server the same way
Install keepalived
[root@backup ~]# yum -y install keepalived
3. keepalived configuration
3.1 Configure keepalived on master
[root@master ~]# cd /etc/keepalived/
[root@master keepalived]# mv keepalived.conf{,.bak}
[root@master keepalived]# ls
keepalived.conf.bak
[root@master keepalived]# vim keepalived.conf
[root@master keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb01
}
vrrp_instance VI_1 {
state MASTER
interface ens160
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass mushuang
}
virtual_ipaddress {
192.168.232.250
}
}
virtual_server 192.168.232.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.232.134 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.232.128 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@master keepalived]#
[root@master ~]# systemctl enable --now keepalived
[root@master ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
[root@master ~]#
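After editing keepalived.conf it is worth a quick sanity check that the VIP in the file is the one you planned. keepalived prints nothing useful here; the `extract_vip` helper below is a hypothetical sketch that parses only the minimal layout used in this article (a single address on the line after `virtual_ipaddress {`):

```shell
#!/bin/bash
# Print the first address inside the virtual_ipaddress { } block.
extract_vip() {
    awk '/virtual_ipaddress/ {grab=1; next} grab {print $1; exit}' "$1"
}

# Demo against a minimal fragment shaped like the config above.
cat > /tmp/keepalived-demo.conf <<'EOF'
vrrp_instance VI_1 {
    virtual_ipaddress {
        192.168.232.250
    }
}
EOF

extract_vip /tmp/keepalived-demo.conf   # 192.168.232.250
```

On the lab hosts you would run it against /etc/keepalived/keepalived.conf and compare the output with the planned 192.168.232.250.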
3.2 Configure keepalived on backup
[root@backup ~]# cd /etc/keepalived/
[root@backup keepalived]# ls
keepalived.conf
[root@backup keepalived]# mv keepalived.conf{,.bak}
[root@backup keepalived]# ls
keepalived.conf keepalived.conf.bak
[root@backup keepalived]# vim keepalived.conf
[root@backup keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb02
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass mushuang
}
virtual_ipaddress {
192.168.232.250
}
}
virtual_server 192.168.232.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.232.134 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.232.128 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@backup keepalived]#
[root@backup ~]# systemctl enable --now keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
[root@backup ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
[root@backup ~]#
3.3 Check which node holds the VIP
- Check on MASTER
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:3c:93:21 brd ff:ff:ff:ff:ff:ff
inet 192.168.232.134/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet 192.168.232.250/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe3c:9321/64 scope link
valid_lft forever preferred_lft forever
[root@master ~]#
- Check on BACKUP
[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:03:67:8a brd ff:ff:ff:ff:ff:ff
inet 192.168.232.128/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe03:678a/64 scope link
valid_lft forever preferred_lft forever
[root@backup ~]#
4. Access tests
4.1 With nginx running on both hosts, the VIP cannot be reached
C:\Users\Administrator>curl 192.168.232.250
curl: (28) Failed to connect to 192.168.232.250 port 80 after 21033 ms: Timed out
- After one of them is stopped, access works:
C:\Users\Administrator>curl 192.168.232.250
123456 master pkg
5. Adjust a kernel parameter to allow binding to the VIP
5.1 This step is optional; it is needed when a service should listen on the VIP only
- Modify the kernel parameter on master
[root@master ~]# echo 'net.ipv4.ip_nonlocal_bind = 1' >>/etc/sysctl.conf
[root@master ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
[root@master ~]# cat /proc/sys/net/ipv4/ip_nonlocal_bind
1
- Modify the kernel parameter on backup
[root@backup ~]# echo 'net.ipv4.ip_nonlocal_bind = 1' >>/etc/sysctl.conf
[root@backup ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
[root@backup ~]# cat /proc/sys/net/ipv4/ip_nonlocal_bind
1
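Appending with `>>` a second time would leave a duplicate line in /etc/sysctl.conf. An idempotent variant of the step above, demonstrated against a scratch file so it can be run anywhere (the scratch path is a demo assumption):

```shell
#!/bin/bash
# Append net.ipv4.ip_nonlocal_bind = 1 only when the key is not present yet.
conf=/tmp/sysctl-demo.conf
: > "$conf"   # scratch file standing in for /etc/sysctl.conf

add_key() {
    grep -q '^net\.ipv4\.ip_nonlocal_bind' "$conf" || \
        echo 'net.ipv4.ip_nonlocal_bind = 1' >> "$conf"
}

add_key
add_key   # second call is a no-op
grep -c 'ip_nonlocal_bind' "$conf"   # 1
```

Pointing `conf` at /etc/sysctl.conf and following with `sysctl -p` reproduces the manual steps above without ever duplicating the entry.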
6. Have keepalived monitor the nginx load balancer
6.1 keepalived monitors the state of the nginx load balancer through a script
- Write the script on master
[root@master ~]# mkdir /scripts
[root@master ~]# cd /scripts/
[root@master scripts]# vim check_nginx.sh
[root@master scripts]# cat check_nginx.sh
#!/bin/bash
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ $nginx_status -lt 1 ];then
systemctl stop keepalived
fi
[root@master scripts]#
[root@master scripts]# chmod +x check_nginx.sh
Explanation:
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
nginx_status: the number of running nginx processes
grep -Ev "grep|$0": invert the match to exclude the grep process itself and this script ($0)
grep '\bnginx\b'|wc -l: keep only lines matching the word nginx and count them
-lt: less than
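Counting processes works, but it cannot tell a hung nginx from a healthy one. A common hardening (an assumption on my part, not what this article uses) is to probe the service itself, e.g. with `curl -fsS -m 3 http://127.0.0.1/`. The decision logic can be isolated in a small function that accepts any probe command, which also makes it easy to test:

```shell
#!/bin/bash
# Run a probe command; report whether the service looks healthy.
# In a real check script the "unhealthy" branch would run
# "systemctl stop keepalived", as check_nginx.sh above does;
# here it just prints the verdict so the logic is observable.
check_service() {
    if "$@" > /dev/null 2>&1; then
        echo healthy
    else
        echo unhealthy
    fi
}

check_service true    # healthy
check_service false   # unhealthy
# Real-usage sketch: check_service curl -fsS -m 3 http://127.0.0.1/
```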
[root@master scripts]# vim notify.sh
[root@master scripts]# chmod +x notify.sh
[root@master scripts]# ls
check_nginx.sh notify.sh
[root@master scripts]# cat notify.sh
#!/bin/bash
VIP=$2
case "$1" in
master)
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ $nginx_status -lt 1 ];then
systemctl start nginx
fi
;;
backup)
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ $nginx_status -gt 0 ];then
systemctl stop nginx
fi
;;
*)
echo "Usage:$0 master|backup VIP"
;;
esac
[root@master scripts]#
VIP=$2: the second argument passed to the script is the virtual IP
If $1 is master: start nginx
If $1 is backup: stop nginx
Otherwise: print the usage message
A fuller notify.sh example, including mail notification (mail is not configured here):
#!/bin/bash
VIP=$2
sendmail (){
subject="${VIP}'s server keepalived state has changed"
content="`date +'%F %T'`: `hostname`'s state changed to master"
echo $content | mail -s "$subject" 1470044516@qq.com
}
case "$1" in
master)
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ $nginx_status -lt 1 ];then
systemctl start nginx
fi
sendmail
;;
backup)
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ $nginx_status -gt 0 ];then
systemctl stop nginx
fi
;;
*)
echo "Usage:$0 master|backup VIP"
;;
esac
- Write the script on backup
[root@backup ~]# mkdir /scripts
[root@backup ~]# cd /scripts/
[root@backup scripts]# ls
[root@backup scripts]#
[root@master scripts]# scp notify.sh 192.168.232.128:/scripts/
The authenticity of host '192.168.232.128 (192.168.232.128)' can't be established.
ECDSA key fingerprint is SHA256:Blx3FlbkuzuEWE0fW0IcvAT1vM9oBVG/LrPyW+M99Es.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.232.128' (ECDSA) to the list of known hosts.
root@192.168.232.128's password:
notify.sh 100% 435 336.9KB/s 00:00
[root@master scripts]#
[root@backup scripts]# ls
notify.sh
[root@backup scripts]# ls
notify.sh
7. Add the monitoring script to the keepalived configuration
7.1 Configure keepalived on master
[root@master ~]# cd /etc/keepalived/
[root@master keepalived]# vi keepalived.conf
[root@master keepalived]# ls
keepalived.conf keepalived.conf.bak
[root@master keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb01
}
vrrp_script nginx_check {
script "/scripts/check_nginx.sh"
interval 1
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface ens160
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass mushuang
}
virtual_ipaddress {
192.168.232.250
}
track_script {
nginx_check
}
notify_master "/scripts/notify.sh master 192.168.232.250"
notify_backup "/scripts/notify.sh backup 192.168.232.250"
}
virtual_server 192.168.232.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.232.134 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.232.128 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@master keepalived]#
[root@master ~]# systemctl restart keepalived
[root@master ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:3c:93:21 brd ff:ff:ff:ff:ff:ff
inet 192.168.232.134/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe3c:9321/64 scope link
valid_lft forever preferred_lft forever
[root@master ~]#
vrrp_script nginx_check {
script "/scripts/check_nginx.sh"//path of the check script
interval 1//run it every 1 second
weight -20//on failure, subtract 20 from this node's priority, demoting it to backup
}
track_script { //track the script
nginx_check
}
notify_master "/scripts/notify.sh master 192.168.232.250"
notify_backup "/scripts/notify.sh backup 192.168.232.250"
notify_master "/scripts/notify.sh master 192.168.232.250": when this node becomes master, call the script with the argument master and the virtual IP
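The failover arithmetic implied by `weight -20` can be checked directly: while the tracked script is failing, the master's effective priority drops below the backup's 90, so the backup wins the next election. A sketch:

```shell
#!/bin/bash
# Effective VRRP priority while a tracked script is failing.
effective_priority() {
    local base=$1 weight=$2
    echo $(( base + weight ))
}

effective_priority 100 -20   # 80  -> below the backup's 90: fails over
effective_priority 100 0     # 100 -> check passing: master keeps the VIP
```

Note that check_nginx.sh in this article stops keepalived outright when nginx is down, so in this particular setup the weight-based demotion is a fallback path rather than the primary failover trigger.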
7.2 Configure keepalived on backup
- backup does not need to check whether nginx is healthy; it starts nginx when promoted to MASTER and stops it when demoted to BACKUP
[root@backup ~]# cd /etc/keepalived/
[root@backup keepalived]# ls
keepalived.conf keepalived.conf.bak
[root@backup keepalived]# vi keepalived.conf
[root@backup keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb02
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass mushuang
}
virtual_ipaddress {
192.168.232.250
}
notify_master "/scripts/notify.sh master 192.168.232.250"
notify_backup "/scripts/notify.sh backup 192.168.232.250"
}
virtual_server 192.168.232.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.232.134 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.232.128 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@backup keepalived]#
[root@backup ~]# systemctl restart keepalived
[root@backup ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
[root@backup ~]#
[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:03:67:8a brd ff:ff:ff:ff:ff:ff
inet 192.168.232.128/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet 192.168.232.250/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe03:678a/64 scope link
valid_lft forever preferred_lft forever
[root@backup ~]#
7.3 The master becomes the backup and the backup becomes the master
- 192.168.232.250
7.4 State of the two hosts
- With the check script in place: when the master fails, the backup takes over; after the master recovers, it cannot take the VIP back (its keepalived was stopped by the script).
- Comment out the check script in the master's main configuration file.
- Environment: keepalived running, VIP present, nginx running
[root@master ~]# cd /etc/keepalived/
[root@master keepalived]# vim keepalived.conf
[root@master keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb01
}
vrrp_script nginx_check {
script "/scripts/check_nginx.sh"
interval 1
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface ens160
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass mushuang
}
virtual_ipaddress {
192.168.232.250
}
# track_script {
# nginx_check
# }
notify_master "/scripts/notify.sh master 192.168.232.250"
notify_backup "/scripts/notify.sh backup 192.168.232.250"
}
virtual_server 192.168.232.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.232.134 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.232.128 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@master keepalived]#
[root@master keepalived]# systemctl start keepalived
[root@master keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service;>
Active: active (running) since Wed 2022-08-31 10:47:48 CST;>
Process: 578045 ExecStart=/usr/sbin/keepalived $KEEPALIVED_O>
Main PID: 578046 (keepalived)
Tasks: 3 (limit: 23502)
Memory: 2.2M
CGroup: /system.slice/keepalived.service
├─578046 /usr/sbin/keepalived -D
├─578047 /usr/sbin/keepalived -D
└─578048 /usr/sbin/keepalived -D
Aug 31 10:47:48 master Keepalived_vrrp[578048]: SECURITY VIOLA>
Aug 31 10:47:48 master Keepalived_vrrp[578048]: Assigned addre>
Aug 31 10:47:48 master Keepalived_vrrp[578048]: Assigned addre>
Aug 31 10:47:48 master Keepalived_vrrp[578048]: Warning - scri>
Aug 31 10:47:48 master Keepalived_vrrp[578048]: Registering gr>
Aug 31 10:47:48 master Keepalived_vrrp[578048]: (VI_1) removin>
Aug 31 10:47:48 master Keepalived_vrrp[578048]: (VI_1) Enterin>
Aug 31 10:47:48 master Keepalived_vrrp[578048]: VRRP sockpool:>
Aug 31 10:47:49 master Keepalived_vrrp[578048]: (VI_1) receive>
Aug 31 10:47:50 master Keepalived_vrrp[578048]: (VI_1) receive>
[root@master keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:3c:93:21 brd ff:ff:ff:ff:ff:ff
inet 192.168.232.134/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet 192.168.232.250/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe3c:9321/64 scope link
valid_lft forever preferred_lft forever
[root@master keepalived]#
- Normal state of the backup: keepalived running, no VIP, nginx stopped
[root@backup ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service;>
Active: active (running) since Wed 2022-08-31 10:22:28 CST;>
Process: 525648 ExecStart=/usr/sbin/keepalived $KEEPALIVED_O>
Main PID: 525652 (keepalived)
Tasks: 3 (limit: 23502)
Memory: 2.2M
CGroup: /system.slice/keepalived.service
├─525652 /usr/sbin/keepalived -D
├─525654 /usr/sbin/keepalived -D
└─525655 /usr/sbin/keepalived -D
[root@backup ~]#
[root@backup ~]# systemctl stop nginx
[root@backup ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
[root@backup ~]#
[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:03:67:8a brd ff:ff:ff:ff:ff:ff
inet 192.168.232.128/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe03:678a/64 scope link
valid_lft forever preferred_lft forever
[root@backup ~]#
- On the master, re-enable the check script, keep state MASTER, and remove notify_backup
[root@master keepalived]# vim keepalived.conf
[root@master keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb01
}
vrrp_script nginx_check {
script "/scripts/check_nginx.sh"
interval 1
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface ens160
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass mushuang
}
virtual_ipaddress {
192.168.232.250
}
track_script {
nginx_check
}
notify_master "/scripts/notify.sh master 192.168.232.250"
}
virtual_server 192.168.232.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.232.134 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.232.128 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@master keepalived]#
[root@master ~]# systemctl restart nginx
[root@master ~]# systemctl restart keepalived
- master state: keepalived running, VIP present, nginx running and enabled at boot
[root@master keepalived]# systemctl enable nginx
[root@master keepalived]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
[root@master keepalived]#
[root@master keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:3c:93:21 brd ff:ff:ff:ff:ff:ff
inet 192.168.232.134/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet 192.168.232.250/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe3c:9321/64 scope link
valid_lft forever preferred_lft forever
[root@master keepalived]#
[root@master keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service;>
Active: active (running) since Wed 2022-08-31 10:47:48 CST;>
Main PID: 578046 (keepalived)
Tasks: 3 (limit: 23502)
Memory: 2.2M
CGroup: /system.slice/keepalived.service
├─578046 /usr/sbin/keepalived -D
├─578047 /usr/sbin/keepalived -D
└─578048 /usr/sbin/keepalived -D
- backup node: nginx stopped and disabled at boot, keepalived running
[root@backup ~]# systemctl disable nginx
Removed /etc/systemd/system/multi-user.target.wants/nginx.service.
[root@backup ~]#
[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:03:67:8a brd ff:ff:ff:ff:ff:ff
inet 192.168.232.128/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe03:678a/64 scope link
valid_lft forever preferred_lft forever
[root@backup ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service;>
Active: active (running) since Wed 2022-08-31 10:22:28 CST;>
Process: 525648 ExecStart=/usr/sbin/keepalived $KEEPALIVED_O>
Main PID: 525652 (keepalived)
Tasks: 3 (limit: 23502)
Memory: 2.2M
CGroup: /system.slice/keepalived.service
├─525652 /usr/sbin/keepalived -D
├─525654 /usr/sbin/keepalived -D
└─525655 /usr/sbin/keepalived -D
7.5 Manually stop the nginx service on master to simulate a failure; the check script will stop the master's keepalived. With nginx and keepalived running on the backup, will it become the master and pick up the VIP?
- Stop the nginx service on the master
[root@master ~]# systemctl stop nginx
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:3c:93:21 brd ff:ff:ff:ff:ff:ff
inet 192.168.232.134/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe3c:9321/64 scope link
valid_lft forever preferred_lft forever
[root@master ~]#
[root@master ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Wed 2022-08-31 11:42:31 CST; 28s ago
Process: 702395 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 702396 (code=exited, status=0/SUCCESS)
- The backup preempts and automatically becomes the master
[root@backup ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:03:67:8a brd ff:ff:ff:ff:ff:ff
inet 192.168.232.128/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet 192.168.232.250/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe03:678a/64 scope link
valid_lft forever preferred_lft forever
[root@backup ~]#
- Once the master is healthy again, it takes the role back
[root@master ~]# systemctl restart nginx keepalived
[root@master ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:3c:93:21 brd ff:ff:ff:ff:ff:ff
inet 192.168.232.134/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet 192.168.232.250/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe3c:9321/64 scope link
valid_lft forever preferred_lft forever
[root@master ~]#
- Backup
[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:03:67:8a brd ff:ff:ff:ff:ff:ff
inet 192.168.232.128/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe03:678a/64 scope link
valid_lft forever preferred_lft forever
[root@backup ~]#
8. Non-preemption
8.1 master configuration
[root@master ~]# vim /etc/keepalived/keepalived.conf
[root@master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb01
}
vrrp_script nginx_check {
script "/scripts/check_nginx.sh"
interval 1
weight -20
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 51
priority 100
nopreempt
advert_int 1
authentication {
auth_type PASS
auth_pass mushuang
}
virtual_ipaddress {
192.168.232.250
}
track_script {
nginx_check
}
notify_master "/scripts/notify.sh master 192.168.232.250"
notify_backup "/scripts/notify.sh backup 192.168.232.250"
}
virtual_server 192.168.232.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.232.134 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.232.128 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@master ~]#
[root@master ~]# systemctl restart nginx keepalived
- master state: nginx and keepalived stopped, so it becomes the backup
[root@master ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
[root@master ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service;>
Active: inactive (dead) since Wed 2022-08-31 11:55:22 CST; >
Process: 733253 ExecStart=/usr/sbin/keepalived $KEEPALIVED_O>
Main PID: 733255 (code=exited, status=0/SUCCESS)
- The backup becomes the master
[root@backup ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
[root@backup ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service;>
Active: active (running) since Wed 2022-08-31 11:25:49 CST;>
Process: 656165 ExecStart=/usr/sbin/keepalived $KEEPALIVED_O>
Main PID: 656166 (keepalived)
Tasks: 3 (limit: 23502)
Memory: 2.2M
CGroup: /system.slice/keepalived.service
├─656166 /usr/sbin/keepalived -D
├─656167 /usr/sbin/keepalived -D
└─656168 /usr/sbin/keepalived -D
- Stop nginx on the backup host
[root@backup ~]# systemctl stop nginx
[root@backup ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
[root@backup ~]#
- Start nginx on the master host
[root@master ~]# systemctl start nginx
[root@master ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
[root@master ~]#
After the master host recovers, it does not take the VIP back (no preemption):
[root@master ~]# systemctl start keepalived
[root@master ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
[root@master ~]#
8.2 Comment out nopreempt and notify_backup on the master; stop nginx on the backup, and the master takes the VIP back
[root@master ~]# vim /etc/keepalived/keepalived.conf
# nopreempt
# notify_backup "/scripts/notify.sh backup 192.168.232.250"
[root@backup ~]# systemctl stop nginx
[root@backup ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
[root@backup ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
Restart the services
[root@master ~]# systemctl restart nginx keepalived
[root@master ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:3c:93:21 brd ff:ff:ff:ff:ff:ff
inet 192.168.232.134/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet 192.168.232.250/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe3c:9321/64 scope link
valid_lft forever preferred_lft forever
[root@master ~]#
9. Two recovery strategies:
9.1 Non-preemptive: after the failed master recovers, it stays backup; to move the VIP back, take the current (backup) node down manually
9.2 Preemptive: after the master recovers it automatically takes the VIP back; to avoid flapping, have the check script run several times at an interval before switching
III. The keepalived configuration file explained
1. The default keepalived configuration file
- The main keepalived configuration file is /etc/keepalived/keepalived.conf.
[root@master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs { //global settings
notification_email { //recipients for alert mail
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc //sender address for alert mail
smtp_server 192.168.232.1 //mail server address
smtp_connect_timeout 30 //SMTP connect timeout
router_id LVS_DEVEL //router ID: identifies this node; must be unique within the LAN
vrrp_skip_check_adv_addr //skip re-checking adverts whose address list already matched
vrrp_strict //strict adherence to the VRRP RFC
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 { //define a VRRP instance
state MASTER //initial state of this node: MASTER or BACKUP
interface ens33 //NIC the VRRP instance binds to, used to send VRRP packets
virtual_router_id 51 //virtual router ID; must be identical across the cluster
priority 100 //priority decides the master/backup roles; higher wins
nopreempt //disable preemption; must be combined with state BACKUP
advert_int 1 //advertisement interval between master and backup, in seconds
authentication { //authentication settings
auth_type PASS //authentication type; here, a password
auth_pass 1111 //must match on every keepalived node in the cluster; an 8-character random string is recommended
}
virtual_ipaddress { //VIP address(es) to use
192.168.232.250
}
}
virtual_server 192.168.232.250 80 { //virtual server definition
delay_loop 6 //health check interval, in seconds
lb_algo rr //LVS scheduling algorithm
lb_kind NAT //LVS forwarding mode
persistence_timeout 50 //session persistence timeout, in seconds
protocol TCP //layer-4 protocol (TCP)
sorry_server 192.168.232.150 80 //fallback server that answers clients when every real server is down (e.g. a maintenance page)
real_server 192.168.232.134 80 { //a real server that handles requests (here, the two director nodes themselves)
weight 1 //server weight; default 1
HTTP_GET {
url {
path /testurl/test.jsp //URL path to check
digest 640205b7b0fc66c1ea91c463fac6334d //expected digest of the page
}
url {
path /testurl2/test.jsp
digest 640205b7b0fc66c1ea91c463fac6334d
}
url {
path /testurl3/test.jsp
digest 640205b7b0fc66c1ea91c463fac6334d
}
connect_timeout 3 //connection timeout
nb_get_retry 3 //number of GET retries
delay_before_retry 3 //delay before each retry
}
}
real_server 192.168.232.128 80 {
weight 1
HTTP_GET {
url {
path /testurl/test.jsp
digest 640205b7b0fc66c1ea91c463fac6334c
}
url {
path /testurl2/test.jsp
digest 640205b7b0fc66c1ea91c463fac6334c
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@master ~]#
2. Customizing the main configuration file
2.1 The vrrp_instance section
nopreempt //Disable preemption. Preemption is on by default: when the higher-priority \
machine comes back online it takes MASTER from the lower-priority one. With nopreempt, \
the lower-priority machine stays MASTER even after the higher-priority one returns. \
To use this option, the initial state must be BACKUP.
preempt_delay //Preemption delay in seconds (range 0-1000, default 0): how many \
seconds to wait after detecting a lower-priority MASTER before preempting it.
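Putting the two options together: a node that is allowed to preempt but waits before doing so could be configured like this (a fragment; the 300-second delay is illustrative):

```
vrrp_instance VI_1 {
    state BACKUP           # initial state
    priority 100           # higher than the peer's
    preempt_delay 300      # wait 300s after seeing a lower-priority MASTER
    ...
}
```

preempt_delay only takes effect when nopreempt is not set.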
2.2 The vrrp_script section
//Purpose: adds a periodically executed script. The script's exit status is recorded by every VRRP instance that tracks it.
//Note: at least one VRRP instance must track it, and that instance's priority must not be 0 (valid priorities are 1-254).
vrrp_script <SCRIPT_NAME> {
...
}
SCRIPT_NAME: a name you choose; it does not have to match the script's file name
//Options:
script "/path/to/somewhere" //path of the script to execute (the real script file)
interval <INTEGER> //interval between runs, in seconds; default 1s
timeout <INTEGER> //number of seconds after which the script is considered failed
weight <-254 --- 254> //priority adjustment; default 2. E.g. with weight -20, master at 100 and backup at 90: 100-20=80, so the backup becomes master
rise <INTEGER> //number of consecutive successes before the script counts as up
fall <INTEGER> //number of consecutive failures before the script counts as down
user <USERNAME> [GROUPNAME] //user (and group) to run the script as
init_fail //assume the script starts in the failed state
//weight semantics:
1. If the script succeeds (exit status 0) and weight is greater than 0, priority increases.
2. If the script fails (non-zero exit status) and weight is less than 0, priority decreases.
3. In every other case, priority stays unchanged.
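The three rules above, with the values used in this article (master priority 100, backup 90, weight -20), can be sketched as:

```shell
#!/bin/bash
# Sketch of keepalived's priority adjustment for a tracked script.
# script_ok: 1 = script succeeded (exit 0), 0 = script failed.

effective_priority() {
    local base=$1 weight=$2 script_ok=$3
    if [ "$script_ok" -eq 1 ] && [ "$weight" -gt 0 ]; then
        echo $(( base + weight ))     # rule 1: success, positive weight
    elif [ "$script_ok" -eq 0 ] && [ "$weight" -lt 0 ]; then
        echo $(( base + weight ))     # rule 2: failure, negative weight
    else
        echo "$base"                  # rule 3: otherwise unchanged
    fi
}

effective_priority 100 -20 0   # master, failed check  -> prints 80
effective_priority 90  -20 1   # backup, healthy check -> prints 90
```

With the master down to 80 and the backup still at 90, the backup wins the next election and takes the VIP.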
2.3 The real_server section
weight <INT> //server weight; default 1
inhibit_on_failure //on a failed health check, set the server's weight to 0 \
//instead of removing it from the virtual server
notify_up <STRING> //script to run when the health check succeeds
notify_down <STRING> //script to run when the health check fails
uthreshold <INT> //maximum number of connections to this server
lthreshold <INT> //minimum number of connections to this server
2.4 The tcp_check section
connect_ip <IP ADDRESS> //IP address to connect to; defaults to the real server's IP
connect_port <PORT> //port to connect to; defaults to the real server's port
bindto <IP ADDRESS> //source address for the connection
bind_port <PORT> //source port for the connection
connect_timeout <INT> //connection timeout; default 5s
fwmark <INTEGER> //tag all outgoing check packets with this fwmark
warmup <INT> //random initial delay of up to N seconds, to avoid network bursts; 0 disables it
retry <INT> //number of retries; default 1
delay_before_retry <INT> //seconds to wait before retrying; default 1s
2.5 Example
global_defs {
router_id LVS_Server
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 51
priority 150
nopreempt
advert_int 1
authentication {
auth_type PASS
auth_pass wangqing
}
virtual_ipaddress {
192.168.232.250 dev ens33
}
}
virtual_server 192.168.232.250 80 {
delay_loop 3 //health check interval
lvs_sched rr //LVS scheduling algorithm (round-robin)
lvs_method DR //LVS forwarding mode (direct routing)
protocol TCP
real_server 192.168.232.134 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.232.128 8080 {
weight 1
TCP_CHECK {
connect_port 8080
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
IV. Split-brain
1. What is split-brain?
-
In an HA system with one master and one backup kept in step by heartbeats, when the "heartbeat line" between the two nodes breaks, what used to be a single coordinated system splits into two independent nodes. Having lost contact, each assumes the other has failed. The HA software on both nodes then acts like a "split brain", fighting over shared resources and services, with serious consequences: either the shared resources are carved up and neither side can bring the services up, or both sides bring the services up and write the shared storage (NFS, rsync targets) at the same time, corrupting data (a classic case is a database's online logs becoming corrupted).
-
When the heartbeat line breaks, the master thinks the backup is dead and the backup thinks the master is dead, and both grab the resources. Each side believes it is the healthy one; the backup sees that it holds the VIP, assumes it is the master, starts the services, and client access fails.
-
The generally agreed countermeasures against HA "split-brain" are roughly:
- Add redundant heartbeat links (e.g. two cables, making the heartbeat itself highly available) to reduce the chance of split-brain. (Bonding several NICs into one logical interface, i.e. link aggregation, serves the same goal.)
- Enable disk locking: the serving side locks the shared disk so that during a split-brain the peer cannot "steal" it. The catch is that if the lock holder never "unlocks" (say the node suddenly dies or crashes and can no longer run the unlock command), the standby can never take over the shared resources and services. Hence "smart" locks in HA designs: the serving side only takes the disk lock when it sees all heartbeat links down (it cannot perceive the peer at all); in normal operation no lock is held.
- Set up an arbitration mechanism, e.g. a reference IP such as the gateway. When the heartbeat links are completely down, each node pings the reference IP; if the ping fails, the break is on that node's own side. Since both its heartbeat and its service-facing link are down, starting (or keeping) the service would be useless, so it voluntarily gives up the contest and lets the node that can reach the reference IP run the service. To be safer still, the node that cannot ping the reference IP can simply reboot itself, releasing any shared resources it might still be holding.
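The arbitration idea in the last bullet can be sketched as a script run when all heartbeats are lost; the reference IP here (the lab network's gateway) is an assumption:

```shell
#!/bin/bash
# Hedged sketch of reference-IP arbitration: when the heartbeat link is
# completely down, ping a reference IP (e.g. the gateway). If we cannot
# reach it, the fault is on our side, so stop competing for the VIP.
REF_IP=${REF_IP:-192.168.232.2}   # assumed gateway address

can_reach() {
    # two pings, one-second timeout each
    ping -c 2 -W 1 "$1" > /dev/null 2>&1
}

# A real arbitration hook would do something like:
#   if ! can_reach "$REF_IP"; then
#       systemctl stop keepalived   # or reboot, to release shared resources
#   fi
```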
2. Causes of split-brain
2.1 In general, split-brain occurs for one of the following reasons:
- The heartbeat link between the HA pair fails and the nodes can no longer communicate:
- the heartbeat cable is broken (severed or aged)
- a NIC or its driver is broken (e.g. vmnet8 missing after installing VMware), or the IP configuration conflicts (directly connected NICs with a duplicated IP, or the same IP as another host)
- a device on the heartbeat path fails (a NIC, or the switch port the host connects to)
- the arbitration node fails (when an arbitration scheme is used): the arbiter goes down, and a node that cannot ping the arbiter takes itself offline
- iptables on the HA servers blocks the heartbeat traffic; disable the firewall and SELinux on both master and backup
- the heartbeat NIC addresses are misconfigured so heartbeats cannot be sent (e.g. the two ends are on different subnets and cannot reach each other)
- other misconfiguration, such as mismatched heartbeat methods, heartbeat broadcast conflicts, or software bugs
2.2 How split-brain can occur in keepalived
A split-brain also occurs when the two ends of the same VRRP instance are configured with different virtual_router_id values.
3. Common ways to mitigate split-brain
3.1 In a production environment, split-brain can be guarded against in several ways:
- Use both a serial cable and an Ethernet cable, i.e. two heartbeat paths at once, so that if one fails the other still carries the heartbeat
- When a split-brain is detected, forcibly shut down one node (this requires special fencing hardware, e.g. STONITH devices): when the backup stops receiving heartbeats, it sends a power-off command to the master over a separate channel
- Monitor and alert on split-brain (mail, SMS, on-call staff) so a human can arbitrate as soon as the problem occurs, limiting the damage. Baidu's alert SMS, for instance, distinguishes uplink from downlink: the alert goes to the admin's phone, and the admin can reply with a digit or a short string that the server interprets as an instruction to handle the fault automatically, which shortens the time to resolution.
3.2 Note
- When implementing an HA solution, decide from the actual business requirements whether this kind of loss is tolerable. For ordinary website workloads it usually is.
4. Monitoring for split-brain
4.1 What to monitor
-
Split-brain monitoring should run on the backup server, as a custom zabbix check.
-
Watch whether the backup holds the VIP.
The VIP can appear on the backup in two cases:
- a split-brain has occurred
- a normal master-to-backup failover
-
So the check only signals that a split-brain may have happened; it cannot prove one, because a normal failover also moves the VIP to the backup.
-
The monitoring script:
[root@slave ~]# mkdir -p /scripts && cd /scripts
[root@slave scripts]# vim check_keepalived.sh
#!/bin/bash
if [ `ip a show ens160 |grep 192.168.232.250|wc -l` -ne 0 ]
then
echo "keepalived is error!"
else
echo "keepalived is OK !"
fi
- When writing the script, replace the NIC name and the VIP with your own; remember to make the script executable, and change the owner and group of the /scripts directory to zabbix
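A slightly more robust variant of the check (same NIC and VIP as above; adjust both for your setup) escapes the dots so grep matches the VIP literally, and requires the prefix slash so that, say, 192.168.232.2501 cannot match:

```shell
#!/bin/bash
# Hedged variant of the split-brain check script above.
NIC=${NIC:-ens160}
VIP=${VIP:-192.168.232.250}

vip_present() {
    # $1: output of `ip a show <nic>`; succeeds if the VIP is configured
    echo "$1" | grep -q "inet ${VIP//./\\.}/"
}

# The monitored item would then be:
#   vip_present "$(ip a show "$NIC")" && echo 1 || echo 0
```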
4.2 Installing zabbix
- Environment
Host | Services installed | IP |
---|---|---|
master | keepalived, nginx | 192.168.232.134 |
backup | keepalived, nginx, zabbix agent | 192.168.232.128 |
zabbix | zabbix server | 192.168.232.132 |
-
Install the zabbix agent on the backup host, and the zabbix server on 192.168.232.132 to manage the monitoring through the web UI.
-
States the monitoring must distinguish:
- normally, nginx and keepalived are running on the master, while on the backup keepalived is running and nginx is stopped
- when the master fails, the backup's scripts take over the VIP
- during a split-brain, both the master and the backup hold the VIP
4.3 Writing the monitoring script
- Write the script on the backup host (the zabbix agent side)
[root@backup ~]# cd /scripts/
[root@backup scripts]# pwd
/scripts
[root@backup scripts]# cat check_keepalived.sh
#!/bin/bash
if [ `ip a show ens160 |grep 192.168.232.250|wc -l` -ne 0 ]
then
echo "1"
else
echo "0"
fi
[root@backup scripts]#
[root@backup scripts]# chmod +x check_keepalived.sh
[root@backup scripts]# ./check_keepalived.sh //test the script
0
[root@backup scripts]# chown -R zabbix.zabbix /scripts/
- Edit the zabbix agent configuration file
[root@backup etc]# vim zabbix_agentd.conf
[root@backup etc]# pwd
/usr/local/etc
[root@backup etc]#
Change these two lines:
UserParameter=check_keepalived,/bin/bash /scripts/check_keepalived.sh
UnsafeUserParameters=1
Then restart the agent
[root@backup ~]# pkill zabbix_agentd
[root@backup ~]# zabbix_agentd
- Test from the zabbix server
[root@zabbix ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:10050 0.0.0.0:*
LISTEN 0 128 0.0.0.0:10051 0.0.0.0:*
LISTEN 0 128 127.0.0.1:9000 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 80 *:3306 *:*
LISTEN 0 128 *:80 *:*
LISTEN 0 128 [::]:22 [::]:*
[root@zabbix ~]# zabbix_get -s 192.168.232.128 -k check_keepalived
0
[root@zabbix ~]#
4.4 Adding the monitoring in zabbix
- Add the host
- Add the item
- Add the trigger
- Add the alert action
4.5 Test: trigger a split-brain by changing the virtual router ID so the two nodes differ
[root@master ~]# cd /etc/keepalived/
[root@master keepalived]# ls
keepalived.conf keepalived.conf.bak
[root@master keepalived]# vim keepalived.conf
[root@master keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb01
}
vrrp_script nginx_check {
script "/scripts/check_nginx.sh"
interval 1
weight -20
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 50
# nopreempt
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass mushuang
}
virtual_ipaddress {
192.168.232.250
}
track_script {
nginx_check
}
notify_master "/scripts/notify.sh master 192.168.232.250"
notify_backup "/scripts/notify.sh backup 192.168.232.250"
}
virtual_server 192.168.232.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.232.134 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.232.128 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@master keepalived]#
[root@master keepalived]# systemctl restart keepalived
4.6 Check the VIP on the master
[root@master keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:3c:93:21 brd ff:ff:ff:ff:ff:ff
inet 192.168.232.134/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet 192.168.232.250/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe3c:9321/64 scope link
valid_lft forever preferred_lft forever
[root@master keepalived]#
4.7 Check the VIP on the backup host
[root@backup scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:03:67:8a brd ff:ff:ff:ff:ff:ff
inet 192.168.232.128/24 brd 192.168.232.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet 192.168.232.250/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe03:678a/64 scope link
valid_lft forever preferred_lft forever
[root@backup scripts]#
4.8 Observe
- On the zabbix server (and in the web UI)
[root@zabbix ~]# zabbix_get -s 192.168.232.128 -k check_keepalived
0
[root@zabbix ~]# zabbix_get -s 192.168.232.128 -k check_keepalived
1
[root@zabbix ~]#