Keepalived in Detail

Contents

1. High-availability clusters and keepalived
   - High-availability clusters
   - VRRP concepts
   - Introduction to keepalived
2. Building the base lab environment
3. Managing virtual routers with keepalived
   - Global configuration
   - Configuring a virtual router
4. Virtual router communication settings
5. Separating keepalived logs
6. Independent sub-configuration files
7. Preemptive and non-preemptive modes
   - Non-preemptive mode
   - Preempt-delay mode
8. VIP unicast configuration
9. Mail notifications
10. A master/master dual-master Keepalived architecture
11. High availability for IPVS
    - A dual-master LVS-DR setup
    - Switching master/backup roles with a script
    - High availability for haproxy


1. High-availability clusters and keepalived

High-availability clusters

Cluster types:

  • Load balancing (LB): LVS, HAProxy, nginx (http/upstream, stream/upstream); distributes network traffic.
  • High availability (HA): for example databases and Redis; keeps a service continuously available.
  • High-performance computing (HPC): focuses on providing raw computing power.

System availability: defined through a service-level agreement (SLA) and computed as A = MTBF / (MTBF + MTTR). Common targets are 99.9%, 99.99%, 99.999%, 99.9999%; for example, 99.99% allows roughly 53 minutes of downtime per year.

System failures: hardware failures (design flaws, wear, force majeure) and software failures (design flaws, bugs).

Achieving high availability: the key is reducing the mean time to repair (MTTR). This is done with redundancy, in master/backup (active/passive) or dual-master (active/active) layouts, plus a heartbeat mechanism (HEARTBEAT) for state monitoring and failover.

Virtual Router Redundancy Protocol (VRRP): removes the single point of failure of a static gateway. It can run in hardware (routers and layer-3 switches) or in software (keepalived).

VRRP concepts

Virtual router: Virtual Router
Virtual router ID: VRID (0-255), uniquely identifies a virtual router
VIP: Virtual IP
VMAC: Virtual MAC (00-00-5e-00-01-VRID)
Physical routers:
    master: the active device
    backup: the standby device
    priority: priority value
Advertisements: carry heartbeat and priority information and are sent periodically.
Authentication: none, simple password, or pre-shared key / MD5.
Working modes:
    master/backup: a single virtual router.
    master/master: two virtual routers, one node acting as master/backup (virtual router 1) and the other as backup/master (virtual router 2).

Introduction to keepalived

        Load balancing is a method of distributing IP traffic across a cluster of real servers, providing one or more highly available virtual services. When designing a load-balanced topology, the availability of the load balancer itself must be considered as well as that of the real servers behind it.

        Keepalived provides frameworks for both load balancing and high availability. The load-balancing framework relies on the well-known and widely used Linux Virtual Server (IPVS) kernel module, which provides layer-4 load balancing. Keepalived implements a set of health checkers that dynamically and adaptively maintain and manage a load-balanced server pool according to the health of its members. High availability is achieved through the Virtual Router Redundancy Protocol (VRRP), the basic building block of router failover. Keepalived also implements a set of hooks into the VRRP finite state machine, providing low-level, high-speed protocol interaction. Each Keepalived framework can be used independently or together to build a resilient infrastructure.

In short, Keepalived provides two main functions:

  • Health checking for LVS systems
  • A VRRPv2 stack to handle load-balancer failover

2. Building the base lab environment

Clone four hosts: realserver1, realserver2, KAT1, KAT2.

Remember to adjust each VM's memory and CPU allocation.

Time must be synchronized across all nodes: ntp, chrony
Disable the firewall and SELinux
Nodes can reach each other by hostname: optional
An /etc/hosts file is the recommended way to do this: optional
Key-based ssh between the root users of all nodes: optional
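The prerequisites above can be satisfied with a few commands on each node (a sketch assuming a RHEL/CentOS 7-style environment with chronyd and firewalld; adapt the service names to your distribution):

```shell
systemctl enable --now chronyd        # keep node clocks synchronized
systemctl disable --now firewalld     # stop and disable the firewall
setenforce 0                          # put SELinux in permissive mode for this boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config  # persist across reboots
```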

① Add network configuration on each host

[root@localhost ~]# vmset.sh eth0 172.25.254.110 realserver1.zf.org
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)

[root@localhost ~]# vmset.sh eth0 172.25.254.120 realserver2.zf.org
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)

[root@localhost ~]# vmset.sh eth0 172.25.254.10 KAT1.zf.org
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)

[root@localhost ~]# vmset.sh eth0 172.25.254.20 KAT2.zf.org
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)

② Install httpd on both real servers and publish distinguishing test pages

[root@realserver1 ~]# yum install httpd -y
[root@realserver2 ~]# yum install httpd -y
[root@realserver1 ~]# echo 172.25.254.110 > /var/www/html/index.html
[root@realserver2 ~]# echo 172.25.254.120 > /var/www/html/index.html
[root@realserver1 ~]# systemctl enable --now httpd
[root@realserver2 ~]# systemctl enable --now httpd

Testing the base environment
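A quick check from any host confirms that both web servers answer with the pages created above:

```shell
curl 172.25.254.110     # expect: 172.25.254.110
curl 172.25.254.120     # expect: 172.25.254.120
```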

3. Managing virtual routers with keepalived

Global configuration

! Configuration File for keepalived
global_defs {
    notification_email {
        594233887@qq.com            # recipients of the failover notification mail;
        timiniglee-zln@163.com      # list one address per line
    }
    notification_email_from keepalived@KA1.timinglee.org   # sender address
    smtp_server 127.0.0.1           # mail server address
    smtp_connect_timeout 30         # mail server connection timeout
    router_id KA1.timinglee.org     # unique identifier of this keepalived host;
                                    # the hostname is recommended, though duplicates
                                    # across nodes do not break anything
    vrrp_skip_check_adv_addr        # by default every advertisement is fully checked,
                                    # which costs performance; with this option, the
                                    # check is skipped when an advertisement comes
                                    # from the same router as the previous one
    vrrp_strict                     # strictly follow the VRRP protocol;
                                    # with this enabled the service will not start if:
                                    # 1. there is no VIP
                                    # 2. unicast peers are configured
                                    # 3. an IPv6 address is used with VRRP version 2
                                    # leaving this option out is recommended
    vrrp_garp_interval 0            # gratuitous ARP send delay, 0 = no delay
    vrrp_gna_interval 0             # unsolicited NA message send delay
    vrrp_mcast_group4 224.0.0.18    # multicast group address
}

Install the keepalived package on KAT1 and KAT2

[root@kat1 ~]# yum install keepalived -y
[root@kat2 ~]# yum install keepalived -y

Edit the configuration file

[root@kat1 ~]# vim /etc/keepalived/keepalived.conf

Configuring a virtual router

vrrp_instance VI_1 {
    state MASTER
    interface eth0                  # physical interface bound to this virtual router,
                                    # e.g. eth0; it may differ from the NIC carrying the VIP
    virtual_router_id 51            # unique ID of the virtual router, range 0-255;
                                    # the service will not start if it is not unique;
                                    # all keepalived nodes of the same virtual router
                                    # must use the same value, and the value must be
                                    # unique on the network segment
    priority 100                    # priority of this node within the virtual router,
                                    # range 1-254; higher wins, and each keepalived
                                    # node must use a different value
    advert_int 1                    # VRRP advertisement interval, default 1s
    authentication {                # authentication
        auth_type AH|PASS           # AH = IPSEC (not recommended), PASS = simple password (recommended)
        auth_pass 1111              # pre-shared key, only the first 8 characters count;
                                    # must match on all nodes of the same virtual router
    }
    virtual_ipaddress {             # VIPs; production setups may list hundreds of addresses
        <IPADDR>/<MASK> brd <IPADDR> dev <STRING> scope <SCOPE> label <LABEL>
        172.25.254.100              # a VIP without a device defaults to eth0; without
                                    # a /prefix the mask defaults to /32
        172.25.254.101/24 dev eth1
        172.25.254.102/24 dev eth2 label eth2:1
    }
}

Edit the configuration file

[root@kat1 ~]# vim /etc/keepalived/keepalived.conf
[root@kat1 ~]# systemctl enable --now keepalived.service
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@kat1 ~]# systemctl restart keepalived.service 
[root@kat1 ~]# scp /etc/keepalived/keepalived.conf  root@172.25.254.20:/etc/keepalived/keepalived.conf 
The authenticity of host '172.25.254.20 (172.25.254.20)' can't be established.
ECDSA key fingerprint is SHA256:p8+SUh5ckDQItOAIxbzYL28fpdswAsYDOXJUm6sD/6k.
ECDSA key fingerprint is MD5:30:56:50:67:5e:d4:ca:37:33:ff:e0:ca:c3:71:cc:be.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.25.254.20' (ECDSA) to the list of known hosts.
root@172.25.254.20's password: 
keepalived.conf                                               100% 3542     2.4MB/s   00:00

Set KAT1's router_id and the VIP address

Check the VIP on KAT1

Adjust KAT2's priority

[root@kat2 ~]# vim /etc/keepalived/keepalived.conf 
[root@kat2 ~]# systemctl restart keepalived.service
[root@kat2 ~]# systemctl enable --now keepalived.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.

Because its priority is lower than kat1's, KAT2 holds no VIP.

Packet-capture test: from realserver1, log in to kat1 remotely and stop the keepalived service to simulate a failure. tcpdump on kat1 then shows the VIP being advertised by .20 (kat2). After keepalived is started again, tcpdump shows the VIP advertised by .10 (kat1), and kat2 no longer holds the VIP.
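The capture itself can be done by filtering on the VRRP multicast group (a sketch; the interface name eth0 follows the lab setup above):

```shell
tcpdump -i eth0 -nn host 224.0.0.18   # watch VRRP advertisements to the multicast group
```

While kat1 is master the advertisements originate from 172.25.254.10; after stopping keepalived on kat1 they originate from 172.25.254.20.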

 

 

4. Virtual router communication settings

The DROP rule installed by vrrp_strict blocks communication with the VIP.

Edit the configuration on kat1 and kat2 and restart the service:

vim /etc/keepalived/keepalived.conf
systemctl restart keepalived.service

Test from the real servers:

Alternatively, comment out the two global parameters on kat1 and kat2 (vrrp_strict is the one responsible for the DROP rule).

Test:

 

5. Separating keepalived logs

[root@kat1 ~]# vim /etc/sysconfig/keepalived 
[root@kat1 ~]# systemctl restart keepalived.service

[root@kat1 ~]# vim /etc/rsyslog.conf
[root@kat1 ~]# systemctl restart rsyslog.service
[root@kat1 ~]# systemctl restart keepalived.service
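The two edits are not shown above; a commonly used combination on stock CentOS 7 packaging looks like this (facility number 6 is an arbitrary choice, any free localN facility works):

```
# /etc/sysconfig/keepalived
KEEPALIVED_OPTIONS="-D -S 6"        # -S 6: send keepalived's messages to syslog facility local6

# /etc/rsyslog.conf (appended line)
local6.*    /var/log/keepalived.log
```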

View the separated log

[root@kat1 ~]# ll /var/log/keepalived.log 
-rw------- 1 root root 8268 Aug 12 18:19 /var/log/keepalived.log
[root@kat1 ~]# cat /var/log/keepalived.log 
Aug 12 18:19:14 kat1 Keepalived[5702]: Stopping
Aug 12 18:19:14 kat1 Keepalived_vrrp[5704]: VRRP_Instance(VI_1) sent 0 priority
Aug 12 18:19:14 kat1 Keepalived_vrrp[5704]: VRRP_Instance(VI_1) removing protocol VIPs.
Aug 12 18:19:14 kat1 Keepalived_healthcheckers[5703]: Stopped
Aug 12 18:19:15 kat1 Keepalived_vrrp[5704]: Stopped
Aug 12 18:19:15 kat1 Keepalived[5702]: Stopped Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
Aug 12 18:19:15 kat1 Keepalived[5822]: Starting Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
Aug 12 18:19:15 kat1 Keepalived[5822]: Opening file '/etc/keepalived/keepalived.conf'.
Aug 12 18:19:15 kat1 Keepalived[5823]: Starting Healthcheck child process, pid=5824
Aug 12 18:19:15 kat1 Keepalived[5823]: Starting VRRP child process, pid=5825
Aug 12 18:19:15 kat1 Keepalived_healthcheckers[5824]: Initializing ipvs
Aug 12 18:19:15 kat1 Keepalived_healthcheckers[5824]: Opening file '/etc/keepalived/keepalived.conf'.
Aug 12 18:19:15 kat1 Keepalived_healthcheckers[5824]: Activating healthchecker for service [192.168.200.100]:443
Aug 12 18:19:15 kat1 Keepalived_healthcheckers[5824]: Activating healthchecker for service [10.10.10.2]:1358
Aug 12 18:19:15 kat1 Keepalived_healthcheckers[5824]: Activating healthchecker for service [10.10.10.2]:1358
Aug 12 18:19:15 kat1 Keepalived_healthcheckers[5824]: Activating healthchecker for service [10.10.10.3]:1358
Aug 12 18:19:15 kat1 Keepalived_healthcheckers[5824]: Activating healthchecker for service [10.10.10.3]:1358
Aug 12 18:19:15 kat1 Keepalived_vrrp[5825]: Registering Kernel netlink reflector
Aug 12 18:19:15 kat1 Keepalived_vrrp[5825]: Registering Kernel netlink command channel
Aug 12 18:19:15 kat1 Keepalived_vrrp[5825]: Registering gratuitous ARP shared channel
Aug 12 18:19:15 kat1 Keepalived_vrrp[5825]: Opening file '/etc/keepalived/keepalived.conf'.
Aug 12 18:19:15 kat1 Keepalived_vrrp[5825]: VRRP_Instance(VI_1) removing protocol VIPs.
Aug 12 18:19:15 kat1 Keepalived_vrrp[5825]: Using LinkWatch kernel netlink reflector...
Aug 12 18:19:15 kat1 Keepalived_vrrp[5825]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Aug 12 18:19:16 kat1 Keepalived_vrrp[5825]: VRRP_Instance(VI_1) Transition to MASTER STATE
Aug 12 18:19:17 kat1 Keepalived_vrrp[5825]: VRRP_Instance(VI_1) Entering MASTER STATE
Aug 12 18:19:17 kat1 Keepalived_vrrp[5825]: VRRP_Instance(VI_1) setting protocol VIPs.
Aug 12 18:19:17 kat1 Keepalived_vrrp[5825]: Sending gratuitous ARP on eth0 for 172.25.254.100
Aug 12 18:19:17 kat1 Keepalived_vrrp[5825]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 172.25.254.100
Aug 12 18:19:17 kat1 Keepalived_vrrp[5825]: Sending gratuitous ARP on eth0 for 172.25.254.100
Aug 12 18:19:17 kat1 Keepalived_vrrp[5825]: Sending gratuitous ARP on eth0 for 172.25.254.100
Aug 12 18:19:17 kat1 Keepalived_vrrp[5825]: Sending gratuitous ARP on eth0 for 172.25.254.100
Aug 12 18:19:17 kat1 Keepalived_vrrp[5825]: Sending gratuitous ARP on eth0 for 172.25.254.100
Aug 12 18:19:21 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.2]:1358.
Aug 12 18:19:21 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.4]:1358.
Aug 12 18:19:22 kat1 Keepalived_vrrp[5825]: Sending gratuitous ARP on eth0 for 172.25.254.100
Aug 12 18:19:22 kat1 Keepalived_vrrp[5825]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 172.25.254.100
Aug 12 18:19:22 kat1 Keepalived_vrrp[5825]: Sending gratuitous ARP on eth0 for 172.25.254.100
Aug 12 18:19:22 kat1 Keepalived_vrrp[5825]: Sending gratuitous ARP on eth0 for 172.25.254.100
Aug 12 18:19:22 kat1 Keepalived_vrrp[5825]: Sending gratuitous ARP on eth0 for 172.25.254.100
Aug 12 18:19:22 kat1 Keepalived_vrrp[5825]: Sending gratuitous ARP on eth0 for 172.25.254.100
Aug 12 18:19:22 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.5]:1358.
Aug 12 18:19:24 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.3]:1358.
Aug 12 18:19:24 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.201.100]:443.
Aug 12 18:19:27 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.2]:1358.
Aug 12 18:19:27 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.4]:1358.
Aug 12 18:19:28 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.5]:1358.
Aug 12 18:19:30 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.3]:1358.
Aug 12 18:19:30 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.201.100]:443.
Aug 12 18:19:33 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.2]:1358.
Aug 12 18:19:33 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.4]:1358.
Aug 12 18:19:34 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.5]:1358.
Aug 12 18:19:36 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.3]:1358.
Aug 12 18:19:36 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.201.100]:443.
Aug 12 18:19:39 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.2]:1358.
Aug 12 18:19:39 kat1 Keepalived_healthcheckers[5824]: Check on service [192.168.200.2]:1358 failed after 3 retry.
Aug 12 18:19:39 kat1 Keepalived_healthcheckers[5824]: Removing service [192.168.200.2]:1358 from VS [10.10.10.2]:1358
Aug 12 18:19:39 kat1 Keepalived_healthcheckers[5824]: Remote SMTP server [127.0.0.1]:25 connected.
Aug 12 18:19:39 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.4]:1358.
Aug 12 18:19:39 kat1 Keepalived_healthcheckers[5824]: Check on service [192.168.200.4]:1358 failed after 3 retry.
Aug 12 18:19:39 kat1 Keepalived_healthcheckers[5824]: Removing service [192.168.200.4]:1358 from VS [10.10.10.3]:1358
Aug 12 18:19:39 kat1 Keepalived_healthcheckers[5824]: Remote SMTP server [127.0.0.1]:25 connected.
Aug 12 18:19:39 kat1 Keepalived_healthcheckers[5824]: SMTP alert successfully sent.
Aug 12 18:19:39 kat1 Keepalived_healthcheckers[5824]: SMTP alert successfully sent.
Aug 12 18:19:40 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.5]:1358.
Aug 12 18:19:40 kat1 Keepalived_healthcheckers[5824]: Check on service [192.168.200.5]:1358 failed after 3 retry.
Aug 12 18:19:40 kat1 Keepalived_healthcheckers[5824]: Removing service [192.168.200.5]:1358 from VS [10.10.10.3]:1358
Aug 12 18:19:40 kat1 Keepalived_healthcheckers[5824]: Lost quorum 1-0=1 > 0 for VS [10.10.10.3]:1358
Aug 12 18:19:40 kat1 Keepalived_healthcheckers[5824]: Remote SMTP server [127.0.0.1]:25 connected.
Aug 12 18:19:40 kat1 Keepalived_healthcheckers[5824]: SMTP alert successfully sent.
Aug 12 18:19:42 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.200.3]:1358.
Aug 12 18:19:42 kat1 Keepalived_healthcheckers[5824]: Check on service [192.168.200.3]:1358 failed after 3 retry.
Aug 12 18:19:42 kat1 Keepalived_healthcheckers[5824]: Removing service [192.168.200.3]:1358 from VS [10.10.10.2]:1358
Aug 12 18:19:42 kat1 Keepalived_healthcheckers[5824]: Lost quorum 1-0=1 > 0 for VS [10.10.10.2]:1358
Aug 12 18:19:42 kat1 Keepalived_healthcheckers[5824]: Adding sorry server [192.168.200.200]:1358 to VS [10.10.10.2]:1358
Aug 12 18:19:42 kat1 Keepalived_healthcheckers[5824]: Removing alive servers from the pool for VS [10.10.10.2]:1358
Aug 12 18:19:42 kat1 Keepalived_healthcheckers[5824]: Remote SMTP server [127.0.0.1]:25 connected.
Aug 12 18:19:42 kat1 Keepalived_healthcheckers[5824]: SMTP alert successfully sent.
Aug 12 18:19:42 kat1 Keepalived_healthcheckers[5824]: Timeout connecting server [192.168.201.100]:443.
Aug 12 18:19:42 kat1 Keepalived_healthcheckers[5824]: Check on service [192.168.201.100]:443 failed after 3 retry.
Aug 12 18:19:42 kat1 Keepalived_healthcheckers[5824]: Removing service [192.168.201.100]:443 from VS [192.168.200.100]:443
Aug 12 18:19:42 kat1 Keepalived_healthcheckers[5824]: Lost quorum 1-0=1 > 0 for VS [192.168.200.100]:443
Aug 12 18:19:42 kat1 Keepalived_healthcheckers[5824]: Remote SMTP server [127.0.0.1]:25 connected.
Aug 12 18:19:42 kat1 Keepalived_healthcheckers[5824]: SMTP alert successfully sent.

6. Independent sub-configuration files

[root@kat1 ~]# vim /etc/keepalived/keepalived.conf
[root@kat1 ~]# systemctl restart keepalived.service
Job for keepalived.service failed because the control process exited with error code. See "systemctl status keepalived.service" and "journalctl -xe" for details.

[root@kat1 ~]# mkdir -p /etc/keepalived/conf.d   # create the sub-configuration directory
[root@kat1 ~]# vim /etc/keepalived/conf.d/172.25.254.100.conf
[root@kat1 ~]# systemctl restart keepalived.service
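The edit to the main file replaces the instance details with an include of the sub-directory; the directive is simply:

```
# /etc/keepalived/keepalived.conf
include /etc/keepalived/conf.d/*.conf
```

The vrrp_instance block is then moved into /etc/keepalived/conf.d/172.25.254.100.conf, as created above.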

Test:

7. Preemptive and non-preemptive modes

    Keepalived defaults to preemptive mode (preempt): when a higher-priority host comes back online it reclaims the master role from the lower-priority host, which makes the VIP bounce back and forth between the KA hosts and causes network jitter. Non-preemptive mode (nopreempt) is therefore recommended: a recovered higher-priority host does not take the master role back.
    In non-preemptive mode, if the original master goes down, the VIP migrates to the new host; if that host later goes down as well, the VIP still migrates back to the original host.

Non-preemptive mode

Parameter: nopreempt    # non-preemptive mode
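nopreempt goes inside the vrrp_instance block, and both nodes should be configured with state BACKUP for it to take effect (keepalived ignores nopreempt on a node whose state is MASTER); priority then decides the initial master:

```
vrrp_instance VI_1 {
    state BACKUP        # both nodes BACKUP; the higher priority becomes master first
    nopreempt           # do not take the master role back after recovery
    priority 100        # kat2 uses a lower value, e.g. 80
    ...
}
```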

Modify the main configuration on kat1 and kat2

[root@kat1 ~]# vim /etc/keepalived/keepalived.conf 
[root@kat1 ~]# systemctl restart keepalived.service
[root@kat2 ~]# vim /etc/keepalived/keepalived.conf 
[root@kat2 ~]# systemctl restart keepalived.service

Test:

Stop the keepalived service on kat2 and test.

Start keepalived on kat2 again and test once more.

Preempt-delay mode

     In preempt-delay mode, a recovered higher-priority host does not reclaim the VIP immediately but waits for a configurable delay (300s by default). Parameter: preempt_delay <seconds>  # preemption delay, default 300s.
For a quicker demonstration, set the preemption delay to 5s:
[root@kat1 ~]# vim /etc/keepalived/keepalived.conf
[root@kat1 ~]# systemctl restart keepalived.service

[root@kat2 ~]# vim /etc/keepalived/keepalived.conf 
[root@kat2 ~]# systemctl restart keepalived.service
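The relevant lines in the instance block look like this (matching the 5s delay used here; like nopreempt, preempt_delay requires state BACKUP):

```
vrrp_instance VI_1 {
    state BACKUP
    preempt_delay 5     # wait 5s after recovery before reclaiming the VIP
    ...
}
```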

Test: stop the service on kat1 first; kat2 takes the VIP.

[root@kat1 ~]# systemctl stop keepalived.service 

Start kat1's service again; after 5s, kat1 holds the VIP.

[root@kat1 ~]# systemctl start keepalived.service 

Note: after the experiment, comment out the preempt-delay parameter to restore the default preemptive mode.

8. VIP unicast configuration

       By default keepalived hosts advertise to each other via multicast, which can congest the network; switching to unicast reduces this traffic. Note: unicast cannot be enabled together with vrrp_strict, otherwise the service will not start and the error below is recorded in the messages file.
[root@kat1 ~]# vim /etc/keepalived/keepalived.conf
[root@kat1 ~]# systemctl restart keepalived.service
[root@kat2 ~]# vim /etc/keepalived/keepalived.conf
[root@kat2 ~]# systemctl restart keepalived.service
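The multicast advertisements are replaced by listing each peer explicitly inside the instance; the addresses below follow the lab topology (mirror src and peer on kat2):

```
# on kat1 (172.25.254.10), inside vrrp_instance VI_1
unicast_src_ip 172.25.254.10    # local address the advertisements are sent from
unicast_peer {
    172.25.254.20               # address of the other keepalived node
}
```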

Packet-capture test:

 

Now stop kat1's service and capture again

Start kat1's service

9. Mail notifications

Install the mail client
[root@kat1 ~]# yum install mailx -y
[root@kat2 ~]# yum install mailx -y

In the QQ Mail web interface, enable the POP3/IMAP/SMTP/Exchange/CardDAV service and note the authorization code.

[root@kat1 ~]# vim /etc/mail.rc
[root@kat2 ~]# vim /etc/mail.rc
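The mail.rc edit is not shown; for QQ Mail over SMTP it typically looks like the following (a sketch; the account is the one used in the tests below, and the authorization code is a placeholder for the code obtained from the QQ Mail settings page):

```
# /etc/mail.rc (appended)
set from=1373771818@qq.com
set smtp=smtp.qq.com
set smtp-auth-user=1373771818@qq.com
set smtp-auth-password=xxxxxxxxxxxxxxxx   # authorization code, not the account password
set smtp-auth=login
```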

[root@kat1 ~]# echo hello world | mail -s test 1373771818@qq.com
[root@kat2 ~]# echo test | mail -s test 1373771818@qq.com

Check the mailbox on QQ Mail

Configuring the notification script

[root@kat1 ~]# vim /etc/keepalived/mail.sh
[root@kat1 ~]# chmod +x /etc/keepalived/mail.sh
[root@kat2 ~]# vim /etc/keepalived/mail.sh
[root@kat2 ~]# chmod +x /etc/keepalived/mail.sh
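The script body is not shown above; a minimal sketch that mails the state change (the recipient follows the earlier tests, the message wording is an assumption):

```shell
#!/bin/bash
# /etc/keepalived/mail.sh - keepalived calls this with the new state (master/backup/fault)
mail_dest=1373771818@qq.com

mail_send() {
    subject="$HOSTNAME keepalived state: $1"
    body="$(date '+%F %T'): vrrp transition, $HOSTNAME changed to $1"
    echo "$body" | mail -s "$subject" "$mail_dest"
}

case "$1" in
    master|backup|fault) mail_send "$1" ;;
    *) echo "usage: $0 master|backup|fault"; exit 1 ;;
esac
```

It is hooked into the instance with notify_master "/etc/keepalived/mail.sh master", notify_backup "/etc/keepalived/mail.sh backup" and notify_fault "/etc/keepalived/mail.sh fault".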

Edit the main configuration file

[root@kat1 ~]# vim /etc/keepalived/keepalived.conf
[root@kat1 ~]# systemctl restart keepalived.service

[root@kat2 ~]# vim /etc/keepalived/keepalived.conf 
[root@kat2 ~]# systemctl restart keepalived.service

Check the mail:

Test:


[root@kat1 ~]# systemctl stop  keepalived.service

[root@kat1 ~]# systemctl start  keepalived.service

10. A master/master dual-master Keepalived architecture

[root@kat1 ~]# vim /etc/keepalived/keepalived.conf
[root@kat1 ~]# systemctl start  keepalived.service 

[root@kat2 ~]# vim /etc/keepalived/keepalived.conf 
[root@kat2 ~]# systemctl restart keepalived.service
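The dual-master layout uses two vrrp_instance blocks with mirrored roles; a sketch for kat1 (kat2 swaps the states and priorities; the second VIP 172.25.254.200 and router id 52 are assumptions, since the exact values are not shown above):

```
vrrp_instance VI_1 {                # kat1 is master for the first VIP
    state MASTER
    virtual_router_id 51
    priority 100
    virtual_ipaddress { 172.25.254.100/24 dev eth0 label eth0:1 }
    ...
}
vrrp_instance VI_2 {                # kat1 is backup for the second VIP
    state BACKUP
    virtual_router_id 52
    priority 80
    virtual_ipaddress { 172.25.254.200/24 dev eth0 label eth0:2 }
    ...
}
```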

Test:

11. High availability for IPVS

A dual-master LVS-DR setup

[root@realserver1 ~]# ip a a 172.25.254.100/32 dev lo
[root@realserver1 ~]# cd /etc/sysconfig/network-scripts/
[root@realserver1 network-scripts]# ls
ifcfg-ens33  ifdown-bnep  ifdown-isdn    ifdown-Team      ifup-bnep  ifup-isdn   ifup-routes    ifup-wireless
ifcfg-eth0   ifdown-eth   ifdown-post    ifdown-TeamPort  ifup-eth   ifup-plip   ifup-sit       init.ipv6-global
ifcfg-eth1   ifdown-ib    ifdown-ppp     ifdown-tunnel    ifup-ib    ifup-plusb  ifup-Team      network-functions
ifcfg-lo     ifdown-ippp  ifdown-routes  ifup             ifup-ippp  ifup-post   ifup-TeamPort  network-functions-ipv6
ifdown       ifdown-ipv6  ifdown-sit     ifup-aliases     ifup-ipv6  ifup-ppp    ifup-tunnel
[root@realserver1 network-scripts]# rm -rf ifcfg-ens33
[root@realserver1 network-scripts]# rm -rf ifcfg-eth1
[root@realserver1 network-scripts]# systemctl restart network


[root@realserver2 ~]# ip a a 172.25.254.100/32 dev lo
[root@realserver2 ~]# vim /etc/sysctl.d/arp.conf
[root@realserver2 ~]# scp /etc/sysctl.d/arp.conf root@172.25.254.110:/etc/sysctl.d/arp.conf
The authenticity of host '172.25.254.110 (172.25.254.110)' can't be established.
ECDSA key fingerprint is SHA256:p8+SUh5ckDQItOAIxbzYL28fpdswAsYDOXJUm6sD/6k.
ECDSA key fingerprint is MD5:30:56:50:67:5e:d4:ca:37:33:ff:e0:ca:c3:71:cc:be.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.25.254.110' (ECDSA) to the list of known hosts.
root@172.25.254.110's password: 
arp.conf                                                   100%  126   217.5KB/s   00:00                                                                                
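The arp.conf content is not shown above; for LVS-DR the usual kernel settings on the real servers suppress ARP replies for the loopback-bound VIP (these are the standard values, and the "revert" step later sets the same keys back to 0):

```
# /etc/sysctl.d/arp.conf
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
```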

Enable kernel routing and install ipvsadm on the directors

[root@kat1 ~]# yum install ipvsadm -y
[root@kat2 ~]# yum install ipvsadm -y

[root@kat1 ~]# vim /etc/keepalived/keepalived.conf
[root@kat1 ~]# systemctl restart keepalived.service
[root@kat2 ~]# vim /etc/keepalived/keepalived.conf
[root@kat2 ~]# systemctl restart keepalived.service

Virtual server configuration: application-layer and TCP health-check parameters

virtual_server IP port {                  # VIP and PORT
    delay_loop <INT>                      # interval between checks of the back-end servers
    lb_algo rr|wrr|lc|wlc|lblc|sh|dh      # scheduling algorithm
    lb_kind NAT|DR|TUN                    # cluster type, must be written in upper case
    persistence_timeout <INT>             # persistent-connection timeout
    protocol TCP|UDP|SCTP                 # service protocol, usually TCP
    sorry_server <IPADDR> <PORT>          # fallback server used when all RS are down
    real_server <IPADDR> <PORT> {         # RS address and port
        weight <INT>                      # RS weight
        notify_up <STRING>|<QUOTED-STRING>    # script run when the RS comes up
        notify_down <STRING>|<QUOTED-STRING>  # script run when the RS goes down
        HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK { ... }  # health-check method
    }
}

HTTP_GET|SSL_GET {
    url {
        path <URL_PATH>                   # URL to monitor
        status_code <INT>                 # response code that counts as healthy, usually 200
    }
    connect_timeout <INTEGER>             # request timeout, comparable to haproxy's timeout server
    nb_get_retry <INT>                    # number of retries
    delay_before_retry <INT>              # delay before each retry
    connect_ip <IP ADDRESS>               # RS address the health check is sent to
    connect_port <PORT>                   # RS port the health check is sent to
    bindto <IP ADDRESS>                   # source address used for the health check
    bind_port <PORT>                      # source port used for the health check
}

TCP_CHECK {
    connect_ip <IP ADDRESS>               # RS address the health check is sent to
    connect_port <PORT>                   # RS port the health check is sent to
    bindto <IP ADDRESS>                   # source address used for the health check
    bind_port <PORT>                      # source port used for the health check
    connect_timeout <INTEGER>             # request timeout,
                                          # equivalent to haproxy's timeout server
}
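Applied to this lab, the virtual-server section for the DR cluster could look like the following (a sketch; the check intervals and timeouts are illustrative values, and the VIP and real servers follow the topology above):

```
virtual_server 172.25.254.100 80 {
    delay_loop 6                     # check the RS pool every 6s
    lb_algo wrr
    lb_kind DR
    protocol TCP
    real_server 172.25.254.110 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5        # mark RS down if port 80 does not answer in 5s
        }
    }
    real_server 172.25.254.120 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
        }
    }
}
```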

View the resulting IPVS rules

Test:

[root@realserver1 ~]# systemctl stop httpd.service

 

Start realserver1's httpd service again

Switching master/backup roles with a script

Defining the script

  • vrrp_script: a user-defined resource-monitoring script; VRRP instances act on its return value. It is a shared definition that multiple instances can reference, configured as an independent block outside any vrrp instance, usually right after global_defs.
  • Such a script typically monitors the state of a given application. Once the application becomes unhealthy, the MASTER's priority is pushed below the SLAVE's, so the VIP fails over to the SLAVE node.

vrrp_script <SCRIPT_NAME> {             # define a check script, configured outside global_defs
    script <STRING>|<QUOTED-STRING>     # shell command or script path
    interval <INTEGER>                  # check interval in seconds, default 1
    timeout <INTEGER>                   # timeout
    weight <INTEGER:-254..254>          # default 0; a negative value is added to the node's
                                        # priority when the script returns non-zero,
                                        # lowering it (fall); a positive value is added
                                        # when the script returns 0, raising it (rise);
                                        # negative values are the usual choice
    fall <INTEGER>                      # consecutive failures before the check is marked
                                        # failed; 2 or more is recommended
    rise <INTEGER>                      # consecutive successes before the check is marked
                                        # recovered
    user USERNAME [GROUPNAME]           # user/group the script runs as
    init_fail                           # start in the failed state until a check succeeds
}

Calling the script

  • track_script: references a script defined with vrrp_script to monitor a resource; it is configured inside the VRRP instance.

vrrp_instance test {
    ...
    track_script {
        check_down
    }
}

The concrete configuration:

[root@kat1 ~]# vim /etc/keepalived/test.sh
[root@kat1 ~]# sh /etc/keepalived/test.sh
[root@kat1 ~]# echo $?
0

[root@kat1 ~]# vim /etc/keepalived/test.sh
[root@kat1 ~]# chmod +x /etc/keepalived/test.sh
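The script body is not shown above; given that /mnt/zf is listed later as the trigger, a plausible sketch is a check on that flag file (the path follows the ls shown below; the exact logic is an assumption). The script's exit status is the status of the test itself: 0 (healthy) while the flag file is absent, 1 (unhealthy, which lowers the node's priority via a negative weight) once it exists, so creating the file simulates a failure.

```shell
#!/bin/bash
# /etc/keepalived/test.sh - health check used by vrrp_script:
# succeeds (exit 0) while the flag file is absent, fails once it exists
[ ! -e /mnt/zf ]
```

Running `touch /mnt/zf` on the master then drops its priority and moves the VIP; `rm /mnt/zf` moves it back.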

Check the script's return value

[root@kat1 ~]# sh /etc/keepalived/test.sh
0

After removing the echo $?, test:

① Define the VRRP script

[root@kat1 ~]# vim /etc/keepalived/keepalived.conf
[root@kat1 ~]# ls /mnt/zf
/mnt/zf
[root@kat1 ~]# systemctl restart keepalived.service

② Call the VRRP script

Test:

High availability for haproxy

Install the packages

[root@kat1 ~]# yum install haproxy -y
[root@kat2 ~]# yum install haproxy -y

Enable the kernel parameter on both kat1 and kat2

[root@kat1 ~]# vim /etc/sysctl.conf
[root@kat1 ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
[root@kat2 ~]# vim /etc/sysctl.conf
[root@kat2 ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1

Configure haproxy on both kat1 and kat2
[root@kat1 ~]# vim /etc/haproxy/haproxy.cfg
[root@kat1 ~]# systemctl enable --now haproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
[root@kat2 ~]# vim /etc/haproxy/haproxy.cfg
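The haproxy.cfg edit adds a listen section that binds the VIP and balances between the two real servers (a sketch of the relevant fragment; the section name and check timings are illustrative, while the addresses and the roundrobin behavior match the alternating curl output later):

```
# appended to /etc/haproxy/haproxy.cfg
listen webcluster
    bind 172.25.254.100:80
    mode http
    balance roundrobin
    server web1 172.25.254.110:80 check inter 3 fall 3 rise 5
    server web2 172.25.254.120:80 check inter 3 fall 3 rise 5
```

Binding 172.25.254.100 on the node that does not currently hold the VIP only works because net.ipv4.ip_nonlocal_bind = 1 was set above.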

Restart the network service on the real servers

Revert to the original configuration (reset the arp kernel parameters)

[root@realserver1 ~]# vim /etc/sysctl.d/arp.conf
[root@realserver1 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/arp.conf ...
net.ipv4.conf.all.arp_ignore = 0
net.ipv4.conf.all.arp_announce = 0
net.ipv4.conf.lo.arp_ignore = 0
net.ipv4.conf.lo.arp_announce = 0
* Applying /etc/sysctl.conf ...


[root@realserver2 ~]# vim /etc/sysctl.d/arp.conf
[root@realserver2 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/arp.conf ...
net.ipv4.conf.all.arp_ignore = 0
net.ipv4.conf.all.arp_announce = 0
net.ipv4.conf.lo.arp_ignore = 0
net.ipv4.conf.lo.arp_announce = 0
* Applying /etc/sysctl.conf ...

A quick test

[root@kat1 ~]# curl 172.25.254.110
172.25.254.110
[root@kat1 ~]# curl 172.25.254.120
172.25.254.120

Modify the configuration files

[root@kat1 ~]# vim /etc/haproxy/haproxy.cfg 
[root@kat1 ~]# systemctl restart keepalived.service 
[root@kat1 ~]# systemctl restart haproxy.service
[root@kat2 ~]# vim /etc/haproxy/haproxy.cfg 
[root@kat2 ~]# systemctl enable --now haproxy.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.

Test:

Keepalived's IPVS virtual_server rules cannot coexist with haproxy serving the same VIP, so remove the virtual_server configuration from keepalived:

[root@kat1 ~]# vim /etc/keepalived/keepalived.conf
[root@kat1 ~]# systemctl restart keepalived.service
[root@kat2 ~]# vim /etc/keepalived/keepalived.conf
[root@kat2 ~]# systemctl restart keepalived.service

Modify the haproxy configuration file

[root@kat1 ~]# vim /etc/haproxy/haproxy.cfg 
[root@kat1 ~]# systemctl restart haproxy.service 
[root@kat1 ~]# systemctl restart keepalived.service 

[root@kat2 ~]# vim /etc/haproxy/haproxy.cfg 
[root@kat2 ~]# systemctl restart keepalived.service 
[root@kat2 ~]# systemctl restart haproxy.service 

Access test:

[root@kat1 ~]# curl 172.25.254.100
172.25.254.110
[root@kat1 ~]# curl 172.25.254.100
172.25.254.120

Check command:

[root@kat1 ~]# vim /etc/keepalived/test.sh
[root@kat2 ~]# vim /etc/keepalived/test.sh
[root@kat2 ~]# cat /etc/keepalived/test.sh
#!/bin/bash
killall -0 haproxy

[root@kat1 ~]# vim /etc/keepalived/keepalived.conf
[root@kat1 ~]# systemctl restart keepalived.service 
[root@kat1 ~]# systemctl start haproxy.service
[root@kat2 ~]# vim /etc/keepalived/keepalived.conf
[root@kat2 ~]# systemctl restart keepalived.service 
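The keepalived.conf edits hook the check script into the instance; a sketch consistent with the killall -0 script above (the block name and the weight value are illustrative):

```
vrrp_script check_haproxy {
    script "/etc/keepalived/test.sh"    # killall -0 haproxy: exit 0 while haproxy runs
    interval 1
    weight -30                          # on failure, drop below the peer's priority
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    ...
    track_script {
        check_haproxy
    }
}
```

Stopping haproxy on the master then makes the script fail, lowers that node's priority, and moves the VIP (and thus the haproxy traffic) to the peer.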

Final test

[C:\~]$ while true
> do
> curl 172.25.254.100;sleep 0.5
>done
172.25.254.120
172.25.254.110
172.25.254.120
172.25.254.110