keepalived

1. High-Availability Clusters


1.1 Cluster Types


LB: Load Balancing

LVS/HAProxy/nginx (http/upstream, stream/upstream)

HA: High Availability cluster

Databases, Redis

SPoF: Single Point of Failure, i.e. eliminating single points of failure

HPC: High Performance Computing cluster

1.2 System Availability

SLA: Service-Level Agreement, an agreement between the service provider and the customer on the quality, level, and performance of the service.

A = MTBF / (MTBF + MTTR)

99.95%: (60*24*30)*(1-0.9995) = 21.6 minutes    #downtime is usually accounted per month
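
As a quick check of the formula, the allowed monthly downtime for a few common SLA levels can be computed with a short shell snippet (assuming a 30-day month):

#allowed downtime per 30-day month for several SLA levels
for sla in 0.99 0.999 0.9995 0.9999; do
    awk -v a="$sla" 'BEGIN { printf "%.2f%%  ->  %.1f minutes/month\n", a*100, 60*24*30*(1-a) }'
done
#99.00%  ->  432.0 minutes/month
#99.90%  ->  43.2 minutes/month
#99.95%  ->  21.6 minutes/month
#99.99%  ->  4.3 minutes/month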

1.3 System Failures

Hardware failures: design defects, wear-out, non-human force majeure

Software failures: design defects, bugs

1.4 Achieving High Availability


The way to improve system availability: reduce MTTR (Mean Time To Repair).

Solution: introduce redundancy

  • active/passive (master/backup)
  • active/active (master/master)
  • active -> HEARTBEAT -> passive
  • active <-> HEARTBEAT <-> active

1.5 VRRP: Virtual Router Redundancy Protocol


The Virtual Router Redundancy Protocol removes the single point of failure of a static default gateway.

  • Hardware implementation: routers, layer-3 switches
  • Software implementation: keepalived

1.5.1 VRRP Terminology

  • Virtual router: Virtual Router
  • Virtual router ID: VRID (0-255), uniquely identifies a virtual router
  • VIP: Virtual IP
  • VMAC: Virtual MAC
  • Physical routers

               master: primary device
               backup: standby device
               priority: priority


1.5.2 VRRP Mechanisms

Advertisements: heartbeat, priority, etc., sent periodically.

Working modes: preemptive, non-preemptive.

Authentication:

  • none
  • simple string authentication: pre-shared key
  • MD5

Topologies:

  • master/slave: a single virtual router
  • master/master: master/backup (virtual router 1) plus backup/master (virtual router 2)

2. Keepalived Deployment

2.1 Introduction to keepalived


Keepalived essentially exists to serve ipvs, and it needs no shared storage. IPVS is just a set of rules; keepalived's main task is to call ipvsadm to generate those rules and automatically redirect the addresses users access to an available LVS node.

Its main purpose is to run as a service on multiple LVS nodes. The currently active node is called the Master and the standby node the Backup. The Master keeps announcing its heartbeat to the Backup using the VRRP protocol. As soon as the Backup stops receiving the Master's advertisements, it takes over the LVS VIP and the ipvs rules and applies them locally, thereby replacing the Master node.

In short, keepalived is a software implementation of VRRP whose original design goal was high availability for the ipvs service.

Functions:

  • move addresses between nodes based on the VRRP protocol
  • generate ipvs rules on the node that holds the VIP (predefined in the configuration file)
  • run health checks on each RS of the ipvs cluster
  • call scripts through a hook interface to perform the actions defined in them and thereby influence cluster behavior, which is how services such as nginx and haproxy are supported

User-space core components:

  • vrrp stack: VIP advertisement
  • checkers: health checks of the real servers
  • system call: runs scripts on VRRP state transitions
  • SMTP: mail notification component
  • IPVS wrapper: generates IPVS rules
  • Netlink Reflector: network interface handling
  • WatchDog: monitors the vrrp and checker processes

Control plane: the parser for keepalived.conf, which completes the keepalived configuration

IO multiplexer: keepalived's own thread abstraction optimized for its networking needs

Memory management component: wraps common memory-management functions (allocation, reallocation, release, etc.)

Configuration file layout
Configuration file: /etc/keepalived/keepalived.conf

Annotated example:

! Configuration File for keepalived
#Global configuration
global_defs {
   notification_email {
   666688889@qq.com                                 #destination mailbox(es) for failover notifications; list one address per line
   }
   notification_email_from keepalived@lanjinli.org  #sender address
   smtp_server 127.0.0.1                            #mail server address
   smtp_connect_timeout 30                          #mail server connection timeout
   router_id ka1.lanjinli.org                       #unique identifier of this keepalived host; the hostname is recommended (duplicates across nodes do not break anything)
   vrrp_skip_check_adv_addr                         #checking every advertisement costs performance; with this option, advertisements from the same router as the previous one are not re-checked (the default is to check them all)
   vrrp_strict                                      #strictly follow the VRRP protocol; not recommended when no VIP is set, when unicast peers are configured, or when IPv6 addresses are used with VRRP version 2
   vrrp_garp_interval 0                             #gratuitous ARP send delay
   vrrp_gna_interval 0                              #gratuitous NA message send delay
   vrrp_mcast_group4 224.0.0.18                     #multicast group address
}


#Virtual router configuration
vrrp_instance VI_1 {
    state BACKUP                                    #whether this node starts as MASTER or BACKUP
    interface eth0                                  #physical interface bound to this virtual router, e.g. eth0; it does not have to be the NIC that carries the VIP
    virtual_router_id 100                           #unique ID of the virtual router, range 0-255; the service will not start if it is not unique, it must be identical on all keepalived nodes belonging to the same virtual router, and it must be unique within the same network
    priority 80                                     #priority of this physical node within the virtual router, range 1-255; higher wins, and each keepalived node must use a different value
    advert_int 1                                    #VRRP advertisement interval, default 1 s
    authentication {                                #authentication
        auth_type AH|PASS                           #AH is IPSEC authentication (not recommended); PASS is a simple password (recommended)
        auth_pass 1111                              #pre-shared key, only the first 8 characters are used; must be identical on all keepalived nodes of the same virtual router
    }
    virtual_ipaddress {                             #virtual IPs; production environments may define hundreds of addresses
        <IPADDR>/<MASK> brd <IPADDR> dev <STRING> scope <SCOPE> label <label>
        172.25.254.100                              #a VIP without a device defaults to eth0; without a /prefix it defaults to /32
        172.25.254.101/24 dev eth1
        172.25.254.100/24 dev eth0 label eth0:1
    }
}

2.2 Environment Setup

ka1: eth0 172.25.254.10, install keepalived, disable the firewall and SELinux

[root@ka1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:92:51:bd brd ff:ff:ff:ff:ff:ff
    inet 172.25.254.10/24 brd 172.25.254.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::435f:2ba:94d2:582/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::4da5:5424:6c11:6bc/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
#Install keepalived
[root@ka1 ~]# yum install keepalived -y
[root@ka1 ~]# systemctl stop firewalld.service
[root@ka1 ~]# setenforce 0

ka2: eth0 172.25.254.20, install keepalived, disable the firewall and SELinux

[root@ka2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:68:ef:05 brd ff:ff:ff:ff:ff:ff
    inet 172.25.254.20/24 brd 172.25.254.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 172.25.254.100/24 scope global secondary eth0:1
       valid_lft forever preferred_lft forever
    inet6 fe80::435f:2ba:94d2:582/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
[root@ka2 ~]# yum install keepalived -y
[root@ka2 ~]# systemctl stop firewalld.service
[root@ka2 ~]# setenforce 0

realserver1: eth0 172.25.254.110, install httpd

[root@realserver1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:54:22:6a brd ff:ff:ff:ff:ff:ff
    inet 172.25.254.110/24 brd 172.25.254.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::4da5:5424:6c11:6bc/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::435f:2ba:94d2:582/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::d318:4046:600c:390c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
[root@realserver1 ~]# yum install httpd -y
#Configure httpd
[root@realserver1 ~]# echo hehehe > /var/www/html/index.html
#Start the service and enable it at boot
[root@realserver1 ~]# systemctl enable --now httpd
[root@realserver1 ~]# systemctl stop firewalld.service
[root@realserver1 ~]# setenforce 0

realserver2: eth0 172.25.254.120, install httpd

[root@realserver2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:69:b5:ed brd ff:ff:ff:ff:ff:ff
    inet 172.25.254.120/24 brd 172.25.254.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::4da5:5424:6c11:6bc/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::435f:2ba:94d2:582/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::d318:4046:600c:390c/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
[root@realserver2 ~]# yum install httpd -y
#Configure httpd
[root@realserver2 ~]# echo hahah > /var/www/html/index.html
#Start the service and enable it at boot
[root@realserver2 ~]# systemctl enable --now httpd
[root@realserver2 ~]# systemctl stop firewalld.service
[root@realserver2 ~]# setenforce 0

2.3 Preemptive Mode Experiment

When the host holding the VIP goes down, the other host takes over the VIP.


[root@ka2 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
        6668889@qq.com
   }
   notification_email_from keepalived@lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka2.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.100/24 dev eth0 label eth0:1
    }
}

......
#Restart the service
[root@ka2 ~]# systemctl restart keepalived.service

Verify with a packet capture:

tcpdump -i eth0 -nn host 224.0.0.18

Now stop the keepalived service on KA1.
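
For example (using the same commands as elsewhere in this document):

[root@ka1 ~]# systemctl stop keepalived.service
#the tcpdump session above should now show the advertisements coming from ka2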

2.4 Logging and Sub-Configuration Files

Dedicated log file

#Logging changes
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf 
...
global_defs {
   notification_email {
    666688889@qq.com
   }
   notification_email_from keepalived@lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka1.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
}


virtual_server 192.168.200.100 443 {

...
[root@ka1 ~]# cat /etc/sysconfig/keepalived 
# Options for keepalived. See `keepalived --help' output and keepalived(8) and
# keepalived.conf(5) man pages for a list of all options. Here are the most
# common ones :
#
# --vrrp               -P    Only run with VRRP subsystem.
# --check              -C    Only run with Health-checker subsystem.
# --dont-release-vrrp  -V    Dont remove VRRP VIPs & VROUTEs on daemon stop.
# --dont-release-ipvs  -I    Dont remove IPVS topology on daemon stop.
# --dump-conf          -d    Dump the configuration data.
# --log-detail         -D    Detailed log messages.
# --log-facility       -S    0-7 Set local syslog facility (default=LOG_DAEMON)
#

KEEPALIVED_OPTIONS="-D -S 6"

[root@ka1 ~]# 
#Redirect the log to its own file
[root@ka1 ~]# vim /etc/rsyslog.conf 
...
# Save boot messages also to boot.log
local7.*                                                /var/log/boot.log
local6.*                                                /var/log/keepalived.log
...
#Restart the services
[root@ka1 ~]# systemctl restart keepalived.service  #apply the KEEPALIVED_OPTIONS change
[root@ka1 ~]# systemctl restart rsyslog.service     #apply the rsyslog redirection
[root@ka1 ~]# systemctl restart keepalived.service     #generate log entries into the new file
#Check that the log file was created
[root@ka1 ~]# ll /var/log/keepalived.log 
-rw-------. 1 root root 29956 Aug 12 02:14 /var/log/keepalived.log

Sub-configuration files

In a complex production environment /etc/keepalived/keepalived.conf grows large and becomes hard to manage.

The configuration for different clusters, for example each cluster's VIP configuration, can then be split into separate sub-configuration files and pulled in with the include directive.

Format:

include pathfile

#Edit the main configuration file
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf 
...
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

#vrrp_instance VI_1 {
#    state MASTER
#    interface eth0
#    virtual_router_id 100
#    priority 100
#    advert_int 1
#    authentication {
#        auth_type PASS
#        auth_pass 1111
#    }
#    virtual_ipaddress {
#        172.25.254.100/24 dev eth0 label eth0:1
#    }
#}
#Define the sub-configuration file directory
include "/etc/keepalived/conf.d/*.conf"


virtual_server 192.168.200.100 443 {

...
#Create the sub-configuration directory
[root@ka1 ~]# mkdir -p /etc/keepalived/conf.d
[root@ka1 ~]# vim /etc/keepalived/conf.d/172.25.254.100.conf
[root@ka1 ~]# cat /etc/keepalived/conf.d/172.25.254.100.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }    
}
[root@ka1 ~]# 
#Restart keepalived
[root@ka1 ~]# systemctl restart keepalived.service 
#Check that it is running and has obtained the VIP 172.25.254.100 (mind the priorities)
[root@ka1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:92:51:bd brd ff:ff:ff:ff:ff:ff
    inet 172.25.254.10/24 brd 172.25.254.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 172.25.254.100/24 scope global secondary eth0:1
       valid_lft forever preferred_lft forever
    inet6 fe80::435f:2ba:94d2:582/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::4da5:5424:6c11:6bc/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff

2.5 Non-Preemptive Mode and Delayed Preemption

Non-preemptive mode:
The default is preemptive mode (preempt): when a higher-priority host comes back online it takes the master role back from the lower-priority host, so the VIP bounces back and forth between the KA hosts and causes network jitter. Non-preemptive mode (nopreempt) is therefore recommended: after the higher-priority host recovers it does not take the master role back from the lower-priority host. Note that in non-preemptive mode, if the host the VIP migrated to later goes down as well, the VIP still migrates back to the original host.
Delayed preemption mode:
The higher-priority host does not take the VIP back immediately after it recovers; it waits for a delay (300 s by default) before reclaiming the VIP.

(1) Non-preemptive mode
Note: to disable VIP preemption, the state on every keepalived server must be set to BACKUP.

1. Edit the main keepalived configuration file (both KA1 and KA2 need the change); the per-node settings are sketched after this step:

vim /etc/keepalived/keepalived.conf 

Restart the service:   systemctl restart keepalived.service 
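
A minimal sketch of the relevant vrrp_instance block, assuming the addresses, VRID and password used earlier in this document (KA1 keeps the higher priority; KA2 is identical apart from priority and router_id):

vrrp_instance VI_1 {
    state BACKUP                    #both nodes must be BACKUP for non-preemptive mode
    interface eth0
    virtual_router_id 100
    priority 100                    #KA1; use a lower value such as 80 on KA2
    advert_int 1
    nopreempt                       #disable preemption on both nodes
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
}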

Test result: while keepalived is being restarted the node is briefly down, so in the packet capture it is KA2 that answers (advertises) first.

Only when the service on KA2 is stopped in turn does the VIP go back to KA1.

That is the non-preemptive behavior: the VIP does not go back to a node simply because of its priority; it only moves to the other host when the host currently holding it goes down.

(2) Delayed preemption mode

1. Set the preemption delay to 5 seconds.

2. Edit the main keepalived configuration file (both KA1 and KA2 need the change); a sketch follows the command below:

vim /etc/keepalived/keepalived.conf 
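
A minimal sketch of the change, assuming the same instance as above; only the preemption-related lines differ from the non-preemptive example:

vrrp_instance VI_1 {
    state BACKUP                    #delayed preemption also requires state BACKUP
    ...
    priority 100                    #higher-priority node (lower value on the peer)
    preempt_delay 5s                #wait 5 seconds after recovery before reclaiming the VIP
    ...
}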

Test:

Stop keepalived on KA1:

systemctl stop keepalived.service

The VIP is now on ka2.

Restart the keepalived service on KA1:

systemctl restart keepalived.service

Wait 5 seconds and check with ifconfig.

The VIP then disappears from KA2 and moves back to KA1.

2.6 VIP Unicast Configuration

By default, keepalived hosts advertise to each other over multicast, which can add network congestion; this can be replaced with unicast to reduce network traffic.

Note: unicast mode requires commenting out vrrp_strict in the global section.

Unicast configuration format:

#Set the peer host IP inside the vrrp_instance block on every node; a dedicated heartbeat network address is recommended rather than the business network
unicast_src_ip <IPADDR> #source IP used to send unicast advertisements
unicast_peer {
    <IPADDR>    #IP of the peer host that receives the unicast advertisements
    .....       #add more entries if several hosts need to receive the unicast advertisements
}

Edit ka1's keepalived configuration file:

[root@ka1 ~]# cat /etc/keepalived/keepalived.conf
...
global_defs {
   notification_email {
        666688889@qq.com
   }
   notification_email_from keepalived.lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka1.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_instance VI_1 {
    state MASTER
    #state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
}

#include "/etc/keepalived/conf.d/*.conf"

...
#Restart the service
[root@ka1 ~]# systemctl restart keepalived.service 

Edit ka2's keepalived configuration file:

[root@ka2 ~]# cat /etc/keepalived/keepalived.conf
...
global_defs {
   notification_email {
   666688889@qq.com
}
   notification_email_from keepalived@lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka2.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    preempt_delay 60
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
}

virtual_server 192.168.200.100 443 {

...
#Restart the service
[root@ka2 ~]# systemctl restart keepalived.service 

Verify with a packet capture:

#Because of the priorities the VIP is currently on ka1, so advertisements are sent from .10 to .20
[root@realserver2 ~]# tcpdump -i eth0 -nn host 172.25.254.10 and dst 172.25.254.20
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
02:56:01.907466 IP 172.25.254.10 > 172.25.254.20: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, intvl 1s, length 20
02:56:02.915078 IP 172.25.254.10 > 172.25.254.20: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, intvl 1s, length 20
02:56:03.923058 IP 172.25.254.10 > 172.25.254.20: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, intvl 1s, length 20
02:56:04.931880 IP 172.25.254.10 > 172.25.254.20: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, intvl 1s, length 20
#Now stop the keepalived service on ka1; the VIP moves to ka2 and advertisements go from .20 to .10
[root@realserver1 ~]# tcpdump -i eth0 -nn host 172.25.254.20 and dst 172.25.254.10
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:51:45.872241 IP 172.25.254.20 > 172.25.254.10: VRRPv2, Advertisement, vrid 100, prio 70, authtype simple, intvl 1s, length 20
11:51:46.872633 IP 172.25.254.20 > 172.25.254.10: VRRPv2, Advertisement, vrid 100, prio 70, authtype simple, intvl 1s, length 20
11:51:47.873942 IP 172.25.254.20 > 172.25.254.10: VRRPv2, Advertisement, vrid 100, prio 70, authtype simple, intvl 1s, length 20

2.7 Email Notification

Install the mailx mail utility on ka1 and ka2:

yum install mailx -y

Generate a POP3/IMAP/SMTP/Exchange/CardDAV authorization code with your own mail account.

Edit the mail client configuration file on ka1 and ka2.
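
The screenshot of this step is not reproduced; a minimal sketch of what gets appended to /etc/mail.rc for mailx, assuming QQ Mail's SMTP service and with the account and authorization code as placeholders, might look like this:

set from=666688889@qq.com                  #sender address (placeholder)
set smtp=smtp.qq.com                       #provider SMTP server (assumption)
set smtp-auth-user=666688889@qq.com        #SMTP account (placeholder)
set smtp-auth-password=XXXXXXXXXXXXXXXX    #the authorization code generated above (placeholder)
set smtp-auth=login                        #authentication method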

Write the mail-sending script:

#ka1
[root@ka1 ~]# cat /etc/keepalived/mail.sh
#!/bin/bash
mail_dst="775762675@qq.com"
send_message()
{
        mail_sub="$HOSTNAME to be $1 vip move"
        mail_msg="`date +%F\ %T`:vrrp move $HOSTNAME change $1"
        echo $mail_msg | mail -s "$mail_sub" $mail_dst
}

case $1 in
        master)
                send_message master
        ;;
        backup)
                send_message backup
        ;;
        fault)
                send_message fault
        ;;
        *)
        ;;
esac
#Make the script executable
[root@ka1 ~]# chmod +x /etc/keepalived/mail.sh
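#To verify mail delivery before wiring the script into keepalived, it can be run by hand
#(hypothetical test call; the argument matches the cases handled in the script):
[root@ka1 ~]# /etc/keepalived/mail.sh master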
#ka2
[root@ka2 ~]# cat /etc/keepalived/mail.sh
#!/bin/bash
mail_dst="775762675@qq.com"
send_message()
{
        mail_sub="$HOSTNAME to be $1 vip move"
        mail_msg="`date +%F\ %T`:vrrp move $HOSTNAME change $1"
        echo $mail_msg | mail -s "$mail_sub" $mail_dst
}

case $1 in
        master)
                send_message master
        ;;
        backup)
                send_message backup
        ;;
        fault)
                send_message fault
        ;;
        *)
        ;;
esac
#Make the script executable
[root@ka2 ~]# chmod +x /etc/keepalived/mail.sh

Modify the keepalived configuration files:

#ka1
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf
...
vrrp_instance VI_1 {
    state MASTER
    #state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
    notify_master "/etc/keepalived/mail.sh master"
    notify_backup "/etc/keepalived/mail.sh backup"
    notify_fault "/etc/keepalived/mail.sh fault"
}
...
#Restart the keepalived service
[root@ka1 ~]# systemctl restart keepalived.service 
#ka2
[root@ka2 ~]# cat /etc/keepalived/keepalived.conf
...
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 100
    priority 70
    advert_int 1
    #nopreemprt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
    notify_master "/etc/keepalived/mail.sh master"
    notify_backup "/etc/keepalived/mail.sh backup"
    notify_fault "/etc/keepalived/mail.sh fault"
}
...
#Restart the keepalived service
[root@ka2 ~]# systemctl restart keepalived.service
 

Mail notifications should now be received.

After keepalived on ka1 is stopped, ka1 can no longer send mail; only ka2 sends mail at that point.

2.8 Implementing a master/master Dual-Master Keepalived Architecture

In the master/slave single-master architecture only one keepalived node serves traffic at a time; that host is busy while the other sits idle, so utilization is poor. A master/master dual-master architecture solves this problem.

The master/master dual-master architecture:

runs two or more VIPs on different keepalived servers, so that the servers handle web access in parallel and resource utilization improves.

#Edit ka1's keepalived configuration file
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf
...

vrrp_instance VI_1 {
    state MASTER
    #state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
}

vrrp_instance VI_2 {
    #state MASTER
    state BACKUP
    interface eth0
    virtual_router_id 200
    priority 50
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.200/24 dev eth0 label eth0:2
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
}
#Restart the service
[root@ka1 ~]# systemctl restart keepalived.service
#Edit ka2's keepalived configuration file
[root@ka2 ~]# cat /etc/keepalived/keepalived.conf
...
vrrp_instance VI_1 {
    state BACKUP
    #state MASTER
    interface eth0
    virtual_router_id 100
    priority 70
    advert_int 1
    #nopreemprt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
}

vrrp_instance VI_2 {
    #state BACKUP
    state MASTER
    interface eth0
    virtual_router_id 200
    priority 100
    advert_int 1
    #nopreemprt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.200/24 dev eth0 label eth0:2
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
}
...
#Restart the service
[root@ka2 ~]# systemctl restart keepalived.service 

Check the VIPs on both hosts: at this point 172.25.254.100 is master on ka1 and backup on ka2, while 172.25.254.200 is backup on ka1 and master on ka2. If the keepalived service on ka2 is stopped, ka1 ends up with both VIPs; likewise, if keepalived on ka1 is stopped, ka2 ends up with both VIPs.
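
A quick way to confirm the failover described above, using only commands already shown in this document:

[root@ka2 ~]# systemctl stop keepalived.service     #take ka2 out of service
[root@ka1 ~]# ifconfig                              #ka1 should now show both eth0:1 (172.25.254.100) and eth0:2 (172.25.254.200)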

#ka1
[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)
        RX packets 19514  bytes 1558959 (1.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 28926  bytes 2270962 (2.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1869  bytes 142905 (139.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1869  bytes 142905 (139.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka1 ~]# 
#ka2
[root@ka2 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.20  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::4da5:5424:6c11:6bc  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)
        RX packets 25352  bytes 1940929 (1.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24099  bytes 1946973 (1.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.200  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1689  bytes 129040 (126.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1689  bytes 129040 (126.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

2.9 High Availability for IPVS

Configuration file structure:

virtual_server IP port {
    ...
    real_server {
        ...
    }
    real_server {
        ...
    }
    ...
}

virtual_server definition formats:

virtual_server IP port              #define a virtual server by IP address and port

virtual_server fwmark int           #define a virtual server by ipvs firewall mark, for firewall-mark-based LVS clusters

virtual_server group string         #use a virtual server group
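
The first form is the one used throughout this document. As an illustration of the fwmark form (the mark value is hypothetical and would be set with an iptables mangle rule), it might look like:

#mark web traffic to the VIP with fwmark 6 (illustrative)
iptables -t mangle -A PREROUTING -d 172.25.254.100 -p tcp --dport 80 -j MARK --set-mark 6

virtual_server fwmark 6 {
    lb_algo wrr
    lb_kind DR
    protocol TCP
    ...
}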

Virtual server configuration:

virtual_server IP port {                            #VIP and PORT
    delay_loop <INT>                                #interval between health checks of the backend servers
    lb_algo rr|wrr|lc|wlc|lblc|sh|dh                #scheduling algorithm
    lb_kind NAT|DR|TUN                              #cluster type, must be uppercase
    persistence_timeout <INT>                       #persistent connection timeout
    protocol TCP|UDP|SCTP                           #service protocol, usually TCP
    sorry_server <IPADDR> <PORT>                    #backup server used when all RS are down
    real_server <IPADDR> <PORT> {                   #RS IP and PORT
        weight <INT>                                #RS weight
        notify_up <STRING>|<QUOTED-STRING>          #script run when the RS comes up
        notify_down <STRING>|<QUOTED-STRING>        #script run when the RS goes down
        HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK { ... }    #health-check method for this RS
    }
}

Application-layer health checks:

HTTP_GET|SSL_GET {
    url {
        path <URL_PATH>             #URL to monitor
        status_code <INT>           #response code that counts the check as healthy, usually 200
    }
    connect_timeout <INTEGER>       #request timeout, similar to haproxy's timeout server
    nb_get_retry <INT>              #number of retries
    delay_before_retry <INT>        #delay before each retry
    connect_ip <IP ADDRESS>         #which IP address of the RS the health check is sent to
    connect_port <PORT>             #which PORT of the RS the health check is sent to
    bindto <IP ADDRESS>             #source address used for the health check
    bind_port <PORT>                #source port used for the health check
}

TCP health checks:

TCP_CHECK {
    connect_ip <IP ADDRESS>         #which IP address of the RS the health check is sent to
    connect_port <PORT>             #which PORT of the RS the health check is sent to
    bindto <IP ADDRESS>             #source address used for the health check
    bind_port <PORT>                #source port used for the health check
    connect_timeout <INTEGER>       #request timeout, similar to haproxy's timeout server
}
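
For comparison with the HTTP_GET checks used later in this document, a real_server entry with a TCP check could look like the following sketch (addresses taken from this lab; the timeout is an assumed value):

real_server 172.25.254.110 80 {
    weight 1
    TCP_CHECK {
        connect_port 80             #probe the web port directly
        connect_timeout 3           #assumed 3-second timeout
    }
}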

2.10 Single-Master LVS-DR Mode

server1

#Add the VIP (this command is only temporary)
[root@realserver1 ~]# ip a a 172.25.254.100/32 dev lo
#To add the VIP permanently
#edit the configuration file
[root@realserver1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-lo
DEVICE=lo
IPADDR0=127.0.0.1
NETMASK0=255.0.0.0
IPADDR1=172.25.254.100
NETMASK1=255.255.255.255
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback
[root@realserver1 ~]# 
#Restart the network service; if it fails, check with journalctl -xe (the NIC configuration file may not match the actual interface)
[root@realserver1 ~]# systemctl restart network
Job for network.service failed because the control process exited with error code. See "systemctl status network.service" and "journalctl -xe" for details.
[root@realserver1 ~]# rm -rf /etc/sysconfig/network-scripts/ifcfg-
ifcfg-ens33  ifcfg-lo     
[root@realserver1 ~]# rm -rf /etc/sysconfig/network-scripts/ifcfg-ens33 
[root@realserver1 ~]# systemctl restart network 
#Configure the ARP suppression kernel parameters
[root@realserver1 ~]# cat /etc/sysctl.d/arp.conf 
net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.all.arp_announce=2
net.ipv4.conf.lo.arp_ignore=1
net.ipv4.conf.lo.arp_announce=2
#Reload
[root@realserver1 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/arp.conf ...
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
* Applying /etc/sysctl.conf ...
[root@realserver1 ~]# 
#Restart httpd and disable the firewall and SELinux
[root@realserver1 ~]# systemctl stop firewalld.service 
[root@realserver1 ~]# setenforce 0
[root@realserver1 ~]# systemctl restart httpd.service

server2

#Add the VIP
[root@realserver2 ~]# ip a a 172.25.254.100/32 dev lo
#Configure the ARP suppression kernel parameters
[root@realserver2 ~]# cat /etc/sysctl.d/arp.conf 
net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.all.arp_announce=2
net.ipv4.conf.lo.arp_ignore=1
net.ipv4.conf.lo.arp_announce=2
#Reload
[root@realserver2 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/arp.conf ...
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
* Applying /etc/sysctl.conf ...
[root@realserver2 ~]#
#Restart httpd and disable the firewall and SELinux
[root@realserver2 ~]# systemctl stop firewalld.service 
[root@realserver2 ~]# setenforce 0
[root@realserver2 ~]# systemctl restart httpd.service

ka1

#Install ipvsadm
[root@ka1 ~]# yum install ipvsadm -y
#Edit the keepalived configuration file
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf
...
vrrp_instance VI_1 {
    state MASTER
    #state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
}

#include "/etc/keepalived/conf.d/*.conf"


virtual_server 172.25.254.100 80  {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.254.110 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.254.120 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
...
[root@ka1 ~]# systemctl restart keepalived.service 
#Check the ipvs rules
[root@ka1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.254.100:80 wrr
  -> 172.25.254.110:80            Route   1      0          0         
  -> 172.25.254.120:80            Route   1      0          0         
[root@ka1 ~]# 

ka2

#Install ipvsadm
[root@ka2 ~]# yum install ipvsadm -y
#Edit the keepalived configuration file
[root@ka2 ~]# cat /etc/keepalived/keepalived.conf
...

vrrp_instance VI_1 {
    state BACKUP
    #state MASTER
    interface eth0
    virtual_router_id 100
    priority 70
    advert_int 1
    #nopreemprt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
}

virtual_server 172.25.254.100 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.254.110 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.254.120 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
...
[root@ka2 ~]# systemctl restart keepalived.service 
#Check the ipvs rules
[root@ka2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.200.100:80 wrr
  -> 172.25.254.110:80            Route   1      0          0         
  -> 172.25.254.120:80            Route   1      0          0         
[root@ka2 ~]# 

Test

[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)
        RX packets 18487  bytes 1552110 (1.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23923  bytes 1781264 (1.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 357  bytes 27304 (26.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 357  bytes 27304 (26.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka1 ~]# 
#Access test
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah

If the keepalived service on ka1 is stopped, the VIP disappears from ka1 and appears on ka2.

#Stop the keepalived service on ka1
[root@ka1 ~]# systemctl stop keepalived.service
#Check the VIP on ka2
[root@ka2 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.20  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::4da5:5424:6c11:6bc  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)
        RX packets 22123  bytes 1701850 (1.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32484  bytes 2443518 (2.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 486  bytes 37044 (36.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 486  bytes 37044 (36.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka2 ~]# 
#Access test
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]#
Making other applications highly available: VRRP Script

Using VRRP scripts, keepalived can call external helper scripts to monitor resources and dynamically adjust the priority based on the result, thereby providing high availability for other applications.

2.11 VRRP Script Configuration

1. Defining the script

vrrp_script: a user-defined resource-monitoring script whose return value the vrrp instances act on. It is a shared definition that several instances can reference, configured as an independent block outside any vrrp instance, usually right after global_defs. It is typically used to monitor the state of a given application; once the application is detected as unhealthy, the MASTER node's priority is reduced below the SLAVE node's, so the VIP fails over to the SLAVE node.

vrrp_script <SCRIPT_NAME> {

   script <STRING>|<QUOTED-STRING> #when this script returns non-zero, the OPTIONS below take effect

   OPTIONS

}

2. Calling the script

track_script: monitors the resource by calling the script defined with vrrp_script; it is placed inside a vrrp instance and references the previously defined vrrp_script.

track_script {

      SCRIPT_NAME_1

      SCRIPT_NAME_2

}

Defining a VRRP script:

vrrp_script <SCRIPT_NAME> { 				#define a check script; configured outside global_defs
	script <STRING>|<QUOTED-STRING> 		#shell command or script path
	interval <INTEGER> 						#check interval in seconds, default 1
	timeout <INTEGER> 						#timeout
	weight <INTEGER:-254..254> 				#default 0; a negative value is added to this node's priority when the script returns non-zero (fall), lowering it; a positive value is added when the script returns 0 (rise), raising it; negative values are the usual choice
	fall <INTEGER> 							#number of consecutive failures before the check is marked failed; 2 or more is recommended
	rise <INTEGER>							#number of consecutive successes before the check is marked successful again
	user USERNAME [GROUPNAME] 				#user/group that runs the check script
	init_fail 								#start in the failed state and switch to success only after a successful check
}

Calling the VRRP script:

vrrp_instance test {
	... ...
	track_script {
		check_down
	}
}

2.12 Switching Master/Backup Roles with a Script

#Write the check script on ka1 (it exits 0 while /mnt/lanjinli is absent and 1 once the file exists)
[root@ka1 ~]# cat /mnt/check_lanjinli.sh 
#!/bin/bash
[ ! -f "/mnt/lanjinli" ]
#Make the script executable
[root@ka1 ~]# chmod 777 /mnt/check_lanjinli.sh 
#Test the script's return value
[root@ka1 ~]# bash /mnt/check_lanjinli.sh
[root@ka1 ~]# echo $?
0
[root@ka1 ~]# touch /mnt/lanjinli
[root@ka1 ~]# bash /mnt/check_lanjinli.sh
[root@ka1 ~]# echo $?
1

Edit ka1's keepalived configuration file:

vrrp_script check_lanjinli {
   script "/mnt/check_lanjinli.sh"	#script to run
   interval 1						#run every 1 s
   weight -30						#subtract 30 from the priority when the return value is non-zero
   fall 2							#two consecutive failures mark the check as failed
   rise 2							#two consecutive successes mark the check as successful again
   timeout 2						#2 s timeout
}

vrrp_instance VI_1 {
    state MASTER
    #state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
    track_script {				#reference the vrrp_script defined above
        check_lanjinli
    }
}

Test

#When the file lanjinli does not exist under /mnt/, the script returns 0 and the priority is unchanged, so this node stays active and holds the VIP
[root@ka1 ~]# ll /mnt/
total 4
-rwxrwxrwx 1 root root 46 Aug 16 15:24 check_lanjinli.sh
[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)
        RX packets 37284  bytes 2866245 (2.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 44216  bytes 3616117 (3.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2059  bytes 157403 (153.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2059  bytes 157403 (153.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka1 ~]#
#When the file lanjinli exists under /mnt/, the script returns 1 and the priority drops by 30, so this node gives up the VIP
[root@ka1 ~]# touch /mnt/lanjinli
[root@ka1 ~]# ll /mnt
total 4
-rwxrwxrwx 1 root root 46 Aug 16 15:24 check_lanjinli.sh
-rw-r--r-- 1 root root  0 Aug 16 15:30 lanjinli
[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)
        RX packets 38869  bytes 3015686 (2.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 45686  bytes 3755223 (3.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2143  bytes 163823 (159.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2143  bytes 163823 (159.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

2.13 keepalived with HAProxy for High Availability

Note: haproxy and ipvsadm cannot share the same VIP. To run keepalived-managed ipvs and haproxy at the same time, give them different VIPs, i.e. use the dual-master setup: in what follows, haproxy uses 172.25.254.100 and the ipvs virtual server uses 172.25.254.200.

#ka1

[root@ka1 ~]# yum install haproxy -y

#ka2

[root@ka2 ~]# yum install haproxy -y

Remove the VIP from realserver1 and realserver2 and restore the default ARP behavior:

#realserver1
[root@realserver1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-lo
DEVICE=lo
IPADDR0=127.0.0.1
NETMASK0=255.0.0.0
#IPADDR1=172.25.254.100
#NETMASK1=255.255.255.255
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback
[root@realserver1 ~]# systemctl restart network
[root@realserver1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:54:22:6a brd ff:ff:ff:ff:ff:ff
    inet 172.25.254.110/24 brd 172.25.254.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::4da5:5424:6c11:6bc/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::435f:2ba:94d2:582/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::d318:4046:600c:390c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
[root@realserver1 ~]# cat /etc/sysctl.d/arp.conf 
net.ipv4.conf.all.arp_ignore=0
net.ipv4.conf.all.arp_announce=0
net.ipv4.conf.lo.arp_ignore=0
net.ipv4.conf.lo.arp_announce=0
[root@realserver1 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/arp.conf ...
net.ipv4.conf.all.arp_ignore = 0
net.ipv4.conf.all.arp_announce = 0
net.ipv4.conf.lo.arp_ignore = 0
net.ipv4.conf.lo.arp_announce = 0
* Applying /etc/sysctl.conf ...
[root@realserver1 ~]# 


#realserver2
[root@realserver2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-lo 
DEVICE=lo
IPADDR0=127.0.0.1
NETMASK0=255.0.0.0
#IPADDR1=172.25.254.100
#NETMASK1=255.255.255.255
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback
[root@realserver2 ~]# systemctl restart network
[root@realserver2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:69:b5:ed brd ff:ff:ff:ff:ff:ff
    inet 172.25.254.120/24 brd 172.25.254.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::4da5:5424:6c11:6bc/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::435f:2ba:94d2:582/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::d318:4046:600c:390c/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
[root@realserver2 ~]# cat /etc/sysctl.d/arp.conf 
net.ipv4.conf.all.arp_ignore=0
net.ipv4.conf.all.arp_announce=0
net.ipv4.conf.lo.arp_ignore=0
net.ipv4.conf.lo.arp_announce=0
[root@realserver2 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/arp.conf ...
net.ipv4.conf.all.arp_ignore = 0
net.ipv4.conf.all.arp_announce = 0
net.ipv4.conf.lo.arp_ignore = 0
net.ipv4.conf.lo.arp_announce = 0
* Applying /etc/sysctl.conf ...
[root@realserver2 ~]# 

Enable the kernel parameter on both the ka1 and ka2 nodes and configure haproxy on ka1 and ka2:

#ka1
[root@ka1 ~]# cat /etc/sysctl.conf
...
#append at the end
net.ipv4.ip_nonlocal_bind = 1
#apply the change
[root@ka1 ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
[root@ka1 ~]#
#Edit the haproxy configuration file
[root@ka1 ~]# cat /etc/haproxy/haproxy.cfg
...
#append at the end
listen webserver
        bind 172.25.254.100:80
        mode http
        balance roundrobin
        server web1 172.25.254.110:80 check inter 3 fall 2 rise 5
        server web2 172.25.254.120:80 check inter 3 fall 2 rise 5
#Enable and start the haproxy service
[root@ka1 ~]# systemctl enable --now haproxy.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
#Check that the ports are bound
[root@ka1 ~]# netstat -antlupe | grep haproxy
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      0          1269940    18519/haproxy       
tcp        0      0 172.25.254.100:80       0.0.0.0:*               LISTEN      0          1269942    18519/haproxy       
udp        0      0 0.0.0.0:40148           0.0.0.0:*                           0          1269941    18517/haproxy       
[root@ka1 ~]#   


#ka2
[root@ka2 ~]# cat /etc/sysctl.conf
...
#append at the end
net.ipv4.ip_nonlocal_bind = 1
[root@ka2 ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
[root@ka2 ~]# 
#Edit the haproxy configuration file
[root@ka2 ~]# cat /etc/haproxy/haproxy.cfg
...
#append at the end
listen webserver
        bind 172.25.254.100:80
        mode http
        balance roundrobin
        server web1 172.25.254.110:80 check inter 3 fall 2 rise 5
        server web2 172.25.254.120:80 check inter 3 fall 2 rise 5
#Enable and start the haproxy service
[root@ka2 ~]# systemctl enable --now haproxy.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
#Check that the ports are bound
[root@ka2 ~]# netstat -antlupe | grep haproxy
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      0          748750     8584/haproxy        
tcp        0      0 172.25.254.100:80       0.0.0.0:*               LISTEN      0          748752     8584/haproxy        
udp        0      0 0.0.0.0:57756           0.0.0.0:*                           0          748751     8581/haproxy        
[root@ka2 ~]#   
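
Keepalived itself does not track whether haproxy is actually running; an optional addition (not part of the configuration shown below) is a vrrp_script that checks the haproxy process, in the same spirit as the file-based script in the previous section. A sketch, assuming killall is installed at its usual path:

vrrp_script check_haproxy {
   script "/usr/bin/killall -0 haproxy"    #exits 0 while a haproxy process exists
   interval 1
   weight -30
   fall 2
   rise 2
}
#and, inside the vrrp_instance that carries the haproxy VIP:
#    track_script {
#        check_haproxy
#    }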

The final keepalived configuration files on ka1 and ka2:

#ka1
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
	6668889@qq.com
   }
   notification_email_from keepalived.lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka1.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_script check_lanjinli {
   script "/mnt/check_lanjinli.sh"
   interval 1
   weight -30
   fall 2
   rise 2
   timeout 2
}

vrrp_instance VI_1 {
    state MASTER
    #state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1 
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
	172.25.254.20
    }
    track_script {
	check_lanjinli
    }
}

vrrp_instance VI_2 {
    #state MASTER
    state BACKUP
    interface eth0
    virtual_router_id 200
    priority 50
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.200/24 dev eth0 label eth0:2
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
}

#include "/etc/keepalived/conf.d/*.conf"

virtual_server 172.25.254.200 80  {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.254.110 80 {
        weight 1
        HTTP_GET {
            url {
	      path /
	      status_code 200		
	    }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.254.120 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.2 1358 {
    delay_loop 6
    lb_algo rr 
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    sorry_server 192.168.200.200 1358

    real_server 192.168.200.2 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.3 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.3 1358 {
    delay_loop 3
    lb_algo rr 
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    real_server 192.168.200.4 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.5 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@ka1 ~]# 





#ka2
[root@ka2 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
   	6668889@qq.com
   }
   notification_email_from keepalived@lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka2.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_instance VI_1 {
    state BACKUP
    #state MASTER
    interface eth0
    virtual_router_id 100
    priority 70
    advert_int 1
    #nopreempt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.100/24 dev eth0 label eth0:1 
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
	172.25.254.10
    }
}

vrrp_instance VI_2 {
    #state BACKUP
    state MASTER
    interface eth0
    virtual_router_id 200
    priority 100
    advert_int 1
    #nopreempt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.200/24 dev eth0 label eth0:2
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
}

virtual_server 172.25.254.200 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.254.110 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
	    }
	    connect_timeout 3
	    nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.254.120 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.2 1358 {
    delay_loop 6
    lb_algo rr 
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    sorry_server 192.168.200.200 1358

    real_server 192.168.200.2 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.3 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.3 1358 {
    delay_loop 3
    lb_algo rr 
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    real_server 192.168.200.4 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.5 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@ka2 ~]# 
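
After editing the files, reload keepalived on both nodes and confirm that each node holds the VIP it is MASTER for; a minimal sketch (run on ka1 and on ka2):

#apply the new configuration
systemctl restart keepalived.service
#ka1 should show 172.25.254.100 on eth0:1, ka2 should show 172.25.254.200 on eth0:2
ip address show dev eth0 | grep 172.25.254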

Test

[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# 
#access the VIP again after the keepalived service on ka1 has been stopped
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
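
The failover above can be reproduced with the following steps; a minimal sketch, assuming the service names used in this document:

#on ka1: stop keepalived so it no longer sends VRRP advertisements for VI_1
systemctl stop keepalived.service
#on ka2: after about three missed advertisements (advert_int 1) ka2 becomes MASTER and adds the VIP
ip address show dev eth0 | grep 172.25.254.100
#on ka1: start keepalived again; preemption is enabled (nopreempt is commented out),
#so ka1 (priority 80) takes the VIP back from ka2 (priority 70)
systemctl start keepalived.service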

The exit status shown below can be used together with vrrp_script to drive failover

#check the status of haproxy on ka1
[root@client ~]# killall -0 haproxy
haproxy: no process found				#indicates that no haproxy process is running on this host
[root@client ~]# killall -0 haproxy
haproxy: no process found
[root@client ~]# echo $?
1
[root@client ~]# 
#check the status of haproxy on ka2
[root@ka2 ~]# killall -0 haproxy
[root@ka2 ~]# echo $?
0
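
killall -0 sends no signal at all; it only reports through its exit status whether a process with the given name exists (0 = running, non-zero = not found). keepalived runs the script every interval second: after fall consecutive non-zero exits the check is marked failed and the instance priority is lowered by 30 (weight -30), and after rise consecutive successes the priority is restored. A hypothetical, slightly more explicit variant of the check script is sketched below; the configuration that follows uses the two-line killall version:

#!/bin/bash
#exit 0 only when a haproxy process exists; keepalived treats a non-zero exit
#as a failure and, after 2 consecutive failures (fall 2), subtracts 30 from
#this node's priority
pgrep -x haproxy > /dev/null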

Configure ka1 and ka2 with the haproxy check script

#ka1
[root@ka1 ~]# cat /mnt//check_lanjinli.sh 
#!/bin/bash
killall -0 haproxy
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf
...
vrrp_script check_lanjinli {
   script "/mnt/check_lanjinli.sh"
   interval 1
   weight -30
   fall 2
   rise 2
   timeout 2
}

vrrp_instance VI_1 {
    state MASTER
    #state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
    track_script {
        check_lanjinli
    }
}


#virtual_server 172.25.254.200 80 {
#    delay_loop 6
#    lb_algo wrr
#    lb_kind DR
#    #persistence_timeout 50
#    protocol TCP
#
#    real_server 172.25.254.110 80 {
#        weight 1
#        HTTP_GET {
#            url {
#              path /
#              status_code 200
#            }
#            connect_timeout 3
#            nb_get_retry 3
#            delay_before_retry 3
#        }
#    }
#    real_server 172.25.254.120 80 {
#        weight 1
#        HTTP_GET {
#            url {
#              path /
#              status_code 200
#            }
#            connect_timeout 3
#            nb_get_retry 3
#            delay_before_retry 3
#        }
#    }
#}
...



#ka2
[root@ka2 ~]# cat /mnt//check_lanjinli.sh
#!/bin/bash
killall -0 haproxy
[root@ka2 ~]# cat /etc/keepalived/keepalived.conf 
vrrp_script check_lanjinli {
   script "/mnt/check_lanjinli.sh"
   interval 1
   weight -30
   fall 2
   rise 2
   timeout 2
}

vrrp_instance VI_1 {
    state BACKUP
    #state MASTER
    interface eth0
    virtual_router_id 100
    priority 70
    advert_int 1
    #nopreempt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
    track_script {
        check_lanjinli
    }
}

vrrp_instance VI_2 {
    #state BACKUP
    state MASTER
    interface eth0
    virtual_router_id 200
    priority 100
    advert_int 1
    #nopreempt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.200/24 dev eth0 label eth0:2
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
}

#virtual_server 172.25.254.200 80 {
#    delay_loop 6
#    lb_algo wrr
#    lb_kind DR
#    #persistence_timeout 50
#    protocol TCP
#
#    real_server 172.25.254.110 80 {
#        weight 1
#        HTTP_GET {
#            url {
#              path /
#              status_code 200
#            }
#            connect_timeout 3
#            nb_get_retry 3
#            delay_before_retry 3
#        }
#    }
#    real_server 172.25.254.120 80 {
#        weight 1
#        HTTP_GET {
#            url {
#              path /
#              status_code 200
#            }
#            connect_timeout 3
#            nb_get_retry 3
#            delay_before_retry 3
#        }
#    }
#}

Test

[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# 
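
With track_script in place, stopping haproxy on ka1 is enough to move the VIP: the check starts failing, ka1's VI_1 priority drops from 80 to 50 (80 - 30), which is below ka2's 70, so ka2 takes over 172.25.254.100. A minimal sketch of how to observe this (commands only, output omitted):

#on ka1: stop haproxy so check_lanjinli.sh begins to fail
systemctl stop haproxy.service
#on ka2: the VIP should appear on eth0 within a few seconds
ip address show dev eth0 | grep 172.25.254.100
#on ka1: start haproxy again; after 2 successful checks (rise 2) the priority
#returns to 80 and ka1 reclaims the VIP
systemctl start haproxy.service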


#VIP on ka1
[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)
        RX packets 1954855  bytes 145104268 (138.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3863276  bytes 271408393 (258.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 24386  bytes 1333534 (1.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24386  bytes 1333534 (1.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka1 ~]#
#VIP on ka2
[root@ka2 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.20  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::4da5:5424:6c11:6bc  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)
        RX packets 1434842  bytes 106995057 (102.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2810630  bytes 197355601 (188.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.200  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 17777  bytes 972962 (950.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 17777  bytes 972962 (950.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka2 ~]#
#check the status of the haproxy service on ka2: ka2 does not currently hold the VIP 172.25.254.100, so haproxy would normally be unable to bind it and would fail to start, but it runs because the kernel parameter net.ipv4.ip_nonlocal_bind = 1 is enabled
[root@ka2 ~]# systemctl is-active haproxy.service 
active
[root@ka2 ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
[root@ka2 ~]# 
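
To confirm that haproxy on ka2 really is listening on a VIP it does not own, the listening sockets can be inspected; a minimal sketch:

#net.ipv4.ip_nonlocal_bind = 1 allows a socket to bind an address that is not
#configured on any local interface, so haproxy shows up bound to the VIP even
#while ka1 holds 172.25.254.100
ss -tnlp | grep 172.25.254.100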
