Keepalived High-Availability Cluster

1. High-Availability Concepts

1.1 Introducing the Concept

The figure above is a simplified diagram of load balancing. Load balancing addresses the problem of a single server being overloaded by spreading access traffic across multiple servers; since the traffic cannot distribute itself sensibly among the back-end servers, a scheduler (load balancer) is required.

This still has a fatal flaw: if the scheduler itself fails, the whole service goes down. The scheduler therefore also needs a standby, which makes the architecture more resilient; adding a standby scheduler is what we call high availability.

1.2 High-Availability Types

active/passive (master/backup): when the master fails, the VIP moves to the backup

active/active (dual master): the two devices back each other up, i.e. there are two VIPs

active --> HEARTBEAT --> passive

active <--> HEARTBEAT <--> active

1.3 Keepalived Overview

Keepalived is a software implementation of the VRRP protocol, originally designed to provide high availability for the ipvs service.

Official site: http://keepalived.org/

Features:

  • Moves the VIP between nodes based on the VRRP protocol
  • Generates ipvs rules on the node that holds the VIP (pre-defined in the configuration file)
  • Health-checks each RS of the ipvs cluster
  • Calls scripts through its script interface to carry out user-defined actions that affect the cluster, which is how it supports services such as nginx and haproxy

Architecture:

1. Core user-space components:

  • vrrp stack: sends the VIP advertisements
  • checkers: monitor the real servers
  • system call: runs scripts on VRRP state transitions
  • SMTP: mail notification component
  • IPVS wrapper: generates IPVS rules
  • Netlink Reflector: network interface handling
  • WatchDog: monitors the processes

2. Control component: the parser for keepalived.conf, which applies the Keepalived configuration

3. I/O multiplexer: its own thread abstraction, optimised for network use

4. Memory management component: provides access to common memory-management functions (allocation, reallocation, release, etc.)

1.4 Installing Keepalived

[root@KA1 ~]# dnf install keepalived -y

[root@KA1 ~]# systemctl start keepalived

[root@KA1 ~]# ps axf | grep keepalived

  2385 pts/0   S+     0:00             \_ grep --color=auto keepalived

  2326 ?       Ss     0:00 /usr/sbin/keepalived -D

  2327 ?       S     0:00 \_ /usr/sbin/keepalived -D

1.5 Keepalived Files

Package name: keepalived

Main program: /usr/sbin/keepalived

Main configuration file: /etc/keepalived/keepalived.conf

Sample configurations: /usr/share/doc/keepalived/

Unit file: /lib/systemd/system/keepalived.service

Environment file for the unit: /etc/sysconfig/keepalived

1.6 Keepalived Configuration Syntax

Configuration file: /etc/keepalived/keepalived.conf

Configuration file layout

GLOBAL CONFIGURATION

Global definitions: mail settings, router_id, VRRP settings, multicast address, etc.

VRRP CONFIGURATION

VRRP instance(s): defines each VRRP virtual router

LVS CONFIGURATION

Virtual server group(s)

Virtual server(s): the VS and RSs of the LVS cluster

See the man page for details:

man keepalived.conf

Global configuration explained

! Configuration File for keepalived

global_defs {
   notification_email {
       594233887@qq.com                  #destination mailbox(es) for failover notification mail; list one address per line
       timiniglee-zln@163.com
   }
   notification_email_from keepalived@KA1.timinglee.org   #sender address for the notification mail
   smtp_server 127.0.0.1                 #mail server address
   smtp_connect_timeout 30               #mail server connection timeout
   router_id KA1.timinglee.org           #unique identifier of each keepalived host
                                         #using the local hostname is recommended, although duplicate
                                         #names across nodes do not break anything
   vrrp_skip_check_adv_addr              #by default every advertisement is fully checked, which costs performance;
                                         #with this option, if an advertisement comes from the same router as the
                                         #previous one the check is skipped (the default is to check everything)
   vrrp_strict                           #strict VRRP compliance
                                         #with this enabled the service will not start if:
                                         #1. there is no VIP address
                                         #2. unicast peers are configured
                                         #3. an IPv6 address is used with VRRP version 2
                                         #recommendation: do not enable this option
   vrrp_garp_interval 0                  #delay between gratuitous ARP messages, 0 = no delay
   vrrp_gna_interval 0                   #delay between gratuitous NA messages
   vrrp_mcast_group4 224.0.0.18          #multicast group address used for VRRP advertisements
}

2. Lab Environment Preparation

  1. Time must be synchronised on all nodes: ntp, chrony
  2. Disable the firewall and SELinux
  3. Nodes can reach each other by hostname: optional
  4. Using /etc/hosts for this is recommended: optional
  5. root on each node can reach the others over key-based SSH: optional

Create four RedHat 9.3 virtual machines (RHEL 7 may hit bugs) and configure their IPs as in the figure above:

server1: 172.25.254.110/24, install and configure httpd

server2: 172.25.254.120/24, install and configure httpd

KA1: 172.25.254.10/24

KA2: 172.25.254.20/24

Disable the firewall and SELinux on all four machines:

vim  /etc/selinux/config

reboot    # reboot so the SELinux change takes effect

Disable the firewall.

After completing all the steps, check from every host that the httpd on server1 and server2 is reachable:

curl 172.25.254.110

curl 172.25.254.120

If a page is not reachable, check whether httpd is running on server1 and server2 and whether the firewall is off. A consolidated sketch of the preparation commands follows.
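
The following is a minimal sketch of those preparation steps, assuming dnf-based hosts and a throw-away test page per server (the page content is only illustrative):

#On all four machines: stop the firewall and disable SELinux
systemctl disable --now firewalld
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0          #takes effect immediately; the config change becomes permanent after a reboot

#On server1 and server2 only: install httpd and publish a simple test page
dnf install httpd -y
echo "$(hostname)" > /var/www/html/index.html
systemctl enable --now httpd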

3. Basic Keepalived Deployment

3.1 Install keepalived

Install on both KA1 and KA2:

dnf install keepalived -y

3.2 Edit the Configuration Files

ka1

[root@ka1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
        211459813@qq.com
   }
   notification_email_from keepalived.lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka1.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
}

......

Restart the service after every configuration change:
systemctl restart keepalived.service 

ka2

[root@ka2 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
        888888888@qq.com
   }
   notification_email_from keepalived@lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka2.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.100/24 dev eth0 label eth0:1
    }
}

Restart the service:
systemctl restart keepalived.service 

Test

Watch the traffic flow with a packet capture of the packets sent to the multicast group:

[root@realserver1 ~]# tcpdump -i eth0 -nn host 224.0.0.18
#Because ka2 has the higher priority, ka2 holds the VIP; VRRP advertisements go from .20 to 224.0.0.18
[root@ka2 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.20  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::4da5:5424:6c11:6bc  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)
        RX packets 30159  bytes 4025655 (3.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 43306  bytes 3358755 (3.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1450  bytes 111001 (108.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1450  bytes 111001 (108.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka2 ~]# 

Stop the keepalived service on ka2 and the VIP moves to ka1; the VRRP advertisements now go from .10 to 224.0.0.18.

[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)
        RX packets 38254  bytes 5109536 (4.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 41709  bytes 3230481 (3.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1535  bytes 117492 (114.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1535  bytes 117492 (114.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka1 ~]# 

4. Separating the Keepalived Log

By default keepalived logs to the same file as other services, which is messy; below we split the keepalived log into its own file.

4.1 Edit /etc/sysconfig/keepalived

Open /etc/sysconfig/keepalived with vim and add the line below at the end, selecting log facility 6 (a local facility such as 6 or 8 works):

KEEPALIVED_OPTIONS="-D -S 6"

Restart the service:

systemctl   restart  keepalived.service  

4.2 Edit the rsyslog Configuration

Edit /etc/rsyslog.conf and add a local6 line below the existing local7.* line; the facility number here must match the ID passed to -S above:

...
# Save boot messages also to boot.log
local7.*                                                /var/log/boot.log
local6.*                                                /var/log/keepalived.log
...

Restart rsyslog:

systemctl restart rsyslog.service 

Check whether the log file exists; at this point it does not, because keepalived has not logged anything yet:

cat   /var/log/keepalived.log

Restart keepalived so that it produces log messages:

systemctl   restart  keepalived.service  

Check again:

cat   /var/log/keepalived.log
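
To confirm the facility routing without waiting for keepalived to log something, a quick hedged check is to write a test message to the local6 facility by hand (the message text is arbitrary):

logger -p local6.info "keepalived log split test"
tail -n 3 /var/log/keepalived.log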

5. Separate Keepalived Sub-Configuration Files

When /etc/keepalived/keepalived.conf holds too much it becomes hard to manage. The configuration of different clusters, for example each cluster's VIP settings, can be placed in separate sub-configuration files and pulled in with the include directive.

Format: include pathfile

5.1 Create the Sub-Configuration Directory

[root@ka1 ~]# mkdir -p /etc/keepalived/conf.d
[root@ka1 ~]# vim /etc/keepalived/conf.d/172.25.254.100.conf
[root@ka1 ~]# cat /etc/keepalived/conf.d/172.25.254.100.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }    
}

5.2 Edit the Main Configuration File

[root@ka1 ~]# cat /etc/keepalived/keepalived.conf 
...
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

#vrrp_instance VI_1 {
#    state MASTER
#    interface eth0
#    virtual_router_id 100
#    priority 100
#    advert_int 1
#    authentication {
#        auth_type PASS
#        auth_pass 1111
#    }
#    virtual_ipaddress {
#        172.25.254.100/24 dev eth0 label eth0:1
#    }
#}
#include the sub-configuration directory
include "/etc/keepalived/conf.d/*.conf"


virtual_server 192.168.200.100 443 {

...

Restart keepalived:

systemctl restart keepalived.service 

Test

#Check whether the node acquired the VIP 172.25.254.100; mind the priorities
[root@ka1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:92:51:bd brd ff:ff:ff:ff:ff:ff
    inet 172.25.254.10/24 brd 172.25.254.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 172.25.254.100/24 scope global secondary eth0:1
       valid_lft forever preferred_lft forever
    inet6 fe80::435f:2ba:94d2:582/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::4da5:5424:6c11:6bc/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
[root@ka1 ~]#
 

6. Preemptive and Non-Preemptive Modes

What the two modes mean:

When the master fails, the VIP moves to the backup device.

In preemptive mode, once the master becomes available again the VIP moves back to it;

in non-preemptive mode the VIP stays on the backup even after the master recovers.

To disable VIP preemption, every keepalived server must have its state configured as BACKUP.

#Watch the traffic flow with a packet capture on the multicast group
[root@realserver1 ~]# tcpdump -i eth0 -nn host 224.0.0.18
#keepalived configuration on ka1
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf
...
global_defs {
   notification_email {
        888888888@qq.com
   }
   notification_email_from keepalived.lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka1.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_instance VI_1 {
    #state MASTER
    state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80                #higher priority
    advert_int 1    
    nopreempt                #non-preemptive mode
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
}
...


#keepalived configuration on ka2
[root@ka2 ~]# cat /etc/keepalived/keepalived.conf
...
global_defs {
   notification_email {
        888888888@qq.com
   }
   notification_email_from keepalived@lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka2.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 100
    priority 70                #lower priority
    advert_int 1
    nopreempt                #non-preemptive mode
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.100/24 dev eth0 label eth0:1
    }
}
...

#First restart keepalived on ka2; the VIP is now on ka2
[root@ka2 ~]# systemctl restart keepalived.service 
[root@ka2 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.20  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::4da5:5424:6c11:6bc  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)
        RX packets 6284  bytes 562318 (549.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7751  bytes 615356 (600.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 319  bytes 24414 (23.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 319  bytes 24414 (23.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka2 ~]# 
#Then start keepalived on ka1; the VIP stays on ka2 and does not move over to ka1
[root@ka1 ~]# systemctl restart keepalived.service 
#If keepalived on ka2 is now stopped, the VIP moves to ka1 and the source address of the advertisements changes
[root@ka2 ~]# systemctl stop keepalived.service
#Check the VIP on ka1
[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)
        RX packets 7801  bytes 656054 (640.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7540  bytes 642736 (627.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 525  bytes 40131 (39.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 525  bytes 40131 (39.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka1 ~]# 
 

Preemption delay mode (preempt_delay)

In practice, moving the VIP back immediately may interrupt tasks that are still running, so a preemption delay can be configured instead.

With a preemption delay, when the higher-priority host recovers it does not take the VIP back immediately but only after a delay (300 s by default).

Configuration: preempt_delay <time>        #delay in seconds before preempting; the default is 300 s

Each keepalived server must have state BACKUP, and vrrp_strict must not be enabled.

#Watch the traffic flow with a packet capture on the multicast group
[root@realserver1 ~]# tcpdump -i eth0 -nn host 224.0.0.18
#keepalived configuration on ka1
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf
...
global_defs {
   notification_email {
        888888888@qq.com
   }
   notification_email_from keepalived.lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka1.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict                    #vrrp_strict disabled
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_instance VI_1 {
    #state MASTER
    state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    preempt_delay 5s            #preemption delay of 5 s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
}
...
#keepalived configuration on ka2
[root@ka2 ~]# cat /etc/keepalived/keepalived.conf
...
global_defs {
   notification_email {
        888888888@qq.com
   }
   notification_email_from keepalived@lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka2.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 100
    priority 70
    advert_int 1
    #nopreempt
    preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.100/24 dev eth0 label eth0:1
    }
}
...
#First stop keepalived on ka1; the VIP is on ka2 and advertisements go from .20 to the multicast address
[root@ka1 ~]# systemctl stop keepalived.service
#Check the VIP on ka2
[root@ka2 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.20  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::4da5:5424:6c11:6bc  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)
        RX packets 9507  bytes 804739 (785.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10743  bytes 875796 (855.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 739  bytes 56521 (55.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 739  bytes 56521 (55.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka2 ~]# 
#Restart keepalived on ka1; after 5 s the VIP switches back to ka1, and the advertisements to the multicast address now come from .10 instead of .20
[root@ka1 ~]# systemctl restart keepalived.service
#Check the VIP on ka1
[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)
        RX packets 9964  bytes 831383 (811.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11441  bytes 941439 (919.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1050  bytes 80278 (78.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1050  bytes 80278 (78.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka1 ~]# 
 

7. VIP Unicast Configuration

By default keepalived hosts advertise to each other over multicast, which adds load to the network; this can be replaced with unicast to reduce traffic.

Unicast mode requires vrrp_strict in the global section to be commented out.

Unicast syntax:

unicast_src_ip <IPADDR> #source IP of the unicast advertisements
unicast_peer {
    <IPADDR>    #IP of the peer host that receives the unicast advertisements
    .....        #add further lines if more peers should receive them
}

Lab configuration:

ka1

#Edit the keepalived configuration on ka1
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf
...
global_defs {
   notification_email {
        666688889@qq.com
   }
   notification_email_from keepalived.lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka1.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_instance VI_1 {
    state MASTER
    #state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
}

#include "/etc/keepalived/conf.d/*.conf"

...
#Restart the service
[root@ka1 ~]# systemctl restart keepalived.service 
 

ka2

#Edit the keepalived configuration on ka2
[root@ka2 ~]# cat /etc/keepalived/keepalived.conf
...
global_defs {
   notification_email {
   666688889@qq.com
}
   notification_email_from keepalived@lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka2.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    preempt_delay 60
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
}

virtual_server 192.168.200.100 443 {

...
#Restart the service
[root@ka2 ~]# systemctl restart keepalived.service 
 

Packet capture

#Because of the priorities the VIP is now on ka1; advertisements go from .10 to .20
[root@realserver2 ~]# tcpdump -i eth0 -nn host 172.25.254.10 and dst 172.25.254.20
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
02:56:01.907466 IP 172.25.254.10 > 172.25.254.20: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, intvl 1s, length 20
02:56:02.915078 IP 172.25.254.10 > 172.25.254.20: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, intvl 1s, length 20
02:56:03.923058 IP 172.25.254.10 > 172.25.254.20: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, intvl 1s, length 20
02:56:04.931880 IP 172.25.254.10 > 172.25.254.20: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, intvl 1s, length 20
#Now stop keepalived on ka1; the VIP moves to ka2 and advertisements go from .20 to .10
[root@realserver1 ~]# tcpdump -i eth0 -nn host 172.25.254.20 and dst 172.25.254.10
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:51:45.872241 IP 172.25.254.20 > 172.25.254.10: VRRPv2, Advertisement, vrid 100, prio 70, authtype simple, intvl 1s, length 20
11:51:46.872633 IP 172.25.254.20 > 172.25.254.10: VRRPv2, Advertisement, vrid 100, prio 70, authtype simple, intvl 1s, length 20
11:51:47.873942 IP 172.25.254.20 > 172.25.254.10: VRRPv2, Advertisement, vrid 100, prio 70, authtype simple, intvl 1s, length 20

8. Keepalived Notification Scripts

When keepalived changes state it can automatically run a script, for example to send a mail notification to the user.

By default the scripts are executed as the user keepalived_script.

If that user does not exist the scripts run as root; the directives below can be used to set the user the scripts run as (see the sketch that follows).
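
A minimal sketch of those directives in global_defs, assuming a reasonably recent keepalived (check keepalived.conf(5) on your version for the exact option names):

global_defs {
    ...
    enable_script_security            #refuse to run scripts as root when part of the script path is writable by non-root users
    script_user keepalived_script     #user (and optionally group) that notify/track scripts run as
}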

Notification script types:

1. Script triggered when this node becomes master
notify_master <STRING>|<QUOTED-STRING>

2. Script triggered when this node becomes backup
notify_backup <STRING>|<QUOTED-STRING>

3. Script triggered when this node enters the fault state
notify_fault <STRING>|<QUOTED-STRING>

4. Generic notification hook; one script can handle all three state transitions
notify <STRING>|<QUOTED-STRING>

5. Script triggered when VRRP is stopped
notify_stop <STRING>|<QUOTED-STRING>

How to call the scripts:

Add the following lines at the end of the vrrp_instance VI_1 block:

notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"

Lab: mail notifications to your own QQ mailbox

1. Install mailx

#ka1
[root@ka1 ~]# yum install mailx -y
#ka2
[root@ka2 ~]# yum install mailx -y

2. Generate a QQ mailbox authorization code

Log in to your QQ mailbox in a browser on Windows and generate an authorization code for POP3/IMAP/SMTP/Exchange/CardDAV.

Note the generated authorization code; it is needed in the configuration below.

3. Edit the mail client configuration

Do this on both ka1 and ka2:

[root@ka1 ~]# cat /etc/mail.rc
#append at the end of the file
...
set from=8888888888@qq.com                #your own QQ mail address
set smtp=smtp.qq.com                     #SMTP server
set smtp-auth-user=888888888@qq.com        #account used to authenticate with the SMTP server
set smtp-auth-password=agvasdavasdva    #the mailbox authorization code
set smtp-auth=login
set ssl-verify=ignore                    #skip certificate verification
#test
[root@ka1 ~]# echo hello world | mail -s test 8888888888@qq.com
 

4. Write the mail-sending script

#ka1
[root@ka1 ~]# cat /etc/keepalived/mail.sh
#!/bin/bash
mail_dst="888888888@qq.com"
send_message()
{
        mail_sub="$HOSTNAME to be $1 vip move"
        mail_msg="`date +%F\ %T`: vrrp on $HOSTNAME changed to $1"
        echo $mail_msg | mail -s "$mail_sub" $mail_dst
}

case $1 in
        master)
                send_message master
        ;;
        backup)
                send_message backup
        ;;
        fault)
                send_message fault
        ;;
        *)
        ;;
esac
#make the script executable
[root@ka1 ~]# chmod +x /etc/keepalived/mail.sh
#ka2
[root@ka2 ~]# cat /etc/keepalived/mail.sh
#!/bin/bash
mail_dst="888888888@qq.com"
send_message()
{
        mail_sub="$HOSTNAME to be $1 vip move"
        mail_msg="`date +%F\ %T`: vrrp on $HOSTNAME changed to $1"
        echo $mail_msg | mail -s "$mail_sub" $mail_dst
}

case $1 in
        master)
                send_message master
        ;;
        backup)
                send_message backup
        ;;
        fault)
                send_message fault
        ;;
        *)
        ;;
esac
#make the script executable
[root@ka2 ~]# chmod +x /etc/keepalived/mail.sh

5. Edit the keepalived Configuration Files

#ka1
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf
...
vrrp_instance VI_1 {
    state MASTER
    #state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
    notify_master "/etc/keepalived/mail.sh master"
    notify_backup "/etc/keepalived/mail.sh backup"
    notify_fault "/etc/keepalived/mail.sh fault"
}
...
#restart keepalived
[root@ka1 ~]# systemctl restart keepalived.service 
#ka2
[root@ka2 ~]# cat /etc/keepalived/keepalived.conf
...
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 100
    priority 70
    advert_int 1
    #nopreempt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
    notify_master "/etc/keepalived/mail.sh master"
    notify_backup "/etc/keepalived/mail.sh backup"
    notify_fault "/etc/keepalived/mail.sh fault"
}
...
#restart keepalived
[root@ka2 ~]# systemctl restart keepalived.service
 

Check whether the mail arrives.

Once keepalived on ka1 is stopped, ka1 no longer sends mail; only ka2 does.

9. Keepalived master/master Dual-Master Architecture

In the master/slave single-master architecture only one keepalived node serves traffic at a time; that host is busy while the other sits idle, so utilisation is poor. The master/master dual-master architecture solves this: the nodes back each other up, i.e. two (or more) VIPs run on different keepalived servers, so both servers handle web traffic in parallel and resource utilisation improves.

master/master dual-master configuration:

#keepalived configuration on ka1
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf
...

vrrp_instance VI_1 {
    state MASTER
    #state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
}

vrrp_instance VI_2 {
    #state MASTER
    state BACKUP
    interface eth0
    virtual_router_id 200
    priority 50
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.200/24 dev eth0 label eth0:2
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
}
#restart the service
[root@ka1 ~]# systemctl restart keepalived.service
#keepalived configuration on ka2
[root@ka2 ~]# cat /etc/keepalived/keepalived.conf
...
vrrp_instance VI_1 {
    state BACKUP
    #state MASTER
    interface eth0
    virtual_router_id 100
    priority 70
    advert_int 1
    #nopreempt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
}

vrrp_instance VI_2 {
    #state BACKUP
    state MASTER
    interface eth0
    virtual_router_id 200
    priority 100
    advert_int 1
    #nopreempt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.200/24 dev eth0 label eth0:2
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
}
...
#restart the service
[root@ka2 ~]# systemctl restart keepalived.service 
 

Check the VIPs on both hosts. At this point 172.25.254.100 is master on ka1 and backup on ka2, while 172.25.254.200 is backup on ka1 and master on ka2. If keepalived on ka2 is stopped, ka1 holds both VIPs; likewise, stopping keepalived on ka1 puts both VIPs on ka2.

#ka1
[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)
        RX packets 19514  bytes 1558959 (1.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 28926  bytes 2270962 (2.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1869  bytes 142905 (139.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1869  bytes 142905 (139.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka1 ~]# 
#ka2
[root@ka2 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.20  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::4da5:5424:6c11:6bc  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)
        RX packets 25352  bytes 1940929 (1.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24099  bytes 1946973 (1.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.200  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1689  bytes 129040 (126.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1689  bytes 129040 (126.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka2 ~]# 
 

10. Making IPVS Highly Available

virtual_server definition formats

virtual_server IP port                    #virtual server defined by VIP address and port
virtual_server fwmark int                 #ipvs firewall mark, for firewall-mark-based load-balancing clusters
virtual_server group string               #use a virtual server group
 

Virtual server configuration

virtual_server IP port {                    #VIP and PORT
    delay_loop <INT>                        #interval between health checks of the back-end servers
    lb_algo rr|wrr|lc|wlc|lblc|sh|dh        #scheduling algorithm
    lb_kind NAT|DR|TUN                      #cluster type, must be upper case
    persistence_timeout <INT>               #persistent connection timeout
    protocol TCP|UDP|SCTP                   #service protocol, usually TCP
    sorry_server <IPADDR> <PORT>            #fallback server used when all RSs are down
    real_server <IPADDR> <PORT> {           #RS IP and PORT
        weight <INT>                        #RS weight
        notify_up <STRING>|<QUOTED-STRING>      #script run when the RS comes up
        notify_down <STRING>|<QUOTED-STRING>    #script run when the RS goes down
        HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK { ... }   #health-check method for this RS
    }
}
 

Application-layer checks

HTTP_GET|SSL_GET {
    url {
        path <URL_PATH>             #URL to monitor
        status_code <INT>           #response code that counts as healthy, usually 200
    }
    connect_timeout <INTEGER>       #request timeout, comparable to haproxy's timeout server
    nb_get_retry <INT>              #number of retries
    delay_before_retry <INT>        #delay before each retry
    connect_ip <IP ADDRESS>         #RS IP address the health check is sent to
    connect_port <PORT>             #RS port the health check is sent to
    bindto <IP ADDRESS>             #source address used when sending the health check
    bind_port <PORT>                #source port used when sending the health check
}
 

TCP checks

TCP_CHECK {
    connect_ip <IP ADDRESS>         #RS IP address the health check is sent to
    connect_port <PORT>             #RS port the health check is sent to
    bindto <IP ADDRESS>             #source address used when sending the health check
    bind_port <PORT>                #source port used when sending the health check
    connect_timeout <INTEGER>       #request timeout, comparable to haproxy's timeout server
}
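
The configurations later in this section use HTTP_GET; as a hedged illustration, a TCP-only check for the same RS might look like the sketch below (only the TCP handshake on the port is tested):

real_server 172.25.254.110 80 {
    weight 1
    TCP_CHECK {
        connect_port 80           #check TCP port 80 on the RS
        connect_timeout 3         #mark the RS down if the connection does not complete within 3 s
    }
}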
 

11. Single-Master LVS-DR Mode

server1

#Add the VIP (this only lasts until reboot)
[root@realserver1 ~]# ip a a 172.25.254.100/32 dev lo
#Add the VIP permanently
#Edit the interface configuration file
[root@realserver1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-lo
DEVICE=lo
IPADDR0=127.0.0.1
NETMASK0=255.0.0.0
IPADDR1=172.25.254.100
NETMASK1=255.255.255.255
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback
[root@realserver1 ~]# 
#Restart the network service; if it fails, check journalctl -xe, the interface file may be wrong
[root@realserver1 ~]# systemctl restart network
Job for network.service failed because the control process exited with error code. See "systemctl status network.service" and "journalctl -xe" for details.
[root@realserver1 ~]# rm -rf /etc/sysconfig/network-scripts/ifcfg-
ifcfg-ens33  ifcfg-lo     
[root@realserver1 ~]# rm -rf /etc/sysconfig/network-scripts/ifcfg-ens33 
[root@realserver1 ~]# systemctl restart network 
#Set the ARP kernel parameters for the RS
[root@realserver1 ~]# cat /etc/sysctl.d/arp.conf 
net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.all.arp_announce=2
net.ipv4.conf.lo.arp_ignore=1
net.ipv4.conf.lo.arp_announce=2
#Reload the kernel parameters
[root@realserver1 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/arp.conf ...
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
* Applying /etc/sysctl.conf ...
[root@realserver1 ~]# 
#Stop the firewall, disable SELinux enforcement and restart httpd
[root@realserver1 ~]# systemctl stop firewalld.service 
[root@realserver1 ~]# setenforce 0
[root@realserver1 ~]# systemctl restart httpd.service

server2

#Add the VIP
[root@realserver2 ~]# ip a a 172.25.254.100/32 dev lo
#Set the ARP kernel parameters for the RS
[root@realserver2 ~]# cat /etc/sysctl.d/arp.conf 
net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.all.arp_announce=2
net.ipv4.conf.lo.arp_ignore=1
net.ipv4.conf.lo.arp_announce=2
#Reload the kernel parameters
[root@realserver2 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/arp.conf ...
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
* Applying /etc/sysctl.conf ...
[root@realserver2 ~]#
#Stop the firewall, disable SELinux enforcement and restart httpd
[root@realserver2 ~]# systemctl stop firewalld.service 
[root@realserver2 ~]# setenforce 0
[root@realserver2 ~]# systemctl restart httpd.service
 

ka1

#Install ipvsadm
[root@ka1 ~]# yum install ipvsadm -y
#Edit the keepalived configuration
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf
...
vrrp_instance VI_1 {
    state MASTER
    #state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
}

#include "/etc/keepalived/conf.d/*.conf"


virtual_server 172.25.254.100 80  {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.254.110 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.254.120 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
...
[root@ka1 ~]# systemctl restart keepalived.service 
#Check the ipvs rules
[root@ka1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.254.100:80 wrr
  -> 172.25.254.110:80            Route   1      0          0         
  -> 172.25.254.120:80            Route   1      0          0         
[root@ka1 ~]# 

ka2

#Install ipvsadm
[root@ka2 ~]# yum install ipvsadm -y
#Edit the keepalived configuration
[root@ka2 ~]# cat /etc/keepalived/keepalived.conf
...

vrrp_instance VI_1 {
    state BACKUP
    #state MASTER
    interface eth0
    virtual_router_id 100
    priority 70
    advert_int 1
    #nopreempt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
}

virtual_server 172.25.254.100 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.254.110 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.254.120 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
...
[root@ka2 ~]# systemctl restart keepalived.service 
#Check the ipvs rules
[root@ka2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.200.100:80 wrr
  -> 172.25.254.110:80            Route   1      0          0         
  -> 172.25.254.120:80            Route   1      0          0         
[root@ka2 ~]# 
 

Test

ka1 holds the VIP, i.e. 172.25.254.100 shown as eth0:2:

[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)
        RX packets 18487  bytes 1552110 (1.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23923  bytes 1781264 (1.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 357  bytes 27304 (26.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 357  bytes 27304 (26.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka1 ~]# 
#Access test
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]#
 

Stop keepalived on ka1 and the VIP moves to ka2.

#Stop the keepalived service on ka1
[root@ka1 ~]# systemctl stop keepalived.service
#Check the VIP on ka2
[root@ka2 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.20  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::4da5:5424:6c11:6bc  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)
        RX packets 22123  bytes 1701850 (1.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32484  bytes 2443518 (2.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 486  bytes 37044 (36.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 486  bytes 37044 (36.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka2 ~]# 
#Access test
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# 
 

12. Switching Master/Backup Roles with a Script

Using its VRRP Script mechanism, keepalived can call an external helper script to monitor a resource and dynamically adjust the priority according to the result, and thereby provide high availability for other applications.

VRRP Script configuration
Defining the script
vrrp_script: a user-defined resource-monitoring script whose return value the VRRP instances react to. It is shared configuration that several instances can reference, defined in its own block outside any VRRP instance, usually right after the global_defs block. Typically the script monitors the state of a given application; as soon as the application is found to be unhealthy, the priority of the MASTER node is reduced below that of the SLAVE node, and the VIP fails over to the SLAVE node.

Write the script on ka1:

vim  /mnt/check_lanjinli.sh 

#!/bin/bash
# exits 0 when /mnt/lanjinli does NOT exist, 1 when it does
[ ! -f "/mnt/lanjinli" ]
#Make the script executable
[root@ka1 ~]# chmod 777 /mnt/check_lanjinli.sh 

Test the script's return value:

[root@ka1 ~]# bash /mnt/check_lanjinli.sh
[root@ka1 ~]# echo $?
0
[root@ka1 ~]# touch /mnt/lanjinli
[root@ka1 ~]# bash /mnt/check_lanjinli.sh
[root@ka1 ~]# echo $?
1
[root@ka1 ~]# 

Edit the keepalived configuration on ka1:

vrrp_script check_lanjinli {
   script "/mnt/check_lanjinli.sh"    #script to run
   interval 1                         #run every 1 s
   weight -30                         #subtract 30 from the priority while the check is failing
   fall 2                             #two consecutive failures mark the check as failed
   rise 2                             #two consecutive successes mark the check as healthy again
   timeout 2                          #script timeout of 2 s
}

vrrp_instance VI_1 {
    state MASTER
    #state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
    track_script {                #reference the vrrp_script
        check_lanjinli
    }
}
 

Test:

#While /mnt/lanjinli does not exist the script returns 0, the priority is unchanged, and this node keeps the VIP
[root@ka1 ~]# ll /mnt/
total 4
-rwxrwxrwx 1 root root 46 Aug 16 15:24 check_lanjinli.sh
[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)
        RX packets 37284  bytes 2866245 (2.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 44216  bytes 3616117 (3.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2059  bytes 157403 (153.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2059  bytes 157403 (153.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka1 ~]#
#Once /mnt/lanjinli exists the script returns 1, the priority drops by 30, and this node gives up the VIP
[root@ka1 ~]# touch /mnt/lanjinli
[root@ka1 ~]# ll /mnt
total 4
-rwxrwxrwx 1 root root 46 Aug 16 15:24 check_lanjinli.sh
-rw-r--r-- 1 root root  0 Aug 16 15:30 lanjinli
[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)
        RX packets 38869  bytes 3015686 (2.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 45686  bytes 3755223 (3.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2143  bytes 163823 (159.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2143  bytes 163823 (159.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka1 ~]# 
 

13. High Availability with Keepalived and HAProxy

haproxy and ipvsadm cannot use the same VIP at the same time. To run keepalived with haproxy alongside ipvs you can use different VIPs, i.e. a dual-master setup: here haproxy uses 172.25.254.200 and ipvsadm uses 172.25.254.100. The configuration below uses this dual-master mode.

Install HAProxy on ka1 and ka2, as sketched below.

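A minimal sketch of the installation, assuming haproxy is available in the configured repositories:

[root@ka1 ~]# dnf install haproxy -y
[root@ka2 ~]# dnf install haproxy -y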

Remove the VIP from realserver1 and realserver2 and restore the default ARP behaviour:

#realserver1
[root@realserver1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-lo
DEVICE=lo
IPADDR0=127.0.0.1
NETMASK0=255.0.0.0
#IPADDR1=172.25.254.100
#NETMASK1=255.255.255.255
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback
[root@realserver1 ~]# systemctl restart network
[root@realserver1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:54:22:6a brd ff:ff:ff:ff:ff:ff
    inet 172.25.254.110/24 brd 172.25.254.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::4da5:5424:6c11:6bc/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::435f:2ba:94d2:582/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::d318:4046:600c:390c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
[root@realserver1 ~]# cat /etc/sysctl.d/arp.conf 
net.ipv4.conf.all.arp_ignore=0
net.ipv4.conf.all.arp_announce=0
net.ipv4.conf.lo.arp_ignore=0
net.ipv4.conf.lo.arp_announce=0
[root@realserver1 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/arp.conf ...
net.ipv4.conf.all.arp_ignore = 0
net.ipv4.conf.all.arp_announce = 0
net.ipv4.conf.lo.arp_ignore = 0
net.ipv4.conf.lo.arp_announce = 0
* Applying /etc/sysctl.conf ...
[root@realserver1 ~]# 


#realserver2
[root@realserver2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-lo 
DEVICE=lo
IPADDR0=127.0.0.1
NETMASK0=255.0.0.0
#IPADDR1=172.25.254.100
#NETMASK1=255.255.255.255
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback
[root@realserver2 ~]# systemctl restart network
[root@realserver2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:69:b5:ed brd ff:ff:ff:ff:ff:ff
    inet 172.25.254.120/24 brd 172.25.254.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::4da5:5424:6c11:6bc/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::435f:2ba:94d2:582/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::d318:4046:600c:390c/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:6b:1d:b7 brd ff:ff:ff:ff:ff:ff
[root@realserver2 ~]# cat /etc/sysctl.d/arp.conf 
net.ipv4.conf.all.arp_ignore=0
net.ipv4.conf.all.arp_announce=0
net.ipv4.conf.lo.arp_ignore=0
net.ipv4.conf.lo.arp_announce=0
[root@realserver2 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/arp.conf ...
net.ipv4.conf.all.arp_ignore = 0
net.ipv4.conf.all.arp_announce = 0
net.ipv4.conf.lo.arp_ignore = 0
net.ipv4.conf.lo.arp_announce = 0
* Applying /etc/sysctl.conf ...
[root@realserver2 ~]# 
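
Both real servers are now back to the default ARP behaviour and the VIP has been removed from lo, in preparation for the haproxy-based setup that follows. A quick check that the restore took effect (a sketch; run on each real server):

#the VIP should no longer be configured on the loopback interface
ip addr show dev lo | grep 172.25.254.100 || echo "VIP removed from lo"
#the ARP parameters should all read 0 again
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.lo.arp_announce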
 

Enable kernel parameters on ka1 and ka2

#ka1
[root@ka1 ~]# cat /etc/sysctl.conf
...
#Append at the end of the file
net.ipv4.ip_nonlocal_bind = 1
#Make it take effect
[root@ka1 ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
[root@ka1 ~]#
#Edit the haproxy configuration file
[root@ka1 ~]# cat /etc/haproxy/haproxy.cfg
...
#Append at the end of the file
listen webserver
        bind 172.25.254.100:80
        mode http
        balance roundrobin
        server web1 172.25.254.110:80 check inter 3 fall 2 rise 5
        server web2 172.25.254.120:80 check inter 3 fall 2 rise 5
#Enable and start the haproxy service
[root@ka1 ~]# systemctl enable --now haproxy.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
#Check that the listening ports are present
[root@ka1 ~]# netstat -antlupe | grep haproxy
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      0          1269940    18519/haproxy       
tcp        0      0 172.25.254.100:80       0.0.0.0:*               LISTEN      0          1269942    18519/haproxy       
udp        0      0 0.0.0.0:40148           0.0.0.0:*                           0          1269941    18517/haproxy       
[root@ka1 ~]#   


#ka2
[root@ka2 ~]# cat /etc/sysctl.conf
...
#Append at the end of the file
net.ipv4.ip_nonlocal_bind = 1
[root@ka2 ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
[root@ka2 ~]# 
#Edit the haproxy configuration file
[root@ka2 ~]# cat /etc/haproxy/haproxy.cfg
...
#Append at the end of the file
listen webserver
        bind 172.25.254.100:80
        mode http
        balance roundrobin
        server web1 172.25.254.110:80 check inter 3 fall 2 rise 5
        server web2 172.25.254.120:80 check inter 3 fall 2 rise 5
#Enable and start the haproxy service
[root@ka2 ~]# systemctl enable --now haproxy.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
#Check that the listening ports are present
[root@ka2 ~]# netstat -antlupe | grep haproxy
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      0          748750     8584/haproxy        
tcp        0      0 172.25.254.100:80       0.0.0.0:*               LISTEN      0          748752     8584/haproxy        
udp        0      0 0.0.0.0:57756           0.0.0.0:*                           0          748751     8581/haproxy        
[root@ka2 ~]#   
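
With net.ipv4.ip_nonlocal_bind = 1, haproxy can bind 172.25.254.100:80 on both nodes even though only one of them holds that VIP at any given time. A minimal sanity check on either node (a sketch; exact output will differ):

#the kernel parameter should be active
sysctl net.ipv4.ip_nonlocal_bind
#validate the haproxy configuration before starting the service
haproxy -c -f /etc/haproxy/haproxy.cfg
#the listener on the VIP should exist even on the node that does not own the VIP
ss -tlnp | grep 172.25.254.100:80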
 

keepalived configuration files for ka1 and ka2

#ka1
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     888888888@qq.com
   }
   notification_email_from keepalived.lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka1.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_script check_lanjinli {
   script "/mnt/check_lanjinli.sh"
   interval 1
   weight -30
   fall 2
   rise 2
   timeout 2
}

vrrp_instance VI_1 {
    state MASTER
    #state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1 
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
    track_script {
        check_lanjinli
    }
}

vrrp_instance VI_2 {
    #state MASTER
    state BACKUP
    interface eth0
    virtual_router_id 200
    priority 50
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.200/24 dev eth0 label eth0:2
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
}

#include "/etc/keepalived/conf.d/*.conf"

virtual_server 172.25.254.200 80  {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.254.110 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.254.120 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.2 1358 {
    delay_loop 6
    lb_algo rr 
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    sorry_server 192.168.200.200 1358

    real_server 192.168.200.2 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.3 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.3 1358 {
    delay_loop 3
    lb_algo rr 
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    real_server 192.168.200.4 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.5 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@ka1 ~]# 

#ka2
[root@ka2 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
        888888888@qq.com
   }
   notification_email_from keepalived@lanjinli.org
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka2.lanjinli.org
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

vrrp_instance VI_1 {
    state BACKUP
    #state MASTER
    interface eth0
    virtual_router_id 100
    priority 70
    advert_int 1
    #nopreempt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.100/24 dev eth0 label eth0:1 
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
    172.25.254.10
    }
}

vrrp_instance VI_2 {
    #state BACKUP
    state MASTER
    interface eth0
    virtual_router_id 200
    priority 100
    advert_int 1
    #nopreempt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.200/24 dev eth0 label eth0:2
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
}

virtual_server 172.25.254.200 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.254.110 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.254.120 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.2 1358 {
    delay_loop 6
    lb_algo rr 
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    sorry_server 192.168.200.200 1358

    real_server 192.168.200.2 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.3 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.3 1358 {
    delay_loop 3
    lb_algo rr 
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    real_server 192.168.200.4 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.5 1358 {
        weight 1
        HTTP_GET {
            url { 
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url { 
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@ka2 ~]# 
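
keepalived turns the virtual_server blocks above into IPVS rules on the node that owns the corresponding VIP; the 10.10.10.x and 192.168.200.x stanzas appear to be leftovers from the packaged sample configuration and can be removed. Assuming the ipvsadm package is available, the generated rules can be inspected like this (a sketch):

dnf install ipvsadm -y          #assumption: the tool is not installed yet
systemctl restart keepalived
ipvsadm -Ln                     #numeric listing of the IPVS virtual services and real servers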
 

Client access test

hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# 
#Access again after stopping the keepalived service on ka1
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# 
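
The requests keep succeeding because the 172.25.254.100 VIP has failed over to ka2 and haproxy there is already bound to it. A quick way to confirm where the VIP currently lives (a sketch, no output shown):

#on ka1 the VIP should no longer be listed
ip -4 addr show dev eth0 | grep 172.25.254.100
#on ka2 the same command should now show 172.25.254.100
ip -4 addr show dev eth0 | grep 172.25.254.100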

#Check the haproxy status on ka1
[root@client ~]# killall -0 haproxy
haproxy: no process found                #indicates that no haproxy process is running
[root@client ~]# killall -0 haproxy
haproxy: no process found
[root@client ~]# echo $?
1
[root@client ~]# 
#Check the haproxy status on ka2
[root@ka2 ~]# killall -0 haproxy
[root@ka2 ~]# echo $?
0
[root@ka2 ~]# 
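
killall -0 sends signal 0: nothing is delivered to the process, but the exit status reports whether a process with that name exists (0 if it does, non-zero otherwise). The health-check script configured below relies on exactly this behaviour; a minimal sketch of the same logic:

#!/bin/bash
#exit 0 if an haproxy process exists, non-zero otherwise
if killall -0 haproxy 2>/dev/null; then
    echo "haproxy is running"
else
    echo "haproxy is not running"
fi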
 

Configure the haproxy health-check script on ka1 and ka2

#ka1
[root@ka1 ~]# cat /mnt//check_lanjinli.sh 
#!/bin/bash
killall -0 haproxy
[root@ka1 ~]# cat /etc/keepalived/keepalived.conf
...
vrrp_script check_lanjinli {
   script "/mnt/check_lanjinli.sh"
   interval 1
   weight -30
   fall 2
   rise 2
   timeout 2
}

vrrp_instance VI_1 {
    state MASTER
    #state BACKUP
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    #nopreempt
    #preempt_delay 5s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
    track_script {
        check_lanjinli
    }
}


#virtual_server 172.25.254.200 80 {
#    delay_loop 6
#    lb_algo wrr
#    lb_kind DR
#    #persistence_timeout 50
#    protocol TCP
#
#    real_server 172.25.254.110 80 {
#        weight 1
#        HTTP_GET {
#            url {
#              path /
#              status_code 200
#            }
#            connect_timeout 3
#            nb_get_retry 3
#            delay_before_retry 3
#        }
#    }
#    real_server 172.25.254.120 80 {
#        weight 1
#        HTTP_GET {
#            url {
#              path /
#              status_code 200
#            }
#            connect_timeout 3
#            nb_get_retry 3
#            delay_before_retry 3
#        }
#    }
#}
...

#ka2
[root@ka2 ~]# cat /mnt//check_lanjinli.sh
#!/bin/bash
killall -0 haproxy
[root@ka2 ~]# cat /etc/keepalived/keepalived.conf 
vrrp_script check_lanjinli {
   script "/mnt/check_lanjinli.sh"
   interval 1
   weight -30
   fall 2
   rise 2
   timeout 2
}

vrrp_instance VI_1 {
    state BACKUP
    #state MASTER
    interface eth0
    virtual_router_id 100
    priority 70
    advert_int 1
    #nopreempt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
    track_script {
        check_lanjinli
    }
}

vrrp_instance VI_2 {
    #state BACKUP
    state MASTER
    interface eth0
    virtual_router_id 200
    priority 100
    advert_int 1
    #nopreempt
    #preempt_delay 10s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.25.254.200/24 dev eth0 label eth0:2
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
}

#virtual_server 172.25.254.200 80 {
#    delay_loop 6
#    lb_algo wrr
#    lb_kind DR
#    #persistence_timeout 50
#    protocol TCP
#
#    real_server 172.25.254.110 80 {
#        weight 1
#        HTTP_GET {
#            url {
#              path /
#              status_code 200
#            }
#            connect_timeout 3
#            nb_get_retry 3
#            delay_before_retry 3
#        }
#    }
#    real_server 172.25.254.120 80 {
#        weight 1
#        HTTP_GET {
#            url {
#              path /
#              status_code 200
#            }
#            connect_timeout 3
#            nb_get_retry 3
#            delay_before_retry 3
#        }
#    }
#}
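
The virtual_server sections are commented out on both nodes because load balancing to the real servers is now done by haproxy, so keepalived no longer needs to generate IPVS rules. After editing the files, a reasonable sequence on each node is (a sketch):

chmod +x /mnt/check_lanjinli.sh       #vrrp_script normally requires the script to be executable
/mnt/check_lanjinli.sh; echo $?       #should print 0 while haproxy is running
systemctl restart keepalived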
 

Test

[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# curl 172.25.254.100
hahah
[root@client ~]# curl 172.25.254.100
hehehe
[root@client ~]# 


#VIP on ka1
[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)
        RX packets 1954855  bytes 145104268 (138.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3863276  bytes 271408393 (258.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:92:51:bd  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 24386  bytes 1333534 (1.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24386  bytes 1333534 (1.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka1 ~]#
#VIP on ka2
[root@ka2 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.20  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::435f:2ba:94d2:582  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::4da5:5424:6c11:6bc  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)
        RX packets 1434842  bytes 106995057 (102.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2810630  bytes 197355601 (188.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.200  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:68:ef:05  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 17777  bytes 972962 (950.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 17777  bytes 972962 (950.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:6b:1d:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ka2 ~]#
#Check the haproxy service status on ka2: ka2 does not currently hold the VIP 172.25.254.100, so haproxy would normally fail to bind it and refuse to start, but it runs because the net.ipv4.ip_nonlocal_bind kernel parameter is enabled
[root@ka2 ~]# systemctl is-active haproxy.service 
active
[root@ka2 ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
[root@ka2 ~]# 
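
With weight -30 in check_lanjinli, a failing health check lowers ka1's effective priority for VI_1 from 80 to 50, below ka2's 70, so the 172.25.254.100 VIP should move to ka2; once haproxy is back, the priority returns to 80 and the VIP is preempted back. A sketch of that failover test (commands only, output omitted):

#on ka1: stop haproxy so the check script starts failing
systemctl stop haproxy
ip -4 addr show dev eth0 | grep 172.25.254.100    #the VIP should disappear from ka1
#on ka2: the VIP should now be present and client curls keep working
ip -4 addr show dev eth0 | grep 172.25.254.100
#on ka1: start haproxy again and the VIP should move back
systemctl start haproxy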
 

To be continued.
