Keepalived High-Availability Clusters (with Labs)

1. High-Availability Clusters

Motivation: if the haproxy host from the previous lessons dies, the whole service goes down. Adding a standby host keeps the service available; that is the high-availability cluster we study next.

Cluster types

LB: Load Balance (load balancing)

LVS / HAProxy / nginx (http/upstream, stream/upstream)

HA: High Availability cluster

e.g. databases, Redis

SPoF: Single Point of Failure, the problem HA exists to solve

HPC: High Performance Computing cluster

Achieving high availability

The main lever for improving system availability: reduce MTTR (Mean Time To Repair)

Solution: build redundancy

active/passive (primary/standby)

active/active (dual primary)

active --> HEARTBEAT --> passive

active <--> HEARTBEAT <--> active

2. Deploying Keepalived

2.1 Setting Up the Environment

Lab preparation: four virtual machines, all cloned from RHEL7
realserver1: 172.25.254.110
realserver2: 172.25.254.120
KA1: 172.25.254.10
KA2: 172.25.254.20

 

[root@realserver1 ~]# yum install httpd -y
[root@realserver1 ~]# echo 172.25.254.110 > /var/www/html/index.html
[root@realserver1 ~]# systemctl enable --now httpd

[root@realserver2 ~]# yum install httpd -y
[root@realserver2 ~]# echo 172.25.254.120 > /var/www/html/index.html
[root@realserver2 ~]# systemctl enable --now httpd

#Verify the environment works
[root@ka1 ~]# curl 172.25.254.110
172.25.254.110
[root@ka1 ~]# curl 172.25.254.120
172.25.254.120

Configuration file layout
Configuration file: /etc/keepalived/keepalived.conf
The file consists of three parts:
GLOBAL CONFIGURATION
Global definitions: mail settings, router_id, VRRP options, multicast address, etc.
VRRP CONFIGURATION
VRRP instance(s): one block per VRRP virtual router
LVS CONFIGURATION
Virtual server group(s)
Virtual server(s): the VS and RS of the LVS cluster

2.2 Running the Lab

Requirement: when the host holding the VIP dies, how do we keep the service available?

[root@ka1 ~]# yum install keepalived -y
[root@ka2 ~]# yum install keepalived -y
[root@ka1 ~]# vim /etc/keepalived/keepalived.conf 
[root@ka1 ~]# systemctl restart keepalived.service
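The edit made to the configuration file above is not shown in the capture. A minimal sketch of what ka1's /etc/keepalived/keepalived.conf could contain, with the vrid, priority, and VIP values taken from the tcpdump and ifconfig output later in this lab (the router_id and auth_pass values are placeholders, not from the article):

```
global_defs {
    router_id ka1                    # placeholder identifier for this node
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100            # matches "vrid 100" in the tcpdump output below
    priority 100                     # matches "prio 100" in the tcpdump output below
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111               # placeholder shared secret
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
}
```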

 

#The VIP 172.25.254.100 now appears:
[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::20c:29ff:fe30:dff9  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:30:df:f9  txqueuelen 1000  (Ethernet)
        RX packets 5003  bytes 2528873 (2.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3179  bytes 364124 (355.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:30:df:f9  txqueuelen 1000  (Ethernet)

#Copy ka1's configuration file; the only change needed on ka2 is lowering the priority to 80.
[root@ka2 ~]# scp root@172.25.254.10:/etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf
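On ka2, a sketch of the fields that would differ from ka1 (the priority of 80 is stated above; setting state BACKUP is the conventional companion change and is an assumption here, since priority alone already decides the election):

```
vrrp_instance VI_1 {
    state BACKUP          # assumption: standby role on ka2
    priority 80           # lower than ka1's 100, so ka1 wins the election
    # ... all other settings identical to ka1 ...
}
```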

#Because ka2's priority of 80 is lower than ka1's 100, a packet capture shows only ka1 advertising.
[root@ka1 ~]# tcpdump -i eth0 -nn host 224.0.0.18
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:10:44.007931 IP 172.25.254.10 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, intvl 1s, length 20
11:10:45.008996 IP 172.25.254.10 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, intvl 1s, length 20

#If keepalived is stopped on ka1 (log in to ka1 from another host and stop the service), the capture changes:
[root@realserver1 ~]# ssh root@ka1
[root@ka1 ~]# systemctl stop keepalived.service 

11:16:24.498883 IP 172.25.254.10 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, intvl 1s, length 20
11:16:24.781268 IP 172.25.254.10 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 0, authtype simple, intvl 1s, length 20
11:16:25.469770 IP 172.25.254.20 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 80, authtype simple, intvl 1s, length 20
11:16:26.471110 IP 172.25.254.20 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 80, authtype simple, intvl 1s, length 20

#KA2 automatically brings up the VIP and takes over KA1's work
[root@ka2 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.20  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::20c:29ff:fea5:1656  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:a5:16:56  txqueuelen 1000  (Ethernet)
        RX packets 3634  bytes 360173 (351.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3137  bytes 279478 (272.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:a5:16:56  txqueuelen 1000  (Ethernet)

Key point: this lab covers the failure case. When the host holding the VIP dies, the other host takes over its work automatically.

3.1 Enabling Keepalived Logging

[root@ka1 ~]# vim /etc/keepalived/keepalived.conf 
[root@ka1 ~]# systemctl restart keepalived.service 

 

[root@ka1 ~]# vim /etc/sysconfig/keepalived 
[root@ka1 ~]# vim /etc/rsyslog.conf
[root@ka1 ~]# systemctl restart keepalived.service 
[root@ka1 ~]# systemctl restart rsyslog.service
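The contents of the two files edited above are not shown. A common way to send keepalived logs to a dedicated file, sketched here as an assumption based on standard keepalived/rsyslog practice (the facility number 6 and the local6 selector must match each other):

```
# /etc/sysconfig/keepalived
KEEPALIVED_OPTIONS="-D -S 6"          # -D: detailed logs, -S 6: log to syslog facility local6

# /etc/rsyslog.conf (line added)
local6.*    /var/log/keepalived.log   # write everything from facility local6 to its own file
```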

#Verify the result
[root@ka1 ~]# ll /var/log/keepalived.log
-rw------- 1 root root 911 Aug 12 13:54 /var/log/keepalived.log

 

3.2 Splitting the Configuration into Sub-Files

When the production environment grows complex, /etc/keepalived/keepalived.conf accumulates too much content to manage easily.
Per-cluster configuration (for example, each cluster's VIP settings) can be moved into separate sub-configuration files and pulled back in with the include directive.

[root@ka1 ~]# vim /etc/keepalived/keepalived.conf 
[root@ka1 ~]# mkdir -p /etc/keepalived/conf.d
[root@ka1 ~]# vim /etc/keepalived/conf.d/172.25.254.100.conf
[root@ka1 ~]# systemctl restart keepalived.service 
[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::20c:29ff:fe30:dff9  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:30:df:f9  txqueuelen 1000  (Ethernet)
        RX packets 28324  bytes 4260028 (4.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15699  bytes 1392969 (1.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:30:df:f9  txqueuelen 1000  (Ethernet)

In the main configuration file, comment out the vrrp_instance block and add: include "/etc/keepalived/conf.d/*.conf"

Then put the commented-out content into the sub-configuration file.
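Putting the two pieces together, a sketch of how the split layout could look (the vrrp_instance body mirrors the single-master settings used earlier in this lab; the auth_pass is a placeholder):

```
# /etc/keepalived/keepalived.conf (end of file)
include "/etc/keepalived/conf.d/*.conf"

# /etc/keepalived/conf.d/172.25.254.100.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
}
```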

3.3 Preempt and Non-Preempt Modes

3.3.1 Non-Preempt Mode: nopreempt

The default is preempt mode (preempt): when a higher-priority host comes back online, it takes the master role back from the lower-priority host. This makes the VIP drift back and forth between the KA hosts and causes network jitter, so non-preempt mode (nopreempt) is recommended: a recovered higher-priority host does not reclaim the master role. Note that even in non-preempt mode, if the original host goes down and the VIP migrates to a new host, and that new host later also goes down, the VIP still migrates back to the original host.
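A sketch of the nopreempt setup (per keepalived's documentation, nopreempt only takes effect when both nodes are configured with state BACKUP; that requirement comes from the docs, not from this article):

```
vrrp_instance VI_1 {
    state BACKUP          # nopreempt requires BACKUP state on both nodes
    nopreempt             # do not reclaim the master role after recovering
    priority 100          # on ka2: priority 80
    # ... remaining settings as before ...
}
```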

[root@ka1 ~]# vim /etc/keepalived/keepalived.conf

[root@ka1 ~]# systemctl restart keepalived.service 
[root@ka1 ~]# tcpdump -i eth0 -nn host 224.0.0.18
#If keepalived is stopped on ka1 (log in to ka1 from another host and stop the service), the capture changes:
[root@realserver1 ~]# ssh root@ka1
[root@ka1 ~]# systemctl stop keepalived.service 

14:30:22.135998 IP 172.25.254.10 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 0, authtype simple, intvl 1s, length 20
14:30:22.825098 IP 172.25.254.20 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 80, authtype simple, intvl 1s, length 20

#After restarting keepalived on ka1, the capture still shows 172.25.254.20; ka1 does not preempt the VIP back.
[root@ka1 ~]# systemctl restart keepalived.service 
14:30:22.825098 IP 172.25.254.20 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 80, authtype simple, intvl 1s, length 20

3.3.2 Preempt Delay Mode: preempt_delay

In preempt-delay mode, a recovered higher-priority host does not take the VIP back immediately; it waits for a delay (300s by default) before reclaiming the VIP.
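A sketch of the preempt_delay setting (per keepalived's documentation it applies to instances configured with state BACKUP; the 60-second value here is an arbitrary example, not from the article):

```
vrrp_instance VI_1 {
    state BACKUP          # preempt_delay applies to BACKUP-state instances
    preempt_delay 60      # example: wait 60s after recovery before reclaiming the VIP
    # ... remaining settings as before ...
}
```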

When you finish this step, remember to comment preempt_delay back out.

3.3.3 VIP Unicast Configuration

By default keepalived hosts advertise to each other over multicast, which can congest the network; switching to unicast reduces that traffic.

Note: unicast cannot be used while vrrp_strict is enabled.

#Note: vrrp_strict must be commented out
[root@ka1 ~]# vim /etc/keepalived/keepalived.conf
[root@ka1 ~]# systemctl restart keepalived.service
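On ka1, the unicast change could look like the following fragment inside the vrrp_instance block (the two addresses are ka1 and ka2 from this lab; on ka2 the two addresses are swapped):

```
vrrp_instance VI_1 {
    # vrrp_strict must not be set in global_defs when using unicast
    unicast_src_ip 172.25.254.10      # this node's own address (ka1)
    unicast_peer {
        172.25.254.20                 # the peer node (ka2)
    }
}
```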

 

[root@ka1 ~]# tcpdump -i eth0 -nn src host 172.25.254.10 and dst 172.25.254.20
14:39:14.406304 IP 172.25.254.10 > 172.25.254.20: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, intvl 1s, length 20

#The VIP is currently held by ka1, so keepalived on ka1 must be stopped before ka2's advertisements can be observed
[root@ka1 ~]# systemctl stop keepalived.service
[root@ka2 ~]# tcpdump -i eth0 -nn src host 172.25.254.20 and dst 172.25.254.10
14:40:14.999670 IP 172.25.254.20 > 172.25.254.10: VRRPv2, Advertisement, vrid 100, prio 80, authtype simple, intvl 1s, length 20

3.5 A master/master Dual-Master Keepalived Architecture

In a master/slave single-master architecture, only one keepalived node serves traffic at a time; that node is busy while the other sits idle, so utilization is poor. A master/master dual-master architecture solves this.

The master/master dual-master architecture:

Two or more VIPs run on different keepalived servers, so both servers serve web traffic in parallel and resource utilization improves.

Note: be sure to comment out vrrp_strict

# ka1
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
    track_script {           # requires a matching vrrp_script "check_haproxy" definition (added in the last lab)
        check_haproxy
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 200
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.200/24 dev eth0 label eth0:2
    }
    unicast_src_ip 172.25.254.10
    unicast_peer {
        172.25.254.20
    }
}

# ka2
vrrp_instance VI_1 {
    state BACKUP 
    interface eth0
    virtual_router_id 100
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:1
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 200
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.200/24 dev eth0 label eth0:2
    }
    unicast_src_ip 172.25.254.20
    unicast_peer {
        172.25.254.10
    }
}

4. High Availability for IPVS

4.1 Virtual Server Definition Formats

virtual_server IP port #virtual server IP address and port
virtual_server fwmark int #IPVS firewall mark, for firewall-mark-based load-balancing clusters
virtual_server group string #use a virtual server group

4.2 Virtual Server Configuration

virtual_server IP port {                     #VIP and PORT
    delay_loop <INT>                         #interval between back-end health checks
    lb_algo rr|wrr|lc|wlc|lblc|sh|dh         #scheduling algorithm
    lb_kind NAT|DR|TUN                       #cluster type (must be uppercase)
    persistence_timeout <INT>                #persistent-connection timeout
    protocol TCP|UDP|SCTP                    #service protocol, usually TCP
    sorry_server <IPADDR> <PORT>             #fallback server used when all RS are down
    real_server <IPADDR> <PORT> {            #RS IP and PORT
        weight <INT>                         #RS weight
        notify_up <STRING>|<QUOTED-STRING>   #script run when the RS comes up
        notify_down <STRING>|<QUOTED-STRING> #script run when the RS goes down
        HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK { ... } #health-check method for this host
    }
}
#Note: braces must be on separate lines; putting two closing braces on one line, e.g. }}, causes errors
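A concrete instance of this grammar for the LVS-DR lab that follows, offered as a sketch: the VIP and RS addresses come from this article's environment, while the delay_loop and TCP_CHECK timing values are arbitrary examples not taken from the article:

```
virtual_server 172.25.254.100 80 {
    delay_loop 6                  # example: check the RS every 6 seconds
    lb_algo wrr                   # weighted round-robin, as used in the lab
    lb_kind DR                    # direct-routing mode
    protocol TCP
    real_server 172.25.254.110 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3     # example timeout for the TCP probe
            connect_port 80
        }
    }
    real_server 172.25.254.120 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            connect_port 80
        }
    }
}
```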
Lab: Implementing Single-Master LVS-DR Mode
[root@realserver1 ~]# ip a a 172.25.254.100/32 dev lo
[root@realserver1 ~]# cd /etc/sysconfig/network-scripts
[root@realserver1 network-scripts]# cat ifcfg-lo 
DEVICE=lo
IPADDR0=127.0.0.1
NETMASK0=255.0.0.0
IPADDR1=172.25.254.100
NETMASK1=255.255.255.255
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback

[root@realserver1 ~]# nmcli connection reload
[root@realserver1 ~]# nmcli connection up eth0
[root@realserver1 ~]# systemctl restart network

[root@realserver1 ~]# cat /etc/sysctl.d/arp.conf
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@realserver1 ~]# sysctl --system
#Do the same on realserver2.

[root@ka1 ~]# yum install ipvsadm -y
[root@ka2 ~]# yum install ipvsadm -y
[root@ka1 ~]# vim /etc/keepalived/keepalived.conf
[root@ka1 ~]# systemctl restart keepalived.service
[root@ka2 ~]# vim /etc/keepalived/keepalived.conf
[root@ka2 ~]# systemctl restart keepalived.service
#The edits on ka1 and ka2 are the same.

[root@ka1 ~]# ipvsadm -A -t 172.25.254.100:80 -s wrr
[root@ka1 ~]# ipvsadm -Ln
[root@ka2 ~]# ipvsadm -Ln   #the rules appear automatically, created by keepalived

# Test
[root@ka2 ~]# for i in {1..6}; do curl 172.25.254.100; done
realserver2 - 172.25.254.120
realserver1 - 172.25.254.110
realserver2 - 172.25.254.120
realserver1 - 172.25.254.110
realserver2 - 172.25.254.120
realserver1 - 172.25.254.110
Making Other Applications Highly Available: VRRP Script

Through its VRRP script mechanism, keepalived can call an external helper script to monitor a resource and dynamically adjust the node's priority based on the result, which extends high availability to other applications.

Note: be sure to comment out vrrp_strict

Summary: keepalived + haproxy gives a highly available load balancer.

[root@ka1 ~]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.ip_nonlocal_bind=1
[root@ka1 ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1

[root@ka1 ~]# yum install haproxy
[root@ka1 ~]# vim /etc/haproxy/haproxy.cfg 
backend app
    balance     roundrobin
    server  app1 127.0.0.1:5001 check
    server  app2 127.0.0.1:5002 check
    server  app3 127.0.0.1:5003 check
    server  app4 127.0.0.1:5004 check
listen webcluster
    bind 172.25.254.100:80
    mode http
    balance roundrobin
    server web1 172.25.254.110:80 check inter 3 fall 2 rise 5
    server web2 172.25.254.120:80 check inter 3 fall 2 rise 5
[root@ka1 ~]# systemctl enable --now haproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.

#My output had an error at this point; the correct listening sockets should look like this:
[root@ka1 ~]# netstat -antlupe | grep haproxy
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      0          175328     59155/haproxy       
tcp        0      0 172.25.254.100:80       0.0.0.0:*               LISTEN      0          175330     59155/haproxy       
udp        0      0 0.0.0.0:35871           0.0.0.0:*                           0          175329     59154/haproxy   
#Do the same on ka2
#At this point curl to the VIP does not yet succeed: the real servers still carry the ARP settings and loopback VIP from the DR lab, which must be reverted
[root@ka1 ~]# curl 172.25.254.100
[root@realserver2 ~]# cat /etc/sysctl.d/arp.conf 
net.ipv4.conf.all.arp_ignore=0
net.ipv4.conf.all.arp_announce=0
net.ipv4.conf.lo.arp_ignore=0
net.ipv4.conf.lo.arp_announce=0
[root@realserver2 ~]# sysctl --system

[root@realserver1 ~]# cat /etc/sysctl.d/arp.conf 
net.ipv4.conf.all.arp_ignore=0
net.ipv4.conf.all.arp_announce=0
net.ipv4.conf.lo.arp_ignore=0
net.ipv4.conf.lo.arp_announce=0
[root@realserver1 ~]# sysctl --system

#Do the same on realserver1 and realserver2
[root@realserver2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-lo
DEVICE=lo
IPADDR=127.0.0.1
NETMASK=255.0.0.0
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback

[root@realserver2 ~]# systemctl restart network
[root@realserver2 ~]# ip a


[root@ka1 ~]# curl 172.25.254.110
172.25.254.110
[root@ka1 ~]# curl 172.25.254.120
172.25.254.120
[root@ka2 ~]# curl 172.25.254.110
172.25.254.110
[root@ka2 ~]# curl 172.25.254.120
172.25.254.120

#Comment out the settings added above on both ka1 and ka2, then add the following on ka1 only
[root@ka1 ~]# vim /etc/keepalived/keepalived.conf 
vrrp_script check_haproxy {
    script "/etc/keepalived/test.sh"
    interval 1
    weight -30
    fall 2
    rise 2
    timeout 2
}
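Why weight -30 triggers failover: when the check script fails, keepalived adds the (negative) weight to the instance's priority. A quick shell sketch of that arithmetic, using ka1's priority of 100 and ka2's 80 from this lab:

```shell
# Effective priority after a failed check = base priority + weight
base=100      # ka1's configured priority
weight=-30    # applied while the check script is failing
backup=80     # ka2's priority
effective=$((base + weight))
echo "effective=$effective"          # 100 + (-30) = 70
if [ "$effective" -lt "$backup" ]; then
    echo "VIP fails over to ka2"     # 70 < 80, so ka2 wins the VRRP election
fi
```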
#and reference the script inside the vrrp_instance block with track_script { check_haproxy }

[root@ka1 ~]# systemctl restart keepalived.service 
[root@ka1 ~]# systemctl restart haproxy.service 
[root@ka2 ~]# systemctl restart keepalived.service 
[root@ka2 ~]# systemctl restart haproxy.service 

#A script on ka1 checks whether haproxy is still running; if the check fails, the VIP floats over to ka2.
[root@ka1 ~]# cat /etc/keepalived/test.sh 
#!/bin/bash
killall -0 haproxy
[root@ka1 ~]# chmod +x /etc/keepalived/test.sh 
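The script above relies on killall -0, which sends signal 0: no signal is actually delivered, only the process's existence is checked, and the exit status (0 = running, non-zero = not found) is what keepalived's vrrp_script evaluates. A self-contained sketch of the same idiom, probing the current shell's own PID instead of haproxy:

```shell
# Signal 0 probes a process without touching it; the exit status reports existence.
if kill -0 $$ 2>/dev/null; then      # $$ is this shell's PID, guaranteed to exist
    echo "alive"                     # exit 0: vrrp_script would treat this as success
else
    echo "dead"                      # non-zero exit: the check is marked as failed
fi
```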

[root@ka1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.254.10  netmask 255.255.255.0  broadcast 172.25.254.255
        inet6 fe80::20c:29ff:fe30:dff9  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:30:df:f9  txqueuelen 1000  (Ethernet)
        RX packets 1319150  bytes 102626039 (97.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2471641  bytes 174368903 (166.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 13622  bytes 692970 (676.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13622  bytes 692970 (676.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

#Watch from another virtual machine
[root@ka2 ~]# while true ; do  curl 172.25.254.100;sleep 0.5; done
172.25.254.110
172.25.254.120
172.25.254.110
172.25.254.120

#After stopping haproxy on ka1, the check script fails and the VIP floats to ka2, so requests keep succeeding; the experiment works.
[root@ka1 ~]# systemctl stop haproxy.service 
172.25.254.110
172.25.254.120
