Contents
Keepalived High Availability
1. Introduction to keepalived
1.1 What is keepalived
Keepalived is a Linux tool for building highly available services: it keeps a service continuously reachable by sharing a virtual IP address (VIP) among several servers, providing load balancing and failover.
1.2 Main features of keepalived
- Virtual server: keepalived can group several real servers into one virtual server that provides service on a shared virtual IP address.
- Health checking: keepalived periodically checks the health of the real servers; when a server fails, keepalived removes it from the virtual server.
- Load balancing: keepalived distributes requests across the real servers according to their load, balancing the traffic.
- Failover: if a real server fails, keepalived moves the virtual server's IP address to a healthy server, so service continues.
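The health-checking and failover behaviour described above can be sketched as a minimal simulation (illustrative only, with made-up server names; this is not Keepalived's actual implementation):

```python
from itertools import cycle

# Hypothetical two-server pool behind one virtual IP.
servers = {"rs1": True, "rs2": True}  # name -> currently healthy?

def health_check(results):
    """Record probe results (True = probe succeeded) for each server."""
    servers.update(results)

def dispatch(n_requests):
    """Round-robin n_requests across the currently healthy servers."""
    rr = cycle([name for name, ok in servers.items() if ok])
    return [next(rr) for _ in range(n_requests)]

health_check({"rs1": True, "rs2": True})
print(dispatch(4))           # rs1 and rs2 alternate
health_check({"rs1": False}) # rs1 fails its probe -> removed from rotation
print(dispatch(2))           # only rs2 serves now
```

The point is the combination: health checks decide membership, and the balancer only rotates over members that are currently up.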
1.3 How keepalived failover works
Failover between keepalived nodes is implemented with VRRP (Virtual Router Redundancy Protocol).
While keepalived is working normally, the master node continuously multicasts heartbeat (VRRP advertisement) messages to tell the backup nodes that it is still alive. When the master fails, it can no longer send heartbeats; the backups stop receiving them, invoke their takeover logic, and claim the master's IP resources and services. When the master recovers, the backup releases the IP resources and services it took over during the failure and returns to its original backup role.
VRRP (Virtual Router Redundancy Protocol) was created to eliminate the single point of failure inherent in static routing: an election mechanism hands the routing duty to one of the VRRP routers.
1.4 How keepalived works
Keepalived groups several servers into a cluster in which one server is designated the master and the rest are backups. The master receives and handles all network requests while the backups monitor its state; as soon as the master fails, a backup immediately takes over its work, keeping the service continuously available.
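The master election behind this can be sketched as follows (an illustrative simulation of VRRP's highest-priority-wins rule, using the node names from the deployment below; real VRRP also breaks priority ties by comparing primary IP addresses, which is omitted here):

```python
def elect_master(nodes):
    """VRRP-style election: the node with the highest priority wins."""
    return max(nodes, key=lambda n: n["priority"])["name"]

nodes = [
    {"name": "haproxy01", "priority": 100},
    {"name": "haproxy02", "priority": 90},
]
print(elect_master(nodes))  # haproxy01 holds the VIP

# The master stops advertising (e.g. it crashed): drop it from the
# candidate set and re-elect -- the backup takes over the VIP.
survivors = [n for n in nodes if n["name"] != "haproxy01"]
print(elect_master(survivors))  # haproxy02 takes over
```

When haproxy01 comes back with priority 100, it wins the election again and reclaims the VIP, which matches the recovery behaviour described above.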
2. Highly available load balancing with keepalived and HAProxy
Deployment environment:

OS | Role | IP |
---|---|---|
redhat8 | HAProxy server (haproxy01), master | 192.168.200.43 |
redhat8 | HAProxy server (haproxy02), slave | 192.168.200.44 |
redhat8 | web server (rs1) | 192.168.200.46 |
redhat8 | web server (rs2) | 192.168.200.47 |
2.1 Prepare test HTTP pages on the web servers (rs1, rs2)
#Disable the firewall and SELinux
[root@rs1 ~]# systemctl disable --now firewalld
[root@rs1 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
[root@rs1 ~]# setenforce 0
[root@rs2 ~]# systemctl disable --now firewalld
[root@rs2 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
[root@rs2 ~]# setenforce 0
[root@rs1 ~]# yum -y install httpd
[root@rs1 ~]# echo "RS1" > /var/www/html/index.html
[root@rs1 ~]# systemctl restart httpd
[root@rs1 ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 *:80 *:*
LISTEN 0 128 [::]:22 [::]:*
[root@rs2 ~]# yum -y install httpd
[root@rs2 ~]# echo "RS2" > /var/www/html/index.html
[root@rs2 ~]# systemctl restart httpd
[root@rs2 ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 *:80 *:*
LISTEN 0 128 [::]:22 [::]:*
Verify: browsing to http://192.168.200.46/ and http://192.168.200.47/ should return RS1 and RS2 respectively.
2.2 Deploy HTTP load balancing with HAProxy
#Configure the EPEL repository
[root@haproxy01 ~]# yum install -y https://mirrors.aliyun.com/epel/epel-release-latest-8.noarch.rpm
[root@haproxy01 ~]# sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' /etc/yum.repos.d/epel*
[root@haproxy01 ~]# sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*
[root@haproxy02 ~]# yum install -y https://mirrors.aliyun.com/epel/epel-release-latest-8.noarch.rpm
[root@haproxy02 ~]# sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' /etc/yum.repos.d/epel*
[root@haproxy02 ~]# sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*
#Configure the HAProxy servers
#Install build dependencies
[root@haproxy01 ~]# yum -y install make gcc pcre-devel bzip2-devel openssl-devel systemd-devel
[root@haproxy02 ~]# yum -y install make gcc pcre-devel bzip2-devel openssl-devel systemd-devel
#Create a dedicated system user for haproxy
[root@haproxy01 ~]# useradd -r -M -s /sbin/nologin haproxy
[root@haproxy02 ~]# useradd -r -M -s /sbin/nologin haproxy
#Download the source tarball from the HAProxy site
#Perform the same steps on haproxy01 and haproxy02
[root@haproxy01 ~]# wget https://www.haproxy.org/download/2.7/src/haproxy-2.7.10.tar.gz
[root@haproxy01 ~]# ls
anaconda-ks.cfg haproxy-2.7.10.tar.gz
[root@haproxy01 ~]# tar xf haproxy-2.7.10.tar.gz
[root@haproxy01 ~]# cd haproxy-2.7.10
[root@haproxy01 haproxy-2.7.10]# make clean //remove artifacts from any previous build
[root@haproxy01 haproxy-2.7.10]# make -j $(nproc) TARGET=linux-glibc USE_OPENSSL=1 USE_ZLIB=1 USE_PCRE=1 USE_SYSTEMD=1
.......
CC src/version.o
CC dev/flags/flags.o
LD haproxy
LD dev/flags/flags
#Install to a dedicated prefix
[root@haproxy01 haproxy-2.7.10]# make install PREFIX=/usr/local/haproxy
[root@haproxy01 haproxy-2.7.10]# ls /usr/local/
bin etc games haproxy include lib lib64 libexec sbin share src
[root@haproxy01 haproxy-2.7.10]# ls /usr/local/haproxy/
doc sbin share
[root@haproxy01 haproxy-2.7.10]# cd /usr/local/haproxy/
[root@haproxy01 haproxy]# ls sbin/
haproxy
[root@haproxy01 haproxy]# file sbin/haproxy
sbin/haproxy: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=40be265b503d85a68364ac4ec3d2a9d27a1d63f9, with debug_info, not stripped, too many notes (256)
#Put the binary on the PATH with a symlink
[root@haproxy01 haproxy]# ln -s /usr/local/haproxy/sbin/* /usr/sbin/
[root@haproxy01 haproxy]# which haproxy
/usr/sbin/haproxy
[root@haproxy01 haproxy]# haproxy -v
HAProxy version 2.7.10-d796057 2023/08/09 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2024.
Known bugs: http://www.haproxy.org/bugs/bugs-2.7.10.html
Running on: Linux 4.18.0-193.el8.x86_64 #1 SMP Fri Mar 27 14:35:58 UTC 2020 x86_64
#Tune kernel parameters on each load balancer: net.ipv4.ip_nonlocal_bind lets a node bind sockets to the VIP even while it does not currently hold the address, and net.ipv4.ip_forward enables packet forwarding
[root@haproxy01 haproxy]# echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf
[root@haproxy01 haproxy]# echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
[root@haproxy01 haproxy]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
#Create the haproxy.service unit file
[root@haproxy01 ~]# vim /usr/lib/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target
[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
[Install]
WantedBy=multi-user.target
[root@haproxy01 ~]# systemctl daemon-reload
#Enable logging
Configure rsyslog to record HAProxy's logs. HAProxy sends them over UDP to 127.0.0.1, so rsyslog's UDP input (the imudp module, port 514) must also be enabled.
[root@haproxy01 ~]# vim /etc/rsyslog.conf
# Save boot messages also to boot.log
local7.* /var/log/boot.log
local0.* /var/log/haproxy.log //add this line
#Restart the logging service
[root@haproxy01 ~]# systemctl restart rsyslog.service
#Provide the configuration file
[root@haproxy01 ~]# mkdir /etc/haproxy
[root@haproxy01 ~]# vim /etc/haproxy/haproxy.cfg
[root@haproxy01 ~]# cat /etc/haproxy/haproxy.cfg
#-------------- global configuration ----------------
global
log 127.0.0.1 local0 info
#log loghost local0 info
maxconn 20480
#chroot /usr/local/haproxy
pidfile /var/run/haproxy.pid
#maxconn 4000
user haproxy
group haproxy
daemon
#---------------------------------------------------------------------
#common defaults that all the 'listen' and 'backend' sections will
#use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option dontlognull
option httpclose
option httplog
#option forwardfor
option redispatch
balance roundrobin
timeout connect 10s
timeout client 10s
timeout server 10s
timeout check 10s
maxconn 60000
retries 3
#-------------- stats page configuration ------------------
listen admin_stats
bind 0.0.0.0:8189
stats enable
mode http
log global
stats uri /haproxy_stats //URI of the stats page
stats realm Haproxy\ Statistics
stats auth admin:admin //stats page username and password; change as needed
#stats hide-version
stats admin if TRUE
stats refresh 30s
#--------------- web cluster settings -----------------------
listen webcluster
bind 0.0.0.0:80
mode http
#option httpchk GET /index.html
log global
maxconn 3000
balance roundrobin
cookie SESSION_COOKIE insert indirect nocache
server rs1 192.168.200.46:80 check inter 2000 fall 5 //backend web server
server rs2 192.168.200.47:80 check inter 2000 fall 5 //backend web server
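The `check inter 2000 fall 5` options above mean: probe the server every 2000 ms and mark it DOWN after 5 consecutive failed probes (a single success resets the count). A minimal sketch of that logic, purely for illustration (not HAProxy's implementation):

```python
def server_state(probe_results, fall=5):
    """Return "DOWN" once `fall` consecutive probes have failed, else "UP".
    probe_results: sequence of booleans, one per probe (True = success)."""
    consecutive_failures = 0
    for ok in probe_results:
        consecutive_failures = 0 if ok else consecutive_failures + 1
        if consecutive_failures >= fall:
            return "DOWN"
    return "UP"

print(server_state([False, False, False, False, True] * 2))  # UP: a success resets the count
print(server_state([True] + [False] * 5))                    # DOWN after 5 straight failures
```

Requiring several consecutive failures prevents one dropped probe from flapping the server out of rotation.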
#Restart haproxy and enable it at boot
[root@haproxy01 ~]# systemctl restart haproxy
[root@haproxy01 ~]# systemctl enable --now haproxy.service
Created symlink /etc/systemd/system/multi-user.target.wants/haproxy.service → /usr/lib/systemd/system/haproxy.service.
[root@haproxy01 ~]# systemctl status haproxy.service
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabl>
Active: active (running) since Tue 2023-10-10 21:41:07 CST; 21s ago
Process: 16679 ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q (code=ex>
Main PID: 16683 (haproxy)
#Check the listening ports
[root@haproxy01 ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 0.0.0.0:8189 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
Verify
Access via 192.168.200.43
Access via 192.168.200.44
2.3 Configure keepalived
#Install keepalived on both haproxy01 and haproxy02
[root@haproxy01 ~]# yum -y install keepalived
[root@haproxy01 ~]# rpm -ql keepalived
/etc/keepalived //configuration directory
/etc/keepalived/keepalived.conf //main configuration file
/usr/lib/systemd/system/keepalived.service //systemd unit file
#Configure haproxy01 as the keepalived master
[root@haproxy01 ~]# vim /etc/keepalived/keepalived.conf
[root@haproxy01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb01
}
vrrp_instance VI_1 {
state MASTER
interface ens160
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass wangqing
}
virtual_ipaddress {
192.168.200.100
}
}
virtual_server 192.168.200.100 80 { #virtual IP (VIP)
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.200.43 80 { #real server: haproxy node IP
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.200.44 80 { #real server: haproxy node IP
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
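Note that the configuration above fails over only when the whole node (or the keepalived process) dies; it does not notice when just the haproxy process dies. A commonly used addition, shown here only as a sketch and not part of this setup (it assumes psmisc's killall is installed), tracks the haproxy process and lowers the node's priority when the check fails, so the backup wins the next VRRP election:

```
vrrp_script chk_haproxy {
    script "/usr/bin/killall -0 haproxy"   # exits non-zero when haproxy is not running
    interval 2                             # check every 2 seconds
    weight -30                             # a failing check lowers priority by 30
}

vrrp_instance VI_1 {
    ...                                    # existing settings stay as they are
    track_script {
        chk_haproxy
    }
}
```

With priority 100 on the master and 90 on the backup, a -30 penalty drops the master to 70, below the backup, so the VIP moves.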
#Start the service
[root@haproxy01 ~]# systemctl enable --now keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
#Configure haproxy02 as the keepalived backup
[root@haproxy02 ~]# vim /etc/keepalived/keepalived.conf
[root@haproxy02 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb02
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass wangqing
}
virtual_ipaddress {
192.168.200.100
}
}
virtual_server 192.168.200.100 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.200.43 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.200.44 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
#Start the service
[root@haproxy02 ~]# systemctl enable --now keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
[root@haproxy01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:d2:ac:63 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.43/24 brd 192.168.200.255 scope global dynamic noprefixroute ens160
valid_lft 961sec preferred_lft 961sec
inet 192.168.200.100/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::897d:1419:4055:47ae/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@haproxy02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:46:04:56 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.44/24 brd 192.168.200.255 scope global dynamic noprefixroute ens160
valid_lft 950sec preferred_lft 950sec
inet6 fe80::db8c:a937:f734:84fb/64 scope link noprefixroute
valid_lft forever preferred_lft forever
#The VIP (192.168.200.100) is currently held by haproxy01
2.4 Access the page through the VIP
The VIP answers ping.
If the page does not load, it is because the backup node is interfering: its haproxy can also forward traffic, so requests cannot settle on a single node. Stop the haproxy service on the slave (haproxy02), then access the page again:
[root@haproxy02 ~]# systemctl disable --now haproxy
[root@haproxy02 ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
Starting the haproxy service on the slave again afterwards does not affect access either.