Haproxy + pacemaker + fence
Overview
- haproxy: load balancing
- pacemaker: cluster resource management
- fence: node fencing, for high availability
References
haproxy reference documentation: http://cbonte.github.io/haproxy-dconv/
pacemaker official documentation: https://clusterlabs.org/pacemaker/doc/
Lab setup
Host | IP | Role
---|---|---
server1 | 172.25.9.1/24 | haproxy, pacemaker, fence-virt
server4 | 172.25.9.4/24 | haproxy, pacemaker, fence-virt
server5 | 172.25.9.250/24 | fence-virtd
server2 | 172.25.9.2/24 | httpd
server3 | 172.25.9.3/24 | httpd
Deploying server2 and server3
# on server2 and server3:
yum install -y httpd
# on server2
echo server2 > /var/www/html/index.html
# on server3
echo server3 > /var/www/html/index.html
# on server2 and server3
systemctl enable --now httpd
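A quick sanity check (assuming the lab network is reachable from wherever you run it): each backend should return its own page.
curl http://172.25.9.2   # expected: server2
curl http://172.25.9.3   # expected: server3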
Deploying haproxy on server1 and server4
# on server1 and server4
# install and configure haproxy
yum install -y haproxy
## for the configuration, see the "haproxy configuration file" block below
vim /etc/haproxy/haproxy.cfg
systemctl start haproxy
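A quick check that haproxy actually bound port 80 (this fails if something else, such as a local httpd, already listens on 80 — which is why httpd on server1 is moved to 8080 below):
ss -tlnp | grep haproxy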
# haproxy logging
vim /etc/rsyslog.conf
……
local2.* /var/log/haproxy.log
……
systemctl restart rsyslog
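Note that haproxy runs chrooted and sends its log messages to 127.0.0.1 over UDP (the "log 127.0.0.1 local2" line in the global section), so rsyslog must also accept UDP input. On a stock RHEL 7 /etc/rsyslog.conf this means uncommenting two lines in the same file:
$ModLoad imudp
$UDPServerRun 514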
# server1 also serves as a fallback backend, used only when both server2 and server3 are down.
yum install -y httpd
vim /etc/httpd/conf/httpd.conf
……
Listen 8080
……
systemctl enable --now httpd
# the load-balancer status page can be viewed in a browser at:
172.25.9.1/status
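With haproxy running, round-robin scheduling can be verified from a client: repeated requests to the frontend should alternate between the two backends.
for i in 1 2 3 4; do curl -s http://172.25.9.1; done
# expected output alternates between server2 and server3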
haproxy configuration file
# haproxy configuration file
vim /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
# stats page URI and login credentials
stats uri /status
stats auth admin:westos
#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
# listen on port 80
frontend main *:80
# ACLs: classify requests for static content
acl url_static path_beg -i /static /images /javascript /stylesheets
acl url_static path_end -i .jpg .gif .png .css .js
acl blacklist src 172.25.9.0/24
# match requests for the file /images/1.jpg
acl denyjpg path /images/1.jpg
acl write method PUT
acl write method POST HEAD
#tcp-request content accept if blacklist
#tcp-request content reject if blacklist
#block if blacklist
#errorloc 403 http://www.baidu.com
#redirect location http://www.baidu.com if blacklist
#http-request deny if denyjpg blacklist
#use_backend static if url_static
#use_backend static if write
default_backend app
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
# if a request matches url_static, dispatch it to server3
#backend static
# balance roundrobin
# server app1 172.25.9.3:80 check
#
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
balance roundrobin
server app1 172.25.9.3:80 check
server app2 172.25.9.2:80 check
server backup 127.0.0.1:8080 backup
#balance source
#server app1 172.25.9.2:80 check
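After any edit, the file can be syntax-checked before restarting; haproxy's -c flag validates a configuration without starting the proxy.
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl restart haproxy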
Deploying pacemaker on server1 and server4
# on server1 and server4
# configure the yum repositories
vim /etc/yum.repos.d/dvd.repo
[dvd]
name=rhel7.6
baseurl=http://172.25.9.250/rhel7.6
gpgcheck=0
[HighAvailability]
name=rhel7.6
baseurl=http://172.25.9.250/rhel7.6/addons/HighAvailability/
gpgcheck=0
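A quick verification that both repositories resolve:
yum repolist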
# install pacemaker and its dependencies
yum install -y pacemaker pcs psmisc policycoreutils-python
# start pcsd; stop the locally managed haproxy, since pacemaker will manage it from now on
systemctl enable --now pcsd
systemctl stop haproxy
# set a password for the hacluster user (the same on both nodes)
echo westos | passwd --stdin hacluster
# on server1 only
# authenticate the nodes; enter the hacluster password set above
pcs cluster auth server1 server4
# create the cluster
pcs cluster setup --name mycluster server1 server4
# start and enable the cluster on all nodes
pcs cluster start --all
pcs cluster enable --all
# verify the configuration; the usual error at this point relates to stonith-enabled=true
crm_verify -LV
# disable stonith for now (no fence device is configured yet)
pcs property set stonith-enabled=false
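With stonith disabled, the verification should now pass silently:
crm_verify -LV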
# create the VIP resource
pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.9.100 cidr_netmask=24 op monitor interval=30s
# create the haproxy resource (managed via its systemd unit)
pcs resource create haproxy systemd:haproxy op monitor interval=30s
# group vip and haproxy so they always run together on the same node
pcs resource group add hagroup vip haproxy
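At this point the group should be running on one node; pcs status shows where, and the VIP should be visible on that node. A simple failover test is to put the active node into standby and watch the group move (a sketch, assuming server1 is currently active):
pcs status
ip addr | grep 172.25.9.100
pcs cluster standby server1
pcs status
pcs cluster unstandby server1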
fence configuration
# on server5 (the fence_virtd host)
yum install -y fence-virtd fence-virtd-libvirt fence-virtd-multicast
# directory where the shared key is stored
mkdir /etc/cluster
# interactive configuration (see the "fence_virtd -c on server5" block below for the answers)
fence_virtd -c
# generate a 128-byte random key
cd /etc/cluster
dd if=/dev/urandom of=fence_xvm.key bs=128 count=1
systemctl restart fence_virtd
# verify that fence_virtd is listening on port 1229
netstat -anlup | grep :1229
# copy the key to the cluster nodes
ssh server1 mkdir /etc/cluster
ssh server4 mkdir /etc/cluster
scp /etc/cluster/fence_xvm.key root@server1:/etc/cluster
scp /etc/cluster/fence_xvm.key root@server4:/etc/cluster
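The key must be byte-identical on server5 and both cluster nodes; comparing checksums is a quick way to confirm:
md5sum /etc/cluster/fence_xvm.key
ssh server1 md5sum /etc/cluster/fence_xvm.key
ssh server4 md5sum /etc/cluster/fence_xvm.key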
# on server1 and server4: fence agent configuration
yum install -y fence-virt
# list the available fence agents and inspect the fence_xvm agent
stonith_admin -I
stonith_admin -M -a fence_xvm
pcs stonith describe fence_xvm
# create the stonith resource; pcmk_host_map maps cluster hostnames to VM names
pcs stonith create vmfence fence_xvm pcmk_host_map="server1:lvs1;server4:lvs4" op monitor interval=60s
# re-enable fencing in the cluster
pcs property set stonith-enabled=true
# check the cluster resource status
pcs status
# Crash test: run this on the node currently holding the hagroup group. The group
# migrates to the other node and the crashed node is fenced (rebooted); once it
# rejoins the cluster, the vmfence resource moves over to it.
echo c > /proc/sysrq-trigger
# manual fencing: from server4 you can power-cycle server1 (VM name lvs1)
fence_xvm -H lvs1
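Before relying on fencing, it is worth confirming that each node can actually reach fence_virtd; "fence_xvm -o list" queries the listener and prints the domains it can fence (it should include lvs1 and lvs4, the VM names used in pcmk_host_map):
fence_xvm -o list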
fence_virtd -c on server5
Module search path [/usr/lib64/fence-virt/]:
Available backends:
libvirt 0.3
Available listeners:
vsock 0.1
multicast 1.2
Listener modules are responsible for accepting requests
from fencing clients.
Listener module [multicast]:
The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.
The multicast address is the address that a client will use to
send fencing requests to fence_virtd.
Multicast IP Address [225.0.0.12]:
Using ipv4 as family.
Multicast IP Port [1229]:
Setting a preferred interface causes fence_virtd to listen only
on that interface. Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.
Interface [br0]:
The key file is the shared key information which is used to
authenticate fencing requests. The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.
Key File [/etc/cluster/fence_xvm.key]:
Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.
Backend module [libvirt]:
The libvirt backend module is designed for single desktops or
servers. Do not use in environments where virtual machines
may be migrated between hosts.
Libvirt URI [qemu:///system]:
Configuration complete.
=== Begin Configuration ===
backends {
libvirt {
uri = "qemu:///system";
}
}
listeners {
multicast {
port = "1229";
family = "ipv4";
interface = "br0";
address = "225.0.0.12";
key_file = "/etc/cluster/fence_xvm.key";
}
}
fence_virtd {
module_path = "/usr/lib64/fence-virt/";
backend = "libvirt";
listener = "multicast";
}
=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y