HAProxy for layer-7 load balancing; HAProxy combined with Pacemaker for high availability
Layer 4 only forwards traffic; layer 7 can apply content-based policy rules
Preparing the lab environment
## Stop the keepalived service on server1 and server4
[root@server4 keepalived]# systemctl stop keepalived
[root@server1 keepalived]# systemctl stop keepalived
## Remove the virtual IP (VIP), flush the arptables rules, and start the Apache service
[root@server2 ~]# arptables -F
[root@server2 ~]# ip addr del 172.25.15.100/32 dev eth0
[root@server2 ~]# systemctl start httpd
[root@server3 ~]# arptables -F
[root@server3 ~]# ip addr del 172.25.15.100/32 dev eth0
[root@server3 ~]# systemctl start httpd
Configuring and testing HAProxy
Installation and testing
## Install HAProxy and edit its configuration file
[root@server1 haproxy]# systemctl stop httpd
[root@server1 keepalived]# yum install haproxy -y
[root@server1 keepalived]# cd /etc/haproxy/
[root@server1 haproxy]# vim haproxy.cfg
[root@server1 haproxy]# systemctl restart haproxy
[root@server1 haproxy]# netstat -antlp
[root@foundation50 Desktop]# for i in {1..6};do curl 192.168.0.101;done
The two app backends are server2 and server3
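The notes do not show the contents of haproxy.cfg; a minimal sketch that matches the round-robin behaviour above (the backend name `app` and the server addresses are assumptions based on the output) is:

```haproxy
# Minimal sketch of /etc/haproxy/haproxy.cfg for this lab; the backend
# name "app" and the server addresses are assumptions.
frontend main
    bind *:80
    default_backend app

backend app
    balance roundrobin
    server app1 172.25.15.2:80 check
    server app2 172.25.15.3:80 check
```

With roundrobin, the six curl requests alternate between server2 and server3.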
Configuring where log messages are stored
[root@server1 haproxy]# vim /etc/rsyslog.conf
[root@server1 haproxy]# systemctl restart rsyslog
[root@server1 haproxy]# systemctl start haproxy.service
[root@foundation15 Desktop]# curl 172.25.15.1
[root@server1 haproxy]# cat /var/log/haproxy.log
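The rsyslog edit itself is not shown. A common sketch for /etc/rsyslog.conf, assuming HAProxy logs to the stock local2 facility over UDP:

```conf
# /etc/rsyslog.conf: uncomment the UDP input so rsyslog accepts
# syslog messages from HAProxy, then route the local2 facility
# (the facility used in the stock haproxy.cfg) into its own file.
$ModLoad imudp
$UDPServerRun 514

local2.*    /var/log/haproxy.log
```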
Setting an access password for the stats page
[root@server1 haproxy]# vim haproxy.cfg
stats auth admin:westos
[root@server1 haproxy]# systemctl reload haproxy.service
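In context, `stats auth` sits in a stats section of haproxy.cfg; a hedged sketch (the listen name, port, and URI are assumptions):

```haproxy
# Sketch of a password-protected stats page; visiting :8000/status
# now prompts for the admin/westos credentials set above.
listen stats
    bind *:8000
    stats enable
    stats uri /status
    stats auth admin:westos
    stats refresh 5s
```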
The source balancing mode
- Overview
- The Source Hashing Scheduling algorithm is the mirror image of destination-address hashing: it uses the request's source IP address as the hash key to look up the corresponding server in a statically allocated hash table. If that server is available and not overloaded, the request is sent to it; otherwise nothing is returned.
[root@server1 haproxy]# vim haproxy.cfg
[root@server1 haproxy]# systemctl reload haproxy.service
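The edit switches the balancing algorithm; a sketch of the changed backend (server names and addresses are assumptions):

```haproxy
# balance source: the client's source IP is hashed, so the same
# client is always sent to the same backend server.
backend app
    balance source
    server app1 172.25.15.2:80 check
    server app2 172.25.15.3:80 check
```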
Using default_backend and use_backend
- Configuration directives
- default_backend <backend>
- use_backend <backend> [if <condition>]
default_backend specifies the default backend used when no "use_backend" rule matches, so it cannot appear in a backend section. When switching content between a "frontend" and its backends, "use_backend" defines the matching rules; requests matched by no rule are received by the backend named by this directive.
default_backend app    ## the default backend
Requests matching a rule are directed to server2, while unmatched requests go to the default app backend on server3
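A hedged sketch of the content switching described above (the ACL condition and backend names are assumptions; only the default_backend/use_backend relationship is taken from the notes):

```haproxy
# Requests matching the ACL are switched to the "static" backend on
# server2; requests matching no rule fall through to default_backend.
frontend main
    bind *:80
    acl is_static path_beg /images
    use_backend static if is_static
    default_backend app

backend static
    server web1 172.25.15.2:80 check

backend app
    server web2 172.25.15.3:80 check
```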
backup
[root@server1 haproxy]# vim haproxy.cfg
[root@server1 haproxy]# systemctl reload haproxy.service
[root@server1 haproxy]# vim /etc/httpd/conf/httpd.conf
Listen 8080    ## move Apache on server1 to port 8080 so it does not conflict with HAProxy on port 80
[root@server1 haproxy]# systemctl start httpd
[root@server1 haproxy]# vim /var/www/html/index.html
server1 please try again later
[root@server3 haproxy]# systemctl stop httpd
[root@server3 haproxy]# systemctl start httpd
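The steps above put Apache on server1:8080 behind HAProxy as a fallback; a sketch of the backend (addresses are assumptions):

```haproxy
# A server marked "backup" receives traffic only when every regular
# server in the backend is down; here it serves the local
# "please try again later" page from Apache on port 8080.
backend app
    server app1 172.25.15.2:80 check
    server app2 172.25.15.3:80 check
    server local 127.0.0.1:8080 backup
```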
Blacklisting and automatic redirection
Blacklist the address 192.168.0.100
Automatically redirect to Baidu
redirect location sets the redirect target address (a redirection)
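A sketch of the blacklist and redirect rules (the frontend name is an assumption; the denied address and target URL follow the notes):

```haproxy
# Deny requests from the blacklisted address, or, using the commented
# alternative, redirect that client to Baidu instead.
frontend main
    bind *:80
    acl blacklist src 192.168.0.100
    http-request deny if blacklist
    # alternative to the deny rule:
    # redirect location http://www.baidu.com if blacklist
    default_backend app
```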
Access control
acl write method POST PUT
(Reads hit the default app backend on server2; writes go to server3, where the data is stored. server2 and server3 need the same files, implementing read/write separation)
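The read/write separation can be sketched as follows (backend names and addresses are assumptions; the write ACL follows the notes):

```haproxy
# POST/PUT requests (writes) are switched to server3; all other
# methods (reads) use the default app backend on server2.
frontend main
    bind *:80
    acl write method POST PUT
    use_backend app2 if write
    default_backend app

backend app
    server web1 172.25.15.2:80 check

backend app2
    server web2 172.25.15.3:80 check
```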
[root@server3 ~]# cd /var/www/html/
[root@server3 html]# ls
index.html upload
[root@server3 html]# cd upload/
[root@server3 upload]# ls
index.php upload_file.php
[root@server3 upload]# mv * ..
[root@server3 upload]# cd ..
[root@server3 html]# ls
[root@server3 html]# yum install -y php
[root@server3 html]# systemctl restart httpd
[root@server3 html]# vim upload_file.php
&& ($_FILES["file"]["size"] < 2000000))
[root@server3 html]# chmod 777 upload
[root@server3 html]# cd upload/
[root@server3 upload]# ls
After uploading a photo, the file does not appear here
192.168.0.1/index.php is served by server3 by default
[root@foundation15 html]# scp upload_file.php 172.25.15.2:/var/www/html/
[root@server2 html]# ls
index.html upload_file.php
[root@server2 html]# yum install -y php
[root@server2 html]# vim upload_file.php
&& ($_FILES["file"]["size"] < 2000000))
[root@server2 html]# systemctl restart httpd
[root@server2 html]# mkdir upload
[root@server2 upload]# chmod 777 /var/www/html/upload
Upload succeeded
[root@server2 upload]# ls
vim.jpg
192.168.0.1/index.php: HAProxy directs the request to server2
HAProxy + Pacemaker high availability
Building the Pacemaker cluster
Pre-lab Pacemaker configuration
[root@server1 haproxy]# cd /etc/yum.repos.d/
[root@server1 yum.repos.d]# ls
redhat.repo westos.repo
[root@server1 yum.repos.d]# vim westos.repo
baseurl=http://192.168.0.100/rhel7.6/addons/HighAvailability
## Installation
[root@server1 yum.repos.d]# scp westos.repo server4:/etc/yum.repos.d/
[root@server1 yum.repos.d]# yum install -y pacemaker pcs psmisc policycoreutils-python.x86_64
[root@server1 yum.repos.d]# ssh server4 yum install -y pacemaker pcs psmisc policycoreutils-python.x86_64
## Enable and start the pcsd service on server1 and server4
[root@server1 yum.repos.d]# systemctl enable --now pcsd.service
[root@server1 yum.repos.d]# ssh server4 systemctl enable --now pcsd.service
## Installation automatically creates a user named hacluster; set a password for it.
[root@server1 ~]# echo westos | passwd --stdin hacluster
[root@server1 ~]# ssh server4 "echo westos | passwd --stdin hacluster"
## Authenticate server1 and server4 for cluster administration
[root@server1 ~]# pcs cluster auth server1 server4
Username: hacluster    Password: westos
Create a cluster named mycluster, generating and syncing the configuration to the two nodes server1 and server4
[root@server1 ~]# pcs cluster setup --name mycluster server1 server4
[root@server1 ~]# pcs cluster start --all ## start the cluster services on all nodes
[root@server1 ~]# pcs cluster enable --all ## enable all nodes to start the cluster at boot
[root@server1 ~]# pcs status ## check the cluster status
[root@server1 ~]# crm_verify -LV ## verify the configuration for errors
[root@server1 haproxy]# pcs resource standards ## list the available resource standards (script classes)
[root@server1 haproxy]# pcs status corosync
[root@server1 haproxy]# pcs resource providers ## list the resource providers
[root@server1 haproxy]# pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.15.100 op monitor interval=30s ## create a resource named vip, monitored every 30s
[root@server1 haproxy]# pcs property set stonith-enabled=false ## no fencing device yet, so disable stonith
[root@server1 haproxy]# pcs status
[root@server1 haproxy]# ip addr
Stopping a specific cluster node
[root@server1 haproxy]# pcs cluster stop server1
[root@server4 haproxy]# pcs status
[root@server1 haproxy]# pcs cluster start server1
[root@server1 haproxy]# pcs status
The resources do not fail back; there are no fixed master/backup roles
Disable the haproxy service
Configuring Pacemaker to start haproxy automatically
[root@server4 ~]# yum install -y haproxy
[root@server1 haproxy]# systemctl disable --now haproxy
[root@server1 haproxy]# cd /etc/haproxy/
[root@server1 haproxy]# scp haproxy.cfg server4:/etc/haproxy/
[root@server4 ~]# systemctl start haproxy.service
[root@foundation15 mnt]# curl 172.25.15.4
server3
[root@server4 ~]# systemctl stop haproxy.service
[root@server1 haproxy]# pcs resource standards
[root@server1 haproxy]# systemctl status haproxy.service
dead
[root@server1 haproxy]# pcs resource create haproxy systemd:haproxy op monitor interval=30s
[root@server1 haproxy]# pcs status
The two resources are running on different nodes; group them so they stay on the same node
[root@server1 haproxy]# pcs resource group add hagroup vip haproxy
[root@foundation15 mnt]# curl 172.25.15.100
server3
[root@foundation15 mnt]# curl 172.25.15.100
server3
Using a standby node
[root@server4 haproxy]# pcs node standby
[root@server1 haproxy]# pcs status
server1
[root@server4 ~]# pcs node unstandby
[root@server1 haproxy]# pcs status
server1
After switching server4 to standby, its status becomes standby and the resources move to server1; the takeover succeeds.
When all nodes come back online, the resources do not move back.
Delete the VIP and stop the haproxy service
[root@server1 haproxy]# systemctl stop haproxy
[root@server1 haproxy]# pcs status
[root@server1 haproxy]# ip addr del 192.168.0.101/24 dev eth0
[root@server1 haproxy]# pcs status
Node server1 recovers, and the VIP is automatically re-added.
Taking down the NIC
Configuring stonith (a fencing device)
If a NIC or the kernel fails, no one is there to restart the NIC or reboot the machine by hand, so stonith is needed.
stonith works like a power switchboard: it can be told which port to power off (cutting power immediately) and which to power on, providing remote, automatic power control.
[root@foundation15 mnt]# rpm -qa | grep fence
fence-virtd-0.4.0-4.el8.x86_64
libxshmfence-1.3-2.el8.x86_64
fence-virtd-libvirt-0.4.0-4.el8.x86_64
fence-virtd-multicast-0.4.0-4.el8.x86_64
[root@foundation15 cluster]# fence_virtd -c
Accept the defaults for all other prompts
[root@foundation15 cluster]# dd if=/dev/urandom of=fence_xvm.key bs=128 count=1
[root@foundation15 cluster]# systemctl restart fence_virtd
[root@foundation15 cluster]# systemctl status fence_virtd
[root@foundation15 cluster]# ll fence_xvm.key
-rw-r--r--. 1 root root 128 Jan 10 15:59 fence_xvm.key
[root@foundation15 cluster]# netstat -anulp | grep :1229
udp 0 0 0.0.0.0:1229 0.0.0.0:*
[root@server1 ~]# mkdir /etc/cluster
[root@server4 ~]# mkdir /etc/cluster
[root@foundation15 cluster]# scp fence_xvm.key root@172.25.15.1:/etc/cluster
[root@foundation15 cluster]# scp fence_xvm.key root@172.25.15.4:/etc/cluster
[root@server4 cluster]# yum install fence-virt.x86_64 -y
[root@server1 ~]# yum install fence-virt.x86_64 -y
[root@server1 ~]# stonith_admin -I
fence_xvm
fence_virt
2 devices found
[root@server4 ~]# cd /etc/cluster
[root@server4 cluster]# ls
fence_xvm.key
## Map cluster host names to VM (libvirt domain) names
[root@server1 ~]# pcs stonith create vmfence fence_xvm pcmk_host_map="server1:vm1;server4:vm4" op monitor interval=60s
[root@server1 ~]# pcs property set stonith-enabled=true
[root@server1 ~]# crm_verify -LV ## verify; no errors reported
Verifying the effect of the configured stonith
- A crashed kernel now triggers an automatic reboot.
- Note: if the stonith device keeps stopping, the cause may be the firewall or SELinux settings on the host machine.
- You can disable the firewall and set SELinux to disabled in its configuration file.
Crash the kernel on server1
[root@server1 haproxy]# echo c > /proc/sysrq-trigger
server1 reboots automatically
Nginx load balancing
Minimal nginx installation
[root@server1 ~]# pcs cluster stop --all
[root@server1 ~]# pcs cluster disable --all
[root@server1 ~]# tar zxf nginx-1.18.0.tar.gz
[root@server1 ~]# cd nginx-1.18.0
[root@server1 nginx-1.18.0]# yum install -y gcc
[root@server1 nginx-1.18.0]# yum install -y pcre-devel
[root@server1 nginx-1.18.0]# yum install -y openssl-devel
[root@server1 nginx-1.18.0]# cd auto/
[root@server1 auto]# cd cc/
[root@server1 cc]# vim gcc
#CFLAGS="$CFLAGS -g"
[root@server1 cc]# cd ..
[root@server1 auto]# cd ..
[root@server1 nginx-1.18.0]# pwd
/root/nginx-1.18.0
[root@server1 nginx-1.18.0]# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
[root@server1 nginx-1.18.0]# make
[root@server1 nginx-1.18.0]# make install
[root@server1 nginx-1.18.0]# cd /usr/local/nginx/
[root@server1 nginx]# ls
conf html logs sbin
[root@server1 nginx]# du -sh
976K .
[root@server1 sbin]# pwd
/usr/local/nginx/sbin
[root@server1 sbin]# cd
[root@server1 ~]# vim .bash_profile
PATH=$PATH:$HOME/bin:/usr/local/nginx/sbin
[root@server1 ~]# source .bash_profile
[root@server1 ~]# which nginx
/usr/local/nginx/sbin/nginx
Starting nginx and editing the configuration file
[root@server1 ~]# nginx
[root@server1 ~]# netstat -antlp
[root@server1 ~]# curl localhost -I
[root@server1 ~]# cd /usr/local/nginx/conf/
[root@server1 conf]# vim nginx.conf
[root@server1 conf]# nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@server1 conf]# nginx -s reload
[root@server1 conf]# curl 192.168.0.1
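The nginx.conf edit is not shown; a minimal load-balancing sketch (the upstream name, server_name, and backend addresses are assumptions) is:

```nginx
# Inside the http {} block of /usr/local/nginx/conf/nginx.conf:
# an upstream group of the two backends, proxied by a virtual server.
upstream westos {
    server 172.25.15.2:80;
    server 172.25.15.3:80;
}

server {
    listen 80;
    server_name demo.westos.org;
    location / {
        proxy_pass http://westos;
    }
}
```

After nginx -s reload, repeated curls against server1 should alternate between the two backends.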