HAProxy Load Balancing and High Availability
I. HAProxy basics
- HAProxy is free, open-source software written in C that provides high availability, load balancing, and proxying for TCP- and HTTP-based applications.
- HAProxy is particularly well suited to very high-load web sites that need session persistence or layer-7 processing. Running on current hardware it can easily sustain tens of thousands of concurrent connections, its operating model makes it simple and safe to integrate into an existing architecture, and it can keep your web servers from being exposed directly to the network.
II. HAProxy load balancing and related configuration
!!! Note: after changing the HAProxy configuration file, always reload the service rather than restart it; a restart drops existing connections.
1. Load balancing
Install haproxy on server1:
yum install -y haproxy
Raise the maximum number of open files for the haproxy user (the default limit is 1024). This is set in the PAM limits configuration, not a kernel file:
vim /etc/security/limits.conf
haproxy         -       nofile          4096
Edit the main HAProxy configuration file to set up load balancing, then start the service:
vim /etc/haproxy/haproxy.cfg
systemctl start haproxy.service
haproxy.cfg:
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*                      /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
    stats uri               /status

#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend  main *:80
#    acl url_static       path_beg       -i /static /images /javascript /stylesheets
#    acl url_static       path_end       -i .jpg .gif .png .css .js
#    use_backend static          if url_static
    default_backend             app

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
#backend static
#    balance     roundrobin
#    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance     roundrobin
    server  app1 172.25.3.2:80 check
    server  app2 172.25.3.3:80 check
Install Apache on server2 and server3 and start the service:
yum install -y httpd
systemctl start httpd.service
Test the load balancing:
View the stats page:
http://172.25.3.1/status
If server2 or server3 goes down, the stats page flags the failed server as down (shown in red):
2. Access authentication
Password-protect the /status page, then reload the service:
vim /etc/haproxy/haproxy.cfg
systemctl reload haproxy.service
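One way to protect the page is HAProxy's built-in stats authentication. A minimal sketch (the admin/westos credentials below are placeholders, not from the original file):

```
# in the defaults (or a dedicated listen) section of /etc/haproxy/haproxy.cfg
stats uri     /status
stats auth    admin:westos     # placeholder username:password
stats refresh 5s               # optional: auto-refresh the page
```

The browser will then prompt for the username and password before displaying /status.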
Test:
3. Matching specific requests
Set up the matching rules.
When the request path begins with /images and ends with .jpg, hand the request to the backend static block, whose servers then serve it.
vim /etc/haproxy/haproxy.cfg
systemctl reload haproxy.service
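A possible version of the rules, obtained by uncommenting and trimming the stock acl lines; pointing the static backend at server3 (172.25.3.3) is an assumption based on the test below:

```
frontend  main *:80
    acl url_static       path_beg       -i /images
    acl url_static       path_end       -i .jpg
    use_backend static          if url_static
    default_backend             app

backend static
    balance     roundrobin
    server      static 172.25.3.3:80 check
```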
Create images/1.png under the Apache document root on server3.
Test:
A request in the wrong format:
4. Load-balancing weights
Edit the main configuration file and give server2 a weight of 2, so that out of every three requests two go to server2 and one to server3:
vim /etc/haproxy/haproxy.cfg
systemctl reload haproxy.service
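A sketch of the weighted backend (weight defaults to 1, so only server2 needs an explicit setting):

```
backend app
    balance     roundrobin
    server  app1 172.25.3.2:80 check weight 2
    server  app2 172.25.3.3:80 check weight 1
```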
5. Configuring a backup server
Mechanism: only when every active server is down does the backup server take over.
vim /etc/haproxy/haproxy.cfg
systemctl reload haproxy.service
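A possible configuration: mark an extra server with the backup keyword so it receives traffic only when all regular servers are down. Using a local httpd on port 8080 of server1 as the backup is an assumption; any spare web server would do:

```
backend app
    balance     roundrobin
    server  app1 172.25.3.2:80 check
    server  app2 172.25.3.3:80 check
    server  backup 127.0.0.1:8080 backup   # assumed local maintenance page
```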
Stop httpd on server2 and server3:
systemctl stop httpd
Test: the response now comes from the backup server:
6. Enabling HAProxy logging
Reload, then check the log:
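The stock config already sends logs to the local2 facility on 127.0.0.1 (see the comments in its global section), so what remains is telling rsyslog to accept UDP syslog and route local2 to a file. A sketch for /etc/rsyslog.conf on RHEL7 (the stock comments refer to the older /etc/sysconfig/syslog):

```
# /etc/rsyslog.conf -- accept UDP syslog messages
$ModLoad imudp
$UDPServerRun 514

# route haproxy's local2 facility to its own file
local2.*    /var/log/haproxy.log
```

Restart rsyslog (systemctl restart rsyslog) before reloading haproxy.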
7. Access blacklist
Edit the main configuration file:
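A possible deny-and-redirect combination; the blocked address and the redirect target are examples, not taken from the original file:

```
frontend  main *:80
    acl blacklist src 172.25.3.250          # example client address to block
    http-request deny if blacklist
    errorloc 403 http://172.25.3.1:8080     # send denied clients elsewhere (example URL)
    default_backend             app
```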
Test:
The blacklisted client is denied and automatically redirected:
8. Access redirection
Redirect requests to a specified location:
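A minimal sketch using HAProxy's redirect directive, reusing a source ACL like the one from the blacklist step; both the matched address and the target URL are placeholders:

```
frontend  main *:80
    acl blacklist src 172.25.3.250                       # example source to redirect
    redirect location http://www.example.com if blacklist
    default_backend             app
```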
Test:
9. Routing .php requests
Install PHP on server3:
yum install -y php
vim /var/www/html/index.php
Configure the matching rule in HAProxy:
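A possible rule, sending .php requests to a backend containing server3 (where PHP is installed); the names follow the style of the stock file but are assumptions:

```
frontend  main *:80
    acl url_php          path_end       -i .php
    use_backend static          if url_php
    default_backend             app

backend static
    balance     roundrobin
    server      static 172.25.3.3:80 check
```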
Test:
10. Read/write separation
Install PHP on server2 and server3:
yum install -y php
Write the PHP pages:
vim index.html    # provides the file-chooser and upload button
<html>
<body>
<form action="upload_file.php" method="post" enctype="multipart/form-data">
  <label for="file">Filename:</label>
  <input type="file" name="file" id="file" />
  <br />
  <input type="submit" name="submit" value="Submit" />
</form>
</body>
</html>
vim upload_file.php    # saves the uploaded image into the upload directory
<?php
if ((($_FILES["file"]["type"] == "image/gif")
    || ($_FILES["file"]["type"] == "image/jpeg")
    || ($_FILES["file"]["type"] == "image/pjpeg"))
    && ($_FILES["file"]["size"] < 20000000))
{
    if ($_FILES["file"]["error"] > 0)
    {
        echo "Return Code: " . $_FILES["file"]["error"] . "<br />";
    }
    else
    {
        echo "Upload: " . $_FILES["file"]["name"] . "<br />";
        echo "Type: " . $_FILES["file"]["type"] . "<br />";
        echo "Size: " . ($_FILES["file"]["size"] / 1024) . " Kb<br />";
        echo "Temp file: " . $_FILES["file"]["tmp_name"] . "<br />";
        if (file_exists("upload/" . $_FILES["file"]["name"]))
        {
            echo $_FILES["file"]["name"] . " already exists. ";
        }
        else
        {
            move_uploaded_file($_FILES["file"]["tmp_name"],
                "upload/" . $_FILES["file"]["name"]);
            echo "Stored in: " . "upload/" . $_FILES["file"]["name"];
        }
    }
}
else
{
    echo "Invalid file";
}
?>
Create the upload directory to store the images:
mkdir upload
chmod 777 upload
Read/write separation: reads are served by server3; when a request involves writing data, it is switched to server2.
Add the write-matching rule to the main HAProxy configuration file and reload the service:
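A sketch of a method-based split: GET/HEAD (reads) stay on the default backend holding server3, while POST/PUT (writes, such as the upload form) switch to the backend holding server2. The backend names follow the stock file; the exact layout is an assumption:

```
frontend  main *:80
    acl read   method GET
    acl read   method HEAD
    acl write  method PUT
    acl write  method POST
    use_backend static          if write
    default_backend             app

backend app
    balance     roundrobin
    server  app2 172.25.3.3:80 check      # reads -> server3

backend static
    balance     roundrobin
    server  app1 172.25.3.2:80 check      # writes -> server2
```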
Test:
Visit the page.
Check the log; at this point requests go to app2:
Select an image and upload it:
The log now shows the request switched to server2 in the static backend.
Check server2: the file has been uploaded, so read/write separation works:
III. Managing the HAProxy cluster with Pacemaker
Set up passwordless SSH from server1 to server4:
ssh-keygen
ssh-copy-id server4
ping 172.25.3.4
ssh server4
On server1 and server4, install the Pacemaker packages and enable pcsd at boot:
yum install -y pacemaker pcs psmisc policycoreutils-python
scp /etc/yum.repos.d/dvd.repo server4:/etc/yum.repos.d/dvd.repo
ssh server4 yum install -y pacemaker pcs psmisc policycoreutils-python
systemctl enable --now pcsd.service
ssh server4 systemctl enable --now pcsd.service
Set the hacluster password on both nodes:
echo westos | passwd --stdin hacluster
ssh server4 'echo westos| passwd --stdin hacluster'
Register the cluster (authenticate as hacluster):
pcs cluster auth server1 server4
pcs cluster setup --name mycluster server1 server4
Start the cluster and enable it at boot:
pcs cluster start --all
pcs status
pcs cluster enable --all
Disable STONITH for now:
pcs property set stonith-enabled=false
pcs status
Install haproxy on server4 and copy over the configuration:
ssh server4 'yum install -y haproxy'
scp haproxy.cfg server4:/etc/haproxy/
Configure the cluster-managed VIP:
pcs resource describe ocf:heartbeat:IPaddr2
pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.3.100 op monitor interval=30s
pcs status
ip a
Disable standalone haproxy on both nodes; the cluster will manage it:
systemctl disable --now haproxy
ssh server4 systemctl disable --now haproxy
netstat -antlp |grep :80
Put haproxy under cluster management:
pcs resource create haproxy systemd:haproxy op monitor interval=60s
pcs status
Create a group containing vip and haproxy so that both resources always run on the same node:
pcs resource group add hagroup vip haproxy
pcs status
Test
Put a node into standby, then bring it back:
pcs node standby
pcs node unstandby
pcs status
Stop haproxy and watch the cluster report the failure:
systemctl stop haproxy.service
pcs status
systemctl start haproxy.service
IV. Automatic node recovery with fencing
Install the fence components on the host machine:
yum install -y fence-virtd-*
mkdir /etc/cluster
fence_virtd -c
dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
systemctl restart fence_virtd.service
netstat -anulp|grep :1229
Accept the defaults in fence_virtd -c, except for the interface, which should be br0.
Resulting configuration:
Copy the key file to server1 and server4 (the /etc/cluster directory must already exist on both):
scp /etc/cluster/fence_xvm.key root@172.25.3.1:/etc/cluster
scp /etc/cluster/fence_xvm.key root@172.25.3.4:/etc/cluster
Configure on server1:
Install the fence client on both nodes:
yum install fence-virt -y
ssh server4 yum install -y fence-virt
Add the fence resource, mapping cluster hostnames to VM domain names:
pcs stonith create vmfence fence_xvm pcmk_host_map="server1:v1;server4:v4" op monitor interval=60s
pcs status
pcs property set stonith-enabled=true #re-enable STONITH, otherwise fencing has no effect
crm_verify -LV #check the configuration for errors
pcs status
Test: take server4's network interface down (run on server4):
ip link set eth0 down
pcs status
server4 is fenced and reboots automatically:
The resources fail over while server4 is rebooting:
Once server4 finishes rebooting it rejoins the cluster; recovery is complete.