LVS has four working modes (these notes run under DR mode): principle: layer-4 scheduling; the LVS NAT mode supports port forwarding.
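(The four modes are commonly listed as NAT, DR, TUN and FULLNAT.) As a reminder of what layer-4 scheduling looks like, a minimal DR-mode sketch with ipvsadm, assuming a hypothetical VIP of 172.25.9.100:
- ipvsadm -A -t 172.25.9.100:80 -s rr ## add the virtual service, round-robin scheduler
- ipvsadm -a -t 172.25.9.100:80 -r 172.25.9.2:80 -g ## -g = gatewaying, i.e. DR mode
- ipvsadm -a -t 172.25.9.100:80 -r 172.25.9.3:80 -g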
- Load balancing (layer-7 scheduling): haproxy
  - built-in health checks
  - supports port forwarding (a non-80 port can be used)
Configuration file
server1 (the LB):
- yum install haproxy -y
- cd /etc/haproxy/
- vim haproxy.cfg
frontend main *:80
# acl url_static path_beg -i /static /images /javascript /stylesheets
# acl url_static path_end -i .jpg .gif .png .css .js
# use_backend static if url_static
default_backend app
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
#backend static
# balance roundrobin
# server static 172.25.9.2:80 check
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
balance roundrobin
server app1 172.25.9.2:80 check
server app2 172.25.9.3:80 check
- systemctl restart haproxy
- netstat -antlp
  (if the command does not exist:
  yum search netstat
  yum install -y net-tools)
server2, server3:
- systemctl start httpd ## starting the http service is enough
host:
- curl 172.25.9.1
haproxy runs health checks on its own.
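A quick way to watch the roundrobin from the host, assuming each backend's index page names its server:
- for i in 1 2 3 4; do curl -s 172.25.9.1; done ## alternates server2/server3 while both are up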
server3:
- systemctl stop httpd ## with httpd stopped, clients are only ever sent to server2's page; no error page appears
host:
- curl 172.25.9.1 ## only server2's page comes back
Logging to a dedicated file
server1: ## edit the main configuration file
- vim haproxy.cfg
listen stats *:8000 ## stats listener on port 8000
    stats uri /status
    stats auth admin:westos
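The stats page can then be opened in a browser at http://172.25.9.1:8000/status, logging in with admin/westos as configured above.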
- vim /etc/rsyslog.conf ## log configuration
$ModLoad imudp
$UDPServerRun 514
*.info;mail.none;authpriv.none;cron.none;local2.none    /var/log/messages ## keep local2 out of messages
local2.*    /var/log/haproxy.log ## send local2 logs to the dedicated file
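For the logs to arrive at all, haproxy must send them to the local2 facility; the stock RHEL haproxy.cfg already does this in its global section:
global
    log 127.0.0.1 local2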
- systemctl restart haproxy
- systemctl restart rsyslog ## after the restart, the log file is created automatically
- netstat -antlp ## check the ports: both 80 and 8000 are listening
- cd /var/log ## the log file records each request in detail
- cat haproxy.log
Maximum connection count: 65535 concurrent
kernel > system > app : 65535 (a rule to respect whenever limits are set: each layer's limit must not exceed the layer above it)
Adjusting operating-system limits (something to consider for any service)
- soft and hard limits are set to the same value here
  soft limit: exceeding it only triggers a warning
  hard limit: cannot be exceeded
- ulimit -a
- sysctl -a | grep file
- ps aux ## inspect processes
- vim /etc/security/limits.conf ## raise the open-file limit for haproxy
- tail -n 3 /etc/security/limits.conf
# End of file
haproxy    -    nofile    65535
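A sketch of inspecting all three layers of the kernel > system > app chain; the application-level cap is the maxconn value in haproxy.cfg:
- sysctl fs.file-max ## kernel ceiling
- ulimit -n ## per-process (system) limit for the current shell
- grep maxconn /etc/haproxy/haproxy.cfg ## application limit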
Scheduling algorithms
server1:
source ## hashes the client source IP, so requests from the same client always land on the same backend
backend app
# balance roundrobin
balance source
server app1 172.25.9.2:80 check
systemctl restart haproxy
host:
curl 172.25.9.1
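With balance source, the repeated-curl test from earlier now returns the same backend every time instead of alternating:
- for i in 1 2 3 4; do curl -s 172.25.9.1; done ## same page on every request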
Simple access control (not possible with LVS, which schedules below layer 7; a layer-7 proxy can do it)
frontend main *:80
acl url_static path_beg -i /static /images /javascript /stylesheets
acl url_static path_end -i .jpg .gif .png .css .js
Blacklist control
acl blacklist src 172.25.254.9
block if blacklist
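Note: newer haproxy releases deprecate the block directive; the equivalent modern form is:
http-request deny if blacklist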
Error redirection
acl blacklist src 172.25.254.9
block if blacklist
errorloc 403 http://172.25.9.2
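errorloc turns the 403 into an HTTP redirect, so from the blacklisted host the effect can be checked with:
- curl -I 172.25.9.1 ## expect a 302 with Location: http://172.25.9.2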
(LVS, by contrast, can only spread traffic evenly; it has no layer-7 controls like these.)
Read/write splitting
install php (on the write-side backend)
server1:
vim haproxy.cfg
frontend main *:80
acl url_static path_beg -i /static /images /javascript /stylesheets
acl url_static path_end -i .jpg .gif .png .css .js
acl blacklist src 172.25.254.9
# block if blacklist
# errorloc 403 http://172.25.9.2
acl read method GET
acl read method HEAD
acl write method POST
acl write method PUT
#use_backend static if url_static
use_backend static if write
default_backend app
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
balance roundrobin
server app2 172.25.9.3:80 check
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
balance roundrobin
server app1 172.25.9.2:80 check
systemctl restart haproxy
netstat -autlp
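A quick check of the split: reads should go to backend app (server2), writes to backend static (server3); haproxy.log confirms the routing:
- curl 172.25.9.1 ## GET -> app (server2)
- curl -X POST 172.25.9.1 ## POST -> static (server3)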
server2:
mkdir upload
chmod 777 upload
scp *.php 172.25.9.3:/var/www/html
systemctl restart httpd
server3:
yum install php -y ## install PHP so the upload scripts can run
mkdir upload
chmod 777 upload
File upload
edit the upload script so the type check accepts png:
$_FILES["file"]["type"] == "image/png"
## include the png type among the allowed ones
systemctl restart httpd
firefox -> 172.25.9.1/index.php
upload a file
cd upload
ls ## the uploaded file lands on server3
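The same upload can be driven from the command line; here upload_file.php stands in for whatever script index.php actually posts to (a hypothetical name):
- curl -F "file=@test.png" http://172.25.9.1/upload_file.php ## a POST, so haproxy routes it to server3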
High availability:
keepalived + lvs (kernel-level policy) also works ## it monitors nodes, but provides no service-level monitoring; that has to be scripted by hand
Configure server4 as the second node.
server1:
set up key-based authentication to the new node:
- ssh-keygen
- ssh-copy-id server4
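A quick check that the key works (no password prompt expected):
- ssh server4 hostname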
Yum repositories:
vim /etc/yum.repos.d/westos.repo
[rhel7.6]
name=AppStream
baseurl=http://172.25.9.250/rhel7.6/
gpgcheck=0
[HighAvailability]
name=HighAvailability
baseurl=http://172.25.9.250/rhel7.6/addons/HighAvailability/
gpgcheck=0
Install the pacemaker, pcs, psmisc and policycoreutils-python packages:
- yum install -y pacemaker pcs psmisc policycoreutils-python
- scp westos.repo server4:/etc/yum.repos.d/ ## give server4 the same repositories
- ssh server4 yum install -y pacemaker pcs psmisc policycoreutils-python
- systemctl enable --now pcsd.service ## start now and enable at boot
cat /etc/passwd ## the hacluster user was created by the install
echo westos | passwd --stdin hacluster
ssh server4 'echo westos | passwd --stdin hacluster'
pcs cluster auth server1 server4
pcs cluster setup --name mycluster server1 server4
pcs status
pcs cluster start --all
pcs cluster enable --all
pcs status
pcs property set stonith-enabled=false ## disable fencing for now; it is re-enabled once fence_xvm is configured
crm_verify -LV ## validate the live cluster configuration
pcs status
pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.0.100 cidr_netmask=24 op monitor interval=30s
pcs status
systemctl stop haproxy.service
pcs node standby ## put server1 into standby
pcs status
pcs resource create haproxy systemd:haproxy op monitor interval=60s ## add haproxy as a cluster resource
pcs status
pcs node unstandby ## take server1 out of standby
pcs status
pcs resource group add hagroup vip haproxy ## group them so the vip and haproxy always run together, vip first
pcs status
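To confirm the grouping (pcs resource show is the RHEL7-era pcs syntax), the vip should sit on whichever node runs haproxy:
- pcs resource show
- ip addr ## on the active node, the vip appears on the interface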
Fencing: the physical host controls the virtual machines
on the host:
configure the repositories, then check the fence packages:
- rpm -qa | grep ^fence ## verify the fence packages are installed
fence-virtd-multicast-0.4.0-4.el8.x86_64
fence-virtd-libvirt-0.4.0-4.el8.x86_64
fence-virtd-0.4.0-4.el8.x86_64
- fence_virtd -c ## generate the configuration interactively: set the interface to br0, accept the defaults elsewhere
- cat /etc/cluster/fence_xvm.key ## check whether the key exists
if it does not, create it (in /etc/cluster):
- dd if=/dev/urandom of=fence_xvm.key bs=128 count=1
- netstat -anulp | grep :1229 ## check that the fence_virtd port is listening
After starting the service:
first create the directory /etc/cluster on server1 and server4, then copy the key over:
- scp fence_xvm.key server1:/etc/cluster
- scp fence_xvm.key server4:/etc/cluster
Fencing covers the cases where a node cannot take itself down cleanly: network loss, kernel crash, storage failure.
server1, server4:
yum install fence-virt
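Once the key is in place on a node, fencing communication can be sanity-checked with the fence_xvm client that fence-virt ships:
- fence_xvm -o list ## should list the virtual machines, e.g. demo1 and demo4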
server1:
pcs stonith create vmfence fence_xvm pcmk_host_map="server1:demo1;server4:demo4" op monitor interval=60s ## map cluster host names to libvirt domain names
pcs property set stonith-enabled=true ## re-enable stonith
pcs status ## check the state
server4:
- ip link set down eth0
With the NIC down, server4 drops out of the cluster; pcs status on server1 shows all resources moving to server1, and server4 is fenced (power-cycled by fence_xvm).
server1:
- echo c > /proc/sysrq-trigger ## trigger a kernel crash; server1 is fenced and the resources fail over to server4
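Because the cluster was enabled at boot (pcs cluster enable --all above), a fenced node rejoins automatically after its reboot; pcs status should then show it online again, with the resources still on the surviving node.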