1. LVS
1.1 Cluster
The load problem: a single server P1 is accessed by many clients at once; flooded with more requests than it can process, the server goes down.
The fix: place a front-end machine ahead of the server. It only receives client requests (it does not process them), puts them in order, and passes them on so the server can work at a normal pace. That front-end machine is LVS.
This whole architecture is a cluster.
The cluster is deployed on the back-end hosts: multiple machines form one large logical host, and each member handles a share of the requests.
High availability: with only one LVS node, the service collapses again when that node fails, so a standby LVS is added. When the primary LVS fails, traffic switches to the standby immediately, keeping the service up. The pair of LVS nodes forms a high-availability (HA) setup.
A = MTBF / (MTBF + MTTR): availability, with MTBF the mean time between failures and MTTR the mean time to repair. The closer A is to 1 (e.g. above 99%), the better.
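The formula is easy to check from the shell; the MTBF and MTTR figures below are invented for illustration.

```shell
# Availability A = MTBF / (MTBF + MTTR), with made-up numbers:
# 9990 h between failures, 10 h to repair.
awk -v mtbf=9990 -v mttr=10 \
    'BEGIN { printf "A = %.2f%%\n", 100 * mtbf / (mtbf + mttr) }'
# Prints: A = 99.90%
```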
1.2 Distributed
Distributed: a complex request is split across multiple servers, so several hosts work on the same request.
Example: for one request, the web part goes to a server dedicated to web work, and the DB part goes to a server dedicated to database work.
1.3 Cluster vs. distributed
A cluster is like bank windows 1 and 2: the service at each is identical, and each request is handled by a single host.
Distributed is like moving house: you carry the fridge, I carry the computer; the goal is shared, and one request is split across several hosts.
1.4 How LVS works
LVS: a load balancer integrated into the kernel, working as a layer-4 device (layer 3 can rewrite IP addresses, layer 2 can rewrite MAC addresses; at most it can also rewrite ports).
Why layer-4 LVS has not been retired: it is built into the kernel, so it processes traffic very fast.
LVS cluster architecture
DNAT: one IP mapped to another, a one-to-one address translation. LVS uses a DNAT-like translation, but one-to-many: the destination IP of an incoming request is rewritten to one of several real-server addresses, so that server handles the request.
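For contrast, this is how a plain one-to-one DNAT rule looks in iptables syntax, using this lab's addresses. The rule is only printed, not applied (applying it needs root); LVS performs the analogous rewrite, but to one of many back ends.

```shell
# One-to-one DNAT: traffic for the front IP is rewritten to a single
# fixed back-end address. Shown as text only, not installed.
rule='-t nat -A PREROUTING -d 172.25.254.100 -p tcp --dport 80 -j DNAT --to-destination 192.168.0.10:80'
echo "iptables $rule"
```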
1.5 LVS lab
How scheduling plays out: once the policy is configured, clients send requests to LVS, which consults the policy on each request. The policy here alternates between the two back ends: host .10 handles one request, host .20 the next. LVS therefore forwards the first client request to .10 and the second to .20, and so on.
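The alternation can be sketched without ipvsadm; the loop below just replays the rr hand-out over the lab's two real-server addresses.

```shell
# Round-robin: request i goes to server (i mod 2).
for i in 0 1 2 3; do
    if [ $(( i % 2 )) -eq 0 ]; then rs=192.168.0.10; else rs=192.168.0.20; fi
    echo "request $i -> $rs"
done
# request 0 -> 192.168.0.10
# request 1 -> 192.168.0.20
# request 2 -> 192.168.0.10
# request 3 -> 192.168.0.20
```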
1.5.1 Environment
LVS box: linux9 with an extra NIC; the original NIC on NAT, the new NIC host-only.
Two hosts: linux9 (webserver1) and linux9 (webserver2), both host-only.
Workstation: connect with xshell, not plain ssh.
VMware
eth0: VIP
eth1: DIP
LVS box
Add an IP address to eth1
[root@localhost ~]# vmset.sh eth1 192.168.0.100 lvs.timinglee.org
[root@localhost ~]# vim /etc/NetworkManager/system-connections/eth1.nmconnection
[connection]
id=eth1
type=ethernet
interface-name=eth1

[ipv4]
address1=192.168.0.100/24
method=manual
[root@localhost ~]# nmcli connection reload
[root@localhost ~]# nmcli connection up eth1
Check the configuration
[root@localhost ~]# cat /etc/NetworkManager/system-connections/eth1.nmconnection
[connection]
id=eth1
type=ethernet
interface-name=eth1

[ipv4]
address1=192.168.0.100/24
method=manual
[root@localhost ~]# cat /etc/NetworkManager/system-connections/eth0.nmconnection
[connection]
id=eth0
type=ethernet
interface-name=eth0

[ipv4]
address1=172.25.254.100/24,172.25.254.2
method=manual
dns=114.114.114.114;
Enable kernel IP forwarding on the LVS box: its two NICs sit on different subnets, so traffic cannot cross it otherwise.
Check
[root@localhost ~]# sysctl -a | grep ip_forward
net.ipv4.ip_forward = 0
net.ipv4.ip_forward_update_priority = 1
net.ipv4.ip_forward_use_pmtu = 0
Modify
[root@localhost ~]# vim /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.ip_forward=1
[root@localhost ~]# sysctl -p
net.ipv4.ip_forward = 1
Host 1
[root@localhost ~]# vmset.sh eth0 192.168.0.10 webserver1.timinglee.org
[root@localhost ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.25.254.2    0.0.0.0         UG    100    0        0 eth0
172.25.254.2    0.0.0.0         255.255.255.255 UH    100    0        0 eth0
192.168.0.0     0.0.0.0         255.255.255.0   U     100    0        0 eth0
[root@localhost ~]# vim /etc/NetworkManager/system-connections/eth0.nmconnection
[connection]
id=eth0
type=ethernet
interface-name=eth0

[ipv4]
address1=192.168.0.10/24,192.168.0.100
method=manual
[root@localhost ~]# nmcli connection reload
[root@localhost ~]# nmcli connection up eth0
Check
[root@localhost ~]# cat /etc/NetworkManager/system-connections/eth0.nmconnection
[connection]
id=eth0
type=ethernet
interface-name=eth0

[ipv4]
address1=192.168.0.10/24,192.168.0.100
method=manual
[root@localhost ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.100   0.0.0.0         UG    100    0        0 eth0
192.168.0.0     0.0.0.0         255.255.255.0   U     100    0        0 eth0
Host 2
[root@localhost ~]# vmset.sh eth0 192.168.0.20 webserver2.timinglee.org
[root@localhost ~]# vim /etc/NetworkManager/system-connections/eth0.nmconnection
[connection]
id=eth0
type=ethernet
interface-name=eth0

[ipv4]
address1=192.168.0.20/24,192.168.0.100
method=manual
dns=114.114.114.114;
[root@localhost ~]# nmcli connection reload
[root@localhost ~]# nmcli connection up eth0
Check
[root@localhost ~]# cat /etc/NetworkManager/system-connections/eth0.nmconnection
[connection]
id=eth0
type=ethernet
interface-name=eth0

[ipv4]
address1=192.168.0.20/24,192.168.0.100
method=manual
dns=114.114.114.114;
[root@localhost ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.100   0.0.0.0         UG    100    0        0 eth0
192.168.0.0     0.0.0.0         255.255.255.0   U     100    0        0 eth0
1.5.2 Lab configuration
Host 1 configuration
[root@webserver1 ~]# dnf install httpd -y
[root@webserver1 ~]# echo webserver1 - 192.168.0.10 > /var/www/html/index.html
Start the service and enable it at boot
[root@webserver1 ~]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
Host 2
[root@webserver2 ~]# dnf install httpd -y
[root@webserver2 ~]# echo webserver2 - 192.168.0.20 > /var/www/html/index.html
[root@webserver2 ~]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
LVS box
Sanity check: both real servers answer
[root@lvs ~]# curl 192.168.0.10
webserver1 - 192.168.0.10
[root@lvs ~]# curl 192.168.0.20
webserver2 - 192.168.0.20
Find the name of the LVS package to install
[root@lvs ~]# dnf search lvs
Install the LVS admin tool on the LVS box
[root@lvs ~]# yum install ipvsadm -y
List the current rules (empty for now)
[root@lvs ~]# ipvsadm -Ln
View the main configuration file
[root@lvs ~]# cat /etc/sysconfig/ipvsadm-config
View the saved-rules file
[root@lvs ~]# cat /etc/sysconfig/ipvsadm
Configure the virtual service
[root@lvs ~]# ipvsadm -A -t 172.25.254.100:80 -s rr
(scheduling kicks in when clients hit 172.25.254.100; rr, round-robin, is a static algorithm)
[root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.254.100:80 rr
Add the first real server
[root@lvs ~]# ipvsadm -a -t 172.25.254.100:80 -r 192.168.0.10:80 -m
[root@lvs ~]# ipvsadm -Ln
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.254.100:80 rr
  -> 192.168.0.10:80              Masq    1      0          0
(with -m the forwarding method is Masq, i.e. NAT)
Workstation, local shell
[C:~]$ curl 172.25.254.100
LVS box
Scheduling idea: .10 once, .20 once, in rotation.
Add the second real server
[root@lvs ~]# ipvsadm -a -t 172.25.254.100:80 -r 192.168.0.20:80 -m
[root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.254.100:80 rr
  -> 192.168.0.10:80              Masq    1      0          0
  -> 192.168.0.20:80              Masq    1      0          0
Workstation test (using MobaXterm)
for i in {1..10}
do
curl 172.25.254.100
done
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
LVS box
Save the current rules
[root@lvs ~]# ipvsadm-save
1.6 ipvsadm commands
Saved rules can be restored after the live rules are cleared.
Changing weights lets higher-weight servers take traffic preferentially; rr ignores weights, but wrr honors them.
Command usage
Delete the saved-rules file
[root@lvs ~]# rm -fr /etc/sysconfig/ipvsadm
1.7 Building the DR-mode lab
1.7.1 Environment
Five hosts:
Router (router): linux9.4; a new inward-facing NIC on host-only, the original NIC on NAT.
Client (client): linux9.4, NAT.
Two real servers
One LVS box
1.7.2 Lab configuration
This continues from the 1.5 lab.
LVS box
Delete all existing NIC profiles
[root@lvs ~]# nmcli connection delete eth0
[root@lvs ~]# nmcli connection delete eth1
Create and configure eth1
[root@localhost ~]# vmset.sh eth1 192.168.0.50 lvs.timinglee.org
[root@localhost ~]# vim /etc/NetworkManager/system-connections/eth1.nmconnection
[connection]
id=eth1
type=ethernet
interface-name=eth1

[ipv4]
address1=192.168.0.50/24,192.168.0.100
method=manual
[root@localhost NetworkManager]# nmcli connection reload
[root@localhost NetworkManager]# nmcli connection up eth1
[root@localhost ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.100   0.0.0.0         UG    100    0        0 eth1
192.168.0.0     0.0.0.0         255.255.255.0   U     100    0        0 eth1
Router
[root@localhost ~]# vmset.sh eth0 172.25.254.100 router.timinglee.org
[root@localhost ~]# vmset.sh eth1 192.168.0.100 router.timinglee.org
Check the NIC configuration
[root@localhost ~]# vim /etc/NetworkManager/system-connections/eth0.nmconnection
[connection]
id=eth0
type=ethernet
interface-name=eth0

[ipv4]
address1=172.25.254.100/24,172.25.254.2
method=manual
dns=114.114.114.114;
[root@localhost ~]# vim /etc/NetworkManager/system-connections/eth1.nmconnection
[connection]
id=eth1
type=ethernet
interface-name=eth1

[ipv4]
address1=192.168.0.100/24
method=manual
Enable kernel IP forwarding on the router: its two NICs sit on different subnets, so traffic cannot cross it otherwise.
Check
[root@localhost ~]# sysctl -a | grep ip_forward
net.ipv4.ip_forward = 0
net.ipv4.ip_forward_update_priority = 1
net.ipv4.ip_forward_use_pmtu = 0
Modify
[root@localhost ~]# vim /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.ip_forward=1
[root@localhost ~]# sysctl -p
net.ipv4.ip_forward = 1
Reload the NIC configuration
[root@localhost ~]# nmcli connection reload
[root@localhost ~]# nmcli connection up eth1
[root@localhost ~]# nmcli connection up eth0
That completes the router setup.
Client
[root@localhost ~]# vmset.sh eth0 172.25.254.200 client.timinglee.org
[root@localhost ~]# vim /etc/NetworkManager/system-connections/eth0.nmconnection
[connection]
id=eth0
type=ethernet
interface-name=eth0

[ipv4]
address1=172.25.254.200/24,172.25.254.2
method=manual
Reload the NIC configuration
[root@localhost ~]# nmcli connection reload
[root@localhost ~]# nmcli connection up eth0
Check
[root@localhost ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.25.254.100  0.0.0.0         UG    100    0        0 eth0
172.25.254.0    0.0.0.0         255.255.255.0   U     100    0        0 eth0
webserver1 (.10)
Check the NIC and subnet
[root@webserver1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.10  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::cd65:33ae:6151:274f  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:d0:67:24  txqueuelen 1000  (Ethernet)
        RX packets 1554  bytes 140039 (136.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1061  bytes 107995 (105.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 277  bytes 26475 (25.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 277  bytes 26475 (25.8 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
On rs1, stop the VIP from answering ARP requests
[root@webserver1 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@webserver1 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@webserver1 ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@webserver1 ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
Bind the VIP to the loopback interface (the default gateway already points at the router)
[root@webserver1 ~]# ip a a 192.168.0.200/32 dev lo
webserver2 (.20)
On rs2, stop the VIP from answering ARP requests
[root@webserver2 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@webserver2 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@webserver2 ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@webserver2 ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
Bind the VIP to the loopback interface (the default gateway already points at the router)
[root@webserver2 ~]# ip a a 192.168.0.200/32 dev lo
LVS box
Bind the VIP to the loopback interface
[root@lvs ~]# ip a a 192.168.0.200/32 dev lo
Reload the NIC configuration
[root@lvs ~]# nmcli connection reload
[root@lvs ~]# nmcli connection up lo
[root@lvs ~]# nmcli connection up eth1
Clear the old rules
[root@lvs ~]# ipvsadm -C
Configure the rules (DR mode, weighted round-robin)
[root@lvs ~]# ipvsadm -A -t 192.168.0.200:80 -s wrr
[root@lvs ~]# ipvsadm -a -t 192.168.0.200:80 -r 192.168.0.10:80 -g -w 1
[root@lvs ~]# ipvsadm -a -t 192.168.0.200:80 -r 192.168.0.20:80 -g -w 2
Client test
[root@client ~]# ping 192.168.0.200
[root@client ~]# for i in {1..10}
> do
> curl 192.168.0.200
> done
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
2. LVS scheduling algorithms (important)
2.1 Static algorithms
RR: round-robin. Ignores back-end health and request complexity; it rigidly hands one request to each server in turn.
WRR: weighted round-robin. Servers with a larger weight handle proportionally more requests than those with a smaller weight.
SH: source hash. Session binding: the request's source address is hashed and bound to one real server, so every request from that source IP goes to the same back end.
DH: destination hash. The request's destination address is hashed and bound to a server; every request toward that destination goes to the bound server.
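The source-hash idea can be sketched with cksum standing in for the real hash (the two-bucket split and server names are hypothetical): the same client IP always hashes to the same bucket, so the binding is stable.

```shell
# Hash the client IP into one of two buckets; repeated calls with the
# same IP always land in the same bucket.
pick() {
    bucket=$(printf '%s' "$1" | cksum | awk '{ print $1 % 2 + 1 }')
    echo "client $1 -> webserver$bucket"
}
pick 10.0.0.1
pick 10.0.0.2
pick 10.0.0.1   # identical to the first line: the binding is stable
```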
2.2 Dynamic algorithms
LC: least connections. The server with the lowest load gets more traffic; but if the back ends have different hardware, judging by connection count alone can still overwhelm a weak server.
Active connection: a connection currently exchanging traffic.
WLC: weighted least connections, overhead = connections / weight. Capacity is taken into account, but a high-weight server can end up handling a large share of requests continuously while low-weight servers sit mostly idle.
SED: shortest expected delay; initial connections prefer high weights. After the first few rounds it runs into the same problem as WLC.
NQ: never queue; the first round is spread evenly, after which SED takes over.
LBLC: locality-based least connection, a dynamic version of the DH algorithm.
LBLCR: LBLC with replication.
Replication: when one real server has too many pending requests, some of the unprocessed ones are copied to a less busy server to lighten the load.
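The WLC pick can be sketched with awk: overhead = active connections / weight, and the lowest overhead wins. The connection counts and weights below are invented.

```shell
# Columns: server, active connections, weight.
printf '%s\n' "rs1 10 1" "rs2 12 3" |
awk '{ o = $2 / $3
       if (best == "" || o < min) { min = o; best = $1 } }
     END { print "next request -> " best }'
# rs1: 10/1 = 10, rs2: 12/3 = 4, so rs2 takes the next request
```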
2.3 Schedulers added in kernel 4.15 and later
FO: weighted fail-over, a static algorithm. A server that has taken on too many requests gets an overload ("taint") mark, and subsequent requests go to another server.
OVF: overflow-connection, a dynamic algorithm based on active connections and weight. It prefers the highest-weight server that carries no overload mark and whose weight is not 0.
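An FO-style pick can be sketched the same way: among servers without the overload mark, the highest weight wins. Server names, weights, and flags here are made up.

```shell
# Columns: server, weight, overload flag (1 = marked overloaded).
printf '%s\n' "rs1 3 1" "rs2 2 0" "rs3 1 0" |
awk '$3 == 0 && $2 > best { best = $2; name = $1 }
     END { print "next request -> " name }'
# rs1 has the highest weight but is marked overloaded, so rs2 is chosen
```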
3. Using firewall marks to fix round-robin scheduling
This continues from the previous lab.
LVS box
Switch back to the static rr algorithm
[root@lvs ~]# ipvsadm -E -t 192.168.0.200:80 -s rr
3.1 A problem that round-robin rules can run into
3.1.1 Reproducing the problem
Host 1
Install the mod_ssl module so webserver1 supports https
[root@webserver1 ~]# yum install mod_ssl -y
Restart Apache
[root@webserver1 ~]# systemctl restart httpd
Host 2
Install the mod_ssl module so webserver2 supports https
[root@webserver2 ~]# yum install mod_ssl -y
Restart Apache
[root@webserver2 ~]# systemctl restart httpd
LVS box
Check the existing LVS configuration
[root@lvs ~]# curl -k https://192.168.0.20
[root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.200:80 rr
  -> 192.168.0.10:80              Route   1      0          0
  -> 192.168.0.20:80              Route   2      0          0
Add LVS rules for port 443
[root@lvs ~]# ipvsadm -A -t 192.168.0.200:443 -s rr
[root@lvs ~]# ipvsadm -a -t 192.168.0.200:443 -r 192.168.0.10:443 -g
[root@lvs ~]# ipvsadm -a -t 192.168.0.200:443 -r 192.168.0.20:443 -g
Client
Test: the problem appears; ports 80 and 443 are scheduled as separate services, so both requests land on the same back end
[root@client ~]# curl 192.168.0.200;curl -k https://192.168.0.200
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
LVS box: clear the rules before reworking them
[root@lvs ~]# ipvsadm -C
3.1.2 Fixing the problem
LVS box
[root@lvs ~]# iptables -t mangle -nL
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
On the LVS host, mark both ports with the same firewall mark
[root@lvs ~]# iptables -t mangle -A PREROUTING -d 192.168.0.200 -p tcp -m multiport --dports 80,443 -j MARK --set-mark 66
[root@lvs ~]# iptables -t mangle -nL
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
MARK       6    --  0.0.0.0/0            192.168.0.200        multiport dports 80,443 MARK set 0x42
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
Configure the LVS rules against the mark (fwmark 66)
[root@lvs ~]# ipvsadm -A -f 66 -s rr
[root@lvs ~]# ipvsadm -a -f 66 -r 192.168.0.10 -g
[root@lvs ~]# ipvsadm -a -f 66 -r 192.168.0.20 -g
Check the rules
[root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
FWM  66 rr
  -> 192.168.0.10:0               Route   1      0          0
  -> 192.168.0.20:0               Route   1      0          0
Client test: http and https now rotate across the back ends
[root@client ~]# curl 192.168.0.200;curl -k https://192.168.0.200
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
4. Persistent connections (LVS)
LVS box
Change the default persistence timeout
[root@lvs ~]# ipvsadm -E -f 66 -s rr -p 360
Explanation: for 360 s, all requests from the same source IP are kept on the same real server.
Verify the change
[root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
FWM  66 rr persistent 360        # now 360 s
  -> 192.168.0.10:0               Route   1      0          2
  -> 192.168.0.20:0               Route   1      0          2
Remove the persistence timeout and restore the default behavior
[root@lvs ~]# ipvsadm -E -f 66 -s rr
[root@lvs ~]# ipvsadm -Ln
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
FWM  66 rr
  -> 192.168.0.10:0               Route   1      0          0
  -> 192.168.0.20:0               Route   1      0          0
Client test
[root@client ~]# for i in {1..10}; do curl 192.168.0.200; sleep 1; done
Explanation: only after the 360 s window expires can requests from the same source switch to a different server.