LVS active/standby load-balancing deployment on CentOS 7.3 (kernel 3.10-514)
Master:  192.168.1.51
Backup:  192.168.1.52
LVS VIP: 192.168.1.50
0. Build a local yum repository from the installation DVD
Note: upload CentOS-7-x86_64-DVD-1611.iso to the system so that the following installation steps can be completed without access to the public network. If the hosts can reach the Internet directly, step 0 can be skipped.
Create a mount point for the ISO:
mkdir /media/cdrom
Mount the ISO at /media/cdrom:
mount -t iso9660 /root/CentOS-7-x86_64-DVD-1611.iso /media/cdrom
Edit the repo file:
vi /etc/yum.repos.d/CentOS-Media.repo
[c7-media]
name=CentOS-$releasever - Media
baseurl=file:///media/CentOS/
        file:///media/cdrom/
        file:///media/cdrecorder/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Move every file in /etc/yum.repos.d except CentOS-Media.repo to a backup directory, leaving only:
ls /etc/yum.repos.d
CentOS-Media.repo
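The cleanup above can be scripted. A minimal sketch, with the understanding that the function name and the backup path /root/repo-backup are arbitrary choices, not part of the original procedure:

```shell
#!/bin/bash
# Move every .repo file except CentOS-Media.repo into a backup
# directory. Arguments default to the system repo path and an
# example backup location.
backup_repos() {
  local repo_dir=${1:-/etc/yum.repos.d}
  local backup_dir=${2:-/root/repo-backup}
  mkdir -p "$backup_dir"
  local f
  for f in "$repo_dir"/*.repo; do
    [ -e "$f" ] || continue                          # no matches at all
    if [ "$(basename "$f")" != "CentOS-Media.repo" ]; then
      mv "$f" "$backup_dir/"
    fi
  done
}
# usage: backup_repos
```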
Rebuild the yum cache:
yum clean all
yum makecache
Run any search command to verify that the repository is usable:
yum search ssh
Note: all of the following steps were verified using only the local yum repository from step 0.
1. Base packages
yum -y install gcc gcc-c++ make popt popt-devel libnl libnl-devel popt-static openssl-devel kernel-devel
Create a symbolic link to the kernel source tree:
ln -s /usr/src/kernels/3.10.0-514.el7.x86_64 /usr/src/linux
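If the installed kernel-devel version differs from the 3.10.0-514 example above, the path can be derived from the running kernel instead of hard-coded; a small sketch:

```shell
# Build the symlink target from the running kernel version rather
# than hard-coding 3.10.0-514.el7.x86_64.
src="/usr/src/kernels/$(uname -r)"
if [ -d "$src" ]; then
  ln -sfn "$src" /usr/src/linux
else
  echo "no kernel source tree at $src (is kernel-devel installed?)" >&2
fi
```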
2. Install the LVS software
# yum -y install ipvsadm
# ipvsadm -v
ipvsadm v1.27 2008/5/15 (compiled with popt and IPVS v1.2.1)
3. Install keepalived
# yum install keepalived
# keepalived -v
Keepalived v1.2.13 (05/25,2017)
Packages pulled in when installing keepalived:
Installed:
  keepalived.x86_64 0:1.2.13-9.el7_3
Dependency Installed:
  lm_sensors-libs.x86_64 0:3.4.0-4.20160601gitf9185e5.el7
  net-snmp-agent-libs.x86_64 1:5.7.2-24.el7_3.2
  net-snmp-libs.x86_64 1:5.7.2-24.el7_3.2
4. Firewall configuration
Open the firewall (via firewalld's direct rules, which feed into iptables) so that the master accepts VRRP traffic from the backup, and make the same change on the backup (enp0s3 is this example's interface name):
firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --in-interface enp0s3 --destination 224.0.0.18 --protocol vrrp -j ACCEPT
firewall-cmd --reload
5. Configure /etc/keepalived/keepalived.conf
Note: on the backup node, only the router_id and priority values in this file differ; everything else is identical.
Example configuration for the master:
global_defs {
    notification_email {
        #system@hongshutech.com
    }
    notification_email_from lvs@baiwutong.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_51
}

vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface enp0s3
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1207
    }
    virtual_ipaddress {
        192.168.1.50
    }
}

virtual_server 192.168.1.50 8888 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 2
    protocol TCP

    real_server 192.168.1.61 8888 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 8855
        }
    }
    real_server 192.168.1.62 8888 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 8855
        }
    }
}

virtual_server 192.168.1.50 8080 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 2
    protocol TCP

    real_server 192.168.1.61 8080 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 8080
        }
    }
    real_server 192.168.1.62 8080 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 8080
        }
    }
}
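For reference, the backup node's file differs only in the two values noted earlier. A plausible variant; the priority 90 here is an example value (any value lower than the master's 100 works):

```
global_defs {
    ...
    router_id LVS_52       # unique per node
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    ...
    priority 90            # must be lower than the master's 100
    ...
}
```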
Start the keepalived service:
# systemctl start keepalived
Check the service status:
# systemctl status keepalived
Enable it to start at boot:
# systemctl enable keepalived
Verify that the VIP is now active on the network interface:
# ip a
Capture packets to confirm that the master is sending periodic VRRP advertisements:
# tcpdump -p vrrp -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 65535 bytes
10:16:13.375399 IP 192.168.1.51 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 100, authtype simple, intvl 1s, length 20
10:16:14.376542 IP 192.168.1.51 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 100, authtype simple, intvl 1s, length 20
10:16:15.377596 IP 192.168.1.51 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 100, authtype simple, intvl 1s, length 20
10:16:16.378590 IP 192.168.1.51 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 100, authtype simple, intvl 1s, length 20
Test master/backup failover and VIP migration by stopping keepalived on the master, confirming the VIP moves to the backup, and then restarting it:
# systemctl stop keepalived
# systemctl start keepalived
Note: to reduce the impact of automatic preemption on live traffic, the LVS service is configured to run in non-preemptive mode (both nodes use state BACKUP together with nopreempt).
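When a real server stays unexpectedly marked down, the TCP_CHECK logic can be reproduced by hand from the director. A rough shell stand-in for the connect_timeout / nb_get_retry / delay_before_retry settings above (the host and port in the usage line are the example values from this document):

```shell
#!/bin/bash
# Manual stand-in for keepalived's TCP_CHECK: attempt a TCP connect
# with a timeout, retrying a few times before declaring the server
# down. Defaults mirror connect_timeout=3, nb_get_retry=3,
# delay_before_retry=3 from the configuration above.
check_tcp() {
  local host=$1 port=$2
  local connect_timeout=${3:-3} retries=${4:-3} delay=${5:-3}
  local i
  for ((i = 1; i <= retries; i++)); do
    # bash's /dev/tcp pseudo-device opens a TCP connection
    if timeout "$connect_timeout" bash -c "</dev/tcp/$host/$port" 2>/dev/null; then
      echo "UP"
      return 0
    fi
    sleep "$delay"
  done
  echo "DOWN"
  return 1
}
# usage: check_tcp 192.168.1.61 8855
```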
6. Configuring the backend real servers
In DR mode, upload the script lvs_real_server.sh to /usr/local/src on each backend application host, then run:
chmod 700 /usr/local/src/lvs_real_server.sh
echo "/usr/local/src/lvs_real_server.sh start" >> /etc/rc.d/rc.local
/usr/local/src/lvs_real_server.sh start
The content of lvs_real_server.sh follows; be sure to update the VIP variable for your environment:
vi lvs_real_server.sh
#!/bin/bash
#written by Daniel on 2014/02/19
#version 1.0
VIP=192.168.1.50
. /etc/rc.d/init.d/functions
case "$1" in
start)
    ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
    route add -host $VIP dev lo:0
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p > /dev/null 2>&1
    echo "Real Server Start OK"
    ;;
stop)
    ifconfig lo:0 down
    route del $VIP > /dev/null 2>&1
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "Real Server Stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
exit 0
Testing the load-balancing service
After the LVS deployment is finished, the backend applications may not yet be live, so the load balancing should be verified in advance. On each backend node, start a temporary HTTP server with the following command, listening on the port that the LVS virtual service forwards to. Stop it once testing is done.
# python -m SimpleHTTPServer 8888
Serving HTTP on 0.0.0.0 port 8888...
Run the following on the LVS master to view the load-balancing statistics:
# ipvsadm -L -n
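To give the ipvsadm counters something to show, a client outside the real servers can fire a burst of requests at the VIP and then re-run ipvsadm. A sketch, assuming curl is available; the VIP and port are the example values from the configuration above and the request count is arbitrary:

```shell
#!/bin/bash
# Send n requests to the virtual service so the IPVS connection
# counters advance. Individual failures are ignored so a partial
# backend outage does not abort the burst.
vip_burst() {
  local vip=$1 port=$2 n=${3:-20}
  local i
  for ((i = 0; i < n; i++)); do
    curl -s -o /dev/null --max-time 2 "http://$vip:$port/" || true
  done
}
# usage: vip_burst 192.168.1.50 8888
# then:  ipvsadm -L -n --stats
```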