1. Deploying ldirectord
(1) Introduction to ldirectord
ldirectord manages failover of LVS load-balancing resources between the primary and backup nodes. On first start it can build the IPVS table automatically. It also monitors the health of each RealServer: as soon as a RealServer misbehaves, ldirectord removes it from the IPVS table.
The ldirectord daemon determines each RealServer's state by sending resource requests to the RealServer's RIP and inspecting the responses. On the Director, each VIP needs its own ldirectord process. If a RealServer stops answering ldirectord's requests, ldirectord removes it from the IPVS table via the ipvsadm command; once the RealServer comes back online, ldirectord adds it back to the table.
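This check-and-remove loop can be sketched in shell. The sketch below is an illustrative approximation, not ldirectord's actual code: the VIP and page name follow the setup in this post, and the ipvsadm commands are only echoed rather than executed.

```shell
# Illustrative sketch of ldirectord's negotiate check (not its real code).
# A page is requested from each RIP; on failure the RS would be removed
# from the IPVS table, and re-added once it answers again.
VIP=172.25.33.100:80
check_rs() {
    if curl -s -m 2 -o /dev/null "http://$1/index.html"; then
        echo "ipvsadm -a -t $VIP -r $1 -g"    # healthy: (re-)add the RS
    else
        echo "ipvsadm -d -t $VIP -r $1"       # failed: remove the RS
    fi
}
check_rs 127.0.0.1:9    # TCP port 9 is almost certainly closed, so the check fails
```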
(2) Implementing ldirectord:
Building on the previous post, we now add load balancing plus high availability. Required environment:
DS: server1  (DIP) 172.25.33.1  (VIP) 172.25.33.100
RS: server2/3  172.25.33.2/3
<1> Operations on server1:
Remove the policies from the previous experiment and add the new LVS scheduling rules
[root@server1 ~]# modprobe -r ipip
[root@server1 ~]# ipvsadm -C
[root@server1 ~]# ipvsadm -ln
[root@server1 ~]# ipvsadm -A -t 172.25.33.100:80 -s rr
[root@server1 ~]# ipvsadm -a -t 172.25.33.100:80 -r 172.25.33.2:80 -g
[root@server1 ~]# ipvsadm -a -t 172.25.33.100:80 -r 172.25.33.3:80 -g
[root@server1 ~]# systemctl restart ipvsadm.service
[root@server1 ~]# cat /etc/sysconfig/ipvsadm
[root@server1 ~]# ip addr add 172.25.33.100/24 dev eth0
Set up the HighAvailability yum repository and install ldirectord
[root@server1 ~]# vim /etc/yum.repos.d/westos.repo
Add a new repository entry: baseurl=<the original path>/addons/HighAvailability
[root@server1 ~]# yum install -y ldirectord-3.9.5-3.1.x86_64.rpm
Copy the configuration template into /etc/ha.d and edit it
[root@server1 ~]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d
[root@server1 ~]# vim /etc/ha.d/ldirectord.cf
virtual=172.25.33.100:80
        real=172.25.33.2:80 gate    # the two RealServers
        real=172.25.33.3:80 gate
        fallback=127.0.0.1:80 gate  # if both RS are down, serve from port 80 on the Director itself
        service=http
        scheduler=rr
        #persistent=600
        #netmask=255.255.255.255
        protocol=tcp
        checktype=negotiate
        checkport=80
        request="index.html"
        #receive="Test Page"
        #virtualhost=www.x.y.z
Install httpd, edit the default page, then start ldirectord
[root@server1 ~]# yum install httpd -y
[root@server1 ~]# cd /var/www/html
[root@server1 html]# vim index.html
[root@server1 html]# systemctl start httpd
[root@server1 html]# /etc/init.d/ldirectord start
[root@server1 html]# chkconfig --list    # list services registered for boot (those not set to all off)
##### If httpd on this host is not listening on port 80, edit its config file #####
vim /etc/httpd/conf/httpd.conf    # change the Listen directive to 80
systemctl restart httpd
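A quick way to confirm which port httpd is bound to is to grep the Listen directive out of the config. The snippet below is sketched against a sample file so that it is self-contained; on server1 the real path is /etc/httpd/conf/httpd.conf.

```shell
# Check the port httpd is configured to listen on. A sample config
# fragment stands in for /etc/httpd/conf/httpd.conf here.
conf=$(mktemp)
printf 'ServerRoot "/etc/httpd"\nListen 80\n' > "$conf"
grep '^Listen' "$conf"    # any result other than "Listen 80" means the port must be changed
rm -f "$conf"
```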
<2> Operations on server2/3:
modprobe -r ipip
ip addr add 172.25.33.100/24 dev eth0
<3> Testing:
2. Deploying keepalived
(1) The keepalived high-availability software
Keepalived was originally designed for the LVS load balancer, to manage and monitor the state of the service nodes in an LVS cluster; VRRP support for high availability was added later. Beyond managing LVS, keepalived can therefore also serve as a high-availability solution for other services.
keepalived implements high availability mainly through VRRP, the Virtual Router Redundancy Protocol. VRRP was created to eliminate the single point of failure of static routing: it keeps the network running without interruption even when individual nodes go down. So keepalived can configure and manage LVS and health-check the nodes behind it, and it can also provide high availability for network services in general.
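The VRRP election keepalived relies on can be illustrated with a tiny sketch. This is a simplification, not keepalived's implementation: among the routers in one VRRP instance, the one with the highest priority becomes MASTER and holds the VIP, which is why server1 (priority 100) in this post wins over server4 (priority 50).

```shell
# Simplified VRRP election: the highest priority becomes MASTER.
# Input: "name:priority" pairs; output: the winning node's name.
pick_master() {
    printf '%s\n' "$@" | sort -t: -k2,2nr | head -n1 | cut -d: -f1
}
pick_master server1:100 server4:50    # prints: server1
```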
(2) Implementing keepalived:
<1> Required environment:
Primary DS: server1  DIP: 172.25.33.1  VIP: 172.25.33.100
Backup DS: server4  DIP: 172.25.33.4  VIP: 172.25.33.100
RS: server2 (172.25.33.2) and server3 (172.25.33.3)
<2> Identical operations on both server1 and server4:
[root@server1 ~]# tar zxf keepalived-2.0.17.tar.gz    # unpack the keepalived source
[root@server1 ~]# cd keepalived-2.0.17/
[root@server1 keepalived-2.0.17]# yum install -y gcc openssl-devel
[root@server1 keepalived-2.0.17]# ./configure --prefix=/usr/local/keepalived --with-init=systemd    # configure the build
[root@server1 keepalived-2.0.17]# make && make install    # compile and install
[root@server1 keepalived-2.0.17]# cd /usr/local/keepalived/
[root@server1 keepalived]# ln -s /usr/local/keepalived/etc/keepalived/ /etc    # symlink the config directory
<3> Operations on server1:
ldirectord conflicts with keepalived, so stop ldirectord, then edit the keepalived configuration file
[root@server1 keepalived]# /etc/init.d/ldirectord stop
[root@server1 keepalived]# chkconfig ldirectord off
[root@server1 keepalived]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost            ## recipient of alert mail when a node goes down
    }
    notification_email_from keepalived@localhost    ## mail sender
    smtp_server 127.0.0.1         ## SMTP server used to send the mail
    smtp_connect_timeout 30       ## SMTP connect timeout
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    # vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state MASTER                  ## this node is the master
    interface eth0
    virtual_router_id 33
    priority 100                  ## election priority; the highest priority becomes MASTER
    advert_int 1                  ## VRRP advertisement interval: 1 s
    authentication {
        auth_type PASS            ## authentication type
        auth_pass 1111            ## authentication password
    }
    virtual_ipaddress {
        172.25.33.100             ## the virtual IP (VIP)
    }
}
virtual_server 172.25.33.100 80 {
    delay_loop 3                  ## health-check interval in seconds
    lb_algo rr                    ## LVS scheduling algorithm: round robin
    lb_kind DR                    ## LVS forwarding method: direct routing
    # persistence_timeout 50
    protocol TCP                  ## service protocol
    real_server 172.25.33.2 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
    real_server 172.25.33.3 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
}
[root@server1 keepalived]# systemctl start keepalived
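Once keepalived is up, the VIP should appear on eth0 of the MASTER, normally checked with `ip addr show eth0`. The output below is a hand-written illustration, not captured from the lab machines.

```shell
# On the MASTER, "ip addr show eth0" should list the VIP as a
# secondary address. Sample (illustrative) output:
sample='inet 172.25.33.1/24 scope global eth0
    inet 172.25.33.100/32 scope global secondary eth0'
echo "$sample" | grep -c '172\.25\.33\.100'    # prints: 1
```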
<4> Operations on server4:
[root@server4 ~]# vim /etc/keepalived/keepalived.conf
    18  state BACKUP      ## this node is the backup
    21  priority 50       ## lower priority than the master
[root@server4 ~]# systemctl start keepalived.service
<5> Testing from the client:
When the primary DS is stopped, the VIP moves to the backup DS and users notice nothing.
If one RS is stopped, it is removed from the IPVS table and requests go to the remaining RS.
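The round-robin behaviour the client should observe can be sketched as a dry run. The output here is simulated; on the real client the test is simply repeated `curl http://172.25.33.100/` calls, which alternate between the server2 and server3 pages.

```shell
# Simulated rr scheduling: with two healthy RS and scheduler rr,
# consecutive requests alternate between them.
for i in 1 2 3 4; do
    if [ $((i % 2)) -eq 1 ]; then rs=server2; else rs=server3; fi
    echo "request $i -> $rs"
done
```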