keepalived+lvs
What is LVS
LVS is short for Linux Virtual Server, a virtual server cluster system. It uses clustering techniques on the Linux operating system to build a single high-performance, highly available server, with:
- good scalability (Scalability)
- good reliability (Reliability)
- good manageability (Manageability)
What is LVS for:
LVS is mainly used for load balancing across a server cluster. It works at the network layer and delivers high-performance, highly available clustering. It is cheap: many low-end servers can be combined into one "super server". It is easy to use, with very simple configuration and several load-balancing methods to choose from. It is stable and reliable: even if one server in the cluster fails, the service as a whole is unaffected. It also scales very well.
LVS has been developed since 1998 and is by now a fairly mature project. It can be used to build highly scalable, highly available network services such as WWW, cache, DNS, FTP, mail, and video/audio on-demand services. Many well-known sites and organizations run clusters built on LVS, for example the Linux portal (www.linux.com), Real (www.real.com), famous for its RealPlayer audio/video services, and the largest open-source site, sourceforge.net.
LVS architecture:
A server cluster built with LVS has three tiers:
(1) the front-end load-balancing tier (Load Balancer);
(2) the middle server-cluster tier (Server Array);
(3) the back-end shared-storage tier (Shared Storage).
To the user all of the internals are transparent: the user simply consumes the high-performance service of a single virtual server.
LVS load-balancing mechanisms
(1) LVS is a layer-4 load balancer: it operates at the transport layer (layer 4 of the OSI model), home of the familiar TCP and UDP, and LVS supports balancing both. Because it works at layer 4, it is much more efficient than higher-layer solutions such as DNS round-robin, application-layer scheduling, or client-side scheduling.
(2) LVS forwards traffic mainly by rewriting IP addresses (NAT mode, subdivided into source-address rewriting, SNAT, and destination-address rewriting, DNAT) or by rewriting the destination MAC address (DR mode).
- NAT mode: network address translation
NAT (Network Address Translation) maps between external and internal addresses. In NAT mode, packets in both directions pass through the LVS director, which must be the gateway of the real servers (RS). When a packet reaches LVS, it performs destination NAT (DNAT), rewriting the destination IP from the VIP to an RS IP. To the RS, the packet looks as if the client had sent it directly. When the RS replies, the source IP is the RS IP and the destination IP is the client's IP. The reply is routed through its gateway (LVS), which performs source NAT (SNAT), rewriting the source address back to the VIP, so the reply appears to the client to come straight from LVS. The client never sees the back-end real servers.
- DR mode: direct routing
In DR mode, LVS and the real servers all bind the same VIP (each RS binds the VIP on its loopback interface). Unlike NAT, the request is received by LVS but the reply is sent back to the user directly by the real server (RealServer, RS), bypassing LVS on the way out. In detail: when a request arrives, LVS only rewrites the frame's destination MAC address to that of the chosen RS, and the frame is forwarded to that RS; the source and destination IPs are untouched, LVS has merely performed a sleight of hand at layer 2. When the RS receives the forwarded frame, the link layer sees its own MAC and the network layer sees its own IP (the VIP on loopback), so the packet is accepted normally; the RS is unaware of the LVS in front of it. When the RS replies, it sends the response straight to the source IP (the user's IP), without going back through LVS.
(3) In DR mode the IP addresses are never modified during forwarding, only the MAC address. Because the real server that actually handles the request owns the very IP the client addressed, no address translation is needed on the way back: responses go directly to the user's browser, so the director's NIC bandwidth never becomes a bottleneck. DR mode therefore performs best and is currently the most widely used load-balancing scheme on large sites.
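The difference between the two forwarding modes can be sketched with a toy model (illustrative Python only, not kernel code; the client and real-server MAC addresses below are invented):

```python
# Toy packets as dicts; IPs follow this document's example network,
# but the MAC addresses are invented for illustration.
VIP = "172.25.32.100"
RS_IP = "172.25.32.2"
RS_MAC = "52:54:00:aa:bb:02"          # hypothetical real-server MAC

def nat_forward(pkt, rs_ip):
    """NAT mode: the director rewrites the destination IP (DNAT)."""
    fwd = dict(pkt)
    fwd["dst_ip"] = rs_ip             # VIP -> real-server IP at layer 3
    return fwd

def dr_forward(pkt, rs_mac):
    """DR mode: only the destination MAC changes; both IPs stay intact."""
    fwd = dict(pkt)
    fwd["dst_mac"] = rs_mac           # frame re-addressed at layer 2 only
    return fwd

client_pkt = {"src_ip": "172.25.32.250", "dst_ip": VIP,
              "src_mac": "52:54:00:cc:dd:01", "dst_mac": "52:54:00:95:72:9b"}

print(nat_forward(client_pkt, RS_IP)["dst_ip"])   # the RS IP: layer 3 changed
print(dr_forward(client_pkt, RS_MAC)["dst_ip"])   # still the VIP: layer 3 untouched
print(dr_forward(client_pkt, RS_MAC)["dst_mac"])  # the RS MAC: layer 2 changed
```

This is also why, in DR mode, the reply needs no translation: the RS answered for the VIP itself.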
Building load balancing with keepalived + LVS
- topology diagram
Environment:
server1 and server4: keepalived servers, 172.25.32.1 (.4)
server2 and server3: Apache servers, 172.25.32.2 (.3)
On server1 (and likewise server4):
- download the keepalived source tarball
- unpack it and build from source
[root@server1 keepalived]# ls
keepalived-1.3.5.tar.gz rhel6 keepalived+lvs.pdf sery-lvs-cluster.pdf
[root@server1 keepalived]# tar zxf keepalived-1.3.5.tar.gz
[root@server1 keepalived]# ls
keepalived-1.3.5 rhel6 keepalived+lvs.pdf
keepalived-1.3.5.tar.gz sery-lvs-cluster.pdf
[root@server1 keepalived]# cd keepalived-1.3.5
[root@server1 keepalived-1.3.5]# ls
aclocal.m4 ChangeLog CONTRIBUTORS genhash keepalived.spec.in missing
ar-lib compile COPYING INSTALL lib README
AUTHOR configure depcomp install-sh Makefile.am snap
bin_install configure.ac doc keepalived Makefile.in TODO
[root@server1 keepalived-1.3.5]# ./configure --help
[root@server1 keepalived-1.3.5]# ./configure --with-init=SYSV --prefix=/usr/local/keepalived
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
...
[root@server1 keepalived-1.3.5]# make
Making all in lib
make[1]: Entering directory `/root/keepalived/keepalived-1.3.5/lib'
make all-am
make[2]: Entering directory `/root/keepalived/keepalived-1.3.5/lib'
CC memory.o
...
[root@server1 keepalived-1.3.5]# make install
Making install in lib
make[1]: Entering directory `/root/keepalived/keepalived-1.3.5/lib'
make install-am
make[2]: Entering directory `/root/keepalived/keepalived-1.3.5/lib'
make[3]: Entering directory `/root/keepalived/keepalived-1.3.5/lib'
make[3]: Nothing to be done for `install-exec-am'.
...
[root@server1 rc.d]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/ # symlink the init script into place
[root@server1 rc.d]# ln -s /usr/local/keepalived/etc/keepalived/ /etc/
[root@server1 keepalived]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server1 keepalived]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@server1 init.d]# chmod +x /usr/local/keepalived/etc/rc.d/init.d/keepalived
[root@server1 init.d]# /etc/init.d/keepalived start
Starting keepalived: [ OK ]
[root@server1 rc.d]# scp -r /usr/local/keepalived/ 172.25.32.4:/usr/local/ # copy the install tree over to server4
- keepalived configuration:
[root@server1 init.d]# cat /etc/keepalived/keepalived.conf | grep -v "#" | grep -v ";"| grep -v "^$"
# The command above filters out only the effective settings: grep -v (invert match)
# drops every line containing a '#' or ';' comment marker, and grep -v "^$" drops the remaining blank lines.
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
vrrp_skip_check_adv_addr
#vrrp_strict # if left enabled, iptables blocks the VIP when keepalived fails back between server1 and server4, so clients can no longer reach the real-server service
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 {
state MASTER # which node is MASTER and which is BACKUP is decided by the two keepalived nodes' priorities
interface eth0
virtual_router_id 32
priority 100 # priority
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.32.100 # virtual IP (VIP)
}
}
virtual_server 172.25.32.100 80 { # the VIP and the httpd service port
delay_loop 6
lb_algo rr # round-robin scheduling across the real servers
lb_kind DR # LVS forwarding mode (direct routing)
#persistence_timeout 50 # enable this for services that need persistent connections, e.g. FTP
protocol TCP
real_server 172.25.32.2 80 { # the real-server service port (80 for Apache, 21 for FTP)
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 172.25.32.3 80 {
weight 2
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
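What the TCP_CHECK block above boils down to can be sketched in Python (an illustration of the parameters, not keepalived's actual checker; the delay_before_retry pause between attempts is omitted here):

```python
import socket

def tcp_check(host, port, connect_timeout=3, retries=3):
    """Mimic TCP_CHECK: the real server counts as 'up' if a TCP
    connection succeeds within connect_timeout, allowing a few retries."""
    for _ in range(retries):
        try:
            # create_connection performs the full TCP handshake
            with socket.create_connection((host, port), timeout=connect_timeout):
                return True            # connection accepted: RS is alive
        except OSError:
            continue                   # timeout or refused: try again
    return False                       # retry budget exhausted: mark RS down

# e.g. tcp_check("172.25.32.2", 80) would probe the first real server
```

When a check fails, keepalived removes that real server from the ipvs table so no new connections are scheduled to it.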
- server4 is set up the same way as server1; only state and priority differ in its keepalived.conf
[root@server4 rc.d]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/ # symlink the init script into place
[root@server4 rc.d]# ln -s /usr/local/keepalived/etc/keepalived/ /etc/
[root@server4 keepalived]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server4 keepalived]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@server4 init.d]# chmod +x /usr/local/keepalived/etc/rc.d/init.d/keepalived
[root@server4 init.d]# /etc/init.d/keepalived start
Starting keepalived: [ OK ]
[root@server4 keepalived]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
vrrp_skip_check_adv_addr
#vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 32
priority 50 # must stay lower than the MASTER's priority, otherwise the backup would preempt the VIP
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.32.100
}
}
virtual_server 172.25.32.100 80 {
delay_loop 6
lb_algo rr
lb_kind DR
#persistence_timeout 50
protocol TCP
real_server 172.25.32.2 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 172.25.32.3 80 {
weight 2
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@server1 init.d]# /etc/init.d/keepalived start
Starting keepalived: [ OK ]
[root@server1 init.d]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.32.100:http rr
-> server2:http Route 1 0 0
-> server3:http Route 2 0 0
[root@server4 keepalived]# /etc/init.d/keepalived start
Starting keepalived:
[root@server4 keepalived]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.32.100:http rr
-> server2:http Route 1 0 0
-> server3:http Route 2 0 0
The same on server2 and server3:
[root@server3 ~]# /etc/init.d/httpd start
Starting httpd:
[root@server3 ~]# /etc/init.d/arptables_jf start
Flushing all current rules and user defined chains: [ OK ]
Clearing all current rules and user defined chains: [ OK ]
Applying arptables firewall rules: [ OK ]
[root@server2 ~]# /etc/init.d/httpd start
Starting httpd:
[root@server2 ~]# /etc/init.d/arptables_jf start
Flushing all current rules and user defined chains: [ OK ]
Clearing all current rules and user defined chains: [ OK ]
Applying arptables firewall rules: [ OK ]
Test:
[root@server4 keepalived]# for i in {1..15}; do curl 172.25.32.100 ;done
<h2>www.linux.org apache-server3</h2>
<h1>www.linux.org-apache-server2</h1>
<h2>www.linux.org apache-server3</h2>
<h1>www.linux.org-apache-server2</h1>
<h2>www.linux.org apache-server3</h2>
<h1>www.linux.org-apache-server2</h1>
<h2>www.linux.org apache-server3</h2>
<h1>www.linux.org-apache-server2</h1>
<h2>www.linux.org apache-server3</h2>
<h1>www.linux.org-apache-server2</h1>
<h2>www.linux.org apache-server3</h2>
<h1>www.linux.org-apache-server2</h1>
<h2>www.linux.org apache-server3</h2>
<h1>www.linux.org-apache-server2</h1>
<h2>www.linux.org apache-server3</h2>
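Note that the responses alternate strictly 1:1 even though server3 carries weight 2: the rr scheduler ignores weights, and only wrr honors them. A toy comparison (illustrative Python, not the kernel's ipvs schedulers; real wrr interleaves more smoothly but keeps the same per-cycle ratio):

```python
from itertools import cycle, islice

def rr(servers):
    """Plain round-robin: every server gets one turn, weights ignored."""
    return cycle(name for name, _weight in servers)

def wrr_naive(servers):
    """Naive weighted round-robin: each server repeated `weight` times."""
    expanded = [name for name, weight in servers for _ in range(weight)]
    return cycle(expanded)

servers = [("server2", 1), ("server3", 2)]   # weights from the config above

print(list(islice(rr(servers), 6)))
# -> ['server2', 'server3', 'server2', 'server3', 'server2', 'server3']
print(list(islice(wrr_naive(servers), 6)))
# -> ['server2', 'server3', 'server3', 'server2', 'server3', 'server3']
```

So to actually get a 1:2 split as the weights suggest, lb_algo would need to be wrr instead of rr.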
- If only one real server ever gets the requests, reload the ipvsadm service on the host that currently holds the VIP.
[root@server4 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:34:98:fb brd ff:ff:ff:ff:ff:ff
inet 172.25.32.4/24 brd 172.25.32.255 scope global eth0
inet6 fe80::5054:ff:fe34:98fb/64 scope link # the VIP is not on server4
valid_lft forever preferred_lft forever
[root@server1 init.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:95:72:9b brd ff:ff:ff:ff:ff:ff
inet 172.25.32.1/24 brd 172.25.32.255 scope global eth0
inet 172.25.32.100/32 scope global eth0 # the VIP
inet6 fe80::5054:ff:fe95:729b/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether 52:54:00:0a:fd:4b brd ff:ff:ff:ff:ff:ff
- Now stop keepalived on server1: the VIP automatically moves over to server4 (where keepalived is already running).
The master sends the VRRP heartbeat advertisements and the backups listen for them; when the backups notice the master is gone, the backup with the highest priority among them becomes the new master.
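That election rule can be sketched as a toy model (no timers or preemption details; the priorities here are illustrative):

```python
def elect_master(routers):
    """routers: {name: (priority, alive)}. The living router with the
    highest priority holds the VIP; None if nobody is alive."""
    alive = {name: prio for name, (prio, up) in routers.items() if up}
    return max(alive, key=alive.get) if alive else None

# illustrative priorities: the MASTER is given the higher value
routers = {"server1": (100, True), "server4": (50, True)}
print(elect_master(routers))       # -> server1 (holds the VIP)

routers["server1"] = (100, False)  # keepalived stopped on server1
print(elect_master(routers))       # -> server4 (the VIP fails over)
```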
[root@server1 init.d]# /etc/init.d/keepalived stop
Stopping keepalived: [ OK ]
[root@server1 init.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:95:72:9b brd ff:ff:ff:ff:ff:ff
inet 172.25.32.1/24 brd 172.25.32.255 scope global eth0
inet6 fe80::5054:ff:fe95:729b/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether 52:54:00:0a:fd:4b brd ff:ff:ff:ff:ff:ff
[root@server4 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:34:98:fb brd ff:ff:ff:ff:ff:ff
inet 172.25.32.4/24 brd 172.25.32.255 scope global eth0
inet 172.25.32.100/32 scope global eth0 # the VIP has failed over to here
inet6 fe80::5054:ff:fe34:98fb/64 scope link
valid_lft forever preferred_lft forever