I. How LVS-TUN Mode Works
In NAT mode, every request and every response must pass through the director for address rewriting, so as client traffic grows the director's processing capacity becomes the bottleneck. To solve this, the director can instead forward request packets to the real servers through an IP tunnel, and the real servers return their responses directly to the clients. The director then handles only inbound request traffic; since response data is usually much larger than the requests, switching to VS/TUN can raise a cluster's maximum throughput by as much as a factor of ten.
The VS/TUN workflow differs from NAT mode in that traffic between the load balancer (LB) and a real server (RS) does not involve rewriting IP addresses. Instead, the LB encapsulates the client's request packet inside an IP tunnel and sends it to the RS node. The RS decapsulates the tunnel packet, processes the request, and sends its response to the client through its own public route, without passing back through the LB.
The process in brief:
1) The client sends a request packet with destination address VIP, which arrives at the LB.
2) The LB receives the request and performs IP tunnel encapsulation: it prepends an IP tunnel header to the original packet, then forwards it.
3) The RS node receives the packet and, guided by the IP tunnel header (a logical, invisible tunnel understood only by the LB and the RS), strips the tunnel header to recover the original client request, which it then processes.
4) When the response is ready, the RS sends it to the client over its own outbound public route, with the VIP as the source address. (The RS must therefore have the VIP configured on a local interface — in this lab, tunl0.)
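To see this encapsulation on the wire, you can capture IPIP traffic on a real server. This is a diagnostic sketch for a live cluster; the interface name eth0 is taken from the lab below:

```shell
# On an RS: show tunneled packets arriving from the director.
# IPIP encapsulation uses IP protocol number 4, so filtering on
# "ip proto 4" isolates the director -> RS tunnel traffic.
tcpdump -ni eth0 ip proto 4
```

For each captured packet, tcpdump prints the outer header (DIP to RIP) followed by the inner header (CIP to VIP), which makes the "packet inside a packet" structure visible.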
II. Load Balancing in LVS-TUN Mode
Lab environment:
Load Balance:172.25.16.3
Virtual IP: 172.25.16.100
server1(RS): 172.25.16.1
server2(RS): 172.25.16.2
1. First, tear down the existing DR setup
[root@vm3 local]# /etc/init.d/keepalived stop
[root@vm4 local]# /etc/init.d/keepalived stop
[root@vm3 local]# ipvsadm -C ## clear all existing virtual-server rules
[root@vm3 local]# ipvsadm -l ## confirm the table is now empty
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
2. Load the tunnel module
[Director vm3]
[root@vm3 local]# modprobe ipip
[root@vm3 local]# ip addr add 172.25.16.100/24 dev tunl0
[root@vm3 local]# ip link set up tunl0
[root@vm3 local]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:10:67:2a brd ff:ff:ff:ff:ff:ff
inet 172.25.16.3/24 brd 172.25.16.255 scope global eth0
inet6 fe80::5054:ff:fe10:672a/64 scope link
valid_lft forever preferred_lft forever
3: tunl0: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN ### created after loading the ipip module
link/ipip 0.0.0.0 brd 0.0.0.0
inet 172.25.16.100/24 scope global tunl0
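The tunl0 MTU of 1480 in the output above is not arbitrary: IPIP prepends one extra 20-byte IPv4 header to every packet, so the tunnel MTU is the physical MTU minus that overhead:

```shell
# IPIP adds one outer IPv4 header (20 bytes, no options) per packet,
# so tunl0's MTU is eth0's MTU minus that overhead.
ETH0_MTU=1500
IPIP_HEADER=20
echo $((ETH0_MTU - IPIP_HEADER))   # 1480, matching tunl0 above
```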
Configure the yum repository and install the [LoadBalancer] package group
[root@vm3 ~]# vim /etc/yum.repos.d/rhel-source.repo
[LoadBalancer]
name = LoadBalancer
baseurl=http://172.25.16.250/rhel6.5/LoadBalancer
gpgcheck=0
[root@vm3 ~]# yum repolist
3. Configure round-robin scheduling
[root@vm3 local]# ipvsadm -A -t 172.25.16.100:80 -s rr
[root@vm3 local]# ipvsadm -a -t 172.25.16.100:80 -r 172.25.16.1:80 -i # add an RS (RIP) behind the VIP; -i selects TUN mode
[root@vm3 local]# ipvsadm -a -t 172.25.16.100:80 -r 172.25.16.2:80 -i
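Steps 2 and 3 on the director can be collected into one script. This is a sketch using the addresses and device names from this lab, to be run as root:

```shell
#!/bin/bash
# Director (vm3): put the VIP on the ipip tunnel device and create a
# round-robin virtual service with two TUN-mode real servers.
set -e

VIP=172.25.16.100
RS1=172.25.16.1
RS2=172.25.16.2

modprobe ipip                                         # creates tunl0
ip addr add "$VIP/24" dev tunl0 2>/dev/null || true   # ignore "already assigned"
ip link set up tunl0

ipvsadm -C                                 # start from an empty rule table
ipvsadm -A -t "$VIP:80" -s rr              # virtual service, round-robin scheduler
ipvsadm -a -t "$VIP:80" -r "$RS1:80" -i    # -i = TUN (IP tunneling) forwarding
ipvsadm -a -t "$VIP:80" -r "$RS2:80" -i
```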
4. Real server vm2
[root@vm2 html]# modprobe ipip
[root@vm2 html]# ip addr del 172.25.16.100/32 dev eth0 ### remove the VIP left over from DR mode
[root@vm2 html]# ip addr add 172.25.16.100/32 dev tunl0 ### add the VIP for TUN mode
[root@vm2 html]# ip link set up tunl0 ## bring the interface up
[root@vm2 html]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:fc:12:0c brd ff:ff:ff:ff:ff:ff
inet 172.25.16.2/24 brd 172.25.16.255 scope global eth0
inet6 fe80::5054:ff:fefc:120c/64 scope link
valid_lft forever preferred_lft forever
3: tunl0: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN
link/ipip 0.0.0.0 brd 0.0.0.0
inet 172.25.16.100/32 scope global tunl0 ### tunnel-mode VIP configured successfully
[root@vm2 html]# sysctl -a | grep rp_filter ### list the rp_filter settings; all must be 0 so inbound packets skip the reverse-path check
[root@vm2 html]# sysctl -w net.ipv4.conf.default.rp_filter=0
[root@vm2 html]# sysctl -w net.ipv4.conf.lo.rp_filter=0
[root@vm2 html]# sysctl -w net.ipv4.conf.eth0.rp_filter=0
[root@vm2 html]# sysctl -w net.ipv4.conf.tunl0.rp_filter=0
[root@vm2 html]# sysctl -p
[root@vm2 html]# vim /etc/sysctl.conf
net.ipv4.conf.default.rp_filter = 0
[root@vm2 html]# sysctl -p ## reload
5. Real server vm1
[root@vm1 html]# modprobe ipip
[root@vm1 html]# ip addr del 172.25.16.100/32 dev eth0
[root@vm1 html]# ip addr add 172.25.16.100/32 dev tunl0
[root@vm1 html]# ip link set up tunl0
[root@vm1 html]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:1d:a8:1c brd ff:ff:ff:ff:ff:ff
inet 172.25.16.1/24 brd 172.25.16.255 scope global eth0
inet6 fe80::5054:ff:fe1d:a81c/64 scope link
valid_lft forever preferred_lft forever
3: tunl0: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN
link/ipip 0.0.0.0 brd 0.0.0.0
inet 172.25.16.100/32 scope global tunl0
[root@vm1 html]# sysctl -a | grep rp_filter ## set all of these to 0
[root@vm1 html]# sysctl -w net.ipv4.conf.default.rp_filter=0
[root@vm1 html]# sysctl -w net.ipv4.conf.lo.rp_filter=0
[root@vm1 html]# sysctl -w net.ipv4.conf.eth0.rp_filter=0
[root@vm1 html]# sysctl -w net.ipv4.conf.tunl0.rp_filter=0
[root@vm1 html]# sysctl -p
[root@vm1 html]# vim /etc/sysctl.conf
net.ipv4.conf.default.rp_filter = 0
[root@vm1 html]# sysctl -p
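The real-server steps are identical on vm1 and vm2, so they can be scripted once and run on each RS. A sketch using this lab's addresses, to be run as root:

```shell
#!/bin/bash
# Real server: accept tunneled packets for the VIP and answer clients directly.
set -e
VIP=172.25.16.100

modprobe ipip
ip addr del "$VIP/32" dev eth0 2>/dev/null || true   # drop leftover DR-mode VIP
ip addr add "$VIP/32" dev tunl0 2>/dev/null || true
ip link set up tunl0

# Disable reverse-path filtering: requests for the VIP arrive on tunl0,
# but replies leave via eth0, which strict rp_filter would reject.
for i in default lo eth0 tunl0; do
    sysctl -w "net.ipv4.conf.$i.rp_filter=0"
done
```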
Verification:
[root@vm3 local]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.16.100:http rr
-> server1:http Tunnel 1 0 0
-> server2:http Tunnel 1 0 0
[root@foundation16 Desktop]# curl 172.25.16.100
vm2
MAC address: 52:54:00:fc:12:0c
[root@foundation16 Desktop]# curl 172.25.16.100
vm1
MAC address: 52:54:00:1d:a8:1c
[root@foundation16 Desktop]# curl 172.25.16.100
vm2
MAC address: 52:54:00:fc:12:0c
[root@vm3 local]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.16.100:http rr
-> server1:http Tunnel 1 0 1
-> server2:http Tunnel 1 0 2
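As a quick sanity check, the `ipvsadm -l` output can also be parsed programmatically. The sample text below is copied from the table shown above:

```shell
# Extract each real server, its forwarding method, and its weight from a
# saved copy of `ipvsadm -l` output (sample taken from the run above).
ipvs_table=' -> server1:http Tunnel 1 0 1
 -> server2:http Tunnel 1 0 2'

echo "$ipvs_table" | awk '$1 == "->" { print $2, $3, "weight=" $4 }'
# server1:http Tunnel weight=1
# server2:http Tunnel weight=1
```

Both real servers showing `Tunnel` as the forwarding method, with nonzero InActConn counts, confirms that TUN-mode round-robin is working.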
III. Which Mode Is Better — Stress Testing
[root@foundation16 Desktop]# scp /home/kiosk/Desktop/lvs/webbench-1.5.tar.gz.1 root@172.25.16.3:/root
[root@foundation16 Desktop]# tar zxfv webbench-1.5.tar.gz.1
[root@foundation16 Desktop]# cd webbench-1.5/
[root@foundation16 webbench-1.5]# mkdir -p /usr/local/man/man1
[root@foundation16 webbench-1.5]# yum install gcc -y
[root@foundation16 webbench-1.5]# yum install ctags -y
[root@foundation16 webbench-1.5]# make
[root@foundation16 webbench-1.5]# make install
Stress test:
[root@foundation16 webbench-1.5]# webbench -c 20 -t 10 http://172.25.16.100/index.html ## -c: 20 concurrent clients; -t: run for 10 s
Webbench - Simple Web Benchmark 1.5
Copyright (c) Radim Kolar 1997-2004, GPL Open Source Software.
Benchmarking: GET http://172.25.16.100/index.html
20 clients, running 10 sec.
Speed=156000 pages/min, 785200 bytes/sec.
Requests: 26000 susceed, 0 failed.
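Webbench's two summary lines are consistent with each other: 26000 successful requests over the 10-second run works out to exactly the reported pages/min figure:

```shell
# 26000 requests in 10 s, converted to requests (pages) per minute
REQUESTS=26000
DURATION=10
echo $((REQUESTS * 60 / DURATION))   # 156000, matching "Speed=156000 pages/min"
```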
[root@vm3 ~]# vmstat 1 20 ### monitor the director under load: sample every 1 s, 20 samples