Preface
Most readers will be familiar with LVS; this excellent piece of open-source software has practically become synonymous with IP load balancing. In real production environments, however, an LVS director under heavy load easily becomes a bottleneck. The bottlenecks include the limits of the ipvs kernel module, CPU soft interrupts, NIC performance, and so on. All of these can be tuned, and LVS tuning will be covered in detail in a separate article, "LVS调优攻略" (LVS Tuning Guide). Back to the topic: when a single LVS director inevitably hits its performance ceiling, what can be done? This article describes how to scale LVS directors horizontally.
Architecture Diagram
As the routing table of the Layer 3 device in the diagram above shows, the VIP 183.60.153.100 has three next-hop addresses, which are the addresses of the three LVS directors. This achieves the desired flow: client ------> VIP ------> three LVS directors ------> distributed to multiple RealServers.
Architecture Advantages
1. LVS directors can be scaled out freely (up to 8, limited by the number of equal-cost routes the Layer 3 device allows)
2. All LVS director resources are utilized (All Active); there is no idle standby machine
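The Layer 3 device spreads traffic across the equal-cost next hops by hashing each flow. The sketch below illustrates the idea; the hash function and helper names are my own assumptions for illustration, not the switch's actual ECMP algorithm:

```python
# Illustrative per-flow ECMP: hash a flow's addresses to pick one of the
# equal-cost next hops, so many clients spread across the three directors
# while each individual client sticks to one director.
import hashlib

NEXT_HOPS = ["192.168.0.2", "192.168.2.2", "192.168.3.2"]  # the three LVS directors

def pick_next_hop(src_ip, dst_ip="183.60.153.100"):
    """Deterministically map a flow to one of the equal-cost next hops."""
    key = f"{src_ip}->{dst_ip}".encode()
    digest = hashlib.md5(key).digest()
    return NEXT_HOPS[digest[0] % len(NEXT_HOPS)]

# Many distinct clients end up roughly evenly spread over the directors.
counts = {hop: 0 for hop in NEXT_HOPS}
for i in range(3000):
    counts[pick_next_hop(f"10.0.{i // 250}.{i % 250}")] += 1
print(counts)
```

Because the choice is per-flow, a given client keeps hitting the same director, which is what lets each director maintain its own connection state independently.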
Deployment Steps
1. Hardware preparation
Layer 3 device: this article uses an H3C 5800 Layer 3 switch
Three LVS directors: 192.168.0.2 192.168.2.2 192.168.3.2
Three RealServers: 183.60.153.101 183.60.153.102 183.60.153.103
2. OSPF configuration on the Layer 3 device
#Find the ports on the Layer 3 switch that connect to the LVS directors; in this article they are g1/0/2, g1/0/3 and g1/0/6

#Change g1/0/2 into a routed (Layer 3) port and assign it an IP address

interface GigabitEthernet1/0/2
 port link-mode route
 ip address 192.168.0.1 255.255.255.0

#Configure the OSPF timers: timer hello is the interval between hello packets, timer dead is the dead interval. The defaults are 10 and 40 seconds.

#Hello packets maintain OSPF neighbor relationships. Here one is sent every second; if none is received within 4 seconds, the neighbor is considered lost and the routes are updated
 ospf timer hello 1
 ospf timer dead 4
 ospf dr-priority 100

#Configure g1/0/3 and g1/0/6 in the same way

interface GigabitEthernet1/0/3
 port link-mode route
 ip address 192.168.3.1 255.255.255.0
 ospf timer hello 1
 ospf timer dead 4
 ospf dr-priority 99

interface GigabitEthernet1/0/6
 port link-mode route
 ip address 192.168.2.1 255.255.255.0
 ospf timer hello 1
 ospf timer dead 4
 ospf dr-priority 98

#Configure the OSPF process
ospf 1
 area 0.0.0.0
  network 192.168.0.0 0.0.0.255
  network 192.168.3.0 0.0.0.255
  network 192.168.2.0 0.0.0.255
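Note that the `network` statements above use OSPF wildcard masks (0.0.0.255 matches a /24), which are the bitwise inverse of the ordinary netmask. The conversion can be sketched like this (helper names are my own, for illustration):

```python
# Convert between a dotted netmask or prefix length and the OSPF wildcard
# mask used in `network` statements: the wildcard is the inverse of the mask.
def netmask_to_wildcard(netmask: str) -> str:
    return ".".join(str(255 - int(octet)) for octet in netmask.split("."))

def prefix_to_wildcard(prefix_len: int) -> str:
    mask = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    return ".".join(str(255 - ((mask >> shift) & 0xFF)) for shift in (24, 16, 8, 0))

print(netmask_to_wildcard("255.255.255.0"))  # → 0.0.0.255, as in the config above
print(prefix_to_wildcard(32))                # → 0.0.0.0, a /32 matches one host
```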
3. OSPF configuration on the LVS directors
a. Install the software routing suite quagga
yum -y install quagga
b. Configure zebra.conf
vim /etc/quagga/zebra.conf
hostname lvs-route-1
password xxxxxx
enable password xxxxxx

log file /var/log/zebra.log
service password-encryption
c. Configure ospfd.conf
vim /etc/quagga/ospfd.conf
#The OSPF configuration is similar to the Layer 3 device above; note that the VIP (183.60.153.100) must be advertised. The router-id must be unique per router, so each director uses its own address

log file /var/log/ospf.log
log stdout
log syslog
interface eth0
 ip ospf hello-interval 1
 ip ospf dead-interval 4
router ospf
 ospf router-id 192.168.0.2
 log-adjacency-changes
 auto-cost reference-bandwidth 1000
 network 183.60.153.100/32 area 0.0.0.0
 network 192.168.0.0/24 area 0.0.0.0
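The hello/dead timers above (1s/4s versus the 10s/40s defaults) bound how long a silently failed director keeps receiving traffic. A back-of-the-envelope model (a rough estimate, not a measured value):

```python
# Worst-case blackhole window after a silent director failure: the switch
# keeps hashing some flows to the dead director until the OSPF dead interval
# expires and its equal-cost route is withdrawn.
hello_interval = 1    # from: ip ospf hello-interval 1
dead_interval = 4     # from: ip ospf dead-interval 4
default_hello, default_dead = 10, 40  # OSPF protocol defaults

missed_hellos = dead_interval // hello_interval
print(f"{missed_hellos} consecutive hellos missed -> neighbor declared down")
print(f"worst case ~{dead_interval}s of lost flows (vs ~{default_dead}s with defaults)")
```

Pushing the timers much lower trades faster failover against more protocol chatter and a higher risk of false neighbor-down events under load.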
d. Enable IP forwarding
sed -i '/net.ipv4.ip_forward/d' /etc/sysctl.conf
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
e. Start the services
/etc/init.d/zebra start
/etc/init.d/ospfd start
chkconfig zebra on
chkconfig ospfd on
4. LVS keepalived configuration
In this architecture, LVS can only be configured in DR mode. To use NAT mode, my idea would be to set up OSPF between the LVS directors and the internal Layer 3 device in the same way as above; this has not been verified, so please let me know if you have another solution.
a. Modify the configuration file keepalived.conf. In this cluster architecture, all directors use the same configuration file.
vim /etc/keepalived/keepalived.conf
#keepalived global configuration
global_defs {
    notification_email {
        lxcong@gmail.com
    }
    notification_email_from lvs_notice@gmail.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id Ospf_LVS_1
}

#VRRP instance; in this architecture every LVS director is configured as MASTER

vrrp_instance LVS_Cluster {        # create an instance named LVS_Cluster
    state MASTER                   # every director is MASTER in this architecture
    interface eth0                 # NIC the VIP is bound to
    virtual_router_id 100          # must be unique within the same network
    priority 100                   # election priority; the highest becomes MASTER
    advert_int 1                   # interval between VRRP advertisements, in seconds
    authentication {               # authentication; can be PASS or AH
        auth_type PASS
        auth_pass 08856CD8
    }
    virtual_ipaddress {
        183.60.153.100
    }
}

#LVS instance; this article uses DR mode with WRR scheduling. In this architecture only DR mode can be used

virtual_server 183.60.153.100 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    persistence_timeout 60
    protocol TCP

    real_server 183.60.153.101 80 {
        weight 1                   # weight
        inhibit_on_failure         # on failure set weight to 0 (default is removal from the list)
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 183.60.153.102 80 {
        weight 1
        inhibit_on_failure
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 183.60.153.103 80 {
        weight 1
        inhibit_on_failure
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
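The wrr (weighted round-robin) scheduler configured above distributes connections in proportion to each real server's weight. A simplified model of the idea (an illustration, not the actual ipvs kernel scheduler):

```python
# Simplified weighted round-robin, modeling lb_algo wrr from the config above.
import itertools

def wrr_schedule(servers, n):
    """servers: list of (address, weight) pairs; returns n picks in weight proportion."""
    # Expand each server by its weight, then cycle through the expanded list.
    expanded = [addr for addr, weight in servers for _ in range(weight)]
    return [addr for addr, _ in zip(itertools.cycle(expanded), range(n))]

servers = [("183.60.153.101", 1), ("183.60.153.102", 1), ("183.60.153.103", 1)]
print(wrr_schedule(servers, 6))
# equal weights -> plain round-robin over the three real servers
```

With all weights equal to 1, as in this article, wrr degenerates into plain round-robin; raising one server's weight would send it proportionally more connections.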
b. Start keepalived
/etc/init.d/keepalived start
chkconfig keepalived on
5. RealServer configuration
a. Add the init script /etc/init.d/lvs_realserver
Modify the SNS_VIP variable in the script to suit your environment.
#!/bin/bash
### BEGIN INIT INFO
# Provides: lvs_realserver
# Default-Start: 3 4 5
# Default-Stop: 0 1 6
# Short-Description: LVS real_server service scripts
# Description: LVS real_server start and stop controller
### END INIT INFO
# Copyright 2013 kisops.com
#
# chkconfig: - 20 80
#
# Author: k_ops_yw@ijinshan.com

# Multiple VIPs are separated by spaces
SNS_VIP="183.60.153.100"
. /etc/rc.d/init.d/functions
if [[ -z "$SNS_VIP" ]]; then
    echo "Please set vips in $0 with SNS_VIP!"
    exit 1
fi

start() {
    num=0
    for loop in $SNS_VIP
    do
        # Bind each VIP to its own loopback alias so this real server
        # accepts DR-mode packets addressed to the VIP
        /sbin/ifconfig lo:$num $loop netmask 255.255.255.255 broadcast $loop
        /sbin/route add -host $loop dev lo:$num
        ((num++))
    done
    # Suppress ARP replies/announcements for the VIP so that only the
    # directors answer ARP requests for it
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -e -p >/dev/null 2>&1
}

stop() {
    num=0
    for loop in $SNS_VIP
    do
        /sbin/ifconfig lo:$num down
        /sbin/route del -host $loop >/dev/null 2>&1
        ((num++))
    done
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -e -p >/dev/null 2>&1
}

case "$1" in
start)
    start
    echo "RealServer Start OK"
    ;;
stop)
    stop
    echo "RealServer Stopped"
    ;;
restart)
    stop
    start
    ;;
*)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
esac
exit 0
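The start loop above assigns each space-separated VIP its own loopback alias (lo:0, lo:1, ...) with a /32 netmask. The mapping it produces can be sketched as follows (the helper name is my own, for illustration):

```python
# Mirror of the init script's start() loop: each VIP in the space-separated
# SNS_VIP list gets its own loopback alias with a host (/32) netmask, so the
# real server owns the VIP locally without answering ARP for it.
def vip_aliases(sns_vip: str):
    return [(f"lo:{i}", vip, "255.255.255.255")
            for i, vip in enumerate(sns_vip.split())]

print(vip_aliases("183.60.153.100"))
# → [('lo:0', '183.60.153.100', '255.255.255.255')]
```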
b. Start the service
service lvs_realserver start
chkconfig lvs_realserver on
Summary
At this point the LVS Cluster architecture is fully deployed. If you know other, better ways to scale LVS, please leave a comment or contact me (QQ: 83766787) so we can exchange ideas. I also built an LVS management platform a while ago, but it has never turned out well; if you have experience developing such platforms, please get in touch.
Source: http://my.oschina.net/lxcong/blog/143904