In the previous post we achieved load balancing by deploying haproxy on a single minion. In this post we build a highly available load-balancing cluster.
Environment

master: server1
minions:
  server1 (haproxy, keepalived)
  server2 (apache)
  server3 (nginx)
  server4 (haproxy, keepalived)

server1 and server4 run haproxy and keepalived, forming the highly available load-balancing pair.
server2 and server3 act as backend real servers providing the web service.
Procedure

Part 1

1. Configure the yum repository on server4 and install salt-minion, then complete authentication with the master by accepting server4's public key on the master.
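A rough sketch of these commands (the repository is assumed to be configured already; the master hostname server1 matches the environment above):

```bash
# on server4: install the minion and point it at the master
yum install -y salt-minion
echo "master: server1" >> /etc/salt/minion
/etc/init.d/salt-minion start

# on the master (server1): list pending keys and accept server4's
salt-key -L
salt-key -a server4
```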
2. Under /srv/salt, create a pkg directory and write a pre.sls file inside it. This state installs the common dependencies needed when building services from source; later, any source build can simply include this file instead of repeating the package list.
vim /srv/salt/pkg/pre.sls

pre-installed:
  pkg.installed:
    - pkgs:
      - gcc-c++
      - zlib-devel
      - openssl-devel
      - pcre-devel
      - mailx
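The state can be dry-run before anything depends on it (test=True reports what would change without applying anything):

```bash
salt 'server4' state.sls pkg.pre test=True
```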
3. Still under /srv/salt, create a keepalived directory and write the sls files that deploy keepalived there. To keep errors to a minimum, write and test incrementally: push each part as it is written, and continue only if the push succeeds. Start with the installation of keepalived, then add the symlinks.
vim /srv/salt/keepalived/install.sls

include:
  - pkg.pre        # pull in the build dependencies installed earlier

ke-installed:
  file.managed:
    - name: /mnt/keepalived-2.0.6.tar.gz
    - source: salt://keepalived/files/keepalived-2.0.6.tar.gz
  cmd.run:
    - name: cd /mnt && tar zxf keepalived-2.0.6.tar.gz && cd keepalived-2.0.6 && ./configure --prefix=/usr/local/keepalived --with-init=SYSV &> /dev/null && make &> /dev/null && make install &> /dev/null
    - creates: /usr/local/keepalived        # skip the build if this path already exists

/etc/keepalived:
  file.directory:
    - mode: 755

# symlink the config file and the binary into the standard locations
/etc/sysconfig/keepalived:
  file.symlink:
    - target: /usr/local/keepalived/etc/sysconfig/keepalived

/sbin/keepalived:
  file.symlink:
    - target: /usr/local/keepalived/sbin/keepalived
4. Create a files directory inside the keepalived directory to hold the resources the deployment needs (such as the keepalived-2.0.6.tar.gz source tarball referenced above).
5. Push install.sls with salt:
salt 'server4' state.sls keepalived.install
6. On server4, confirm that keepalived was built and installed from source.
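A quick check on server4 (the paths follow from the --prefix and the symlinks set up in install.sls):

```bash
ls /usr/local/keepalived
/sbin/keepalived -v        # prints the keepalived version banner
```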
7. Copy keepalived's configuration file and service init script from server4 into the /srv/salt/keepalived/files directory on server1.
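One way to do the copy from server4 (the source paths are assumptions; check where your build actually placed the files):

```bash
scp /usr/local/keepalived/etc/keepalived/keepalived.conf \
    server1:/srv/salt/keepalived/files/keepalived.conf
scp /etc/init.d/keepalived \
    server1:/srv/salt/keepalived/files/keepalived
```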
8. Write service.sls, which manages the configuration file and starts the service:
vim /srv/salt/keepalived/service.sls

include:
  - keepalived.install

/etc/keepalived/keepalived.conf:
  file.managed:
    - source: salt://keepalived/files/keepalived.conf
    - template: jinja        # render the file with the jinja template engine
    - context:
        STATE: {{ pillar['state'] }}
        VRID: {{ pillar['vrid'] }}
        PRIORITY: {{ pillar['priority'] }}

kp-service:
  file.managed:
    - name: /etc/init.d/keepalived
    - source: salt://keepalived/files/keepalived
    - mode: 755
  service.running:
    - name: keepalived
    - reload: True
    - watch:
      - file: /etc/keepalived/keepalived.conf
9. Because the state combines a jinja template with the pillar system, we now need to create the pillar data:
cd /srv/pillar
mkdir keepalived
cd keepalived
vim install.sls
{% if grains['fqdn'] == 'server1' %}
state: MASTER
vrid: 1
priority: 100
{% elif grains['fqdn'] == 'server4' %}
state: BACKUP
vrid: 1
priority: 50
{% endif %}
10. Back in /srv/pillar, edit top.sls:

cd /srv/pillar
vim top.sls

base:
  '*':
    - web.webserver
    - keepalived.install
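Once the pillar top file is in place, the rendered values can be verified per minion (a sketch; refresh first so the minions pick up the new data):

```bash
salt '*' saltutil.refresh_pillar
salt 'server1' pillar.item state vrid priority
salt 'server4' pillar.item state vrid priority
```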
11. Edit the top.sls file in /srv/salt so the whole highly available load-balancing cluster can be pushed in one step:
vim /srv/salt/top.sls

base:
  'server1':
    - haproxy.service
    - keepalived.service
  'roles:apache':
    - match: grain
    - httpd.apache
  'roles:nginx':
    - match: grain
    - nginx.service
  'server4':
    - haproxy.service
    - keepalived.service
12. Push the top file with salt:
salt '*' state.highstate
13. Test the high availability and load balancing in a browser.
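The same check can be scripted with curl; repeated requests to the VIP should alternate between the apache and nginx backends (the VIP value below is a placeholder, substitute the virtual IP defined in keepalived.conf):

```bash
VIP=172.25.0.100        # placeholder: use your actual virtual IP
for i in $(seq 4); do
    curl -s http://$VIP/
done
```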
Part 2. Testing at this point reveals a problem: if haproxy is stopped on the active node, keepalived does not fail over to the BACKUP, and the whole highly available load balancer goes down. We therefore need a way to avoid this:
1. Write a health-check script on server1:

vim /srv/salt/keepalived/files/check_haproxy.sh

#!/bin/bash
# if haproxy is down, try to restart it; if the restart also fails,
# stop keepalived so that the VIP fails over to the backup node
/etc/init.d/haproxy status &> /dev/null || /etc/init.d/haproxy restart &> /dev/null
[ $? -ne 0 ] && {
    /etc/init.d/keepalived stop &> /dev/null
}
2. Hook the script into the keepalived configuration file (/srv/salt/keepalived/files/keepalived.conf).
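In keepalived this is done with a vrrp_script block referenced from the vrrp_instance via track_script; a minimal sketch (the script path, the interval, and the instance name VI_1 are assumptions — adapt them to the actual configuration):

```
vrrp_script check_haproxy {
    script "/opt/check_haproxy.sh"        # assumed location of the script on the minion
    interval 2                            # run the check every 2 seconds
}

vrrp_instance VI_1 {
    ...
    track_script {
        check_haproxy
    }
}
```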
3. Modify service.sls in the keepalived directory accordingly, so the script is distributed to the minions along with the configuration.
4. Re-push the entry file top.sls:
salt '*' state.highstate
5. Test keepalived's health check of haproxy:
Stop haproxy on server1 and, a few seconds later, observe that haproxy has been restarted automatically.
6. Prevent haproxy from restarting automatically:
Temporarily move the /etc/init.d/haproxy file into /mnt, then stop haproxy and check the status of haproxy and keepalived.
Both haproxy and keepalived are now down.
7. Check whether keepalived failed over: on server4, look for the VIP.
The failover succeeded.
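The VIP shows up as a secondary address on the interface (eth0 is an assumption for the interface name):

```bash
ip addr show eth0        # the virtual IP appears alongside the primary address
```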
8. Restore haproxy and keepalived on server1. Because server1's keepalived priority is higher than server4's, the VIP moves back to server1.