The SaltStack master and minion setup follows the previous article, and the directories used below were created there as well.
I. Grains: static data
Grains is one of SaltStack's most important components. It records a minion's static information; through grains, the master can read a minion's values such as CPU, memory, disk, and network. Grains are reported to the master automatically each time the minion starts, so once this static information changes you must restart the minion or resync the grains. Beyond the built-ins, we can also define custom grains, in three ways: 1) in the minion configuration file; 2) via the grains file; 3) via a Python script.
1. Using grains to retrieve information
server1 (master):
Query server2's IPv4 addresses:
[root@server1 haproxy]# salt server2 grains.item ipv4
Query server2's OS type:
[root@server1 haproxy]# salt server2 grains.item os
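Conceptually, `grains.item` just picks the requested keys out of the minion's full grains dictionary. A minimal sketch of that behavior (the sample grain values below are invented for illustration):

```python
# Hypothetical sketch of what grains.item does: select requested keys
# from the minion's full grains dictionary. Sample values are invented.
SAMPLE_GRAINS = {
    "os": "RedHat",
    "ipv4": ["127.0.0.1", "172.25.10.2"],
    "mem_total": 996,
}

def grains_item(grains, *keys):
    """Return a dict holding only the requested grain keys."""
    return {k: grains.get(k, "") for k in keys}

print(grains_item(SAMPLE_GRAINS, "os"))    # {'os': 'RedHat'}
print(grains_item(SAMPLE_GRAINS, "ipv4"))
```

As with the real module, a key the minion does not have comes back empty rather than raising an error.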
2. Custom grains
Method 1: define grains in the minion configuration
Searching for grains in the minion's /etc/salt/minion shows a commented example. For easier configuration management we normally would not edit that file itself, but instead create a separate grains.conf under the included directory /etc/salt/minion.d.
Configuring via the minion file:
server2 (minion):
Edit the minion configuration file:
[root@server2 conf]# vim /etc/salt/minion
Uncomment (or add) the grains block:
120 grains:
121   roles:
122     - apache
Restart the service:
[root@server2 conf]# /etc/init.d/salt-minion restart
[root@server1 haproxy]# salt '*' grains.item roles
server2:
----------
roles:
- apache
Configuring via a dedicated grains file:
server3 (minion):
Create the file:
[root@server3 local]# vim /etc/salt/grains
roles: nginx
Querying immediately shows no value:
[root@server1 haproxy]# salt '*' grains.item roles
server3:
----------
roles:
Sync the grains:
[root@server1 haproxy]# salt server3 saltutil.sync_grains
Now the value is shown:
[root@server1 haproxy]# salt '*' grains.item roles
server3:
----------
roles:
nginx
Edit the top file:
[root@server1 haproxy]# cd /srv/salt/
[root@server1 salt]# vim top.sls
base:
  'server1':
    - haproxy.install
  'roles:apache':
    - match: grain
    - httpd.install
  'roles:nginx':
    - match: grain
    - nginx.service
Apply the highstate:
[root@server1 salt]# salt '*' state.highstate
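The `- match: grain` lines switch the top-file targeting from minion IDs to grain values. A hedged sketch of that matching logic (minion names and grain values below are illustrative):

```python
# Simplified model of grain targeting: a minion matches 'roles:apache'
# when its roles grain equals, or is a list containing, "apache".
MINIONS = {
    "server2": {"roles": ["apache"]},
    "server3": {"roles": "nginx"},
}

def match_grain(grains, expr):
    key, _, want = expr.partition(":")
    value = grains.get(key)
    if isinstance(value, list):
        return want in value
    return value == want

def target(minions, expr):
    return sorted(m for m, g in minions.items() if match_grain(g, expr))

print(target(MINIONS, "roles:apache"))  # ['server2']
print(target(MINIONS, "roles:nginx"))   # ['server3']
```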
Method 2: generate grains from a custom Python script
server1:
Create the directory:
[root@server1 salt]# mkdir _grains
[root@server1 salt]# cd _grains/
Write a Python module:
[root@server1 _grains]# vim my_grains.py
#!/usr/bin/env python
def my_grains():
    grains = {}
    grains['hello'] = 'world'
    grains['salt'] = 'stack'
    return grains
Sync to the minion:
[root@server1 _grains]# salt server2 saltutil.sync_grains
server2:
- grains.my_grains
Inspect the minion cache tree:
[root@server2 salt]# cd /var/cache/salt/
[root@server2 salt]# tree minion
The synced module is compiled into a .pyc inside the minion cache.
Query the new grains:
[root@server1 _grains]# salt '*' grains.item hello
[root@server1 _grains]# salt '*' grains.item salt
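After the sync, Salt calls every public function in the custom _grains module and merges the returned dictionaries into the minion's grains. A simplified model of that step (the loader function below is an illustration, not Salt's real code):

```python
# The custom grains module from above, plus a toy loader that mimics how
# Salt merges the dict returned by each grains function into the grains.
def my_grains():
    grains = {}
    grains['hello'] = 'world'
    grains['salt'] = 'stack'
    return grains

def load_custom_grains(*funcs):
    merged = {}
    for func in funcs:
        merged.update(func() or {})
    return merged

custom = load_custom_grains(my_grains)
print(custom)  # {'hello': 'world', 'salt': 'stack'}
```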
II. Pillar: dynamic data
Compared with the static grains, pillar allows far more flexible parameters; used skillfully, pillar unlocks much of SaltStack's power.
server1 (master):
[root@server1 _grains]# vim /etc/salt/master
Uncomment:
694 pillar_roots:
695   base:
696     - /srv/pillar
Restart the service:
[root@server1 pillar]# /etc/init.d/salt-master restart
Create the directories:
[root@server1 _grains]# mkdir /srv/pillar
[root@server1 _grains]# cd /srv/pillar/
[root@server1 pillar]# mkdir web
[root@server1 pillar]# cd web/
Create the pillar file:
[root@server1 web]# vim install.sls
{% if grains['fqdn'] == 'server2' %}
webserver: httpd
{% elif grains['fqdn'] == 'server3' %}
webserver: nginx
{% endif %}
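The Jinja conditional above assigns each minion a pillar value based on its fqdn grain. The same decision restated as plain Python, for clarity (hostnames as in the article):

```python
# Python restatement of the if/elif in web/install.sls: map the fqdn
# grain to the pillar data this minion should receive.
def webserver_for(fqdn):
    if fqdn == "server2":
        return {"webserver": "httpd"}
    elif fqdn == "server3":
        return {"webserver": "nginx"}
    return {}  # minions matching neither branch get no webserver key

print(webserver_for("server2"))  # {'webserver': 'httpd'}
print(webserver_for("server9"))  # {}
```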
[root@server1 web]# cd ..
[root@server1 pillar]# vim top.sls
base:
  '*':
    - web.install
Refresh the pillar data:
[root@server1 pillar]# salt '*' saltutil.refresh_pillar
Inspect pillar data and test targeting:
[root@server1 pillar]# salt 'server2' pillar.items
[root@server1 pillar]# salt 'server3' pillar.items
[root@server1 pillar]# salt '*' pillar.items
[root@server1 pillar]# salt -G 'roles:apache' test.ping
[root@server1 pillar]# salt -G 'roles:nginx' test.ping
[root@server1 pillar]# salt -I 'webserver:httpd' test.ping
[root@server1 pillar]# salt -I 'webserver:nginx' test.ping
[root@server1 pillar]# salt -S '172.25.10.0/24' test.ping
III. Jinja templates
Jinja is a Python-based template engine. In SaltStack the yaml_jinja renderer turns templates into concrete configuration files, so one template can cover different operating systems or situations.
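To get a feel for what the renderer does, here is a toy substitute for the `{{ name }}` placeholder syntax. Real Jinja supports far more (conditionals, loops, filters); this regex-based sketch is for illustration only:

```python
import re

def render(template, context):
    # Replace every {{ name }} placeholder with the value from context.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(context[m.group(1)]),
        template,
    )

line = render("Listen {{ bind }}:{{ port }}",
              {"bind": "172.25.10.2", "port": 8080})
print(line)  # Listen 172.25.10.2:8080
```

This is exactly the substitution that happens to the `Listen` line in the httpd template below, with the context supplied from pillar.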
Use a Jinja template from the master to manage the minion's httpd configuration file:
server1 (master):
Edit the httpd install file (adding a Jinja template):
[root@server1 httpd]# vim install.sls
    - template: jinja
    - context:
        bind: {{ pillar['bind'] }}
        port: {{ pillar['port'] }}
Edit the pillar file:
[root@server1 web]# vim install.sls
[root@server1 web]# pwd
/srv/pillar/web
{% if grains['fqdn'] == 'server2' %}
webserver: httpd
bind: 172.25.10.2
port: 8080
Edit the httpd configuration template:
[root@server1 web]# cd /srv/salt/httpd/
[root@server1 httpd]# vim files/httpd.conf
Listen {{ bind }}:{{ port }}
Apply the state:
[root@server1 httpd]# salt server2 state.sls httpd.install
On server2 (minion), check /etc/httpd/conf/httpd.conf to confirm the parameters were rewritten.
IV. One-click deployment of highly available HAProxy
This part uses both Jinja templates and the pillar component.
Environment: server1 (master), server4 (minion).
1. Configure the yum repository that provides HAProxy
server1 and server4:
[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.10.250/rhel6.5/LoadBalancer
gpgcheck=0
2. Install and configure keepalived in one step
server1 (master):
Create the directories:
[root@server1 ~]# cd /srv/salt/
[root@server1 salt]# mkdir keepalived
[root@server1 salt]# cd keepalived
[root@server1 keepalived]# mkdir files ##put the keepalived-2.0.6.tar.gz source tarball into this directory
[root@server1 keepalived]# cd ..
Write the sls file that installs keepalived:
[root@server1 keepalived]# vim install.sls
include:                ##pull in the build-dependency state
  - pkgs.make

kp-install:
  file.managed:         ##file module: push the source tarball
    - name: /mnt/keepalived-2.0.6.tar.gz
    - source: salt://keepalived/files/keepalived-2.0.6.tar.gz
  cmd.run:              ##compile and install
    - name: cd /mnt && tar zxf keepalived-2.0.6.tar.gz && cd keepalived-2.0.6 && ./configure --prefix=/usr/local/keepalived --with-init=SYSV &> /dev/null && make &> /dev/null && make install &> /dev/null
    - creates: /usr/local/keepalived

/etc/keepalived:
  file.directory:
    - mode: 755

/etc/sysconfig/keepalived:
  file.symlink:         ##create the symlink
    - target: /usr/local/keepalived/etc/sysconfig/keepalived

/sbin/keepalived:
  file.symlink:
    - target: /usr/local/keepalived/sbin/keepalived
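The `creates:` argument is what keeps this state idempotent: the long compile pipeline runs only while /usr/local/keepalived does not yet exist. A sketch of that guard (the helper below is an illustration, not Salt's implementation):

```python
import os
import tempfile

def cmd_run(command, creates):
    # Mimic cmd.run's creates guard: skip the command if the path exists.
    if os.path.exists(creates):
        return "skipped"
    command()
    return "ran"

with tempfile.TemporaryDirectory() as tmp:
    marker = os.path.join(tmp, "keepalived")
    install = lambda: open(marker, "w").close()  # stands in for make install
    first = cmd_run(install, creates=marker)
    second = cmd_run(install, creates=marker)

print(first, second)  # ran skipped
```

On a rerun of the state, the tarball is still re-pushed by file.managed, but the compile step is skipped.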
Apply the state:
[root@server1 keepalived]# salt server4 state.sls keepalived.install
Verify on server4 (minion):
Check the running processes with ps ax.
The tarball is present in /mnt:
[root@server4 ~]# cd /mnt
[root@server4 mnt]# ls
keepalived-2.0.6.tar.gz
Check the symlinks:
[root@server4 sbin]# ll /etc/sysconfig/keepalived
[root@server4 sbin]# ll /sbin/keepalived
Check the install tree under /usr/local/keepalived.
3. File management
With keepalived installed on server4, copy its init script and configuration file back to server1:
server4:
[root@server4 ~]# cd /etc/rc.d/init.d/
[root@server4 init.d]# scp keepalived server1:/srv/salt/keepalived/files
[root@server4 init.d]# cd /etc/keepalived
[root@server4 keepalived]# scp keepalived.conf server1:/srv/salt/keepalived/files
Verify on server1:
[root@server1 keepalived]# cd files/
[root@server1 files]# ls
keepalived keepalived-2.0.6.tar.gz keepalived.conf
4. Write the state that configures keepalived for HAProxy high availability
server1:
Create the service sls:
[root@server1 keepalived]# vim service.sls
include:                 ##pull in the keepalived install state
  - keepalived.install

/etc/keepalived/keepalived.conf:
  file.managed:
    - source: salt://keepalived/files/keepalived.conf
    - template: jinja    ##render with Jinja
    - context:           ##values taken from pillar
        STATE: {{ pillar['state'] }}         ##MASTER or BACKUP
        VRID: {{ pillar['vrid'] }}           ##virtual router ID
        PRIORITY: {{ pillar['priority'] }}   ##priority

ke-service:
  file.managed:
    - name: /etc/init.d/keepalived
    - source: salt://keepalived/files/keepalived
    - mode: 755
  service.running:       ##keep keepalived running
    - name: keepalived
    - reload: True
    - watch:
      - file: /etc/keepalived/keepalived.conf
Create a keepalived directory under the pillar root:
[root@server1 keepalived]# cd /srv/pillar/
[root@server1 pillar]# mkdir keepalived
[root@server1 pillar]# cd keepalived/
Write the pillar parameter file:
[root@server1 keepalived]# cp ../web/install.sls . ##the pillar tree already defines parameters; copy the file here and edit it, to avoid conflicts
[root@server1 keepalived]# vim install.sls
{% if grains['fqdn'] == 'server1' %}
state: MASTER
vrid: 10
priority: 100
{% elif grains['fqdn'] == 'server4' %}
state: BACKUP
vrid: 10
priority: 50
{% endif %}
Edit the pillar top file:
[root@server1 keepalived]# cd ..
[root@server1 pillar]# vim top.sls
base:
  '*':
    - web.install
    - keepalived.install
Edit the keepalived configuration template:
[root@server1 pillar]# cd ..
[root@server1 srv]# cd salt/
[root@server1 salt]# cd keepalived/
[root@server1 keepalived]# vim files/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost                              ##notification recipient
   }
   notification_email_from keepalived@localhost  ##sender address
   smtp_server 127.0.0.1                         ##local SMTP server
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict                                  ##disabled
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

##the values we need are injected from pillar below
vrrp_instance VI_1 {
    state {{ STATE }}               ##MASTER or BACKUP
    interface eth0
    virtual_router_id {{ VRID }}    ##virtual router ID
    priority {{ PRIORITY }}         ##priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.10.100               ##virtual IP
    }
}
Apply the state:
[root@server1 keepalived]# salt server4 state.sls keepalived.service
5. Use keepalived to make HAProxy highly available
Edit the top file so that both server1 (master) and server4 (minion) get haproxy and keepalived:
[root@server1 salt]# vim top.sls
base:
  'server1':
    - haproxy.install
    - keepalived.service
  'server4':
    - haproxy.install
    - keepalived.service
  'roles:apache':
    - match: grain
    - httpd.install
  'roles:nginx':
    - match: grain
    - nginx.service
Apply the top file:
[root@server1 salt]# salt '*' state.highstate
6. Test HAProxy high availability
server4:
Install mailx:
yum install -y mailx
server1 and server4:
Check the IPs: server1 holds the virtual IP because it has the higher priority.
[root@server1 salt]# ip addr
Check the processes: haproxy and keepalived are both running.
[root@server1 salt]# ps ax
Browse to 172.25.10.100 and watch the round-robin responses.
Stop keepalived on server1; the browser still shows the round-robin responses:
/etc/init.d/keepalived stop
The virtual IP is now on server4.
Start keepalived on server1 again; round robin still works:
/etc/init.d/keepalived start
The virtual IP moves back to server1.
Extension: the setup above only protects the VIP. If haproxy itself is stopped, nothing fails over; to handle that case, add a check script.
Write the script:
cd /opt
vim check_haproxy.sh
#!/bin/bash
/etc/init.d/haproxy status &> /dev/null || /etc/init.d/haproxy restart &> /dev/null
if [ $? -ne 0 ];then
/etc/init.d/keepalived stop &> /dev/null
fi
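The script's decision logic, restated as a pure function: haproxy healthy, or successfully restarted, means keepalived stays up; otherwise keepalived is stopped so the VIP fails over to the backup node. (The return strings below are illustrative.)

```python
# Pure-Python model of check_haproxy.sh's decision:
# status ok        -> do nothing
# restart succeeds -> do nothing (haproxy recovered)
# restart fails    -> stop keepalived so the BACKUP node takes the VIP
def check_haproxy(status_ok, restart_ok):
    if status_ok or restart_ok:
        return "keep keepalived running"
    return "stop keepalived"

print(check_haproxy(True, False))   # keep keepalived running
print(check_haproxy(False, True))   # keep keepalived running
print(check_haproxy(False, False))  # stop keepalived
```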
Make the script executable:
chmod +x check_haproxy.sh
Run the script once to test it:
./check_haproxy.sh
Edit the keepalived configuration template:
[root@server1 ~]# cd /srv/salt/keepalived/files/
[root@server1 files]# vim keepalived.conf
3 vrrp_script check_haproxy {
4 script "/opt/check_haproxy.sh"
5 interval 2
6 weight 2
7 }
36 track_script {
37 check_haproxy
38 }
Apply the states:
salt server1 state.sls keepalived.service
salt server4 state.sls keepalived.service
Stop haproxy:
/etc/init.d/haproxy stop
Then remove the execute permission from the haproxy init script:
chmod -x /etc/init.d/haproxy #with the init script non-executable, the check script sees haproxy as down, stops keepalived, and the virtual IP moves to server4
When the execute permission is restored, the virtual IP returns to server1:
chmod +x /etc/init.d/haproxy #restore the permission
/etc/init.d/keepalived start #start keepalived manually; the script does not restart it