This article continues the configuration from the previous post, which is linked below:
Enterprise Ops in Practice: SaltStack installation, remote execution, YAML syntax, grains, pillar, and Jinja templates (illustrated walkthrough)
Comparing the precedence of variable files and pillar values
Both files live under base/apache.
Define the variable file, then import it in the template file:
vim httpd.conf
{% from 'apache/lib.sls' import http_port %}
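The contents of apache/lib.sls are only shown in a screenshot in the original; a minimal sketch of such a variable file (the port value 8080 here is just an illustrative assumption) would be:

vim /srv/salt/apache/lib.sls

{% set http_port = 8080 %}

The template can then use {{ http_port }} wherever the port appears, e.g. Listen {{ http_port }}.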
Run:
salt server2 state.sls apache
The figure shows that the pillar value has the higher precedence: it overrides the lower-precedence value from the variable file.
Once the comparison is done, remember to restore the httpd port on server2: just delete the import statement from the httpd.conf template and run the command again.
Using SaltStack to remotely deploy keepalived
mkdir /srv/salt/keepalived
cd /srv/salt/keepalived
Copy a keepalived.conf from an existing installation into /srv/salt/keepalived to serve as the configuration template.
Check whether port 25 is listening (keepalived's smtp_server points at the local SMTP service):
netstat -antlp | grep :25
vim keepalived.conf    # edit the configuration template
 1 ! Configuration File for keepalived
 2
 3 global_defs {
 4     notification_email {
 5         root@localhost
 6     }
 7     notification_email_from keepalived@localhost
 8     smtp_server 127.0.0.1
 9     smtp_connect_timeout 30
10     router_id LVS_DEVEL
11     vrrp_skip_check_adv_addr
12     #vrrp_strict
13     vrrp_garp_interval 0
14     vrrp_gna_interval 0
15 }
16
17 vrrp_instance VI_1 {
18     state {{ STATE }}
19     interface eth0
20     virtual_router_id {{ VRID }}
21     priority {{ PRI }}
22     advert_int 1
23     authentication {
24         auth_type PASS
25         auth_pass 1111
26     }
27     virtual_ipaddress {
28         172.25.21.100
29     }
30 }
Line 12 must stay commented out: vrrp_strict enables strict VRRP compliance, which in a setup like this typically leaves the VIP unreachable.
Edit the pillar top file:
vim /srv/pillar/top.sls
base:
  '*':
    - pkgs
    - kp
Edit kp.sls and write the pillar values:
vim /srv/pillar/kp.sls
{% if grains['fqdn'] == 'server2' %}
state: MASTER
vrid: 51
pri: 100
{% elif grains['fqdn'] == 'server3' %}
state: BACKUP
vrid: 51
pri: 50
{% endif %}
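Before wiring these values into a state, you can confirm each minion sees the right ones:

salt '*' saltutil.refresh_pillar
salt '*' pillar.item state vrid pri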
Write the init.sls in the salt/keepalived directory:
vim /srv/salt/keepalived/init.sls
kp-install:
  pkg.installed:
    - name: keepalived
  file.managed:
    - name: /etc/keepalived/keepalived.conf
    - source: salt://keepalived/keepalived.conf
    - template: jinja
    - context:
        STATE: {{ pillar['state'] }}
        VRID: {{ pillar['vrid'] }}
        PRI: {{ pillar['pri'] }}
  service.running:
    - name: keepalived
    - reload: true
    - watch:
      - file: kp-install
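The watch requisite reloads the service whenever the managed file changes. You can dry-run the state on a single minion first:

salt server2 state.sls keepalived test=True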
Edit the salt top file:
vim /srv/salt/top.sls
base:
  'roles:apache':
    - match: grain
    - apache
    - keepalived
  'roles:nginx':
    - match: grain
    - nginx.service
    - keepalived
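This matching depends on the roles grain that was set in the previous post; verify it is still present:

salt '*' grains.item roles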
vim /srv/salt/apache/httpd.conf    # import the variable in the template
vim /srv/salt/apache/init.sls    # pass the pillar value into the state
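The apache files themselves are only shown in screenshots; by analogy with the keepalived state above, a minimal sketch of /srv/salt/apache/init.sls (the pillar key port and the context name http_port are assumptions carried over from the earlier posts) might look like:

apache:
  pkg.installed:
    - name: httpd
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: salt://apache/httpd.conf
    - template: jinja
    - context:
        http_port: {{ pillar['port'] }}
  service.running:
    - name: httpd
    - watch:
      - file: apache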
Run SaltStack:
salt '*' state.highstate
state.highstate applies every state assigned in the top file, across all environments, to the targeted minions.
On server2, check port 80:
echo 'nice job' > /var/www/html/index.html
netstat -antlp | grep :80
ip addr    # the VIP is visible: inet 172.25.21.100/32 scope global eth0
Test from the physical host:
curl 172.25.21.100
Stop keepalived on server2.
On server3 you can then see the VIP has failed over:
ip addr    # inet 172.25.21.100/32 scope global eth0
Test from the physical host again:
curl 172.25.21.100
Job cache
1. When the master dispatches a job it attaches a generated jid (job ID), formatted as %Y%m%d%H%M%S%f.
2. When a minion starts executing a job, it creates a file named after that jid under its local /var/cache/salt/minion/proc directory, which the master can use to check the job's progress while it runs. Once execution finishes and the result has been sent back to the master, the temporary file is deleted.
3. The job cache is kept for 24 hours by default.
Job cache directory on the master:
/var/cache/salt/master/jobs/
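Jobs in the default cache can be inspected from the master with the jobs runner:

salt-run jobs.list_jobs
salt-run jobs.lookup_jid 20260127120000000000    # use a real jid from list_jobs; this one is made up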
There are two ways to use an external job cache:
Mode 1: results are returned to the default job cache on the Salt master as usual, and the Salt returner module running on each Salt minion additionally sends them to the external job cache.
Advantage: storing the data adds no extra load on the Salt master.
Disadvantage: every Salt minion connects to the external job cache, which can mean a large number of connections, and it takes extra configuration to get the returner module set up on all Salt minions.
First, install MySQL (MariaDB) on the server1 (master) side:
yum install mariadb-server
systemctl start mariadb
mysql_secure_installation
mysql -pwestos < test.sql
Enter the database: you can see the salt database and its jids table. Grant the salt user full privileges on the salt database:
grant all on salt.* to salt@'%' identified by 'salt';
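test.sql itself is not shown; judging by the schema in the Salt MySQL returner documentation, it creates roughly the following (a salt_events table is normally included as well):

CREATE DATABASE `salt` DEFAULT CHARACTER SET utf8;
USE `salt`;

CREATE TABLE `jids` (
  `jid` varchar(255) NOT NULL,
  `load` mediumtext NOT NULL,
  UNIQUE KEY `jid` (`jid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `salt_returns` (
  `fun` varchar(50) NOT NULL,
  `jid` varchar(255) NOT NULL,
  `return` mediumtext NOT NULL,
  `id` varchar(255) NOT NULL,
  `success` varchar(10) NOT NULL,
  `full_ret` mediumtext NOT NULL,
  `alter_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  KEY `id` (`id`),
  KEY `jid` (`jid`),
  KEY `fun` (`fun`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;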
On the server2 (minion) side:
vim /etc/salt/minion
return: mysql
mysql.host: '172.25.21.1'
mysql.port: 3306
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
Restart the service:
systemctl restart salt-minion.service
Install the MySQL-python bindings (needed by the mysql returner):
yum install -y MySQL-python.x86_64
On the server1 (master) side:
salt server2 test.ping --return mysql
salt server2 my_disk.df --return mysql
mysql -pwestos
use salt;
select * from salt_returns\G
The returns saved in the database are then visible.
Mode 2: Salt minions send data to the default job cache as usual, and the Salt returner module running on the Salt master then forwards it to the external system.
1. Advantage: the external system needs only a single connection, which is preferable for databases and similar systems.
2. Disadvantage: it adds extra load on the Salt master.
On the server1 (master) side:
vim /etc/salt/master
master_job_cache: mysql
mysql.host: 'localhost'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
systemctl restart salt-master.service
Enter the database:
mysql -pwestos
grant all on salt.* to salt@'localhost' identified by 'salt';
yum install MySQL-python -y
Run:
salt '*' test.ping
Check the result:
mysql -pwestos
use salt;
select * from salt_returns\G
salt-ssh
server3:
systemctl stop salt-minion.service
server1:
yum install -y salt-ssh
vim /etc/salt/roster
server3:
  host: 172.25.21.3
  user: root
  passwd: westos
salt-ssh '*' test.ping
salt-ssh '*' my_disk.df
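On first contact salt-ssh honors SSH host-key checking and will prompt for each host; the -i (--ignore-host-keys) flag skips that prompt:

salt-ssh -i '*' test.ping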
salt-syndic
Make server4 the top-level (central) master and server1 its syndic.
server4:
yum install salt-master
vim /etc/salt/master    # as shown in the figure, set order_masters: true
systemctl restart salt-master.service
server1:
yum install salt-syndic.noarch
vim /etc/salt/master
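Here /etc/salt/master on server1 must point the syndic at the new top-level master. Assuming server4 follows the same addressing scheme as the other nodes (172.25.21.4), the setting is:

syndic_master: 172.25.21.4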
systemctl restart salt-master.service
systemctl start salt-syndic.service
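server4 sees the syndic as a key to approve, just like an ordinary minion; accept it there before issuing commands:

salt-key -L
salt-key -A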
On server4:
salt '*' state.sls keepalived
salt-api
server1:
yum install -y salt-api
vim /etc/salt/master.d/api.conf
rest_cherrypy:
  port: 8000
  ssl_crt: /etc/pki/tls/certs/localhost.crt
  ssl_key: /etc/pki/tls/private/localhost.key
cd /etc/pki/tls/private/
openssl genrsa 1024 > localhost.key
cd /etc/pki/tls/certs
make testcert
vim /etc/salt/master.d/auth.conf
external_auth:
  pam:
    saltapi:
      - .*
      - '@wheel'
      - '@runner'
      - '@jobs'
useradd saltapi
echo westos | passwd --stdin saltapi
systemctl restart salt-master.service
systemctl start salt-api.service
netstat -antlp |grep :8000
curl -sSk https://172.25.21.1:8000/login -H 'Accept: application/x-yaml' -d username=saltapi -d password=westos -d eauth=pam
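The login call returns a token (response shape abridged and illustrative; the token below is the one reused in the next command):

return:
- eauth: pam
  perms:
  - .*
  - '@wheel'
  - '@runner'
  - '@jobs'
  token: 4cf1eeb0f003db3643695d498861f18c378422c4
  user: saltapi

Pass that token in the X-Auth-Token header when calling the API: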
curl -sSk https://172.25.21.1:8000 -H 'Accept: application/x-yaml' -H 'X-Auth-Token: 4cf1eeb0f003db3643695d498861f18c378422c4' -d username=saltapi -d password=westos -d client=local -d tgt='*' -d fun=test.ping