SaltStack Automated Operations Management
SaltStack overview:
SaltStack is a configuration management system that maintains remote nodes in predefined states, and a distributed remote execution system for running commands and querying data on remote nodes; it is a powerful tool for improving operations efficiency and standardizing configuration work. SaltStack characteristics:
SaltStack is a centralized management platform for server infrastructure built on a client/server (C/S) architecture; unlike Ansible, it does not get by with only a server side.
The management side is called the Master and the client side the Minion; they communicate through ZeroMQ message queues, listening on port 4505 by default. The second network service the Salt Master runs is the ZeroMQ REP system, which listens on port 4506 by default.
SaltStack advantages:
- Commands are sent to remote systems in parallel rather than serially
- Uses a secure, encrypted protocol with a minimal, fast network payload
- Provides a simple programming interface
- Introduces fine-grained targeting for remote execution, so targets can be selected by hostname or by system attributes
SaltStack drawbacks:
- You have to learn SaltStack's own State syntax rules
- Because of the C/S structure, a client must be installed on every managed machine, unlike the other two systems
- Its architectural complexity means more component dependencies and higher demands on operators' skills
- Because a central master is mandatory, a file that needs to be synchronized from a local machine must first be transferred to the master before it can be distributed
Environment preparation
Hostname | IP | Role
---|---|---
server1 | 172.25.6.1 | Master
server2 | 172.25.6.2 | Minion
server3 | 172.25.6.3 | Minion
1. Setting up the yum repository
Official repository:
yum install https://repo.saltstack.com/yum/redhat/salt-repo-latest.el7.noarch.rpm
Since downloads from the official site are slow, a local installation source is used instead:
[root@server1 ~]# cd /etc/yum.repos.d/
[root@server1 yum.repos.d]# vim salt.repo
[salt]
name=salt
baseurl=http://172.25.6.250/3000 ## 3000 is the directory of pre-downloaded salt packages
gpgcheck=0
2. Installation
2.1 Installing the salt-master side
[root@server1 ~]# yum install -y salt-master ## make server1 the master
[root@server1 ~]# systemctl enable --now salt-master ## enable at boot and start
Created symlink from /etc/systemd/system/multi-user.target.wants/salt-master.service to /usr/lib/systemd/system/salt-master.service.
[root@server1 ~]# systemctl start salt-master ## start salt-master
[root@server1 apache]# netstat -antlp | grep 4505 ## 4505 is the default publish port
tcp 0 0 0.0.0.0:4505 0.0.0.0:* LISTEN 24944/salt-master Z
tcp 0 0 172.25.6.1:4505 172.25.6.3:56546 ESTABLISHED 24944/salt-master Z
tcp 0 0 172.25.6.1:4505 172.25.6.2:33902 ESTABLISHED 24944/salt-master Z
2.2 Installing and configuring the salt-minion side
server2 and server3 are configured identically, both as minions. They can use the same software repository as the master; instead of writing the repo file again, copy it with scp /etc/yum.repos.d/salt.repo server2:/etc/yum.repos.d/
[root@server2 ~]# yum install -y salt-minion.noarch ## install the minion package
[root@server2 ~]# vim /etc/salt/minion ## point the minion at the master's IP
master: 172.25.6.1
[root@server2 salt]# systemctl enable --now salt-minion.service ## enable at boot and start
Created symlink from /etc/systemd/system/multi-user.target.wants/salt-minion.service to /usr/lib/systemd/system/salt-minion.service.
2.3 Key authentication and connection
[root@server1 ~]# salt-key -A ## accept all pending minion keys
The following keys are going to be accepted:
Unaccepted Keys:
server2
server3
Proceed? [n/Y] Y
Key for minion server2 accepted.
Key for minion server3 accepted.
[root@server1 ~]# salt-key -L ## list all keys
Accepted Keys:
server2
server3
Denied Keys:
Unaccepted Keys:
Rejected Keys:
[root@server1 ~]# salt '*' test.ping ## connectivity test
server2:
True
server3:
True
Monitoring port 4505 (the SaltStack publish channel)
[root@server1 ~]# yum install lsof -y ## utility for listing open files and sockets
[root@server1 ~]# lsof -i :4505
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
salt-mast 23179 root 15u IPv4 51403 0t0 TCP *:4505 (LISTEN)
salt-mast 23179 root 17u IPv4 54013 0t0 TCP server1:4505->server2:33620 (ESTABLISHED) ## port 4505 is open by default and connected to each minion in the ESTABLISHED state
salt-mast 23179 root 18u IPv4 54192 0t0 TCP server1:4505->server3:56242 (ESTABLISHED)
[root@server1 ~]# netstat -antlp ## shows the same information as lsof
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:4505 0.0.0.0:* LISTEN 6359/salt-master Ze
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 3461/master
tcp 0 0 0.0.0.0:4506 0.0.0.0:* LISTEN 6365/salt-master MW
tcp 0 0 172.25.6.1:4505 172.25.6.3:57098 ESTABLISHED 6359/salt-master Ze
tcp 0 0 172.25.6.1:4505 172.25.6.2:34546
Viewing detailed process information with 'ps ax'
[root@server1 ~]# yum install python-setproctitle.x86_64 -y ## with this package installed, 'ps ax' shows descriptive salt-master process titles
[root@server1 ~]# ps ax
PID TTY STAT TIME COMMAND
6345 ? Ss 0:01 /usr/bin/python /usr/bin/salt-master ProcessManager
6352 ? S 0:00 /usr/bin/python /usr/bin/salt-master MultiprocessingLoggingQueue
6359 ? Sl 0:00 /usr/bin/python /usr/bin/salt-master ZeroMQPubServerChannel
6362 ? S 0:00 /usr/bin/python /usr/bin/salt-master EventPublisher
6363 ? S 0:26 /usr/bin/python /usr/bin/salt-master Maintenance
6364 ? S 0:00 /usr/bin/python /usr/bin/salt-master ReqServer_ProcessManager
6365 ? Sl 0:40 /usr/bin/python /usr/bin/salt-master MWorkerQueue
6366 ? Sl 0:01 /usr/bin/python /usr/bin/salt-master MWorker-0
6367 ? Sl 0:01 /usr/bin/python /usr/bin/salt-master MWorker-1
6374 ? Sl 0:13 /usr/bin/python /usr/bin/salt-master FileserverUpdate
6375 ? Sl 0:01 /usr/bin/python /usr/bin/salt-master MWorker-2
6376 ? Sl 0:01 /usr/bin/python /usr/bin/salt-master MWorker-3
6377 ? Sl 0:01 /usr/bin/python /usr/bin/salt-master MWorker-4
2.4 Testing the connection between the master and the minions
[root@server1 ~]# salt '*' cmd.run hostname ## cmd.run: execution module; check the hostnames of server2 and server3
server2:
server2
server3:
server3
Note: after changing a minion's hostname you must delete /etc/salt/minion_id; otherwise, as far as the master is concerned, the change never happened.
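A minimal sketch of the full rename procedure (the new hostname is illustrative): regenerate minion_id on the minion, then swap the keys on the master.
[root@server2 ~]# hostnamectl set-hostname server2-new ## illustrative new name
[root@server2 ~]# systemctl stop salt-minion.service
[root@server2 ~]# rm -f /etc/salt/minion_id ## regenerated from the new hostname on next start
[root@server2 ~]# systemctl start salt-minion.service
[root@server1 ~]# salt-key -d server2 -y ## delete the stale key on the master
[root@server1 ~]# salt-key -a server2-new -y ## accept the key under the new name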
Once the master can ping a minion, the public key generated from their configuration is identical on both the master and the minion:
[root@server1 ~]# cd /etc/salt/pki/
[root@server1 pki]# ls
master minion
[root@server1 pki]# cd master/
[root@server1 master]# ls
master.pem master.pub minions minions_autosign minions_denied minions_pre minions_rejected
[root@server1 master]# md5sum master.pub
66082b55b3f4f7103c2e267e474b6051 master.pub
[root@server1 master]# cd minions
[root@server1 minions]# ls
server2 server3
[root@server1 minions]# md5sum server2
a653c0cad3005e1d1e865f39625f9fde server2
[root@server1 minions]# md5sum server3
273864e8e1c7471481f0a06310090aa3 server3
[root@server2 ~]# cd /etc/salt/pki/
[root@server2 pki]# ls
master minion
[root@server2 pki]# cd minion/
[root@server2 minion]# ls
minion_master.pub minion.pem minion.pub
[root@server2 minion]# md5sum *
66082b55b3f4f7103c2e267e474b6051 minion_master.pub
084aa06e917781c53fa0a4067c3e818f minion.pem
a653c0cad3005e1d1e865f39625f9fde minion.pub
3. SaltStack remote execution and related deployment
3.1 Running shell commands remotely (on the command line)
A Salt command consists of three main parts:
salt '<target>' <function> [arguments]
target: specifies which minions to run on; the default rule is glob matching against the minion id:
salt '*' test.ping
Targets can also use regular expressions:
salt -E 'server[1-3]' test.ping
Targets can also be given as a list:
salt -L 'server2,server3' test.ping
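Several matchers can also be combined with -C (compound matching); a hedged example mixing a grain match with a glob (the roles grain used here is only defined later, in section 4.3):
salt -C 'G@roles:apache and server*' test.ping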
Example: deploying Apache on server2
[root@server1 ~]# salt server2 pkg.install httpd ## install httpd
server2:
----------
apr:
----------
new:
1.4.8-3.el7_4.1
old:
apr-util:
----------
new:
1.5.2-6.el7
old:
httpd:
----------
new:
2.4.6-88.el7
old:
httpd-tools:
----------
new:
2.4.6-88.el7
old:
mailcap:
----------
new:
2.1.41-2.el7
old:
[root@server1 ~]# salt 'server2' service.start httpd ## start httpd
server2:
True
[root@server1 ~]# salt 'server2' cmd.run 'rpm -q httpd' ## check the installed httpd version
server2:
httpd-2.4.6-88.el7.x86_64
3.2 Remote module execution
Functions are the features a module provides; Salt ships with a large number of built-in functions.
salt '*' cmd.run 'uname -a'
Arguments are delimited by spaces:
salt 'server2' sys.doc pkg # view the module documentation
salt 'server2' pkg.install httpd
salt 'server2' pkg.remove httpd
Remote module storage
[root@server1 ~]# vim /etc/salt/master ## the default configuration file
file_roots:
  base:
    - /srv/salt
[root@server1 ~]# systemctl restart salt-master ## restart the master service
[root@server1 ~]# mkdir /srv/salt ## storage directory for remote modules and states
Remote module execution, method 1
[root@server1 ~]# mkdir /srv/salt/_modules
[root@server1 ~]# cd /srv/salt/_modules
[root@server1 _modules]# vim my_disk.py
def df():
    # __salt__ gives custom modules access to the built-in execution modules
    return __salt__['cmd.run']('df -h')
[root@server1 _modules]# salt server2 saltutil.sync_modules ## sync the module to the minion; it will not take effect otherwise
server2:
- modules.my_disk
[root@server1 _modules]# salt server2 my_disk.df ## run df -h on server2
server2:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 27G 1.3G 26G 5% /
devtmpfs 908M 0 908M 0% /dev
tmpfs 919M 80K 919M 1% /dev/shm
tmpfs 919M 17M 903M 2% /run
tmpfs 919M 0 919M 0% /sys/fs/cgroup
/dev/vda1 1014M 177M 838M 18% /boot
tmpfs 184M 0 184M 0% /run/user/0
3.3 Remote execution method 2: sls files
3.3.1 Installing Apache and changing its port
[root@server1 salt]# mkdir apache
[root@server1 salt]# vim apache/init.sls ## write the sls file
apache: ## ID declaration
  pkg.installed: ## state declaration, function declaration
    - pkgs:
      - httpd

  file.managed: ## file management
    - name: /etc/httpd/conf/httpd.conf ## destination
    - source: salt://apache/httpd.conf ## source (the Apache config file, placed under /srv/salt/apache)

  service.running: ## run the service
    - name: httpd
    - enable: true
    - reload: true
    - watch: ## trigger
      - file: apache
[root@server1 apache]# vim httpd.conf ## change the listening port to 8080
[root@server1 apache]# salt server2 state.sls apache.init
[root@server2 ~]# netstat -antlp ## check the result
tcp6 0 0 :::8080 :::* LISTEN
[root@server2 ~]# tree ## file tree on the minion side
3.3.2 Building nginx from source
3.3.2.1 Installing nginx
[root@server1 salt]# mkdir nginx
[root@server1 salt]# cd nginx/
[root@server1 nginx]# vim inst.sls
nginx-install:
  pkg.installed:
    - pkgs:
      - gcc
      - pcre-devel
      - openssl-devel

  file.managed:
    - name: /mnt/nginx-1.20.1.tar.gz ## where the nginx tarball is placed on the minion
    - source: salt://nginx/nginx-1.20.1.tar.gz ## put nginx-1.20.1.tar.gz under /srv/salt/nginx

  cmd.run: ## module execution
    - name: cd /mnt && tar zxf nginx-1.20.1.tar.gz && cd nginx-1.20.1 && sed -i 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/g' auto/cc/gcc && ./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-threads --with-file-aio &> /dev/null && make &> /dev/null && make install &> /dev/null ## build commands
    - creates: /usr/local/nginx ## acts like a guard: skips the build if nginx is already installed
[root@server1 nginx]# salt server3 state.sls nginx/inst ## apply the install state
3.3.2.2 Starting nginx
[root@server1 nginx]# vim service.sls ## sls file that manages the service
include:
  - nginx.inst

nginx-user:
  user.present:
    - name: nginx
    - shell: /sbin/nologin
    - home: /usr/local/nginx
    - createhome: false

/usr/local/nginx/conf/nginx.conf:
  file.managed:
    - source: salt://nginx/nginx.conf ## put this nginx.conf under /srv/salt/nginx and set worker_connections 65535 in it

nginx-service:
  file.managed:
    - name: /usr/lib/systemd/system/nginx.service
    - source: salt://nginx/nginx.service ## put this nginx.service under /srv/salt/nginx
  service.running: ## start the service
    - name: nginx
    - enable: true ## enable at boot
    - reload: true ## reload on changes
    - watch: ## trigger
      - file: /usr/local/nginx/conf/nginx.conf
[root@server1 nginx]# salt server3 state.sls nginx.service ## apply the service state
[root@server3 nginx]# netstat -antlp | grep nginx ## check the result
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 32765/nginx: master
[root@server1 nginx]# curl server3 ## test access
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@server1 nginx]# tree
4. Grains
4.1 Introduction
Grains is a SaltStack component whose data is stored on the minion side.
When salt-minion starts, the data it collects is stored statically in grains; the data is refreshed only when the minion restarts.
Because grains data is static, modifying it frequently is not recommended.
4.2 Querying minion information
[root@server1 nginx]# salt server2 grains.ls ## list the available grains
[root@server1 nginx]# salt server2 grains.items ## show the value of every grain
[root@server1 nginx]# salt server2 grains.item ipv4 ## show a single grain, ipv4
server2:
----------
ipv4:
- 127.0.0.1
- 172.25.6.2
[root@server1 nginx]# salt '*' grains.item fqdn ## show a single grain, fqdn
server3:
----------
fqdn:
server3
server2:
----------
fqdn:
server2
4.3 Custom grains entries
There are three ways to add a role: two on the minion side and one on the master side.
- Define it in /etc/salt/minion on the minion; salt-minion must be restarted, otherwise the data will not update:
[root@server2 salt]# vim /etc/salt/minion
grains:
  roles:
    - apache
[root@server2 salt]# systemctl restart salt-minion.service
[root@server1 nginx]# salt server2 grains.item roles ## test
server2:
----------
roles:
- apache
- Define it in /etc/salt/grains on the minion:
[root@server3 nginx]# vim /etc/salt/grains
roles: nginx
[root@server1 nginx]# salt server3 saltutil.sync_grains # sync the data
server3:
[root@server1 nginx]# salt '*' grains.item roles # test
server2:
----------
roles:
- apache
server3:
----------
roles:
- nginx
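As an aside, the same /etc/salt/grains file can also be written from the master side with the grains.setval execution module; a hedged example (the key and value are illustrative):
[root@server1 nginx]# salt server3 grains.setval env testing ## persists env: testing to /etc/salt/grains on server3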
- Write a grains module (on the master side):
[root@server1 ~]# mkdir /srv/salt/_grains
[root@server1 salt]# cd /srv/salt/_grains/
[root@server1 _grains]# vim grains.py
def grains():
    grains = {}
    grains['name'] = 'zero'
    grains['age'] = '18'
    return grains
## test
[root@server1 _grains]# salt '*' saltutil.sync_grains
server2:
    - grains.grains
server3:
    - grains.grains
[root@server1 _grains]# salt '*' grains.item age
server2:
    ----------
    age:
        18
server3:
    ----------
    age:
        18
[root@server1 _grains]# salt '*' grains.item name
server2:
    ----------
    name:
        zero
server3:
    ----------
    name:
        zero
- Write a highstate top file and match on grains
Matching minions in the target:
[root@server1 _grains]# salt -G roles:apache cmd.run hostname ## -G: target by grain
server2:
    server2
[root@server1 _grains]# salt -G roles:nginx cmd.run hostname
server3:
    server3
Matching in the top file:
[root@server1 salt]# vim top.sls
base:
  'roles:apache':
    - match: grain
    - apache
  'roles:nginx':
    - match: grain
    - nginx
[root@server1 salt]# salt '*' state.highstate
5. Jinja templates
5.1 Introduction
Jinja is a Python-based template engine; Jinja templates can be used directly inside SLS files.
Jinja templates let you define separate variables for different servers.
There are two delimiters: {% ... %} and {{ ... }}. The former executes statements such as for loops and assignments; the latter prints the result of an expression into the template.
5.2 How to use Jinja templates
The most basic use of Jinja is wrapping a conditional in a control structure:
[root@server1 salt]# vim /srv/salt/test.sls
/mnt/testfile:
  file.append:
    {% if grains['fqdn'] == 'server2' %}
    - text: server2
    {% elif grains['fqdn'] == 'server3' %}
    - text: server3
    {% endif %}
[root@server1 salt]# salt '*' state.sls test
[root@server2 ~]# cat /mnt/testfile
server2
[root@server3 ~]# cat /mnt/testfile
server3
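The {% ... %} delimiter also drives for loops and assignments; a minimal sketch (the file name loop.sls and the package list are illustrative):
[root@server1 salt]# vim /srv/salt/loop.sls
{% for pkg in ['httpd-tools', 'lsof'] %}
install-{{ pkg }}:
  pkg.installed:
    - name: {{ pkg }}
{% endfor %}
[root@server1 salt]# salt '*' state.sls loop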
6. Using pillar together with Jinja templates
6.1 Pillar introduction
Like grains, pillar is also a data system, but the use cases differ.
Pillar stores information dynamically on the master side, mainly private and sensitive data (usernames, passwords, and so on), and it can restrict which minion is allowed to see which piece of data.
Pillar is better suited for use in configuration management.
6.2 Declaring pillar
- This is the default directory, so no change is needed:
[root@server1 ~]# vim /etc/salt/master
pillar_roots:
  base:
    - /srv/pillar
6.3 Configuring Apache with a Jinja template
Create the sls file
[root@server1 salt]# vim /srv/salt/apache/init.sls
1 apache:
2   pkg.installed:
3     - pkgs:
4       - {{ pillar['package'] }}
5
6   file.managed:
7     - name: /etc/httpd/conf/httpd.conf
8     - source: salt://apache/httpd.conf
9     - template: jinja
10    - context: ## note: lines 10-12 are optional; they are written to improve readability
11        http_host: {{ grains['ipv4'][-1] }} ## ip
12        http_port: {{ pillar['port'] }} ## port
13
14  service.running:
15    - name: httpd
16    - enable: true
17    - reload: true
18    - watch:
19      - file: apache
Edit the Apache configuration file and reference the variables
[root@server1 salt]# vim /srv/salt/apache/httpd.conf
42 Listen {{ grains['ipv4'][-1] }}:{{ pillar['port'] }}
or: Listen {{ http_host }}:{{ http_port }}, in which case lines 10-12 of the sls file must not be deleted
[root@server1 salt]# vim /srv/pillar/pkgs.sls
{% if grains['fqdn'] == 'server3' %}
package: httpd
port: 80
{% elif grains['fqdn'] == 'server2' %}
package: httpd
port: 8080
{% endif %}
[root@server1 salt]# vim /srv/pillar/top.sls
base:
  '*':
    - pkgs
[root@server1 salt]# salt '*' state.sls apache
[root@server2 salt]# netstat -antlp
[root@server3 salt]# netstat -antlp
7. Automated keepalived deployment
7.1 Defining pillar values
[root@server1 pillar]# vim kp.sls
{% if grains['fqdn'] == 'server2' %}
state: MASTER
vrid: 6
pri: 100 ## priority
{% elif grains['fqdn'] == 'server3' %}
state: BACKUP
vrid: 6
pri: 50
{% endif %}
[root@server1 pillar]# salt '*' pillar.items
server3:
----------
package:
httpd
port:
80
pri:
50
state:
BACKUP
vrid:
6
server2:
----------
package:
httpd
port:
8080
pri:
100
state:
MASTER
vrid:
6
[root@server1 pillar]# vim top.sls
base:
  '*':
    - pkgs
    - kp
7.2 Creating the sls file
[root@server1 salt]# mkdir keepalived ## create the directory
[root@server1 keepalived]# vim init.sls
kp-install:
  pkg.installed:
    - name: keepalived

  file.managed:
    - name: /etc/keepalived/keepalived.conf
    - source: salt://keepalived/keepalived.conf ## download keepalived.conf yourself and place it in the keepalived directory
    - template: jinja
    - context:
        STATE: {{ pillar['state'] }}
        VRID: {{ pillar['vrid'] }}
        PRI: {{ pillar['pri'] }}

  service.running:
    - name: keepalived
    - reload: true
    - watch:
      - file: kp-install
7.3 Editing keepalived.conf
[root@server1 keepalived]# vim keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
        root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict ## comment this out: keepalived ships its own firewall policy, and leaving it enabled breaks access (404)!
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state {{ STATE }}
    interface eth0
    virtual_router_id {{ VRID }}
    priority {{ PRI }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.6.100
    }
}
7.4 Editing top.sls
[root@server1 salt]# vim top.sls
base:
  'roles:apache':
    - match: grain
    - apache
    - keepalived
  'roles:nginx':
    - match: grain
    - apache ## at this point stop nginx on server3 and start httpd
    - keepalived
7.5 Testing
[root@server1 salt]# salt '*' state.sls keepalived
[root@server1 salt]# salt '*' state.highstate ## apply everything in one run
On the minion side:
[root@server2 ~]# yum install -y keepalived.x86_64 ## install the package
[root@server3 ~]# yum install -y keepalived.x86_64 ## install the package
Note: keepalived provides high availability, so you can stop keepalived on one node at a time and check through the VIP 172.25.6.100, or access that IP from the physical host (after editing the Apache index pages) to see which backend answers.
[root@foundation6 images]# curl 172.25.6.100:8080
server2-172.25.6.2
[root@foundation6 images]# curl 172.25.6.100:80
server3-172.25.6.100
8. Job management
Job overview
When the master dispatches a task, it attaches the jid generated for it.
When a minion receives the instruction and starts executing, it creates a file named after that jid under its local /var/cache/salt/minion/proc directory; the master uses it to check how the task is progressing while it runs.
Once execution finishes and the result has been sent back to the master, the temporary file is deleted.
JOB CACHE:
# vim /etc/salt/master
keep_jobs: 24 ## hours to keep job data
Job cache directory on the master:
/var/cache/salt/master/jobs
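Jobs in the cache can be inspected from the master with the jobs runner; two hedged examples (the jid shown is illustrative, copy a real one from the list output):
[root@server1 ~]# salt-run jobs.list_jobs ## list cached jobs and their jids
[root@server1 ~]# salt-run jobs.lookup_jid 20210101120000000000 ## show the return data of one job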
8.1 Storing jobs in a database
Here the results are written directly from the master into the database.
Package installation
[root@server1 ~]# yum install mariadb-server.x86_64 -y ## install mariadb
[root@server1 ~]# yum install -y MySQL-python.x86_64 ## install the MySQL-python bindings
[root@server1 ~]# systemctl start mariadb.service ## start mariadb
Adding the configuration
[root@server1 apache]# vim /etc/salt/master
master_job_cache: mysql
mysql.host: 'localhost'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
[root@server1 ~]# systemctl restart salt-master.service
MySQL setup
[root@server1 ~]# mysql_secure_installation ## secure initialization
[root@server1 ~]# mysql -pwestos
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 19
Server version: 5.5.60-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> grant all on salt.* to salt@localhost identified by 'salt';
Query OK, 0 rows affected (0.00 sec) ## grant the salt user full privileges on the salt database; its login password is 'salt'
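The job cache expects the salt database and its tables to already exist. A minimal sketch, trimmed to the jids and salt_returns tables used here and adapted from the table layout published in the Salt MySQL returner documentation:
MariaDB [(none)]> CREATE DATABASE salt DEFAULT CHARACTER SET utf8;
MariaDB [(none)]> USE salt;
MariaDB [salt]> CREATE TABLE jids (
    ->   jid varchar(255) NOT NULL,
    ->   `load` mediumtext NOT NULL,  -- `load` is a reserved word, hence the backticks
    ->   UNIQUE KEY jid (jid)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
MariaDB [salt]> CREATE TABLE salt_returns (
    ->   fun varchar(50) NOT NULL,
    ->   jid varchar(255) NOT NULL,
    ->   `return` mediumtext NOT NULL,  -- `return` is also reserved
    ->   id varchar(255) NOT NULL,
    ->   success varchar(10) NOT NULL,
    ->   full_ret mediumtext NOT NULL,
    ->   alter_time timestamp DEFAULT CURRENT_TIMESTAMP,
    ->   KEY id (id), KEY jid (jid), KEY fun (fun)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;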
MariaDB [salt]> use salt ## switch to the salt database
Database changed
MariaDB [salt]> select * from salt_returns; ## inspect the returns table
[root@server1 ~]# salt '*' test.ping ## test
server2:
True
server3:
True
[root@server1 ~]# salt server2 my_disk.df --return mysql ## also push this job's result through the mysql returner
server2:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 27G 1.4G 26G 5% /
devtmpfs 908M 0 908M 0% /dev
tmpfs 919M 320K 919M 1% /dev/shm
tmpfs 919M 17M 903M 2% /run
tmpfs 919M 0 919M 0% /sys/fs/cgroup
/dev/vda1 1014M 177M 838M 18% /boot
tmpfs 184M 0 184M 0% /run/user/0
9. salt-ssh (a fallback)
9.1 salt-ssh overview
salt-ssh can run on its own and does not need the minion side. It uses sshpass for password interaction, and it works in serial mode, so performance drops.
[root@server1 ~]# vim /etc/salt/roster
server2:
  host: 172.25.6.2
  user: root
  passwd: westos
server3:
  host: 172.25.6.3
  user: root
  passwd: westos
[root@server2 ~]# systemctl stop salt-minion.service
[root@server3 ~]# systemctl stop salt-minion.service
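On first contact salt-ssh asks whether to trust each host's SSH key; a hedged example using the -i (--ignore-host-keys) flag to skip that prompt:
[root@server1 ~]# salt-ssh -i '*' test.ping ## -i: do not enforce strict host key checking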
[root@server1 ~]# salt-ssh '*' test.ping
server2:
True
server3:
True
[root@server1 ~]# salt-ssh '*' my_disk.df
server2:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 27G 1.4G 26G 6% /
devtmpfs 908M 0 908M 0% /dev
tmpfs 919M 320K 919M 1% /dev/shm
tmpfs 919M 17M 903M 2% /run
tmpfs 919M 0 919M 0% /sys/fs/cgroup
/dev/vda1 1014M 177M 838M 18% /boot
tmpfs 184M 0 184M 0% /run/user/0
server3:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 27G 1.5G 26G 6% /
devtmpfs 908M 0 908M 0% /dev
tmpfs 919M 100K 919M 1% /dev/shm
tmpfs 919M 17M 903M 2% /run
tmpfs 919M 0 919M 0% /sys/fs/cgroup
/dev/vda1 1014M 177M 838M 18% /boot
tmpfs 184M 0 184M 0% /run/user/0
9.2 salt-syndic
Similar to a Zabbix proxy, the syndic is essentially a proxy that isolates the master from the minions. The syndic must run on a master, which in turn connects to another, top-level master.
States pushed down by the top master reach the lower master through the syndic, and the data minions send to their master is relayed by the syndic up to the top master.
The top master does not know how many minions there are. The syndic's file_roots and pillar_roots directories must be kept identical to the top master's.
server4 (a new virtual machine) serves as the top master, ip: 172.25.6.4
[root@server4 ~]# yum install -y salt-master
[root@server4 ~]# vim /etc/salt/master
order_masters: True
[root@server4 ~]# systemctl enable --now salt-master.service
Installation and configuration on server1
[root@server1 salt]# yum install -y salt-syndic.noarch
[root@server1 salt]# systemctl enable --now salt-syndic.service
[root@server1 salt]# vim /etc/salt/master
syndic_master: 172.25.6.4
[root@server1 salt]# systemctl restart salt-master.service
[root@server4 ~]# salt-key -L
[root@server4 ~]# salt-key -A
[root@server4 ~]# salt '*' test.ping
10. salt-api configuration
10.1 Installing the api
[root@server1 ~]# yum -y install salt-api.noarch
10.2 Generating a certificate
[root@server1 ~]# cd /etc/pki/tls/private/
[root@server1 private]# openssl genrsa 1024 > localhost.key
[root@server1 tls]# cd certs/
[root@server1 certs]# make testcert
10.3 Enabling rest_cherrypy
[root@server1 private]# cd /etc/salt/master.d/
[root@server1 master.d]# vim api.conf
rest_cherrypy:
  port: 8000
  ssl_crt: /etc/pki/tls/certs/localhost.crt ## must be an absolute path
  ssl_key: /etc/pki/tls/private/localhost.key
10.4 Creating user authentication
[root@server1 master.d]# vim auth.conf
external_auth:
  pam:
    saltapi:
      - .*
      - '@wheel'
      - '@runner'
      - '@jobs'
Create the saltapi user:
[root@server1 master.d]# useradd saltapi
[root@server1 master.d]# passwd saltapi
Restart the service after these changes:
[root@server1 master.d]# systemctl restart salt-master.service
[root@server1 master.d]# netstat -antlp | grep :8000
[root@server1 master.d]# systemctl enable --now salt-api.service
10.5 Using salt-api
10.5.1 Obtaining an authentication token
[root@server1 master.d]# curl -sSk https://172.25.6.1:8000/login -H 'Accept: application/x-yaml' -d username=saltapi -d password=westos -d eauth=pam
10.5.2 Pushing a task
[root@server1 master.d]# curl -sSk https://172.25.6.1:8000 -H 'Accept: application/x-yaml' -H 'X-Auth-Token: ****************' -d client=local -d tgt='*' -d fun=test.ping
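A hedged way to chain the two calls in one shell session, assuming the jq package is available to pull the token out of a JSON login response:
[root@server1 master.d]# yum install -y jq ## assumed helper for JSON parsing
[root@server1 master.d]# TOKEN=$(curl -sSk https://172.25.6.1:8000/login -H 'Accept: application/json' -d username=saltapi -d password=westos -d eauth=pam | jq -r '.return[0].token')
[root@server1 master.d]# curl -sSk https://172.25.6.1:8000 -H 'Accept: application/x-yaml' -H "X-Auth-Token: $TOKEN" -d client=local -d tgt='*' -d fun=test.ping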