SaltStack Quick Start Tutorial
1. Introduction
SaltStack appeared a few years after Puppet. It is written in Python and, like Puppet, uses a client/server architecture: a master on the server side and minions on the clients. SaltStack is similar to Puppet, and can be said to combine the features of Puppet and Chef; it is more powerful, better suited to managing servers in large batches, and easier to configure than Puppet. It offers three major functions: remote command execution, configuration management (services, files, cron, users, groups), and cloud management.
2. Preparation
Prepare two machines. On both, disable selinux, then flush and save the iptables rules. master: 192.168.1.160 slave: 192.168.1.161
Update the package repositories:
[root@nb0 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@nb0 ~]# yum makecache fast
Loaded plugins: fastestmirror
HuaDongBD | 2.9 kB 00:00:00
base | 3.6 kB 00:00:00
extras | 3.4 kB 00:00:00
os | 3.6 kB 00:00:00
updates | 3.4 kB 00:00:00
updates/7/x86_64/primary_db | 7.8 MB 00:00:07
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
Metadata Cache Created
[root@nb0 ~]#
3. Installation
In the SaltStack architecture the server is called the Master and a client is called a Minion; both run as daemons. The Master listens on the ports defined in its configuration file: ret_port (the port over which clients and the server communicate; it receives the results sent back by the clients; default 4506) and publish_port (SaltStack's message publishing system; default 4505). When a Minion starts, it automatically connects to the ret_port of the Master address defined in its configuration file and authenticates.
- Master: the control center; runs salt commands and manages resource state
- Minion: a managed client machine; it actively connects to the Master, pulls resource state information from it, and synchronizes resource management information
- States: the instruction set for configuration management
- Modules: instruction modules used on the command line and in configuration files; they can be run from the command line
- Grains: static variables on the minion side
- Pillar: dynamic, relatively private variables on the minion side; they can be synchronized to minions through configuration files
- highstate: permanently applies state to the minion side, read from sls configuration files, i.e. synchronizes the state configuration
- salt_schedule: automatically keeps the client configuration up to date
3.1 Server-side installation
[root@nb0 ~]# yum install -y epel-release
[root@nb0 ~]# yum install -y salt-master salt-minion
3.2 Client-side installation
[root@nb1 ~]# yum install -y epel-release
[root@nb1 ~]# yum install -y salt-minion
4. Configuration
4.1 Salt minion configuration
Open /etc/salt/minion with vi/vim and find the master option, usually around line 16 of the file. Remove the leading #, keep one space after the colon, and change salt to master.
Example:
[root@nb0 ~]# vi /etc/salt/minion
[root@nb1 ~]# vi /etc/salt/minion
# Set the location of the salt master server. If the master server cannot be
# resolved, then the minion will fail to start.
master: master
If you cannot find the master line, append a line at the end of the file: master: master
or master: 192.168.1.160
You can also modify the configuration file directly with sed:
[root@nb2 ~]# sed -i 's/#master: salt/master: 192.168.1.160/g' /etc/salt/minion
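As a quick sanity check after the edit, you can grep for the active master line. A minimal sketch against a scratch copy of the file (the path /tmp/minion.demo is illustrative, not a real Salt path):

```shell
# Create a scratch copy containing the stock commented-out line.
printf '#master: salt\n' > /tmp/minion.demo
# Apply the same substitution the tutorial uses.
sed -i 's/^#master: salt$/master: 192.168.1.160/' /tmp/minion.demo
# Show only active (non-comment) master lines.
grep '^master: ' /tmp/minion.demo
```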
5. Starting the services
(1) Server
[root@nb0 ~]# salt-master start
To run it in the background:
[root@nb0 ~]# salt-master start &
[3] 35438
[root@nb0 ~]#
(2) Client
[root@nb0 ~]# salt-minion start &
[ERROR ] The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate
[ERROR ] The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate
[root@nb1 ~]# salt-minion start &
[ERROR ] The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate
[ERROR ] The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate
[root@nb2 ~]# salt-minion start
[ERROR ] The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate
[ERROR ] The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate
When a minion starts for the first time, it generates minion.pem (private key) and minion.pub (public key) under /etc/salt/pki/minion/ (the path is set in /etc/salt/minion) and sends minion.pub to the master. After receiving the minion's public key, the master accepts it with the salt-key command; the accepted public key is then stored under /etc/salt/pki/master/minions on the master, named after the minion id, and from then on the master can send commands to that minion.
6. Configuring authentication
(1) On the server, open a new nb0 terminal
[root@nb0 ~]# salt-key -a nb0
The following keys are going to be accepted:
Unaccepted Keys:
nb0
Proceed? [n/Y] y
Key for minion nb0 accepted.
[root@nb0 ~]#
[root@nb0 ~]# salt-key -a nb1
The following keys are going to be accepted:
Unaccepted Keys:
nb1
Proceed? [n/Y]y
Key for minion nb1 accepted.
[root@nb0 ~]#
[root@nb0 ~]# salt-key -a nb2
The following keys are going to be accepted:
Unaccepted Keys:
nb2
Proceed? [n/Y] y
Key for minion nb2 accepted.
You have mail in /var/spool/mail/root
[root@nb0 ~]#
[root@nb0 ~]# salt-key
Accepted Keys:
nb0
nb1
nb2
Denied Keys:
Unaccepted Keys:
Rejected Keys:
[root@nb0 ~]#
Note: when ==deploying Minions at scale== you can configure the master to automatically accept keys that are waiting for authentication.
Back up /etc/salt/master before modifying it:
[root@nb0 ~]# cp /etc/salt/master /etc/salt/master.bak
Open /etc/salt/master with vi:
[root@nb0 ~]# vi /etc/salt/master
Find the line #auto_accept: False,
remove the leading # and change False to True
# Enable auto_accept, this setting will automatically accept all incoming
# public keys from the minions. Note that this is insecure.
#auto_accept: False
Or modify it with sed:
[root@nb0 ~]# sed -i 's/#auto_accept: False/auto_accept: True/g' /etc/salt/master
Stop salt-master with Ctrl+C, then start it again:
[root@nb0 ~]# salt-master start
^C[WARNING ] Stopping the Salt Master
[WARNING ] Stopping the Salt Master
Exiting on Ctrl-c
[WARNING ] Stopping the Salt Master
Exiting on Ctrl-c
Exiting on Ctrl-c
[root@nb0 ~]# salt-master start
(2) Test and verify
[root@nb0 ~]# salt '*' test.ping
nb2:
True
nb1:
True
nb0:
True
[root@nb0 ~]#
The * here only matches clients that have already been accepted on the master; you can check them with salt-key.
Run a command remotely:
[root@nb0 ~]# salt '*' cmd.run 'df -h'
nb0:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 48G 26G 22G 55% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 16K 3.9G 1% /dev/shm
tmpfs 3.9G 402M 3.5G 11% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 139M 876M 14% /boot
/dev/mapper/cl-home 24G 33M 24G 1% /home
tmpfs 781M 0 781M 0% /run/user/0
/dev/loop0 7.8G 7.8G 0 100% /var/ftp/iso-home
nb1:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 48G 4.3G 44G 9% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 12K 3.9G 1% /dev/shm
tmpfs 3.9G 377M 3.5G 10% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 139M 876M 14% /boot
/dev/mapper/cl-home 24G 33M 24G 1% /home
tmpfs 781M 0 781M 0% /run/user/0
nb2:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 48G 4.9G 43G 11% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 12K 3.9G 1% /dev/shm
tmpfs 3.9G 401M 3.5G 11% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 139M 876M 14% /boot
/dev/mapper/cl-home 24G 33M 24G 1% /home
tmpfs 781M 0 781M 0% /run/user/0
[root@nb0 ~]#
First, we should know which files the Master and Minion installations put on each side, since that helps with day-to-day SaltStack operations later. You can see which commands were installed (for a yum-based installation) with the following:
[root@nb0 ~]# rpm -ql salt-master
/etc/salt/master #salt master configuration file
/usr/bin/salt #core salt master command
/usr/bin/salt-cp #salt file transfer command
/usr/bin/salt-key #salt key management command
/usr/bin/salt-master #salt master service command
/usr/bin/salt-run #salt master runner command
/usr/bin/salt-unity
/usr/lib/systemd/system/salt-master.service
/usr/share/man/man1/salt-cp.1.gz
/usr/share/man/man1/salt-key.1.gz
/usr/share/man/man1/salt-master.1.gz
/usr/share/man/man1/salt-run.1.gz
/usr/share/man/man1/salt-unity.1.gz
/usr/share/man/man7/salt.7.gz
[root@nb0 ~]#
[root@nb0 ~]# salt --version
salt 2015.5.10 (Lithium)
[root@nb0 ~]#
7. Managed objects
To maintain a large configuration management system, we first have to maintain the objects under management. In SaltStack a managed object is called a Target; on the Master we can use different Targets to manage different Minions. These Targets are all sets built by matching against Minion IDs.
[root@nb0 ~]# rpm -ql salt-minion
/etc/salt/minion #salt minion configuration file
/usr/bin/salt-call #salt call (pull) command
/usr/bin/salt-minion #salt minion service command
/usr/lib/systemd/system/salt-minion.service
/usr/share/man/man1/salt-call.1.gz
/usr/share/man/man1/salt-minion.1.gz
You have mail in /var/spool/mail/root
[root@nb0 ~]#
(1) Glob matching
When operating on Minions you can match their Minion IDs with shell-style glob patterns (Salt's default target type). For example, to operate on all Minions matching 'nb*' and show each node's IP:
[root@nb0 ~]# salt 'nb*' network.ip_addrs
nb0:
- 192.168.1.160
nb1:
- 192.168.1.161
nb2:
- 192.168.1.162
[root@nb0 ~]#
(2) List matching
-L, --list: match an explicit comma-separated list of minion IDs
[root@nb0 ~]# salt -L nb1,nb2 test.ping
nb2:
True
nb1:
True
[root@nb0 ~]#
(3) Grains matching
[root@nb0 ~]# salt -G 'os:CentOS' test.ping
nb0:
True
nb1:
True
nb2:
True
You have mail in /var/spool/mail/root
[root@nb0 ~]#
Here os:CentOS is a key/value pair taken from the Minion's Grains. Grains are covered in detail later; for now it is enough to know that Minions can be matched by key/value pairs.
-G, --grain: match on grains
(4) Node group matching
First define the groups in the master configuration file:
[root@nb0 ~]# vi /etc/salt/master
##### Node Groups #####
##########################################
# Node groups allow for logical groupings of minion nodes. A group consists of a group
# name and a compound target.
#nodegroups:
# group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
# group2: 'G@os:Debian and foo.domain.com'
L@ and G@ indicate minion lists and grain expressions respectively; L@ is followed by a comma-separated list of minion IDs.
| Letter | Match Type | Example | Alt Delimiter? |
| --- | --- | --- | --- |
| G | Grains glob | `G@os:Ubuntu` | Yes |
| E | PCRE Minion ID | `E@web\d+\.(dev\|qa\|prod)\.loc` | No |
| P | Grains PCRE | `P@os:(RedHat\|Fedora\|CentOS)` | Yes |
| L | List of minions | `L@minion1.example.com,minion3.domain.com or bl*.domain.com` | No |
| I | Pillar glob | `I@pdata:foobar` | Yes |
| J | Pillar PCRE | `J@pdata:^(foo\|bar)$` | Yes |
| S | Subnet/IP address | `S@192.168.1.0/24 or S@192.168.1.100` | No |
| R | Range cluster | `R@%foo.bar` | No |
Matchers can be joined using boolean and, or, and not operators.
Uncomment nodegroups and change group1 to: group1: 'L@nb1,nb2'
-N, --nodegroup: match a predefined node group
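Putting that together, the Node Groups section of /etc/salt/master would look like the sketch below (the group names and members are just this tutorial's example values; group2 is a made-up compound example):

```yaml
# /etc/salt/master -- Node Groups, uncommented and adapted
nodegroups:
  group1: 'L@nb1,nb2'           # explicit list of minion IDs
  group2: 'G@os:CentOS and nb*' # compound: grain match AND id glob
```

After restarting the master, `salt -N group1 test.ping` should then target only nb1 and nb2.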
(5) CIDR matching. 192.168.1.0/24 is a CIDR network; the IP matched here is the source address the Minion uses when connecting to the Master's port 4505.
[root@nb0 ~]# salt -S '192.168.1.0/24' test.ping
nb0:
True
nb2:
True
nb1:
True
[root@nb0 ~]#
8. Managed object attributes
Grains is one of the most important SaltStack components, because it is used constantly during configuration and deployment. Grains records static information about each Minion; you can simply think of it as holding each Minion's common attributes, such as CPU, memory, disk, and network information. You can view all of a Minion's Grains with grains.items. A Minion's Grains are collected and reported to the Master when the Minion starts. In a real environment you will often need to define custom Grains for your own business needs.
8.1 Defining Grains in the Minion configuration file
Let's start with the simplest way to define custom Grains: through the Minion configuration file.
A Minion's Grains are reported to the Master when the minion service starts, so after editing the Minion configuration we must restart the minion service. The /etc/salt/minion configuration file contains some commented-out lines showing how to define Grains. Just fill in your own key/value pairs following the same format; note that SaltStack configuration files use YAML by default:
# Custom static grains for this minion can be specified here and used in SLS
# files just like all other grains. This example sets 4 custom grains, with
# the 'roles' grain having two values that can be matched against.
#grains:
# roles:
# - webserver
# - memcache
# deployment: datacenter4
# cabinet: 13
# cab_u: 14-15
To keep the Minion's Grains in one place, you can copy these commented lines into the minion.d/grains file.
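A standalone grains file could look like the sketch below (the role names and values are illustrative, not from this tutorial):

```yaml
# /etc/salt/minion.d/grains -- custom static grains for this minion.
# Files under minion.d/ are merged into the minion configuration,
# so the top-level 'grains:' key is still required here.
grains:
  roles:
    - webserver
    - memcache
  deployment: datacenter4
```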
Custom grains, configured on the client:
[root@nb1 ~]# vi /etc/salt/minion
# Custom static grains for this minion can be specified here and used in SLS
# files just like all other grains. This example sets 4 custom grains, with
# the 'roles' grain having two values that can be matched against.
grains:
  roles:
    - nginx
  env:
    - test
  myname:
    - hadron
# deployment: datacenter4
# cabinet: 13
# cab_u: 14-15
Restart salt-minion (the transcript below does this by killing the old processes; on CentOS 7 you could also use systemctl restart salt-minion):
[root@nb1 ~]# ps -aux|grep salt-minion
root 38792 0.0 0.1 231928 15388 pts/0 S 02:32 0:00 /usr/bin/python /usr/bin/salt-minion restart
root 38795 0.5 0.3 547648 28872 pts/0 Sl 02:32 0:00 /usr/bin/python /usr/bin/salt-minion restart
root 43928 0.3 0.1 231928 15384 pts/0 S 02:34 0:00 /usr/bin/python /usr/bin/salt-minion restart
root 43933 1.8 0.3 547648 28784 pts/0 Sl 02:34 0:00 /usr/bin/python /usr/bin/salt-minion restart
root 45693 0.0 0.0 112648 960 pts/0 S+ 02:34 0:00 grep --color=auto salt-minion
root 50604 0.0 0.1 231928 15384 pts/0 S Aug17 0:00 /usr/bin/python /usr/bin/salt-minion start
root 50607 0.0 0.3 760916 29024 pts/0 Sl Aug17 0:48 /usr/bin/python /usr/bin/salt-minion start
root 92074 0.0 0.1 231928 15388 pts/0 S 01:58 0:00 /usr/bin/python /usr/bin/salt-minion restart
root 92077 0.0 0.3 547916 26832 pts/0 Sl 01:58 0:01 /usr/bin/python /usr/bin/salt-minion restart
[root@nb1 ~]# kill 38792 43928 45693 50604
-bash: kill: (45693) - No such process
[root@nb1 ~]# ps -aux|grep salt-minion
root 43933 1.2 0.3 547648 28784 pts/0 Sl 02:34 0:00 /usr/bin/python /usr/bin/salt-minion restart
root 46529 0.0 0.0 112648 956 pts/0 S+ 02:35 0:00 grep --color=auto salt-minion
root 92074 0.0 0.1 231928 15388 pts/0 S 01:58 0:00 /usr/bin/python /usr/bin/salt-minion restart
root 92077 0.0 0.3 547916 26832 pts/0 Sl 01:58 0:02 /usr/bin/python /usr/bin/salt-minion restart
[1] Terminated salt-minion start
[3]- Terminated salt-minion restart
[4]+ Terminated salt-minion restart
[root@nb1 ~]# kill 92077 92074 43933
-bash: kill: (43933) - No such process
[root@nb1 ~]# ps -aux|grep salt-minion
root 48215 0.0 0.0 112648 960 pts/0 S+ 02:36 0:00 grep --color=auto salt-minion
[2]+ Terminated salt-minion restart
[root@nb1 ~]# salt-minion restart &
[1] 49052
[root@nb1 ~]#
Fetch the grains from the server side:
[root@nb0 ~]# salt 'nb1' grains.item role env myname
nb1:
----------
env:
- test
myname:
- hadron
role:
- nginx
[root@nb0 ~]#
[root@nb0 ~]# salt 'nb1' grains.item role
nb1:
----------
role:
- nginx
[root@nb0 ~]#
Note: grains are very handy for remote execution, since we can operate on minions by grain values. For example, if we set the role grain to nginx on all web servers, we can then operate on the nginx servers as a batch:
[root@nb0 ~]# salt -G role:nginx cmd.run 'hostname'
nb1:
nb1
[root@nb0 ~]#
[root@nb0 ~]# salt -G os:CentOS cmd.run 'hostname'
nb1:
nb1
nb0:
nb0
nb2:
nb2
[root@nb0 ~]#
8.2 Pillar
Unlike grains, pillar is defined on the master, and holds information defined for particular minions. Sensitive data such as passwords can be stored in pillar, and you can define variables there as well.
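Once defined, pillar values can be referenced from state files via Jinja. A hedged sketch (the pillar keys conf and myname come from the test.sls defined later in this section; the state id pillar-demo is made up):

```yaml
# Sketch: using pillar values inside a state file via Jinja templating.
pillar-demo:
  file.managed:
    - name: {{ pillar['conf'] }}
    - contents: "owner: {{ pillar['myname'] }}"
```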
(1) Configure pillar on the master
[root@nb0 ~]# vim /etc/salt/master
Find the following lines,
#pillar_roots:
# base:
# - /srv/pillar
#
remove the # signs so they read:
pillar_roots:
  base:
    - /srv/pillar
[root@nb0 ~]# mkdir /srv/pillar
Create a custom pillar file with the following content:
[root@nb0 ~]# vim /srv/pillar/test.sls
[root@nb0 ~]# cat /srv/pillar/test.sls
conf: /etc/test123.conf
myname: hadron
[root@nb0 ~]#
Create the top (entry) file with the following content:
[root@nb0 ~]# vim /srv/pillar/top.sls
[root@nb0 ~]# cat /srv/pillar/top.sls
base:
  'nb1':
    - test
[root@nb0 ~]#
Restart the master:
[root@nb0 ~]# ps -aux|grep salt-master
root 29178 0.0 0.3 313076 26816 pts/3 S+ Aug17 0:00 /usr/bin/python /usr/bin/salt-master start
root 29242 0.5 0.4 407192 32856 pts/3 Sl+ Aug17 1:24 /usr/bin/python /usr/bin/salt-master start
root 29243 0.0 0.2 395004 22692 pts/3 Sl+ Aug17 0:00 /usr/bin/python /usr/bin/salt-master start
root 29244 0.0 0.3 395004 24292 pts/3 Sl+ Aug17 0:00 /usr/bin/python /usr/bin/salt-master start
root 29245 0.0 0.2 313076 22016 pts/3 S+ Aug17 0:00 /usr/bin/python /usr/bin/salt-master start
root 29250 0.0 0.3 1204752 28560 pts/3 Sl+ Aug17 0:01 /usr/bin/python /usr/bin/salt-master start
root 29251 0.0 0.3 1205064 28624 pts/3 Sl+ Aug17 0:01 /usr/bin/python /usr/bin/salt-master start
root 29252 0.0 0.3 1205068 28596 pts/3 Sl+ Aug17 0:01 /usr/bin/python /usr/bin/salt-master start
root 29255 0.0 0.3 1205068 28648 pts/3 Sl+ Aug17 0:01 /usr/bin/python /usr/bin/salt-master start
root 29258 0.0 0.3 1205072 28584 pts/3 Sl+ Aug17 0:01 /usr/bin/python /usr/bin/salt-master start
root 29261 0.0 0.2 689932 22668 pts/3 Sl+ Aug17 0:00 /usr/bin/python /usr/bin/salt-master start
root 93354 0.0 0.0 112652 960 pts/2 S+ 03:07 0:00 grep --color=auto salt-master
[root@nb0 ~]# kill 29178 29242 29243 29244 29245 29250 29251 29252 29255 29258 29261
Start it in a separate terminal:
[root@nb0 ~]# salt-master start
Note: after changing the pillar configuration files, refresh pillar to pick up the new state:
[root@nb0 ~]# salt '*' saltutil.refresh_pillar
nb1:
True
nb0:
True
nb2:
True
[root@nb0 ~]#
Verify:
[root@nb0 ~]# salt 'nb1' pillar.items
nb1:
----------
conf:
/etc/test123.conf
myname:
hadron
[root@nb0 ~]# salt 'nb1' pillar.item conf
nb1:
----------
conf:
/etc/test123.conf
[root@nb0 ~]# salt 'nb1' pillar.item myname
nb1:
----------
myname:
hadron
[root@nb0 ~]#
Pillar can likewise be used as a salt matching target:
[root@nb0 ~]# salt -I 'conf:/etc/test123.conf' test.ping
nb1:
True
[root@nb0 ~]# salt -I 'conf:/etc/test123.conf' cmd.run 'w'
nb1:
03:17:08 up 67 days, 14:25, 1 user, load average: 0.02, 0.12, 0.24
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 hadron Mon21 24:44 2.38s 0.16s -bash
[root@nb0 ~]#
9. Configuration management: installing Apache
The following demo installs Apache remotely via yum. Steps:
(1) Configuration
[root@nb0 ~]# vim /etc/salt/master
Find the following:
# file_roots:
# base:
# - /srv/salt/
Remove the # comments:
file_roots:
  base:
    - /srv/salt
[root@nb0 ~]# mkdir /srv/salt
[root@nb0 ~]# vim /srv/salt/top.sls
[root@nb0 ~]# cat /srv/salt/top.sls
base:
  'nb1':
    - apache
[root@nb0 ~]#
[root@nb0 ~]# vim /srv/salt/apache.sls
[root@nb0 ~]# cat /srv/salt/apache.sls
apache-service:
  pkg.installed:
    - names:
      - httpd
      - httpd-devel
  service.running:
    - name: httpd
    - enable: True
[root@nb0 ~]#
Note: apache-service is a custom id. pkg.installed is the package-installation function, followed by the names of the packages to install. service.running is also a function; it ensures the named service is running, and enable: True means start on boot.
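A common next step is to tie the service to its configuration file so httpd restarts when the config changes. A hedged sketch extending the apache state (the source path salt://apache/httpd.conf is hypothetical):

```yaml
# Sketch: manage httpd.conf and restart httpd whenever it changes.
apache-config:
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: salt://apache/httpd.conf   # hypothetical file under /srv/salt

apache-service:
  service.running:
    - name: httpd
    - enable: True
    - watch:
      - file: apache-config
```

The watch requisite behaves like require, but additionally restarts the service when the watched state reports changes.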
(2) Restart the service
[root@nb0 ~]# salt-master start
^C[WARNING ] Stopping the Salt Master
[WARNING ] Stopping the Salt Master
[WARNING ] Stopping the Salt Master
Exiting on Ctrl-c
Exiting on Ctrl-c
Exiting on Ctrl-c
You have mail in /var/spool/mail/root
[root@nb0 ~]# salt-master start
[root@nb0 ~]# salt 'nb1' state.highstate
nb1:
----------
ID: apache-service
Function: pkg.installed
Name: httpd
Result: True
Comment: Package httpd is already installed.
Started: 03:38:36.137884
Duration: 1250.258 ms
Changes:
----------
ID: apache-service
Function: pkg.installed
Name: httpd-devel
Result: True
Comment: The following packages were installed/updated: httpd-devel
Started: 03:38:37.388313
Duration: 33668.276 ms
Changes:
----------
apr-devel:
----------
new:
1.4.8-3.el7
old:
apr-util-devel:
----------
new:
1.5.2-6.el7
old:
cyrus-sasl:
----------
new:
2.1.26-20.el7_2
old:
cyrus-sasl-devel:
----------
new:
2.1.26-20.el7_2
old:
httpd:
----------
new:
2.4.6-45.el7.centos.4
old:
2.4.6-45.el7.centos
httpd-devel:
----------
new:
2.4.6-45.el7.centos.4
old:
httpd-tools:
----------
new:
2.4.6-45.el7.centos.4
old:
2.4.6-45.el7.centos
openldap-devel:
----------
new:
2.4.40-13.el7
old:
----------
ID: apache-service
Function: service.running
Name: httpd
Result: True
Comment: Service httpd has been enabled, and is running
Started: 03:39:11.080192
Duration: 6685.669 ms
Changes:
----------
httpd:
True
Summary
------------
Succeeded: 3 (changed=2)
Failed: 0
------------
Total states run: 3
[root@nb0 ~]#
This shows the remote Apache installation succeeded. Check on the client:
[root@nb1 ~]# systemctl status httpd.service
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2017-08-18 03:39:17 EDT; 2min 10s ago
Docs: man:httpd(8)
man:apachectl(8)
Main PID: 11613 (httpd)
Status: "Total requests: 0; Current requests/sec: 0; Current traffic: 0 B/sec"
CGroup: /system.slice/httpd.service
├─11613 /usr/sbin/httpd -DFOREGROUND
├─11715 /usr/sbin/httpd -DFOREGROUND
├─11716 /usr/sbin/httpd -DFOREGROUND
├─11717 /usr/sbin/httpd -DFOREGROUND
├─11718 /usr/sbin/httpd -DFOREGROUND
└─11719 /usr/sbin/httpd -DFOREGROUND
Aug 18 03:39:16 nb1 systemd[1]: Starting The Apache HTTP Server...
Aug 18 03:39:16 nb1 httpd[11613]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 192.168.1.161. Set the 'ServerN...his message
Aug 18 03:39:17 nb1 systemd[1]: Started The Apache HTTP Server.
Hint: Some lines were ellipsized, use -l to show in full.
[root@nb1 ~]#
10. State files
The core of salt states is the sls file, which uses YAML syntax to define key/value data.
The root directory for sls files is defined in the master configuration file, /srv/salt by default; the directory does not exist on the OS and must be created by hand.
In salt, salt:// stands for this root path; for example, salt://top.sls refers to /srv/salt/top.sls.
The top file of the states tree is also defined in the master configuration, top.sls by default; it is the entry point of the states. A simple sls file looks like this:
apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache
Explanation: this SLS data ensures that the package named "apache" is installed and the "apache" service is running.
- The first line is the ID Declaration: the name of the thing being managed.
- The pkg.installed and service.running lines are State Declarations, using the pkg and service states respectively. The pkg state manages packages through the system's package manager, and the service state manages system services (daemons). The functions attached to pkg and service define what should happen: here, the package should be installed and the service should be running.
- The require line is a Requisite Statement: the apache service is started only once the apache package has been installed successfully.
salt-master manages repeated instructions, service state, and so on, on the minions by writing sls configuration files.
[root@nb0 ~]# mkdir -p /srv/salt/base
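A top file can map several targets at once. A hedged sketch of a fuller top.sls (the common state applied to every minion is hypothetical):

```yaml
# Sketch of /srv/salt/top.sls mapping multiple targets
# in the base environment.
base:
  '*':
    - common    # hypothetical state applied to all minions
  'nb1':
    - apache
```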
11. File and directory management
11.1 File management
(1) Server configuration
[root@nb0 ~]# vim /srv/salt/top.sls
[root@nb0 ~]# cat /srv/salt/top.sls
base:
  'nb1':
    - apache
  'nb2':
    - filetest
[root@nb0 ~]#
Create the filetest.sls file:
[root@nb0 ~]# vim /srv/salt/filetest.sls
[root@nb0 ~]# cat /srv/salt/filetest.sls
file-test:
  file.managed:
    - name: /tmp/filetest.txt
    - source: salt://test/123/1.txt
    - user: root
    - group: root
    - mode: 644
[root@nb0 ~]#
Note: file-test on the first line is a custom name for this configuration section; it can be referenced from other sections. source specifies where the file is copied from; the test directory here corresponds to /srv/salt/test. name specifies the file to generate on the remote client.
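file.managed can also render templates instead of copying a file verbatim. A hedged sketch using a Jinja template (the source path salt://test/motd.jinja and the owner variable are made up for illustration):

```yaml
# Sketch: render a Jinja template into place on the minion.
motd-file:
  file.managed:
    - name: /etc/motd
    - source: salt://test/motd.jinja   # hypothetical template file
    - template: jinja
    - defaults:
        owner: hadron   # available in the template as {{ owner }}
```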
Create the source file to be tested:
[root@nb0 ~]# mkdir -p /srv/salt/test/123/
[root@nb0 ~]# echo "file test" > /srv/salt/test/123/1.txt
[root@nb0 ~]#
Run the command:
[root@nb0 ~]# salt 'nb2' state.highstate
nb2:
----------
ID: file-test
Function: file.managed
Name: /tmp/filetest.txt
Result: True
Comment: File /tmp/filetest.txt updated
Started: 03:59:13.664379
Duration: 505.159 ms
Changes:
----------
diff:
New file
mode:
0644
Summary
------------
Succeeded: 1 (changed=1)
Failed: 0
------------
Total states run: 1
[root@nb0 ~]#
(2) Verify on the client
[root@nb2 ~]# cat /tmp/filetest.txt
file test
[root@nb2 ~]#
11.2 Directory management
(1) Edit the earlier top.sls file,
changing it to the following:
[root@nb0 ~]# vim /srv/salt/top.sls
[root@nb0 ~]# cat /srv/salt/top.sls
base:
  'nb1':
    - apache
  'nb2':
    - filedir
[root@nb0 ~]#
(2) Create the filedir.sls file:
[root@nb0 ~]# vim /srv/salt/filedir.sls
[root@nb0 ~]# cat /srv/salt/filedir.sls
file-dir:
  file.recurse:
    - name: /tmp/testdir
    - source: salt://test/123
    - user: root
    - file_mode: 644
    - dir_mode: 755
    - mkdir: True
    - clean: True
[root@nb0 ~]#
clean: True means that when a file or directory is deleted from the source, it is also deleted from the target; otherwise deletions are not propagated. It defaults to False.
(3) The source directory to test
/srv/salt/test/123 already exists and contains one file:
[root@nb0 ~]# ls /srv/salt/test/123
1.txt
[root@nb0 ~]# cat /srv/salt/test/123/1.txt
file test
(4) Run the command
[root@nb0 ~]# salt 'nb2' state.highstate
nb2:
----------
ID: file-dir
Function: file.recurse
Name: /tmp/testdir
Result: True
Comment: Recursively updated /tmp/testdir
Started: 01:38:38.129930
Duration: 392.34 ms
Changes:
----------
/tmp/testdir/1.txt:
----------
diff:
New file
mode:
0644
Summary
------------
Succeeded: 1 (changed=1)
Failed: 0
------------
Total states run: 1
[root@nb0 ~]#
(5) Verify on the client
[root@nb2 ~]# ls /tmp
filetest.txt Jetty_0_0_0_0_16010_master____.6nvknp Jetty_localhost_40934_datanode____.k20t6j
hadoop-root-journalnode.pid Jetty_0_0_0_0_16030_regionserver____.45q9os Jetty_nb2_50070_hdfs____xjgcrn
hadoop-unjar4050493136279788948 Jetty_0_0_0_0_8042_node____19tj0x systemd-private-bd8f0cf7c19147208fb1f2948ed5483f-vmtoolsd.service-LQvsNz
hsperfdata_root Jetty_0_0_0_0_8480_journal____.8g4awa testdir
[root@nb2 ~]# ls /tmp/testdir/
1.txt
[root@nb2 ~]#
(6) Test adding and removing
On the server, create a newDir directory containing a file a, and delete the 1.txt file:
[root@nb0 ~]# cd /srv/salt/test/123
[root@nb0 123]# mkdir newDir
[root@nb0 123]# echo "Hello" > newDir/a
[root@nb0 123]# rm -rf 1.txt
(7) Run the command again
[root@nb0 ~]# salt 'nb2' state.highstate
nb2:
----------
ID: file-dir
Function: file.recurse
Name: /tmp/testdir
Result: True
Comment: Recursively updated /tmp/testdir
Started: 01:45:59.688250
Duration: 442.358 ms
Changes:
----------
/tmp/testdir/newDir:
----------
/tmp/testdir/newDir:
New Dir
/tmp/testdir/newDir/a:
----------
diff:
New file
mode:
0644
removed:
- /tmp/testdir/1.txt
Summary
------------
Succeeded: 1 (changed=1)
Failed: 0
------------
Total states run: 1
[root@nb0 ~]#
(8) Verify again
[root@nb2 ~]# ls /tmp/testdir/
newDir
[root@nb2 ~]# ls /tmp/testdir/newDir/
a
[root@nb2 ~]#
Note that for newDir to be created successfully, it must contain at least one file; otherwise the client will not create the empty newDir directory.
12. Remote execution
We have already run test.ping and cmd.run remotely; the part before the dot is the module and the part after it is the function. Still, this is rather ad hoc; below we look in detail at running commands and scripts remotely in a more structured way.
12.1 Running commands remotely
(1) Edit the earlier top.sls file:
[root@nb0 ~]# vim /srv/salt/top.sls
[root@nb0 ~]# cat /srv/salt/top.sls
base:
  'nb1':
    - cmdtest
  'nb2':
    - filedir
[root@nb0 ~]#
(2) Create the cmdtest.sls file:
[root@nb0 ~]# vim /srv/salt/cmdtest.sls
[root@nb0 ~]# cat /srv/salt/cmdtest.sls
cmd-test:
  cmd.run:
    - onlyif: test -f /tmp/1.txt
    - names:
      - touch /tmp/cmdtest.txt
      - mkdir /tmp/cmdtest
    - user: root
[root@nb0 ~]#
The onlyif condition means the following commands run only if /tmp/1.txt exists; unless can be used instead and means exactly the opposite.
[root@nb1 ~]# echo "hello" > /tmp/1.txt
[root@nb1 ~]# cat /tmp/1.txt
hello
[root@nb1 ~]#
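For comparison with onlyif, the opposite condition unless skips a command once its effect is already in place. A sketch (the state id and paths are made up for illustration):

```yaml
# Sketch: run mkdir only while the directory does not exist yet,
# making the state effectively idempotent.
cmd-unless:
  cmd.run:
    - name: mkdir /tmp/only-once
    - unless: test -d /tmp/only-once
```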
(3) Run the command
[root@nb0 ~]# salt 'nb1' state.highstate
nb1:
----------
ID: cmd-test
Function: cmd.run
Name: touch /tmp/cmdtest.txt
Result: True
Comment: Command "touch /tmp/cmdtest.txt" run
Started: 02:23:07.347360
Duration: 565.866 ms
Changes:
----------
pid:
7209
retcode:
0
stderr:
stdout:
----------
ID: cmd-test
Function: cmd.run
Name: mkdir /tmp/cmdtest
Result: True
Comment: Command "mkdir /tmp/cmdtest" run
Started: 02:23:07.913505
Duration: 208.682 ms
Changes:
----------
pid:
7245
retcode:
0
stderr:
stdout:
Summary
------------
Succeeded: 2 (changed=2)
Failed: 0
------------
Total states run: 2
[root@nb0 ~]#
(4) Verify
[root@nb1 ~]# ll /tmp|grep cmd
drwxr-xr-x 2 root root 6 Aug 21 02:23 cmdtest
-rw-r--r-- 1 root root 0 Aug 21 02:23 cmdtest.txt
[root@nb1 ~]#
12.2 Running scripts remotely
(1) Edit the earlier top.sls file:
[root@nb0 ~]# vim /srv/salt/top.sls
[root@nb0 ~]# cat /srv/salt/top.sls
base:
  'nb1':
    - cmdtest
  'nb2':
    - shelltest
[root@nb0 ~]#
(2) Create the shelltest.sls file:
[root@nb0 ~]# vim /srv/salt/shelltest.sls
[root@nb0 ~]# cat /srv/salt/shelltest.sls
shell-test:
  cmd.script:
    - source: salt://test/1.sh
    - user: root
[root@nb0 ~]#
(3) Create the 1.sh script file:
[root@nb0 ~]# vim /srv/salt/test/1.sh
[root@nb0 ~]# cat /srv/salt/test/1.sh
#!/bin/bash
touch /tmp/shelltest.txt
if [ -d /tmp/shelltest ]
then
    rm -rf /tmp/shelltest
else
    mkdir /tmp/shelltest
fi
[root@nb0 ~]#
(4) Run the command
[root@nb0 ~]# salt 'nb2' state.highstate
nb2:
----------
ID: shell-test
Function: cmd.script
Result: True
Comment: Command 'shell-test' run
Started: 02:35:33.341722
Duration: 585.072 ms
Changes:
----------
pid:
48228
retcode:
0
stderr:
stdout:
Summary
------------
Succeeded: 1 (changed=1)
Failed: 0
------------
Total states run: 1
[root@nb0 ~]#
(5) Verify on the client
[root@nb2 ~]# ll /tmp|grep shell
drwxr-xr-x 2 root root 6 Aug 21 02:35 shelltest
-rw-r--r-- 1 root root 0 Aug 21 02:35 shelltest.txt
[root@nb2 ~]#
The example above runs a script remotely; if we want to install LAMP or LNMP with one command, we just replace the 1.sh script here with a one-shot installation script.
13. Managing cron jobs
13.1 Creating a cron entry
(1) Edit the top.sls file
[root@nb0 ~]# vim /srv/salt/top.sls
[root@nb0 ~]# cat /srv/salt/top.sls
base:
  'nb1':
    - crontest
  'nb2':
    - shelltest
[root@nb0 ~]#
(2) Create the crontest.sls file:
[root@nb0 ~]# vim /srv/salt/crontest.sls
[root@nb0 ~]# cat /srv/salt/crontest.sls
cron-test:
  cron.present:
    - name: /bin/touch /tmp/111.txt
    - user: root
    - minute: '*'
    - hour: 20
    - daymonth: 1-10
    - month: '3,5'
    - dayweek: '*'
[root@nb0 ~]#
Note that * must be enclosed in single quotes. We could also manage cron with the file.managed module, since the system's cron entries exist as configuration files.
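Managing a cron entry as a file could look like this sketch (the /etc/cron.d path and the entry's contents are illustrative, not from the tutorial):

```yaml
# Sketch: manage a cron entry via file.managed instead of cron.present.
# Note /etc/cron.d entries include a user field after the schedule.
ntp-cron-file:
  file.managed:
    - name: /etc/cron.d/ntpsync
    - user: root
    - group: root
    - mode: 644
    - contents: '00 03 * * * root ntpdate 192.168.1.160'
```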
(3) Run the command
[root@nb0 ~]# salt 'nb1' state.highstate
nb1:
----------
ID: cron-test
Function: cron.present
Name: /bin/touch /tmp/111.txt
Result: True
Comment: Cron /bin/touch /tmp/111.txt added to root's crontab
Started: 02:47:51.454886
Duration: 1478.963 ms
Changes:
----------
root:
/bin/touch /tmp/111.txt
Summary
------------
Succeeded: 1 (changed=1)
Failed: 0
------------
Total states run: 1
[root@nb0 ~]#
(4) Verify on the client
[root@nb1 ~]# crontab -l
00 03 * * * ntpdate 192.168.1.81
00 03 * * * ntpdate 192.168.1.81
00 03 * * * ntpdate 192.168.1.81
00 03 * * * ntpdate 192.168.1.81
00 03 * * * ntpdate 192.168.1.160
# Lines below here are managed by Salt, do not edit
# SALT_CRON_IDENTIFIER:/bin/touch /tmp/111.txt
* 20 1-10 3,5 * /bin/touch /tmp/111.txt
[root@nb1 ~]#
13.2 Deleting a cron entry
(1) Edit crontest.sls, changing cron.present: to cron.absent:. Note that the two cannot coexist: to delete a cron entry, the earlier present declaration must be replaced or removed.
[root@nb0 ~]# vim /srv/salt/crontest.sls
[root@nb0 ~]# cat /srv/salt/crontest.sls
cron-test:
  cron.absent:
    - name: /bin/touch /tmp/111.txt
    - user: root
    - minute: '*'
    - hour: 20
    - daymonth: 1-10
    - month: '3,5'
    - dayweek: '*'
[root@nb0 ~]#
(2) Run the command
[root@nb0 ~]# salt 'nb1' state.highstate
nb1:
----------
ID: cron-test
Function: cron.absent
Name: /bin/touch /tmp/111.txt
Result: True
Comment: Cron /bin/touch /tmp/111.txt removed from root's crontab
Started: 02:56:03.583557
Duration: 29.663 ms
Changes:
----------
root:
/bin/touch /tmp/111.txt
Summary
------------
Succeeded: 1 (changed=1)
Failed: 0
------------
Total states run: 1
[root@nb0 ~]#
(3) Verify on the client
[root@nb1 ~]# crontab -l
00 03 * * * ntpdate 192.168.1.81
00 03 * * * ntpdate 192.168.1.81
00 03 * * * ntpdate 192.168.1.81
00 03 * * * ntpdate 192.168.1.81
00 03 * * * ntpdate 192.168.1.160
# Lines below here are managed by Salt, do not edit
[root@nb1 ~]#
14. Common SaltStack commands
14.1 Copy a file to a client
[root@nb0 ~]# salt 'nb1' cp.get_file salt://apache.sls /tmp/cp.txt
nb1:
/tmp/cp.txt
[root@nb0 ~]#
[root@nb1 ~]# cat /tmp/cp.txt
apache-service:
  pkg.installed:
    - names:
      - httpd
      - httpd-devel
  service.running:
    - name: httpd
    - enable: True
[root@nb1 ~]#
14.2 Copy a directory to a client
[root@nb0 ~]# salt 'nb1' cp.get_dir salt://test /tmp
nb1:
- /tmp/test/1.sh
- /tmp/test/123/newDir/a
[root@nb0 ~]#
[root@nb1 ~]# ll /tmp/test/
total 4
drwxr-xr-x 3 root root 20 Aug 21 03:02 123
-rw-r--r-- 1 root root 126 Aug 21 03:02 1.sh
[root@nb1 ~]#
14.3 List alive clients
[root@nb0 ~]# salt-run manage.up
- nb0
- nb1
- nb2
[root@nb0 ~]#
14.4 Run a server-side script from the command line
[root@nb0 ~]# vim /srv/salt/test/shell.sh
[root@nb0 ~]# cat /srv/salt/test/shell.sh
#! /bin/bash
echo "hadron.cn" > /tmp/shell.txt
[root@nb0 ~]#
[root@nb0 ~]# salt 'nb2' cmd.script salt://test/shell.sh
nb2:
----------
pid:
86257
retcode:
0
stderr:
stdout:
[root@nb0 ~]#
[root@nb2 ~]# cat /tmp/shell.txt
hadron.cn
[root@nb2 ~]#
15. Troubleshooting
[root@nb0 ~]# salt-master start
[ERROR ] An extra return was detected from minion nb1, please verify the minion, this could be a replay attack
[ERROR ] An extra return was detected from minion nb1, please verify the minion, this could be a replay attack
Running one command returns two results:
[root@nb0 ~]# salt '*' cmd.run 'df -h'
nb1:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 48G 4.3G 44G 9% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 24K 3.9G 1% /dev/shm
tmpfs 3.9G 385M 3.5G 10% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 139M 876M 14% /boot
/dev/mapper/cl-home 24G 33M 24G 1% /home
tmpfs 781M 0 781M 0% /run/user/0
nb1:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 48G 4.3G 44G 9% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 24K 3.9G 1% /dev/shm
tmpfs 3.9G 385M 3.5G 10% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 139M 876M 14% /boot
/dev/mapper/cl-home 24G 33M 24G 1% /home
tmpfs 781M 0 781M 0% /run/user/0
nb0:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 48G 27G 22G 55% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 16K 3.9G 1% /dev/shm
tmpfs 3.9G 394M 3.5G 11% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 139M 876M 14% /boot
/dev/mapper/cl-home 24G 33M 24G 1% /home
tmpfs 781M 0 781M 0% /run/user/0
/dev/loop0 7.8G 7.8G 0 100% /var/ftp/iso-home
[root@nb0 ~]#
The cause is that salt-minion was started twice on the node (nb1 here):
[root@nb1 ~]# salt-minion start
^C[WARNING ] Stopping the Salt Minion
[WARNING ] Exiting on Ctrl-c
[root@nb1 ~]#
Pressing Ctrl+C to stop the second salt-minion instance fixes it.