1. SaltStack Environment Preparation
Node 1: linux-node1, acting as both the salt-master and a salt-minion
[root@linux-node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.7 linux-node1
10.0.0.8 linux-node2
[root@linux-node1 ~]# cat /etc/redhat-release
CentOS release 6.7 (Final)
[root@linux-node1 ~]# uname -m
x86_64
[root@linux-node1 ~]# uname -r
2.6.32-573.el6.x86_64
[root@linux-node1 ~]# uname -a
Linux linux-node1 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Node 2: linux-node2, acting only as a salt-minion
[root@linux-node2 ~]# uname -r
2.6.32-573.el6.x86_64
[root@linux-node2 ~]# uname -m
x86_64
[root@linux-node2 ~]# uname -a
Linux linux-node2 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@linux-node2 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.7 linux-node1
10.0.0.8 linux-node2
2. SaltStack Overview
2.1 Salt's three run modes
- Local execution (masterless)
- Master/Minion
- Salt SSH
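Each run mode has its own entry command; a quick sketch, assuming the salt-master, salt-minion, and salt-ssh packages are installed:

```bash
# Local (masterless) mode: execute a module function on this host only
salt-call --local test.ping

# Master/Minion mode: the master pushes jobs to connected minions
salt '*' test.ping

# Salt SSH mode: agentless; targets are defined in /etc/salt/roster
salt-ssh '*' test.ping
```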
2.2 Salt's three main functions
- Remote execution
- Configuration management (state management)
- Cloud management: Aliyun, AWS, and OpenStack all expose ready-made APIs, so salt-cloud can be used to manage cloud hosts
3. Installing, Configuring, and Starting Salt
yum is used here, and is also recommended for production: install the minion while provisioning the OS, or install it later via Salt SSH (covered in a later section).
- linux-node1
[root@linux-node1 ~]#rpm -ivh http://mirrors.aliyun.com/epel/epel-release-latest-6.noarch.rpm
[root@linux-node1 ~]#yum install -y salt-master salt-minion
[root@linux-node1 ~]# chkconfig salt-master on
[root@linux-node1 ~]# chkconfig salt-minion on
- linux-node2
[root@linux-node2 ~]#rpm -ivh http://mirrors.aliyun.com/epel/epel-release-latest-6.noarch.rpm
[root@linux-node2 ~]#yum install -y salt-minion
[root@linux-node2 ~]# chkconfig salt-minion on
Start the salt-master:
[root@linux-node1 ~]# /etc/init.d/salt-master start
Starting salt-master daemon: [ OK ]
Edit both salt-minions' configuration files to point at the salt-master host. An IP address works here; if you have internal DNS you can use a hostname instead, which makes migrating the salt-master easier later:
[root@linux-node1 ~]#sed -i '16s#\#master: salt#master: 10.0.0.7#g' /etc/salt/minion
[root@linux-node2 ~]#sed -i '16s#\#master: salt#master: 10.0.0.7#g' /etc/salt/minion
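The sed commands above key on line 16 of /etc/salt/minion, which shifts between Salt versions; a line-number-independent variant plus a verification step (a sketch of the same change):

```shell
# Replace the commented default wherever it appears in the file
sed -i 's@^#master: salt@master: 10.0.0.7@' /etc/salt/minion
# Confirm the active setting
grep '^master:' /etc/salt/minion
```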
Note: the id setting in the minion configuration below is important. In production it can be tied to the hostname (a hostname naming policy is covered later). If id is not set, the FQDN is used by default:
[root@linux-node1 ~]# sed -n '68,74p' /etc/salt/minion
# Explicitly declare the id for this minion to use, if left commented the id
# will be the hostname as returned by the python call: socket.getfqdn()
# Since salt uses detached ids it is possible to run multiple minions on the
# same machine but with different ids, this can be useful for salt compute
# clusters.
#id:
Start the salt-master and the salt-minion:
[root@linux-node1 ~]# /etc/init.d/salt-master start
[root@linux-node1 ~]# /etc/init.d/salt-minion start
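With both daemons up, the master should be listening on its two default ZeroMQ ports, 4505 (publish_port) and 4506 (ret_port); a quick check:

```bash
[root@linux-node1 ~]# netstat -lntp | grep -E ':(4505|4506)'
```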
4. SaltStack Authentication
After a minion starts for the first time, its private and public keys appear on the minion side, and the minion sends its public key to the master:
[root@linux-node2 minion]# pwd
/etc/salt/pki/minion
[root@linux-node2 minion]# ls
minion.pem minion.pub
The master also generates a key pair when it starts; at this point the master needs to approve the minions' requests:
[root@linux-node1 master]# pwd
/etc/salt/pki/master
[root@linux-node1 master]# ls
master.pem master.pub minions minions_autosign minions_denied minions_pre minions_rejected
Use salt-key to list keys in each state:
[root@linux-node1 pki]# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:
linux-node1
linux-node2
Rejected Keys:
Accept the two new keys. -A accepts all pending keys; -a accepts a single minion, and wildcard matching is also supported. See salt-key --help for the full list of options:
[root@linux-node1 pki]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
linux-node1
linux-node2
Proceed? [n/Y] y
Key for minion linux-node1 accepted.
Key for minion linux-node2 accepted.
[root@linux-node1 pki]# salt-key
Accepted Keys:
linux-node1
linux-node2
Denied Keys:
Unaccepted Keys:
Rejected Keys:
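Besides -A, salt-key can act on individual keys; a few common invocations for reference (-y skips the confirmation prompt):

```bash
[root@linux-node1 pki]# salt-key -a linux-node2 -y   # accept a single pending key
[root@linux-node1 pki]# salt-key -d linux-node2 -y   # delete a key
[root@linux-node1 pki]# salt-key -L                  # same listing as bare salt-key
```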
The accepted minions' key files can now be seen in the master's minions directory, named by minion id:
[root@linux-node1 master]# pwd
/etc/salt/pki/master
[root@linux-node1 master]# tree
.
├── master.pem
├── master.pub
├── minions
│ ├── linux-node1
│ └── linux-node2
├── minions_autosign
├── minions_denied
├── minions_pre
└── minions_rejected
5 directories, 4 files
The files under the master's minions directory are in fact the minions' public keys:
[root@linux-node1 minions]# cat linux-node1
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA7tnEScZ0vLwevAwFCQp5
kADzCOcZ3pHc+zFVugnzGCxtrmymwgV0QFARSqGQU9eWL/vaY2hz8YIwmPIU5Ri2
j+A0l8K15q2X2hgKepiU+qZG1Xc9EeAX/DPD+qynxXCd9EGMH32U1nQxlbnOwHUH
dDUbfAXf6Mxm/8/5VqNEWnx8ymug6N2MAWvJbLn2+24jhMxjeJrJRxz4nVTqOa4y
cOHiPqdwCaAUc9ul/sOp6VFlE+TsRQ3mcOHbYCDy9NgGmz3GNAtsdr6LcfEvYq4q
q78DK6Y5i5eEKsVbDT8BBP5I9D8YwL8fymFB8LcTPiiRlwPaAvgL2KeL10C9Q1z6
cwIDAQAB
-----END PUBLIC KEY-----
The master also sends its own public key to the minion side:
[root@linux-node1 minion]# pwd
/etc/salt/pki/minion
[root@linux-node1 minion]# cat minion_master.pub
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAoPuOLwx+0cL+BKZZRmT4
JYhdGfC4ww5ku2Na8ZP4fPy73iZ5KXDG8z/fwsueHXssnsAgsY3EbyyjIa6Cx8Lh
a0T+N9U00olpHshOWUjy1kRmMjMYnveuU8cw0MDTZ327Ze6TEUfR9DbFCcz1uzCn
rCuCMUohtUA/ErwttAuERnaM5R7xZV4fG/eO8B0vXQv2nisJNIMRZbbCiaJTARir
ULqq8mpWIuqww3jZznef6R6WwhMCh+9vQTNVEXYropKQjm7cGgleQhUpRqPgtEw8
80qxybjMflOJZzOVTc1L72ah1s3unRReHU+olH+Zhxb2lb7/YpA2DoURf/b25M0h
6wIDAQAB
-----END PUBLIC KEY-----
5. Remote Execution with SaltStack
Use test.ping to verify connectivity between the master and the minions.
salt is the base command; '*' matches all minion hosts; test is the module and ping is a function of the test module. Single quotes and double quotes both work here:
[root@linux-node1 minion]# salt '*' test.ping
linux-node2:
True
linux-node1:
True
Use cmd.run to run a command remotely; cmd is the module and run is a function of the cmd module:
[root@linux-node1 minion]# salt '*' cmd.run 'w'
linux-node2:
21:19:48 up 16:54, 2 users, load average: 0.00, 0.00, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root tty1 - 29Feb16 62days 0.10s 0.10s -bash
root pts/2 10.0.0.1 19:30 39:12 0.04s 0.04s -bash
linux-node1:
21:19:48 up 17:05, 2 users, load average: 0.12, 0.03, 0.01
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root tty1 - 29Feb16 8days 0.15s 0.15s -bash
root pts/2 10.0.0.1 19:30 1.00s 0.89s 0.78s /usr/bin/python
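The '*' glob is only one of several targeting methods, and all of them work with any module. A sketch using the two hosts above:

```bash
[root@linux-node1 ~]# salt 'linux-node1' test.ping                   # exact minion ID
[root@linux-node1 ~]# salt -L 'linux-node1,linux-node2' test.ping   # explicit list
[root@linux-node1 ~]# salt -E 'linux-node[12]' test.ping            # PCRE regular expression
```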
6. Configuration Management
6.1 Enabling configuration management
Edit the salt-master configuration file and uncomment lines 416-418.
file_roots defines where the state files live; base is the base environment, which must exist. Multiple environments (test, dev, production, and so on) are supported, as covered later:
[root@linux-node1 minion]# sed -n '416,418p' /etc/salt/master
file_roots:
  base:
    - /srv/salt
[root@linux-node1 minion]# mkdir /srv/salt
[root@linux-node1 minion]# /etc/init.d/salt-master restart
6.2 Installing an Apache service
Write apache.sls:
[root@linux-node1 salt]# pwd
/srv/salt
[root@linux-node1 salt]# cat -A apache.sls
apache-install:$ # state ID
  pkg.installed:$ # pkg: module; installed: function
    - names:$ # list of package names
      - httpd$ # installs httpd via yum
      - httpd-devel$ # installs httpd-devel via yum
apache-service:$ # state ID
  service.running:$ # service: module; running: function
    - name: httpd$ # the service that service.running manages
    - enable: True$ # start on boot
    - reload: True$ # reload instead of restart on changes
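Before applying the state for real, you can ask for a dry run: with test=True the minions report what would change without changing anything:

```bash
[root@linux-node1 salt]# salt '*' state.sls apache test=True
```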
Apply the state file above. salt is the command; '*' matches all minions (matching methods are detailed later); state is the module, sls the function, and apache the state file to apply:
[root@linux-node1 salt]# salt '*' state.sls apache
linux-node2:
ID: apache-install
Function: pkg.installed
Name: httpd
Result: True
Comment: Package httpd is already installed.
Started: 23:26:15.045492
Duration: 2256.368 ms
Changes:
ID: apache-install
Function: pkg.installed
Name: httpd-devel
Result: True
Comment: Package httpd-devel is already installed.
Started: 23:26:17.302343
Duration: 1.577 ms
Changes:
ID: apache-service
Function: service.running
Name: httpd
Result: True
Comment: Service httpd is already enabled, and is in the desired state
Started: 23:26:17.305384
Duration: 137.522 ms
Changes:
Summary
Succeeded: 3
Failed: 0
Total states run: 3
linux-node1:
ID: apache-install
Function: pkg.installed
Name: httpd
Result: True
Comment: Package httpd is already installed.
Started: 23:26:15.152083
Duration: 2307.265 ms
Changes:
ID: apache-install
Function: pkg.installed
Name: httpd-devel
Result: True
Comment: Package httpd-devel is already installed.
Started: 23:26:17.459645
Duration: 1.052 ms
Changes:
ID: apache-service
Function: service.running
Name: httpd
Result: True
Comment: Service httpd is already enabled, and is in the desired state
Started: 23:26:17.462565
Duration: 122.922 ms
Changes:
Summary
Succeeded: 3
Failed: 0
Total states run: 3
Check the Apache service status:
[root@linux-node1 salt]# lsof -i:80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
httpd 8054 root 4u IPv6 86585 0t0 TCP *:http (LISTEN)
httpd 8058 apache 4u IPv6 86585 0t0 TCP *:http (LISTEN)
httpd 8059 apache 4u IPv6 86585 0t0 TCP *:http (LISTEN)
httpd 8060 apache 4u IPv6 86585 0t0 TCP *:http (LISTEN)
httpd 8061 apache 4u IPv6 86585 0t0 TCP *:http (LISTEN)
httpd 8062 apache 4u IPv6 86585 0t0 TCP *:http (LISTEN)
httpd 8063 apache 4u IPv6 86585 0t0 TCP *:http (LISTEN)
httpd 8064 apache 4u IPv6 86585 0t0 TCP *:http (LISTEN)
httpd 8065 apache 4u IPv6 86585 0t0 TCP *:http (LISTEN)
6.3 Writing a top file and running the highstate
top.sls is the default entry file; it must be named top.sls and live in the base environment:
[root@linux-node1 salt]# cat top.sls
base:                # the base environment
  'linux-*':         # minions matched in the base environment
    - apache         # state files applied during the highstate
Run the highstate: Salt reads top.sls, matches each minion, and applies the state files listed for it:
[root@linux-node1 salt]# salt '*' state.highstate
7. SaltStack's Data Systems
7.1 Grains
Grains collect system information when the minion starts, and only at minion start. Grains suit static attributes such as a device's role or its number of disks (disk_num); they can also be used to match minions.
7.1.1 Fetching grains via remote execution
List all grain names:
[root@linux-node1 ~]# salt 'linux-node1*' grains.ls
linux-node1:
- SSDs
- biosreleasedate
- biosversion
- cpu_flags
- cpu_model
- cpuarch
- domain
- fqdn
- fqdn_ip4
- fqdn_ip6
- gpus
- host
- hwaddr_interfaces
- id
- init
- ip4_interfaces
- ip6_interfaces
- ip_interfaces
- ipv4
- ipv6
- kernel
- kernelrelease
- locale_info
- localhost
- lsb_distrib_codename
- lsb_distrib_id
- lsb_distrib_release
- machine_id
- manufacturer
- master
- mdadm
- mem_total
- nodename
- num_cpus
- num_gpus
- os
- os_family
- osarch
- oscodename
- osfinger
- osfullname
- osmajorrelease
- osrelease
- osrelease_info
- path
- productname
- ps
- pythonexecutable
- pythonpath
- pythonversion
- saltpath
- saltversion
- saltversioninfo
- selinux
- serialnumber
- server_id
- shell
- virtual
- zmqversion
List all grains with their values:
[root@linux-node1 ~]# salt 'linux-node1*' grains.items
linux-node1:
SSDs:
biosreleasedate:
05/20/2014
biosversion:
6.00
cpu_flags:
- fpu
- vme
- de
- pse
- tsc
- msr
- pae
- mce
- cx8
- apic
- sep
- mtrr
- pge
- mca
- cmov
- pat
- pse36
- clflush
- dts
- mmx
- fxsr
- sse
- sse2
- ss
- syscall
- nx
- rdtscp
- lm
- constant_tsc
- up
- arch_perfmon
- pebs
- bts
- xtopology
- tsc_reliable
- nonstop_tsc
- aperfmperf
- unfair_spinlock
- pni
- pclmulqdq
- ssse3
- cx16
- sse4_1
- sse4_2
- popcnt
- xsave
- avx
- hypervisor
- lahf_lm
- arat
- epb
- pln
- pts
- dts
cpu_model:
Intel(R) Core(TM) i3-2330M CPU @ 2.20GHz
cpuarch:
x86_64
domain:
fqdn:
linux-node1
fqdn_ip4:
- 10.0.0.7
fqdn_ip6:
gpus:
|_
----------
model:
SVGA II Adapter
vendor:
unknown
host:
linux-node1
hwaddr_interfaces:
----------
eth0:
00:0c:29:2c:10:a1
eth1:
00:0c:29:2c:10:ab
lo:
00:00:00:00:00:00
id:
linux-node1
init:
upstart
ip4_interfaces:
----------
eth0:
- 10.0.0.7
eth1:
- 172.16.1.7
lo:
- 127.0.0.1
ip6_interfaces:
----------
eth0:
- fe80::20c:29ff:fe2c:10a1
eth1:
- fe80::20c:29ff:fe2c:10ab
lo:
- ::1
ip_interfaces:
----------
eth0:
- 10.0.0.7
- fe80::20c:29ff:fe2c:10a1
eth1:
- 172.16.1.7
- fe80::20c:29ff:fe2c:10ab
lo:
- 127.0.0.1
- ::1
ipv4:
- 10.0.0.7
- 127.0.0.1
- 172.16.1.7
ipv6:
- ::1
- fe80::20c:29ff:fe2c:10a1
- fe80::20c:29ff:fe2c:10ab
kernel:
Linux
kernelrelease:
2.6.32-573.el6.x86_64
locale_info:
----------
defaultencoding:
UTF8
defaultlanguage:
zh_CN
detectedencoding:
UTF-8
localhost:
linux-node1
lsb_distrib_codename:
Final
lsb_distrib_id:
CentOS
lsb_distrib_release:
6.7
machine_id:
53d3f8757a7bdf1be87664bd00000012
manufacturer:
VMware, Inc.
master:
10.0.0.7
mdadm:
mem_total:
992
nodename:
linux-node1
num_cpus:
1
num_gpus:
1
os:
CentOS
os_family:
RedHat
osarch:
x86_64
oscodename:
Final
osfinger:
CentOS-6
osfullname:
CentOS
osmajorrelease:
6
osrelease:
6.7
osrelease_info:
- 6
- 7
path:
/sbin:/usr/sbin:/bin:/usr/bin
productname:
VMware Virtual Platform
ps:
ps -efH
pythonexecutable:
/usr/bin/python2.6
pythonpath:
- /usr/bin
- /usr/lib64/python26.zip
- /usr/lib64/python2.6
- /usr/lib64/python2.6/plat-linux2
- /usr/lib64/python2.6/lib-tk
- /usr/lib64/python2.6/lib-old
- /usr/lib64/python2.6/lib-dynload
- /usr/lib64/python2.6/site-packages
- /usr/lib64/python2.6/site-packages/gtk-2.0
- /usr/lib/python2.6/site-packages
pythonversion:
- 2
- 6
- 6
- final
- 0
saltpath:
/usr/lib/python2.6/site-packages/salt
saltversion:
2015.5.8
saltversioninfo:
- 2015
- 5
- 8
- 0
selinux:
----------
enabled:
False
enforced:
Disabled
serialnumber:
VMware-56 4d 3d be 86 1f f0 55-7e 57 0a 5a a5 2c 10 a1
server_id:
1879729795
shell:
/bin/bash
virtual:
VMware
zmqversion:
3.2.5
Show a single grain: grains.get prints only the value, while grains.item prints the key as well:
[root@linux-node1 ~]# salt 'linux-node1*' grains.item fqdn
linux-node1:
----------
fqdn:
linux-node1
[root@linux-node1 ~]# salt 'linux-node1*' grains.get fqdn_ip4
linux-node1:
- 10.0.0.7
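grains.get also accepts a colon-separated path into nested grains, which is handy for dict-valued grains such as ip_interfaces shown earlier:

```bash
[root@linux-node1 ~]# salt 'linux-node1*' grains.get ip_interfaces:eth0
linux-node1:
    - 10.0.0.7
    - fe80::20c:29ff:fe2c:10a1
```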
7.1.2 Matching minions with grains
Try matching minions by grain; -G selects grain matching:
[root@linux-node1 ~]# salt -G 'os:centos' grains.get fqdn
linux-node2:
linux-node2
linux-node1:
linux-node1
Edit the minion configuration file to set a simple grain by hand:
[root@linux-node1 ~]# sed -n '84,87p' /etc/salt/minion
grains:
  roles:
    - webserver
    - memcache
Restart the salt-minion and test the manually added grain:
[root@linux-node1 ~]# /etc/init.d/salt-minion restart
Stopping salt-minion daemon: [ OK ]
Starting salt-minion daemon: [ OK ]
[root@linux-node1 ~]# salt -G 'roles:memcache' cmd.run 'uptime'
linux-node1:
20:43:25 up 1 day, 5:21, 2 users, load average: 0.15, 0.04, 0.01
Grains can also be added via /etc/salt/grains, which is read by default; just put them there:
[root@linux-node2 ~]# cat /etc/salt/grains
app: nginx
[root@linux-node2 ~]# /etc/init.d/salt-minion restart
Stopping salt-minion daemon: [ OK ]
Starting salt-minion daemon: [ OK ]
[root@linux-node1 ~]# salt '*' grains.item app
linux-node2:
----------
app:
nginx
linux-node1:
----------
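Instead of editing /etc/salt/grains by hand and restarting the minion, the master can push a grain with grains.setval, which persists it to the same file (the key and value here are examples):

```bash
[root@linux-node1 ~]# salt 'linux-node2*' grains.setval env prod
```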
7.1.3 Using grains in state files
[root@linux-node1 salt]# cat top.sls
base:
  'app:nginx': # the grain key:value to match
    - match: grain # match on grains instead of minion ID
    - apache
7.1.4 Using grains in Jinja templates
This is covered in detail later; a brief example:
keepalived-server:
  file.managed:
    - name: /etc/keepalived/keepalived.conf
    - source: salt://cluster/files/haproxy-outside-keepalived.conf
    - mode: 644
    - user: root
    - group: root
    - template: jinja
    {% if grains['fqdn'] == 'ip-172-31-43-148.eu-west-1.compute.internal' %}
    - ROUTID: haproxy_ha
    - ROLE: MASTER
    - PRIORITYID: 150
    {% elif grains['fqdn'] == 'ip-172-31-43-123.eu-west-1.compute.internal' %}
    - ROUTID: haproxy_ha
    - ROLE: BACKUP
    - PRIORITYID: 100
    {% endif %}
7.2 Pillar
7.2.1 Introduction to Pillar
Pillar is a very important Salt component: it defines any data you need for specific minions, and that data can then be consumed by Salt's other components. Pillar was introduced in Salt 0.9.8. Once rendered, Pillar is a nested dict: the top-level keys are minion IDs, each value is that minion's Pillar data, and each of those values is again key/value data. This shows a defining property of Pillar: the data is bound to a specific minion, and every minion sees only its own data, so Pillar can carry sensitive data (by design, Salt gives Pillar an independent encrypted session to keep that data safe). Where can Pillar be used?
**Sensitive data**
For example SSH keys and certificates; because Pillar uses an independent encrypted session, this data cannot be seen by other minions.
**Variables**
Pillar can absorb platform differences, for example setting per-OS package names that states then reference.
**Any other data**
Any data you need can go into Pillar, for example user-to-UID mappings or minion roles.
7.2.2 Pillar basics
Edit the master configuration to enable the built-in pillar data (pillar_opts), which is off by default:
[root@linux-node1 ~]# sed -n '552p' /etc/salt/master
pillar_opts: True
[root@linux-node1 ~]# /etc/init.d/salt-master restart
Stopping salt-master daemon: [ OK ]
Starting salt-master daemon: [ OK ]
View the master's built-in pillar entries. In production this is normally left at False, since the built-in data is rarely useful; define your own pillar instead:
[root@linux-node1 ~]# salt 'linux-node1*' pillar.items
linux-node1:
----------
master:
----------
__role:
master
auth_mode:
1
auto_accept:
False
cache_sreqs:
True
cachedir:
/var/cache/salt/master
cli_summary:
False
client_acl:
----------
client_acl_blacklist:
----------
cluster_masters:
cluster_mode:
paranoid
con_cache:
False
conf_file:
/etc/salt/master
config_dir:
/etc/salt
cython_enable:
False
daemon:
True
default_include:
master.d/*.conf
enable_gpu_grains:
False
enforce_mine_cache:
False
enumerate_proxy_minions:
False
environment:
None
event_return:
event_return_blacklist:
event_return_queue:
0
event_return_whitelist:
ext_job_cache:
ext_pillar:
extension_modules:
/var/cache/salt/extmods
external_auth:
----------
failhard:
False
file_buffer_size:
1048576
file_client:
local
file_ignore_glob:
None
file_ignore_regex:
None
file_recv:
False
file_recv_max_size:
100
file_roots:
----------
base:
- /srv/salt
fileserver_backend:
- roots
fileserver_followsymlinks:
True
fileserver_ignoresymlinks:
False
fileserver_limit_traversal:
False
gather_job_timeout:
10
gitfs_base:
master
gitfs_env_blacklist:
gitfs_env_whitelist:
gitfs_insecure_auth:
False
gitfs_mountpoint:
gitfs_passphrase:
gitfs_password:
gitfs_privkey:
gitfs_pubkey:
gitfs_remotes:
gitfs_root:
gitfs_user:
hash_type:
md5
hgfs_base:
default
hgfs_branch_method:
branches
hgfs_env_blacklist:
hgfs_env_whitelist:
hgfs_mountpoint:
hgfs_remotes:
hgfs_root:
id:
linux-node1
interface:
0.0.0.0
ioflo_console_logdir:
ioflo_period:
0.01
ioflo_realtime:
True
ioflo_verbose:
0
ipv6:
False
jinja_lstrip_blocks:
False
jinja_trim_blocks:
False
job_cache:
True
keep_jobs:
24
key_logfile:
/var/log/salt/key
keysize:
2048
log_datefmt:
%H:%M:%S
log_datefmt_logfile:
%Y-%m-%d %H:%M:%S
log_file:
/var/log/salt/master
log_fmt_console:
[%(levelname)-8s] %(message)s
log_fmt_logfile:
%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s][%(process)d] %(message)s
log_granular_levels:
----------
log_level:
warning
loop_interval:
60
maintenance_floscript:
/usr/lib/python2.6/site-packages/salt/daemons/flo/maint.flo
master_floscript:
/usr/lib/python2.6/site-packages/salt/daemons/flo/master.flo
master_job_cache:
local_cache
master_pubkey_signature:
master_pubkey_signature
master_roots:
----------
base:
- /srv/salt-master
master_sign_key_name:
master_sign
master_sign_pubkey:
False
master_tops:
----------
master_use_pubkey_signature:
False
max_event_size:
1048576
max_minions:
0
max_open_files:
100000
minion_data_cache:
True
minionfs_blacklist:
minionfs_env:
base
minionfs_mountpoint:
minionfs_whitelist:
nodegroups:
----------
open_mode:
False
order_masters:
False
outputter_dirs:
peer:
----------
permissive_pki_access:
False
pidfile:
/var/run/salt-master.pid
pillar_opts:
True
pillar_roots:
----------
base:
- /srv/pillar
pillar_safe_render_error:
True
pillar_source_merging_strategy:
smart
pillar_version:
2
pillarenv:
None
ping_on_rotate:
False
pki_dir:
/etc/salt/pki/master
preserve_minion_cache:
False
pub_hwm:
1000
publish_port:
4505
publish_session:
86400
queue_dirs:
raet_alt_port:
4511
raet_clear_remotes:
False
raet_main:
True
raet_mutable:
False
raet_port:
4506
range_server:
range:80
reactor:
reactor_refresh_interval:
60
reactor_worker_hwm:
10000
reactor_worker_threads:
10
renderer:
yaml_jinja
ret_port:
4506
root_dir:
/
rotate_aes_key:
True
runner_dirs:
saltversion:
2015.5.8
search:
search_index_interval:
3600
serial:
msgpack
show_jid:
False
show_timeout:
True
sign_pub_messages:
False
sock_dir:
/var/run/salt/master
sqlite_queue_dir:
/var/cache/salt/master/queues
ssh_passwd:
ssh_port:
22
ssh_scan_ports:
22
ssh_scan_timeout:
0.01
ssh_sudo:
False
ssh_timeout:
60
ssh_user:
root
state_aggregate:
False
state_auto_order:
True
state_events:
False
state_output:
full
state_top:
salt://top.sls
state_top_saltenv:
None
state_verbose:
True
sudo_acl:
False
svnfs_branches:
branches
svnfs_env_blacklist:
svnfs_env_whitelist:
svnfs_mountpoint:
svnfs_remotes:
svnfs_root:
svnfs_tags:
tags
svnfs_trunk:
trunk
syndic_dir:
/var/cache/salt/master/syndics
syndic_event_forward_timeout:
0.5
syndic_jid_forward_cache_hwm:
100
syndic_master:
syndic_max_event_process_time:
0.5
syndic_wait:
5
timeout:
5
token_dir:
/var/cache/salt/master/tokens
token_expire:
43200
transport:
zeromq
user:
root
verify_env:
True
win_gitrepos:
- https://github.com/saltstack/salt-winrepo.git
win_repo:
/srv/salt/win/repo
win_repo_mastercachefile:
/srv/salt/win/repo/winrepo.p
worker_floscript:
/usr/lib/python2.6/site-packages/salt/daemons/flo/worker.flo
worker_threads:
5
zmq_filtering:
False
7.2.3 Setting up the pillar environment
Edit the master configuration to set pillar_roots. Pillar supports environments too, and the base environment must exist; a top file is likewise supported to map specific pillar data to specific minions:
[root@linux-node1 ~]# sed -n '529,531p' /etc/salt/master
pillar_roots:
  base:
    - /srv/pillar
[root@linux-node1 ~]# /etc/init.d/salt-master restart
Stopping salt-master daemon: [ OK ]
Starting salt-master daemon: [ OK ]
7.2.4 Defining a pillar by hand
[root@linux-node1 pillar]# pwd
/srv/pillar
[root@linux-node1 pillar]# cat apache.sls
{% if grains['os'] == 'CentOS' %}
apache: httpd
{% elif grains['os'] == 'Debian' %}
apache: apache2
{% endif %}
[root@linux-node1 pillar]# cat top.sls
base:
  'linux-node2*':
    - apache
[root@linux-node1 pillar]# salt '*' pillar.items
linux-node1:
----------
linux-node2:
----------
apache:
httpd
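The value of this pillar comes when states consume it; for example, an sls can install the right Apache package per OS by referencing the key (a sketch, assuming the apache pillar above is assigned to the minion):

```yaml
apache-install:
  pkg.installed:
    - name: {{ pillar['apache'] }}
```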
After changing pillar contents, refresh pillar on the minions:
[root@linux-node1 pillar]# salt '*' saltutil.refresh_pillar
linux-node2:
True
linux-node1:
True
7.2.5 Matching minions with pillar
salt -I selects pillar matching:
[root@linux-node1 pillar]# salt -I 'apache:httpd' cmd.run 'cd /etc/salt &&pwd'
linux-node2:
/etc/salt
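Grain and pillar matchers can also be combined with compound matching (-C); G@ marks a grain and I@ a pillar. A sketch using the values defined above:

```bash
[root@linux-node1 pillar]# salt -C 'G@os:CentOS and I@apache:httpd' test.ping
```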
7.3 Differences between grains and pillar
- Grains hold static, rarely changing data; pillar is the opposite, holding dynamic data
- Grains are stored on the minion and can be refreshed with saltutil.sync_grains; pillar is stored on the master and refreshed with saltutil.refresh_pillar
- A minion can manage its own grains (add, delete), which makes them useful for asset management; pillar data is defined on the master and visible only to the targeted minions, so it suits sensitive data, and minions cannot modify it