Using SaltStack highstate, the top file, and the Grains and Pillar data systems

1. The YAML language

YAML is an intuitive data serialization language: easy for machines to parse, highly readable for humans, easy to work with from scripting languages, and designed for expressing structured data.

It serves a purpose similar to XML (the data description language derived from SGML), but its syntax is much simpler.
YAML looks like this:

house:
  family:
    name: Doe
    parents:
      - John
      - Jane
    children:
      - Paul
      - Mark
      - Simone
  address:
    number: 34
    street: Main Street
    city: Nowheretown
    zipcode: 12345

Basic YAML rules:

  • Indentation expresses hierarchy: two spaces per level; TAB characters are forbidden
  • When a colon is not the last character on a line, it must be followed by a space
  • A list item is written with -, and the - must be followed by a space
  • A comment starts with #
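
A tiny snippet illustrating these rules (the key names here are made up purely for illustration):

# this line is a comment
web:                # the colon is followed by a space and opens a mapping
  packages:         # each level is indented by exactly two spaces
    - httpd         # "- " (dash plus space) introduces a list item
    - mod_ssl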

YAML state files must be placed where SaltStack expects to find them; you can see those locations by searching for file_roots in the SaltStack Master configuration file.
[root@master ~]# vim /etc/salt/master
...lines omitted...
file_roots:
  base:
    - /srv/salt/base
  test:
    - /srv/salt/test
  dev:
    - /srv/salt/dev
  prod:
    - /srv/salt/prod
...lines omitted...

[root@master ~]# mkdir -p /srv/salt/{base,test,dev,prod}
[root@master ~]# tree /srv/salt/
/srv/salt/
├── base
├── dev
├── prod
└── test

4 directories, 0 files
[root@master ~]# systemctl restart salt-master

Note:

  • base is the default environment; even if file_roots defines only one environment, base is mandatory, must be named base, and cannot be renamed (a sketch of using a non-base environment follows)
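
As a sketch of how a non-base environment is used (assuming a state file such as web/apache/install.sls has also been placed under /srv/salt/test), a state from that environment can be applied by passing saltenv on the command line:

[root@master ~]# salt 'node1' state.sls web.apache.install saltenv=test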

2. top file

2.1 Introduction to the top file

Is running sls files directly from the command line automated enough? No. We still have to tell a particular host to run a particular task, whereas real automation means that when we ask the system to do work, it already knows which host should do what. Executing sls files directly from the command line cannot achieve that, so the top file was created to solve this problem.

The top file is the entry point. Its file name can be found by searching for top.sls in the Master configuration file; the file must live in the base environment and, by default, must be named top.sls.
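
For reference, the option that controls this in /etc/salt/master is state_top; in a default configuration it is commented out because top.sls is already the default value:

#state_top: top.sls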

The top file's job is to tell each host what work to do, for example telling web servers to start the web service or database servers to install MySQL.
A top file example:

[root@master ~]# tree /srv/salt/
/srv/salt/
├── base
│   ├── top.sls
│   └── web
│       └── apache
│           └── install.sls
└── test


[root@master ~]# cat /srv/salt/base/web/apache/install.sls 
apache-install:
  pkg.installed:
    - name: httpd

apache-service:
  service.running:
    - name: httpd
    - enable: true
[root@master ~]# 
[root@master ~]# cat /srv/salt/base/top.sls 
base:
  'node1':
    - web.apache.install
[root@master ~]# 


[root@master ~]# salt 'node1' test.ping
node1:
    True
[root@master ~]# salt 'node1' state.highstate
node1:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: All specified packages are already installed
     Started: 01:04:20.611833
    Duration: 828.553 ms
     Changes:   
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: The service httpd is already running
     Started: 01:04:21.443173
    Duration: 57.574 ms
     Changes:   

Summary for node1
------------
Succeeded: 2
Failed:    0
------------
Total states run:     2
Total run time: 886.127 ms
[root@master ~]# 

//Check the httpd status on the minion
[root@mode1 salt]# systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/h>
   Active: active (running) since Tue 2021-1>
     Docs: man:httpd.service(8)
 Main PID: 141238 (httpd)
   Status: "Running, listening on: port 80"
    Tasks: 213 (limit: 4743)
   Memory: 25.3M
   CGroup: /system.slice/httpd.service
           ├─141238 /usr/sbin/httpd -DFOREGROUND
           ├─141938 /usr/sbin/httpd -DFOREGROUND
           ├─141939 /usr/sbin/httpd -DFOREGROUND
           ├─141940 /usr/sbin/httpd -DFOREGROUND
           └─141941 /usr/sbin/httpd -DFOREGROUND

11月 02 01:03:01 mode1 systemd[1]: Starting The Apache HTTP Server...
11月 02 01:03:16 mode1 httpd[141238]: AH00558: httpd: Could not reliably determine the server's fully qu>
11月 02 01:03:16 mode1 systemd[1]: Started The Apache HTTP Server.
11月 02 01:03:26 mode1 httpd[141238]: Server configured, listening on: port 80
lines 10-19/19 (END)

Note:
If the target in the top file is written as *, keep in mind that * in the top file means "every host that should run states", while * in salt '*' state.highstate only means that every machine is told to run a highstate; whether a given machine actually does any work is still decided by the top file (see the sketch below).
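
As a sketch of such a top file (the grain value CentOS Stream mirrors the minion shown above, and the db.mysql.install state is hypothetical), the base environment can notify every minion and additionally match a group of minions by a grain:

base:
  '*':
    - web.apache.install
  'os:CentOS Stream':
    - match: grain
    - db.mysql.install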

2.2 Using highstate

The most common day-to-day operation when managing SaltStack is running a highstate.

[root@master ~]# salt '*' state.highstate   //never use the salt command like this in production

Note:
The command above asks every minion to run a highstate. In real work this is almost never done; instead you tell a specific host or a specific group of hosts to run the highstate, and whether they actually execute anything is decided by the top file.

If you add the parameter test=True when running a highstate, Salt reports what it would do without actually performing the operations.

//Stop the httpd service on the minion
[root@mode1 salt]# systemctl stop httpd
[root@mode1 salt]# systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/h>
   Active: inactive (dead) since Tue 2021-11>
     Docs: man:httpd.service(8)
  Process: 141238 ExecStart=/usr/sbin/httpd >
 Main PID: 141238 (code=exited, status=0/SUC>
   Status: "Running, listening on: port 80"

11月 02 01:03:01 mode1 systemd[1]: Starting >
11月 02 01:03:16 mode1 httpd[141238]: AH0055>
11月 02 01:03:16 mode1 systemd[1]: Started T>
11月 02 01:03:26 mode1 httpd[141238]: Server>
11月 02 01:10:11 mode1 systemd[1]: Stopping >
11月 02 01:10:12 mode1 systemd[1]: httpd.ser>
11月 02 01:10:12 mode1 systemd[1]: Stopped T>
lines 1-15/15 (END)

//Run a highstate test from the master
[root@master ~]# salt 'node1' state.highstate test=true
node1:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: All specified packages are already installed
     Started: 01:11:08.565015
    Duration: 882.648 ms
     Changes:   
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: None
     Comment: Service httpd is set to start
     Started: 01:11:09.449509
    Duration: 43.376 ms
     Changes:   

Summary for node1
------------
Succeeded: 2 (unchanged=1)
Failed:    0
------------
Total states run:     2
Total run time: 926.024 ms

//Check on the minion whether httpd has been started
[root@mode1 salt]# systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/h>
   Active: inactive (dead) since Tue 2021-11>
     Docs: man:httpd.service(8)
  Process: 141238 ExecStart=/usr/sbin/httpd >
 Main PID: 141238 (code=exited, status=0/SUC>
   Status: "Running, listening on: port 80"

11月 02 01:03:01 mode1 systemd[1]: Starting >
11月 02 01:03:16 mode1 httpd[141238]: AH0055>
11月 02 01:03:16 mode1 systemd[1]: Started T>
11月 02 01:03:26 mode1 httpd[141238]: Server>
11月 02 01:10:11 mode1 systemd[1]: Stopping >
11月 02 01:10:12 mode1 systemd[1]: httpd.ser>
11月 02 01:10:12 mode1 systemd[1]: Stopped T>
lines 1-15/15 (END)

//As you can see, the highstate was not actually applied: httpd has not been started

3. SaltStack data systems

SaltStack has two major data systems:

  • Grains
  • Pillar

4. SaltStack data system components

4.1 The Grains component

Grains is a SaltStack component that stores the information collected when a minion starts.

Grains is one of the most important SaltStack components because it is used constantly during configuration and deployment. It records static information about each minion; you can simply think of it as the common attributes of every minion, such as CPU, memory, disk, and network information. All Grains data for a minion can be viewed with grains.items.

Grains functions:

  • Collecting asset information

Grains use cases:

  • Information queries
  • Target matching on the command line
  • Target matching in the top file
  • Target matching in templates

For target matching in templates, see: https://docs.saltstack.com/en/latest/topics/pillar/

An information query example:
//List all grains keys and values
[root@master ~]# salt 'node1' grains.items
node1:
    ----------
    biosreleasedate:
        07/22/2020
    biosversion:
        6.00
    cpu_flags:
        - fpu
        - vme
        - de
        - pse
        - tsc
        - msr
        - pae
        - mce
        - cx8
        - apic
        - sep
        - mtrr
        - pge
        - mca
        - cmov
        - pat
        - pse36
        - clflush
        - mmx
        - fxsr
        - sse
        - sse2
        - ss
        - syscall
        - nx
        - pdpe1gb
        - rdtscp
        - lm
        - constant_tsc
        - arch_perfmon
        - nopl
        - xtopology
        - tsc_reliable
        - nonstop_tsc
        - cpuid
        - pni
        - pclmulqdq
        - ssse3
        - fma
        - cx16
        - pcid
        - sse4_1
        - sse4_2
        - x2apic
        - movbe
        - popcnt
        - tsc_deadline_timer
        - aes
        - xsave
        - avx
        - f16c
        - rdrand
        - hypervisor
        - lahf_lm
        - abm
        - 3dnowprefetch
        - cpuid_fault
        - invpcid_single
        - pti
        - ssbd
        - ibrs
        - ibpb
        - stibp
        - fsgsbase
        - tsc_adjust
        - bmi1
        - avx2
        - smep
        - bmi2
        - invpcid
        - rdseed
        - adx
        - smap
        - clflushopt
        - xsaveopt
        - xsavec
        - xgetbv1
        - xsaves
        - arat
        - md_clear
        - flush_l1d
        - arch_capabilities
    cpu_model:
        Intel(R) Core(TM) i5-9300H CPU @ 2.40GHz
    cpuarch:
        x86_64
    cwd:
        /
    disks:
        - sr0
        - sda
    dns:
        ----------
        domain:
        ip4_nameservers:
            - 192.168.72.2
        ip6_nameservers:
        nameservers:
            - 192.168.72.2
        options:
        search:
            - localdomain
        sortlist:
    domain:
    efi:
        False
    efi-secure-boot:
        False
    fqdn:
        mode1
    fqdn_ip4:
        - 192.168.72.139
    fqdn_ip6:
        - fe80::8c5a:fae4:e5a7:a2b7
    fqdns:
        - mode1
    gid:
        0
    gpus:
        |_
          ----------
          model:
              SVGA II Adapter
          vendor:
              vmware
    groupname:
        root
    host:
        mode1
    hwaddr_interfaces:
        ----------
        ens33:
            00:0c:29:4c:c0:b2
        lo:
            00:00:00:00:00:00
    id:
        node1
    init:
        systemd
    ip4_gw:
        192.168.72.2
    ip4_interfaces:
        ----------
        ens33:
            - 192.168.72.139
        lo:
            - 127.0.0.1
    ip6_gw:
        False
    ip6_interfaces:
        ----------
        ens33:
            - fe80::8c5a:fae4:e5a7:a2b7
        lo:
            - ::1
    ip_gw:
        True
    ip_interfaces:
        ----------
        ens33:
            - 192.168.72.139
            - fe80::8c5a:fae4:e5a7:a2b7
        lo:
            - 127.0.0.1
            - ::1
    ipv4:
        - 127.0.0.1
        - 192.168.72.139
    ipv6:
        - ::1
        - fe80::8c5a:fae4:e5a7:a2b7
    kernel:
        Linux
    kernelparams:
        |_
          - BOOT_IMAGE
          - (hd0,msdos1)/vmlinuz-4.18.0-257.el8.x86_64
        |_
          - root
          - /dev/mapper/cs-root
        |_
          - ro
          - None
        |_
          - crashkernel
          - auto
        |_
          - resume
          - /dev/mapper/cs-swap
        |_
          - rd.lvm.lv
          - cs/root
        |_
          - rd.lvm.lv
          - cs/swap
        |_
          - rhgb
          - None
        |_
          - quiet
          - None
    kernelrelease:
        4.18.0-257.el8.x86_64
    kernelversion:
        #1 SMP Thu Dec 3 22:16:23 UTC 2020
    locale_info:
        ----------
        defaultencoding:
            UTF-8
        defaultlanguage:
            zh_CN
        detectedencoding:
            UTF-8
        timezone:
            EDT
    localhost:
        mode1
    lsb_distrib_codename:
        CentOS Stream 8
    lsb_distrib_id:
        CentOS Stream
    lsb_distrib_release:
        8
    lvm:
        ----------
        cs:
            - home
            - root
            - swap
    machine_id:
        d058b2e2d24b4493bcf7660b124debba
    manufacturer:
        VMware, Inc.
    master:
        192.168.72.141
    mdadm:
    mem_total:
        780
    nodename:
        mode1
    num_cpus:
        2
    num_gpus:
        1
    os:
        CentOS Stream
    os_family:
        RedHat
    osarch:
        x86_64
    oscodename:
        CentOS Stream 8
    osfinger:
        CentOS Stream-8
    osfullname:
        CentOS Stream
    osmajorrelease:
        8
    osrelease:
        8
    osrelease_info:
        - 8
    path:
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
    pid:
        6031
    productname:
        VMware Virtual Platform
    ps:
        ps -efHww
    pythonexecutable:
        /usr/bin/python3.6
    pythonpath:
        - /usr/bin
        - /usr/lib64/python36.zip
        - /usr/lib64/python3.6
        - /usr/lib64/python3.6/lib-dynload
        - /usr/lib64/python3.6/site-packages
        - /usr/lib/python3.6/site-packages
    pythonversion:
        - 3
        - 6
        - 8
        - final
        - 0
    saltpath:
        /usr/lib/python3.6/site-packages/salt
    saltversion:
        3004
    saltversioninfo:
        - 3004
    selinux:
        ----------
        enabled:
            False
        enforced:
            Disabled
    serialnumber:
        VMware-56 4d 08 50 0e a8 63 19-6a 82 c9 f9 51 4c c0 b2
    server_id:
        1797241226
    shell:
        /bin/sh
    ssds:
    swap_total:
        4031
    systemd:
        ----------
        features:
            +PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy
        version:
            239
    systempath:
        - /usr/local/sbin
        - /usr/local/bin
        - /usr/sbin
        - /usr/bin
    transactional:
        False
    uid:
        0
    username:
        root
    uuid:
        50084d56-a80e-1963-6a82-c9f9514cc0b2
    virtual:
        VMware
    zfs_feature_flags:
        False
    zfs_support:
        False
    zmqversion:
        4.3.4

//List only the grains keys
[root@master ~]# salt 'node1' grains.ls
node1:
    - biosreleasedate
    - biosversion
    - cpu_flags
    - cpu_model
    - cpuarch
    - cwd
    - disks
    - dns
    - domain
    - efi
    - efi-secure-boot
    - fqdn
    - fqdn_ip4
    - fqdn_ip6
    - fqdns
    - gid
    - gpus
    - groupname
    - host
    - hwaddr_interfaces
    - id
    - init
    - ip4_gw
    - ip4_interfaces
    - ip6_gw
    - ip6_interfaces
    - ip_gw
    - ip_interfaces
    - ipv4
    - ipv6
    - kernel
    - kernelparams
    - kernelrelease
    - kernelversion
    - locale_info
    - localhost
    - lsb_distrib_codename
    - lsb_distrib_id
    - lsb_distrib_release
    - lvm
    - machine_id
    - manufacturer
    - master
    - mdadm
    - mem_total
    - nodename
    - num_cpus
    - num_gpus
    - os
    - os_family
    - osarch
    - oscodename
    - osfinger
    - osfullname
    - osmajorrelease
    - osrelease
    - osrelease_info
    - path
    - pid
    - productname
    - ps
    - pythonexecutable
    - pythonpath
    - pythonversion
    - saltpath
    - saltversion
    - saltversioninfo
    - selinux
    - serialnumber
    - server_id
    - shell
    - ssds
    - swap_total
    - systemd
    - systempath
    - transactional
    - uid
    - username
    - uuid
    - virtual
    - zfs_feature_flags
    - zfs_support
    - zmqversion

//Query the value of a single key, for example to get the IP address
[root@master ~]# salt 'node1' grains.get ipv4
node1:
    - 127.0.0.1
    - 192.168.72.139
[root@master ~]# salt 'node1' grains.get ip4_interfaces
node1:
    ----------
    ens33:
        - 192.168.72.139
    lo:
        - 127.0.0.1
[root@master ~]# salt 'node1' grains.get ip4_interfaces:ens33
node1:
    - 192.168.72.139

Target matching examples:
Matching a minion from the command line (note that -L matches against a list of minion IDs; a grain-based match is shown right after the output):

[root@master ~]# salt -L 'node1' cmd.run 'uptime'
node1:
     01:21:57 up  1:44,  2 users,  load average: 0.09, 0.09, 0.09
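
To match minions by a Grains value, the -G option is used instead; a sketch based on the os grain shown earlier:

[root@master ~]# salt -G 'os:CentOS Stream' cmd.run 'uptime'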

Two ways to define custom Grains:

  • In the minion configuration file: search for grains in the configuration file (a sketch follows this list)
  • Create a grains file under /etc/salt and define them in that file (the recommended way)
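
A minimal sketch of the first method (the roles grain below is purely an example): grains can be declared in a grains block inside /etc/salt/minion and take effect after the salt-minion service is restarted.

# /etc/salt/minion (excerpt)
grains:
  roles:
    - webserver

[root@mode1 ~]# systemctl restart salt-minion

The second, recommended method is demonstrated below: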
[root@mode1 salt]# vim grains
[root@mode1 salt]# systemctl restart salt-minion
[root@mode1 salt]# cat grains 
大傻逼: 熊用民
[root@mode1 salt]# 
[root@master ~]# salt 'node1' test.ping
node1:
    True
[root@master ~]# salt 'node1' grains.get 大傻逼
node1:
    熊用民

Defining custom Grains without restarting the minion:

[root@mode1 salt]# cat grains 
大傻逼: 熊用民
我儿: 申龙飞

[root@master ~]# salt 'node1' grains.items
node1:
    ----------
    biosreleasedate:
        07/22/2020
    biosversion:
        6.00
    cpu_flags:
        - fpu
        - vme
        - de
        - pse
        - tsc
        - msr
        - pae
        - mce
        - cx8
        - apic
        - sep
        - mtrr
        - pge
        - mca
        - cmov
        - pat
        - pse36
        - clflush
        - mmx
        - fxsr
        - sse
        - sse2
        - ss
        - syscall
        - nx
        - pdpe1gb
        - rdtscp
        - lm
        - constant_tsc
        - arch_perfmon
        - nopl
        - xtopology
        - tsc_reliable
        - nonstop_tsc
        - cpuid
        - pni
        - pclmulqdq
        - ssse3
        - fma
        - cx16
        - pcid
        - sse4_1
        - sse4_2
        - x2apic
        - movbe
        - popcnt
        - tsc_deadline_timer
        - aes
        - xsave
        - avx
        - f16c
        - rdrand
        - hypervisor
        - lahf_lm
        - abm
        - 3dnowprefetch
        - cpuid_fault
        - invpcid_single
        - pti
        - ssbd
        - ibrs
        - ibpb
        - stibp
        - fsgsbase
        - tsc_adjust
        - bmi1
        - avx2
        - smep
        - bmi2
        - invpcid
        - rdseed
        - adx
        - smap
        - clflushopt
        - xsaveopt
        - xsavec
        - xgetbv1
        - xsaves
        - arat
        - md_clear
        - flush_l1d
        - arch_capabilities
    cpu_model:
        Intel(R) Core(TM) i5-9300H CPU @ 2.40GHz
    cpuarch:
        x86_64
    cwd:
        /
    disks:
        - sr0
        - sda
    dns:
        ----------
        domain:
        ip4_nameservers:
            - 192.168.72.2
        ip6_nameservers:
        nameservers:
            - 192.168.72.2
        options:
        search:
            - localdomain
        sortlist:
    domain:
    efi:
        False
    efi-secure-boot:
        False
    fqdn:
        mode1
    fqdn_ip4:
        - 192.168.72.139
    fqdn_ip6:
        - fe80::8c5a:fae4:e5a7:a2b7
    fqdns:
        - mode1
    gid:
        0
    gpus:
        |_
          ----------
          model:
              SVGA II Adapter
          vendor:
              vmware
    groupname:
        root
    host:
        mode1
    hwaddr_interfaces:
        ----------
        ens33:
            00:0c:29:4c:c0:b2
        lo:
            00:00:00:00:00:00
    id:
        node1
    init:
        systemd
    ip4_gw:
        192.168.72.2
    ip4_interfaces:
        ----------
        ens33:
            - 192.168.72.139
        lo:
            - 127.0.0.1
    ip6_gw:
        False
    ip6_interfaces:
        ----------
        ens33:
            - fe80::8c5a:fae4:e5a7:a2b7
        lo:
            - ::1
    ip_gw:
        True
    ip_interfaces:
        ----------
        ens33:
            - 192.168.72.139
            - fe80::8c5a:fae4:e5a7:a2b7
        lo:
            - 127.0.0.1
            - ::1
    ipv4:
        - 127.0.0.1
        - 192.168.72.139
    ipv6:
        - ::1
        - fe80::8c5a:fae4:e5a7:a2b7
    kernel:
        Linux
    kernelparams:
        |_
          - BOOT_IMAGE
          - (hd0,msdos1)/vmlinuz-4.18.0-257.el8.x86_64
        |_
          - root
          - /dev/mapper/cs-root
        |_
          - ro
          - None
        |_
          - crashkernel
          - auto
        |_
          - resume
          - /dev/mapper/cs-swap
        |_
          - rd.lvm.lv
          - cs/root
        |_
          - rd.lvm.lv
          - cs/swap
        |_
          - rhgb
          - None
        |_
          - quiet
          - None
    kernelrelease:
        4.18.0-257.el8.x86_64
    kernelversion:
        #1 SMP Thu Dec 3 22:16:23 UTC 2020
    locale_info:
        ----------
        defaultencoding:
            UTF-8
        defaultlanguage:
            zh_CN
        detectedencoding:
            UTF-8
        timezone:
            EDT
    localhost:
        mode1
    lsb_distrib_codename:
        CentOS Stream 8
    lsb_distrib_id:
        CentOS Stream
    lsb_distrib_release:
        8
    lvm:
        ----------
        cs:
            - home
            - root
            - swap
    machine_id:
        d058b2e2d24b4493bcf7660b124debba
    manufacturer:
        VMware, Inc.
    master:
        192.168.72.141
    mdadm:
    mem_total:
        780
    nodename:
        mode1
    num_cpus:
        2
    num_gpus:
        1
    os:
        CentOS Stream
    os_family:
        RedHat
    osarch:
        x86_64
    oscodename:
        CentOS Stream 8
    osfinger:
        CentOS Stream-8
    osfullname:
        CentOS Stream
    osmajorrelease:
        8
    osrelease:
        8
    osrelease_info:
        - 8
    path:
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
    pid:
        188010
    productname:
        VMware Virtual Platform
    ps:
        ps -efHww
    pythonexecutable:
        /usr/bin/python3.6
    pythonpath:
        - /usr/bin
        - /usr/lib64/python36.zip
        - /usr/lib64/python3.6
        - /usr/lib64/python3.6/lib-dynload
        - /usr/lib64/python3.6/site-packages
        - /usr/lib/python3.6/site-packages
    pythonversion:
        - 3
        - 6
        - 8
        - final
        - 0
    saltpath:
        /usr/lib/python3.6/site-packages/salt
    saltversion:
        3004
    saltversioninfo:
        - 3004
    selinux:
        ----------
        enabled:
            False
        enforced:
            Disabled
    serialnumber:
        VMware-56 4d 08 50 0e a8 63 19-6a 82 c9 f9 51 4c c0 b2
    server_id:
        1797241226
    shell:
        /bin/sh
    ssds:
    swap_total:
        4031
    systemd:
        ----------
        features:
            +PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy
        version:
            239
    systempath:
        - /usr/local/sbin
        - /usr/local/bin
        - /usr/sbin
        - /usr/bin
    transactional:
        False
    uid:
        0
    username:
        root
    uuid:
        50084d56-a80e-1963-6a82-c9f9514cc0b2
    virtual:
        VMware
    zfs_feature_flags:
        False
    zfs_support:
        False
    zmqversion:
        4.3.4
    大傻逼:
        熊用民
    我儿:
        申龙飞
[root@master ~]# salt 'node1' grains.get 我儿
node1:
    申龙飞
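
If a newly added grain does not show up immediately, the minion's grains can be refreshed from the master without restarting the salt-minion service (a sketch; the exact module function can vary between Salt releases):

[root@master ~]# salt 'node1' saltutil.refresh_grains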

4.2 The Pillar component

Pillar is another very important SaltStack component: it is the data management center and is frequently used together with states in large-scale configuration management. Pillar's main role in SaltStack is to store and define the data needed for configuration management, such as software version numbers, user names, and passwords. It is defined and stored in the same way as Grains, in YAML format.

The Master configuration file contains a Pillar settings section that defines the Pillar-related parameters:

#pillar_roots:
#  base:
#    - /srv/pillar

By default, the Pillar working directory for the base environment is /srv/pillar. If you want to define different Pillar working directories for several environments, just modify this part of the configuration file.

Pillar characteristics:

  • Data can be defined specifically for a designated minion
  • Only the designated minion can see the data defined for it
  • Configured in the master configuration file

//View the pillar data
[root@master ~]# salt 'node1' pillar.items
node1:
    ----------

By default Pillar contains no data. To see some data here, uncomment the pillar_opts option in the master configuration file and set it to True.

[root@master ~]# vim /etc/salt/master
# master config file that can then be used on minions.
pillar_opts: True

//Restart the master and view the pillar data again
[root@master ~]# salt 'node1' pillar.items
node1:
    ----------
    master:
        ----------
        __cli:
            salt-master
        __role:
            master
        allow_minion_key_revoke:
            True
        archive_jobs:
            False
        auth_events:
            True
        auth_mode:
            1
        auto_accept:
            False
        azurefs_update_interval:
            60
        cache:
            localfs
        cache_sreqs:
            True
        cachedir:
            /var/cache/salt/master
        clean_dynamic_modules:
            True
        cli_summary:
            False
        client_acl_verify:
            True
        cluster_mode:
            False
        con_cache:
            False
        conf_file:
            /etc/salt/master
        config_dir:
            /etc/salt
        cython_enable:
            False
        daemon:
            False
        decrypt_pillar:
        decrypt_pillar_default:
            gpg
        decrypt_pillar_delimiter:
            :
        decrypt_pillar_renderers:
            - gpg
        default_include:
            master.d/*.conf
        default_top:
            base
        detect_remote_minions:
            False
        discovery:
            False
        django_auth_path:
        django_auth_settings:
        drop_messages_signature_fail:
            False
        dummy_pub:
            False
        eauth_acl_module:
        eauth_tokens:
            localfs
        enable_gpu_grains:
            False
        enable_ssh_minions:
            False
        enforce_mine_cache:
            False
        engines:
        env_order:
        event_match_type:
            startswith
        event_publisher_niceness:
            None
        event_return:
        event_return_blacklist:
        event_return_niceness:
            None
        event_return_queue:
            0
        event_return_whitelist:
        ext_job_cache:
        ext_pillar:
        extension_modules:
            /var/cache/salt/master/extmods
        external_auth:
            ----------
        extmod_blacklist:
            ----------
        extmod_whitelist:
            ----------
        failhard:
            False
        file_buffer_size:
            1048576
        file_client:
            local
        file_ignore_glob:
        file_ignore_regex:
        file_recv:
            False
        file_recv_max_size:
            100
        file_roots:
            ----------
            base:
                - /srv/salt/base
            test:
                - /srv/salt/test
        fileserver_backend:
            - roots
        fileserver_followsymlinks:
            True
        fileserver_ignoresymlinks:
            False
        fileserver_limit_traversal:
            False
        fileserver_update_niceness:
            None
        fileserver_verify_config:
            True
        fips_mode:
            False
        gather_job_timeout:
            10
        git_pillar_base:
            master
        git_pillar_branch:
            master
        git_pillar_env:
        git_pillar_fallback:
        git_pillar_global_lock:
            True
        git_pillar_includes:
            True
        git_pillar_insecure_auth:
            False
        git_pillar_passphrase:
        git_pillar_password:
        git_pillar_privkey:
        git_pillar_pubkey:
        git_pillar_refspecs:
            - +refs/heads/*:refs/remotes/origin/*
            - +refs/tags/*:refs/tags/*
        git_pillar_root:
        git_pillar_ssl_verify:
            True
        git_pillar_update_interval:
            60
        git_pillar_user:
        git_pillar_verify_config:
            True
        gitfs_base:
            master
        gitfs_disable_saltenv_mapping:
            False
        gitfs_fallback:
        gitfs_global_lock:
            True
        gitfs_insecure_auth:
            False
        gitfs_mountpoint:
        gitfs_passphrase:
        gitfs_password:
        gitfs_privkey:
        gitfs_pubkey:
        gitfs_ref_types:
            - branch
            - tag
            - sha
        gitfs_refspecs:
            - +refs/heads/*:refs/remotes/origin/*
            - +refs/tags/*:refs/tags/*
        gitfs_remotes:
        gitfs_root:
        gitfs_saltenv:
        gitfs_saltenv_blacklist:
        gitfs_saltenv_whitelist:
        gitfs_ssl_verify:
            True
        gitfs_update_interval:
            60
        gitfs_user:
        gpg_cache:
            False
        gpg_cache_backend:
            disk
        gpg_cache_ttl:
            86400
        hash_type:
            sha256
        hgfs_base:
            default
        hgfs_branch_method:
            branches
        hgfs_mountpoint:
        hgfs_remotes:
        hgfs_root:
        hgfs_saltenv_blacklist:
        hgfs_saltenv_whitelist:
        hgfs_update_interval:
            60
        http_connect_timeout:
            20.0
        http_max_body:
            107374182400
        http_request_timeout:
            3600.0
        id:
            node1
        interface:
            0.0.0.0
        ipc_mode:
            ipc
        ipc_write_buffer:
            0
        ipv6:
            None
        jinja_env:
            ----------
        jinja_lstrip_blocks:
            False
        jinja_sls_env:
            ----------
        jinja_trim_blocks:
            False
        job_cache:
            True
        job_cache_store_endtime:
            False
        keep_acl_in_token:
            False
        keep_jobs:
            24
        key_cache:
        key_logfile:
            /var/log/salt/key
        key_pass:
            None
        keysize:
            2048
        local:
            True
        lock_saltenv:
            False
        log_datefmt:
            %H:%M:%S
        log_datefmt_console:
            %H:%M:%S
        log_datefmt_logfile:
            %Y-%m-%d %H:%M:%S
        log_file:
            /var/log/salt/master
        log_fmt_console:
            [%(levelname)-8s] %(message)s
        log_fmt_jid:
            [JID: %(jid)s]
        log_fmt_logfile:
            %(asctime)s,%(msecs)03d [%(name)-17s:%(lineno)-4d][%(levelname)-8s][%(process)d] %(message)s
        log_granular_levels:
            ----------
        log_level:
            warning
        log_level_logfile:
            warning
        log_rotate_backup_count:
            0
        log_rotate_max_bytes:
            0
        loop_interval:
            60
        maintenance_niceness:
            None
        master_job_cache:
            local_cache
        master_pubkey_signature:
            master_pubkey_signature
        master_roots:
            ----------
            base:
                - /srv/salt-master
        master_sign_key_name:
            master_sign
        master_sign_pubkey:
            False
        master_stats:
            False
        master_stats_event_iter:
            60
        master_tops:
            ----------
        master_tops_first:
            False
        master_use_pubkey_signature:
            False
        max_event_size:
            1048576
        max_minions:
            0
        max_open_files:
            100000
        memcache_debug:
            False
        memcache_expire_seconds:
            0
        memcache_full_cleanup:
            False
        memcache_max_items:
            1024
        min_extra_mods:
        minion_data_cache:
            True
        minion_data_cache_events:
            True
        minion_id:
            node1
        minionfs_blacklist:
        minionfs_env:
            base
        minionfs_mountpoint:
        minionfs_update_interval:
            60
        minionfs_whitelist:
        module_dirs:
        mworker_niceness:
            None
        mworker_queue_niceness:
            None
        netapi_allow_raw_shell:
            False
        nodegroups:
            ----------
        on_demand_ext_pillar:
            - libvirt
            - virtkey
        open_mode:
            False
        optimization_order:
            - 0
            - 1
            - 2
        order_masters:
            False
        outputter_dirs:
        peer:
            ----------
        permissive_acl:
            False
        permissive_pki_access:
            False
        pidfile:
            /var/run/salt-master.pid
        pillar_cache:
            False
        pillar_cache_backend:
            disk
        pillar_cache_ttl:
            3600
        pillar_includes_override_sls:
            False
        pillar_merge_lists:
            False
        pillar_opts:
            True
        pillar_roots:
            ----------
            base:
                - /srv/pillar
                - /srv/spm/pillar
        pillar_safe_render_error:
            True
        pillar_source_merging_strategy:
            smart
        pillar_version:
            2
        pillarenv:
            None
        ping_on_rotate:
            False
        pki_dir:
            /etc/salt/pki/master
        preserve_minion_cache:
            False
        pub_hwm:
            1000
        pub_server_niceness:
            None
        publish_port:
            4505
        publish_session:
            86400
        publisher_acl:
            ----------
        publisher_acl_blacklist:
            ----------
        queue_dirs:
        range_server:
            range:80
        reactor:
        reactor_niceness:
            None
        reactor_refresh_interval:
            60
        reactor_worker_hwm:
            10000
        reactor_worker_threads:
            10
        regen_thin:
            False
        remote_minions_port:
            22
        renderer:
            jinja|yaml
        renderer_blacklist:
        renderer_whitelist:
        req_server_niceness:
            None
        require_minion_sign_messages:
            False
        ret_port:
            4506
        root_dir:
            /
        roots_update_interval:
            60
        rotate_aes_key:
            True
        runner_dirs:
        runner_returns:
            True
        s3fs_update_interval:
            60
        salt_cp_chunk_size:
            98304
        saltenv:
            None
        saltversion:
            3004
        schedule:
            ----------
        search:
        serial:
            msgpack
        show_jid:
            False
        show_timeout:
            True
        sign_pub_messages:
            True
        signing_key_pass:
            None
        sock_dir:
            /var/run/salt/master
        sock_pool_size:
            1
        sqlite_queue_dir:
            /var/cache/salt/master/queues
        ssh_config_file:
            /root/.ssh/config
        ssh_identities_only:
            False
        ssh_list_nodegroups:
            ----------
        ssh_log_file:
            /var/log/salt/ssh
        ssh_passwd:
        ssh_port:
            22
        ssh_priv_passwd:
        ssh_scan_ports:
            22
        ssh_scan_timeout:
            0.01
        ssh_sudo:
            False
        ssh_sudo_user:
        ssh_timeout:
            60
        ssh_use_home_key:
            False
        ssh_user:
            root
        ssl:
            None
        state_aggregate:
            False
        state_auto_order:
            True
        state_events:
            False
        state_output:
            full
        state_output_diff:
            False
        state_output_profile:
            True
        state_top:
            salt://top.sls
        state_top_saltenv:
            None
        state_verbose:
            True
        sudo_acl:
            False
        svnfs_branches:
            branches
        svnfs_mountpoint:
        svnfs_remotes:
        svnfs_root:
        svnfs_saltenv_blacklist:
        svnfs_saltenv_whitelist:
        svnfs_tags:
            tags
        svnfs_trunk:
            trunk
        svnfs_update_interval:
            60
        syndic_dir:
            /var/cache/salt/master/syndics
        syndic_event_forward_timeout:
            0.5
        syndic_failover:
            random
        syndic_forward_all_events:
            False
        syndic_jid_forward_cache_hwm:
            100
        syndic_log_file:
            /var/log/salt/syndic
        syndic_master:
            masterofmasters
        syndic_pidfile:
            /var/run/salt-syndic.pid
        syndic_wait:
            5
        tcp_keepalive:
            True
        tcp_keepalive_cnt:
            -1
        tcp_keepalive_idle:
            300
        tcp_keepalive_intvl:
            -1
        tcp_master_pub_port:
            4512
        tcp_master_publish_pull:
            4514
        tcp_master_pull_port:
            4513
        tcp_master_workers:
            4515
        test:
            False
        thin_extra_mods:
        thorium_interval:
            0.5
        thorium_roots:
            ----------
            base:
                - /srv/thorium
        thorium_top:
            top.sls
        thoriumenv:
            None
        timeout:
            5
        token_dir:
            /var/cache/salt/master/tokens
        token_expire:
            43200
        token_expire_user_override:
            False
        top_file_merging_strategy:
            merge
        transport:
            zeromq
        unique_jid:
            False
        user:
            root
        utils_dirs:
            - /var/cache/salt/master/extmods/utils
        verify_env:
            True
        winrepo_branch:
            master
        winrepo_cachefile:
            winrepo.p
        winrepo_dir:
            /srv/salt/win/repo
        winrepo_dir_ng:
            /srv/salt/win/repo-ng
        winrepo_fallback:
        winrepo_insecure_auth:
            False
        winrepo_passphrase:
        winrepo_password:
        winrepo_privkey:
        winrepo_pubkey:
        winrepo_refspecs:
            - +refs/heads/*:refs/remotes/origin/*
            - +refs/tags/*:refs/tags/*
        winrepo_remotes:
            - https://github.com/saltstack/salt-winrepo.git
        winrepo_remotes_ng:
            - https://github.com/saltstack/salt-winrepo-ng.git
        winrepo_ssl_verify:
            True
        winrepo_user:
        worker_threads:
            5
        zmq_backlog:
            1000
        zmq_filtering:
            False
        zmq_monitor:
            False

Defining custom pillar data:
Search for pillar_roots in the master configuration file to see where pillar data is stored.

[root@master ~]# vim /etc/salt/master
pillar_roots:
  base:
    - /srv/pillar/base
[root@master ~]# mkdir -p /srv/pillar/base

[root@master ~]# tree /srv/pillar/
/srv/pillar/
└── base
    ├── apache.sls
    └── top.sls

1 directory, 2 files

[root@master ~]# cat /srv/pillar/base/apache.sls 
{% if grains['os'] == 'CentOS Stream' %}
apache: httpd
{% elif grains['os'] == 'Debian' %}
apache: apache2
{% endif %}

//Define the top file entry point
[root@master ~]# cat /srv/pillar/base/top.sls 
base:
  'node1':
    - apache

[root@master ~]# salt 'node1' pillar.items
node1:
    ----------
    apache:
        httpd
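
Minions cache pillar data, so if a change to the pillar files does not show up in pillar.items, the pillar can be refreshed from the master first:

[root@master ~]# salt 'node1' saltutil.refresh_pillar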

//Modify the apache state file under the salt directory so that it references the pillar data
[root@master ~]# cat /srv/salt/base/web/apache/install.sls 
apache-install:
  pkg.installed:
    - name: {{ pillar['apache'] }}

apache-service:
  service.running:
    - name: {{ pillar['apache'] }}
    - enable: true
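
As a slightly more defensive variant (not what the original state file uses), salt['pillar.get'] with a default value avoids a render error on minions where the apache pillar key is not defined:

apache-install:
  pkg.installed:
    - name: {{ salt['pillar.get']('apache', 'httpd') }}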

//Run the highstate
[root@master ~]# salt 'node1' test.ping
node1:
    True
[root@master ~]# salt 'node1' state.highstate
node1:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: All specified packages are already installed
     Started: 02:08:12.864604
    Duration: 807.063 ms
     Changes:   
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: The service httpd is already running
     Started: 02:08:13.673670
    Duration: 38.31 ms
     Changes:   

Summary for node1
------------
Succeeded: 2
Failed:    0
------------
Total states run:     2
Total run time: 845.373 ms
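
With the apache pillar value in place, pillar data can also be used for command-line target matching via the -I (pillar) option, one of the use cases summarized in the next section:

[root@master ~]# salt -I 'apache:httpd' test.ping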

4.3 Differences between Grains and Pillar

  • Grains: stored on the minion; static; collected when the minion starts (can be refreshed without restarting the salt-minion service); used for 1. information queries, 2. target matching on the command line, 3. target matching in the top file, 4. target matching in templates.
  • Pillar: stored on the master; dynamic; specified on demand and takes effect in real time; used for 1. target matching, 2. storing sensitive configuration data.