Ops in Practice: Automated Operations — Getting Started with SaltStack

Introduction

SaltStack is a distributed remote execution system used to run commands and query data on remote nodes, and to keep those nodes in predefined states.

Core Features

  • Sends commands to remote hosts in parallel, for higher efficiency

  • Uses a secure, encrypted transport protocol

  • Keeps the network payload minimal and fast

  • Provides a simple programming interface

In addition, because it follows a client/server (C/S) model and ships a fine-grained targeting system, commands can be aimed not only at individual hostnames but also at groups of hosts that match a given system attribute.

Port 4505 is the publish port, which the master uses to push commands out.

Port 4506 is the return port, which the master uses to receive requests and result messages from minions.

A Salt command consists of three main parts:

salt '<target>' <function> [arguments]

target			specifies which minions to run on; by default a glob
			match against the minion ID
			regular expressions are also supported
			as are explicit lists of minion IDs
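The matching semantics of the three target types can be sketched with Python's standard library (illustrative only — the minion IDs below are hypothetical, and Salt performs this matching internally on the master, not via these exact calls):

```python
import fnmatch
import re

# Hypothetical minion IDs, matching the naming used later in this article.
minions = ["Server1", "Server2", "Server3", "web01", "web02"]

# glob (default):   salt 'Server*' test.ping
glob_hits = [m for m in minions if fnmatch.fnmatch(m, "Server*")]

# regex (-E):       salt -E 'web0[12]' test.ping
regex_hits = [m for m in minions if re.fullmatch(r"web0[12]", m)]

# list (-L):        salt -L 'Server2,Server3' test.ping
wanted = set("Server2,Server3".split(","))
list_hits = [m for m in minions if m in wanted]

print(glob_hits)   # ['Server1', 'Server2', 'Server3']
print(regex_hits)  # ['web01', 'web02']
print(list_hits)   # ['Server2', 'Server3']
```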

Installation and Deployment

rpm --import https://repo.saltstack.com/yum/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pub

Create /etc/yum.repos.d/saltstack.repo with the following content:

[saltstack-repo]
name=SaltStack repo for RHEL/CentOS $releasever
baseurl=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest
enabled=1
gpgcheck=1
gpgkey=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/SALTSTACK-GPG-KEY.pub

Run sudo yum clean expire-cache.

Run sudo yum update.

Install the salt-minion, salt-master, or other Salt components:

    yum install salt-master
    yum install salt-minion
    yum install salt-ssh
    yum install salt-syndic
    yum install salt-cloud
    
##Install the master on the control node
yum install -y salt-master
systemctl enable salt-master
systemctl start salt-master

##Install the minion on the managed node
yum install -y salt-minion

##Edit the config file to add the master address
vim /etc/salt/minion

master: 172.25.5.1

systemctl enable salt-minion
systemctl start salt-minion

Basic Usage

##Enable the master service
[root@Server1 ~]# systemctl enable --now salt-master.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/salt-master.service to /usr/lib/systemd/system/salt-master.service.
##Once a minion is started, the master can see its unaccepted key
[root@Server1 ~]# salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
Server2
Rejected Keys:
[root@Server1 ~]# salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
Server2
Server3
Rejected Keys:
##Accept all keys
[root@Server1 ~]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
Server2
Server3
Proceed? [n/Y] Y
Key for minion Server2 accepted.
Key for minion Server3 accepted.
[root@Server1 ~]# salt-key -L
Accepted Keys:
Server2
Server3
Denied Keys:
Unaccepted Keys:
Rejected Keys:
##The minions and the master are now connected; run a test
[root@Server1 ~]# salt '*' test.ping
Server2:
    True
Server3:
    True
##Write a simple state to deploy Apache and test it
[root@Server1 _modules]# vim /srv/salt/Apache.sls
[root@Server1 _modules]# salt '*' state.sls Apache
Server2:
----------
          ID: httpd
    Function: pkg.installed
      Result: True
     Comment: The following packages were installed/updated: httpd
     Started: 14:35:21.096991
    Duration: 6100.891 ms
     Changes:   
              ----------
              apr:
                  ----------
                  new:
                      1.4.8-3.el7_4.1
                  old:
              apr-util:
                  ----------
                  new:
                      1.5.2-6.el7
                  old:
              httpd:
                  ----------
                  new:
                      2.4.6-88.el7
                  old:
              httpd-tools:
                  ----------
                  new:
                      2.4.6-88.el7
                  old:
              mailcap:
                  ----------
                  new:
                      2.1.41-2.el7
                  old:

Summary for Server2
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:   6.101 s
Server3:
----------
          ID: httpd
    Function: pkg.installed
      Result: True
     Comment: The following packages were installed/updated: httpd
     Started: 14:35:21.290611
    Duration: 6127.828 ms
     Changes:   
              ----------
              apr:
                  ----------
                  new:
                      1.4.8-3.el7_4.1
                  old:
              apr-util:
                  ----------
                  new:
                      1.5.2-6.el7
                  old:
              httpd:
                  ----------
                  new:
                      2.4.6-88.el7
                  old:
              httpd-tools:
                  ----------
                  new:
                      2.4.6-88.el7
                  old:
              mailcap:
                  ----------
                  new:
                      2.1.41-2.el7
                  old:

Summary for Server3
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:   6.128 s

Contents of Apache.sls

httpd:
  pkg.installed:
    - name: httpd

Writing Your Own Modules

##Write a module that shows disk mount information
[root@Server1 _modules]# vim /srv/salt/_modules/mydisk.py

def df():
    return __salt__['cmd.run']('df -h')

##Sync it to Server2
[root@Server1 _modules]# salt Server2 saltutil.sync_modules
Server2:
    - modules.mydisk
    
##The module can now be used against Server2
[root@Server1 _modules]# salt Server2 mydisk.df
Server2:
    Filesystem             Size  Used Avail Use% Mounted on
    /dev/mapper/rhel-root   17G  1.2G   16G   8% /
    devtmpfs               484M     0  484M   0% /dev
    tmpfs                  496M  100K  496M   1% /dev/shm
    tmpfs                  496M   13M  483M   3% /run
    tmpfs                  496M     0  496M   0% /sys/fs/cgroup
    /dev/vda1             1014M  132M  883M  14% /boot
    tmpfs                  100M     0  100M   0% /run/user/0
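A custom execution module returns whatever its function returns, and `cmd.run` hands back plain text. If structured data is preferred, the module can parse the `df` output before returning it. A minimal sketch (the `parse_df` helper is hypothetical, not part of Salt):

```python
def parse_df(output):
    """Split `df -h` text into one dict per filesystem line."""
    headers = ["filesystem", "size", "used", "avail", "use_pct", "mounted_on"]
    rows = []
    for line in output.strip().splitlines()[1:]:   # skip the header row
        fields = line.split(None, len(headers) - 1)
        rows.append(dict(zip(headers, fields)))
    return rows

sample = """Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   17G  1.2G   16G   8% /
/dev/vda1             1014M  132M  883M  14% /boot"""

rows = parse_df(sample)
print(rows[0]["mounted_on"])  # /
print(rows[1]["use_pct"])     # 14%
```

Inside a real module the same function could wrap `__salt__['cmd.run']('df -h')`, so callers get a list of dicts instead of raw text.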

Installing from Source

image-20210423153453561

nginx/install.sls

nginx-install:
  pkg.installed:
    - pkgs:
      - pcre-devel
      - gcc
      - openssl-devel
  file.managed:
    - source: salt://nginx/files/nginx-1.18.0.tar.gz
    - name: /mnt/nginx-1.18.0.tar.gz
  cmd.run:
    - name: cd /mnt && tar zxf nginx-1.18.0.tar.gz && cd nginx-1.18.0 && sed -i 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/g' auto/cc/gcc &&  ./configure --prefix=/usr/local/nginx --with-http_ssl_module &> /dev/null && make &> /dev/null && make install &> /dev/null
    - creates: /usr/local/nginx

init.sls

include:
  - nginx.install

/usr/local/nginx/conf/nginx.conf:
  file.managed:
    - source: salt://nginx/files/nginx.conf
    
nginx-service:
  file.managed:
    - source: salt://nginx/files/nginx.service
    - name: /etc/systemd/system/nginx.service

  service.running:
    - name: nginx
    - enable: True
    - reload: True
    - watch:
      - file: /usr/local/nginx/conf/nginx.conf

top.sls

base:
  'Server2':
    - apache
  'Server3':
    - nginx

How to run

salt '*' state.highstate

Execution output

[root@Server1 salt]# salt '*' state.highstate
Server2:
----------
          ID: httpd
    Function: pkg.installed
      Result: True
     Comment: All specified packages are already installed
     Started: 15:33:27.464898
    Duration: 706.589 ms
     Changes:   

Summary for Server2
------------
Succeeded: 1
Failed:    0
------------
Total states run:     1
Total run time: 706.589 ms
Server3:
----------
          ID: nginx-install
    Function: pkg.installed
      Result: True
     Comment: All specified packages are already installed
     Started: 15:33:27.667848
    Duration: 739.729 ms
     Changes:   
----------
          ID: nginx-install
    Function: file.managed
        Name: /mnt/nginx-1.18.0.tar.gz
      Result: True
     Comment: File /mnt/nginx-1.18.0.tar.gz is in the correct state
     Started: 15:33:28.410887
    Duration: 39.863 ms
     Changes:   
----------
          ID: nginx-install
    Function: cmd.run
        Name: cd /mnt && tar zxf nginx-1.18.0.tar.gz && cd nginx-1.18.0 && sed -i 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/g' auto/cc/gcc &&  ./configure --prefix=/usr/local/nginx --with-http_ssl_module &> /dev/null && make &> /dev/null && make install &> /dev/null
      Result: True
     Comment: /usr/local/nginx exists
     Started: 15:33:28.452094
    Duration: 0.847 ms
     Changes:   
----------
          ID: /usr/local/nginx/conf/nginx.conf
    Function: file.managed
      Result: True
     Comment: File /usr/local/nginx/conf/nginx.conf is in the correct state
     Started: 15:33:28.453164
    Duration: 13.671 ms
     Changes:   
----------
          ID: nginx-service
    Function: file.managed
        Name: /etc/systemd/system/nginx.service
      Result: True
     Comment: File /etc/systemd/system/nginx.service is in the correct state
     Started: 15:33:28.467170
    Duration: 13.336 ms
     Changes:   
----------
          ID: nginx-service
    Function: service.running
        Name: nginx
      Result: True
     Comment: The service nginx is already running
     Started: 15:33:28.481955
    Duration: 54.933 ms
     Changes:   

Summary for Server3
------------
Succeeded: 6
Failed:    0
------------
Total states run:     6
Total run time: 862.379 ms

Test

[root@Server3 salt]# curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Grains and Pillar in Depth

Grains

Grains is a SaltStack component; its data is stored on the minion side.

When salt-minion starts, the data it collects is stored statically in grains; it is only refreshed when the minion restarts.

Because grains data is static, modifying it frequently is not recommended.

Use cases

  • Information queries
  • Use in targets to match minions
  • Use in the state system for configuration management

Examples

  • List all available grains
salt '*' grains.ls
  • List grains together with their values using grains.items
salt '*' grains.items
  • Fetch a specific value
[root@Server1 salt]# salt Server2  grains.item ipv4
Server2:
    ----------
    ipv4:
        - 127.0.0.1
        - 172.25.5.2
  • Edit the following on Server2
grains:
  roles:
    - webserver
    - memcache
    - Apache
  deployment: datacenter4
  cabinet: 13
  cab_u: 14-15
  • Sync from Server1, then verify
[root@Server1 salt]# salt Server2  saltutil.sync_grains
Server2:
[root@Server1 salt]# salt Server2  grains.item roles
Server2:
    ----------
    roles:
        - webserver
        - memcache
        - Apache
  • Delete the entries just added, then restart the minion
[root@Server1 salt]# salt Server2  grains.item roles
Server2:
    ----------
    roles:
        - webserver
        - memcache
        - Apache
[root@Server1 salt]# salt Server2  grains.item roles
Server2:
    Minion did not return. [Not connected]
ERROR: Minions returned with non-zero exit code
[root@Server1 salt]# salt Server2  grains.item roles
Server2:
    ----------
    roles:
        - webserver
        - memcache
  • Grains can also be written directly to /etc/salt/grains
roles:
  - webserver
  - memcache
deployment: datacenter4
cabinet: 13
cab_u: 14-15

This has the same effect as the minion-config approach above.

  • To do this from the master side instead, create a directory
mkdir /srv/salt/_grains

vim mygrains.py


#!/usr/bin/env python
def yourfunction():
    # initialize a grains dictionary
    grains = {}
    # Some code for logic that sets grains like
    grains['yourcustomgrain'] = True
    grains['anothergrain'] = 'somevalue'
    return grains
     
     
salt '*' saltutil.sync_grains
  • After doing this on both VMs, grains can also be used to split configuration by role at targeting time
[root@Server1 salt]# salt -G roles:memcache cmd.run hostname
Server2:
    Server2
Server3:
    Server3

Pillar

Pillar, like grains, is a data system, but its use cases differ.

Pillar stores information dynamically on the master side, mainly private or sensitive data (usernames, passwords, and so on), and a given item can be made visible only to specific minions.

Pillar is better suited to configuration management.

  • Dynamic, stored on the MASTER side
  • Independent of the base directory
  • Changes take effect without restarting minions, which suits cluster operations

Declaring Pillar

##Define the pillar base directory
vim /etc/salt/master

pillar_roots:
  base:
    - /srv/pillar
    
mkdir /srv/pillar

##Restart the salt-master service
systemctl restart salt-master

Defining Pillar Items

vim /srv/pillar/top.sls

base:
  '*':
    - packages
    

vim /srv/pillar/packages.sls

{% if grains['fqdn'] == 'server3' %}
package: httpd
{% elif grains['fqdn'] == 'server2' %}
package: mariadb
{% endif %}



##Refresh pillar data
salt '*' saltutil.refresh_pillar
##Query pillar data
salt '*' pillar.items
salt '*' pillar.item  roles


##Match from the command line (-I matches on pillar data)
salt -I 'roles:apache' test.ping
##Use in the state system
vim /srv/salt/apache.sls

apache:
  pkg.installed:
    - name: {{ pillar['package'] }}

Using Jinja Templates

Key Jinja usage to remember

Two kinds of delimiters: {% ... %} and {{ ... }}

The former executes statements such as for loops and assignments

The latter prints the result of an expression into the rendered output
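The behavior of the expression delimiter can be illustrated with a toy renderer (a deliberately minimal sketch using only the standard library; real Salt templates are rendered by the Jinja2 engine, which this does not replace — it handles only simple variable lookups, no {% ... %} logic):

```python
import re

def render(template, context):
    """Substitute {{ name }} expressions with values from context.
    A toy stand-in for Jinja2's expression delimiter only."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(context[m.group(1)]), template)

print(render("Listen {{ port }}", {"port": 80}))  # Listen 80
```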

Basic usage

##A conditional control structure inside a state file
vim /srv/salt/test.sls

/mnt/testfile:
  file.append:
    {% if grains['fqdn'] == 'server2' %}
    - text: server2
    {% elif grains['fqdn'] == 'server3' %}
    - text: server3
    {% endif %}
    
With this pattern, the same file path on different hosts can receive different content.

##In this example, a value is injected into an ordinary config file
vim /srv/salt/apache.sls

/etc/httpd/conf/httpd.conf:
  file.managed:
  - source: salt://httpd.conf
  - template: jinja
  - context:
    bind: 172.25.0.2
##With import, values can be shared between state files
##Define variables in a dedicated variables file
vim lib.sls

{% set port = 80 %}

##Import the variable into the config file template
# vim httpd.conf

{% from 'lib.sls' import port %}
...
Listen {{ port }}

Ways to reference variables

##Reference a grains value directly
Listen {{ grains['ipv4'][1] }}

##Reference a pillar value directly
Listen {{ pillar['ip'] }}

##Reference inside a state file
  - template: jinja
  - context:
      bind: {{ pillar['ip'] }}

Configuring Keepalived High Availability with SaltStack

Outline of the steps

  • Create a Keepalived directory and write a suitable entry file init.sls
  • Since the machines split into a master and a backup yet share one template, variables must be defined
  • The template variables take their values in pillar/Keepalived.sls, are bound in Keepalived/init.sls, and are referenced in the template
  • In salt/top.sls, assign Keepalived to all machines
kp-install:
  pkg.installed:
    - name: keepalived

  file.managed:
    - source: salt://Keepalived/files/keepalived.conf
    - name: /etc/keepalived/keepalived.conf
    - template: jinja
    - context:
      STATE: {{ pillar['kp-state'] }}
      VRID: {{ pillar['kp-vrid'] }}
      PRI: {{ pillar['kp-pri'] }}

  service.running:
    - name: keepalived
    - enable: True
    - reload: True
    - watch:
      - file: kp-install

Contents of pillar/keepalived.sls

{% if grains['fqdn'] == 'Server2' %}
kp-state: MASTER
kp-vrid: 5
kp-pri: 100
{% elif grains['fqdn'] == 'Server3' %}
kp-state: BACKUP
kp-vrid: 5
kp-pri: 50
{% endif %}

Contents of pillar/top.sls

base:
  '*':
    - Keepalived

Contents of salt/top.sls

base:
  'Server2':
    - Keepalived
  'Server3':
    - Keepalived

Configuration files

image-20210425100626048

image-20210425100635939

  • Run salt '*' state.highstate

Execution result

Server2:
----------
          ID: kp-install
    Function: pkg.installed
        Name: keepalived
      Result: True
     Comment: All specified packages are already installed
     Started: 10:18:26.395059
    Duration: 701.955 ms
     Changes:   
----------
          ID: kp-install
    Function: file.managed
        Name: /etc/keepalived/keepalived.conf
      Result: True
     Comment: File /etc/keepalived/keepalived.conf updated
     Started: 10:18:27.101137
    Duration: 46.93 ms
     Changes:   
              ----------
              diff:
                  --- 
                  +++ 
                  @@ -15,10 +15,10 @@
                   }
                   
                   vrrp_instance VI_1 {
                  -    state { STATE }
                  +    state MASTER
                       interface eth0
                  -    virtual_router_id { VRID }
                  -    priority { PRI }
                  +    virtual_router_id 5
                  +    priority 100
                       advert_int 1
                       authentication {
                           auth_type PASS
----------
          ID: kp-install
    Function: service.running
        Name: keepalived
      Result: True
     Comment: Service keepalived is already enabled, and is running
     Started: 10:18:27.149461
    Duration: 146.67 ms
     Changes:   
              ----------
              keepalived:
                  True

Summary for Server2
------------
Succeeded: 3 (changed=2)
Failed:    0
------------
Total states run:     3
Total run time: 895.555 ms
Server3:
----------
          ID: kp-install
    Function: pkg.installed
        Name: keepalived
      Result: True
     Comment: All specified packages are already installed
     Started: 10:18:26.594020
    Duration: 778.537 ms
     Changes:   
----------
          ID: kp-install
    Function: file.managed
        Name: /etc/keepalived/keepalived.conf
      Result: True
     Comment: File /etc/keepalived/keepalived.conf updated
     Started: 10:18:27.376387
    Duration: 47.47 ms
     Changes:   
              ----------
              diff:
                  --- 
                  +++ 
                  @@ -15,10 +15,10 @@
                   }
                   
                   vrrp_instance VI_1 {
                  -    state { STATE }
                  +    state BACKUP
                       interface eth0
                  -    virtual_router_id { VRID }
                  -    priority { PRI }
                  +    virtual_router_id 5
                  +    priority 50
                       advert_int 1
                       authentication {
                           auth_type PASS
----------
          ID: kp-install
    Function: service.running
        Name: keepalived
      Result: True
     Comment: Service keepalived is already enabled, and is running
     Started: 10:18:27.425795
    Duration: 157.849 ms
     Changes:   
              ----------
              keepalived:
                  True

Summary for Server3
------------
Succeeded: 3 (changed=2)
Failed:    0
------------
Total states run:     3
Total run time: 983.856 ms
  • The VIP now appears on Server2

image-20210425101909804

  • curl against the VIP from the host machine gets a response

  • After stopping keepalived on Server2, BACKUP is promoted to MASTER, the VIP floats over, and the curl result changes accordingly

[root@foundation5 ~]# curl 172.25.5.100
Server2
[root@foundation5 ~]# curl 172.25.5.100
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Nginx: Filling in the Gaps

image-20210425103244932

Contents of nginx/init.sls

include:
  - Nginx.install

nginx.conf:
  file.managed:
    - source: salt://Nginx/files/nginx.conf
    - name: /usr/local/nginx/conf/nginx.conf

nginx.service:
  file.managed:
    - source: salt://Nginx/files/nginx.service
    - name: /etc/systemd/system/nginx.service

  service.running:
    - name: nginx
    - enable: True
    - reload: True
    - watch:
      - file: nginx.conf

Contents of nginx/install.sls

nginx-install:
  pkg.installed:
    - pkgs:
      - pcre-devel
      - gcc
      - openssl-devel
  file.managed:
    - source: salt://Nginx/files/nginx-1.18.0.tar.gz
    - name: /mnt/nginx-1.18.0.tar.gz
  cmd.run:
    - name: cd /mnt && tar zxf nginx-1.18.0.tar.gz && cd nginx-1.18.0 && sed -i 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/g' auto/cc/gcc &&  ./configure --prefix=/usr/local/nginx --with-http_ssl_module &> /dev/null && make &> /dev/null && make install &> /dev/null
    - creates: /usr/local/nginx

Contents of nginx.service

[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStartPre=/usr/local/nginx/sbin/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Contents of salt/top.sls

base:
  'Server2':
    - Keepalived
  'Server3':
    - Nginx
    - Keepalived

Job Management

The job cache is kept for 24 hours by default

vim /etc/salt/master

keep_jobs: 24

The master's job cache directory is /var/cache/salt/master/jobs

  • For automated operations, a database backend is essential

Installation on the master

yum install -y MySQL-python.x86_64
  • Edit the master config file
vim /etc/salt/master

master_job_cache: mysql
mysql.host: 127.0.0.1
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
  • Start the database and import the officially provided MySQL schema
CREATE DATABASE  `salt`
  DEFAULT CHARACTER SET utf8
  DEFAULT COLLATE utf8_general_ci;

USE `salt`;

--
-- Table structure for table `jids`
--

DROP TABLE IF EXISTS `jids`;
CREATE TABLE `jids` (
  `jid` varchar(255) NOT NULL,
  `load` mediumtext NOT NULL,
  UNIQUE KEY `jid` (`jid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE INDEX jid ON jids(jid) USING BTREE;

--
-- Table structure for table `salt_returns`
--

DROP TABLE IF EXISTS `salt_returns`;
CREATE TABLE `salt_returns` (
  `fun` varchar(50) NOT NULL,
  `jid` varchar(255) NOT NULL,
  `return` mediumtext NOT NULL,
  `id` varchar(255) NOT NULL,
  `success` varchar(10) NOT NULL,
  `full_ret` mediumtext NOT NULL,
  `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  KEY `id` (`id`),
  KEY `jid` (`jid`),
  KEY `fun` (`fun`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

--
-- Table structure for table `salt_events`
--

DROP TABLE IF EXISTS `salt_events`;
CREATE TABLE `salt_events` (
`id` BIGINT NOT NULL AUTO_INCREMENT,
`tag` varchar(255) NOT NULL,
`data` mediumtext NOT NULL,
`alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
`master_id` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
KEY `tag` (`tag`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
  • Log in to the database and grant privileges
grant all on salt.* to salt@'localhost' identified by 'salt';

grant all on salt.* to salt@'%' identified by 'salt';
  • Test the login
mysql -usalt -psalt salt
  • Restart the service and test
systemctl restart salt-master

salt server3 cmd.run hostname
  • Inspect the database
mysql -usalt -psalt salt

MariaDB [(none)]> use salt;
MariaDB [salt]> select * from salt_returns;
+---------+----------------------+-----------+---------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+
| fun     | jid                  | return    | id      | success | full_ret                                                                                                                                                                                                   | alter_time          |
+---------+----------------------+-----------+---------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+
| cmd.run | 20210425035148815274 | "Server2" | Server2 | 1       | {"fun_args": ["hostname"], "jid": "20210425035148815274", "return": "Server2", "retcode": 0, "success": true, "cmd": "_return", "_stamp": "2021-04-25T03:51:49.175288", "fun": "cmd.run", "id": "Server2"} | 2021-04-25 11:51:49 |
| cmd.run | 20210425035148815274 | "Server3" | Server3 | 1       | {"fun_args": ["hostname"], "jid": "20210425035148815274", "return": "Server3", "retcode": 0, "success": true, "cmd": "_return", "_stamp": "2021-04-25T03:51:49.245092", "fun": "cmd.run", "id": "Server3"} | 2021-04-25 11:51:49 |
+---------+----------------------+-----------+---------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+
2 rows in set (0.00 sec)
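The cache backend simply writes one row per minion return, so job history can be queried like any other table. A toy reproduction with sqlite3 (illustrative only; the backend configured above is MySQL, and the real schema is the one imported earlier):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
# "return" must be quoted as a column name, as in the real schema.
conn.execute("""CREATE TABLE salt_returns (
    fun TEXT, jid TEXT, "return" TEXT, id TEXT, success TEXT, full_ret TEXT)""")

# One row per minion return, mirroring the cmd.run job shown above.
ret = {"fun": "cmd.run", "jid": "20210425035148815274",
       "return": "Server2", "id": "Server2", "success": True}
conn.execute("INSERT INTO salt_returns VALUES (?, ?, ?, ?, ?, ?)",
             (ret["fun"], ret["jid"], json.dumps(ret["return"]),
              ret["id"], "1", json.dumps(ret)))

rows = conn.execute(
    'SELECT id, "return" FROM salt_returns WHERE jid = ?',
    ("20210425035148815274",)).fetchall()
print(rows)  # [('Server2', '"Server2"')]
```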

Extensions: Salt-ssh and Salt-syndic

salt-ssh does not require a minion, which brings devices that cannot run a minion into the managed scope.

##First install salt-ssh on the control node
##Edit the roster file with the client information
vim roster

Server3:
  host: 172.25.5.3
  user: root
  passwd: westos
# sudo: True

Enable the sudo option if needed.

Because it works over SSH, the client needs no configuration at all; but for the same reason, execution runs serially over SSH and throughput drops noticeably.

Salt-syndic

zabbix proxy类似, syndic就是个proxy, 隔离masterminion.

The syndic must run on a master, which in turn connects to a higher-level top master.

States pushed down by the top master reach the lower masters through the syndic, and data returned by minions to their master is relayed by the syndic up to the top master.

The top master does not know, and does not care, how many minions there are.

The file_roots and pillar_roots directories must be kept identical between the syndic and the top master.

  • Here Server4 serves as the top master; salt-syndic is installed on Server1, which acts as a master
yum install salt-syndic

vim master

# Set the order_masters setting to True if this master will command lower
# masters' syndic interfaces.
order_masters: True

# If this master will be running a salt syndic daemon, syndic_master tells
# this master where to receive commands from.
syndic_master: 172.25.5.4

image-20210425140844230

[root@Server1 salt]# systemctl enable --now salt-syndic.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/salt-syndic.service to /usr/lib/systemd/system/salt-syndic.service.
[root@Server1 salt]# systemctl restart salt-master.service 

Operations on Server4

[root@Server4 salt]# salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
Server1
Rejected Keys:


[root@Server4 salt]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
Server1
Proceed? [n/Y] y
Key for minion Server1 accepted.

[root@Server4 salt]# salt-key -L
Accepted Keys:
Server1
Denied Keys:
Unaccepted Keys:
Rejected Keys:

As the top master, Server4 can query all the minions

[root@Server4 salt]# salt '*' test.ping
Server2:
    True
Server3:
    True

Database storage was configured on Server1 earlier, so the new entries also appear in Server1's database

MariaDB [salt]> select * from salt_returns;
+-----------+----------------------+-----------+---------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+
| fun       | jid                  | return    | id      | success | full_ret                                                                                                                                                                                                   | alter_time          |
+-----------+----------------------+-----------+---------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+
| cmd.run   | 20210425035649214462 | "Server2" | Server2 | 1       | {"fun_args": ["hostname"], "jid": "20210425035649214462", "return": "Server2", "retcode": 0, "success": true, "cmd": "_return", "_stamp": "2021-04-25T03:56:49.378392", "fun": "cmd.run", "id": "Server2"} | 2021-04-25 11:56:49 |
| cmd.run   | 20210425035649214462 | "Server3" | Server3 | 1       | {"fun_args": ["hostname"], "jid": "20210425035649214462", "return": "Server3", "retcode": 0, "success": true, "cmd": "_return", "_stamp": "2021-04-25T03:56:49.383171", "fun": "cmd.run", "id": "Server3"} | 2021-04-25 11:56:49 |
| test.ping | 20210425061133898794 | true      | Server3 | 1       | {"fun_args": [], "jid": "20210425061133898794", "return": true, "retcode": 0, "success": true, "cmd": "_return", "_stamp": "2021-04-25T06:11:34.657514", "fun": "test.ping", "id": "Server3"}              | 2021-04-25 14:11:34 |
| test.ping | 20210425061133898794 | true      | Server2 | 1       | {"fun_args": [], "jid": "20210425061133898794", "return": true, "retcode": 0, "success": true, "cmd": "_return", "_stamp": "2021-04-25T06:11:34.694983", "fun": "test.ping", "id": "Server2"}              | 2021-04-25 14:11:34 |
+-----------+----------------------+-----------+---------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+

Server1 does not appear in Server4's query results, since Server1 acts as a master rather than a minion.

If MySQL were also configured on Server4, both Server4's and Server1's databases would receive the records

Calling the API

yum install -y salt-api


image-20210425151046532

##Create the certificate files
[root@Server1 master.d]# cd /etc/pki/tls/private
[root@Server1 private]# ls
[root@Server1 private]# openssl genrsa 1024 > localhost.key
Generating RSA private key, 1024 bit long modulus
....................++++++
................++++++
e is 65537 (0x10001)
[root@Server1 private]# cat localhost.key 
-----BEGIN RSA PRIVATE KEY-----
MIICXAIBAAKBgQC5nQwLYu2UWO6hKUDoub1wcAkED2iBX3yfl5AoHSsDX9Z72ny4
L3yUy3Q52DshnzM1s7F5aL4ESTxod0EyumW3AS+xmTYcar9PnchcakMFrix73V2C
GcluWVhcHYuWp5ld5eHQ2dks6yJ39P52a+7hWThEL9MBuF+4TDNATvN4IQIDAQAB
AoGAP6I7stujn6wtg0rlWePzskx2itHNfi0CSKRpY5c8W9fLbIKnJ24AQ/LMUdhz
zT8LC2ojegASxN1mvFnGHGIS5GjNKgVO36u9fuoDcmmrJtwsIqSCge9DNSReYN1E
MBOvpc5THELTHNazeOxeqRw9RaAylYk7d9/00TbI3+RXjBECQQDpl5PD23Pzy54v
nZQYWLUzePuJ7iFejpPfEesIAFbYBzR47iR/j9wUhkycnyH1IVEGbwxoLgK6uRr+
4fc4JmxvAkEAy2s816sWozFfZjXC7edkOKxQU72x7XEJ4aWXGTk+K+kx4VejK3yv
BeoQF7Ry69xDdzLjgxY9WGJ3EX+Duz7MbwJBAIoYKisnma0POz07E0oxZy4+37Xz
KZcVAyZlGWVpje24lLTJVJp1Gc6odrJBAXpBb/01uUf9q29n7yWvwM9ZJ9ECQGUp
X4ihvRBLbXYXJmnJuT218/yxSdsbbB6bixkwLosH3ZaDTtJBn4kBbh9bzgsd7y9I
T3zRgpCB51T8ZTapdGECQFlM+bAY+hkq/zt/kuw63WG4LzbgDptkhul3Xju2eiav
vdknOrL02NGgtTKyvaCymHR4/ST/BekIkkq3JUmptDU=
-----END RSA PRIVATE KEY-----
[root@Server1 private]# cd ../certs
[root@Server1 certs]# ls
ca-bundle.crt  ca-bundle.trust.crt  make-dummy-cert  Makefile  renew-dummy-cert
[root@Server1 certs]# make testcert
umask 77 ; \
/usr/bin/openssl req -utf8 -new -key /etc/pki/tls/private/localhost.key -x509 -days 365 -out /etc/pki/tls/certs/localhost.crt 
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:China
string is too long, it needs to be less than  2 bytes long
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:Shannxi
Locality Name (eg, city) [Default City]:Xian
Organization Name (eg, company) [Default Company Ltd]:Westos
Organizational Unit Name (eg, section) []:Linux
Common Name (eg, your name or your server's hostname) []:Server1
Email Address []:lunarlibrary@foxmail.com
[root@Server1 certs]# ll
total 16
lrwxrwxrwx. 1 root root   49 Dec 29 09:31 ca-bundle.crt -> /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
lrwxrwxrwx. 1 root root   55 Dec 29 09:31 ca-bundle.trust.crt -> /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt
-rw-------  1 root root 1062 Apr 25 15:10 localhost.crt
-rwxr-xr-x. 1 root root  610 Aug 14  2018 make-dummy-cert
-rw-r--r--. 1 root root 2516 Aug 14  2018 Makefile
-rwxr-xr-x. 1 root root  829 Aug 14  2018 renew-dummy-cert
[root@Server1 certs]# cd ..
[root@Server1 tls]# cd /etc/salt/master.d/
[root@Server1 master.d]# vim auth.conf
[root@Server1 master.d]# vim ssl.conf
[root@Server1 private]# systemctl restart salt-master.service 
[root@Server1 private]# systemctl enable --now salt-api.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/salt-api.service to /usr/lib/systemd/system/salt-api.service.
[root@Server1 private]# netstat -antlp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      3509/mysqld         
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      3177/sshd           
tcp        0      0 0.0.0.0:4505            0.0.0.0:*               LISTEN      10546/python        
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      3592/master         
tcp        0      0 0.0.0.0:4506            0.0.0.0:*               LISTEN      10552/python        
tcp        0      0 172.25.5.1:4505         172.25.5.3:59080        ESTABLISHED 10546/python        
tcp        0      0 127.0.0.1:35866         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 172.25.5.1:39046        172.25.5.4:4506         TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35756         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35886         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 172.25.5.1:22           172.25.5.250:44738      ESTABLISHED 3905/sshd: root@pts 
tcp        0      0 127.0.0.1:35754         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35750         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35856         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35868         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 172.25.5.1:39052        172.25.5.4:4506         TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35864         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35848         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 172.25.5.1:4505         172.25.5.3:59074        TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35894         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 172.25.5.1:60300        172.25.5.4:4505         TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35854         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35870         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 172.25.5.1:4505         172.25.5.2:40286        TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35852         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35850         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35890         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 172.25.5.1:4505         172.25.5.2:40292        ESTABLISHED 10546/python        
tcp        0      0 127.0.0.1:35892         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35882         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35876         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35858         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35860         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 172.25.5.1:60460        172.25.5.4:4505         ESTABLISHED 10514/python        
tcp        0      0 127.0.0.1:35874         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 172.25.5.1:39050        172.25.5.4:4506         TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35872         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35862         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35758         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:35744         127.0.0.1:3306          TIME_WAIT   -                   
tcp6       0      0 :::22                   :::*                    LISTEN      3177/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      3592/master         
[root@Server1 private]# vim /etc/salt/master.d/auth.conf 
[root@Server1 private]# useradd saltapi
[root@Server1 private]# systemctl restart salt-master.service 
[root@Server1 private]# systemctl restart salt-api.service
[root@Server1 private]# netstat -antlp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      3509/mysqld         
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      3177/sshd           
tcp        0      0 0.0.0.0:4505            0.0.0.0:*               LISTEN      12130/python        
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      3592/master         
tcp        0      0 0.0.0.0:4506            0.0.0.0:*               LISTEN      12136/python        
tcp        0      0 172.25.5.1:4505         172.25.5.3:59080        TIME_WAIT   -                   
tcp        0      0 172.25.5.1:22           172.25.5.250:44738      ESTABLISHED 3905/sshd: root@pts 
tcp        0      0 127.0.0.1:35906         127.0.0.1:3306          TIME_WAIT   -                   
tcp        0      0 172.25.5.1:4505         172.25.5.2:40294        ESTABLISHED 12130/python        
tcp        0      0 172.25.5.1:39062        172.25.5.4:4506         TIME_WAIT   -                   
tcp        0      0 172.25.5.1:4505         172.25.5.2:40292        TIME_WAIT   -                   
tcp        0      0 172.25.5.1:60472        172.25.5.4:4505         ESTABLISHED 12098/python        
tcp        0      0 172.25.5.1:39058        172.25.5.4:4506         TIME_WAIT   -                   
tcp        0      0 172.25.5.1:60460        172.25.5.4:4505         TIME_WAIT   -                   
tcp        0      0 172.25.5.1:4505         172.25.5.3:59082        ESTABLISHED 12130/python        
tcp6       0      0 :::22                   :::*                    LISTEN      3177/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      3592/master            
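The contents of the two files edited above are not shown in the transcript; a minimal sketch of what they typically hold for this setup (the port and the `saltapi` PAM user match the curl call below and the permissions in the token response; the certificate paths are assumptions based on the `ls` output above):

```yaml
# /etc/salt/master.d/ssl.conf -- rest_cherrypy listener for salt-api
rest_cherrypy:
  port: 8000
  ssl_crt: /etc/pki/tls/certs/localhost.crt
  ssl_key: /etc/pki/tls/private/localhost.key

# /etc/salt/master.d/auth.conf -- grant the saltapi PAM user full access
external_auth:
  pam:
    saltapi:
      - .*
      - '@wheel'
      - '@runner'
      - '@jobs'
```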
##Obtain a token via the API
[root@Server1 private]# curl -sSk https://172.25.5.1:8000/login -H 'Accept: application/x-yaml' -d username=saltapi -d password=westos -d eauth=pam
return:
- eauth: pam
  expire: 1619378822.403106
  perms:
  - .*
  - '@wheel'
  - '@runner'
  - '@jobs'
  start: 1619335622.403105
  token: 8314e44730aadff01eb0256cce536cd6dcc9908d
  user: saltapi
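The same login can be done programmatically. A minimal sketch in Python 3 (stdlib only) that posts the same fields as the curl command and extracts the token; the function names `login_body` and `fetch_token` are our own, and the unverified SSL context mirrors curl's `-k` for the self-signed certificate:

```python
import json
import ssl
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def login_body(username, password, eauth="pam"):
    """Form-encode the same fields the curl command sends to /login."""
    return urlencode({"username": username,
                      "password": password,
                      "eauth": eauth}).encode()

def fetch_token(base_url, username, password):
    """POST /login and pull the token out of the JSON response."""
    req = Request(base_url + "/login",
                  data=login_body(username, password),
                  headers={"Accept": "application/json"})
    ctx = ssl._create_unverified_context()  # like curl -k: skip cert verification
    with urlopen(req, context=ctx) as resp:
        return json.load(resp)["return"][0]["token"]

if __name__ == "__main__":
    print(fetch_token("https://172.25.5.1:8000", "saltapi", "westos"))
```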

**Basic usage**

```shell
[root@Server1 private]# curl -sSk https://172.25.5.1:8000 \
    -H 'Accept: application/x-yaml' \
    -H 'X-Auth-Token: 8314e44730aadff01eb0256cce536cd6dcc9908d' \
    -d client=local \
    -d tgt='*' \
    -d fun=test.ping
return:
- Server2: true
  Server3: true
```

Call the API from a Python script:

```shell
[root@Server1 private]# python saltapi.py 
([u'Server2', u'Server3'], [])
```

The `main()` in saltapi.py lists all keys, deletes the key for Server3, then lists again:

```python
def main():
    sapi = SaltAPI(url='https://172.25.5.1:8000', username='saltapi', password='westos')
    sapi.token_id()
    print sapi.list_all_key()
    sapi.delete_key('Server3')
    print sapi.list_all_key()
```

```shell
[root@Server1 private]# python saltapi.py 
([u'Server2', u'Server3'], [])
([u'Server2'], [])
```
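The saltapi.py script itself is not shown. A minimal sketch of a `SaltAPI` class that would support the calls in `main()` above (written in Python 3; the method names follow the transcript, the wheel-client functions `key.list_all` / `key.delete` are standard Salt wheel functions, and everything else is our own assumption):

```python
import json
import ssl
from urllib.parse import urlencode
from urllib.request import Request, urlopen

class SaltAPI(object):
    """Minimal salt-api wrapper matching the calls in main() above."""

    def __init__(self, url, username, password):
        self.url = url
        self.username = username
        self.password = password
        self.token = None
        self._ctx = ssl._create_unverified_context()  # self-signed cert, like curl -k

    def _post(self, path, fields):
        """POST form fields to salt-api and return the first 'return' entry."""
        req = Request(self.url + path,
                      data=urlencode(fields).encode(),
                      headers={"Accept": "application/json"})
        if self.token:
            req.add_header("X-Auth-Token", self.token)
        with urlopen(req, context=self._ctx) as resp:
            return json.load(resp)["return"][0]

    def token_id(self):
        """Log in via PAM and remember the token for later calls."""
        self.token = self._post("/login", {"username": self.username,
                                           "password": self.password,
                                           "eauth": "pam"})["token"]

    def list_all_key(self):
        """Return (accepted, unaccepted) minion key lists via the wheel client."""
        data = self._post("/", {"client": "wheel",
                                "fun": "key.list_all"})["data"]["return"]
        return data["minions"], data["minions_pre"]

    def delete_key(self, node):
        """Delete a minion's key via the wheel client."""
        return self._post("/", {"client": "wheel",
                                "fun": "key.delete",
                                "match": node})
```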