1. Playbook Introduction
1.1 What is a Playbook
A playbook is a text file written in YAML syntax. It is made up of two parts: plays and tasks.
play: defines which hosts or host groups to operate on
task: defines the concrete work to run against those hosts or host groups; a play may hold a single task or many (each task calls one module)
Summary: a playbook consists of one or more plays, and each play can contain multiple tasks. Think of it as combining several different modules to accomplish one job.
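The structure can be sketched as a minimal skeleton (the host group and task names here are placeholders):

```yaml
- hosts: webservers          # play: which hosts or host groups to act on
  tasks:                     # task list: what to do on those hosts
    - name: First task       # each task calls exactly one module
      ping:
    - name: Second task
      debug:
        msg: "one play, many tasks"
```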
1.2 Playbook vs. Ad-Hoc
- A playbook is a way of orchestrating ad-hoc commands.
- A playbook is saved to a file and can be re-run at any time, while an ad-hoc command is a one-off.
- Playbooks suit complex jobs; ad-hoc commands suit quick, simple ones.
- Playbooks control the order in which tasks execute.
1.3 Playbook Syntax
Playbooks are written in YAML, which keeps the structure clear and easy to read.

| Syntax | Description |
|---|---|
| Indentation | YAML expresses hierarchy with consistent indentation; each level is two spaces, and tab characters must not be used |
| Colon | Every colon must be followed by a space, except when the colon ends the line |
| Dash | A dash introduces a list item and must be followed by a space; dashes at the same indentation level belong to the same list |
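The three rules combine in a fragment like this:

```yaml
- hosts: webservers      # a space after every dash and colon
  tasks:                 # a colon ending the line needs nothing after it
    - name: task one     # two-space indentation per level, never tabs
    - name: task two     # dashes at the same level form one list
```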
Example: install and start nginx
[root@manager ~]# cat nginx.yml
- hosts: webservers
tasks:
- name: Installed nginx servers
yum:
name: nginx
state: present
- name: systemd start nginx server
systemd:
name: nginx
state: started
enabled: yes
[root@manager ~]# ansible-playbook --syntax-check nginx.yml # syntax check
playbook: nginx.yml
[root@manager ~]# ansible-playbook -C nginx.yml # dry run (check mode)
[root@manager ~]# ansible-playbook nginx.yml # real run
2. Playbook Projects in Practice
2.1 Deploy and start the NFS service with a playbook
Configuration:
[root@manager ~]# cat exports.j2
/ansible_data 172.16.1.0/24(rw,sync,all_squash,anonuid=6666,anongid=6666)
[root@manager ~]# cat nfs.yml
- hosts: webservers
tasks:
- name: Installed nfs servers
yum:
name: nfs-utils
state: present
- name: Configure nfs
copy:
src: ./exports.j2
dest: /etc/exports
owner: root
group: root
mode: 0644
backup: yes
notify: Restart nfs server
- name: Create group
group:
name: ansible
gid: 6666
system: no
state: present
- name: Create user
user:
name: ansible
uid: 6666
group: ansible
shell: /sbin/nologin
create_home: no
    - name: Create data directory
file:
path: /ansible_data
owner: ansible
group: ansible
mode: 0755
state: directory
recurse: yes
- name: start nfs
systemd:
name: nfs
state: started
enabled: yes
handlers:
- name: Restart nfs server
systemd:
name: nfs
state: restarted
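Note how notify and handlers connect in the playbook above: the string given to notify must match the handler's name exactly, and the handler runs once at the end of the play, and only if some notifying task actually reported a change. The pattern in isolation:

```yaml
- hosts: webservers
  tasks:
    - name: Configure nfs            # when this task reports "changed"...
      copy:
        src: ./exports.j2
        dest: /etc/exports
      notify: Restart nfs server     # ...this string selects the handler below
  handlers:
    - name: Restart nfs server       # name must match the notify string exactly
      systemd:
        name: nfs
        state: restarted
```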
2.2 Deploy the rsync server and client with a playbook
Server side:
1. Install
2. Configure
3. Create the required user and group
4. Create the virtual user's password file
5. Create the backup directory /backup
6. Start the service
[root@manager playbook_script]# cat rsync.yml
- hosts: webservers
tasks:
- name: Installed rsync server
yum:
name: rsync
state: present
- name: Create rsync group
group:
name: rsync
system: yes
state: present
- name: Create user rsync
user:
name: rsync
group: rsync
system: yes
- name: Copy passwd file
copy:
content: rsync_backup:123
dest: /etc/rsync.passwd
mode: 0600
owner: root
group: root
- name: Create backup directory
file:
path: /backup
owner: rsync
group: rsync
mode: 0755
state: directory
- name: Copy configure file
copy:
src: ./rsyncd.conf
dest: /etc/rsyncd.conf
mode: 0644
owner: root
group: root
backup: yes
notify: Restart rsyncd server
- name: start rsyncd server
systemd:
name: rsyncd
state: started
enabled: yes
handlers:
- name: Restart rsyncd server
systemd:
name: rsyncd
state: restarted
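The ./rsyncd.conf copied by the playbook is not shown above. A daemon config consistent with the rsync user, the /etc/rsync.passwd password file, and the /backup directory created by the playbook might look like this (the module name and tuning values are assumptions):

```
uid = rsync
gid = rsync
port = 873
fake super = yes
use chroot = no
max connections = 200
timeout = 600
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log

[backup]
path = /backup
read only = false
auth users = rsync_backup
secrets file = /etc/rsync.passwd
```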
Client side:
[root@manager playbook_script]# cat clint_rsync.yml
- hosts: localhost
tasks:
- name: Execute backup script
cron:
name: xxx
minute: "00"
hour: "03"
        job: "/bin/bash /scripts/clinet_rsync_backup.sh &>/dev/null"
3. Deploying a Web Project with Ansible
Deploy the phpMyAdmin project
Plan:
1. Prepare the project directory and the hosts inventory file
2. redis
3. nginx + php
4. Project code: phpMyAdmin
5. nginx load balancer on port 80
6. Replace it with haproxy
3.1.1 Prepare the project directory and hosts inventory file
[root@manager ~]# mkdir ansible_web_cluseter
[root@manager ansible_web_cluseter]# cp /etc/ansible/ansible.cfg ./
[root@manager ansible_web_cluseter]# cp /etc/ansible/hosts ./
[root@manager ansible_web_cluseter]# vim ansible.cfg
inventory = ./hosts
host_key_checking = False
[root@manager ansible_web_cluseter]# vim hosts
[webservers]
10.0.0.7
10.0.0.8
[dbservers]
172.16.1.41
[test]
172.16.1.99
[root@manager ansible_web_cluseter]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.1.5
[root@manager ansible_web_cluseter]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.1.6
[root@manager ansible_web_cluseter]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.1.7
[root@manager ansible_web_cluseter]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.1.8
[root@manager ansible_web_cluseter]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.1.41
[root@manager ansible_web_cluseter]# ansible all -m ping # if every host responds, the base environment is ready
3.1.2 Deploy redis with a playbook
1. Install
2. Configure
3. Start
[root@manager ansible_web_cluseter]# cat playbook_redis.yml
- hosts: dbservers
tasks:
- name: Installed redis server
yum:
name: redis
state: present
    - name: Configure redis bind address
lineinfile:
path: /etc/redis.conf
regexp: '^bind'
        line: 'bind 127.0.0.1 172.16.1.41'
notify: Restart redis server
- name: Start redis server
systemd:
name: redis
state: started
enabled: yes
handlers:
- name: Restart redis server
systemd:
name: redis
state: restarted
Test the connection remotely:
[root@web02 ~]# redis-cli -h 172.16.1.41
3.1.3 Deploy nginx with a playbook
1. Configure the nginx yum repo and install nginx
2. The /etc/nginx/nginx.conf file
3. The /etc/nginx/conf.d files
4. Prepare the run-as user
5. Start nginx
Directory structure:
[root@manager ansible_web_cluseter]# tree -L 1
.
├── ansible.cfg
├── hosts
├── playbook_nginx.yml
[root@manager ~]# cat /root/ansible_web_cluseter/playbook_nginx.yml
- hosts: webservers
tasks:
- name: Configure nginx yum repo
yum_repository:
name: playbook_nginx
description: nginx yum repo file
baseurl: http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck: yes
gpgkey: https://nginx.org/keys/nginx_signing.key
- name: Install nginx server
yum:
name: nginx
state: present
enablerepo: playbook_nginx
disablerepo: epel
- name: Create group www
group:
name: www
gid: 666
system: yes
- name: Create user www
user:
name: www
uid: 666
group: www
system: yes
    - name: Change nginx default configuration file
lineinfile:
path: /etc/nginx/nginx.conf
regexp: '^user'
line: 'user www;'
notify: Restart nginx server
- name: Start nginx server
systemd:
name: nginx
state: started
enabled: yes
handlers:
- name: Restart nginx server
systemd:
name: nginx
state: restarted
3.1.4 Deploy php with a playbook
1. Download php.zip to the /root/ansible_web_cluseter directory on the control node and unpack it
2. Unarchive php.zip to the remote directory
3. Install all the .rpm packages under php/ from local files; install the ones whose names start with php, otherwise it errors out
4. Create the www system user and group with uid/gid 666
5. Modify /etc/php-fpm.d/www.conf, mainly the run-as identity and the session storage location
6. Modify php.ini to configure the redis connection
7. Start the service
Directory tree:
[root@manager ansible_web_cluseter]# tree -L 1
.
├── ansible.cfg # ansible config file
├── hosts # inventory file
├── php # unpacked php directory
├── php.ini # php config file
├── php.zip # php package archive
├── playbook_php.yml # playbook
└── www.conf # php-fpm pool config
1. Download php.zip to the control node, unpack it, and format the package list to match the playbook's install syntax.
[root@manager php]# ls ./ | xargs -n1 | sed -r 's#(.*)# - ./php/\1#'
- ./php/libevent-2.0.21-4.el7.x86_64.rpm
- ./php/libmcrypt-2.5.8-13.el7.x86_64.rpm
- ./php/libmemcached-1.0.16-5.el7.x86_64.rpm
- ./php/libX11-1.6.5-2.el7.x86_64.rpm
- ./php/libX11-1.6.7-2.el7.x86_64.rpm
- ./php/libX11-common-1.6.5-2.el7.noarch.rpm
- ./php/libX11-common-1.6.7-2.el7.noarch.rpm
- ./php/libXau-1.0.8-2.1.el7.x86_64.rpm
- ./php/libxcb-1.13-1.el7.x86_64.rpm
- ./php/libXpm-3.5.12-1.el7.x86_64.rpm
- ./php/mod_php71w-7.1.32-1.w7.x86_64.rpm
- ./php/pcre-devel-8.32-17.el7.x86_64.rpm
- ./php/php71w-cli-7.1.32-1.w7.x86_64.rpm
- ./php/php71w-common-7.1.32-1.w7.x86_64.rpm
- ./php/php71w-devel-7.1.32-1.w7.x86_64.rpm
- ./php/php71w-embedded-7.1.32-1.w7.x86_64.rpm
- ./php/php71w-fpm-7.1.32-1.w7.x86_64.rpm
- ./php/php71w-gd-7.1.32-1.w7.x86_64.rpm
- ./php/php71w-mbstring-7.1.32-1.w7.x86_64.rpm
- ./php/php71w-mcrypt-7.1.32-1.w7.x86_64.rpm
- ./php/php71w-mysqlnd-7.1.32-1.w7.x86_64.rpm
- ./php/php71w-opcache-7.1.32-1.w7.x86_64.rpm
- ./php/php71w-pdo-7.1.32-1.w7.x86_64.rpm
- ./php/php71w-pear-1.10.4-1.w7.noarch.rpm
- ./php/php71w-pecl-igbinary-2.0.5-1.w7.x86_64.rpm
- ./php/php71w-pecl-memcached-3.0.4-1.w7.x86_64.rpm
- ./php/php71w-pecl-mongodb-1.5.3-1.w7.x86_64.rpm
- ./php/php71w-pecl-redis-3.1.6-1.w7.x86_64.rpm
- ./php/php71w-process-7.1.32-1.w7.x86_64.rpm
- ./php/php71w-xml-7.1.32-1.w7.x86_64.rpm
2. Write the .yml file
[root@manager ansible_web_cluseter]# cat /root/ansible_web_cluseter/playbook_php.yml
- hosts: webservers
tasks:
- name: Unarchive current php.zip to remote directory
unarchive:
src: ./php.zip
dest: /root
remote_src: no
- name: Install php server
yum:
name: "{{ packages }}"
state: present
vars:
packages:
- /root/php/php71w-cli-7.1.32-1.w7.x86_64.rpm
- /root/php/php71w-common-7.1.32-1.w7.x86_64.rpm
- /root/php/php71w-devel-7.1.32-1.w7.x86_64.rpm
- /root/php/php71w-embedded-7.1.32-1.w7.x86_64.rpm
- /root/php/php71w-fpm-7.1.32-1.w7.x86_64.rpm
- /root/php/php71w-gd-7.1.32-1.w7.x86_64.rpm
- /root/php/php71w-mbstring-7.1.32-1.w7.x86_64.rpm
- /root/php/php71w-mcrypt-7.1.32-1.w7.x86_64.rpm
- /root/php/php71w-mysqlnd-7.1.32-1.w7.x86_64.rpm
- /root/php/php71w-opcache-7.1.32-1.w7.x86_64.rpm
- /root/php/php71w-pdo-7.1.32-1.w7.x86_64.rpm
- /root/php/php71w-pear-1.10.4-1.w7.noarch.rpm
- /root/php/php71w-pecl-igbinary-2.0.5-1.w7.x86_64.rpm
- /root/php/php71w-pecl-memcached-3.0.4-1.w7.x86_64.rpm
- /root/php/php71w-pecl-mongodb-1.5.3-1.w7.x86_64.rpm
- /root/php/php71w-pecl-redis-3.1.6-1.w7.x86_64.rpm
- /root/php/php71w-process-7.1.32-1.w7.x86_64.rpm
- /root/php/php71w-xml-7.1.32-1.w7.x86_64.rpm
- name: create group www
group:
name: www
gid: 666
system: yes
state: present
- name: Create user
user:
name: www
uid: 666
group: www
system: yes
    - name: Configure php-fpm pool file www.conf
copy:
src: ./www.conf
dest: /etc/php-fpm.d/www.conf
owner: root
group: root
mode: 0644
backup: yes
notify: Restart php-fpm server
- name: configure php.ini
copy:
src: ./php.ini
dest: /etc/php.ini
owner: root
group: root
mode: 0644
backup: yes
notify: Restart php-fpm server
- name: Start php-fpm server
systemd:
name: php-fpm
state: started
enabled: yes
handlers:
- name: Restart php-fpm server
systemd:
name: php-fpm
state: restarted
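The local www.conf and php.ini pushed by the playbook are not shown. The lines they would change, following steps 5 and 6 of the plan (the redis address is taken from the dbservers inventory and is an assumption):

```ini
; /etc/php-fpm.d/www.conf: run the pool as the www user created above
user = www
group = www

; /etc/php.ini: keep PHP sessions in redis instead of local files
session.save_handler = redis
session.save_path = "tcp://172.16.1.41:6379"
```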
3.1.5 Deploy the phpMyAdmin code
1. Create the code directory /ansible_web_code on the managed hosts
2. Write the proxy site config, copy it to /etc/nginx/conf.d/ansible.bertwu.online.conf, and reload nginx
3. Unarchive the phpMyAdmin code into the directory created on the remote hosts
4. Create the phpmyadmin symlink
5. Configure the mysql connection file
Directory tree:
[root@manager ansible_web_cluseter]# tree -L 1
.
├── ansible.cfg
├── config.inc.php
├── hosts
├── phpMyAdmin-5.1.1-all-languages.zip
├── phpmyadmin.conf
└── playbook_code.yml
[root@manager ansible_web_cluseter]# cat /root/ansible_web_cluseter/playbook_code.yml
- hosts: webservers
tasks:
- name: Create remote directory
file:
path: /ansible_web_code
owner: www
group: www
mode: 0755
state: directory
recurse: yes
- name: Copy nginx virtual web site
copy:
src: ./phpmyadmin.conf
dest: /etc/nginx/conf.d/ansible.bertwu.online.conf
owner: root
group: root
mode: 0644
notify: Reload nginx server
- name: unarchive web_code to remote web_cluster
unarchive:
src: ./phpMyAdmin-5.1.1-all-languages.zip
dest: /ansible_web_code
remote_src: no
creates: /ansible_web_code/phpMyAdmin-5.1.1-all-languages/config.inc.php
owner: www
group: www
- name: Make soft link
file:
src: /ansible_web_code/phpMyAdmin-5.1.1-all-languages
dest: /ansible_web_code/phpmyadmin
state: link
owner: www
group: www
    - name: Configure phpmyadmin mysql connection file
copy:
src: ./config.inc.php
dest: /ansible_web_code/phpmyadmin/config.inc.php
owner: www
group: www
handlers:
- name: Reload nginx server
systemd:
name: nginx
state: reloaded
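The ./phpmyadmin.conf site file is not shown above. A sketch consistent with the /ansible_web_code/phpmyadmin path and a php-fpm backend (the fastcgi address is an assumption):

```
server {
    listen 80;
    server_name ansible.bertwu.online;
    root /ansible_web_code/phpmyadmin;
    index index.php index.html;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```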
3.1.6 Deploy the load balancer with a playbook
1. Configure the nginx yum repo
2. Install nginx
3. Push the proxy_params file
4. Push the site file
5. Start the service
Directory tree:
[root@manager ansible_web_cluseter]# tree -L 1
.
├── ansible.bertwu.online.conf
├── ansible.cfg
├── config.inc.php
├── hosts
├── playbook_nginx_lb_server.yml
├── proxy_params
[root@manager ansible_web_cluseter]# cat /root/ansible_web_cluseter/playbook_nginx_lb_server.yml
- hosts: lbservers
tasks:
- name: Configure nginx yum repo
yum_repository:
name: playbook_nginx
description: nginx yum repo file
baseurl: http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck: yes
gpgkey: https://nginx.org/keys/nginx_signing.key
- name: Installed nginx server
yum:
name: nginx
state: present
enablerepo: playbook_nginx
disablerepo: epel
    - name: Copy proxy_params to remote /etc/nginx
copy:
src: ./proxy_params
dest: /etc/nginx
    - name: Copy ansible.bertwu.online.conf to remote /etc/nginx/conf.d/
copy:
src: ./ansible.bertwu.online.conf
dest: /etc/nginx/conf.d/ansible.bertwu.online.conf
notify: Restart nginx server
- name: Start nginx server
systemd:
name: nginx
state: started
enabled: yes
handlers:
- name: Restart nginx server
systemd:
name: nginx
state: restarted
Config files:
[root@manager ansible_web_cluseter]# cat ansible.bertwu.online.conf
upstream session {
server 172.16.1.7:80;
server 172.16.1.8:80;
}
server {
listen 80;
server_name ansible.bertwu.online;
location / {
proxy_pass http://session;
include proxy_params;
}
}
[root@manager ansible_web_cluseter]# cat proxy_params
# ip
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# http version
proxy_http_version 1.1;
proxy_set_header Connection "";
# timeout
proxy_connect_timeout 120s;
proxy_read_timeout 120s;
proxy_send_timeout 120s;
# buffer
proxy_buffering on;
proxy_buffer_size 8k;
proxy_buffers 8 8k;
3.1.7 Upgrade HTTP to HTTPS
Steps:
1. Create the remote certificate directory
2. Unarchive the certificate into the remote directory
3. Push the proxy_params file
4. Push the nginx site config https.ansible.bertwu.online.conf
5. Restart the service
Directory tree:
[root@manager ansible_web_cluseter]# tree -L 1
.
├── 6281382_ansible.bertwu.online.key
├── 6281382_ansible.bertwu.online_nginx.zip
├── 6281382_ansible.bertwu.online.pem
├── ansible.cfg
├── hosts
├── https.ansible.bertwu.online.conf
├── playbook_nginx_lb_https_server.yml
├── proxy_params
[root@manager ansible_web_cluseter]# cat playbook_nginx_lb_https_server.yml
- hosts: lbservers
tasks:
- name: Configure nginx yum repo
yum_repository:
name: playbook_nginx
description: nginx yum repo file
baseurl: http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck: yes
gpgkey: https://nginx.org/keys/nginx_signing.key
- name: Installed nginx server
yum:
name: nginx
state: present
enablerepo: playbook_nginx
disablerepo: epel
    - name: Copy proxy_params to remote /etc/nginx
copy:
src: ./proxy_params
dest: /etc/nginx
- name: Create remote directory to ssl_key Certificate
file:
path: /ssl_key
state: directory
- name: Unarchive local certificate to remote /ssl_key
unarchive:
src: ./6281382_ansible.bertwu.online_nginx.zip
dest: /ssl_key
creates: /ssl_key/6281382_ansible.bertwu.online.key
    - name: Copy https.ansible.bertwu.online.conf to remote /etc/nginx/conf.d/
copy:
src: ./https.ansible.bertwu.online.conf
dest: /etc/nginx/conf.d/ansible.bertwu.online.conf
notify: Restart nginx server
- name: Start nginx server
systemd:
name: nginx
state: started
enabled: yes
handlers:
- name: Restart nginx server
systemd:
name: nginx
state: restarted
Site file:
[root@manager ansible_web_cluseter]# cat https.ansible.bertwu.online.conf
upstream session {
server 172.16.1.7:80;
server 172.16.1.8:80;
}
server {
listen 443 ssl;
server_name ansible.bertwu.online;
ssl_certificate /ssl_key/6281382_ansible.bertwu.online.pem;
ssl_certificate_key /ssl_key/6281382_ansible.bertwu.online.key;
location / {
proxy_pass http://session;
include proxy_params;
}
}
server {
listen 80;
server_name ansible.bertwu.online;
return 302 https://$server_name$request_uri;
}
Final check: the site now loads over HTTPS in the browser.
3.1.8 Replace the nginx layer-7 load balancer with haproxy
1. Install haproxy
2. Modify /etc/haproxy/haproxy.cfg and remove the default site
3. Modify the systemd unit file so that configs under conf.d are loaded automatically
4. Create the conf.d directory
5. Push the status.cfg file for monitoring
6. Push the virtual site config file
Directory structure:
[root@manager ansible_web_cluseter]# tree -L 1
.
├── haproxy22.rpm.tar.gz
├── haproxy.cfg
├── haproxy_lb.cfg
├── haproxy.service
├── hosts
├── playbook_haproxy.yml
├── status.cfg
Playbook:
[root@manager ansible_web_cluseter]# cat playbook_haproxy.yml
- hosts: test
tasks:
- name: Unarchive haproxy.tar.gz to remote directory
unarchive:
src: ./haproxy22.rpm.tar.gz
dest: /root
remote_src: no
creates: /root/haproxy
- name: Install haproxy server
yum:
name: "{{ packages }}"
vars:
packages:
- /root/haproxy/haproxy22-2.2.9-3.el7.ius.x86_64.rpm
- /root/haproxy/lua53u-5.3.4-1.ius.el7.x86_64.rpm
- /root/haproxy/lua53u-devel-5.3.4-1.ius.el7.x86_64.rpm
- /root/haproxy/lua53u-libs-5.3.4-1.ius.el7.x86_64.rpm
- /root/haproxy/lua53u-static-5.3.4-1.ius.el7.x86_64.rpm
- name: Copy systemd start file to remote
copy:
src: ./haproxy.service
dest: /usr/lib/systemd/system/haproxy.service
- name: Create conf.d directory
file:
path: /etc/haproxy/conf.d
state: directory
- name: Copy primary confgure file to remote /etc/haproxy
copy:
src: ./haproxy.cfg
dest: /etc/haproxy
notify: Restart haproxy server
- name: Copy status webset to remote /etc/haproxy/conf.d
copy:
src: ./status.cfg
dest: /etc/haproxy/conf.d/
- name: Copy virtual host file to remote /etc/haproxy/conf.d
copy:
src: ./haproxy_lb.cfg
dest: /etc/haproxy/conf.d/
notify: Restart haproxy server
- name: Start haproxy server
systemd:
name: haproxy
state: started
handlers:
- name: Restart haproxy server
systemd:
name: haproxy
state: restarted
Key config files:
[root@manager ansible_web_cluseter]# cat status.cfg
listen haproxy-stats
mode http
bind *:7777
stats enable
stats refresh 1s
stats hide-version
stats uri /haproxy?stats
stats realm "HAProxy stats"
stats auth admin:123456
stats admin if TRUE
[root@manager ansible_web_cluseter]# cat haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=network-online.target
Wants=network-online.target
[Service]
EnvironmentFile=-/etc/sysconfig/haproxy
Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid"
Environment="CONFIG_D=/etc/haproxy/conf.d"
ExecStartPre=/usr/sbin/haproxy -f $CONFIG -f $CONFIG_D -c -q $OPTIONS
ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -f $CONFIG_D -p $PIDFILE $OPTIONS
ExecReload=/usr/sbin/haproxy -f $CONFIG -f $CONFIG_D -c -q $OPTIONS
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
SuccessExitStatus=143
Type=notify
[Install]
WantedBy=multi-user.target
[root@manager ansible_web_cluseter]# cat haproxy_lb.cfg
frontend web
bind *:7799
mode http
use_backend webservers
backend webservers
balance roundrobin
server 172.16.1.7 172.16.1.7:80 check inter 3000 rise 2 fall 3 maxconn 2000 maxqueue 200 weight 2
server 172.16.1.8 172.16.1.8:80 check inter 3000 rise 2 fall 3 maxconn 2000 maxqueue 200 weight 2
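The main ./haproxy.cfg pushed by the playbook is not listed above. With the default site removed, only the global and defaults sections remain; a minimal sketch (paths and tuning values are assumptions):

```
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon

defaults
    mode http
    log global
    option httplog
    option dontlognull
    retries 3
    timeout connect 10s
    timeout client 1m
    timeout server 1m
```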