playbook
Configure vim to indent two spaces when the Tab key is pressed in YAML files.
Write the inventory:
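One way to do this, as a sketch for ~/.vimrc:

```vim
" two-space indentation with expanded tabs for YAML files
autocmd FileType yaml setlocal autoindent expandtab tabstop=2 shiftwidth=2 softtabstop=2
```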
[root@server1 ansible]# cat inventory
[lb]
server1 STATE=MASTER VRID=25 PRIORITY=100
server4 STATE=BACKUP VRID=25 PRIORITY=50
[test]
server2 http_host=172.25.25.2
[prod]
server3 http_host=172.25.25.3
[webserver:children]
prod
test
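The STATE, VRID and PRIORITY host variables in the [lb] group are consumed by the keepalived template later on; a quick way to verify them per host is the debug module (command sketch, assuming this inventory is in place):

```shell
# print the STATE host variable for each host in the lb group
ansible lb -m debug -a "var=STATE"
```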
vim playbook.yml
---
- hosts: webserver                       # run against hosts in the webserver group
  vars:
    http_port: 80                        # variable used by the template
  tasks:
    - name: install httpd                # task name: install httpd
      yum:
        name: httpd                      # package to install
        state: present                   # ensure it is installed
    - name: copy index.html
      copy:
        content: "{{ ansible_facts['hostname'] }}"
        dest: /var/www/html/index.html   # write content to this file
    - name: configure httpd
      template:
        src: templates/httpd.conf.j2
        dest: /etc/httpd/conf/httpd.conf # render httpd.conf.j2 into this config file
        owner: root
        group: root
        mode: '0644'                     # quote the mode so it is read as octal
      notify: restart httpd              # trigger the handler
    - name: start httpd and firewalld
      service:
        name: "{{ item }}"               # service name supplied by the loop below
        state: started                   # start the service
      loop:                              # iterate over these items
        - httpd
        - firewalld
    - name: configure firewalld
      firewalld:
        service: http                    # allow the http service through the firewall
        permanent: yes                   # persist across reboots
        immediate: yes                   # apply immediately
        state: enabled                   # enable the rule
  handlers:                              # handlers triggered by notify
    - name: restart httpd
      service:
        name: httpd
        state: restarted

- hosts: localhost                       # run on the control node
  tasks:
    - name: install haproxy
      yum:
        name: haproxy
        state: present
    - name: configure haproxy
      template:
        src: templates/haproxy.cfg.j2
        dest: /etc/haproxy/haproxy.cfg
      notify: restart haproxy
    - name: start haproxy
      service:
        name: haproxy
        state: started
  handlers:
    - name: restart haproxy
      service:
        name: haproxy
        state: restarted

- hosts: localhost
  tasks:
    - name: test httpd
      uri:
        url: http://172.25.25.1
        status_code: 200
cat ansible/templates/httpd.conf.j2
Change the following line:
Listen {{ http_host }}:{{ http_port }}
cat ansible/templates/haproxy.cfg.j2
Change the following lines:
stats uri /status
backend app
balance roundrobin
{% for host in groups['webserver'] %} # add every host in the webserver group as a backend real server for load balancing
server {{ hostvars[host]['ansible_facts']['hostname'] }} {{ hostvars[host]['ansible_facts']['eth0']['ipv4']['address'] }}:80 check
{% endfor %}
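To see what this loop expands to, here is a plain-Python sketch of the rendering; the hostvars data below is illustrative, mirroring this inventory, not real Ansible output:

```python
# Illustrative stand-in for Ansible's hostvars facts (not real Ansible data).
hostvars = {
    "server2": {"hostname": "server2", "ip": "172.25.25.2"},
    "server3": {"hostname": "server3", "ip": "172.25.25.3"},
}
webserver_group = ["server2", "server3"]

# Mimic the {% for host in groups['webserver'] %} loop in haproxy.cfg.j2
lines = [
    f"server {hostvars[h]['hostname']} {hostvars[h]['ip']}:80 check"
    for h in webserver_group
]
print("\n".join(lines))
```

Each host in the webserver group becomes one `server` line in the rendered backend.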
[devops@server1 ansible]$ ls
ansible.cfg files inventory playbook.retry playbook.yml
[devops@server1 ansible]$ mkdir templates
[devops@server1 ansible]$ cp files/httpd.conf templates/httpd.conf.j2
[devops@server1 ansible]$ vim templates/httpd.conf.j2
[devops@server1 ansible]$ vim inventory
[devops@server1 ansible]$ vim playbook.yml
[devops@server1 ansible]$ vim file.yml
[devops@server1 ansible]$ ansible-playbook playbook.yml --syntax-check # syntax check
[devops@server1 ansible]$ ansible-playbook playbook.yml # run the deployment
- hosts: all
  tasks:
    - name: create file
      template:
        src: templates/file.j2
        dest: /tmp/file
[devops@server1 ansible]$ ansible all -m setup
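setup dumps every gathered fact; the output can be narrowed with the module's filter argument (command sketch):

```shell
# show only the eth0 facts referenced by the templates above
ansible all -m setup -a 'filter=ansible_eth0'
```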
cat templates/file.j2
Hostname: {{ ansible_facts['hostname'] }}
Host IP: {{ ansible_facts['eth0']['ipv4']['address'] }}
Host DNS: {{ ansible_facts['dns']['nameservers'][-1] }}
Architecture: {{ ansible_facts['machine'] }}
Kernel: {{ ansible_facts['kernel'] }}
Free memory (MB): {{ ansible_facts['memfree_mb'] }}
[root@server2 etc]# vim /tmp/file
[root@server1 ansible]# cat apache.yml
---
- hosts: all
  tasks:
    - import_role:
        name: apache
      when: ansible_hostname in groups['webserver']
    - import_role:
        name: haproxy
      when: ansible_hostname in groups['lb']
    - import_role:
        name: keepalived
      when: ansible_hostname in groups['lb']
ansible roles
Ansible roles organize playbooks in a layered, structured way.
A role places variables, files, tasks, modules and handlers in separate directories so they can be included conveniently.
Roles are typically used to build services on a per-host basis, and they see heavy use in complex enterprise scenarios.
Tasks, variables, handlers, templates, files, etc. are organized in a fixed directory hierarchy; much like function calls, each piece of functionality is split into fragments that can be invoked separately.
roles directory structure
role_name: the name of the role
files: files referenced by modules such as copy or script
tasks: task definitions; must contain main.yml, other files are pulled in via include
handlers: handler definitions; must contain main.yml, other files are pulled in via include
vars: variable definitions; must contain main.yml, other files are pulled in via include
templates: template files used by the template module
meta: special settings and dependencies for the role; must contain main.yml
defaults: default variables; must contain main.yml
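For example, the skeleton created by `ansible-galaxy init apache` (used below) looks like:

```text
apache/
├── README.md
├── defaults/
│   └── main.yml
├── files/
├── handlers/
│   └── main.yml
├── meta/
│   └── main.yml
├── tasks/
│   └── main.yml
├── templates/
├── tests/
└── vars/
    └── main.yml
```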
The ansible-galaxy command-line tool
Ansible Galaxy is a free website for sharing and downloading Ansible roles, and a good resource for learning how to define roles.
By default, the ansible-galaxy command talks to the API of https://galaxy.ansible.com, where you can search for and download all kinds of community-developed Ansible roles.
ansible-galaxy has been included since Ansible 1.4.2.
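Typical usage against the Galaxy site (the role name here is only an example):

```shell
# search community roles by keyword
ansible-galaxy search nginx
# install a role into the configured roles_path
ansible-galaxy install geerlingguy.nginx
# list installed roles
ansible-galaxy list
```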
[root@server1 ansible]# cat ansible.cfg
[defaults]
inventory = ./inventory
roles_path = ./roles
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
ansible-galaxy init apache
ansible-galaxy init haproxy
ansible-galaxy init keepalived
ansible-galaxy list
rm -fr README.md tests
[root@server1 apache]# cat handlers/main.yml
---
- name: restart httpd
  service:
    name: httpd
    state: restarted
[root@server1 apache]# cat tasks/main.yml
---
- name: install httpd
  yum:
    name: httpd
    state: present
- name: copy index.html
  copy:
    content: "{{ ansible_facts['hostname'] }}"
    dest: /var/www/html/index.html
- name: configure httpd
  template:
    src: httpd.conf.j2
    dest: /etc/httpd/conf/httpd.conf
    owner: root
    group: root
    mode: '0644'
  notify: restart httpd
- name: start httpd and firewalld
  service:
    name: "{{ item }}"
    state: started
  loop:
    - httpd
    - firewalld
- name: configure firewalld
  firewalld:
    service: http
    permanent: yes
    immediate: yes
    state: enabled
[root@server1 apache]# cat vars/main.yml
---
http_port: 80
http_host: "{{ ansible_facts['eth0']['ipv4']['address'] }}"
[root@server1 apache]# vim templates/httpd.conf.j2
Listen {{ http_host }}:{{ http_port }}
[root@server1 haproxy]# ls
defaults files handlers meta tasks templates vars
[root@server1 haproxy]# cat handlers/main.yml
---
- name: restart haproxy
  service:
    name: haproxy
    state: restarted
[root@server1 haproxy]# cat tasks/main.yml
---
- name: install haproxy
  yum:
    name: haproxy
    state: present
- name: configure haproxy
  template:
    src: haproxy.cfg.j2
    dest: /etc/haproxy/haproxy.cfg
  notify: restart haproxy
- name: start haproxy
  service:
    name: haproxy
    state: started
[root@server1 haproxy]# cat templates/haproxy.cfg.j2
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
    stats uri               /status
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main *:80
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js
    default_backend      app
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance     roundrobin
{% for host in groups['webserver'] %}
server {{ hostvars[host]['ansible_facts']['hostname'] }} {{ hostvars[host]['ansible_facts']['eth0']['ipv4']['address'] }}:80 check
{% endfor %}
[root@server1 haproxy]# cat vars/main.yml
---
# vars file for haproxy
[root@server1 keepalived]# ls
defaults files handlers meta tasks templates vars
[root@server1 keepalived]# cat handlers/main.yml
---
- name: restart keepalived
  service:
    name: keepalived
    state: restarted
[root@server1 keepalived]# cat vars/main.yml
---
# vars file for keepalived
[root@server1 keepalived]# cat tasks/main.yml
---
- name: install keepalived
  yum:
    name: keepalived
    state: present
- name: configure keepalived
  template:
    src: keepalived.conf.j2
    dest: /etc/keepalived/keepalived.conf
  notify: restart keepalived
- name: start keepalived
  service:
    name: keepalived
    state: started
[root@server1 keepalived]# cat templates/keepalived.conf.j2
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state {{ STATE }}
    interface eth0
    virtual_router_id {{ VRID }}
    priority {{ PRIORITY }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.25.100
    }
}