I. Prepare the environment
1. Prepare the machines
This walkthrough uses four VMs: 192.168.190.155, 192.168.190.156, 192.168.190.157, and 192.168.190.158. All commands in this article are run on the Ansible control node.
| Machine IP | Role | CPU & RAM | System disk | Storage |
| --- | --- | --- | --- | --- |
| 192.168.190.155 | Ansible control node | 2C4G | 50GB | 50GB |
| 192.168.190.158 | Master/Worker (managed node) | 2C4G | 50GB | 50GB |
| 192.168.190.157 | Master/Worker (managed node) | 2C4G | 50GB | 50GB |
| 192.168.190.156 | Worker (managed node) | 2C4G | 50GB | 50GB |
All of the main steps below are performed on the control node, 192.168.190.155.
II. Detailed steps
1. Configure Ansible's hosts file and define variables:
vi /etc/ansible/hosts
Content:
```ini
[kafka_servers]
zookeeper1 ansible_host=192.168.190.158
zookeeper2 ansible_host=192.168.190.156
zookeeper3 ansible_host=192.168.190.157

[kafka_brokers]
zookeeper1 kafka_broker_id=1
zookeeper2 kafka_broker_id=2
zookeeper3 kafka_broker_id=3
```
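As a quick sanity check, the per-host variables in this inventory can be parsed and verified with a few lines of stdlib Python (an illustrative helper, not something Ansible itself requires):

```python
# Minimal parser for the simple INI inventory above: each host line is
# "name key=value key=value ...", and a host may appear in several groups.
inventory = """\
[kafka_servers]
zookeeper1 ansible_host=192.168.190.158
zookeeper2 ansible_host=192.168.190.156
zookeeper3 ansible_host=192.168.190.157

[kafka_brokers]
zookeeper1 kafka_broker_id=1
zookeeper2 kafka_broker_id=2
zookeeper3 kafka_broker_id=3
"""

hosts = {}  # host name -> merged variables from all groups
for line in inventory.splitlines():
    line = line.strip()
    if not line or line.startswith(("[", "#", ";")):
        continue  # skip blanks, group headers, and comments
    name, *pairs = line.split()
    hosts.setdefault(name, {}).update(dict(p.split("=", 1) for p in pairs))

# Every broker id must be unique across the cluster.
ids = [v["kafka_broker_id"] for v in hosts.values()]
assert len(ids) == len(set(ids)), "duplicate kafka_broker_id"
print(hosts["zookeeper1"])
```

Merging both groups per host shows at a glance which `ansible_host` ends up with which `kafka_broker_id`.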
2. On the control node, create the directories needed for the Kafka install
```shell
mkdir -p /etc/ansible/roles/{kafka,java}/{defaults,tasks,templates}
```
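For reference, the brace expansion above creates two role directories with three subdirectories each; this Python sketch mirrors it under a temporary root so it can run anywhere (paths are illustrative):

```python
import itertools
import os
import tempfile

# Mirror of: mkdir -p .../roles/{kafka,java}/{defaults,tasks,templates}
root = os.path.join(tempfile.mkdtemp(), "etc/ansible/roles")
for role, sub in itertools.product(("kafka", "java"),
                                   ("defaults", "tasks", "templates")):
    os.makedirs(os.path.join(root, role, sub), exist_ok=True)

# Collect everything that was created, relative to the roles root.
created = sorted(
    os.path.relpath(os.path.join(cur, d), root)
    for cur, dirs, _files in os.walk(root)
    for d in dirs
)
print(created)
```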
3. Write the Java playbook
Directory layout:
① Configure the variables file under defaults: main.yml
vi /etc/ansible/roles/java/defaults/main.yml
Content:
```yaml
---
JAVA_VERSION: 1.8.2_192
JAVA_HOME: /data/java
JAVA_TAR: jdk-8u192-linux-x64.tar.gz
```
② Configure the playbook under tasks: java.yml
vi /etc/ansible/roles/java/tasks/java.yml
Content (note: the copied file is the JDK tarball, `{{ JAVA_TAR }}`):
```yaml
---
- name: Deploy Java-JDK-1.8-192
  hosts: all
  become: yes
  roles:
    - java
  tasks:
    - name: Copy JDK
      copy:
        src: "{{ JAVA_HOME }}/{{ JAVA_TAR }}"
        dest: "/tmp/{{ JAVA_TAR }}"
    - name: Create Java install directory
      ansible.builtin.file:
        path: /data/java
        state: directory
    - name: Extract JDK
      unarchive:
        src: "/tmp/{{ JAVA_TAR }}"
        dest: /data/java/
        remote_src: yes
    - name: Add JAVA_HOME to /etc/profile.d/
      blockinfile:
        path: /etc/profile.d/java.sh
        create: yes
        mode: '0644'
        block: |
          export JAVA_HOME={{ JAVA_HOME }}/jdk-{{ JAVA_VERSION }}
          export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
        marker: "# {mark} ANSIBLE MANAGED BLOCK - JAVA_HOME"
```
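Once the play has run, `/etc/profile.d/java.sh` on each managed node should contain roughly the following block (values substituted from `defaults/main.yml`; the surrounding comment lines come from blockinfile's `marker` option):

```shell
# BEGIN ANSIBLE MANAGED BLOCK - JAVA_HOME
export JAVA_HOME=/data/java/jdk-1.8.2_192
export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
# END ANSIBLE MANAGED BLOCK - JAVA_HOME
```

Because the markers are stable, re-running the play updates the block in place instead of appending a duplicate.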
4. Write the Kafka playbook
Directory layout:
① Configure the variables file under defaults: main.yml
vi /etc/ansible/roles/kafka/defaults/main.yml
Content:
```yaml
---
KAFKA_VERSION: "2.13-2.6.0"
KAFKA_TAR: "kafka_2.13-2.6.0.tgz"
KAFKA_HOME: "/data/kafka/kafka_2.13-2.6.0"
zookeeper: "flink-master:2181,flink-worker1:2181,flink-worker2:2181"
KAFKA_INSTALL_DIR: "/data/kafka"
KAFKA_INSTALL_DIR_USR: "/usr/local/kafka"
```
② Configure the playbook under tasks: kafka.yml
vi /etc/ansible/roles/kafka/tasks/kafka.yml
Content (the service is started last, after `server.properties` has been rendered; the symlink makes `/usr/local/kafka` point at the real install directory):
```yaml
---
- name: Deploy kafka_2.13-2.6.0
  hosts: all
  become: yes
  roles:
    - kafka
  vars:
    kafka_config_dir: "{{ KAFKA_HOME }}/config"
  tasks:
    - name: Copy Kafka
      copy:
        src: "{{ KAFKA_INSTALL_DIR }}/{{ KAFKA_TAR }}"
        dest: "/tmp/{{ KAFKA_TAR }}"
    - name: Create Kafka logs directory
      ansible.builtin.file:
        path: "{{ KAFKA_HOME }}/kafka-logs"
        state: directory
    - name: Extract Kafka
      unarchive:
        src: "/tmp/{{ KAFKA_TAR }}"
        dest: "{{ KAFKA_INSTALL_DIR }}"
        remote_src: yes
    - name: Link /usr/local/kafka to the install directory
      ansible.builtin.file:
        src: "{{ KAFKA_HOME }}"
        dest: "{{ KAFKA_INSTALL_DIR_USR }}"
        state: link
    - name: Ensure Kafka configuration directory exists
      ansible.builtin.file:
        path: "{{ kafka_config_dir }}"
        state: directory
    - name: Render Kafka configuration file
      ansible.builtin.template:
        src: /etc/ansible/roles/kafka/templates/kafka-jinja.j2
        dest: "{{ kafka_config_dir }}/server.properties"
    - name: Start Kafka service
      ansible.builtin.systemd:
        name: kafka
        state: started
        enabled: yes
```
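Note that none of these tasks installs a `kafka.service` unit, so the systemd task will fail until one exists on the managed nodes. A minimal unit, with paths assumed from the variables in `defaults/main.yml` above (e.g. saved to `/etc/systemd/system/kafka.service`), might look like:

```ini
[Unit]
Description=Apache Kafka
After=network.target

[Service]
Type=simple
Environment=JAVA_HOME=/data/java/jdk-1.8.2_192
ExecStart=/data/kafka/kafka_2.13-2.6.0/bin/kafka-server-start.sh /data/kafka/kafka_2.13-2.6.0/config/server.properties
ExecStop=/data/kafka/kafka_2.13-2.6.0/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

This unit could be distributed with a `copy` or `template` task (followed by `daemon_reload: yes` on the systemd task) before the service is started.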
③ Write the Jinja2 template the playbook uses to generate the Kafka configuration file: kafka-jinja.j2
vi /etc/ansible/roles/kafka/templates/kafka-jinja.j2
Content:
```properties
broker.id={{ kafka_broker_id }}
listeners=PLAINTEXT://{{ inventory_hostname }}:9092
log.dirs={{ KAFKA_HOME }}/data
zookeeper.connect={{ zookeeper }}
```
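The substitution Ansible's template module performs here can be sketched in stdlib-only Python (real Jinja2 supports far more; this handles only the simple `{{ var }}` placeholders used in this template):

```python
import re

template = """\
broker.id={{ kafka_broker_id }}
listeners=PLAINTEXT://{{ inventory_hostname }}:9092
log.dirs={{ KAFKA_HOME }}/data
zookeeper.connect={{ zookeeper }}
"""

# Values as Ansible would supply them for host zookeeper1, taken from the
# inventory and defaults/main.yml above.
host_vars = {
    "kafka_broker_id": "1",
    "inventory_hostname": "zookeeper1",
    "KAFKA_HOME": "/data/kafka/kafka_2.13-2.6.0",
    "zookeeper": "flink-master:2181,flink-worker1:2181,flink-worker2:2181",
}

# Replace each {{ name }} placeholder with the matching host variable.
rendered = re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: host_vars[m.group(1)], template)
print(rendered)
```

Because `kafka_broker_id` and `inventory_hostname` differ per host, each broker gets a unique `broker.id` and listener address from the same template.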
5. Run the playbooks with ansible-playbook
Run the Java playbook first, since Kafka needs the JDK:
```shell
ansible-playbook /etc/ansible/roles/java/tasks/java.yml
ansible-playbook /etc/ansible/roles/kafka/tasks/kafka.yml
```