Automated Flink Deployment with Ansible (Draft)

1. Preparing the Environment

1. Prepare the machines.

This walkthrough uses four virtual machines: 192.168.190.155, 192.168.190.156, 192.168.190.157, and 192.168.190.158. All operations in this article are performed on the Ansible control node.

| Machine IP      | Role                             | CPU & RAM | System disk | Storage |
|-----------------|----------------------------------|-----------|-------------|---------|
| 192.168.190.155 | Ansible control node             | 2C4G      | 50GB        | 50GB    |
| 192.168.190.158 | Master/Worker (managed node)     | 2C4G      | 50GB        | 50GB    |
| 192.168.190.157 | Master/Worker (managed node)     | 2C4G      | 50GB        | 50GB    |
| 192.168.190.156 | Worker (managed node)            | 2C4G      | 50GB        | 50GB    |

2. Log in to the control node and install and configure Ansible.

3. Configure the Ansible inventory and define variables:

vi /etc/ansible/hosts

Contents:

[flink_masters]
flink-master ansible_host=192.168.190.158 flink_version=1.15.0

[flink_workers]
flink-worker1 ansible_host=192.168.190.157
flink-worker2 ansible_host=192.168.190.156

[flink_masters:vars]
flink_master_hostname=flink-master

[flink_workers:vars]
flink_worker_pool_size=4
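With the inventory in place, it is worth confirming that the control node can actually reach every host before running any deployment. A minimal sketch, assuming SSH access is already set up (the `ping.yml` playbook below is illustrative, not part of the original setup):

```yaml
# ping.yml — optional connectivity check before running the deployment playbooks.
# The built-in ping module makes no changes on the managed nodes.
- name: Verify connectivity to all Flink hosts
  hosts: all
  gather_facts: no
  tasks:
    - name: Ping managed nodes
      ansible.builtin.ping:
```

Run it with `ansible-playbook ping.yml`; every host should report `pong` before you proceed.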

4. Prepare the environment on the managed nodes:

vi /etc/ansible/change.yml

Contents:

---
- name: Change
  hosts: all
  become: yes
  tasks:
    - name: Close firewalld
      service: name=firewalld state=stopped
    - name: Mkdir
      shell: |
        mkdir -p /data && chmod 777 /data
        mkdir -p /data/{flink,java,hadoop,prometheus,grafana}
    - name: Ensure host mappings are set in /etc/hosts
      blockinfile:
        path: /etc/hosts
        block: |
          192.168.190.158 flink-master
          192.168.190.157 flink-worker1
          192.168.190.156 flink-worker2
        marker: "# {mark} ANSIBLE MANAGED BLOCK FOR hosts ENVIRONMENT VARIABLES"
        create: yes
        state: present
        insertafter: EOF
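As an aside, the shell-based directory creation above can also be expressed with the `file` module, which is cleanly idempotent: repeated runs report "ok" instead of failing on existing directories. A hedged sketch of the alternative task:

```yaml
# Alternative to the shell-based mkdir task: create each directory with the
# file module, which creates parent dirs and is safe to re-run.
- name: Create data directories
  file:
    path: "/data/{{ item }}"
    state: directory
    mode: '0777'
  loop: [flink, java, hadoop, prometheus, grafana]
```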

5. Create the JDK deployment playbook:

vi /etc/ansible/jdk.yml

Contents:

---
- name: Deploy Java-JDK-11.0.2
  hosts: all
  become: yes

  tasks:
    - name: Copy JDK
      copy:
        src: /data/java/jdk-11.0.2_linux-x64_bin.tar.gz
        dest: /tmp/jdk-11.0.2_linux-x64_bin.tar.gz

    - name: Extract JDK
      unarchive:
        src: /tmp/jdk-11.0.2_linux-x64_bin.tar.gz
        dest: /data/java/
        remote_src: yes

    - name: Add JAVA_HOME to /etc/profile.d/
      blockinfile:
        path: /etc/profile.d/java.sh
        create: yes
        mode: '0644'
        block: |
          export JAVA_HOME=/data/java/jdk-11.0.2
          export PATH=$JAVA_HOME/bin:$PATH
        marker: "# {mark} ANSIBLE MANAGED BLOCK - JAVA_HOME"
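After this play runs, the JDK install can be verified remotely from the control node. A sketch of two follow-up tasks, assuming the archive unpacks to `/data/java/jdk-11.0.2` (the tasks below are illustrative, not part of the original playbook):

```yaml
# Optional follow-up tasks: confirm the unpacked JDK runs on every node.
- name: Verify Java version
  command: /data/java/jdk-11.0.2/bin/java -version
  register: java_version
  changed_when: false   # a read-only check, so never report "changed"

- name: Show Java version
  debug:
    var: java_version.stderr   # `java -version` writes to stderr
```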

6. Create the Flink configuration template:

vi /etc/ansible/flink-conf.yaml.j2

Contents:

# flink-conf.yaml.j2
jobmanager.rpc.address: localhost
jobmanager.rpc.port: 6123
jobmanager.memory.process.size: 2048m
taskmanager.memory.process.size: 4096m
taskmanager.numberOfTaskSlots: 30
parallelism.default: 10
yarn.containers.vcores: 4
yarn.application-attempts: 10
metrics.scope.jm: <host>.jobmanager
metrics.scope.jm.job: <host>.jobmanager.<job_name>
metrics.scope.tm: <host>.taskmanager.<tm_id>
metrics.scope.tm.job: <host>.taskmanager.<tm_id>.<job_name>
metrics.scope.task: <host>.taskmanager.<tm_id>.<job_name>.<task_name>.<subtask_index>
metrics.scope.operator: <host>.taskmanager.<tm_id>.<job_name>.<operator_name>.<subtask_index>
state.backend: filesystem
state.checkpoints.dir: hdfs://{{ inventory_hostname }}:9000/flink-1.15.0/flink-checkpoints
state.savepoints.dir: hdfs://{{ inventory_hostname }}:9000/flink-1.15.0/flink-savepoints
state.backend.incremental: false
jobmanager.execution.failover-strategy: region
rest.port: 8081
rest.bind-port: 8085-8087
web.tmpdir: /data/flink/flink-1.15.0/flink-web
web.upload.dir: /data/flink/flink-1.15.0/jars
taskmanager.memory.managed.fraction: 0.1
taskmanager.memory.network.max: 200mb
taskmanager.memory.jvm-metaspace.size: 256mb
jobmanager.archive.fs.dir: hdfs://{{ inventory_hostname }}:9000/flink-1.15.0/completed-jobs
historyserver.archive.fs.dir: hdfs://{{ inventory_hostname }}:9000/flink-1.15.0/completed-jobs
historyserver.archive.fs.refresh-interval: 10000
restart-strategy.failure-rate.max-failures-per-interval: 3
restart-strategy.failure-rate.failure-rate-interval: 5 min
restart-strategy.failure-rate.delay: 10 s
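Because the template is rendered per host, values can also be derived from the inventory rather than hard-coded. For example, `jobmanager.rpc.address` could resolve to the first master's IP instead of `localhost`. A hedged Jinja2 sketch, assuming the `flink_masters` group and `ansible_host` variables defined earlier:

```yaml
# Illustrative Jinja2 fragment for flink-conf.yaml.j2: resolve the JobManager
# address from the flink_masters group in /etc/ansible/hosts.
jobmanager.rpc.address: {{ hostvars[groups['flink_masters'][0]]['ansible_host'] }}
```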

7. Create the Flink deployment playbook:

vi /etc/ansible/flink.yml

Contents:

---
- name: Deploy Flink-1.15.0
  hosts: all
  become: yes

  tasks:
    - name: Copy Flink
      copy:
        src: /data/flink/flink-1.15.0-bin-scala_2.12.tgz
        dest: /tmp/flink-1.15.0-bin-scala_2.12.tgz

    - name: Extract Flink
      unarchive:
        src: /tmp/flink-1.15.0-bin-scala_2.12.tgz
        dest: /data/flink/
        remote_src: yes

    - name: Ensure Flink configuration directory exists
      file:
        path: /data/flink/flink-1.15.0/conf
        state: directory

    - name: Template Flink configuration file
      ansible.builtin.template:
        src: /etc/ansible/flink-conf.yaml.j2
        dest: /data/flink/flink-1.15.0/conf/flink-conf.yaml

    - name: Ensure masters are set in /data/flink/flink-1.15.0/conf/masters
      blockinfile:
        path: /data/flink/flink-1.15.0/conf/masters
        block: |
          192.168.190.158:8081
          192.168.190.157:8081
        marker: "# {mark} ANSIBLE MANAGED BLOCK FOR masters ENVIRONMENT VARIABLES"
        create: yes
        state: present
        insertafter: EOF

    - name: Add FLINK_HOME to /etc/profile.d/
      blockinfile:
        path: /etc/profile.d/flink.sh
        create: yes
        mode: '0644'
        block: |
          export FLINK_HOME=/data/flink/flink-1.15.0
          export PATH=$PATH:$FLINK_HOME/bin
        marker: "# {mark} ANSIBLE MANAGED BLOCK - FLINK_HOME"
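Once this play completes on all nodes, the standalone cluster can be started from a master. A hedged sketch of a follow-up play (it assumes passwordless SSH between the cluster nodes, which `start-cluster.sh` needs to reach the workers):

```yaml
# Illustrative follow-up play: start the standalone cluster from a master node.
- name: Start Flink cluster
  hosts: flink_masters
  become: yes
  tasks:
    - name: Run start-cluster.sh
      command: /data/flink/flink-1.15.0/bin/start-cluster.sh
```

After it runs, the web UI should be reachable on the `rest.port` configured above (8081).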

8. Create the Hadoop variables file:

vi /etc/ansible/vars/hadoop_vars.yml

Contents:

---
# Hadoop environment variables
hadoop_version: 3.3.3
hadoop_home: /data/hadoop/hadoop-3.3.3
java_home: /data/java/jdk-11.0.2
hadoop_conf_dir: /data/hadoop/hadoop-{{ hadoop_version }}/etc/hadoop
hadoop_pid_dir: /data/hadoop/hadoop-{{ hadoop_version }}/pids
hadoop_log_dir: /data/hadoop-dir/logs-{{ hadoop_version }}

9. Create the Hadoop deployment playbook:

vi /etc/ansible/hadoop.yml

Contents:

---
- name: Deploy Hadoop-3.3.3
  hosts: all
  become: yes
  vars_files:
    - vars/hadoop_vars.yml

  tasks:
    - name: Copy Hadoop
      copy:
        src: /data/hadoop/hadoop-3.3.3.tar.gz
        dest: /tmp/hadoop-3.3.3.tar.gz

    - name: Extract Hadoop
      unarchive:
        src: /tmp/hadoop-3.3.3.tar.gz
        dest: /data/hadoop/
        remote_src: yes

    - name: Copy core-site.xml
      copy:
        src: /file/core-site.xml
        dest: /data/hadoop/hadoop-3.3.3/etc/hadoop/core-site.xml

    - name: Copy yarn-site.xml
      copy:
        src: /file/yarn-site.xml
        dest: /data/hadoop/hadoop-3.3.3/etc/hadoop/yarn-site.xml

    - name: Copy mapred-site.xml
      copy:
        src: /file/mapred-site.xml
        dest: /data/hadoop/hadoop-3.3.3/etc/hadoop/mapred-site.xml

    - name: Copy hdfs-site.xml
      copy:
        src: /file/hdfs-site.xml
        dest: /data/hadoop/hadoop-3.3.3/etc/hadoop/hdfs-site.xml

    - name: Ensure hadoop-env environment variables are set in hadoop-env.sh
      blockinfile:
        path: "{{ hadoop_conf_dir }}/hadoop-env.sh"
        block: |
          export JAVA_HOME={{ java_home }}
          export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
          export HADOOP_CONF_DIR={{ hadoop_conf_dir }}
          export HADOOP_PID_DIR={{ hadoop_pid_dir }}
          export HADOOP_LOG_DIR={{ hadoop_log_dir }}
        marker: "# {mark} ANSIBLE MANAGED BLOCK FOR hadoop-env ENVIRONMENT VARIABLES"
        create: yes
        state: present
        insertafter: EOF

    - name: Ensure workers are set in /data/hadoop/hadoop-3.3.3/etc/hadoop/workers
      blockinfile:
        path: /data/hadoop/hadoop-3.3.3/etc/hadoop/workers
        block: |
          {% for host in groups['flink_workers'] %}
          {{ host }}
          {% endfor %}
        marker: "# {mark} ANSIBLE MANAGED BLOCK FOR workers ENVIRONMENT VARIABLES"
        create: yes
        state: present
        insertafter: EOF

    - name: Ensure yarn-env environment variables are set in yarn-env.sh
      blockinfile:
        path: "{{ hadoop_conf_dir }}/yarn-env.sh"
        block: |
          export JAVA_HOME={{ java_home }}
        marker: "# {mark} ANSIBLE MANAGED BLOCK FOR yarn-env ENVIRONMENT VARIABLES"
        create: yes
        state: present
        insertafter: EOF

    - name: Add HADOOP_HOME to /etc/profile.d/
      blockinfile:
        path: /etc/profile.d/hadoop.sh
        create: yes
        mode: '0644'
        block: |
          export HADOOP_HOME=/data/hadoop/hadoop-3.3.3
          export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
          export HADOOP_PID_DIR=${HADOOP_HOME}/pids
          export HADOOP_CLASSPATH=$(hadoop classpath)
          export HADOOP_CONF_DIR=/data/hadoop/hadoop-3.3.3/etc/hadoop
          export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
          export FORMAT_MESSAGES_PATTERN_DISABLE_LOOKUPS=true
        marker: "# {mark} ANSIBLE MANAGED BLOCK - HADOOP_HOME"
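The four playbooks above can be chained into a single entry point so the whole stack deploys with one command. A minimal sketch (the `site.yml` name is an assumption, not part of the original layout):

```yaml
# site.yml — run the whole deployment in order:
#   ansible-playbook /etc/ansible/site.yml
- import_playbook: change.yml
- import_playbook: jdk.yml
- import_playbook: hadoop.yml
- import_playbook: flink.yml
```

`import_playbook` runs each file sequentially, so the node preparation and JDK install always land before Hadoop and Flink depend on them.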
