Automated Deployment of Standalone Flink with Ansible

I. Prepare the Environment

1. Prepare the machines

  This walkthrough uses two virtual machines, 192.168.190.155 and 192.168.190.157. All of the operations in this article are performed on the control machine.

IP address | Node | CPU & memory | System disk | Storage
192.168.190.155 | Ansible control machine | 2C4G | 50GB | 50GB
192.168.190.157 | Master/Worker (managed machine) | 2C4G | 50GB | 50GB

  The main steps that follow are all carried out on the control machine, 192.168.190.155.

2. Create a flink user on 192.168.190.157

useradd flink
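Because every step in this article is driven from the control machine, the user can also be created over SSH; a minimal sketch, assuming root password login to 192.168.190.157 is still available at this point:

ssh root@192.168.190.157 "useradd flink" ## create the service user remotely
ssh root@192.168.190.157 "id flink" ## confirm the user exists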

3. Turn off the firewall

systemctl status firewalld

If it is already stopped, move on to the next step; if it is running, execute the following commands:

systemctl stop firewalld ## stop the firewall for the current session
systemctl disable firewalld ## keep the firewall off permanently (across reboots)

4. Set up passwordless SSH from the control machine 192.168.190.155 to the managed machine:

① Generate a key pair:

ssh-keygen -t rsa

② Copy the public key with ssh-copy-id:

ssh-copy-id root@192.168.190.157 ## push the public key to flink_sa
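Before continuing, it is worth confirming that key-based login actually works; a quick check:

ssh root@192.168.190.157 "hostname" ## should print the remote hostname without asking for a password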

5. Create the required directories on the control machine 192.168.190.155

① Create the Ansible role directories:

mkdir -p /etc/ansible/roles/{flink_sa,java_sa}/{defaults,tasks,templates}

② Create directories to hold the tarballs:

mkdir -p /data/{flink_sa,java_sa}

6. Define the host and its variables in the Ansible inventory on the control machine:

Path: vi /etc/ansible/hosts

[flink]

flink_sa ansible_host=192.168.190.157 FLINK_SA=192.168.190.157
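With the inventory in place, the managed machine can be reached through Ansible's ping module; a minimal check:

ansible flink -m ping ## flink_sa should answer with "pong"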

II. Write the Java Playbook and Variables

1. Edit the variables file under the role's defaults directory: main.yml

vi /etc/ansible/roles/java_sa/defaults/main.yml

Contents:

---
JAVA_VERSION: 1.8.0_192
JAVA_INSTALL_DIR: /data/java_sa
JAVA_HOME: "{{ JAVA_INSTALL_DIR }}/jdk{{ JAVA_VERSION }}"
JAVA_TAR: jdk-8u192-linux-x64.tar.gz
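The playbook below copies {{ JAVA_TAR }} out of JAVA_INSTALL_DIR on the control machine, so the JDK archive has to be placed in /data/java_sa beforehand (it must be downloaded from Oracle separately). A quick check that the file is where the variables expect it:

ls -lh /data/java_sa/jdk-8u192-linux-x64.tar.gz ## the tarball the Copy Java_sa task reads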

2. Edit the playbook under the role's tasks directory: java.yml

vi /etc/ansible/roles/java_sa/tasks/java.yml

Contents:

---
- name: Deploy Java-JDK-1.8.0_192
  hosts: flink
  become: yes
  roles:
    - java_sa
  tasks:
    - name: Create /data
      ansible.builtin.file:
        path: /data
        state: directory
        mode: '0755'

    - name: Create /data/flink_sa
      ansible.builtin.file:
        path: /data/flink_sa
        state: directory
        mode: '0755'

    - name: Create /data/java_sa
      ansible.builtin.file:
        path: /data/java_sa
        state: directory
        mode: '0755'

    - name: Copy Java_sa
      copy:
        src: "{{ JAVA_INSTALL_DIR }}/{{ JAVA_TAR }}"
        dest: /tmp/{{ JAVA_TAR }}

    - name: Extract Java_sa
      unarchive:
        src: /tmp/{{ JAVA_TAR }}
        dest: "{{ JAVA_INSTALL_DIR }}"
        remote_src: yes

    - name: Add JAVA_SA.SH to /etc/profile.d
      blockinfile:
        path: /etc/profile.d/java_sa.sh
        create: yes
        mode: '0644'
        block: |
          export JAVA_HOME={{ JAVA_HOME }}
          export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
          export PATH=$PATH:$JAVA_HOME/bin
        marker: "# {mark} ANSIBLE MANAGED BLOCK - JAVA_SA.SH"

- name: Deploy Hosts
  hosts: flink
  become: yes
  roles:
    - java_sa
  tasks:
    - name: Ensure host entries are set in /etc/hosts
      blockinfile:
        path: /etc/hosts
        block: |
          {% for host in groups['flink'] %}
          {{ hostvars[host]['ansible_host'] }} {{ host }}
          {% endfor %}
        marker: "# {mark} ANSIBLE MANAGED BLOCK FOR hosts ENVIRONMENT VARIABLES"
        create: yes
        mode: '0644'

    - name: Template /etc/hostname
      ansible.builtin.template:
        src: /etc/ansible/roles/java_sa/templates/hostname.j2
        dest: /etc/hostname
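Before the real run, the playbook can be validated from its tasks directory; a minimal sketch:

cd /etc/ansible/roles/java_sa/tasks
ansible-playbook --syntax-check java.yml ## checks YAML and play structure only, makes no changes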

3. Edit the Jinja2 template used by the playbook (it generates the hostname file automatically):

① hostname.j2

vi /etc/ansible/roles/java_sa/templates/hostname.j2

Contents:

{{ inventory_hostname }}


III. Write the Flink Playbook and Variables

1. Edit the variables file under the role's defaults directory: main.yml

vi /etc/ansible/roles/flink_sa/defaults/main.yml

Contents:

---
FLINK_VERSION: 1.12.7
FLINK_TAR: flink-1.12.7-bin-scala_2.11.tgz
FLINK_HOME: "{{ FLINK_INSTALL_DIR }}/flink-{{ FLINK_VERSION }}"
FLINK_INSTALL_DIR: /data/flink_sa
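As with the JDK, the Flink tarball must already sit in FLINK_INSTALL_DIR (/data/flink_sa) on the control machine before the copy task runs. For example, this release can typically be fetched from the Apache archive (URL given as an illustration):

cd /data/flink_sa
wget https://archive.apache.org/dist/flink/flink-1.12.7/flink-1.12.7-bin-scala_2.11.tgz ## matches FLINK_TAR above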

2. Edit the playbook under the role's tasks directory: flink_sa.yml

vi /etc/ansible/roles/flink_sa/tasks/flink_sa.yml

Contents:

---
- name: Deploy Flink
  hosts: flink
  become: yes
  roles:
    - flink_sa
  tasks:
    - name: Copy flink_sa
      copy:
        src: "{{ FLINK_INSTALL_DIR }}/{{ FLINK_TAR }}"
        dest: /tmp/{{ FLINK_TAR }}

    - name: Extract flink_sa
      unarchive:
        src: /tmp/{{ FLINK_TAR }}
        dest: "{{ FLINK_INSTALL_DIR }}"
        remote_src: yes

    - name: Template flink-conf configuration file
      ansible.builtin.template:
        src: /etc/ansible/roles/flink_sa/templates/flink-conf.j2
        dest: "{{ FLINK_HOME }}/conf/flink-conf.yaml"

    - name: Add FLINK_SA.SH to /etc/profile.d
      blockinfile:
        path: /etc/profile.d/flink_sa.sh
        create: yes
        mode: '0644'
        block: |
          export FLINK_HOME={{ FLINK_HOME }}
          export PATH=$PATH:$FLINK_HOME/bin
        marker: "# {mark} ANSIBLE MANAGED BLOCK - FLINK_SA.SH"

    - name: Template flink_sa-service.j2
      ansible.builtin.template:
        src: /etc/ansible/roles/flink_sa/templates/flink_sa-service.j2
        dest: /etc/systemd/system/flink_sa.service
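This playbook can be validated the same way before it is executed; a minimal sketch:

cd /etc/ansible/roles/flink_sa/tasks
ansible-playbook --syntax-check flink_sa.yml ## catches indentation and structure mistakes early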

3. Edit the Jinja2 templates used by the playbook (they generate the Flink configuration files automatically):

① flink-conf.j2

vi /etc/ansible/roles/flink_sa/templates/flink-conf.j2

Contents:

jobmanager.rpc.address: {{ FLINK_SA }}
jobmanager.rpc.port: 6123
jobmanager.memory.process.size: 2048m
taskmanager.memory.process.size: 4096m
taskmanager.numberOfTaskSlots: 30
parallelism.default: 10
yarn.containers.vcores: 4
yarn.application-attempts: 10
metrics.scope.jm: <host>.jobmanager
metrics.scope.jm.job: <host>.jobmanager.<job_name>
metrics.scope.tm: <host>.taskmanager.<tm_id>
metrics.scope.tm.job: <host>.taskmanager.<tm_id>.<job_name>
metrics.scope.task: <host>.taskmanager.<tm_id>.<job_name>.<task_name>.<subtask_index>
metrics.scope.operator: <host>.taskmanager.<tm_id>.<job_name>.<operator_name>.<subtask_index>
state.backend: filesystem
state.checkpoints.dir: hdfs://{{ inventory_hostname }}:9000/flink-{{ FLINK_VERSION }}/flink-checkpoints
state.savepoints.dir: hdfs://{{ inventory_hostname }}:9000/flink-{{ FLINK_VERSION }}/flink-savepoints
state.backend.incremental: false
jobmanager.execution.failover-strategy: region
rest.port: 8081
rest.bind-port: 8085-8087
web.tmpdir: "{{ FLINK_HOME }}/flink-web"
web.upload.dir: "{{ FLINK_HOME }}/jars"
taskmanager.memory.managed.fraction: 0.1
taskmanager.memory.network.max: 200mb
taskmanager.memory.jvm-metaspace.size: 256mb
jobmanager.archive.fs.dir: hdfs://{{ inventory_hostname }}:9000/flink-{{ FLINK_VERSION }}/completed-jobs
historyserver.archive.fs.dir: hdfs://{{ inventory_hostname }}:9000/flink-{{ FLINK_VERSION }}/completed-jobs
historyserver.archive.fs.refresh-interval: 10000
restart-strategy.failure-rate.max-failures-per-interval: 3
restart-strategy.failure-rate.failure-rate-interval: 5 min
restart-strategy.failure-rate.delay: 10 s
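Once the Flink playbook has run, the rendered configuration on the managed machine can be spot-checked from the control machine; a sketch, using the FLINK_HOME path derived from the defaults above:

ansible flink -m shell -a "grep jobmanager.rpc.address /data/flink_sa/flink-1.12.7/conf/flink-conf.yaml" ## should show 192.168.190.157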


② flink_sa.j2

vi /etc/ansible/roles/flink_sa/templates/flink_sa.j2

Contents:

{{ FLINK_SA }}:8081

③ flink_sa-service.j2

vi /etc/ansible/roles/flink_sa/templates/flink_sa-service.j2

Contents:

[Unit]
Description=Flink Service
After=network.target

[Service]
Type=forking
User=flink
Group=flink
ExecStart={{ FLINK_HOME }}/bin/start-cluster.sh
ExecStop={{ FLINK_HOME }}/bin/stop-cluster.sh
Restart=on-abort

[Install]
WantedBy=multi-user.target
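The playbook only installs this unit file; Flink still has to be started through systemd once the playbooks have been applied. A sketch, run on the managed machine (or wrapped in an Ansible ad-hoc command):

systemctl daemon-reload ## pick up the new flink_sa.service unit
systemctl enable --now flink_sa ## start Flink and enable it at boot
systemctl status flink_sa --no-pager ## verify the service is active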


④ flink-workers.j2

vi /etc/ansible/roles/flink_sa/templates/flink-workers.j2

Contents:

{{ FLINK_SA }}

IV. Run the Ansible Playbooks

1. Run java.yml:

ansible-playbook java.yml ## run from /etc/ansible/roles/java_sa/tasks (cd there first)

2. Run flink_sa.yml:

ansible-playbook flink_sa.yml ## run from /etc/ansible/roles/flink_sa/tasks (cd there first)
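After both playbooks have finished and the flink_sa service has been started, the cluster processes can be checked from the control machine; a sketch (the class names are the usual ones for a Flink 1.12 standalone cluster):

ansible flink -m shell -a "ps -ef | grep -v grep | grep -E 'StandaloneSession|TaskManagerRunner'" ## JobManager and TaskManager JVMs should appear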

3. The playbooks can also be batch-run from a script

① Create a script (name it whatever you like):

vi /etc/ansible/ansible-playbook.sh

Contents (note: it is best to reference the playbook YAML files by absolute path inside the script):

#!/bin/bash
ansible-playbook /etc/ansible/roles/java_sa/tasks/java.yml
ansible-playbook /etc/ansible/roles/flink_sa/tasks/flink_sa.yml

② Run the script:

sh ansible-playbook.sh ## run from /etc/ansible (cd there first)
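Either way the playbooks are run, a final group-wide check confirms the systemd unit is healthy and the web UI answers; a sketch (adjust the port if the server bound to one from the rest.bind-port range instead of 8081):

ansible flink -m shell -a "systemctl is-active flink_sa" ## should print "active"
curl -s http://192.168.190.157:8081/overview ## Flink REST / web UI overview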
