Apache Flink Standalone and HA Setup

Apache Flink (Part 3)

Standalone Mode

Prerequisites

JDK 1.8+ installed
HDFS up and running (SSH passwordless authentication configured)

  • Raise the CentOS process and open-file limits (takes effect after reboot) - optional
[root@HadoopNode00 ~]# vi /etc/security/limits.conf
* soft nofile 204800
* hard nofile 204800
* soft nproc 204800
* hard nproc 204800
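A quick way to confirm the new limits after the reboot (both should print 204800):
[root@HadoopNode00 ~]# ulimit -n   # max open files
[root@HadoopNode00 ~]# ulimit -u   # max user processes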
  • Configure the hostname (takes effect after reboot)
[root@HadoopNode00 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=HadoopNode00
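The file above only takes effect after a reboot; to also rename the running session right away (optional, and not persistent by itself):
[root@HadoopNode00 ~]# hostname HadoopNode00
[root@HadoopNode00 ~]# hostname   # verify
HadoopNode00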
  • Map the hostname to its IP address
[root@HadoopNode00 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.126.10 HadoopNode00
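A one-line sanity check that the name resolves to the right address:
[root@HadoopNode00 ~]# ping -c 1 HadoopNode00   # should resolve to 192.168.126.10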
  • Disable the firewall
[root@HadoopNode00 ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@HadoopNode00 ~]# chkconfig iptables off
[root@HadoopNode00 ~]# chkconfig --list | grep iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off
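The service/chkconfig commands above are for CentOS 6, which this guide uses throughout. If you are instead on CentOS 7+ (an assumption outside this walkthrough), the firewalld equivalents are:
[root@HadoopNode00 ~]# systemctl stop firewalld      # stop the firewall now
[root@HadoopNode00 ~]# systemctl disable firewalld   # keep it off across reboots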
  • Install JDK 1.8 and configure JAVA_HOME (~/.bashrc) - omitted
  • Configure SSH passwordless authentication - omitted
  • Install and configure Hadoop; set HADOOP_HOME and HADOOP_CLASSPATH (~/.bashrc) - omitted; a minimal sketch follows
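Since the JDK and Hadoop steps are omitted above, here is a minimal sketch of the corresponding ~/.bashrc entries; the install paths are hypothetical placeholders, so adjust them to your own layout:
[root@HadoopNode00 ~]# vi ~/.bashrc
# hypothetical install locations - adjust to your environment
export JAVA_HOME=/usr/java/jdk1.8.0_171
export HADOOP_HOME=/home/hadoop/hadoop-2.9.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# lets Flink pick up the Hadoop client jars
export HADOOP_CLASSPATH=`hadoop classpath`
[root@HadoopNode00 ~]# source ~/.bashrc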
  • Flink installation and configuration

1. Extract flink-1.8.1-bin-scala_2.11.tgz into the directory /home/flink

[root@HadoopNode00 ~]# mkdir /home/flink
[root@HadoopNode00 ~]# tar -zxf flink-1.8.1-bin-scala_2.11.tgz -C /home/flink/
[root@HadoopNode00 ~]# cd /home/flink/flink-1.8.1/
[root@HadoopNode00 flink-1.8.1]# ls -l
total 628
drwxr-xr-x. 2 502 games   4096 Dec 23 11:26 bin  # launch scripts
drwxr-xr-x. 2 502 games   4096 Jun 25 16:10 conf # configuration directory
drwxr-xr-x. 6 502 games   4096 Dec 23 11:26 examples # example jobs
drwxr-xr-x. 2 502 games   4096 Dec 23 11:26 lib # core dependency jars
-rw-r--r--. 1 502 games  11357 Jun 14  2019 LICENSE
drwxr-xr-x. 2 502 games   4096 Dec 23 11:26 licenses
drwxr-xr-x. 2 502 games   4096 Jun 24 23:02 log # runtime logs; check here when something fails
-rw-r--r--. 1 502 games 596009 Jun 24 23:02 NOTICE
drwxr-xr-x. 2 502 games   4096 Dec 23 11:26 opt # optional Flink jars; copy into lib when needed
-rw-r--r--. 1 502 games   1308 Jun 14  2019 README.txt
[root@HadoopNode00 flink-1.8.1]# tree conf/
conf/
├── flink-conf.yaml  # main configuration file √
├── log4j-cli.properties
├── log4j-console.properties
├── log4j.properties
├── log4j-yarn-session.properties
├── logback-console.xml
├── logback.xml
├── logback-yarn.xml
├── masters # master node list; not needed in single-node mode
├── slaves  # worker (TaskManager) node list √
├── sql-client-defaults.yaml
└── zoo.cfg

2. Configure the conf/slaves file

[root@HadoopNode00 flink-1.8.1]# vi conf/slaves

HadoopNode00

3. Configure flink-conf.yaml

#==============================================================================
# Common
#==============================================================================
jobmanager.rpc.address: HadoopNode00
# number of task slots offered by each TaskManager (worker compute resources)
taskmanager.numberOfTaskSlots: 4
# default parallelism for submitted jobs
parallelism.default: 3
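As a quick sanity check on these numbers: one TaskManager offering 4 slots gives this single-node cluster 4 slots in total, so the default parallelism of 3 fits with one slot to spare. If parallelism.default exceeded the total slot count, submitted jobs would sit waiting for resources.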

4. Start the Flink service

[root@HadoopNode00 flink-1.8.1]# ./bin/start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host HadoopNode00.
Starting taskexecutor daemon on host HadoopNode00.
[root@HadoopNode00 flink-1.8.1]# jps
10833 TaskManagerRunner
10340 StandaloneSessionClusterEntrypoint
10909 Jps

(Screenshot: Flink web UI, reachable at http://HadoopNode00:8081)
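To confirm the cluster actually accepts work, you can submit one of the bundled examples; the streaming WordCount below ships with the distribution (see the examples directory listed earlier) and, run without arguments, processes built-in sample data and prints to the TaskManager's .out file:
[root@HadoopNode00 flink-1.8.1]# ./bin/flink run examples/streaming/WordCount.jar
[root@HadoopNode00 flink-1.8.1]# tail log/flink-*-taskexecutor-*.out   # word counts appear here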

HA Mode

Reference: https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/jobmanager_high_availability.html

  • Set up an HDFS HA cluster and verify it starts normally (covered in an earlier lesson)
  • Configure HADOOP_CLASSPATH
  • Configure Flink HA (prepare HadoopNode01~03; see the ZooKeeper note after the config below)
    [root@HadoopNodeXX ~]# mkdir /home/flink
    [root@HadoopNodeXX ~]# tar -zxf flink-1.8.1-bin-scala_2.11.tgz -C /home/flink
    [root@HadoopNodeXX ~]# cd /home/flink/
    [root@HadoopNodeXX flink]# cd flink-1.8.1/
    [root@HadoopNodeXX flink-1.8.1]# vi conf/masters
    HadoopNode01:8081
    HadoopNode02:8081
    HadoopNode03:8081
    [root@HadoopNodeXX flink-1.8.1]# vi conf/slaves
    HadoopNode01
    HadoopNode02
    HadoopNode03
    [root@HadoopNodeXX flink-1.8.1]# vi conf/flink-conf.yaml
    taskmanager.numberOfTaskSlots: 4
    parallelism.default: 3
    high-availability: zookeeper
    high-availability.storageDir: hdfs:///flink/ha/
    high-availability.zookeeper.quorum: HadoopNode01:2181,HadoopNode02:2181,HadoopNode03:2181
    high-availability.zookeeper.path.root: /flink
    high-availability.cluster-id: /default_ns
    
    state.backend: rocksdb
    state.checkpoints.dir: hdfs:///flink-checkpoints
    state.savepoints.dir: hdfs:///flink-savepoints
    state.backend.incremental: false
    state.backend.rocksdb.ttl.compaction.filter.enabled: true
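Note that high-availability.zookeeper.quorum above assumes a ZooKeeper ensemble is already listening on HadoopNode01~03:2181. If you do not run a separate ensemble, Flink ships a helper that bootstraps one from conf/zoo.cfg; a minimal sketch (the server.* lines are added to the stock zoo.cfg):
    [root@HadoopNodeXX flink-1.8.1]# vi conf/zoo.cfg
    server.1=HadoopNode01:2888:3888
    server.2=HadoopNode02:2888:3888
    server.3=HadoopNode03:2888:3888
    [root@HadoopNode01 flink-1.8.1]# ./bin/start-zookeeper-quorum.sh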

Start the Flink cluster

    [root@HadoopNode01 flink-1.8.1]# ./bin/start-cluster.sh
    Starting HA cluster with 3 masters.
    Starting standalonesession daemon on host HadoopNode01.
    Starting standalonesession daemon on host HadoopNode02.
    Starting standalonesession daemon on host HadoopNode03.
    Starting taskexecutor daemon on host HadoopNode01.
    Starting taskexecutor daemon on host HadoopNode02.
    Starting taskexecutor daemon on host HadoopNode03.
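On each of HadoopNode01~03, jps should now show both daemons, mirroring the standalone setup:
    [root@HadoopNodeXX flink-1.8.1]# jps   # expect StandaloneSessionClusterEntrypoint and TaskManagerRunner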

You can simulate a master failure by stopping and starting individual JobManagers with ./bin/jobmanager.sh start|stop, as sketched below.
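For example, assuming HadoopNode01 currently holds leadership (an assumption; check the web UIs to see which master is active), a minimal failover drill looks like this:
    # stop the current leader; ZooKeeper elects a new one from the standbys
    [root@HadoopNode01 flink-1.8.1]# ./bin/jobmanager.sh stop
    # confirm via another master's UI (e.g. http://HadoopNode02:8081) that it took over,
    # then bring the stopped JobManager back as a standby
    [root@HadoopNode01 flink-1.8.1]# ./bin/jobmanager.sh start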

Note:

XX stands for each node; run those commands on every one of HadoopNode01~03.
