Setting Up a Flink 1.15.0 Standalone Cluster on CentOS 7

The same steps also apply to version 1.14.4.

1. Installation Plan

  • Set up passwordless SSH login between all the servers; note that authorized_keys must have permission 600
Service   Installed on       Installation guide
java8     bigdata001/2/3
hadoop    bigdata001/2/3     Distributed installation of Hadoop 3.3.1 on CentOS 7
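The SSH requirement above can be sketched as follows. The permission tightening is the part sshd actually enforces (it silently ignores an authorized_keys file that is group- or world-writable); the key-distribution commands in the comments are illustrative, using the host names from the table:

```shell
# Sketch: strict permissions on ~/.ssh. SSH_DIR defaults to the current
# user's ~/.ssh; on the cluster this runs as root on every node.
SSH_DIR="${SSH_DIR:-$HOME/.ssh}"
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
stat -c '%a' "$SSH_DIR/authorized_keys"

# Key distribution itself (run on each node, pushing to the others):
#   ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
#   ssh-copy-id root@bigdata001   # repeat for bigdata002/3
```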

2. Download and Extract (on bigdata001)

[root@bigdata001 opt]#
[root@bigdata001 opt]# wget https://dlcdn.apache.org/flink/flink-1.15.0/flink-1.15.0-bin-scala_2.12.tgz
[root@bigdata001 opt]# 
[root@bigdata001 opt]# tar -zxvf flink-1.15.0-bin-scala_2.12.tgz
[root@bigdata001 opt]#
[root@bigdata001 opt]# cd flink-1.15.0
[root@bigdata001 flink-1.15.0]#

3. Edit conf/flink-conf.yaml (on bigdata001)

Create the new directories:

[root@bigdata001 flink-1.15.0]# 
[root@bigdata001 flink-1.15.0]# pwd
/opt/flink-1.15.0
[root@bigdata001 flink-1.15.0]# 
[root@bigdata001 flink-1.15.0]# mkdir web_upload_dir
[root@bigdata001 flink-1.15.0]# 
[root@bigdata001 flink-1.15.0]# mkdir io_tmp_dir
[root@bigdata001 flink-1.15.0]# 

Modified settings:


jobmanager.rpc.address: bigdata001

rest.address: bigdata001
jobmanager.bind-host: 0.0.0.0

jobmanager.memory.process.size: 2g
taskmanager.memory.process.size: 6g

taskmanager.numberOfTaskSlots: 2

state.backend: rocksdb
state.checkpoints.dir: hdfs://bigdata001:9000/flink/checkpoints/rocksdb
state.savepoints.dir: hdfs://bigdata001:9000/flink/savepoints/rocksdb

rest.bind-address: bigdata001

io.tmp.dirs: /opt/flink-1.15.0/io_tmp_dir

Added settings:

env.java.home: /opt/jdk1.8.0_201

execution.checkpointing.interval: 300000

web.upload.dir: /opt/flink-1.15.0/web_upload_dir
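A quick way to confirm the edits took effect is to grep the keys back out of the file. The snippet below runs against an inline sample copy so it can be dry-run anywhere; on bigdata001, point CONF at /opt/flink-1.15.0/conf/flink-conf.yaml instead:

```shell
# Sanity-check sketch: count the edited keys. CONF here is a temp sample
# file; replace it with /opt/flink-1.15.0/conf/flink-conf.yaml on the node.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
jobmanager.rpc.address: bigdata001
state.backend: rocksdb
taskmanager.numberOfTaskSlots: 2
EOF
MATCHES=$(grep -cE '^(jobmanager\.rpc\.address|state\.backend|taskmanager\.numberOfTaskSlots):' "$CONF")
echo "$MATCHES keys found"
```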

4. Edit conf/masters and conf/workers (on bigdata001)

conf/masters:

bigdata001:8081

conf/workers:

bigdata002
bigdata003
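The two files above can also be written non-interactively. CONF_DIR defaults to a temp directory here so the snippet is safe to dry-run; on bigdata001 set CONF_DIR=/opt/flink-1.15.0/conf:

```shell
# Sketch: generate conf/masters and conf/workers from the plan in section 1.
CONF_DIR="${CONF_DIR:-$(mktemp -d)}"
printf 'bigdata001:8081\n' > "$CONF_DIR/masters"
printf 'bigdata002\nbigdata003\n' > "$CONF_DIR/workers"
cat "$CONF_DIR/masters" "$CONF_DIR/workers"
```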

5. Add Environment Variables (on bigdata001)

vi /root/.bashrc (do this on bigdata002/3 as well)

export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`

export HADOOP_CONF_DIR=/opt/hadoop-3.3.1/etc/hadoop

vi /root/.bashrc

export FLINK_HOME=/opt/flink-1.15.0

export PATH=$PATH:$FLINK_HOME/bin
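After editing .bashrc, reload it with `source /root/.bashrc` and confirm the variables resolve; the paths below are the ones used throughout this guide:

```shell
# Sketch: verify the Flink environment variables after sourcing .bashrc.
export FLINK_HOME=/opt/flink-1.15.0
export PATH="$PATH:$FLINK_HOME/bin"
echo "FLINK_HOME=$FLINK_HOME"
# Check that $FLINK_HOME/bin is on PATH (so start-cluster.sh resolves).
case ":$PATH:" in
  *":$FLINK_HOME/bin:"*) echo "PATH ok" ;;
  *) echo "PATH is missing \$FLINK_HOME/bin" ;;
esac
```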

6. Start and Verify

  1. Distribute the flink-1.15.0 directory (on bigdata001)
[root@bigdata001 opt]# scp -r flink-1.15.0 root@bigdata002:/opt
[root@bigdata001 opt]# scp -r flink-1.15.0 root@bigdata003:/opt
  2. Start the cluster (on bigdata001)
[root@bigdata001 opt]# start-cluster.sh 
Starting cluster.
Starting standalonesession daemon on host bigdata001.
Starting taskexecutor daemon on host bigdata002.
Starting taskexecutor daemon on host bigdata003.
[root@bigdata001 opt]#
  3. Visit http://bigdata001:8081 to see the Flink Web UI:
    (Web UI screenshot)

  4. Run a test program (on bigdata001)

[root@bigdata001 opt]# /opt/flink-1.15.0/bin/flink run /opt/flink-1.15.0/examples/streaming/WordCount.jar 
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/flink-1.15.0/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-3.3.1/share/hadoop/common/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Executing WordCount example with default input data set.
Use --input to specify file input.
Printing result to stdout. Use --output to specify output path.
Job has been submitted with JobID c16df30fa6cc2cee2f5523021d08f80b
Program execution finished
Job with JobID c16df30fa6cc2cee2f5523021d08f80b has finished.
Job Runtime: 2555 ms

[root@bigdata001 opt]#
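The JobID printed by `flink run` is what follow-up commands such as `flink cancel <id>` need. The sed sketch below extracts it; the sample line is inlined so the snippet can be dry-run without a cluster:

```shell
# Sketch: pull the 32-hex-digit JobID out of `flink run` output.
RUN_OUT='Job has been submitted with JobID c16df30fa6cc2cee2f5523021d08f80b'
JOB_ID=$(printf '%s\n' "$RUN_OUT" | sed -n 's/.*JobID \([0-9a-f]\{32\}\).*/\1/p')
echo "JobID: $JOB_ID"
# On the live cluster, capture the real output instead, e.g.:
#   RUN_OUT=$(flink run /opt/flink-1.15.0/examples/streaming/WordCount.jar)
```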
  5. Stop the cluster (on bigdata001)
[root@bigdata001 opt]# stop-cluster.sh 
Stopping taskexecutor daemon (pid: 32594) on host bigdata002.
Stopping taskexecutor daemon (pid: 3484) on host bigdata003.
Stopping standalonesession daemon (pid: 1900) on host bigdata001.
[root@bigdata001 opt]#