1. Pre-installation preparation
1.1 Install Hadoop
Reference: "Hadoop 3.3.2 offline installation" (shangjg3's blog on CSDN)
1.2 Configure environment variables
# flink
export FLINK_HOME=/data/cmpt/flink-1.14.2
export PATH=$PATH:$FLINK_HOME/bin
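The two exports above only last for the current shell session. A minimal way to persist them, assuming the flink-1.14.2 path used throughout this guide (append to /etc/profile instead of ~/.bashrc to apply to all users):

```shell
# Append the Flink variables to the current user's profile and reload it.
cat >> ~/.bashrc <<'EOF'
# flink
export FLINK_HOME=/data/cmpt/flink-1.14.2
export PATH=$PATH:$FLINK_HOME/bin
EOF
. ~/.bashrc
echo "$FLINK_HOME"
```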
1.3 Create the Flink directories on HDFS
hdfs dfs -mkdir -p /flink/flink-checkpoints/
hdfs dfs -mkdir -p /flink/flink-savepoints/
hdfs dfs -mkdir -p /flink/completed-jobs/
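To confirm the directories were created, list them on a node where the Hadoop install from 1.1 is on the PATH (the block below skips quietly where the hdfs CLI is absent):

```shell
# List the Flink directories created above; requires a running HDFS.
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfs -ls /flink    # expect the subdirectories created in 1.3
else
  echo "hdfs CLI not on PATH; skipping check"
fi
```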
2. Installation
2.1 Download and extract the pre-built package
tar -zxvf flink-1.14.2-bin-scala_2.12.tgz -C /data/cmpt   # extract into the parent directory of FLINK_HOME from 1.2
Edit the conf/flink-conf.yaml configuration:
jobmanager.rpc.address: master
jobmanager.rpc.port: 6123
jobmanager.memory.process.size: 1600m
taskmanager.memory.process.size: 1728m
taskmanager.numberOfTaskSlots: 3
parallelism.default: 1
# Checkpoint/savepoint storage on HDFS (NameNode assumed at master:8020)
state.backend: filesystem
state.checkpoints.dir: hdfs://master:8020/flink/flink-checkpoints
state.savepoints.dir: hdfs://master:8020/flink/flink-savepoints
jobmanager.execution.failover-strategy: region
# Archive finished jobs so the history server can display them
jobmanager.archive.fs.dir: hdfs://master:8020/flink/completed-jobs/
historyserver.archive.fs.dir: hdfs://master:8020/flink/completed-jobs/
classloader.check-leaked-classloader: false
# Web UI: advertised on port 8082; the server binds to any address in 8080-8090
rest.port: 8082
rest.address: 0.0.0.0
rest.bind-port: 8080-8090
rest.bind-address: 0.0.0.0
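Because the checkpoint, savepoint, and archive paths above use hdfs:// URIs, and Flink 1.14 no longer bundles Hadoop, Flink must find the Hadoop classes at start-up; otherwise it typically fails with "Could not find a file system implementation for scheme 'hdfs'". A common approach, assuming the hadoop command from 1.1 is on the PATH, is to export the Hadoop classpath in the same profile as in 1.2:

```shell
# Put the Hadoop jars on Flink's classpath so hdfs:// paths resolve.
# The '2>/dev/null || true' keeps the line harmless on machines where
# hadoop is not installed (HADOOP_CLASSPATH is then just empty).
export HADOOP_CLASSPATH=$(hadoop classpath 2>/dev/null || true)
echo "${HADOOP_CLASSPATH:-hadoop not found}"
```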
2.2 Starting and stopping
./bin/start-cluster.sh # start the cluster
./bin/stop-cluster.sh # stop the cluster
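After start-cluster.sh, one way to confirm the cluster is up is to check the JVM processes (names as reported for a Flink 1.14 standalone cluster; jps ships with the JDK):

```shell
# The JobManager runs as StandaloneSessionClusterEntrypoint and each
# TaskManager as TaskManagerRunner; print a note when nothing matches.
jps 2>/dev/null | grep -E 'StandaloneSessionClusterEntrypoint|TaskManagerRunner' \
  || echo "no Flink processes found"
```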
2.3 View cluster information
After start-up, the web UI is available at http://master:8082 (the rest.port configured above); it shows the JobManager, the registered TaskManagers, and their task slots.
3. Testing and usage
3.1 Example job
Submit the bundled streaming example:
flink run /data/cmpt/flink-1.14.2/examples/streaming/TopSpeedWindowing.jar
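Once the example is running, the same flink CLI can inspect and stop it. A sketch (the job ID is a placeholder; substitute the one printed by flink list, and the savepoint path is the directory configured in 2.1):

```shell
# Skip quietly on machines without the flink CLI on PATH.
if command -v flink >/dev/null 2>&1; then
  flink list                        # show running and scheduled jobs
  JOB_ID=replace-with-id-from-flink-list
  flink savepoint "$JOB_ID" hdfs://master:8020/flink/flink-savepoints  # trigger a savepoint
  flink cancel "$JOB_ID"            # stop the job
fi
```

The TopSpeedWindowing example writes its results to the TaskManager's stdout, which ends up in the *.out files under the log/ directory of FLINK_HOME rather than in the submitting terminal.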