Installing Flink 1.17 on Alibaba Cloud CentOS 7

Prerequisites

A JDK installed on the Alibaba Cloud CentOS 7 machine. The official documentation calls for Java 11, but Java 8 works as well. See the JDK section of the Hadoop installation guide.

Download the package

Download Flink 1.17.1

[hadoop@node1 ~]$ cd softinstall/
[hadoop@node1 softinstall]$ wget https://archive.apache.org/dist/flink/flink-1.17.1/flink-1.17.1-bin-scala_2.12.tgz

List the downloaded packages

[hadoop@node1 softinstall]$ ls
apache-zookeeper-3.7.1-bin.tar.gz  hadoop-3.3.4.tar.gz         kafka_2.12-3.3.1.tgz
flink-1.17.1-bin-scala_2.12.tgz    jdk-8u271-linux-x64.tar.gz

Extract the package

Extract the archive

[hadoop@node1 softinstall]$ tar -zxvf flink-1.17.1-bin-scala_2.12.tgz -C ~/soft

Check the extracted files

[hadoop@node1 softinstall]$ cd ~/soft
[hadoop@node1 soft]$ ls
apache-zookeeper-3.7.1-bin  flink-1.17.1  hadoop-3.3.4  jdk1.8.0_271  kafka_2.12-3.3.1

Set environment variables

[hadoop@node1 soft]$ sudo nano /etc/profile.d/my_env.sh

Add the following content

export FLINK_HOME=/home/hadoop/soft/flink-1.17.1
export PATH=$PATH:$FLINK_HOME/bin

Apply the environment variables immediately

[hadoop@node1 soft]$ source /etc/profile

Verify the version

[hadoop@node1 soft]$ flink -v
Version: 1.17.1, Commit ID: 2750d5c

Configure Flink

Inspect the Flink configuration files

[hadoop@node1 soft]$ cd $FLINK_HOME
[hadoop@node1 flink]$ ls
bin  conf  examples  lib  LICENSE  licenses  log  NOTICE  opt  plugins  README.txt
[hadoop@node1 flink]$ cd conf
[hadoop@node1 conf]$ ls
flink-conf.yaml       log4j-console.properties  log4j-session.properties  logback-session.xml  masters  zoo.cfg
log4j-cli.properties  log4j.properties          logback-console.xml       logback.xml          workers

Configure flink-conf.yaml

[hadoop@node1 conf]$ vim flink-conf.yaml

Find the following keys and change them as shown. Here node1 is the machine's hostname; adjust it to match your environment.

jobmanager.rpc.address: node1
jobmanager.bind-host: 0.0.0.0
taskmanager.bind-host: 0.0.0.0
taskmanager.host: node1
rest.address: node1
rest.bind-address: 0.0.0.0
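If you prefer not to edit the file by hand, the same six changes can be applied with sed. A sketch that assumes the stock flink-conf.yaml shipped with Flink 1.17.1 (replace node1 with your hostname):

```shell
# Rewrite the six keys in place; each pattern matches the corresponding
# default entry in the Flink 1.17.1 distribution.
sed -i \
  -e 's/^jobmanager.rpc.address:.*/jobmanager.rpc.address: node1/' \
  -e 's/^jobmanager.bind-host:.*/jobmanager.bind-host: 0.0.0.0/' \
  -e 's/^taskmanager.bind-host:.*/taskmanager.bind-host: 0.0.0.0/' \
  -e 's/^taskmanager.host:.*/taskmanager.host: node1/' \
  -e 's/^rest.address:.*/rest.address: node1/' \
  -e 's/^rest.bind-address:.*/rest.bind-address: 0.0.0.0/' \
  "$FLINK_HOME/conf/flink-conf.yaml"
```

If a key in your copy of the file is commented out, uncomment it first; these patterns only match lines that start with the key name.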

Configure masters

[hadoop@node1 conf]$ nano masters

Change the content to

node1:8081

Configure workers

[hadoop@node1 conf]$ nano workers

Change the content to

node1
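The two files can also be written in one step; a sketch to run from inside $FLINK_HOME/conf (replace node1 with your hostname):

```shell
# Write the standalone-cluster host lists.
echo 'node1:8081' > masters   # JobManager host and Web UI port
echo 'node1' > workers        # one TaskManager hostname per line
```

For a multi-node cluster, workers would list one hostname per TaskManager machine.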

Start/stop the Flink cluster

List the Flink commands

[hadoop@node1 conf]$ cd $FLINK_HOME/bin
[hadoop@node1 bin]$ ls
bash-java-utils.jar  flink-daemon.sh           kubernetes-taskmanager.sh  start-cluster.sh           yarn-session.sh
config.sh            historyserver.sh          pyflink-shell.sh           start-zookeeper-quorum.sh  zookeeper.sh
find-flink-home.sh   jobmanager.sh             sql-client.sh              stop-cluster.sh
flink                kubernetes-jobmanager.sh  sql-gateway.sh             stop-zookeeper-quorum.sh
flink-console.sh     kubernetes-session.sh     standalone-job.sh          taskmanager.sh

Start the Flink cluster

[hadoop@node1 flink-1.17.1]$ start-cluster.sh 
Starting cluster.
Starting standalonesession daemon on host node1.
Starting taskexecutor daemon on host node1.

Check the processes

[hadoop@node1 flink-1.17.1]$ jps
13527 Jps
13177 StandaloneSessionClusterEntrypoint
13503 TaskManagerRunner

Stop the Flink cluster

[hadoop@node1 flink-1.17.1]$ stop-cluster.sh 
Stopping taskexecutor daemon (pid: 13503) on host node1.
Stopping standalonesession daemon (pid: 13177) on host node1.

Check the processes

[hadoop@node1 flink-1.17.1]$ jps
14760 Jps

Start Flink processes individually

[hadoop@node1 flink-1.17.1]$ jobmanager.sh start
[hadoop@node1 flink-1.17.1]$ taskmanager.sh start

The session looks like this

[hadoop@node1 flink-1.17.1]$ jobmanager.sh 
Usage: jobmanager.sh ((start|start-foreground) [args])|stop|stop-all
[hadoop@node1 flink-1.17.1]$ jobmanager.sh start
Starting standalonesession daemon on host node1.
[hadoop@node1 flink-1.17.1]$ jps
15300 StandaloneSessionClusterEntrypoint
15316 Jps
[hadoop@node1 flink-1.17.1]$ taskmanager.sh 
Usage: taskmanager.sh (start|start-foreground|stop|stop-all)
[hadoop@node1 flink-1.17.1]$ taskmanager.sh start
Starting taskexecutor daemon on host node1.
[hadoop@node1 flink-1.17.1]$ jps
15300 StandaloneSessionClusterEntrypoint
15668 TaskManagerRunner
15693 Jps

Stop Flink processes individually

[hadoop@node1 flink-1.17.1]$ taskmanager.sh stop
[hadoop@node1 flink-1.17.1]$ jobmanager.sh stop

The session looks like this

[hadoop@node1 flink-1.17.1]$ taskmanager.sh stop
Stopping taskexecutor daemon (pid: 15668) on host node1.
[hadoop@node1 flink-1.17.1]$ jps
15300 StandaloneSessionClusterEntrypoint
16232 Jps
[hadoop@node1 flink-1.17.1]$ jobmanager.sh stop
Stopping standalonesession daemon (pid: 15300) on host node1.
[hadoop@node1 flink-1.17.1]$ jps
16544 Jps

Access the Web UI

Start the Flink cluster

start-cluster.sh

Open port 8081 in the Alibaba Cloud security group.

Visit

http://<server-public-IP>:8081
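Once the port is open, you can also confirm reachability from any machine with curl; the REST API answers on the same port as the Web UI (1.2.3.4 below is a placeholder for your server's public IP):

```shell
# Query the cluster overview endpoint; a JSON reply means the JobManager
# is reachable from outside the security group.
curl http://1.2.3.4:8081/overview
```

If the request hangs, recheck the security-group rule and that rest.bind-address is 0.0.0.0.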

Submit a Flink job

Run the WordCount example that ships with Flink

[hadoop@node1 examples]$ cd $FLINK_HOME
[hadoop@node1 flink-1.17.1]$ flink run examples/streaming/WordCount.jar
Executing example with default input data.
Use --input to specify file input.
Printing result to stdout. Use --output to specify output path.
Job has been submitted with JobID c4fcf3a5266af2d7e0df713d5bfdce85
Program execution finished
Job with JobID c4fcf3a5266af2d7e0df713d5bfdce85 has finished.
Job Runtime: 3139 ms
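As the job's own log lines note, --input and --output switch the example from the built-in data to your own files. A sketch with illustrative paths:

```shell
# Prepare a small input file and run WordCount on it. The output is
# written by the TaskManager, so both paths refer to that machine's
# local filesystem.
echo 'hello flink hello world' > /tmp/wordcount-input.txt
flink run examples/streaming/WordCount.jar \
    --input /tmp/wordcount-input.txt \
    --output /tmp/wordcount-output
```

With --output set, the counts go to that path instead of the TaskManager's .out file shown below.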

Check the job results

View the last 10 lines of the WordCount output in the TaskManager stdout files

[hadoop@node1 flink-1.17.1]$ tail log/flink-*-taskexecutor-*.out
==> log/flink-hadoop-taskexecutor-0-node1.out <==
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000e0000000, 536870912, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 536870912 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/hadoop/hs_err_pid10313.log

==> log/flink-hadoop-taskexecutor-1-node1.out <==
(nymph,1)
(in,3)
(thy,1)
(orisons,1)
(be,4)
(all,2)
(my,1)
(sins,1)
(remember,1)
(d,4)

View the submitted job in the Web UI

Check the job output under Task Managers

Troubleshooting

After starting Flink, jps does not show the expected processes. Check the log:

[hadoop@node1 log]$ cat flink-hadoop-standalonesession-0-node1.out 
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000d5550000, 715849728, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 715849728 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/hadoop/soft/flink-1.17.1/bin/hs_err_pid10001.log

The cloud server is out of memory. Stop some running processes to free memory, or upgrade the instance, and the startup succeeds.
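If neither option is available, shrinking Flink's own footprint also works. A sketch that appends overrides to flink-conf.yaml (Flink takes the last occurrence of a duplicated key; 1024m is an assumed value for a small instance, down from the 1.17 defaults of 1600m/1728m):

```shell
# Append smaller JVM process sizes for the JobManager and TaskManager;
# tune the numbers to your instance's available memory.
cat >> "$FLINK_HOME/conf/flink-conf.yaml" <<'EOF'
jobmanager.memory.process.size: 1024m
taskmanager.memory.process.size: 1024m
EOF
```

Restart the cluster afterwards for the new sizes to take effect.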

Reference: First steps | Apache Flink

Done. Enjoy!
