Flink in standalone cluster mode: start a job, it runs fine, everything OK.
YARN: run a Hive MapReduce job, it runs fine, everything OK.
Flink on YARN: after submitting a job, it hangs indefinitely at INFO org.apache.flink.yarn.YarnClusterDescriptor [] - Deployment took more than 60 seconds. Please check if the requested resources are available in the YARN cluster
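The message itself suggests checking whether the YARN cluster has free resources. As a quick sanity check (a hedged sketch, assuming a standard Hadoop CLI is on the path; this step was not part of the original run), the cluster nodes and running applications can be listed, and the ResourceManager web UI (normally on port 8088) shows the available memory and vcores:

yarn node -list
yarn application -list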
Troubleshooting: adjusted the requested resource sizes, restarted, and reran. The logs of the individual components looked fine at runtime, until I happened to spot a java.net.BindException: Port in use: hadoop103:8088 error in the logs.
So the port was already occupied...
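To confirm which process is holding the port, something like the following can be used on the affected host hadoop103 (a hedged sketch, assuming standard Linux tools such as lsof and netstat are installed):

lsof -i :8088
netstat -tnlp | grep 8088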
Shut down the Flink standalone cluster: stop-cluster.sh
Resubmitted the job, OK.
First tried it with the example jar:
./flink run -m yarn-cluster -yjm 1024 -ytm 1024 /opt/install/flink-1.15.2/examples/streaming/WordCount.jar --input hdfs://bigdata01/input/test.log --output hdfs://bigdata01/input/word_res1
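For reference, on Flink 1.15 the legacy -m yarn-cluster / -yjm / -ytm options can also be written with the newer generic CLI syntax (a hedged equivalent, not taken from the original run):

./flink run -t yarn-per-job \
  -Djobmanager.memory.process.size=1024m \
  -Dtaskmanager.memory.process.size=1024m \
  /opt/install/flink-1.15.2/examples/streaming/WordCount.jar \
  --input hdfs://bigdata01/input/test.log --output hdfs://bigdata01/input/word_res1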
It failed with:
Exception in thread "Thread-5" java.lang.IllegalStateException: Trying to access closed classloader. Please check if you store classloaders directly or indirectly in static fields. If the stacktrace suggests that the leak occurs in a third party library and cannot be fixed immediately, you can disable this check with the configuration 'classloader.check-leaked-classloader'
A classloader-related error. As the message itself says, the check can be disabled with the configuration 'classloader.check-leaked-classloader'.
Edit flink-conf.yaml and add classloader.check-leaked-classloader: false, then save and resubmit the job.
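A minimal sketch of the change in flink-conf.yaml (only the single added key is from this post; the comment is illustrative):

# flink-conf.yaml
# Disable the check that reports access to leaked (already closed) user-code classloaders
classloader.check-leaked-classloader: false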
The job was then successfully submitted to YARN and ran.