[Job Troubleshooting] Debugging a Flink on Hadoop (per-job mode) submission failure

Flink version: 1.12.5
Hadoop: HA deployment, version Hadoop 2.10.1

A Flink job suddenly could not be submitted to YARN. This post walks through the logs, digs into the internal communication between the Hadoop daemons, and uses that information to guide checks on, and operations against, each Hadoop daemon.
 

1. Understanding Hadoop HA

First, look at how the daemons are laid out across the nodes of a three-node Hadoop HA deployment:
[Figure: daemon layout of a three-node Hadoop HA cluster]

YARN high availability

ResourceManager: high availability is built on ZooKeeper, avoiding a single point of failure.
The ResourceManager uses the ZooKeeper-based ActiveStandbyElector to decide which RM should be Active; unlike HDFS, it does not need a separate ZKFC daemon.
With RM HA enabled, yarn.resourcemanager.ha.automatic-failover.enabled (default: true) takes effect and the RMs are set up for automatic failover.

Reference: https://blog.csdn.net/mxb1234567/article/details/121431338
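A quick way to see which RM is currently Active is yarn rmadmin. This is only a sketch: rm1 and rm2 are the ids configured under yarn.resourcemanager.ha.rm-ids, so substitute whatever ids your cluster uses.

# Ask each ResourceManager for its HA state (expect one "active" and one "standby")
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2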

HDFS high availability

[Figure: HDFS HA architecture]

Getting to know the JournalNode

In the figure above, the green area between the Active NameNode and the Standby NameNode is the JournalNode layer. In a typical three-node Hadoop cluster there are three JournalNodes.
Role: it behaves like a small distributed file system, synchronizing data between the active and standby NameNodes in real time so that metadata stays consistent after a failover.
 

HA internal operation 1: metadata synchronization between the Active and Standby NameNodes

QJM (Quorum Journal Manager): Hadoop 2 ships with QJM built in; it is installed on an odd number of nodes, and once started the corresponding process is the JournalNode.
The active NameNode writes its edit metadata to QJM; the standby watches the corresponding files on QJM and, as soon as it sees an update, pulls the new edits down to its own local storage, so that metadata stays consistent after a failover.
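To verify that this write path is healthy, a couple of checks are usually enough (a sketch; the exact shared-edits URI depends on your hdfs-site.xml):

# Is the JournalNode process running on each journal host?
jps | grep JournalNode
# Where do the NameNodes write shared edits? (should list all JournalNodes)
hdfs getconf -confKey dfs.namenode.shared.edits.dir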
 

HA internal operation 2: active/standby failover

Through ZKFC, the active NameNode writes its state into ZooKeeper; the standby is interested in that state and registers a watch on it. As soon as the active goes down, ZKFC immediately notifies the standby to switch its state to active.
 

How split-brain is avoided:

Split-brain is the situation where the cluster ends up with two active NameNodes.
Before taking over, the standby forcibly fences the old active NameNode, remotely running a shell command (kill -9) against it to make sure the old active is really dead.
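A quick health check of this failover path, as a sketch (nn1/nn2 are whatever NameNode ids are configured under dfs.ha.namenodes.<nameservice> in your cluster):

# Which NameNode is active right now?
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# ZKFC must be running on both NameNode hosts for automatic failover and fencing to work
jps | grep DFSZKFailoverController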

 
 

2. YARN logs

Lifecycle of a failed Flink job after launch

2022-07-03 15:58:11,389 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1656763382714_0869_000001 (auth:SIMPLE)
2022-07-03 15:58:11,424 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_e07_1656763382714_0869_01_000001 by user commonuser
2022-07-03 15:58:11,426 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1656763382714_0869
2022-07-03 15:58:11,426 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1656763382714_0869 transitioned from NEW to INITING
2022-07-03 15:58:11,427 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1656763382714_0869 transitioned from INITING to RUNNING
2022-07-03 15:58:11,437 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Adding container_e07_1656763382714_0869_01_000001 to application application_1656763382714_0869
2022-07-03 15:58:11,437 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=commonuser        IP=10.0.16.43        OPERATION=Start Container Request        TARGET=ContainerManageImpl        RESULT=SUCCESS        APPID=application_1656763382714_0869        CONTAINERID=container_e07_1656763382714_0869_01_000001
2022-07-03 15:58:11,443 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_e07_1656763382714_0869_01_000001 transitioned from NEW to LOCALIZING
2022-07-03 15:58:11,443 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1656763382714_0869
2022-07-03 15:58:11,446 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_e07_1656763382714_0869_01_000001
2022-07-03 15:58:11,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /data/appData/hadoop/tmp/nm-local-dir/nmPrivate/container_e07_1656763382714_0869_01_000001181c3112bf8.tokens
2022-07-03 15:58:11,459 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user commonuser
2022-07-03 15:58:11,465 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /data/appData/hadoop/tmp/nm-local-dir/nmPrivate/container_e07_1656763382714_0869_01_000001181c3112bf8.tokens to /data/appData/hadoop/tmp/nm-local-dir/usercache/commonuser/appcache/application_1656763382714_0869/container_e07_1656763382714_0869_01_000001.tokens
2022-07-03 15:58:11,465 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Localizer CWD set to /data/appData/hadoop/tmp/nm-local-dir/usercache/commonuser/appcache/application_1656763382714_0869 = file:/data/appData/hadoop/tmp/nm-local-dir/usercache/commonuser/appcache/application_1656763382714_0869
2022-07-03 15:58:11,477 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer: Disk Validator: yarn.nodemanager.disk-validator is loaded.
2022-07-03 15:58:12,260 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_e07_1656763382714_0869_01_000001 transitioned from LOCALIZING to SCHEDULED
2022-07-03 15:58:12,260 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler: Starting container [container_e07_1656763382714_0869_01_000001]
2022-07-03 15:58:12,282 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_e07_1656763382714_0869_01_000001 transitioned from SCHEDULED to RUNNING
...
// the NodeManager starts resource monitoring for container_e07_1656763382714_0869_01_000001
...
2022-07-03 15:58:12,282 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_e07_1656763382714_0869_01_000001
2022-07-03 15:58:12,287 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /data/appData/hadoop/tmp/nm-local-dir/usercache/commonuser/appcache/application_1656763382714_0869/container_e07_1656763382714_0869_01_000001/default_container_executor.sh]
2022-07-03 15:58:12,827 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: container_e07_1656763382714_0869_01_000001's ip = 10.0.16.43, and hostname = docp43
...
// key log line: monitoring of this container is skipped because CPU usage is not yet available
...
2022-07-03 15:58:12,845 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Skipping monitoring container container_e07_1656763382714_0869_01_000001 since CPU usage is not yet available.
2022-07-03 15:59:44,409 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_e07_1656763382714_0869_01_000001 is : 2
...
// exitCode=2
...

2022-07-03 15:59:44,409 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_e07_1656763382714_0869_01_000001 and exit code: 2
ExitCodeException exitCode=2: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:998)
        at org.apache.hadoop.util.Shell.run(Shell.java:884)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1216)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:294)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:447)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:298)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:99)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
...
// Exception from container-launch.
...
2022-07-03 15:59:44,411 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exception from container-launch.
2022-07-03 15:59:44,411 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Container id: container_e07_1656763382714_0869_01_000001
2022-07-03 15:59:44,411 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exit code: 2
2022-07-03 15:59:44,412 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container launch failed : Container exited with a non-zero exit code 2. 
...
// the state transitions to EXITED_WITH_FAILURE
...
2022-07-03 15:59:44,419 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_e07_1656763382714_0869_01_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
...
// container cleanup begins:
// DefaultContainerExecutor,
// EXITED_WITH_FAILURE to DONE,
// Stopping resource-monitoring,
// Stopping container,
// RUNNING to APPLICATION_RESOURCES_CLEANINGUP,
// Deleting absolute path,
// APPLICATION_RESOURCES_CLEANINGUP to FINISHED,
// Scheduling Log Deletion
...
2022-07-03 15:59:44,419 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_e07_1656763382714_0869_01_000001
2022-07-03 15:59:44,434 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /data/appData/hadoop/tmp/nm-local-dir/usercache/commonuser/appcache/application_1656763382714_0869/container_e07_1656763382714_0869_01_000001
2022-07-03 15:59:44,434 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=commonuser        OPERATION=Container Finished - Failed        TARGET=ContainerImpl        RESULT=FAILURE        DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE        APPID=application_1656763382714_0869        CONTAINERID=container_e07_1656763382714_0869_01_000001
2022-07-03 15:59:44,435 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_e07_1656763382714_0869_01_000001 transitioned from EXITED_WITH_FAILURE to DONE
2022-07-03 15:59:44,435 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Removing container_e07_1656763382714_0869_01_000001 from application application_1656763382714_0869
2022-07-03 15:59:44,435 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_e07_1656763382714_0869_01_000001
2022-07-03 15:59:44,435 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1656763382714_0869
2022-07-03 15:59:44,451 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1656763382714_0869_000001 (auth:SIMPLE)
2022-07-03 15:59:44,456 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_e07_1656763382714_0869_01_000001
2022-07-03 15:59:44,456 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=commonuser        IP=10.0.16.43        OPERATION=Stop Container Request        TARGET=ContainerManageImpl        RESULT=SUCCESS        APPID=application_1656763382714_0869        CONTAINERID=container_e07_1656763382714_0869_01_000001
2022-07-03 15:59:45,439 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_e07_1656763382714_0869_01_000001]
2022-07-03 15:59:45,439 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1656763382714_0869 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2022-07-03 15:59:45,440 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /data/appData/hadoop/tmp/nm-local-dir/usercache/commonuser/appcache/application_1656763382714_0869
2022-07-03 15:59:45,440 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1656763382714_0869
2022-07-03 15:59:45,441 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1656763382714_0869 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2022-07-03 15:59:45,441 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1656763382714_0869, with delay of 10800 seconds

Walking through a job's logs like this makes the complete submission path of a Flink job onto YARN (and the code that gets called along the way) much clearer, which helps when reading the source code later. It is quite interesting!

The essential message in the logs above: the submitted job could not get any usable resources, and was then cleaned up...

 

Observing the ResourceManager

There are large numbers of connection-refused errors and failover messages (the NodeManager repeatedly trying to register with rm1 and rm2).

2022-07-03 15:54:52,159 INFO org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
2022-07-03 15:54:52,160 INFO org.apache.hadoop.io.retry.RetryInvocationHandler: java.net.ConnectException: Call From docp43/10.0.16.43 to docp43:18134 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused, while invoking ResourceTrackerPBClientImpl.registerNodeManager over rm2 after 11 failover attempts. Trying to failover after sleeping for 26911ms.
2022-07-03 15:55:19,071 INFO org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider: Failing over to rm1
2022-07-03 15:55:19,073 INFO org.apache.hadoop.io.retry.RetryInvocationHandler: java.net.ConnectException: Call From docp43/10.0.16.43 to docp45:18134 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused, while invoking ResourceTrackerPBClientImpl.registerNodeManager over rm1 after 12 failover attempts. Trying to failover after sleeping for 23598ms.
2022-07-03 15:55:42,672 INFO org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
...
// rm1 and rm2 failing over back and forth
...
2022-07-03 15:58:03,885 INFO org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
2022-07-03 15:58:04,044 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager: Rolling master-key for container-tokens, got key with id -2128690809
2022-07-03 15:58:04,045 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerInNM: Rolling master-key for container-tokens, got key with id -164338421
2022-07-03 15:58:04,045 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registered with ResourceManager as docp43:18131 with total resource of <memory:8192, vCores:8>
2022-07-03 15:58:04,160 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1656763382714_0766 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2022-07-03 15:58:04,160 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1656763382714_0760 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2022-07-03 15:58:04,160 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1656763382714_0774 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2022-07-03 15:58:04,160 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1656763382714_0775 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP

...
// log deletion starts being scheduled
...
2022-07-03 15:58:04,170 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1656763382714_0766, with delay of 10800 seconds
...
2022-07-03 15:58:04,172 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1656763382714_0770, with delay of 10800 seconds
...
2022-07-03 15:58:05,165 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /data/appData/hadoop/tmp/nm-local-dir/usercache/commonuser/appcache/application_1656763382714_0066
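At this point it is worth checking, on the RM hosts (docp43/docp45 in these logs), whether the ResourceManager process is actually up and whether the resource-tracker port from the log lines above is listening. A sketch (port 18134 comes from this cluster's yarn-site.xml; substitute your own):

# Is the ResourceManager process running on this host?
jps | grep ResourceManager
# Is anything listening on the resource-tracker port the NodeManager is trying to reach?
ss -lntp | grep 18134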

Backend logs

When the job is submitted, it reports insufficient resources. The guess: a ZooKeeper problem left the RM unhealthy, which in turn broke communication with the NodeManagers, or the NodeManagers were never started.

The number of requested virtual cores for application master 1 exceeds 
the maximum number of virtual cores 0 available in the Yarn Cluster.
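"the maximum number of virtual cores 0" means the RM currently sees no usable NodeManager capacity at all. A quick way to confirm how many NodeManagers have registered and what resources they report (a sketch):

# List every NodeManager the RM knows about, including unhealthy/lost ones
yarn node -list -all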

Checking the nodes page shows that one node is down, so restart Hadoop on it first.
[Figure: YARN nodes page showing one NodeManager down]
 
From the logs above we can basically conclude: ZKFC's automatic failover is not working, and some daemons on various nodes are not running. Restart ZooKeeper, then restart the missing daemons on each node.
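As a sketch of the restart sequence on a Hadoop 2.x cluster like this one (run each command on the host that should own that daemon, and only for the daemons that jps shows as missing; script locations depend on your installation):

# ZooKeeper, on each ZK node
zkServer.sh restart

# HDFS daemons
hadoop-daemon.sh start journalnode
hadoop-daemon.sh start namenode
hadoop-daemon.sh start zkfc
hadoop-daemon.sh start datanode

# YARN daemons
yarn-daemon.sh start resourcemanager
yarn-daemon.sh start nodemanager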
 
After resubmitting the job, the backend reports:

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RetriableException): org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/commonuser/.flink/application_1656903632221_0031. Name node is in safe mode.
The reported blocks 256 needs additional 3 blocks to reach the threshold 0.9990 of total blocks 260.
The minimum number of live datanodes is not required. Safe mode will be turned off automatically once the thresholds have been reached. NamenodeHostName:docp43
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1427)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3006)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1132)

Check the DataNode status on each node (restart any that have not been restarted), and turn off the NameNode's safe mode:

bin/hadoop dfsadmin -safemode leave 
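Before forcing safe mode off, it is worth confirming whether HDFS is simply still catching up on block reports; once enough DataNodes have reported, the NameNode leaves safe mode on its own. A sketch (hdfs dfsadmin is the non-deprecated form of the same command):

# Current safe mode state, plus block and DataNode report
hdfs dfsadmin -safemode get
hdfs dfsadmin -report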

Then resubmit the job, and it finally submits successfully!
[Figure: the Flink job submitted to YARN successfully]
