Problem Background
I recently finished a Flink project, packaged it, and submitted it to the cluster. With my head full of weekend plans, I only noticed after submitting that I had forgotten the -d flag, so I had to kill the job by hand and resubmit it with -d. That's when the trouble started: as soon as the Flink job was submitted to YARN, it failed with the following error:
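For context, the two submissions looked roughly like this (a sketch only; the entry class, jar name, and application id are placeholders, and the exact flags depend on your Flink version):

# first submission: the -d (detached) flag was forgotten
flink run -m yarn-cluster -c com.example.MyJob my-flink-job.jar

# kill the attached job, then resubmit in detached mode
yarn application -kill application_1234567890_0001
flink run -m yarn-cluster -d -c com.example.MyJob my-flink-job.jar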
Tracking Down the Problem
With no better option, I followed the error log. Step one: look at the headline error:
java.lang.NoSuchMethodError:
org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest.newInstance
Why would this method be missing? Step two: locate the method in the project:
@Public
@Stable
public abstract class AllocateRequest {

    @Public
    @Stable
    public static AllocateRequest newInstance(int responseID, float appProgress,
            List<ResourceRequest> resourceAsk,
            List<ContainerId> containersToBeReleased,
            ResourceBlacklistRequest resourceBlacklistRequest) {
        return newInstance(responseID, appProgress, resourceAsk,
            containersToBeReleased, resourceBlacklistRequest, null);
    }

    @Public
    @Stable
    public static AllocateRequest newInstance(int responseID, float appProgress,
            List<ResourceRequest> resourceAsk,
            List<ContainerId> containersToBeReleased,
            ResourceBlacklistRequest resourceBlacklistRequest,
            List<ContainerResourceIncreaseRequest> increaseRequests) {
        AllocateRequest allocateRequest = Records.newRecord(AllocateRequest.class);
        allocateRequest.setResponseId(responseID);
        allocateRequest.setProgress(appProgress);
        allocateRequest.setAskList(resourceAsk);
        allocateRequest.setReleaseList(containersToBeReleased);
        allocateRequest.setResourceBlacklistRequest(resourceBlacklistRequest);
        allocateRequest.setIncreaseRequests(increaseRequests);
        return allocateRequest;
    }

    // ... remaining abstract getters/setters omitted
}
The method clearly exists in the project, so the call should resolve. Why the error, then? The next step was to consider that the method isn't actually missing; instead, the class is in conflict.
With that in mind, I went straight to flink-shaded-hadoop-2-uber-2.8.5-7.0 under flink/lib and found that it also contains this class:
org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest
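To confirm which jars bundle the class, you can simply list their contents (the second jar name below is a placeholder for your actual application jar):

# the shaded Hadoop uber jar shipped in flink/lib
jar tf flink-shaded-hadoop-2-uber-2.8.5-7.0.jar | grep AllocateRequest

# the fat jar built from the project
jar tf my-flink-job.jar | grep AllocateRequest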
I then audited the project's own dependencies and found that it pulls in:
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>1.4.9</version>
</dependency>
and that this dependency transitively brings in hadoop-yarn-api at version 2.7.4.
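To see exactly where a transitive hadoop-yarn-api comes from, Maven's dependency tree can be filtered down to that one artifact (run from the project root):

mvn dependency:tree -Dincludes=org.apache.hadoop:hadoop-yarn-api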
Solving the Problem
So the cause is now clear: the org.apache.hbase dependency brings along its own hadoop-yarn-api, whose version conflicts with the hadoop-yarn-api bundled in the Flink shaded Hadoop jar on the cluster, and the job submission fails as a result.
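If you want to double-check which copy of a conflicting class the JVM actually loads, a small diagnostic like the following can help (a sketch; WhichJar is a hypothetical class you could drop into the project):

import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;

public class WhichJar {
    public static void main(String[] args) {
        // Prints the location (jar) the class was loaded from;
        // note getCodeSource() may be null for bootstrap-loaded classes.
        System.out.println(AllocateRequest.class
                .getProtectionDomain()
                .getCodeSource()
                .getLocation());
    }
}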
So we can either swap out the org.apache.hbase dependency or set its scope to provided, and the problem goes away.
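In pom.xml terms, the provided-scope fix looks like this; a standard Maven alternative (not mentioned above) is to keep the dependency and exclude only the conflicting artifact:

<!-- Option 1: mark the dependency as provided so the cluster's jars win at runtime -->
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>1.4.9</version>
    <scope>provided</scope>
</dependency>

<!-- Option 2: keep hbase-server on the classpath but drop its hadoop-yarn-api -->
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>1.4.9</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-yarn-api</artifactId>
        </exclusion>
    </exclusions>
</dependency>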
A Follow-up Question
One small puzzle remained at the time: if the jars conflict, why does the job run fine when submitted without detached mode?
We can answer that by stepping into the source (I won't walk through everything here, just jump to the key spot). In org.apache.flink.client.cli.CliFrontend, find the runProgram method:
private <T> void runProgram(
        CustomCommandLine<T> customCommandLine,
        CommandLine commandLine,
        RunOptions runOptions,
        PackagedProgram program) throws ProgramInvocationException, FlinkException {
    final ClusterDescriptor<T> clusterDescriptor = customCommandLine.createClusterDescriptor(commandLine);

    try {
        final T clusterId = customCommandLine.getClusterId(commandLine);

        final ClusterClient<T> client;

        // directly deploy the job if the cluster is started in job mode and detached
        // if detached mode was chosen, we take the deployJobCluster path
        if (clusterId == null && runOptions.getDetachedMode()) {
            int parallelism = runOptions.getParallelism() == -1 ? defaultParallelism : runOptions.getParallelism();

            final JobGraph jobGraph = PackagedProgramUtils.createJobGraph(program, configuration, parallelism);

            final ClusterSpecification clusterSpecification = customCommandLine.getClusterSpecification(commandLine);
            client = clusterDescriptor.deployJobCluster(
                clusterSpecification,
                jobGraph,
                runOptions.getDetachedMode());

            logAndSysout("Job has been submitted with JobID " + jobGraph.getJobID());

            try {
                client.shutdown();
            } catch (Exception e) {
                LOG.info("Could not properly shut down the client.", e);
            }
        // if detached mode was not chosen, we take the executeProgram path
        } else {
            final Thread shutdownHook;
            if (clusterId != null) {
                client = clusterDescriptor.retrieve(clusterId);
                shutdownHook = null;
            } else {
                // also in job mode we have to deploy a session cluster because the job
                // might consist of multiple parts (e.g. when using collect)
                final ClusterSpecification clusterSpecification = customCommandLine.getClusterSpecification(commandLine);
                client = clusterDescriptor.deploySessionCluster(clusterSpecification);
                // if not running in detached mode, add a shutdown hook to shut down cluster if client exits
                // there's a race-condition here if cli is killed before shutdown hook is installed
                if (!runOptions.getDetachedMode() && runOptions.isShutdownOnAttachedExit()) {
                    shutdownHook = ShutdownHookUtil.addShutdownHook(client::shutDownCluster, client.getClass().getSimpleName(), LOG);
                } else {
                    shutdownHook = null;
                }
            }

            try {
                client.setPrintStatusDuringExecution(runOptions.getStdoutLogging());
                client.setDetached(runOptions.getDetachedMode());
                LOG.debug("{}", runOptions.getSavepointRestoreSettings());

                int userParallelism = runOptions.getParallelism();
                LOG.debug("User parallelism is set to {}", userParallelism);
                if (ExecutionConfig.PARALLELISM_DEFAULT == userParallelism) {
                    userParallelism = defaultParallelism;
                }

                executeProgram(program, client, userParallelism);
            } finally {
                if (clusterId == null && !client.isDetached()) {
                    // terminate the cluster only if we have started it before and if it's not detached
                    try {
                        client.shutDownCluster();
                    } catch (final Exception e) {
                        LOG.info("Could not properly terminate the Flink cluster.", e);
                    }

                    if (shutdownHook != null) {
                        // we do not need the hook anymore as we have just tried to shutdown the cluster.
                        ShutdownHookUtil.removeShutdownHook(shutdownHook, client.getClass().getSimpleName(), LOG);
                    }
                }

                try {
                    client.shutdown();
                } catch (Exception e) {
                    LOG.info("Could not properly shut down the client.", e);
                }
            }
        }
    } finally {
        try {
            clusterDescriptor.close();
        } catch (Exception e) {
            LOG.info("Could not properly close the cluster descriptor.", e);
        }
    }
}
As you can see, a detached submission goes through deployJobCluster, while a non-detached one goes through executeProgram. These are genuinely different submission paths (interested readers can step through each one in detail), which explains why the same jar conflict makes one submission mode fail while the other runs fine.