Pitfall — Flink error: Cannot currently handle nodes with more than 64 outputs.

@羲凡 — only to live a better life


1. Background

The requirement was to write each store's data into its own HDFS file (in plain terms, one file per store). There are a lot of stores, which led to the error below.

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
        at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
        at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
        at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
        at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
        at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
Caused by: org.apache.flink.optimizer.CompilerException: Cannot currently handle nodes with more than 64 outputs.
        at org.apache.flink.optimizer.dag.OptimizerNode.addOutgoingConnection(OptimizerNode.java:348)
        at org.apache.flink.optimizer.dag.SingleInputNode.setInput(SingleInputNode.java:202)
        at org.apache.flink.optimizer.traversals.GraphCreatingVisitor.postVisit(GraphCreatingVisitor.java:272)
        at org.apache.flink.optimizer.traversals.GraphCreatingVisitor.postVisit(GraphCreatingVisitor.java:82)
        at org.apache.flink.api.common.operators.SingleInputOperator.accept(SingleInputOperator.java:203)
        at org.apache.flink.api.common.operators.GenericDataSinkBase.accept(GenericDataSinkBase.java:220)
        at org.apache.flink.api.common.Plan.accept(Plan.java:329)
        at org.apache.flink.optimizer.Optimizer.compile(Optimizer.java:455)
        at org.apache.flink.optimizer.Optimizer.compile(Optimizer.java:399)
        at org.apache.flink.client.program.ClusterClient.getOptimizedPlan(ClusterClient.java:380)
        at org.apache.flink.client.program.ClusterClient.getOptimizedPlan(ClusterClient.java:907)
        at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:474)
        at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
        at com.yum.cpos.flink.Executor.main(Executor.java:97)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
	... 17 more
2. Analysis

Stepping into the source shows that the addOutgoingConnection method of the abstract class OptimizerNode keeps appending to outgoingConnections (a List) and refuses to handle more than 64 entries. Clearly my job was registering far too many sinks, which ran into this limit in Flink's plan optimizer. Removing the loop and writing everything to a single file worked fine, so the fix is simply to submit the job after every store, or after batches of fewer than 64 stores, instead of piling all the sinks into one plan.
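For context, here is a minimal sketch of the pattern that hits the limit, assuming the DataSet API shown in the stack trace; env, storeCodeSet, and the paths/filter are hypothetical stand-ins for the real per-store logic, which is not shown here. Because every sink is registered against the same DataSet and compiled into one plan by a single env.execute(), the shared optimizer node gains one outgoing connection per store and blows past 64.

// Hypothetical sketch of the failing pattern: every per-store sink ends up in one plan.
DataSet<String> source = env.readTextFile("hdfs:///tmp/input/orders");
for (String storeCode : storeCodeSet) {
	// each filter + sink pair adds one more outgoing connection to the shared source node
	source.filter(line -> line.startsWith(storeCode))
	      .writeAsText("hdfs:///tmp/output/" + storeCode, FileSystem.WriteMode.OVERWRITE);
}
// a single execute() compiles one plan containing all the sinks above;
// with more than 64 stores the optimizer throws the CompilerException
env.execute();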

3. Solutions

Solution 1: call env.execute() inside the loop

for (String storeCode : storeCodeSet) {
	// res and hdfsOutputpath are built for the current store in code omitted here
	res.writeAsText(hdfsOutputpath, FileSystem.WriteMode.OVERWRITE);
	// submit right away, so each plan contains only this one sink
	env.execute();
}
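Note that with this approach every env.execute() submits and waits for a separate Flink batch job, one per store, so with a large number of stores the per-job submission overhead adds up. Solution 2 below keeps that overhead down by batching up to 60 sinks into each submitted plan.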

Solution 2: build on Solution 1, but add counters so env.execute() is only called once per batch of stores (recommended)

int num = 0;    // sinks registered since the last execute()
int count = 0;  // stores processed so far
for (String storeCode : storeCodeSet) {
	// tmpHdfsDataSet and hdfsOutputpath are built for the current store in code omitted here
	tmpHdfsDataSet.writeAsText(hdfsOutputpath, FileSystem.WriteMode.OVERWRITE);
	num++;
	count++;
	// submit every 60 sinks (safely under the 64-output limit),
	// and once more after the last store to flush the final partial batch
	if (num == 60 || count == storeCodeSet.size()) {
		env.execute();
		num = 0;
	}
}
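To make the recommended approach concrete, below is a minimal self-contained sketch of Solution 2. It assumes the DataSet API (as in the stack trace) and uses hypothetical names: the input path, the hard-coded storeCodeSet, and the prefix-based filter are placeholders for the real per-store extraction logic, which is not shown in this post.

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.core.fs.FileSystem;

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class StoreSplitJob {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical store list; in the real job this holds hundreds of store codes.
        Set<String> storeCodeSet = new HashSet<>(Arrays.asList("S001", "S002", "S003"));

        // Hypothetical shared input; every store's sink hangs off this one DataSet.
        DataSet<String> source = env.readTextFile("hdfs:///tmp/input/orders");

        int num = 0;    // sinks registered since the last execute()
        int count = 0;  // stores processed so far

        for (String storeCode : storeCodeSet) {
            count++;

            // Hypothetical per-store extraction: keep only this store's lines.
            DataSet<String> storeData = source.filter(line -> line.startsWith(storeCode));

            // Parallelism 1 so each store ends up in a single output file.
            storeData
                    .writeAsText("hdfs:///tmp/output/" + storeCode, FileSystem.WriteMode.OVERWRITE)
                    .setParallelism(1);
            num++;

            // Submit every 60 sinks (safely under the 64-output limit) and
            // once more after the last store to flush the final partial batch.
            if (num == 60 || count == storeCodeSet.size()) {
                env.execute("write stores " + count);
                num = 0;
            }
        }
    }
}

Setting the sink parallelism to 1 keeps each store's output as a single file, matching the one-file-per-store requirement; drop it if a directory of part files is acceptable.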

====================================================================

@羲凡 — only to live a better life

If you have any questions about this post, feel free to leave a comment.
