Why a DataX script scheduled by DolphinScheduler failed (my take)

The scheduled DataX job aborted with the following error log:
2022-10-13 00:05:06.188 [job-0] ERROR HdfsWriter$Job - 判断文件路径[message:filePath =/junzun_recruit_origin/recruit_web/db_spider/position_info/2022-10-12]是否存在时发生网络IO异常,请检查您的网络是否正常!
	2022-10-13 00:05:06.191 [job-0] ERROR JobContainer - Exception when job run
	com.alibaba.datax.common.exception.DataXException: Code:[HdfsWriter-06], Description:[与HDFS建立连接时出现IO异常.]. - org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
		at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
		at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:2017)
		at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1441)
		at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3125)
		at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1173)
		at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:973)
		at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
		at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
		at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
		at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
		at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
		at java.security.AccessController.doPrivileged(Native Method)
		at javax.security.auth.Subject.doAs(Subject.java:422)
		at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
		at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)
	
		at org.apache.hadoop.ipc.Client.call(Client.java:1476)
		at org.apache.hadoop.ipc.Client.call(Client.java:1407)
		at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
		at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
		at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
		at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
		at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
		at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
		at java.lang.reflect.Method.invoke(Method.java:498)
		at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
		at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
		at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
		at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
		at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
		at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
		at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
		at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
		at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
		at com.alibaba.datax.plugin.writer.hdfswriter.HdfsHelper.isPathexists(HdfsHelper.java:156)
		at com.alibaba.datax.plugin.writer.hdfswriter.HdfsWriter$Job.prepare(HdfsWriter.java:151)
		at com.alibaba.datax.core.job.JobContainer.prepareJobWriter(JobContainer.java:724)
		at com.alibaba.datax.core.job.JobContainer.prepare(JobContainer.java:309)
		at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:115)
		at com.alibaba.datax.core.Engine.start(Engine.java:92)
		at com.alibaba.datax.core.Engine.entry(Engine.java:171)
		at com.alibaba.datax.core.Engine.main(Engine.java:204)
	 - org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
		... (identical StandbyException stack trace repeated; omitted)

		at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:40) ~[datax-common-0.0.1-SNAPSHOT.jar:na]
		at com.alibaba.datax.plugin.writer.hdfswriter.HdfsHelper.isPathexists(HdfsHelper.java:161) ~[hdfswriter-0.0.1-SNAPSHOT.jar:na]
		at com.alibaba.datax.plugin.writer.hdfswriter.HdfsWriter$Job.prepare(HdfsWriter.java:151) ~[hdfswriter-0.0.1-SNAPSHOT.jar:na]
		at com.alibaba.datax.core.job.JobContainer.prepareJobWriter(JobContainer.java:724) ~[datax-core-0.0.1-SNAPSHOT.jar:na]
		at com.alibaba.datax.core.job.JobContainer.prepare(JobContainer.java:309) ~[datax-core-0.0.1-SNAPSHOT.jar:na]
		at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:115) ~[datax-core-0.0.1-SNAPSHOT.jar:na]
		at com.alibaba.datax.core.Engine.start(Engine.java:92) [datax-core-0.0.1-SNAPSHOT.jar:na]
		at com.alibaba.datax.core.Engine.entry(Engine.java:171) [datax-core-0.0.1-SNAPSHOT.jar:na]
		at com.alibaba.datax.core.Engine.main(Engine.java:204) [datax-core-0.0.1-SNAPSHOT.jar:na]
	Caused by: org.apache.hadoop.ipc.RemoteException: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
		at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
		at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:2017)
		at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1441)
		at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3125)
		at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1173)
		at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:973)
		at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
		at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
		at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
		at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
		at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
		at java.security.AccessController.doPrivileged(Native Method)
		at javax.security.auth.Subject.doAs(Subject.java:422)
		at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
		at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)
	
		at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:na]
		at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:na]
		at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:na]
		at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source) ~[na:na]
		at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771) ~[hadoop-hdfs-2.7.1.jar:na]
		at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_212]
		at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_212]
		at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_212]
		at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_212]
		at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:na]
		at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.1.jar:na]
		at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source) ~[na:na]
		at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116) ~[hadoop-hdfs-2.7.1.jar:na]
		at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305) ~[hadoop-hdfs-2.7.1.jar:na]
		at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301) ~[hadoop-hdfs-2.7.1.jar:na]
		at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.7.1.jar:na]
		at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317) ~[hadoop-hdfs-2.7.1.jar:na]
		at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424) ~[hadoop-common-2.7.1.jar:na]
		at com.alibaba.datax.plugin.writer.hdfswriter.HdfsHelper.isPathexists(HdfsHelper.java:156) ~[hdfswriter-0.0.1-SNAPSHOT.jar:na]
		... 7 common frames omitted
	2022-10-13 00:05:06.194 [job-0] INFO  StandAloneJobContainerCommunicator - Total 0 records, 0 bytes | Speed 0B/s, 0 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.000s |  All Task WaitReaderTime 0.000s | Percentage 0.00%
	2022-10-13 00:05:06.297 [job-0] ERROR Engine - 
	
	经DataX智能分析,该任务最可能的错误原因是:
	com.alibaba.datax.common.exception.DataXException: Code:[HdfsWriter-06], Description:[与HDFS建立连接时出现IO异常.]. - org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
		... (the same StandbyException and stack traces as above, repeated verbatim; omitted)

The telltale line is the StandbyException: the HdfsWriter's path-existence check (FileSystem.exists → getFileInfo) was answered by an HDFS NameNode that was in standby state, and a standby NameNode cannot serve read operations. As for why it bit this job: DolphinScheduler dispatches a task to whichever worker node it picks, while my DataX job JSON files lived only on the Hadoop master node. The job configuration therefore has to be synced to every worker node, so that whichever node receives the task can read the same configuration file.
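A minimal sketch of that sync step, assuming the job JSONs live under /opt/datax/job and the worker hostnames are worker1..worker3 (both the path and the host list are my placeholders, not from the original post):

	# Hypothetical layout: job JSONs under /opt/datax/job on the master,
	# workers reachable over SSH. Adjust paths and hosts to your cluster.
	for host in worker1 worker2 worker3; do
	    # Mirror the job directory so every DolphinScheduler worker
	    # sees the same DataX configuration the master has.
	    rsync -av --delete /opt/datax/job/ "${host}:/opt/datax/job/"
	done

Running this (or an equivalent scp/Ansible step) after every change to a job JSON keeps all workers consistent, so it no longer matters which node DolphinScheduler assigns the task to.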
