The maximum path component name limit is exceeded (HDFS PathComponentTooLongException)

Today a colleague's test job kept exiting abnormally.

Checking the related job logs:

org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$PathComponentTooLongException): The maximum path component name limit of job_1542872443206_7299723-1551753291148-lf_cp_serv-%2D%2D%E5%93%81%E7%89%8C%E6%96%B0%E5%A2%9E%2D%E5%93%81%E7%89%8C%E8%BF%91%E5%85%AD%E6%9C%88%E7%B4%AF%E8%AE%A1%E5%AD%90%E6%96%B0%E5%A2%9E%E7%94%A8%E6%88%B7%E5%88%86%E5%B8%83%2D%E5%9C%B0%E5%B8%82%0Aselect+city_no%2C...t%28Stage-1551753352217-95-0-SUCCEEDED-root.ia_serv-1551753296447.jhist_tmp in directory /user/history/done_intermediate/lf_cp_serv is exceeded: limit=255 length=341
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyMaxComponentLength(FSDirectory.java:2224)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:2335)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addLastINode(FSDirectory.java:2304)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addINode(FSDirectory.java:2087)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addFile(FSDirectory.java:390)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2949)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2826)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2711)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:602)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2222)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2220)

The root cause: the MapReduce JobHistory service writes an intermediate .jhist file whose name embeds the job name, and Hive by default derives the job name from the query text. Here the URL-encoded Chinese query (the select+city_no... fragment visible in the file name above) pushes a single path component to 341 bytes, past HDFS's default 255-byte limit.

Solutions:

1: Set the parameter dfs.namenode.fs-limits.max-component-length to 0 to disable the limit. Very long file names are not recommended in Hadoop, though, since they affect performance; see the hdfs-site.xml sketch after this list.

2: Set a short job name explicitly: set mapreduce.job.name=XXX (see the Hive sketch below).
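
For option 1, a minimal hdfs-site.xml sketch. The property name and the "0 disables the check" semantics come from hdfs-default.xml; that a NameNode restart is required is my assumption, so verify it for your distribution:

```xml
<!-- hdfs-site.xml on the NameNode. The default is 255 bytes of UTF-8 per
     path component; a value of 0 disables the length check entirely.
     Assumed to need a NameNode restart to take effect. -->
<property>
  <name>dfs.namenode.fs-limits.max-component-length</name>
  <value>0</value>
</property>
```

You can confirm the value the client side sees with hdfs getconf -confKey dfs.namenode.fs-limits.max-component-length.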
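
For option 2, a sketch of a Hive session; mapreduce.job.name is the real property, while the job name and the query below are hypothetical stand-ins for the long Chinese-titled query in the log:

```sql
-- Give the job a short, fixed name so the JobHistory .jhist file name
-- (which embeds the job name) stays well under the 255-byte limit.
-- Without this, Hive derives the job name from the query text itself.
set mapreduce.job.name=brand_new_users_by_city;  -- hypothetical name

-- hypothetical query standing in for the original one
select city_no, count(1) as new_users
from dw.brand_new_users
group by city_no;
```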
