Uploading files to HDFS from a web client fails with java.io.IOException: File xx could only be written to 0 of the 1 minReplication nodes

Problem Description

HDFS is deployed on an Alibaba Cloud server. Uploading a file from a web client to HDFS on that server fails with: java.io.IOException: File /17035041/服务器测试/test4.txt could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
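For context, a minimal sketch of the kind of client-side upload code that hits this error is shown below. The NameNode URI (hdfs://chxy:9000, taking the hostname from the configuration later in this post and port 9000 from the log), the user name "hadoop", and the file names are assumptions for illustration; the actual web client may differ. The exception is raised while the write is in progress, when the client calls ClientProtocol.addBlock on the NameNode, as visible in the stack trace below.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        import java.net.URI;

        public class HdfsUploadDemo {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // hdfs://chxy:9000 assumes the NameNode RPC address (port 9000 as in
                // the log below); "hadoop" is an assumed HDFS user name
                FileSystem fs = FileSystem.get(URI.create("hdfs://chxy:9000"), conf, "hadoop");
                // The IOException is thrown during this write, when the client asks the
                // NameNode to allocate a block (ClientProtocol.addBlock in the stack trace)
                fs.copyFromLocalFile(new Path("test4.txt"), new Path("/17035041/test4.txt"));
                fs.close();
            }
        }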
The error log is as follows:

2020-06-30 18:22:44,035 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: DFSClient_NONMAPREDUCE_437409007_30, pending creates: 1] has expired hard limit
2020-06-30 18:22:44,036 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  Holder: DFSClient_NONMAPREDUCE_437409007_30, pending creates: 1], src=/17035041/服务器测试/test1.txt
2020-06-30 18:22:44,036 WARN org.apache.hadoop.hdfs.StateChange: BLOCK* internalReleaseLease: All existing blocks are COMPLETE, lease removed, file /17035041/服务器测试/test1.txt closed.
2020-06-30 18:22:44,036 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 10
2020-06-30 18:33:49,414 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 3 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 12
2020-06-30 18:33:49,418 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741861_1048, replicas=xx.xx.xx.xx:9866 for /17035041/服务器测试/test4.txt
2020-06-30 18:33:49,426 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2020-06-30 18:33:49,426 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2020-06-30 18:33:49,426 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2020-06-30 18:33:49,426 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on default port 9000, call Call#44 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 39.98.61.186:33354
java.io.IOException: File /17035041/服务器测试/test4.txt could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2219)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2789)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:574)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:999)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2915)

Solution:

1. Turn off the server's firewall.
2. Add Hadoop's common default ports to the security group, e.g. 9000, 50070, 9866, etc.
3. (Important: this is what solved it for me.)
The log shows that Hadoop assigns random ports for DataNode data transfer. If the client addresses the DataNode by IP and port, that port may not be in the security group, and the connection is blocked by the network security rules. The client therefore has to connect by hostname instead; add the last property in the block below to hdfs-site.xml:

        <!-- Number of HDFS replicas -->
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
        <!-- NameNode metadata directory -->
        <property>
                <name>dfs.name.dir</name>
                <value>/usr/local/hadoop/data/name</value>
        </property>
        <!-- DataNode block storage directory -->
        <property>
                <name>dfs.data.dir</name>
                <value>/usr/local/hadoop/data/data</value>
        </property>
        <!-- Bind the DataNode HTTP server to the hostname -->
        <property>
                <name>dfs.datanode.http.address</name>
                <value>chxy:9864</value>
        </property>
        <!-- Make clients connect to DataNodes by hostname rather than by IP -->
        <property>
                <name>dfs.client.use.datanode.hostname</name>
                <value>true</value>
        </property>

And on the client (web) machine, add an IP-to-hostname mapping in the hosts file (using the hostname chxy from the configuration above):

<server-public-ip>  chxy
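If the web client builds its HDFS Configuration in code rather than loading hdfs-site.xml from the classpath, the same switch also has to be set on the client side. A minimal sketch, with the same assumed NameNode URI and user name as above:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;

        import java.net.URI;

        public class HdfsClientConfig {
            public static FileSystem connect() throws Exception {
                Configuration conf = new Configuration();
                // Tell the HDFS client to dial DataNodes by hostname (resolved through
                // the local hosts file) instead of the internal IP the NameNode returns
                conf.set("dfs.client.use.datanode.hostname", "true");
                return FileSystem.get(URI.create("hdfs://chxy:9000"), conf, "hadoop");
            }
        }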