hadoop "File /user/<user>/input/conf/slaves could only be replicated to 0 nodes, instead of 1"问题及解决办


Original post: http://blog.csdn.net/kongxx/article/details/6892675

After installing Hadoop according to the official documentation, running the following in pseudo-distributed mode

bin/hadoop fs -put conf input

throws the following exception:

11/10/20 08:18:22 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/fkong/input/conf/slaves could only be replicated to 0 nodes, instead of 1
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
	at org.apache.hadoop.ipc.Client.call(Client.java:1030)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224)
	at $Proxy1.addBlock(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
	at $Proxy1.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3104)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2975)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2255)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2446)
11/10/20 08:18:22 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
11/10/20 08:18:22 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/fkong/input/conf/slaves" - Aborting...
put: java.io.IOException: File /user/fkong/input/conf/slaves could only be replicated to 0 nodes, instead of 1
11/10/20 08:18:22 ERROR hdfs.DFSClient: Exception closing file /user/fkong/input/conf/slaves : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/fkong/input/conf/slaves could only be replicated to 0 nodes, instead of 1
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/fkong/input/conf/slaves could only be replicated to 0 nodes, instead of 1
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
	at org.apache.hadoop.ipc.Client.call(Client.java:1030)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224)
	at $Proxy1.addBlock(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
	at $Proxy1.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3104)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2975)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2255)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2446)
Exit 255
A little investigation showed the cause: hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}, and on my Linux system the filesystem type backing /tmp is one that Hadoop does not support. So hadoop.tmp.dir needs to be pointed somewhere else; here I use a subdirectory under the Hadoop installation directory.
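A quick way to check which filesystem type backs /tmp on your own machine is df with the -T option (GNU coreutils):

$ df -T /tmp

If the reported type is something unusual (tmpfs, for example), that is consistent with the cause described above.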

The steps are as follows:

1. Edit ${HADOOP_HOME}/conf/core-site.xml; after the change the file contents are:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/fkong/hadoop-0.20.203.0/hadoop-${user.name}</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

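Note that Hadoop expands ${user.name} in configuration values from the Java system property of the same name, so for user fkong the value above resolves to /data/fkong/hadoop-0.20.203.0/hadoop-fkong. The user running the daemons needs permission to create and write that directory.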
2. Restart the Hadoop daemons:

$ bin/start-all.sh
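One caveat worth noting: after hadoop.tmp.dir moves, the new directory starts out empty, so the NameNode has no formatted filesystem image there yet. If the daemons refuse to start after the change, a stop, format, start cycle should initialize the new location (formatting reinitializes HDFS metadata, so nothing from the old /tmp location is carried over):

$ bin/stop-all.sh
$ bin/hadoop namenode -format
$ bin/start-all.sh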

3. Run the original command again; the exception no longer appears:

$ bin/hadoop fs -put conf input
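To double-check that the fix took effect, list the uploaded files and ask the NameNode for a cluster report; dfsadmin -report should now show one live datanode:

$ bin/hadoop fs -ls input
$ bin/hadoop dfsadmin -report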

Other situations can also lead to this problem; for details, see "Hadoop: could only be replicated to 0 nodes, instead".




