I have downloaded and started up Cloudera's Hadoop Demo VM for CDH4 (running Hadoop 2.0.0). I am trying to write a Java program that runs from my Windows 7 machine (the same machine/OS the VM is running on). I have a sample program like:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteSample {
    public static void main(String[] args) {
        try {
            // Pick up fs.default.name from config.xml on the classpath
            Configuration conf = new Configuration();
            conf.addResource("config.xml");
            FileSystem fs = FileSystem.get(conf);
            // Create (or overwrite) the file and write a short test string
            FSDataOutputStream fdos = fs.create(new Path("/testing/file01.txt"), true);
            fdos.writeBytes("Test text for the txt file");
            fdos.flush();
            fdos.close();
            fs.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
My config.xml file has only one property defined: fs.default.name = hdfs://CDH4_IP:8020.
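For reference, a minimal sketch of what that config.xml would look like (CDH4_IP stands in for the VM's actual address, as above):

<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://CDH4_IP:8020</value>
  </property>
</configuration>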
When I run it, I get the following exception:
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /testing/file01.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1322)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2170)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:471)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:297)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44080)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
at org.apache.hadoop.ipc.Client.call(Client.java:1160)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at $Proxy9.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at $Proxy9.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:290)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1150)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1003)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:463)
I have looked around the internet and it seems this happens when disk space is low, but that is not the case for me when I run "hdfs dfsadmin -report". I get the following:
Configured Capacity: 25197727744 (23.47 GB)
Present Capacity: 21771988992 (20.28 GB)
DFS Remaining: 21770715136 (20.28 GB)
DFS Used: 1273856 (1.21 MB)
DFS Used%: 0.01%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Live datanodes:
Name: 127.0.0.1:50010 (localhost.localdomain)
Hostname: localhost.localdomain
Decommission Status : Normal
Configured Capacity: 25197727744 (23.47 GB)
DFS Used: 1273856 (1.21 MB)
Non DFS Used: 3425738752 (3.19 GB)
DFS Remaining: 21770715136 (20.28 GB)
DFS Used%: 0.01%
DFS Remaining%: 86.4%
Last contact: Fri Jan 11 17:30:56 EST 2013
I can also run this code from within the VM. I don't know what the problem is or how to fix it. This is my first time working with Hadoop, so I am probably missing something basic. Any ideas?
UPDATE
The only thing I see in the logs is an exception similar to the one on the client side:
java.io.IOException: File /testing/file01.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1322)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2170)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:471)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:297)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44080)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
I tried changing the permissions on the data directory (/var/lib/hadoop-hdfs/cache/hdfs/dfs/data) and that didn't fix it (I went so far as giving full access to everyone).
I noticed that when I browse HDFS via the HUE web app, I can see that the folder structure was created and that the file does exist, but it is empty. I tried putting the file under the default user directory by using
FSDataOutputStream fdos=fs.create(new Path("testing/file04.txt"), true);
instead of
FSDataOutputStream fdos=fs.create(new Path("/testing/file04.txt"), true);
This made the file path "/user/dharris/testing/file04.txt" ('dharris' is my Windows user), but it gave me the same error.
I had the same problem.
In my case, the key to the problem was the following part of the error message:
There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
It means that your hdfs-client could not connect to your datanode on port 50010. When you connected to the hdfs namenode, you could get the datanode's status, but your hdfs-client still failed to connect to the datanode itself.
(In HDFS, a namenode manages the file directory and the datanodes. When the hdfs-client connects to the namenode, it looks up the target file path and gets the address of a datanode that has the data; the hdfs-client then communicates with that datanode directly. You can check those datanode URIs with netstat, because the hdfs-client will try to reach the datanodes using the addresses the namenode returned.)
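As a quick way to test that second hop from the client machine, here is a rough sketch using plain java.net; the host is an assumption (substitute whatever datanode address the namenode or netstat actually reports):

import java.net.InetSocketAddress;
import java.net.Socket;

public class DatanodePortCheck {
    public static void main(String[] args) {
        String host = "CDH4_IP"; // assumed: replace with the datanode address the namenode reports
        int port = 50010;        // default datanode data-transfer port
        try (Socket socket = new Socket()) {
            // Fails fast if the port is firewalled or the address is unreachable from the client
            socket.connect(new InetSocketAddress(host, port), 5000);
            System.out.println("Reached datanode at " + host + ":" + port);
        } catch (Exception e) {
            System.out.println("Cannot reach " + host + ":" + port + " -> " + e);
        }
    }
}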
I solved this problem by:
> opening port 50010 in the firewall
> adding the property "dfs.client.use.datanode.hostname" = "true" (see the sketch below)
> adding the hostname to the hosts file on my client PC
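A minimal sketch of the second step done on the client side, assuming you set the property in code next to the question's own Configuration setup (dfs.client.use.datanode.hostname is the real HDFS client property; everything else follows the question's setup):

Configuration conf = new Configuration();
conf.addResource("config.xml");
// Ask the HDFS client to dial datanodes by hostname rather than the
// VM-internal IP the namenode reports back (e.g. 127.0.0.1 in the report above)
conf.set("dfs.client.use.datanode.hostname", "true");
FileSystem fs = FileSystem.get(conf);
// Step 3 then maps that hostname to the VM's reachable IP on the client, e.g. in
// C:\Windows\System32\drivers\etc\hosts:   <VM_IP>   localhost.localdomain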
Sorry for my poor English.