Code:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAPI {
    public static void main(String[] args) throws IOException {
        put("input/test.txt", "/user/xuefeng/test.txt");
        // get("output/test.txt", "/user/xuefeng/test.txt");
        // delete("/user/xuefeng/test.txt");
    }

    // Upload a local file to HDFS.
    public static boolean put(String localPath, String hdfsPath)
            throws IOException {
        Configuration conf = new Configuration();
        FileSystem hdfs = FileSystem.get(conf);
        Path src = new Path(localPath);
        Path dst = new Path(hdfsPath);
        hdfs.copyFromLocalFile(src, dst);
        hdfs.close();
        return true;
    }

    // Download a file from HDFS to the local filesystem.
    public static boolean get(String localPath, String hdfsPath)
            throws IOException {
        Configuration conf = new Configuration();
        FileSystem hdfs = FileSystem.get(conf);
        Path dst = new Path(localPath);
        Path src = new Path(hdfsPath);
        hdfs.copyToLocalFile(src, dst);
        hdfs.close();
        return true;
    }

    // Recursively delete a path on HDFS.
    public static boolean delete(String hdfsPath) throws IOException {
        Configuration conf = new Configuration();
        FileSystem hdfs = FileSystem.get(conf);
        Path path = new Path(hdfsPath);
        boolean deleted = hdfs.delete(path, true);
        hdfs.close();
        return deleted;
    }
}
Problems encountered:
1.
12/07/11 20:35:25 INFO ipc.Client: Retrying connect to server: 192.168.1.102/192.168.1.102:9000. Already tried 8 time(s).
12/07/11 20:35:27 INFO ipc.Client: Retrying connect to server: 192.168.1.102/192.168.1.102:9000. Already tried 9 time(s).
Exception in thread "main" java.net.ConnectException: Call to 192.168.1.102/192.168.1.102:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1095)
at org.apache.hadoop.ipc.Client.call(Client.java:1071)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
at org.apache.hadoop.examples.HdfsAPI.put(HdfsAPI.java:22)
at org.apache.hadoop.examples.HdfsAPI.main(HdfsAPI.java:14)
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1202)
at org.apache.hadoop.ipc.Client.call(Client.java:1046)
... 15 more
Following the tip at http://www.hadoopor.com/viewthread.php?action=printable&tid=853
root@test:/work/hadoop/hadoop-1.0.2# lsof -i :9000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 11783 root 65u IPv6 32454 0t0 TCP test:9000 (LISTEN)
java 11783 root 85u IPv6 33634 0t0 TCP test:9000->test:46187 (ESTABLISHED)
java 11986 root 83u IPv6 33633 0t0 TCP test:46187->test:9000 (ESTABLISHED)
It turned out Hadoop was listening on IPv6. After adding
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
to $HADOOP_HOME/conf/hadoop-env.sh, the problem went away.
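Before touching any Hadoop config, it can help to confirm whether anything is reachable on the NameNode port at all, since "Connection refused" means no listener answered. A minimal plain-JDK sketch (no Hadoop dependencies; the host and port below are taken from the error message above):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    // A thrown ConnectException ("Connection refused") means nothing is
    // listening there -- the same symptom as the HDFS client error above.
    public static boolean isListening(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // NameNode address from the stack trace above.
        System.out.println(isListening("192.168.1.102", 9000, 2000));
    }
}
```

If this prints false while `lsof -i :9000` on the server shows a listener, the mismatch (here, IPv6 vs. IPv4) is a likely cause.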
2.
When calling put(), the following error was thrown:
Exception in thread "main" org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create file/user/xuefeng/test.txt. Name node is in safe mode.
The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1220)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1188)
at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:628)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
Following the tip at http://shutiao2008.iteye.com/blog/318950
bin/hadoop dfsadmin -safemode leave    # leave safe mode
That fixed it.
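Note that safe mode normally ends on its own once enough DataNodes have reported their blocks (the "ratio of reported blocks" in the error must reach the 0.9990 threshold), so forcing it off is only needed when the cluster is stuck. The `dfsadmin` tool can also check or wait for the state (Hadoop 1.x CLI, run against a live cluster):

```shell
# Show the current safe mode state (ON/OFF)
bin/hadoop dfsadmin -safemode get

# Block until the NameNode leaves safe mode on its own
bin/hadoop dfsadmin -safemode wait
```

Preferring `-safemode wait` over `-safemode leave` avoids writing to a NameNode that has not yet seen all its blocks.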