1. Error message:
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/maclaren/data/hadoopTempDir/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
Set the hadoop.tmp.dir value in core-site.xml to a valid, writable directory, and the first time you initialize the cluster be sure to format the NameNode, otherwise this error occurs.
So first clear out everything under the directory that hadoop.tmp.dir points to, then run:
sh hadoop namenode -format
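For reference, a minimal core-site.xml sketch. The hadoop.tmp.dir path is taken from the error message above; the fs.default.name address is an assumption based on the 192.168.1.147:9000 URI that appears in error 4.
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/maclaren/data/hadoopTempDir</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.1.147:9000</value>
</property>
</configuration>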
2. Error message:
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
Make sure dfs.block.size is set to a sensible value (it is given in bytes). I was running on my laptop, so 1024 was enough. Edit hdfs-site.xml:
<property>
<name>dfs.block.size</name>
<value>1024</value>
</property>
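This error can also appear when no DataNode is alive or none has free space, so before touching the block size it may be worth confirming that the DataNodes have registered. A quick check with the standard 0.20 admin command:
sh hadoop dfsadmin -report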
3. Error message:
org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch.
Fix:
Replace hadoop-core-0.20-append-r1056497.jar under $HBASE_HOME/lib with hadoop-0.20.2-core.jar, i.e. the same core jar the Hadoop cluster itself is running.
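A sketch of the swap, assuming $HADOOP_HOME points at the cluster's Hadoop 0.20.2 installation; restart HBase afterwards so it picks up the new jar:
rm $HBASE_HOME/lib/hadoop-core-0.20-append-r1056497.jar
cp $HADOOP_HOME/hadoop-0.20.2-core.jar $HBASE_HOME/lib/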
4. Error message:
Caused by: java.io.IOException: Call to /192.168.1.147:9000 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
at org.apache.hadoop.ipc.Client.call(Client.java:1075)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy8.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.springframework.data.hadoop.fs.HdfsResourceLoader.<init>(HdfsResourceLoader.java:82)
... 21 more
Fix:
The Hadoop jar on the client is not the same version as the jar on the server; make the client use the cluster's jar, and adjust hdfs-site.xml if needed.
Another cause: when Eclipse submits a job through the Hadoop plugin, it does so by default as the user DrWho and writes the job into HDFS under /user/hadoop; since DrWho has no write permission on that directory, the exception is thrown. Open up the directory's permissions with:
$ hadoop fs -chmod 777 /user/hadoop
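The note above about editing hdfs-site.xml most likely refers to relaxing HDFS permission checking instead of chmod-ing each directory; this is an assumption, but on Hadoop 0.20 the property would be dfs.permissions on the NameNode:
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>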