A collection of Hadoop HDFS operation issues

1. Exception when uploading a file

Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /middle/weibo/test_input.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1726)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2567)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:829)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:510)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489)

	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489)
	at org.apache.hadoop.ipc.Client.call(Client.java:1435)
	at org.apache.hadoop.ipc.Client.call(Client.java:1345)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
	at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:444)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
	at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1838)
	at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1638)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:704)

Solutions suggested online:

1. Check whether the clusterID of the NameNode and the DataNode match; formatting the NameNode more than once can leave them inconsistent and cause this exception (compare the clusterID entries in the VERSION files under the NameNode's and DataNode's data directories).

Result: they match.

2. Check whether port 50010, which the DataNode uses for data transfer during uploads, is reachable (a quick connectivity check is sketched after this list).

It is reachable.
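
One way to verify reachability from the client machine without telnet is a plain TCP connect; a minimal Java sketch, assuming this cluster's DataNode at 192.168.93.10 and the Hadoop 2.x default data transfer port 50010:

import java.net.InetSocketAddress;
import java.net.Socket;

public class DataNodePortCheck {
    public static void main(String[] args) throws Exception {
        // Hadoop 2.x default DataNode data transfer port (dfs.datanode.address) is 50010.
        // The IP below is this cluster's DataNode; adjust for your environment.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("192.168.93.10", 50010), 3000);
            System.out.println("DataNode port 50010 is reachable");
        }
    }
}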

After thinking it over myself, I suspected the problem was in the hosts file, so I changed it to:

#127.0.0.1   master localhost localhost.localdomain localhost4 localhost4.localdomain4
#127.0.0.1   localhost master
#::1         master localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.93.10 master
#::1         master
0.0.0.0     master

Restart Hadoop and check the listening ports:

[hadoop@master sbin]$ netstat -lntp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 192.168.93.10:9000      0.0.0.0:*               LISTEN      18975/java          
tcp        0      0 192.168.93.10:50090     0.0.0.0:*               LISTEN      19282/java          
tcp        0      0 127.0.0.1:45462         0.0.0.0:*               LISTEN      19112/java          
tcp        0      0 192.168.93.10:50070     0.0.0.0:*               LISTEN      18975/java          
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -                   
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      -                   
tcp        0      0 0.0.0.0:50010           0.0.0.0:*               LISTEN      19112/java          
tcp        0      0 0.0.0.0:50075           0.0.0.0:*               LISTEN      19112/java          
tcp        0      0 192.168.93.10:50020     0.0.0.0:*               LISTEN      19112/java          
tcp6       0      0 :::22                   :::*                    LISTEN      -                   
tcp6       0      0 192.168.93.10:44509     :::*                    LISTEN      19549/java          
tcp6       0      0 192.168.93.10:8030      :::*                    LISTEN      19438/java          
tcp6       0      0 192.168.93.10:8031      :::*                    LISTEN      19438/java          
tcp6       0      0 192.168.93.10:8032      :::*                    LISTEN      19438/java          

With this change, port 9000 is still reachable from outside the machine.

I ran the file upload program again and it now uploads successfully. My guess is that the earlier error was caused by the hosts file containing only "0.0.0.0 master", which prevented the NameNode from communicating with the DataNode.
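
For reference, the upload program boils down to a copyFromLocalFile call against the cluster; a minimal sketch of that kind of client (class name, user and local path are illustrative, not the original code):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PutFileToHDFS {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // NameNode RPC address as shown in the netstat output (port 9000 on master);
        // "hadoop" is the user the cluster runs as.
        FileSystem fs = FileSystem.get(URI.create("hdfs://master:9000"), conf, "hadoop");
        // HDFS destination matches the path in the exception above; the local path is illustrative.
        fs.copyFromLocalFile(new Path("D:/data/test_input.txt"),
                new Path("/middle/weibo/test_input.txt"));
        fs.close();
    }
}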

2. "HADOOP_HOME and hadoop.home.dir are unset" when downloading a file

Exception in thread "main" java.lang.RuntimeException: java.io.FileNotFoundException: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset. -see https://wiki.apache.org/hadoop/WindowsProblems
	at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:716)
	at org.apache.hadoop.util.Shell.getSetPermissionCommand(Shell.java:250)
	at org.apache.hadoop.util.Shell.getSetPermissionCommand(Shell.java:267)
	at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:771)
	at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:237)
	at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:221)
	at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:319)
	at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:307)
	at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:339)
	at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:399)
	at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:462)
	at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:441)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:929)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:910)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:807)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:368)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:341)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2067)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2036)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2012)
	at com.su.hadoop.hdfs.GetFileFromHDFS.getFileFromHDFS(GetFileFromHDFS.java:28)
	at com.su.hadoop.hdfs.GetFileFromHDFS.main(GetFileFromHDFS.java:19)
Caused by: java.io.FileNotFoundException: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset. -see https://wiki.apache.org/hadoop/WindowsProblems
	at org.apache.hadoop.util.Shell.fileNotFoundException(Shell.java:528)
	at org.apache.hadoop.util.Shell.getHadoopHomeDir(Shell.java:549)
	at org.apache.hadoop.util.Shell.getQualifiedBin(Shell.java:572)
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:669)
	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
	at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2972)
	at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2968)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2830)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
	at com.su.hadoop.hdfs.GetFileSystem.getFileSystem(GetFileSystem.java:31)
	at com.su.hadoop.hdfs.GetFileFromHDFS.getFileFromHDFS(GetFileFromHDFS.java:23)
	... 1 more
Caused by: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset.
	at org.apache.hadoop.util.Shell.checkHadoopHomeInner(Shell.java:448)
	at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:419)
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:496)
	... 8 more
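
The call that fails here is FileSystem.copyToLocalFile, issued from the GetFileFromHDFS client; a minimal sketch of that kind of code (class name, user and paths are illustrative, not the original source):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GetFileFromHDFSSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://master:9000"), conf, "hadoop");
        // copyToLocalFile writes the local copy (plus a checksum file) through RawLocalFileSystem,
        // which on Windows shells out to winutils.exe to set file permissions - hence the error above.
        fs.copyToLocalFile(new Path("/middle/weibo/test_input.txt"),
                new Path("D:/data/test_input.txt"));
        fs.close();
    }
}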

Reference: https://blog.csdn.net/gvinaxu/article/details/75949059

Even when Hadoop is pulled in through Maven, Hadoop still needs to be installed on the local machine: unpack it, set the HADOOP_HOME environment variable to the Hadoop directory (D:\developer\hadoop\hadoop-2.8.5), and append %HADOOP_HOME%\bin and %HADOOP_HOME%\sbin to PATH. If IDEA does not pick up the new variable, restart the program (or the machine). If it still complains that winutils.exe is missing, download one and put it in the bin directory.
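
If the environment-variable route is inconvenient (for example, IDEA not seeing HADOOP_HOME until restarted), the same directory can be supplied as the hadoop.home.dir system property, which the error message above also mentions; a minimal sketch, assuming Hadoop is unpacked at D:\developer\hadoop\hadoop-2.8.5 with winutils.exe in its bin directory:

public class HadoopHomeWorkaround {
    public static void main(String[] args) {
        // Programmatic equivalent of the HADOOP_HOME environment variable.
        // Must run before the first Hadoop class (org.apache.hadoop.util.Shell) is initialized,
        // so set it at the very start of main(), before creating any Configuration or FileSystem.
        System.setProperty("hadoop.home.dir", "D:\\developer\\hadoop\\hadoop-2.8.5");
        // ... then continue with Configuration / FileSystem.get(...) as usual.
    }
}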

At runtime it still throws:

Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
	at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
	at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:606)
	at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:969)
	at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:160)
	at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:100)
	at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:77)
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:314)
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:377)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:151)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:132)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:116)
	at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:125)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:171)
	at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:758)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:242)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1338)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1338)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1359)
	at com.su.hadoop.mapreduce.WordCount.main(WordCount.java:73)

Put hadoop.dll into the bin directory as well (and into C:\Windows\System32).

3. To be continued

Further issues from other posts will be consolidated here when time permits.
