Solving "Class org.apache.hadoop.hdfs.DistributedFileSystem not found"

IDE used: IntelliJ IDEA
Hadoop version: 3.1.1

I went through all kinds of solutions online, Chinese and foreign alike. Most say the relevant jar packages are missing and tell you to download a newer, more complete set. Plenty of people did exactly that, yet it still had no effect, and some even ended up having to clean out the duplicated old jars afterwards.
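
Before hunting for more jars, it is worth checking whether the class is visible on the runtime classpath at all. A minimal sketch (not from the original post; the class name is the one from the error message):

```java
public class ClasspathCheck {
    public static void main(String[] args) {
        try {
            // Succeeds only if a jar containing DistributedFileSystem is on the classpath.
            Class.forName("org.apache.hadoop.hdfs.DistributedFileSystem");
            System.out.println("DistributedFileSystem found");
        } catch (ClassNotFoundException e) {
            System.out.println("DistributedFileSystem missing - the hdfs client jars are not on the classpath");
        }
    }
}
```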

Then I stumbled onto a surprisingly simple fix. Since the error says DistributedFileSystem cannot be found, I wondered whether the jars I had added earlier were incomplete, because I had only added the following:

from the common folder under the path above:
all the jars in lib, plus hadoop-common-3.1.1.jar
from the hdfs folder under the same path:
all the jars in lib, plus hadoop-hdfs-3.1.1.jar

So I added every remaining jar under the hdfs folder as well:
[screenshot of the added jars]
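
For context, the stack trace below points at ReadData.main(ReadData.java:18), i.e. a FileSystem.get call. The source of ReadData.java is not included in the post, so this is only a minimal sketch of what such a read program typically looks like (the class name comes from the trace, the NameNode address matches the log below, and /test.txt is an assumed placeholder path):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadData {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The log shows the client connecting to localhost/127.0.0.1:9000.
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        // This is the call that failed with "Class org.apache.hadoop.hdfs
        // .DistributedFileSystem not found" before the hdfs jars were added.
        FileSystem fs = FileSystem.get(conf);
        // "/test.txt" is a hypothetical path; the real one isn't shown in the post.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(new Path("/test.txt"))))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // prints "Hello world" in the run below
            }
        }
        fs.close();
    }
}
```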

After that, the program finally ran successfully:
0 [main] DEBUG org.apache.hadoop.util.Shell - Failed to detect a valid hadoop home directory
java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset.
at org.apache.hadoop.util.Shell.checkHadoopHomeInner(Shell.java:469)
at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:440)
at org.apache.hadoop.util.Shell.&lt;clinit&gt;(Shell.java:517)
at org.apache.hadoop.util.StringUtils.&lt;clinit&gt;(StringUtils.java:78)
at org.apache.hadoop.fs.FileSystem$Cache$Key.&lt;init&gt;(FileSystem.java:3533)
at org.apache.hadoop.fs.FileSystem$Cache$Key.&lt;init&gt;(FileSystem.java:3528)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3370)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:226)
at ReadData.main(ReadData.java:18)
18 [main] DEBUG org.apache.hadoop.util.Shell - setsid is not available on this machine. So not using it.
18 [main] DEBUG org.apache.hadoop.util.Shell - setsid exited with exit code 0
95 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName="Ops", about="", always=false, type=DEFAULT, value={"Rate of successful kerberos logins and latency (milliseconds)"}, valueName="Time")
105 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName="Ops", about="", always=false, type=DEFAULT, value={"Rate of failed kerberos logins and latency (milliseconds)"}, valueName="Time")
105 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName="Ops", about="", always=false, type=DEFAULT, value={"GetGroups"}, valueName="Time")
105 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName="Ops", about="", always=false, type=DEFAULT, value={"Renewal failures since startup"}, valueName="Time")
105 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName="Ops", about="", always=false, type=DEFAULT, value={"Renewal failures since last successful login"}, valueName="Time")
109 [main] DEBUG org.apache.hadoop.metrics2.impl.MetricsSystemImpl - UgiMetrics, User and group related metrics
141 [main] DEBUG org.apache.hadoop.security.SecurityUtil - Setting hadoop.security.token.service.use_ip to true
156 [main] DEBUG org.apache.hadoop.security.Groups - Creating new Groups object
157 [main] DEBUG org.apache.hadoop.util.NativeCodeLoader - Trying to load the custom-built native-hadoop library...
158 [main] DEBUG org.apache.hadoop.util.NativeCodeLoader - Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path: [/Users/red/Library/Java/Extensions, /Library/Java/Extensions, /Network/Library/Java/Extensions, /System/Library/Java/Extensions, /usr/lib/java, .]
158 [main] DEBUG org.apache.hadoop.util.NativeCodeLoader - java.library.path=/Users/red/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.
158 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
158 [main] DEBUG org.apache.hadoop.util.PerformanceAdvisory - Falling back to shell based
161 [main] DEBUG org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback - Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
232 [main] DEBUG org.apache.hadoop.security.Groups - Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
357 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - hadoop login
357 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - hadoop login commit
361 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - using local user:UnixPrincipal: red
361 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - Using user: "UnixPrincipal: red" with name red
361 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - User entry: "red"
361 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - UGI loginUser:red (auth:SIMPLE)
383 [main] DEBUG org.apache.htrace.core.Tracer - sampler.classes = ; loaded no samplers
390 [main] DEBUG org.apache.htrace.core.Tracer - span.receiver.classes = ; loaded no span receivers
390 [main] DEBUG org.apache.hadoop.fs.FileSystem - Loading filesystems
403 [main] DEBUG org.apache.hadoop.fs.FileSystem - file:// = class org.apache.hadoop.fs.LocalFileSystem from /usr/local/Cellar/hadoop/3.1.1/libexec/share/hadoop/common/hadoop-common-3.1.1.jar
407 [main] DEBUG org.apache.hadoop.fs.FileSystem - viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /usr/local/Cellar/hadoop/3.1.1/libexec/share/hadoop/common/hadoop-common-3.1.1.jar
409 [main] DEBUG org.apache.hadoop.fs.FileSystem - har:// = class org.apache.hadoop.fs.HarFileSystem from /usr/local/Cellar/hadoop/3.1.1/libexec/share/hadoop/common/hadoop-common-3.1.1.jar
411 [main] DEBUG org.apache.hadoop.fs.FileSystem - http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /usr/local/Cellar/hadoop/3.1.1/libexec/share/hadoop/common/hadoop-common-3.1.1.jar
413 [main] DEBUG org.apache.hadoop.fs.FileSystem - https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /usr/local/Cellar/hadoop/3.1.1/libexec/share/hadoop/common/hadoop-common-3.1.1.jar
424 [main] DEBUG org.apache.hadoop.fs.FileSystem - hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /usr/local/Cellar/hadoop/3.1.1/libexec/share/hadoop/hdfs/hadoop-hdfs-client-3.1.1.jar
730 [main] DEBUG org.apache.hadoop.fs.FileSystem - webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /usr/local/Cellar/hadoop/3.1.1/libexec/share/hadoop/hdfs/hadoop-hdfs-client-3.1.1.jar
731 [main] DEBUG org.apache.hadoop.fs.FileSystem - swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /usr/local/Cellar/hadoop/3.1.1/libexec/share/hadoop/hdfs/hadoop-hdfs-client-3.1.1.jar
733 [main] DEBUG org.apache.hadoop.fs.FileSystem - Looking for FS supporting hdfs
733 [main] DEBUG org.apache.hadoop.fs.FileSystem - looking for configuration option fs.hdfs.impl
786 [main] DEBUG org.apache.hadoop.fs.FileSystem - Filesystem hdfs defined in configuration option
789 [main] DEBUG org.apache.hadoop.fs.FileSystem - FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
850 [main] DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf - dfs.client.use.legacy.blockreader.local = false
850 [main] DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf - dfs.client.read.shortcircuit = false
850 [main] DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf - dfs.client.domain.socket.data.traffic = false
850 [main] DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf - dfs.domain.socket.path =
862 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
877 [main] DEBUG org.apache.hadoop.io.retry.RetryUtils - multipleLinearRandomRetry = null
902 [main] DEBUG org.apache.hadoop.ipc.Server - rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@53941c2f
923 [main] DEBUG org.apache.hadoop.ipc.Client - getting client out of cache: org.apache.hadoop.ipc.Client@77825085
1372 [main] DEBUG org.apache.hadoop.util.PerformanceAdvisory - Both short-circuit local reads and UNIX domain socket are disabled.
1376 [main] DEBUG org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil - DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
1395 [main] DEBUG org.apache.hadoop.ipc.Client - The ping interval is 60000 ms.
1396 [main] DEBUG org.apache.hadoop.ipc.Client - Connecting to localhost/127.0.0.1:9000
1439 [IPC Client (2053628870) connection to localhost/127.0.0.1:9000 from red] DEBUG org.apache.hadoop.ipc.Client - IPC Client (2053628870) connection to localhost/127.0.0.1:9000 from red: starting, having connections 1
1442 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (2053628870) connection to localhost/127.0.0.1:9000 from red sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations
1454 [IPC Client (2053628870) connection to localhost/127.0.0.1:9000 from red] DEBUG org.apache.hadoop.ipc.Client - IPC Client (2053628870) connection to localhost/127.0.0.1:9000 from red got value #0
1455 [main] DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine - Call: getBlockLocations took 71ms
1506 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - newInfo = LocatedBlocks{; fileLength=11; underConstruction=false; blocks=[LocatedBlock{BP-538605242-192.168.1.109-1542866941937:blk_1073741833_1009; getBlockSize()=11; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[127.0.0.1:9866,DS-8b632f24-169d-4f2e-be57-98e4cd50bd11,DISK]]}]; lastLocatedBlock=LocatedBlock{BP-538605242-192.168.1.109-1542866941937:blk_1073741833_1009; getBlockSize()=11; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[127.0.0.1:9866,DS-8b632f24-169d-4f2e-be57-98e4cd50bd11,DISK]]}; isLastBlockComplete=true; ecPolicy=null}
1508 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Connecting to datanode 127.0.0.1:9866
1515 [main] DEBUG org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient - SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
1515 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (2053628870) connection to localhost/127.0.0.1:9000 from red sending #1 org.apache.hadoop.hdfs.protocol.ClientProtocol.getServerDefaults
1516 [IPC Client (2053628870) connection to localhost/127.0.0.1:9000 from red] DEBUG org.apache.hadoop.ipc.Client - IPC Client (2053628870) connection to localhost/127.0.0.1:9000 from red got value #1
1516 [main] DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine - Call: getServerDefaults took 1ms
1525 [main] DEBUG org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient - SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:9866,DS-8b632f24-169d-4f2e-be57-98e4cd50bd11,DISK]

Hello world

1597 [main] DEBUG org.apache.hadoop.ipc.Client - stopping client from cache: org.apache.hadoop.ipc.Client@77825085
1597 [main] DEBUG org.apache.hadoop.ipc.Client - removing client from cache: org.apache.hadoop.ipc.Client@77825085
1597 [main] DEBUG org.apache.hadoop.ipc.Client - stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@77825085
1598 [main] DEBUG org.apache.hadoop.ipc.Client - Stopping client
1601 [IPC Client (2053628870) connection to localhost/127.0.0.1:9000 from red] DEBUG org.apache.hadoop.ipc.Client - IPC Client (2053628870) connection to localhost/127.0.0.1:9000 from red: closed
1601 [IPC Client (2053628870) connection to localhost/127.0.0.1:9000 from red] DEBUG org.apache.hadoop.ipc.Client - IPC Client (2053628870) connection to localhost/127.0.0.1:9000 from red: stopped, remaining connections 0
1605 [Thread-2] DEBUG org.apache.hadoop.util.ShutdownHookManager - ShutdownHookManger complete shutdown.

Process finished with exit code 0
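
As an aside, the log also shows why the missing jars caused the original error: FileSystem first looks for the configuration option fs.hdfs.impl, and otherwise discovers the implementation for the hdfs:// scheme from the jars on the classpath, here finding DistributedFileSystem inside hadoop-hdfs-client-3.1.1.jar. If that jar (or its service metadata, as can happen with repackaged fat jars) is missing, the hdfs scheme cannot be resolved. A well-known workaround in the fat-jar case, separate from the jar fix used above, is to pin the mapping explicitly; a sketch:

```java
import org.apache.hadoop.conf.Configuration;

public class ExplicitHdfsImpl {
    // Pins the hdfs:// scheme to its implementation class instead of relying on
    // classpath discovery. A common workaround, not what this post ended up doing.
    static Configuration explicitConf() {
        Configuration conf = new Configuration();
        conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        return conf;
    }
}
```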
