Spark error: ERROR history.FsHistoryProvider

The error:
From the Spark logs:
20/02/23 15:28:14 ERROR history.FsHistoryProvider: Exception encountered when attempting to load application log hdfs://node01:9000/sparklog/local-1577264935490
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1795867413-192.168.100.201-1573709411052:blk_1073745968_5179 file=/sparklog/local-1577264935490
at org.apache.hadoop.hdfs.DFSInputStream.refetchLocations(DFSInputStream.java:1040)
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:1023)
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:1002)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:642)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:895)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:954)
at java.io.DataInputStream.read(DataInputStream.java:149)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:161)
at java.io.BufferedReader.readLine(BufferedReader.java:324)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:72)
at scala.collection.Iterator$$anon$21.hasNext(Iterator.scala:836)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
at org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:78)
at org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:58)
at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$replay(FsHistoryProvider.scala:644)
at org.apache.spark.deploy.history.FsHistoryProvider.mergeApplicationListing(FsHistoryProvider.scala:457)
at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$checkForLogs$3$$anon$4.run(FsHistoryProvider.scala:345)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

I spent the whole afternoon on this. At first I assumed HDFS had entered safe mode, but a quick check showed it had not; I then deleted all of Spark's event log files on HDFS, and the error still appeared (the checks are sketched below).
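For reference, a rough sketch of those checks, assuming the event log directory is /sparklog as shown in the error above:

# check whether the NameNode is stuck in safe mode
hdfs dfsadmin -safemode get
# verify block health of the event logs before deleting anything
hdfs fsck /sparklog -files -blocks
# remove the event log files the history server keeps failing to replay
hdfs dfs -rm -r /sparklog/*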
Starting spark-shell failed as well, with:
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure:
Roughly speaking, it could not find the metastore URI.
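If you hit the same message, it is worth checking which metastore URI is actually configured. One way to do that, assuming a standard Hive install with hive-site.xml under $HIVE_HOME/conf:

# print the hive.metastore.uris property (typically something like thrift://node01:9083)
grep -A1 hive.metastore.uris $HIVE_HOME/conf/hive-site.xml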
Oh man, it turned out I simply had not started Hive.
After starting Hive, the problem was solved.
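In my case "starting Hive" just meant bringing the metastore service back up. A minimal sketch, assuming the default Thrift port 9083:

# start the Hive metastore service in the background
nohup hive --service metastore > metastore.log 2>&1 &
# confirm it is listening before retrying spark-shell or the history server
netstat -nltp | grep 9083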
