Spark fails writing data to HBase: tried to access method com.google.common.base.Stopwatch.&lt;init&gt;()V

This is the error thrown while writing data from Spark into HBase. The implementation code is here: https://blog.csdn.net/qq262593421/article/details/105969665
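The trace below comes from a job that scans an HBase table through TableInputFormat and calls count() on the resulting RDD (see SparkHBase.scala:34 and TableInputFormat.getSplits in the stack). The following is only a minimal sketch of that kind of job, not the author's actual code from the link above; the ZooKeeper quorum and table name are placeholders.

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object SparkHBase {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SparkHBase").setMaster("local[*]"))

    // HBase client configuration; quorum and table name are placeholders
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.zookeeper.quorum", "node1,node2,node3")
    hbaseConf.set(TableInputFormat.INPUT_TABLE, "test_table")

    // Read the table through TableInputFormat; computing the splits is where
    // MetaTableLocator runs and the Stopwatch IllegalAccessError surfaces.
    val hbaseRDD = sc.newAPIHadoopRDD(
      hbaseConf,
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])

    // count() forces partition computation -> getSplits -> the error below
    println(hbaseRDD.count())

    sc.stop()
  }
}
```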

```
"C:\Program Files\Java\jdk1.8.0_111\bin\java.exe" "-javaagent:D:\JetBrains\IntelliJ IDEA 2019.2.3\lib\idea_rt.jar=50701:D:\JetBrains\IntelliJ IDEA 2019.2.3\bin" -Dfile.encoding=UTF-8 -classpath C:\Users\com\AppData\Local\Temp\classpath2035139547.jar com.xtd.hbase.SparkHBase
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/F:/Maven/repository/org/slf4j/slf4j-log4j12/1.6.1/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/F:/Maven/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.4.1/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Exception in thread "main" org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:229)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:202)
	at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
	at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
	at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
	at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
	at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:811)
	at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
	at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
	at org.apache.hadoop.hbase.client.MetaScanner.allTableRegions(MetaScanner.java:324)
	at org.apache.hadoop.hbase.client.HRegionLocator.getAllRegionLocations(HRegionLocator.java:88)
	at org.apache.hadoop.hbase.util.RegionSizeCalculator.init(RegionSizeCalculator.java:94)
	at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:81)
	at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:256)
	at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:237)
	at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:130)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
	at org.apache.spark.rdd.RDD.count(RDD.scala:1168)
	at com.xtd.hbase.SparkHBase$.main(SparkHBase.scala:34)
	at com.xtd.hbase.SparkHBase.main(SparkHBase.scala)
Caused by: java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator
	at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:596)
	at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:580)
	at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:559)
	at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1185)
	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1152)
	at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:300)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
	... 22 more

Process finished with exit code 1
```
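This particular IllegalAccessError is generally a Guava version conflict: the HBase 1.x client's MetaTableLocator calls Stopwatch's no-arg constructor, which newer Guava releases no longer expose publicly, so a newer Guava pulled in by another dependency breaks the HBase client at runtime. A common workaround is to force the Guava version that the HBase client was built against; the fragment below is only a hedged example of that idea, and the version shown is an assumption that should be matched to your HBase distribution (alternatively, hbase-shaded-client bundles its own relocated Guava and avoids the conflict entirely).

```xml
<!-- Hypothetical pom.xml fragment: pin Guava to a version whose Stopwatch()
     constructor is still public so the HBase 1.x client can call it.
     12.0.1 is assumed here; match it to the HBase client version you use. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>12.0.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```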
