HBase connection unresponsive

A Spark-on-YARN job hung while connecting to HBase. In the driver log below, the executors register normally, then the ZooKeeper session is closed and the scan fails with a RetriesExhaustedException:

18/12/26 16:03:05 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.30.20.231:35813) with ID 18
18/12/26 16:03:05 INFO BlockManagerMasterEndpoint: Registering block manager DataNode1:43095 with 2.5 GB RAM, BlockManagerId(4, DataNode1, 43095, None)
18/12/26 16:03:05 INFO BlockManagerMasterEndpoint: Registering block manager DataNode2:57624 with 2.5 GB RAM, BlockManagerId(18, DataNode2, 57624, None)
18/12/26 16:03:05 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.30.20.231:35814) with ID 3
18/12/26 16:03:05 INFO BlockManagerMasterEndpoint: Registering block manager DataNode2:45685 with 2.5 GB RAM, BlockManagerId(3, DataNode2, 45685, None)
18/12/26 16:03:06 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.30.20.231:35815) with ID 22
18/12/26 16:03:06 INFO BlockManagerMasterEndpoint: Registering block manager DataNode2:41514 with 2.5 GB RAM, BlockManagerId(22, DataNode2, 41514, None)
18/12/26 16:03:06 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.30.20.231:35816) with ID 8
18/12/26 16:03:06 INFO BlockManagerMasterEndpoint: Registering block manager DataNode2:50508 with 2.5 GB RAM, BlockManagerId(8, DataNode2, 50508, None)
18/12/26 16:03:53 INFO ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x267ab650fa4558c
18/12/26 16:03:53 INFO ZooKeeper: Session: 0x267ab650fa4558c closed
18/12/26 16:03:53 INFO ClientCnxn: EventThread shut down
Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location for replica 0
	at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:354)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:159)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:211)
	at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
	at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
	at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
	at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
	at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:799)
	at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
	at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
	at org.apache.hadoop.hbase.client.MetaScanner.allTableRegions(MetaScanner.java:324)
	at org.apache.hadoop.hbase.client.HRegionLocator.getAllRegionLocations(HRegionLocator.java:90)
	at org.apache.hadoop.hbase.util.RegionSizeCalculator.init(RegionSizeCalculator.java:94)
	at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:81)
	at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:256)
	at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:237)
	at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:125)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
	at org.apache.spark.rdd.RDD.count(RDD.scala:1158)
	at com.paibo.datacenter.MacCapture$.main(MacCapture.scala:49)
	at com.paibo.datacenter.MacCapture.main(MacCapture.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:782)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


Summary of reasons a Spark/Java client fails to connect to HBase:

1. A cluster node is in an abnormal state: restart the affected node or service.

2. Host mappings are not configured on the client: add the cluster's hostname-to-IP mappings (e.g. the DataNode1/DataNode2 entries seen in the log above) to the client's /etc/hosts.

3. The cluster defaults to a non-secure connection while the client defaults to a secure one. Point the client at the non-secure znode:

conf.set("zookeeper.znode.parent", "/hbase-unsecure")

(On HDP-style deployments HBase registers under /hbase-unsecure rather than the client default /hbase; see the sketch below.)
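
To show fix #3 in context, here is a minimal sketch of the read path that failed in the stack trace above (TableInputFormat via SparkContext.newAPIHadoopRDD), with the znode setting applied. The object name, quorum hosts, client port, and table name ("my_table") are placeholders for illustration; substitute your cluster's values:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object HBaseScanSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("HBaseScanSketch"))

    val conf = HBaseConfiguration.create()
    // ZooKeeper quorum of the HBase cluster (placeholder host names)
    conf.set("hbase.zookeeper.quorum", "DataNode1,DataNode2")
    conf.set("hbase.zookeeper.property.clientPort", "2181")
    // Fix #3: on an unsecured cluster (e.g. HDP) HBase registers under
    // /hbase-unsecure, while the client default is /hbase. Without this,
    // the client cannot locate hbase:meta and eventually throws
    // RetriesExhaustedException: Can't get the location for replica 0.
    conf.set("zookeeper.znode.parent", "/hbase-unsecure")
    // Table to scan (placeholder name)
    conf.set(TableInputFormat.INPUT_TABLE, "my_table")

    val rdd = sc.newAPIHadoopRDD(
      conf,
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])

    // The same count() that failed at MacCapture.scala:49 in the trace above
    println(s"row count: ${rdd.count()}")
    sc.stop()
  }
}

If the job still hangs after this, check fix #2 first: the driver and every executor must be able to resolve the region server host names that ZooKeeper returns.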