18/12/26 16:03:05 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.30.20.231:35813) with ID 18
18/12/26 16:03:05 INFO BlockManagerMasterEndpoint: Registering block manager DataNode1:43095 with 2.5 GB RAM, BlockManagerId(4, DataNode1, 43095, None)
18/12/26 16:03:05 INFO BlockManagerMasterEndpoint: Registering block manager DataNode2:57624 with 2.5 GB RAM, BlockManagerId(18, DataNode2, 57624, None)
18/12/26 16:03:05 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.30.20.231:35814) with ID 3
18/12/26 16:03:05 INFO BlockManagerMasterEndpoint: Registering block manager DataNode2:45685 with 2.5 GB RAM, BlockManagerId(3, DataNode2, 45685, None)
18/12/26 16:03:06 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.30.20.231:35815) with ID 22
18/12/26 16:03:06 INFO BlockManagerMasterEndpoint: Registering block manager DataNode2:41514 with 2.5 GB RAM, BlockManagerId(22, DataNode2, 41514, None)
18/12/26 16:03:06 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.30.20.231:35816) with ID 8
18/12/26 16:03:06 INFO BlockManagerMasterEndpoint: Registering block manager DataNode2:50508 with 2.5 GB RAM, BlockManagerId(8, DataNode2, 50508, None)
18/12/26 16:03:53 INFO ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x267ab650fa4558c
18/12/26 16:03:53 INFO ZooKeeper: Session: 0x267ab650fa4558c closed
18/12/26 16:03:53 INFO ClientCnxn: EventThread shut down
Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location for replica 0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:354)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:159)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:211)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:799)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
at org.apache.hadoop.hbase.client.MetaScanner.allTableRegions(MetaScanner.java:324)
at org.apache.hadoop.hbase.client.HRegionLocator.getAllRegionLocations(HRegionLocator.java:90)
at org.apache.hadoop.hbase.util.RegionSizeCalculator.init(RegionSizeCalculator.java:94)
at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:81)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:256)
at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:237)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:125)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
at org.apache.spark.rdd.RDD.count(RDD.scala:1158)
at com.paibo.datacenter.MacCapture$.main(MacCapture.scala:49)
at com.paibo.datacenter.MacCapture.main(MacCapture.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:782)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Common reasons Spark/Java fails to connect to HBase:
1. A cluster node is down or unhealthy — restart the affected service.
2. The cluster hostnames are not resolvable from the client — add them to /etc/hosts (or DNS).
3. The cluster registers HBase under the unsecured ZooKeeper znode while the client defaults to the secure path, so region lookups exhaust their retries (the "Can't get the location for replica 0" above) — point the client at the unsecured znode:
conf.set("zookeeper.znode.parent", "/hbase-unsecure")
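Putting the fixes together, a minimal configuration sketch for reading an HBase table from Spark might look like the following. The quorum hosts and the table name are placeholders for illustration — substitute your own cluster values; the `zookeeper.znode.parent` setting is the one that resolves cause 3 (HDP-style clusters register under `/hbase-unsecure`, while the HBase client defaults to `/hbase`):

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat

val conf = HBaseConfiguration.create()
// ZooKeeper quorum of the HBase cluster; these hostnames must be
// resolvable from the driver and every executor (cause 2).
conf.set("hbase.zookeeper.quorum", "DataNode1,DataNode2") // placeholder hosts
conf.set("hbase.zookeeper.property.clientPort", "2181")
// Unsecured clusters often register under /hbase-unsecure; the client
// default of /hbase leads to RetriesExhaustedException (cause 3).
conf.set("zookeeper.znode.parent", "/hbase-unsecure")
conf.set(TableInputFormat.INPUT_TABLE, "mac_capture") // hypothetical table name

// sc is an existing SparkContext; count() forces getSplits(), which is
// where the stack trace above failed.
val rdd = sc.newAPIHadoopRDD(
  conf,
  classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result])
println(s"rows: ${rdd.count()}")
```

If the znode parent is correct but the connection still hangs, verify with `zookeeper-client` that the znode (e.g. `/hbase-unsecure/meta-region-server`) actually exists on the quorum the client is pointed at.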