Notes on Problems in a Spark-to-HBase Demo

This post covers problems encountered when reading and writing HBase from Spark, and how to fix them. The first error is that the `format` should be `org.apache.hadoop.hbase.spark`; the second is that correct HBase configuration information must be supplied. The corrected example shows how to create the configuration with `HBaseConfiguration.create()` and how to pass the required options to the `write` operation, namely `tableCatalog`, `newTable`, and `zkUrl`. Getting these settings right avoids the NullPointerException shown below.

https://sparkbyexamples.com/spark/spark-read-write-using-hbase-spark-connector

The example comes from the link above. Straight to the problems:

    // Reading from HBase to DataFrame
    val hbaseDF = spark.read
      .options(Map(HBaseTableCatalog.tableCatalog -> catalog))
      .format("org.apache.spark.sql.execution.datasources.hbase")
      .load()
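
Note that `catalog` in this snippet is a JSON string that maps DataFrame columns to an HBase row key and column families. A minimal sketch, with a hypothetical table name and schema:

    // Hypothetical catalog: row key column "id", one value column in family "f"
    val catalog =
      s"""{
         |  "table": {"namespace": "default", "name": "itemFeature"},
         |  "rowkey": "key",
         |  "columns": {
         |    "id":      {"cf": "rowkey", "col": "key",     "type": "string"},
         |    "feature": {"cf": "f",      "col": "feature", "type": "string"}
         |  }
         |}""".stripMargin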

The `format` here is incorrect: `org.apache.spark.sql.execution.datasources.hbase` is the Hortonworks SHC data source, not the Apache hbase-spark connector. Even the code from the GitHub link in the article above cannot be run as-is.
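
For comparison, a read that works with the Apache hbase-spark connector only needs the right format string; a sketch, assuming the same `catalog` and an `HBaseContext` registered as in the corrected write at the end:

    val hbaseDF = spark.read
      .options(Map(HBaseTableCatalog.tableCatalog -> catalog))
      .format("org.apache.hadoop.hbase.spark")
      .load()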

The second problem: the HBase configuration information is never loaded, so how is Spark supposed to know which HBase cluster to connect to? Running the write without it fails:

Exception in thread "main" java.lang.NullPointerException
	at org.apache.hadoop.hbase.spark.HBaseRelation.<init>(DefaultSource.scala:139)
	at org.apache.hadoop.hbase.spark.DefaultSource.createRelation(DefaultSource.scala:79)
	at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
	at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:668)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:276)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:270)
	at com.sdmctech.offline.application.simple.TransformAssetId2ID$$anonfun$main$1.apply$mcV$sp(TransformAssetId2ID.scala:47)
	at com.sdmctech.offline.common.TApplication$class.start(TApplication.scala:26)
	at com.sdmctech.offline.application.simple.TransformAssetId2ID$.start(TransformAssetId2ID.scala:16)
	at com.sdmctech.offline.application.simple.TransformAssetId2ID$.main(TransformAssetId2ID.scala:19)
	at com.sdmctech.offline.application.simple.TransformAssetId2ID.main(TransformAssetId2ID.scala)

Process finished with exit code 1

The error is shown above. The NPE at `HBaseRelation.<init>` (DefaultSource.scala:139) appears to occur because the connector looks up a previously registered `HBaseContext` for its configuration; when none has been created, that reference is null.

Putting it together, the correct example should be:

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.spark.HBaseContext
    import org.apache.hadoop.hbase.spark.datasources.HBaseTableCatalog

    val hbaseConf = HBaseConfiguration.create() // loads hbase-site.xml if it is on the classpath
    // Registering an HBaseContext is the missing step that caused the NPE above
    new HBaseContext(spark.sparkContext, hbaseConf)
    allItemFeature.write
      .options(Map(
        HBaseTableCatalog.tableCatalog -> catalog,
        HBaseTableCatalog.newTable -> "3",  // number of regions if the table is created
        "zkUrl" -> CommonName.zkUrl))       // ZooKeeper address of the target cluster
      .format("org.apache.hadoop.hbase.spark")
      .save()
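
If hbase-site.xml is not on the classpath, `HBaseConfiguration.create()` falls back to defaults (localhost), so the ZooKeeper address can also be set on the configuration explicitly; a minimal sketch with placeholder hostnames:

    val hbaseConf = HBaseConfiguration.create()
    // Point the client at a specific cluster when no hbase-site.xml is available
    hbaseConf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com")
    hbaseConf.set("hbase.zookeeper.property.clientPort", "2181")
    new HBaseContext(spark.sparkContext, hbaseConf)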
