Demo: Reading JSON Data with Spark and Importing It into HBase via BulkLoad


Importing large volumes of data into HBase one record at a time through the HBase API's put method is not only slow, it also puts pressure on the HBase cluster by driving up CPU and network usage. The put API is better suited to online business scenarios where single records are written as they arrive, such as message logs or processing records, with the HTable object released after each write.
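
For comparison, here is a minimal sketch of the single-record put path described above. It reuses the table name "person", column family "cf1" and ZooKeeper quorum from the demo code below; the row key and column value are made up purely for illustration.

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

object SinglePutDemo {
  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "hadoop001:2181")
    val conn = ConnectionFactory.createConnection(conf)
    val table = conn.getTable(TableName.valueOf("person"))
    try {
      // One RPC per record: each Put goes through the WAL and the MemStore,
      // which is fine for low-volume online writes but slow for bulk imports
      val put = new Put(Bytes.toBytes("rowkey-001"))
      put.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("title"), Bytes.toBytes("hello"))
      table.put(put)
    } finally {
      table.close()
      conn.close()
    }
  }
}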

Advantages of using Spark + BulkLoad to batch-write data into HBase:

  1. BulkLoad does not write to the WAL, and it does not trigger flushes or splits.
  2. Inserting data through heavy use of the Put API can cause frequent GC, which hurts performance and, in severe cases, can even affect the stability of HBase nodes; BulkLoad has no such concern.
  3. There is no performance cost from a large number of API calls during the import.
  4. It can take full advantage of Spark's computing power.

Current development environment dependencies:

  <properties>
    <hadoop.version>2.6.0-cdh5.15.1</hadoop.version>
    <scala.version>2.11.8</scala.version>
    <spark.version>2.4.4</spark.version>
    <hbase.version>1.2.0-cdh5.15.1</hbase.version>
  </properties>


    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-client</artifactId>
      <version>${hbase.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-server</artifactId>
      <version>${hbase.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-common</artifactId>
      <version>${hbase.version}</version>
    </dependency>

Test case implementation:

package main.scala.com.xiaolin.hbase


import com.xiaolin.utils.HdfsUtil
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.hbase.{HBaseConfiguration, HColumnDescriptor, HConstants, HTableDescriptor, KeyValue, TableName}
import org.apache.hadoop.hbase.client.ConnectionFactory
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.{HFileOutputFormat2, LoadIncrementalHFiles, TableOutputFormat}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.sql.SparkSession
import com.xiaolin.flink.utils.HbaseUtil

import scala.collection.mutable.ListBuffer

/**
 * Test case:
 *    load data from JSON into HBase as HFiles
 *    via BulkLoad
 *
 * @author linzhy
 */
object DataHiveToHbaseBulkLoad {

  val zookeeperQuorum = "hadoop001:2181"
  val dataSourcePath = "data/data-test.json"
  val hdfsRootPath = "hdfs://hadoop001:9000"
  val hFilePath = "hdfs://hadoop001:9000/bulkload/hfile/"
  val tableName = "person"
  val familyName = "cf1"


  // Set the Hadoop user
  System.setProperty("user.name", "hadoop")
  System.setProperty("HADOOP_USER_NAME", "hadoop")

  def main(args: Array[String]): Unit = {

    val spark = SparkSession.builder()
      .appName(this.getClass.getSimpleName)
      .master("local[2]")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer") //必须序列化
      .getOrCreate()

    val hadoopConf = new Configuration()
    hadoopConf.set("fs.defaultFS", hdfsRootPath)
    hadoopConf.set("dfs.client.use.datanode.hostname","true");
    val fileSystem = FileSystem.get(hadoopConf)

    val hbaseConf = HBaseConfiguration.create(hadoopConf)
    hbaseConf.set(HConstants.ZOOKEEPER_QUORUM, zookeeperQuorum)
    hbaseConf.set(TableOutputFormat.OUTPUT_TABLE, tableName)
    // If the data volume is large, raise the default limit of 32 HFiles per region per family
    hbaseConf.set("hbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily","3200")

    val hbaseConn = ConnectionFactory.createConnection(hbaseConf)
    val admin = hbaseConn.getAdmin
    val regionLocator = hbaseConn.getRegionLocator(TableName.valueOf(tableName))

    // 0. Prepare the runtime environment
    // Create the HBase table if it does not exist yet
    HbaseUtil.createTable(tableName,familyName,admin)

    // Delete the HFile output path if it already exists
    HdfsUtil.deleteFile(hFilePath,fileSystem)

    // 1. Clean the data to be written into HFiles. Row keys must be sorted, otherwise:
    // java.io.IOException: Added a key not lexically larger than previous.

    val dataFrame = spark.read.json(dataSourcePath)
      .select("sessionid", "sdkversion", "requestdate", "email", "title")

    // Drop the rowkey column and sort the remaining qualifiers lexicographically,
    // so the KeyValues within each row are emitted in the order HFiles expect
    val columns = dataFrame.columns.filter(_ != "sessionid").sorted

    // Transform each row into (rowkey, KeyValue) pairs
    val data = dataFrame.rdd.map { row =>
      val rowkey = row.getAs[String]("sessionid")
      val ik = new ImmutableBytesWritable(Bytes.toBytes(rowkey))
      val kvs = new ListBuffer[KeyValue]()
      columns.foreach { column =>
        val columnValue = row.getAs[String](column)
        val kv = new KeyValue(Bytes.toBytes(rowkey), Bytes.toBytes(familyName), Bytes.toBytes(column), Bytes.toBytes(columnValue))
        kvs.append(kv)
      }
      (ik, kvs)
    }.flatMapValues(_.iterator) // flatten to one KeyValue per record
      .sortByKey()              // global sort by row key

    // 2. Save HFiles to HDFS
    val table = hbaseConn.getTable(TableName.valueOf(tableName))
    val job = Job.getInstance(hbaseConf)
    job.setMapOutputKeyClass(classOf[ImmutableBytesWritable])
    job.setMapOutputValueClass(classOf[KeyValue])
    HFileOutputFormat2.configureIncrementalLoadMap(job, table)

    data.saveAsNewAPIHadoopFile(
      hFilePath,
      classOf[ImmutableBytesWritable],
      classOf[KeyValue],
      classOf[HFileOutputFormat2],
      hbaseConf
    )

    // 3. Bulk load the HFiles into HBase
    val bulkLoader = new LoadIncrementalHFiles(hbaseConf)
    bulkLoader.doBulkLoad(new Path(hFilePath), admin, table, regionLocator)

    hbaseConn.close()
    fileSystem.close()
    spark.stop()
  }

}
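
The code above calls two project-specific helpers, HbaseUtil.createTable and HdfsUtil.deleteFile, whose implementations are not shown in this post. The following is a minimal sketch of what they might look like; the signatures are assumptions inferred from the call sites, not the author's actual utility classes.

import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.hbase.{HColumnDescriptor, HTableDescriptor, TableName}
import org.apache.hadoop.hbase.client.Admin

object HbaseUtil {
  // Create a table with a single column family if it does not already exist
  def createTable(tableName: String, familyName: String, admin: Admin): Unit = {
    val table = TableName.valueOf(tableName)
    if (!admin.tableExists(table)) {
      val descriptor = new HTableDescriptor(table)
      descriptor.addFamily(new HColumnDescriptor(familyName))
      admin.createTable(descriptor)
    }
  }
}

object HdfsUtil {
  // Recursively delete the HFile output path if it already exists
  def deleteFile(path: String, fileSystem: FileSystem): Unit = {
    val hdfsPath = new Path(path)
    if (fileSystem.exists(hdfsPath)) {
      fileSystem.delete(hdfsPath, true)
    }
  }
}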

Bugs encountered:

1. Cause:
When generating HFiles, the keys must be sorted. Data written with Put is sorted by HBase automatically, but HFiles you build yourself are not, so you have to sort them yourself, in lexicographic order by row key, then column family, then column qualifier.

java.io.IOException: Added a key not lexically larger than previous. Current cell = 03IQiB2M1F75PyLlKSyA7YmtT18wRcLB/cf1:requestdate/1582794808636/Put/vlen=10/seqid=0, lastCell = 03IQiB2M1F75PyLlKSyA7YmtT18wRcLB/cf1:sdkversion/1582794808636/Put/vlen=11/seqid=0
	at org.apache.hadoop.hbase.io.hfile.AbstractHFileWriter.checkKey(AbstractHFileWriter.java:204)
	at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:265)
	at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:996)
	at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2$1.write(HFileOutputFormat2.java:199)
	at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2$1.write(HFileOutputFormat2.java:152)
	at org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil.write(SparkHadoopWriter.scala:358)
	at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:132)
	at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:129)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
	at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:141)
	at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
	at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:123)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
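
A robust way to avoid this error is to sort by the full key rather than only by the row key. The sketch below is not from the original post; it assumes an RDD of (ImmutableBytesWritable, KeyValue) pairs named data, as in the code above, and sorts it by (row key, column family, qualifier) before writing the HFiles.

// Sort by the composite key so KeyValues reach HFileOutputFormat2 in
// full lexicographic order, not just row-key order.
// Assumes ASCII keys, where String ordering matches HBase byte ordering.
val sortedData = data
  .map { case (ik, kv) =>
    val compositeKey = (
      Bytes.toString(ik.get()),
      Bytes.toString(kv.getFamilyArray, kv.getFamilyOffset, kv.getFamilyLength),
      Bytes.toString(kv.getQualifierArray, kv.getQualifierOffset, kv.getQualifierLength)
    )
    (compositeKey, (ik, kv))
  }
  .sortByKey()   // lexicographic sort on (rowkey, family, qualifier)
  .map(_._2)     // back to (ImmutableBytesWritable, KeyValue) for saveAsNewAPIHadoopFile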

2. Cause:
The objects shuffled between Spark stages must be serializable.

Caused by: org.apache.spark.SparkException: 
	Job aborted due to stage failure: Task 0.0 in stage 1.0 (TID 1) 
	had a not serializable result: org.apache.hadoop.hbase.io.ImmutableBytesWritable
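
The demo code already works around this by enabling the Kryo serializer when building the SparkSession. A minimal sketch of the relevant configuration is shown below; explicitly registering the HBase classes via spark.kryo.classesToRegister is optional and added here only as a suggestion, it is not in the original code.

import org.apache.hadoop.hbase.KeyValue
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("BulkLoadDemo")
  .master("local[2]")
  // ImmutableBytesWritable is not Java-serializable, so Kryo is required
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Optional: register the classes that cross stage boundaries
  .config("spark.kryo.classesToRegister",
    classOf[ImmutableBytesWritable].getName + "," + classOf[KeyValue].getName)
  .getOrCreate()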