Hudi Learning 6: Writing to Hudi in Real Time with Structured Streaming

This article walks through consuming data from Kafka with Structured Streaming and writing it to Hudi in real time. It first outlines the overall architecture, then covers creating a Kafka producer, simulating data production, and inspecting the data in Kafka. It then focuses on the Structured Streaming implementation, including the code and verification that the data lands in Hudi. During verification, the Hudi files are continuously compacted, but the distribution of log files differs between partitions.

I. Overall Architecture
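
The end-to-end flow: a Java producer writes JSON records into the hudi_kafka topic, a Structured Streaming job consumes the topic and upserts each micro-batch into a Hudi MERGE_ON_READ table on HDFS, and the table is synced to Hive for querying.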

II. Create a Kafka Producer and Simulate Data Production

1. Create the hudi_kafka topic

bin/kafka-topics.sh --zookeeper 192.168.74.100:2181 --create --replication-factor 1 --partitions 3 --topic hudi_kafka
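
To confirm the topic was created with 3 partitions, the same script can describe it (same ZooKeeper address as above):

bin/kafka-topics.sh --zookeeper 192.168.74.100:2181 --describe --topic hudi_kafka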

2. Simulate data production

import com.alibaba.fastjson.JSONObject;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

/**
 * @author oyl
 * @create 2022-06-18 17:35
 * @Description Simulate producing data into Kafka
 */
public class HudiProducer {

    public static void main(String[] args) {

        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop100:9092,hadoop101:9092,hadoop102:9092");
        props.put("acks", "-1");
        props.put("batch.size", "1048576");
        props.put("linger.ms", "5");
        props.put("compression.type", "snappy");
        props.put("buffer.memory", "33554432");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        for (int i = 0; i < 1000; i++) {
            // Build one JSON record per user id
            JSONObject model = new JSONObject();
            model.put("userid", i);
            model.put("username", "name" + i);
            model.put("age", 18);
            model.put("partition", "20210808");
            System.out.println("Record " + i);
            producer.send(new ProducerRecord<String, String>("hudi_kafka", model.toJSONString()));
        }
        producer.flush();
        producer.close();
    }
}

3. Add the pom dependency

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql-kafka-0-10_2.12</artifactId>
    <!--            <scope>provided</scope>-->
    <version>${spark.version}</version>
</dependency>
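
Besides spark-sql-kafka, the producer code and the fastjson parsing in the streaming job need the Kafka client and fastjson artifacts, and the Hudi write path needs the Hudi Spark bundle. A sketch of these dependencies follows; the version properties are placeholders for your own versions, and the Hudi bundle artifact name varies across Hudi/Spark releases:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>${kafka.version}</version>
</dependency>
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>${fastjson.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hudi</groupId>
    <artifactId>hudi-spark-bundle_2.12</artifactId>
    <version>${hudi.version}</version>
</dependency>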

4. View the Kafka data

bin/kafka-console-consumer.sh --bootstrap-server 192.168.74.100:9092 --topic hudi_kafka --from-beginning
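
Based on the producer above, each message is a small JSON document and should look roughly like this (fastjson does not guarantee field order):

{"userid":0,"username":"name0","age":18,"partition":"20210808"}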

III. Structured Streaming Consumes Kafka Data and Writes to Hudi

1. Code

import com.alibaba.fastjson.{JSON, JSONObject}
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.hive.MultiPartKeysValueExtractor
import org.apache.spark.SparkConf
import org.apache.spark.sql.{Dataset, SaveMode, SparkSession}

/**
  * @author oyl
  * @create 2022-06-18 18:24
  * @Description Consume Kafka data with Structured Streaming and write it to Hudi
  */
object StructuredStreamingToHudi {

  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf()
      .setAppName("StructuredStreamingToHudi")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .setMaster("local[2]")
      //.set("spark.sql.shuffle.partitions", "2")

    val sparkSession = SparkSession.builder().config(sparkConf).enableHiveSupport().getOrCreate()

    // Read from Kafka as a streaming DataFrame
    val df = sparkSession.readStream.format("kafka")
      .option("kafka.bootstrap.servers", "hadoop100:9092,hadoop101:9092,hadoop102:9092")
      .option("subscribe", "hudi_kafka")
      .option("startingOffsets", "earliest")
      .option("maxOffsetsPerTrigger", "1000")
      .load()

    import sparkSession.implicits._

    val tableName = "kafka_hudi_mor_hive"
    val basePath = "/datas/hudi-warehouse/kafka_hudi_mor_hive"

    val query = df.selectExpr("cast (value as string)").as[String]
      .map(item => {
        // Parse the JSON payload into a typed Model
        val jsonObj: JSONObject = JSON.parseObject(item)
        val userid = jsonObj.getString("userid").toLong
        val username = jsonObj.getString("username")
        val age = jsonObj.getString("age").toInt
        val partition = jsonObj.getString("partition").toLong
        val ts = System.currentTimeMillis()

        new Model(userid, username, age, partition, ts)
      }).writeStream.foreachBatch { (batchDF: Dataset[Model], batchid: Long) =>
      batchDF.write.format("hudi")
        .option(TABLE_TYPE.key(), MOR_TABLE_TYPE_OPT_VAL)   // table type: MERGE_ON_READ
        .option(RECORDKEY_FIELD.key(), "userid")            // record key (primary key)
        .option(PRECOMBINE_FIELD.key(), "ts")               // precombine field (update timestamp)
        .option(PARTITIONPATH_FIELD.key(), "partition")     // Hudi partition column
        .option("hoodie.table.name", tableName)             // Hudi table name

        .option("hoodie.datasource.hive_sync.jdbcurl", "jdbc:hive2://hadoop100:10000") // HiveServer2 JDBC URL
        .option("hoodie.datasource.hive_sync.username", "oyl")                         // HiveServer2 user
        .option("hoodie.datasource.hive_sync.password", "123123")                      // HiveServer2 password
        .option("hoodie.datasource.hive_sync.database", "hudi_hive")                   // Hive database to sync into
        .option("hoodie.datasource.hive_sync.table", tableName)                        // Hive table to sync into
        .option("hoodie.datasource.hive_sync.partition_fields", "partition")           // partition column of the synced Hive table
        .option("hoodie.datasource.hive_sync.partition_extractor_class", classOf[MultiPartKeysValueExtractor].getName) // partition extractor: splits the partition path on "/"
        .option("hoodie.datasource.hive_sync.enable", "true")                          // register and sync the dataset to Hive
        .option("hoodie.insert.shuffle.parallelism", "2")
        .option("hoodie.upsert.shuffle.parallelism", "2")
        .mode(SaveMode.Append)
        .save(basePath)
    }.option("checkpointLocation", "/datas/checkpoint/kafka_hudi_mor_hive")
      //.trigger(Trigger.ProcessingTime(5, TimeUnit.MINUTES))
      .start()
    query.awaitTermination()

  }

  case class Model(
                    userid: Long,
                    username: String,
                    age: Int,
                    partition: Long,
                    ts: Long
                  )
}
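
Before checking Hive, the write can also be verified from a spark-shell with a snapshot read of the table path. A minimal sketch, using the same basePath as the job above (on older Hudi releases you may need to append a glob such as /*/* to the path):

// Snapshot read of the Hudi table written by the streaming job
val hudiDF = spark.read.format("hudi")
  .load("/datas/hudi-warehouse/kafka_hudi_mor_hive")
hudiDF.select("userid", "username", "age", "partition", "ts").show(10, truncate = false)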

2. Verify the data

1) HDFS files: the data keeps growing and the files keep being compacted

The data under the 20220609 partition is continuously compacted.

However, with the MOR table type, the 20220609 partition contains no log files (I am not sure why), while the 20220608 partition does contain log files.
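
To check this yourself, list the partition directories under the table base path; the exact file names will differ, but MOR partitions hold parquet base files plus .log files once updates arrive:

hdfs dfs -ls /datas/hudi-warehouse/kafka_hudi_mor_hive/20220608
hdfs dfs -ls /datas/hudi-warehouse/kafka_hudi_mor_hive/20220609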

2) Query the data in Hive
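
For a MOR table, Hive sync usually registers two views of the table: a read-optimized one and a real-time one. With the settings above they should show up in the hudi_hive database roughly as follows (the _ro/_rt suffixes, and whether the hudi-hadoop-mr jar is needed on the Hive side, depend on your Hudi and Hive versions):

USE hudi_hive;
SHOW TABLES;
-- read-optimized view: base parquet files only
SELECT userid, username, age, ts FROM kafka_hudi_mor_hive_ro LIMIT 10;
-- real-time view: base files merged with log files
SELECT userid, username, age, ts FROM kafka_hudi_mor_hive_rt LIMIT 10;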
