Hudi Integration with Spark: The DeltaStreamer Ingestion Tool

Tool Overview

The HoodieDeltaStreamer utility (part of hudi-utilities-bundle) provides a way to ingest data from different sources such as DFS or Kafka, with the following capabilities:

  • Exactly-once ingestion of new data from Kafka, plus incremental imports from Sqoop or HiveIncrementalPuller output, or from files in a DFS folder.
  • Ingested data can be JSON, Avro, or a custom record type.
  • Manages checkpoints, rollback, and recovery.
  • Uses Avro schemas stored on DFS or in a Confluent schema registry.
  • Supports custom transformation operations (a minimal sketch follows below).
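
A custom transformation is a class implementing the Transformer interface from hudi-utilities, wired into DeltaStreamer via --transformer-class. Below is a minimal sketch, assuming the interface signature shipped with hudi-utilities 0.12; the class name and the added column are illustrative only and not part of the setup in this article:

import org.apache.hudi.common.config.TypedProperties;
import org.apache.hudi.utilities.transform.Transformer;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.current_timestamp;

// Hypothetical transformer: tag every incoming row with an ingestion timestamp
// before DeltaStreamer writes it to the Hudi table.
public class AddIngestTimeTransformer implements Transformer {
    @Override
    public Dataset<Row> apply(JavaSparkContext jsc, SparkSession sparkSession,
                              Dataset<Row> rowDataset, TypedProperties properties) {
        return rowDataset.withColumn("ingest_time", current_timestamp());
    }
}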

Run the following command to view the help documentation:

spark-submit \
    --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
    /opt/software/hudi-0.12.0/packaging/hudi-utilities-bundle/target/hudi-utilities-bundle_2.12-0.12.0.jar --help

Schema provider and source configuration options are documented under: Streaming Ingestion | Apache Hudi

The walkthrough below uses the file-based schema provider and JsonKafkaSource as an example.

Prepare Kafka Data

  • Start the Kafka cluster and create a topic for testing

    bin/kafka-topics.sh --bootstrap-server hadoop1:9092 --create --topic hudi_test
    
  • Prepare Java producer code that sends test data to the topic (a quick verification command follows the code)

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>2.4.1</version>
    </dependency>
    
    <!-- fastjson <= 1.2.80 has known security vulnerabilities, so use 1.2.83 -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.83</version>
    </dependency>
    
    import com.alibaba.fastjson.JSONObject;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.util.Properties;
    import java.util.Random;
    
    public class TestKafkaProducer {
        public static void main(String[] args) {
            // Producer tuning: acks from all in-sync replicas (acks=-1), 1 MB batches,
            // 5 ms linger, snappy compression, 32 MB buffer memory
            Properties props = new Properties();
            props.put("bootstrap.servers", "hadoop1:9092,hadoop2:9092,hadoop3:9092");
            props.put("acks", "-1");
            props.put("batch.size", "1048576");
            props.put("linger.ms", "5");
            props.put("compression.type", "snappy");
            props.put("buffer.memory", "33554432");
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
            Random random = new Random();
            // Send 1,000 JSON records to the hudi_test topic; "partition" gets a random value in [0, 100)
            for (int i = 0; i < 1000; i++) {
                JSONObject model = new JSONObject();
                model.put("userid", i);
                model.put("username", "name" + i);
                model.put("age", 18);
                model.put("partition", random.nextInt(100));
                producer.send(new ProducerRecord<String, String>("hudi_test", model.toJSONString()));
            }
            producer.flush();
            producer.close();
        }
    }
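
    To confirm the records actually landed in the topic, consume a few of them with Kafka's console consumer (same broker address as above):

    bin/kafka-console-consumer.sh --bootstrap-server hadoop1:9092 --topic hudi_test --from-beginning --max-messages 5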
    

准备配置文件

(1) Define the Avro schema files (source and target)

mkdir /opt/module/hudi-props/
vim /opt/module/hudi-props/source-schema-json.avsc
{
  "type": "record",
  "name": "Profiles",
  "fields": [
    {
      "name": "userid",
      "type": [ "null", "string" ],
      "default": null
    },
    {
      "name": "username",
      "type": [ "null", "string" ],
      "default": null
    },
    {
      "name": "age",
      "type": [ "null", "string" ],
      "default": null
    },
    {
      "name": "partition",
      "type": [ "null", "string" ],
      "default": null
    }
  ]
}
 
cp source-schema-json.avsc target-schema-json.avsc

(2) Copy Hudi's base.properties configuration

cp /opt/software/hudi-0.12.0/hudi-utilities/src/test/resources/delta-streamer-config/base.properties /opt/module/hudi-props/

(3) Write your own Kafka source configuration file, based on the template provided in the source tree

cp /opt/software/hudi-0.12.0/hudi-utilities/src/test/resources/delta-streamer-config/kafka-source.properties /opt/module/hudi-props/

vim /opt/module/hudi-props/kafka-source.properties 

include=hdfs://hadoop1:8020/hudi-props/base.properties

# Key fields, for kafka example
hoodie.datasource.write.recordkey.field=userid
hoodie.datasource.write.partitionpath.field=partition

# schema provider configs
hoodie.deltastreamer.schemaprovider.source.schema.file=hdfs://hadoop1:8020/hudi-props/source-schema-json.avsc
hoodie.deltastreamer.schemaprovider.target.schema.file=hdfs://hadoop1:8020/hudi-props/target-schema-json.avsc

# Kafka Source
hoodie.deltastreamer.source.kafka.topic=hudi_test

#Kafka props
bootstrap.servers=hadoop1:9092,hadoop2:9092,hadoop3:9092
auto.offset.reset=earliest
group.id=test-group

(4) Upload the configuration files to HDFS

hadoop fs -put /opt/module/hudi-props/ /
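
You can confirm the upload with:

hadoop fs -ls /hudi-props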

Copy the Required Jar into Spark

cp /opt/software/hudi-0.12.0/packaging/hudi-utilities-bundle/target/hudi-utilities-bundle_2.12-0.12.0.jar /opt/module/spark-3.2.2/jars/

hudi-utilities-bundle_2.12-0.12.0.jar must be placed under Spark's jars directory; otherwise the job fails with errors about missing classes and methods.

Run the Ingestion Command

spark-submit \
    --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer  \
    /opt/module/spark-3.2.2/jars/hudi-utilities-bundle_2.12-0.12.0.jar \
    --props hdfs://hadoop1:8020/hudi-props/kafka-source.properties \
    --schemaprovider-class org.apache.hudi.utilities.schema.FilebasedSchemaProvider  \
    --source-class org.apache.hudi.utilities.sources.JsonKafkaSource  \
    --source-ordering-field userid \
    --target-base-path hdfs://hadoop1:8020/tmp/hudi/hudi_test  \
    --target-table hudi_test \
    --op BULK_INSERT \
    --table-type MERGE_ON_READ
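
The command above performs a single ingestion round over whatever is currently in the topic and then exits. DeltaStreamer also supports a continuous mode for ongoing ingestion; below is a sketch of the same command adapted for it, assuming the --continuous flag and UPSERT semantics described in the upstream DeltaStreamer docs:

spark-submit \
    --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer  \
    /opt/module/spark-3.2.2/jars/hudi-utilities-bundle_2.12-0.12.0.jar \
    --props hdfs://hadoop1:8020/hudi-props/kafka-source.properties \
    --schemaprovider-class org.apache.hudi.utilities.schema.FilebasedSchemaProvider  \
    --source-class org.apache.hudi.utilities.sources.JsonKafkaSource  \
    --source-ordering-field userid \
    --target-base-path hdfs://hadoop1:8020/tmp/hudi/hudi_test  \
    --target-table hudi_test \
    --op UPSERT \
    --table-type MERGE_ON_READ \
    --continuous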

Check the Ingestion Result

(1) Start spark-sql

spark-sql \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
  --conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog' \
  --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'

(2) Create a Hudi table over the specified location

use spark_hudi;
 
create table hudi_test using hudi
location 'hdfs://hadoop1:8020/tmp/hudi/hudi_test';

(3) Query the Hudi table

select * from hudi_test;
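
As a quick sanity check, the row count should match what the producer wrote (1,000 records in the example above):

select count(1) from hudi_test;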