A Detailed Guide to Configuring the Various Partition Types in Apache Hudi

1. Introduction

Apache Hudi supports several ways of partitioning a dataset: multi-field partitions, single-field partitions, date partitions, and unpartitioned datasets. Users can pick whichever scheme fits their needs. This post walks through how to configure each partition type in Hudi.

2. Partition Handling

To illustrate how Hudi handles the different partition types, assume the records written to Hudi use the following schema:

{  "type" : "record",  "name" : "HudiSchemaDemo",  "namespace" : "hoodie.HudiSchemaDemo",  "fields" : [ {    "name" : "age",    "type" : [ "long", "null" ]  }, {    "name" : "location",    "type" : [ "string", "null" ]  }, {    "name" : "name",    "type" : [ "string", "null" ]  }, {    "name" : "sex",    "type" : [ "string", "null" ]  }, {    "name" : "ts",    "type" : [ "long", "null" ]  }, {    "name" : "date",    "type" : [ "string", "null" ]  } ]}

A concrete record looks like this:

{  "name": "zhangsan",   "ts": 1574297893837,   "age": 16,   "location": "beijing",   "sex":"male",   "date":"2020/08/16"}

2.1 Single-Field Partition

A single-field partition uses one field as the partition field. It splits into two cases: a non-date field (such as location) and a date-formatted field (such as date).

2.1.1 Partitioning on a Non-Date Field

To use the location field above as the partition field, configure the Hudi write and the Hive sync as follows:

df.write().format("org.apache.hudi").                options(getQuickstartWriteConfigs()).                option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY(), "COPY_ON_WRITE").                option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY(), "ts").                option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY(), "name").                option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY(), partitionFields).                option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY(), keyGenerator).                option(TABLE_NAME, tableName).                option("hoodie.datasource.hive_sync.enable", true).                option("hoodie.datasource.hive_sync.table", tableName).                option("hoodie.datasource.hive_sync.username", "root").                option("hoodie.datasource.hive_sync.password", "123456").                option("hoodie.datasource.hive_sync.jdbcurl", "jdbc:hive2://localhost:10000").                option("hoodie.datasource.hive_sync.partition_fields", hivePartitionFields).                option("hoodie.datasource.write.table.type", "COPY_ON_WRITE").                option("hoodie.embed.timeline.server", false).                option("hoodie.datasource.hive_sync.partition_extractor_class", hivePartitionExtractorClass).                mode(saveMode).                save(basePath);

The following configuration items are worth noting (a sketch plugging these values into the snippet above follows the list):

  • DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY() is set to location;
  • hoodie.datasource.hive_sync.partition_fields is set to location, the same as the partition field written to Hudi;
  • DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY() is set to org.apache.hudi.keygen.SimpleKeyGenerator, or can be left unset, since org.apache.hudi.keygen.SimpleKeyGenerator is the default;
  • hoodie.datasource.hive_sync.partition_extractor_class is set to org.apache.hudi.hive.MultiPartKeysValueExtractor.
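
A minimal sketch plugging these values into the placeholder variables of the write snippet above (the variable names come from that snippet):

String partitionFields = "location";
String keyGenerator = "org.apache.hudi.keygen.SimpleKeyGenerator"; // also the default
String hivePartitionFields = "location";
String hivePartitionExtractorClass = "org.apache.hudi.hive.MultiPartKeysValueExtractor";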

The table that the Hudi sync creates in Hive looks like this:

CREATE EXTERNAL TABLE `notdateformatsinglepartitiondemo`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `age` bigint,
  `date` string,
  `name` string,
  `sex` string,
  `ts` bigint)
PARTITIONED BY (
  `location` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'file:/tmp/hudi-partitions/notDateFormatSinglePartitionDemo'
TBLPROPERTIES (
  'last_commit_time_sync'='20200816154250',
  'transient_lastDdlTime'='1597563780')

Query the table notdateformatsinglepartitiondemo:

Tip: before querying, put the hudi-hive-sync-bundle-xxx.jar into $HIVE_HOME/lib.

[Figure: query results for notdateformatsinglepartitiondemo]

2.1.2 Partitioning on a Date Field

To use the date field above as the partition field, the key configuration items are as follows (a sketch follows the list):

  • DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY() is set to date;
  • hoodie.datasource.hive_sync.partition_fields is set to date, the same as the partition field written to Hudi;
  • DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY() is set to org.apache.hudi.keygen.SimpleKeyGenerator, or can be left unset, since it is the default;
  • hoodie.datasource.hive_sync.partition_extractor_class is set to org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor.
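
With these settings the sample record's date value "2020/08/16" becomes the partition path 2020/08/16, and SlashEncodedDayPartitionValueExtractor parses that slash-encoded day path back into a single Hive partition value (in yyyy-MM-dd form, to my understanding). A minimal sketch using the placeholder variables from the earlier snippet:

String partitionFields = "date";
String keyGenerator = "org.apache.hudi.keygen.SimpleKeyGenerator"; // also the default
String hivePartitionFields = "date";
String hivePartitionExtractorClass = "org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor";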

The table that the Hudi sync creates in Hive looks like this:

CREATE EXTERNAL TABLE `dateformatsinglepartitiondemo`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `age` bigint,
  `location` string,
  `name` string,
  `sex` string,
  `ts` bigint)
PARTITIONED BY (
  `date` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'file:/tmp/hudi-partitions/dateFormatSinglePartitionDemo'
TBLPROPERTIES (
  'last_commit_time_sync'='20200816155107',
  'transient_lastDdlTime'='1597564276')

Query the table dateformatsinglepartitiondemo:

[Figure: query results for dateformatsinglepartitiondemo]

2.2 Multi-Field Partition

A multi-field partition uses several fields as partition fields, e.g. the location and sex fields above. The key configuration items are as follows (a sketch follows the list):

  • DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY() is set to location,sex;
  • hoodie.datasource.hive_sync.partition_fields is set to location,sex, the same as the partition fields written to Hudi;
  • DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY() is set to org.apache.hudi.keygen.ComplexKeyGenerator;
  • hoodie.datasource.hive_sync.partition_extractor_class is set to org.apache.hudi.hive.MultiPartKeysValueExtractor.
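
Each record then lands under a two-level path built from both field values, e.g. beijing/male for the sample record. Plugged into the placeholder variables from the earlier snippet:

String partitionFields = "location,sex";
String keyGenerator = "org.apache.hudi.keygen.ComplexKeyGenerator";
String hivePartitionFields = "location,sex";
String hivePartitionExtractorClass = "org.apache.hudi.hive.MultiPartKeysValueExtractor";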

The table that the Hudi sync creates in Hive looks like this:

CREATE EXTERNAL TABLE `multipartitiondemo`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `age` bigint,
  `date` string,
  `name` string,
  `ts` bigint)
PARTITIONED BY (
  `location` string,
  `sex` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'file:/tmp/hudi-partitions/multiPartitionDemo'
TBLPROPERTIES (
  'last_commit_time_sync'='20200816160557',
  'transient_lastDdlTime'='1597565166')

Query the table multipartitiondemo:

[Figure: query results for multipartitiondemo]

2.3 Non-Partitioned

The non-partitioned case has no partition field at all: the dataset written to Hudi is unpartitioned. The key configuration items are as follows (a sketch follows the list):

  • DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY() is set to an empty string;
  • hoodie.datasource.hive_sync.partition_fields is set to an empty string, the same as the partition field written to Hudi;
  • DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY() is set to org.apache.hudi.keygen.NonpartitionedKeyGenerator;
  • hoodie.datasource.hive_sync.partition_extractor_class is set to org.apache.hudi.hive.NonPartitionedExtractor.
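
A minimal sketch with the placeholder variables from the earlier snippet (the partition-related values are simply empty):

String partitionFields = "";
String keyGenerator = "org.apache.hudi.keygen.NonpartitionedKeyGenerator";
String hivePartitionFields = "";
String hivePartitionExtractorClass = "org.apache.hudi.hive.NonPartitionedExtractor";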

The table that the Hudi sync creates in Hive looks like this:

CREATE EXTERNAL TABLE `nonpartitiondemo`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `age` bigint,
  `date` string,
  `location` string,
  `name` string,
  `sex` string,
  `ts` bigint)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'file:/tmp/hudi-partitions/nonPartitionDemo'
TBLPROPERTIES (
  'last_commit_time_sync'='20200816161558',
  'transient_lastDdlTime'='1597565767')

Query the table nonpartitiondemo:

[Figure: query results for nonpartitiondemo]

2.4 Hive-Style Partition

Besides the common schemes above, there is also the Hive-style partition layout, e.g. location=beijing/sex=male, again with location and sex as the partition fields. The key configuration items are as follows (a sketch follows the list):

  • DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY() is set to location,sex;
  • hoodie.datasource.hive_sync.partition_fields is set to location,sex, the same as the partition fields written to Hudi;
  • DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY() is set to org.apache.hudi.keygen.ComplexKeyGenerator;
  • hoodie.datasource.hive_sync.partition_extractor_class is set to org.apache.hudi.hive.MultiPartKeysValueExtractor, as in the multi-field case;
  • DataSourceWriteOptions.HIVE_STYLE_PARTITIONING_OPT_KEY() is set to true.
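
A minimal sketch with the placeholder variables from the earlier snippet, plus the one extra hive-style switch:

String partitionFields = "location,sex";
String keyGenerator = "org.apache.hudi.keygen.ComplexKeyGenerator";
String hivePartitionFields = "location,sex";
String hivePartitionExtractorClass = "org.apache.hudi.hive.MultiPartKeysValueExtractor";
// appended to the option chain of the write snippet in section 2.1.1:
// .option(DataSourceWriteOptions.HIVE_STYLE_PARTITIONING_OPT_KEY(), "true")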

The resulting Hudi dataset directory layout then has the following form:

/location=beijing/sex=male

The table that the Hudi sync creates in Hive looks like this:

CREATE EXTERNAL TABLE `hivestylepartitiondemo`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `age` bigint,
  `date` string,
  `name` string,
  `ts` bigint)
PARTITIONED BY (
  `location` string,
  `sex` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'file:/tmp/hudi-partitions/hiveStylePartitionDemo'
TBLPROPERTIES (
  'last_commit_time_sync'='20200816172710',
  'transient_lastDdlTime'='1597570039')

Query the table hivestylepartitiondemo:

[Figure: query results for hivestylepartitiondemo]

3. Summary

This post has shown how Hudi handles the different partitioning scenarios. The partition configurations above cover the vast majority of use cases, but Hudi is flexible enough to support custom partition parsers as well; see the KeyGenerator and PartitionValueExtractor classes for details. Every partition-field generator used when writing to Hudi is a subclass of KeyGenerator, and every partition-value parser used when syncing to Hive is an implementation of PartitionValueExtractor.
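
As an illustration, a custom Hive-sync parser is just such an implementation. The sketch below is hypothetical: the class name and the yyyy/MM layout are invented for this example, and it assumes PartitionValueExtractor is the single-method interface implemented by the extractors used throughout this post.

import java.util.Collections;
import java.util.List;

import org.apache.hudi.hive.PartitionValueExtractor;

// Hypothetical extractor: maps a slash-encoded month path like "2020/08"
// to the single Hive partition value "2020-08".
public class SlashEncodedMonthPartitionValueExtractor implements PartitionValueExtractor {
  @Override
  public List<String> extractPartitionValuesInPath(String partitionPath) {
    String[] parts = partitionPath.split("/");
    if (parts.length != 2) {
      throw new IllegalArgumentException(
          "Partition path " + partitionPath + " is not in the expected yyyy/MM form");
    }
    return Collections.singletonList(parts[0] + "-" + parts[1]);
  }
}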
