Hudi Clustering Implementation Analysis

1. Introduction

When writing data, Hudi tends to favor small files so that writes stay highly parallel and data becomes queryable as soon as possible; too many small files, however, hurt query performance. To balance write speed against query performance, Hudi provides a data-reorganization mechanism called clustering, which is mainly used to merge the files of CopyOnWrite tables.
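
As a quick orientation before diving into the source, the snippet below is a minimal sketch (not taken from the Hudi codebase) of enabling inline clustering on a CopyOnWrite table through Spark datasource options; the config keys follow Hudi 0.x and the values are illustrative, not recommendations.

// Illustrative sketch: enable inline clustering via Spark datasource options (Hudi 0.x keys)
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

public class InlineClusteringExample {
  public static void writeWithInlineClustering(Dataset<Row> df, String basePath) {
    df.write().format("hudi")
        .option("hoodie.table.name", "demo_cow")
        .option("hoodie.datasource.write.recordkey.field", "id")
        .option("hoodie.datasource.write.partitionpath.field", "dt")
        // schedule and execute clustering inline once every 4 commits
        .option("hoodie.clustering.inline", "true")
        .option("hoodie.clustering.inline.max.commits", "4")
        // base files below 300 MB are clustering candidates
        .option("hoodie.clustering.plan.strategy.small.file.limit", String.valueOf(300L * 1024 * 1024))
        // target size of the rewritten files
        .option("hoodie.clustering.plan.strategy.target.file.max.bytes", String.valueOf(1024L * 1024 * 1024))
        .mode(SaveMode.Append)
        .save(basePath);
  }
}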

2. Source Code Analysis

Hudi's clustering operation is divided into two phases: generating a HoodieClusteringPlan and executing it. The former groups the files of each partition to produce the plan; the latter carries out the actual merge according to that plan.

2.1 Generating the HoodieClusteringPlan

All plan generation in Hudi is driven by scheduleTableServiceInternal in BaseHoodieTableServiceClient, which covers the archive/cluster/compact/log_compact/clean table services.

// org/apache/hudi/client/BaseHoodieTableServiceClient.java
  protected Option<String> scheduleTableServiceInternal(String instantTime, Option<Map<String, String>> extraMetadata,
                                                        TableServiceType tableServiceType) {
    if (!tableServicesEnabled(config)) {
      return Option.empty();
    }

    Option<String> option = Option.empty();
    HoodieTable<?, ?, ?, ?> table = createTable(config, hadoopConf);

    // Entry point for Flink/Spark to schedule table services
    switch (tableServiceType) {
      case ARCHIVE:
        LOG.info("Scheduling archiving is not supported. Skipping.");
        break;
      case CLUSTER:
        LOG.info("Scheduling clustering at instant time :" + instantTime);
        Option<HoodieClusteringPlan> clusteringPlan = table
            .scheduleClustering(context, instantTime, extraMetadata);
        option = clusteringPlan.isPresent() ? Option.of(instantTime) : Option.empty();
        break;
      case COMPACT:
        LOG.info("Scheduling compaction at instant time :" + instantTime);
        Option<HoodieCompactionPlan> compactionPlan = table
            .scheduleCompaction(context, instantTime, extraMetadata);
        option = compactionPlan.isPresent() ? Option.of(instantTime) : Option.empty();
        break;
      case LOG_COMPACT:
        LOG.info("Scheduling log compaction at instant time :" + instantTime);
        Option<HoodieCompactionPlan> logCompactionPlan = table
            .scheduleLogCompaction(context, instantTime, extraMetadata);
        option = logCompactionPlan.isPresent() ? Option.of(instantTime) : Option.empty();
        break;
      case CLEAN:
        LOG.info("Scheduling cleaning at instant time :" + instantTime);
        Option<HoodieCleanerPlan> cleanerPlan = table
            .scheduleCleaning(context, instantTime, extraMetadata);
        option = cleanerPlan.isPresent() ? Option.of(instantTime) : Option.empty();
        break;
      default:
        throw new IllegalArgumentException("Invalid TableService " + tableServiceType);
    }

    Option<String> instantRange = delegateToTableServiceManager(tableServiceType, table);
    if (instantRange.isPresent()) {
      LOG.info("Delegate instant [" + instantRange.get() + "] to table service manager");
    }

    return option;
  }

scheduleClustering eventually reaches the ClusteringPlanActionExecutor, which is responsible for producing the HoodieClusteringPlan. createClusteringPlan tries to generate the plan; if one is produced, the executor updates the Timeline by adding a replacecommit.requested instant.

// org/apache/hudi/table/action/cluster/ClusteringPlanActionExecutor.java
  public Option<HoodieClusteringPlan> execute() {
    // Create the HoodieClusteringPlan
    Option<HoodieClusteringPlan> planOption = createClusteringPlan();
    if (planOption.isPresent()) {
      HoodieInstant clusteringInstant =
          new HoodieInstant(HoodieInstant.State.REQUESTED, HoodieTimeline.REPLACE_COMMIT_ACTION, instantTime);
      try {
        HoodieRequestedReplaceMetadata requestedReplaceMetadata = HoodieRequestedReplaceMetadata.newBuilder()
            .setOperationType(WriteOperationType.CLUSTER.name())
            .setExtraMetadata(extraMetadata.orElse(Collections.emptyMap()))
            .setClusteringPlan(planOption.get())
            .build();
        // When a plan is generated, persist the .replacecommit.requested instant
        table.getActiveTimeline().saveToPendingReplaceCommit(clusteringInstant,
            TimelineMetadataUtils.serializeRequestedReplaceMetadata(requestedReplaceMetadata));
      } catch (IOException ioe) {
        throw new HoodieIOException("Exception scheduling clustering", ioe);
      }
    }

    return planOption;
  }

createClusteringPlan first looks up the latest clustering instant on the Timeline (lastClusteringInstant) and counts the commits completed since then (commitsSinceLastClustering). It then runs a series of checks; if the conditions for generating a HoodieClusteringPlan are not met, it returns Option.empty. For example, with hoodie.clustering.inline.max.commits = 4, an inline plan is scheduled only after at least four commits have completed since the last clustering. If the conditions are met, the configured ClusteringPlanStrategy is loaded via reflection (SparkSizeBasedClusteringPlanStrategy by default) and used to generate the HoodieClusteringPlan.

// org/apache/hudi/table/action/cluster/ClusteringPlanActionExecutor.java
  protected Option<HoodieClusteringPlan> createClusteringPlan() {
    LOG.info("Checking if clustering needs to be run on " + config.getBasePath());
    // The latest instant generated by a REPLACE_COMMIT_ACTION
    Option<HoodieInstant> lastClusteringInstant = table.getActiveTimeline()
        .filter(s -> s.getAction().equalsIgnoreCase(HoodieTimeline.REPLACE_COMMIT_ACTION)).lastInstant();

    // Number of completed commits on the Timeline after the latest clustering instant
    int commitsSinceLastClustering = table.getActiveTimeline().getCommitsTimeline().filterCompletedInstants()
        .findInstantsAfter(lastClusteringInstant.map(HoodieInstant::getTimestamp).orElse("0"), Integer.MAX_VALUE)
        .countInstants();

    // A series of checks that decide whether a clustering plan should be generated
    if (config.inlineClusteringEnabled() && config.getInlineClusterMaxCommits() > commitsSinceLastClustering) {
      LOG.info("Not scheduling inline clustering as only " + commitsSinceLastClustering
          + " commits was found since last clustering " + lastClusteringInstant + ". Waiting for "
          + config.getInlineClusterMaxCommits());
      return Option.empty();
    }

    if (config.isAsyncClusteringEnabled() && config.getAsyncClusterMaxCommits() > commitsSinceLastClustering) {
      LOG.info("Not scheduling async clustering as only " + commitsSinceLastClustering
          + " commits was found since last clustering " + lastClusteringInstant + ". Waiting for "
          + config.getAsyncClusterMaxCommits());
      return Option.empty();
    }

    // Load the clustering plan strategy, SparkSizeBasedClusteringPlanStrategy by default
    LOG.info("Generating clustering plan for table " + config.getBasePath());
    ClusteringPlanStrategy strategy = (ClusteringPlanStrategy) ReflectionUtils.loadClass(
        ClusteringPlanStrategy.checkAndGetClusteringPlanStrategy(config),
            new Class<?>[] {HoodieTable.class, HoodieEngineContext.class, HoodieWriteConfig.class}, table, context, config);

    // Generate the ClusteringPlan
    return strategy.generateClusteringPlan();
  }

Generating the HoodieClusteringPlan has two key steps. The first step selects the partitions that satisfy the clustering conditions: getAllPartitionPaths lists all partitions from the table metadata, and filterPartitionPaths then filters them. Hudi currently supports four partition filter modes, NONE/RECENT_DAYS/SELECTED_PARTITIONS/DAY_ROLLING, with NONE (no filtering) as the default. The second step iterates over each selected partition and groups the files that can be clustered, limiting how much data each group has to process.
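
Which partitions the first step keeps is driven by a handful of plan-strategy options; the sketch below (not Hudi source, key names follow Hudi 0.x and should be checked against your version) shows the partition-filter knobs, followed by the actual generateClusteringPlan implementation.

// Illustrative sketch: partition-filter options for the clustering plan strategy (Hudi 0.x keys)
import org.apache.hudi.common.config.TypedProperties;

public class ClusteringPartitionFilterExample {
  public static TypedProperties recentDaysFilter() {
    TypedProperties props = new TypedProperties();
    // NONE (default) | RECENT_DAYS | SELECTED_PARTITIONS | DAY_ROLLING
    props.setProperty("hoodie.clustering.plan.partition.filter.mode", "RECENT_DAYS");
    // RECENT_DAYS: cluster the N most recent day-based partitions, optionally skipping the latest ones
    props.setProperty("hoodie.clustering.plan.strategy.daybased.lookback.partitions", "2");
    props.setProperty("hoodie.clustering.plan.strategy.daybased.skipfromlatest.partitions", "0");
    // Alternatively, list partitions explicitly; this is checked before any filter mode is applied:
    // props.setProperty("hoodie.clustering.plan.strategy.partition.selected", "dt=2024-01-01,dt=2024-01-02");
    return props;
  }
}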

// org/apache/hudi/table/action/cluster/strategy/PartitionAwareClusteringPlanStrategy.java
  public Option<HoodieClusteringPlan> generateClusteringPlan() {
    if (!checkPrecondition()) {
      return Option.empty();
    }

    HoodieTableMetaClient metaClient = getHoodieTable().getMetaClient();
    LOG.info("Scheduling clustering for " + metaClient.getBasePath());
    HoodieWriteConfig config = getWriteConfig();

    // Partitions configured via hoodie.clustering.plan.strategy.partition.selected
    String partitionSelected = config.getClusteringPartitionSelected();
    LOG.info("Scheduling clustering partitionSelected: " + partitionSelected);
    List<String> partitionPaths;

    if (StringUtils.isNullOrEmpty(partitionSelected)) {
      // If no partitions are configured, scan the hoodie basePath to list all partitions
      // get matched partitions if set
      partitionPaths = getRegexPatternMatchedPartitions(config, FSUtils.getAllPartitionPaths(getEngineContext(), config.getMetadataConfig(), metaClient.getBasePath()));
      // filter the partition paths if needed to reduce list status
    } else {
      partitionPaths = Arrays.asList(partitionSelected.split(","));
    }

    // Filter the partitions according to the configured filter mode; no filtering by default
    partitionPaths = filterPartitionPaths(partitionPaths);
    LOG.info("Scheduling clustering partitionPaths: " + partitionPaths);

    if (partitionPaths.isEmpty()) {
      // In case no partitions could be picked, return no clustering plan
      return Option.empty();
    }

    // Traverse each partition and build the HoodieClusteringGroup list
    // Both the groups per partition and the total groups across partitions are capped by hoodie.clustering.plan.strategy.max.num.groups
    List<HoodieClusteringGroup> clusteringGroups = getEngineContext()
        .flatMap(
            partitionPaths,
            partitionPath -> {
              // Collect all FileSlices eligible for clustering
              List<FileSlice> fileSlicesEligible = getFileSlicesEligibleForClustering(partitionPath).collect(Collectors.toList());
              // Build the ClusteringGroups for this partition
              return buildClusteringGroupsForPartition(partitionPath, fileSlicesEligible).limit(getWriteConfig().getClusteringMaxNumGroups());
            },
            partitionPaths.size())
        .stream()
        .limit(getWriteConfig().getClusteringMaxNumGroups())
        .collect(Collectors.toList());

    // Return empty if no clustering groups were produced
    if (clusteringGroups.isEmpty()) {
      LOG.info("No data available to cluster");
      return Option.empty();
    }

    HoodieClusteringStrategy strategy = HoodieClusteringStrategy.newBuilder()
        .setStrategyClassName(getWriteConfig().getClusteringExecutionStrategyClass())
        .setStrategyParams(getStrategyParams())
        .build();

    return Option.of(HoodieClusteringPlan.newBuilder()
        .setStrategy(strategy)
        .setInputGroups(clusteringGroups)
        .setExtraMetadata(getExtraMetadata())
        .setVersion(getPlanVersion())
        .setPreserveHoodieMetadata(true)
        .build());
  }

The second step, grouping the files within a single partition, is detailed below. getFileSlicesEligibleForClustering fetches the latest FileSlices of the partition via getLatestFileSlices and filters out FileSlices that belong to pending compaction/clustering operations, so that files with the same FileGroupId are not merged twice.

// org/apache/hudi/client/clustering/plan/strategy/FlinkSizeBasedClusteringPlanStrategy.java
  protected Stream<FileSlice> getFileSlicesEligibleForClustering(final String partition) {
    return super.getFileSlicesEligibleForClustering(partition)
        // Only files that have base file size smaller than the clustering small file limit are eligible.
        .filter(slice -> slice.getBaseFile().map(HoodieBaseFile::getFileSize).orElse(0L) < getWriteConfig().getClusteringSmallFileLimit());
  }

// org/apache/hudi/table/action/cluster/strategy/ClusteringPlanStrategy.java
  protected Stream<FileSlice> getFileSlicesEligibleForClustering(String partition) {
    SyncableFileSystemView fileSystemView = (SyncableFileSystemView) getHoodieTable().getSliceView();
    // Collect the FileGroupIds involved in pending compaction and clustering
    Set<HoodieFileGroupId> fgIdsInPendingCompactionLogCompactionAndClustering =
        Stream.concat(fileSystemView.getPendingCompactionOperations(), fileSystemView.getPendingLogCompactionOperations())
            .map(instantTimeOpPair -> instantTimeOpPair.getValue().getFileGroupId())
            .collect(Collectors.toSet());
    fgIdsInPendingCompactionLogCompactionAndClustering.addAll(fileSystemView.getFileGroupsInPendingClustering().map(Pair::getKey).collect(Collectors.toSet()));

    // Latest FileSlices of every FileGroup in the partition (excluding those in pending compaction/clustering)
    return hoodieTable.getSliceView().getLatestFileSlices(partition)
        // file ids already in clustering are not eligible
        .filter(slice -> !fgIdsInPendingCompactionLogCompactionAndClustering.contains(slice.getFileGroupId()));
  }

Inside getLatestFileSlices, the latest FileSlice of every FileGroup in the partition is fetched first. Because clustering writes merged data into new FileGroupIds, a FileSlice seen by the clustering service is generally in one of two states: either it is the latest FileSlice of its group, or it has already been replaced (i.e. merged by an earlier clustering run). After fetching the latest FileSlices, the replaced ones are filtered out first, and then slices still under a pending compaction are removed, which guarantees that every FileSlice put into the HoodieClusteringPlan will not be merged more than once.

// org/apache/hudi/common/table/view/AbstractTableFileSystemView.java
  public final Stream<FileSlice> getLatestFileSlices(String partitionStr) {
    try {
      readLock.lock();
      String partitionPath = formatPartitionKey(partitionStr);
      ensurePartitionLoadedCorrectly(partitionPath);
      return fetchLatestFileSlices(partitionPath)
          .filter(slice -> !isFileGroupReplaced(slice.getFileGroupId()))  // keep only FileSlices that have not been replaced, i.e. not merged by an earlier clustering
          .flatMap(slice -> this.filterBaseFileAfterPendingCompaction(slice, true))  // keep only FileSlices not under a pending compaction
          .map(this::addBootstrapBaseFileIfPresent);
    } finally {
      readLock.unlock();
    }
  }

  Stream<FileSlice> fetchLatestFileSlices(String partitionPath) {
    return fetchAllStoredFileGroups(partitionPath).map(HoodieFileGroup::getLatestFileSlice)
        .filter(Option::isPresent)
        .map(Option::get);
  }

After collecting the latest FileSlices of all FileGroups, they need to be divided into groups. Before grouping, the FileSlices are sorted by base file size in descending order so that the resulting groups pack more tightly. The amount of data per group is capped by hoodie.clustering.plan.strategy.max.bytes.per.group (default 2 GB), and the size of each rewritten file is bounded by hoodie.clustering.plan.strategy.target.file.max.bytes (default 1 GB), so a single group may write several new FileGroupId files. Once every partition has been grouped, the groups are wrapped into the final HoodieClusteringPlan.

// org/apache/hudi/client/clustering/plan/strategy/SparkSizeBasedClusteringPlanStrategy.java
  protected Stream<HoodieClusteringGroup> buildClusteringGroupsForPartition(String partitionPath, List<FileSlice> fileSlices) {
    HoodieWriteConfig writeConfig = getWriteConfig();

    List<Pair<List<FileSlice>, Integer>> fileSliceGroups = new ArrayList<>();
    List<FileSlice> currentGroup = new ArrayList<>();

    // Sort fileSlices by base file size in descending order before dividing, which makes the groups more compact
    List<FileSlice> sortedFileSlices = new ArrayList<>(fileSlices);
    sortedFileSlices.sort((o1, o2) -> (int)
        ((o2.getBaseFile().isPresent() ? o2.getBaseFile().get().getFileSize() : writeConfig.getParquetMaxFileSize())
            - (o1.getBaseFile().isPresent() ? o1.getBaseFile().get().getFileSize() : writeConfig.getParquetMaxFileSize())));

    long totalSizeSoFar = 0;

    // Traverse the sorted FileSlices and group them by hoodie.clustering.plan.strategy.max.bytes.per.group
    for (FileSlice currentSlice : sortedFileSlices) {
      long currentSize = currentSlice.getBaseFile().isPresent() ? currentSlice.getBaseFile().get().getFileSize() : writeConfig.getParquetMaxFileSize();
      // check if max size is reached and create new group, if needed.
      if (totalSizeSoFar + currentSize > writeConfig.getClusteringMaxBytesInGroup() && !currentGroup.isEmpty()) {
        // Compute the number of output file groups for this group based on hoodie.clustering.plan.strategy.target.file.max.bytes
        int numOutputGroups = getNumberOfOutputFileGroups(totalSizeSoFar, writeConfig.getClusteringTargetFileMaxBytes());
        LOG.info("Adding one clustering group " + totalSizeSoFar + " max bytes: "
            + writeConfig.getClusteringMaxBytesInGroup() + " num input slices: " + currentGroup.size() + " output groups: " + numOutputGroups);
        fileSliceGroups.add(Pair.of(currentGroup, numOutputGroups));
        currentGroup = new ArrayList<>();
        totalSizeSoFar = 0;
      }

      // Add to the current file-group
      currentGroup.add(currentSlice);
      // assume each file group size is ~= parquet.max.file.size
      totalSizeSoFar += currentSize;
    }

    // Add any remaining FileSlices that have not yet formed a group
    if (!currentGroup.isEmpty()) {
      if (currentGroup.size() > 1 || writeConfig.shouldClusteringSingleGroup()) {
        int numOutputGroups = getNumberOfOutputFileGroups(totalSizeSoFar, writeConfig.getClusteringTargetFileMaxBytes());
        LOG.info("Adding final clustering group " + totalSizeSoFar + " max bytes: "
            + writeConfig.getClusteringMaxBytesInGroup() + " num input slices: " + currentGroup.size() + " output groups: " + numOutputGroups);
        fileSliceGroups.add(Pair.of(currentGroup, numOutputGroups));
      }
    }

    // Build a HoodieClusteringGroup from each fileSliceGroup
    return fileSliceGroups.stream().map(fileSliceGroup ->
        HoodieClusteringGroup.newBuilder()
          .setSlices(getFileSliceInfo(fileSliceGroup.getLeft()))
          .setNumOutputFileGroups(fileSliceGroup.getRight())
          .setMetrics(buildMetrics(fileSliceGroup.getLeft()))   // set the metrics of each ClusteringGroup
          .build());
  }

2.2 Executing the HoodieClusteringPlan

Spark and Flink execute a HoodieClusteringPlan somewhat differently, although the overall idea is the same; this article focuses on the Spark implementation.

Before executing clustering, the commit state of the clusteringInstant is checked: if the requested and inflight states both exist, the commit is rolled back first. The merge itself is then performed through table.cluster, and the related metadata state is updated after it completes.

// org/apache/hudi/client/SparkRDDTableServiceClient.java
  public HoodieWriteMetadata<JavaRDD<WriteStatus>> cluster(String clusteringInstant, boolean shouldComplete) {
    HoodieSparkTable<T> table = HoodieSparkTable.create(config, context);
    HoodieTimeline pendingClusteringTimeline = table.getActiveTimeline().filterPendingReplaceTimeline();
    HoodieInstant inflightInstant = HoodieTimeline.getReplaceCommitInflightInstant(clusteringInstant);
    // If the replacecommit exists in both requested and inflight states on the Timeline, roll it back first
    if (pendingClusteringTimeline.containsInstant(inflightInstant)) {
      table.rollbackInflightClustering(inflightInstant, commitToRollback -> getPendingRollbackInfo(table.getMetaClient(), commitToRollback, false));
      table.getMetaClient().reloadActiveTimeline();
    }
    clusteringTimer = metrics.getClusteringCtx();
    LOG.info("Starting clustering at " + clusteringInstant);
    // Perform the clustering merge
    HoodieWriteMetadata<HoodieData<WriteStatus>> writeMetadata = table.cluster(context, clusteringInstant);
    HoodieWriteMetadata<JavaRDD<WriteStatus>> clusteringMetadata = writeMetadata.clone(HoodieJavaRDD.getJavaRDD(writeMetadata.getWriteStatuses()));
    // Validation has to be done after cloning. if not, it could result in dereferencing the write status twice which means clustering could get executed twice.
    validateClusteringCommit(clusteringMetadata, clusteringInstant, table);

    // Publish file creation metrics for clustering.
    if (config.isMetricsOn()) {
      clusteringMetadata.getWriteStats()
          .ifPresent(hoodieWriteStats -> hoodieWriteStats.stream()
              .filter(hoodieWriteStat -> hoodieWriteStat.getRuntimeStats() != null)
              .map(hoodieWriteStat -> hoodieWriteStat.getRuntimeStats().getTotalCreateTime())
              .forEach(metrics::updateClusteringFileCreationMetrics));
    }

    // Update the state after clustering completes
    // TODO : Where is shouldComplete used ?
    if (shouldComplete && clusteringMetadata.getCommitMetadata().isPresent()) {
      completeTableService(TableServiceType.CLUSTER, clusteringMetadata.getCommitMetadata().get(), table, clusteringInstant);
    }
    return clusteringMetadata;
  }
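
For context, the cluster entry point above is normally reached through the write client. Below is a hedged usage sketch (not Hudi source; method names follow the Hudi 0.x Java client and may vary across versions) that schedules a plan and then executes it.

// Illustrative sketch: schedule and execute clustering through the Spark write client (Hudi 0.x APIs)
import org.apache.hudi.client.SparkRDDWriteClient;
import org.apache.hudi.client.common.HoodieSparkEngineContext;
import org.apache.hudi.common.util.Option;
import org.apache.hudi.config.HoodieWriteConfig;
import org.apache.spark.api.java.JavaSparkContext;

public class ManualClusteringExample {
  public static void runClustering(JavaSparkContext jsc, HoodieWriteConfig writeConfig) throws Exception {
    SparkRDDWriteClient<?> client = new SparkRDDWriteClient<>(new HoodieSparkEngineContext(jsc), writeConfig);
    try {
      // Phase 1: generate a HoodieClusteringPlan and write <instant>.replacecommit.requested
      Option<String> instant = client.scheduleClustering(Option.empty());
      // Phase 2: execute the plan; shouldComplete=true also finalizes the replacecommit
      if (instant.isPresent()) {
        client.cluster(instant.get(), true);
      }
    } finally {
      client.close();
    }
  }
}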

Execution starts by transitioning the replacecommit.requested instant on the Timeline to replacecommit.inflight, then performClustering carries out the merge; the actual merge work is triggered by collectAsList.

// org/apache/hudi/table/action/commit/BaseCommitActionExecutor.java
  protected HoodieWriteMetadata<HoodieData<WriteStatus>> executeClustering(HoodieClusteringPlan clusteringPlan) {
    HoodieInstant instant = HoodieTimeline.getReplaceCommitRequestedInstant(instantTime);
    // Mark instant as clustering inflight (requested -> inflight)
    table.getActiveTimeline().transitionReplaceRequestedToInflight(instant, Option.empty());
    table.getMetaClient().reloadActiveTimeline();

    // Disable auto commit. Strategy is only expected to write data in new files.
    config.setValue(HoodieWriteConfig.AUTO_COMMIT_ENABLE, Boolean.FALSE.toString());

    final Schema schema = HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(config.getSchema()));
    // Perform the merge according to the clustering plan
    HoodieWriteMetadata<HoodieData<WriteStatus>> writeMetadata = (
        (ClusteringExecutionStrategy<T, HoodieData<HoodieRecord<T>>, HoodieData<HoodieKey>, HoodieData<WriteStatus>>)
            ReflectionUtils.loadClass(config.getClusteringExecutionStrategyClass(),
                new Class<?>[] {HoodieTable.class, HoodieEngineContext.class, HoodieWriteConfig.class}, table, context, config))
        .performClustering(clusteringPlan, schema, instantTime);
    HoodieData<WriteStatus> writeStatusList = writeMetadata.getWriteStatuses();
    HoodieData<WriteStatus> statuses = updateIndex(writeStatusList, writeMetadata);
    // Persist the statuses RDD to avoid recomputation
    statuses.persist(config.getString(WRITE_STATUS_STORAGE_LEVEL_VALUE), context, HoodieData.HoodieDataCacheKey.of(config.getBasePath(), instantTime));
    // collectAsList below triggers the clustering job.
    writeMetadata.setWriteStats(statuses.map(WriteStatus::getStat).collectAsList());
    writeMetadata.setPartitionToReplaceFileIds(getPartitionToReplacedFileIds(clusteringPlan, writeMetadata));
    commitOnAutoCommit(writeMetadata);
    if (!writeMetadata.getCommitMetadata().isPresent()) {
      HoodieCommitMetadata commitMetadata = CommitUtils.buildMetadata(writeMetadata.getWriteStats().get(), writeMetadata.getPartitionToReplaceFileIds(),
          extraMetadata, operationType, getSchemaToStoreInCommit(), getCommitActionType());
      writeMetadata.setCommitMetadata(Option.of(commitMetadata));
    }
    return writeMetadata;
  }

How the merge is performed is defined entirely by the HoodieClusteringPlan generated above. A plan contains multiple groups, and Spark iterates over them, merging each group in turn. Spark currently supports two execution paths, RDD-based and Dataset-based; the Dataset path reads and writes records as native Spark rows and avoids some of the overhead of the RDD path.

// org/apache/hudi/client/clustering/run/strategy/MultipleSparkJobExecutionStrategy.java
  public HoodieWriteMetadata<HoodieData<WriteStatus>> performClustering(final HoodieClusteringPlan clusteringPlan, final Schema schema, final String instantTime) {
    JavaSparkContext engineContext = HoodieSparkEngineContext.getSparkContext(getEngineContext());
    boolean shouldPreserveMetadata = Option.ofNullable(clusteringPlan.getPreserveHoodieMetadata()).orElse(false);
    // execute clustering for each group async and collect WriteStatus
    Stream<HoodieData<WriteStatus>> writeStatusesStream = FutureUtils.allOf(
            clusteringPlan.getInputGroups().stream()
                .map(inputGroup -> {
                  // Two execution paths are supported: RDD and row-based Dataset
                  if (getWriteConfig().getBooleanOrDefault("hoodie.datasource.write.row.writer.enable", false)) {
                    return runClusteringForGroupAsyncAsRow(inputGroup,
                        clusteringPlan.getStrategy().getStrategyParams(),
                        shouldPreserveMetadata,
                        instantTime);
                  }
                  return runClusteringForGroupAsync(inputGroup,
                      clusteringPlan.getStrategy().getStrategyParams(),
                      shouldPreserveMetadata,
                      instantTime);
                })
                .collect(Collectors.toList()))
        .join()
        .stream();
    JavaRDD<WriteStatus>[] writeStatuses = convertStreamToArray(writeStatusesStream.map(HoodieJavaRDD::getJavaRDD));
    JavaRDD<WriteStatus> writeStatusRDD = engineContext.union(writeStatuses);

    HoodieWriteMetadata<HoodieData<WriteStatus>> writeMetadata = new HoodieWriteMetadata<>();
    writeMetadata.setWriteStatuses(HoodieJavaRDD.of(writeStatusRDD));
    return writeMetadata;
  }
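
The execution strategy class loaded by reflection in executeClustering, the optional sort columns it applies while rewriting, and the row-writer switch checked above can be set as follows; a hedged sketch (not Hudi source) with key names from Hudi 0.x and example values only.

// Illustrative sketch: execution-side clustering options (Hudi 0.x keys, example values)
import org.apache.hudi.common.config.TypedProperties;

public class ClusteringExecutionConfigExample {
  public static TypedProperties executionProps() {
    TypedProperties props = new TypedProperties();
    // strategy class instantiated by reflection in executeClustering (Spark default shown)
    props.setProperty("hoodie.clustering.execution.strategy.class",
        "org.apache.hudi.client.clustering.run.strategy.SparkSortAndSizeExecutionStrategy");
    // optional columns to sort by while rewriting the data
    props.setProperty("hoodie.clustering.plan.strategy.sort.columns", "city,ts");
    // switch performClustering to the row-based (Dataset) path instead of the RDD path
    props.setProperty("hoodie.datasource.write.row.writer.enable", "true");
    return props;
  }
}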

The RDD path is described below. readRecordsForGroup first reads all records of the current group, and those records are then merged.

// org/apache/hudi/client/clustering/run/strategy/MultipleSparkJobExecutionStrategy.java
  private CompletableFuture<HoodieData<WriteStatus>> runClusteringForGroupAsync(HoodieClusteringGroup clusteringGroup, Map<String, String> strategyParams,
                                                                                boolean preserveHoodieMetadata, String instantTime) {
    return CompletableFuture.supplyAsync(() -> {
      JavaSparkContext jsc = HoodieSparkEngineContext.getSparkContext(getEngineContext());
      // Read the records of the FileSlices in this group
      HoodieData<HoodieRecord<T>> inputRecords = readRecordsForGroup(jsc, clusteringGroup, instantTime);
      Schema readerSchema = HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(getWriteConfig().getSchema()));
      // NOTE: Record have to be cloned here to make sure if it holds low-level engine-specific
      //       payload pointing into a shared, mutable (underlying) buffer we get a clean copy of
      //       it since these records will be shuffled later.
      List<HoodieFileGroupId> inputFileIds = clusteringGroup.getSlices().stream()
          .map(info -> new HoodieFileGroupId(info.getPartitionPath(), info.getFileId()))
          .collect(Collectors.toList());
      // Perform the clustering merge on the records that were read
      return performClusteringWithRecordsRDD(inputRecords, clusteringGroup.getNumOutputFileGroups(), instantTime, strategyParams, readerSchema, inputFileIds, preserveHoodieMetadata,
          clusteringGroup.getExtraMetadata());
    });
  }

Inside readRecordsForGroup, the FileSlices of the group are first wrapped into ClusteringOperation objects: FileSlice itself is not serializable, and ClusteringOperation is a serializable wrapper that Spark can ship around for distributed processing. The records of all ClusteringOperations are then read. The read path differs depending on whether a ClusteringOperation contains log files; the clustering deployments the author has seen only merge parquet base files, and the log-file case (which would arise when clustering MergeOnRead file slices that still carry log files) is outside the author's experience.

// org/apache/hudi/client/clustering/run/strategy/MultipleSparkJobExecutionStrategy.java
  private HoodieData<HoodieRecord<T>> readRecordsForGroup(JavaSparkContext jsc, HoodieClusteringGroup clusteringGroup, String instantTime) {
    // Wrap the group's FileSlices into ClusteringOperations, mainly to make them serializable
    List<ClusteringOperation> clusteringOps = clusteringGroup.getSlices().stream().map(ClusteringOperation::create).collect(Collectors.toList());
    boolean hasLogFiles = clusteringOps.stream().anyMatch(op -> op.getDeltaFilePaths().size() > 0);
    // Read the group's files and return them as an RDD; the log-file and base-file-only cases are handled differently
    // TODO (author's note): it is unclear in which scenarios the clustering input contains log files
    if (hasLogFiles) {
      // if there are log files, we read all records into memory for a file group and apply updates.
      return readRecordsForGroupWithLogs(jsc, clusteringOps, instantTime);
    } else {
      // We want to optimize reading records for case there are no log files.
      return readRecordsForGroupBaseFiles(jsc, clusteringOps);
    }
  }

After the records of the files to be merged have been read, they are written into new files via BulkInsert. PARQUET_MAX_FILE_SIZE is overridden here so that the write phase can control the size of the output files.

// org/apache/hudi/client/clustering/run/strategy/SparkSortAndSizeExecutionStrategy.java
  public HoodieData<WriteStatus> performClusteringWithRecordsRDD(final HoodieData<HoodieRecord<T>> inputRecords,
                                                                 final int numOutputGroups,
                                                                 final String instantTime,
                                                                 final Map<String, String> strategyParams,
                                                                 final Schema schema,
                                                                 final List<HoodieFileGroupId> fileGroupIdList,
                                                                 final boolean shouldPreserveHoodieMetadata,
                                                                 final Map<String, String> extraMetadata) {
    LOG.info("Starting clustering for a group, parallelism:" + numOutputGroups + " commit:" + instantTime);

    HoodieWriteConfig newConfig = HoodieWriteConfig.newBuilder()
        .withBulkInsertParallelism(numOutputGroups)
        .withProps(getWriteConfig().getProps()).build();

    // Override hoodie.parquet.max.file.size with the clustering max bytes per group (default 2 GB)
    newConfig.setValue(HoodieStorageConfig.PARQUET_MAX_FILE_SIZE, String.valueOf(getWriteConfig().getClusteringMaxBytesInGroup()));

    // Write the records that were read into new files via BulkInsert
    return (HoodieData<WriteStatus>) SparkBulkInsertHelper.newInstance().bulkInsert(inputRecords, instantTime, getHoodieTable(),
        newConfig, false, getRDDPartitioner(strategyParams, schema), true, numOutputGroups, new CreateHandleFactory(shouldPreserveHoodieMetadata));
  }

Before the BulkInsert write, the RDD partitioner is determined: getRDDPartitioner picks a partitioner based on the file layout, whether the RDD or Dataset path is used, whether a bucket index is configured, and so on; the default here is the NonSortPartitioner. Its repartitionRecords implementation merges the files through the coalesce operator. At this point, all files of the group have been merged at the targetParallelism parallelism.

// org/apache/hudi/table/action/commit/SparkBulkInsertHelper.java
  public HoodieData<WriteStatus> bulkInsert(HoodieData<HoodieRecord<T>> inputRecords,
                                            String instantTime,
                                            HoodieTable<T, HoodieData<HoodieRecord<T>>, HoodieData<HoodieKey>, HoodieData<WriteStatus>> table,
                                            HoodieWriteConfig config,
                                            boolean performDedupe,
                                            BulkInsertPartitioner partitioner,
                                            boolean useWriterSchema,
                                            int configuredParallelism,
                                            WriteHandleFactory writeHandleFactory) {

    // De-dupe/merge if needed
    HoodieData<HoodieRecord<T>> dedupedRecords = inputRecords;

    // Deduce the parallelism, which determines how many files clustering eventually writes
    int targetParallelism = deduceShuffleParallelism(inputRecords, configuredParallelism);

    // performDedupe is false in the clustering path
    if (performDedupe) {
      dedupedRecords = (HoodieData<HoodieRecord<T>>) HoodieWriteHelper.newInstance()
          .combineOnCondition(config.shouldCombineBeforeInsert(), inputRecords, targetParallelism, table);
    }

    // Repartition the records to targetParallelism partitions
    // only JavaRDD is supported for Spark partitioner, but it is not enforced by BulkInsertPartitioner API. To improve this, TODO HUDI-3463
    final HoodieData<HoodieRecord<T>> repartitionedRecords =
        HoodieJavaRDD.of((JavaRDD<HoodieRecord<T>>) partitioner.repartitionRecords(HoodieJavaRDD.getJavaRDD(dedupedRecords), targetParallelism));

    JavaRDD<WriteStatus> writeStatusRDD = HoodieJavaRDD.getJavaRDD(repartitionedRecords)
        .mapPartitionsWithIndex(new BulkInsertMapFunction<>(instantTime,
            partitioner.arePartitionRecordsSorted(), config, table, useWriterSchema, partitioner, writeHandleFactory), true)
        .flatMap(List::iterator);

    return HoodieJavaRDD.of(writeStatusRDD);
  }
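
Conceptually, the non-sorting repartitionRecords used here boils down to a coalesce of the group's records into targetParallelism Spark partitions, each of which the bulk-insert handles then write out as a new file group. A minimal sketch (not the actual Hudi partitioner class):

// Conceptual sketch: what a non-sorting repartitionRecords reduces to
import org.apache.spark.api.java.JavaRDD;

public class NonSortRepartitionSketch {
  public static <T> JavaRDD<T> repartitionRecords(JavaRDD<T> records, int targetParallelism) {
    // coalesce avoids a full shuffle; each resulting partition is written as one new file group,
    // subject to the max file size configured for the write handles
    return records.coalesce(targetParallelism);
  }
}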

3. Summary

Hudi clustering provides a small-file merge operation for CopyOnWrite tables. The plan-generation phase first selects the partitions that need clustering, then iterates over each partition, collects the parquet FileSlices eligible for merging, and builds HoodieClusteringGroups; the groups from all partitions are then wrapped into a HoodieClusteringPlan. The execution phase reads the groups from the plan, iterates over the FileSlices of each group, merges those files into new parquet files, and finally writes the resulting statistics into the metadata.
