Hadoop 2.8.5 MapReduce Computation Flow

In the previous post we looked at the MapReduce framework from a macro perspective, from job claiming to dispatch. Today we dig into its internals. At a high level the MR framework consists of just two stages, Map and Reduce, but in reality it is far from that simple: each of these macro stages is further divided into several finer-grained phases. Take the previously mentioned Sort phase as an example: the Mapper's output side has a framework-provided local sort phase, while the Fetch and Merge phases on the Reducer's input side, and even the Combine phase, also involve sorting; together these produce the global sort. The job's control flow defines the phases as enum Phase { STARTING, MAP, SHUFFLE, SORT, REDUCE, CLEANUP }.

1. Mapper's Input

MR data usually comes from HDFS files, but it can also come from, say, the output of a database query, or from some kind of data "generator". Nor is the input required to be a file of any particular format; it could even be a web page. For any concrete data source, however, the MR framework obviously must be able to read data from it, form KV pairs, and return them as the output of nextKeyValue(). Whatever the data source and data format, a matching InputFormat must be used.
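As a quick illustration, the InputFormat is declared in the job driver. Here is a minimal sketch, not from the original post; the class name and input path are placeholders, and TextInputFormat is also the framework's default if nothing is set:

// Minimal driver sketch: wiring an InputFormat into a Job (names are placeholders).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class InputFormatSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "WordCount");
    job.setJarByClass(InputFormatSetup.class);
    // If not set, the framework falls back to TextInputFormat by default
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    // ... Mapper/Reducer/OutputFormat settings omitted ...
  }
}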

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapreduce\JobSubmitter.java

Hadoop provides many concrete InputFormat types. Let's start the investigation from job submission.

class JobSubmitter {
  //Job submission
  JobStatus submitJobInternal(Job job, Cluster cluster) 
     throws ClassNotFoundException, InterruptedException, IOException {
      // Split the input files
      int maps = writeSplits(job, submitJobDir);
      conf.setInt(MRJobConfig.NUM_MAPS, maps);
  }
  
  private int writeSplits(org.apache.hadoop.mapreduce.JobContext job,Path jobSubmitDir) {
    JobConf jConf = (JobConf)job.getConfiguration();
    int maps;
    if (jConf.getUseNewMapper()) {
      // The number of splits determines the number of Mappers
      maps = writeNewSplits(job, jobSubmitDir);
    } else {
      maps = writeOldSplits(jConf, jobSubmitDir);
    }
    //This is the expected number of Mappers: one Mapper per split
    return maps;
  }
  
  private <T extends InputSplit> int writeNewSplits(JobContext job, Path jobSubmitDir) {
    Configuration conf = job.getConfiguration();
    //JobContextImpl.getInputFormatClass()
    InputFormat<?, ?> input = ReflectionUtils.newInstance(job.getInputFormatClass(), conf);
    //Get the job's input format (default: TextInputFormat); InputSplit is an abstract class
    //getSplits() computes each split's size via computeSplitSize(), based on the block size of the
    //underlying file system (64MB or 128MB in HDFS) and the configured minimum and maximum split sizes
    List<InputSplit> splits = input.getSplits(job);
    //Convert the List into an array
    T[] array = (T[]) splits.toArray(new InputSplit[splits.size()]);
    //Sort the splits in the array by size, largest first
    Arrays.sort(array, new SplitComparator());
    //Write the split files and their metadata
    JobSplitWriter.createSplitFiles(jobSubmitDir, conf, 
        jobSubmitDir.getFileSystem(conf), array);
    return array.length;
  }
  
}

The split size is not derived from the file size and a predetermined number of Mappers; rather, the number of splits is derived from the file size and the split size. And once the number of splits is fixed, the number of Mappers is fixed as well.
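For reference, the split size computed by FileInputFormat essentially clamps the block size between the configured minimum and maximum split sizes. Below is a simplified sketch of that calculation, not the verbatim Hadoop source; the wrapper class and the main() example are illustrative only:

// Simplified sketch of FileInputFormat-style split sizing.
public class SplitSizing {
  // The well-known rule: clamp the block size between the configured min and max split sizes
  static long computeSplitSize(long blockSize, long minSize, long maxSize) {
    return Math.max(minSize, Math.min(maxSize, blockSize));
  }

  public static void main(String[] args) {
    long blockSize = 128L << 20;   // a 128MB HDFS block
    long splitSize = computeSplitSize(blockSize, 1L, Long.MAX_VALUE);
    // A 300MB file then yields 3 splits (128MB + 128MB + 44MB), hence 3 Mappers
    System.out.println("splitSize = " + splitSize);
  }
}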

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapreduce\split\JobSplitWriter.java

The split information is written to the job.split file; the splits parameter here is the array built above.

public class JobSplitWriter {
  public static void createSplitFiles(Path jobSubmitDir, 
      Configuration conf, FileSystem fs, 
      org.apache.hadoop.mapred.InputSplit[] splits) {
    FSDataOutputStream out = createFile(fs, 
        JobSubmissionFiles.getJobSplitFile(jobSubmitDir), conf);
    SplitMetaInfo[] info = writeOldSplits(splits, out, conf);
    out.close();
    writeJobSplitMetaInfo(fs,JobSubmissionFiles.getJobSplitMetaFile(jobSubmitDir), 
        new FsPermission(JobSubmissionFiles.JOB_FILE_PERMISSION), splitVersion,
        info);
  }
}

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapreduce\JobSubmissionFiles.java

public class JobSubmissionFiles {
  public static Path getJobSplitFile(Path jobSubmissionDir) {
    return new Path(jobSubmissionDir, "job.split");
  }
}

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapred\FileSplit.java

A split is an InputSplit object. InputSplit, however, is an abstract class, and there are quite a few concrete subclasses, because different data sources are split in different ways. For data files, the InputSplit is a FileSplit.

public class FileSplit extends InputSplit implements Writable {
  private Path file; //path of the file
  private long start; //start offset of this split within the file
  private long length; //length of this split
  private String[] hosts; //nodes holding this split's data; one split may span multiple nodes
  private SplitLocationInfo[] hostInfos; //info about those nodes, e.g. whether the data is in memory or on disk
}

hadoop-mapreduce-client-app\src\main\java\org\apache\hadoop\mapreduce\v2\app\job\impl\JobImpl.java

When the AM performs resource localization, InitTransition calls createSplits(). We didn't go into it back then; let's look at it now.

public static class InitTransition  implements MultipleArcTransition<JobImpl, JobEvent, JobStateInternal>  {
	TaskSplitMetaInfo[] taskSplitMetaInfo = createSplits(job, job.jobId);
	//The split metadata is handed to the map side, all the way down to each MapTask
	createMapTasks(job, inputLength, taskSplitMetaInfo);
    createReduceTasks(job);
}

protected TaskSplitMetaInfo[] createSplits(JobImpl job, JobId jobId) {
      TaskSplitMetaInfo[] allTaskSplitMetaInfo;
      try {
        allTaskSplitMetaInfo = SplitMetaInfoReader.readSplitMetaInfo(job.oldJobId, job.fs, 
            job.conf, job.remoteJobSubmitDir);
      } catch (IOException e) {
        throw new YarnRuntimeException(e);
      }
      return allTaskSplitMetaInfo;
    }

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapreduce\split\SplitMetaInfoReader.java

Reads the splits' metadata back for the job.

public class SplitMetaInfoReader {
	public static JobSplit.TaskSplitMetaInfo[] readSplitMetaInfo(
      JobID jobId, FileSystem fs, Configuration conf, Path jobSubmitDir) {
    long maxMetaInfoSize = conf.getLong(MRJobConfig.SPLIT_METAINFO_MAXSIZE,
        MRJobConfig.DEFAULT_SPLIT_METAINFO_MAXSIZE);
    Path metaSplitFile = JobSubmissionFiles.getJobSplitMetaFile(jobSubmitDir);
    String jobSplitFile = JobSubmissionFiles.getJobSplitFile(jobSubmitDir).toString();
    FileStatus fStatus = fs.getFileStatus(metaSplitFile);
    FSDataInputStream in = fs.open(metaSplitFile);
    byte[] header = new byte[JobSplit.META_SPLIT_FILE_HEADER.length];
    in.readFully(header);
    int vers = WritableUtils.readVInt(in);
    if (vers != JobSplit.META_SPLIT_VERSION) {
      in.close();
      throw new IOException("Unsupported split version " + vers);
    }
    // Read the number of splits from the meta file; this also determines how many MapTasks there will be
    int numSplits = WritableUtils.readVInt(in); //TODO: check for insane values
    JobSplit.TaskSplitMetaInfo[] allSplitMetaInfo = 
      new JobSplit.TaskSplitMetaInfo[numSplits];
    for (int i = 0; i < numSplits; i++) {
      JobSplit.SplitMetaInfo splitMetaInfo = new JobSplit.SplitMetaInfo();
      splitMetaInfo.readFields(in);
      // Create a TaskSplitIndex object for each split in the split file
      JobSplit.TaskSplitIndex splitIndex = new JobSplit.TaskSplitIndex(
          jobSplitFile, 
          splitMetaInfo.getStartOffset());
      //Each array element is a TaskSplitMetaInfo object; the TaskSplitIndex is one of its fields
      allSplitMetaInfo[i] = new JobSplit.TaskSplitMetaInfo(splitIndex, 
          splitMetaInfo.getLocations(), 
          splitMetaInfo.getInputDataLength());
    }
    in.close();
    return allSplitMetaInfo;
  }
}

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapred\MapTask.java

On the other side, in the launched container, the concrete MapTask is created; it reads the source data through a RecordReader.

public class MapTask extends Task {
   public void run(){
     initialize(); //different data sources have different RecordReader.initialize() implementations
   	 runNewMapper(job, splitMetaInfo, umbilical, reporter);
   }
}

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapreduce\lib\input\LineRecordReader.java

For TextInputFormat, the file data is read by a LineRecordReader.

public class LineRecordReader extends RecordReader<LongWritable, Text> {
  //Initialize the data input source
  public void initialize(InputSplit genericSplit,
                         TaskAttemptContext context) throws IOException {
    FileSplit split = (FileSplit) genericSplit;
    Configuration job = context.getConfiguration();
    this.maxLineLength = job.getInt(MAX_LINE_LENGTH, Integer.MAX_VALUE);
    start = split.getStart();
    end = start + split.getLength();
    final Path file = split.getPath();
    // open the file and seek to the start of the split
    final FileSystem fs = file.getFileSystem(job);
    fileIn = fs.open(file);
    //Set up decompression if the file is compressed
    CompressionCodec codec = new CompressionCodecFactory(job).getCodec(file);
    if (null!=codec) {
      isCompressedInput = true;
      decompressor = CodecPool.getDecompressor(codec);
      if (codec instanceof SplittableCompressionCodec) {
        final SplitCompressionInputStream cIn =
          ((SplittableCompressionCodec)codec).createInputStream(
            fileIn, decompressor, start, end,
            SplittableCompressionCodec.READ_MODE.BYBLOCK);
        in = new CompressedSplitLineReader(cIn, job,
            this.recordDelimiterBytes);
        start = cIn.getAdjustedStart();
        end = cIn.getAdjustedEnd();
        filePosition = cIn;
      } else {
        in = new SplitLineReader(codec.createInputStream(fileIn,
            decompressor), job, this.recordDelimiterBytes);
        filePosition = fileIn;
      }
    } else {
      fileIn.seek(start);
      in = new UncompressedSplitLineReader(
          fileIn, job, this.recordDelimiterBytes, split.getLength());
      filePosition = fileIn;
    }
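    // If this is not the first split, discard the first (possibly partial) line;
    // the reader of the previous split is responsible for reading that line in full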
    if (start != 0) {
      start += in.readLine(new Text(), 0, maxBytesToConsume(start));
    }
    this.pos = start;
  }
}
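After initialization, each nextKeyValue() call produces one KV pair: the key is the byte offset of the line (a LongWritable) and the value is the line itself (a Text). The following is a simplified sketch of that method, not the verbatim Hadoop source; the real implementation additionally loops to skip over-long lines and adjusts positions for compressed streams:

// Simplified sketch of LineRecordReader.nextKeyValue().
public boolean nextKeyValue() throws IOException {
  if (key == null) { key = new LongWritable(); }
  if (value == null) { value = new Text(); }
  key.set(pos);                                   // key = byte offset of the line
  int newSize = 0;
  // read one more line even when we reach end, so a record straddling the split
  // boundary is handled by this split (the next split skips its first line)
  if (getFilePosition() <= end) {
    newSize = in.readLine(value, maxLineLength, maxBytesToConsume(pos));
    pos += newSize;
  }
  if (newSize == 0) {                             // nothing left in this split
    key = null;
    value = null;
    return false;
  }
  return true;                                    // (key, value) ready for the Mapper
}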

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapreduce\Mapper.java

The user-facing Mapper reads its data from the context; under the hood, that context is backed by a MapContextImpl.

public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {
	public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    try {
      while (context.nextKeyValue()) {
        map(context.getCurrentKey(), context.getCurrentValue(), context);
      }
    } finally {
      cleanup(context);
    }
  }
}

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapreduce\task\MapContextImpl.java

The reader here is the LineRecordReader shown above; all the split metadata lives there.

public class MapContextImpl<KEYIN,VALUEIN,KEYOUT,VALUEOUT> 
    extends TaskInputOutputContextImpl<KEYIN,VALUEIN,KEYOUT,VALUEOUT> 
    implements MapContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {
   //Called in a loop; simply delegates to the reader
   public boolean nextKeyValue() throws IOException, InterruptedException {
    return reader.nextKeyValue();
  } 
}

In this way, the Mapper's input side either keeps looping because it has read another KV pair, or stops looping because the input data is exhausted.
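To make the user side concrete, here is a minimal WordCount-style Mapper, not from the original post, showing the map() hook that the loop above drives:

// Minimal user Mapper sketch (WordCount-style); illustrative only.
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  private final static IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // key = byte offset from LineRecordReader, value = one line of text
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, ONE);   // ends up in NewOutputCollector.write()
    }
  }
}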

2. Mapper's Output

public class MapTask extends Task {
  private <INKEY,INVALUE,OUTKEY,OUTVALUE>
  void runNewMapper(final JobConf job,
                    final TaskSplitIndex splitIndex,
                    final TaskUmbilicalProtocol umbilical,
                    TaskReporter reporter
                    ) throws IOException, ClassNotFoundException,
                             InterruptedException {
    // make a task context so we can get the classes
    TaskAttemptContext taskContext =
      new TaskAttemptContextImpl(job,getTaskID(),reporter);
    // Determine which concrete Mapper to use, then create it
    org.apache.hadoop.mapreduce.Mapper<INKEY,INVALUE,OUTKEY,OUTVALUE> mapper =
      (org.apache.hadoop.mapreduce.Mapper<INKEY,INVALUE,OUTKEY,OUTVALUE>)
        ReflectionUtils.newInstance(taskContext.getMapperClass(), job);
    // Determine which InputFormat this Mapper's input uses
    org.apache.hadoop.mapreduce.InputFormat<INKEY,INVALUE> inputFormat =
      (org.apache.hadoop.mapreduce.InputFormat<INKEY,INVALUE>)
        ReflectionUtils.newInstance(taskContext.getInputFormatClass(), job);
    // Determine which split this Mapper will consume
    org.apache.hadoop.mapreduce.InputSplit split = null;
    split = getSplitDetails(new Path(splitIndex.getSplitLocation()),
        splitIndex.getStartOffset());
    //Create a RecordReader matching the concrete InputFormat
    org.apache.hadoop.mapreduce.RecordReader<INKEY,INVALUE> input =
      new NewTrackingRecordReader<INKEY,INVALUE>
        (split, inputFormat, reporter, taskContext);
    job.setBoolean(JobContext.SKIP_RECORDS, isSkipping());
    org.apache.hadoop.mapreduce.RecordWriter output = null;
    
    // If the number of Reducers is set to 0, output is written out directly
    if (job.getNumReduceTasks() == 0) {
      //create a collector that writes the output directly
      output = new NewDirectOutputCollector(taskContext, job, umbilical, reporter);
    } else {
      //otherwise create a collector that feeds the Reducers
      output = new NewOutputCollector(taskContext, job, umbilical, reporter);
    }
    //Create the MapContextImpl
    org.apache.hadoop.mapreduce.MapContext<INKEY, INVALUE, OUTKEY, OUTVALUE> 
    mapContext = 
      new MapContextImpl<INKEY, INVALUE, OUTKEY, OUTVALUE>(job, getTaskID(), 
          input, output, 
          committer, 
          reporter, split);
    //so WrappedMapper.Context.mapContext is a MapContextImpl
    org.apache.hadoop.mapreduce.Mapper<INKEY,INVALUE,OUTKEY,OUTVALUE>.Context 
        mapperContext = 
          new WrappedMapper<INKEY, INVALUE, OUTKEY, OUTVALUE>().getMapContext(
              mapContext);

    try {
      input.initialize(split, mapperContext);
      //mapperContext is a WrappedMapper.Context
      mapper.run(mapperContext);
    } finally {
      //status updates and cleanup (closeQuietly on input/output) elided
    }
  }
  
 //The Mapper's output RecordWriter; it wraps a collector and a partitioner
 private class NewOutputCollector<K,V>
    extends org.apache.hadoop.mapreduce.RecordWriter<K,V> {
    private final MapOutputCollector<K,V> collector; //implements the MapOutputCollector interface
    private final org.apache.hadoop.mapreduce.Partitioner<K,V> partitioner; //partitions the Mapper's output
    private final int partitions;//number of partitions to dispatch to, i.e. the number of Reducers

    @SuppressWarnings("unchecked")
    NewOutputCollector(org.apache.hadoop.mapreduce.JobContext jobContext,
                       JobConf job,
                       TaskUmbilicalProtocol umbilical,
                       TaskReporter reporter
                       ) throws IOException, ClassNotFoundException {
      //Create the collector that feeds the sorting stage
      collector = createSortingCollector(job, reporter);
      //There are as many partitions as there are Reducers
      partitions = jobContext.getNumReduceTasks();
      //If there is more than one partition
      if (partitions > 1) {
      //some subclass of the abstract Partitioner; HashPartitioner is the default if none is configured
        partitioner = (org.apache.hadoop.mapreduce.Partitioner<K,V>)
          ReflectionUtils.newInstance(jobContext.getPartitionerClass(), job);
      } else {
       //only one partition: use an anonymous subclass of the abstract Partitioner
        partitioner = new org.apache.hadoop.mapreduce.Partitioner<K,V>() {
          @Override
          public int getPartition(K key, V value, int numPartitions) {
            return partitions - 1;
          }
        };
      }
    }
  }
    
 //Create the map-side output collector; defaults to MapOutputBuffer but is configurable
 private <KEY, VALUE> MapOutputCollector<KEY, VALUE>
          createSortingCollector(JobConf job, TaskReporter reporter)
    throws IOException, ClassNotFoundException {
    MapOutputCollector.Context context =
      new MapOutputCollector.Context(this, job, reporter);
     //Defaults to MapOutputBuffer.class if not configured
    Class<?>[] collectorClasses = job.getClasses(
      JobContext.MAP_OUTPUT_COLLECTOR_CLASS_ATTR, MapOutputBuffer.class);
    int remainingCollectors = collectorClasses.length;
    Exception lastException = null;
    for (Class clazz : collectorClasses) {
      try {
        Class<? extends MapOutputCollector> subclazz =
          clazz.asSubclass(MapOutputCollector.class);
        MapOutputCollector<KEY, VALUE> collector =
          ReflectionUtils.newInstance(subclazz, job);
        //Initialize; in practice this initializes MapTask.MapOutputBuffer
        collector.init(context);
        return collector;
      } catch (Exception e) {
        //remember the failure and try the next configured collector, if any
        lastException = e;
      }
    }
    throw new IOException("Initialization of all the collectors failed", lastException);
  }
}

NewOutputCollector has two important members: the collector and the partitioner. The former does the actual work of collecting the Mapper's output and delivering it to the Reducers; the latter decides which Reducer each individual output should go to. When there are multiple Reducers, the MR framework must sort every item of a Mapper's output, i.e. every collected KV pair, to a different Reducer according to some criterion. This divides each Mapper's output into partitions: with N Reducers, each Mapper's output is split into N partitions, and the Partitioner does the dispatching. Different dispatching criteria mean different concrete Partitioners. HashPartitioner, for example, simply hashes the key of each output KV pair and uses the hash value to decide which Reducer it goes to.
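HashPartitioner's core logic is a one-liner; the sketch below is equivalent to the stock implementation in org.apache.hadoop.mapreduce.lib.partition:

// Sketch of HashPartitioner's partitioning rule.
public class HashPartitioner<K, V> extends org.apache.hadoop.mapreduce.Partitioner<K, V> {
  @Override
  public int getPartition(K key, V value, int numReduceTasks) {
    // Mask off the sign bit so the result is non-negative, then take it modulo the Reducer count
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}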

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapreduce\Mapper.java

public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {
  //At run time, context is a WrappedMapper.Context
  protected void map(KEYIN key, VALUEIN value, Context context) throws IOException, InterruptedException {
   //TaskInputOutputContextImpl.write() delegates to NewOutputCollector.write()
    context.write((KEYOUT) key, (VALUEOUT) value);  
  }
}

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapred\MapTask.java

private class NewOutputCollector<K,V>{
  //collector.collect() here is MapTask.MapOutputBuffer.collect()
  public void write(K key, V value) throws IOException, InterruptedException {
      collector.collect(key, value, partitioner.getPartition(key, value, partitions));
    }
 }

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapred\MapTask.java

The circular (ring) buffer sorts the data, spills it to disk, and merges the spill files into a single map output file.

public static class MapOutputBuffer<K extends Object, V extends Object>
      implements MapOutputCollector<K, V>, IndexedSortable {
  public synchronized void collect(K key, V value, final int partition) {
    //... serialize the KV pair and its partition number into the circular buffer ...
    //once the buffer fills past the spill threshold, wake up the spill thread
    startSpill();
    //...
  }
  //the spill thread calls sortAndSpill(): sort the buffered records and write them to a spill file
  //at the end of the map, flush() calls mergeParts() to merge all spill files into one output file
}
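The buffer and spill behavior are tunable in the driver's Configuration. A hedged example follows; the keys are the standard Hadoop 2.x names and the values shown are just the usual defaults:

// Illustrative map-side sort/spill tuning.
Configuration conf = new Configuration();
conf.setInt("mapreduce.task.io.sort.mb", 100);            // circular buffer size, in MB
conf.setFloat("mapreduce.map.sort.spill.percent", 0.80f); // start spilling at 80% full
conf.setInt("mapreduce.task.io.sort.factor", 10);         // streams merged at once in mergeParts()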

3. The Reduce Stage

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapred\ReduceTask.java

public class ReduceTask extends Task {
 public void run(JobConf job, final TaskUmbilicalProtocol umbilical) {
   
    initialize(job, getJobID(), reporter, useNewApi);
    // Initialize the codec
    codec = initCodec();
    RawKeyValueIterator rIter = null;
    ShuffleConsumerPlugin shuffleConsumerPlugin = null;
    //Create a combineCollector if a Combiner is configured
    Class combinerClass = conf.getCombinerClass();
    CombineOutputCollector combineCollector = (null != combinerClass) ? 
     new CombineOutputCollector(reduceCombineOutputCounter, reporter, conf) : null;

    Class<? extends ShuffleConsumerPlugin> clazz =
          job.getClass(MRConfig.SHUFFLE_CONSUMER_PLUGIN, Shuffle.class, ShuffleConsumerPlugin.class);
     //Create the Shuffle object
    shuffleConsumerPlugin = ReflectionUtils.newInstance(clazz, job);
    //Shuffle.init
    shuffleConsumerPlugin.init(shuffleContext);
    //Run the shuffle
    rIter = shuffleConsumerPlugin.run();
    // free up the data structures
    mapOutputFilesOnDisk.clear();
    sortPhase.complete();                         // sort is complete
    setPhase(TaskStatus.Phase.REDUCE); 
    statusUpdate(umbilical);
    Class keyClass = job.getMapOutputKeyClass();
    Class valueClass = job.getMapOutputValueClass();
    RawComparator comparator = job.getOutputValueGroupingComparator();

    if (useNewApi) {
      runNewReducer(job, umbilical, reporter, rIter, comparator, 
                    keyClass, valueClass);
    } else {
      runOldReducer(job, umbilical, reporter, rIter, comparator, 
                    keyClass, valueClass);
    }

    shuffleConsumerPlugin.close();
    done(umbilical, reporter);
  }
}
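The Combiner and the number of Reducers that this code reads back are plain job settings made in the driver; for example (IntSumReducer, from org.apache.hadoop.mapreduce.lib.reduce, is just an illustration):

// Illustrative driver settings that the code above reads back.
job.setCombinerClass(IntSumReducer.class);  // picked up via conf.getCombinerClass()
job.setReducerClass(IntSumReducer.class);
job.setNumReduceTasks(2);                   // also fixes the number of map-side partitions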

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapreduce\task\reduce\Shuffle.java

public class Shuffle<K, V> implements ShuffleConsumerPlugin<K, V>, ExceptionReporter {
    //Run the shuffle: fetch the map outputs, then merge-sort them
	public RawKeyValueIterator run() throws IOException, InterruptedException {
    // The eventFetcher is a thread
    final EventFetcher<K,V> eventFetcher = 
      new EventFetcher<K,V>(reduceId, umbilical, scheduler, this,
          maxEventsToFetch);
    //Start it
    eventFetcher.start();
    
    // Start the map-output fetcher threads
    boolean isLocal = localMapFiles != null;
    final int numFetchers = isLocal ? 1 :
      jobConf.getInt(MRJobConfig.SHUFFLE_PARALLEL_COPIES, 5);
    //Create an array of Fetchers, effectively a thread pool
    Fetcher<K,V>[] fetchers = new Fetcher[numFetchers];
    if (isLocal) {
      //If the Mappers and the Reducer are on the same machine, a local Fetcher suffices
      //LocalFetcher extends Fetcher and is also a thread
      fetchers[0] = new LocalFetcher<K, V>(jobConf, reduceId, scheduler,
          merger, reporter, metrics, this, reduceTask.getShuffleSecret(),
          localMapFiles);
      fetchers[0].start(); //there is only one local Fetcher
    } else {
      //Mappers and Reducer are on different machines: several cross-node Fetchers are needed
      for (int i=0; i < numFetchers; ++i) {
        fetchers[i] = new Fetcher<K,V>(jobConf, reduceId, scheduler, merger, 
                                       reporter, metrics, this, 
                                       reduceTask.getShuffleSecret());
        fetchers[i].start(); //start every Fetcher
      }
    }
    
    // Wait until all the Fetchers are done, reporting progress on each timeout
    while (!scheduler.waitUntilDone(PROGRESS_FREQUENCY)) {
      reporter.progress();
    }
    //The copy phase is done: every MapTask's output file has been fetched
    eventFetcher.shutDown();
    // Shut down all the Fetchers
    for (Fetcher<K,V> fetcher : fetchers) {
      fetcher.shutDown();
    }
    // stop the scheduler
    scheduler.close();
    copyPhase.complete(); // copy is already complete
    //Next comes the merge-sort on the Reducer side
    taskStatus.setPhase(TaskStatus.Phase.SORT);
    //Report status to the MRAppMaster over the "umbilical" protocol
    reduceTask.statusUpdate(umbilical);

    // Finish the on-going merges...
    RawKeyValueIterator kvIter = null;
    try {
    // Merge and sort; on completion this returns an iterator, kvIter
      kvIter = merger.close();
    } catch (Throwable e) {
      throw new ShuffleError("Error while doing final merge " , e);
    }  
    return kvIter;
  }
}

To move data from a MapTask('s node) to the ReduceTask there are really only two options: the MapTask pushes it, or the ReduceTask pulls it. Hadoop uses the latter: the ReduceTask actively fetches the data from the MapTask side, which in practice amounts to copying files.
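The number of concurrent fetchers seen in Shuffle.run() above is a plain config knob; a hedged example, using the standard Hadoop 2.x property name (default 5):

// Illustrative: tune how many parallel copy (fetch) threads each Reducer uses.
Configuration conf = new Configuration();
conf.setInt("mapreduce.reduce.shuffle.parallelcopies", 10);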

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapreduce\task\reduce\MergeManagerImpl.java

public class MergeManagerImpl<K, V> implements MergeManager<K, V> {
   //In-memory merge
   private class InMemoryMerger extends MergeThread<InMemoryMapOutput<K,V>, K,V> {
   		 public void merge(List<InMemoryMapOutput<K,V>> inputs) throws IOException {
   		 }
   }
   //On-disk merge
   private class OnDiskMerger extends MergeThread<CompressAwarePath,K,V> {
   		public void merge(List<CompressAwarePath> inputs) throws IOException {
   		}
   }
   
   private class IntermediateMemoryToMemoryMerger extends MergeThread<InMemoryMapOutput<K, V>, K, V> {
   		public void merge(List<InMemoryMapOutput<K, V>> inputs) throws IOException {
   		}
   }
   
   //Called from Shuffle.run(), via merger.close()
   private RawKeyValueIterator finalMerge(JobConf job, FileSystem fs,
                                       List<InMemoryMapOutput<K,V>> inMemoryMapOutputs,
                                       List<CompressAwarePath> onDiskMapOutputs) {
  }
}

Finally, the return value of finalMerge() is the return value of Merger.merge(): a RawKeyValueIterator, i.e. a sequence of raw KV pairs. This sequence becomes the Reducer's input.
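On the consuming side, Reducer.run() walks that iterator key by key; the body of the stock org.apache.hadoop.mapreduce.Reducer.run() is essentially the following (backup-store handling omitted):

// How the framework drives reduce() from the merged iterator.
public void run(Context context) throws IOException, InterruptedException {
  setup(context);
  try {
    while (context.nextKey()) {
      // all values sharing the current key, as grouped by the grouping comparator
      reduce(context.getCurrentKey(), context.getValues(), context);
    }
  } finally {
    cleanup(context);
  }
}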

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapred\ReduceTask.java

public class ReduceTask extends Task {
  void runNewReducer(JobConf job, final TaskUmbilicalProtocol umbilical, ......) {
    // make a task context so we can get the classes
    org.apache.hadoop.mapreduce.TaskAttemptContext taskContext =
      new org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl(job,
          getTaskID(), reporter);
    // make a reducer
    org.apache.hadoop.mapreduce.Reducer<INKEY,INVALUE,OUTKEY,OUTVALUE> reducer =
      (org.apache.hadoop.mapreduce.Reducer<INKEY,INVALUE,OUTKEY,OUTVALUE>)
        ReflectionUtils.newInstance(taskContext.getReducerClass(), job);
    org.apache.hadoop.mapreduce.RecordWriter<OUTKEY,OUTVALUE> trackedRW = 
      new NewTrackingRecordWriter<OUTKEY, OUTVALUE>(this, taskContext);
    job.setBoolean("mapred.skip.on", isSkipping());
    job.setBoolean(JobContext.SKIP_RECORDS, isSkipping());
    org.apache.hadoop.mapreduce.Reducer.Context 
         reducerContext = createReduceContext(reducer, job, getTaskID(),
                                               rIter, reduceInputKeyCounter, 
                                               reduceInputValueCounter, 
                                               trackedRW,
                                               committer,
                                               reporter, comparator, keyClass,
                                               valueClass);
    try {
      //Run the concrete Reducer, e.g. a PageviewReducer whose output is written to a database via DBOutputFormat
      reducer.run(reducerContext);
    } finally {
      trackedRW.close(reducerContext);
    }
  }
}

hadoop-mapreduce-client-core\src\main\java\org\apache\hadoop\mapred\ReduceTask.java

static class NewTrackingRecordWriter<K,V>  extends org.apache.hadoop.mapreduce.RecordWriter<K,V> {
	NewTrackingRecordWriter(ReduceTask reduce, org.apache.hadoop.mapreduce.TaskAttemptContext taskContext)
        throws InterruptedException, IOException {
      this.outputRecordCounter = reduce.reduceOutputCounter;
      this.fileOutputByteCounter = reduce.fileOutputByteCounter;

      List<Statistics> matchedStats = null;
      if (reduce.outputFormat instanceof org.apache.hadoop.mapreduce.lib.output.FileOutputFormat) {
        matchedStats = getFsStatistics(org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
            .getOutputPath(taskContext), taskContext.getConfiguration());
      }

      fsStats = matchedStats;

      long bytesOutPrev = getOutputBytes(fsStats);
      //Output format: assuming DBOutputFormat, this call is DBOutputFormat.getRecordWriter(taskContext)
      this.real = (org.apache.hadoop.mapreduce.RecordWriter<K, V>) reduce.outputFormat
          .getRecordWriter(taskContext);
      long bytesOutCurr = getOutputBytes(fsStats);
      fileOutputByteCounter.increment(bytesOutCurr - bytesOutPrev);
    }
}

So, because the OutputFormat was set to DBOutputFormat, this Reducer's output is written out through a DBRecordWriter, which inserts the output into a predefined table in the database. Just as with InputFormat and RecordReader, every OutputFormat has its own RecordWriter: whatever OutputFormat the application sets for the job is the RecordWriter the Reducer will use when writing its output. The most commonly used OutputFormats, of course, are still the various FileOutputFormats.
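For completeness, wiring up DBOutputFormat in a driver looks roughly like this. The DBConfiguration/DBOutputFormat calls are the standard ones in org.apache.hadoop.mapreduce.lib.db, while the JDBC URL, table name, and column names are placeholders:

// Hedged sketch: point a job's output at a database table via DBOutputFormat.
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;

public class DbOutputSetup {
  static void configure(Job job) throws Exception {
    DBConfiguration.configureDB(job.getConfiguration(),
        "com.mysql.jdbc.Driver",                // JDBC driver class
        "jdbc:mysql://dbhost:3306/analytics",   // connection URL (placeholder)
        "user", "password");
    job.setOutputFormatClass(DBOutputFormat.class);
    // Reducer output rows go into table "pageview" with these columns (placeholders)
    DBOutputFormat.setOutput(job, "pageview", "url", "count");
  }
}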

OK, that's as far as we'll take the MapReduce job flow for now. Many steps were skipped along the way, such as the map-side circular KV buffer, the Combiner, and the merge stage; those details are too fine-grained to pursue further here.
