Under what circumstances will Hive produce only a reduceTask, with no mapTask?

Under normal MapReduce logic, the mapTask is responsible for reading the data and doing the ETL work, and it can also use a combiner for map-side partial aggregation; the reduceTask's input depends entirely on the mapper's output, so intuitively a job with only a reducer should not be able to run.

When HQL is parsed in the usual way, the resulting MapReduce job is driven mainly by ExecMapper/ExecReducer, which implement Hadoop's Mapper and Reducer to carry out the actual execution logic.
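
To make that dependency concrete, here is a minimal word-count-style sketch using the standard Hadoop org.apache.hadoop.mapreduce API (this is only an illustration, not Hive's ExecMapper/ExecReducer, and the class names are made up): the reducer's input key/value types are exactly what the mapper emits, so without a map phase there is nothing for the reducer to consume.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class MapThenReduceSketch {

  // Map side: read raw lines, do the "ETL" work, emit (word, 1) pairs.
  public static class TokenMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
        throws IOException, InterruptedException {
      for (String token : line.toString().split("\\s+")) {
        if (!token.isEmpty()) {
          word.set(token);
          context.write(word, ONE); // this output is the reducer's only input
        }
      }
    }
  }

  // Reduce side: its (Text, IntWritable) input exists only because the mapper emitted it.
  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable c : counts) {
        sum += c.get();
      }
      context.write(word, new IntWritable(sum));
    }
  }
}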

Hive also ships a small map/reduce implementation of its own as an example: GenericMR. Its source is as follows:

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.io.Reader;
import java.io.Writer;
import java.util.Iterator;
import java.util.NoSuchElementException;

// A small map/reduce driver: records are tab-separated lines read from a
// Reader/InputStream and written back out the same way.
public final class GenericMR {
  public void map(final InputStream in, final OutputStream out,
      final Mapper mapper) throws Exception {
    map(new InputStreamReader(in), new OutputStreamWriter(out), mapper);
  }
 
 
  // Apply the Mapper to every tab-separated record read from 'in'.
  public void map(final Reader in, final Writer out, final Mapper mapper) throws Exception {
    handle(in, out, new RecordProcessor() {
      @Override
      public void processNext(RecordReader reader, Output output) throws Exception {
        mapper.map(reader.next(), output);
      }
    });
  }
 
 
  public void reduce(final InputStream in, final OutputStream out,
      final Reducer reducer) throws Exception {
    reduce(new InputStreamReader(in), new OutputStreamWriter(out), reducer);
  }
 
 
  // Apply the Reducer group by group; the grouping key is each record's first column.
  public void reduce(final Reader in, final Writer out, final Reducer reducer) throws Exception {
    handle(in, out, new RecordProcessor() {
      @Override
      public void processNext(RecordReader reader, Output output) throws Exception {
        reducer.reduce(reader.peek()[0], new KeyRecordIterator(
            reader.peek()[0], reader), output);
      }
    });
  }
 
 
  // Shared driver loop: stream records from 'in' through the processor, writing to 'out'.
  private void handle(final Reader in, final Writer out,
      final RecordProcessor processor) throws Exception {
    final RecordReader reader = new RecordReader(in);
    final OutputStreamOutput output = new OutputStreamOutput(out);
 
 
    try {
      while (reader.hasNext()) {
        processor.processNext(reader, output);
      }
    } finally {
      try {
        output.close();
      } finally {
        reader.close();
      }
    }
  }
 
 
  private static interface RecordProcessor {
    void processNext(final RecordReader reader, final Output output) throws Exception;
  }
 
 
  // Iterator over the consecutive records that share a single key.
  private static final class KeyRecordIterator implements Iterator<String[]> {
    private final String key;
    private final RecordReader reader;
 
 
    private KeyRecordIterator(final String key, final RecordReader reader) {
      this.key = key;
      this.reader = reader;
    }
 
 
    @Override
    public boolean hasNext() {
      return (reader.hasNext() && key.equals(reader.peek()[0]));
    }
 
 
    @Override
    public String[] next() {
      if (!hasNext()) {
        throw new NoSuchElementException();
      }
 
 
      return reader.next();
    }
 
 
    @Override
    public void remove() {
      throw new UnsupportedOperationException();
    }
  }
 
 
  // Reads tab-separated lines with one record of lookahead (peek).
  private static final class RecordReader {
    private final BufferedReader reader;
    private String[] next;
 
 
    private RecordReader(final InputStream in) {
      this(new InputStreamReader(in));
    }
 
 
    private RecordReader(final Reader in) {
      reader = new BufferedReader(in);
      next = readNext();
    }
 
 
    private String[] next() {
      final String[] ret = next;
 
 
      next = readNext();
 
 
      return ret;
    }
 
 
    private String[] readNext() {
      try {
        final String line = reader.readLine();
        return (line == null ? null : line.split("\t"));
      } catch (final Exception e) {
        throw new RuntimeException(e);
      }
    }
 
 
    private boolean hasNext() {
      return next != null;
    }
 
 
    private String[] peek() {
      return next;
    }
 
 
    private void close() throws Exception {
      reader.close();
    }
  }
 
 
  // Writes emitted records back out as tab-separated lines.
  private static final class OutputStreamOutput implements Output {
    private final PrintWriter out;
 
 
    private OutputStreamOutput(final OutputStream out) {
      this(new OutputStreamWriter(out));
    }
 
 
    private OutputStreamOutput(final Writer out) {
      this.out = new PrintWriter(out);
    }
 
 
    public void close() throws Exception {
      out.close();
    }
 
 
    @Override
    public void collect(String[] record) throws Exception {
      out.println(_join(record, "\t"));
    }
 
 
    private static String _join(final String[] record, final String separator) {
      if (record == null || record.length == 0) {
        return "";
      }
      final StringBuilder sb = new StringBuilder();
      for (int i = 0; i < record.length; i++) {
        if (i > 0) {
          sb.append(separator);
        }
        sb.append(record[i]);
      }
      return sb.toString();
    }
  }
}
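
The Mapper, Reducer, and Output types referenced above are small companion interfaces. Reconstructed from how GenericMR uses them, they look roughly like this (a sketch, not a verbatim copy of the Hive source; in the actual code each interface lives in its own file):

import java.util.Iterator;

// One input record (a line split on tabs) in, zero or more output records out.
public interface Mapper {
  void map(String[] record, Output output) throws Exception;
}

// Called with each key and an iterator over that key's records.
public interface Reducer {
  void reduce(String key, Iterator<String[]> records, Output output) throws Exception;
}

// Sink for emitted records; GenericMR writes them back out tab-separated.
public interface Output {
  void collect(String[] record) throws Exception;
}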
 
 

The key point: in conventional MapReduce, the reducer's input depends on the mapper's output, so the reducer cannot run on its own. GenericMR's reduce(), however, accepts an InputStream/Reader directly, so you can run just the reduce side by constructing any Java input stream or Reader and handing it in:

new GenericMR().reduce(new StringReader("a\tb\tc"), new StringWriter(),
    new Reducer() {
      @Override
      public void reduce(String key, Iterator<String[]> records,
          Output output) throws Exception {
        // Consume every record for this key and pass it straight through.
        while (records.hasNext()) {
          output.collect(records.next());
        }
      }
    });
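
A fuller, self-contained sketch of the same idea (the class name ReducerOnlyDemo is illustrative, and it assumes GenericMR and its Reducer/Output interfaces shown above are on the classpath in the same package): feed multi-line, tab-separated text to reduce() and read the aggregated result back out of the StringWriter, with no map phase involved.

import java.io.StringReader;
import java.io.StringWriter;
import java.util.Iterator;

public class ReducerOnlyDemo {
  public static void main(String[] args) throws Exception {
    // Reduce-side input must already be sorted/grouped by key (the first column),
    // exactly as it would arrive after a shuffle: two records for "a", one for "b".
    final String input = "a\t1\na\t2\nb\t3\n";
    final StringWriter out = new StringWriter();

    new GenericMR().reduce(new StringReader(input), out, new Reducer() {
      @Override
      public void reduce(String key, Iterator<String[]> records, Output output)
          throws Exception {
        // Count how many records arrived for this key.
        int count = 0;
        while (records.hasNext()) {
          records.next();
          count++;
        }
        output.collect(new String[] { key, String.valueOf(count) });
      }
    });

    // Prints "a<TAB>2" and "b<TAB>1", one line per key.
    System.out.print(out);
  }
}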