How the Hive framework computes the number of reducers

This article is reposted; the original is at http://blog.csdn.net/wisgood/article/details/42125367


Every time we run a Hive HQL query, the shell prints a hint like this:

...
Number of reduce tasks not specified. Estimated from input data size: 500
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
...
Tuning this is a common optimization step; the reducer count is determined mainly by the following three properties:

hive.exec.reducers.bytes.per.reducer    This parameter controls how much data each reducer is expected to process. The default is 1GB.

 This controls how many reducers a map-reduce job should have, depending on the total size of input files to the job. Default is 1GB

hive.exec.reducers.max     This parameter controls the maximum number of reducers. If input / bytes per reducer > max, the job launches the number of reducers given by this parameter instead. It does not affect the mapred.reduce.tasks setting. The default max is 999.

This controls the maximum number of reducers a map-reduce job can have.  If input_file_size divided by "hive.exec.bytes.per.reducer" is greater than this value, the map-reduce job will have this value as the number of reducers.  Note this does not affect the number of reducers directly specified by the user through "mapred.reduce.tasks" and query hints


mapred.reduce.tasks    If this parameter is set, Hive does not run its estimation function to compute the reducer count automatically; it launches exactly this many reducers. The default is -1.

This overrides the hadoop configuration to make sure we enable the estimation of the number of reducers by the size of the input files. If this value is non-negative, then hive will pass this number directly to map-reduce jobs instead of doing the estimation.
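
For example, all three parameters can be overridden per session before running a query. A minimal illustration (the specific values below are arbitrary, not recommendations):

set hive.exec.reducers.bytes.per.reducer=536870912;   -- 512MB per reducer
set hive.exec.reducers.max=100;                       -- cap the estimated count at 100
set mapred.reduce.tasks=20;                           -- fixed count; disables estimation entirely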


The reducer count actually has a big impact on execution efficiency:

1. Too few reducers: with a large input, each reducer runs extremely slowly, the job may never finish, and it can even hit OOM.

2. Too many reducers: too many small files are generated, merging them is costly, and the NameNode's memory footprint grows as well.


If we do not set mapred.reduce.tasks, Hive computes the needed number of reducers automatically.


The formula:  number of reducers = InputFileSize / bytes per reducer

That is a rough version; the detailed formula is in:

common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
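
As a quick illustration of the rough formula, suppose (numbers are illustrative) a query reads 10GB of input with the default 1GB per reducer:

InputFileSize     = 10,737,418,240 bytes (10GB)
bytes per reducer =  1,073,741,824 bytes (1GB, the default)
reducers          = 10,737,418,240 / 1,073,741,824 = 10

With the default hive.exec.reducers.max of 999, the cap does not kick in, so Hive estimates 10 reducers.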

Let's take a look:

1. The method that computes the total input size: it is straightforward, iterate over every input path, fetch its length, and accumulate.

  /**
   * Calculate the total size of input files.
   * @param job the hadoop job conf.
   * @return the total size in bytes.
   * @throws IOException
   */
  public static long getTotalInputFileSize(JobConf job, mapredWork work) throws IOException {
    long r = 0;
    FileSystem fs = FileSystem.get(job);
    // For each input path, calculate the total size.
    for (String path: work.getPathToAliases().keySet()) {
      ContentSummary cs = fs.getContentSummary(new Path(path));
      r += cs.getLength();
    }
    return r;
  }
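
Note that fs.getContentSummary aggregates recursively, so a directory path counts every file beneath it. A standalone sketch of the same accumulation (the class name and paths here are hypothetical, for illustration only):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class InputSizeProbe {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    long total = 0;
    // Hypothetical input paths; the real job takes these from the work's path-to-alias map.
    for (String p : new String[] {"/user/hive/warehouse/t1", "/user/hive/warehouse/t2"}) {
      ContentSummary cs = fs.getContentSummary(new Path(p));
      total += cs.getLength(); // total bytes under this path, recursively
    }
    System.out.println("totalInputFileSize=" + total);
  }
}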
2. Estimating the reducer count, and the formula itself:

Note the key line:  int reducers = (int)((totalInputFileSize + bytesPerReducer - 1) / bytesPerReducer);

  /**
   * Estimate the number of reducers needed for this job, based on job input,
   * and configuration parameters.
   * @return the number of reducers.
   */
  public int estimateNumberOfReducers(HiveConf hive, JobConf job, mapredWork work) throws IOException {
    long bytesPerReducer = hive.getLongVar(HiveConf.ConfVars.BYTESPERREDUCER);
    int maxReducers = hive.getIntVar(HiveConf.ConfVars.MAXREDUCERS);
    long totalInputFileSize = getTotalInputFileSize(job, work);

    LOG.info("BytesPerReducer=" + bytesPerReducer + " maxReducers=" + maxReducers
        + " totalInputFileSize=" + totalInputFileSize);
    int reducers = (int)((totalInputFileSize + bytesPerReducer - 1) / bytesPerReducer);
    reducers = Math.max(1, reducers);
    reducers = Math.min(maxReducers, reducers);
    return reducers;
  }
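
The expression (totalInputFileSize + bytesPerReducer - 1) / bytesPerReducer is integer ceiling division: any leftover bytes still get a reducer of their own. A self-contained sketch of the same arithmetic (the class name and input size are illustrative):

public class ReducerEstimate {
  public static void main(String[] args) {
    long bytesPerReducer = 1073741824L;    // 1GB, the default
    int maxReducers = 999;                 // default hive.exec.reducers.max
    long totalInputFileSize = 2684354560L; // 2.5GB of input, illustrative

    // Ceiling division: 2.5GB / 1GB rounds up to 3.
    int reducers = (int) ((totalInputFileSize + bytesPerReducer - 1) / bytesPerReducer);
    reducers = Math.max(1, reducers);           // at least one reducer
    reducers = Math.min(maxReducers, reducers); // never exceed the configured max
    System.out.println(reducers);               // prints 3
  }
}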
3. The actual flow that decides the count:
  /**
   * Set the number of reducers for the mapred work.
   */
  protected void setNumberOfReducers() throws IOException {
    // this is a temporary hack to fix things that are not fixed in the compiler
    Integer numReducersFromWork = work.getNumReduceTasks();

    if (numReducersFromWork != null && numReducersFromWork >= 0) {
      LOG.info("Number of reduce tasks determined at compile: " + work.getNumReduceTasks());
    } else if (work.getReducer() == null) {
      LOG.info("Number of reduce tasks not specified. Defaulting to 0 since there's no reduce operator");
      work.setNumReduceTasks(Integer.valueOf(0));
    } else {
      int reducers = estimateNumberOfReducers(conf, job, work);
      work.setNumReduceTasks(reducers);
      LOG.info("Number of reduce tasks not specified. Estimated from input data size: " + reducers);
    }
  }
That is how the reducer count is computed: Hive uses the compile-time value when one was specified, defaults to 0 when there is no reduce operator, and otherwise estimates from the total input size.



