How Many Maps And Reduces

After reading this article, you should be able to explain why setting dfs.block.size too large is also a bad idea!


• Parameters that control split size in the new MR API
  – mapred.max.split.size: the maximum size of a split; default: Long.MAX_VALUE
  – mapred.min.split.size: the minimum size of a split; default: 1
• Split-size formula in the new MR API (a small sketch of this calculation follows the list)
  – splitSize = max(minSize, min(maxSize, blockSize))
  – minSize = ${mapred.min.split.size}
  – maxSize = ${mapred.max.split.size}
• mapred.max.split.size can be used to increase the number of maps (set it somewhat smaller than the block size).
• mapred.min.split.size can be used to decrease the number of maps (set it larger than blockSize; the larger it gets, the fewer maps you get).
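A minimal Java sketch of the rule above (illustrative only, not the actual FileInputFormat source; the 128 MB block and the 64 MB / 256 MB split bounds are assumed example values):

```java
// Sketch of splitSize = max(minSize, min(maxSize, blockSize)) with assumed values.
public class SplitSizeDemo {
    static long computeSplitSize(long minSize, long maxSize, long blockSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize  = 128L * 1024 * 1024;  // assumed 128 MB HDFS block
        long defaultMin = 1L;                  // mapred.min.split.size default
        long defaultMax = Long.MAX_VALUE;      // mapred.max.split.size default

        // Defaults: splitSize == blockSize, i.e. one map per block.
        System.out.println(computeSplitSize(defaultMin, defaultMax, blockSize));          // 134217728

        // Lowering mapred.max.split.size below the block size -> smaller splits, more maps.
        System.out.println(computeSplitSize(defaultMin, 64L * 1024 * 1024, blockSize));   // 67108864

        // Raising mapred.min.split.size above the block size -> larger splits, fewer maps.
        System.out.println(computeSplitSize(256L * 1024 * 1024, defaultMax, blockSize));  // 268435456
    }
}
```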

Partitioning your job into maps and reduces

Picking the appropriate size for the tasks of your job can radically change the performance of Hadoop. Increasing the number of tasks increases the framework overhead, but improves load balancing and lowers the cost of failures. At one extreme is the 1 map / 1 reduce case, where nothing is distributed. At the other extreme, 1,000,000 maps / 1,000,000 reduces leaves the framework spending all its resources on overhead.

Number of Maps

The number of maps is usually driven by the number of DFS blocks in the input files, which is why people often adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps/node, although we have taken it up to 300 or so for very CPU-light map tasks. Task setup takes a while, so it is best if each map takes at least a minute to execute.

Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments. However, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. Thus, if you expect 10TB of input data and have 128MB DFS blocks, you'll end up with 82k maps, unless your mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps.
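A back-of-the-envelope check of that 82k figure (the 256 MB minimum split below is just an assumed example of coarsening the splits):

```java
// Rough map-count arithmetic: maps ≈ total input size / split size.
public class MapCountEstimate {
    public static void main(String[] args) {
        long totalInput = 10L * 1024 * 1024 * 1024 * 1024; // 10 TB of input
        long blockSize  = 128L * 1024 * 1024;              // 128 MB DFS blocks

        // Default case: one split per block.
        System.out.println(totalInput / blockSize);        // 81920, i.e. ~82k maps

        // Assumed example: raising mapred.min.split.size to 256 MB halves the map count.
        long minSplit = 256L * 1024 * 1024;
        System.out.println(totalInput / minSplit);         // 40960 maps
    }
}
```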

The number of map tasks can also be increased manually via JobConf's conf.setNumMapTasks(int num), but this cannot lower the count below what Hadoop determines by splitting the input data.
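A small old-API sketch of both knobs (the values 200 and 256 MB are assumed examples, not recommendations):

```java
import org.apache.hadoop.mapred.JobConf;

public class MapTuning {
    public static void main(String[] args) {
        JobConf conf = new JobConf(MapTuning.class);

        // A hint to the InputFormat only: the actual count still comes from the input splits.
        conf.setNumMapTasks(200);

        // Raise the lower bound on split size (256 MB is an assumed value)
        // to coarsen the splits and reduce the number of maps.
        conf.setLong("mapred.min.split.size", 256L * 1024 * 1024);
    }
}
```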

Number of Reduces

The right number of reduces seems to be 0.95 or 1.75 * (nodes * mapred.tasktracker.tasks.maximum). At 0.95, all of the reduces can launch immediately and start transferring map outputs as the maps finish. At 1.75, the faster nodes will finish their first round of reduces and launch a second round, doing a much better job of load balancing.
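For example, on a hypothetical 10-node cluster with 4 reduce slots per node (assumed numbers):

```java
// Reduce-count heuristic from above, on an assumed 10-node cluster
// with 4 reduce slots per node (mapred.tasktracker.tasks.maximum = 4).
public class ReduceCountEstimate {
    public static void main(String[] args) {
        int nodes = 10;
        int slotsPerNode = 4;
        System.out.println((int) (0.95 * nodes * slotsPerNode)); // 38: one wave of reduces
        System.out.println((int) (1.75 * nodes * slotsPerNode)); // 70: two waves, better balancing
    }
}
```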

Currently the number of reduces is limited to roughly 1000 by the buffer size for the output files (io.buffer.size * 2 * numReduces << heapSize). This will be fixed at some point, but until it is, it provides a pretty firm upper bound.

The number of reduces also controls the number of output files in the output directory, but usually that is not important because the next map/reduce step will split them into even smaller splits for the maps.

The number of reduce tasks can also be increased in the same way as the map tasks, via JobConf's conf.setNumReduceTasks(int num).
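Continuing the hypothetical 10-node example, a minimal sketch (38 just reuses the 0.95-factor result above):

```java
import org.apache.hadoop.mapred.JobConf;

public class ReduceTuning {
    public static void main(String[] args) {
        JobConf conf = new JobConf(ReduceTuning.class);
        // Unlike the map count, the reduce count set here is honored directly.
        conf.setNumReduceTasks(38);
    }
}
```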


http://wiki.apache.org/hadoop/HowManyMapsAndReduces


What is your typical block size in HDFS of your production clusters?

I am primarily looking for the block size distribution in production clusters. From your Hadoop deployment experience, what are the minimum, average, and maximum block sizes you have seen?

The default block size is, of course, 64MB for newly created files. We (Cloudera) generally recommend starting at 128MB instead. While the block size affects (best case) sequential read and write sizes, it also has a direct impact on map task performance because of how input splits are calculated by default. Generally, you'll get a single input split for each HDFS block (modulo all the ways you can change this). What you're looking for is to amortize the cost of JVM startup and scheduler overhead over the length of the job. In other words, if you have a small block size, each map task has very little to do: it schedules the task, finds a machine, starts a JVM, processes a very small amount of data, and exits. As CPUs get faster, the individual map task run time gets shorter and the cost of having more tasks gets higher. You want to find a balance where each task is able to process a reasonable[1] amount of data while still getting the benefits of parallelism.

The smaller the block size, the more tasks you get and the more scheduling activity occurs. You don't want jobs with hundreds of thousands of tasks (unless that's genuinely what the input data size prescribes), but you also don't want so few that you fail to take advantage of all the slots on the cluster. You may be tempted to aim for each job utilizing exactly 100% of the cluster, no more, no less, but in a multitenant environment where there is slot contention, that doesn't work either. All of this is very overwhelming, so what's the right answer? Start with 128MB and observe. Remember that the block size is per file, not for all of HDFS. The dfs.block.size parameter only affects the size of newly created files that don't specifically set a block size.
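A minimal sketch of both levels of control mentioned above (the path, buffer size, replication factor, and block sizes are assumed example values):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Default block size for files this client creates when no explicit size is given.
        conf.setLong("dfs.block.size", 128L * 1024 * 1024); // 128 MB

        // Per-file override: block size is a property of the file, not of HDFS as a whole.
        FileSystem fs = FileSystem.get(conf);
        FSDataOutputStream out = fs.create(
                new Path("/tmp/demo.dat"),   // assumed path
                true,                        // overwrite
                4096,                        // I/O buffer size
                (short) 3,                   // replication factor
                256L * 1024 * 1024);         // 256 MB blocks for this file only
        out.close();
    }
}
```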

I also want to caution against over-thinking / over-tuning something like this. You don't want to micromanage the block size of each dataset in the cluster. You probably have better fish to fry.

[1] I get that "reasonable" is terribly subjective. My personal bar is that each task takes at least 30 seconds.


Ref: http://www.quora.com/HDFS/What-is-your-typical-block-size-in-HDFS-of-your-production-clusters#
