First, let's look at a diagram that illustrates how a MapReduce job runs.
Input: the input file.
Splitting: the file is split line by line.
Mapping: each word on a line is counted, i.e. emitted as a (word, 1) pair.
Shuffling: pairs with the same word are routed to the same node.
Reducing: each node totals the counts for its words.
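The stages above can be sketched as a single-process simulation. This is illustrative only: real MapReduce runs map and reduce tasks in parallel across a cluster, and the function names here are our own, not Hadoop APIs.

```python
from collections import defaultdict

def map_phase(line):
    # Mapping: emit (word, 1) for each word on the line.
    return [(word, 1) for word in line.split()]

def shuffle_phase(mapped):
    # Shuffling: group counts by word, so the same word ends up
    # in the same group (standing in for "the same node").
    groups = defaultdict(list)
    for word, count in mapped:
        groups[word].append(count)
    return groups

def reduce_phase(word, counts):
    # Reducing: sum the counts for one word.
    return (word, sum(counts))

# Splitting: the input arrives already broken into lines.
text = ["deer bear river", "car car river", "deer car bear"]
mapped = [pair for line in text for pair in map_phase(line)]
grouped = shuffle_phase(mapped)
result = dict(reduce_phase(w, c) for w, c in grouped.items())
print(result)  # {'deer': 2, 'bear': 2, 'river': 2, 'car': 3}
```

The shuffle step is the key idea: because all occurrences of a word are grouped under one key before reducing, each reducer can compute its totals independently.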
That is the flow of a simple MapReduce job. Now let's look at the official documentation:
A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file-system. The framework takes care of scheduling tasks, monitoring them and re-executes the failed tasks.
In other words: a MapReduce job usually splits the input files into independent chunks, which the map tasks process in parallel. The framework then sorts the map outputs, and the sorted results become the input to the reduce phase. The input and output of the job are typically stored in a file system, and the framework takes care of scheduling tasks, monitoring them, and re-executing any that fail.
Typically the compute nodes and the storage nodes are the same, that is, the MapReduce framework and the Hadoop Distributed File System (see HDFS Architecture Guide) are running on the same set of nodes. This configuration allows the framework to effectively schedule tasks on the nodes where data is already present, resulting in very high aggregate bandwidth across the cluster.
In other words: the compute nodes and the storage nodes are usually the same, which means the MapReduce framework and HDFS run on the same set of nodes. This configuration lets the framework schedule tasks on the nodes where the data already resides, which yields very high aggregate bandwidth across the cluster.
The MapReduce framework operates exclusively on &lt;key, value&gt; pairs, that is, the framework views the input to the job as a set of &lt;key, value&gt; pairs and produces a set of &lt;key, value&gt; pairs as the output of the job, conceivably of different types.
In other words: the MapReduce framework operates only on &lt;key, value&gt; pairs. It views the job's input as a set of &lt;key, value&gt; pairs and produces a set of &lt;key, value&gt; pairs as the job's output, and the key/value types at each stage may differ.
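To make "conceivably of different types" concrete, here is a sketch of how the key/value types change across the WordCount stages. The type annotations are our own illustration, not Hadoop's actual interfaces:

```python
# Input to map:     key = byte offset (int), value = line (str)
# Output of map:    key = word (str),        value = count (int)
# Output of reduce: key = word (str),        value = total (int)

def mapper(offset: int, line: str) -> list[tuple[str, int]]:
    # The input key (byte offset) is often ignored in WordCount.
    return [(word, 1) for word in line.split()]

def reducer(word: str, counts: list[int]) -> tuple[str, int]:
    return (word, sum(counts))

pairs = mapper(0, "car car river")
print(pairs)                     # [('car', 1), ('car', 1), ('river', 1)]
print(reducer("car", [1, 1]))    # ('car', 2)
```

Note that the map input key (an int offset) and the map output key (a str word) are different types, which is exactly what the documentation allows.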