Applicability:
1. This pattern requires a comparison function for two records; that is, we must be able to determine, by comparing them, which of two records is larger (see the sketch after this list).
2. The number of output records must be dramatically smaller than the number of input records; otherwise, computing a total ordering of the whole data set would make more sense.
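As a minimal sketch of the comparison function from point 1 (the Record class and its score field are assumptions invented for this illustration, not part of the pattern itself):

import java.util.Comparator;

// Hypothetical record type, used only to illustrate the comparison function.
class Record {
    int score;
    Record(int score) { this.score = score; }
}

// A total ordering over records; the top-K pattern needs exactly this kind
// of "which of these two records is larger?" decision.
class RecordComparator implements Comparator<Record> {
    @Override
    public int compare(Record a, Record b) {
        return Integer.compare(a.score, b.score);
    }
}

In the example below, the records are plain integers, so their natural ordering (as maintained by TreeMap) plays the role of this comparator.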
Structure:
This pattern uses both a mapper and a reducer. Each map task finds the top K of its own portion of the input, and all of those independent top-K sets are then combined in the reducer, which performs the final top-K computation. Because each mapper outputs at most K records, and K is usually fairly small, a single reducer is enough for the final step.
Performance analysis:
The top-10 pattern usually performs well, but there are some important limitations to consider. Most of them stem from the fact that the pattern uses exactly one reducer, no matter how many records it has to process. Pay special attention to the number of records that single reducer must handle: each map task outputs up to K records, so a job with M map tasks sends up to K*M records to the reducer. For example, with K = 10 and M = 10,000 map tasks, the lone reducer has to process 100,000 records on a single machine; this value grows with the size of the input and can become a bottleneck.
Below is a top-10 example written in MapReduce.
The input file looks like this:
10 9 8 7 6 5 1 2 3 4
11 12 13 14 15 20 19 18 17 16
The final result is:
11
12
13
14
15
16
17
18
19
20
Now let's look at the code:
package com.mr.top10;
import java.io.IOException;
import java.net.URI;
import java.util.StringTokenizer;
import java.util.TreeMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
/**
* @author luobao
*
*/
public class Top10 {

    public static class TopTenMapper extends
            Mapper<Object, Text, NullWritable, IntWritable> {

        // Holds the 10 largest values this map task has seen so far. TreeMap
        // keeps its keys sorted, so the smallest entry is always firstKey().
        // Because the value itself is used as the key, duplicates collapse
        // into a single entry.
        private TreeMap<Integer, String> repToRecordMap = new TreeMap<Integer, String>();

        @Override
        public void map(Object key, Text value, Context context) {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                repToRecordMap.put(Integer.parseInt(itr.nextToken()), " ");
                // Keep the map at 10 entries by evicting the current minimum.
                if (repToRecordMap.size() > 10) {
                    repToRecordMap.remove(repToRecordMap.firstKey());
                }
            }
        }

        // Emit this task's local top 10 only after the entire split is read.
        @Override
        protected void cleanup(Context context) throws IOException,
                InterruptedException {
            for (Integer i : repToRecordMap.keySet()) {
                context.write(NullWritable.get(), new IntWritable(i));
            }
        }
    }

    public static class TopTenReducer extends
            Reducer<NullWritable, IntWritable, NullWritable, IntWritable> {

        private TreeMap<Integer, String> repToRecordMap = new TreeMap<Integer, String>();

        @Override
        public void reduce(NullWritable key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            // Every mapper emitted the same NullWritable key, so all of the
            // local top-10 lists arrive in this single reduce call.
            for (IntWritable value : values) {
                repToRecordMap.put(value.get(), " ");
                if (repToRecordMap.size() > 10) {
                    repToRecordMap.remove(repToRecordMap.firstKey());
                }
            }
            // keySet() iterates in ascending order, so the output is sorted.
            for (Integer i : repToRecordMap.keySet()) {
                context.write(NullWritable.get(), new IntWritable(i));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String ipPre = "hdfs://192.168.40.191:9000/";
        removeOutput(conf, ipPre);
        Job job = Job.getInstance(conf);
        job.setJarByClass(Top10.class);
        job.setMapperClass(TopTenMapper.class);
        job.setReducerClass(TopTenReducer.class);
        // A single reducer is essential: it must see every mapper's local top 10.
        job.setNumReduceTasks(1);
        job.setMapOutputKeyClass(NullWritable.class);  // map output key
        job.setMapOutputValueClass(IntWritable.class); // map output value
        job.setOutputKeyClass(NullWritable.class);     // reduce output key
        job.setOutputValueClass(IntWritable.class);    // reduce output value
        FileInputFormat.addInputPath(job, new Path(ipPre + "input/top10"));
        FileOutputFormat.setOutputPath(job, new Path(ipPre + "output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    // Delete the output directory if it already exists, so the job can be rerun.
    private static void removeOutput(Configuration conf, String ipPre)
            throws IOException {
        String outputPath = ipPre + "output";
        FileSystem fs = FileSystem.get(URI.create(outputPath), conf);
        Path path = new Path(outputPath);
        if (fs.exists(path)) {
            fs.delete(path, true);
        }
        // Do not close fs here: FileSystem.get() returns a cached, shared
        // instance that the job submission code still needs.
    }
}
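Assuming the compiled classes are packaged into a jar named top10.jar (the jar name here is just a placeholder), the job can be submitted in the usual way:

hadoop jar top10.jar com.mr.top10.Top10

The single reducer writes its output to part-r-00000 under the output directory on HDFS; for the input above it contains the values 11 through 20, one per line, in ascending order.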