Many blog posts implement the inverted index with the Map-Combine-Reduce pattern, but that approach is actually incomplete. Here is why: each block (data split) gets its own map task, and each map task runs its own combine. So when an input file (let's call it SayHello) is larger than one block, two map tasks will process it. Now suppose the word Hello appears in both parts, 30 times in the first and 10 times in the second. The Map-Combine-Reduce version of the inverted index then yields Hello SayHello--30,SayHello--10, which is clearly not what we want; the expected output is Hello SayHello--40.
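To make the critique concrete, here is a rough reconstruction (my own sketch, not taken from any particular post) of the combiner used in that pattern: the mapper emits <word, filename--1>, the combiner adds up the counts it sees, and the reducer simply joins the value strings. Because a combiner only sees the output of its own map task, i.e. one split, a file that spans two splits produces two partial counts that are never merged again.

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical combiner of the Map-Combine-Reduce pattern being criticized.
class InvertedIndexCombiner extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        String fileName = "";
        // all values seen here come from one split, hence from one file
        for (Text value : values) {
            String[] parts = value.toString().split("--");
            fileName = parts[0];
            sum += Long.parseLong(parts[1]);
        }
        // emits e.g. <Hello, SayHello--30> for split 1 and <Hello, SayHello--10> for split 2,
        // and the reducer can only concatenate the two strings
        context.write(key, new Text(fileName + "--" + sum));
    }
}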
Design idea:
We build the inverted index with two MapReduce jobs. The first job sums up how many times each word occurs in each file; the second job collects, for every word, all the files it occurs in. (A small worked example of the data flow follows the outline below.)
Outline:
index1 (the first MapReduce job)
map1:
input:  <line offset (key), line contents>
        LongWritable, Text
output: <word \t filename, 1>
        Text, LongWritable
reduce1:
input:  <word \t filename, [1, 1, ...]>
        Text, LongWritable
output: <word \t filename, sum>
        Text, LongWritable
index2 (the second MapReduce job)
map2:
input:  <line offset (key), line contents>
        LongWritable, Text
output: <word, filename--sum>
        Text, Text
reduce2:
input:  <word, [filename--sum, filename--sum, ...]>
        Text, Text
(here the values belonging to one word are concatenated into a single string)
output: <word, concatenated string>
        Text, Text
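To make the key/value design concrete, suppose the file SayHello contains the single line Hello Hello world (the file name and counts are made up for illustration). The stages then produce:

map1 emits:     <Hello\tSayHello, 1>, <Hello\tSayHello, 1>, <world\tSayHello, 1>
reduce1 writes: Hello\tSayHello\t2 and world\tSayHello\t1 (one line each, \t is a tab)
map2 emits:     <Hello, SayHello--2>, <world, SayHello--1>
reduce2 writes: Hello\tSayHello--2 and world\tSayHello--1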
If things are still not entirely clear at this point, the code and its comments should clear everything up.
Here is index1 (the code for the first MapReduce job):
package com.test.hoop.index;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Index1 {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        try {
            Job job = Job.getInstance(conf);
            // set the Mapper and Reducer classes to use
            job.setMapperClass(Index1Mapper.class);
            job.setReducerClass(Index1Reducer.class);
            // input directory and output directory
            FileInputFormat.setInputPaths(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            // the class that carries the job jar
            job.setJarByClass(Index1.class);
            /* // map output types
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(LongWritable.class); */
            // when the map output types match the reduce output types,
            // the map output types do not have to be set explicitly
            // reduce output types
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);
            job.waitForCompletion(true);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

// map1
class Index1Mapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, Text, LongWritable>.Context context)
            throws IOException, InterruptedException {
        // take the line and split it on spaces
        String line = value.toString();
        String[] words = line.split(" ");
        // get the name of the file this split belongs to
        String fileName = ((FileSplit) context.getInputSplit()).getPath().getName();
        for (String string : words) {
            if (string != null && string.length() > 0) {
                // the reducer has to compute a sum, so the 1 must go into the value by itself;
                // we cannot write context.write(new Text(string), new Text(fileName + "\t" + 1))
                // the \t separator is used so the second job's mapper can split on \t
                context.write(new Text(string + "\t" + fileName), new LongWritable(1));
            }
        }
    }
}

// reduce1
class Index1Reducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values,
            Reducer<Text, LongWritable, Text, LongWritable>.Context context) throws IOException, InterruptedException {
        // add up all the 1s for this word-filename key
        long sum = 0;
        for (LongWritable value : values) {
            sum += value.get();
        }
        context.write(key, new LongWritable(sum));
    }
}
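One detail worth spelling out before moving on: job 1's reducer writes through the default TextOutputFormat, which puts a tab between the key and the value. Since the key is already word\tfilename, every line of the intermediate output has exactly three tab-separated fields, e.g. Hello\tSayHello\t40 (the count 40 is hypothetical). That is precisely the format map2 relies on when it splits each line on \t.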
Here is index2 (the code for the second MapReduce job):
package com.test.hoop.index;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Index2 {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        try {
            Job job = Job.getInstance(conf);
            // set the Mapper and Reducer classes to use
            job.setMapperClass(Index2Mapper.class);
            job.setReducerClass(Index2Reducer.class);
            // input directory (job 1's output) and output directory
            FileInputFormat.setInputPaths(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            // the class that carries the job jar
            job.setJarByClass(Index2.class);
            /* // map output types
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(Text.class); */
            // when the map output types match the reduce output types,
            // the map output types do not have to be set explicitly
            // reduce output types
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);
            job.waitForCompletion(true);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

// map2
class Index2Mapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, Text, Text>.Context context)
            throws IOException, InterruptedException {
        // take the line and split it on \t
        String line = value.toString();
        String[] words = line.split("\t");
        // each line of job 1's output has the form: word \t filename \t sum
        if (words.length == 3) {
            context.write(new Text(words[0]), new Text(words[1] + "--" + words[2]));
        }
    }
}

// reduce2
class Index2Reducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values,
            Reducer<Text, Text, Text, Text>.Context context) throws IOException, InterruptedException {
        // concatenate all filename--sum values for this word into one string
        String str = "";
        for (Text value : values) {
            if (str.length() > 0) {
                str += ",";
            }
            str += value.toString();
        }
        context.write(key, new Text(str));
    }
}
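The two jobs are submitted one after the other, with job 2 reading job 1's output directory as its input. If you prefer to launch them from a single program, a driver along the following lines works; the class name IndexDriver and the three path arguments are my own choices here, not part of the code above.

package com.test.hoop.index;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical driver that chains the two jobs: job 2 only starts after job 1
// has finished, and its input is job 1's output directory.
public class IndexDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path input = new Path(args[0]);   // raw text files
        Path tmp = new Path(args[1]);     // intermediate output of job 1
        Path output = new Path(args[2]);  // final inverted index

        Job job1 = Job.getInstance(conf, "index1");
        job1.setJarByClass(Index1.class);
        job1.setMapperClass(Index1Mapper.class);
        job1.setReducerClass(Index1Reducer.class);
        job1.setOutputKeyClass(Text.class);
        job1.setOutputValueClass(LongWritable.class);
        FileInputFormat.setInputPaths(job1, input);
        FileOutputFormat.setOutputPath(job1, tmp);
        // only launch job 2 if job 1 succeeded
        if (!job1.waitForCompletion(true)) {
            System.exit(1);
        }

        Job job2 = Job.getInstance(conf, "index2");
        job2.setJarByClass(Index2.class);
        job2.setMapperClass(Index2Mapper.class);
        job2.setReducerClass(Index2Reducer.class);
        job2.setOutputKeyClass(Text.class);
        job2.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(job2, tmp);
        FileOutputFormat.setOutputPath(job2, output);
        System.exit(job2.waitForCompletion(true) ? 0 : 1);
    }
}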
Admittedly, on small files (ones that fit within a single block) Map-Combine-Reduce does produce the correct result. But as people learning big data, we should understand this weakness of the combiner and think such details through rigorously; isn't that what makes us better programmers?