In Professor Huang Yihua's MapReduce course, one of the labs asks you to implement a document inverted index with word frequencies. Normally the code in his book gets you most of the way there, but running it as printed involves a few tricky details and throws some exceptions. If you just want working code, the article 《Hadoop之倒排索引》 already implements everything needed. In the spirit of knowing not only what works but why, though, I am sharing here the problems I hit along the way and how I worked through them.
First, something very basic: none of our classes are static inner classes; each one lives in its own source file (so that copy-paste users can take them directly).
Three files are used for testing on a single machine, file1.txt, file2.txt, and file3.txt, with the following contents:
file1.txt:
one fish
three cat
green bull
two fish
file2.txt:
red fish
blue tiger
file3.txt:
one red bird
The final output should look something like this: one <file1.txt, 1>;<file3.txt,1>.
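Before the code, it may help to trace the intended data flow: the Mapper emits a composite key of the form term#filename with a count of 1 per occurrence, the Combiner pre-sums counts for identical composite keys, the Partitioner routes keys by the term alone, and the Reducer folds consecutive composite keys for the same term into one posting list. For the word fish in the test files above, the flow is roughly:

map:     (fish#file1.txt, 1), (fish#file1.txt, 1), (fish#file2.txt, 1)
combine: (fish#file1.txt, 2), (fish#file2.txt, 1)
reduce:  fish -> <file1.txt,2>;<file2.txt,1>;<total,3>.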
Following the book's code, we first write the Mapper. Our data needs no stop-word removal, so that part is omitted. The Mapper code is as follows:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class InvertedIndexMapper extends Mapper<Object, Text, Text, IntWritable> {
    @Override
    protected void map(Object key, Text value,
            org.apache.hadoop.mapreduce.Mapper.Context context)
            throws IOException, InterruptedException {
        // Recover the source file name from the input split, so the key can
        // carry both the term and the document it came from.
        FileSplit fileSplit = (FileSplit) context.getInputSplit();
        String fileName = fileSplit.getPath().getName();
        String line = value.toString().toLowerCase();
        StringTokenizer itr = new StringTokenizer(line);
        while (itr.hasMoreTokens()) {
            // Composite key: term#filename, e.g. "fish#file1.txt".
            Text word = new Text();
            word.set(itr.nextToken() + "#" + fileName);
            // The count is written as a Text here; as we will see below,
            // this is the source of the exceptions at run time.
            Text valueInfo = new Text();
            valueInfo.set("1");
            context.write(word, valueInfo);
        }
    }
}
There is one small difference from the book here: the third parameter of map() is written as the raw org.apache.hadoop.mapreduce.Mapper.Context rather than the inherited generic Context. Eclipse reports no syntax error for it, so we press on. Next come the Combiner and the Partitioner:
Combiner:
import java.io.IOException;

import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Reducer;

public class InvertedCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Pre-aggregate on the map side: sum the 1s for each term#filename key.
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
Partitioner:
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;

public class NewPartitioner extends HashPartitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        // Partition on the term alone (the part of the key before "#"), so
        // that all term#filename keys for one term reach the same reducer.
        String term = key.toString().split("#")[0];
        return super.getPartition(new Text(term), value, numReduceTasks);
    }
}
Pay attention to the imports here: pulling HashPartitioner from the old org.apache.hadoop.mapred API instead of the new org.apache.hadoop.mapreduce one causes a parameter-mismatch error at setPartitionerClass() in the main method. It is easy to fix once you see it, as sketched below.
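For reference, a minimal illustration of the import that matters (the commented-out line is the one to avoid):

// New API: NewPartitioner then extends org.apache.hadoop.mapreduce.Partitioner,
// which is what Job.setPartitionerClass(...) expects.
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;

// Old API: extending this instead makes setPartitionerClass(NewPartitioner.class)
// fail, because the class no longer derives from the new-API Partitioner.
// import org.apache.hadoop.mapred.lib.HashPartitioner;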
Next, the code for the Reduce stage:
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Reducer;

public class InvertedIndexReducer extends Reducer<Text, IntWritable, Text, Text> {
    private Text word1 = new Text();
    private Text word2 = new Text();
    String temp = new String();
    // The term whose postings are currently being collected, plus its posting
    // list. Keys arrive sorted, so a change of term means the previous term's
    // posting list is complete and can be emitted.
    static Text CurrentItem = new Text(" ");
    static List<String> postingList = new ArrayList<String>();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        // Split the composite key back into term and filename.
        word1.set(key.toString().split("#")[0]);
        temp = key.toString().split("#")[1];
        for (IntWritable val : values) {
            sum += val.get();
        }
        word2.set("<" + temp + "," + sum + ">");
        // A new term has arrived: flush the previous term's posting list.
        if (!CurrentItem.equals(word1) && !CurrentItem.equals(" ")) {
            StringBuilder out = new StringBuilder();
            long count = 0;
            for (String p : postingList) {
                out.append(p);
                out.append(";");
                count += Long.parseLong(p.substring(p.indexOf(",") + 1, p.indexOf(">")));
            }
            out.append("<total," + count + ">.");
            if (count > 0)
                context.write(CurrentItem, new Text(out.toString()));
            postingList = new ArrayList<String>();
        }
        CurrentItem = new Text(word1);
        postingList.add(word2.toString());
    }

    // The last term never sees a "next" term in reduce(), so flush it here.
    @Override
    protected void cleanup(Context context)
            throws IOException, InterruptedException {
        StringBuilder out = new StringBuilder();
        long count = 0;
        for (String p : postingList) {
            out.append(p);
            out.append(";");
            count += Long.parseLong(p.substring(p.indexOf(",") + 1, p.indexOf(">")));
        }
        out.append("<total," + count + ">.");
        if (count > 0) {
            context.write(CurrentItem, new Text(out.toString()));
        }
    }
}
With that, everything is written essentially the book's way. The book does not give the main method for this part, so compared with the main methods it gives earlier, ours adds something new: since we extended the Combiner and the Partitioner, they must be set explicitly in main(), otherwise the two extended classes are never used. The code is as follows:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class InvertedIndex {
    public static void main(String[] args) throws IOException {
        System.out.println("input = " + args[0] + ", output = " + args[1]);
        Configuration conf = new Configuration();
        Job job = new Job(conf, "inverted");
        job.setJarByClass(InvertedIndex.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setMapperClass(InvertedIndexMapper.class);
        // The extended Combiner and Partitioner must be registered explicitly,
        // or the framework will quietly not use them.
        job.setCombinerClass(InvertedCombiner.class);
        job.setReducerClass(InvertedIndexReducer.class);
        job.setPartitionerClass(NewPartitioner.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        try {
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
All right, time to run it. At run time, the following exception appeared:
java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.IntWritable
at NewPartitioner.getPartition(NewPartitioner.java:1)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:691)
at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
at InvertedIndexMapper.map(InvertedIndexMapper.java:32)
at InvertedIndexMapper.map(InvertedIndexMapper.java:1)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Roughly, it says a Text cannot be cast to an IntWritable, and the cast fails inside the Partitioner. A moment's thought points back to the Mapper: write() is called with a Text value, while the Partitioner declares the value type as IntWritable. So change the write in the Mapper to this:
context.write(word, new IntWritable(1));
At this point there is still a problem; the exception is now:
java.io.IOException: Type mismatch in value from map: expected org.apache.hadoop.io.Text, recieved org.apache.hadoop.io.IntWritable
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1019)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:691)
at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
at InvertedIndexMapper.map(InvertedIndexMapper.java:31)
at InvertedIndexMapper.map(InvertedIndexMapper.java:1)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Again a type mismatch, but reversed: Text was expected and IntWritable was received. The cause is that the map output types were never declared: by default the framework assumes they match the job's final output types, which we set to Text/Text. The map output key and value classes have to be declared explicitly, so add these two lines to the main method (next to the other setOutput*Class calls):
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
Run it again and it should work, producing the following result:
bird <file3.txt,1>;<total,1>.
blue <file2.txt,1>;<total,1>.
bull <file1.txt,1>;<total,1>.
cat <file1.txt,1>;<total,1>.
fish <file1.txt,2>;<file2.txt,1>;<total,3>.
green <file1.txt,1>;<total,1>.
one <file1.txt,1>;<file3.txt,1>;<total,2>.
red <file2.txt,1>;<file3.txt,1>;<total,2>.
three <file1.txt,1>;<total,1>.
tiger <file2.txt,1>;<total,1>.
two <file1.txt,1>;<total,1>.
Of all of this, the most puzzling part is probably that Partitioner exception. I have also seen the Mapper defined like this:
public class InvertedIndexMapper extends Mapper<Text, Text, Text, IntWritable>
with Text rather than the book's Object as the first type parameter. Running that produces the following exception:
java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text
To remove it, change the first type parameter to LongWritable, or back to Object. With the default TextInputFormat, the key actually delivered to map() is a LongWritable (the byte offset of the line), which is why Object works as well: LongWritable is simply the runtime subtype of Object here. If you really want Text as the first type parameter, you have to write your own input format; see the article linked at the beginning for one way to do that. The final Mapper looks like this:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class InvertedIndexMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        FileSplit fileSplit = (FileSplit) context.getInputSplit();
        String fileName = fileSplit.getPath().getName();
        String line = value.toString().toLowerCase();
        StringTokenizer itr = new StringTokenizer(line);
        while (itr.hasMoreTokens()) {
            // Composite key term#filename; the value is now an IntWritable.
            Text word = new Text();
            word.set(itr.nextToken() + "#" + fileName);
            context.write(word, new IntWritable(1));
        }
    }
}
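As an aside on the Text-key variant: if you want Text input keys without writing a full custom InputFormat, one lighter option (an assumption on my part that it is available in your Hadoop version, and not what the linked article does) is KeyValueTextInputFormat, which splits each input line at the first tab into a Text key and a Text value:

// Assumption: org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat
// exists in your Hadoop release. It delivers Text keys (the part of the line
// before the first tab) instead of LongWritable byte offsets, so a
// Mapper<Text, Text, ...> signature would then match at run time. Note that
// our tab-less test files would arrive with the whole line as the key and an
// empty value, so the map body would need adjusting; this only illustrates
// that the generic parameters must match what the InputFormat really emits.
job.setInputFormatClass(
        org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat.class);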
That concludes the lab; the above are the problems I ran into. In hindsight they are all small pitfalls, but they still cost a fair amount of time. As for computing overall word frequencies, that is easy to build on top of the code above, since each output line already ends with the term's corpus-wide count; a small sketch follows.
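For example, a minimal post-processing sketch (purely illustrative; it assumes the default TextOutputFormat tab between key and value, and the posting-list format produced above):

public class TotalExtractor {
    public static void main(String[] args) {
        // One line of the job output, in the format shown in the results above.
        String lineOut = "fish\t<file1.txt,2>;<file2.txt,1>;<total,3>.";
        String term = lineOut.split("\t")[0];
        // The corpus-wide count sits in the trailing "<total,N>." field.
        String totalPart = lineOut.substring(lineOut.indexOf("<total,") + 7);
        long total = Long.parseLong(totalPart.substring(0, totalPart.indexOf(">")));
        System.out.println(term + " occurs " + total + " times in the corpus");
    }
}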
Reference: Professor Huang Yihua, Nanjing University, 《深入理解大数据——大数据处理与编程实践》 (Understanding Big Data: Big Data Processing and Programming Practice).