Details Determine Success: A MapReduce Inverted Index in Practice

I recently stumbled on a blog post describing how to build an inverted index with MapReduce, so let's take that as the hands-on task for this article.

1. Task Description


There are three files on HDFS (1.txt, 2.txt, and 3.txt in the examples below); the original post showed their contents, and the expected result file, in a figure that is not reproduced here.
An inverted index, also called a reverse index, postings file, or inverted file, is an index structure that stores, for each word, a mapping to the word's locations in a document or a set of documents under full-text search. It is the most commonly used data structure in document retrieval systems: given a word, the index immediately yields the list of documents containing it. An inverted index consists mainly of two parts, the term dictionary and the inverted (postings) file.
What sets this task apart from the classic inverted-index exercise is that we also record the word's frequency within each file.
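Before diving into MapReduce, it may help to see the target structure in plain Java. The sketch below is my illustration only; the class name InMemoryIndexSketch and the sample file contents are assumptions, chosen to be consistent with the results shown in section 4. It builds the same word -> (file -> frequency) mapping in memory.

package indexinverted;

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal in-memory illustration of the target structure:
// word -> (file name -> frequency). The MapReduce job below builds
// this same mapping, just distributed across mappers and reducers.
public class InMemoryIndexSketch {
    public static void main(String[] args) {
        Map<String, Map<String, Integer>> index = new LinkedHashMap<String, Map<String, Integer>>();
        // Hypothetical file contents, consistent with the counts in section 4.
        add(index, "1.txt", "MapReduce is simple");
        add(index, "2.txt", "MapReduce is powerful is simple");
        add(index, "3.txt", "Hello MapReduce bye MapReduce");
        // e.g. index.get("is") -> {1.txt=1, 2.txt=2}
        System.out.println(index.get("is"));
    }

    private static void add(Map<String, Map<String, Integer>> index, String file, String line) {
        for (String word : line.split(" ")) {
            Map<String, Integer> postings = index.get(word);
            if (postings == null) {
                postings = new LinkedHashMap<String, Integer>();
                index.put(word, postings);
            }
            Integer n = postings.get(file);
            postings.put(file, n == null ? 1 : n + 1);
        }
    }
}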

2. Implementation Approach

  • First, notice that the result contains file names. There are two ways to get them: (1) write a custom InputFormat whose custom RecordReader extracts the Path, and hence the file name, directly from the InputSplit; or (2) in the Mapper, fetch the split from the context and read the file name from it. This task uses the second approach (a sketch of the first appears right after this list).
  • In the mapper, wrap the file name and the word in a custom key; the value is an IntWritable. The map function directly emits an IntWritable of 1 for each word occurrence.
  • Control how keys are grouped on their way into the reduce function: all keys with the same word must enter the same reduce call, which requires a custom GroupingComparator.
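For completeness, here is a minimal sketch of approach 1, which the rest of this post does not use. The names FileNameInputFormat and FileNameRecordReader are mine, not from the original post; the reader wraps Hadoop's stock LineRecordReader and replaces the usual byte-offset key with the file name taken from the FileSplit.

package indexinverted;

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class FileNameInputFormat extends FileInputFormat<Text, Text> {

    @Override
    public RecordReader<Text, Text> createRecordReader(InputSplit split, TaskAttemptContext context) {
        return new FileNameRecordReader();
    }

    public static class FileNameRecordReader extends RecordReader<Text, Text> {
        private final LineRecordReader lineReader = new LineRecordReader();
        private Text fileName;

        @Override
        public void initialize(InputSplit split, TaskAttemptContext context)
                throws IOException, InterruptedException {
            // The split handed to a file-based reader is a FileSplit,
            // so the Path, and hence the file name, is directly available.
            fileName = new Text(((FileSplit) split).getPath().getName());
            lineReader.initialize(split, context);
        }

        @Override
        public boolean nextKeyValue() throws IOException, InterruptedException {
            return lineReader.nextKeyValue();
        }

        @Override
        public Text getCurrentKey() {
            // The key is the file name instead of the usual byte offset.
            return fileName;
        }

        @Override
        public Text getCurrentValue() {
            return lineReader.getCurrentValue();
        }

        @Override
        public float getProgress() throws IOException {
            return lineReader.getProgress();
        }

        @Override
        public void close() throws IOException {
            lineReader.close();
        }
    }
}

With this InputFormat the mapper would receive the file name as its input key and could skip the setup() trick used below.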

3. Implementation Code

The custom key, WordKey. Note: a pitfall has been deliberately planted in this class.


package indexinverted;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

public class WordKey implements WritableComparable<WordKey> {

    private String fileName;
    private String word;

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(fileName);
        out.writeUTF(word);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.fileName = in.readUTF();
        this.word = in.readUTF();
    }

    // Sort order: by file name first, then by word.
    @Override
    public int compareTo(WordKey key) {
        int r = fileName.compareTo(key.fileName);
        if (r == 0)
            r = word.compareTo(key.word);
        return r;
    }

    public String getFileName() {
        return fileName;
    }

    public void setFileName(String fileName) {
        this.fileName = fileName;
    }

    public String getWord() {
        return word;
    }

    public void setWord(String word) {
        this.word = word;
    }
}
The Mapper, the Reducer, and the IndexInvertedGroupingComparator (I like to make small classes like these inner classes of the job class; it keeps the code compact).
The output formatting in the reduce function is a bit fiddly; don't pay too much attention to it.


package indexinverted;

import java.io.IOException;
import java.util.LinkedHashMap;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.apache.log4j.Logger;

public class MyIndexInvertedJob extends Configured implements Tool {

    public static class IndexInvertedMapper extends Mapper<LongWritable, Text, WordKey, IntWritable> {
        private WordKey newKey = new WordKey();
        private IntWritable ONE = new IntWritable(1);
        private String fileName;

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (fileName, word) -> 1 for every word on the line.
            newKey.setFileName(fileName);
            String[] words = value.toString().split(" ");
            for (String w : words) {
                newKey.setWord(w);
                context.write(newKey, ONE);
            }
        }

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            // The mapper's input split is a FileSplit, so the file name is one call away.
            FileSplit inputSplit = (FileSplit) context.getInputSplit();
            fileName = inputSplit.getPath().getName();
        }
    }

    public static class IndexInvertedReducer extends Reducer<WordKey, IntWritable, Text, Text> {
        private Text outputKey = new Text();

        @Override
        protected void reduce(WordKey key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            outputKey.set(key.getWord());
            // Hadoop updates the fields of `key` in place as the value iteration
            // advances, so key.getFileName() always refers to the file of the
            // current value even though the whole group shares one word.
            LinkedHashMap<String, Integer> map = new LinkedHashMap<String, Integer>();
            for (IntWritable v : values) {
                if (map.containsKey(key.getFileName())) {
                    map.put(key.getFileName(), map.get(key.getFileName()) + v.get());
                } else {
                    map.put(key.getFileName(), v.get());
                }
            }
            // Render the posting list as {(file,count),(file,count),...}.
            StringBuilder sb = new StringBuilder();
            sb.append("{");
            for (String k : map.keySet()) {
                sb.append("(").append(k).append(",").append(map.get(k)).append(")").append(",");
            }
            sb.deleteCharAt(sb.length() - 1).append("}");
            context.write(outputKey, new Text(sb.toString()));
        }
    }

    public static class IndexInvertedGroupingComparator extends WritableComparator {
        Logger log = Logger.getLogger(getClass());

        public IndexInvertedGroupingComparator() {
            super(WordKey.class, true);
        }

        // Two keys belong to the same reduce group when their words match,
        // regardless of which file they came from.
        @Override
        public int compare(WritableComparable a, WritableComparable b) {
            WordKey key1 = (WordKey) a;
            WordKey key2 = (WordKey) b;
            log.info("==============key1.getWord().compareTo(key2.getWord()):" + key1.getWord().compareTo(key2.getWord()));
            return key1.getWord().compareTo(key2.getWord());
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(getConf(), "IndexInvertedJob");
        job.setJarByClass(getClass());
        Configuration conf = job.getConfiguration();

        Path in = new Path("myinvertedindex/");
        Path out = new Path("myinvertedindex/output");
        // Remove any previous output so the job can be rerun.
        FileSystem.get(conf).delete(out, true);
        FileInputFormat.setInputPaths(job, in);
        FileOutputFormat.setOutputPath(job, out);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        job.setMapperClass(IndexInvertedMapper.class);
        job.setMapOutputKeyClass(WordKey.class);
        job.setMapOutputValueClass(IntWritable.class);

        job.setReducerClass(IndexInvertedReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        job.setGroupingComparatorClass(IndexInvertedGroupingComparator.class);

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) {
        int r = 0;
        try {
            r = ToolRunner.run(new Configuration(), new MyIndexInvertedJob(), args);
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.exit(r);
    }
}
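One caveat worth flagging (my observation, not from the original post): WordKey does not override hashCode(), so if the job ran with more than one reduce task, the default HashPartitioner could scatter records carrying the same word across different reducers, and the grouping would break again even with a correct sort order. With the single default reduce task this example uses, the problem never surfaces. A minimal sketch of a word-based partitioner (WordPartitioner is a hypothetical name):

package indexinverted;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Partitioner;

// Route records by word only, so every (file, word) key for a given word
// lands on the same reducer regardless of file name.
public class WordPartitioner extends Partitioner<WordKey, IntWritable> {
    @Override
    public int getPartition(WordKey key, IntWritable value, int numPartitions) {
        // Mask the sign bit so the modulo result is never negative.
        return (key.getWord().hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

It would be registered in run() with job.setPartitionerClass(WordPartitioner.class).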

4. Checking the Results

hadoop dfs -cat myinvertedindex/output/part-r-00000
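(Aside, my note: the hadoop dfs form is deprecated on Hadoop 2.x and later; hdfs dfs -cat myinvertedindex/output/part-r-00000 is the modern equivalent.)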


MapReduce {(1.txt,1)}
is {(1.txt,1)}
simple {(1.txt,1)}
MapReduce {(2.txt,1)}
is {(2.txt,2)}
powerful {(2.txt,1)}
simple {(2.txt,1)}
Hello {(3.txt,1)}
MapReduce {(3.txt,2)}
bye {(3.txt,1)}
Inspecting the output reveals a problem: occurrences of the same word have not been merged into a single record. What could be the reason?

5. Filling In the Pit

So why were the same words not merged together?
On what basis does the GroupingComparator operate? Does configuring it so that equal words enter the same reduce call guarantee that every record with that word actually does? No! The grouping comparator only compares adjacent keys in the sorted stream to decide whether the current group continues. For the result to be correct, the key's sort order must be consistent with the GroupingComparator's order.
The trouble is that WordKey sorts by file name first and only then by word, so adjacent keys share a file name, not a word. Sorted that way, the keys arrive as (1.txt, MapReduce), (1.txt, is), (1.txt, simple), (2.txt, MapReduce), ...: the MapReduce keys from 1.txt and 2.txt are never adjacent, so they fall into separate reduce calls.
The fix is to change WordKey's compareTo method. The repaired code follows:


package indexinverted;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

public class WordKey implements WritableComparable<WordKey> {

    private String fileName;
    private String word;

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(fileName);
        out.writeUTF(word);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.fileName = in.readUTF();
        this.word = in.readUTF();
    }

    // Sort by word first, so that all keys with the same word are adjacent
    // and the grouping comparator can fold them into a single reduce call.
    // Ties are broken by file name, which keeps each group ordered by file.
    @Override
    public int compareTo(WordKey key) {
        int r = word.compareTo(key.word);
        if (r == 0)
            r = fileName.compareTo(key.fileName);
        return r;
    }

    public String getFileName() {
        return fileName;
    }

    public void setFileName(String fileName) {
        this.fileName = fileName;
    }

    public String getWord() {
        return word;
    }

    public void setWord(String word) {
        this.word = word;
    }
}
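As a quick local sanity check (my addition; WordKeySortCheck is a hypothetical helper, not part of the original post), sorting a few keys shows that the fixed compareTo now places keys with the same word next to each other, which is exactly the adjacency the grouping comparator depends on:

package indexinverted;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class WordKeySortCheck {
    public static void main(String[] args) {
        List<WordKey> keys = new ArrayList<WordKey>();
        keys.add(make("1.txt", "simple"));
        keys.add(make("2.txt", "MapReduce"));
        keys.add(make("1.txt", "MapReduce"));
        Collections.sort(keys);
        // Prints MapReduce/1.txt, MapReduce/2.txt, simple/1.txt:
        // the two MapReduce keys are now adjacent.
        for (WordKey k : keys) {
            System.out.println(k.getWord() + "/" + k.getFileName());
        }
    }

    private static WordKey make(String fileName, String word) {
        WordKey k = new WordKey();
        k.setFileName(fileName);
        k.setWord(word);
        return k;
    }
}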


The result of rerunning the job:


Hello {(3.txt,1)}
MapReduce {(1.txt,1),(2.txt,1),(3.txt,2)}
bye {(3.txt,1)}
is {(1.txt,1),(2.txt,2)}
powerful {(2.txt,1)}
simple {(1.txt,1),(2.txt,1)}

Source: ITPUB blog, http://blog.itpub.net/30066956/viewspace-2120238/
