Hadoop: counting word occurrences and their locations across files

1. Problem Description

        Given several input files, output, for every word that appears, the names of the files containing it, the number of times it occurs in each document, and its exact positions.

For example, the input files are as follows:

1.txt:

it is what it is

what is it

it is a banana

2.txt:

i is what he is

haoop is it

it he a banana

3.txt:

hadoop is what hello is

what is it

hello is a he

 

The expected output is as follows:

a     {1:1;(3,3) };{3:1;(3,3) };{2:1;(3,3) };

banana   {2:1;(3,4) };{1:1;(3,4) };

hadoop  {3:1;(1,1) };

haoop    {2:1;(2,1) };

he   {2:2;(1,4) (3,2) };{3:1;(3,4) };

hello       {3:2;(1,4) (3,1) };

i      {2:1;(1,1) };

is     {2:3;(1,2) (1,5) (2,2) };{3:4;(1,2) (1,5) (2,2) (3,2) };{1:4;(1,2) (1,5) (2,2) (3,2) };

it     {1:4;(1,1) (1,4) (2,3) (3,1) };{2:2;(2,3) (3,1) };{3:1;(2,3) };

what       {3:2;(1,3) (2,1) };{2:1;(1,3) };{1:2;(1,3) (2,1) };

For example, the output line "he  {2:2;(1,4) (3,2) };{3:1;(3,4) };" means the word "he" occurs twice in 2.txt, at row 1 column 4 and row 3 column 2, and once in 3.txt, at row 3 column 4.



2. Implementation Approach

       First, the name of the current input file is obtained from the InputSplit returned by context.getInputSplit(); it is recorded once when each mapper task starts (in setup()). The mapper, combiner, and reducer are all set up with Text input and output types.
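
As a minimal sketch of that step (the complete code in section 3 contains the same logic inside TokenizerMapper):

    // Runs once per split; with small files, each file is its own split.
    public void setup(Context context) throws IOException, InterruptedException {
        String name = ((FileSplit) context.getInputSplit()).getPath().getName(); // e.g. "1.txt"
        fileName = name.substring(0, name.lastIndexOf("."));                     // -> "1"
    }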

      The mapper's main job is to capture the file name and to emit a count of 1 for every word it reads, which makes it easy for the later stages to aggregate identical words. map() receives each file line by line; the variable a records the row (line number) the current word is on, and b records its column (position within the line). The key is then built as a string containing the word, the file name, row a, and column b, and the value is the word's count (always 1 at this point).
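
For example, for the first line of 1.txt ("it is what it is") the mapper emits the following (key, value) pairs:

    (it:1:1:1, 1)
    (is:1:1:2, 1)
    (what:1:1:3, 1)
    (it:1:1:4, 1)
    (is:1:1:5, 1)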

The (key, value) pairs are then handed, via Hadoop's Text interface, to the combine() function.

         The combine() function processes the incoming (key, value) pairs: it splits the string key with split(), sums the counts for each key into sum, and then rewrites the key into the format the problem asks for. The word itself is kept as the output key, while the rest is reformatted as file:count(row,col) and stored as the value; the resulting (key, value) pairs are passed to reduce() as Text.
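
Continuing the example, each composite key reaches the combiner with a list of 1s; the combiner sums them and moves everything except the word into the value:

    input : key = it:1:1:1,  values = [1]
    output: key = it,        value = 1:1(1,1)

(Note that Hadoop treats a combiner as an optional optimization and may run it zero or more times, so relying on it to reshape the records is a simplification; it works for this small local job.)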

       The reduce stage brings each word together with the file positions where it occurs and writes them to the output file in the format: word { file:count(row,col); file:count(row,col); ... }.
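
For the word he, for instance, the reducer receives the values 2:1(1,4), 2:1(3,2) and 3:1(3,4) and writes:

    he {2:1(1,4);2:1(3,2);3:1(3,4);}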

       main() sets the input and output paths, clears any existing output so the job can be rerun, and configures the mapper, combiner, and reducer.
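
A typical invocation looks like the following, assuming the class has been packaged into a jar named wordcount.jar (the jar name and the input/output paths are placeholders):

    hadoop jar wordcount.jar wordcount.WordCount input output

Because main() sets fs.default.name to file:/// and mapred.job.tracker to local, the job runs in local mode against the local file system.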



3. Complete Code

     

package wordcount;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class WordCount {
	
  public static class TokenizerMapper extends Mapper<Object, Text, Text, Text> {

    // a = current row (line number) within the file being read; reset per split in setup().
    private int a = 1;
    private Text valueInfo = new Text();
    private Text keyInfo = new Text();
    private String fileName = "";

    @Override
    public void setup(Context context) throws IOException, InterruptedException {
      // Each small input file becomes its own split, so this runs once per file.
      InputSplit inputSplit = context.getInputSplit();
      String fileFormat = ((FileSplit) inputSplit).getPath().getName();
      fileName = fileFormat.substring(0, fileFormat.lastIndexOf("."));  // "1.txt" -> "1"
      a = 1;
    }
      
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
      int b = 1;  // column: position of the word within the current line
      StringTokenizer stk = new StringTokenizer(value.toString());
      while (stk.hasMoreTokens()) {
        // Composite key: word:fileName:row:column, value: one occurrence.
        keyInfo.set(stk.nextToken() + ":" + fileName + ":" + a + ":" + b);
        valueInfo.set("1");
        context.write(keyInfo, valueInfo);
        b++;
      }
      a++;  // done with this line, advance the row counter
    }
  }
  
  // Helper kept from an earlier draft; it is not used by the MapReduce job below.
  public int cNum = 1;
  private void setLocationFormat(String docID, String val) {
    StringBuilder res = new StringBuilder();
    for (int i = 0; i < val.length(); i++)
      res.append("{").append(docID).append(":").append(i).append("};");
    cNum++;
  }
  
  public static class InvertedIndexCombiner extends Reducer<Text,Text,Text,Text> {

    private Text info = new Text();

    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
      // Sum the occurrence counts for this word:file:row:col key.
      int sum = 0;
      for (Text val : values) {
        sum += Integer.parseInt(val.toString());
      }
      // str[0] = word, str[1] = file, str[2] = row, str[3] = column.
      String[] str = key.toString().split(":");
      key.set(str[0]);
      info.set(str[1] + ":" + sum + "(" + str[2] + "," + str[3] + ")");
      context.write(key, info);
    }
  }
  
  public static class InvertedIndexReduce extends Reducer<Text,Text,Text,Text> {

	  private Text result = new Text();

	  public void reduce(Text key, Iterable<Text> values, Context context)
	  		throws IOException, InterruptedException {
		  // Concatenate every "file:count(row,col)" entry emitted for this word.
		  StringBuilder fileList = new StringBuilder();
		  for (Text value : values) {
			  fileList.append(value.toString()).append(";");
		  }
		  result.set("{" + fileList + "}");
		  context.write(key, result);
	  }
  }
  
  // Another helper from an earlier draft; the job itself does not call it.
  public static void wordDeal(String wordOfDoc, String docID, TreeMap<String, TreeMap<String, Integer>> tmp) {
	  wordOfDoc = wordOfDoc.toLowerCase();
	  if (!tmp.containsKey(wordOfDoc)) {
		  // First time this word is seen: start its per-document counter at 1.
		  TreeMap<String, Integer> tmpST = new TreeMap<String, Integer>();
		  tmpST.put(docID, 1);
		  tmp.put(wordOfDoc, tmpST);
	  } else {
		  // Word already known: start the counter for this document at 1,
		  // or increment the existing count.
		  TreeMap<String, Integer> tmpST = tmp.get(wordOfDoc);
		  Integer count = tmpST.get(docID);
		  count = (count == null) ? 1 : count + 1;
		  tmpST.put(docID, count);
		  tmp.put(wordOfDoc, tmpST);
	  }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    //conf.set("fs.default.name", "hdfs://master:9000");
    //conf.set("mapred.job.tracker", "hdfs://master:9001");
    conf.set("fs.default.name", "file:///");
    conf.set("mapred.job.tracker", "local");
    
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    
    // Delete any previous output directory so the job can be rerun.
    FileSystem fs = FileSystem.get(conf);
    fs.delete(new Path(otherArgs[1]), true);
    
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);

    job.setMapperClass(TokenizerMapper.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);
    
    job.setCombinerClass(InvertedIndexCombiner.class);
    
    job.setReducerClass(InvertedIndexReduce.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
   
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}



4. Output

a {1:1(3,3);2:1(3,3);3:1(3,3);}
banana {1:1(3,4);2:1(3,4);}
hadoop {3:1(1,1);}
haoop {2:1(2,1);}
he {2:1(1,4);2:1(3,2);3:1(3,4);}
hello {3:1(1,4);3:1(3,1);}
i {2:1(1,1);}
is {2:1(1,2);2:1(1,5);2:1(2,2);3:1(1,2);3:1(1,5);3:1(2,2);3:1(3,2);1:1(1,2);1:1(1,5);1:1(2,2);1:1(3,2);}
it {1:1(1,1);1:1(1,4);1:1(2,3);1:1(3,1);2:1(2,3);2:1(3,1);3:1(2,3);}
what {3:1(1,3);3:1(2,1);2:1(1,3);1:1(1,3);1:1(2,1);}

