Changes in the Hadoop API

Copyright notice: this is the blogger's original article; reproduction without permission is prohibited. https://blog.csdn.net/Wild_Elegance_k/article/details/47450997
Before Hadoop 0.20, a MapReduce program defined its computation against the Mapper and Reducer *interfaces* in org.apache.hadoop.mapred: static inner classes implemented those two interfaces and overrode map() and reduce() respectively to do the actual work.
Below is the classic WordCount example (shown here in its current form, which already uses the new org.apache.hadoop.mapreduce classes):
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable>{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text,IntWritable,Text,IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
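To see what the framework does between map() and reduce(), here is a minimal plain-Java simulation of the word-count data flow. It has no Hadoop dependency and is only a sketch: the TreeMap grouping stands in for the framework's shuffle/sort phase, and merging counts per key stands in for the reducer.

```java
import java.util.Arrays;
import java.util.List;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class WordCountSimulation {

    // Simulates map -> shuffle/sort -> reduce for word count:
    // each line is tokenized into (word, 1) pairs ("map"), the TreeMap
    // groups and sorts them by key ("shuffle"), and merge() sums the
    // counts per word ("reduce").
    public static TreeMap<String, Integer> wordCount(List<String> lines) {
        TreeMap<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            StringTokenizer itr = new StringTokenizer(line);
            while (itr.hasMoreTokens()) {
                counts.merge(itr.nextToken(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("hello world", "hello hadoop");
        System.out.println(wordCount(input)); // {hadoop=1, hello=2, world=1}
    }
}
```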
From Hadoop 0.20 onward, Mapper and Reducer are abstract classes in org.apache.hadoop.mapreduce, replacing the Mapper and Reducer interfaces of the original API (org.apache.hadoop.mapred.Mapper and org.apache.hadoop.mapred.Reducer). The new abstract classes also make the old MapReduceBase helper class unnecessary, and it has been deprecated.
So what is new in this API? Arguably the most useful change is the introduction of the Context object. Its most direct effect is to replace the OutputCollector and Reporter objects that the old map() and reduce() methods used. The benefit is that it unifies how application code (the code the programmer writes) communicates with the MapReduce framework, and it fixes the Mapper and Reducer APIs: new features can be added without changing the basic method signatures, because they are exposed only as new methods on the Context object. User code that does not call those methods never notices them, so existing programs keep compiling and running against newer versions of the framework.
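The idea can be sketched outside Hadoop. In the hypothetical mini-framework below (all names are invented for illustration), the user's map() method takes only a context parameter, so the framework can grow the context with new capabilities without ever changing the method signature that user code implements:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical mini-framework illustrating the context-object pattern.
class DemoContext {
    final List<String> output = new ArrayList<>();

    // Original capability: emit a record, like Context.write().
    void write(String record) { output.add(record); }

    // A capability added in a later framework version. Existing user
    // code compiles unchanged, because map()'s signature never
    // mentioned this method in the first place.
    void setStatus(String status) { /* report progress, like Reporter did */ }
}

interface DemoMapper {
    // The signature is fixed: everything flows through the context,
    // so extending DemoContext never breaks implementations of map().
    void map(String value, DemoContext context);
}

public class ContextPatternDemo {
    public static void main(String[] args) {
        DemoMapper upper = (value, ctx) -> ctx.write(value.toUpperCase());
        DemoContext ctx = new DemoContext();
        upper.map("hadoop", ctx);
        System.out.println(ctx.output); // [HADOOP]
    }
}
```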
The new map() and reduce() methods also differ in a couple of smaller ways. Their signatures now declare InterruptedException in addition to IOException. And reduce() now receives its list of values as an Iterable rather than an Iterator, which lets Java's for-each syntax drive the iteration.
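The Iterator-to-Iterable change matters because Java's for-each loop only accepts an Iterable (or an array); an Iterator must be drained by hand. A plain-Java comparison of the two reducer styles:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class IterableVsIterator {

    // Old-API style: values arrive as an Iterator and must be
    // walked manually with hasNext()/next().
    static int sumIterator(Iterator<Integer> values) {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next();
        }
        return sum;
    }

    // New-API style: an Iterable can be consumed directly
    // with the for-each loop.
    static int sumIterable(Iterable<Integer> values) {
        int sum = 0;
        for (int val : values) {
            sum += val;
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> counts = Arrays.asList(1, 2, 3);
        System.out.println(sumIterator(counts.iterator())); // 6
        System.out.println(sumIterable(counts));            // 6
    }
}
```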
The new API also replaces JobConf and JobClient. Their responsibilities are taken over by Configuration (which in the old API was the parent class of JobConf) and a new class, Job.

So how is a MapReduce job written against Hadoop 0.20? The code speaks for itself:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;


public class MyJob extends Configured implements Tool {

	public static class MapClass extends Mapper<LongWritable, Text, Text, Text> {
		@Override
		public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
			// Each input line is expected to be a "citing,cited" pair;
			// emit (cited, citing) so the reducer can collect every
			// patent that cites a given one.
			String[] citation = value.toString().split(",");
			context.write(new Text(citation[1]), new Text(citation[0]));
		}
	}
	
	public static class Reduce extends Reducer<Text, Text, Text, Text> {
		@Override
		public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
			// Join all values for this key into one comma-separated string;
			// StringBuilder avoids repeated string copies in the loop.
			StringBuilder csv = new StringBuilder();
			for (Text val : values) {
				if (csv.length() > 0) csv.append(',');
				csv.append(val.toString());
			}
			context.write(key, new Text(csv.toString()));
		}
	}
	
	@Override
	public int run(String[] args) throws Exception {
		Configuration conf = getConf();
		
		Job job = Job.getInstance(conf, "MyJob");
		job.setJarByClass(MyJob.class);
		
		Path in = new Path(args[0]);
		Path out = new Path(args[1]);
		FileInputFormat.setInputPaths(job, in);
		FileOutputFormat.setOutputPath(job, out);
		
		job.setMapperClass(MapClass.class);
		job.setReducerClass(Reduce.class);
		
		/*
		 * 兼容的InputFormat类和OutputFormat类
		 */
		job.setInputFormatClass(TextInputFormat.class);
		job.setOutputFormatClass(TextOutputFormat.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(Text.class);
		
		// Return the job's status to ToolRunner instead of calling
		// System.exit() here, which would bypass the Tool contract
		// (and left the original "return 0" as dead code).
		return job.waitForCompletion(true) ? 0 : 1;
	}
	
	public static void main(String[] args) throws Exception{
		int res = ToolRunner.run(new Configuration(), new MyJob(), args);
		System.exit(res);
	}
	
}

These are the Hadoop 0.20 API changes I have found so far; if I discover more, they will appear in a future post.
        
