The map class
Extends the library's Mapper type, Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>. (In the old org.apache.hadoop.mapred API it is instead an interface whose method is void map(K1 key, V1 value, OutputCollector<K2, V2> output, Reporter reporter) throws IOException, i.e. four parameters.) You normally override the map method, and the framework calls it once per input record. Inside the method, the input key/value pair is processed and the result is sent to the context with context.write(keyOut, valueOut);
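The per-record map logic can be sketched in plain Java without the Hadoop runtime. This is a minimal simulation assuming the classic WordCount behavior (tokenize each line, emit a (word, 1) pair per token); MapSketch and its map method are hypothetical names for illustration, not Hadoop API.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map.Entry;
import java.util.StringTokenizer;

public class MapSketch {
    // Simulates WordCount's map(): for each input line, emit (word, 1) pairs.
    // Each out.add(...) stands in for a context.write(word, one) call.
    static List<Entry<String, Integer>> map(String line) {
        List<Entry<String, Integer>> out = new ArrayList<>();
        StringTokenizer itr = new StringTokenizer(line);
        while (itr.hasMoreTokens()) {
            out.add(new SimpleEntry<>(itr.nextToken(), 1));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(map("hello world hello"));
    }
}
```

In the real job the framework, not your code, collects these pairs and groups them by key before the reduce phase.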
The full method (the new API's default implementation, which passes each record through unchanged) is:
protected void map(KEYIN key, VALUEIN value, Context context) throws IOException, InterruptedException { context.write((KEYOUT) key, (VALUEOUT) value); }

The reduce class
Extends the Reducer class, and you normally override the reduce method. The slight difference is that its parameters are (key, value list, context): the framework groups the map output so each key arrives together with an Iterable of all its values. The result is returned with context.write(Text key, IntWritable value);

Then there is an entry point that configures and launches the two classes; you could call it the MapReduce driver:
Configuration conf = new Configuration(); obtains the configuration object;
String[] otherArgs = {"hdfs://localhost:9000/user/hadoop/input/", "hdfs://localhost:9000/user/hadoop/output/"}; new GenericOptionsParser(conf, otherArgs).getRemainingArgs(); obtains the input and output paths;
Job job = new Job(conf, "WordCount"); creates the job and sets the job name (newer versions prefer Job.getInstance(conf, "WordCount"));
job.setJarByClass(WordCount.class); tells Hadoop which jar to distribute, located by class;
job.setMapperClass(Map.class); sets the map implementation class;
job.setCombinerClass(IntSumReducer.class); sets the combiner (local merge) implementation class;
job.setReducerClass(reduce.class); sets the reduce implementation class;
job.setOutputKeyClass(Text.class); sets the class of the output key type;
job.setOutputValueClass(IntWritable.class); sets the class of the output value type;
FileInputFormat.addInputPath(job, new Path(otherArgs[0])); sets the file input path;
FileOutputFormat.setOutputPath(job, new Path(otherArgs[1])); sets the file output path;
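The reduce phase that the driver wires up can also be simulated in plain Java. The sketch below assumes IntSumReducer-style semantics (sum each key's value list); ReduceSketch and reduceAll are hypothetical names, and the grouped input stands in for what the shuffle phase would hand the reducer.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ReduceSketch {
    // Simulates reduce(): the framework hands each key its grouped value list;
    // we sum the list and "write" (key, sum), as IntSumReducer would.
    static Map<String, Integer> reduceAll(Map<String, List<Integer>> grouped) {
        Map<String, Integer> result = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            result.put(e.getKey(), sum); // stands in for context.write(key, sum)
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<Integer>> grouped = new TreeMap<>();
        grouped.put("hello", List.of(1, 1));
        grouped.put("world", List.of(1));
        System.out.println(reduceAll(grouped)); // prints {hello=2, world=1}
    }
}
```

When a combiner is configured, the same summing logic also runs on each mapper's local output before the shuffle, which shrinks the data sent over the network.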
There is also a minimal driver implementation class called MapReduceDriver; I think that is its name.