Preface
With MapReduce you never split the input yourself; the framework has already done that step. You only write the processing step, the Mapper, without computing how the input is divided (the framework's input splitting handles that), and the aggregation step, the Reducer, without designing how intermediate results are grouped (that is the shuffle's job). The steps and classes mentioned here will be dissected one by one in later articles; this post is only a first attempt.
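The shuffle grouping described above can be sketched in plain Java (no Hadoop involved; ShuffleSketch and its method are hypothetical names for illustration): given the (word, 1) pairs a Mapper emits, shuffle collects all values under the same key before any Reducer sees them.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical illustration class, not part of Hadoop.
public class ShuffleSketch {
    // Group (key, value) pairs by key, as the shuffle phase does between map and reduce.
    static Map<String, List<Long>> shuffle(List<Map.Entry<String, Long>> mapOutput) {
        Map<String, List<Long>> grouped = new TreeMap<>(); // shuffle also sorts keys
        for (Map.Entry<String, Long> e : mapOutput) {
            grouped.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        }
        return grouped;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Long>> mapOutput = List.of(
                Map.entry("hello", 1L), Map.entry("world", 1L), Map.entry("hello", 1L));
        System.out.println(shuffle(mapOutput)); // prints {hello=[1, 1], world=[1]}
    }
}
```

The Reducer then receives each key once, together with the full list of its values.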
Setup
To run the program from inside IDEA, a Hadoop environment must first be set up on Windows:
Extract the Hadoop archive as administrator (if that fails, extract it onto the C: drive instead). Two native support files are also needed: winutils.exe goes into Hadoop's bin directory, and hadoop.dll goes into C:\Windows\System32.
Writing the code
1. Mapper
Extend the Mapper class and override the map method, splitting each input line on the space character " ".
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // key is the byte offset of the line; value is the line itself
        String[] words = value.toString().split(" ");
        for (String word : words) {
            context.write(new Text(word), new LongWritable(1)); // emit (word, 1)
        }
    }
}
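To see what this map step produces without starting a job, the same split-and-emit logic can be mirrored in plain Java (MapStepDemo is a hypothetical name for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical illustration class mirroring WordMapper.map, without Hadoop types.
public class MapStepDemo {
    // Split a line on spaces and emit (word, 1) for every word, duplicates included.
    static List<Map.Entry<String, Long>> mapLine(String line) {
        List<Map.Entry<String, Long>> out = new ArrayList<>();
        for (String word : line.split(" ")) {
            out.add(Map.entry(word, 1L));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(mapLine("hello world hello"));
        // prints [hello=1, world=1, hello=1]
    }
}
```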
2. Reducer
Extend the Reducer class and override the reduce method.
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordReduce extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
        long count = 0; // long, to match LongWritable and avoid overflow
        for (LongWritable lw : values) {
            count += lw.get();
        }
        context.write(key, new LongWritable(count)); // emit (word, total)
    }
}
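The reduce logic is just a sum over the grouped values; mirrored in plain Java (ReduceStepDemo is a hypothetical name for illustration):

```java
import java.util.List;

// Hypothetical illustration class mirroring WordReduce.reduce, without Hadoop types.
public class ReduceStepDemo {
    // Sum the counts collected for one word.
    static long sum(Iterable<Long> values) {
        long count = 0;
        for (long v : values) {
            count += v;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of(1L, 1L, 1L))); // prints 3
    }
}
```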
3. Job
Write the main method; running it produces the output files.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordJob {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(WordJob.class); // tell the framework which class drives the job

        FileInputFormat.setInputPaths(job, new Path("file:///d:/temp/a.txt"));  // input file
        FileOutputFormat.setOutputPath(job, new Path("file:///d:/temp/write")); // output directory, must not exist yet

        job.setMapperClass(WordMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        job.setReducerClass(WordReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // set the number of reduce tasks (one output file per reducer)
        // job.setNumReduceTasks(2);

        job.waitForCompletion(true);
    }
}
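After a successful run the counts land in the output directory (here d:/temp/write) as tab-separated word/count lines, in files named part-r-00000 and so on, one per reducer. The whole pipeline of the three steps above can also be simulated end to end in plain Java (WordCountLocalSketch is a hypothetical name for illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical illustration class; simulates map, shuffle, and reduce without Hadoop.
public class WordCountLocalSketch {
    // Map each line to (word, 1), then group and sum per word in one pass.
    static Map<String, Long> wordCount(List<String> lines) {
        Map<String, Long> counts = new TreeMap<>(); // sorted keys, like reducer output
        for (String line : lines) {
            for (String word : line.split(" ")) {  // map step
                counts.merge(word, 1L, Long::sum); // shuffle + reduce collapsed into merge
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("hello world", "hello hadoop")));
        // prints {hadoop=1, hello=2, world=1}
    }
}
```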