Personal study notes; all material comes from the Atguigu (尚硅谷) course.
Bilibili course link: (link not preserved)
MapReduce: Combiner Merging
1. Combiner Merging
- The Combiner is a component of an MR program besides the Mapper and Reducer;
- The Combiner's parent class is Reducer;
- The difference between a Combiner and a Reducer is where they run:
- the Combiner runs on the node of each individual MapTask;
- the Reducer receives the output of all Mappers globally;
- The purpose of the Combiner is to pre-aggregate the output of each MapTask locally, reducing the amount of data shipped over the network;
- A Combiner may only be applied when it does not change the final business logic (see the worked example after this list), and its output kv types must match the Reducer's input kv types;
- The Combiner runs at two points: first when the in-memory buffer spills to disk, merging records within each partition of the spill file; second when the spill files are merged. Because a spill starts once the buffer reaches 80% of its 100 MB capacity, each spill file is at most about 80 MB, so one MapTask typically produces several spill files. If the merge involves 3 or more spill files (the default threshold, configurable via mapreduce.map.combine.minspills), the Combiner is invoked again during the merge. Since each MapTask ultimately writes a single merged file to disk, the Combiner lightens the Reducer's load;
- A custom Combiner is written by extending Reducer and overriding its reduce method;
- Using a Combiner is optional; it runs only if you explicitly set one on the job.
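A worked example of the "must not change the final business logic" rule (my own illustration, not from the course): suppose MapTask 1 emits the values 1 and 2 for some key, and MapTask 2 emits 3.
- Sum: combining locally gives (1+2)=3 and 3, and the Reducer computes 3+3=6, the same as 1+2+3, so the result is unchanged.
- Average: combining locally gives avg(1,2)=1.5 and avg(3)=3, and the Reducer computes avg(1.5,3)=2.25, but the true answer is avg(1,2,3)=2.
So associative, order-insensitive aggregations (sum, count, max, min) are Combiner-safe; averaging is not, unless the Combiner ships (sum, count) pairs instead of averages.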
2. Combiner Merging in Practice
(1) Requirement: during the word count, pre-aggregate the output of each MapTask locally to reduce network traffic, i.e. apply the Combiner.
(2) Analysis:
Dataset link: (link not preserved)
Extraction code: sjl1
2.1 Approach 1
- Add a WordCountCombiner class that extends Reducer
- In WordCountCombiner:
- sum up the counts per word
- write out the aggregated result
- WordCountCombiner class
package com.atguigu.mapreduce.combiner;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
public class WordCountCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable outV = new IntWritable();
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Reducer<Text, IntWritable, Text, IntWritable>.Context context) throws IOException, InterruptedException {
        // Locally sum the counts for this key on the MapTask side
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        outV.set(sum);
        context.write(key, outV);
    }
}
- Mapper class
package com.atguigu.mapreduce.combiner;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
/**
 * KEYIN:    type of the map-stage input key — LongWritable (the byte offset of the line)
 * VALUEIN:  type of the map-stage input value — Text (the content of the line)
 * KEYOUT:   type of the map-stage output key — Text (the word)
 * VALUEOUT: type of the map-stage output value — IntWritable (the count for the word)
 */
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private Text outK = new Text();
    private IntWritable outV = new IntWritable(1);
    @Override
    protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, Text, IntWritable>.Context context) throws IOException, InterruptedException {
        // 1. Read one line
        String line = value.toString();
        // 2. Split it into words
        String[] words = line.split(" ");
        // 3. Emit (word, 1) for each word
        for (String word : words) {
            outK.set(word);
            context.write(outK, outV);
        }
    }
}
- Reducer class
package com.atguigu.mapreduce.combiner;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
/**
 * KEYIN:    type of the reduce-stage input key — Text
 * VALUEIN:  type of the reduce-stage input value — IntWritable
 * KEYOUT:   type of the reduce-stage output key — Text
 * VALUEOUT: type of the reduce-stage output value — IntWritable
 */
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable outV = new IntWritable();
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Reducer<Text, IntWritable, Text, IntWritable>.Context context) throws IOException, InterruptedException {
        // Iterable<IntWritable> behaves like a collection of this key's values
        int sum = 0;
        // e.g. key "atguigu" arrives with values (1, 1)
        // 1. Accumulate
        for (IntWritable value : values) {
            sum += value.get(); // sum is an int; get() unwraps the IntWritable
        }
        outV.set(sum);
        // 2. Write out
        context.write(key, outV);
    }
}
- Driver class
package com.atguigu.mapreduce.combiner;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
public class WordCountDriver { // MapReduce throws FileAlreadyExistsException if the output path already exists
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        // 1. Get the Job instance (org.apache.hadoop.mapreduce)
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // 2. Set the jar path
        job.setJarByClass(WordCountDriver.class);
        // 3. Associate the mapper and reducer (how do the jar, mapper and reducer get linked?)
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        // 4. Set the map output kv types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        // 5. Set the final output kv types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // 8. Attach the Combiner
        job.setCombinerClass(WordCountCombiner.class);
        // 6. Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path("D:\\downloads\\hadoop-3.1.0\\data\\11_input\\inputword"));
        FileOutputFormat.setOutputPath(job, new Path("D:\\downloads\\hadoop-3.1.0\\data\\output\\withCombiner"));
        // 7. Submit the job
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
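As the comment on WordCountDriver notes, MapReduce fails with FileAlreadyExistsException when the output path already exists. An optional guard, sketched here with the standard Hadoop FileSystem API (requires import org.apache.hadoop.fs.FileSystem;), is to delete the stale output directory before submitting:

    // Optional: clear a stale output directory before submitting the job
    FileSystem fs = FileSystem.get(conf);
    Path output = new Path("D:\\downloads\\hadoop-3.1.0\\data\\output\\withCombiner");
    if (fs.exists(output)) {
        fs.delete(output, true); // true = delete recursively
    }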
First, run the job without the Combiner; the counters in the log look like this: (log screenshot not preserved)
Then run it with the Combiner: (log screenshot not preserved)
Comparing the two runs, with the Combiner enabled the Combine output records counter is 7, i.e. the number of distinct words in the input file, and Map output materialized bytes decreases as well.
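If you would rather not read the console log, the same counters can be fetched programmatically after waitForCompletion returns. A small sketch using Hadoop's standard TaskCounter enum (the variable names are my own; requires import org.apache.hadoop.mapreduce.Counters; and import org.apache.hadoop.mapreduce.TaskCounter;):

    // After: boolean result = job.waitForCompletion(true);
    Counters counters = job.getCounters();
    long combineIn  = counters.findCounter(TaskCounter.COMBINE_INPUT_RECORDS).getValue();
    long combineOut = counters.findCounter(TaskCounter.COMBINE_OUTPUT_RECORDS).getValue();
    System.out.println("Combine input records  = " + combineIn);
    System.out.println("Combine output records = " + combineOut);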
2.2 Approach 2
- Register WordCountReducer itself as the Combiner in the WordCountDriver driver class:
job.setCombinerClass(WordCountReducer.class);
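This shortcut works only because WordCountReducer's input kv types (Text, IntWritable) are identical to its output kv types and summation is associative, so running the reduce logic on the map side does not change the final result. That is also why the WordCountCombiner in Approach 1 has exactly the same body as WordCountReducer. When these conditions do not hold (e.g. the averaging case above), a dedicated Combiner class as in Approach 1 is required.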