Hadoop Series
Note: If you find this blog helpful, don't forget to like and bookmark it. I post new content on AI and big data every week, most of it original: Python, Java, Scala, and SQL code; CV, NLP, and recommender systems; Spark, Flink, Kafka, HBase, Hive, Flume, and more. It's all practical material, including walkthroughs of papers from top conferences. Let's improve together.
Today we continue with MapReduce Basics, Part 4.
#Boxuegu IT Learning Technical Support
Preface
1. MapReduce splits a large computation task into smaller tasks, lets those small tasks run on different machines, and finally merges the results of the small tasks into an overall result.
2. MapReduce has two phases: a Map phase responsible for splitting the work, and a Reduce phase responsible for aggregating the results.
3. The overall MapReduce workflow can be divided into three stages: map, shuffle, and reduce.
Here we use another simple example to illustrate how the Combiner is used in the shuffle stage.
I. What is a Combiner?
1. The Combiner is an optimization in MapReduce: it pre-aggregates the output of each map task, reducing the amount of data transferred over the network between the Map side and the Reduce side.
2. You can think of the Combiner as running the Reduce logic once on each Map side first.
3. The Reducer aggregates the output of all map tasks, whereas the Combiner aggregates the output of a single map task.
4. The Combiner is only an optimization; it must not change the final result.
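As a rough illustration using the word count data below: if one map task processes the first two lines, it emits (Allen, 1) twice; with a Combiner, that map task sends only (Allen, 2) to the Reduce side, so the shuffle carries one record for that key instead of two. The final count for Allen is 2 either way, which is exactly what point 4 above means.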
II. Usage Steps
This post again uses the simplest case, word count.
1. Data Preparation
Allen Java Hadoop
Allen Python Spark
Tom Scala Hive
2. Map Stage
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class Mapper_demo extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Split each input line on spaces and emit (word, 1) for every word.
        for (String word : value.toString().split(" ")) {
            context.write(new Text(word), new LongWritable(1));
        }
    }
}
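For the first input line, "Allen Java Hadoop", this mapper emits (Allen, 1), (Java, 1), (Hadoop, 1); the framework then groups these pairs by key during the shuffle.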
3. Combiner Stage
In this case the Combiner code is identical to the Reducer code; running it on the Map side pre-aggregates the counts and lightens the load on the ReduceTask. Note that the Combiner's input and output types must match the Reducer's input types, and the framework may invoke it zero, one, or several times, so its logic must not affect the final result.
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class Combiner_demo extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
        // Sum the partial counts produced by a single map task for this word.
        long count = 0;
        for (LongWritable value : values) {
            count += value.get();
        }
        context.write(key, new LongWritable(count));
    }
}
4. Reduce Stage
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class Reducer_demo extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
        // Sum the (already pre-aggregated) counts from all map tasks for this word.
        long count = 0;
        for (LongWritable value : values) {
            count += value.get();
        }
        context.write(key, new LongWritable(count));
    }
}
5. Driver Entry Point
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.net.URI;

public class Driver_demo {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "wordCount");
        job.setJarByClass(Driver_demo.class);

        // Input directory on HDFS
        FileInputFormat.addInputPath(job, new Path("hdfs://node1:8020/input/words"));

        // Map stage
        job.setMapperClass(Mapper_demo.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        // Combiner: pre-aggregates map output before it is shuffled to the reducers
        job.setCombinerClass(Combiner_demo.class);

        // Reduce stage
        job.setReducerClass(Reducer_demo.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // Output path; MapReduce treats it as a directory and writes part-r-* files into it
        Path outputPath = new Path("hdfs://node1:8020/output/wordcount2/wordcount.txt");
        FileOutputFormat.setOutputPath(job, outputPath);

        // Delete the output path if it already exists, otherwise the job fails on submission
        FileSystem fileSystem = FileSystem.get(new URI("hdfs://node1:8020"), new Configuration(), "root");
        if (fileSystem.exists(outputPath)) {
            fileSystem.delete(outputPath, true);
        }

        // Submit the job and wait for it to finish
        boolean bl = job.waitForCompletion(true);
        System.exit(bl ? 0 : 1);
    }
}
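After packaging the project into a jar, the job can be submitted with the hadoop jar command (the jar name here is just a placeholder), for example: hadoop jar wordcount.jar Driver_demo. The result lands in a part-r-00000 file under the output path. For the sample input above, the expected content is roughly:
Allen	2
Hadoop	1
Hive	1
Java	1
Python	1
Scala	1
Spark	1
Tom	1
With the Combiner enabled the counts are the same as without it; only the amount of data shuffled to the reducer is smaller.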
Summary
This post used the simplest word count example to illustrate what the Combiner does and how to use it.