Hadoop MapReduce: WordCount Example and Serialization

WordCount Example

    Requirement: count and output the number of occurrences of each word in a given text file.

Input                  Map phase          Intermediate (after shuffle)   Reduce phase    Output
Java Java Java Java    <Java,1>...        <Assembly,<1,1>>               <Assembly,2>    Assembly  2
PHP PHP PHP PHP PHP    <PHP,1>...         <Java,<1,1,1,1>>               <Java,4>        Java      4
Python Python Python   <Python,1>...      <PHP,<1,1,1,1,1>>              <PHP,5>         PHP       5
Assembly Assembly      <Assembly,1>...    <Python,<1,1,1>>               <Python,3>      Python    3
SQL SQL SQL SQL        <SQL,1>...         <SQL,<1,1,1,1>>                <SQL,4>         SQL       4

(The last three columns are not row-aligned with the input: during the shuffle phase the intermediate <K,V> pairs are grouped and sorted by key, so from that point on the keys appear in alphabetical order.)

WordCountMapper

  1. A user-defined Mapper must extend the Mapper parent class.
  2. The Mapper's input data comes in KV pairs (the KV types can be customized).
  3. The Mapper's business logic goes in the map() method.
  4. The Mapper's output data is emitted as KV pairs (the KV types can be customized).
  5. map() is called once for each input <K, V> pair.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

//KEYIN    the key type of the input data
//VALUEIN  the value type of the input data
//KEYOUT   the key type of the output data
//VALUEOUT the value type of the output data
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
	
	Text k = new Text();                  // reusable output key
	IntWritable v = new IntWritable(1);   // every occurrence counts as 1
	
	@Override
	protected void map(LongWritable key, Text value, Context context)
			throws IOException, InterruptedException {
		String line = value.toString();     // get one line of input
		String[] words = line.split(" ");   // split the line into individual words
		for (String word : words) {
			k.set(word);
			context.write(k, v);   // emit <word, 1>
		}
	}
}

WordCountReducer

  1. A user-defined Reducer must extend the Reducer parent class.
  2. The Reducer's input data types match the Mapper's output data types.
  3. The Reducer's business logic goes in the reduce() method.
  4. The Reducer's output data is emitted as KV pairs (the KV types can be customized).
  5. reduce() is called once for each group of <K, V> pairs sharing the same key.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
	IntWritable result = new IntWritable();

	@Override
	protected void reduce(Text key, Iterable<IntWritable> values, Context context)
			throws IOException, InterruptedException {
		int sum = 0;
		// accumulate all the counts emitted for this key
		for (IntWritable value : values) {
			sum += value.get();
		}
		result.set(sum);
		context.write(key, result);   // emit <word, total count>
	}
}

WordCountDriver

The Driver follows 7 steps:

  1. Get a Job object
  2. Set the location of the jar
  3. Wire up the Mapper and Reducer classes
  4. Set the key and value types of the Mapper output
  5. Set the key and value types of the final output
  6. Set the input and output paths
  7. Submit the job

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
	public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
		Configuration conf = new Configuration();
		//1. Get a Job object
		Job job = Job.getInstance(conf);
		//2. Set the location of the jar
		job.setJarByClass(WordCountDriver.class);
		//3. Wire up the Mapper and Reducer classes
		job.setMapperClass(WordCountMapper.class);
		job.setReducerClass(WordCountReducer.class);
		//4. Set the key and value types of the Mapper output
		job.setMapOutputKeyClass(Text.class);
		job.setMapOutputValueClass(IntWritable.class);
		//5. Set the key and value types of the final output
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(IntWritable.class);
		//6. Set the input and output paths
		FileInputFormat.setInputPaths(job, new Path(args[0]));
		FileOutputFormat.setOutputPath(job, new Path(args[1]));
		//7. Submit the job and wait for it to finish
		boolean result = job.waitForCompletion(true);
		System.exit(result ? 0 : 1);
	}
}
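
Once the three classes are packaged into a jar, the job can be submitted with the hadoop jar command; the jar name and HDFS paths below are illustrative:

hadoop jar wordcount.jar WordCountDriver /user/input /user/output

Note that the output directory must not already exist, otherwise FileOutputFormat will refuse to start the job.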

Serialization

    Serialization converts in-memory objects into a byte sequence so they can be stored on disk (persisted) or transmitted over the network.
    Deserialization is the reverse: it converts a received byte sequence, or persisted data read from disk, back into in-memory objects.
    Java's built-in serialization (Serializable) is a heavyweight framework: every serialized object carries a lot of extra information (checksums, headers, the class hierarchy, and so on), which makes it inefficient to transmit over the network. Hadoop therefore provides its own compact serialization mechanism (Writable); a round-trip sketch follows the table below.
    Common Java types and their corresponding Hadoop serialization types:

Java type    Hadoop Writable type
boolean      BooleanWritable
byte         ByteWritable
int          IntWritable
float        FloatWritable
long         LongWritable
double       DoubleWritable
String       Text
map          MapWritable
array        ArrayWritable
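
To make the contrast with Serializable concrete, here is a minimal round-trip sketch (the WritableRoundTrip harness is illustrative, not part of the WordCount job): an IntWritable serializes to exactly its raw 4 bytes, with none of the header overhead Java serialization adds.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;

public class WritableRoundTrip {
	public static void main(String[] args) throws IOException {
		// serialize: write() produces exactly 4 bytes for an int, no headers
		ByteArrayOutputStream bytes = new ByteArrayOutputStream();
		IntWritable out = new IntWritable(42);
		out.write(new DataOutputStream(bytes));

		// deserialize: readFields() repopulates an existing (reusable) object
		IntWritable in = new IntWritable();
		in.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
		System.out.println(in.get() + ", serialized size = " + bytes.size() + " bytes");   // 42, 4 bytes
	}
}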

Serializing Custom Objects

  1. Implement the Writable interface.
  2. Deserialization instantiates the object via reflection using the no-arg constructor, so a no-arg constructor is required.
  3. Override the serialization method write().
  4. Override the deserialization method readFields(); the fields must be read in the same order they were written.
  5. To make the result readable in the output file, override toString(); fields are conventionally separated with "\t".
  6. If the custom bean is to be transmitted as a key, it must also implement Comparable, because the Shuffle phase of MapReduce sorts by key. (A usage sketch follows the Student class below.)

package beanwritable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

public class Student implements Writable, Comparable<Student> {
	private int chinese;
	private int math;
	private int english;
	private int sum;
	
	public Student() {}
	public Student(int chinese, int math, int english) {
		this.chinese = chinese;
		this.math = math;
		this.english = english;
		this.sum = chinese + math + english;
	}
	
	public int getChinese() { return chinese; }
	public void setChinese(int chinese) { this.chinese = chinese; }
	public int getMath() { return math; }
	public void setMath(int math) { this.math = math; }
	public int getEnglish() { return english; }
	public void setEnglish(int english) { this.english = english; }
	public int getSum() { return sum; }
	public void setSum(int sum) { this.sum = sum; }
	public void setGrade(int chinese, int math, int english) {
		this.chinese = chinese;
		this.math = math;
		this.english = english;
		this.sum = chinese + math + english;
	}
	
	@Override
	public String toString() {
		return "chinese=" + chinese + "\tmath=" + math + "\tenglish=" + english;
	}
	
	// serialization method: the order in which fields are written defines the wire format
	@Override
	public void write(DataOutput out) throws IOException {
		out.writeInt(chinese);
		out.writeInt(math);
		out.writeInt(english);
		out.writeInt(sum);
	}
	// deserialization method: fields must be read in exactly the order they were written
	@Override
	public void readFields(DataInput in) throws IOException {
		chinese = in.readInt();
		math = in.readInt();
		english = in.readInt();
		sum = in.readInt();
	}
	
	@Override
	public int compareTo(Student o) {
		// ascending order by total score; Integer.compare avoids the overflow risk of a plain subtraction
		return Integer.compare(this.sum, o.sum);
	}
}
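
As a sketch of how such a bean is typically used, a mapper might parse each line into a Text key and a Student value. The input layout ("name chinese math english" per line) and the StudentMapper class are assumptions for illustration, not part of the original example:

package beanwritable;

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper: assumes each input line looks like "name chinese math english"
public class StudentMapper extends Mapper<LongWritable, Text, Text, Student> {
	Text name = new Text();
	Student student = new Student();   // safe to reuse: context.write serializes immediately

	@Override
	protected void map(LongWritable key, Text value, Context context)
			throws IOException, InterruptedException {
		String[] fields = value.toString().split(" ");
		name.set(fields[0]);
		student.setGrade(Integer.parseInt(fields[1]),
				Integer.parseInt(fields[2]),
				Integer.parseInt(fields[3]));
		// Student can travel as the value because it implements Writable;
		// to use it as the key instead, Comparable (already implemented above) is required
		context.write(name, student);
	}
}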