Hadoop Serialization
What is serialization?
Serialization converts in-memory objects into a byte stream so that they can be persisted to disk or transmitted over the network; deserialization is the reverse process, turning a byte stream back into in-memory objects.
Why serialize?
The keys and values used so far have all been Hadoop's built-in data types, basic types that the Hadoop framework wraps with its own serialization. In enterprise development these basic serialization types often cannot cover every need; for example, to pass a bean object around inside the Hadoop framework, that object must implement the serialization interface.
Why not use Java serialization?
What does "heavyweight" mean?
Java's built-in serialization attaches a lot of extra information to each object. Hadoop is a cluster framework, so shipping data across the network is unavoidable, and that extra overhead adds up; Hadoop therefore defines its own serialization framework that writes only the content it actually needs. Compared with Java serialization, Hadoop serialization is:
- Compact: only the raw field data is written, so the payload is small.
- Fast: less data means less to write, transfer, and read.
- Extensible: there is no extra baggage such as headers or checksums, so the format can be extended however you like.
- Interoperable: the byte format is language-independent, so the data can be consumed from other programming languages.
In short: lighter, smaller, and faster than Java serialization.
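To make the "compact" claim concrete, here is a minimal, self-contained sketch (not from the original text) comparing the bytes produced by plain Java serialization with the raw DataOutput writes that Writable implementations use under the hood:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.ObjectOutputStream;

public class SizeDemo {
    public static void main(String[] args) throws Exception {
        // Java serialization: stream header + class descriptor + the value
        ByteArrayOutputStream javaBytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(javaBytes)) {
            oos.writeObject(Long.valueOf(13736230513L));
        }

        // Raw DataOutput, as used by Writable: just the 8 bytes of the long
        ByteArrayOutputStream rawBytes = new ByteArrayOutputStream();
        try (DataOutputStream dos = new DataOutputStream(rawBytes)) {
            dos.writeLong(13736230513L);
        }

        System.out.println("Java serialization: " + javaBytes.size() + " bytes"); // roughly 80 bytes
        System.out.println("Raw DataOutput:     " + rawBytes.size() + " bytes"); // exactly 8 bytes
    }
}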
Custom Hadoop Serialization Types
1. The Hadoop serialization interface: Writable
This interface provides two methods: write is the serialization method, and readFields is the deserialization method.
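For reference, the interface in org.apache.hadoop.io looks like this:

public interface Writable {
    void write(DataOutput out) throws IOException;     // serialize the object's fields to the stream
    void readFields(DataInput in) throws IOException;  // read the fields back, in the same order
}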
2. Steps to implement serialization for a bean object
(1) Implement the Writable interface
(2) Provide a no-arg constructor
public FlowBean() {
super();
}
// Deserialization instantiates the bean by calling the no-arg constructor via reflection, so a no-arg constructor is mandatory.
(3) Override the serialization method
@Override
public void write(DataOutput out) throws IOException {
out.writeLong(upFlow);
out.writeLong(downFlow);
out.writeLong(sumFlow);
}
(4) Override the deserialization method
@Override
public void readFields(DataInput in) throws IOException {
upFlow = in.readLong();
downFlow = in.readLong();
sumFlow = in.readLong();
}
Note that the deserialization order must match the serialization order exactly.
(5) To display the result in a file, override toString(); by default the object's address is printed. Use "\t" to separate the fields, which also makes the output convenient to use later as input to another MapReduce job.
(6) If the custom bean needs to travel as a key, it must also implement the Comparable interface, because the Shuffle phase of the MapReduce framework requires that keys be sortable.
@Override
public int compareTo(FlowBean o) {
    // Descending order, from largest to smallest; Long.compare also returns 0
    // on ties, which the compareTo contract requires
    return Long.compare(o.getSumFlow(), this.getSumFlow());
}
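Hadoop also ships a combined interface, org.apache.hadoop.io.WritableComparable, for exactly this key scenario. A minimal sketch (the class name FlowBeanKey is illustrative, not part of the example above):

import org.apache.hadoop.io.WritableComparable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class FlowBeanKey implements WritableComparable<FlowBeanKey> {
    private long sumFlow;

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(sumFlow);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        sumFlow = in.readLong();
    }

    @Override
    public int compareTo(FlowBeanKey o) {
        return Long.compare(o.sumFlow, this.sumFlow); // descending by total traffic
    }
}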
3. Hands-on serialization example
Requirement: for each phone number, compute its total upstream traffic, total downstream traffic, and overall total traffic.
Input data:
1 13736230513 192.196.100.1 www.atguigu.com 2481 24681 200
2 13846544121 192.196.100.2 264 0 200
3 13956435636 192.196.100.3 132 1512 200
4 13966251146 192.168.100.1 240 0 404
5 18271575951 192.168.100.2 www.atguigu.com 1527 2106 200
6 84188413 192.168.100.3 www.atguigu.com 4116 1432 200
7 13590439668 192.168.100.4 1116 954 200
8 15910133277 192.168.100.5 www.hao123.com 3156 2936 200
9 13729199489 192.168.100.6 240 0 200
10 13630577991 192.168.100.7 www.shouhu.com 6960 690 200
11 15043685818 192.168.100.8 www.baidu.com 3659 3538 200
12 15959002129 192.168.100.9 www.atguigu.com 1938 180 500
13 13560439638 192.168.100.10 918 4938 200
14 13470253144 192.168.100.11 180 180 200
15 13682846555 192.168.100.12 www.qq.com 1938 2910 200
16 13992314666 192.168.100.13 www.gaga.com 3008 3720 200
17 13509468723 192.168.100.14 www.qinghua.com 7335 110349 404
18 18390173782 192.168.100.15 www.sogou.com 9531 2412 200
19 13975057813 192.168.100.16 www.baidu.com 11058 48243 200
20 13768778790 192.168.100.17 120 120 200
21 13568436656 192.168.100.18 www.alibaba.com 2481 24681 200
22 13568436656 192.168.100.19 1116 954 200
Input data format (tab-separated):
id	phone number	network IP	domain (may be absent)	upstream traffic	downstream traffic	HTTP status code
Expected output data format:
phone number	total upstream traffic	total downstream traffic	total traffic
For example, 13568436656 appears twice in the input, so its expected output line is 13568436656	3597	25635	29232.
3.1 A custom bean as the value
package com.fantasy.mapreduce.Writable;
import org.apache.hadoop.io.Writable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
public class FlowBean implements Writable {
private long phone;
private int upFlow;
private int downFlow;
private int sumFlow;
// 1. No-arg constructor (deserialization creates the bean by reflection)
public FlowBean() {
}
public FlowBean(long phone, int upFlow, int downFlow) {
this.phone = phone;
this.upFlow = upFlow;
this.downFlow = downFlow;
this.sumFlow = upFlow + downFlow;
}
// 2. Serialization method
@Override
public void write(DataOutput dataOutput) throws IOException {
    dataOutput.writeLong(phone);
    dataOutput.writeInt(upFlow);
    dataOutput.writeInt(downFlow);
    dataOutput.writeInt(sumFlow); // also serialize sumFlow, since toString() prints it
}
// 3. Deserialization method: read the fields in exactly the order they were written
@Override
public void readFields(DataInput dataInput) throws IOException {
    phone = dataInput.readLong();
    upFlow = dataInput.readInt();
    downFlow = dataInput.readInt();
    sumFlow = dataInput.readInt();
}
@Override
public String toString() {
    // Tab-separated fields, as recommended in step (5) above
    return upFlow + "\t" + downFlow + "\t" + sumFlow;
}
public long getPhone() {
return phone;
}
public void setPhone(long phone) {
this.phone = phone;
}
public int getUpFlow() {
return upFlow;
}
public void setUpFlow(int upFlow) {
this.upFlow = upFlow;
}
public int getDownFlow() {
return downFlow;
}
public void setDownFlow(int downFlow) {
this.downFlow = downFlow;
}
public int getSumFlow() {
return sumFlow;
}
public void setSumFlow(int sumFlow) {
this.sumFlow = sumFlow;
}
}
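As a quick local sanity check (not part of the MapReduce job; the class name FlowBeanRoundTrip is illustrative), the bean can be round-tripped through an in-memory buffer to confirm that write() and readFields() are symmetric:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;

public class FlowBeanRoundTrip {
    public static void main(String[] args) throws Exception {
        FlowBean original = new FlowBean(13736230513L, 2481, 24681);

        // Serialize into an in-memory buffer
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        original.write(new DataOutputStream(buf));

        // Deserialize into a fresh bean created via the no-arg constructor
        FlowBean copy = new FlowBean();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));

        System.out.println(copy); // expected: 2481	24681	27162
    }
}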
3.2 The Mapper class
package com.fantasy.mapreduce.Writable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
// keyOut: the phone number
// valueOut: the custom FlowBean
public class writeableMapper extends Mapper<LongWritable,Text,Text,FlowBean> {
Text k = new Text();
FlowBean v = new FlowBean();
@Override
protected void map(LongWritable key, Text value,Context context) throws IOException, InterruptedException {
// Sample line: 1	13736230513	192.196.100.1	www.atguigu.com	2481	24681	200
String[] split = value.toString().split("\t");
String phone = split[1];
// The domain field may be absent, so index the traffic fields from the end of the array
String upFlow = split[split.length - 3];
String downFlow = split[split.length - 2];
k.set(phone);
v.setUpFlow(Integer.parseInt(upFlow));
v.setDownFlow(Integer.parseInt(downFlow));
v.setSumFlow(Integer.parseInt(upFlow) + Integer.parseInt(downFlow));
context.write(k,v);
}
}
3.3 The Reducer class
package com.fantasy.mapreduce.Writable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
public class writeableReducer extends Reducer<Text,FlowBean,Text,FlowBean> {
FlowBean outValue = new FlowBean(); // reused across reduce() calls; safe because context.write() serializes immediately
@Override
protected void reduce(Text key, Iterable<FlowBean> values,Context context) throws IOException, InterruptedException {
int upFlowTotal = 0;
int downFlowTotal = 0;
// Sum the upstream and downstream traffic over all records for this phone number
for (FlowBean value : values) {
    upFlowTotal += value.getUpFlow();
    downFlowTotal += value.getDownFlow();
}
outValue.setPhone(Long.parseLong(key.toString()));
outValue.setUpFlow(upFlowTotal);
outValue.setDownFlow(downFlowTotal);
outValue.setSumFlow(upFlowTotal + downFlowTotal);
context.write(key,outValue);
}
}
3.4 The Driver class
package com.fantasy.mapreduce.Writable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
public class writeableDriver {
public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
Configuration conf = new Configuration();
Job job = Job.getInstance(conf);
job.setJarByClass(writeableDriver.class);
// Bind the Mapper and Reducer implementations
job.setMapperClass(writeableMapper.class);
job.setReducerClass(writeableReducer.class);
// Types emitted by the Mapper
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(FlowBean.class);
// Types of the final job output
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(FlowBean.class);
// Input and output paths are taken from the command line
FileInputFormat.setInputPaths(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
boolean result = job.waitForCompletion(true);
System.exit(result? 0 : 1);
}
}
3.5 Package the job, upload it to the server, and run it
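Assuming the project is a standard Maven project (the jar name MapReduce-1.0.0.jar below suggests it is), package it first:

mvn clean package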
(1) Upload the source data file to HDFS, as shown below
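A sketch, assuming the input file is named phone_data.txt (a placeholder; use the actual file name):

[atguigu@hadoop102 zxf]$ hadoop fs -mkdir -p /MapReduce/02writeable/input
[atguigu@hadoop102 zxf]$ hadoop fs -put phone_data.txt /MapReduce/02writeable/input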
(2) Upload the jar to the Linux server
(3) Run the job
[atguigu@hadoop102 zxf]$ hadoop jar MapReduce-1.0.0.jar \
com.fantasy.mapreduce.Writable.writeableDriver \
/MapReduce/02writeable/input \
/MapReduce/02writeable/output
(4) Check the output
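For example, with the default single reduce task the result lands in part-r-00000:

[atguigu@hadoop102 zxf]$ hadoop fs -cat /MapReduce/02writeable/output/part-r-00000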