2.1 Serialization Overview
- What is serialization
Serialization converts in-memory objects into a sequence of bytes (or another transport format) so that they can be persisted to disk or transmitted over the network.
Deserialization converts a received byte sequence (or other transport format), or data persisted on disk, back into in-memory objects.
- Why serialize
Generally speaking, "live" objects exist only in memory and are gone once the machine powers off. Moreover, a "live" object can only be used by the local process; it cannot be sent to another computer on the network. Serialization makes it possible to store "live" objects and to send them to remote machines.
- Why not use Java's serialization
Java's built-in serialization is a heavyweight framework: a serialized object carries a lot of extra information (various checksums, headers, the inheritance hierarchy, and so on), which makes it inefficient to transmit over the network. Hadoop's own serialization has the following advantages (a size comparison sketch follows this list):
- Compact: small storage footprint
- Fast: quick to transmit
- Interoperable: supports interaction across multiple languages
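To make the "compact" claim concrete, here is a minimal sketch (not part of the case study; the class name SerializationSizeDemo is made up for illustration) that serializes the same long value once with Hadoop's Writable and once with Java serialization, then prints both sizes:

package com.atguigu.mapreduce.writable;

import org.apache.hadoop.io.LongWritable;

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.ObjectOutputStream;

public class SerializationSizeDemo {
    public static void main(String[] args) throws Exception {
        // Hadoop Writable: LongWritable.write emits exactly the 8 raw bytes.
        ByteArrayOutputStream hadoopBytes = new ByteArrayOutputStream();
        new LongWritable(1454L).write(new DataOutputStream(hadoopBytes));

        // Java serialization: ObjectOutputStream wraps the same 8 bytes in a
        // stream header, class descriptor, and other metadata.
        ByteArrayOutputStream javaBytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(javaBytes)) {
            oos.writeObject(Long.valueOf(1454L));
        }

        System.out.println("Writable: " + hadoopBytes.size() + " bytes"); // 8
        System.out.println("Java: " + javaBytes.size() + " bytes");       // roughly 80
    }
}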
2.2 Implementing the Serialization Interface (Writable) on a Custom Bean
The commonly used built-in serialization types cannot satisfy every need; for example, to pass a bean object around inside the Hadoop framework, that object must implement the serialization interface. The concrete steps are as follows:
- The class must implement the Writable interface.
- During deserialization the framework instantiates the bean by reflection through the no-argument constructor, so a no-arg constructor is required:

public FlowBean() {
    super();
}
- Override the serialization method:

@Override
public void write(DataOutput out) throws IOException {
    out.writeLong(upFlow);
    out.writeLong(downFlow);
    out.writeLong(sumFlow);
}
- Override the deserialization method:

@Override
public void readFields(DataInput in) throws IOException {
    upFlow = in.readLong();
    downFlow = in.readLong();
    sumFlow = in.readLong();
}
- Note that the deserialization order must match the serialization order exactly.
- To make the results readable in the output file, override toString(); separating the fields with "\t" makes the output convenient to use later.
- If the custom bean is to be transported as a key, it must also implement the Comparable interface, because the Shuffle phase of the MapReduce framework requires keys to be sortable (see the WritableComparable sketch after this list):

@Override
public int compareTo(FlowBean o) {
    // Sort in descending order, from largest to smallest;
    // Long.compare also returns 0 for equal totals, as the contract requires
    return Long.compare(o.getSumFlow(), this.sumFlow);
}
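In practice Hadoop rolls both requirements into a single interface: a key type can implement org.apache.hadoop.io.WritableComparable, which extends both Writable and Comparable. A minimal sketch, assuming a key that sorts on a single total-traffic field (the class name FlowBeanKey is hypothetical):

package com.atguigu.mapreduce.writable;

import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class FlowBeanKey implements WritableComparable<FlowBeanKey> {

    private long sumFlow;

    // No-arg constructor for reflection during deserialization
    public FlowBeanKey() {
    }

    public long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(long sumFlow) {
        this.sumFlow = sumFlow;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(sumFlow);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        sumFlow = in.readLong();
    }

    @Override
    public int compareTo(FlowBeanKey o) {
        // Descending by total traffic
        return Long.compare(o.sumFlow, this.sumFlow);
    }
}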
2.3 Serialization Case Study
The bean object, mapper, reducer, and driver are implemented as follows:
package com.atguigu.mapreduce.writable;
/**
* @author
* @date 2021/06/03
**/
import org.apache.hadoop.io.Writable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
/**
 * 1. Define a class that implements the Writable interface
 * 2. Override the serialization and deserialization methods
 * 3. Provide a no-arg constructor
 * 4. Override toString()
 */
public class FlowBean implements Writable {

    private long upFlow;   // upstream traffic
    private long downFlow; // downstream traffic
    private long sumFlow;  // total traffic

    public long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(long upFlow) {
        this.upFlow = upFlow;
    }

    public long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(long downFlow) {
        this.downFlow = downFlow;
    }

    public long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(long sumFlow) {
        this.sumFlow = sumFlow;
    }

    // Convenience overload: derive the total from the other two fields
    public void setSumFlow() {
        this.sumFlow = this.upFlow + this.downFlow;
    }

    // No-arg constructor, required so the framework can instantiate
    // the bean by reflection during deserialization
    public FlowBean() {
    }

    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeLong(upFlow);
        dataOutput.writeLong(downFlow);
        dataOutput.writeLong(sumFlow);
    }

    @Override
    public void readFields(DataInput dataInput) throws IOException {
        // Read the fields in exactly the order they were written
        this.upFlow = dataInput.readLong();
        this.downFlow = dataInput.readLong();
        this.sumFlow = dataInput.readLong();
    }

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }
}
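To sanity-check the bean outside the cluster, a minimal round-trip sketch (the class name FlowBeanRoundTrip is made up; everything else is plain JDK API plus the FlowBean above): write a FlowBean to bytes, read it back, and confirm the fields survive unchanged.

package com.atguigu.mapreduce.writable;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;

public class FlowBeanRoundTrip {
    public static void main(String[] args) throws Exception {
        FlowBean original = new FlowBean();
        original.setUpFlow(1454);
        original.setDownFlow(245);
        original.setSumFlow(); // 1454 + 245 = 1699

        // Serialize into an in-memory buffer
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        original.write(new DataOutputStream(buffer));

        // Deserialize into a fresh instance
        FlowBean copy = new FlowBean();
        copy.readFields(new DataInputStream(
                new ByteArrayInputStream(buffer.toByteArray())));

        System.out.println(copy); // expected: 1454	245	1699
    }
}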
package com.atguigu.mapreduce.writable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
/**
* @author
* @date 2021/06/05
**/
public class FlowMapper extends Mapper<LongWritable, Text, Text, FlowBean> {

    private Text outK = new Text();
    private FlowBean outValue = new FlowBean();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // 1. Read one line, e.g.
        // 1	15847512684	120.196.110.99	www.jin.com	1454	245	200
        // 2	14784125941	150.206.211.14	4101	142	200
        String line = value.toString();

        // 2. Split on "\t"; the fields go into an array
        String[] split = line.split("\t");

        // 3. Grab the fields we need: the phone number (e.g. 15847512684),
        //    the upstream traffic (e.g. 1454), and the downstream traffic (e.g. 245).
        //    Traffic fields are indexed from the end of the array because some
        //    lines (like the second sample above) have no domain column.
        String phone = split[1];
        String upFlow = split[split.length - 3];
        String downFlow = split[split.length - 2];

        // 4. Wrap into the output key and value
        outK.set(phone);
        outValue.setUpFlow(Long.parseLong(upFlow));
        outValue.setDownFlow(Long.parseLong(downFlow));
        outValue.setSumFlow();

        // 5. Emit
        context.write(outK, outValue);
    }
}
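To see how the index-from-the-end parsing behaves on both sample line formats, a small stand-alone scratch check (hypothetical, not part of the job):

public class ParseCheck {
    public static void main(String[] args) {
        // Two sample records: one with a domain column, one without
        String withDomain = "1\t15847512684\t120.196.110.99\twww.jin.com\t1454\t245\t200";
        String withoutDomain = "2\t14784125941\t150.206.211.14\t4101\t142\t200";
        for (String line : new String[]{withDomain, withoutDomain}) {
            String[] split = line.split("\t");
            System.out.println(split[1] + " up=" + split[split.length - 3]
                    + " down=" + split[split.length - 2]);
        }
        // prints:
        // 15847512684 up=1454 down=245
        // 14784125941 up=4101 down=142
    }
}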
package com.atguigu.mapreduce.writable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
/**
* @author
* @date 2021/06/06
**/
public class FlowReducer extends Reducer<Text, FlowBean, Text, FlowBean> {

    private FlowBean outV = new FlowBean();

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
        // 1. Iterate over the values for this phone number and accumulate
        long totalUp = 0;
        long totalDown = 0;
        for (FlowBean value : values) {
            totalUp += value.getUpFlow();
            totalDown += value.getDownFlow();
        }

        // 2. Fill in the output value
        outV.setUpFlow(totalUp);
        outV.setDownFlow(totalDown);
        outV.setSumFlow();

        // 3. Emit
        context.write(key, outV);
    }
}
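Accumulating into plain long variables, as above, also sidesteps a well-known Hadoop pitfall: the framework reuses the FlowBean instance it hands to the value loop on every iteration. A sketch of the pattern to avoid inside reduce() (collected is a hypothetical name):

// Anti-pattern: every element ends up referencing the same reused FlowBean,
// so the list holds N entries all showing whatever the last record contained.
java.util.List<FlowBean> collected = new java.util.ArrayList<>();
for (FlowBean value : values) {
    collected.add(value);
}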
package com.atguigu.mapreduce.writable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
/**
* @author
* @date 2021/06/06
**/
public class FlowDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // 1. Get the job instance
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        // 2. Set the jar by locating this driver class
        job.setJarByClass(FlowDriver.class);

        // 3. Wire up the mapper and reducer
        job.setMapperClass(FlowMapper.class);
        job.setReducerClass(FlowReducer.class);

        // 4. Set the mapper's output key and value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);

        // 5. Set the final output key and value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);

        // 6. Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path("F:\\input"));
        FileOutputFormat.setOutputPath(job, new Path("F:\\output"));

        // 7. Submit the job and exit with its status
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
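Assuming the two sample lines shown in FlowMapper were the entire input under F:\input, the output file F:\output\part-r-00000 would contain one line per phone number (keys in Text sort order), formatted as the key followed by FlowBean.toString():

14784125941	4101	142	4243
15847512684	1454	245	1699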