Hadoop Serialization
What Is Serialization?
Serialization is the process of converting in-memory objects into a byte sequence (or another data-transfer format) so they can be persisted to disk or transmitted over the network (object -> data, a simplified form).
Deserialization is the reverse: converting a received byte sequence (or other data-transfer format), or data persisted on disk, back into in-memory objects (data -> object, with concrete meaning).
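To make this concrete, here is a minimal round-trip sketch using Hadoop's built-in IntWritable (the class name WritableRoundTrip and the value 42 are just for illustration):

import org.apache.hadoop.io.IntWritable;
import java.io.*;

public class WritableRoundTrip {
    public static void main(String[] args) throws IOException {
        // Serialization: object -> byte sequence
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new IntWritable(42).write(new DataOutputStream(bytes));

        // Deserialization: byte sequence -> object
        IntWritable restored = new IntWritable();
        restored.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        System.out.println(restored.get()); // 42
    }
}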
Why Serialize?
Generally speaking, "live" objects exist only in memory: once the machine powers off, they are gone. Moreover, a "live" object can only be used by the local process; it cannot be sent to another computer on the network. Serialization, however, lets us store "live" objects and send them to remote machines.
Why Not Use Java's Serialization?
Java's serialization is a heavyweight framework: a serialized object carries a lot of extra information (checksums, headers, the inheritance hierarchy, and so on), which makes it inefficient to transmit over the network. Hadoop therefore developed its own serialization mechanism.
In short, Java's serialization is more complex than Hadoop needs; to avoid wasting time and space, Hadoop designed its own.
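To get a feel for the overhead, here is a hedged sketch (the JavaLong holder class is made up purely for this comparison) that serializes the same single long value both ways and prints the resulting sizes:

import org.apache.hadoop.io.LongWritable;
import java.io.*;

public class SizeComparison {
    // A made-up Serializable holder for one long, for comparison only
    static class JavaLong implements Serializable {
        long value = 42L;
    }

    public static void main(String[] args) throws IOException {
        // Java serialization: stream header, class metadata, field descriptors, ...
        ByteArrayOutputStream javaBytes = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(javaBytes);
        oos.writeObject(new JavaLong());
        oos.flush();

        // Hadoop Writable: just the 8 raw bytes of the long
        ByteArrayOutputStream hadoopBytes = new ByteArrayOutputStream();
        new LongWritable(42L).write(new DataOutputStream(hadoopBytes));

        System.out.println("Java serialization: " + javaBytes.size() + " bytes"); // typically dozens of bytes
        System.out.println("Hadoop Writable: " + hadoopBytes.size() + " bytes");  // 8 bytes
    }
}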
Characteristics of Hadoop Serialization
①Compact: uses storage space efficiently.
②Fast: little extra overhead when reading or writing data.
③Extensible: can be upgraded along with the communication protocol.
④Interoperable: supports interaction across multiple languages.
Implementing the Serialization Interface on a Custom Object
When the built-in serialization types cannot meet your needs, you can pass a custom bean object through the Hadoop framework; that bean must implement the serialization interface.
Implementing serialization on a bean object takes seven steps:
① The class must implement the Writable interface.
② Deserialization uses reflection to invoke the no-argument constructor, so the class must provide one:
public FlowBean(){
}
③ Override the serialization method:
@Override
public void write(DataOutput out) throws IOException {
out.writeLong(upFlow);
out.writeLong(downFlow);
out.writeLong(sumFlow);
}
④ Override the deserialization method:
@Override
public void readFields(DataInput in) throws IOException {
upFlow = in.readLong();
downFlow = in.readLong();
sumFlow = in.readLong();
}
⑤ Note that the deserialization order must exactly match the serialization order (a round-trip sketch after this list demonstrates why).
⑥ To make the result readable in the output file, override toString(); separating fields with "\t" is convenient for downstream processing.
⑦ If the custom bean is to be transmitted as a key, it must also implement the Comparable interface, because the Shuffle phase of the MapReduce framework requires that keys be sortable (in Hadoop this usually means implementing WritableComparable, which combines Writable and Comparable):
@Override
public int compareTo(FlowBean o) {
// Descending order, largest total flow first; Long.compare returns 0 on ties
return Long.compare(o.getSumFlow(), this.sumFlow);
}
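To see why the order matters, here is a minimal round-trip sketch (test scaffolding only, using the FlowBean defined in full below): readFields() simply consumes the next bytes in the stream, so if the read order did not mirror the write order, the fields would be silently scrambled.

import java.io.*;

public class FlowBeanRoundTrip {
    public static void main(String[] args) throws IOException {
        FlowBean original = new FlowBean();
        original.set(1116, 954); // upFlow, downFlow; sumFlow is computed inside set()

        // write() emits upFlow, downFlow, sumFlow as three consecutive longs
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        original.write(new DataOutputStream(buffer));

        // readFields() must read the three longs back in exactly that order
        FlowBean restored = new FlowBean();
        restored.readFields(new DataInputStream(new ByteArrayInputStream(buffer.toByteArray())));
        System.out.println(restored); // 1116  954  2070
    }
}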
Hands-On Example
- Requirement
Count the total upstream traffic, downstream traffic, and overall traffic consumed by each phone number.
Input data format:
7 | 13560436666 | 120.196.100.99 | 1116 | 954 | 200
id | phone number | network IP | upstream traffic | downstream traffic | network status code
Expected output data format (total traffic = upstream + downstream, e.g. 1116 + 954 = 2070):
13560436666 | 1116 | 954 | 2070
phone number | upstream traffic | downstream traffic | total traffic
The concrete code implementation
① The FlowBean class
package com.atguigu.flow;
import org.apache.hadoop.io.Writable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
public class FlowBean implements Writable {
private long upFlow;
private long downFlow;
private long sumFlow;
public FlowBean() {
}
@Override
public String toString() {
return upFlow +"\t"+downFlow+"\t"+sumFlow;
}
public void set(long upFlow, long downFlow){
this.upFlow = upFlow;
this.downFlow = downFlow;
this.sumFlow = upFlow + downFlow; // total traffic = upstream + downstream
}
public long getUpFlow() {
return upFlow;
}
public void setUpFlow(long upFlow) {
this.upFlow = upFlow;
}
public long getDownFlow() {
return downFlow;
}
public void setDownFlow(long downFlow) {
this.downFlow = downFlow;
}
public long getSumFlow() {
return sumFlow;
}
public void setSumFlow(long sumFlow) {
this.sumFlow = sumFlow;
}
// Note: the field order used in serialization and deserialization must be identical
/**
 * Serialization method
 * @param out the data sink provided by the framework
 * @throws IOException
 */
@Override
public void write(DataOutput out) throws IOException {
out.writeLong(upFlow);
out.writeLong(downFlow);
out.writeLong(sumFlow);
}
/**
 * Deserialization method
 * @param in the data source provided by the framework
 * @throws IOException
 */
@Override
public void readFields(DataInput in) throws IOException {
upFlow = in.readLong();
downFlow = in.readLong();
sumFlow = in.readLong();
}
}
② The FlowMapper class
package com.atguigu.flow;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
// The first two type parameters are the input key/value types; the last two are the output key/value types
public class FlowMapper extends Mapper<LongWritable, Text, Text, FlowBean> {
private Text phone = new Text();
private FlowBean flow = new FlowBean();
@Override
protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
// Assumes tab-separated input; the traffic columns are indexed from the end
// of the line because the number of middle fields can vary per record
String[] fields = value.toString().split("\t");
phone.set(fields[1]); // field 1 is the phone number (the output key)
long upFlow = Long.parseLong(fields[fields.length-3]); // upstream traffic
long downFlow = Long.parseLong(fields[fields.length-2]); // downstream traffic
flow.set(upFlow,downFlow);
context.write(phone,flow);
}
}
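A note on this design: phone and flow are created once and reused across map() calls. This is safe because context.write() serializes the key and value into the output buffer immediately rather than holding references, and reuse avoids allocating two new objects for every input record.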
③ The FlowReducer class
package com.atguigu.flow;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
// The first two type parameters are the input key/value types; the last two are the output types
public class FlowReducer extends Reducer<Text, FlowBean, Text, FlowBean> {
private FlowBean sumFlow = new FlowBean();
@Override
protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
long sumUpFlow = 0;
long sumDownFlow = 0;
// All traffic records for one phone number arrive in a single reduce call; sum them
for (FlowBean value : values) {
sumUpFlow += value.getUpFlow();
sumDownFlow += value.getDownFlow();
}
sumFlow.set(sumUpFlow,sumDownFlow);
context.write(key,sumFlow);
}
}
④ The FlowDriver class
package com.atguigu.flow;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
public class FlowDriver {
public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
// 1. Get a Job instance
Job job = Job.getInstance(new Configuration());
// 2. Set the classpath (jar) via the driver class
job.setJarByClass(FlowDriver.class);
// 3. Set the Mapper and Reducer
job.setMapperClass(FlowMapper.class);
job.setReducerClass(FlowReducer.class);
// 4. Set the map output and final output key/value types
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(FlowBean.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(FlowBean.class);
// 5. Set the input and output paths
FileInputFormat.setInputPaths(job,new Path(args[0]));
FileOutputFormat.setOutputPath(job,new Path(args[1]));
// 6. Submit the job
boolean b = job.waitForCompletion(true);
System.exit(b?0:1);
}
}
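Assuming the job has been packaged into a jar (the jar name and HDFS paths below are placeholders), it can be submitted with the standard hadoop jar command:

hadoop jar flow.jar com.atguigu.flow.FlowDriver /input /output

For the sample record shown earlier, the output file (part-r-00000) would then contain the line 13560436666, 1116, 954, 2070 separated by tabs, since TextOutputFormat joins the key and the bean's toString() with a tab by default.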