Most of us have already met serialization in Java, so what is different about it in Hadoop?
In Java, serialization turns an object into a stream of bytes, so that any machine can recover the original data by applying the same deserialization logic. However, when a class implements the Serializable interface, the serialized output also carries extra validation information.
Java serialization is a heavyweight framework (Serializable): a serialized object is accompanied by a lot of additional information (various checks, headers, the inheritance hierarchy, and so on), which makes it inefficient to transmit over the network. Hadoop therefore developed its own serialization mechanism (Writable), which uses a lightweight validation scheme and improves runtime efficiency.
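To make the size difference concrete, here is a minimal sketch (not from the original demo; the class name SerializationSizeDemo is mine, and it assumes hadoop-common is on the classpath) that serializes the same long value once with Java's ObjectOutputStream and once with a Hadoop LongWritable, then prints the resulting sizes. The Writable output is just the 8 payload bytes, while the Java form also carries the stream header and class descriptor.

import org.apache.hadoop.io.LongWritable;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.ObjectOutputStream;

public class SerializationSizeDemo {
    public static void main(String[] args) throws Exception {
        // Java serialization: writes a stream header, class descriptor, etc.
        ByteArrayOutputStream javaBytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(javaBytes)) {
            oos.writeObject(Long.valueOf(123456789L));
        }

        // Hadoop Writable: writes only the 8 bytes of the long itself
        ByteArrayOutputStream writableBytes = new ByteArrayOutputStream();
        try (DataOutputStream dos = new DataOutputStream(writableBytes)) {
            new LongWritable(123456789L).write(dos);
        }

        System.out.println("Java Serializable: " + javaBytes.size() + " bytes");
        System.out.println("Hadoop Writable:   " + writableBytes.size() + " bytes");
    }
}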
Advantages of Hadoop serialization:
(1) Compact: uses storage space efficiently.
(2) Fast: little extra overhead when reading and writing data.
(3) Interoperable: supports interaction across multiple languages.
Java type | Hadoop Writable type
Boolean | BooleanWritable
Byte | ByteWritable
Int | IntWritable
Float | FloatWritable
Long | LongWritable
Double | DoubleWritable
String | Text
Map | MapWritable
Array | ArrayWritable
Null | NullWritable
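These wrapper types simply box the corresponding Java value and expose get/set style accessors. A small illustrative snippet (not part of the demo below; the class name WritableWrapperDemo is mine):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class WritableWrapperDemo {
    public static void main(String[] args) {
        // wrap plain Java values in their Writable counterparts
        IntWritable count = new IntWritable(1);
        Text word = new Text("hadoop");

        // unwrap them back into Java values
        int n = count.get();
        String s = word.toString();

        System.out.println(s + " -> " + n);
    }
}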
The types listed above can all be serialized directly. But what about a custom bean class that we define ourselves?
Just as a custom bean in plain Java needs to implement the Serializable interface, under the Hadoop framework it needs to implement Hadoop's serialization interface, Writable.
To implement serialization in Hadoop, we need to complete the following steps:
(1) The class must implement the Writable interface.
(2) Deserialization creates the object by reflection through the no-argument constructor, so the class must have a no-argument constructor.
(3) Override the serialization method write().
(4) Override the deserialization method readFields().
(5) Note that the deserialization order must be exactly the same as the serialization order.
(6) To display the result in the output file, override toString(); separating fields with "\t" makes later processing easier.
(7) If the custom bean is to be transmitted as a key, it must also implement the Comparable interface, because the Shuffle phase of the MapReduce framework requires keys to be sortable (see the sketch after this list).
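As mentioned in step (7), a bean used as a key also needs to be comparable. A minimal sketch of what that could look like, using a hypothetical FlowBeanKey that implements Hadoop's WritableComparable (which combines Writable and Comparable). The demo below only uses the bean as a value, so it does not need this:

import org.apache.hadoop.io.WritableComparable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class FlowBeanKey implements WritableComparable<FlowBeanKey> {
    private long sumFlow; // the field we sort on; other fields omitted for brevity

    public FlowBeanKey() { } // no-argument constructor required for reflection

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(sumFlow);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        sumFlow = in.readLong(); // same order as write()
    }

    @Override
    public int compareTo(FlowBeanKey o) {
        // sort by total flow in descending order; Shuffle uses this to order the keys
        return Long.compare(o.sumFlow, this.sumFlow);
    }
}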
Serialization demo: count the upstream traffic, downstream traffic, and total traffic for each phone number.
From this requirement we can see that the key should be the phone number, and what we need from each record are the upstream and downstream traffic.
Since the total traffic is not given in the data, we have to compute it ourselves when building the bean. The value is therefore the bean, which only needs to hold these three traffic fields.
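For illustration only, assuming the input is tab-separated with the phone number in the second field and the upstream/downstream traffic in the third-to-last and second-to-last fields (which is exactly what the Mapper below relies on), an input line might look like this (the values and URL are made up):

1   13736230513   192.196.100.1   www.example.com   2481   24681   200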
Bean
package com.zc.mapreduce.Writable;
import org.apache.hadoop.io.Writable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
/**
 * @program: MapReduce
 * @description: implements Writable and overrides serialization/deserialization,
 * the no-argument constructor, and toString()
 * @author: ZhengCheng
 * @create: 2021/10/29-15:51
 **/
public class FlowBean implements Writable {
private long upFlow;//upstream traffic
private long downFlow;//downstream traffic
private long sumFlow;//total traffic
public FlowBean(long upFlow, long downFlow) {
this.upFlow = upFlow;
this.downFlow = downFlow;
this.sumFlow = upFlow + downFlow;
}
@Override
public String toString() {
return upFlow + "\t" + downFlow + "\t" + sumFlow ;
}
public long getUpFlow() {
return upFlow;
}
public void setUpFlow(long upFlow) {
this.upFlow = upFlow;
}
public long getDownFlow() {
return downFlow;
}
public void setDownFlow(long downFlow) {
this.downFlow = downFlow;
}
public long getSumFlow() {
return sumFlow;
}
public void setSumFlow() {
this.sumFlow = this.downFlow + this.upFlow;
}
public FlowBean() {
}
@Override
public void write(DataOutput dataOutput) throws IOException {
dataOutput.writeLong(upFlow);
dataOutput.writeLong(downFlow);
dataOutput.writeLong(sumFlow);
}
@Override
public void readFields(DataInput dataInput) throws IOException {
this.upFlow = dataInput.readLong();
this.downFlow= dataInput.readLong();
this.sumFlow = dataInput.readLong();
}
}
Mapper
package com.zc.mapreduce.Writable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
/**
 * @program: MapReduce
 * @description: Mapper - parses each line and emits <phone number, FlowBean>
 * @author: ZhengCheng
 * @create: 2021/10/29-15:43
 **/
public class phoneMapper extends Mapper <LongWritable, Text,Text,FlowBean>{
private Text text = new Text();
private FlowBean fb = new FlowBean();
@Override
protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
//get one line of input
String string = value.toString();
String[] split = string.split("\t");//split by tab; we need the phone number and the flow fields
//some records have empty fields in the middle, so the flow fields are indexed from the end of the array
//write out the key-value pair
text.set(split[1]);//the phone number is the second field
//fb = new FlowBean(Long.parseLong(split[length-3]),Long.parseLong(split[length-2]));
fb.setUpFlow(Long.parseLong(split[split.length-3]));
fb.setDownFlow(Long.parseLong(split[split.length-2]));
fb.setSumFlow();
context.write(text,fb);
}
}
Reducer
package com.zc.mapreduce.Writable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
/**
 * @program: MapReduce
 * @description: Reducer - sums the up/down/total flow for each phone number
 * @author: ZhengCheng
 * @create: 2021/10/29-15:43
 **/
public class phoneReducer extends Reducer<Text,FlowBean, Text,FlowBean> {
private FlowBean fb =new FlowBean();
@Override
protected void reduce(Text key, Iterable<FlowBean> values, Reducer<Text, FlowBean, Text, FlowBean>.Context context) throws IOException, InterruptedException {
//accumulate the up/down flow of every FlowBean belonging to this phone number
long upflow = 0L;
long downflow = 0L;
for (FlowBean value : values) {
upflow += value.getUpFlow();
downflow += value.getDownFlow();
}
// fb = new FlowBean(upflow,downflow); reusing one object saves resources
fb.setUpFlow(upflow);
fb.setDownFlow(downflow);
fb.setSumFlow();
context.write(key,fb);
}
}
Driver
package com.zc.mapreduce.Writable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
/**
 * @program: MapReduce
 * @description: Driver - configures and submits the flow-statistics job
 * @author: ZhengCheng
 * @create: 2021/10/29-15:43
 **/
public class phoneDriver {
public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
//initialize the configuration and create the job
Configuration conf = new Configuration();
Job job = Job.getInstance(conf);
//set the jar via the driver class
job.setJarByClass(phoneDriver.class);
//set the matching Mapper and Reducer classes
job.setMapperClass(phoneMapper.class);
job.setReducerClass(phoneReducer.class);
//set the Mapper output key/value types
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(FlowBean.class);
//set the final (Reducer) output key/value types
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(FlowBean.class);
//set the input and output paths
//FileInputFormat.setInputPaths(job, new Path("E:\\HDFS\\phone"));
//FileOutputFormat.setOutputPath(job, new Path("E:\\HDFSout\\phone"));
FileInputFormat.setInputPaths(job,new Path(args[0]));
FileOutputFormat.setOutputPath(job,new Path(args[1]));
//submit the job and wait for it to finish
boolean result = job.waitForCompletion(true);
System.exit(result ? 0 : 1);
}
}
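Since the Driver reads the input and output paths from args, the packaged job could be run roughly like this (the jar name and the HDFS paths here are placeholders, not from the original post):

hadoop jar mapreduce-demo.jar com.zc.mapreduce.Writable.phoneDriver /input/phone /output/phone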
Done! Be careful to import the correct packages!