Serialization and Deserialization
1. Concepts
Serialization is the process of converting a structured object into a byte stream.
Deserialization is the reverse process: converting a byte stream back into a structured object.
Java serialization (java.io.Serializable)
When an object needs to be passed between processes or persisted, it must be serialized into a byte stream; conversely, a byte stream received over the network or read from disk must be deserialized back into an object.
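The round trip described above can be sketched in plain Java. The `User` class and helper method names below are illustrative, not from the original text; the key point is that `ObjectOutputStream` performs serialization and `ObjectInputStream` performs deserialization:

```java
import java.io.*;

public class JavaSerializationDemo {
    // A plain value object marked Serializable so it can become a byte stream
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        final int age;
        User(String name, int age) { this.name = name; this.age = age; }
    }

    // Serialization: object -> byte stream
    static byte[] serialize(User u) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(buf)) {
            oos.writeObject(u);
        }
        return buf.toByteArray();
    }

    // Deserialization: byte stream -> object
    static User deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (User) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        User restored = deserialize(serialize(new User("alice", 30)));
        System.out.println(restored.name + " " + restored.age);
    }
}
```

The restored object is a new instance with the same field values, which is exactly what crossing a process or disk boundary requires.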
2. Characteristics of Hadoop Serialization
1. Compact: uses storage space efficiently;
2. Fast: little extra overhead when reading and writing data;
3. Extensible: can transparently read data in older formats;
4. Interoperable: supports interaction across multiple languages;
3. Purposes of Hadoop Serialization
Serialization serves two main purposes in a distributed environment: interprocess communication and permanent storage.
4. Hadoop Serialization vs. Java Serialization
Java's serialization (Serializable) is a heavyweight framework: a serialized object carries a lot of extra information (checksums, headers, the class inheritance hierarchy, and so on), which makes it inefficient to transmit over a network. Hadoop therefore developed its own serialization mechanism, Writable, which is lean and efficient: instead of shipping an object's full parent-child class structure the way Java does, it transmits only the field values that are actually needed, greatly reducing network overhead.
Writable is Hadoop's serialization format; Hadoop defines a Writable interface for it, and a class supports serialization simply by implementing that interface.
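The overhead difference is easy to measure with standard Java alone. The sketch below (class and method names are my own, for illustration) writes the same int once with `ObjectOutputStream` (Java serialization, with its header and class descriptor) and once with `DataOutputStream` (the raw binary encoding that a Writable's `write()` method uses):

```java
import java.io.*;

public class SerializationOverhead {
    // Size of a boxed int written with Java's built-in serialization
    static int javaSerializedSize(int value) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(buf)) {
            oos.writeObject(Integer.valueOf(value));
        }
        return buf.size();
    }

    // Size of the same int written as raw binary, as Writable.write() does
    static int rawSize(int value) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream dos = new DataOutputStream(buf)) {
            dos.writeInt(value);
        }
        return buf.size();
    }

    public static void main(String[] args) throws IOException {
        System.out.println("ObjectOutputStream: " + javaSerializedSize(42) + " bytes");
        System.out.println("DataOutputStream:   " + rawSize(42) + " bytes");
    }
}
```

The raw encoding is exactly 4 bytes, while the Java-serialized form is many times larger; multiplied across millions of shuffled records, this is the gap the Writable design targets.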
Sorting
Writable has a subinterface, WritableComparable, which supports both serialization and key comparison. Sorting can therefore be implemented by having a custom key class implement WritableComparable.
Code Implementation
1. The PairWritable class
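The comparison logic such a key needs is just compound ordering: compare the first field, and fall back to the second field on a tie. Stripped of the Hadoop types, it can be sketched with plain `Comparable` (the `Pair` class name here is illustrative; the full Hadoop version follows below):

```java
public class Pair implements Comparable<Pair> {
    final String first;
    final int second;

    Pair(String first, int second) {
        this.first = first;
        this.second = second;
    }

    @Override
    public int compareTo(Pair o) {
        // Primary ordering: the first field
        int comp = this.first.compareTo(o.first);
        if (comp != 0) {
            return comp;
        }
        // Secondary ordering: the second field, used only on a tie
        return Integer.compare(this.second, o.second);
    }
}
```

A WritableComparable key uses exactly this `compareTo` contract; the only additions are the `write`/`readFields` methods for serialization.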
import org.apache.hadoop.io.WritableComparable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
public class PairWritable implements WritableComparable<PairWritable> {
    // Composite key: the first part is column one, the second part is column two
    private String first;
    private int second;

    @Override
    public int compareTo(PairWritable o) {
        // The framework calls this to compare the current key with another key;
        // repeated pairwise comparisons across all keys produce the sorted order.
        int comp = this.first.compareTo(o.first);
        if (comp != 0) {
            return comp;
        } else {
            // If the first field is equal, compare the second field
            return Integer.compare(this.second, o.second);
        }
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(first);
        out.writeInt(second);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.first = in.readUTF();
        this.second = in.readInt();
    }

    // Convenience method to set both fields at once
    public void set(String first, int second) {
        this.first = first;
        this.second = second;
    }

    public String getFirst() {
        return first;
    }

    public void setFirst(String first) {
        this.first = first;
    }

    public int getSecond() {
        return second;
    }

    public void setSecond(int second) {
        this.second = second;
    }

    @Override
    public String toString() {
        return "PairWritable{" +
                "first='" + first + '\'' +
                ", second=" + second +
                '}';
    }
}
2. The Mapper class
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
public class SortMapper extends Mapper<LongWritable, Text, PairWritable, IntWritable> {
    // Reuse the output key/value objects instead of allocating new ones per record
    private PairWritable mapOutKey = new PairWritable();
    private IntWritable mapOutValue = new IntWritable();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String[] data = value.toString().split("\t");
        mapOutKey.set(data[0], Integer.parseInt(data[1]));
        mapOutValue.set(Integer.parseInt(data[1]));
        context.write(mapOutKey, mapOutValue);
    }
}
3. The Reducer class
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
public class SortReducer extends Reducer<PairWritable, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(PairWritable key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        for (IntWritable value : values) {
            context.write(new Text(key.getFirst()), value);
        }
    }
}
4. The driver class
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
public class SecondarySort extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = super.getConf();
        conf.set("mapreduce.framework.name", "local");

        Job job = Job.getInstance(conf, SecondarySort.class.getSimpleName());
        job.setJarByClass(SecondarySort.class);

        job.setInputFormatClass(TextInputFormat.class);
        TextInputFormat.addInputPath(job, new Path("file:///F:\\3term\\task20191120\\src\\main\\java\\com\\czxy\\input"));
        TextOutputFormat.setOutputPath(job, new Path("file:///F:\\3term\\task20191120\\src\\main\\java\\com\\czxy\\onput"));

        job.setMapperClass(SortMapper.class);
        job.setMapOutputKeyClass(PairWritable.class);
        job.setMapOutputValueClass(IntWritable.class);

        job.setReducerClass(SortReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        boolean b = job.waitForCompletion(true);
        return b ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        int status = ToolRunner.run(conf, new SecondarySort(), args);
        System.exit(status);
    }
}