Serialization:
Serialization writes a server's in-memory data out to disk; the file is then copied over to another server's disk, and deserialization loads it back into memory there.
Hadoop's serialization (Writable) takes less storage space and transfers faster than Java's native serialization, which carries extra class metadata along with the data.
Implementing the serialization interface (Writable) in a custom bean (a minimal sketch follows this list):
- The class must implement the Writable interface.
- Override the serialization method write().
- Override the deserialization method readFields().
- readFields() must read the fields in exactly the same order that write() writes them.
- Optionally override toString() so the results appear in the output file in a readable form.
- Give String fields a default value of "" so serialization never encounters null.
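A minimal sketch of this contract, using a made-up bean (the class name StuBean and its two fields are illustrative only):
import org.apache.hadoop.io.Writable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
public class StuBean implements Writable {
    private String name = "";  // default "" so write() never serializes null
    private long score;
    @Override
    public void write(DataOutput out) throws IOException {
        // serialization: name first, then score
        out.writeUTF(name);
        out.writeLong(score);
    }
    @Override
    public void readFields(DataInput in) throws IOException {
        // deserialization reads in the same order: name, then score
        name = in.readUTF();
        score = in.readLong();
    }
}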
Multi-table join query:
Join the customer table and the order table on userId.
order table:
OrderId userId goodId buyNum
1       1      123    1
2       2      456    2
3       3      789    5
customer table:
userId userName age
1      zhangsan 40
2      lisi     30
3      wangwu   20
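The mapper below splits each line on ",", so the input files are assumed to be comma-separated; the file names order.txt and customer.txt are assumptions here (the code only requires the order file's name to start with "order"):
order.txt:
1,1,123,1
2,2,456,2
3,3,789,5
customer.txt:
1,zhangsan,40
2,lisi,30
3,wangwu,20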
OrderCustomer:
public class OrderCustomer implements Writable {
    private String userId = "";
    private String userName = "";
    private String age = "";
    private String orderId = "";
    private String goodId = "";
    private String buyNum = "";
    private String flag = "";  // "1" = order record, "0" = customer record
}
// Also needed: getters/setters, a no-arg constructor, toString(), and the
// write()/readFields() methods required by Writable (sketched below).
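A sketch of the two Writable methods for OrderCustomer (they go inside the class, with java.io.DataInput, java.io.DataOutput, and java.io.IOException imported); the essential point is that readFields() reads the seven fields in exactly the order write() writes them:
@Override
public void write(DataOutput out) throws IOException {
    out.writeUTF(userId);
    out.writeUTF(userName);
    out.writeUTF(age);
    out.writeUTF(orderId);
    out.writeUTF(goodId);
    out.writeUTF(buyNum);
    out.writeUTF(flag);
}
@Override
public void readFields(DataInput in) throws IOException {
    // same field order as write()
    userId = in.readUTF();
    userName = in.readUTF();
    age = in.readUTF();
    orderId = in.readUTF();
    goodId = in.readUTF();
    buyNum = in.readUTF();
    flag = in.readUTF();
}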
JoinMapper:
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import java.io.IOException;
public class JoinMapper extends Mapper<LongWritable, Text, Text, OrderCustomer> {
    // map() is called once per input line, so reuse a single OrderCustomer
    // instance instead of allocating a new one on every call
    private OrderCustomer oc = new OrderCustomer();
    private String fileName;

    // setup() runs once before map(); used here to record the input file name
    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        FileSplit fs = (FileSplit) context.getInputSplit();
        fileName = fs.getPath().getName();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String[] split = value.toString().split(",");
        if (fileName.startsWith("order")) {
            // order line: OrderId,userId,goodId,buyNum
            oc.setOrderId(split[0]);
            oc.setUserId(split[1]);
            oc.setGoodId(split[2]);
            oc.setBuyNum(split[3]);
            oc.setFlag("1");
        } else {
            // customer line: userId,userName,age
            oc.setUserId(split[0]);
            oc.setUserName(split[1]);
            oc.setAge(split[2]);
            oc.setFlag("0");
        }
        context.write(new Text(oc.getUserId()), oc);
    }
}
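For the sample data, each input line becomes one (userId, OrderCustomer) pair. Written informally, user 1 contributes:
(1, {orderId=1, userId=1, goodId=123, buyNum=1, flag=1})    from the order file
(1, {userId=1, userName=zhangsan, age=40, flag=0})          from the customer file
The shuffle then groups both records under key 1, which is what lets the reducer merge them.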
JoinReducer:
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
public class JoinReducer extends Reducer<Text, OrderCustomer, OrderCustomer, NullWritable> {
    // reused across reduce() calls; its fields are fully overwritten each call
    // because every userId in the sample data appears in both tables
    private OrderCustomer fillOc = new OrderCustomer();

    // the output value is NullWritable: the output key (OrderCustomer) has
    // toString() overridden, so the key alone carries the whole joined record
    @Override
    protected void reduce(Text key, Iterable<OrderCustomer> values, Context context) throws IOException, InterruptedException {
        for (OrderCustomer oc : values) {
            // Hadoop reuses the object behind this iterator, so copy the
            // fields out immediately rather than keeping a reference to oc
            if (oc.getFlag().equals("0")) {
                // customer record
                fillOc.setUserId(oc.getUserId());
                fillOc.setUserName(oc.getUserName());
                fillOc.setAge(oc.getAge());
            } else {
                // order record (assumes at most one order per user;
                // with several orders only the last one would survive)
                fillOc.setOrderId(oc.getOrderId());
                fillOc.setGoodId(oc.getGoodId());
                fillOc.setBuyNum(oc.getBuyNum());
            }
        }
        context.write(fillOc, NullWritable.get());
    }
}
JoinJob:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
public class JoinJob {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        job.setJarByClass(JoinJob.class);
        job.setMapperClass(JoinMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(OrderCustomer.class);
        job.setReducerClass(JoinReducer.class);
        job.setOutputKeyClass(OrderCustomer.class);
        job.setOutputValueClass(NullWritable.class);
        // delete the output directory if it already exists, otherwise
        // the job fails with FileAlreadyExistsException
        Path path = new Path("file:///e:/temp1/output");
        if (path.getFileSystem(conf).exists(path)) {
            path.getFileSystem(conf).delete(path, true);
        }
        FileInputFormat.setInputPaths(job, new Path("file:///e:/temp1/read1"));
        FileOutputFormat.setOutputPath(job, path);
        job.waitForCompletion(true);
    }
}
Result:
With userId as the map output key and OrderCustomer as the value passed from map to reduce, both records for each user arrive at the same reduce() call, which merges them into one joined line per userId.
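Assuming toString() prints the fields space-separated in declaration order (userId, userName, age, orderId, goodId, buyNum) and omits flag, the joined output for the sample data would look like:
1 zhangsan 40 1 123 1
2 lisi 30 2 456 2
3 wangwu 20 3 789 5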