Multi-table join query using a custom bean that implements the serialization interface (Writable)

Serialization:

Data held in one server's memory is serialized to disk, copied over the network to another server's disk, and finally loaded back into memory through deserialization.

Compared with Java's built-in serialization, Hadoop serialization takes less storage space and transfers faster.

Implementing the serialization interface (Writable) on a custom bean

  1. The class must implement the Writable interface.
  2. Override the serialization method write().
  3. Override the deserialization method readFields().
  4. The order of reads in deserialization must exactly match the order of writes in serialization (see the write()/readFields() sketch after the OrderCustomer class below).
  5. Optionally override toString() so the result appears in the output file in readable form.
  6. Initialize every field to the empty string "" by default.

Multi-table join query:

Join the customer table and the order table on userId.

order table:

orderId  userId  goodId  buyNum
1        1       123     1
2        2       456     2
3        3       789     5

customer table:

userId  userName  age
1       zhangsan  40
2       lisi      30
3       wangwu    20

OrderCustomer:

import org.apache.hadoop.io.Writable;

public class OrderCustomer implements Writable {
    private String userId = "";
    private String userName = "";
    private String age = "";
    private String orderId = "";
    private String goodId = "";
    private String buyNum = "";
    private String flag = "";   //marks the source table: "1" = order record, "0" = customer record
}
//Getters/setters, a no-arg constructor, and toString() also need to be added,
//as do the Writable methods write() and readFields() sketched below.
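Since OrderCustomer implements Writable, the write() and readFields() methods from the checklist above must also be provided inside the class (with java.io.DataInput, java.io.DataOutput, and java.io.IOException imported). A minimal sketch, assuming every field is stored with writeUTF(); readFields() reads them back in exactly the same order they were written:

    @Override
    public void write(DataOutput out) throws IOException {
        //serialize: write every field
        out.writeUTF(userId);
        out.writeUTF(userName);
        out.writeUTF(age);
        out.writeUTF(orderId);
        out.writeUTF(goodId);
        out.writeUTF(buyNum);
        out.writeUTF(flag);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        //deserialize: read the fields back in the same order
        userId = in.readUTF();
        userName = in.readUTF();
        age = in.readUTF();
        orderId = in.readUTF();
        goodId = in.readUTF();
        buyNum = in.readUTF();
        flag = in.readUTF();
    }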

JoinMapper:

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

import java.io.IOException;

public class JoinMapper extends Mapper<LongWritable, Text, Text, OrderCustomer> {
    //map() is called once per input line, so a single OrderCustomer instance is reused instead of creating a new one every time
    private OrderCustomer oc = new OrderCustomer();
    private String fileName;

    @Override
    //called once before map(); used to obtain information such as the name of the file this split belongs to
    protected void setup(Mapper<LongWritable, Text, Text, OrderCustomer>.Context context) throws IOException, InterruptedException {
        FileSplit fs = (FileSplit) context.getInputSplit();
        fileName = fs.getPath().getName();
    }

    @Override
    protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, Text, OrderCustomer>.Context context) throws IOException, InterruptedException {
        String[] split = value.toString().split(",");
        if (fileName.startsWith("order")){
            oc.setOrderId(split[0]);
            oc.setUserId(split[1]);
            oc.setGoodId(split[2]);
            oc.setBuyNum(split[3]);
            oc.setFlag("1");
        }else {
            oc.setUserId(split[0]);
            oc.setUserName(split[1]);
            oc.setAge(split[2]);
            oc.setFlag("0");
        }
        context.write(new Text(oc.getUserId()),oc);
    }
}
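JoinMapper splits every line on commas and tells the two tables apart by the file-name prefix, so the input directory is assumed to hold two comma-separated files along these lines (the file names here are assumptions; only the order file has to start with "order"):

order.txt
1,1,123,1
2,2,456,2
3,3,789,5

customer.txt
1,zhangsan,40
2,lisi,30
3,wangwu,20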

JoinReducer:

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;

public class JoinReducer extends Reducer<Text, OrderCustomer, OrderCustomer, NullWritable> {
    private OrderCustomer fillOc = new OrderCustomer();

    @Override
    //NullWritable as the output value: the key's toString() already contains every field, so no separate value is needed
    protected void reduce(Text key, Iterable<OrderCustomer> values, Reducer<Text, OrderCustomer, OrderCustomer, NullWritable>.Context context) throws IOException, InterruptedException {
        for (OrderCustomer oc : values) {
            if (oc.getFlag().equals("0")) {
                fillOc.setUserId(oc.getUserId());
                fillOc.setUserName(oc.getUserName());
                fillOc.setAge(oc.getAge());
            } else {
                fillOc.setOrderId(oc.getOrderId());
                fillOc.setGoodId(oc.getGoodId());
                fillOc.setBuyNum(oc.getBuyNum());
            }
        }
        context.write(fillOc,NullWritable.get());
    }
}
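Note that this reducer reuses one fillOc, so if a user had several orders only the last one would survive the loop. To cover that case the orders can be buffered first and the customer fields filled in afterwards. A rough sketch (java.util.List/ArrayList imported; each order is copied because Hadoop reuses the value object while iterating):

    @Override
    protected void reduce(Text key, Iterable<OrderCustomer> values, Context context) throws IOException, InterruptedException {
        List<OrderCustomer> orders = new ArrayList<>();
        OrderCustomer customer = new OrderCustomer();
        for (OrderCustomer oc : values) {
            if ("0".equals(oc.getFlag())) {
                //customer record: remember the user fields
                customer.setUserId(oc.getUserId());
                customer.setUserName(oc.getUserName());
                customer.setAge(oc.getAge());
            } else {
                //order record: copy it, because the framework reuses oc on the next iteration
                OrderCustomer order = new OrderCustomer();
                order.setOrderId(oc.getOrderId());
                order.setGoodId(oc.getGoodId());
                order.setBuyNum(oc.getBuyNum());
                orders.add(order);
            }
        }
        //emit one joined line per order
        for (OrderCustomer order : orders) {
            order.setUserId(customer.getUserId());
            order.setUserName(customer.getUserName());
            order.setAge(customer.getAge());
            context.write(order, NullWritable.get());
        }
    }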

JoinJob:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;

public class JoinJob {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        job.setJarByClass(JoinJob.class);
        job.setMapperClass(JoinMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(OrderCustomer.class);
        job.setReducerClass(JoinReducer.class);
        job.setOutputKeyClass(OrderCustomer.class);
        job.setOutputValueClass(NullWritable.class);
        Path path = new Path("file:///e:/temp1/output");
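        //if the output directory already exists, delete it so the job can be rerun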
        if (path.getFileSystem(conf).exists(path)) {
            path.getFileSystem(conf).delete(path, true);
        }
        FileInputFormat.setInputPaths(job, new Path("file:///e:/temp1/read1"));
        FileOutputFormat.setOutputPath(job, path);
        job.waitForCompletion(true);
    }
}

Result:

[screenshot of the joined output file]

Alternatively, userId can be kept as the key and OrderCustomer as the value passed from map to reduce and written out that way, which gives a result like the following:

[screenshot of the alternative output]
