05-MapReduce (2) Serialization

Contents

1. Serialization Overview

2. Implementing the Serialization Interface (Writable) in a Custom Bean

3. Serialization Case Study

(1) Requirements

(2) Requirement Analysis

(3) Writing the MapReduce Program

1) Create the package

2) Write FlowBean.java

3) Write FlowCountMapper.java

4) Write FlowCountReducer.java

5) Write FlowCountDriver.java

6) Test the program

7) Run distributed on the cluster


1. Serialization Overview

2. Implementing the Serialization Interface (Writable) in a Custom Bean

To make a bean object serializable, follow these 7 steps:

(1) Implement the Writable interface.

(2) Deserialization calls the no-argument constructor via reflection, so the class must provide one:

public FlowBean() {
    super();
}

(3) Override the serialization method:

@Override
public void write(DataOutput out) throws IOException {
    out.writeLong(upFlow);
    out.writeLong(downFlow);
    out.writeLong(sumFlow);
}

(4) Override the deserialization method:

@Override
public void readFields(DataInput in) throws IOException {
    upFlow = in.readLong();
    downFlow = in.readLong();
    sumFlow = in.readLong();
}

(5) Note that the field order in deserialization must exactly match the order used in serialization.

(6) To make the results readable in the output file, override toString(); separating the fields with \t makes them easy to process later.
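
For example, the FlowBean class in the case study below overrides it as:

@Override
public String toString() {
    return upFlow + "\t" + downFlow + "\t" + sumFlow;
}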

(7) If the custom bean needs to be transmitted as a key, it must also implement the Comparable interface (in Hadoop this is typically done by implementing WritableComparable), because the Shuffle phase of the MapReduce framework requires keys to be sortable. A detailed example appears in a later case study.

@Override
public int compareTo(FlowBean o) {
    // descending order, largest first
    return this.sumFlow > o.getSumFlow() ? -1 : 1;
}
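
One caveat: the comparison above never returns 0, even when two totals are equal, which strictly speaking breaks the compareTo contract. A tighter descending comparison could look like this:

@Override
public int compareTo(FlowBean o) {
    // descending order; returns 0 when the total flows are equal
    return Long.compare(o.getSumFlow(), this.sumFlow);
}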

3. Serialization Case Study

(1) Requirements

Requirement: for each phone number, compute the total upstream traffic, total downstream traffic, and total traffic.

Input data:

Input data format:

id  phone number  network IP  upstream traffic  downstream traffic  network status code

Expected output data format:

phone number  upstream traffic  downstream traffic  total traffic

(2) Requirement Analysis
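
As an illustration, consider the sample record quoted in the Mapper below:

1	13736230513	192.196.100.1	www.atguigu.com	2481	24681	200

In the Map phase, each map() call reads one line, takes the phone number as the key, and wraps the upstream and downstream traffic into a FlowBean as the value, so this record produces key = 13736230513, value = (2481, 24681). In the Reduce phase, all values for the same phone number are summed and the total is computed; if this number appears only once in the input, the final output line is 13736230513	2481	24681	27162 (since 2481 + 24681 = 27162).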

(3) Writing the MapReduce Program

1) Create the package

Under src/main/java, create a new package: com.wolf.mr.flowsum

2) Write FlowBean.java

In this package, create a FlowBean.java class with the following content:

package com.wolf.mr.flowsum;

import org.apache.hadoop.io.Writable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class FlowBean implements Writable {

    private long upFlow;   // upstream traffic
    private long downFlow; // downstream traffic
    private long sumFlow;  // total traffic

    // no-argument constructor, needed for reflection during deserialization
    public FlowBean() {
        super();
    }

    public FlowBean(long upFlow, long downFlow) {
        super();
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        sumFlow = upFlow + downFlow;
    }

    // serialization method
    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeLong(upFlow);
        dataOutput.writeLong(downFlow);
        dataOutput.writeLong(sumFlow);
    }

    // deserialization method
    @Override
    public void readFields(DataInput dataInput) throws IOException {
        upFlow = dataInput.readLong();
        downFlow = dataInput.readLong();
        sumFlow = dataInput.readLong();
    }

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }

    public long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(long upFlow) {
        this.upFlow = upFlow;
    }

    public long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(long downFlow) {
        this.downFlow = downFlow;
    }

    public long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(long sumFlow) {
        this.sumFlow = sumFlow;
    }

    public void set(long upFlow, long downFlow) {
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        sumFlow = upFlow + downFlow;
    }
}

3) Write FlowCountMapper.java

In this package, create FlowCountMapper.java with the following content:

package com.wolf.mr.flowsum;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class FlowCountMapper extends Mapper<LongWritable, Text, Text, FlowBean> {

    Text k = new Text();
    FlowBean v = new FlowBean();
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
       
        // 1	13736230513	192.196.100.1	www.atguigu.com	2481	24681	200
        
        // 1. get 1 line
        String line = value.toString();
        // 2. split by \t
        String[] fields = line.split("\t");
        // 3. package obj
        k.set(fields[1]); // tele number
        // note: the traffic fields are indexed from the end of the array
        // (this stays correct even if the number of middle columns varies)
        long upFlow = Long.parseLong(fields[fields.length - 3]);
        long downFlow = Long.parseLong(fields[fields.length - 2]);
        v.setUpFlow(upFlow);
        v.setDownFlow(downFlow);
        // v.set(upFlow,downFlow);
        // 4. write out
        context.write(k,v);
    }
}

4) Write FlowCountReducer.java

In this package, create FlowCountReducer.java with the following content:

package com.wolf.mr.flowsum;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class FlowCountReducer extends Reducer<Text, FlowBean, Text, FlowBean> {
    FlowBean v = new FlowBean();

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
        // input: phone number -> <upFlow, downFlow, sumFlow (not yet set)> beans

        // 1. sum
        long sum_upFlow = 0;
        long sum_downFlow = 0;

        for (FlowBean flowBean : values) {
            sum_upFlow += flowBean.getUpFlow();
            sum_downFlow += flowBean.getDownFlow();
        }
        v.set(sum_upFlow, sum_downFlow);
        // 2. write out
        context.write(key, v);
    }
}

5) Write FlowCountDriver.java

In this package, create FlowCountDriver.java with the following content:

package com.wolf.mr.flowsum;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class FlowCountDriver {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {

        // 1. get job obj
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // 2. set the job jar by locating the driver class
        job.setJarByClass(FlowCountDriver.class);
        // 3.link mapper and reducer
        job.setMapperClass(FlowCountMapper.class);
        job.setReducerClass(FlowCountReducer.class);
        // 4.set mapper's type of key and value
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);
        // 5. set final output type of key and value
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);
        // 6. set input output path
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // 7. submit job
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}

The project directory should now look roughly like this (building on the previous post):

6) Test the program

Set the program arguments to /home/wolf/phone_data.txt /home/wolf/output/out_phonedata and run the program (note that the output directory must not already exist, otherwise the job will fail):

View the results:
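
The result is written to part-r-00000 under the output directory; for this local run it can be viewed with, for example:

cat /home/wolf/output/out_phonedata/part-r-00000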

As shown, the upstream traffic, downstream traffic, and total traffic for each phone number have been computed successfully.

7) Run distributed on the cluster

In pom.xml, change the mainClass to FlowCountDriver (assuming the packaging plugins were already added as described in the previous post):

<mainClass>com.wolf.mr.flowsum.FlowCountDriver</mainClass>
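
For reference, a rough sketch of where this line sits, assuming the maven-assembly-plugin setup from the previous post (only the mainClass line is specific to this post; the rest is a typical configuration and may differ from yours):

<plugin>
    <artifactId>maven-assembly-plugin</artifactId>
    <configuration>
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
        <archive>
            <manifest>
                <mainClass>com.wolf.mr.flowsum.FlowCountDriver</mainClass>
            </manifest>
        </archive>
    </configuration>
</plugin>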

Click the reload button so Maven picks up the change:

Run install:

Copy the generated jar to the local machine:

It can be renamed to fc.jar.

Start the cluster and upload phone_data.txt to HDFS:

hadoop fs -put phone_data.txt /

Run the jar distributed on the cluster:

hadoop jar /home/wolf/fc.jar com.wolf.mr.flowsum.FlowCountDriver /phone_data.txt /output_phone

View the results:

hadoop fs -cat /output_phone/part-r-00000

Everything works as expected.

With custom serialization, we can implement more complex MapReduce jobs that meet real-world needs.
