MapReduce Partitioner: Partitioning and Hands-on Practice

Between the Mapper and the Reducer, MapReduce runs a phase known as the shuffle. The shuffle partitions the key-value pairs emitted by the Mapper; each partition is handed to its own reduce task, and each reduce task writes exactly one output file, so the number of output files equals the number of reduce tasks.
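By default the shuffle routes records with HashPartitioner, which assigns each key to a partition by taking its hash modulo the number of reduce tasks. The stock implementation is essentially:

```java
import org.apache.hadoop.mapreduce.Partitioner;

// Hadoop's default partitioner: hash the key, mask off the sign bit
// so the result is non-negative, then mod by the number of reduce tasks.
public class HashPartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```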

![](https://s1.ax1x.com/2020/10/20/BpIsBD.png)

![](https://s1.ax1x.com/2020/10/20/BpIV0g.png)

![](https://s1.ax1x.com/2020/10/20/BpIoDS.png)

![](https://s1.ax1x.com/2020/10/20/BpTkdg.png)

By writing custom routing logic in a Partitioner, we can control which partition, and therefore which output file, each record lands in, which solves the requirement shown above.

![](https://s1.ax1x.com/2020/10/20/BpT2lt.png)

Code example:

Custom Partitioner:

```java
package com.bean.mr;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class ProvincePartitioner extends Partitioner<Text, FlowBean> {

    @Override
    public int getPartition(Text key, FlowBean value, int numPartitions) {
        // key is the phone number, value is the flow record;
        // route by the first three digits of the phone number
        String prefix = key.toString().substring(0, 3);

        int partition = 4; // everything else falls into the last partition
        if ("136".equals(prefix)) {
            partition = 0;
        } else if ("137".equals(prefix)) {
            partition = 1;
        } else if ("138".equals(prefix)) {
            partition = 2;
        } else if ("139".equals(prefix)) {
            partition = 3;
        }
        return partition;
    }
}
```
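A quick sanity check of the routing logic, as a hypothetical standalone class (the phone numbers are made-up samples, not from the original post):

```java
package com.bean.mr;

import org.apache.hadoop.io.Text;

// Hypothetical standalone check of the partition routing (not part of the job)
public class PartitionerCheck {
    public static void main(String[] args) {
        ProvincePartitioner p = new ProvincePartitioner();
        FlowBean dummy = new FlowBean(); // contents are irrelevant to routing
        System.out.println(p.getPartition(new Text("13736230513"), dummy, 5)); // -> 1
        System.out.println(p.getPartition(new Text("15013685858"), dummy, 5)); // -> 4
    }
}
```

Note that getPartition must return a value in [0, numReduceTasks); since this partitioner can return 0 through 4, the driver below sets the number of reduce tasks to 5.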
Driver class:

```java
package com.bean.mr;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FlowDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        // args = new String[] {"d:/mapreduceinput/input2", "d:/mapreduceoutput/output2"}; // for local runs

        // 1 Get a Job instance
        Job job = Job.getInstance(new Configuration());

        // 2 Set the jar containing this driver class
        job.setJarByClass(FlowDriver.class);

        // 3 Set the Mapper and Reducer classes for this job
        job.setMapperClass(FlowMapper.class);
        job.setReducerClass(FlowReducer.class);

        // 4 Set the key/value types of the Mapper output
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);

        // 5 Set the key/value types of the final output
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);

        // Plug in the custom partitioner; the number of reduce tasks
        // must cover every partition id it can return (0-4, so 5 tasks)
        job.setPartitionerClass(ProvincePartitioner.class);
        job.setNumReduceTasks(5);

        // 6 Set the input and output directories
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // 7 Submit the job configuration and its jar to YARN and wait for completion
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
```
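Packaged as a jar, the job can be submitted with something like the following (the jar name and HDFS paths are placeholders):

```
hadoop jar flow.jar com.bean.mr.FlowDriver /input/flow /output/flow
```

With 5 reduce tasks the output directory will contain five files, part-r-00000 through part-r-00004, one per partition. If the number of reduce tasks is smaller than the largest partition id plus one (but greater than 1), records routed to the missing partitions make the job fail with an illegal-partition error; if it is larger, the extra reduce tasks simply produce empty output files. With exactly one reduce task, the partitioner is ignored and everything goes to a single file.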
Mapper:

```java
package com.bean.mr;

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class FlowMapper extends Mapper<LongWritable, Text, Text, FlowBean> {

    private Text phone = new Text();
    private FlowBean flow = new FlowBean();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

        // split the tab-separated input line
        String[] fields = value.toString().split("\t");

        // the second field is the phone number
        phone.set(fields[1]);

        // the up flow and down flow are the third- and second-to-last fields
        long upFlow = Long.parseLong(fields[fields.length - 3]);
        long downFlow = Long.parseLong(fields[fields.length - 2]);

        flow.set(upFlow, downFlow);
        context.write(phone, flow);
    }
}
```
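For reference, the parsing above assumes tab-separated lines shaped roughly like this (a hypothetical sample record, not from the original post):

```
1	13736230513	192.196.100.1	www.example.com	2481	24681	200
```

Here fields[1] is the phone number, and counting from the end skips the trailing status code to reach the down flow (24681) and up flow (2481).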
Reducer:

```java
package com.bean.mr;

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class FlowReducer extends Reducer<Text, FlowBean, Text, FlowBean> {

    private FlowBean sumFlow = new FlowBean();

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {

        long sum_upFlow = 0;
        long sum_downFlow = 0;

        // 1 Iterate over all beans for this phone number, accumulating
        //   the up flow and down flow separately
        for (FlowBean value : values) {
            sum_upFlow += value.getUpFlow();
            sum_downFlow += value.getDownFlow();
        }

        // 2 Pack the totals into the output bean
        sumFlow.set(sum_upFlow, sum_downFlow);

        // 3 Emit
        context.write(key, sumFlow);
    }
}
```
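The FlowBean class used throughout is not shown in the original post. A minimal sketch that satisfies everything the code above relies on (a no-arg constructor for Hadoop's reflection, set/getUpFlow/getDownFlow, Writable serialization, and a tab-separated toString for the text output) might look like this:

```java
package com.bean.mr;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Minimal sketch of the value bean assumed by the Mapper/Reducer above
public class FlowBean implements Writable {

    private long upFlow;
    private long downFlow;
    private long sumFlow;

    // Hadoop instantiates Writables reflectively, so a no-arg constructor is required
    public FlowBean() {}

    public void set(long upFlow, long downFlow) {
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = upFlow + downFlow;
    }

    public long getUpFlow() { return upFlow; }
    public long getDownFlow() { return downFlow; }

    // serialization order must match deserialization order exactly
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(sumFlow);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        upFlow = in.readLong();
        downFlow = in.readLong();
        sumFlow = in.readLong();
    }

    // TextOutputFormat calls toString() when writing the final value
    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }
}
```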