Partitioning (Partition)

This article describes how to use a Partitioner to partition data in Hadoop MapReduce, covering how the default HashPartitioner works and the steps for writing a custom Partitioner. A case study shows how to partition records by the first three digits of a phone number so that the data is written to different output files, and highlights the importance of setting a matching number of ReduceTasks.


Requirement: send output to different files according to some condition.

Case study: write phone records to different files according to the phone number's home region.

1. The default Partitioner

By default, a record's partition is computed from the key's hashCode modulo the number of ReduceTasks:

public class HashPartitioner<K, V> extends Partitioner<K, V> {
  public int getPartition(K key, V value, int numReduceTasks) {
    // Mask the sign bit so the result is non-negative, then take the modulus
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}
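As a quick standalone illustration (a minimal sketch, not part of the original post; the key strings are invented), the same formula can be evaluated outside Hadoop:

public class HashPartitionDemo {
    public static void main(String[] args) {
        String[] keys = {"13612345678", "13712345678", "13812345678"}; // made-up keys
        int numReduceTasks = 3;
        for (String key : keys) {
            // Same formula as HashPartitioner: mask the sign bit, then take the modulus
            int partition = (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
            System.out.println(key + " -> partition " + partition);
        }
    }
}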

2. Steps for a custom Partitioner

  1. Define a class that extends Partitioner and override the getPartition() method.
  2. Register the custom Partitioner in the Job driver.
  3. Set the number of ReduceTasks to match the number of partitions your getPartition() logic can return (see the driver-side sketch after this list).
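The driver-side wiring for steps 2 and 3 boils down to two calls, shown here in isolation (the case study in section 4 shows them in full context):

job.setPartitionerClass(CustomPartitioner.class); // step 2: register the partitioner
job.setNumReduceTasks(5);                         // step 3: one ReduceTask per partition (0-4 below)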

3. Notes

  1. If the number of ReduceTasks is greater than the number of partitions getPartition() can return, the extra ReduceTasks simply produce empty output files (part-r-000xx).
  2. If 1 < number of ReduceTasks < number of partitions, some records have no ReduceTask to go to and the job fails with an exception.
  3. If the number of ReduceTasks is 1, then no matter how many partition files the MapTask side produces, everything is handed to that single ReduceTask and only one result file, part-r-00000, is produced.
  4. Partition numbers must start at zero and increase consecutively. (The snippet after this list illustrates cases 1-3.)
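With the CustomPartitioner from the case study below, which returns partition numbers 0 through 4 (five partitions), the cases above play out as follows (a sketch; the exact failure message can vary across Hadoop versions):

job.setNumReduceTasks(5); // exact match: part-r-00000 .. part-r-00004
job.setNumReduceTasks(6); // runs, but part-r-00005 comes out empty
job.setNumReduceTasks(3); // fails at runtime with an "Illegal partition" error
job.setNumReduceTasks(1); // partitioner is effectively bypassed; single part-r-00000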

4. Case study

FlowBean

package com.hpu.hadoop.partitioner;

import org.apache.hadoop.io.Writable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class FlowBean implements Writable {
    private Integer upFlow;
    private Integer downFlow;
    private Integer sumFlow;

    // No-arg constructor, required so the framework can instantiate the bean via reflection
    public FlowBean(){}

    public Integer getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(Integer upFlow) {
        this.upFlow = upFlow;
    }

    public Integer getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(Integer downFlow) {
        this.downFlow = downFlow;
    }

    public Integer getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(Integer sumFlow) {
        this.sumFlow = sumFlow;
    }
    public void setSumFlow() {
        this.sumFlow = this.upFlow+this.downFlow;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // Serialization order must match the read order in readFields();
        // sumFlow is serialized too, so toString() stays correct after deserialization
        out.writeInt(this.upFlow);
        out.writeInt(this.downFlow);
        out.writeInt(this.sumFlow);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.upFlow = in.readInt();
        this.downFlow = in.readInt();
        this.sumFlow = in.readInt();
    }

    @Override
    public String toString() {
        return this.upFlow+"\t"+this.downFlow+"\t"+this.sumFlow;
    }
}
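A quick, self-contained way to check that write() and readFields() stay in sync (a minimal sketch, not from the original post; it assumes the versions above that also serialize sumFlow):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FlowBeanRoundTrip {
    public static void main(String[] args) throws IOException {
        FlowBean original = new FlowBean();
        original.setUpFlow(100);
        original.setDownFlow(200);
        original.setSumFlow();

        // Serialize to a byte array, as the shuffle would
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        // Deserialize into a fresh bean and print it
        FlowBean copy = new FlowBean();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        System.out.println(copy); // expected: 100	200	300
    }
}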

Mapper:

package com.hpu.hadoop.partitioner;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class FlowMapper extends Mapper<LongWritable, Text,Text, FlowBean> {
    private Text phone;
    private FlowBean flowBean;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // Create the key and value objects once and reuse them across map() calls
        flowBean = new FlowBean();
        phone = new Text();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Lines are tab-separated; the traffic fields are counted from the end of
        // the line so that a variable number of middle fields is tolerated
        String line = value.toString();
        String[] split = line.split("\t");

        flowBean.setUpFlow(Integer.parseInt(split[split.length - 3]));
        flowBean.setDownFlow(Integer.parseInt(split[split.length - 2]));
        flowBean.setSumFlow();

        // The phone number is the second tab-separated field
        phone.set(split[1]);
        context.write(phone, flowBean);
    }
}
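For orientation, here is a hypothetical input line (tab-separated; all values invented for illustration) and the record the mapper would emit for it:

1	13736230513	192.196.100.1	www.example.com	2481	24681	200
->  key: 13736230513, value: 2481	24681	27162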

Custom Partitioner:

package com.hpu.hadoop.partitioner;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class CustomPartitioner extends Partitioner<Text,FlowBean> {

    @Override
    public int getPartition(Text text, FlowBean flowBean, int numPartitions) {
        // Route each record by the first three digits of the phone number
        String phone = text.toString();
        String prefix = phone.substring(0, 3);

        if ("136".equals(prefix)) {
            return 0;
        } else if ("137".equals(prefix)) {
            return 1;
        } else if ("138".equals(prefix)) {
            return 2;
        } else if ("139".equals(prefix)) {
            return 3;
        } else {
            // Everything else falls into the catch-all partition
            return 4;
        }
    }
}
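Because getPartition() is a pure function, it can be spot-checked directly without running a job (a hypothetical check, not from the original post; the phone numbers are made up):

CustomPartitioner partitioner = new CustomPartitioner();
System.out.println(partitioner.getPartition(new Text("13612345678"), new FlowBean(), 5)); // 0
System.out.println(partitioner.getPartition(new Text("13912345678"), new FlowBean(), 5)); // 3
System.out.println(partitioner.getPartition(new Text("15012345678"), new FlowBean(), 5)); // 4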

Reducer:

package com.hpu.hadoop.partitioner;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class FlowReducer extends Reducer<Text, FlowBean,Text, FlowBean> {

    private FlowBean flowBean;
    private int sumUp;
    private int sumDown;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // Create the output bean once and reuse it across reduce() calls
        flowBean = new FlowBean();
    }

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
        // Reset the accumulators for each key, then total all traffic for this phone number
        sumUp = 0;
        sumDown = 0;
        for (FlowBean value : values) {
            sumUp += value.getUpFlow();
            sumDown += value.getDownFlow();
        }
        flowBean.setUpFlow(sumUp);
        flowBean.setDownFlow(sumDown);
        flowBean.setSumFlow();
        context.write(key, flowBean);
    }
}
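Reusing a single FlowBean output object across reduce() calls is safe here: context.write() serializes the bean's current field values at the moment of the call, so later mutations do not corrupt earlier output.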

Driver:

package com.hpu.hadoop.partitioner;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class FlowDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // 1. Configuration and job instance
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // 2. Driver class
        job.setJarByClass(FlowDriver.class);
        // 3. Mapper
        job.setMapperClass(FlowMapper.class);
        // 4. Reducer
        job.setReducerClass(FlowReducer.class);
        // 5. Map output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);
        // 6. Final output key/value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);
        // Register the custom partitioner...
        job.setPartitionerClass(CustomPartitioner.class);
        // ...and set a matching number of ReduceTasks
        job.setNumReduceTasks(5);

        // 7. Input and output paths
        FileInputFormat.setInputPaths(job, new Path("F:\\input\\inputflow\\phone_data.txt"));
        FileOutputFormat.setOutputPath(job, new Path("E:\\Test\\f4"));
        // 8. Submit and propagate the job status as the exit code
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
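With five ReduceTasks, the output directory should contain one file per partition. Based on the partitioner's logic, the expected mapping is:

part-r-00000  phone numbers starting with 136
part-r-00001  phone numbers starting with 137
part-r-00002  phone numbers starting with 138
part-r-00003  phone numbers starting with 139
part-r-00004  all other prefixes

As usual with FileOutputFormat, the output directory must not already exist when the job is submitted, or the job will fail at startup.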