06 - MapReduce (3): The Shuffle Mechanism

Table of Contents

I. Definition

II. Partitioning (Partition)

III. Partition Case Study

1. Requirement

2. Requirement Analysis

3. Writing the Code

IV. Sorting (WritableComparable)

1. Types of Sorting

2. Custom Sorting with WritableComparable

3. WritableComparable Case Study (Total Sort)

(1) Requirement

(2) Requirement Analysis

(3) Writing the Code

1) Create the package

2) Write the FlowBean class

3) Write the FlowCountSortMapper class

4) Write the FlowCountSortReducer class

5) Write the FlowCountSortDriver class

6) Run the program

4. WritableComparable Case Study (In-Partition Sort)

(1) Requirement

(2) Requirement Analysis

(3) Writing the Code

5. GroupingComparator Grouping (Auxiliary Sort) Case Study

(1) Requirement

(2) Requirement Analysis

(3) Writing the Code

1) Write the OrderBean class

2) Write the OrderMapper class

3) Write the OrderGroupingComparator class

4) Write the OrderReducer class

5) Write the OrderDriver class

6) Set the run parameters

7) Run the program


I. Definition

The data processing that takes place after the map() method and before the reduce() method is called the Shuffle.

II. Partitioning (Partition)
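As background: Hadoop's default partitioner is HashPartitioner, which assigns each record to a reduce task by hashing its key, so identical keys always land in the same partition and therefore in the same output file. Below is a minimal sketch equivalent to org.apache.hadoop.mapreduce.lib.partition.HashPartitioner (the class name is only illustrative):

import org.apache.hadoop.mapreduce.Partitioner;

// Default partitioning rule: mask the key's hashCode to a non-negative value
// and take it modulo the number of reduce tasks.
public class DefaultStylePartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}

A custom partitioner, like the one in the case study below, replaces this rule with its own mapping from key to partition number.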

 

III. Partition Case Study

1. Requirement

Output the statistics to different files (partitions) according to the province that each phone number belongs to.

(1) Input data

1	13736230513	192.196.100.1	www.atguigu.com	2481	24681	200
2	13846544121	192.196.100.2			264	0	200
3 	13956435636	192.196.100.3			132	1512	200
4 	13966251146	192.168.100.1			240	0	404
5 	18271575951	192.168.100.2	www.atguigu.com	1527	2106	200
6 	84188413	192.168.100.3	www.atguigu.com	4116	1432	200
7 	13590439668	192.168.100.4			1116	954	200
8 	15910133277	192.168.100.5	www.hao123.com	3156	2936	200
9 	13729199489	192.168.100.6			240	0	200
10 	13630577991	192.168.100.7	www.shouhu.com	6960	690	200
11 	15043685818	192.168.100.8	www.baidu.com	3659	3538	200
12 	15959002129	192.168.100.9	www.atguigu.com	1938	180	500
13 	13560439638	192.168.100.10			918	4938	200
14 	13470253144	192.168.100.11			180	180	200
15 	13682846555	192.168.100.12	www.qq.com	1938	2910	200
16 	13992314666	192.168.100.13	www.gaga.com	3008	3720	200
17 	13509468723	192.168.100.14	www.qinghua.com	7335	110349	404
18 	18390173782	192.168.100.15	www.sogou.com	9531	2412	200
19 	13975057813	192.168.100.16	www.baidu.com	11058	48243	200
20 	13768778790	192.168.100.17			120	120	200
21 	13568436656	192.168.100.18	www.alibaba.com	2481	24681	200
22 	13568436656	192.168.100.19			1116	954	200

(2) Expected output

Phone numbers starting with 136, 137, 138, and 139 each go into their own file (four files in total), and numbers with any other prefix go into a fifth file.

2. Requirement Analysis

The three-digit prefix of the phone number (the map output key) decides the partition: 136, 137, 138, and 139 each get their own partition, every other prefix falls into a fifth one, and the number of reduce tasks is set to 5 so that each partition is written to its own output file.

3. Writing the Code

On top of the FlowCount case from the previous post, add a partitioner class, ProvincePartitioner.java, with the following content:

package com.wolf.mr.flowsum;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class ProvincePartitioner extends Partitioner<Text, FlowBean> {
    @Override
    public int getPartition(Text key, FlowBean value, int numPartitions) {

        // take the first three characters of the phone number (the map output key)
        String prePhoneNum = key.toString().substring(0, 3);
        // partition 4 is the default for every prefix other than 136-139
        int partition = 4;
        if ("136".equals(prePhoneNum)) {
            partition = 0;
        } else if ("137".equals(prePhoneNum)) {
            partition = 1;
        } else if ("138".equals(prePhoneNum)) {
            partition = 2;
        } else if ("139".equals(prePhoneNum)) {
            partition = 3;
        }
        return partition;
    }
}

Add the following lines to FlowCountDriver:

job.setPartitionerClass(ProvincePartitioner.class);
job.setNumReduceTasks(5);
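For context (standard Hadoop behavior rather than something spelled out in the original post), the number of reduce tasks has to stay consistent with the partitions the partitioner can return:

// - numReduceTasks == 1: the partitioner is effectively ignored and a single output file is produced.
// - 1 < numReduceTasks < number of partitions: records assigned to a partition with no matching
//   reduce task make the job fail with an "Illegal partition" error.
// - numReduceTasks > number of partitions: the job runs, but the extra part-r-xxxxx files are empty.
job.setPartitionerClass(ProvincePartitioner.class);
job.setNumReduceTasks(5);   // ProvincePartitioner defines exactly 5 partitions (0-4)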

Run it (note that the output path must not already exist; delete it first if it does):

Check the result:

You can see that the data was indeed split into five partitions:

You can also see that the results are correct.

IV. Sorting (WritableComparable)

1. Types of Sorting

Broadly, MapReduce sorting is classified into partial sort (each output file is internally sorted), total sort (a single, globally ordered output), secondary sort, and grouping (auxiliary) sort; the cases below walk through total sort, in-partition sort, and grouping sort.

2. Custom Sorting with WritableComparable

How it works:

When a bean object is transferred as the key, it only needs to implement the WritableComparable interface and override the compareTo() method to be sorted.

@Override
public int compareTo(FlowBean bean) {

	int result;

	// sort by total flow, in descending order
	if (sumFlow > bean.getSumFlow()) {
		result = -1;
	} else if (sumFlow < bean.getSumFlow()) {
		result = 1;
	} else {
		result = 0;
	}

	return result;
}
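The same descending comparison can also be written more compactly with Long.compare; this is just an equivalent alternative, not the form used in the rest of this post:

@Override
public int compareTo(FlowBean bean) {
    // descending by total flow: compare the other bean's total against this one's
    return Long.compare(bean.getSumFlow(), sumFlow);
}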

3. WritableComparable Case Study (Total Sort)

(1) Requirement

Sort by total flow again, based on the results produced by the earlier FlowCount case (case 2.3).

1) Input data

Original data                          Data after the first pass

2) Expected output

13509468723  7335        110349    117684

13736230513  2481        24681      27162

13956435636  132          1512        1644

13846544121  264          0              264

(2) Requirement Analysis

MapReduce sorts map output by key during the shuffle, so FlowBean (carrying the total flow) is used as the map output key and the phone number as the value; compareTo() orders by sumFlow in descending order, and the reducer swaps key and value back when writing the result.

(3) Writing the Code

1) Create the package

Under src/main/java, add the package com.wolf.mr.sort.

2) Write the FlowBean class

Add the FlowBean class to the package:

package com.wolf.mr.sort;

import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class FlowBean implements WritableComparable<FlowBean> {
    // flow fields; the bean is compared (sorted) by sumFlow
    private long upFlow;
    private long downFlow;
    private long sumFlow;

    public FlowBean() {
        super();
    }

    public FlowBean(long upFlow, long downFlow) {
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        sumFlow = upFlow+downFlow;
    }

    @Override
    public int compareTo(FlowBean bean) {
        // sort by total flow, in descending order
        int result;
        if (sumFlow > bean.getSumFlow()) {
            result = -1;
        } else if (sumFlow < bean.getSumFlow()) {
            result = 1;
        } else {
            result = 0;
        }

        return result;
    }
    // serialization
    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeLong(upFlow);
        dataOutput.writeLong(downFlow);
        dataOutput.writeLong(sumFlow);
    }
    // deserialization
    @Override
    public void readFields(DataInput dataInput) throws IOException {
        upFlow = dataInput.readLong();
        downFlow = dataInput.readLong();
        sumFlow = dataInput.readLong();
    }

    public long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(long upFlow) {
        this.upFlow = upFlow;
    }

    public long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(long downFlow) {
        this.downFlow = downFlow;
    }

    public long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(long sumFlow) {
        this.sumFlow = sumFlow;
    }

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }
}

3) Write the FlowCountSortMapper class

package com.wolf.mr.sort;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class FlowCountSortMapper extends Mapper<LongWritable, Text,FlowBean,Text> {
    FlowBean k = new FlowBean();
    Text v = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // 1.get 1 line
        String line = value.toString();
        // 2. split
        String[] fields = line.split("\t");
        // 3. pack obj
        String phoneNum = fields[0];
        long upFlow = Long.parseLong(fields[1]);
        long downFlow = Long.parseLong(fields[2]);
        long sumFlow = Long.parseLong(fields[3]);

        k.setUpFlow(upFlow);
        k.setDownFlow(downFlow);
        k.setSumFlow(sumFlow);
        v.set(phoneNum);
        // 4. write out
        context.write(k,v);
    }
}

4) Write the FlowCountSortReducer class

package com.wolf.mr.sort;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class FlowCountSortReducer extends Reducer<FlowBean,Text,Text,FlowBean> {
    @Override
    protected void reduce(FlowBean key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        // several phone numbers may share the same total flow; they arrive grouped
        // under one FlowBean key, so write each of them out
        for (Text value : values) {
            context.write(value, key);
        }
    }
}

5) Write the FlowCountSortDriver class

package com.wolf.mr.sort;


import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class FlowCountSortDriver {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {

        // 1. get job obj
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // 2. set jar storage path
        job.setJarByClass(FlowCountSortDriver.class);
        // 3.link mapper and reducer
        job.setMapperClass(FlowCountSortMapper.class);
        job.setReducerClass(FlowCountSortReducer.class);
        // 4.set mapper's type of key and value
        job.setMapOutputKeyClass(FlowBean.class);
        job.setMapOutputValueClass(Text.class);
        // 5. set final output type of key and value
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);



        // 6. set input output path
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // 7. submit job
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}

6) Run the program

Input data:

Set the run parameters:

/home/wolf/phonesort.txt /home/wolf/output/phonesortout

Run the program.

Check the result:

You can see that the records are indeed sorted by total flow in descending order.

4. WritableComparable Case Study (In-Partition Sort)

(1) Requirement

Within each province's output file (i.e., each partition by phone-number prefix), the records must be sorted by total flow.

(2) Requirement Analysis

Building on the previous case, add a custom partitioner class that partitions by the province prefix of the phone number.

(3) Writing the Code

I won't spell this one out in full; it is just the previous case plus a custom partitioner class. A sketch is given below.
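A minimal sketch of what that partitioner could look like, assuming the same five-way split as in the Partition case above (the class and package names simply follow the naming used elsewhere in this post). Note that in this job the map output key is the FlowBean and the phone number travels as the value, so the prefix must be read from the value:

package com.wolf.mr.sort;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Partitions by the first three digits of the phone number, which in this job is
// the map output VALUE (the key is the FlowBean used for sorting).
public class ProvincePartitioner extends Partitioner<FlowBean, Text> {
    @Override
    public int getPartition(FlowBean key, Text value, int numPartitions) {
        String prePhoneNum = value.toString().substring(0, 3);
        int partition = 4;   // default partition for all other prefixes
        if ("136".equals(prePhoneNum)) {
            partition = 0;
        } else if ("137".equals(prePhoneNum)) {
            partition = 1;
        } else if ("138".equals(prePhoneNum)) {
            partition = 2;
        } else if ("139".equals(prePhoneNum)) {
            partition = 3;
        }
        return partition;
    }
}

Then register it in FlowCountSortDriver the same way as before:

job.setPartitionerClass(ProvincePartitioner.class);
job.setNumReduceTasks(5);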

5. GroupingComparator Grouping (Auxiliary Sort) Case Study

Grouping is another important kind of sorting: the data arriving at the Reduce stage is grouped by one or more fields.

Steps for grouping:

(1) Write a custom class that extends WritableComparator

(2) Override the compare() method

@Override
public int compare(WritableComparable a, WritableComparable b) {
		// business logic of the comparison goes here
		return result;
}

(3) Add a constructor that passes the class of the objects being compared to the superclass

protected OrderGroupingComparator() {
		super(OrderBean.class, true);
}

(1) Requirement

Find the most expensive item in every order.

1) Input data

0000001	Pdt_01	222.8
0000002	Pdt_05	722.4
0000001	Pdt_02	33.8
0000003	Pdt_06	232.8
0000003	Pdt_02	33.8
0000002	Pdt_03	522.8
0000002	Pdt_04	122.4

2) Expected output

1        222.8

2        722.4

3        232.8

(2) Requirement Analysis

1) Use "order id + amount" as the key. All order records read in the Map stage are then sorted by id in ascending order and, for equal ids, by amount in descending order before being sent to Reduce.

2) On the Reduce side, use a GroupingComparator to group key/value pairs with the same order id into one group; the first record of each group is then the most expensive item of that order (a hand-worked trace of the sample data is shown below).
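To make that concrete, here is what the sample input above looks like after the map-side sort and the reduce-side grouping (worked out by hand for illustration, not actual job output):

Sorted map output keys (order_id, price), ascending by id and descending by price within an id:
(1, 222.8) (1, 33.8) (2, 722.4) (2, 522.8) (2, 122.4) (3, 232.8) (3, 33.8)

Groups formed by comparing on order_id only, and the first key of each group written out by reduce():
{(1, 222.8), (1, 33.8)}              -> 1  222.8
{(2, 722.4), (2, 522.8), (2, 122.4)} -> 2  722.4
{(3, 232.8), (3, 33.8)}              -> 3  232.8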

(3) Writing the Code

Create a new package, com.wolf.mr.order.

1) Write the OrderBean class

package com.wolf.mr.order;

import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class OrderBean implements WritableComparable<OrderBean> {
    private int order_id;
    private double price;

    public OrderBean() {
        super();
    }

    public OrderBean(int order_id, double price) {
        super();
        this.order_id = order_id;
        this.price = price;
    }

    public int getOrder_id() {
        return order_id;
    }

    @Override
    public String toString() {
        return order_id + "\t" + price;
    }

    public void setOrder_id(int order_id) {
        this.order_id = order_id;
    }

    public double getPrice() {
        return price;
    }

    public void setPrice(double price) {
        this.price = price;
    }

    @Override
    public int compareTo(OrderBean orderBean) {
        // 1. sort by order id, ascending
        int result;
        if (order_id>orderBean.getOrder_id()){
            result = 1;
        } else if (order_id < orderBean.getOrder_id()) {
            result = -1;
        }else {
            // 2. same id: sort by price, descending
            if (price>orderBean.getPrice()){
                result  = -1;
            } else if (price < orderBean.getPrice()) {
                result = 1;
            }else {
                result = 0;
            }
        }
        
        return result;
    }

    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeInt(order_id);
        dataOutput.writeDouble(price);
    }

    @Override
    public void readFields(DataInput dataInput) throws IOException {
        order_id = dataInput.readInt();
        price = dataInput.readDouble();
    }
}

2) Write the OrderMapper class

package com.wolf.mr.order;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class OrderMapper extends Mapper<LongWritable, Text, OrderBean, NullWritable> {
    OrderBean k = new OrderBean();

    @Override
    protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, OrderBean, NullWritable>.Context context) throws IOException, InterruptedException {
        // get 1 line
        String line = value.toString();

        // split (the input file is tab-separated)
        String[] fields = line.split("\t");

        // pack obj
        k.setOrder_id(Integer.parseInt(fields[0]));
        k.setPrice(Double.parseDouble(fields[2]));

        // write out
        context.write(k, NullWritable.get());
    }
}

3) Write the OrderGroupingComparator class

package com.wolf.mr.order;

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

public class OrderGroupingComparator extends WritableComparator {
    public OrderGroupingComparator() {
        super(OrderBean.class,true);
    }

    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        // two beans compare as equal as long as their order ids match, so all records
        // of one order are fed to the same reduce() call regardless of price
        OrderBean aBean = (OrderBean) a;
        OrderBean bBean = (OrderBean) b;

        int result;
        if (aBean.getOrder_id() > bBean.getOrder_id()) {
            result = 1;
        } else if (aBean.getOrder_id() < bBean.getOrder_id()) {
            result = -1;
        } else {
            result = 0;
        }

        return result;

    }
}

4) Write the OrderReducer class

package com.wolf.mr.order;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class OrderReducer extends Reducer<OrderBean, NullWritable,OrderBean,NullWritable> {
    @Override
    protected void reduce(OrderBean key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {

        // within a group the keys are already sorted by price in descending order,
        // so the key passed in here is the most expensive item of the order
        context.write(key, NullWritable.get());

    }
}

5) Write the OrderDriver class

package com.wolf.mr.order;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class OrderDriver {

    public static void main(String[] args) throws Exception {

        // 1 get the configuration and the job instance
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        // 2 set the jar loading path
        job.setJarByClass(OrderDriver.class);

        // 3 set the mapper and reducer classes
        job.setMapperClass(OrderMapper.class);
        job.setReducerClass(OrderReducer.class);

        // 4 set the map output key and value types
        job.setMapOutputKeyClass(OrderBean.class);
        job.setMapOutputValueClass(NullWritable.class);

        // 5 set the final output key and value types
        job.setOutputKeyClass(OrderBean.class);
        job.setOutputValueClass(NullWritable.class);

        // 6 set the input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // 7 set the grouping comparator for the reduce side
        job.setGroupingComparatorClass(OrderGroupingComparator.class);

        // 8 submit the job
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}

6) Set the run parameters

/home/wolf/GroupingComparator.txt /home/wolf/output/order_out

7) Run the program

Check the result:

You can see that the result is correct.

That covers roughly everything you really need to master about the Shuffle mechanism (the parts you will actually use). There are a few other topics, which I will come back to and add when I get the chance.
