"If you often walk along the river, you can't avoid getting your shoes wet." — If you spot an error in this article, please point it out. Your corrections are appreciated, thanks!
1. References
2. Environment
- Windows 10
- JDK 8
- Hadoop 3.1.3 (Windows build)
- IDEA
3. Partitioning
3.1 The default Partitioner
Relevant source excerpt, from org.apache.hadoop.mapreduce.lib.partition.HashPartitioner.java:
public int getPartition(K key, V value,
                        int numReduceTasks) {
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
}
By default, the partition is computed as the hashCode of the map output key modulo the number of ReduceTasks, so the user has no control over which partition a given key is stored in.
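For intuition, this rule can be exercised outside Hadoop. Below is a minimal plain-Java sketch (the class name and sample keys are made up for illustration). Masking hashCode() with Integer.MAX_VALUE clears the sign bit, so the result of the modulo is always in the range [0, numReduceTasks), even for keys whose hashCode() is negative.

```java
public class HashPartitionDemo {
    // Same arithmetic as HashPartitioner.getPartition()
    static int getPartition(Object key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        int numReduceTasks = 4;
        String[] keys = {"13736230513", "13846544121", "13956435636"};
        for (String key : keys) {
            // Identical keys always map to the same partition, but which
            // partition a key lands in is decided by its hashCode, not the user
            System.out.println(key + " -> partition " + getPartition(key, numReduceTasks));
        }
    }
}
```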
3.2 Custom Partitioner: steps & a worked example
Example: sum the total traffic consumed per phone number, and write the results to different files (partitions) according to the province the number belongs to. Numbers starting with 135, 136, and 137 each go to their own file; numbers with any other prefix all go to a shared fourth file.
Test data:
1 13736230513 192.196.100.1 www.atguigu.com 2481 24681 200
2 13846544121 192.196.100.2 264 0 200
3 13956435636 192.196.100.3 132 1512 200
4 13966251146 192.168.100.1 240 0 404
5 18271575951 192.168.100.2 www.atguigu.com 1527 2106 200
6 84188413 192.168.100.3 www.atguigu.com 4116 1432 200
7 13590439668 192.168.100.4 1116 954 200
8 15910133277 192.168.100.5 www.hao123.com 3156 2936 200
9 13729199489 192.168.100.6 240 0 200
10 13630577991 192.168.100.7 www.shouhu.com 6960 690 200
11 15043685818 192.168.100.8 www.baidu.com 3659 3538 200
12 15959002129 192.168.100.9 www.atguigu.com 1938 180 500
13 13560439638 192.168.100.10 918 4938 200
14 13470253144 192.168.100.11 180 180 200
15 13682846555 192.168.100.12 www.qq.com 1938 2910 200
16 13992314666 192.168.100.13 www.gaga.com 3008 3720 200
17 13509468723 192.168.100.14 www.qinghua.com 7335 110349 404
18 18390173782 192.168.100.15 www.sogou.com 9531 2412 200
19 13975057813 192.168.100.16 www.baidu.com 11058 48243 200
20 13768778790 192.168.100.17 120 120 200
21 13568436656 192.168.100.18 www.alibaba.com 2481 24681 200
22 13568436656 192.168.100.19 1116 954 200
Input record format:
id	phone number	network IP	upstream traffic	downstream traffic	status code
7 13560436666 120.196.100.99 1116 954 200
Output record format:
13560436666 1116 954 2070
phone number	upstream traffic	downstream traffic	total traffic
Step analysis: read the input data; add a ProvincePartitioner (four partitions, storing the results for numbers starting with 135, 136, 137, and everything else, respectively); define the expected output; and in the Driver class register the custom partitioner and set the number of ReduceTasks.
3.2.1 Write the serializable entity class
Used to carry data between the map and reduce phases.
FlowBean.java
package com.uni.partitioner2;

import org.apache.hadoop.io.Writable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

/**
 * 1. Implement the Writable interface
 * 2. Override the serialization and deserialization methods
 * 3. Provide a no-arg constructor
 * 4. Override toString()
 */
public class FlowBean implements Writable {
    private long upFlow;   // upstream traffic
    private long downFlow; // downstream traffic
    private long sumFlow;  // total traffic

    // No-arg constructor, required so Hadoop can instantiate the bean reflectively
    public FlowBean() {
    }

    public long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(long upFlow) {
        this.upFlow = upFlow;
    }

    public long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(long downFlow) {
        this.downFlow = downFlow;
    }

    public long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow() {
        this.sumFlow = this.upFlow + this.downFlow;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(sumFlow);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // Field order must match write() exactly
        this.upFlow = in.readLong();
        this.downFlow = in.readLong();
        this.sumFlow = in.readLong();
    }

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }
}
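The write()/readFields() pair is what Hadoop invokes when it serializes the bean for the shuffle. The round trip can be demonstrated standalone with plain java.io streams, since DataOutput and DataInput are JDK interfaces. Below is a hedged sketch: FlowBeanDemo is a made-up stand-in mirroring FlowBean's fields, not the class from the article.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

public class FlowBeanDemo {
    long upFlow, downFlow, sumFlow;

    // Same pattern as FlowBean.write(): emit fields in a fixed order
    void write(DataOutput out) throws IOException {
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(sumFlow);
    }

    // Same pattern as FlowBean.readFields(): read back in the same order
    void readFields(DataInput in) throws IOException {
        upFlow = in.readLong();
        downFlow = in.readLong();
        sumFlow = in.readLong();
    }

    public static void main(String[] args) throws IOException {
        FlowBeanDemo original = new FlowBeanDemo();
        original.upFlow = 1116;
        original.downFlow = 954;
        original.sumFlow = original.upFlow + original.downFlow;

        // Serialize to bytes, as Hadoop does during the shuffle
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        original.write(new DataOutputStream(buf));

        // Deserialize into a fresh instance and print toString()-style output
        FlowBeanDemo copy = new FlowBeanDemo();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(copy.upFlow + "\t" + copy.downFlow + "\t" + copy.sumFlow);
    }
}
```

The key design constraint this illustrates: deserialization order must match serialization order field for field, because the byte stream carries no field names.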
3.2.2 Create a custom class extending Partitioner
The subclass must override the getPartition() method.
ProvincePartitioner.java
package com.uni.partitioner2;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class ProvincePartitioner extends Partitioner<Text, FlowBean> {
    @Override
    public int getPartition(Text text, FlowBean flowBean, int numPartitions) {
        // text is the phone number
        String phone = text.toString();
        String prePhone = phone.substring(0, 3);
        int partition;
        if ("135".equals(prePhone)) {
            partition = 0;
        } else if ("136".equals(prePhone)) {
            partition = 1;
        } else if ("137".equals(prePhone)) {
            partition = 2;
        } else {
            partition = 3;
        }
        return partition;
    }
}
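The routing rule itself is plain string logic, so it can be sanity-checked without Hadoop on the classpath. The sketch below (class and method names are made up for illustration) reproduces the same prefix-to-partition mapping with sample numbers from the test data.

```java
public class PrefixRouteDemo {
    // Same routing rule as ProvincePartitioner: first three digits decide the partition
    static int route(String phone) {
        String prePhone = phone.substring(0, 3);
        switch (prePhone) {
            case "135": return 0;
            case "136": return 1;
            case "137": return 2;
            default:    return 3;
        }
    }

    public static void main(String[] args) {
        System.out.println(route("13560439638")); // 135 -> 0
        System.out.println(route("13682846555")); // 136 -> 1
        System.out.println(route("13736230513")); // 137 -> 2
        System.out.println(route("18271575951")); // other -> 3
    }
}
```

Note that for these four partitions to actually become four output files, the Driver must also set the number of ReduceTasks to 4; a custom partitioner on its own does not change how many reducers run.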
3.2.3 Write the custom Mapper and Reducer classes
FlowMapper.java
package com.uni.partitioner2;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class FlowMapper extends Mapper<LongWritable, Text, Text, FlowBean> {
    private Text outputKey = new Text();
    private FlowBean outputValue = new FlowBean();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // 1. Read one line, e.g. 7	13560436666	120.196.100.99	1116	954	200
        String line = value.toString();
        // 2. Split into [7, 13560436666, 120.196.100.99, 1116, 954, 200]
        String[] split = line.split("\t");
        // 3. Pick out the fields we need: phone number, upstream and downstream traffic
        String phone = split[1];
        // Some records are missing the URL field, so count from the end of the
        // array instead of the front to locate the traffic fields reliably
        String up = split[split.length - 3];
        String down = split[split.length - 2];
        // 4. Populate the output key and value
        outputKey.set(phone);
        outputValue.setUpFlow(Long.parseLong(up));
        outputValue.setDownFlow(Long.parseLong(down));
        outputValue.setSumFlow();
        // 5. Emit
        context.write(outputKey, outputValue);
    }
}
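The count-from-the-end trick is the subtle part of this mapper: records with a URL have 7 tab-separated fields, records without one have 6, so indexing from the front would pick the wrong columns. The standalone sketch below (class and method names are made up for illustration) runs the same extraction against one record of each shape from the test data.

```java
public class FieldExtractDemo {
    // Same extraction logic as FlowMapper.map(): phone from the front,
    // traffic fields counted from the end of the split array
    static String extract(String line) {
        String[] split = line.split("\t");
        String phone = split[1];
        long up = Long.parseLong(split[split.length - 3]);
        long down = Long.parseLong(split[split.length - 2]);
        return phone + "\t" + up + "\t" + down + "\t" + (up + down);
    }

    public static void main(String[] args) {
        // A 7-column record (with URL) and a 6-column record (without)
        System.out.println(extract("1\t13736230513\t192.196.100.1\twww.atguigu.com\t2481\t24681\t200"));
        System.out.println(extract("2\t13846544121\t192.196.100.2\t264\t0\t200"));
    }
}
```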
FlowReducer.java
package com.uni.partitioner2;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class FlowReducer extends Reducer<Text, FlowBean, Text, FlowBean> {
    private FlowBean outputValue = new FlowBean();

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
        // 1. Iterate over the values, accumulating upstream and downstream traffic
        long totalUp = 0;
        long totalDown = 0;
        for (FlowBean value : values) {
totalUp += val