I. Requirements
For each phone number, compute the total upstream traffic, the total downstream traffic, and the total traffic (upstream total + downstream total). In addition, separate the results by phone-number prefix and write them to different output files:
13* ==> …
15* ==> …
other ==> …
In the access.log data file, the second field is the phone number,
the third-to-last field is the upstream traffic,
and the second-to-last field is the downstream traffic.
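As a sanity check of those field positions, here is a minimal pure-Java sketch that parses one sample line. The line content is made up for illustration (real access.log records have more fields), but the indexing — second field, third-to-last, second-to-last — matches the rule above:

```java
public class FieldDemo {
    public static void main(String[] args) {
        // Hypothetical tab-separated sample: id, phone, ip, ..., upFlow, downFlow, status
        String line = "1363157985066\t13726230503\t120.196.100.82\t24\t27\t2481\t24681\t200";
        String[] fields = line.split("\t");
        String phone = fields[1];                                  // second field
        long upFlow = Long.parseLong(fields[fields.length - 3]);   // third-to-last
        long downFlow = Long.parseLong(fields[fields.length - 2]); // second-to-last
        System.out.println(phone + "\t" + upFlow + "\t" + downFlow + "\t" + (upFlow + downFlow));
        // prints: 13726230503	2481	24681	27162
    }
}
```

Indexing from the end of the array makes the extraction robust even if the number of middle fields varies between records.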
II. Approach
Group the records by phone number, then add up that phone number's upstream and downstream traffic.
Mapper: split each line into phone number, upstream traffic, and downstream traffic,
then write out the phone number as the key and an Access object as the value.
The Reducer input then looks like: ("phone number", <Access, Access, …>)
Define a custom partitioner class (it must extend the Partitioner abstract class) and override its
getPartition() method.
III. Development Steps
1. Define a custom Access class with the fields: phone number, upstream traffic, downstream traffic, total traffic.
2. Define a custom Map task class (Map Task) that splits each log line. The Map output is:
phone ==> Access(phone number, upstream traffic of that line, downstream traffic of that line)
3. Write the Reduce task class (Reduce Task)
that sums the traffic for each phone number. The Reduce output is:
phone ==> Access(phone number, upstream total, downstream total), which can also be optimized to:
phone ==> Access(NullWritable object, upstream total, downstream total)
4. Write the partitioner class,
which extends org.apache.hadoop.mapreduce.Partitioner:
phone numbers starting with "13" go to the first ReduceTask and end up in partition 0;
phone numbers starting with "15" go to the second ReduceTask and end up in partition 1; all remaining phone numbers go to the third ReduceTask and end up in partition 2.
IV. Code Implementation
1. Access.java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// The Access object must be serializable, so it implements Hadoop's Writable interface
public class Access implements Writable {
    private long upFlow;
    private long downFlow;
    private long sumFlow;

    // Deserialization calls the no-arg constructor via reflection, so it must exist
    public Access() {
        super();
    }

    public Access(long upFlow, long downFlow) {
        super();
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = upFlow + downFlow;
    }

    public long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(long sumFlow) {
        this.sumFlow = sumFlow;
    }

    public long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(long upFlow) {
        this.upFlow = upFlow;
    }

    public long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(long downFlow) {
        this.downFlow = downFlow;
    }

    /**
     * Serialization method
     *
     * @param out
     * @throws IOException
     */
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(sumFlow);
    }

    /**
     * Deserialization method.
     * Note: the fields must be read back in exactly the order they were written.
     *
     * @param in
     * @throws IOException
     */
    @Override
    public void readFields(DataInput in) throws IOException {
        upFlow = in.readLong();
        downFlow = in.readLong();
        sumFlow = in.readLong();
    }

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }

    public void set(long upFlow, long downFlow) {
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = upFlow + downFlow;
    }
}
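The "read back in the same order you wrote" rule in write()/readFields() can be sanity-checked without Hadoop, since Writable is built on the same java.io data streams. A minimal stdlib-only sketch (it does not use the Writable interface itself, only the same ordering discipline):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class RoundTripDemo {
    public static void main(String[] args) throws IOException {
        long upFlow = 100L, downFlow = 200L, sumFlow = upFlow + downFlow;

        // "Serialize": write the three longs in a fixed order
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(sumFlow);

        // "Deserialize": read them back in exactly the same order
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bos.toByteArray()));
        System.out.println(in.readLong() + "\t" + in.readLong() + "\t" + in.readLong());
        // prints: 100	200	300
    }
}
```

If the read order were swapped (say, downFlow first), the values would silently land in the wrong fields — the stream carries no field names, only bytes, which is exactly why readFields() must mirror write().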
2. ProvincePartitioner.java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

/**
 * K2, V2 correspond to the map output key/value types
 */
public class ProvincePartitioner extends Partitioner<Text, Access> {
    @Override
    public int getPartition(Text key, Access value, int numPartitions) {
        // 1. Take the first two digits of the phone number
        String preNum = key.toString().substring(0, 2);
        int partition;
        // 2. Decide which partition the record belongs to
        if ("13".equals(preNum)) {
            partition = 0;
        } else if ("15".equals(preNum)) {
            partition = 1;
        } else {
            partition = 2;
        }
        return partition;
    }
}
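Exercising getPartition() directly requires Hadoop's Text class on the classpath, but the prefix logic itself can be checked with a plain-string mirror of the same branches (a standalone sketch for illustration, not part of the job; the phone numbers are made up):

```java
public class PartitionDemo {
    // Mirrors the branch logic of ProvincePartitioner.getPartition() on a plain String key
    static int partitionFor(String phone) {
        String preNum = phone.substring(0, 2);
        if ("13".equals(preNum)) {
            return 0;
        } else if ("15".equals(preNum)) {
            return 1;
        } else {
            return 2;
        }
    }

    public static void main(String[] args) {
        System.out.println(partitionFor("13726230503")); // prints 0
        System.out.println(partitionFor("15013685858")); // prints 1
        System.out.println(partitionFor("18211575961")); // prints 2
    }
}
```

Each return value is the index of the reduce task (and hence of the part-r-0000N output file) that will receive the record.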
3. Map.java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class Map extends Mapper<LongWritable, Text, Text, Access> {
    // Do not new objects inside map(): map() is called once per record, so allocating there
    // wastes memory. This reuse is also why Access needs a no-arg constructor.
    Access access = new Access();
    Text k = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // 1. Read one line of input
        String line = value.toString();
        // 2. Split it into fields
        String[] fields = line.split("\t");
        // 3. Populate the Access object and extract the phone number (step 2: the business logic)
        String phoneNum = fields[1];
        long upFlow = Long.parseLong(fields[fields.length - 3]);
        long downFlow = Long.parseLong(fields[fields.length - 2]);
        access.set(upFlow, downFlow);
        k.set(phoneNum);
        // 4. Emit the record (step 3: be explicit about what the key and the value are)
        context.write(k, access);
    }
}
4. Reduce.java
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class Reduce extends Reducer<Text, Access, Text, Access> {
    @Override
    protected void reduce(Text key, Iterable<Access> values, Context context)
            throws IOException, InterruptedException {
        // 1. Sum the traffic for this phone number
        long sum_upFlow = 0;
        long sum_downFlow = 0;
        for (Access access : values) {
            sum_upFlow += access.getUpFlow();
            sum_downFlow += access.getDownFlow();
        }
        // 2. Emit the totals
        context.write(key, new Access(sum_upFlow, sum_downFlow));
    }
}
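The write-up stops at the Reducer, but the three-way split only takes effect if the driver registers the partitioner and requests a matching number of reduce tasks. A sketch of what that driver might look like (the class name FlowDriver and the argv-based paths are placeholders; adjust them to your project):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FlowDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "flow-sum-by-prefix");
        job.setJarByClass(FlowDriver.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        // The key pairing: the partitioner plus a matching number of reduce tasks.
        // With 1 reduce task the partitioner is effectively bypassed; with 2, keys
        // routed to partition 2 fail with an "Illegal partition" error; with more
        // than 3, the surplus output files are simply empty.
        job.setPartitionerClass(ProvincePartitioner.class);
        job.setNumReduceTasks(3);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Access.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Access.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

After a successful run, the output directory contains part-r-00000 ("13" numbers), part-r-00001 ("15" numbers), and part-r-00002 (everything else) — one file per reduce task, as required.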