Requirement:
For each phone number, sum its upstream traffic, its downstream traffic, and its total traffic (upstream sum + downstream sum); then separate the results by phone-number prefix and write them to different output files:
13* ==> …
15* ==> …
other ==> …
In the access.log data file:
- The second field is the phone number
- The third-to-last field is the upstream traffic
- The second-to-last field is the downstream traffic
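For illustration, a hypothetical log line of this shape (all values invented; fields are tab-separated in the real file) might be:

1363157985066  13726230503  120.196.100.99  www.example.com  24  27  2481  24681  200

Here the second field is the phone number 13726230503, the third-to-last field 2481 is the upstream traffic, and the second-to-last field 24681 is the downstream traffic.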
Approach:
- Group by phone number, then add up that phone number's upstream and downstream traffic.
- Mapper: split each line into phone number, upstream traffic, and downstream traffic; emit the phone number as the key and an Access object as the value. The Reducer input then looks like ("phone number", <Access, Access, …>).
- Custom partitioner class (it must extend the Partitioner abstract class and override the getPartition() method).
Development steps:
(1) Define the custom Access class (named FlowBean in the code below), with fields: phone number, upstream traffic, downstream traffic, total traffic.
(2) Write the Map task class (Map Task). It splits each line of the log; the Map output is:
phone ==> Access(phone number, the line's upstream traffic, the line's downstream traffic)
(3) Write the Reduce task class (Reduce Task). It sums the traffic for each phone number; the Reduce output is:
phone ==> Access(phone number, upstream sum, downstream sum)
This can be optimized: since the key already carries the phone number, the value need not repeat it (the phone slot can become a NullWritable placeholder):
phone ==> Access(NullWritable object, upstream sum, downstream sum)
The FlowBean implementation below follows this idea and stores no phone field at all.
(4) Write the partitioner class. It extends org.apache.hadoop.mapreduce.Partitioner: phone numbers starting with "13" are handled by the first ReduceTask and land in partition 0, numbers starting with "15" are handled by the second ReduceTask and land in partition 1, and all other numbers are handled by the third ReduceTask and land in partition 2.
Create a new project in IDEA.
Create the following Java classes.
(1) The FlowBean class
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;
public class FlowBean implements Writable {

    private long upFlow;
    private long downFlow;
    private long sumFlow;

    // The no-arg constructor is required: deserialization instantiates the bean reflectively
    public FlowBean() {
        super();
    }

    public FlowBean(long upFlow, long downFlow) {
        super();
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = upFlow + downFlow;
    }

    public long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(long sumFlow) {
        this.sumFlow = sumFlow;
    }

    public long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(long upFlow) {
        this.upFlow = upFlow;
    }

    public long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(long downFlow) {
        this.downFlow = downFlow;
    }

    /**
     * Serialization method.
     *
     * @param out
     * @throws IOException
     */
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(sumFlow);
    }

    /**
     * Deserialization method.
     * Note: fields must be read back in exactly the same order they were written.
     *
     * @param in
     * @throws IOException
     */
    @Override
    public void readFields(DataInput in) throws IOException {
        upFlow = in.readLong();
        downFlow = in.readLong();
        sumFlow = in.readLong();
    }

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }

    // Reinitialize a reused instance (avoids per-record allocation in the mapper)
    public void set(long upFlow, long downFlow) {
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = upFlow + downFlow;
    }
}
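As a quick local sanity check (not part of the MapReduce job; a minimal sketch assuming FlowBean is on the classpath, with the invented class name FlowBeanTest), the bean can be serialized to a byte array and read back, confirming that readFields() consumes values in exactly the order write() produced them:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
public class FlowBeanTest {
    public static void main(String[] args) throws IOException {
        FlowBean original = new FlowBean(100L, 200L);
        // Serialize through DataOutput, exactly as Hadoop would
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        original.write(new DataOutputStream(buffer));
        // Deserialize into a fresh bean created via the no-arg constructor
        FlowBean copy = new FlowBean();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(buffer.toByteArray())));
        System.out.println(copy); // expected: 100, 200, 300, tab-separated
    }
}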
(2) The FlowDriver class
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class FlowDriver {
    public static void main(String[] args) throws Exception {
        // 1. Get the configuration and create the job
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // 2. Register the jar by the driver class
        job.setJarByClass(FlowDriver.class);
        // 3. Configure the mapper and reducer classes
        job.setMapperClass(FlowMapper.class);
        job.setReducerClass(FlowReducer.class);
        // 4. Configure the mapper output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);
        // 5. Configure the final output key/value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);
        // Set the custom partitioner
        job.setPartitionerClass(ProvincePartitioner.class);
        // Set the number of reduce tasks to the number of partitions ProvincePartitioner
        // defines (3). With a single reduce task the partitioner is bypassed; with two,
        // partition index 2 would be illegal and the job would fail.
        job.setNumReduceTasks(3);
        // 6. Configure the input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // 7. Submit the job and wait for completion
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
(3) The FlowMapper class
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class FlowMapper extends Mapper<LongWritable, Text, Text, FlowBean> {

    // Reused across map() calls to avoid allocating one object per input line
    FlowBean bean = new FlowBean();
    Text k = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // 1. Get one line of input
        String line = value.toString();
        // 2. Split it into fields (steps 1-2 together: input handling)
        String[] fields = line.split("\t");
        // 3. Extract the phone number and flows, and fill the bean (the business logic)
        String phoneNum = fields[1];
        long upFlow = Long.parseLong(fields[fields.length - 3]);
        long downFlow = Long.parseLong(fields[fields.length - 2]);
        // Creating a new object inside map() is wasteful: map() runs once per input
        // line, so per-line allocation would put heavy pressure on memory.
        // FlowBean bean = new FlowBean(upFlow, downFlow);
        bean.set(upFlow, downFlow);
        k.set(phoneNum);
        // 4. Emit the output (be explicit about what the key and the value are)
        // context.write(new Text(phoneNum), bean);
        context.write(k, bean);
    }
}
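To try the field extraction in isolation, here is a throwaway sketch (invented class name FieldDemo, reusing the invented sample line from above) that applies the same split logic as map():

public class FieldDemo {
    public static void main(String[] args) {
        // Hypothetical log line; the "\t" escapes are real tab characters at runtime
        String line = "1363157985066\t13726230503\t120.196.100.99\twww.example.com\t24\t27\t2481\t24681\t200";
        String[] fields = line.split("\t");
        String phoneNum = fields[1];
        long upFlow = Long.parseLong(fields[fields.length - 3]);
        long downFlow = Long.parseLong(fields[fields.length - 2]);
        // prints: 13726230503 -> up=2481, down=24681
        System.out.println(phoneNum + " -> up=" + upFlow + ", down=" + downFlow);
    }
}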
(4) The FlowReducer class
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
public class FlowReducer extends Reducer<Text, FlowBean, Text, FlowBean> {
    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context)
            throws IOException, InterruptedException {
        // 1. Sum the upstream and downstream traffic for this phone number
        long sum_upFlow = 0;
        long sum_downFlow = 0;
        for (FlowBean bean : values) {
            sum_upFlow += bean.getUpFlow();
            sum_downFlow += bean.getDownFlow();
        }
        // 2. Emit the result
        context.write(key, new FlowBean(sum_upFlow, sum_downFlow));
    }
}
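By the same object-reuse reasoning given in the mapper, the reducer could also avoid allocating a new FlowBean for every key. A hypothetical variant (the class name FlowReducerReuse is invented) looks like:

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
public class FlowReducerReuse extends Reducer<Text, FlowBean, Text, FlowBean> {
    // Reused across reduce() calls; safe because context.write() serializes immediately
    private final FlowBean result = new FlowBean();
    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context)
            throws IOException, InterruptedException {
        long sumUp = 0;
        long sumDown = 0;
        for (FlowBean bean : values) {
            sumUp += bean.getUpFlow();
            sumDown += bean.getDownFlow();
        }
        result.set(sumUp, sumDown);
        context.write(key, result);
    }
}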
(5) The ProvincePartitioner class
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;
public class ProvincePartitioner extends Partitioner<Text, FlowBean> {
    @Override
    public int getPartition(Text key, FlowBean value, int numPartitions) {
        // 1. Take the first two digits of the phone number
        String preNum = key.toString().substring(0, 2);
        int partition;
        // 2. Decide which partition the prefix belongs to
        if ("13".equals(preNum)) {
            partition = 0;
        } else if ("15".equals(preNum)) {
            partition = 1;
        } else {
            partition = 2;
        }
        return partition;
    }
}
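A behavior-equivalent, hypothetical way to write the prefix check uses String.startsWith(), which also avoids a StringIndexOutOfBoundsException if a malformed key is shorter than two characters (the class name SafeProvincePartitioner is invented):

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;
public class SafeProvincePartitioner extends Partitioner<Text, FlowBean> {
    @Override
    public int getPartition(Text key, FlowBean value, int numPartitions) {
        String phone = key.toString();
        // startsWith() never throws, even on short or empty keys
        if (phone.startsWith("13")) {
            return 0;
        } else if (phone.startsWith("15")) {
            return 1;
        } else {
            return 2;
        }
    }
}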
Once the code is written, package the project into a jar.
From the File menu, choose Project Structure.
Click OK.
Choose Build > Build Artifacts.
The packaged jar will be under the out directory.
Locate the jar file on disk.
Open a terminal in the jar's directory.
Before submitting the jar, start the Hadoop cluster.
Change to the Hadoop installation directory and run:
sbin/start-all.sh
Submit the jar with the hadoop jar command:
hadoop jar liulaingfl.jar /data/access.log /data/output/output1
Note: the input file must already be uploaded to HDFS when the job is submitted, but the output path must not be created in advance; if it already exists, the job fails with an error.
The results can also be viewed from the terminal:
hdfs dfs -cat /data/output/output1/part-r-00000
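With three reduce tasks there is one output file per partition. Listing the output directory should show _SUCCESS plus part-r-00000 (13* numbers), part-r-00001 (15* numbers), and part-r-00002 (everything else):

hdfs dfs -ls /data/output/output1

Each line of these files has the form: phone number, upstream sum, downstream sum, total flow, separated by tabs, as produced by FlowBean.toString().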