Computing mobile users' network traffic statistics
Requirement: for each phone number, compute the uplink traffic, downlink traffic, and total traffic (uplink + downlink).
From the source data we need to aggregate, per user (phone number), the uplink, downlink, and total traffic across all of that user's requests, and write the results to separate output files.
Sample data:
1363157985066 13726230503 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 24681 200
1363157995052 13826544101 5C-0E-8B-C7-F1-E0:CMCC 120.197.40.4 4 0 264 0 200
1363157991076 13926435656 20-10-7A-28-CC-0A:CMCC 120.196.100.99 2 4 132 1512 200
1363154400022 13926251106 5C-0E-8B-8B-B1-50:CMCC 120.197.40.4 4 0 240 0 200
1363157993044 18211575961 94-71-AC-CD-E6-18:CMCC-EASY 120.196.100.99 iface.qiyi.com 视频网站 15 12 1527 2106 200
1363157995074 84138413 5C-0E-8B-8C-E8-20:7DaysInn 120.197.40.4 122.72.52.12 20 16 4116 1432 200
1363157993055 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200
1363157995033 15920133257 5C-0E-8B-C7-BA-20:CMCC 120.197.40.4 sug.so.360.cn 信息安全 20 20 3156 2936 200
1363157983019 13719199419 68-A1-B7-03-07-B1:CMCC-EASY 120.196.100.82 4 0 240 0 200
1363157984041 13660577991 5C-0E-8B-92-5C-20:CMCC-EASY 120.197.40.4 s19.cnzz.com 站点统计 24 9 6960 690 200
1363157973098 15013685858 5C-0E-8B-C7-F7-90:CMCC 120.197.40.4 rank.ie.sogou.com 搜索引擎 28 27 3659 3538 200
1363157986029 15989002119 E8-99-C4-4E-93-E0:CMCC-EASY 120.196.100.99 www.umeng.com 站点统计 3 3 1938 180 200
1363157992093 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 15 9 918 4938 200
1363157986041 13480253104 5C-0E-8B-C7-FC-80:CMCC-EASY 120.197.40.4 3 3 180 180 200
1363157984040 13602846565 5C-0E-8B-8B-B6-00:CMCC 120.197.40.4 2052.flash2-http.qq.com 综合门户 15 12 1938 2910 200
1363157995093 13922314466 00-FD-07-A2-EC-BA:CMCC 120.196.100.82 img.qfc.cn 12 12 3008 3720 200
1363157982040 13502468823 5C-0A-5B-6A-0B-D4:CMCC-EASY 120.196.100.99 y0.ifengimg.com 综合门户 57 102 7335 110349 200
1363157986072 18320173382 84-25-DB-4F-10-1A:CMCC-EASY 120.196.100.99 input.shouji.sogou.com 搜索引擎 21 18 9531 2412 200
1363157990043 13925057413 00-1F-64-E1-E6-9A:CMCC 120.196.100.55 t3.baidu.com 搜索引擎 69 63 11058 48243 200
1363157988072 13760778710 00-FD-07-A4-7B-08:CMCC 120.196.100.82 2 2 120 120 200
1363157985066 13726238888 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 24681 200
1363157993055 13560436666 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200
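One parsing subtlety in this data: some rows contain a URL and a category column while others do not, so the traffic fields must be indexed from the end of the row rather than from the front. A minimal plain-Java sketch of that trick (no Hadoop required; the class name `FieldIndexDemo` is illustrative, and rows are split on whitespace here for readability, whereas the job itself splits on tabs):

```java
public class FieldIndexDemo {
    // extract phone number, uplink, downlink, and total from one raw row;
    // the last three columns (uplink, downlink, HTTP status) are always in
    // fixed positions relative to the END of the row
    static String summarize(String line) {
        String[] f = line.split("\\s+");
        long up = Long.parseLong(f[f.length - 3]);   // uplink bytes
        long down = Long.parseLong(f[f.length - 2]); // downlink bytes
        return f[1] + "\t" + up + "\t" + down + "\t" + (up + down);
    }

    public static void main(String[] args) {
        // a 10-field row (with URL) and a 9-field row (without), copied
        // from the sample data above
        String withUrl = "1363157985066 13726230503 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 24681 200";
        String withoutUrl = "1363157995052 13826544101 5C-0E-8B-C7-F1-E0:CMCC 120.197.40.4 4 0 264 0 200";
        System.out.println(summarize(withUrl));    // 13726230503  2481  24681  27162
        System.out.println(summarize(withoutUrl)); // 13826544101  264   0     264
    }
}
```

Indexing from the end works for both row shapes, which is exactly why the mapper below uses `wordsInRow.length - 3` and `wordsInRow.length - 2`.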
Implementation steps:
1: Add the pom.xml dependencies and the log4j.properties logging configuration
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.6.5</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.6.5</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.6.5</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>2.6.5</version>
</dependency>
The logging configuration, log4j.properties:
log4j.rootLogger=info,stdout,logFile
# console output
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d{ABSOLUTE}] %5p %c{1}:%L - %m%n
# system log file output
log4j.appender.logFile=org.apache.log4j.DailyRollingFileAppender
log4j.appender.logFile.File=logs/mad_ccg.log
log4j.appender.logFile.DatePattern='.'yyyy-MM-dd
log4j.appender.logFile.layout=org.apache.log4j.PatternLayout
log4j.appender.logFile.layout.ConversionPattern=[%d{ABSOLUTE}] %5p %c{1}:%L - %m%n
# log level for this project's packages
log4j.logger.com.ctc.email=DEBUG
2: Since each record carries uplink and downlink traffic that must be summed into a total, design a bean class to hold the three values:
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

public class FlowBean implements WritableComparable<FlowBean> {
    // 1: uplink, downlink, and total traffic fields
    private long upLink;
    private long downLink;
    private long totalLink;

    // Hadoop needs a public no-arg constructor to instantiate the bean
    // during deserialization
    public FlowBean() {
    }

    // set uplink and downlink, deriving the total in the same call
    public void set(long upLink, long downLink) {
        this.upLink = upLink;
        this.downLink = downLink;
        this.totalLink = upLink + downLink;
    }

    // getters for the three traffic fields
    public long getDownLink() {
        return downLink;
    }

    public long getUpLink() {
        return upLink;
    }

    public long getTotalLink() {
        return totalLink;
    }

    // overridden method: serialization; the field order here must match readFields()
    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeLong(downLink);
        dataOutput.writeLong(upLink);
        dataOutput.writeLong(totalLink);
    }

    // overridden method: deserialization, reading the fields back in the same order
    @Override
    public void readFields(DataInput dataInput) throws IOException {
        this.downLink = dataInput.readLong();
        this.upLink = dataInput.readLong();
        this.totalLink = dataInput.readLong();
    }

    // overridden method: sort descending by total traffic; on a tie, fall
    // back to hashCode so distinct records are not treated as equal keys
    @Override
    public int compareTo(FlowBean o) {
        int byTotal = Long.compare(o.getTotalLink(), this.getTotalLink());
        return byTotal != 0 ? byTotal : Integer.compare(o.hashCode(), this.hashCode());
    }

    // custom toString so TextOutputFormat writes tab-separated columns
    @Override
    public String toString() {
        return this.upLink + "\t" + this.downLink + "\t" + this.totalLink;
    }
}
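The descending-by-total ordering that compareTo implements can be checked outside Hadoop. A small plain-Java sketch (the `CompareDemo` class is hypothetical, not part of the job):

```java
import java.util.Arrays;

public class CompareDemo {
    // the same comparison rule as FlowBean.compareTo, on bare totals:
    // Long.compare avoids the int-overflow risk of subtracting two longs
    // and casting the difference to int
    static int compare(long totalA, long totalB) {
        return Long.compare(totalB, totalA); // descending order
    }

    public static void main(String[] args) {
        // some total-traffic values as they might arrive at the sort
        Long[] totals = {360L, 27162L, 264L, 2070L};
        Arrays.sort(totals, (a, b) -> compare(a, b));
        System.out.println(Arrays.toString(totals)); // [27162, 2070, 360, 264]
    }
}
```

Because MapReduce sorts map output by key, putting this logic in the key class is what makes the second job below sort records by total traffic.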
3: Create the Mapper, the Reducer, and the driver class; this first job cleans and aggregates the raw data.
import java.io.IOException;
import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FlowProgram {
    // the Mapper class
    public static class ProgramMapper extends Mapper<LongWritable, Text, Text, FlowBean> {
        // one reusable FlowBean instance per mapper
        FlowBean bean = new FlowBean();

        // map() cleans each raw record.
        // Raw record:    1363157985066 13726230503 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 24681 200
        // After cleaning: 13726230503  2481  24681  2481+24681
        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String[] wordsInRow = StringUtils.split(value.toString(), "\t");
            // index from the end of the row: the URL/category columns are
            // optional, but uplink and downlink are always the 3rd- and
            // 2nd-to-last fields
            bean.set(Long.parseLong(wordsInRow[wordsInRow.length - 3]),
                     Long.parseLong(wordsInRow[wordsInRow.length - 2]));
            context.write(new Text(wordsInRow[1]), bean);
        }
    }

    public static class ProgramReduce extends Reducer<Text, FlowBean, Text, FlowBean> {
        // one reusable FlowBean instance per reducer
        FlowBean sumbean = new FlowBean();

        // sum, per phone number, the traffic produced at different times
        @Override
        protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
            long totalUpLink = 0;
            long totalDownLink = 0;
            // The key point: reduce() is called once per key with all of that
            // key's values grouped together. The key stays fixed while we
            // iterate over the values, accumulating each value's uplink and
            // downlink traffic under that one key.
            for (FlowBean next : values) {
                // traffic produced by this number in one time window
                totalUpLink += next.getUpLink();
                totalDownLink += next.getDownLink();
            }
            // finally, set() derives the total from the accumulated sums
            sumbean.set(totalUpLink, totalDownLink);
            context.write(key, sumbean);
        }
    }

    // the driver: wire up all the job settings
    public static void main(String[] args) throws Exception {
        // 1: create the configuration and the job
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // 2: set the jar driver class
        job.setJarByClass(FlowProgram.class);
        // 3: set the Mapper and Reducer classes
        job.setMapperClass(ProgramMapper.class);
        job.setReducerClass(ProgramReduce.class);
        // 4: set the output types; when the map output types equal the final
        // output types, the map output types may be omitted
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);
        // job.setInputFormatClass(): unnecessary when using the default input format
        // 5: set the input and output paths; delete the output path if it exists
        FileInputFormat.setInputPaths(job, new Path("F:\\flowsum\\input"));
        Path outPath = new Path("F:\\flowsum\\out");
        FileOutputFormat.setOutputPath(job, outPath);
        deleteIfExists(conf, outPath);
        // 6: run the job and report success
        boolean flag = job.waitForCompletion(true);
        System.out.println(flag);
    }

    private static void deleteIfExists(Configuration conf, Path outPath) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(outPath)) {
            fs.delete(outPath, true); // recursive delete
        }
    }
}
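What the shuffle and the reduce() loop accomplish together can be imitated in plain Java with a map keyed by phone number: every (up, down) pair for the same key is accumulated, just as reduce() sums its grouped values. A sketch (the `ReduceSketch` class is hypothetical; the sample values are taken from the rows for 13560439658 and 13726230503 in the data set above):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ReduceSketch {
    // group (phone, up, down) records by phone and accumulate the sums,
    // mirroring what the reduce() loop does per key
    static Map<String, long[]> aggregate(String[][] records) {
        Map<String, long[]> sums = new LinkedHashMap<>();
        for (String[] r : records) {
            long[] acc = sums.computeIfAbsent(r[0], k -> new long[2]);
            acc[0] += Long.parseLong(r[1]); // uplink
            acc[1] += Long.parseLong(r[2]); // downlink
        }
        return sums;
    }

    public static void main(String[] args) {
        // (phone, up, down) records as they would arrive at the reducer
        String[][] records = {
            {"13560439658", "1116", "954"},
            {"13560439658", "918", "4938"},
            {"13726230503", "2481", "24681"},
        };
        aggregate(records).forEach((phone, acc) ->
            System.out.println(phone + "\t" + acc[0] + "\t" + acc[1] + "\t" + (acc[0] + acc[1])));
        // 13560439658  2034  5892  7926
        // 13726230503  2481  24681 27162
    }
}
```

The difference in the real job is only one of scale: the grouping happens across machines during the shuffle instead of in a local map.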
4: Partition the data and sort it in descending order of total traffic.
import java.io.IOException;
import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FlowProgram1 {
    // the Mapper class
    public static class ProgramMapper extends Mapper<LongWritable, Text, FlowBean, Text> {
        // one reusable FlowBean instance per mapper
        FlowBean bean = new FlowBean();

        // input record (the output of the first job): 13480253104  180  180  360
        // The bean is emitted as the KEY because MapReduce sorts map output by
        // key, and the sort logic (compareTo) lives in FlowBean.
        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String[] wordsInRow = StringUtils.split(value.toString(), "\t");
            bean.set(Long.parseLong(wordsInRow[1]),
                     Long.parseLong(wordsInRow[2]));
            // key = bean (drives the sort), value = phone number
            context.write(bean, new Text(wordsInRow[0]));
        }
    }

    // after the sort the bean leads each record, so the reducer swaps key and
    // value back to put the phone number first again
    public static class ProgramReduce extends Reducer<FlowBean, Text, Text, FlowBean> {
        @Override
        protected void reduce(FlowBean flowBean, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            // if several phone numbers compare as equal, emit each of them
            for (Text phoneNo : values) {
                context.write(phoneNo, flowBean);
            }
        }
    }

    // the driver: wire up all the job settings
    public static void main(String[] args) throws Exception {
        // 1: create the configuration and the job
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // 2: set the jar driver class
        job.setJarByClass(FlowProgram1.class);
        // 3: set the Mapper and Reducer classes
        job.setMapperClass(ProgramMapper.class);
        job.setReducerClass(ProgramReduce.class);
        // set the Partitioner implementation for this job
        job.setPartitionerClass(Flowpartitioner.class);
        // set the number of reduce tasks to match the 5 partitions
        job.setNumReduceTasks(5);
        // 4: set the map and reduce output types; the map output types are
        // required here because they differ from the final output types
        job.setMapOutputKeyClass(FlowBean.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);
        // job.setInputFormatClass(): unnecessary when using the default input format
        // 5: set the input and output paths; delete the output path if it exists
        FileInputFormat.setInputPaths(job, new Path("F:\\flowsum\\out"));
        Path outPath = new Path("F:\\flowsum\\out1");
        FileOutputFormat.setOutputPath(job, outPath);
        deleteIfExists(conf, outPath);
        // 6: run the job and report success
        boolean flag = job.waitForCompletion(true);
        System.out.println(flag);
    }

    private static void deleteIfExists(Configuration conf, Path outPath) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(outPath)) {
            fs.delete(outPath, true); // recursive delete
        }
    }
}
5: The partitioner class used above: extend Partitioner.
import java.util.HashMap;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class Flowpartitioner extends Partitioner<FlowBean, Text> {
    // map from phone-number prefix to partition number
    private static HashMap<String, Integer> cache = new HashMap<String, Integer>();

    // the static initializer block runs when the class is loaded, so the
    // prefix table is ready before the first call to getPartition(); that
    // method only has to extract the prefix and look it up here
    static {
        cache.put("135", 0);
        cache.put("137", 1);
        cache.put("139", 2);
        cache.put("159", 3);
        // all remaining numbers go into the fifth file (partition 4)
    }

    /**
     * The generic types of this class must match the map output types.
     * @param flowBean the uplink, downlink, and total traffic (map output key)
     * @param text the phone number (map output value)
     * @param numPartitions the number of reduce tasks
     * @return the number of the partition (output file) this record goes to
     */
    @Override
    public int getPartition(FlowBean flowBean, Text text, int numPartitions) {
        // take only the first three digits of the phone number
        String sign = text.toString().substring(0, 3);
        // prefixes not in the table go to partition 4; known prefixes use
        // their table entry
        Integer partition = cache.get(sign);
        return partition == null ? 4 : partition;
    }
}
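The prefix lookup in getPartition() is plain Java and can be exercised without a cluster. This sketch (the `PartitionDemo` class is hypothetical) reproduces the same table and default bucket:

```java
import java.util.HashMap;
import java.util.Map;

public class PartitionDemo {
    // same prefix-to-partition table as Flowpartitioner
    private static final Map<String, Integer> CACHE = new HashMap<>();
    static {
        CACHE.put("135", 0);
        CACHE.put("137", 1);
        CACHE.put("139", 2);
        CACHE.put("159", 3);
    }

    // unknown prefixes fall through to partition 4 (the fifth file)
    static int partitionFor(String phone) {
        return CACHE.getOrDefault(phone.substring(0, 3), 4);
    }

    public static void main(String[] args) {
        System.out.println(partitionFor("13560439658")); // 0
        System.out.println(partitionFor("13726230503")); // 1
        System.out.println(partitionFor("18211575961")); // 4 (no "182" entry)
    }
}
```

With five reduce tasks configured in the driver, these return values decide which of the five part-r files each phone number's record lands in.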
Summary:
By default there is a single reduce task and therefore a single partition. Calling setNumReduceTasks(0) removes the reduce phase entirely, and the output files are named part-m-xxxx.
If there are fewer reduce tasks than the partitions the partitioner produces, the job fails with an "Illegal partition" error.
If there are more reduce tasks than partitions, the extra reducers produce empty output files.
Note: the default number of reduce tasks is 1. If you want aggregation, you must supply a Reducer implementation via setReducerClass(); without it, Hadoop falls back to the identity Reducer and the map output passes through unaggregated. Setting the number of reduce tasks to 0 means no global aggregation at all: the output directory then contains the raw map output key-value pairs, one file per map task.
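The cases above correspond to driver settings like these (a non-runnable fragment; `job` is assumed to be the Job instance from the drivers above):

```java
// map-only job: no reduce phase, output files named part-m-xxxx
job.setNumReduceTasks(0);

// fewer reduce tasks than the 5 partitions Flowpartitioner can return
// -> "Illegal partition for ..." at runtime
job.setNumReduceTasks(3);

// more reduce tasks than partitions -> the extra part-r files are empty
job.setNumReduceTasks(8);
```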