The concept of counters:
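A counter is a lightweight statistic that a MapReduce job accumulates across all of its tasks and reports back to the client when the job finishes. Hadoop maintains a set of built-in counters (records read, bytes written, and so on), and a job can also define its own, identified either by a (group, name) string pair, as the Mapper below does, or by an enum constant. A minimal sketch of the enum style (the enum name CleanCounter is made up here for illustration, not part of the example code):

// Hypothetical enum; any enum constant can serve as a counter key.
enum CleanCounter { VALID, INVALID }
// Inside map() or reduce(), increment it via the task context:
context.getCounter(CleanCounter.VALID).increment(1);

Both styles end up in the same counter report; the string form just saves declaring an enum.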
Data cleaning (ETL):
Before running the core business MapReduce job, the data usually has to be cleaned first to remove records that do not meet the requirements. The cleaning step typically only needs a Mapper; no Reducer is required.
Hands-on case (a simple example):
1. Requirement
Remove log lines that contain 11 or fewer fields (fields are separated by spaces).
(1) Input data (lines like the following; the full set has over ten thousand, not all listed here)
194.237.142.21 - - [18/Sep/2013:06:49:18 +0000] "GET /wp-content/uploads/2013/07/rstudio-git3.png HTTP/1.1" 304 0 "-" "Mozilla/4.0 (compatible;)"
183.49.46.228 - - [18/Sep/2013:06:49:23 +0000] "-" 400 0 "-" "-"
163.177.71.12 - - [18/Sep/2013:06:49:33 +0000] "HEAD / HTTP/1.1" 200 20 "-" "DNSPod-Monitor/1.0"
163.177.71.12 - - [18/Sep/2013:06:49:36 +0000] "HEAD / HTTP/1.1" 200 20 "-" "DNSPod-Monitor/1.0"
101.226.68.137 - - [18/Sep/2013:06:49:42 +0000] "HEAD / HTTP/1.1" 200 20 "-" "DNSPod-Monitor/1.0"
101.226.68.137 - - [18/Sep/2013:06:49:45 +0000] "HEAD / HTTP/1.1" 200 20 "-" "DNSPod-Monitor/1.0"
(2) Expected output data
Every remaining line has more than 11 fields.
2. Requirement analysis
Filter the input records against this rule in the Map phase; lines that fail it are simply dropped.
3. Implementation code
Mapper:
package com.mapreduce.weblog;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LogMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    Text k = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws java.io.IOException, InterruptedException {
        // 1. Get one line of input
        String line = value.toString();
        // 2. Parse the log line
        boolean result = paraseLog(line, context);
        // 3. If the line is invalid, skip it
        if (!result) {
            return;
        }
        // 4. Set the key
        k.set(line);
        // 5. Write the record out
        context.write(k, NullWritable.get());
    }

    private boolean paraseLog(String line, Context context) {
        // 1. Split the line into fields
        String[] fields = line.split(" ");
        // 2. A line with more than 11 fields is valid
        if (fields.length > 11) {
            // Custom counter: group "map", counter "true"
            context.getCounter("map", "true").increment(1);
            return true;
        } else {
            context.getCounter("map", "false").increment(1);
            return false;
        }
    }
}
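After the job completes, these two counters appear in the client's console output under the custom group "map", alongside Hadoop's built-in counters, giving a quick check of how many lines were kept versus dropped.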
Driver:
package com.mapreduce.weblog;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LogDriver {

    public static void main(String[] args) throws Exception {
        // Adjust the input/output paths to match your own machine
        args = new String[] { "D:\\hadoop-2.7.1\\winMR\\WebLog\\input", "D:\\hadoop-2.7.1\\winMR\\WebLog\\output1" };
        // 1. Get the job instance
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // 2. Set the jar
        job.setJarByClass(LogDriver.class);
        // 3. Attach the Mapper
        job.setMapperClass(LogMapper.class);
        // 4. Set the final output types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        // Map-only job: set the number of reduce tasks to 0
        job.setNumReduceTasks(0);
        // 5. Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // 6. Submit the job
        job.waitForCompletion(true);
    }
}
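If the driver itself needs to act on the cleaning statistics, the counters can also be read back from the Job object after it finishes. A minimal sketch of what step 6 in main could look like instead (the variable names are illustrative; the (group, name) pairs must match the ones used in LogMapper):

// 6. Submit the job, then read the custom counters back
boolean success = job.waitForCompletion(true);
long valid = job.getCounters().findCounter("map", "true").getValue();
long invalid = job.getCounters().findCounter("map", "false").getValue();
System.out.println("valid=" + valid + ", invalid=" + invalid);
System.exit(success ? 0 : 1);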