Weather Data Case Study
Finding the highest temperature for each year
Sample data:
0029029070999991901010106004+64333+023450FM-12+000599999V0202701N015919999999N0000001N9-00781+99999102001ADDGF108991999999999999999999
1. Analyzing the data
- Offsets 15–18 (0-based) hold the year
- Offsets 87–91 hold the signed air temperature, in tenths of a degree Celsius
- Offset 92 holds the quality code
- A temperature value of 9999 marks a missing or erroneous reading; a quality code of 0, 1, 4, 5, or 9 marks the reading as valid (the slicing rules are checked in the standalone sketch below)
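Before wiring this into MapReduce, the extraction logic can be verified with a tiny standalone program. This is only a sketch, not part of the job; the class name ParseDemo is illustrative.
public class ParseDemo {
    public static void main(String[] args) {
        String line = "0029029070999991901010106004+64333+023450FM-12+000599999V0202701N015919999999N0000001N9-00781+99999102001ADDGF108991999999999999999999";
        String year = line.substring(15, 19);    // "1901"
        String temp = line.substring(87, 92);    // "-0078" = -7.8 °C (tenths of a degree)
        String quality = line.substring(92, 93); // "1" -> one of the valid codes
        int t = Integer.parseInt(temp);          // parseInt accepts a leading sign
        boolean valid = Math.abs(t) != 9999 && quality.matches("[01459]");
        System.out.println(year + " " + t + " valid=" + valid); // prints: 1901 -78 valid=true
    }
}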
2. Writing the program
2.1 Writing the Mapper
package temperature;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TempMapper extends Mapper<LongWritable, Text, IntWritable, IntWritable> {

    // Reusable output objects; reuse is safe because write() serializes immediately.
    private final IntWritable year = new IntWritable();
    private final IntWritable temp = new IntWritable();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String yearStr = line.substring(15, 19);  // year field
        String tempStr = line.substring(87, 92);  // signed temperature, tenths of °C
        String quality = line.substring(92, 93);  // quality code

        int y = Integer.parseInt(yearStr);
        int t = Integer.parseInt(tempStr);        // parseInt accepts a leading '+' or '-'

        // Keep only readings that are not the 9999 "missing" sentinel
        // and whose quality code marks them as valid.
        if (Math.abs(t) != 9999 && quality.matches("[01459]")) {
            year.set(y);
            temp.set(t);
            context.write(year, temp);
        }
    }
}
- Get the value and convert it to a String
- Take substrings to extract the year, the temperature, and the quality code
- Convert the year and temperature strings to integers
- Filter out invalid readings (a defensive variant is sketched after this list)
- Write each valid (year, temperature) pair to the output
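The mapper above assumes every input line is well-formed. A hedged, defensive variant, not part of the original text (the SafeTempMapper name and the counter labels are illustrative), skips short or non-numeric records and counts them instead of letting the task fail:
package temperature;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SafeTempMapper extends Mapper<LongWritable, Text, IntWritable, IntWritable> {

    private final IntWritable year = new IntWritable();
    private final IntWritable temp = new IntWritable();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        if (line.length() < 93) {              // too short to hold all three fields
            context.getCounter("Temperature", "MALFORMED").increment(1);
            return;
        }
        try {
            int y = Integer.parseInt(line.substring(15, 19));
            int t = Integer.parseInt(line.substring(87, 92));
            String quality = line.substring(92, 93);
            if (Math.abs(t) != 9999 && quality.matches("[01459]")) {
                year.set(y);
                temp.set(t);
                context.write(year, temp);
            }
        } catch (NumberFormatException e) {    // non-numeric year or temperature
            context.getCounter("Temperature", "MALFORMED").increment(1);
        }
    }
}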
2.2 Writing the Reducer
package temperature;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Reducer;

public class TempReducer extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {

    private final IntWritable maxTemp = new IntWritable();

    @Override
    protected void reduce(IntWritable key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Scan all temperatures for this year and keep the largest.
        int max = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            max = Math.max(max, value.get());
        }
        maxTemp.set(max);
        context.write(key, maxTemp);
    }
}
- For each year key, scan the associated collection of temperatures and keep the maximum (the note after this list shows how the same class can double as a combiner)
- Write the year and its maximum temperature to the output
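Because taking a maximum is commutative and associative, this same TempReducer can optionally be reused as a combiner to pre-aggregate map output before the shuffle. This is not part of the original job; enabling it would take one extra line in the Driver of section 2.3:
// Optional: reuse the Reducer as a combiner to shrink shuffle traffic.
job.setCombinerClass(TempReducer.class);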
2.3 Writing the Driver
package temperature;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TempDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        // Run in-process with LocalJobRunner instead of submitting to a cluster.
        conf.set("mapreduce.framework.name", "local");

        // MapReduce refuses to start if the output directory already exists,
        // so remove any leftovers from a previous run.
        Path output = new Path("file:///D:/out");
        FileSystem fs = output.getFileSystem(conf);
        if (fs.exists(output)) {
            fs.delete(output, true);
        }

        Job job = Job.getInstance(conf);
        job.setJobName("temp");
        job.setJarByClass(TempDriver.class);
        job.setMapperClass(TempMapper.class);
        job.setReducerClass(TempReducer.class);
        job.setMapOutputKeyClass(IntWritable.class);
        job.setMapOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path("file:///D:/temp"));
        FileOutputFormat.setOutputPath(job, output);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Only the Mapper's output types need to be declared here. The Mapper's input is <LongWritable, Text> while its output is <IntWritable, IntWritable>, so the framework cannot assume the types match and the map output types must be set explicitly. The Reducer's input is exactly the Mapper's output, and its own output is also <IntWritable, IntWritable>; since the default TextOutputFormat simply calls toString() on keys and values, the job runs even without declaring the final output types.
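That said, declaring the reduce output types explicitly is common practice and costs two lines. A sketch of the optional calls, placed anywhere before waitForCompletion:
// Declare the reduce output types explicitly (matches TempReducer's signature).
job.setOutputKeyClass(IntWritable.class);
job.setOutputValueClass(IntWritable.class);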
3. Running the program
For convenience, the job is run locally.
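With mapreduce.framework.name set to local in the Driver, the job runs in a single JVM via LocalJobRunner, so it can be started simply by running TempDriver.main from the IDE. Alternatively, assuming the classes are packaged into a jar (the name temperature.jar is an assumption, not from the original text), it can be launched from a shell with Hadoop installed:
hadoop jar temperature.jar temperature.TempDriver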
4. Results
1901 317
1902 244
Temperatures are stored in tenths of a degree Celsius, so the 1901 maximum of 317 corresponds to 31.7 °C.