Data Preparation
Download the 2016 and 2017 data from the gsod directory at ftp://ftp.ncdc.noaa.gov/pub/data/ and place the files in any directory on the Linux machine.
1. Extract the two archives gsod_2016 and gsod_2017 with the following commands:
tar -xvf gsod_2016.tar
tar -xvf gsod_2017.tar
2. Use zcat to decompress the data files and merge them into a single file ncdc.txt, then confirm the file exists:
zcat *.gz > ncdc.txt
ll | grep ncdc
3. Remove the header lines, then check the result:
sed -i '/STN/d' ncdc.txt
head -12 ncdc.txt
4. Create a directory /in on HDFS and upload ncdc.txt to it:
./bin/hdfs dfs -mkdir /in
./bin/hdfs dfs -put /home/hadoop/gsod/ncdc.txt /in
./bin/hdfs dfs -ls /in    # confirm that ncdc.txt is now under /in
5. Prepare the following three Java source files:
MinTemperature.java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class MinTemperature {
    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Usage: MinTemperature <input path> <output path>");
            System.exit(-1);
        }
        Job job = new Job();
        job.setJarByClass(MinTemperature.class);
        job.setJobName("Min temperature");
        // input file on HDFS and output directory (the output directory must not already exist)
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(MinTemperatureMapper.class);
        job.setReducerClass(MinTemperatureReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
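On Hadoop 2.x the no-argument Job constructor used above is deprecated (it still compiles and runs, but with a warning). If you prefer the non-deprecated API, the job can be created with Job.getInstance instead; a minimal sketch of that variant, assuming Hadoop 2.x:
Configuration conf = new Configuration();              // needs: import org.apache.hadoop.conf.Configuration;
Job job = Job.getInstance(conf, "Min temperature");    // replaces new Job() plus setJobName(...)
job.setJarByClass(MinTemperature.class);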
MinTemperatureMapper.java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class MinTemperatureMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    // GSOD records use 9999.9 for a missing mean temperature; after Math.floor it becomes 9999
    private static final int MISSING = 9999;
    @Override
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        // characters 15-18 of the record hold the year (the start of the YEARMODA field)
        String year = line.substring(14, 18);
        // characters 25-30 hold the mean temperature (Fahrenheit), e.g. "44.9"
        int airTemperature = (int) Math.floor(Double.valueOf(line.substring(24, 30).trim()));
        if (airTemperature != MISSING) {
            context.write(new Text(year), new IntWritable(airTemperature));
        }
    }
}
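Note that Double.valueOf will throw a NumberFormatException if the merged file still contains a header line, a blank line, or a record shorter than 30 characters, and that exception would fail the whole map task. A defensive variant of the parse (a sketch, not part of the original code) simply skips such lines:
String field = line.length() >= 30 ? line.substring(24, 30).trim() : "";
int airTemperature;
try {
    airTemperature = (int) Math.floor(Double.valueOf(field));
} catch (NumberFormatException e) {
    return;  // skip malformed or header lines instead of failing the task
}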
MinTemperatureReducer.java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
public class MinTemperatureReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // scan every temperature recorded for this year and keep the smallest
        int minValue = Integer.MAX_VALUE;
        for (IntWritable value : values) {
            minValue = Math.min(minValue, value.get());
        }
        context.write(key, new IntWritable(minValue));
    }
}
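Because taking a minimum is associative and commutative, the same reducer class can optionally also be registered as a combiner in the driver, which cuts down the data shuffled from the mappers to the reducer. This is an optional extra, not part of the original steps:
job.setCombinerClass(MinTemperatureReducer.class);  // pre-aggregate per-year minima on the map side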
6. Once the three Java files are written, export them as a jar named MinTemperature.jar and save it under /usr/local/hadoop/myapp (any other directory also works).
7. Check that the jar exists in the myapp directory under /usr/local/hadoop, then start the Hadoop platform.
8. Run the jar with the following commands:
cd /usr/local/hadoop
./bin/hadoop jar ./myapp/MinTemperature.jar /in/ncdc.txt /ncdc
Note: /in/ncdc.txt is the file uploaded to HDFS beforehand; /ncdc is the output directory that exists only after the job has finished.
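If /ncdc is left over from an earlier run, the job refuses to start because MapReduce will not overwrite an existing output directory. One optional way to handle this (an assumption about your workflow, not part of the original driver) is to delete the output path in main() before submitting the job:
// needs: import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem;
Configuration conf = new Configuration();
Path outputPath = new Path(args[1]);
FileSystem fs = FileSystem.get(conf);
if (fs.exists(outputPath)) {
    fs.delete(outputPath, true);  // recursively remove the old output directory
}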
9. Check the job output.
The /ncdc output directory has now been created, but the results are not shown directly; run the following commands on Linux to view them:
cd /usr/local/hadoop
./bin/hdfs dfs -ls /ncdc
./bin/hdfs dfs -cat /ncdc/part-r-00000
The job ran successfully: the minimum temperature (in whole degrees, as floored by the mapper) is -112 for 2016 and -115 for 2017.