MaxTemperature

For the data download, see: https://blog.csdn.net/jc_benben/article/details/86020114

Concatenate the .gz files into a single file with zcat *.gz > sample.txt, or hand-edit a test file with the following content:

0067011990999991950051507004+68750+023550FM-12+038299999V0203301N00671220001CN9999999N9+00001+99999999999
0043011990999991950051512004+68750+023550FM-12+038299999V0203201N00671220001CN9999999N9+00221+99999999999
0043011990999991950051518004+68750+023550FM-12+038299999V0203201N00261220001CN9999999N9-00111+99999999999
0043012650999991949032412004+62300+010750FM-12+048599999V0202701N00461220001CN0500001N9+01111+99999999999
0043012650999991949032418004+62300+010750FM-12+048599999V0202701N00461220001CN0500001N9+00781+99999999999
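
For reference, each record is a fixed-width NCDC weather line. The fields this job relies on (0-based character offsets, matching the substring calls in the mapper below) are: characters 15-19 for the year, characters 87-92 for the signed air temperature in tenths of a degree Celsius, and character 92 for the quality code.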

How it works:

These lines are presented to the map function as key/value pairs (the key is the byte offset of the line within the file; only the value is used here):

0067011990999991950051507004…9999999N9+00001+99999…
0043011990999991950051512004…9999999N9+00221+99999…
0043011990999991950051518004…9999999N9-00111+99999…
0043012650999991949032412004…0500001N9+01111+99999…
0043012650999991949032418004…0500001N9+00781+99999…

The map function simply extracts the year and the air temperature (marked in red and black respectively in the original post, though the highlighting is lost in this copy) and emits them; the temperature has already been converted to an integer:
(1950,0)
(1950,22)
(1950,-11)
(1950,111)
(1950,78)
The output from the map function is processed by the MapReduce framework before being sent to the reduce function. This processing sorts and groups the key/value pairs by key. So, continuing the example, the reduce function sees the following input:
(1949,[111,78])
(1950,[0,22,-11])
Each year appears followed by the list of all its temperature readings. All the reduce function has to do now is iterate through the list and pick out the maximum reading:
(1949,111)
(1950,22)
This is the final output: the highest recorded global temperature for each year.
Workflow diagram: (figure omitted from this copy)

Use Eclipse to edit the required source files and package them into a JAR (the original post's wizard screenshots are omitted here):

Click Next through the export wizard to generate the JAR. The contents of the three source files are:

MaxTemperature.java:

package MaxTemperature;

//cc MaxTemperature Application to find the maximum temperature in the weather dataset
//vv MaxTemperature
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MaxTemperature {

  public static void main(String[] args) throws Exception {
    if (args.length != 2) {
      System.err.println("Usage: MaxTemperature <input path> <output path>");
      System.exit(-1);
    }

    Job job = new Job();
    job.setJarByClass(MaxTemperature.class);
    job.setJobName("Max temperature");

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.setMapperClass(MaxTemperatureMapper.class);
    job.setReducerClass(MaxTemperatureReducer.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
//^^ MaxTemperature
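
A side note: the no-argument Job constructor used above is deprecated in Hadoop 2.x, though it still works. The modern equivalent (a small sketch; Configuration is org.apache.hadoop.conf.Configuration, and this also replaces the separate setJobName call) is:

Job job = Job.getInstance(new Configuration(), "Max temperature");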

 

MaxTemperatureMapper.java:

package MaxTemperature;
// cc MaxTemperatureMapper Mapper for maximum temperature example
// vv MaxTemperatureMapper
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MaxTemperatureMapper
  extends Mapper<LongWritable, Text, Text, IntWritable> {

  private static final int MISSING = 9999;
  
  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    
    String line = value.toString();
    String year = line.substring(15, 19);
    int airTemperature;
    if (line.charAt(87) == '+') { // parseInt doesn't like leading plus signs
      airTemperature = Integer.parseInt(line.substring(88, 92));
    } else {
      airTemperature = Integer.parseInt(line.substring(87, 92));
    }
    String quality = line.substring(92, 93);
    if (airTemperature != MISSING && quality.matches("[01459]")) {
      context.write(new Text(year), new IntWritable(airTemperature));
    }
  }
}
// ^^ MaxTemperatureMapper
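
As a quick sanity check, the same parsing logic can be run outside Hadoop. A minimal sketch in plain Java (ParseCheck is a hypothetical helper class, not part of the job), applied to the first sample record:

public class ParseCheck {
  public static void main(String[] args) {
    // First record from sample.txt above.
    String line = "0067011990999991950051507004+68750+023550"
        + "FM-12+038299999V0203301N00671220001CN9999999N9+00001+99999999999";
    String year = line.substring(15, 19);            // "1950"
    int airTemperature = (line.charAt(87) == '+')
        ? Integer.parseInt(line.substring(88, 92))   // skip the leading '+'
        : Integer.parseInt(line.substring(87, 92));  // a leading '-' parses fine
    String quality = line.substring(92, 93);         // "1" -> matches [01459]
    if (airTemperature != 9999 && quality.matches("[01459]")) {
      System.out.println("(" + year + ", " + airTemperature + ")"); // (1950, 0)
    }
  }
}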

MaxTemperatureReducer.java:

package MaxTemperature;

//cc MaxTemperatureReducer Reducer for maximum temperature example
//vv MaxTemperatureReducer
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxTemperatureReducer
    extends Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {

    int maxValue = Integer.MIN_VALUE;
    for (IntWritable value : values) {
      maxValue = Math.max(maxValue, value.get());
    }
    context.write(key, new IntWritable(maxValue));
  }
}
//^^ MaxTemperatureReducer
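
Because taking a maximum is associative and commutative, this reducer can also be reused as a combiner to shrink the data shuffled from mappers to reducers. This optimization comes from the book rather than the original post; it is a single extra line in the driver:

job.setCombinerClass(MaxTemperatureReducer.class);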

Dependency JARs: the original post listed these in a screenshot (omitted here); broadly, they are the Hadoop client libraries such as hadoop-common and hadoop-mapreduce-client-core.
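
As a command-line alternative to the Eclipse export, a sketch of compiling and packaging (this assumes the hadoop command is on the PATH so hadoop classpath can supply the dependency JARs, and that the three .java files sit in a MaxTemperature/ directory matching their package):

mkdir -p classes
javac -classpath "$(hadoop classpath)" -d classes MaxTemperature/*.java
jar cf MaxTemperature1.jar -C classes .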

Upload the test file to HDFS:

hadoop fs -copyFromLocal /root/hadoop/hadoop-book-master/appc/src/main/sh/1950/sample.txt /user/root/
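
Optionally, confirm the upload before running the job:

hadoop fs -ls /user/root/sample.txt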

Run the job. Note that hadoop jar takes the main class from the JAR's manifest here (set during the Eclipse export), and that the output directory must not already exist (remove it with hadoop fs -rm -r output when re-running):

[root@centos7 1950]# hadoop jar MaxTemperature1.jar sample.txt output
2019-01-10 15:52:59,021 INFO  [main] client.RMProxy (RMProxy.java:newProxyInstance(133)) - Connecting to ResourceManager at localhost/127.0.0.1:8032
2019-01-10 15:52:59,910 WARN  [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadResourcesInternal(142)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2019-01-10 15:53:00,832 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(289)) - Total input files to process : 1
2019-01-10 15:53:01,156 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(204)) - number of splits:1
2019-01-10 15:53:01,466 INFO  [main] Configuration.deprecation (Configuration.java:logDeprecation(1297)) - yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2019-01-10 15:53:04,169 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(300)) - Submitting tokens for job: job_1547096593230_0005
2019-01-10 15:53:12,864 INFO  [main] impl.YarnClientImpl (YarnClientImpl.java:submitApplication(290)) - Submitted application application_1547096593230_0005
2019-01-10 15:53:13,213 INFO  [main] mapreduce.Job (Job.java:submit(1574)) - The url to track the job: http://centos7:8088/proxy/application_1547096593230_0005/
2019-01-10 15:53:13,227 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1619)) - Running job: job_1547096593230_0005
2019-01-10 15:53:58,417 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1640)) - Job job_1547096593230_0005 running in uber mode : false
2019-01-10 15:53:58,420 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1647)) -  map 0% reduce 0%
2019-01-10 15:54:36,785 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1647)) -  map 100% reduce 0%
2019-01-10 15:54:52,027 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1647)) -  map 100% reduce 100%
2019-01-10 15:54:54,121 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1658)) - Job job_1547096593230_0005 completed successfully
2019-01-10 15:54:54,364 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1665)) - Counters: 49
        File System Counters
                FILE: Number of bytes read=61
                FILE: Number of bytes written=396055
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=632
                HDFS: Number of bytes written=17
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=30679
                Total time spent by all reduces in occupied slots (ms)=12439
                Total time spent by all map tasks (ms)=30679
                Total time spent by all reduce tasks (ms)=12439
                Total vcore-milliseconds taken by all map tasks=30679
                Total vcore-milliseconds taken by all reduce tasks=12439
                Total megabyte-milliseconds taken by all map tasks=31415296
                Total megabyte-milliseconds taken by all reduce tasks=12737536
        Map-Reduce Framework
                Map input records=5
                Map output records=5
                Map output bytes=45
                Map output materialized bytes=61
                Input split bytes=102
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=61
                Reduce input records=5
                Reduce output records=2
                Spilled Records=10
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=694
                CPU time spent (ms)=6120
                Physical memory (bytes) snapshot=421990400
                Virtual memory (bytes) snapshot=3830411264
                Total committed heap usage (bytes)=267911168
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=530
        File Output Format Counters 
                Bytes Written=17
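
Note how the counters line up with the walkthrough above: Map input records=5 (the five sample lines), Reduce input groups=2 (the years 1949 and 1950), and Reduce output records=2 (one maximum per year).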

View the results:

[root@centos7 hadoop-2.9.2]#  hadoop fs -cat /user/root/output/part-r-00000
1949    111
1950    22
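
Since NCDC readings are stored in tenths of a degree Celsius, these maxima correspond to 11.1°C for 1949 and 2.2°C for 1950.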

 

