Chaining MapReduce Jobs: Preprocessing

Environment: VMware 8.0 and Ubuntu 11.04

Hadoop in Action: Preprocessing with Chained MapReduce Jobs

Step 1: Create a project named HadoopTest. The directory structure is shown below:

[screenshot: HadoopTest project directory structure]
Step 2: Create a start.sh script under /home/tanglg1987. Every time the virtual machine starts, the script wipes everything under /tmp and reformats the NameNode:

  sudo rm -rf /tmp/*                          # clear HDFS data left under /tmp
  rm -rf /home/tanglg1987/hadoop-0.20.2/logs  # clear old logs
  hadoop namenode -format                     # reformat the NameNode
  start-all.sh                                # start the pseudo-distributed cluster
  hadoop fs -mkdir input                      # recreate the input directory
  hadoop dfsadmin -safemode leave             # force the NameNode out of safe mode

Step 3: Make start.sh executable and launch the Hadoop pseudo-distributed cluster:

  chmod 777 /home/tanglg1987/start.sh
  ./start.sh

The execution looks like this:

[screenshot: start.sh console output]

Step 4: Upload a local file to HDFS

Create ChainMapper.txt under /home/tanglg1987. The job reads it with KeyValueTextInputFormat, so the student id and the rest of each line must be separated by a tab:

  100	tom 90
  101	mary 85
  102	kate 60
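How the split into key and value happens matters here: KeyValueTextInputFormat takes everything before the first tab on a line as the key and the rest as the value. A minimal plain-Java sketch of that rule (LineToKeyValue is illustrative only, not a Hadoop class):

public class LineToKeyValue {
    // Everything before the first tab is the key; the rest is the value.
    static String[] split(String line) {
        int tab = line.indexOf('\t');
        if (tab < 0) {
            return new String[] { line, "" }; // no tab: the whole line is the key
        }
        return new String[] { line.substring(0, tab), line.substring(tab + 1) };
    }

    public static void main(String[] args) {
        String[] kv = split("100\ttom 90");
        System.out.println("key=" + kv[0] + ", value=" + kv[1]); // key=100, value=tom 90
    }
}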

Upload the file to HDFS:

  hadoop fs -put /home/tanglg1987/ChainMapper.txt input
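The upload can also be done programmatically. A sketch using the FileSystem API (the NameNode URI and paths follow the setup above; adjust them to your environment):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PutFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9100"), conf);
        // Programmatic equivalent of "hadoop fs -put <local> input".
        fs.copyFromLocalFile(new Path("/home/tanglg1987/ChainMapper.txt"),
                new Path("/user/tanglg1987/input/ChainMapper.txt"));
        fs.close();
    }
}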

Step 5: Create ChainMapperDemo.java with the following code:

package com.baison.action;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.mapred.lib.ChainMapper;

public class ChainMapperDemo {

    // First mapper in the chain: filters out the record whose key is "100".
    public static class Map00 extends MapReduceBase implements
            Mapper<Text, Text, Text, Text> {
        public void map(Text key, Text value, OutputCollector<Text, Text> output,
                Reporter reporter) throws IOException {
            if (!key.equals(new Text("100"))) {
                output.collect(key, value);
            }
        }
    }

    // Second mapper in the chain: filters out the record whose key is "101".
    public static class Map01 extends MapReduceBase implements
            Mapper<Text, Text, Text, Text> {
        public void map(Text key, Text value, OutputCollector<Text, Text> output,
                Reporter reporter) throws IOException {
            if (!key.equals(new Text("101"))) {
                output.collect(key, value);
            }
        }
    }

    // Identity reducer: passes every surviving (key, value) pair through.
    public static class Reduce extends MapReduceBase implements
            Reducer<Text, Text, Text, Text> {
        public void reduce(Text key, Iterator<Text> values,
                OutputCollector<Text, Text> output, Reporter reporter)
                throws IOException {
            while (values.hasNext()) {
                output.collect(key, values.next());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        String[] arg = { "hdfs://localhost:9100/user/tanglg1987/input/ChainMapper.txt",
                "hdfs://localhost:9100/user/tanglg1987/output" };
        JobConf conf = new JobConf(ChainMapperDemo.class);
        conf.setJobName("ChainMapperDemo");
        // KeyValueTextInputFormat splits each line at the first tab:
        // the student id becomes the key, the rest of the line the value.
        conf.setInputFormat(KeyValueTextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        // Chain the two preprocessing mappers. addMapper is a static method;
        // byValue=true passes records between chain elements as serialized
        // copies, which is always safe.
        JobConf mapAConf = new JobConf(false);
        ChainMapper.addMapper(conf, Map00.class, Text.class, Text.class,
                Text.class, Text.class, true, mapAConf);
        JobConf mapBConf = new JobConf(false);
        ChainMapper.addMapper(conf, Map01.class, Text.class, Text.class,
                Text.class, Text.class, true, mapBConf);
        conf.setReducerClass(Reduce.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(conf, new Path(arg[0]));
        FileOutputFormat.setOutputPath(conf, new Path(arg[1]));
        JobClient.runJob(conf);
    }
}
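ChainMapper covers the preprocessing half of a chain; the old API's ChainReducer (import org.apache.hadoop.mapred.lib.ChainReducer) covers the reduce side with the same calling convention. As a sketch only, not part of the demo above, the reducer setup in main() could be replaced with the following; PostProcessMapper is a hypothetical class with the same Text/Text shape as Map00:

// Sketch: reduce-side chaining. setReducer takes the place of the
// conf.setReducerClass(Reduce.class) call; addMapper then appends a
// post-processing mapper that runs on the reducer's output.
JobConf reduceConf = new JobConf(false);
ChainReducer.setReducer(conf, Reduce.class, Text.class, Text.class,
        Text.class, Text.class, true, reduceConf);
JobConf postConf = new JobConf(false);
ChainReducer.addMapper(conf, PostProcessMapper.class, Text.class, Text.class,
        Text.class, Text.class, true, postConf);

Chained this way, the whole pipeline [MAP00 | MAP01 | REDUCE | POSTPROCESS] runs inside a single MapReduce job, with no intermediate HDFS I/O between the stages.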

Step 6: Run On Hadoop. The run log is as follows:

12/10/17 21:05:53 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/10/17 21:05:53 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/10/17 21:05:53 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/10/17 21:05:54 INFO mapred.FileInputFormat: Total input paths to process : 1
12/10/17 21:05:54 INFO mapred.JobClient: Running job: job_local_0001
12/10/17 21:05:54 INFO mapred.FileInputFormat: Total input paths to process : 1
12/10/17 21:05:54 INFO mapred.MapTask: numReduceTasks: 1
12/10/17 21:05:54 INFO mapred.MapTask: io.sort.mb = 100
12/10/17 21:05:54 INFO mapred.MapTask: data buffer = 79691776/99614720
12/10/17 21:05:54 INFO mapred.MapTask: record buffer = 262144/327680
12/10/17 21:05:54 INFO mapred.MapTask: Starting flush of map output
12/10/17 21:05:54 INFO mapred.MapTask: Finished spill 0
12/10/17 21:05:54 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/10/17 21:05:54 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/ChainMapper.txt:0+35
12/10/17 21:05:54 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000000_0' done.
12/10/17 21:05:54 INFO mapred.LocalJobRunner:
12/10/17 21:05:54 INFO mapred.Merger: Merging 1 sorted segments
12/10/17 21:05:54 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 16 bytes
12/10/17 21:05:54 INFO mapred.LocalJobRunner:
12/10/17 21:05:54 INFO mapred.TaskRunner: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
12/10/17 21:05:54 INFO mapred.LocalJobRunner:
12/10/17 21:05:54 INFO mapred.TaskRunner: Task attempt_local_0001_r_000000_0 is allowed to commit now
12/10/17 21:05:54 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://localhost:9100/user/tanglg1987/output
12/10/17 21:05:54 INFO mapred.LocalJobRunner: reduce > reduce
12/10/17 21:05:54 INFO mapred.TaskRunner: Task 'attempt_local_0001_r_000000_0' done.
12/10/17 21:05:55 INFO mapred.JobClient: map 100% reduce 100%
12/10/17 21:05:55 INFO mapred.JobClient: Job complete: job_local_0001
12/10/17 21:05:55 INFO mapred.JobClient: Counters: 15
12/10/17 21:05:55 INFO mapred.JobClient: FileSystemCounters
12/10/17 21:05:55 INFO mapred.JobClient: FILE_BYTES_READ=36152
12/10/17 21:05:55 INFO mapred.JobClient: HDFS_BYTES_READ=70
12/10/17 21:05:55 INFO mapred.JobClient: FILE_BYTES_WRITTEN=73202
12/10/17 21:05:55 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=12
12/10/17 21:05:55 INFO mapred.JobClient: Map-Reduce Framework
12/10/17 21:05:55 INFO mapred.JobClient: Reduce input groups=1
12/10/17 21:05:55 INFO mapred.JobClient: Combine output records=0
12/10/17 21:05:55 INFO mapred.JobClient: Map input records=3
12/10/17 21:05:55 INFO mapred.JobClient: Reduce shuffle bytes=0
12/10/17 21:05:55 INFO mapred.JobClient: Reduce output records=1
12/10/17 21:05:55 INFO mapred.JobClient: Spilled Records=2
12/10/17 21:05:55 INFO mapred.JobClient: Map output bytes=12
12/10/17 21:05:55 INFO mapred.JobClient: Map input bytes=35
12/10/17 21:05:55 INFO mapred.JobClient: Combine input records=0
12/10/17 21:05:55 INFO mapred.JobClient: Map output records=1
12/10/17 21:05:55 INFO mapred.JobClient: Reduce input records=1

Step 7: View the result set:

  hadoop fs -cat output/part-00000

The counters above confirm the preprocessing: three map input records went in, but only one map output record came out, because Map00 dropped key 100 and Map01 dropped key 101. The single surviving record is:

  102	kate 60
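To check the output from Java instead of the shell, a small read-back client along these lines works (a sketch; part-00000 is the default output file name for a single-reducer job):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadOutput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9100"), conf);
        Path part = new Path("/user/tanglg1987/output/part-00000");
        // Stream the result file back line by line.
        BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(part)));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // expected: 102  kate 60
        }
        in.close();
        fs.close();
    }
}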
