Hadoop for Beginners: Running the WordCount Program Step by Step

Source: http://blog.chinaunix.net/u3/105376/showart_2329753.html

Although developing Hadoop programs in Eclipse is quite convenient nowadays, the command line is still handy for writing and verifying small programs. These are my notes from when I was first learning Hadoop, recorded here for future reference.

1. The classic WordCount program (WordCount.java), from the Hadoop 0.18 documentation.

import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount {

    // Mapper: splits each input line on whitespace and emits a (word, 1) pair per token.
    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    // Reducer: sums all the counts received for each word. Also reused as the combiner below.
    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class); // local pre-aggregation on the map side
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
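
A note on this listing: it uses the original org.apache.hadoop.mapred API, which matches the Hadoop 0.18/0.19 used here (from 0.20 on it is deprecated in favor of org.apache.hadoop.mapreduce). Also, because map() splits lines with StringTokenizer, tokens are separated on whitespace only: punctuation stays attached to words and case is preserved, which you will see in the results in step 6. A minimal stand-alone sketch of that tokenization (TokenizeDemo is a hypothetical helper, not part of the job):

import java.util.StringTokenizer;

public class TokenizeDemo {
    public static void main(String[] args) {
        // Same splitting the mapper does: whitespace only, punctuation kept.
        StringTokenizer tokenizer = new StringTokenizer("Hello, i love china");
        while (tokenizer.hasMoreTokens()) {
            System.out.println(tokenizer.nextToken()); // first token prints as "Hello," with the comma
        }
    }
}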
 


 

2. Make sure you have a working Hadoop cluster; a single-node setup is fine.
   Create a directory, for example /home/admin/WordCount, and compile WordCount.java:

javac -classpath /home/admin/hadoop/hadoop-0.19.1-core.jar WordCount.java -d /home/admin/WordCount

(Adjust the core jar path and name to match your Hadoop version.)


 

3. After compiling, you will find three class files in /home/admin/WordCount: WordCount.class, WordCount$Map.class, and WordCount$Reduce.class.
cd into /home/admin/WordCount and run:

jar cvf WordCount.jar *.class

This produces WordCount.jar.

Alternatively, you can build the jar in Eclipse, with the Hadoop jar on the build path, and export it from there.
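
A small optional convenience (not in the original steps): if you record the main class in the jar's manifest with the jar tool's e flag, hadoop jar can find the entry point without the class name on the command line:

jar cvfe WordCount.jar WordCount *.class
hadoop jar WordCount.jar /tmp/input /tmp/output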

4. Prepare some input data.

   input1.txt and input2.txt each contain a few words, as follows:

[admin@host WordCount]$ cat input1.txt
Hello, i love china
are you ok?
[admin@host WordCount]$ cat input2.txt
hello, i love word
You are ok


Create the input directory on HDFS and put the job's input files there. (Note: do not create /tmp/output in advance; on Hadoop of this vintage, FileOutputFormat refuses to run if the output path already exists, and the framework creates it itself.)

hadoop fs -mkdir /tmp/input
hadoop fs -put input1.txt /tmp/input/
hadoop fs -put input2.txt /tmp/input/
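
To double-check that the files made it to HDFS, you can list the directory and cat a file back with the same hadoop fs tool:

hadoop fs -ls /tmp/input
hadoop fs -cat /tmp/input/input1.txt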


5. Run the program; the client prints the job's progress and counters. The numbers below line up with the sample input: 4 map input records (the four lines of text), 14 map output records (the 14 tokens emitted), and 11 reduce output records (the 11 distinct tokens).

[admin@host WordCount]$ hadoop jar WordCount.jar WordCount /tmp/input /tmp/output

10/09/16 22:49:43 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/09/16 22:49:43 INFO mapred.FileInputFormat: Total input paths to process :2
10/09/16 22:49:43 INFO mapred.JobClient: Running job: job_201008171228_76165
10/09/16 22:49:44 INFO mapred.JobClient: map 0% reduce 0%
10/09/16 22:49:47 INFO mapred.JobClient: map 100% reduce 0%
10/09/16 22:49:54 INFO mapred.JobClient: map 100% reduce 100%
10/09/16 22:49:55 INFO mapred.JobClient: Job complete: job_201008171228_76165
10/09/16 22:49:55 INFO mapred.JobClient: Counters: 16
10/09/16 22:49:55 INFO mapred.JobClient: File Systems
10/09/16 22:49:55 INFO mapred.JobClient: HDFS bytes read=62
10/09/16 22:49:55 INFO mapred.JobClient: HDFS bytes written=73
10/09/16 22:49:55 INFO mapred.JobClient: Local bytes read=152
10/09/16 22:49:55 INFO mapred.JobClient: Local bytes written=366
10/09/16 22:49:55 INFO mapred.JobClient: Job Counters 
10/09/16 22:49:55 INFO mapred.JobClient: Launched reduce tasks=1
10/09/16 22:49:55 INFO mapred.JobClient: Rack-local map tasks=2
10/09/16 22:49:55 INFO mapred.JobClient: Launched map tasks=2
10/09/16 22:49:55 INFO mapred.JobClient: Map-Reduce Framework
10/09/16 22:49:55 INFO mapred.JobClient: Reduce input groups=11
10/09/16 22:49:55 INFO mapred.JobClient: Combine output records=14
10/09/16 22:49:55 INFO mapred.JobClient: Map input records=4
10/09/16 22:49:55 INFO mapred.JobClient: Reduce output records=11
10/09/16 22:49:55 INFO mapred.JobClient: Map output bytes=118
10/09/16 22:49:55 INFO mapred.JobClient: Map input bytes=62
10/09/16 22:49:55 INFO mapred.JobClient: Combine input records=14
10/09/16 22:49:55 INFO mapred.JobClient: Map output records=14
10/09/16 22:49:55 INFO mapred.JobClient: Reduce input records=14
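
While the job is running you can also follow its progress in the JobTracker's web interface, which in this generation of Hadoop listens on port 50030 by default (http://<jobtracker-host>:50030).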


6. Check the results:

[admin@host WordCount]$ hadoop fs -ls /tmp/output/
Found 2 items
drwxr-x---   - admin admin   0 2010-09-16 22:43 /tmp/output/_logs
-rw-r-----   1 admin admin 102 2010-09-16 22:44 /tmp/output/part-00000
[admin@host WordCount]$ hadoop fs -cat /tmp/output/part-00000
Hello,  1
You     1
are     2
china   1
hello,  1
i       2
love    2
ok      1
ok?     1
word    1
you     1
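
Note that "Hello," and "hello," (and likewise "you" and "You") are counted as different words, a direct consequence of the whitespace-only, case-sensitive tokenization noted in step 1. If you wanted merged counts, a variant of the map() method in the Map class above could normalize tokens first; a sketch (the normalization pattern here is my own choice, not from the original post):

public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
    StringTokenizer tokenizer = new StringTokenizer(value.toString());
    while (tokenizer.hasMoreTokens()) {
        // Lowercase and strip anything that is not a letter or digit,
        // so "Hello," and "hello," both count toward "hello".
        String token = tokenizer.nextToken().toLowerCase().replaceAll("[^a-z0-9]", "");
        if (token.length() > 0) {
            word.set(token);
            output.collect(word, one);
        }
    }
}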


 

OK, that's it.