Manually Compiling and Running the Hadoop MapReduce Example WordCount.java

WordCount.java

vi WordCount.java

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  // Mapper: tokenizes each input line and emits a <word, 1> pair per token.
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable>{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer (also used as the combiner below): sums the counts for each word.
  public static class IntSumReducer
       extends Reducer<Text,IntWritable,Text,IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // GenericOptionsParser consumes generic Hadoop options (-D, -fs, ...)
    // and returns the remaining application arguments: <in> and <out>.
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    // Submit the job and block until it finishes; exit 0 on success.
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Compilation

Create a directory, wordcount_classes, to hold the compiled class files:

mkdir ~/wordcount_classes

The jars that compilation depends on must be listed on the classpath, separated by colons (run this from the home directory so that the relative wordcount_classes path matches the directory created above):

javac -classpath /usr/lib/hadoop-0.20/hadoop-core-0.20.2-cdh3u6.jar:/usr/lib/hadoop-0.20/lib/commons-cli-1.2.jar -d wordcount_classes WordCount.java
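
Because TokenizerMapper and IntSumReducer are nested classes, javac produces three class files. Listing the output directory (the expected result, not captured from the original session) should show:

ls wordcount_classes
WordCount$IntSumReducer.class  WordCount$TokenizerMapper.class  WordCount.class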

Packaging

jar -cvf WordCount.jar -C wordcount_classes/ .
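
To verify that the classes sit at the root of the archive (which is what the -C wordcount_classes/ . option achieves), the jar contents can be listed; besides META-INF/MANIFEST.MF you should see the three class files:

jar -tf WordCount.jar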

Running

hadoop jar ~/WordCount.jar WordCount input output

Here input refers to the HDFS directory /user/root/input, and output is the directory the results are written to. The output directory must not already exist, otherwise the job will fail; it will be created at /user/root/output.
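
If the input directory has not been set up yet, a minimal way to prepare it (words.txt is a hypothetical local file name) is:

hadoop fs -mkdir input
hadoop fs -put words.txt input/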

Results

root@bjidss46:~# hadoop jar WordCount.jar WordCount input output  
13/11/20 16:10:07 INFO input.FileInputFormat: Total input paths to process : 1  
13/11/20 16:10:07 WARN snappy.LoadSnappy: Snappy native library is available  
13/11/20 16:10:07 INFO util.NativeCodeLoader: Loaded the native-hadoop library  
13/11/20 16:10:07 INFO snappy.LoadSnappy: Snappy native library loaded  
13/11/20 16:10:07 INFO mapred.JobClient: Running job: job_201311201528_0008  
13/11/20 16:10:08 INFO mapred.JobClient:  map 0% reduce 0%  
13/11/20 16:10:12 INFO mapred.JobClient:  map 100% reduce 0%  
13/11/20 16:10:16 INFO mapred.JobClient:  map 100% reduce 100%  
13/11/20 16:10:16 INFO mapred.JobClient: Job complete: job_201311201528_0008  
13/11/20 16:10:16 INFO mapred.JobClient: Counters: 26  
13/11/20 16:10:16 INFO mapred.JobClient:   Job Counters   
13/11/20 16:10:16 INFO mapred.JobClient:     Launched reduce tasks=1  
13/11/20 16:10:16 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=4473  
13/11/20 16:10:16 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0  
13/11/20 16:10:16 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0  
13/11/20 16:10:16 INFO mapred.JobClient:     Launched map tasks=1  
13/11/20 16:10:16 INFO mapred.JobClient:     Data-local map tasks=1  
13/11/20 16:10:16 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=3523  
13/11/20 16:10:16 INFO mapred.JobClient:   FileSystemCounters  
13/11/20 16:10:16 INFO mapred.JobClient:     FILE_BYTES_READ=57  
13/11/20 16:10:16 INFO mapred.JobClient:     HDFS_BYTES_READ=138  
13/11/20 16:10:16 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=105460  
13/11/20 16:10:16 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=35  
13/11/20 16:10:16 INFO mapred.JobClient:   Map-Reduce Framework  
13/11/20 16:10:16 INFO mapred.JobClient:     Map input records=1  
13/11/20 16:10:16 INFO mapred.JobClient:     Reduce shuffle bytes=57  
13/11/20 16:10:16 INFO mapred.JobClient:     Spilled Records=8  
13/11/20 16:10:16 INFO mapred.JobClient:     Map output bytes=43  
13/11/20 16:10:16 INFO mapred.JobClient:     CPU time spent (ms)=1530  
13/11/20 16:10:16 INFO mapred.JobClient:     Total committed heap usage (bytes)=504758272  
13/11/20 16:10:16 INFO mapred.JobClient:     Combine input records=4  
13/11/20 16:10:16 INFO mapred.JobClient:     SPLIT_RAW_BYTES=111  
13/11/20 16:10:16 INFO mapred.JobClient:     Reduce input records=4  
13/11/20 16:10:16 INFO mapred.JobClient:     Reduce input groups=4  
13/11/20 16:10:16 INFO mapred.JobClient:     Combine output records=4  
13/11/20 16:10:16 INFO mapred.JobClient:     Physical memory (bytes) snapshot=334163968  
13/11/20 16:10:16 INFO mapred.JobClient:     Reduce output records=4  
13/11/20 16:10:16 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2914021376  
13/11/20 16:10:16 INFO mapred.JobClient:     Map output records=4
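
The counters above (Map output records=4, Reduce output records=4) show that the input contained four words, all distinct. The counts end up in a part file under output; with a single reducer and the new API the file is named part-r-00000:

hadoop fs -ls output
hadoop fs -cat output/part-r-00000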

Summary

Plain javac is used here for compiling and running in order to get a better feel for the MapReduce workflow. When exporting a jar from Eclipse, do not bundle the many dependency jars into it; hadoop-core-0.20.2-cdh3u6.jar and /usr/lib/hadoop-0.20/lib/commons-cli-1.2.jar are all that is needed.
