WordCount in Hadoop

A WordCount example for Hadoop; the code is as follows:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Split each input line on whitespace and emit (word, 1) for every token.
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts for each word. The second parameter must be
            // Iterable<IntWritable>: with Iterator the method would not override
            // Reducer.reduce(), and the default identity reduce would run instead.
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public int run(String[] args) throws Exception {
        Job job = new Job(getConf());
        job.setJarByClass(WordCount.class);
        job.setJobName("wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        // args[0] is the input path (may contain a glob), args[1] the output directory.
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int ret = ToolRunner.run(new WordCount(), args);
        System.exit(ret);
    }
}

In Eclipse, export the WordCount project via Export → Java → JAR File (leave .classpath and .project unchecked) to generate wordcount_test.jar, then copy the jar to the server to run it.
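
If you prefer not to use the Eclipse export wizard, the jar can also be built on the command line. A minimal sketch, assuming WordCount.java and the Hadoop 0.20.2 core jar (hadoop-0.20.2-core.jar) sit in the current directory; adjust the paths to your installation:

mkdir wordcount_classes
javac -classpath hadoop-0.20.2-core.jar -d wordcount_classes WordCount.java
jar -cvf wordcount_test.jar -C wordcount_classes/ .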

Create the data files in the HDFS file system, for example (commands to upload them are sketched after the listing):

[root@localhost hadoop-0.20.2]# bin/hadoop fs -lsr
drwxr-xr-x   - root supergroup          0 2013-01-09 09:43 /user/root/wordcount
drwxr-xr-x   - root supergroup          0 2013-01-08 08:41 /user/root/wordcount/input
-rw-r--r--   1 root supergroup         43 2013-01-08 08:41 /user/root/wordcount/input/hello1.txt
-rw-r--r--   1 root supergroup         63 2013-01-08 08:41 /user/root/wordcount/input/hello2.txt
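
A listing like the one above can be produced by uploading two local text files, as referenced earlier. A hedged sketch, assuming hello1.txt and hello2.txt exist in the current local directory:

bin/hadoop fs -mkdir wordcount/input
bin/hadoop fs -put hello1.txt wordcount/input
bin/hadoop fs -put hello2.txt wordcount/input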

Run the jar to count the words in the data:

[root@localhost hadoop-0.20.2]# bin/hadoop jar wordcount_test.jar WordCount /user/root/wordcount/input/* output
13/01/23 09:19:06 INFO input.FileInputFormat: Total input paths to process : 2
13/01/23 09:19:06 INFO mapred.JobClient: Running job: job_201301220408_0001
13/01/23 09:19:07 INFO mapred.JobClient:  map 0% reduce 0%
13/01/23 09:19:14 INFO mapred.JobClient:  map 100% reduce 0%
13/01/23 09:19:26 INFO mapred.JobClient:  map 100% reduce 100%
13/01/23 09:19:28 INFO mapred.JobClient: Job complete: job_201301220408_0001
13/01/23 09:19:28 INFO mapred.JobClient: Counters: 17
13/01/23 09:19:28 INFO mapred.JobClient:   Job Counters
13/01/23 09:19:28 INFO mapred.JobClient:     Launched reduce tasks=1
13/01/23 09:19:28 INFO mapred.JobClient:     Launched map tasks=2
13/01/23 09:19:28 INFO mapred.JobClient:     Data-local map tasks=2
13/01/23 09:19:28 INFO mapred.JobClient:   FileSystemCounters
13/01/23 09:19:28 INFO mapred.JobClient:     FILE_BYTES_READ=196
13/01/23 09:19:28 INFO mapred.JobClient:     HDFS_BYTES_READ=106
13/01/23 09:19:28 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=462
13/01/23 09:19:28 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=134
13/01/23 09:19:28 INFO mapred.JobClient:   Map-Reduce Framework
13/01/23 09:19:28 INFO mapred.JobClient:     Reduce input groups=5
13/01/23 09:19:28 INFO mapred.JobClient:     Combine output records=0
13/01/23 09:19:28 INFO mapred.JobClient:     Map input records=7
13/01/23 09:19:28 INFO mapred.JobClient:     Reduce shuffle bytes=202
13/01/23 09:19:28 INFO mapred.JobClient:     Reduce output records=14
13/01/23 09:19:28 INFO mapred.JobClient:     Spilled Records=28
13/01/23 09:19:28 INFO mapred.JobClient:     Map output bytes=162
13/01/23 09:19:28 INFO mapred.JobClient:     Combine input records=0
13/01/23 09:19:28 INFO mapred.JobClient:     Map output records=14
13/01/23 09:19:28 INFO mapred.JobClient:     Reduce input records=14
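
Because WordCount implements Tool and is launched through ToolRunner, Hadoop's generic options can be passed before the job arguments. A hedged example that requests two reducers (mapred.reduce.tasks is the 0.20-era property name; output2 is just an illustrative output path):

bin/hadoop jar wordcount_test.jar WordCount -D mapred.reduce.tasks=2 /user/root/wordcount/input/* output2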

The resulting files in HDFS:

[root@localhost hadoop-0.20.2]# bin/hadoop fs -lsr
drwxr-xr-x   - root supergroup          0 2013-01-23 09:19 /user/root/output
drwxr-xr-x   - root supergroup          0 2013-01-23 09:19 /user/root/output/_logs
drwxr-xr-x   - root supergroup          0 2013-01-23 09:19 /user/root/output/_logs/history
-rw-r--r--   1 root supergroup      17647 2013-01-23 09:19 /user/root/output/_logs/history/localhost_1358845718908_job_201301220408_0001_conf.xml
-rw-r--r--   1 root supergroup       8951 2013-01-23 09:19 /user/root/output/_logs/history/localhost_1358845718908_job_201301220408_0001_root_wordcount
-rw-r--r--   1 root supergroup        134 2013-01-23 09:19 /user/root/output/part-r-00000
drwxr-xr-x   - root supergroup          0 2013-01-09 09:43 /user/root/wordcount
drwxr-xr-x   - root supergroup          0 2013-01-08 08:41 /user/root/wordcount/input
-rw-r--r--   1 root supergroup         43 2013-01-08 08:41 /user/root/wordcount/input/hello1.txt
-rw-r--r--   1 root supergroup         63 2013-01-08 08:41 /user/root/wordcount/input/hello2.txt

Finally, view the results in the output directory:

bin/hadoop fs -cat /user/root/output/part-r-00000
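
TextOutputFormat writes one key-value pair per line, with the key and value separated by a tab, so each line of part-r-00000 has the form word<TAB>count. The actual words and totals depend on the contents of hello1.txt and hello2.txt; a purely hypothetical illustration of the format:

# hypothetical output -- actual words and counts depend on the input files
hadoop  2
hello   3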

