Before starting, this article assumes you already have a working Hadoop environment. If not, you can refer to http://www.linuxidc.com/Linux/2012-02/53927.htm
Software used:
1: Java 7u25
2: Hadoop 1.2.0
3: Eclipse Kepler
4: OS: Ubuntu 12.04
First, an overview of the project structure:
Note that the project must import all the jar files from the Hadoop installation directory and from its lib subdirectory.
Also, Hadoop needs to be started from the shell before running the job.
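For Hadoop 1.x, starting the daemons typically looks like the following (a sketch assuming `HADOOP_HOME` points at your installation directory; adjust the path to your setup):

```shell
# Start all Hadoop 1.x daemons: NameNode, DataNode,
# JobTracker, TaskTracker and SecondaryNameNode
$HADOOP_HOME/bin/start-all.sh

# Verify the daemons are running; jps should list each of them
jps
```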
Next, we write a simple map function and reduce function, and run the job.
```java
package org.hadoop.tutorial;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class MyWordCount {

    // The mapper tokenizes each input line on whitespace
    // and emits a (word, 1) pair for every token.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // The reducer sums the counts for each word.
    // It is also used as the combiner below.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(MyWordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```
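Stripped of the Hadoop plumbing, the map/combine/reduce pipeline above boils down to tokenizing and summing. A minimal plain-Java sketch of that logic (the class and method names here are mine, not part of the Hadoop API):

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class LocalWordCount {

    // Mimics TokenizerMapper + IntSumReducer: tokenize on whitespace,
    // then sum the occurrences of each word. A TreeMap keeps keys
    // sorted, like the sorted keys the reducer receives in Hadoop.
    public static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new TreeMap<>();
        StringTokenizer itr = new StringTokenizer(text);
        while (itr.hasMoreTokens()) {
            counts.merge(itr.nextToken(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts =
                count("hello world hello hadoop hello ubuntu\nThis is a simple text file");
        // Print in the same key<TAB>count format as a part-r-* file
        counts.forEach((w, c) -> System.out.println(w + "\t" + c));
    }
}
```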
In Eclipse, open Run Configurations and pass in two program arguments: words.txt and output. words.txt is the text file we want to process:
```
hello world hello hadoop hello ubuntu
This is a simple text file
```
output is the output directory; it must not already exist, or FileOutputFormat will refuse to run the job.
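Running from Eclipse uses the LocalJobRunner, as the log below shows. The same job can also be submitted from the shell once the project is packaged as a jar (the jar name here is illustrative):

```shell
# Package the project as a jar (e.g. via Eclipse's Export > JAR file), then:
hadoop jar mywordcount.jar org.hadoop.tutorial.MyWordCount words.txt output
```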
Run the main method; the console prints the following:
```
13/07/28 13:01:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/07/28 13:01:09 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
13/07/28 13:01:09 INFO input.FileInputFormat: Total input paths to process : 1
13/07/28 13:01:09 WARN snappy.LoadSnappy: Snappy native library not loaded
13/07/28 13:01:09 INFO mapred.JobClient: Running job: job_local2121945460_0001
13/07/28 13:01:09 INFO mapred.LocalJobRunner: Waiting for map tasks
13/07/28 13:01:09 INFO mapred.LocalJobRunner: Starting task: attempt_local2121945460_0001_m_000000_0
13/07/28 13:01:09 INFO util.ProcessTree: setsid exited with exit code 0
13/07/28 13:01:09 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1282b42
13/07/28 13:01:09 INFO mapred.MapTask: Processing split: file:/home/tsw/workspace/FirstHadoopPrj/words.txt:0+64
13/07/28 13:01:09 INFO mapred.MapTask: io.sort.mb = 100
13/07/28 13:01:09 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/28 13:01:09 INFO mapred.MapTask: record buffer = 262144/327680
13/07/28 13:01:09 INFO mapred.MapTask: Starting flush of map output
13/07/28 13:01:09 INFO mapred.MapTask: Finished spill 0
13/07/28 13:01:09 INFO mapred.Task: Task:attempt_local2121945460_0001_m_000000_0 is done. And is in the process of commiting
13/07/28 13:01:09 INFO mapred.LocalJobRunner:
13/07/28 13:01:09 INFO mapred.Task: Task 'attempt_local2121945460_0001_m_000000_0' done.
13/07/28 13:01:09 INFO mapred.LocalJobRunner: Finishing task: attempt_local2121945460_0001_m_000000_0
13/07/28 13:01:09 INFO mapred.LocalJobRunner: Map task executor complete.
13/07/28 13:01:09 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@50a6d4
13/07/28 13:01:09 INFO mapred.LocalJobRunner:
13/07/28 13:01:09 INFO mapred.Merger: Merging 1 sorted segments
13/07/28 13:01:09 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 115 bytes
13/07/28 13:01:09 INFO mapred.LocalJobRunner:
13/07/28 13:01:09 INFO mapred.Task: Task:attempt_local2121945460_0001_r_000000_0 is done. And is in the process of commiting
13/07/28 13:01:09 INFO mapred.LocalJobRunner:
13/07/28 13:01:09 INFO mapred.Task: Task attempt_local2121945460_0001_r_000000_0 is allowed to commit now
13/07/28 13:01:09 INFO output.FileOutputCommitter: Saved output of task 'attempt_local2121945460_0001_r_000000_0' to output
13/07/28 13:01:09 INFO mapred.LocalJobRunner: reduce > reduce
13/07/28 13:01:09 INFO mapred.Task: Task 'attempt_local2121945460_0001_r_000000_0' done.
13/07/28 13:01:10 INFO mapred.JobClient: map 100% reduce 100%
13/07/28 13:01:10 INFO mapred.JobClient: Job complete: job_local2121945460_0001
13/07/28 13:01:10 INFO mapred.JobClient: Counters: 20
13/07/28 13:01:10 INFO mapred.JobClient:   File Output Format Counters
13/07/28 13:01:10 INFO mapred.JobClient:     Bytes Written=85
13/07/28 13:01:10 INFO mapred.JobClient:   File Input Format Counters
13/07/28 13:01:10 INFO mapred.JobClient:     Bytes Read=64
13/07/28 13:01:10 INFO mapred.JobClient:   FileSystemCounters
13/07/28 13:01:10 INFO mapred.JobClient:     FILE_BYTES_READ=583
13/07/28 13:01:10 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=102109
13/07/28 13:01:10 INFO mapred.JobClient:   Map-Reduce Framework
13/07/28 13:01:10 INFO mapred.JobClient:     Reduce input groups=10
13/07/28 13:01:10 INFO mapred.JobClient:     Map output materialized bytes=119
13/07/28 13:01:10 INFO mapred.JobClient:     Combine output records=10
13/07/28 13:01:10 INFO mapred.JobClient:     Map input records=2
13/07/28 13:01:10 INFO mapred.JobClient:     Reduce shuffle bytes=0
13/07/28 13:01:10 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
13/07/28 13:01:10 INFO mapred.JobClient:     Reduce output records=10
13/07/28 13:01:10 INFO mapred.JobClient:     Spilled Records=20
13/07/28 13:01:10 INFO mapred.JobClient:     Map output bytes=113
13/07/28 13:01:10 INFO mapred.JobClient:     Total committed heap usage (bytes)=292421632
13/07/28 13:01:10 INFO mapred.JobClient:     CPU time spent (ms)=0
13/07/28 13:01:10 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
13/07/28 13:01:10 INFO mapred.JobClient:     SPLIT_RAW_BYTES=114
13/07/28 13:01:10 INFO mapred.JobClient:     Map output records=12
13/07/28 13:01:10 INFO mapred.JobClient:     Combine input records=12
13/07/28 13:01:10 INFO mapred.JobClient:     Reduce input records=10
```
After the run you will find the generated output directory in your project. It contains two files: part-r-00000 (r stands for reduce, 00000 is the partition number) and _SUCCESS (an empty marker file indicating the job completed successfully).
Open the part-r-00000 file:
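For the sample words.txt above, the file should contain each word and its count, tab-separated and sorted by key (uppercase letters sort before lowercase in Text's byte-order comparison), matching the "Reduce output records=10" counter in the log:

```
This	1
a	1
file	1
hadoop	1
hello	3
is	1
simple	1
text	1
ubuntu	1
world	1
```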
And that is Hadoop's Hello World: word count.
(The code above is taken from the WordCount.java class in the hadoop-examples jar.)