The Apache Hadoop MapReduce WordCount.java program is the equivalent of the C programming language's hello-world.c example.
- Hadoop 1.0 examples use the older org.apache.hadoop.mapred API
- Hadoop 2.0 examples use the newer org.apache.hadoop.mapreduce API
TIP: If you run into problems compiling WordCount.java, double-check the source code against your Hadoop version.
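If you are unsure which Hadoop version and classpath your build will pick up, the following commands (assuming the hadoop command is already on your PATH) print both:
$ hadoop version
$ hadoop classpath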
WordCount.java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text,IntWritable,Text,IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
WordCount is a simple application that counts the number of occurrences of each word in a given set of inputs. The MapReduce framework operates exclusively on key-value pairs: it views the input to a job as a set of key-value pairs and produces a set of key-value pairs, possibly of different types, as the output. The flow of a MapReduce job is as follows:
(input) <k1,v1> -> map -> <k2,v2> -> combine -> <k2,v2> -> reduce -> <k3,v3> (output)
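For this particular job, the generic flow instantiates as follows (the key/value types are taken from the class signatures in the listing above; with the default TextInputFormat the input key is the byte offset of the line):
(input) <offset, line of text> -> TokenizerMapper -> <word, 1> -> IntSumReducer (combine) -> <word, partial sum> -> IntSumReducer (reduce) -> <word, total count> (output)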
The Mapper implementation processes one line at a time via the map method, as provided by the specified TextInputFormat class. It then splits the line into tokens separated by whitespace using StringTokenizer and emits a <word,1> key-value pair. The relevant code fragment is as follows:
public void map(Object key, Text value, Context context
                ) throws IOException, InterruptedException {
  StringTokenizer itr = new StringTokenizer(value.toString());
  while (itr.hasMoreTokens()) {
    word.set(itr.nextToken());
    context.write(word, one);
  }
}
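The map snippet above relies on TextInputFormat, which is the default input format when the driver does not set one explicitly. If you prefer to state it in the driver (an optional sketch; the example works without it), add the import and one extra line in main:

import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

job.setInputFormatClass(TextInputFormat.class);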
Given two input files containing the lines Hello World Bye World and Hello Hadoop Goodbye Hadoop, the WordCount mapper produces the following two maps:
<Hello,1>
<World,1>
<Bye,1>
<World,1>
and
<Hello,1>
<Hadoop,1>
<Goodbye,1>
<Hadoop,1>
As can be seen in the example program, WordCount sets a mapper,
job.setMapperClass(TokenizerMapper.class);
a combiner,
job.setCombinerClass(IntSumReducer.class);
and a reducer:
job.setReducerClass(IntSumReducer.class);
The output of each map is therefore passed through the local combiner (which, like the reducer, sums the values) for local aggregation and is then sent on to the final reducer. Reusing the reducer as a combiner works here because summing is associative and commutative. After the combiner, the output of each map is the following pre-reduced result:
<Bye,1>
<Hello,1>
<World,2>
<Goodbye,1>
<Hadoop,2>
<Hello,1>
The Reducer implementation, via the reduce method, simply sums the values, which are the occurrence counts for each key. The relevant code fragment is shown below:
public void reduce(Text key, Iterable<IntWritable> values,
                   Context context) throws IOException, InterruptedException {
  int sum = 0;
  for (IntWritable val : values) {
    sum += val.get();
  }
  result.set(sum);
  context.write(key, result);
}
The final output of the reducer is as follows:
<Bye,1>
<Goodbye,1>
<Hadoop,2>
<Hello,2>
<World,2>
The steps to compile and run the program from the command line are as follows:
Make a local wordcount_classes directory:
$ mkdir wordcount_classes
Compile the WordCount.java program using the 'hadoop classpath' command to include all the available Hadoop class paths:
$ javac -cp `hadoop classpath` -d wordcount_classes WordCount.java
Create a jar file using the following command:
$ jar -cvf wordcount.jar -C wordcount_classes/ .
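Optionally, you can confirm that the compiled classes were packaged by listing the archive contents:
$ jar -tvf wordcount.jar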
To run the program, create an input directory in HDFS and place a text file in the new directory. For this example, we will use the war-and-peace.txt file:
$ hdfs dfs -mkdir war-and-peace-input
$ hdfs dfs -put war-and-peace.txt war-and-peace-input
Run the WordCount application using the following command:
$ hadoop jar wordcount.jar WordCount war-and-peace-input war-and-peace-output
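Note that the job will fail to start if the output directory already exists. If you need to re-run the application (for example, after a failed attempt), remove the old output directory first:
$ hdfs dfs -rm -r -skipTrash war-and-peace-output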
If everything worked correctly, the program prints a number of messages and, in addition, produces the following files in the war-and-peace-output directory:
$ hdfs dfs -ls war-and-peace-output
Found 2 items
-rw-r--r--   2 hdfs hdfs        0 2015-05-24 11:14 war-and-peace-output/_SUCCESS
-rw-r--r--   2 hdfs hdfs   467839 2015-05-24 11:14 war-and-peace-output/part-r-00000
The complete list of word counts can be copied from HDFS to the local working directory with the following command:
$ hdfs dfs -get war-and-peace-output/part-r-00000
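If you just want a quick look at the most frequent words without copying the whole file, you can stream the result from HDFS and sort it locally (a sketch using standard shell tools and the output path from above):
$ hdfs dfs -cat war-and-peace-output/part-r-00000 | sort -k2 -nr | head -10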