After the basic environment is set up, packaging and running an MR program on Linux takes the following steps:
1. Write the MR program;
2. Compile the .java source file (javac WordCount.java)
3. Package it into a jar (jar -cvf WordCount.jar ./WordCount*)
4. Run the jar (hadoop jar WordCount.jar org.apache.hadoop.examples.WordCount /input /output); note that WordCount must be preceded by its full package name
Online MapReduce WordCount tutorials mostly gloss over how to compile WordCount.java... and those that do cover it usually describe the approach for old releases such as 0.20, i.e. javac -classpath /usr/local/hadoop/hadoop-1.0.1/hadoop-core-1.0.1.jar WordCount.java. The newer 2.x releases, however, no longer ship a hadoop-core*.jar, so compiling and packaging your own MapReduce program works differently than it did in the old versions.
This article uses the WordCount example on Hadoop 2.6.0 to show how to compile your own MapReduce program under 2.x.
Dependency jars in Hadoop 2.x
In Hadoop 2.x, the classes are no longer bundled in a single hadoop-core*.jar but are split across multiple jars. For example, running the WordCount example on Hadoop 2.6.0 requires at least the following three jars:
$HADOOP_HOME/share/hadoop/common/hadoop-common-2.6.0.jar
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.0.jar
$HADOOP_HOME/share/hadoop/common/lib/commons-cli-1.2.jar
In fact, the command hadoop classpath prints the full classpath needed to run Hadoop programs.
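That output can also be fed to javac directly, without touching any configuration files; a minimal sketch, assuming Hadoop is installed under /usr/local/hadoop as elsewhere in this article:

# compile against every jar reported by the local Hadoop installation
javac -cp $(/usr/local/hadoop/bin/hadoop classpath) WordCount.java

The more permanent alternative is to put this classpath into an environment variable, as described next.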
Compiling and packaging a Hadoop MapReduce program
We add Hadoop's classpath information to the CLASSPATH variable by appending the following lines to ~/.bashrc:
export HADOOP_HOME=/usr/local/hadoop
export CLASSPATH=$($HADOOP_HOME/bin/hadoop classpath):$CLASSPATH
Don't forget to run source ~/.bashrc so the variables take effect. After that, WordCount.java can be compiled with javac (we use the WordCount.java from the Hadoop source tree; it is listed at the end of this article):
javac WordCount.java
The compiler prints some warnings, which can be ignored. After compiling, you will see that several .class files have been generated.
[Figure: compiling the MapReduce program with javac]
Next, package the .class files into a jar so they can be run on Hadoop:
jar -cvf WordCount.jar ./WordCount*.class
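If you want to verify the packaging step, jar -tf lists the entries of an archive; the outer WordCount class plus its two nested classes should all show up:

# list the contents of the jar we just built
jar -tf WordCount.jar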
Once packaging is done, give it a try. First create a few input files:
mkdir input
echo "echo of the rainbow" > ./input/file0
echo "the waiting game" > ./input/file1
[Figure: creating the input for WordCount]
Now run it:
/usr/local/hadoop/bin/hadoop jar WordCount.jar WordCount input output
However, you may run into the following error here: Exception in thread "main" java.lang.NoClassDefFoundError: WordCount.
[Figure: error showing the WordCount class cannot be found]
Because the program declares a package, the class name on the command line must also be written out in full as org.apache.hadoop.examples.WordCount:
/usr/local/hadoop/bin/hadoop jar WordCount.jar org.apache.hadoop.examples.WordCount input output
When it runs correctly, the result looks like this:
[Figure: WordCount run result]
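To inspect the counts directly, read the part-r-* files that the reducer wrote to the output directory; a minimal sketch, assuming the standalone (local) mode used in this walkthrough, where input and output are plain local directories:

# each reducer writes one part-r-* file; print them all
cat ./output/part-r-*

Note that Hadoop refuses to start a job whose output directory already exists, so remove ./output (rm -r ./output) before re-running.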
Going further: compiling and running MapReduce programs with Eclipse
Compiling and running MapReduce programs from the command line is rather tedious; every change means compiling and packaging again by hand. Using Eclipse to compile and run MapReduce programs is more convenient.
WordCount.java source
The file is located at hadoop-2.6.0-src\hadoop-mapreduce-project\hadoop-mapreduce-examples\src\main\java\org\apache\hadoop\examples in the source distribution:
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.examples;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}