Beginners tend to take a lot of detours: even code that looks perfectly clear can throw plenty of exceptions.
Over the past two days I installed hadoop-1.2.1 on a virtual machine I set up, then typed in the introductory example from Hadoop: The Definitive Guide, hoping to run it on my own little cluster. Even such a small example threw a number of exceptions, and I still don't understand how the book's example runs successfully as printed.
Here is the code I typed in:
package LFS.Hadoop.First.simpleTest;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class SimpleTestMapper extends MapReduceBase implements
		Mapper<LongWritable, Text, LongWritable, IntWritable> {

	public void map(LongWritable inputKey, Text inputValue,
			OutputCollector<LongWritable, IntWritable> outputCollector, Reporter reporter)
			throws IOException {
		// This is where the real work happens; just code it up as you normally would.
		String line = inputValue.toString(); // read one line of input
		// split the record into fields
		String[] words = line.split(":");
		String[] sourceKey = words[1].split("p");
		long outputKey = Long.parseLong(sourceKey[0].substring(1, sourceKey[0].length() - 1));
		int outputValue = Integer.parseInt(words[2].substring(2));
		outputCollector.collect(new LongWritable(outputKey),
				new IntWritable(outputValue));
		// And that's the whole map function. That simple? Yes, that simple.
	}
}
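From the splits above one can back out the record layout the mapper seems to assume; the actual input file isn't shown in the post, so the sample line below is purely hypothetical. It assumes three colon-separated fields, a bracketed key before a 'p' in the second field, and the value after a two-character prefix in the third. A standalone sketch of the same parsing:

```java
public class ParseSketch {
    // Same string surgery as SimpleTestMapper.map(), pulled out so it can be
    // exercised without a cluster. "id:(2014)px:ab12" is an invented sample line.
    static long parseKey(String line) {
        String[] words = line.split(":");
        String[] sourceKey = words[1].split("p");
        // strip the surrounding brackets from "(2014)"
        return Long.parseLong(sourceKey[0].substring(1, sourceKey[0].length() - 1));
    }

    static int parseValue(String line) {
        String[] words = line.split(":");
        // drop the two-character prefix from "ab12"
        return Integer.parseInt(words[2].substring(2));
    }

    public static void main(String[] args) {
        String line = "id:(2014)px:ab12"; // hypothetical sample record
        System.out.println(parseKey(line));   // 2014
        System.out.println(parseValue(line)); // 12
    }
}
```

Testing the parsing like this on the command line first would have saved me a few failed job submissions.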
package LFS.Hadoop.First.simpleTest;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class SimpleTestReducer extends MapReduceBase implements
		Reducer<LongWritable, IntWritable, LongWritable, IntWritable> {

	// reduce takes the data the map phase produced and turns it into the result we want
	public void reduce(LongWritable inputKey, Iterator<IntWritable> inputValue,
			OutputCollector<LongWritable, IntWritable> outputCollector, Reporter reporter)
			throws IOException {
		// What I really want is a single global maximum, not a per-key maximum,
		// but for now this will do: find the largest value for each key.
		int maxValue = Integer.MIN_VALUE;
		while (inputValue.hasNext()) {
			IntWritable curValue = inputValue.next();
			maxValue = Math.max(maxValue, curValue.get());
		}
		// write the result to the output file
		outputCollector.collect(inputKey, new IntWritable(maxValue));
	}
}
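The comment in reduce() notes that the real goal is a single global maximum, not one per key. One common way to get that (my suggestion, not from the original post) is to have every mapper emit the same constant key, so that a single reduce call sees all the values. A plain-Java sketch of that reduce-side logic, with invented names:

```java
import java.util.Arrays;
import java.util.List;

public class GlobalMaxSketch {
    // Hypothetical stand-in for the MapReduce flow: if every mapper emits the
    // constant key 0, all values arrive in one reduce call, and the same
    // Math.max loop as in SimpleTestReducer yields the global maximum.
    static final long CONSTANT_KEY = 0L;

    static int globalMax(List<Integer> mappedValues) {
        int maxValue = Integer.MIN_VALUE;
        for (int v : mappedValues) { // mirrors the Iterator loop in reduce()
            maxValue = Math.max(maxValue, v);
        }
        return maxValue;
    }

    public static void main(String[] args) {
        // all records mapped under CONSTANT_KEY reach this single reduction
        List<Integer> values = Arrays.asList(12, 99, 7);
        System.out.println(globalMax(values)); // prints 99
    }
}
```

The trade-off is that one reducer then processes every record; for a small test dataset like this one that is not a problem.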
package LFS.Hadoop.First.simpleTest;

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SimpleTestStart {
	public static void main(String[] args) {
		JobConf jobConf = new JobConf(SimpleTestStart.class);
		// Uncommenting the next two lines (and passing the jar's path as the
		// third command-line argument) is what finally made the job run:
		//String jarName = args[2];
		//jobConf.setJar(jarName);
		jobConf.setJobName("simple test");

		Path inputPath = new Path(args[0]);
		FileInputFormat.addInputPath(jobConf, inputPath);
		Path outputPath = new Path(args[1]);
		FileOutputFormat.setOutputPath(jobConf, outputPath);

		jobConf.setMapperClass(SimpleTestMapper.class);
		jobConf.setReducerClass(SimpleTestReducer.class);
		jobConf.setOutputKeyClass(LongWritable.class);
		jobConf.setOutputValueClass(IntWritable.class);

		try {
			JobClient.runJob(jobConf);
		} catch (IOException e) {
			e.printStackTrace();
		}
	}
}
Submitting the job produced this stack trace:

Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: LFS.Hadoop.First.simpleTest.SimpleTestMapper
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:889)
	at org.apache.hadoop.mapred.JobConf.getMapperClass(JobConf.java:968)
	at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
	... 14 more
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: LFS.Hadoop.First.simpleTest.SimpleTestMapper
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:857)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:881)
	... 16 more
Caused by: java.lang.ClassNotFoundException: LFS.Hadoop.First.simpleTest.SimpleTestMapper
1. That class file was clearly inside the jar I submitted, so why was Hadoop still complaining it couldn't find it? I started searching online and found plenty of answers, mostly about how Hadoop resolves the classpath of job classes. I first tried export HADOOP_CLASSPATH=<directory containing the jar>, which didn't work; then setJarByClass, which also failed; finally jobConf.setJar succeeded. Those are the commented-out lines in the driver above: uncomment them, pass in the jar's path, and the job runs.
2. The other issue is the data paths. When the path you pass in is not absolute, you often get a FileNotFoundException, and the path Hadoop assembles at runtime is not at all what you expect: I passed /test/pp.cvs, but Hadoop resolved it to hdfs://127.0.0.1:9000/user/username/test/pp.cvs. By default Hadoop qualifies a relative path against hdfs://127.0.0.1:9000/user/username. To be unambiguous about where your data lives, use a fully qualified path such as hdfs://127.0.0.1:9000/...
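This qualification behaves much like standard URI resolution: a relative path is resolved against the filesystem's working directory, while a fully qualified URI is used as-is. A rough analogy in plain Java (java.net.URI is not what Hadoop uses internally, and the fs.default.name and working directory below are the hypothetical values from this post):

```java
import java.net.URI;

public class PathResolutionSketch {
    // Resolve a path string against an assumed HDFS working directory,
    // roughly imitating how Hadoop qualifies job input/output paths.
    static String qualify(URI workingDir, String path) {
        return workingDir.resolve(path).toString();
    }

    public static void main(String[] args) {
        // assumed defaults: fs at hdfs://127.0.0.1:9000, working dir /user/username
        URI workingDir = URI.create("hdfs://127.0.0.1:9000/user/username/");
        // a relative path is resolved against the working directory...
        System.out.println(qualify(workingDir, "test/pp.cvs"));
        // ...while a fully qualified URI is left untouched
        System.out.println(qualify(workingDir, "hdfs://127.0.0.1:9000/test/pp.cvs"));
    }
}
```

Printing the qualified form of your arguments before submitting the job makes this kind of surprise visible immediately.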
These are the problems I ran into over the past two days; I'm recording them here in case they're useful later.