【Error 1】
12/12/05 23:11:45 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 791 bytes
12/12/05 23:11:45 INFO mapred.LocalJobRunner:
12/12/05 23:11:45 WARN mapred.LocalJobRunner: job_local_0001
java.lang.RuntimeException: java.lang.NoSuchMethodException: com.wang.demo.DataJoin$TaggedWritable.<init>()
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:123)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:68)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:44)
at org.apache.hadoop.mapred.Task$ValuesIterator.readNextValue(Task.java:1210)
at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:1150)
at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.moveToNext(ReduceTask.java:215)
at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.next(ReduceTask.java:211)
at org.apache.hadoop.contrib.utils.join.DataJoinReducerBase.regroup(DataJoinReducerBase.java:106)
at org.apache.hadoop.contrib.utils.join.DataJoinReducerBase.reduce(DataJoinReducerBase.java:129)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:440)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:388)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:413)
Caused by: java.lang.NoSuchMethodException: com.wang.demo.DataJoin$TaggedWritable.<init>()
at java.lang.Class.getConstructor0(Class.java:2706)
at java.lang.Class.getDeclaredConstructor(Class.java:1985)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
... 11 more
12/12/05 23:11:45 INFO mapreduce.Job: map 100% reduce 0%
12/12/05 23:11:45 INFO mapreduce.Job: Job complete: job_local_0001
Check whether DataJoin's TaggedWritable class is missing a default (no-argument) constructor. Hadoop instantiates Writable classes reflectively during deserialization (see ReflectionUtils.newInstance in the trace above), so every custom Writable must declare a public no-arg constructor; once you add a parameterized constructor, the compiler no longer generates the implicit one.
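The mechanism can be reproduced with plain Java reflection, independent of Hadoop. A minimal sketch (the class names below are illustrative stand-ins for the broken and fixed TaggedWritable, not the original code):

```java
import java.lang.reflect.Constructor;

public class ReflectionCtorDemo {
    // Mimics a Writable that only defines a parameterized constructor,
    // like the failing TaggedWritable: no implicit no-arg constructor exists.
    static class NoDefaultCtor {
        private final String tag;
        NoDefaultCtor(String tag) { this.tag = tag; }
    }

    // Mimics the fixed version: an explicit no-arg constructor is declared,
    // so reflective instantiation succeeds.
    static class WithDefaultCtor {
        private String tag;
        WithDefaultCtor() {}                       // required for reflection
        WithDefaultCtor(String tag) { this.tag = tag; }
    }

    public static void main(String[] args) throws Exception {
        try {
            // This lookup is essentially what ReflectionUtils.newInstance() does.
            NoDefaultCtor.class.getDeclaredConstructor().newInstance();
        } catch (NoSuchMethodException e) {
            System.out.println("NoSuchMethodException: no no-arg constructor");
        }
        Constructor<WithDefaultCtor> c = WithDefaultCtor.class.getDeclaredConstructor();
        System.out.println("instantiated: " + (c.newInstance() != null));
    }
}
```

The fix in the real code is the same one-liner as in WithDefaultCtor: add `public TaggedWritable() {}` alongside the existing constructor.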
【Error 2】
Today I wrote a small demo that reads a file from HDFS, and it failed with:
java.io.IOException: Cannot open filename /user/wang/user1/part-m-000000
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1497)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1488)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:376)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:178)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
at com.wang.hdfs.ReadFile.main(ReadFile.java:39)
The file path was: String file = "/user/wang/user1/part-m-000000";
I searched online for ages without finding a solution. It turned out the file path itself was wrong: I had written one extra 0 at the end. You would expect a plain file-not-found error in that case, but instead it throws this one!
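Older DFSClient versions surface a missing file from openInfo() as this generic IOException rather than a FileNotFoundException, which is why the message is so unhelpful. A defensive pattern is to check the path with FileSystem.exists() before opening, turning a typo into a clear message. A minimal sketch against the old org.apache.hadoop.fs API, assuming cluster settings come from the default Configuration (the path string is the one from this post and requires a live HDFS to actually run):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class ReadFile {
    public static void main(String[] args) throws Exception {
        String file = "/user/wang/user1/part-m-000000";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path(file);

        // Fail fast with an explicit message instead of relying on
        // DFSClient's generic "Cannot open filename" IOException.
        if (!fs.exists(path)) {
            System.err.println("No such file on HDFS: " + path);
            return;
        }

        FSDataInputStream in = fs.open(path);
        try {
            // Copy the file contents to stdout.
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            in.close();
        }
    }
}
```

With the exists() check in place, the mistyped path prints "No such file on HDFS: ..." immediately instead of the confusing stack trace above.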