A MapReduce program that compiled and ran fine in local testing fails with the following error when submitted to the cluster:
15/12/24 14:19:36 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1448435314558_0043
15/12/24 14:19:37 INFO impl.YarnClientImpl: Submitted application application_1448435314558_0043
15/12/24 14:19:37 INFO mapreduce.Job: The url to track the job: http://InterFinance02:8088/proxy/application_1448435314558_0043/
15/12/24 14:19:37 INFO mapreduce.Job: Running job: job_1448435314558_0043
15/12/24 14:19:46 INFO mapreduce.Job: Job job_1448435314558_0043 running in uber mode : false
15/12/24 14:19:46 INFO mapreduce.Job: map 0% reduce 0%
15/12/24 14:19:52 INFO mapreduce.Job: Task Id : attempt_1448435314558_0043_m_000000_0, Status : FAILED
Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
15/12/24 14:19:59 INFO mapreduce.Job: Task Id : attempt_1448435314558_0043_m_000000_1, Status : FAILED
Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
15/12/24 14:20:06 INFO mapreduce.Job: Task Id : attempt_1448435314558_0043_m_000000_2, Status : FAILED
Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
15/12/24 14:20:15 INFO mapreduce.Job: map 100% reduce 100%
15/12/24 14:20:15 INFO mapreduce.Job: Job job_1448435314558_0043 failed with state FAILED due to: Task failed task_1448435314558_0043_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
15/12/24 14:20:15 INFO mapreduce.Job: Counters: 12
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=21980
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=21980
Total vcore-seconds taken by all map tasks=21980
Total megabyte-seconds taken by all map tasks=22507520
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Solution:
The key is this line: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
At runtime the JVM found TaskAttemptContext as an interface, but the compiled bytecode expected it to be a class.
This happens when the program is compiled locally against a lower Hadoop version (1.x) and then run on a cluster with a higher version (2.x): the two versions declare TaskAttemptContext differently, and the resulting binary incompatibility fails every map attempt.
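A quick way to confirm which form a type takes on a given classpath is reflection. The sketch below is a hypothetical helper (the name KindCheck and method kindOf are not from the original post); it uses stdlib stand-ins so it runs anywhere, and on the cluster one could instead pass the result of Class.forName("org.apache.hadoop.mapreduce.TaskAttemptContext"):

```java
// Minimal sketch: report whether a loaded type is an interface or a class.
public class KindCheck {
    static String kindOf(Class<?> c) {
        return c.isInterface() ? "interface" : "class";
    }

    public static void main(String[] args) {
        // Stdlib stand-ins: List is an interface, ArrayList is a class.
        // On a Hadoop classpath, try:
        //   kindOf(Class.forName("org.apache.hadoop.mapreduce.TaskAttemptContext"))
        System.out.println(kindOf(java.util.List.class));      // interface
        System.out.println(kindOf(java.util.ArrayList.class)); // class
    }
}
```

If this prints "interface" for TaskAttemptContext on the cluster but your jar was compiled against a version where it was a class, you have exactly the mismatch in the error above.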
The change can be seen by comparing the source code of the two versions.
Hadoop 1.2.1 declares it as follows:
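In Hadoop 1.2.1, TaskAttemptContext is a concrete class (declaration sketched from memory, body elided):

```java
// org.apache.hadoop.mapreduce.TaskAttemptContext in Hadoop 1.2.1:
// a class, so callers bind to it with class semantics at compile time.
public class TaskAttemptContext extends JobContext implements Progressable {
    // ...
}
```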
Hadoop 2.6.0 declares it as follows:
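In Hadoop 2.6.0, the same fully qualified name is an interface (declaration sketched from memory, body elided):

```java
// org.apache.hadoop.mapreduce.TaskAttemptContext in Hadoop 2.6.0:
// an interface, so bytecode compiled against the 1.x class form
// fails at runtime with IncompatibleClassChangeError.
public interface TaskAttemptContext extends JobContext, Progressable {
    // ...
}
```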
So in the newer Hadoop, TaskAttemptContext is only an interface. The fix is to recompile the program locally against the same Hadoop version that the cluster runs (here 2.6.0); after that, the error goes away.
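For a Maven build, that means pinning the Hadoop dependency to the cluster's version. A minimal sketch using the standard hadoop-client artifact (the version string should match your cluster; provided scope is used here because the cluster supplies the Hadoop jars at runtime):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.6.0</version>
  <scope>provided</scope>
</dependency>
```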