When we ran our MapReduce code, it failed with the following error:
Job job_1607082280342_0001 failed with state FAILED due to: Task failed task_1607082280342_0001_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1
20/12/04 19:31:51 INFO mapreduce.Job: Counters: 41
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=212
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=193
HDFS: Number of bytes written=130525
HDFS: Number of read operations=16
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Failed reduce tasks=1
Launched map tasks=1
Launched reduce tasks=1
Other local map tasks=1
Total time spent by all maps in occupied slots (ms)=322
Total time spent by all reduces in occupied slots (ms)=161
TOTAL_LAUNCHED_UBERTASKS=2
NUM_UBER_SUBMAPS=1
NUM_UBER_SUBREDUCES=1
NUM_FAILED_UBERTASKS=1
Total time spent by all map tasks (ms)=322
Total time spent by all reduce tasks (ms)=161
Total vcore-milliseconds taken by all map tasks=322
Total vcore-milliseconds taken by all reduce tasks=161
Total megabyte-milliseconds taken by all map tasks=329728
Total megabyte-milliseconds taken by all reduce tasks=164864
This is a mistake that many newcomers to big data are likely to make. After some troubleshooting, we found that we had imported the wrong `Text` class.
The correct import is the `Text` class under `org.apache.hadoop.io`.
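As a minimal illustration (a generic WordCount-style reducer, not necessarily the exact code from this job), the sketch below shows the correct Hadoop import. Note that several common packages also contain a class named `Text` (for example `org.w3c.dom.Text`), so an IDE's auto-import can easily pick the wrong one; the code may still compile, but the reduce task then fails at runtime with a type mismatch.

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;   // correct: Hadoop's Writable Text type
// import org.w3c.dom.Text;         // wrong: DOM Text, not a Hadoop Writable

import org.apache.hadoop.mapreduce.Reducer;

// A minimal word-count reducer: sums the counts emitted by the mapper for each word.
public class WordCountReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
```

With the wrong `Text` import, the key type declared in the job configuration no longer matches the reducer's actual signature, which is why only the reduce task fails (failedMaps:0 failedReduces:1) while the map side runs fine.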
After fixing the import, the job ran successfully:
So when writing code, always be careful: make sure you know which package the class you want lives in, and which method to call in which situation.