【Hadoop】A few bugs hit while running jobs on Hadoop 2.7.3, and how to fix them

Reposted; original article: http://blog.csdn.net/lsttoy/article/details/52400193
While running Hadoop jobs recently, I ran into three problems.

Baseline: both the name server and the node servers are running normally; the web UI shows every node as OK and alive.

Symptom 1: the job stays stuck in the running state and never makes any progress.
16/09/01 09:32:29 INFO mapreduce.Job: Running job: job_1472644198158_0001

Symptom 2: as the log below shows, task attempts keep being retried and report FAILED until the whole job fails; re-running the example job then aborts with the FileAlreadyExistsException shown after the counters.
16/09/01 09:32:29 INFO mapreduce.Job: Running job: job_1472644198158_0001
16/09/01 09:32:46 INFO mapreduce.Job: Job job_1472644198158_0001 running in uber mode : false
16/09/01 09:32:46 INFO mapreduce.Job: map 0% reduce 0%
16/09/01 09:33:08 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_0, Status : FAILED
16/09/01 09:33:08 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_0, Status : FAILED
16/09/01 09:33:25 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_1, Status : FAILED
16/09/01 09:33:29 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_1, Status : FAILED
16/09/01 09:33:41 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_2, Status : FAILED
16/09/01 09:33:45 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_2, Status : FAILED
16/09/01 09:33:58 INFO mapreduce.Job: map 100% reduce 100%
16/09/01 09:33:58 INFO mapreduce.Job: Job job_1472644198158_0001 failed with state FAILED due to: Task failed task_1472644198158_0001_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0

16/09/01 09:33:58 INFO mapreduce.Job: Counters: 17
	Job Counters
		Failed map tasks=7
		Killed map tasks=1
		Killed reduce tasks=1
		Launched map tasks=8
		Other local map tasks=6
		Data-local map tasks=2
		Total time spent by all maps in occupied slots (ms)=123536
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=123536
		Total time spent by all reduce tasks (ms)=0
		Total vcore-milliseconds taken by all map tasks=123536
		Total vcore-milliseconds taken by all reduce tasks=0
		Total megabyte-milliseconds taken by all map tasks=126500864
		Total megabyte-milliseconds taken by all reduce tasks=0
	Map-Reduce Framework
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
[root@slave1 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
16/09/01 10:16:30 INFO client.RMProxy: Connecting to ResourceManager at /114.XXX.XXX.XXX:8032
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://master:9000/output already exists
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Symptom 3: jobs running on the worker (child) nodes report errors.
16/09/01 09:32:29 INFO mapreduce.Job: Running job: job_1472644198158_0001
16/09/01 09:32:46 INFO mapreduce.Job: Job job_1472644198158_0001 running in uber mode : false
16/09/01 09:32:46 INFO mapreduce.Job: map 0% reduce 0%
16/09/01 09:33:08 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_0, Status : FAILED
16/09/01 09:33:08 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_0, Status : FAILED
16/09/01 09:33:25 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_1, Status : FAILED
16/09/01 09:33:29 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_1, Status : FAILED
16/09/01 09:33:41 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_2, Status : FAILED

Solutions:
For problem 1 (the job stuck in the running state), edit yarn-site.xml so that YARN can actually allocate containers: set the minimum allocation to 1024 MB and the maximum to 2048 MB, and the computation will start. See the sketch below.
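A minimal sketch of the relevant yarn-site.xml properties, using the 1024 MB / 2048 MB figures above; these two property names are the standard YARN scheduler settings, but tune the values to the RAM your nodes actually have:

<!-- yarn-site.xml (sketch): container memory limits for the scheduler -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value> <!-- smallest container the ResourceManager will grant -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value> <!-- largest container a single task may request -->
</property>

After changing these, restart YARN (stop-yarn.sh, then start-yarn.sh) so the new limits take effect.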
For problem 2, if you hit the FileAlreadyExistsException shown in the stack trace above, the likely cause is that the job's output directory (here /output) already exists on HDFS.
Remove it with hadoop fs -rm -r /output (on older releases the equivalent command was hadoop fs -rmr /output).
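If you would rather not delete the directory by hand on every run, here is a minimal sketch in Java that removes a pre-existing output directory from the driver before the job is submitted. OutputCleaner and deleteIfExists are hypothetical names of my own; Configuration, FileSystem, and Path are the real Hadoop classes:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: delete the output directory (if present) so that
// FileOutputFormat.checkOutputSpecs() does not throw on submit.
public class OutputCleaner {
    public static void deleteIfExists(Configuration conf, String outputDir)
            throws IOException {
        Path out = new Path(outputDir);
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(out)) {
            fs.delete(out, true); // true = delete recursively
        }
    }
}

Call it from your driver, e.g. OutputCleaner.deleteIfExists(conf, args[1]), before job.waitForCompletion(true).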
For problem 3, if the job fails, check the logs on the worker nodes; the likely cause is that the worker nodes and the name node cannot communicate because they fail to resolve each other's hostnames.
The fix is to edit the hosts file on every node so that all cluster hostnames resolve correctly, as sketched below.
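A sketch of the /etc/hosts entries each node needs; master and slave1 appear in the logs above, while slave2 and all IP addresses are placeholders for your cluster's real values:

192.168.1.10  master   # NameNode / ResourceManager
192.168.1.11  slave1   # DataNode / NodeManager
192.168.1.12  slave2   # placeholder: additional worker

Keep the file identical on every node, and make sure these hostnames do not also map to 127.0.0.1 in the same file, or the daemons may bind to the loopback interface and become unreachable from the other nodes.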
