Then check the YARN WebUI, as shown in the figure:
Then look at the Hive client's console output:
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1552895066408_0001, Tracking URL = http://localhost:8088/proxy/application_1552895066408_0001/
Kill Command = /wangqingguo/bigdata/hadoop-2.6.0-cdh5.7.0/bin/hadoop job -kill job_1552895066408_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2019-03-18 15:44:40,758 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1552895066408_0001 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
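When Hive only reports "return code 2", the real failure reason is usually not in the client console at all; it lives in the YARN container logs. Assuming log aggregation is enabled, they can be pulled with the `yarn logs` CLI by application id (the id below is the one printed in the console output above):

```shell
# Fetch the aggregated container logs for the failed job;
# the application id is taken from the Hive console output above.
yarn logs -applicationId application_1552895066408_0001
```

Alternatively, follow the Tracking URL printed by Hive to read the same logs in the WebUI.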
2. Troubleshooting approach:
The YARN WebUI showed that YARN had no cores and no memory available. Normally, even if cores and memory are not configured explicitly, yarn-site.xml falls back to default values.
To be safe, I added the following parameters:
yarn.nodemanager.resource.cpu-vcores = 8
yarn.nodemanager.resource.memory-mb = 8192
yarn.scheduler.minimum-allocation-mb = 1024
yarn.scheduler.maximum-allocation-mb = 8192

After restarting the YARN processes and resubmitting the job, the same error came back. Reading the logs on the WebUI carefully, one line finally stood out:

Hadoop MapReduce Error - /bin/bash: /bin/java: is a directory

That was the real cause: the job could not find Java. I configured JAVA_HOME in etc/hadoop/hadoop-env.sh, and the problem was solved.
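For reference, a sketch of the two changes described above. The yarn-site.xml fragment mirrors the four parameters listed in the text; the JDK path in the hadoop-env.sh line is an assumption and must be adjusted to an actual JDK installation on your machine:

```xml
<!-- yarn-site.xml: explicit NodeManager resources and scheduler allocation bounds -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>
```

And in etc/hadoop/hadoop-env.sh, point JAVA_HOME at a concrete JDK directory instead of relying on indirect lookup (example path only):

export JAVA_HOME=/usr/java/jdk1.8.0_45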
Haha, this is the troubleshooting method Ruoze taught me in the advanced class.
Sometimes the answer to a problem really doesn't matter much; the troubleshooting approach is what counts!