Hadoop error: failed with state FAILED due to: Application

0. Deploying Hadoop and running a test job

After finishing the Hadoop deployment, I changed the NameNode hostname from localhost to hadoop001, then ran the wordcount test program on the cluster.
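For context, the NameNode hostname lives in core-site.xml as fs.defaultFS. A minimal sketch of what that change looks like (the 9000 port here is an assumption; use whatever your original localhost entry had):

<configuration>
  <property>
    <!-- HDFS NameNode address; every config entry that previously said localhost must agree with this -->
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop001:9000</value>
  </property>
</configuration>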

1. The job failed at runtime; the full error output follows:
20/05/09 23:55:49 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
20/05/09 23:55:51 INFO input.FileInputFormat: Total input paths to process : 1
20/05/09 23:55:51 INFO mapreduce.JobSubmitter: number of splits:1
20/05/09 23:55:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1589039466797_0001
20/05/09 23:55:52 INFO impl.YarnClientImpl: Submitted application application_1589039466797_0001
20/05/09 23:55:52 INFO mapreduce.Job: The url to track the job: http://hadoop001:8088/proxy/application_1589039466797_0001/
20/05/09 23:55:52 INFO mapreduce.Job: Running job: job_1589039466797_0001
20/05/09 23:56:01 INFO mapreduce.Job: Job job_1589039466797_0001 running in uber mode : false
20/05/09 23:56:01 INFO mapreduce.Job:  map 0% reduce 0%
20/05/09 23:56:06 INFO mapreduce.Job:  map 100% reduce 0%
20/05/09 23:56:12 INFO mapreduce.Job:  map 100% reduce 100%
20/05/09 23:56:17 INFO ipc.Client: Retrying connect to server: 192.168.1.111/192.168.1.111:19307. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
20/05/09 23:56:18 INFO ipc.Client: Retrying connect to server: 192.168.1.111/192.168.1.111:19307. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
20/05/09 23:56:19 INFO ipc.Client: Retrying connect to server: 192.168.1.111/192.168.1.111:19307. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
20/05/09 23:56:25 INFO mapreduce.Job:  map 0% reduce 0%
20/05/09 23:56:25 INFO mapreduce.Job: Job job_1589039466797_0001 failed with state FAILED due to: Application application_1589039466797_0001 failed 2 times due to AM Container for appattempt_1589039466797_0001_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://hadoop001:8088/cluster/app/application_1589039466797_0001Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1589039466797_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:585)
	at org.apache.hadoop.util.Shell.run(Shell.java:482)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
20/05/09 23:56:25 INFO mapreduce.Job: Counters: 0
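The diagnostics only say the AM container exited with code 1. To see the container's actual stderr, the aggregated logs can be pulled with the YARN CLI (this assumes log aggregation is enabled; otherwise check the NodeManager's local log directories):

bin/yarn logs -applicationId application_1589039466797_0001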
2. Tracking down the cause

I searched around online and tried various suggestions, but none of them seemed to address this problem. Eventually I found a post describing exactly the same error as mine, so I am recording it here. The file involved is mapred-site.xml; the corrected contents are as follows:

<?xml version="1.0" encoding="utf-8"?>

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop001:9001</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <!--<value>master:10020</value>-->
    <value>hadoop001:10020</value>
  </property>
  <property>
    <!-- Web UI address for browsing completed MapReduce jobs on the history server;
         the JobHistory Server must be started for this to work -->
    <name>mapreduce.jobhistory.webapp.address</name>
    <!--<value>master:19888</value>-->
    <value>hadoop001:19888</value>
    <description>MapReduce JobHistory Server Web UI host:port</description>
  </property>
</configuration>
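As the comment above says, the two jobhistory addresses only do anything if the JobHistory Server is actually running. In Hadoop 2.7.x it is started separately; from the Hadoop home directory:

sbin/mr-jobhistory-daemon.sh start historyserver

After that, the web UI should answer at http://hadoop001:19888.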

Looking at the config above, my guess is that it was originally copied from somewhere else. It worked while everything still said localhost, but when I renamed the host to hadoop001 I forgot to update the mapreduce.jobhistory entries, and that is what broke the job. As I later learned, the mapreduce.jobhistory section is actually optional: leave it out entirely and jobs run without problems, but if you do configure it, the values must point at the correct host. With that fixed, it finally worked; noting it down here.
If you leave out the mapreduce.jobhistory settings, the following is enough:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop001:9001</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
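A side note: mapred.job.tracker is a leftover Hadoop 1.x JobTracker property and, as far as I can tell, is ignored once mapreduce.framework.name is set to yarn; I keep it only because it was in the original file. Either way, restart YARN after editing mapred-site.xml so the NodeManagers pick up the change (assuming the default sbin layout):

sbin/stop-yarn.sh
sbin/start-yarn.sh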

3. Successful run, checking the results
[root@hadoop-2.7.7]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar wordcount /test/words /test/wordout11
20/05/10 16:04:58 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
20/05/10 16:04:59 INFO input.FileInputFormat: Total input paths to process : 1
20/05/10 16:05:00 INFO mapreduce.JobSubmitter: number of splits:1
20/05/10 16:05:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1589039466797_0003
20/05/10 16:05:00 INFO impl.YarnClientImpl: Submitted application application_1589039466797_0003
20/05/10 16:05:00 INFO mapreduce.Job: The url to track the job: http://hadoop001:8088/proxy/application_1589039466797_0003/
20/05/10 16:05:00 INFO mapreduce.Job: Running job: job_1589039466797_0003
20/05/10 16:05:07 INFO mapreduce.Job: Job job_1589039466797_0003 running in uber mode : false
20/05/10 16:05:07 INFO mapreduce.Job:  map 0% reduce 0%
20/05/10 16:05:12 INFO mapreduce.Job:  map 100% reduce 0%
20/05/10 16:05:16 INFO mapreduce.Job:  map 100% reduce 100%
20/05/10 16:05:17 INFO mapreduce.Job: Job job_1589039466797_0003 completed successfully
20/05/10 16:05:18 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=68
		FILE: Number of bytes written=246119
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=161
		HDFS: Number of bytes written=42
		HDFS: Number of read operations=6
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters 
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=2783
		Total time spent by all reduces in occupied slots (ms)=2856
		Total time spent by all map tasks (ms)=2783
		Total time spent by all reduce tasks (ms)=2856
		Total vcore-milliseconds taken by all map tasks=2783
		Total vcore-milliseconds taken by all reduce tasks=2856
		Total megabyte-milliseconds taken by all map tasks=2849792
		Total megabyte-milliseconds taken by all reduce tasks=2924544
	Map-Reduce Framework
		Map input records=3
		Map output records=7
		Map output bytes=74
		Map output materialized bytes=68
		Input split bytes=115
		Combine input records=7
		Combine output records=5
		Reduce input groups=5
		Reduce shuffle bytes=68
		Reduce input records=5
		Reduce output records=5
		Spilled Records=10
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=129
		CPU time spent (ms)=1300
		Physical memory (bytes) snapshot=439214080
		Virtual memory (bytes) snapshot=4290285568
		Total committed heap usage (bytes)=314048512
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=46
	File Output Format Counters 
		Bytes Written=42
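To see the actual word counts rather than just the counters, read the job output from HDFS (the part-r-00000 file name assumes the default single reducer):

bin/hdfs dfs -cat /test/wordout11/part-r-00000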

Finally got it working.

As for actually using mapreduce.jobhistory, I'll write that up another time.
