When submitting the MapReduce example job to YARN, it hangs at map 0% reduce 0% and then fails with:
Job job_** failed with state FAILED due to: Application application_*** failed 2 times due to AM Container for appattempt_*** exited with exitCode: -1
The diagnostics show that the file container_*** does not exist. The full output:
19/04/17 16:49:06 INFO mapreduce.Job: map 0% reduce 0%
19/04/17 16:49:06 INFO mapreduce.Job: Job job_1555490729473_0002 failed with state FAILED due to: Application application_1555490729473_0002 failed 2 times due to AM Container for appattempt_1555490729473_0002_000002 exited with exitCode: -1
For more detailed output, check application tracking page:http://suddev-PC:8088/proxy/application_1555490729473_0002/Then, click on links to logs of each attempt.
Diagnostics: File /home/suddev/dev/bd/app/tmp/nm-local-dir/usercache/suddev/appcache/application_1555490729473_0002/container_1555490729473_0002_02_000001 does not exist
Failing this attempt. Failing the application.
19/04/17 16:49:06 INFO mapreduce.Job: Counters: 0
Job Finished in 9.091 seconds
java.io.FileNotFoundException: File does not exist: hdfs://localhost:8020/user/suddev/QuasiMonteCarlo_1555490936334_196215618/out/reduce-out
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1750)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1774)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
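Before reading raw NodeManager logs, the aggregated container logs are usually the quickest way to see why an AM exited. If log aggregation (yarn.log-aggregation-enable) is turned on, something like the following pulls them for this run:

./yarn logs -applicationId application_1555490729473_0002

Here the diagnostics on the tracking page already point at a missing container directory, so the NodeManager log is the next place to look.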
Continuing with the YARN NodeManager log:
2019-04-17 16:46:11,609 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Failed to launch container.
java.io.FileNotFoundException: File /home/suddev/dev/bd/app/tmp/nm-local-dir/usercache/suddev/appcache/application_1555490729473_0001/container_1555490729473_0001_02_000001 does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:534)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:1051)
at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:157)
at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:197)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:724)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:720)
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:720)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.createDir(DefaultContainerExecutor.java:513)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:161)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-04-17 16:46:11,610 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1555490729473_0001_02_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
2019-04-17 16:46:11,610 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1555490729473_0001_02_000001
2019-04-17 16:46:12,914 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1555490729473_0001_02_000001
2019-04-17 16:46:13,714 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Could not get pid for container_1555490729473_0001_02_000001. Waited for 2000 ms.
2019-04-17 16:46:13,727 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-suddev/nm-local-dir/usercache/suddev/appcache/application_1555490729473_0001/container_1555490729473_0001_02_000001
Checking the directory confirms that the file really does not exist. After various attempts, the cause turns out to be that YARN's yarn.nodemanager.local-dirs and Hadoop's hadoop.tmp.dir point to different locations.
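The two paths in the log hint at the mismatch: /tmp/hadoop-suddev matches Hadoop's default hadoop.tmp.dir of /tmp/hadoop-${user.name}, while the failing path matches the custom value. yarn.nodemanager.local-dirs defaults to ${hadoop.tmp.dir}/nm-local-dir, and the NodeManager reads core-site.xml and yarn-site.xml but not hdfs-site.xml, so a hadoop.tmp.dir set only in hdfs-site.xml is invisible to it. Listing both candidate directories (paths copied from the logs above) makes this concrete:

ls -ld /home/suddev/dev/bd/app/tmp/nm-local-dir
ls -ld /tmp/hadoop-suddev/nm-local-dir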
Solution
Set the hadoop.tmp.dir property in hdfs-site.xml and the yarn.nodemanager.local-dirs property in yarn-site.xml to the same path. (hadoop.tmp.dir is conventionally defined in core-site.xml; setting it there would also make the value visible to the NodeManager, but pinning yarn.nodemanager.local-dirs explicitly works either way.)
Example:
hdfs-site.xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/suddev/dev/bd/app/tmp</value>
</property>
yarn-site.xml
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/home/suddev/dev/bd/app/tmp</value>
</property>
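To sanity-check what the daemons will actually pick up, you can query the merged client configuration and grep the YARN config (paths below assume a standard $HADOOP_HOME layout):

./hdfs getconf -confKey hadoop.tmp.dir
grep -A1 'yarn.nodemanager.local-dirs' $HADOOP_HOME/etc/hadoop/yarn-site.xml

Both should now show /home/suddev/dev/bd/app/tmp.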
Then restart HDFS and YARN, and everything works normally:
./stop-dfs.sh
./stop-yarn.sh
./start-dfs.sh
./start-yarn.sh
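As a final check, resubmitting the example job should now get past map 0% reduce 0%. A minimal re-test, assuming the stock examples jar under $HADOOP_HOME/share/hadoop/mapreduce (the version in the jar name depends on the install):

./hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 5

pi is the QuasiMonteCarlo job from the stack trace above; a successful run ends with an "Estimated value of Pi is ..." line instead of the FileNotFoundException.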