Spark on YARN startup error: Not able to initialize app directories in any of the configured local directories

Starting spark-shell with --master yarn fails with:

Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
23/08/10 18:49:55 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/08/10 18:49:56 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
23/08/10 18:50:00 ERROR YarnClientSchedulerBackend: The YARN application has already ended! It might have been killed or the Application Master may have failed to start. Check the YARN application logs for more details.
23/08/10 18:50:00 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Application application_1691658778247_0002 failed 2 times due to AM Container for appattempt_1691658778247_0002_000002 exited with  exitCode: -1000
Failing this attempt.Diagnostics: [2023-08-10 18:50:00.128]Not able to initialize app-log directories in any of the configured local directories for app application_1691658778247_0002
For more detailed output, check the application tracking page: http://master1:8034/cluster/app/application_1691658778247_0002 Then click on links to logs of each attempt.
. Failing the application.
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:98)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:65)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:222)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:585)
	at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2704)
	at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:953)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:947)
	at org.apache.spark.repl.Main$.createSparkSession(Main.scala:106)
	at $line3.$read$$iw$$iw.<init>(<console>:15)
	at $line3.$read$$iw.<init>(<console>:42)
	at $line3.$read.<init>(<console>:44)
	at $line3.$read$.<init>(<console>:48)
	at $line3.$read$.<clinit>(<console>)
	at $line3.$eval$.$print$lzycompute(<console>:7)
	at $line3.$eval$.$print(<console>:6)
	at $line3.$eval.$print(<console>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:747)
	at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1020)
	at scala.tools.nsc.interpreter.IMain.$anonfun$interpret$1(IMain.scala:568)
	at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:36)
	at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:116)
	at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:41)
	at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:567)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:594)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:564)
	at scala.tools.nsc.interpreter.IMain.$anonfun$quietRun$1(IMain.scala:216)
	at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:206)
	at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:216)
	at org.apache.spark.repl.SparkILoop.$anonfun$initializeSpark$2(SparkILoop.scala:83)
	at scala.collection.immutable.List.foreach(List.scala:431)
	at org.apache.spark.repl.SparkILoop.$anonfun$initializeSpark$1(SparkILoop.scala:83)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.tools.nsc.interpreter.ILoop.savingReplayStack(ILoop.scala:97)
	at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:83)
	at org.apache.spark.repl.SparkILoop.$anonfun$process$4(SparkILoop.scala:165)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.tools.nsc.interpreter.ILoop.$anonfun$mumly$1(ILoop.scala:166)
	at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:206)
	at scala.tools.nsc.interpreter.ILoop.mumly(ILoop.scala:163)
	at org.apache.spark.repl.SparkILoop.loopPostInit$1(SparkILoop.scala:153)
	at org.apache.spark.repl.SparkILoop.$anonfun$process$10(SparkILoop.scala:221)
	at org.apache.spark.repl.SparkILoop.withSuppressedSettings$1(SparkILoop.scala:189)
	at org.apache.spark.repl.SparkILoop.startup$1(SparkILoop.scala:201)
	at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:236)
	at org.apache.spark.repl.Main$.doMain(Main.scala:78)
	at org.apache.spark.repl.Main$.main(Main.scala:58)
	at org.apache.spark.repl.Main.main(Main.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
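One aside before the real error: the Client WARN above about spark.yarn.jars / spark.yarn.archive is unrelated to this failure; it only means every submit re-uploads the libraries under SPARK_HOME. If you want to silence it, a common recipe is to archive the Spark jars once on HDFS and point spark.yarn.archive at the archive; the HDFS path here is an example, not this cluster's:

jar cv0f spark-libs.jar -C $SPARK_HOME/jars/ .
hdfs dfs -mkdir -p /spark/jars
hdfs dfs -put spark-libs.jar /spark/jars/
# then in spark-defaults.conf:
# spark.yarn.archive    hdfs:///spark/jars/spark-libs.jar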

The key message is: Not able to initialize app-log directories in any of the configured local directories for app application_1691658778247_0002, with exitCode: -1000.
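The "configured local directories" are the NodeManager directories from yarn-site.xml, and exitCode -1000 typically means the container failed during resource localization, before any user code ran. For orientation, a minimal sketch of the relevant properties (the paths are placeholders, not this cluster's actual values):

<!-- yarn-site.xml -->
<!-- container working files -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/hadoop/yarn/local</value>
</property>
<!-- container logs: the "app-log directories" in the error -->
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/data/hadoop/yarn/logs</value>
</property>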

I tried a lot of the advice out there: deleting usercache/* under the local-dirs (sketched below for reference), clearing out the files under the log-dirs, and so on; I even went as far as reformatting the NameNode. Nothing solved it, so I went back to the logs: since this is a YARN problem, it should involve the NodeManager or the ResourceManager.
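The usercache cleanup goes roughly like this on each NodeManager host; the local-dir path is illustrative, substitute whatever yarn.nodemanager.local-dirs points to:

# stop the NodeManager before touching its local dirs
$HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager    # Hadoop 2.x
# on Hadoop 3.x: yarn --daemon stop nodemanager
rm -rf /data/hadoop/yarn/local/usercache/*           # example path
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager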

On one of the nodes, I checked the logs under $HADOOP_HOME/logs/.

In hadoop-root-nodemanager-XXXXXX.log, reading from startup onward, I found a WARN from the disk health checker: local-dir usage was above the disk utilization threshold.

The disk was full... even though this was logged only as a WARN.
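That WARN comes from the NodeManager's disk health checker: once a disk crosses the utilization threshold (90% by default), it is removed from the list of valid local-dirs/log-dirs, and with no healthy directory left, every localization attempt fails with exitCode -1000. Freeing space is the real fix; the threshold itself is a yarn-site.xml knob if you ever need to inspect or temporarily raise it:

<!-- utilization % above which a dir is marked bad; the default is 90.0 -->
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>95.0</value>
</property>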

After freeing up disk space, I restarted Hadoop and started spark-shell in YARN mode again. This time it came up successfully, and subsequent jobs ran fine.
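For completeness, the restart amounts to the standard sbin scripts (adjust if your daemons are managed differently):

$HADOOP_HOME/sbin/stop-yarn.sh && $HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh && $HADOOP_HOME/sbin/start-yarn.sh
spark-shell --master yarn    # now starts cleanly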

Commands for checking disk usage:

df -hl    # show disk usage for each filesystem

du -hs *  # show the size of each directory under the current directory
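Chained together, these make it quick to walk from the full filesystem down to the offending directory (GNU coreutils assumed; the paths are common suspects, not cluster-specific):

df -hl                                        # spot the filesystem near 100%
du -xh --max-depth=1 / 2>/dev/null | sort -h  # largest top-level dirs on it
du -hs $HADOOP_HOME/logs                      # daemon logs are a frequent culprit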
