When submitting a job to the cluster in Spark on YARN mode, execution is very slow, and a WARN message is logged.
Submitting the job to the cluster:
./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode cluster \
--executor-memory 1G \
--num-executors 1 \
/opt/spark-2.3.0-bin-hadoop2.6/examples/jars/spark-examples_2.11-2.3.0.jar \
10
But the following warning appears:
WARN Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
The log shows that the jars the application depends on are being uploaded at submission time, which makes job submission slow. The official documentation describes the fix:
To make Spark runtime jars accessible from YARN side, you can specify spark.yarn.archive or spark.yarn.jars.
For details please refer to Spark Properties. If neither spark.yarn.archive nor spark.yarn.jars is specified,
Spark will create a zip file with all jars under $SPARK_HOME/jars and upload it to the distributed cache.
Roughly: for the YARN nodes to access Spark's runtime jars, spark.yarn.jars (or spark.yarn.archive) must be set. If neither is set, Spark zips the jars under $SPARK_HOME/jars and uploads the archive to the distributed cache on every submission.
Fix: upload the Spark runtime jars under $SPARK_HOME/jars/ to HDFS once, so they no longer need to be uploaded on each submission.
hadoop fs -mkdir /tmp/lib_jars
hadoop fs -put $SPARK_HOME/jars/* /tmp/lib_jars
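As an optional sanity check (not part of the original steps), you can confirm the jars actually landed in HDFS before wiring up the config:

```shell
# List the uploaded runtime jars; you should see spark-core, scala-library, etc.
hadoop fs -ls /tmp/lib_jars | head
```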
Then add the following line to $SPARK_HOME/conf/spark-defaults.conf:
spark.yarn.jars hdfs://master:9000/tmp/lib_jars/*
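Per the official text quoted above, spark.yarn.archive is an alternative to spark.yarn.jars: instead of listing individual jars, you upload a single archive containing all of them, which means one distributed-cache entry instead of hundreds. A sketch of that approach (paths here are illustrative, not from the original post):

```shell
# Build an uncompressed archive of the runtime jars (jars must sit at the
# archive root, hence -C into the jars directory).
jar cv0f spark-libs.jar -C $SPARK_HOME/jars/ .
hadoop fs -mkdir -p /tmp/spark-archive
hadoop fs -put spark-libs.jar /tmp/spark-archive/

# Then, in spark-defaults.conf, set spark.yarn.archive instead of spark.yarn.jars:
# spark.yarn.archive hdfs://master:9000/tmp/spark-archive/spark-libs.jar
```

Use one property or the other, not both; spark.yarn.jars with the wildcard, as configured above, is the simpler setup if you may add or swap individual jars later.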
Submit the job again. This time the output contains messages like the following, confirming the jars are read directly from HDFS instead of being re-uploaded:
2018-03-21 22:35:13 INFO Client:54 - Preparing resources for our AM container
2018-03-21 22:35:16 INFO Client:54 - Source and destination file systems are the same. Not copying hdfs://master:9000/tmp/lib_jars/JavaEWAH-0.3.2.jar
2018-03-21 22:35:16 INFO Client:54 - Source and destination file systems are the same. Not copying hdfs://master:9000/tmp/lib_jars/RoaringBitmap-0.5.11.jar
2018-03-21 22:35:16 INFO Client:54 - Source and destination file systems are the same. Not copying hdfs://master:9000/tmp/lib_jars/ST4-4.0.4.jar
2018-03-21 22:35:16 INFO Client:54 - Source and destination file systems are the same. Not copying hdfs://master:9000/tmp/lib_jars/activation-1.1.1.jar