On Spark job submission: running the bundled Spark examples

Example: submitting in yarn-cluster mode:

spark-submit --master yarn --deploy-mode cluster --executor-memory 2G --executor-cores 2 --queue root.helowin --class org.apache.spark.examples.SparkPi spark-examples-1.6.3-hadoop2.6.0.jar 1000

 

Viewing the logs:

yarn logs -applicationId application_1536809934546_0008
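The aggregated logs can be long; a quick way to narrow them down is to pipe through grep (the application id here is the one from the submission above, and the filter pattern is just an illustration):

```shell
# Show only the error lines from the aggregated application logs
yarn logs -applicationId application_1536809934546_0008 | grep -i error
```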

 

spark-submit supports several modes of job submission; the commonly used ones are:

yarn, with 2 deploy modes: client and cluster

standalone, with 2 deploy modes: client and cluster

plus less commonly used options such as mesos

Reference: https://spark.apache.org/docs/2.2.0/submitting-applications.html

 

Taking SparkPi as the example:

1. Submitting in yarn-cluster mode:

spark-submit --master yarn \
--deploy-mode cluster \
--executor-memory 2G \
--executor-cores 2 \
--queue root.helowin \
--class org.apache.spark.examples.SparkPi \
spark-examples-1.6.3-hadoop2.6.0.jar \
1000

 

If --deploy-mode is not specified, it defaults to client.
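For comparison, a client-mode submission of the same job would look like this; it is just the cluster command above with --deploy-mode dropped (same queue and jar assumed):

```shell
# yarn-client: omit --deploy-mode (or pass "client") and the driver runs locally
spark-submit --master yarn \
--executor-memory 2G \
--executor-cores 2 \
--queue root.helowin \
--class org.apache.spark.examples.SparkPi \
spark-examples-1.6.3-hadoop2.6.0.jar \
1000
```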

 

2. Submitting in standalone-cluster mode:

When testing this, the job could not be submitted at first: no worker ever picked it up. This turned out to be a cluster resource-allocation problem; explicitly specifying resources for the driver (as below) makes it run. Client mode does not have this problem.

spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://dc1:7077 \
--deploy-mode cluster \
--supervise \
--driver-cores 20 \
--driver-memory 100M \
--executor-memory 200M \
--total-executor-cores 2 \
spark-examples-1.6.3-hadoop2.6.0.jar \
1000

The console output is also short, because the logs are written directly on the cluster nodes and your local machine serves only to submit the job. Only in client mode are the logs written back to the local machine, because it then acts as the driver:

Running Spark using the REST application submission protocol.
18/09/17 17:57:15 INFO rest.RestSubmissionClient: Submitting a request to launch an application in spark://dc1:7077.
18/09/17 17:57:26 WARN rest.RestSubmissionClient: Unable to connect to server spark://dc1:7077.
Warning: Master endpoint spark://dc1:7077 was not a REST server. Falling back to legacy submission gateway instead.

The submission above produced a warning; you can submit through the REST interface instead. The exact port is shown on the Spark web UI:

spark-submit --class org.apache.spark.examples.SparkPi \
--master spark://dc1:6066 \
--deploy-mode cluster \
--supervise \
--driver-cores 4 \
--driver-memory 200M \
--executor-memory 4G \
--total-executor-cores 6 \
spark-examples-1.6.3-hadoop2.6.0.jar 1000

Console output:

Running Spark using the REST application submission protocol.
18/09/17 18:00:00 INFO rest.RestSubmissionClient: Submitting a request to launch an application in spark://dc1:6066.
18/09/17 18:00:01 INFO rest.RestSubmissionClient: Submission successfully created as driver-20180917180001-0015. Polling submission state...
18/09/17 18:00:01 INFO rest.RestSubmissionClient: Submitting a request for the status of submission driver-20180917180001-0015 in spark://dc1:6066.
18/09/17 18:00:01 INFO rest.RestSubmissionClient: State of driver driver-20180917180001-0015 is now RUNNING.
18/09/17 18:00:01 INFO rest.RestSubmissionClient: Driver is running on worker worker-20180913183011-192.168.9.168-7078 at 192.168.9.168:7078.
18/09/17 18:00:01 INFO rest.RestSubmissionClient: Server responded with CreateSubmissionResponse:
{
  "action" : "CreateSubmissionResponse",
  "message" : "Driver successfully submitted as driver-20180917180001-0015",
  "serverSparkVersion" : "1.6.0",
  "submissionId" : "driver-20180917180001-0015",
  "success" : true
}
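The same REST gateway can also be queried directly with curl. The endpoints below follow the (undocumented) standalone REST submission API that RestSubmissionClient itself uses; the submission id is the one returned in the response above:

```shell
# Poll the state of the driver created above
curl http://dc1:6066/v1/submissions/status/driver-20180917180001-0015

# Kill it through the same gateway if needed
curl -X POST http://dc1:6066/v1/submissions/kill/driver-20180917180001-0015
```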

Finally, a screenshot from the web UI:

 

Common problems:

1. Warning: Master endpoint spark://dc1:7077 was not a REST server. Falling back to legacy submission gateway instead.

Fix: in the submission, change the master port from the default 7077 to the REST server port (6066 by default).

2. A standalone-cluster submission appears not to execute, while the same job runs normally in yarn-cluster mode.

Fix: the cluster probably does not have enough resources, which is common with virtual-machine clusters on a laptop. By default the Spark driver alone needs 1 vcpu and 1 GB of memory, which can exhaust the machine. On the web UI, check whether the selected driver was allocated cores and memory; if so, check whether the worker it landed on also has free cores and memory. If both do, the job is usually fine and running in the background, and the logs on that node will show it.
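The resource check described above can be sketched as a small shell helper. All the numbers are assumptions for illustration; the request of 1 core and 1024 MB matches the default driver footprint mentioned above:

```shell
# Sketch: would a worker's free resources fit the driver's request?
# Usage: can_schedule free_cores free_mem_mb wanted_cores wanted_mem_mb
can_schedule() {
  [ "$1" -ge "$3" ] && [ "$2" -ge "$4" ]
}

# A laptop VM worker: 2 free cores but only 512 MB of free memory.
# The default driver request (1 core, 1024 MB) does not fit, so the
# driver would sit unscheduled -- exactly the symptom described above.
if can_schedule 2 512 1 1024; then
  echo "driver can be scheduled"
else
  echo "insufficient resources, driver stays queued"
fi
```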
