My Spark Study Notes (6)

This section shows how to submit a job, query its status, and kill it through the Spark standalone Master's REST API. Note that newer Spark versions no longer encourage submitting jobs this way.
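
A hedged side note: in later releases (Spark 2.4 and up, as far as I know) the standalone REST submission server is even disabled by default for security reasons, so on those versions you would have to switch it back on before any of the calls below work. A minimal sketch, assuming the property names from the standalone documentation (spark.master.rest.enabled, spark.master.rest.port) apply to your version:

export SPARK_MASTER_OPTS="-Dspark.master.rest.enabled=true -Dspark.master.rest.port=6066"

Put that in conf/spark-env.sh on the Master host, then restart the Master.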

Start the Master:

# pwd
/usr/local/src/spark-2.2.0-bin-hadoop2.7/sbin
# ./start-master.sh
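
Before submitting anything it is worth confirming the Master actually came up. A quick sketch (jps ships with the JDK; 8080 is the default Master web UI port):

# jps | grep Master
# curl -sI http://127.0.0.1:8080/ | head -1

The first command should list a Master process, and the second should print an HTTP 200 status line.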

The jar below is the one we built earlier. I kept the whole request on a single line; if you prefer to split it across lines in a Linux shell, end each unquoted line with a trailing backslash (a reformatted version follows the command).

# curl -X POST http://127.0.0.1:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{"action":"CreateSubmissionRequest","appArgs":[],"appResource":"file:/root/jinjiankang/ScalaHelloWorld.jar","clientSparkVersion":"2.2.0","environmentVariables":{"SPARK_ENV_LOADED":"1"},"mainClass":"com.jjk.Hello","sparkProperties":{"spark.jars":"file:/root/jinjiankang/ScalaHelloWorld.jar","spark.driver.supervise":"false","spark.app.name":"doJob","spark.eventLog.enabled":"true","spark.submit.deployMode":"cluster","spark.master":"spark://127.0.0.1:6066"}}'
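
For readability, here is the exact same request split across lines; inside the single-quoted JSON the shell preserves newlines as-is, and the trailing backslashes continue the unquoted parts:

# curl -X POST http://127.0.0.1:6066/v1/submissions/create \
    --header "Content-Type:application/json;charset=UTF-8" \
    --data '{
      "action": "CreateSubmissionRequest",
      "appArgs": [],
      "appResource": "file:/root/jinjiankang/ScalaHelloWorld.jar",
      "clientSparkVersion": "2.2.0",
      "environmentVariables": { "SPARK_ENV_LOADED": "1" },
      "mainClass": "com.jjk.Hello",
      "sparkProperties": {
        "spark.jars": "file:/root/jinjiankang/ScalaHelloWorld.jar",
        "spark.driver.supervise": "false",
        "spark.app.name": "doJob",
        "spark.eventLog.enabled": "true",
        "spark.submit.deployMode": "cluster",
        "spark.master": "spark://127.0.0.1:6066"
      }
    }'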

The server responds immediately:

{
  "action" : "CreateSubmissionResponse",
  "message" : "Driver successfully submitted as driver-20191206135034-0000",
  "serverSparkVersion" : "2.2.0",
  "submissionId" : "driver-20191206135034-0000",
  "success" : true
}

Query the job status; note that the submissionId comes from the response above:

# curl http://127.0.0.1:6066/v1/submissions/status/driver-20191206135034-0000
{
  "action" : "SubmissionStatusResponse",
  "driverState" : "SUBMITTED",
  "serverSparkVersion" : "2.2.0",
  "submissionId" : "driver-20191206135034-0000",
  "success" : true
}
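
If you script against this API, the usual pattern is to poll the status endpoint until the driver reaches a terminal state. A minimal sketch with nothing but curl and grep (the submission id is the one from above; substitute your own):

SUBMISSION_ID=driver-20191206135034-0000
until curl -s http://127.0.0.1:6066/v1/submissions/status/$SUBMISSION_ID \
      | grep -E '"driverState" : "(FINISHED|FAILED|KILLED|ERROR)"'; do
    sleep 2
done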

Kill the job:

# curl -X POST http://127.0.0.1:6066/v1/submissions/kill/driver-20191206135034-0000
{
  "action" : "KillSubmissionResponse",
  "message" : "Kill request for driver-20191206135034-0000 submitted",
  "serverSparkVersion" : "2.2.0",
  "submissionId" : "driver-20191206135034-0000",
  "success" : true
}

Query the status again:

# curl http://127.0.0.1:6066/v1/submissions/status/driver-20191206135034-0000
{
  "action" : "SubmissionStatusResponse",
  "driverState" : "KILLED",
  "serverSparkVersion" : "2.2.0",
  "submissionId" : "driver-20191206135034-0000",
  "success" : true
}

Did you notice? Our job is trivial, it only prints one line of "hello world", so why doesn't its state ever become FINISHED? Because we haven't started a slave (i.e., a worker) yet:

# ./start-slave.sh spark://127.0.0.1:7077
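
Note the port: the worker registers over the Master's RPC port (7077 by default), while the REST server we have been calling listens on 6066. To confirm the worker registered, a quick sketch (the Master web UI also serves the cluster state as JSON at /json; 8080 is the default UI port):

# jps | grep Worker
# curl -s http://127.0.0.1:8080/json | grep worker-

The second command should print the id of the newly registered worker.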

After submitting the job again, query its status:

# curl http://127.0.0.1:6066/v1/submissions/status/driver-20191206151415-0001
{
  "action" : "SubmissionStatusResponse",
  "driverState" : "FINISHED",
  "serverSparkVersion" : "2.2.0",
  "submissionId" : "driver-20191206151415-0001",
  "success" : true,
  "workerHostPort" : "172.17.0.17:46073",
  "workerId" : "worker-20191206151301-172.17.0.17-46073"
}

Now change the code to deliberately throw a runtime exception (e.g. a throw new RuntimeException(...) in main). Submit the job again and query the status once more:

# curl http://127.0.0.1:6066/v1/submissions/status/driver-20191206161152-0000
{
  "action" : "SubmissionStatusResponse",
  "driverState" : "FAILED",
  "serverSparkVersion" : "2.2.0",
  "submissionId" : "driver-20191206161152-0000",
  "success" : true,
  "workerHostPort" : "172.17.0.17:46073",
  "workerId" : "worker-20191206151301-172.17.0.17-46073"
}

This time we see "FAILED".
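
The REST response does not include the stack trace. In standalone cluster mode the driver runs on a worker, and its stdout/stderr end up in that worker's work directory, in a subdirectory named after the submissionId; a sketch, assuming the default work dir under the Spark installation:

# cat /usr/local/src/spark-2.2.0-bin-hadoop2.7/work/driver-20191206161152-0000/stderr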

TODO: how can we get to see "ERROR"? (A guess follows the source listing below.)

Source of DriverState.scala:

package org.apache.spark.deploy.master

private[deploy] object DriverState extends Enumeration {

  type DriverState = Value

  // SUBMITTED: Submitted but not yet scheduled on a worker
  // RUNNING: Has been allocated to a worker to run
  // FINISHED: Previously ran and exited cleanly
  // RELAUNCHING: Exited non-zero or due to worker failure, but has not yet started running again
  // UNKNOWN: The state of the driver is temporarily not known due to master failure recovery
  // KILLED: A user manually killed this driver
  // FAILED: The driver exited non-zero and was not supervised
  // ERROR: Unable to run or restart due to an unrecoverable error (e.g. missing jar file)
  val SUBMITTED, RUNNING, FINISHED, RELAUNCHING, UNKNOWN, KILLED, FAILED, ERROR = Value
}
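
Back to the TODO above: the comment on ERROR ("e.g. missing jar file") suggests one way to provoke it, namely submitting with an appResource that points at a jar that does not exist on the worker. A hedged sketch that I have not verified (NoSuchJar.jar is deliberately bogus):

# curl -X POST http://127.0.0.1:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{"action":"CreateSubmissionRequest","appArgs":[],"appResource":"file:/root/jinjiankang/NoSuchJar.jar","clientSparkVersion":"2.2.0","environmentVariables":{"SPARK_ENV_LOADED":"1"},"mainClass":"com.jjk.Hello","sparkProperties":{"spark.jars":"file:/root/jinjiankang/NoSuchJar.jar","spark.driver.supervise":"false","spark.app.name":"doJob","spark.eventLog.enabled":"true","spark.submit.deployMode":"cluster","spark.master":"spark://127.0.0.1:6066"}}'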

Addendum: testing in a standalone high-availability (HA) cluster:

curl -X POST http://YOUR-MASTER-IP:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{"action":"CreateSubmissionRequest","appArgs":[],"appResource":"file:/export/servers/spark/examples/jars/spark-examples_2.11-2.1.1.jar","clientSparkVersion":"2.1.1","environmentVariables":{"SPARK_ENV_LOADED":"1"},"mainClass":"org.apache.spark.examples.SparkPi","sparkProperties":{"spark.jars":"file:/export/servers/spark/examples/jars/spark-examples_2.11-2.1.1.jar","spark.driver.supervise":"false","spark.app.name":"REST-PI","spark.eventLog.enabled":"true","spark.submit.deployMode":"cluster","spark.master":"spark://YOUR-MASTER-IP:6066"}}'

The server responds immediately:

{
  "action" : "CreateSubmissionResponse",
  "message" : "Driver successfully submitted as driver-20191219161132-0003",
  "serverSparkVersion" : "2.1.1",
  "submissionId" : "driver-20191219161132-0003",
  "success" : true
}

Query the job status:

curl http://YOUR-MASTER-IP:6066/v1/submissions/status/driver-20191219161132-0003

It returns immediately:

{
  "action" : "SubmissionStatusResponse",
  "driverState" : "FINISHED",
  "serverSparkVersion" : "2.1.1",
  "submissionId" : "driver-20191219161132-0003",
  "success" : true,
  "workerHostPort" : "10.240.2.10:23300",
  "workerId" : "worker-20191219150502-10.240.2.10-23300"
}

Spark configuration reference: http://spark.apache.org/docs/latest/configuration.html

A problem: in an HA cluster with masters A (alive), B (standby), and C (standby), the REST calls above all work when issued against A, but against B or C you get:

{
  "action" : "SubmissionStatusResponse",
  "message" : "Exception from the cluster:\njava.lang.Exception: Current state is not alive: STANDBY. Can only request driver status in ALIVE state.\n\torg.apache.spark.deploy.master.Master$$anonfun$receiveAndReply$1.applyOrElse(Master.scala:470)\n\torg.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:105)\n\torg.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)\n\torg.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)\n\torg.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:213)\n\tjava.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tjava.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tjava.lang.Thread.run(Thread.java:745)",
  "serverSparkVersion" : "2.1.1",
  "submissionId" : "driver-20191219165731-0006",
  "success" : false
}

The key message:
Exception from the cluster:
java.lang.Exception: Current state is not alive: STANDBY.
Can only request driver status in ALIVE state.

In a cluster, how do we find out which master is currently ALIVE? If you know, please leave a comment.

One clumsy approach: use a component such as HTMLParser to programmatically fetch each master's Spark web UI, parse the page, and look for the marker string "Status: ALIVE"; whichever page contains it belongs to the ALIVE master.

Another approach, which I have not verified: the ZooKeeper znodes used for master election (under spark.deploy.zookeeper.dir, /spark by default) must record which master is currently ALIVE.
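
A third option worth trying: every Master's web UI also serves its state as JSON at /json, and that payload contains a "status" field. A sketch, assuming the default web UI port 8080 and hypothetical hostnames masterA/masterB/masterC:

for m in masterA masterB masterC; do
    echo -n "$m: "
    curl -s http://$m:8080/json | grep '"status"'
done

Only one of the three should report "status" : "ALIVE"; the standby masters report STANDBY.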
