Spark --- Startup, Execution, and Shutdown Process

Computing the Value of Pi

 
    // scalastyle:off println
    package org.apache.spark.examples

    import scala.math.random

    import org.apache.spark._

    /** Computes an approximation to pi */
    object SparkPi {
      def main(args: Array[String]) {
        val conf = new SparkConf().setAppName("Spark Pi")
        val spark = new SparkContext(conf)
        val slices = if (args.length > 0) args(0).toInt else 2
        val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
        val count = spark.parallelize(1 until n, slices).map { i =>
          val x = random * 2 - 1
          val y = random * 2 - 1
          if (x*x + y*y < 1) 1 else 0
        }.reduce(_ + _)
        println("Pi is roughly " + 4.0 * count / n)
        spark.stop()
      }
    }
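Why this estimates pi: the points (x, y) are drawn uniformly from the square [-1, 1] x [-1, 1], so the fraction that lands inside the unit circle converges to the ratio of the circle's area to the square's. In equation form (standard Monte Carlo reasoning, not Spark-specific):

    \frac{\text{count}}{n} \approx \frac{\pi r^2}{(2r)^2} = \frac{\pi}{4}
    \qquad\Longrightarrow\qquad
    \pi \approx \frac{4 \cdot \text{count}}{n}

The error of such an estimate shrinks only on the order of 1/sqrt(n), which is why the run below, with n = 2 x 100000 = 200000 samples, gets roughly two correct decimal places. Passing an argument (e.g. ./bin/run-example SparkPi 100) raises slices, and with it both the sample count n and the number of partitions.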

Process Analysis

 
[abc@search-engine---dev4 spark]$ ./bin/run-example SparkPi
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/06/07 03:43:20 INFO SparkContext: Running Spark version 1.6.1
16/06/07 03:43:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

# Perform ACL user-permission checks
16/06/07 03:43:20 INFO SecurityManager: Changing view acls to: abc
16/06/07 03:43:20 INFO SecurityManager: Changing modify acls to: abc
16/06/07 03:43:20 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(abc); users with modify permissions: Set(abc)
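The view/modify ACLs logged above are controlled by Spark's security configuration. A minimal sketch of setting them explicitly (assuming the standard spark.acls.enable / spark.ui.view.acls / spark.modify.acls keys; authentication itself stays off unless spark.authenticate is also enabled):

    import org.apache.spark.SparkConf

    // Hedged sketch: the settings behind the SecurityManager lines above.
    val conf = new SparkConf()
      .setAppName("Spark Pi")
      .set("spark.acls.enable", "true")      // turn ACL checking on
      .set("spark.ui.view.acls", "abc,ops")  // users allowed to view the web UI
      .set("spark.modify.acls", "abc")       // users allowed to kill/modify the job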

16/06/07 03:43:21 INFO Utils: Successfully started service 'sparkDriver' on port 40568.
16/06/07 03:43:23 INFO Slf4jLogger: Slf4jLogger started
# Start the remote listening service on port 36739; Spark 1.6's RPC/communication layer is built on Akka
16/06/07 03:43:23 INFO Remoting: Starting remoting
16/06/07 03:43:23 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@127.0.0.1:36739]
16/06/07 03:43:23 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 36739.
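Both service ports here (40568 for 'sparkDriver', 36739 for the Akka actor system) are chosen at random by default. If a fixed port is needed, say behind a firewall, spark.driver.port can pin the driver's port; a hedged sketch (the exact port value is illustrative):

    import org.apache.spark.SparkConf

    // Hedged sketch: pin the driver's listening port instead of a random one.
    val conf = new SparkConf()
      .setAppName("Spark Pi")
      .set("spark.driver.port", "40568")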

# Register the MapOutputTracker and BlockManagerMaster (the BlockManager itself registers a little later)
16/06/07 03:43:23 INFO SparkEnv: Registering MapOutputTracker
16/06/07 03:43:23 INFO SparkEnv: Registering BlockManagerMaster

# Allocate storage space: a local disk directory plus an in-memory store
16/06/07 03:43:23 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-8a68c39e-40e5-43ca-b21e-081ef8d278e2
16/06/07 03:43:23 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
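The 511.1 MB capacity is not arbitrary: under Spark 1.6's unified memory manager, usable memory is approximately (JVM heap - 300 MB reserved) x spark.memory.fraction (default 0.75), and a nominal 1 GB heap reports slightly under 1024 MB via Runtime.maxMemory. A back-of-the-envelope sketch with assumed constants (not the exact internal computation):

    // Hedged estimate of the MemoryStore capacity logged above (Spark 1.6 defaults).
    val heapMb     = 981.0  // approx. Runtime.getRuntime.maxMemory for a -Xmx1g JVM
    val reservedMb = 300.0  // reserved system memory in the unified memory manager
    val fraction   = 0.75   // default spark.memory.fraction in Spark 1.6
    println(f"~${(heapMb - reservedMb) * fraction}%.1f MB usable")  // ~510.8 MB, close to the 511.1 MB above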

16/06/07 03:43:23 INFO SparkEnv: Registering OutputCommitCoordinator
16/06/07 03:43:24 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/06/07 03:43:24 INFO SparkUI: Started SparkUI at http://127.0.0.1:4040
16/06/07 03:43:24 INFO HttpFileServer: HTTP File server directory is /tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7/httpd-796af3e2-122c-4780-9273-f4aa7d32bb04

# Start the HTTP file server; the SparkUI above (port 4040) is where services and running tasks can be inspected
16/06/07 03:43:24 INFO HttpServer: Starting HTTP Server
16/06/07 03:43:24 INFO Utils: Successfully started service 'HTTP file server' on port 54315.
# SparkContext adds the local example jar and serves it at http://127.0.0.1:54315
16/06/07 03:43:24 INFO SparkContext: Added JAR file:/usr/local/spark/lib/spark-examples-1.6.1-hadoop2.6.0.jar at http://127.0.0.1:54315/jars/spark-examples-1.6.1-hadoop2.6.0.jar with timestamp 1465285404966

16/06/07 03:43:25 INFO Executor: Starting executor ID driver on host localhost
16/06/07 03:43:25 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 59217.
16/06/07 03:43:25 INFO NettyBlockTransferService: Server created on 59217
16/06/07 03:43:25 INFO BlockManagerMaster: Trying to register BlockManager
16/06/07 03:43:25 INFO BlockManagerMasterEndpoint: Registering block manager localhost:59217 with 511.1 MB RAM, BlockManagerId(driver, localhost, 59217)
16/06/07 03:43:25 INFO BlockManagerMaster: Registered BlockManager
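The BlockManager registered here is the component that later holds cached RDD blocks and shuffle data. A minimal sketch of its user-facing side, reusing the spark SparkContext from the example (the data is hypothetical, not part of SparkPi):

    import org.apache.spark.storage.StorageLevel

    // Hedged sketch: persisted partitions become blocks in the MemoryStore
    // managed by the BlockManager registered above.
    val data = spark.parallelize(1 to 1000000)
    data.persist(StorageLevel.MEMORY_ONLY)
    data.count()  // the first action materializes and caches the blocks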

# SparkContext submits a job to the DAGScheduler
16/06/07 03:43:26 INFO SparkContext: Starting job: reduce at SparkPi.scala:36
# The DAGScheduler receives job 0, which has 2 output partitions
16/06/07 03:43:26 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:36) with 2 output partitions
# The job is turned into its final stage, ResultStage 0
16/06/07 03:43:26 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:36)

# Before submitting a stage, the DAGScheduler first looks up that stage's parents: if the set of missing parents is empty, the stage is submitted right away;
# otherwise the parent stages are submitted recursively first. Stage 0 is then split into 2 tasks and handed to the TaskScheduler's submitTasks method.
# (For trivial jobs with no dependencies and a single partition, older Spark versions could run the job on a local thread without going through the TaskScheduler.)
16/06/07 03:43:26 INFO DAGScheduler: Parents of final stage: List()
16/06/07 03:43:26 INFO DAGScheduler: Missing parents: List()
16/06/07 03:43:26 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32), which has no missing parents
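SparkPi yields just this one ResultStage because its map and reduce involve no shuffle. For contrast, a hedged sketch of a job that would have a parent stage (hypothetical data, not part of SparkPi):

    // Hedged sketch: reduceByKey introduces a shuffle boundary, so the
    // DAGScheduler would create a parent ShuffleMapStage and submit it
    // recursively before the final ResultStage.
    val counts = spark.parallelize(Seq("a", "b", "a"), 2)
      .map(word => (word, 1))
      .reduceByKey(_ + _)  // shuffle dependency -> parent ShuffleMapStage
      .collect()           // action -> final ResultStage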

16/06/07 03:43:26 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1904.0 B, free 1904.0 B)
16/06/07 03:43:26 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1218.0 B, free 3.0 KB)
16/06/07 03:43:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:59217 (size: 1218.0 B, free: 511.1 MB)
16/06/07 03:43:26 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
16/06/07 03:43:26 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32)
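The broadcast_0 block stored above is the serialized task binary, which the DAGScheduler ships to executors through the same broadcast mechanism exposed to users. A minimal sketch of that user-facing API (the lookup table is hypothetical):

    // Hedged sketch: broadcast a read-only value once per executor instead of
    // shipping it inside every task, as the DAGScheduler does for task binaries.
    val lookup = spark.broadcast(Map("a" -> 1, "b" -> 2))
    spark.parallelize(Seq("a", "b")).map(k => lookup.value(k)).collect()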

# TaskSchedulerImpl (the TaskScheduler implementation) receives the 2 tasks submitted by the DAGScheduler
16/06/07 03:43:26 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/06/07 03:43:26 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2152 bytes)
16/06/07 03:43:26 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1,PROCESS_LOCAL, 2152 bytes)

# After receiving the tasks, the Executor fetches the job jar from the driver's HTTP file server to local disk, runs the computation, and reports each task's status
16/06/07 03:43:26 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
16/06/07 03:43:26 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/06/07 03:43:26 INFO Executor: Fetching http://127.0.0.1:54315/jars/spark-examples-1.6.1-hadoop2.6.0.jar with timestamp 1465285404966
16/06/07 03:43:27 INFO Utils: Fetching http://127.0.0.1:54315/jars/spark-examples-1.6.1-hadoop2.6.0.jar to /tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7/userFiles-b021b090-3024-421c-b4b0-73fc9f723f44/fetchFileTemp4760324069006875921.tmp
16/06/07 03:43:28 INFO Executor: Adding file:/tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7/userFiles-b021b090-3024-421c-b4b0-73fc9f723f44/spark-examples-1.6.1-hadoop2.6.0.jar to class loader
16/06/07 03:43:29 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 1031 bytes result sent to driver
16/06/07 03:43:29 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1031 bytes result sent to driver

# The TaskSetManager and DAGScheduler each receive the task-completion reports
16/06/07 03:43:29 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 2131 ms on localhost (1/2)
16/06/07 03:43:29 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 2189 ms on localhost (2/2)
16/06/07 03:43:29 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/06/07 03:43:29 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:36) finished in 2.217 s
16/06/07 03:43:29 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:36, took 2.877995 s

# Print the program's result
Pi is roughly 3.14282

# Spark services shut down
16/06/07 03:43:29 INFO SparkUI: Stopped Spark web UI at http://127.0.0.1:4040
16/06/07 03:43:29 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/06/07 03:43:29 INFO MemoryStore: MemoryStore cleared
16/06/07 03:43:29 INFO BlockManager: BlockManager stopped
16/06/07 03:43:29 INFO BlockManagerMaster: BlockManagerMaster stopped
16/06/07 03:43:29 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/06/07 03:43:29 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/06/07 03:43:29 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/06/07 03:43:29 INFO SparkContext: Successfully stopped SparkContext
16/06/07 03:43:29 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/06/07 03:43:29 INFO ShutdownHookManager: Shutdown hook called
16/06/07 03:43:29 INFO ShutdownHookManager: Deleting directory /tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7/httpd-796af3e2-122c-4780-9273-f4aa7d32bb04
16/06/07 03:43:29 INFO ShutdownHookManager: Deleting directory /tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7
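Everything from stopping the SparkUI down to deleting the temp directories is triggered by the program's spark.stop() call plus the JVM shutdown hook. A common hedged pattern to guarantee this cleanup even when the job throws:

    import org.apache.spark.{SparkConf, SparkContext}

    // Hedged sketch: stop the SparkContext in a finally block so the shutdown
    // sequence above (UI, BlockManager, Remoting, temp dirs) always runs.
    val sc = new SparkContext(new SparkConf().setAppName("Spark Pi"))
    try {
      // ... job logic ...
    } finally {
      sc.stop()
    }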


--------------------- Source: javastart's CSDN blog. Full article: https://blog.csdn.net/javastart/article/details/71214815
