SparkEnv

SparkEnv is the core of Spark's execution environment and is involved in creating Executors. Its construction covers the SecurityManager, RpcEnv, MapOutputTracker, ShuffleManager, MemoryManager, NettyBlockTransferService, BlockManagerMaster, BlockManager, BroadcastManager, CacheManager, MetricsSystem, OutputCommitCoordinator, and other components, which cooperate to ensure efficient job execution and resource management.

SparkEnv is Spark's execution environment object; it bundles the objects that Executors need at runtime. In local mode the Driver creates the Executor itself; in local-cluster and Standalone modes the Executor is created inside the CoarseGrainedExecutorBackend process that each Worker launches. SparkEnv therefore exists in the Driver process as well as in CoarseGrainedExecutorBackend processes. On the driver side, SparkEnv is created via SparkEnv.createDriverEnv, which takes four arguments: conf, isLocal, listenerBus, and the numberCores the driver needs to run executors in local mode.

  // Whether the master URL indicates local mode
  def isLocal: Boolean = (master == "local" || master.startsWith("local["))
  // An asynchronous listener bus for Spark events
  private[spark] val listenerBus = new LiveListenerBus
  ...

  /**
   * The number of driver cores to use for execution in local mode, 0 otherwise.
   */
  private[spark] def numDriverCores(master: String): Int = {
    def convertToInt(threads: String): Int = {
      if (threads == "*") Runtime.getRuntime.availableProcessors() else threads.toInt
    }
    master match {
      case "local" => 1
      case SparkMasterRegex.LOCAL_N_REGEX(threads) => convertToInt(threads)
      case SparkMasterRegex.LOCAL_N_FAILURES_REGEX(threads, _) => convertToInt(threads)
      case _ => 0 // driver is not used for execution
    }
  }
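
For intuition, here is a self-contained re-implementation of the same parsing logic that runs without Spark on the classpath. The object name LocalMasterCores is ours, and the two regexes are written to mirror SparkMasterRegex.LOCAL_N_REGEX and SparkMasterRegex.LOCAL_N_FAILURES_REGEX from SparkContext:

  object LocalMasterCores {
    // Matches local[N] and local[*]
    private val LOCAL_N_REGEX = """local\[([0-9]+|\*)\]""".r
    // Matches local[N, maxFailures] and local[*, maxFailures]
    private val LOCAL_N_FAILURES_REGEX = """local\[([0-9]+|\*)\s*,\s*([0-9]+)\]""".r

    def numDriverCores(master: String): Int = {
      def convertToInt(threads: String): Int =
        if (threads == "*") Runtime.getRuntime.availableProcessors() else threads.toInt
      master match {
        case "local" => 1
        case LOCAL_N_REGEX(threads) => convertToInt(threads)
        case LOCAL_N_FAILURES_REGEX(threads, _) => convertToInt(threads)
        case _ => 0 // cluster masters: the driver does not execute tasks
      }
    }
  }

So LocalMasterCores.numDriverCores("local[4]") returns 4, "local[*]" returns the machine's processor count, and a cluster URL such as "spark://host:7077" returns 0.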

  ...
  // This function allows components created by SparkEnv to be mocked in unit tests:
  private[spark] def createSparkEnv(
      conf: SparkConf,
      isLocal: Boolean,
      listenerBus: LiveListenerBus): SparkEnv = {
    SparkEnv.createDriverEnv(conf, isLocal, listenerBus, SparkContext.numDriverCores(master))
  }
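
In practice all of this is triggered simply by constructing a SparkContext. Below is a minimal runnable driver, assuming Spark is on the classpath; with master "local[4]", numDriverCores returns 4 and createDriverEnv builds the driver-side environment with those cores:

  import org.apache.spark.{SparkConf, SparkContext, SparkEnv}

  object LocalDriver {
    def main(args: Array[String]): Unit = {
      val conf = new SparkConf().setAppName("spark-env-demo").setMaster("local[4]")
      // Constructing the SparkContext calls createSparkEnv, which in turn calls
      // SparkEnv.createDriverEnv(conf, isLocal = true, listenerBus, numCores = 4).
      val sc = new SparkContext(conf)
      // The freshly created driver environment is now visible process-wide.
      assert(SparkEnv.get != null)
      sc.stop()
    }
  }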

SparkEnv is constructed in the following steps (a sketch of how the finished environment exposes these components follows the list):
1. Create the security manager, SecurityManager;
2. Create the RpcEnv;
3. Create the Akka-based distributed messaging system, ActorSystem (note: deprecated since Spark 1.4.0 and removed in Spark 2.0);
4. Create the map task output tracker, MapOutputTracker;
5. Create the ShuffleManager;
6. Create the memory manager, MemoryManager;
7. Create the block transfer service, NettyBlockTransferService;
8. Create the BlockManagerMaster;
9. Create the block manager, BlockManager;
10. Create the broadcast manager, BroadcastManager;
11. Create the cache manager, CacheManager;
12. Create the metrics system, MetricsSystem;
13. Create the OutputCommitCoordinator;
14. Create the SparkEnv instance itself.
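
Once these steps complete, the components are exposed as fields of the SparkEnv instance, reachable from anywhere in the process through the SparkEnv.get accessor. A minimal sketch (for example pasted into spark-shell, where a SparkContext already exists):

  import org.apache.spark.SparkEnv

  // SparkEnv.get returns the environment of the current process: the driver's
  // environment on the driver, the executor's environment inside a running task.
  val env = SparkEnv.get
  env.blockManager       // step 9:  stores and serves blocks
  env.mapOutputTracker   // step 4:  locations of map output for shuffles
  env.shuffleManager     // step 5:  pluggable shuffle implementation
  env.memoryManager      // step 6:  arbitrates execution vs. storage memory
  env.broadcastManager   // step 10: backs broadcast variables
  env.metricsSystem      // step 12: collects and reports metrics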

1. Create the security manager, SecurityManager

The SecurityManager is responsible for permission and account configuration. When Hadoop YARN is used as the cluster manager, a secret key must be generated from the user's credentials for login. Finally, it installs a default password authenticator for the JVM, implemented as an anonymous inner class:

  private val secretKey = generateSecretKey()
  ...

  // Set our own authenticator to properly negotiate user/password for HTTP connections.
  // This is needed by the HTTP client fetching from the HttpServer. Put here so its
  // only set once.
  if (authOn) {
    Authenticator.setDefault(
      // Create an Authenticator and override getPasswordAuthentication to pull
      // the user name and password out of the requesting URL's user-info part
      new Authenticator() {
        override def getPasswordAuthentication(): PasswordAuthentication = {
          var passAuth: PasswordAuthentication = null
          val userInfo = getRequestingURL().getUserInfo()
          if (userInfo != null) {
            val parts = userInfo.split(":", 2)
            passAuth = new PasswordAuthentication(parts(0), parts(1).toCharArray())
          }
          return passAuth
        }
      }
    )
  }
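
The Authenticator hook itself is plain JDK functionality from java.net. The following standalone snippet demonstrates the same mechanism outside of Spark; the user name and password literals are placeholders of ours, not values Spark uses:

  import java.net.{Authenticator, PasswordAuthentication}

  object AuthenticatorDemo {
    def main(args: Array[String]): Unit = {
      // Register a process-wide default authenticator; any java.net HTTP
      // connection that is challenged for credentials calls back into it.
      Authenticator.setDefault(new Authenticator() {
        override def getPasswordAuthentication: PasswordAuthentication =
          new PasswordAuthentication("demoUser", "demoSecret".toCharArray)
      })
    }
  }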