by Jayvardhan Reddy

Deep-dive into Spark internals and architecture

Apache Spark is an open-source, distributed, general-purpose cluster-computing framework. A Spark application is a JVM process that runs user code using Spark as a third-party library.

As part of this blog post, I will show how Spark works on the YARN architecture with an example, along with the various underlying background processes that are involved, such as:

  • Spark Context
  • Yarn Resource Manager, Application Master & launching of executors (containers)
  • Setting up environment variables and job resources
  • CoarseGrainedExecutorBackend & Netty-based RPC
  • SparkListeners
  • Execution of a job (Logical plan, Physical plan)
  • Spark-WebUI

Spark Context

Spark context is the first level of entry point and the heart of any Spark application. Spark-shell is nothing but a Scala-based REPL with the Spark binaries, which will create an object called sc, the Spark context.

We can launch the Spark shell as shown below:

spark-shell --master yarn \
  --conf spark.ui.port=12345 \
  --num-executors 3 \
  --executor-cores 2 \
  --executor-memory 500M

As part of the spark-shell command, we have specified the number of executors. These settings indicate the number of worker nodes to be used and the number of cores on each of these worker nodes to execute tasks in parallel.

Or you can launch the Spark shell using the default configuration:

spark-shell --master yarn

These configurations are present as part of spark-env.sh.

Our driver program is executed on the gateway node, which is nothing but our spark-shell. It will create a Spark context and launch an application.

The spark context object can be accessed using sc.

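For reference, here is a minimal sketch of how the same context could be created explicitly outside spark-shell; the application name and resource values are illustrative and mirror the spark-shell flags used above:

import org.apache.spark.{SparkConf, SparkContext}

// Mirrors --num-executors 3 --executor-cores 2 --executor-memory 500M from the command above.
val conf = new SparkConf()
  .setAppName("spark-internals-demo")   // hypothetical application name
  .setMaster("yarn")
  .set("spark.executor.instances", "3")
  .set("spark.executor.cores", "2")
  .set("spark.executor.memory", "500m")

val sc = new SparkContext(conf)         // waits for resources, then sets up internal services
println(sc.applicationId)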

After the Spark context is created it waits for the resources. Once the resources are available, Spark context sets up internal services and establishes a connection to a Spark execution environment.

Yarn Resource Manager, Application Master & launching of executors (containers)

Once the Spark context is created, it will check with the Cluster Manager and launch the Application Master, i.e., it launches a container and registers signal handlers.

Once the Application Master is started it establishes a connection with the Driver.

Next, the ApplicationMasterEndPoint triggers a proxy application to connect to the resource manager.

Now, the Yarn Container will perform the below operations as shown in the diagram.

ii) YarnRMClient will register with the Application Master.

iii) YarnAllocator: will request 3 executor containers, each with 2 cores and 884 MB of memory (the 500 MB requested plus the default 384 MB overhead)

iv) AM starts the Reporter Thread

Now the YarnAllocator receives tokens from the driver to launch the executor nodes and start the containers.

Setting up environment variables, job resources & launching containers

Every time a container is launched, it does the following three things:

  • Setting up env variables

Spark Runtime Environment (SparkEnv) is the runtime environment with Spark’s services, which interact with each other in order to establish a distributed computing platform for a Spark application.

  • Setting up job resources
  • Launching the container

The YARN executor launch context assigns each executor an executor id to identify the corresponding executor (via the Spark WebUI) and starts a CoarseGrainedExecutorBackend.

CoarseGrainedExecutorBackend & Netty-based RPC

After obtaining resources from the Resource Manager, we will see the executors starting up.

CoarseGrainedExecutorBackend is an ExecutorBackend that controls the lifecycle of a single executor. It sends the executor’s status to the driver.

When ExecutorRunnable is started, CoarseGrainedExecutorBackend registers the Executor RPC endpoint and signal handlers to communicate with the driver (i.e. with the CoarseGrainedScheduler RPC endpoint) and to inform it that it is ready to launch tasks.

Netty-based RPC is used to communicate between the worker nodes, the Spark context, and the executors.

NettyRPCEndPoint is used to track the result status of the worker node.

RpcEndpointAddress is the logical address of an endpoint registered to an RPC environment, composed of an RpcAddress and a name.

It is in the format as shown below:

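The address typically takes the form spark://<endpoint-name>@<host>:<port>; for example, the driver’s CoarseGrainedScheduler endpoint would look something like the following (the host and port are illustrative):

spark://CoarseGrainedScheduler@192.168.10.15:38043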

This is the first moment when CoarseGrainedExecutorBackend initiates communication with the driver available at driverUrl through RpcEnv.

SparkListeners

SparkListener (scheduler listener) is a class that listens to execution events from Spark’s DAGScheduler and logs all of an application’s event information, such as the executor and driver allocation details, along with jobs, stages, tasks, and changes to other environment properties.

SparkContext starts the LiveListenerBus, which resides inside the driver. It registers JobProgressListener with LiveListenerBus, which collects all the data needed to show the statistics in the Spark UI.

By default, only the listener for the WebUI is enabled, but if we want to add any other listeners, we can use spark.extraListeners.

Spark comes with two listeners that showcase most of the activities:

i) StatsReportListener

ii) EventLoggingListener

EventLoggingListener: If you want to analyze the performance of your applications further, beyond what is available as part of the Spark history server, then you can process the event log data. The Spark event log records info on processed jobs/stages/tasks. It can be enabled as shown below.

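For instance, a minimal sketch of enabling it programmatically; the log directory below is illustrative, and the same two keys are usually set once in spark-defaults.conf:

import org.apache.spark.SparkConf

// Assumption: the HDFS directory is illustrative and must already exist.
val conf = new SparkConf()
  .set("spark.eventLog.enabled", "true")
  .set("spark.eventLog.dir", "hdfs://namenode:8020/spark-logs")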

The event log file can be read as shown below

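A minimal sketch of doing that with Spark itself, assuming an illustrative log directory and the application id from the naming convention described below; each line of the log is a JSON record with an Event field:

// spark is the SparkSession that spark-shell provides; the path is illustrative.
val events = spark.read.json("hdfs://namenode:8020/spark-logs/application_1540458187951_38909")
events.groupBy("Event").count().show(truncate = false)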

  • The Spark driver logs job workload/perf metrics into the spark.eventLog.dir directory as JSON files.
  • There is one file per application; the file names contain the application id (and therefore a timestamp), e.g. application_1540458187951_38909.

Reading the log this way shows the type of events and the number of entries for each.

Now, let’s add StatsReportListener to the spark.extraListeners and check the status of the job.

Enable INFO logging level for org.apache.spark.scheduler.StatsReportListener logger to see Spark events.

To enable the listener, you register it with the SparkContext. This can be done in two ways.

i) Using the SparkContext.addSparkListener(listener: SparkListener) method inside your Spark application.

Click on the link to implement custom listeners - CustomListener

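As a rough sketch of what such a custom listener might look like (the class body below is illustrative, not the code behind that link):

import org.apache.spark.scheduler.{SparkListener, SparkListenerStageCompleted}

class CustomListener extends SparkListener {
  // Print a short summary every time a stage finishes.
  override def onStageCompleted(stageCompleted: SparkListenerStageCompleted): Unit = {
    val info = stageCompleted.stageInfo
    println(s"Stage ${info.stageId} completed with ${info.numTasks} tasks")
  }
}

sc.addSparkListener(new CustomListener)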

ii) Using the --conf command-line option

Let’s read a sample file and perform a count operation to see the StatsReportListener.

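A sketch of that from the shell, assuming the listener was registered via spark.extraListeners=org.apache.spark.scheduler.StatsReportListener and an illustrative input path:

// With StatsReportListener registered, each completed stage is summarized
// (task runtime percentiles and similar metrics) at INFO level in the driver log.
val sample = sc.textFile("hdfs://namenode:8020/user/data/sample.txt")
println(sample.count())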

Execution of a job (Logical plan, Physical plan)

In Spark, the RDD (resilient distributed dataset) is the first level of the abstraction layer. It is a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel. RDDs can be created in two ways.

i) Parallelizing an existing collection in your driver program

ii) Referencing a dataset in an external storage system

RDDs are created either by using a file in the Hadoop file system, or an existing Scala collection in the driver program, and transforming it.

Let’s take a sample snippet as shown below

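As a sketch of the kind of snippet the post walks through (the input path and the broadcast lookup are illustrative): a word count with one narrow stage (flatMap, filter, map) and one wide stage (reduceByKey).

// Illustrative lookup data broadcast from the driver to every executor.
val stopWords = sc.broadcast(Set("a", "an", "the"))

val lines  = sc.textFile("hdfs://namenode:8020/user/data/sample.txt")   // illustrative path
val counts = lines.flatMap(_.split(" "))
                  .filter(word => !stopWords.value.contains(word))
                  .map(word => (word, 1))
                  .reduceByKey(_ + _)

counts.collect().foreach(println)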

The execution of the above snippet takes place in 2 phases.

6.1 Logical Plan: In this phase, an RDD is created using a set of transformations. Spark keeps track of those transformations in the driver program by building a computing chain (a series of RDDs), a graph of transformations that produces one final RDD; this graph is called the lineage graph.

Transformations can further be divided into two types:

  • Narrow transformation: a pipeline of operations that can be executed as one stage and does not require the data to be shuffled across the partitions (for example, map, filter, etc.).

Now the data will be read into the driver using the broadcast variable.

  • Wide transformation: here each operation requires the data to be shuffled; henceforth, for each wide transformation a new stage will be created (for example, reduceByKey, etc.).

We can view the lineage graph by using toDebugString

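For instance, on the counts RDD from the sketch above:

// Prints the lineage: the ShuffledRDD produced by reduceByKey on top of the
// MapPartitionsRDDs produced by textFile, flatMap, filter and map.
println(counts.toDebugString)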

6.2 Physical Plan: In this phase, once we trigger an action on the RDD, the DAGScheduler looks at the RDD lineage and comes up with the best execution plan of stages and tasks, and together with TaskSchedulerImpl executes the job as a set of tasks in parallel.

Once we perform an action, the SparkContext triggers a job and registers the RDD up to the first stage (i.e., before any wide transformations) with the DAGScheduler.

Now, before moving on to the next stage (wide transformations), it checks whether there is any partition data to be shuffled and whether any parent operation results it depends on are missing; if any such stage is missing, it re-executes that part of the operation by making use of the DAG (Directed Acyclic Graph), which makes it fault tolerant.

In the case of missing tasks, it assigns tasks to executors.

Each task is assigned to the CoarseGrainedExecutorBackend of an executor.

It gets the block info from the Namenode.

Now, it performs the computation and returns the result.

Next, the DAGScheduler looks for the newly runnable stages and triggers the next stage (reduceByKey) operation.

The ShuffleBlockFetcherIterator gets the blocks to be shuffled.

Now the reduce operation is divided into 2 tasks and executed.

On completion of each task, the executor returns the result back to the driver.

Once the job is finished, the result is displayed.

Spark-WebUI

Spark-UI helps in understanding the code execution flow and the time taken to complete a particular job. The visualization helps in finding any underlying problems that take place during the execution and in optimizing the Spark application further.

We will look at the Spark-UI visualization for the job executed as part of the previous step (section 6).

Once the job is completed, you can see the job details, such as the number of stages and the number of tasks that were scheduled during the job execution.

On clicking a completed job, we can view the DAG visualization, i.e., the different wide and narrow transformations that are part of it.

You can see the execution time taken by each stage.

On clicking a particular stage of the job, it will show the complete details of where the data blocks are residing, the data size, the executor used, the memory utilized, and the time taken to complete a particular task. It also shows the number of shuffles that take place.

Further, we can click on the Executors tab to view the Executor and driver used.

Now that we have seen how Spark works internally, you can determine the flow of execution by making use of the Spark UI and logs, and by tweaking the Spark EventListeners, to determine the optimal approach when submitting a Spark job.

Note: The commands that were executed for this post are available as part of my Git account.

Similarly, you can also read more here:

If you would like to, you can connect with me on LinkedIn: Jayvardhan Reddy.

If you enjoyed reading it, you can click the clap and let others know about it. If you would like me to add anything else, please feel free to leave a response.

Translated from: https://www.freecodecamp.org/news/deep-dive-into-spark-internals-and-architecture-f6e32045393b/
