Spark: running multiple jobs in parallel within a single SparkSession


Suppose there are two independent jobs, A and B, that could run in parallel. With Spark's default behavior, A and B are executed one after another.

The code is adjusted as described below to make them run in parallel.

The test case is as follows (it was run in a Windows 10 VM with 6 CPU cores):


// Imports for the SparkSession setup plus the Future/Callable variants shown further down
import org.apache.hadoop.util.ThreadUtil.getResourceAsStream
import org.apache.spark.SparkConf
import org.apache.spark.sql.{DataFrame, SparkSession}

import java.util.Properties
import java.util.concurrent.{Callable, Executors, TimeUnit}
import scala.concurrent.duration.{Duration, MINUTES}
import scala.concurrent.{Await, ExecutionContext, ExecutionContextExecutorService, Future}

object testAsyncExecJob {

  def getLocalSparkSession() = {

    val properties = new Properties
    properties.load(getResourceAsStream("conf.properties"))

    val conf = new SparkConf()
      .setAppName("testAsyncExecJob")
      .setMaster("local[*]")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .set("spark.scheduler.mode", "FIFO")
      .set("spark.sql.parquet.binaryAsString", "true")
      .set("spark.sql.crossJoin.enabled", "true")
      .set("spark.debug.maxToStringFields", "1000")
      .set("spark.local.dir", "./SparkTempFile")

    val spark = SparkSession.builder()
      .config(conf)
      .getOrCreate()

    spark
  }

  def main(args: Array[String]): Unit = {

    val spark = getLocalSparkSession()

    import spark.implicits._
    val list1 = (0 until 1000).toDF().repartition(2)
    val list2 = (1001 until 2000).toDF().repartition(2)

    // The job-submission code from "Failed attempt 1", "Method 1" or "Method 2" below goes here
  }


  // Asynchronous version: wraps the Spark action in a Future so it can be submitted to an ExecutionContext (b is analogous)
  def a(df: DataFrame)(implicit xc: ExecutionContext) = Future {

    df.foreach(x => {
      Thread.sleep(1000)
      println("a:" + x)
    })

  }

  def b(df: DataFrame)(implicit xc: ExecutionContext) = Future {

    df.foreach(x => {
      Thread.sleep(1000)
      println("b:" + x)
    })

  }

  // Synchronous version: runs the Spark action on the calling thread (d is analogous)
  def c(df: DataFrame) = {

    df.foreach(x => {
      Thread.sleep(1000)
      println("c:" + x)
    })

  }

  def d(df: DataFrame) = {

    df.foreach(x => {
      Thread.sleep(1000)
      println("d:" + x)
    })

  }

}

Failed attempt 1

Within the same SparkSession, only change the scheduler mode to spark.scheduler.mode=FAIR and then call the two jobs one after another:

    c(list1)
    d(list2)

The jobs still execute sequentially, as the Spark UI shows. FAIR scheduling only changes how concurrently submitted jobs share resources; jobs triggered one after another from a single driver thread still run one at a time.
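For reference, FAIR scheduling does help once the jobs are submitted from separate threads and, optionally, assigned to different pools. A minimal sketch of that idea (the pool names and the use of plain threads are my own illustration, not part of the original test case):

    // Each thread sets its own scheduler pool before triggering an action;
    // with spark.scheduler.mode=FAIR the two pools then share the cores fairly.
    val sc = spark.sparkContext

    val t1 = new Thread(new Runnable {
      override def run(): Unit = {
        sc.setLocalProperty("spark.scheduler.pool", "pool1") // illustrative pool name
        c(list1)
      }
    })
    val t2 = new Thread(new Runnable {
      override def run(): Unit = {
        sc.setLocalProperty("spark.scheduler.pool", "pool2") // illustrative pool name
        d(list2)
      }
    })

    t1.start(); t2.start()
    t1.join(); t2.join()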

Method 1

def main(args: Array[String]): Unit = {

    ...
    ...

    // Create a new thread pool with a fixed number of threads
    val executors = Executors.newFixedThreadPool(10)

    // Implicitly declare an ExecutionContext backed by the existing thread pool
    implicit val xc: ExecutionContextExecutorService = ExecutionContext.fromExecutorService(executors)

    // Build a sequence of Futures: jobs a and b are submitted to the ExecutionContext at the same time;
    // whether they actually start running depends on whether CPU and memory resources are sufficient.
    // Wait until both jobs have finished before exiting the app.
    Await.result(Future.sequence(Seq(a(list1), b(list2))), Duration(1, MINUTES))

}

				...
				...
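One detail worth adding (not in the original code): Executors.newFixedThreadPool creates non-daemon threads, so it is a good idea to shut the pool down once Await returns so that the JVM can exit cleanly. Roughly:

    // after Await.result(...) has returned
    xc.shutdown()   // the wrapped thread pool stops accepting new tasks
    spark.stop()    // optionally close the SparkSession as well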

In the Spark UI you can see the two jobs executing at the same time, and the console output (lines prefixed a: and b:) shows them interleaved.

If repartition(2) is commented out, the two jobs are still submitted at the same time, but one of them occupies all of the resources and is the only one actually in the running state. A likely explanation is that without the explicit repartition each job has enough tasks to fill every available core, so whichever job is scheduled first leaves nothing for the other; limiting each DataFrame to 2 partitions leaves cores free for both jobs.

Method 2

Use a Callable or Runnable, override its call or run method, put the job to be executed inside call/run, and submit it to an executor.

The difference between Callable and Runnable is that call returns a value while run does not (a Runnable-based variant is sketched after the code below).

 def main(args: Array[String]): Unit = {
    ...
    ...
    ...
    // Create a new thread pool with a fixed number of threads
    val executors = Executors.newFixedThreadPool(10)

    //-----------------------------
    // Computation task 1
    val task1 = executors.submit(new Callable[String] {
      override def call(): String = {
        // computation code
        c(list1)
        "job c finished"
      }
    })

    // Computation task 2
    val task2 = executors.submit(new Callable[String] {
      override def call(): String = {
        // computation code
        d(list2)
        "job d finished"
      }
    })

    // get() blocks until each task has finished
    println(task1.get() + task2.get())

    // Shut the pool down and wait for it to terminate
    executors.shutdown()
    executors.awaitTermination(1, TimeUnit.HOURS)
    //-----------------------------//
  }
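For comparison, here is a minimal sketch of the Runnable-based variant mentioned above (my own illustration, not from the original post). Since run returns nothing, a CountDownLatch is used to know when both jobs are done, and execute is used instead of submit:

    // Runnable has no return value, so use a latch to wait for both jobs
    val latch = new java.util.concurrent.CountDownLatch(2)

    executors.execute(new Runnable {
      override def run(): Unit = { c(list1); latch.countDown() }
    })
    executors.execute(new Runnable {
      override def run(): Unit = { d(list2); latch.countDown() }
    })

    latch.await()        // block until both jobs have finished
    executors.shutdown()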

References

Running Spark jobs concurrently with multiple threads
http://bcxw.net/article/810.html

On running multiple jobs in parallel under Spark
https://bruce.blog.csdn.net/article/details/88349295

Spark asynchronous jobs
https://blog.csdn.net/weixin_30793643/article/details/97128340

The difference between submit and execute in ExecutorService
https://www.cnblogs.com/wanqieddy/p/3853863.html

Method 3: parallel processing of a collection

Suppose we have a list of file names, and each file in the list needs to go through a series of processing steps before its data is written to a database.

In this scenario, processing n files at a time in parallel (say n = 5) makes more effective use of the server's resources.

The test case is shown in the code further below.

As with the previous methods, whether the jobs can actually run in parallel still depends on having enough resources. For the DataFrames/Datasets involved in the program, adjust the number of partitions to a suitable value; here I use: available CPU cores in the current environment / degree of parallelism.

P.S. When reducing the number of partitions, coalesce is recommended, since it avoids the extra shuffle that repartition would trigger.
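A quick illustration of that point (a minimal sketch; the DataFrame and partition counts are arbitrary):

    // coalesce only merges existing partitions (narrow dependency, no shuffle),
    // while repartition always performs a full shuffle.
    val df = spark.range(1000).toDF()

    val narrowed = df.coalesce(2)       // no shuffle; only useful for reducing the partition count
    val shuffled = df.repartition(2)    // full shuffle; can increase or decrease the partition count

    println(narrowed.rdd.getNumPartitions)   // 2
    println(shuffled.rdd.getNumPartitions)   // 2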

The Scala API documentation for scala.collection.parallel.TaskSupport describes how parallel collections schedule their work:

scala.collection.parallel
trait TaskSupport extends Tasks

A trait implementing the scheduling of a parallel collection operation.
Parallel collections are modular in the way operations are scheduled. Each parallel collection is parametrized with a task support object which is responsible for scheduling and load-balancing tasks to processors.
A task support object can be changed in a parallel collection after it has been created, but only during a quiescent period, i.e. while there are no concurrent invocations to parallel collection methods.
There are currently a few task support implementations available for parallel collections. The ForkJoinTaskSupport uses a fork-join pool internally.
The ExecutionContextTaskSupport uses the default execution context implementation found in scala.concurrent, and it reuses the thread pool used in scala.concurrent.
The execution context task support is set to each parallel collection by default, so parallel collections reuse the same fork-join pool as the future API.
Here is a way to change the task support of a parallel collection:
  import scala.collection.parallel._
  val pc = mutable.ParArray(1, 2, 3)
  pc.tasksupport = new ForkJoinTaskSupport(
    new scala.concurrent.forkjoin.ForkJoinPool(2))

The full test case:

package myspark.core

import org.apache.hadoop.util.ThreadUtil.getResourceAsStream
import org.apache.spark.SparkConf
import org.apache.spark.sql.{DataFrame, SparkSession}

import java.util.Properties
import scala.collection.parallel.ForkJoinTaskSupport
import scala.concurrent.forkjoin.ForkJoinPool

object useSeqPar {

	def getLocalSparkSession() = {

		val properties = new Properties
		properties.load(getResourceAsStream("conf.properties"))

		val conf = new SparkConf()
			.setAppName("testAsyncExecJob")
			.setMaster("local[*]")
			.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
			.set("spark.scheduler.mode", "FAIR")
			.set("spark.sql.parquet.binaryAsString", "true")
			.set("spark.sql.crossJoin.enabled", "true")
			.set("spark.debug.maxToStringFields", "1000")
			.set("spark.local.dir", "./SparkTempFile")

		val spark = SparkSession.builder()
			.config(conf)
			.getOrCreate()

		spark
	}

	def main(args: Array[String]): Unit = {
		val spark = getLocalSparkSession()
		import spark.implicits._
		
		// Sequence standing in for the list of file names to be processed
		val seq = Seq(1, 2, 3, 4, 5, 6, 7)
		val spar = seq.par
		// Partitions per job = available CPU cores / number of jobs running in parallel
		val availableCPUCores = Runtime.getRuntime.availableProcessors
		val parNum = 5
		spar.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(parNum))
		println(Runtime.getRuntime.availableProcessors)
		val df1 = (0 until 1000).toDF()
		// Each of the parNum concurrent jobs processes a DataFrame coalesced to cores / parNum partitions
		spar.foreach(run(_, df1.coalesce(availableCPUCores / parNum)))
	}

	/**
	 * Defines a task that stands in for the per-file processing work
	 * @param i  index of the "file" being processed
	 * @param df DataFrame to process
	 */
	def run(i: Int, df: DataFrame) = {
		df.foreach(x => {
			Thread.sleep(1000)
			println(s"任务$i:$x")
		})

	}


}
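To connect this back to the original scenario (read each file, process it, write the result to a database), the per-file task might look roughly like the sketch below. The input path, transformation, JDBC URL, table name and credentials are all placeholders of my own, not from the original code:

    // Hypothetical sketch of the per-file task: read one file, transform it, write it to a database.
    def processFile(fileName: String, spark: SparkSession): Unit = {
      val df = spark.read
        .option("header", "true")
        .csv(s"/data/input/$fileName")          // placeholder input path

      val cleaned = df.na.drop()                // placeholder transformation

      cleaned.write
        .mode("append")
        .format("jdbc")
        .option("url", "jdbc:mysql://db-host:3306/demo")   // placeholder connection info
        .option("dbtable", "processed_files")
        .option("user", "demo")
        .option("password", "demo")
        .save()
    }

    // Driver side: process up to 5 files concurrently, exactly as in useSeqPar above.
    // fileNames.par.foreach(name => processFile(name, spark))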
References 2:

Using Spark repartition
https://www.jianshu.com/p/391d42665a30

Getting the CPU core / thread count for Spark under Docker, Java and Kubernetes
https://www.cnblogs.com/zihunqingxin/p/10829273.html

Scala parallel collections
https://www.cnblogs.com/shaozhiqi/p/12195580.html
