Understanding Closures via a Spark Serialization Problem

Starting from a Spark serialization error, this article explains the concept of closures in detail and analyzes how Spark's ClosureCleaner removes unnecessary references to the enclosing class to reduce serialization overhead. It also discusses whether accessing local variables inside a function constitutes a closure capture, and why the Scala compiler does not resolve this kind of problem at compile time.


Problem Description

While working on a project, I ran into the following problem:

Caused by: java.io.NotSerializableException: Object of org.apache.spark.streaming.kafka010.DirectKafkaInputDStream is being serialized possibly as a part of closure of an RDD operation. This is because the DStream object is being referred to from within the closure. Please rewrite the RDD operation inside this DStream to avoid this. This has been enforced to avoid bloating of Spark tasks with unnecessary objects.
    Serialization stack:
        at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
        at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
        at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
        at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:342)
        ... 54 more
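The message points at the core issue: functions passed to RDD and DStream operations are serialized and shipped around by Spark, and whenever such a function reads a field of its enclosing class, the Scala compiler compiles that read as `this.field`, so the lambda captures the entire enclosing instance, including any non-serializable objects it holds. Here is a minimal sketch of that capture (the `Pipeline` class and `factor` field are made up for illustration):

    import org.apache.spark.rdd.RDD

    // Hypothetical class, not from the article, named only for illustration.
    class Pipeline(data: RDD[Int]) {
      val factor = 10 // instance field, i.e. stored on `this`

      def scale(): RDD[Int] =
        // `factor` below really means `this.factor`, so the function passed to
        // `map` captures `this` and Spark must serialize the whole Pipeline.
        data.map(_ * factor)
    }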

The key part of the code is as follows:

class XXX {

    val conf2 = new SparkConf(true)

    /**
     * Build the DStream.
     *
     * @return
     */
    override def getDStream[R: ClassTag](messageHandler: ConsumerRecord[K, V] => R): DStream[R] = {
        val kp = kafkaParams ++ Map("enable.auto.commit" -> "false")
        val stream = km.createDirectStream[K, V](ssc, kp, topicSet)
        canCommitOffsets = stream.asInstanceOf[CanCommitOffsets]
        stream.transform((rdd, time) => {
            // The listing is cut off here in the original post; the usual pattern
            // records each batch's offset ranges and returns the RDD unchanged:
            offsetRanges.put(time, rdd.asInstanceOf[HasOffsetRanges].offsetRanges)
            rdd
        }).map(messageHandler)
    }
}
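The closure passed to `transform` reads `offsetRanges`, a member of the enclosing class, so the read compiles to `this.offsetRanges` and the closure captures the whole instance. When Spark serializes that function as part of the DStream graph, the `DirectKafkaInputDStream` held by the same instance is dragged along, which is exactly what the exception forbids. ClosureCleaner can null out outer references a closure never uses, but it cannot remove a field the closure actually reads. The standard workaround, which the Spark programming guide also recommends, is to copy the member into a local variable so the closure captures only that local reference. A hedged sketch against the code above:

    // Copy the member into a local val before building the closure; the lambda
    // then closes over the local reference instead of `this`.
    val localOffsets = offsetRanges
    stream.transform((rdd, time) => {
      localOffsets.put(time, rdd.asInstanceOf[HasOffsetRanges].offsetRanges)
      rdd
    }).map(messageHandler)

Note that `localOffsets` still points at the same map object; only the path through `this` is gone, so the enclosing class (and its DStream) no longer needs to be serializable.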