Understanding Closures Through a Spark Serialization Problem
Problem Description
During development I ran into the following exception:
Caused by: java.io.NotSerializableException: Object of org.apache.spark.streaming.kafka010.DirectKafkaInputDStream is being serialized possibly as a part of closure of an RDD operation. This is because the DStream object is being referred to from within the closure. Please rewrite the RDD operation inside this DStream to avoid this. This has been enforced to avoid bloating of Spark tasks with unnecessary objects.
Serialization stack:
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:342)
... 54 more
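The message already names the root cause: a closure handed to an RDD operation refers to the DStream, or to an object that holds it, so Spark has to serialize that object when shipping the task and fails. Before looking at my code, here is a minimal, Spark-free sketch of the same mechanism; the names Job, NotSerializableHolder and ClosureDemo are made up for illustration, they are not from the original project. A lambda that reads an instance field captures the this reference, so serializing the lambda means serializing the whole enclosing object, non-serializable members included; copying the field into a local val first keeps this out of the closure.

import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

// Stand-in for DirectKafkaInputDStream: an object that cannot be serialized.
class NotSerializableHolder

class Job extends Serializable {
  val holder = new NotSerializableHolder // non-serializable member, like the DStream
  val prefix = "record-"                 // member that the closure wants to use

  // Reading the member `prefix` needs `this`, so the closure captures the whole
  // Job instance; serializing it then trips over `holder`.
  def badClosure: Int => String = i => prefix + i

  // Copy the member into a local val first; the closure now captures only a
  // String, and `this` (with `holder`) stays out of the serialized graph.
  def goodClosure: Int => String = {
    val localPrefix = prefix
    i => localPrefix + i
  }
}

object ClosureDemo {
  private def trySerialize(name: String, f: Int => String): Unit = {
    val out = new ObjectOutputStream(new ByteArrayOutputStream())
    try {
      out.writeObject(f)
      println(s"$name: serialized OK")
    } catch {
      case e: NotSerializableException => println(s"$name: failed with $e")
    } finally {
      out.close()
    }
  }

  def main(args: Array[String]): Unit = {
    val job = new Job
    trySerialize("badClosure", job.badClosure)   // expected to fail: Job -> holder is not serializable
    trySerialize("goodClosure", job.goodClosure) // expected to succeed: only the local String is captured
  }
}

Running main should print that badClosure fails with a NotSerializableException while goodClosure serializes fine, which is exactly the pattern behind the error above.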
The key part of the code is as follows:
class XXX {
  val conf2 = new SparkConf(true)

  /**
   * Get the DStream.
   *
   * @return
   */
  override def getDStream[R: ClassTag](messageHandler: ConsumerRecord[K, V] => R): DStream[R] = {
    val kp = kafkaParams ++ Map("enable.auto.commit" -> "false")
    val stream = km.createDirectStream[K, V](ssc, kp, topicSet)
    canCommitOffsets = stream.asInstanceOf[CanCommitOffsets]
    // The function passed to transform refers to members of the enclosing class
    // (e.g. offsetRanges), so the whole object, DStream reference included, is
    // pulled into the task closure.
    stream.transform((rdd, time) => {
      offsetRanges.put(time