Spark ~ RDD Serialization
Example: what goes wrong without serialization
package org.example

import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}

object Kryo {
  def main(args: Array[String]): Unit = {
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
    var conf = new SparkConf().setMaster("local").setAppName("cai")
    var sc = new SparkContext(conf)
    var rdd = sc.parallelize(1 to 9)
    var user = new User
    // If the function passed to an RDD operator captures variables from outside
    // (a closure), Spark first checks that everything captured is serializable.
    // This is the closure check.
    rdd.foreach(x => {
      println("age is:" + (x.toInt + user.age))
    })
  }

  class User {
    var age: Int = 30
  }
}
Running this fails with:
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:403)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:393)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:162)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2326)
at org.apache.spark.rdd.RDD.$anonfun$foreach$1(RDD.scala:971)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
at org.apache.spark.rdd.RDD.foreach(RDD.scala:970)
at org.example.Kryo$.main(Kryo.scala:18)
at org.example.Kryo.main(Kryo.scala)
Caused by: java.io.NotSerializableException: org.example.Kryo$User
Serialization stack:
- object not serializable (class: org.example.Kryo$User, value: org.example.Kryo$User@76304b46)
- field (class: scala.runtime.ObjectRef, name: elem, type: class java.lang.Object)
- object (class scala.runtime.ObjectRef, org.example.Kryo$User@76304b46)
- element of array (index: 0)
- array (class [Ljava.lang.Object;, size 1)
- field (class: java.lang.invoke.SerializedLambda, name: capturedArgs, type: class [Ljava.lang.Object;)
- object (class java.lang.invoke.SerializedLambda, SerializedLambda[capturingClass=class org.example.Kryo$, functionalInterfaceMethod=scala/runtime/java8/JFunction1$mcVI$sp.apply$mcVI$sp:(I)V, implementation=invokeStatic org/example/Kryo$.$anonfun$main$1:(Lscala/runtime/ObjectRef;I)V, instantiatedMethodType=(I)V, numCaptured=1])
- writeReplace data (class: java.lang.invoke.SerializedLambda)
- object (class org.example.Kryo$$$Lambda$527/1691629865, org.example.Kryo$$$Lambda$527/1691629865@43d455c9)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:41)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:400)
... 10 more
Process finished with exit code 1
The two key lines:

Exception in thread "main" org.apache.spark.SparkException: Task not serializable
Caused by: java.io.NotSerializableException: org.example.Kryo$User

The message states it directly: User is not serializable.
Root cause:
From the computation's point of view, code outside an operator runs on the Driver, while code inside an operator runs on the Executors. With Scala's functional style, the function passed to an operator often refers to data defined outside it, which forms a closure. If that captured data cannot be serialized, it cannot be shipped to the Executors, and the job fails. Spark therefore verifies, before executing the task, that every object captured by the closure is serializable; this step is called the closure check. Note that the way closures are compiled changed in Scala 2.12: they became Java 8 lambdas, which is why a SerializedLambda appears in the stack trace above.
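The closure check bottoms out in plain JVM serialization, so the failure mode can be reproduced without Spark at all. A minimal sketch (the class and object names here are made up for illustration):

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

// A plain class, like the User above: no Serializable marker.
class PlainUser { var age: Int = 30 }

// The same class with Serializable mixed in.
class MarkedUser extends Serializable { var age: Int = 30 }

object SerializationDemo {
  // Returns true if Java serialization accepts the object.
  def canSerialize(obj: AnyRef): Boolean =
    try {
      new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(obj)
      true
    } catch {
      case _: NotSerializableException => false
    }

  def main(args: Array[String]): Unit = {
    println(canSerialize(new PlainUser))  // false: this is what trips Spark's check
    println(canSerialize(new MarkedUser)) // true
  }
}
```

Spark's ClosureCleaner performs essentially this probe (via JavaSerializerInstance.serialize, visible in the stack trace) on the cleaned closure before shipping it.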
Making the class serializable
extends Serializable
Mark the class with extends Serializable:
class User extends Serializable {
  var age: Int = 30
}
Alternatively, declare it as a case class (the Scala compiler automatically makes case classes serializable):
case class User() {
  var age: Int = 30
}
Kryo serialization
Java serialization can serialize any class that implements Serializable, but it is heavyweight: the serialized bytes are large, so shipping objects is costly. For performance, Spark supports the alternative Kryo serialization framework; the Spark documentation cites speedups of up to about 10x over Java serialization. Since Spark 2.0.0, Spark already uses Kryo internally when shuffling RDDs of simple value types, arrays, and strings.
Note: even with Kryo enabled, your class must still extend Serializable, because the closures themselves are still serialized with Java serialization.
var conf = new SparkConf().setMaster("local").setAppName("cai")
  // replace the default (Java) serializer
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // register the custom classes Kryo should serialize
  .registerKryoClasses(Array(classOf[User]))
package org.example

import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}

object Kryo {
  def main(args: Array[String]): Unit = {
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
    var conf = new SparkConf().setMaster("local").setAppName("cai")
      // replace the default (Java) serializer
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      // register the custom classes Kryo should serialize
      .registerKryoClasses(Array(classOf[User]))
    var sc = new SparkContext(conf)
    var rdd = sc.parallelize(1 to 9)
    var user = new User
    // The closure check now passes because User (a case class) is serializable.
    rdd.foreach(x => {
      println("age is:" + (x.toInt + user.age))
    })
  }

  case class User() {
    var age: Int = 30
  }
}
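One usage note: registration via registerKryoClasses is optional by default; Kryo still handles unregistered classes, it just stores each object's full class name, which wastes space. To make the job fail fast when a class was forgotten, Spark's spark.kryo.registrationRequired setting can be enabled (a config fragment extending the conf above):

```scala
// Fail instead of silently falling back to writing full class names
// for classes that were never registered with Kryo.
conf.set("spark.kryo.registrationRequired", "true")
```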