Spark Tuning Guide (1): Data Serialization

Official Documentation

Data Serialization

Serialization plays an important role in the performance of any distributed application. Formats that are slow to serialize objects into, or that consume a large number of bytes, will greatly slow down the computation. Often, this will be the first thing you should tune to optimize a Spark application.

Spark aims to strike a balance between convenience (allowing you to work with any Java type in your operations) and performance. It provides two serialization libraries:
Java serialization: By default, Spark serializes objects using Java's ObjectOutputStream framework, and can work with any class you create that implements java.io.Serializable. You can also control the performance of your serialization more closely by extending java.io.Externalizable. Java serialization is flexible but often quite slow, and leads to large serialized formats for many classes.
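For finer control, a class can implement java.io.Externalizable and write its fields by hand. A minimal sketch (the Point class here is purely illustrative and not part of the case tests below):

import java.io.{Externalizable, ObjectInput, ObjectOutput}

// Hand-written serialization: only the two ints are written, nothing else.
class Point(var x: Int, var y: Int) extends Externalizable {
  // A public no-arg constructor is required so the object can be re-created on read.
  def this() = this(0, 0)

  override def writeExternal(out: ObjectOutput): Unit = {
    out.writeInt(x)
    out.writeInt(y)
  }

  override def readExternal(in: ObjectInput): Unit = {
    x = in.readInt()
    y = in.readInt()
  }
}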
Kryo serialization: Spark can also use the Kryo library (version 4) to serialize objects more quickly. Kryo is significantly faster and more compact than Java serialization (often as much as 10x), but does not support all Serializable types and requires you to register the classes you'll use in the program in advance for best performance.
You can switch to using Kryo by initializing your job with a SparkConf and calling conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer"). This setting configures the serializer used not only for shuffling data between worker nodes but also for serializing RDDs to disk.
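In code, the switch is a single configuration entry on the SparkConf (the app name below is only a placeholder):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("KryoDemo") // placeholder name
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")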
The only reason Kryo is not the default is because of the custom registration requirement, but we recommend trying it in any network-intensive application. Since Spark 2.0.0, we internally use the Kryo serializer when shuffling RDDs with simple types, arrays of simple types, or string type.

Spark automatically includes Kryo serializers for the many commonly-used core Scala classes covered in the AllScalaRegistrar from the Twitter chill library.

To register your own custom classes with Kryo, use the registerKryoClasses method.

val conf = new SparkConf().setMaster(...).setAppName(...)
conf.registerKryoClasses(Array(classOf[MyClass1], classOf[MyClass2]))
val sc = new SparkContext(conf)


The Kryo documentation describes more advanced registration options, such as adding custom serialization code.
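One way to hook such code into Spark is a custom KryoRegistrator wired in via spark.kryo.registrator. The sketch below only registers classes, but the same registerClasses hook is where a hand-written com.esotericsoftware.kryo.Serializer could be attached (MyRegistrator is a hypothetical name; MyClass1/MyClass2 are the classes from the example above):

import com.esotericsoftware.kryo.Kryo
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoRegistrator

class MyRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    // kryo.register(classOf[MyClass1], new MyClass1Serializer) would attach custom code
    // (MyClass1Serializer is hypothetical); plain registration looks like this:
    kryo.register(classOf[MyClass1])
    kryo.register(classOf[MyClass2])
  }
}

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", classOf[MyRegistrator].getName)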
If your objects are large, you may also need to increase the spark.kryoserializer.buffer config. This value needs to be large enough to hold the largest object you will serialize.
Finally, if you don't register your custom classes, Kryo will still work, but it will have to store the full class name with each object, which is wasteful.
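A configuration sketch touching both points above, applied to the conf built earlier (the values are only illustrative; check the defaults for your Spark version): spark.kryoserializer.buffer.max caps how far the serialization buffer may grow, and spark.kryo.registrationRequired makes Kryo fail fast on any class you forgot to register instead of silently writing class names.

conf
  .set("spark.kryoserializer.buffer", "64k")      // initial per-core buffer
  .set("spark.kryoserializer.buffer.max", "128m") // raise if a single large object overflows the buffer
  .set("spark.kryo.registrationRequired", "true") // error out on unregistered classes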

Case Tests

1. No serialization

  • MEMORY_ONLY
import org.apache.spark.{SparkConf, SparkContext}
import scala.collection.mutable.ListBuffer

object CacheApp {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("CacheApp").setMaster("local[2]")
    val sc = new SparkContext(sparkConf)
    // Build 1,000,000 User objects in the driver.
    val users = new ListBuffer[User]
    for (i <- 1 to 1000000) {
      users += new User(i, "name" + i, i.toString)
    }
    val usersRDD = sc.parallelize(users)
    usersRDD.cache()              // MEMORY_ONLY: cache as deserialized Java objects
    usersRDD.foreach(println(_))  // an action is needed to materialize the cache
    Thread.sleep(100000)          // keep the app alive so the Spark UI can be inspected
    sc.stop()
  }
}

class User(id: Int, username: String, age: String) extends Serializable

If this fails with java.io.NotSerializableException, the class needs to be made serializable (extends Serializable).
Observation: cached without serialization (MEMORY_ONLY), the 1,000,000 records occupy 19.1 MB of memory.

2. Java serialization (the default serializer)

  • MEMORY_ONLY_SER
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel
import scala.collection.mutable.ListBuffer

object CacheApp {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("CacheApp").setMaster("local[2]")
    val sc = new SparkContext(sparkConf)
    val users = new ListBuffer[User]
    for (i <- 1 to 1000000) {
      users += new User(i, "name" + i, i.toString)
    }
    val usersRDD = sc.parallelize(users)
    usersRDD.persist(StorageLevel.MEMORY_ONLY_SER)  // cache as serialized bytes (Java serialization)
    usersRDD.foreach(println(_))
    Thread.sleep(100000)
    sc.stop()
  }
}

Observation: with Java serialization (MEMORY_ONLY_SER), the cached data is noticeably smaller: 19.1 MB => 6.1 MB.

3. Kryo serialization without registering the class

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel
import scala.collection.mutable.ListBuffer

object CacheApp {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("CacheApp").setMaster("local[2]")
    sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")  // switch to Kryo
    val sc = new SparkContext(sparkConf)
    val users = new ListBuffer[User]
    for (i <- 1 to 1000000) {
      users += new User(i, "name" + i, i.toString)
    }
    val usersRDD = sc.parallelize(users)
    usersRDD.persist(StorageLevel.MEMORY_ONLY_SER)  // Kryo, but User is not registered
    usersRDD.foreach(println(_))
    Thread.sleep(100000)
    sc.stop()
  }
}

Observation: with the class not registered, the Kryo-serialized data actually ends up larger.

4. Kryo serialization with the class registered

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel
import scala.collection.mutable.ListBuffer

object CacheApp {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("CacheApp").setMaster("local[2]")
    sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    sparkConf.registerKryoClasses(Array(classOf[User]))  // register the class with Kryo
    val sc = new SparkContext(sparkConf)
    val users = new ListBuffer[User]
    for (i <- 1 to 1000000) {
      users += new User(i, "name" + i, i.toString)
    }
    val usersRDD = sc.parallelize(users)
    usersRDD.persist(StorageLevel.MEMORY_ONLY_SER)
    usersRDD.foreach(println(_))
    Thread.sleep(100000)
    sc.stop()
  }
}

Observation: Kryo with the class registered is clearly the best, occupying only about 1953 KB (~1.9 MB).
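The sizes above are read from the cached-RDD view while the app is still alive (which is what the Thread.sleep is for). If you prefer to check them programmatically, one option is a sketch like the following; sc.getRDDStorageInfo is a developer API, so field names may vary slightly across Spark versions:

// Print the in-memory size of every cached RDD, after an action has materialized it.
sc.getRDDStorageInfo.foreach { info =>
  println(s"RDD ${info.name}: ${info.memSize / 1024} KB in memory, " +
    s"${info.numCachedPartitions} cached partitions")
}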

Summary

  1. The official documentation explains the benefits and usage of serialization tuning.
  2. Comparing the case-test results, Kryo with registered classes works best.
  3. Serialization reduces memory footprint and speeds up network transfer, but costs extra CPU.