Background
Today I was doing a one-off export of about 3 million rows to a CSV file from spark-shell. I launched spark-shell with:
# Run Spark on YARN, submitting in client mode to the etl queue, with 4 executors of 1 core and 2G memory each
spark-shell --master yarn --deploy-mode client --executor-memory 2G --executor-cores 1 --num-executors 4 --queue etl
and generated the temporary CSV file like this:
# coalesce(1) collapses the output into a single file (and therefore a single write task)
spark.sql("select * from dw.ods_rs_media_tbi_media_location_kkf where event_day='20191111'").coalesce(1).write.option("header", true).csv("/source/kkf_temp.csv")
It failed with the following error:
2020-05-29 17:13:08 WARN TaskSetManager:66 - Lost task 1.2 in stage 2.0 (TID 19, shucang-02.szanba.ren, executor 2): org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 30076801. To avoid this, increase spark.kryoserializer.buffer.max value.
at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:350)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:393)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.esotericsoftware.kryo.KryoException: Buffer overflow. Available: 0, required: 30076801
at com.esotericsoftware.kryo.io.Output.require(Output.java:163)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:246)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:232)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:54)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:43)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at com.twitter.chill.Tuple2Serializer.write(TupleSerializers.scala:37)
at com.twitter.chill.Tuple2Serializer.write(TupleSerializers.scala:33)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:366)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:307)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:347)
... 4 more
The error says Spark's Kryo serialization buffer is too small and overflowed. The fix is simply to enlarge it, but note that the settings must go into the command that launches the spark-shell session: add --conf spark.kryoserializer.buffer.max=256m --conf spark.kryoserializer.buffer=64m to the original command and the problem goes away. Size the values to your own needs. The full launch command:
spark-shell --master yarn --deploy-mode client --executor-memory 2G --executor-cores 1 --num-executors 4 --queue etl --conf spark.kryoserializer.buffer.max=256m --conf spark.kryoserializer.buffer=64m
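As a quick sanity check on the sizing (a sketch; 256m is just comfortable headroom, not a computed minimum — the "required" figure comes straight from the stack trace above):

```python
# "Buffer overflow. Available: 0, required: 30076801" means the next write
# needed roughly 30 MB more than the Kryo output buffer was allowed to grow to.
required_extra = 30076801                     # bytes, from the error message
print(f"{required_extra / 1024**2:.1f} MiB")  # prints "28.7 MiB"
```

So the default ceiling was exhausted by well under 256m, which is why raising spark.kryoserializer.buffer.max to 256m leaves plenty of room.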
If you hit this error from Scala/Java/PySpark code instead, the fix is the same: set the values at startup. Scala examples follow; the SparkConf and SparkSession approaches differ slightly.
SparkConf version:
// SparkConf approach
import org.apache.spark.SparkConf

val sparkConf = new SparkConf()
sparkConf.set("spark.kryoserializer.buffer", "64m")
sparkConf.set("spark.kryoserializer.buffer.max", "256m")
// print the value you set, to confirm it took effect
println(sparkConf.get("spark.kryoserializer.buffer.max"))
SparkSession version:
// SparkSession approach
import org.apache.spark.sql.SparkSession

def createSpark(): SparkSession = {
  val spark = SparkSession
    .builder()
    .appName("kryoserializer")
    .enableHiveSupport()
    .config("spark.kryoserializer.buffer", "64m")
    .config("spark.kryoserializer.buffer.max", "256m")
    .master("local")
    .getOrCreate()
  spark
}

val spark = createSpark()
// print the value you set, to confirm it took effect
println(spark.conf.get("spark.kryoserializer.buffer.max"))
Aside: I also tried setting these inside an already-running spark-sql session, with set spark.kryoserializer.buffer=64m and set spark.kryoserializer.buffer.max=256m. The SET statements themselves run without error, but the job still fails with the same exception. My guess is that these settings only take effect at application launch: once you are inside a running Spark session, configuration of this kind is already fixed and cannot be changed, and Spark keeps whatever spark-defaults.conf (or the launch command) provided. That is just my intuition, though — I haven't verified it against the source code, so corrections in the comments are welcome.
set spark.kryoserializer.buffer=64m
set spark.kryoserializer.buffer.max=256m
select * from dw.ods_rs_media_tbi_media_location_kkf where event_day='20191111';
--runs without error, but has no effect; the job still fails
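If you'd rather bake the larger buffer in as a cluster-wide default instead of passing --conf flags on every launch, the same keys can live in spark-defaults.conf (a sketch; the file is typically under $SPARK_HOME/conf):

```properties
# $SPARK_HOME/conf/spark-defaults.conf
spark.kryoserializer.buffer      64m
spark.kryoserializer.buffer.max  256m
```

Values passed via --conf at launch override this file, so the two approaches can coexist.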