Spark Problem 12: kryoserializer buffer size too small, causing a serialization buffer overflow

 

More code is available at: https://github.com/xubo245/SparkLearning

Learning Alluxio in the Spark ecosystem. Versions: alluxio (tachyon) 0.7.1, spark-1.5.2, hadoop-2.6.0

1. Problem description

When running cs-bwamem, the job fails with a Kryo serialization buffer overflow. The run needs to write the resulting SAM file to the local filesystem and the output is fairly large, while the default setting is:

spark.kryoserializer.buffer.max 64m

but the data to be serialized is larger than 2 GB, so the default is nowhere near enough. The Spark Web UI shows 14.5 GB of input, and the failing operation is a collect.
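
The error in section 3 below already points to the fix: increase spark.kryoserializer.buffer.max. A minimal sketch of the relevant entries in conf/spark-defaults.conf, assuming a cluster-wide setting (the 1g value is an illustrative assumption, not taken from the original run):

# Kryo is presumably already the serializer here, since the failure comes from KryoSerializerInstance
spark.serializer                 org.apache.spark.serializer.KryoSerializer
# raise the per-buffer limit from the 64m default; Spark rejects values of 2048m or more,
# because Kryo serializes into a single byte array
spark.kryoserializer.buffer.max  1g

Note that the limit applies to one serialized buffer at a time (here, a single task's result), not to the whole 14.5 GB of input at once; if any single task result really exceeded 2 GB, the output would have to be written out from the executors rather than collected.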

2. Running the script

hadoop@Master:~/disk2/xubo/project/alignment/cs-bwamem$ cat csbwamemAlignP1Test10sam.sh 
#for t in 1 2 3 4 5 6 7 8 9 10 15 20 30 40 50 60 70 80 90 100 120 140 160 180 200 300 400 500 600 700 800 900 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000
#for t in 1 10 50 100 400 1000 5000 10000
#for t in {90..400..10}
for t in 100
do
for k in {7..9}
do 
#for j in 10000 100000 1000000 10000000
for j in 10000000
do
for i in 50
do
echo $i
echo $j
echo 't'$t
echo 'k'$k
#fq='g38L'$i'c'$j'Nhs20Paired'$k'.fq'
#fq0='g38L'$i'c'$j'Nhs20Paired*.fastq'
#fq1='/xubo/alignment/sparkBWA/g38L'$i'c'$j'Nhs20Paired1.fastq'
#fq2='/xubo/alignment/sparkBWA/g38L'$i'c'$j'Nhs20Paired2.fastq'
#out='g38L'$i'c'$j'Nhs20Paired12.sam'
#out='/xubo/project/alignment/cs-bwamem/input/fastq/newg38L'$i'c'$j'Nhs20Paired12P64bn200000000t'$t'k'$k'sbatch.adam'
out='/home/hadoop/disk2/xubo/project/alignment/cs-bwamem/newg38L'$i'c'$j'Nhs20Paired12P64bn200000000.sam'
file='/xubo/project/alignment/cs-bwamem/input/fastq/newg38L'$i'c'$j'Nhs20Paired12P64bn200000000.fastq'
#out='/xubo/project/alignment/cs-bwamem/input/fastq/newg38L'$i'c'$j'Nhs20Paired12P1k'$k'.adam'
#file='/xubo/project/alignment/cs-bwamem/input/fastq/newg38L'$i'c'$j'Nhs20Paired12P1.fastq'

echo $file
echo $out

spark-submit --class cs.ucla.edu.bwaspark.BWAMEMSpark --total-executor-cores 20 --executor-cores 2 --executor-memory 20G \
--master spark://219.219.220.149:7077 /home/hadoop/disk2/xubo/tools/cloud-scale-bwamem-0.2.2/target/cloud-scale-bwamem-0.2.2-assembly.jar \
cs-bwamem -bfn 1 -bPSW 1 -sbatch $t -bPSWJNI 1  -oChoice 1 -oPath $out -localRef 1 \
-jniPath /home/hadoop/disk2/xubo/tools/cloud-scale-bwamem-0.2.2/target/jniNative.so \
-isSWExtBatched 1  1 \
/home/hadoop/disk2/xubo/ref/GRCH38L1Index/GRCH38chr1L3556522.fasta  $file

#spark-submit --executor-memory 6g --class cs.ucla.edu.bwaspark.BWAMEMSpark --total-executor-cores 20 --master spark://219.219.220.149:7077  --conf spark.driver.host=219.219.220.149 --conf spark.driver.cores=4 --conf spark.driver.maxResultSize=6g --conf spark.storage.memoryFraction=0.7  --conf spark.akka.threads=2 --conf spark.akka.frameSize=1024 /home/hadoop/xubo/tools/cloud-scale-bwamem-0.2.1/target/cloud-scale-bwamem-0.2.0-assembly.jar merge hdfs://219.219.220.149:9000 $file $out


#/xubo/alignment/sparkBWA/GRCH38chr1L3556522N10L50paired1.fastq /xubo/alignment/sparkBWA/GRCH38chr1L3556522N10L50paired2.fastq \
#/xubo/alignment/output/sparkBWA/datatestLocalGRCH38chr1L3556522N10L50paired12YarnMaster

done 
done
done
done
#--master spark://219.219.220.149:7077 /home/hadoop/disk2/xubo/tools/cloud-scale-bwamem-0.2.1/target/cloud-scale-bwamem-0.2.0-assembly.jar \
#--master spark://219.219.220.149:7077 /curr/pengwei/github/cloud-scale-bwamem/target/cloud-scale-bwamem-0.2.0-assembly.jar \
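
The same setting could instead be passed per job. A hedged sketch of how the spark-submit call in this script might be extended (the 1g value is an illustrative assumption; the jar path and cs-bwamem arguments stay exactly as above):

spark-submit --class cs.ucla.edu.bwaspark.BWAMEMSpark \
  --conf spark.kryoserializer.buffer.max=1g \
  --total-executor-cores 20 --executor-cores 2 --executor-memory 20G \
  --master spark://219.219.220.149:7077 \
  ...   # remaining jar path and cs-bwamem arguments unchanged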

3. Run log:

hadoop@Master:~/disk2/xubo/project/alignment/cs-bwamem$ ./csbwamemAlignP1Test10sam.sh > csbwamemAlignP1Test10samtime201702281645.txt
[Stage 2:>                                                        (0 + 16) / 64]17/02/28 16:57:37 ERROR TaskSetManager: Task 9 in stage 2.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 2.0 failed 4 times, most recent failure: Lost task 9.3 in stage 2.0 (TID 180, Mcnode4): org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 109. To avoid this, increase spark.kryoserializer.buffer.max value.
    at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:263)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:908)
    at cs.ucla.edu.bwaspark.FastMap$.memPairEndMapping(FastMap.scala:397)
    at cs.ucla.edu.bwaspark.FastMap$.memMain(FastMap.scala:144)
    at cs.ucla.edu.bwaspark.BWAMEMSpark$.main(BWAMEMSpark.scala:318)
    at cs.ucla.edu.bwaspark.BWAMEMSpark.main(BWAMEMSpark.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 109. To avoid this, increase spark.kryoserializer.buffer.max value.
    at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:263)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
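
Note that the driver stack trace ends in RDD.collect (called from FastMap.scala:397), so every task's serialized result is shipped back to the driver. Besides the executor-side Kryo buffer, the total collected size is also capped by spark.driver.maxResultSize (1g by default in this Spark version); the commented-out spark-submit in section 2 already raises it to 6g. A hedged sketch of passing both limits on the command line (the values are illustrative assumptions):

--conf spark.kryoserializer.buffer.max=1g --conf spark.driver.maxResultSize=6g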

References

[1] http://spark.apache.org/docs/1.5.2/programming-guide.html
[2] https://github.com/xubo245/SparkLearning