java.lang.StackOverflowError when calling count()

Just to add some more clarity to the discussion: there is a difference between caching to memory and checkpointing when considered from the lineage point of view.

When an RDD is checkpointed, the data of the RDD is saved to HDFS (or any Hadoop-API-compatible fault-tolerant storage) and the lineage of the RDD is truncated. This is okay because, in case of a worker failure, the RDD data can be read back from the fault-tolerant storage.

When an RDD is cached, the data of the RDD is cached in memory, but the lineage is not truncated. This is because if the in-memory data is lost, the lineage is required to recompute the data.
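
As a quick illustration of the difference (a minimal sketch; the checkpoint directory and the RDD contents are placeholders), toDebugString shows whether the lineage has been truncated:

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("cache-vs-checkpoint"))
    sc.setCheckpointDir("hdfs:///tmp/ckpt")   // any fault-tolerant storage

    // Cached: data is kept in memory, but the lineage stays intact
    val cached = sc.parallelize(1 to 100).map(_ * 2).filter(_ % 3 == 0)
    cached.cache()
    cached.count()
    println(cached.toDebugString)             // still shows the map/filter chain

    // Checkpointed: data is written out and the lineage is truncated
    val checkpointed = sc.parallelize(1 to 100).map(_ * 2).filter(_ % 3 == 0)
    checkpointed.checkpoint()                 // mark before the first action on it
    checkpointed.count()                      // materializes and writes the checkpoint
    println(checkpointed.toDebugString)       // lineage now rooted at a checkpoint RDD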

So to deal with stackoverflow errors caused by a long lineage, caching alone is not going to help. You have to checkpoint the RDD, and I think the correct way to do this is the following:
1. Mark the RDD of every Nth iteration for both caching and checkpointing.
2. Before generating the (N+1)th iteration's RDD, force the materialization of this RDD by doing a rdd.count(). This will persist the RDD in memory, save it to HDFS, and truncate the lineage (see the sketch after this list). If you mark every Nth iteration's RDD for checkpointing but only force the materialization after ALL the iterations (rather than after every Nth iteration as suggested here), that will still lead to stackoverflow errors.
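
For example, a minimal sketch of that pattern (the iteration body, the value of N, and the checkpoint directory are placeholders):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    val sc = new SparkContext(new SparkConf().setAppName("iterative-job"))
    sc.setCheckpointDir("hdfs:///tmp/spark-checkpoints")

    val N = 10                                 // truncate the lineage every N iterations
    var rdd = sc.parallelize(1 to 1000000).map(_.toLong)

    for (i <- 1 to 1000) {
      rdd = rdd.map(_ + 1)                     // stand-in for the real per-iteration transformation
      if (i % N == 0) {
        rdd.persist(StorageLevel.MEMORY_ONLY)  // cache in memory
        rdd.checkpoint()                       // mark for checkpointing to HDFS
        rdd.count()                            // force materialization now, truncating the lineage
      }
    }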

Yes, this checkpointing and materialization will definitely decrease performance, but that is a limitation of the current implementation.

If you are brave enough, instead of relying on checkpointing to HDFS for truncating the lineage, you can try the following.
1. Persist the Nth RDD with replication (see the different StorageLevels); this replicates the in-memory RDD between workers within Spark. Let's call this RDD R.
2. Force it to materialize in memory.
3. Create a modified RDD R' which has the same data as RDD R but does not have the lineage. This is done by creating a new BlockRDD using the IDs of the blocks of data representing the in-memory R (I can elaborate on that if you want; a rough sketch follows below).

This avoids writing to HDFS (the data is only replicated in Spark memory), truncates the lineage (by creating new BlockRDDs), and avoids the stackoverflow error.
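
A rough, untested sketch of what steps 1-3 could look like (BlockRDD and RDDBlockId are Spark-internal, package-private classes, so this code would have to live inside an org.apache.spark package; the helper name is hypothetical):

    import scala.reflect.ClassTag
    import org.apache.spark.rdd.{BlockRDD, RDD}
    import org.apache.spark.storage.{BlockId, RDDBlockId, StorageLevel}

    // Hypothetical helper: returns an RDD backed by the same in-memory,
    // replicated blocks as `rdd`, but with no lineage attached.
    def truncateLineage[T: ClassTag](rdd: RDD[T]): RDD[T] = {
      rdd.persist(StorageLevel.MEMORY_ONLY_2)      // 1. replicate blocks across workers
      rdd.count()                                  // 2. force materialization in memory
      val blockIds: Array[BlockId] =               // 3. block IDs of the cached partitions
        rdd.partitions.indices.map(i => RDDBlockId(rdd.id, i): BlockId).toArray
      new BlockRDD[T](rdd.sparkContext, blockIds)  //    new RDD with the same data, no lineage
    }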

Hope this helps



The long lineage creates a long/deep Java object tree (the DAG of RDD objects), which needs to be serialized as part of task creation. When serializing, the whole object DAG is traversed recursively, leading to the stackoverflow error.
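
A toy, non-Spark illustration of the same underlying problem (the Node class is hypothetical): Java serialization recurses once per link in a deeply nested object chain, just like a very long RDD lineage.

    import java.io.{ByteArrayOutputStream, ObjectOutputStream}

    // Each Node references the previous one, like each RDD references its parent.
    case class Node(value: Int, prev: Node)

    var chain: Node = Node(0, null)
    for (i <- 1 to 1000000) chain = Node(i, chain)     // build a very deep chain

    val out = new ObjectOutputStream(new ByteArrayOutputStream())
    out.writeObject(chain)                             // throws java.lang.StackOverflowError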

TD

