Do Structured Streaming checkpoint files grow indefinitely?
Will it grow indefinitely?
No. Apache Spark keeps only as many checkpointed files as you specify in the configuration. The entry responsible for that number is spark.sql.streaming.minBatchesToRetain and its default is 100.
You should not ignore this property, since it defines your data reprocessing period. For example, if you decide to keep only the last 10 entries of a query that generates one batch per minute, you will be unable to reprocess data older than 10 minutes - or at least, you will be unable to do it easily by simply promoting the checkpointed information to the one used by the query. Checkpoint cleaning is a physical delete operation, so the information is lost permanently.
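As a rough illustration of that trade-off, the reprocessing window can be estimated as minBatchesToRetain times the trigger interval. This is a minimal sketch, not Spark code; the helper name is hypothetical, and real retention also depends on trigger timing and batch completion:

```python
# Sketch: estimate how far back you can reprocess from retained checkpoints.
# Inputs mirror spark.sql.streaming.minBatchesToRetain and the trigger interval.

def reprocessing_window_minutes(min_batches_to_retain, trigger_interval_minutes):
    """Approximate reprocessing window: retained batches x trigger interval."""
    return min_batches_to_retain * trigger_interval_minutes

# The article's example: 10 retained batches, one batch per minute
window = reprocessing_window_minutes(10, 1)
print(window)  # 10 -> data older than ~10 minutes cannot be reprocessed
```

With the default of 100 retained batches and a one-minute trigger, the window would be roughly 100 minutes.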
Answer:
You can use a more global property called spark.sql.streaming.checkpointLocation. If this property is set, Apache Spark will create the checkpoint directory for each query under {spark.sql.streaming.checkpointLocation}/{options.queryName}. If the queryName option is missing, it will generate a directory with a random UUID identifier.
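The resolution rule just described can be sketched in plain Python. This mirrors the documented behavior, not Spark's actual implementation, and the helper name is hypothetical:

```python
import uuid

def resolve_checkpoint_dir(checkpoint_location, query_name=None):
    """Mirror the documented rule: {checkpointLocation}/{queryName},
    falling back to a random UUID when queryName is not set."""
    name = query_name if query_name else str(uuid.uuid4())
    return f"{checkpoint_location.rstrip('/')}/{name}"

# Named query: a stable, predictable directory
print(resolve_checkpoint_dir("/checkpoints", "clicks_agg"))  # /checkpoints/clicks_agg

# Unnamed query: a fresh UUID directory on every (re)start
print(resolve_checkpoint_dir("/checkpoints"))
```

This also makes clear why an unnamed query cannot find its previous state after a restart: each start resolves to a brand-new UUID directory.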
Always define queryName alongside the spark.sql.streaming.checkpointLocation
If you want to use the checkpoint as your main fault-tolerance mechanism and you configure it with spark.sql.streaming.checkpointLocation, always define the queryName sink option. Otherwise, when the query restarts, Apache Spark will create a completely new checkpoint directory and, therefore, will not restore your checkpointed state!