Spark checkpoint

/**
   * Mark this RDD for checkpointing. It will be saved to a file inside the checkpoint
   * directory set with `SparkContext#setCheckpointDir` and all references to its parent
   * RDDs will be removed. This function must be called before any job has been
   * executed on this RDD. It is strongly recommended that this RDD is persisted in
   * memory, otherwise saving it on a file will require recomputation.
   */
  def checkpoint(): Unit = RDDCheckpointData.synchronized {
    // NOTE: we use a global lock here due to complexities downstream with ensuring
    // children RDD partitions point to the correct parent partitions. In the future
    // we should revisit this consideration.
    if (context.checkpointDir.isEmpty) {
      throw new SparkException("Checkpoint directory has not been set in the SparkContext")
    } else if (checkpointData.isEmpty) {
      checkpointData = Some(new ReliableRDDCheckpointData(this))
    }
  }
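Note that checkpoint() only records the intent to checkpoint; nothing is written until a job actually runs on the RDD. Below is a minimal driver program showing the usual call pattern (the Windows hadoop.home.dir path and the HDFS address node1:9000 are specific to the author's environment):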
import org.apache.spark.{SparkConf, SparkContext}

System.setProperty("hadoop.home.dir", "G:\\hadoop-common-2.2.0-bin") // winutils workaround on Windows
val conf = new SparkConf().setAppName("WC").setMaster("local[4]")
val sc = new SparkContext(conf)

// The checkpoint directory must be set before checkpoint() is called,
// otherwise checkpoint() throws a SparkException (see the source above).
sc.setCheckpointDir("hdfs://node1:9000/checkpoint/")

// Persist first, as the Scaladoc recommends, so writing the checkpoint
// does not trigger a full recomputation of the RDD.
val value = sc.parallelize(Array("")).cache()
value.checkpoint()

// The checkpoint data is written to HDFS at the end of the first job
// that computes this RDD, so run an action to materialize it.
value.count()
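To confirm that the lineage really is truncated after the checkpoint is written, you can inspect the RDD before and after the first action. A minimal sketch against the same sc as above; the sample data is arbitrary, and isCheckpointed, getCheckpointFile, and toDebugString are standard RDD methods:

val pairs = sc.parallelize(1 to 100).map(x => (x % 10, x)).cache()
pairs.checkpoint()
println(pairs.toDebugString)     // full lineage, parent RDDs still visible

pairs.count()                    // first job; the checkpoint is written here

println(pairs.isCheckpointed)    // true
println(pairs.getCheckpointFile) // Some(hdfs://node1:9000/checkpoint/...)
println(pairs.toDebugString)     // lineage now starts at a checkpointed RDD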