1. Preface
While performing a nested RDD operation in Spark, the following error was thrown:
Caused by: org.apache.spark.SparkException: This RDD lacks a SparkContext. It could happen in the following cases:
(1) RDD transformations and actions are NOT invoked by the driver, but inside of other transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063.
(2) When a Spark Streaming job recovers from checkpoint, this exception will be hit if a reference to an RDD not defined by the streaming job is used in DStream operations. For more information, See SPARK-13758.
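To make the first case concrete, here is a minimal sketch in Scala (the variable names, sample data, and local-mode SparkSession are assumptions for illustration, not part of the original error report). The commented-out line reproduces the invalid nested access; the fix is to run the action on the second RDD in the driver and ship only its plain result into the closure:

```scala
import org.apache.spark.sql.SparkSession

object NestedRddDemo {
  def main(args: Array[String]): Unit = {
    // Hypothetical local-mode session, only to reproduce the error pattern.
    val spark = SparkSession.builder()
      .appName("nested-rdd-demo")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    val rdd1 = sc.parallelize(1 to 10)
    val rdd2 = sc.parallelize(Seq(("a", 1), ("b", 2)))

    // INVALID: rdd2 is captured inside a transformation of rdd1. The closure is
    // serialized to executors, where rdd2 has no SparkContext, so Spark throws
    // "This RDD lacks a SparkContext" (SPARK-5063).
    // val broken = rdd1.map(x => rdd2.values.count() * x).collect()

    // VALID: run the action on rdd2 in the driver first, then use the plain value.
    val total = rdd2.values.count()            // driver-side action, returns a Long
    val fixed = rdd1.map(x => total * x).collect()
    println(fixed.mkString(", "))

    spark.stop()
  }
}
```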
2. Error Overview
Most of the work in a Spark program is done through RDD operations: you pass functions into RDD operators, and those functions run concurrently on different nodes. Each node's memory has its own scope, so variables cannot be accessed across nodes, which is sometimes inconvenient. Spark therefore provides two kinds of shared variables for programming (see the sketch after this list):
- Broadcast variables (broadcast)
- Accumulators (counters)
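The sketch below shows how both kinds of shared variables sidestep the nested-RDD problem; the lookup data, key names, and local-mode SparkSession are illustrative assumptions. A small RDD is collected to the driver and broadcast, so executors read `bcLookup.value` instead of touching another RDD, and a `LongAccumulator` counts keys that were not found:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.util.LongAccumulator

object SharedVariablesDemo {
  def main(args: Array[String]): Unit = {
    // Hypothetical local-mode session; dataset and names are for illustration only.
    val spark = SparkSession.builder()
      .appName("shared-variables-demo")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Instead of referencing another RDD inside a transformation,
    // collect the small dataset to the driver and broadcast it.
    val lookup = sc.parallelize(Seq(("a", 1), ("b", 2))).collectAsMap()
    val bcLookup = sc.broadcast(lookup)

    // Accumulator: written on executors, read back on the driver.
    val missing: LongAccumulator = sc.longAccumulator("missing-keys")

    val data = sc.parallelize(Seq("a", "b", "c"))
    val joined = data.map { k =>
      val v = bcLookup.value.get(k)      // read-only broadcast value on the executor
      if (v.isEmpty) missing.add(1)      // count keys absent from the lookup map
      (k, v.getOrElse(0))
    }.collect()

    println(joined.mkString(", "))
    println(s"keys not found: ${missing.value}")

    spark.stop()
  }
}
```

One caveat: accumulator updates made inside transformations (as in the `map` above) may be applied more than once if a task is re-executed; only updates performed inside actions are guaranteed to be counted exactly once.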