SparkCore: RDD, the Core Dataset
What a fine day. Let's get started: today we cover the RDD. Why the RDD first? A word about me: as a typical engineering type, if I dive into hands-on work without truly understanding a concept, I'm completely lost. Even if I blunder into the right answer once by sheer luck, the moment the problem or the file format changes I'm right back to being lost. It reminds me of high-school chemistry: 1 mol of water molecules = 2 mol of hydrogen atoms + 1 mol of oxygen atoms. Before I understood what a mole actually was, I kept wondering how on earth 2 + 1 could equal 1.
So, to the point. From the start, all of our datasets will be RDDs (the way I think of it, an RDD is a template that gets instantiated over and over as data flows through), and without understanding what an RDD is, it is hard to see how one RDD gets transformed into another. In Spark Core, data processing is exactly that: transforming RDDs into other RDDs.
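To make the "one RDD transforms into another" idea concrete, here is a minimal toy sketch in plain Python. This is not Spark's API; the class `ToyRDD` and everything in it is hypothetical. The point it illustrates is that a transformation never mutates the original dataset, it always produces a new one.

```python
# A toy, in-memory stand-in for an RDD (hypothetical; NOT Spark's API).
# Each transformation returns a brand-new immutable dataset.
class ToyRDD:
    def __init__(self, elements):
        self._elements = tuple(elements)  # immutable snapshot

    def map(self, f):
        # Returns a NEW ToyRDD; self is untouched.
        return ToyRDD(f(x) for x in self._elements)

    def filter(self, pred):
        # Same idea: the original dataset survives unchanged.
        return ToyRDD(x for x in self._elements if pred(x))

    def collect(self):
        return list(self._elements)


numbers = ToyRDD([1, 2, 3, 4])
evens_squared = numbers.filter(lambda x: x % 2 == 0).map(lambda x: x * x)

print(numbers.collect())        # [1, 2, 3, 4] -- the original is unchanged
print(evens_squared.collect())  # [4, 16]
```

In real Spark the transformations are also lazy (nothing runs until an action like `collect` is called), which this toy skips for brevity.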
The Spark source code describes the RDD like this:
A Resilient Distributed Dataset (RDD), the basic abstraction in Spark. Represents an immutable, partitioned collection of elements that can be operated on in parallel. This class contains the basic operations available on all RDDs, such as map, filter, and persist. In addition, PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; DoubleRDDFunctions contains operations available only on RDDs of Doubles; and SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. All operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit.
Internally, each RDD is characterized by five main properties:
A list of partitions
A function for computing each split
A list of dependencies on other RDDs
Optionally, a Partitioner for key-value RDDs (e.g. to say that the RDD is hash-partitioned)
Optionally, a list of preferred locations to compute each split on (e.g. block locations for an HDFS file)
All of the scheduling and execution in Spark is done based on these methods, allowing each RDD to implement its own way of computing itself. Indeed, users can implement custom RDDs (e.g. for reading data from a new storage system) by overriding these functions. Please refer to the Spark paper for more details on RDD internals.
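The five properties above can be sketched as a toy class in plain Python (all names here are hypothetical, not Spark's internal API): a list of partitions, a per-split compute function, a list of dependencies, an optional partitioner, and optional preferred locations. A "child" RDD is then just another instance whose compute function reads from its parent, which is the essence of overriding these methods for a custom RDD.

```python
# Toy sketch of the five RDD properties (hypothetical; NOT Spark's API).
class FivePropRDD:
    def __init__(self, partitions, compute, dependencies=(),
                 partitioner=None, preferred_locations=None):
        self.partitions = list(partitions)      # 1. a list of partitions
        self.compute = compute                  # 2. a function computing each split
        self.dependencies = list(dependencies)  # 3. dependencies on other RDDs
        self.partitioner = partitioner          # 4. optional, for key-value RDDs
        # 5. optional data-locality hints for each split
        self.preferred_locations = preferred_locations or (lambda split: [])

    def collect(self):
        # A stand-in "scheduler": run compute on every partition, concatenate.
        out = []
        for split in self.partitions:
            out.extend(self.compute(split))
        return out


# A parent RDD whose partitions are just ranges of numbers.
parent = FivePropRDD(
    partitions=[range(0, 3), range(3, 6)],
    compute=lambda split: list(split),
)

# A child RDD that depends on the parent and squares each element, split by split.
child = FivePropRDD(
    partitions=parent.partitions,
    compute=lambda split: [x * x for x in parent.compute(split)],
    dependencies=[parent],
)

print(child.collect())  # [0, 1, 4, 9, 16, 25], computed partition by partition
```

Real Spark expresses exactly these five pieces as `getPartitions`, `compute`, `getDependencies`, `partitioner`, and `getPreferredLocations` on the `RDD` class, which is why a custom RDD is written by overriding them.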