Creating RDDs
There are three ways to create an RDD in Spark:
- create an RDD from external storage
- create an RDD from a collection
- create an RDD from an existing RDD
textFile
Call the SparkContext.textFile() method to read data from external storage and create an RDD.
parallelize
Call the SparkContext parallelize() method to turn an existing collection into an RDD.
makeRDD
Overload 1
/** Distribute a local Scala collection to form an RDD.
 *
 * This method is identical to `parallelize`.
 */
def makeRDD[T: ClassTag](
    seq: Seq[T],
    numSlices: Int = defaultParallelism): RDD[T] = withScope {
  parallelize(seq, numSlices)
}
Overload 2: distribute a local Scala collection to form an RDD, with one or more location preferences (hostnames of Spark nodes) for each object, creating a new partition for each collection item.
/**
 * Distribute a local Scala collection to form an RDD, with one or more
 * location preferences (hostnames of Spark nodes) for each object.
 * Create a new partition for each collection item.
 */
def makeRDD[T: ClassTag](seq: Seq[(T, Seq[String])]): RDD[T] = withScope {
  assertNotStopped()
  val indexToPrefs = seq.zipWithIndex.map(t => (t._2, t._1._2)).toMap
  new ParallelCollectionRDD[T](this, seq.map(_._1), math.max(seq.size, 1), indexToPrefs)
}
Examples
scala> val rdd = sc.parallelize(1 to 6, 2)
val rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[2] at parallelize at <console>:1
scala> rdd.collect()
val res4: Array[Int] = Array(1, 2, 3, 4, 5, 6)
scala> val seq = List(("American Person", List("Tom", "Jim")), ("China Person", List("LiLei", "HanMeiMei")), ("Color Type", List("Red", "Blue")))
val seq: List[(String, List[String])] = List((American Person,List(Tom, Jim)), (China Person,List(LiLei, HanMeiMei)), (Color Type,List(Red, Blue)))
scala> val rdd2 = sc.makeRDD(seq)
val rdd2: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at makeRDD at <console>:1
scala> rdd2.partitions.size
val res0: Int = 3
scala> rdd2.foreach(println)
American Person
Color Type
China Person
scala> val rdd1 = sc.parallelize(seq)
val rdd1: org.apache.spark.rdd.RDD[(String, List[String])] = ParallelCollectionRDD[1] at parallelize at <console>:1
scala> rdd1.partitions.size
val res1: Int = 2
scala> rdd2.collect()
val res2: Array[String] = Array(American Person, China Person, Color Type)
scala> rdd1.collect()
val res3: Array[(String, List[String])] = Array((American Person,List(Tom, Jim)), (China Person,List(LiLei, HanMeiMei)), (Color Type,List(Red, Blue)))
scala> var lines = sc.textFile("/root/tmp/a.txt",3)
var lines: org.apache.spark.rdd.RDD[String] = /root/tmp/a.txt MapPartitionsRDD[4] at textFile at <console>:1
scala> lines.collect()
val res6: Array[String] = Array(a,b,c)
scala> lines.partitions.size
val res7: Int = 3
Transformation operators
flatMap
map
mapPartitions
Difference between map and mapPartitions
map: if a partition holds 10,000 records, your function is invoked and computed 10,000 times, once per record.
mapPartitions: a task invokes the function only once per partition, and the function receives all of the partition's data at once (as an iterator). Because it runs only once per partition, it is generally more efficient.
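As a hedged sketch of the difference (the RDD name nums and the doubling function are illustrative assumptions, not from the examples below):

// map: the function runs once per element
val nums = sc.parallelize(1 to 10000, 2)
val viaMap = nums.map(x => x * 2)

// mapPartitions: the function runs once per partition and receives an
// Iterator over all elements of that partition, which lets you amortize
// per-partition setup (e.g. opening a connection) across many records
val viaMapPartitions = nums.mapPartitions { iter =>
  // any expensive setup would happen here, once per partition
  iter.map(x => x * 2)
}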
reduceByKey
groupByKey
Examples
scala> var lines = sc.textFile("/root/tmp/a.txt",3)
var lines: org.apache.spark.rdd.RDD[String] = /root/tmp/a.txt MapPartitionsRDD[13] at textFile at <console>:1
scala> lines.flatMap(x=>x.split(",")).map(x=>(x,1)).reduceByKey((a,b)=>a+b).foreach(println)
(c,2)
(b,1)
(d,1)
(a,2)
scala> lines.collect()
val res27: Array[String] = Array(a,b,c, c, a,d)
scala> lines.map(_.split(",")).collect()
val res25: Array[Array[String]] = Array(Array(a, b, c), Array(c), Array(a, d))
scala> lines.flatMap(_.split(",")).collect()
val res26: Array[String] = Array(a, b, c, c, a, d)
keyBy
scala> val a = sc.parallelize(List("dog", "tiger", "lion", "cat", "spider", "eagle"), 2)
val a: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:1
scala> val b = a.keyBy(_.length) // use the string length as the key
val b: org.apache.spark.rdd.RDD[(Int, String)] = MapPartitionsRDD[1] at keyBy at <console>:1
scala> b.foreach(println)
(3,cat)
(6,spider)
(5,eagle)
(3,dog)
(5,tiger)
(4,lion)
groupBy
Internally groupBy partitions with a HashPartitioner. How is the number of partitions chosen? If spark.default.parallelism is set, that value is used; otherwise the largest partition count among the parent RDDs is used.
scala> val a = sc.parallelize(1 to 9, 3)
val a: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[3] at parallelize at <console>:1
scala> a.groupBy(x => { if (x % 2 == 0) "even" else "odd" }).collect
warning: 1 deprecation (since 2.13.3); for details, enable `:setting -deprecation` or `:replay -deprecation`
val res2: Array[(String, Iterable[Int])] = Array((even,Seq(2, 4, 6, 8)), (odd,Seq(1, 3, 5, 7, 9)))
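A small sketch to check the rule above (the first result depends on whether spark.default.parallelism is set in your shell, so it is deliberately not spelled out):

val a = sc.parallelize(1 to 9, 3)

// no partition count given: spark.default.parallelism if set,
// otherwise the largest partition count among the parent RDDs (here 3)
a.groupBy(x => x % 2 == 0).partitions.size

// an explicit partition count always takes precedence
a.groupBy(x => x % 2 == 0, 6).partitions.size   // 6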
groupByKey
scala> val a = sc.parallelize(List("dog", "tiger", "lion", "cat", "spider", "eagle"), 2)
val a: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:1
scala> val b = a.keyBy(_.length) // use the string length as the key
val b: org.apache.spark.rdd.RDD[(Int, String)] = MapPartitionsRDD[1] at keyBy at <console>:1
scala> b.foreach(println)
(3,cat)
(6,spider)
(5,eagle)
(3,dog)
(5,tiger)
(4,lion)
scala> b.groupByKey.collect // group values that share the same key
warning: 2 deprecations (since 2.13.3); for details, enable `:setting -deprecation` or `:replay -deprecation`
val res1: Array[(Int, Iterable[String])] = Array((4,Seq(lion)), (6,Seq(spider)), (3,Seq(dog, cat)), (5,Seq(tiger, eagle)))
reduceByKey
aggregateByKey
For RDDs, prefer the reduceByKey or aggregateByKey operators over groupByKey: reduceByKey and aggregateByKey use a user-defined function to pre-aggregate records with the same key locally on each node, while groupByKey does no pre-aggregation, so the full data set is shipped and redistributed across the nodes of the cluster, which performs comparatively poorly.
Spark SQL's own HashAggregate already implements local pre-aggregation plus global aggregation.
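A hedged sketch of the same word count written with each of the three operators (pairs is an assumed name; the first two pre-aggregate on the map side, the groupByKey version does not):

val pairs = sc.textFile("/root/tmp/a.txt").flatMap(_.split(",")).map((_, 1))

// reduceByKey: combines values with the same key locally before the shuffle
val viaReduce = pairs.reduceByKey(_ + _)

// aggregateByKey: zero value + seqOp (within a partition) + combOp (across partitions),
// also pre-aggregated locally; here it is equivalent to reduceByKey(_ + _)
val viaAggregate = pairs.aggregateByKey(0)(_ + _, _ + _)

// groupByKey: ships every (word, 1) pair across the cluster, then sums -- slower
val viaGroup = pairs.groupByKey().mapValues(_.sum)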
Action operators
foreach
saveAsTextFile
saveAsObjectFile
collect
collectAsMap
lookup
count
top
reduce
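A short sketch exercising a few of these actions (the sample data and the output path are illustrative assumptions):

val kv = sc.parallelize(List(("a", 1), ("b", 2), ("a", 3)))

kv.count()                        // 3
kv.collect()                      // Array((a,1), (b,2), (a,3))
kv.collectAsMap()                 // Map(b -> 2, a -> 3): later values overwrite duplicates
kv.lookup("a")                    // Seq(1, 3): all values for one key
kv.map(_._2).reduce(_ + _)        // 6
kv.map(_._2).top(2)               // Array(3, 2): the largest elements
kv.saveAsTextFile("/root/tmp/kv_out")   // writes one file per partition; path must not exist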
Repartitioning operators
coalesce
repartition
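A minimal sketch of the difference, assuming an 8-partition input: coalesce narrows the partition count without a shuffle by default, while repartition always shuffles and can either increase or decrease it.

val data = sc.parallelize(1 to 100, 8)

// coalesce: merge partitions without a shuffle (handy for shrinking after a filter)
data.coalesce(2).partitions.size        // 2

// without shuffle, coalesce cannot grow the partition count
data.coalesce(16).partitions.size       // still 8

// repartition(n) is coalesce(n, shuffle = true): always shuffles
data.repartition(16).partitions.size    // 16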
Persistence operators
persist
cache
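A small sketch of how persist and cache relate (the storage level chosen here is just an illustrative assumption): cache() is shorthand for persist(StorageLevel.MEMORY_ONLY), while persist lets you pick the storage level explicitly.

import org.apache.spark.storage.StorageLevel

val words = sc.textFile("/root/tmp/a.txt").flatMap(_.split(","))

words.cache()                                   // same as persist(StorageLevel.MEMORY_ONLY)
// or, to spill to disk when memory is tight:
// words.persist(StorageLevel.MEMORY_AND_DISK)

words.count()       // the first action materializes and caches the data
words.count()       // served from the cached copy
words.unpersist()   // drop it when no longer needed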
checkpoint
scala> sc.setCheckpointDir("/root/tmp/checkpoint")
scala> val rdd1 = sc.textFile("/root/tmp/a.txt",3).flatMap(x=>x.split(",")).map(x=>(x,1)).reduceByKey((a,b)=>a+b)
scala> rdd1.cache()
scala> rdd1.checkpoint()
scala> rdd1.collect()
checkpoint means establishing a checkpoint, similar to a snapshot. In a Spark job the computation DAG can be very long, and the whole DAG must be computed to produce a result. If intermediate data is lost somewhere along that long lineage, Spark recomputes everything from the beginning according to the RDD dependencies, which is expensive. We can keep intermediate results in memory or on disk with cache or persist, but even that does not guarantee the data will never be lost: if that memory or disk fails, Spark again recomputes the whole lineage from scratch. That is why checkpoint exists: it takes important intermediate data in the DAG, marks it as a checkpoint, and stores the result in a highly available location (usually HDFS).
RDD dependencies
Before going into checkpoint, first look at RDD dependencies, using a word count as an example:
scala> sc.textFile("hdfs://leen:8020/user/hive/warehouse/tools.db/cde_prd").flatMap(_.split("\\\t")).map((_,1)).reduceByKey(_+_);
res0: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey at <console>:28
scala> res0.toDebugString
res1: String =
(2) ShuffledRDD[4] at reduceByKey at <console>:28 []
+-(2) MapPartitionsRDD[3] at map at <console>:28 []
| MapPartitionsRDD[2] at flatMap at <console>:28 []
| hdfs://leen:8020/user/hive/warehouse/tools.db/cde_prd MapPartitionsRDD[1] at textFile at <console>:28 []
| hdfs://leen:8020/user/hive/warehouse/tools.db/cde_prd HadoopRDD[0] at textFile at <console>:28 []
1. When textFile reads from HDFS, it first creates a HadoopRDD whose key is the byte offset of each line and whose value is the line itself. Since the offset is usually of no use, the HadoopRDD is then converted into a MapPartitionsRDD that keeps only the data (the lines).
2. flatMap produces a MapPartitionsRDD.
3. map produces a MapPartitionsRDD.
4. reduceByKey produces a ShuffledRDD.
How to create a checkpoint
1. First set the HDFS checkpoint directory on the SparkContext; without it, calling checkpoint throws an exception:
scala> res0.checkpoint
org.apache.spark.SparkException: Checkpoint directory has not been set in the SparkContext
scala> sc.setCheckpointDir("hdfs://leen:8020/checkPointDir")
After running the code above, a directory is created in HDFS:
/checkPointDir/9ae90c62-a7ff-442a-bbf0-e5c8cdd7982d
2. Then call checkpoint:
scala> res0.checkpoint
There is still no data in HDFS, which shows that checkpoint behaves like a transformation: it is lazy, and nothing is written until an action is executed.
scala> res0.count()
INFO ReliableRDDCheckpointData: Done checkpointing RDD 4 to hdfs://leen:8020/checkPointDir/9ae90c62-a7ff-442a-bbf0-e5c8cdd7982d/rdd-4, new parent is RDD 5
res5: Long = 73689
hive > dfs -du -h /checkPointDir/9ae90c62-a7ff-442a-bbf0-e5c8cdd7982d/rdd-4;
147 147 /checkPointDir/9ae90c62-a7ff-442a-bbf0-e5c8cdd7982d/rdd-4/_partitioner
1.2 M 1.2 M /checkPointDir/9ae90c62-a7ff-442a-bbf0-e5c8cdd7982d/rdd-4/part-00000
1.2 M 1.2 M /checkPointDir/9ae90c62-a7ff-442a-bbf0-e5c8cdd7982d/rdd-4/part-00001
However, this effectively runs the computation twice: the action computes the lineage once, and the checkpoint job then computes it again. So in practice we cache first and then checkpoint, so the lineage runs only once; when checkpointing, the data just cached in memory is read and written to HDFS, as follows:
rdd.cache()
rdd.checkpoint()
rdd.collect
The source code also strongly recommends persisting the RDD before checkpointing, and once the checkpoint succeeds, all references to the parent RDDs are removed, as shown below:
/**
 * Mark this RDD for checkpointing. It will be saved to a file inside the checkpoint
 * directory set with `SparkContext#setCheckpointDir` and all references to its parent
 * RDDs will be removed. This function must be called before any job has been
 * executed on this RDD. It is strongly recommended that this RDD is persisted in
 * memory, otherwise saving it on a file will require recomputation.
 */
def checkpoint(): Unit = RDDCheckpointData.synchronized {
  // NOTE: we use a global lock here due to complexities downstream with ensuring
  // children RDD partitions point to the correct parent partitions. In the future
  // we should revisit this consideration.
  if (context.checkpointDir.isEmpty) {
    throw new SparkException("Checkpoint directory has not been set in the SparkContext")
  } else if (checkpointData.isEmpty) {
    checkpointData = Some(new ReliableRDDCheckpointData(this))
  }
}
The RDD dependencies have been removed:
scala> res0.toDebugString
res6: String =
(2) ShuffledRDD[4] at reduceByKey at <console>:28 []
| ReliableCheckpointRDD[5] at count at <console>:30 []
Reference: Spark 中 checkpoint 的正确使用方式以及与 cache 的区别 (CSDN blog)