CompactBuffer is not a class from the Scala standard library but one defined inside Spark (org.apache.spark.util.collection.CompactBuffer). It is an append-only buffer that extends Seq, so the values it holds can be iterated over like any ordinary collection; it is optimized for the small per-key groups that a shuffle typically produces.
Spark's groupByKey operator groups records by key and returns the values for each key as a CompactBuffer.
groupByKey is specific to pair RDDs (RDD[(K, V)]); plain RDDs do not have it.
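To make the "optimized for small groups" point concrete, here is a simplified sketch of the idea behind CompactBuffer. The real Spark class is `private[spark]` and more elaborate; `MiniCompactBuffer` below is a hypothetical name for illustration only. The key trick is that the first two elements live in plain fields, so tiny groups avoid allocating a backing array at all:

```scala
import scala.collection.mutable.ArrayBuffer

// Simplified sketch of the idea behind Spark's CompactBuffer (assumption:
// the real class is private[spark] and not directly constructible by users).
// The first two elements are stored in fields; only groups with 3+ values
// pay for an ArrayBuffer allocation.
class MiniCompactBuffer[T] extends Iterable[T] {
  private var element0: T = _
  private var element1: T = _
  private var curSize = 0
  private var otherElements: ArrayBuffer[T] = null

  // Append-only, like the buffer groupByKey fills during the shuffle.
  def +=(value: T): this.type = {
    if (curSize == 0) element0 = value
    else if (curSize == 1) element1 = value
    else {
      if (otherElements == null) otherElements = new ArrayBuffer[T]
      otherElements += value
    }
    curSize += 1
    this
  }

  // Extending Iterable is what makes the result easy to loop over.
  def iterator: Iterator[T] = new Iterator[T] {
    private var pos = 0
    def hasNext: Boolean = pos < curSize
    def next(): T = {
      val v = pos match {
        case 0 => element0
        case 1 => element1
        case i => otherElements(i - 2)
      }
      pos += 1
      v
    }
  }
}
```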
An example in the Spark shell:
scala> val words = Array("one", "two", "two", "three", "three", "three")
words: Array[String] = Array(one, two, two, three, three, three)
// first turn the plain RDD into a pair RDD
scala> val wordPairsRDD = sc.parallelize(words).map(word => (word, 1))
wordPairsRDD: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[62] at map at <console>:26
scala> wordPairsRDD.collect
res51: Array[(String, Int)] = Array((one,1), (two,1), (two,1), (three,1), (three,1), (three,1))
scala> wordPairsRDD.groupByKey().collect
res52: Array[(String, Iterable[Int])] = Array((two,CompactBuffer(1, 1)), (one,CompactBuffer(1)), (three,CompactBuffer(1, 1, 1)))
scala> wordPairsRDD.groupByKey().map(x=>(x._1,x._2.sum)).collect
res53: Array[(String, Int)] = Array((two,2), (one,1), (three,3))
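The same group-then-sum pipeline can be sketched with plain Scala collections, no cluster required; `groupBy` plays the role of `groupByKey`, and the grouped value sequences stand in for the CompactBuffer instances Spark returns (`GroupByKeyLocal` and `wordCounts` are names chosen for this sketch, not Spark API):

```scala
// Local analogue of the Spark shell session above.
object GroupByKeyLocal {
  // words.map(w => (w, 1)) builds the pairs, groupBy groups them by key
  // (like groupByKey), and the final map sums each key's values
  // (like .map(x => (x._1, x._2.sum))).
  def wordCounts(words: Seq[String]): Map[String, Int] =
    words
      .map(word => (word, 1))
      .groupBy(_._1)
      .map { case (key, pairs) => (key, pairs.map(_._2).sum) }

  def main(args: Array[String]): Unit =
    // prints Map(one -> 1, two -> 2, three -> 3) in some unspecified order
    println(wordCounts(Array("one", "two", "two", "three", "three", "three")))
}
```

On a real cluster, summing after groupByKey shuffles every (word, 1) pair across the network before adding; the local version hides that cost, so treat this only as a sketch of the logic, not of the execution.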