reduceByKey
def reduceByKey(func: (V, V) => V): RDD[(K, V)]
Applies a reduce function to the values of elements that share the same key in a key-value RDD.
Example
scala> val a = sc.parallelize(List("dog", "cat", "owl", "gnu", "ant"), 2)
a: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:27

scala> val b = a.map(x => (x.length, x))
b: org.apache.spark.rdd.RDD[(Int, String)] = MapPartitionsRDD[1] at map at <console>:29

scala> b.reduceByKey(_ + _).collect
res0: Array[(Int, String)] = Array((3,dogcatowlgnuant))
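A common use of reduceByKey is word counting: map each element to a (key, 1) pair, then sum the ones per key. The sketch below assumes the same SparkContext `sc` as in the shell session above; `words` and `counts` are illustrative names.

```scala
// Word-count sketch (assumes a live SparkContext `sc`, as in the Spark shell)
val words = sc.parallelize(List("dog", "cat", "dog", "owl", "dog"), 2)
val counts = words.map(w => (w, 1)).reduceByKey(_ + _)
counts.collect  // Array((dog,3), (cat,1), (owl,1)) -- element order may vary
```

Because the reduce function must be associative and commutative, Spark can combine values within each partition before shuffling, which keeps the shuffle small.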
foldByKey
def foldByKey(zeroValue: V)(func: (V, V) => V): RDD[(K, V)]
Aggregates the values of a key-value RDD per key, starting from an initial (zero) value.
Example
scala> val a = sc.parallelize(List("dog", "cat", "owl", "gnu", "ant"), 2)
a: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[7] at parallelize at <console>:27

scala> val b = a.map(x => (x.length, x))
b: org.apache.spark.rdd.RDD[(Int, String)] = MapPartitionsRDD[8] at map at <console>:29

scala> b.foldByKey("")(_ + _).collect
res8: Array[(Int, String)] = Array((3,dogcatowlgnuant))
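The same pattern works with a numeric zero value. The sketch below again assumes the SparkContext `sc` from the session above; `nums` is an illustrative name. Note that the zero value is applied once per partition, so it should be a neutral element for the function (0 for +, 1 for *, "" for string concatenation), otherwise it gets folded in multiple times.

```scala
// foldByKey with a numeric zero value (assumes a live SparkContext `sc`)
val nums = sc.parallelize(List(("a", 1), ("b", 2), ("a", 3)), 2)
nums.foldByKey(0)(_ + _).collect  // Array((a,4), (b,2)) -- element order may vary
```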
Difference: both reduceByKey and foldByKey aggregate values by key, but foldByKey additionally takes an initial (zero) value, which is clear from the code:
b.reduceByKey(_ + _).collect //reduceByKey
b.foldByKey("")(_ + _).collect //foldByKey