- reduceByKey: aggregates values by key. A combine (map-side pre-aggregation) runs before the shuffle, and the result is an RDD[(K, V)].
- groupByKey: groups values by key and shuffles directly, with no pre-aggregation.
- Guidance: prefer reduceByKey over groupByKey, but check whether the pre-aggregation affects your business logic.
- Note that reduceByKey and groupByKey are also called differently:
groupByKey takes no arguments: wordPairsRDD.groupByKey().
scala> wordPairsRDD.collect
res42: Array[(String, Int)] = Array((one,1), (two,1), (two,1), (three,1), (three,1), (three,1))
scala> wordPairsRDD.groupByKey()
res49: org.apache.spark.rdd.RDD[(String, Iterable[Int])] = ShuffledRDD[59] at groupByKey at <console>:29
scala> wordPairsRDD.groupByKey().map(x=>(x._1,x._2.sum))
res50: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[61] at map at <console>:29
scala> wordPairsRDD.groupByKey().map(x=>(x._1,x._2.sum)).collect
res51: Array[(String, Int)] = Array((two,2), (one,1), (three,3))
reduceByKey takes an aggregation function: wordPairsRDD.reduceByKey(_ + _)
scala> wordPairsRDD.reduceByKey(_+_)
res52: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[64] at reduceByKey at <console>:29
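The difference between the two operators can be simulated without a Spark cluster. The sketch below is plain Python, not Spark: the `partitions` list is a hypothetical stand-in for the two partitions of `wordPairsRDD`, and the two functions mimic the shuffle behavior described above. With groupByKey every record crosses the shuffle; with reduceByKey the map-side combine first collapses each partition to one record per key, so fewer records are shuffled while the final totals are identical.

```python
# Plain-Python sketch (not Spark) of groupByKey vs reduceByKey.
# "partitions" is a hypothetical layout of wordPairsRDD's data.
from collections import defaultdict

partitions = [
    [("one", 1), ("two", 1), ("two", 1)],
    [("three", 1), ("three", 1), ("three", 1)],
]

def group_by_key(parts):
    # groupByKey: every record is shuffled, then grouped and summed.
    shuffled = [pair for part in parts for pair in part]
    groups = defaultdict(list)
    for k, v in shuffled:
        groups[k].append(v)
    return len(shuffled), {k: sum(vs) for k, vs in groups.items()}

def reduce_by_key(parts):
    # reduceByKey: map-side combine runs first, so only one record
    # per key per partition crosses the shuffle.
    combined_parts = []
    for part in parts:
        acc = defaultdict(int)
        for k, v in part:
            acc[k] += v  # the (_ + _) combine function
        combined_parts.append(list(acc.items()))
    shuffled = [pair for part in combined_parts for pair in part]
    totals = defaultdict(int)
    for k, v in shuffled:
        totals[k] += v
    return len(shuffled), dict(totals)

g_count, g_result = group_by_key(partitions)
r_count, r_result = reduce_by_key(partitions)
print(g_count, r_count)      # 6 records shuffled vs 3
print(g_result == r_result)  # True: both yield one=1, two=2, three=3
```

This is why reduceByKey is the recommended default for aggregations: it produces the same result as the groupByKey-then-sum pipeline while shuffling less data.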