Spark Operators: Key-Value Transformations groupByKey, reduceByKey, and reduceByKeyLocally


1. groupByKey

1)def groupByKey(): RDD[(K, Iterable[V])]
2)def groupByKey(numPartitions: Int): RDD[(K, Iterable[V])]
3)def groupByKey(partitioner: Partitioner): RDD[(K, Iterable[V])]

This function merges all V values for each key K in an RDD[K,V] into a single collection Iterable[V]. The parameter numPartitions sets the number of result partitions, and partitioner supplies a custom partitioning function.

scala> var rdd1 = sc.makeRDD(Array(("A",0),("A",2),("B",1),("B",2),("C",1)))
rdd1: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[89] at makeRDD at <console>:21
 
scala> rdd1.groupByKey().collect
res81: Array[(String, Iterable[Int])] = Array((A,CompactBuffer(0, 2)), (B,CompactBuffer(2, 1)), (C,CompactBuffer(1)))
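
The session above uses only the no-argument overload. The numPartitions and partitioner overloads mentioned in the description control how the result is partitioned; here is a minimal sketch of both, assuming the same sc as above (variable names are illustrative):

// Sketch of the numPartitions and Partitioner overloads of groupByKey.
// Assumes a SparkContext `sc`, e.g. inside spark-shell.
import org.apache.spark.HashPartitioner

val pairs = sc.makeRDD(Array(("A",0),("A",2),("B",1),("B",2),("C",1)))

val byCount = pairs.groupByKey(2)                            // overload 2): fixed partition count
byCount.partitions.size                                      // 2

val byPartitioner = pairs.groupByKey(new HashPartitioner(3)) // overload 3): custom partitioner
byPartitioner.partitions.size                                // 3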

2. reduceByKey

1)def reduceByKey(func: (V, V) => V): RDD[(K, V)]
2)def reduceByKey(func: (V, V) => V, numPartitions: Int): RDD[(K, V)]
3)def reduceByKey(partitioner: Partitioner, func: (V, V) => V): RDD[(K, V)]

This function aggregates the V values for each key K in an RDD[K,V] using the given reduce function func. As before, numPartitions sets the number of result partitions, and partitioner supplies a custom partitioning function.

scala> var rdd1 = sc.makeRDD(Array(("A",0),("A",2),("B",1),("B",2),("C",1)))
rdd1: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[91] at makeRDD at <console>:21
 
scala> rdd1.partitions.size
res82: Int = 15
 
scala> var rdd2 = rdd1.reduceByKey((x,y) => x + y)
rdd2: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[94] at reduceByKey at <console>:23
 
scala> rdd2.collect
res85: Array[(String, Int)] = Array((A,2), (B,3), (C,1))
 
scala> rdd2.partitions.size
res86: Int = 15
 
scala> var rdd2 = rdd1.reduceByKey(new org.apache.spark.HashPartitioner(2),(x,y) => x + y)
rdd2: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[95] at reduceByKey at <console>:23
 
scala> rdd2.collect
res87: Array[(String, Int)] = Array((B,3), (A,2), (C,1))
 
scala> rdd2.partitions.size
res88: Int = 2
 
The reduce function is not limited to numeric values; here string values for the same key are concatenated:

val textRDD = sc.parallelize(List(("A","aa"), ("B","bb"), ("C","cc"), ("C","cc"), ("D","dd"), ("D","dd")))
val reducedRDD = textRDD.reduceByKey((value1, value2) => value1 + ";" + value2)
reducedRDD.collect.foreach(println)
// Output:
(D,dd;dd)
(A,aa)
(B,bb)
(C,cc;cc)
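
The two-argument overload (form 2 above) does not appear in the sessions; here is a minimal sketch, again assuming the same sc. Note that unlike groupByKey, reduceByKey combines values within each partition before the shuffle, so it typically moves less data across the network:

// Sketch of reduceByKey with an explicit partition count (overload 2).
val pairs = sc.makeRDD(Array(("A",0),("A",2),("B",1),("B",2),("C",1)))
val summed = pairs.reduceByKey((x, y) => x + y, 3)
summed.partitions.size   // 3
summed.collect           // Array((A,2), (B,3), (C,1)) -- element order may vary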

3. reduceByKeyLocally: def reduceByKeyLocally(func: (V, V) => V): Map[K, V]

This function aggregates the V values for each key K in an RDD[K,V] using the given function, but the results come back in a local Map[K,V] rather than an RDD[K,V]. Because the result is returned to the driver, it runs a job immediately and behaves like an action rather than a transformation.

scala> var rdd1 = sc.makeRDD(Array(("A",0),("A",2),("B",1),("B",2),("C",1)))
rdd1: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[91] at makeRDD at <console>:21
 
scala> rdd1.reduceByKeyLocally((x,y) => x + y)
res90: scala.collection.Map[String,Int] = Map(B -> 3, A -> 2, C -> 1)
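
Because the result is an ordinary driver-side Map, the returned collection can be used directly without another Spark job; a minimal sketch, assuming the same sc (variable names are illustrative):

// Sketch: reduceByKeyLocally returns a plain scala.collection.Map on the driver.
val pairs = sc.makeRDD(Array(("A",0),("A",2),("B",1),("B",2),("C",1)))
val totals = pairs.reduceByKeyLocally((x, y) => x + y)
totals.get("B")                                  // Some(3)
totals.foreach { case (k, v) => println(s"$k -> $v") }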

 
