Computing averages with the combineByKey operator: worked examples

Averaging algorithms for different scenarios


Averages, part 1:

val input = sc.parallelize(Seq(("t1", 1), ("t1", 2), ("t1", 3), ("t2", 2), ("t2", 5)))
val result = input.combineByKey(
  (v) => (v, 1),                                                                   // createCombiner: the first value of a key becomes (sum, count) = (v, 1)
  (acc: (Int, Int), v) => (acc._1 + v, acc._2 + 1),                                // mergeValue: add the value to the running sum, bump the count
  (acc1: (Int, Int), acc2: (Int, Int)) => (acc1._1 + acc2._1, acc1._2 + acc2._2)   // mergeCombiners: add sums and counts across partitions
).map { case (key, value) => (key, value._1 / value._2.toFloat) }                  // sum / count per key
result.collectAsMap().foreach(println(_))


----------------- Test run output: --------------------
scala> val input = sc.parallelize(Seq(("t1", 1), ("t1", 2), ("t1", 3), ("t2", 2), ("t2", 5)))
input: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[17] at parallelize at <console>:25


scala> val result = input.combineByKey( 
     | (v) => (v, 1), 
     | (acc: (Int, Int), v) => (acc._1 + v, acc._2 + 1),
     | (acc1: (Int, Int), acc2: (Int, Int)) => (acc1._1 + acc2._1, acc1._2 + acc2._2) 
     | ).map{ case (key, value) => (key, value._1 / value._2.toFloat) }
result: org.apache.spark.rdd.RDD[(String, Float)] = MapPartitionsRDD[19] at map at <console>:31


scala> result.collectAsMap().foreach(println(_)) 
(t1,2.0)
(t2,3.5)



Averages, part 2 (each value is an (Int, Int) pair; per key, divide the sum of the first components by the sum of the second components):

val testData = sc.parallelize(Seq(("t1", (1,2)), ("t1", (2,4)), ("t1", (3,5)), ("t2", (2,1)), ("t2", (5,2))))


val result = testData.combineByKey(
  (v) => (v._1, v._2),                                                             // createCombiner: start from the first pair as-is
  (acc: (Int, Int), v) => (acc._1 + v._1, acc._2 + v._2),                          // mergeValue: add both components
  (acc1: (Int, Int), acc2: (Int, Int)) => (acc1._1 + acc2._1, acc1._2 + acc2._2)   // mergeCombiners: add both components across partitions
).map { case (key, value) => (key, value._1 / value._2.toFloat) }                  // toFloat avoids integer division truncating the result
result.collectAsMap().foreach(println(_))
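Since the value type here already matches the accumulator type (both are (Int, Int)), the same result can also be obtained with reduceByKey; a minimal sketch against the testData defined above (the name viaReduce is illustrative):

val viaReduce = testData
  .reduceByKey((a, b) => (a._1 + b._1, a._2 + b._2))    // (sum of firsts, sum of seconds) per key
  .mapValues { case (num, den) => num / den.toFloat }   // Float division, as in part 1
viaReduce.collectAsMap().foreach(println(_))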




Verifying the averaging algorithm; the snippets below are for verification only and should not be used directly in a production environment.


val testData = sc.parallelize(Seq(("t1", "1#2"), ("t1", "2#4"), ("t1", "3#5"), ("t2", "2#1"), ("t2", "5#2")))   // "#"-separated string values, to match the String-parsing combiner below

val tt = testData.combineByKey(
  (_: String) => (0, 0),                                 // pitfall: ignores the first value seen for each key in a partition, so sums and counts come out too low
  (pair: (Int, Int), value: String) =>
    (pair._1 + Integer.parseInt(value.split("#")(0)), pair._2 + Integer.parseInt(value.split("#")(1))),
  (pair1: (Int, Int), pair2: (Int, Int)) =>
    (pair1._1 + pair2._1, pair1._2 + pair2._2)           // both components must combine pair1 and pair2, not pair2 twice
)
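For comparison, a corrected sketch of the same string-parsing average that keeps the first value per key (assumes the "#"-encoded testData above; the names sums and avgs are illustrative):

val sums = testData.combineByKey(
  (value: String) => {                                   // createCombiner: parse the first value instead of discarding it
    val parts = value.split("#")
    (Integer.parseInt(parts(0)), Integer.parseInt(parts(1)))
  },
  (pair: (Int, Int), value: String) => {
    val parts = value.split("#")
    (pair._1 + Integer.parseInt(parts(0)), pair._2 + Integer.parseInt(parts(1)))
  },
  (pair1: (Int, Int), pair2: (Int, Int)) =>
    (pair1._1 + pair2._1, pair1._2 + pair2._2)
)
val avgs = sums.mapValues { case (num, den) => if (den == 0) 0D else num.toDouble / den }
avgs.collectAsMap().foreach(println(_))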




The same sum/count approach, written for an RDD[(String, Int)] input (this is the form discussed in the addendum below):

import org.apache.spark.rdd.RDD

val sumCountPairs: RDD[(String, (Int, Long))] = testData.combineByKey(
  (_: Int) => (0, 0L),                          // note: starting from (0, 0L) drops the first value seen for each key in a partition; see the corrected sketch below
  (pair: (Int, Long), value: Int) =>
    (pair._1 + value, pair._2 + 1L),
  (pair1: (Int, Long), pair2: (Int, Long)) =>
    (pair1._1 + pair2._1, pair1._2 + pair2._2)
)

val averages: RDD[(String, Double)] = sumCountPairs.mapValues {
  case (sum, 0L) => 0D                          // guards against a zero count, which the (0, 0L) initializer above makes possible
  case (sum, count) => sum.toDouble / count
}
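A minimal end-to-end sketch of this variant with an assumed Int-valued input (the name intData is illustrative) and a createCombiner that keeps the first value:

val intData = sc.parallelize(Seq(("t1", 1), ("t1", 2), ("t1", 3), ("t2", 2), ("t2", 5)))   // RDD[(String, Int)]
val sums = intData.combineByKey(
  (value: Int) => (value, 1L),                                                            // keep the first value and count it
  (pair: (Int, Long), value: Int) => (pair._1 + value, pair._2 + 1L),
  (pair1: (Int, Long), pair2: (Int, Long)) => (pair1._1 + pair2._1, pair1._2 + pair2._2)
)
val avgs = sums.mapValues { case (sum, count) => sum.toDouble / count }
avgs.collectAsMap().foreach(println(_))                                                   // (t1,2.0) and (t2,3.5), as in part 1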



Addendum: notes from a related Q&A on reduceByKey vs. combineByKey


In the asker's sample code, the testData RDD already has the type RDD[(String, Int)]. Through the implicit conversion RDD.rddToPairRDDFunctions it becomes a PairRDDFunctions[String, Int], which is what provides the reduceByKey and combineByKey methods. Compare their signatures:

class PairRDDFunctions[K, V](...) {
  def reduceByKey(func: (V, V) => V): RDD[(K, V)]

  def combineByKey[C](
      createCombiner: V => C,
      mergeValue: (C, V) => C,
      mergeCombiners: (C, C) => C): RDD[(K, C)]
}
Notice that the type of reduceByKey's func parameter depends only on the PairRDDFunctions type parameter V, which in this example is Int. So func is already known to be (Int, Int) => Int and no extra type annotations are needed. combineByKey is more general than reduceByKey: it lets each partition do a local reduce before the shuffle, producing an intermediate value of some type C, and then merges those per-key C values after the shuffle. Taking the mean as an example, each partition can first compute, for every key it sees, the sum of the integers and their count, returning the pair (sum, count); after the shuffle the sums and counts for each key are added up and divided to get the mean:

val sumCountPairs: RDD[(String, (Int, Long))] = testData.combineByKey(
  (_: Int) => (0, 0L),

  (pair: (Int, Long), value: Int) =>
    (pair._1 + value, pair._2 + 1L),

  (pair1: (Int, Long), pair2: (Int, Long)) =>
    (pair1._1 + pair2._1, pair1._2 + pair2._2)
)

val averages: RDD[(String, Double)] = sumCountPairs.mapValues {
  case (sum, 0L) => 0D
  case (sum, count) => sum.toDouble / count
}
Because the type parameter C is arbitrary and cannot be inferred directly from testData's type, it has to be specified explicitly. The asker's example just happens to be the simplest case, the one reduceByKey can already handle, where V and C are the same type, so the difference is not visible there.
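To make that concrete: if the (value, 1L) pairing is moved into a mapValues step, the value type and the accumulator type coincide and reduceByKey alone is enough. A minimal sketch, assuming testData: RDD[(String, Int)] as in the quoted discussion (the name averagesViaReduce is illustrative):

val averagesViaReduce = testData
  .mapValues(v => (v, 1L))                              // turn each value into a (sum, count) pair up front
  .reduceByKey((a, b) => (a._1 + b._1, a._2 + b._2))    // now V == C == (Int, Long), so no type annotations are needed
  .mapValues { case (sum, count) => sum.toDouble / count }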




val listRDD = sc.parallelize(List(1, 2, 3, 4, 4, 5)).map(x => (x, 1))   // keys 1..5 (4 appears twice), every value is 1

The signature of combineByKey on this RDD (K = Int, V = Int):
def combineByKey[C](createCombiner: Int => C, mergeValue: (C, Int) => C, mergeCombiners: (C, C) => C): org.apache.spark.rdd.RDD[(Int, C)]

val sumandcnt = listRDD.combineByKey(
  (v: Int) => (v, 1),                                                   // keep the first value and count it
  (pair: (Int, Int), value: Int) => (pair._1 + value, pair._2 + 1),
  (pair1: (Int, Int), pair2: (Int, Int)) => (pair1._1 + pair2._1, pair1._2 + pair2._2)
)
val ll = sumandcnt.mapValues {
  case (sum, 0) => 0D
  case (sum, count) => sum.toDouble / count                             // every value is 1, so each key's average is 1.0
}


val rdd = List(1, 2, 3, 4)
val input = sc.parallelize(rdd)
val result = input.aggregate((0, 0))(
  (acc, value) => (acc._1 + value, acc._2 + 1),            // within a partition: add the value, bump the count
  (acc1, acc2) => (acc1._1 + acc2._1, acc1._2 + acc2._2)   // across partitions: add sums and counts
)




result: (Int, Int) = (10,4)
val avg = result._1 / result._2.toDouble
avg: Double = 2.5




The program works roughly as follows:
First, define the initial value (0, 0), i.e. the zero of the result type we expect.
In (acc, value) => (acc._1 + value, acc._2 + 1), value is the element type T in aggregate's signature, here an element of the List. So acc._1 + value, acc._2 + 1 unfolds as:
0+1, 0+1
1+2, 1+1
3+3, 2+1
6+4, 3+1
The result is (10, 4). In an actual Spark run the computation is distributed, so the List may be split into several partitions, say three: p1(1,2), p2(3), p3(4). The per-partition results are (3,2), (3,1) and (4,1), and applying (acc1, acc2) => (acc1._1 + acc2._1, acc1._2 + acc2._2) gives (3+3+4, 2+1+1) = (10, 4); the average is then computed from that, as in the helper sketched below.
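The same computation can be packaged as a small helper on top of aggregate; a minimal sketch (the name average and the empty-RDD guard are illustrative choices):

def average(rdd: org.apache.spark.rdd.RDD[Int]): Double = {
  val (sum, count) = rdd.aggregate((0L, 0L))(
    (acc, value) => (acc._1 + value, acc._2 + 1L),            // per partition: running sum and count
    (acc1, acc2) => (acc1._1 + acc2._1, acc1._2 + acc2._2)    // across partitions: add sums and counts
  )
  if (count == 0L) 0.0 else sum.toDouble / count              // guard against an empty RDD
}

// average(sc.parallelize(List(1, 2, 3, 4)))  // 2.5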



