Author: Syn良子. Source: http://www.cnblogs.com/cssdongl. Please credit the source when reprinting.
Spark makes it quick to compute per-group averages, and the code is concise. Without further ado, here it is:
import org.apache.spark.{SparkConf, SparkContext}

object ColumnValueAvg extends App {
  /**
   * ID,Name,ADDRESS,AGE
   * 001,zhangsan,chaoyang,20
   * 002,zhangsa,chaoyang,27
   * 003,zhangjie,chaoyang,35
   * 004,lisi,haidian,24
   * 005,lier,haidian,40
   * 006,wangwu,chaoyang,90
   * 007,wangchao,haidian,80
   */
  val conf = new SparkConf().setAppName("test column value sum and avg").setMaster("local[1]")
  val sc = new SparkContext(conf)
  val textRdd = sc.textFile(args(0))

  // Be careful: the toInt here is necessary; without the cast, the age strings
  // would be concatenated instead of summed
  val addressAgeMap = textRdd.map(x => (x.split(",")(2), x.split(",")(3).toInt))

  addressAgeMap.reduceByKey(_ + _).collect().foreach(println)

  addressAgeMap.combineByKey(
    (v) => (v, 1),
    (accu: (Int, Int), v) => (accu._1 + v, accu._2 + 1),
    (accu1: (Int, Int), accu2: (Int, Int)) => (accu1._1 + accu2._1, accu1._2 + accu2._2)
  ).mapValues(x => x._1.toDouble / x._2) // convert before dividing to avoid integer division
    .collect().foreach(println)

  println("Sum and Avg calculated successfully")
  sc.stop()
}
After reading the data with textFile, we group by address and compute the average age. Here combineByKey does the work; it is a function with a high level of abstraction, so let me briefly summarize my understanding of it.
Looking at the source code, combineByKey is defined as follows:
def combineByKey[C](
    createCombiner: V => C,
    mergeValue: (C, V) => C,
    mergeCombiners: (C, C) => C): RDD[(K, C)] = {
  combineByKey(createCombiner, mergeValue, mergeCombiners, defaultPartitioner(self))
}
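To see how the signature fits together, here is a plain-Scala sketch of the same shape (the helper `combineByKeyLocal` and its two-phase structure are my own illustration, not Spark's actual implementation): each simulated "partition" is combined independently with createCombiner and mergeValue, and the partial per-partition results are then reconciled with mergeCombiners, just as Spark does across tasks.

```scala
object CombineByKeyShape {
  // Hypothetical local analogue of combineByKey: same three functions,
  // same type parameters K, V, C, but over in-memory Seqs instead of an RDD.
  def combineByKeyLocal[K, V, C](
      partitions: Seq[Seq[(K, V)]],
      createCombiner: V => C,
      mergeValue: (C, V) => C,
      mergeCombiners: (C, C) => C): Map[K, C] = {
    // Phase 1: combine within each partition; the first value for a key goes
    // through createCombiner, later values through mergeValue
    val partials = partitions.map { part =>
      part.foldLeft(Map.empty[K, C]) { case (acc, (k, v)) =>
        acc.updated(k, acc.get(k).map(mergeValue(_, v)).getOrElse(createCombiner(v)))
      }
    }
    // Phase 2: merge the per-partition partial results with mergeCombiners
    partials.foldLeft(Map.empty[K, C]) { (acc, m) =>
      m.foldLeft(acc) { case (a, (k, c)) =>
        a.updated(k, a.get(k).map(mergeCombiners(_, c)).getOrElse(c))
      }
    }
  }

  def main(args: Array[String]): Unit = {
    // Note C (List[Int]) is deliberately a different type from V (Int)
    val grouped = combineByKeyLocal[String, Int, List[Int]](
      Seq(Seq("a" -> 1, "b" -> 2), Seq("a" -> 3)),
      v => List(v),
      (c, v) => c :+ v,
      (c1, c2) => c1 ::: c2)
    println(grouped) // Map(a -> List(1, 3), b -> List(2))
  }
}
```

The point of the type parameter C is exactly this: the combiner can be a richer structure than the input value, which is what makes the (sum, count) trick for averages possible.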
combineByKey takes three functions as parameters: createCombiner, mergeValue, and mergeCombiners. Understanding what each of these three does is the key.
In terms of our data: combineByKey combines elements by key by default, and all three parameter functions operate on the values.
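Running the blog's three value-functions through a plain foldLeft shows the per-key (sum, count) state they build up (the object and helper names here are mine; this simulates a single partition only, with mergeCombiners reconciling such maps across partitions):

```scala
object AvgCombinerTrace {
  // Same (address, age) pairs as the sample data in the code above
  val sample: Seq[(String, Int)] = Seq(
    ("chaoyang", 20), ("chaoyang", 27), ("chaoyang", 35),
    ("haidian", 24), ("haidian", 40), ("chaoyang", 90), ("haidian", 80))

  def averages(pairs: Seq[(String, Int)]): Map[String, Double] = {
    val createCombiner: Int => (Int, Int) = v => (v, 1)
    val mergeValue: ((Int, Int), Int) => (Int, Int) =
      (accu, v) => (accu._1 + v, accu._2 + 1)
    // First occurrence of a key -> createCombiner; later ones -> mergeValue
    val combined = pairs.foldLeft(Map.empty[String, (Int, Int)]) {
      case (acc, (k, v)) =>
        acc.updated(k, acc.get(k).map(mergeValue(_, v)).getOrElse(createCombiner(v)))
    }
    // Final step mirrors mapValues: sum / count per key
    combined.map { case (k, (sum, cnt)) => (k, sum.toDouble / cnt) }
  }

  def main(args: Array[String]): Unit =
    println(averages(sample)) // chaoyang: (172, 4) -> 43.0; haidian: (144, 3) -> 48.0
}
```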
1> The first parameter, createCombiner, defined in the code as: (v) => (v, 1)
This creates a combiner: while traversing a partition of the RDD, the first time a key is encountered, a (v, 1) combiner is generated for it. For example, the key here is address, and when the first