Spark RDD Operators: Key-Value Aggregation -- combineByKey

combineByKey

An aggregation operation applied to distributed key-value datasets.

Syntax

def combineByKey[C](createCombiner: (V) => C,
                    mergeValue: (C, V) => C,
                    mergeCombiners: (C, C) => C): RDD[(K, C)]

1. createCombiner

  • combineByKey() walks through every element in a partition; each element's key is either one it has not seen before, or the same as an earlier element's key
  • If the key is new, combineByKey() calls a function named createCombiner() to create the initial value of the accumulator for that key

2. mergeValue

  • If the key has already been seen while processing the current partition, combineByKey() uses mergeValue() to merge the key's current accumulator value with the new value

3. mergeCombiners

  • Since each partition is processed independently, the same key can end up with multiple accumulators
  • If two or more partitions have an accumulator for the same key, the user-supplied mergeCombiners() merges the per-partition results into one (a minimal sketch of all three functions follows below)
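
A minimal sketch of the three functions working together, assuming an existing SparkContext named sc: collect all values for each key into a List.

    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)), 2)
    val grouped = pairs.combineByKey(
      (v: Int) => List(v),                     // createCombiner: first value for a key in a partition
      (acc: List[Int], v: Int) => v :: acc,    // mergeValue: fold another value into the partition-local list
      (a: List[Int], b: List[Int]) => a ::: b  // mergeCombiners: merge lists built in different partitions
    )
    grouped.collect.foreach(println)           // e.g. (a,List(1, 3)), (b,List(2)); list order may vary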

Example

Scala version

Compute each student's average score

Define a case class for student scores

		case class ScoreDetail(studentName: String, subject: String, score: Float)

Test data

    val scores = List(
      ScoreDetail("XM", "Math", 98),
      ScoreDetail("XM", "English", 88),
      ScoreDetail("WW", "Math", 75),
      ScoreDetail("WW", "English", 78),
      ScoreDetail("LH", "Math", 90),
      ScoreDetail("LH", "English", 80),
      ScoreDetail("ZS", "Math", 91),
      ScoreDetail("ZS", "English", 84),
      ScoreDetail("LS", "English", 87),
      ScoreDetail("LS", "Math", 92),
      ScoreDetail("LS", "English", 83))

Pair each record with its key

This turns the list into (studentName, ScoreDetail) tuples using a for/yield comprehension; an equivalent map form is shown just after.

		val scoresWithKey = for { i <- scores } yield (i.studentName, i)
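
A for/yield over a single generator is just sugar for map, so the same pairing can be written directly as:

    val scoresWithKey = scores.map(i => (i.studentName, i))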

Create the RDD and specify the partitioner

    val scoresWithKeyRDD = sc.parallelize(scoresWithKey).partitionBy(new HashPartitioner(3)).cache
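
To see which of the 3 partitions HashPartitioner assigns each student to, you can probe the partitioner directly; a small sketch using the same keys as above:

    val p = new HashPartitioner(3)
    Seq("XM", "WW", "LH", "ZS", "LS").foreach { name =>
      println(s"$name -> partition ${p.getPartition(name)}")
    }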

Print each partition's size and its contents

    println("--------------------各个分区长度-------------------------------")
    scoresWithKeyRDD.foreachPartition(partition => println(partition.length))
    println("--------------------各个分区的数据------------------------------")
    scoresWithKeyRDD.foreachPartition(
      partition => partition.foreach(
        item => println(item._2)))
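
Note that foreachPartition runs println on the executors, so on a real cluster the output lands in executor logs and in no guaranteed order. A sketch of an alternative that tags each record with its partition index and collects everything back to the driver:

    scoresWithKeyRDD
      .mapPartitionsWithIndex((idx, iter) => iter.map(kv => (idx, kv._2)))
      .collect
      .foreach(println)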

The output shows the size of each of the three partitions, followed by the records in each partition.

Aggregate to compute the averages

println("--------------------平均值------------------------------")
    val avgScoresRDD = scoresWithKeyRDD.combineByKey(
      (x: ScoreDetail) => (x.score, 1) /*createCombiner*/,
      (acc: (Float, Int), x: ScoreDetail) => (acc._1 + x.score, acc._2 + 1) /*mergeValue*/,
      (acc1: (Float, Int), acc2: (Float, Int)) => (acc1._1 + acc2._1, acc1._2 + acc2._2) /*mergeCombiners*/
    ).map( { case(key, value) => (key, value._1/value._2) })/*map*/
    avgScoresRDD.collect.foreach(println)

1. createCombiner: creates the accumulator, turning each first-seen value into (score, 1), so records take the form (name, (score, 1))
2. mergeValue: within a partition, adds each further score for the same name into the running (sum, count); for example, LS's three scores 87, 92 and 83 end up combined as (LS, (262, 3))
3. mergeCombiners: the same summation, but across partitions: accumulators for the same key built by different partitions (possibly on different worker nodes) are merged
4. map: finally divides each sum by its count to get the average

The run prints each student's average; with the data above, the expected output (order may vary) is:

    (WW,76.5)
    (XM,93.0)
    (LH,85.0)
    (ZS,87.5)
    (LS,87.333336)
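
For comparison, a sketch of the same averages computed with mapValues plus reduceByKey (which Spark itself implements on top of combineByKey); it should produce the same result:

    val avg2 = scoresWithKeyRDD
      .mapValues(sd => (sd.score, 1))
      .reduceByKey((a, b) => (a._1 + b._1, a._2 + b._2))
      .mapValues { case (sum, count) => sum / count }
    avg2.collect.foreach(println)
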
Java version

1. The ScoreDetail class

import java.io.Serializable;

public class ScoreDetail implements Serializable {
    // Scala equivalent: case class ScoreDetail(studentName: String, subject: String, score: Float)
    public String studentName;
    public String subject;
    public float score;

    public ScoreDetail(String studentName, String subject, float score) {
        this.studentName = studentName;
        this.subject = subject;
        this.score = score;
    }
}

2. The combineByKey test class

import java.util.ArrayList;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public class CombineTest {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(sparkConf);
        ArrayList<ScoreDetail> scoreDetails = new ArrayList<>();
        scoreDetails.add(new ScoreDetail("xiaoming", "Math", 98));
        scoreDetails.add(new ScoreDetail("xiaoming", "English", 88));
        scoreDetails.add(new ScoreDetail("wangwu", "Math", 75));
        scoreDetails.add(new ScoreDetail("wangwu", "English", 78));
        scoreDetails.add(new ScoreDetail("lihua", "Math", 90));
        scoreDetails.add(new ScoreDetail("lihua", "English", 80));
        scoreDetails.add(new ScoreDetail("zhangsan", "Math", 91));
        scoreDetails.add(new ScoreDetail("zhangsan", "English", 80));

        JavaRDD<ScoreDetail> scoreDetailsRDD = sc.parallelize(scoreDetails);

        // Key each record by student name
        JavaPairRDD<String, ScoreDetail> pairRDD = scoreDetailsRDD.mapToPair(new PairFunction<ScoreDetail, String, ScoreDetail>() {
            @Override
            public Tuple2<String, ScoreDetail> call(ScoreDetail scoreDetail) throws Exception {
                return new Tuple2<>(scoreDetail.studentName, scoreDetail);
            }
        });

        // createCombiner: the first value seen for a key becomes (score, 1)
        Function<ScoreDetail, Tuple2<Float, Integer>> createCombine = new Function<ScoreDetail, Tuple2<Float, Integer>>() {
            @Override
            public Tuple2<Float, Integer> call(ScoreDetail scoreDetail) throws Exception {
                return new Tuple2<>(scoreDetail.score, 1);
            }
        };

        // mergeValue: Function2 takes two arguments and returns one value;
        // here it folds another score into the running (sum, count)
        Function2<Tuple2<Float, Integer>, ScoreDetail, Tuple2<Float, Integer>> mergeValue = new Function2<Tuple2<Float, Integer>, ScoreDetail, Tuple2<Float, Integer>>() {
            @Override
            public Tuple2<Float, Integer> call(Tuple2<Float, Integer> tp, ScoreDetail scoreDetail) throws Exception {
                return new Tuple2<>(tp._1 + scoreDetail.score, tp._2 + 1);
            }
        };

        // mergeCombiners: merge (sum, count) accumulators from different partitions
        Function2<Tuple2<Float, Integer>, Tuple2<Float, Integer>, Tuple2<Float, Integer>> mergeCombiners = new Function2<Tuple2<Float, Integer>, Tuple2<Float, Integer>, Tuple2<Float, Integer>>() {
            @Override
            public Tuple2<Float, Integer> call(Tuple2<Float, Integer> tp1, Tuple2<Float, Integer> tp2) throws Exception {
                return new Tuple2<>(tp1._1 + tp2._1, tp1._2 + tp2._2);
            }
        };

        JavaPairRDD<String, Tuple2<Float, Integer>> combineByRDD = pairRDD.combineByKey(createCombine, mergeValue, mergeCombiners);

        // Print the averages
        Map<String, Tuple2<Float, Integer>> stringTuple2Map = combineByRDD.collectAsMap();
        for (String name : stringTuple2Map.keySet()) {
            System.out.println(name + " " + stringTuple2Map.get(name)._1 / stringTuple2Map.get(name)._2);
        }

        sc.close();
    }
}