Spark RDD Operators (5): Key-Value Aggregation with combineByKey

combineByKey

Aggregating data is straightforward when it all sits in one place; once the data is distributed, things get considerably more involved. This post introduces combineByKey, the ancestor of Spark's various per-key aggregation operations.

Brief introduction

def combineByKey[C](createCombiner: (V) => C,
                    mergeValue: (C, V) => C,
                    mergeCombiners: (C, C) => C): RDD[(K, C)]
  • createCombiner: combineByKey() traverses all the elements in a partition, so each element's key either has not been seen yet or matches the key of some earlier element. If the key is new, combineByKey() calls the function named createCombiner() to create the initial value of the accumulator for that key
  • mergeValue: if the key has already been seen while processing the current partition, mergeValue() is used to merge that key's current accumulator value with the new value
  • mergeCombiners: since each partition is processed independently, the same key can end up with several accumulators. If two or more partitions hold an accumulator for the same key, the user-supplied mergeCombiners() merges the per-partition results. A minimal sketch of the three functions working together follows this list
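As a minimal sketch of how the three functions cooperate (hypothetical data; it assumes an existing SparkContext named sc): a per-key average over plain (String, Int) pairs, where the combiner type C is a (sum, count) tuple.

val pairs = sc.parallelize(Seq(("a", 1), ("a", 3), ("b", 5)))

val sumCount = pairs.combineByKey(
  (v: Int) => (v, 1),                                          // createCombiner: first value seen for a key in a partition
  (acc: (Int, Int), v: Int) => (acc._1 + v, acc._2 + 1),       // mergeValue: fold another value into that key's accumulator
  (a: (Int, Int), b: (Int, Int)) => (a._1 + b._1, a._2 + b._2) // mergeCombiners: merge accumulators across partitions
)

sumCount.mapValues { case (sum, count) => sum.toDouble / count }
  .collect()
  .foreach(println) // prints (a,2.0) and (b,5.0), in some order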

Scala version

  • Computing each student's average score
package nj.zb.sparkstu

import org.apache.spark.rdd.RDD
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

object CombineByKeyScala {
  // A case class describing one score record for a student
  case class ScoreDetail(studentName:String,subject:String,score:Int)

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("combineByKey")
    val sc = new SparkContext(conf)
    // The collection of student scores
    val scores = List(
      ScoreDetail("zhangsan", "Math", 99),
      ScoreDetail("zhangsan", "English", 97),
      ScoreDetail("lisi", "Math", 91),
      ScoreDetail("lisi", "English", 89),
      ScoreDetail("wangwu", "Math", 91),
      ScoreDetail("wangwu", "English", 94),
      ScoreDetail("zhaoliu", "Math", 83),
      ScoreDetail("zhaoliu", "English", 90),
      ScoreDetail("laowang", "Math", 83),
      ScoreDetail("laowang", "English", 90),
      ScoreDetail("laozhang", "Math", 83),
      ScoreDetail("laozhang", "English", 90))

    // Turn the list into (name, record) pairs with a for/yield comprehension (conceptually a key -> value map)
    val scoreWithKey:List[(String,ScoreDetail)] = for(i<-scores) yield (i.studentName,i)

    // Create the RDD and hash-partition it by key into three partitions
    val scoreWithKeyRDD = sc.parallelize(scoreWithKey).partitionBy(new HashPartitioner(3)).cache()
    
    // Print each partition's size; the contents are printed further below
    scoreWithKeyRDD.foreachPartition(partition => println(partition.length))
    println("----------------------------------------------")

    scoreWithKeyRDD.collect.foreach(println)

    println("----------------------------------------------")
    scoreWithKeyRDD.foreachPartition(partContent=>{
      partContent.foreach(x=>println(x._1,x._2.studentName,x._2.subject,x._2.score))
    })
    println("-------------------------------------------")
    // Aggregate each student's scores into a (total, count) pair, then compute and print the average
    val stuScoreInforRdd:RDD[(String,(Int,Int))] = scoreWithKeyRDD.combineByKey(
      (x: ScoreDetail) => (x.score, 1),
      (acc1: (Int, Int), x: ScoreDetail) => (acc1._1 + x.score, acc1._2 + 1),
      (acc2: (Int, Int), acc3: (Int, Int)) => (acc2._1 + acc3._1, acc2._2 + acc3._2)
    )
    // Two equivalent ways to compute the average (integer division truncates the result)
    val stuAvg: RDD[(String, Int)] = stuScoreInforRdd.map { case (key, value) => (key, value._1 / value._2) }
    val stuAvg2: RDD[(String, Int)] = stuScoreInforRdd.map(x => (x._1, x._2._1 / x._2._2))

    stuAvg.collect.foreach(println)
  }
}

Result output: (screenshot omitted)
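Note that value._1 / value._2 in the code above is integer division, so the printed averages are truncated: wangwu's accumulator (185, 2) prints as 92 rather than 92.5. A small variant, sketched against the stuScoreInforRdd defined above, keeps the fractional part:

// Convert the sum to Double before dividing so the average is not truncated
val stuAvgDouble: RDD[(String, Double)] =
  stuScoreInforRdd.mapValues { case (sum, count) => sum.toDouble / count }

stuAvgDouble.collect.foreach(println) // e.g. (wangwu,92.5)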

An explanation of scoreWithKeyRDD.combineByKey

  • createCombiner: (x: ScoreDetail) => (x.score, 1)

     The first time zhangsan is encountered, this function converts the map's
     value into another type: (zhangsan, ScoreDetail) becomes (zhangsan, (99, 1)).

  • mergeValue: (acc1: (Int, Int), x: ScoreDetail) => (acc1._1 + x.score, acc1._2 + 1)

     When zhangsan is encountered again in the same partition, the two are merged:
     the accumulator (zhangsan, (99, 1)) and the new (zhangsan, ScoreDetail) record
     combine into (zhangsan, (196, 2)).

  • mergeCombiners: (acc2: (Int, Int), acc3: (Int, Int)) => (acc2._1 + acc3._1, acc2._2 + acc3._2)

     This merges zhangsan's accumulators from different partitions. In this example
     all of zhangsan's records land in the same partition, so this function is never
     invoked for him.
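To check which keys actually share a partition (and therefore whether mergeCombiners ever runs), one way, sketched here against the scoreWithKeyRDD above, is mapPartitionsWithIndex:

// Tag every record with its partition index to inspect key placement
scoreWithKeyRDD
  .mapPartitionsWithIndex((idx, iter) => iter.map { case (name, detail) => (idx, name, detail.subject) })
  .collect()
  .foreach(println) // e.g. (0,zhangsan,Math), (0,zhangsan,English), ...

Because the RDD was partitioned with HashPartitioner(3) on the student name, all records for one student land in the same partition and Spark can do the whole aggregation map-side; without the partitionBy, a student's records could fall into different partitions, and mergeCombiners would run during the shuffle.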

Java version

  • Computing each student's average score

The ScoreDetailsJava class

package nj.zb.sparkstu;

import java.io.Serializable;

public class ScoreDetailsJava implements Serializable {
    public String stuName;
    public Integer score;
    public String  subject;

    public ScoreDetailsJava(String stuName, String subject, Integer score) {
        this.stuName = stuName;
        this.score = score;
        this.subject = subject;
    }
}

The CombineByKey test class

package nj.zb.sparkstu;

import org.apache.spark.SparkConf;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;


import scala.Tuple2;

import java.util.ArrayList;
import java.util.List;

public class CombineByKey {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("CombineByKey");
        JavaSparkContext sc = new JavaSparkContext(conf);
        List<ScoreDetailsJava> scoreDetails = new ArrayList<>();
        scoreDetails.add(new ScoreDetailsJava("zhangsan", "Math", 99));
        scoreDetails.add(new ScoreDetailsJava("zhangsan", "English", 97));
        scoreDetails.add(new ScoreDetailsJava("lisi", "Math", 91));
        scoreDetails.add(new ScoreDetailsJava("lisi", "English", 89));
        scoreDetails.add(new ScoreDetailsJava("wangwu", "Math", 91));
        scoreDetails.add(new ScoreDetailsJava("wangwu", "English", 94));
        scoreDetails.add(new ScoreDetailsJava("zhaoliu", "Math", 83));
        scoreDetails.add(new ScoreDetailsJava("zhaoliu", "English", 90));
        scoreDetails.add(new ScoreDetailsJava("laowang", "Math", 83));
        scoreDetails.add(new ScoreDetailsJava("laowang", "English", 90));
        scoreDetails.add(new ScoreDetailsJava("laozhang", "Math", 83));
        scoreDetails.add(new ScoreDetailsJava("laozhang", "English", 90));


        JavaRDD<ScoreDetailsJava> scoreDetailsJavaRDD = sc.parallelize(scoreDetails);

        JavaPairRDD<String, ScoreDetailsJava> pairRDD = scoreDetailsJavaRDD.mapToPair(new PairFunction<ScoreDetailsJava, String, ScoreDetailsJava>() {
            @Override
            public Tuple2<String, ScoreDetailsJava> call(ScoreDetailsJava scoreDetailsJava) throws Exception {
                return new Tuple2<>(scoreDetailsJava.stuName, scoreDetailsJava);
            }
        });

        //createCombiner
        Function<ScoreDetailsJava, Tuple2<Integer, Integer>> createCombiner = new Function<ScoreDetailsJava, Tuple2<Integer, Integer>>() {
            @Override
            public Tuple2<Integer, Integer> call(ScoreDetailsJava v1) throws Exception {
                return new Tuple2<>(v1.score,1);
            }
        };


        //mergeValue
        Function2<Tuple2<Integer, Integer>, ScoreDetailsJava, Tuple2<Integer, Integer>> mergeValue = new Function2<Tuple2<Integer, Integer>, ScoreDetailsJava, Tuple2<Integer, Integer>>() {
            @Override
            public Tuple2<Integer, Integer> call(Tuple2<Integer, Integer> v1, ScoreDetailsJava v2) throws Exception {
                return new Tuple2<>(v2.score + v1._1, v1._2 + 1);
            }
        };

        //mergeCombiners

        Function2<Tuple2<Integer, Integer>, Tuple2<Integer, Integer>, Tuple2<Integer, Integer>> mergeCombiners = new Function2<Tuple2<Integer, Integer>, Tuple2<Integer, Integer>, Tuple2<Integer, Integer>>() {
            @Override
            public Tuple2<Integer, Integer> call(Tuple2<Integer, Integer> v1, Tuple2<Integer, Integer> v2) throws Exception {
                return new Tuple2<>(v1._1 + v2._1, v1._2 + v2._2);
            }
        };

        JavaPairRDD<String, Tuple2<Integer, Integer>> stringTuple2JavaPairRDD = pairRDD.combineByKey(createCombiner, mergeValue, mergeCombiners);
        List<Tuple2<String, Tuple2<Integer, Integer>>> collect = stringTuple2JavaPairRDD.collect();

        for (Tuple2<String, Tuple2<Integer, Integer>> tp2 : collect) {
            System.out.println(tp2._1 + " " + tp2._2._1 / tp2._2._2);
        }
    }
}

Result output: (screenshot omitted)
