1. groupByKey
Basic syntax
def groupByKey(): RDD[(K, Iterable[V])]
def groupByKey(numPartitions: Int): RDD[(K, Iterable[V])]
def groupByKey(partitioner: Partitioner): RDD[(K, Iterable[V])]
groupByKey takes an RDD[(K, V)] and groups it by key, producing an RDD[(K, Iterable[V])] in which each key is paired with all of its values. It is similar to GROUP BY in SQL.
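The numPartitions and Partitioner overloads control which partition each key's group lands in. As a rough sketch (plain Java, not code from Spark itself; the class and method names below are our own, though Spark's default HashPartitioner behaves this way by taking a non-negative modulo of the key's hashCode):

```java
public class HashPartitionSketch {
    // Sketch of hash partitioning: map a key to a partition index by taking
    // a non-negative modulo of its hashCode. Spark's default HashPartitioner
    // behaves like this; this is an illustration, not Spark source code.
    static int partitionFor(Object key, int numPartitions) {
        int raw = key.hashCode() % numPartitions;
        return raw < 0 ? raw + numPartitions : raw; // fix up negative hash codes
    }

    public static void main(String[] args) {
        // Every key deterministically lands in one of numPartitions buckets.
        System.out.println(partitionFor("xiaoming", 4));
        System.out.println(partitionFor("lihua", 4));
    }
}
```

Because every record with the same key hashes to the same partition index, groupByKey can bring all values for a key together on one node.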
Scala version
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setMaster("local[*]").setAppName("GroupByKeyScala")
val sc = new SparkContext(conf)
val scoreDetail = sc.parallelize(List(("xiaoming",75),("xiaoming",90),("lihua",95),("lihua",100),("xiaofeng",85)))
scoreDetail.groupByKey().collect().foreach(println(_))
Java version
import java.util.Arrays;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

SparkConf conf = new SparkConf().setAppName("GroupByKeyJava").setMaster("local[*]");
JavaSparkContext sc = new JavaSparkContext(conf);
JavaRDD<Tuple2<String,Integer>> scoreDetails = sc.parallelize(Arrays.asList(new Tuple2<String,Integer>("xiaoming", 75)
, new Tuple2<String,Integer>("xiaoming", 90)
, new Tuple2<String,Integer>("lihua", 95)
, new Tuple2<String,Integer>("lihua", 188)));
// Convert the JavaRDD<Tuple2<String, Integer>> into a JavaPairRDD<String, Integer>
JavaPairRDD<String, Integer> scoreMapRDD = JavaPairRDD.fromJavaRDD(scoreDetails);
Map<String, Iterable<Integer>> resultMap = scoreMapRDD.groupByKey().collectAsMap();
for (String key : resultMap.keySet()) {
    System.out.println(key + ":" + resultMap.get(key));
}
Running this prints each key with an Iterable of all of its values.
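Spark aside, the grouping semantics are easy to reproduce with plain Java collections. This local sketch (class and method names are our own) mirrors what groupByKey does to the pairs above, without needing a Spark cluster:

```java
import java.util.*;
import java.util.stream.*;

public class GroupByKeyLocal {
    // Illustrates groupByKey's semantics locally: every value observed for a
    // key is collected into one list, keyed by that key.
    static Map<String, List<Integer>> groupByKey(List<Map.Entry<String, Integer>> pairs) {
        return pairs.stream().collect(Collectors.groupingBy(
                Map.Entry::getKey,
                Collectors.mapping(Map.Entry::getValue, Collectors.toList())));
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> scores = List.of(
                Map.entry("xiaoming", 75), Map.entry("xiaoming", 90),
                Map.entry("lihua", 95), Map.entry("lihua", 100));
        // e.g. {lihua=[95, 100], xiaoming=[75, 90]} (map key order may vary)
        System.out.println(groupByKey(scores));
    }
}
```

Spark does the same thing, except the pairs live in partitions across the cluster and a shuffle moves each key's values to one place first.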
2. cogroup
cogroup groups multiple RDDs that share the same key type.
RDD1.cogroup(RDD2)
groups RDD1 and RDD2 by key, producing an RDD[(K, (Iterable[V1], Iterable[V2]))]: for each key, one Iterable of values from each input RDD. cogroup can also group more than two RDDs at once; pass the additional RDDs as comma-separated arguments.
Scala version
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setMaster("local[*]").setAppName("CogroupScala")
val sc = new SparkContext(conf)
val scoreDetail1 = sc.parallelize(List(("xiaoming",95),("xiaoming",90),("lihua",95),("lihua",98),("xiaofeng",97)))
val scoreDetail2 = sc.parallelize(List(("xiaoming",65),("lihua",63),("lihua",62),("xiaofeng",67)))
val scoreDetail3 = sc.parallelize(List(("xiaoming",25),("xiaoming",15),("lihua",35),("lihua",28),("xiaofeng",36)))
val cogrouped = scoreDetail1.cogroup(scoreDetail2, scoreDetail3)
cogrouped.collect.foreach(println)
Running this prints, for each key, a tuple of three Iterables, one per input RDD.
Java version
import java.util.Arrays;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;
import scala.Tuple3;

SparkConf conf = new SparkConf().setAppName("CogroupJava").setMaster("local[*]");
JavaSparkContext sc = new JavaSparkContext(conf);
JavaRDD<Tuple2<String, Integer>> scoreDetails1 = sc.parallelize(Arrays.asList(
        new Tuple2<String, Integer>("xiaoming", 75),
        new Tuple2<String, Integer>("xiaoming", 90),
        new Tuple2<String, Integer>("lihua", 95),
        new Tuple2<String, Integer>("lihua", 96)));
JavaRDD<Tuple2<String, Integer>> scoreDetails2 = sc.parallelize(Arrays.asList(
        new Tuple2<String, Integer>("xiaoming", 75),
        new Tuple2<String, Integer>("lihua", 60),
        new Tuple2<String, Integer>("lihua", 62)));
JavaRDD<Tuple2<String, Integer>> scoreDetails3 = sc.parallelize(Arrays.asList(
        new Tuple2<String, Integer>("xiaoming", 75),
        new Tuple2<String, Integer>("xiaoming", 45),
        new Tuple2<String, Integer>("lihua", 24),
        new Tuple2<String, Integer>("lihua", 57)));
JavaPairRDD<String, Integer> scoreMapRDD1 = JavaPairRDD.fromJavaRDD(scoreDetails1);
JavaPairRDD<String, Integer> scoreMapRDD2 = JavaPairRDD.fromJavaRDD(scoreDetails2);
JavaPairRDD<String, Integer> scoreMapRDD3 = JavaPairRDD.fromJavaRDD(scoreDetails3);
// cogroup already returns the Tuple3 pair RDD, so no cast is needed
JavaPairRDD<String, Tuple3<Iterable<Integer>, Iterable<Integer>, Iterable<Integer>>> cogroupRDD =
        scoreMapRDD1.cogroup(scoreMapRDD2, scoreMapRDD3);
Map<String, Tuple3<Iterable<Integer>, Iterable<Integer>, Iterable<Integer>>> resultMap = cogroupRDD.collectAsMap();
for (String key : resultMap.keySet()) {
    System.out.println(key + ":" + resultMap.get(key));
}
Running this prints each key with its Tuple3 of value Iterables.
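As with groupByKey, the semantics are easy to see without Spark. This local sketch (class and helper names are our own) does what cogroup does for two inputs: every key seen in either input gets one list of values per input, an empty list when the key is absent from that input:

```java
import java.util.*;

public class CogroupLocal {
    // One pair of empty value lists: slot 0 for the first input, 1 for the second.
    static List<List<Integer>> twoEmptyLists() {
        List<List<Integer>> pair = new ArrayList<>();
        pair.add(new ArrayList<>());
        pair.add(new ArrayList<>());
        return pair;
    }

    // Local sketch of cogroup semantics: group both inputs by key, keeping
    // the values from each input in their own list. TreeMap keeps the output
    // order deterministic for printing.
    static Map<String, List<List<Integer>>> cogroup(
            List<Map.Entry<String, Integer>> a,
            List<Map.Entry<String, Integer>> b) {
        Map<String, List<List<Integer>>> out = new TreeMap<>();
        for (Map.Entry<String, Integer> e : a)
            out.computeIfAbsent(e.getKey(), k -> twoEmptyLists()).get(0).add(e.getValue());
        for (Map.Entry<String, Integer> e : b)
            out.computeIfAbsent(e.getKey(), k -> twoEmptyLists()).get(1).add(e.getValue());
        return out;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> s1 = List.of(
                Map.entry("xiaoming", 95), Map.entry("lihua", 95), Map.entry("lihua", 98));
        List<Map.Entry<String, Integer>> s2 = List.of(
                Map.entry("xiaoming", 65), Map.entry("lihua", 63));
        // prints {lihua=[[95, 98], [63]], xiaoming=[[95], [65]]}
        System.out.println(cogroup(s1, s2));
    }
}
```

Spark's version additionally shuffles so that all values for a key, from every input RDD, end up in the same partition before grouping.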