Syntax
val newRdd = oldRdd.cogroup(otherDataset, [numTasks])
otherDataset is the other pair RDD to cogroup with
numTasks is the number of partitions (tasks)
Source
def cogroup[W](other : org.apache.spark.rdd.RDD[scala.Tuple2[K, W]]) : org.apache.spark.rdd.RDD[scala.Tuple2[K, scala.Tuple2[scala.Iterable[V], scala.Iterable[W]]]] = { /* compiled code */ }
Purpose
Called on RDDs of type (K,V) and (K,W); returns an RDD of type (K,(Iterable&lt;V&gt;,Iterable&lt;W&gt;)), grouping all values for each key from both RDDs
Example
package com.day1

import org.apache.spark.{SparkConf, SparkContext}

object oper {
  def main(args: Array[String]): Unit = {
    val config: SparkConf = new SparkConf().setMaster("local[*]").setAppName("wordCount")
    // create the context object
    val sc = new SparkContext(config)
    // cogroup operator
    val rdd = sc.makeRDD(Array((1, "a"), (2, "b"), (3, "c")))
    val rdd1 = sc.makeRDD(Array((1, 4), (2, 5), (3, 6)))
    val cogroupRdd = rdd.cogroup(rdd1)
    cogroupRdd.collect().foreach(println)
  }
}
Input
(1,"a"),(2,"b"),(3,"c")
(1,4),(2,5),(3,6)
Output
(1,(CompactBuffer(a),CompactBuffer(4)))
(2,(CompactBuffer(b),CompactBuffer(5)))
(3,(CompactBuffer(c),CompactBuffer(6)))
Schematic
(K,V).cogroup((K,W)) => (K,(Iterable<V>,Iterable<W>))
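The key difference from join is how unmatched keys are handled: cogroup keeps every key that appears in either RDD, filling in an empty Iterable for the missing side, while join (an inner join on keys) drops such keys. The sketch below illustrates this; it assumes a local Spark environment, and the object and variable names (CogroupVsJoin, left, right) are illustrative, not from the original example.

```scala
package com.day1

import org.apache.spark.{SparkConf, SparkContext}

object CogroupVsJoin {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("cogroupVsJoin")
    val sc = new SparkContext(conf)

    // keys 1 and 2 on the left; keys 2 and 3 on the right
    val left  = sc.makeRDD(Array((1, "a"), (2, "b")))
    val right = sc.makeRDD(Array((2, 5), (3, 6)))

    // cogroup: all three keys appear; the missing side is an empty Iterable
    left.cogroup(right).collect().foreach(println)
    // e.g. (1,(CompactBuffer(a),CompactBuffer()))
    //      (2,(CompactBuffer(b),CompactBuffer(5)))
    //      (3,(CompactBuffer(),CompactBuffer(6)))

    // join: only key 2 survives, since it is present on both sides
    left.join(right).collect().foreach(println)
    // e.g. (2,(b,5))

    sc.stop()
  }
}
```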