Syntax
val newRdd = oldRdd.join(otherDataset, [numTasks])
otherDataset is the other pair RDD to join with
numTasks optionally sets the number of partitions (reduce tasks) for the result
Source code
def join[W](other: RDD[(K, W)]): RDD[(K, (V, W))] = { /* compiled code */ }
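For reference, PairRDDFunctions also declares overloads that control the partitioning of the result, which is what the optional numTasks argument in the syntax above maps to (signatures paraphrased from the Spark source):

def join[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (V, W))]
def join[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (V, W))]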
Purpose
Called on RDDs of type (K, V) and (K, W); returns an RDD of (K, (V, W)), pairing up all elements that share the same key.
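Note that join is an inner join: a key must appear in both RDDs to show up in the result, and keys present on only one side are silently dropped. A minimal sketch illustrating this (the data here is made up, and the output order of collect is not guaranteed):

val left = sc.makeRDD(Array((1, "a"), (2, "b"), (4, "d")))
val right = sc.makeRDD(Array((1, 10), (2, 20), (3, 30)))

left.join(right).collect()
// => Array((1,(a,10)), (2,(b,20)))  -- keys 3 and 4 are dropped

// To keep unmatched keys from one side, use leftOuterJoin / rightOuterJoin instead:
left.leftOuterJoin(right).collect()
// => Array((1,(a,Some(10))), (2,(b,Some(20))), (4,(d,None)))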
Example
package com.day1

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object oper {
  def main(args: Array[String]): Unit = {
    val config: SparkConf = new SparkConf().setMaster("local[*]").setAppName("wordCount")

    // Create the context object
    val sc = new SparkContext(config)

    // join operator
    val rdd = sc.makeRDD(Array((1, "a"), (2, "b"), (3, "c")))
    val rdd1 = sc.makeRDD(Array((1, 4), (2, 5), (3, 6)))

    val joinRdd = rdd.join(rdd1)
    joinRdd.collect().foreach(println)
  }
}
Input
(1,"a"),(2,"b"),(3,"c")
(1,4),(2,5),(3,6)
Output
(1,(a,4))
(2,(b,5))
(3,(c,6))
Schematic
(K,V).join((K,W)) => (K,(V,W))
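To control the parallelism of the shuffle, pass the optional partition count; a brief sketch reusing rdd and rdd1 from the example above (the count 4 is arbitrary):

// Ask for 4 partitions in the joined result
val joinRdd4 = rdd.join(rdd1, 4)
println(joinRdd4.getNumPartitions) // 4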