【Spark Java API】Transformation (7): cogroup, join

cogroup


Official documentation:

For each key k in `this` or `other`, return a resulting RDD that contains a tuple with the list of values for that key in `this` as well as `other`.

Function signatures:

def cogroup[W](other: JavaPairRDD[K, W], partitioner: Partitioner): JavaPairRDD[K, (JIterable[V], JIterable[W])]

def cogroup[W1, W2](other1: JavaPairRDD[K, W1], other2: JavaPairRDD[K, W2], partitioner: Partitioner): JavaPairRDD[K, (JIterable[V], JIterable[W1], JIterable[W2])]

def cogroup[W1, W2, W3](other1: JavaPairRDD[K, W1], other2: JavaPairRDD[K, W2], other3: JavaPairRDD[K, W3], partitioner: Partitioner): JavaPairRDD[K, (JIterable[V], JIterable[W1], JIterable[W2], JIterable[W3])]

def cogroup[W](other: JavaPairRDD[K, W]): JavaPairRDD[K, (JIterable[V], JIterable[W])]

def cogroup[W1, W2](other1: JavaPairRDD[K, W1], other2: JavaPairRDD[K, W2]): JavaPairRDD[K, (JIterable[V], JIterable[W1], JIterable[W2])]

def cogroup[W1, W2, W3](other1: JavaPairRDD[K, W1], other2: JavaPairRDD[K, W2], other3: JavaPairRDD[K, W3]): JavaPairRDD[K, (JIterable[V], JIterable[W1], JIterable[W2], JIterable[W3])]

def cogroup[W](other: JavaPairRDD[K, W], numPartitions: Int): JavaPairRDD[K, (JIterable[V], JIterable[W])]

def cogroup[W1, W2](other1: JavaPairRDD[K, W1], other2: JavaPairRDD[K, W2], numPartitions: Int): JavaPairRDD[K, (JIterable[V], JIterable[W1], JIterable[W2])]

def cogroup[W1, W2, W3](other1: JavaPairRDD[K, W1], other2: JavaPairRDD[K, W2], other3: JavaPairRDD[K, W3], numPartitions: Int): JavaPairRDD[K, (JIterable[V], JIterable[W1], JIterable[W2], JIterable[W3])]

Source code analysis:

def cogroup[W](other: RDD[(K, W)], partitioner: Partitioner)
    : RDD[(K, (Iterable[V], Iterable[W]))] = self.withScope {
  if (partitioner.isInstanceOf[HashPartitioner] && keyClass.isArray) {
    throw new SparkException("Default partitioner cannot partition array keys.")
  }
  val cg = new CoGroupedRDD[K](Seq(self, other), partitioner)
  cg.mapValues { case Array(vs, w1s) =>
    (vs.asInstanceOf[Iterable[V]], w1s.asInstanceOf[Iterable[W]])
  }
}

override def getDependencies: Seq[Dependency[_]] = {
  rdds.map { rdd: RDD[_ <: Product2[K, _]] =>
    if (rdd.partitioner == Some(part)) {
      logDebug("Adding one-to-one dependency with " + rdd)
      new OneToOneDependency(rdd)
    } else {
      logDebug("Adding shuffle dependency with " + rdd)
      new ShuffleDependency[K, Any, CoGroupCombiner](rdd, part, serializer)
    }
  }
}

override def getPartitions: Array[Partition] = {
  val array = new Array[Partition](part.numPartitions)
  for (i <- 0 until array.length) {
    // Each CoGroupPartition will have a dependency per contributing RDD
    array(i) = new CoGroupPartition(i, rdds.zipWithIndex.map { case (rdd, j) =>
      // Assume each RDD contributed a single dependency, and get it
      dependencies(j) match {
        case s: ShuffleDependency[_, _, _] =>
          None
        case _ =>
          Some(new NarrowCoGroupSplitDep(rdd, i, rdd.partitions(i)))
      }
    }.toArray)
  }
  array
}

Which partition of the CoGroupedRDD each cogroup() result lands in is determined by the user-supplied partitioner (HashPartitioner by default).
All RDDs that the CoGroupedRDD depends on are collected in the array rdds[RDD]. Then, for each i: if the CoGroupedRDD has a one-to-one relationship with rdds(i) (i.e. rdds(i) is already partitioned by the same partitioner), deps(i) = new OneToOneDependency(rdd); otherwise deps(i) = new ShuffleDependency(rdd). Finally, getDependencies returns the array deps[Dependency] describing the dependency on each parent RDD.
getParents(partition id) in the Dependency class yields the partitions (List[Int]) of the parent RDD that a given partition depends on under that dependency.
getPartitions() determines how many partitions the RDD has and how each partition is represented.
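
A practical consequence of getDependencies: if an input RDD is already partitioned by the very partitioner that cogroup() uses, that input is wired up through a OneToOneDependency and needs no extra shuffle. Below is a minimal, self-contained sketch of this (the class name, sample data, and the toDebugString check are illustrative only, not part of the original example):

import java.util.Arrays;
import org.apache.spark.HashPartitioner;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class CogroupDependencyDemo {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext("local", "cogroup-deps");
    HashPartitioner partitioner = new HashPartitioner(2);

    // Pre-partition both inputs with the same partitioner cogroup() will use.
    JavaPairRDD<Integer, String> left = sc
        .parallelizePairs(Arrays.asList(new Tuple2<>(1, "a"), new Tuple2<>(2, "b")))
        .partitionBy(partitioner);
    JavaPairRDD<Integer, String> right = sc
        .parallelizePairs(Arrays.asList(new Tuple2<>(1, "x"), new Tuple2<>(3, "y")))
        .partitionBy(partitioner);

    // left.partitioner == right.partitioner == cogroup's partitioner, so
    // getDependencies builds a OneToOneDependency for both inputs.
    JavaPairRDD<Integer, Tuple2<Iterable<String>, Iterable<String>>> cg =
        left.cogroup(right, partitioner);

    // The lineage should show no shuffle above the partitionBy steps.
    System.out.println(cg.toDebugString());
    sc.stop();
  }
}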

Example:

List<Integer> data = Arrays.asList(1, 2, 4, 3, 5, 6, 7, 1, 2);
JavaRDD<Integer> javaRDD = javaSparkContext.parallelize(data);

JavaPairRDD<Integer, Integer> javaPairRDD = javaRDD.mapToPair(new PairFunction<Integer, Integer, Integer>() {
  @Override
  public Tuple2<Integer, Integer> call(Integer integer) throws Exception {
    return new Tuple2<Integer, Integer>(integer, 1);
  }
});

// Unlike groupByKey(), cogroup() aggregates two or more RDDs.
JavaPairRDD<Integer,Tuple2<Iterable<Integer>,Iterable<Integer>>> cogroupRDD = javaPairRDD.cogroup(javaPairRDD);
System.out.println(cogroupRDD.collect());

JavaPairRDD<Integer, Tuple2<Iterable<Integer>, Iterable<Integer>>> cogroupRDD3 = javaPairRDD.cogroup(javaPairRDD, new Partitioner() {
  @Override
  public int numPartitions() {
    return 2;
  }
  @Override
  public int getPartition(Object key) {
    // getPartition must return a value in [0, numPartitions); a plain
    // hashCode() % numPartitions() can be negative, so normalize it.
    int mod = key.toString().hashCode() % numPartitions();
    return mod < 0 ? mod + numPartitions() : mod;
  }
});
System.out.println(cogroupRDD3.collect());
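
cogroup() also comes in variants that take two or three other RDDs (see the signatures above); each extra input adds one more Iterable to the value tuple. A minimal sketch of the two-other-RDDs variant, reusing javaRDD and javaPairRDD from the example above (javaPairRDD2 and its sample values are made up for illustration):

// Hypothetical second input, for illustration only.
JavaPairRDD<Integer, String> javaPairRDD2 = javaRDD.mapToPair(new PairFunction<Integer, Integer, String>() {
  @Override
  public Tuple2<Integer, String> call(Integer integer) throws Exception {
    return new Tuple2<Integer, String>(integer, "v" + integer);
  }
});

// Three-way cogroup: one Iterable per contributing RDD, grouped by key.
JavaPairRDD<Integer, scala.Tuple3<Iterable<Integer>, Iterable<Integer>, Iterable<String>>> cogroup3Way =
    javaPairRDD.cogroup(javaPairRDD, javaPairRDD2);
System.out.println(cogroup3Way.collect());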

join


Official documentation:

Return an RDD containing all pairs of elements with matching keys in `this` and `other`. Each pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in `this` and (k, v2) is in `other`. Performs a hash join across the cluster.

Function signatures:

def join[W](other: JavaPairRDD[K, W]): JavaPairRDD[K, (V, W)]

def join[W](other: JavaPairRDD[K, W], numPartitions: Int): JavaPairRDD[K, (V, W)]

def join[W](other: JavaPairRDD[K, W], partitioner: Partitioner): JavaPairRDD[K, (V, W)]

Source code analysis:

def join[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (V, W))] = self.withScope {
  this.cogroup(other, partitioner).flatMapValues( pair =>
    for (v <- pair._1.iterator; w <- pair._2.iterator) yield (v, w)
  )
}

As the source shows, join() aggregates two RDD[(K, V)]s in the manner of a SQL join. Like intersection(), it first performs a cogroup(), producing an RDD of type (K, (Iterable[V], Iterable[W])); flatMapValues then forms the Cartesian product of the two iterables for each key and flattens it into individual (K, (V, W)) pairs.
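
The Cartesian product matters when keys repeat: n matching values on the left and m on the right produce n * m output pairs for that key, and keys present on only one side are dropped (inner-join semantics). A minimal, self-contained sketch (class name and sample data made up for illustration):

import java.util.Arrays;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class JoinCartesianDemo {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext("local", "join-cartesian");

    // Key 1 appears twice on the left and twice on the right.
    JavaPairRDD<Integer, String> left = sc.parallelizePairs(Arrays.asList(
        new Tuple2<>(1, "a"), new Tuple2<>(1, "b"), new Tuple2<>(2, "c")));
    JavaPairRDD<Integer, String> right = sc.parallelizePairs(Arrays.asList(
        new Tuple2<>(1, "x"), new Tuple2<>(1, "y")));

    // Key 1 yields 2 * 2 = 4 pairs; key 2 has no match and is dropped.
    // Expected (in some order): (1,(a,x)), (1,(a,y)), (1,(b,x)), (1,(b,y))
    System.out.println(left.join(right).collect());
    sc.stop();
  }
}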

Example:

List<Integer> data = Arrays.asList(1, 2, 4, 3, 5, 6, 7);
final Random random = new Random();
JavaRDD<Integer> javaRDD = javaSparkContext.parallelize(data);
JavaPairRDD<Integer, Integer> javaPairRDD = javaRDD.mapToPair(new PairFunction<Integer, Integer, Integer>() {
  @Override
  public Tuple2<Integer, Integer> call(Integer integer) throws Exception {
    return new Tuple2<Integer, Integer>(integer, random.nextInt(10));
  }
});

JavaPairRDD<Integer,Tuple2<Integer,Integer>> joinRDD = javaPairRDD.join(javaPairRDD);
System.out.println(joinRDD.collect());

JavaPairRDD<Integer,Tuple2<Integer,Integer>> joinRDD2 = javaPairRDD.join(javaPairRDD,2);
System.out.println(joinRDD2.collect());

JavaPairRDD<Integer, Tuple2<Integer, Integer>> joinRDD3 = javaPairRDD.join(javaPairRDD, new Partitioner() {
  @Override
  public int numPartitions() {
    return 2;
  }
  @Override
  public int getPartition(Object key) {
    // As above, normalize the modulo so the partition index is never negative.
    int mod = key.toString().hashCode() % numPartitions();
    return mod < 0 ? mod + numPartitions() : mod;
  }
});
System.out.println(joinRDD3.collect());