Spark Operators: RDD Key-Value Transformations (1) – partitionBy, mapValues, flatMapValues

Keywords: Spark operators, Spark RDD key-value transformations, partitionBy, mapValues, flatMapValues
partitionBy

def partitionBy(partitioner: Partitioner): RDD[(K, V)]

This function repartitions the original RDD according to the supplied partitioner and returns a new ShuffledRDD.

    scala> var rdd1 = sc.makeRDD(Array((1,"A"),(2,"B"),(3,"C"),(4,"D")),2)
    rdd1: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[23] at makeRDD at :21
     
    scala> rdd1.partitions.size
    res20: Int = 2
     
    // Inspect the elements in each partition of rdd1
    scala> rdd1.mapPartitionsWithIndex{
         |         (partIdx,iter) => {
         |           var part_map = scala.collection.mutable.Map[String,List[(Int,String)]]()
         |             while(iter.hasNext){
         |               var part_name = "part_" + partIdx;
         |               var elem = iter.next()
         |               if(part_map.contains(part_name)) {
         |                 var elems = part_map(part_name)
         |                 elems ::= elem
         |                 part_map(part_name) = elems
         |               } else {
         |                 part_map(part_name) = List[(Int,String)]{elem}
         |               }
         |             }
         |             part_map.iterator
         |            
         |         }
         |       }.collect

    res22: Array[(String, List[(Int, String)])] = Array((part_0,List((2,B), (1,A))), (part_1,List((4,D), (3,C))))
    // (2,B) and (1,A) are in part_0; (4,D) and (3,C) are in part_1
     
    // Repartition with partitionBy
    scala> var rdd2 = rdd1.partitionBy(new org.apache.spark.HashPartitioner(2))
    rdd2: org.apache.spark.rdd.RDD[(Int, String)] = ShuffledRDD[25] at partitionBy at :23
     
    scala> rdd2.partitions.size
    res23: Int = 2
     
    // Inspect the elements in each partition of rdd2
    scala> rdd2.mapPartitionsWithIndex{
         |         (partIdx,iter) => {
         |           var part_map = scala.collection.mutable.Map[String,List[(Int,String)]]()
         |             while(iter.hasNext){
         |               var part_name = "part_" + partIdx;
         |               var elem = iter.next()
         |               if(part_map.contains(part_name)) {
         |                 var elems = part_map(part_name)
         |                 elems ::= elem
         |                 part_map(part_name) = elems
         |               } else {
         |                 part_map(part_name) = List[(Int,String)]{elem}
         |               }
         |             }
         |             part_map.iterator
         |         }
         |       }.collect

    res24: Array[(String, List[(Int, String)])] = Array((part_0,List((4,D), (2,B))), (part_1,List((3,C), (1,A))))
    // (4,D) and (2,B) are now in part_0, and (3,C) and (1,A) in part_1:
    // HashPartitioner assigns each key to partition key.hashCode % numPartitions
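
Any subclass of org.apache.spark.Partitioner can be passed to partitionBy, not only HashPartitioner. Below is a minimal sketch of a hand-written partitioner (the class name KeyThresholdPartitioner is invented here for illustration); it sends keys less than or equal to 2 to partition 0 and all other keys to partition 1:

    import org.apache.spark.Partitioner

    // A Partitioner only needs numPartitions and getPartition(key).
    class KeyThresholdPartitioner(parts: Int) extends Partitioner {
      override def numPartitions: Int = parts
      // Keys <= 2 go to partition 0, everything else to partition 1.
      override def getPartition(key: Any): Int =
        if (key.asInstanceOf[Int] <= 2) 0 else 1
    }

    // rdd1.partitionBy(new KeyThresholdPartitioner(2)) would place
    // (1,A) and (2,B) in part_0, and (3,C) and (4,D) in part_1.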
     

mapValues

    def mapValues[U](f: (V) => U): RDD[(K, U)]

    Works like map among the basic transformations, except that mapValues applies the function only to the V of each [K,V] pair; keys are left unchanged.

    scala> var rdd1 = sc.makeRDD(Array((1,"A"),(2,"B"),(3,"C"),(4,"D")),2)
    rdd1: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[27] at makeRDD at :21
     
    scala> rdd1.mapValues(x => x + "_").collect
    res26: Array[(Int, String)] = Array((1,A_), (2,B_), (3,C_), (4,D_))
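
For comparison (a sketch, not part of the original run), the same output could be produced with a plain map over the pairs. The practical difference is that mapValues preserves the parent RDD's partitioner, whereas map discards it:

    // Equivalent result via map; note this loses any existing partitioner.
    rdd1.map { case (k, v) => (k, v + "_") }.collect
    // expected: Array((1,A_), (2,B_), (3,C_), (4,D_))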
     

flatMapValues

    def flatMapValues[U](f: (V) => TraversableOnce[U]): RDD[(K, U)]

    Works like flatMap among the basic transformations, except that flatMapValues applies the function only to the V of each [K,V] pair, pairing every element of the returned collection with the original key.

    scala> rdd1.flatMapValues(x => x + "_").collect
    res36: Array[(Int, Char)] = Array((1,A), (1,_), (2,B), (2,_), (3,C), (3,_), (4,D), (4,_))
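
Here the String value itself serves as the TraversableOnce, so it is flattened into characters, each paired with the original key. A more typical use has collection-valued Vs; a small sketch (rdd3 is introduced here for illustration):

    // Each value is a list; flatMapValues pairs every element with its key.
    val rdd3 = sc.makeRDD(Array((1, List("a", "b")), (2, List("c"))))
    rdd3.flatMapValues(v => v).collect
    // expected: Array((1,a), (1,b), (2,c))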
    
