reduce
def reduce(f: (T, T) ⇒ T): T
Applies the binary function f to the elements of the RDD pairwise and returns the aggregated result.
scala> var rdd1 = sc.makeRDD(1 to 10,2)
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[36] at makeRDD at :21
scala> rdd1.reduce(_ + _)
res18: Int = 55
scala> var rdd2 = sc.makeRDD(Array(("A",0),("A",2),("B",1),("B",2),("C",1)))
rdd2: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[38] at makeRDD at :21
scala> rdd2.reduce((x,y) => {(x._1 + y._1,x._2 + y._2)})
res21: (String, Int) = (CBBAA,6)
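The semantics can be sketched with plain Scala collections (a local stand-in, not Spark). Note that Spark combines partition results in a non-deterministic order, so the string above could also come out as, say, (AABBC,6); the Int sum is stable because + is commutative. A local collection, by contrast, reduces as a deterministic left fold:

```scala
// Local sketch of reduce semantics using plain Scala collections.
val nums = (1 to 10).toList
val sum = nums.reduce(_ + _)  // 55

val pairs = List(("A", 0), ("A", 2), ("B", 1), ("B", 2), ("C", 1))
// Deterministic left fold locally: ("AABBC", 6).
// On a Spark RDD the string component depends on partitioning order.
val combined = pairs.reduce((x, y) => (x._1 + y._1, x._2 + y._2))
```

This is also why reduce requires f to be commutative and associative: any non-commutative part of the result (like the concatenated string here) is not reproducible across runs.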
countByKey (action)
def countByKey(): Map[K, Long]
countByKey counts the number of elements for each key K in an RDD[K, V] (for example, counting new users per day).
scala> var rdd1 = sc.makeRDD(Array(("A",0),("A",2),("B",1),("B",2),("B",3)))
rdd1: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[7] at makeRDD at :21
scala> rdd1.countByKey
res5: scala.collection.Map[String,Long] = Map(A -> 2, B -> 3)
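Conceptually, countByKey groups by the key and counts the values in each group. A minimal local sketch with plain Scala collections:

```scala
// Local sketch of countByKey semantics: group pairs by key,
// then count how many pairs fall under each key.
val data = List(("A", 0), ("A", 2), ("B", 1), ("B", 2), ("B", 3))
val counts: Map[String, Long] =
  data.groupBy(_._1).map { case (k, vs) => (k, vs.size.toLong) }
// Map(A -> 2, B -> 3)
```

Note that countByKey is an action that collects the result map to the driver, so it is only suitable when the number of distinct keys is small.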
top
def top(num: Int)(implicit ord: Ordering[T]): Array[T]
top returns the first num elements of the RDD, ordered either by the default (descending) ordering or by a supplied Ordering.
scala> var rdd1 = sc.makeRDD(Seq(10, 4, 2, 12, 3))
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[40] at makeRDD at :21
scala> rdd1.top(1)
res2: Array[Int] = Array(12)
scala> rdd1.top(2)
res3: Array[Int] = Array(12, 10)
// supply a custom ordering
scala> implicit val myOrd = implicitly[Ordering[Int]].reverse // an implicit Ordering works like a decorator here: it swaps in a different comparison for the same type
myOrd: scala.math.Ordering[Int] = scala.math.Ordering$$anon$4@767499ef
scala> rdd1.top(1)
res4: Array[Int] = Array(2)
scala> rdd1.top(2)
res5: Array[Int] = Array(2, 3)
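The behavior above can be sketched locally: sort by the given Ordering in descending direction and take the first num elements (topLocal is a hypothetical helper, not part of Spark's API):

```scala
// Local sketch of top(num)(ord) semantics:
// return the num largest elements according to ord, largest first.
def topLocal[T](xs: Seq[T], num: Int)(implicit ord: Ordering[T]): Seq[T] =
  xs.sorted(ord.reverse).take(num)

val xs = Seq(10, 4, 2, 12, 3)
val largest  = topLocal(xs, 2)                        // Seq(12, 10)
// Reversing the ordering makes "top" return the smallest elements,
// matching the implicit myOrd example above.
val smallest = topLocal(xs, 2)(Ordering[Int].reverse) // Seq(2, 3)
```

This is why declaring the reversed implicit flips the result of top: the function itself is unchanged, only the Ordering it resolves implicitly differs.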