I. RDD Operators
1. [reduce] an action that aggregates all elements of the RDD with a binary function (here, summing them);
2. [reduceByKey] works on RDDs of key-value tuples: values sharing the same key are grouped and combined with the given function;
3. [aggregateByKey] takes an initial (zero) value, a within-partition function, and a cross-partition (merge) function; the zero value is applied once per key per partition;
4. [combineByKey] takes three functions: (1) createCombiner, applied to the first value of a key encountered in a partition; (2) mergeValue, which folds further values of that key into the combiner within the partition; (3) mergeCombiners, which merges combiners across partitions;
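The point that aggregateByKey applies the initial value once per key *per partition* is easy to miss. The plain-Scala sketch below (simulate is a hypothetical helper written for illustration, not a Spark API) reproduces that semantics on two hand-built partitions:

```scala
// Plain-Scala simulation of aggregateByKey semantics (illustrative only, not Spark API).
// The zero value seeds the accumulator once per key in EACH partition.
object AggregateByKeyDemo {
  def simulate(partitions: List[List[(String, Int)]],
               zero: Int,
               seqOp: (Int, Int) => Int,   // within-partition: fold a value into the accumulator
               combOp: (Int, Int) => Int   // cross-partition: merge two accumulators
              ): Map[String, Int] = {
    // Phase 1: fold each partition independently, seeding every new key with `zero`
    val perPartition: List[Map[String, Int]] = partitions.map {
      _.foldLeft(Map.empty[String, Int]) { case (acc, (k, v)) =>
        acc + (k -> seqOp(acc.getOrElse(k, zero), v))
      }
    }
    // Phase 2: merge the per-partition results with combOp
    perPartition.flatten.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2).reduce(combOp) }
  }

  def main(args: Array[String]): Unit = {
    // Two partitions; key "bb" appears in both, so zero = 1 is applied to it twice
    val parts = List(List(("aa", 1), ("bb", 2)), List(("bb", 5), ("cc", 3)))
    println(simulate(parts, 1, _ + _, _ + _)) // bb = (1+2) + (1+5) = 9, not 1+2+5 = 8
  }
}
```

In the example in section II, setMaster("local[1]") gives a single partition, which is why the zero value 1 is added only once per key there.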
II. Example
package com.cn.rddOperator

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object Transformation05 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("transformation05").setMaster("local[1]")
    val sc = new SparkContext(conf)
    sc.setLogLevel("WARN")

    /**
     * reduce: aggregates the elements of the RDD (here, summing them)
     */
    val rdd1: RDD[Int] = sc.parallelize(List(1, 2, 3, 4, 5))
    val sum: Int = rdd1.reduce(_ + _)
    println(sum) // 15

    /**
     * reduceByKey: for an RDD of key-value tuples, groups by key and combines the values
     */
    val rdd2: RDD[(String, Int)] = sc.parallelize(List(("aa", 1), ("bb", 2), ("cc", 3), ("bb", 5), ("cc", 8)))
    val rdd3: RDD[(String, Int)] = rdd2.reduceByKey(_ + _)
    println(rdd3.collect().toBuffer) // ArrayBuffer((aa,1), (bb,7), (cc,11))

    // With String values, _ + _ concatenates instead of adding
    val rdd4: RDD[(String, String)] = sc.parallelize(List(("aa", "1"), ("bb", "2"), ("cc", "3"), ("bb", "4"), ("cc", "8")))
    val rdd5: RDD[(String, String)] = rdd4.reduceByKey(_ + _)
    println(rdd5.collect().toBuffer) // ArrayBuffer((aa,1), (bb,24), (cc,38))

    /**
     * aggregateByKey: initial value, within-partition function, cross-partition function.
     * With a single partition (local[1]), the initial value 1 is added once per key.
     */
    val rdd6: RDD[(String, Int)] = rdd2.aggregateByKey(1)(_ + _, _ + _)
    println(rdd6.collect().toBuffer) // ArrayBuffer((aa,2), (bb,8), (cc,12))

    /**
     * combineByKey: (1) createCombiner for the first value of a key in a partition;
     * (2) mergeValue within a partition; (3) mergeCombiners across partitions
     */
    val rdd7: RDD[(String, Int)] = rdd2.combineByKey(x => x, (a: Int, b: Int) => a + b, (m: Int, n: Int) => m + n)
    println(rdd7.collect().toBuffer) // ArrayBuffer((aa,1), (bb,7), (cc,11))

    sc.stop()
  }
}
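In the example above, combineByKey(x => x, ...) computes the same per-key sums as reduceByKey. The extra power of combineByKey is that the combiner type can differ from the value type, as in the classic per-key average. The sketch below (perKeyAverage is a hypothetical helper, not a Spark API) emulates that pattern in plain Scala so it runs without a Spark cluster:

```scala
// Plain-Scala emulation of the classic combineByKey use case: per-key average,
// where the combiner (sum, count) has a different type than the Int values.
// In Spark the same three functions would be passed as:
//   rdd2.combineByKey(v => (v, 1),
//                     (acc: (Int, Int), v: Int) => (acc._1 + v, acc._2 + 1),
//                     (a: (Int, Int), b: (Int, Int)) => (a._1 + b._1, a._2 + b._2))
object CombineByKeyDemo {
  def perKeyAverage(data: List[(String, Int)]): Map[String, Double] = {
    val createCombiner: Int => (Int, Int) = v => (v, 1)  // first value of a key: (sum, count)
    val mergeValue: ((Int, Int), Int) => (Int, Int) =
      (acc, v) => (acc._1 + v, acc._2 + 1)               // fold further values into the combiner
    // mergeCombiners would merge accumulators across partitions;
    // with a single "partition" here it is never invoked

    val combined = data.foldLeft(Map.empty[String, (Int, Int)]) { case (m, (k, v)) =>
      m.get(k) match {
        case None      => m + (k -> createCombiner(v)) // first time this key is seen
        case Some(acc) => m + (k -> mergeValue(acc, v))
      }
    }
    combined.map { case (k, (sum, cnt)) => k -> sum.toDouble / cnt }
  }

  def main(args: Array[String]): Unit = {
    val data = List(("aa", 1), ("bb", 2), ("cc", 3), ("bb", 5), ("cc", 8))
    println(perKeyAverage(data).toList.sorted) // List((aa,1.0), (bb,3.5), (cc,5.5))
  }
}
```

reduceByKey cannot express this directly, because its combine function must return the same type as the values; combineByKey lifts each value into an accumulator first.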