Key-value transformation operations on RDDs in Spark

groupBy

def groupBy[K](f: T => K): RDD[(K, Iterable[T])]

The function f computes a key for each element, and the elements of the input RDD are grouped according to that key.

def main(args: Array[String]): Unit = {
  // spark.default.parallelism is set to 12 partitions
  val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("test").set("spark.default.parallelism", "12"))
  val rdd1 = sc.makeRDD(1 to 10, 2)
  // group the numbers 1..10 into "even" and "odd" buckets
  rdd1.groupBy(x => { if (x % 2 == 0) "even" else "odd" }).collect.foreach(println(_))
}

16/12/20 16:39:07 INFO DAGScheduler: Job 0 finished: collect at ShellTest.scala:25, took 2.225605 s
(even,CompactBuffer(2, 4, 6, 8, 10))
(odd,CompactBuffer(1, 3, 5, 7, 9))
16/12/20 16:39:07 INFO SparkContext: Invoking stop() from shutdown hook
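
groupBy also has overloads that take a numPartitions argument or a Partitioner, similar to groupByKey below. The following is a minimal sketch in the same style as the snippet above; the partition count 4 is just an illustrative choice.

def main(args: Array[String]): Unit = {
  val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("test"))
  val rdd1 = sc.makeRDD(1 to 10, 2)
  // pass an explicit number of result partitions to groupBy
  val grouped = rdd1.groupBy(x => { if (x % 2 == 0) "even" else "odd" }, 4)
  println(grouped.partitions.length) // expected: 4
  grouped.collect.foreach(println(_))
}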

keyBy

def main(args: Array[String]): Unit = {
  // spark.default.parallelism is set to 12 partitions
  val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("test").set("spark.default.parallelism", "12"))
  val a = sc.parallelize(List("dog", "tiger", "lion", "cat", "spider", "eagle"), 2)
  val b = a.keyBy(_.length) // attach a key to each value; the key is the length of the string
  b.groupByKey.collect.foreach(println(_))
}

16/12/20 16:42:25 INFO DAGScheduler: Job 0 finished: collect at ShellTest.scala:26, took 2.853266 s
(3,CompactBuffer(dog, cat))
(4,CompactBuffer(lion))
(5,CompactBuffer(tiger, eagle))
(6,CompactBuffer(spider))
16/12/20 16:42:25 INFO SparkContext: Invoking stop() from shutdown hook
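
Because keyBy yields a pair RDD (RDD[(Int, String)] here), any of the key-value operations described below can follow it. A minimal sketch in the same style, using mapValues as an example:

def main(args: Array[String]): Unit = {
  val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("test"))
  val a = sc.parallelize(List("dog", "tiger", "lion", "cat", "spider", "eagle"), 2)
  val b = a.keyBy(_.length) // RDD[(Int, String)]
  // the keyed RDD supports the usual pair-RDD operations, e.g. mapValues
  b.mapValues(_.toUpperCase).collect.foreach(println(_))
}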

groupByKey

def groupByKey(): RDD[(K, Iterable[V])]

def groupByKey(numPartitions: Int): RDD[(K, Iterable[V])]

def groupByKey(partitioner: Partitioner): RDD[(K, Iterable[V])]

This function merges all the V values for each key K in an RDD[K, V] into a single collection Iterable[V].

The parameter numPartitions specifies the number of partitions;

the parameter partitioner specifies the partitioning function (a Partitioner).

def main(args: Array[String]): Unit = {
  // spark.default.parallelism is set to 12 partitions
  val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("test").set("spark.default.parallelism", "12"))
  val rdd1 = sc.makeRDD(Array((1, "A"), (1, "B"), (2, "A"), (2, "D"), (3, "E"), (1, "A")))
  rdd1.groupByKey(2).collect.foreach(println(_)) // group into 2 result partitions
}

16/12/20 16:18:35 INFO DAGScheduler: Job 0 finished: collect at ShellTest.scala:23, took 1.716898 s
(2,CompactBuffer(A, D))
(1,CompactBuffer(A, B, A))
(3,CompactBuffer(E))
16/12/20 16:18:35 INFO SparkContext: Invoking stop() from shutdown hook
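
The partitioner variant takes an explicit Partitioner instead of a partition count. A minimal sketch using HashPartitioner (it requires importing org.apache.spark.HashPartitioner):

def main(args: Array[String]): Unit = {
  val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("test"))
  val rdd1 = sc.makeRDD(Array((1, "A"), (1, "B"), (2, "A"), (2, "D"), (3, "E"), (1, "A")))
  // use an explicit HashPartitioner instead of a partition count
  val grouped = rdd1.groupByKey(new HashPartitioner(2))
  println(grouped.partitions.length) // expected: 2
  grouped.collect.foreach(println(_))
}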

reduceByKey

def reduceByKey(func: (V, V) => V): RDD[(K, V)]

def reduceByKey(func: (V, V) => V, numPartitions: Int): RDD[(K, V)]

def reduceByKey(partitioner: Partitioner, func: (V, V) => V): RDD[(K, V)]

This function combines the V values for each key K in an RDD[K, V] using the given function.

The parameter numPartitions specifies the number of partitions;

the parameter partitioner specifies the partitioning function (a Partitioner).

def main(args: Array[String]): Unit = {
  // spark.default.parallelism is set to 12 partitions
  val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("test").set("spark.default.parallelism", "12"))
  val rdd1 = sc.makeRDD(Array((1, "A"), (1, "B"), (2, "A"), (2, "D"), (3, "E"), (1, "A")))
  rdd1.reduceByKey(_ + _).collect.foreach(println(_)) // concatenate the string values per key
}

16/12/20 16:21:11 INFO DAGScheduler: Job 0 finished: collect at ShellTest.scala:23, took 1.476519 s
(1,ABA)
(2,AD)
(3,E)
16/12/20 16:21:11 INFO SparkContext: Invoking stop() from shutdown hook
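
With numeric values the same pattern produces per-key sums; because reduceByKey combines values inside each partition before the shuffle, it generally moves less data than groupByKey followed by a reduce. A minimal sketch with made-up data:

def main(args: Array[String]): Unit = {
  val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("test"))
  val rdd1 = sc.makeRDD(Array(("a", 1), ("a", 2), ("b", 3), ("b", 4), ("c", 5)))
  // sum the values per key; partial sums are computed within each partition first
  rdd1.reduceByKey(_ + _).collect.foreach(println(_)) // expected: (a,3), (b,7), (c,5)
}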

reduceByKeyLocally

def reduceByKeyLocally(func: (V, V) => V): Map[K, V]

This function combines the V values for each key K in an RDD[K, V] using the given function, but the result is returned as a Map[K, V] on the driver rather than as an RDD[K, V].

def main(args: Array[String]): Unit = {
  // spark.default.parallelism is set to 12 partitions
  val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("test").set("spark.default.parallelism", "12"))
  val rdd1 = sc.makeRDD(Array((1, "A"), (1, "B"), (2, "A"), (2, "D"), (3, "E"), (1, "A")))
  rdd1.reduceByKeyLocally(_ + _).foreach(println(_)) // the result is a local Map, not an RDD
}

(1,ABA)
(2,AD)
(3,E)
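
Because the result is an ordinary Scala Map held in the driver rather than an RDD, reduceByKeyLocally is only appropriate when the number of distinct keys fits comfortably in driver memory. A minimal sketch of working with the returned Map:

def main(args: Array[String]): Unit = {
  val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("test"))
  val rdd1 = sc.makeRDD(Array((1, "A"), (1, "B"), (2, "A"), (2, "D"), (3, "E"), (1, "A")))
  // the result is a local Map[Int, String], not an RDD
  val localMap = rdd1.reduceByKeyLocally(_ + _)
  println(localMap.getOrElse(1, "<missing>")) // expected: ABA
}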
