1. groupBy groups a collection by a chosen element
import scala.io.Source

// Read the file and split every line into (word, 1) pairs
val source = Source.fromFile("E://data.txt", "UTF-8")
val lines = source.getLines()
val list = lines.toList.flatMap(line => line.split(" ").map(word => (word, 1)))
// Group the pairs by the word (the first tuple element),
// then sum the counts inside each group
val res0 = list.groupBy(_._1).map(temp => (temp._1, temp._2.map(word => word._2).reduceLeft(_ + _)))
source.close()
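To make the shape of groupBy's result concrete, here is a minimal sketch on an in-memory list instead of E://data.txt (the sample pairs are made up for illustration). groupBy returns a Map from each key to the list of elements that share that key:

// Hypothetical sample data, standing in for the file contents
val pairs = List(("one", 1), ("two", 1), ("two", 1))
// grouped: Map[String, List[(String, Int)]]
//   Map(one -> List((one,1)), two -> List((two,1), (two,1)))
val grouped = pairs.groupBy(_._1)
// counts: Map(one -> 1, two -> 2)
val counts = grouped.map { case (word, group) => (word, group.map(_._2).sum) }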
2. groupByKey groups pairs by their Key
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("GroupAndReduce").setMaster("local")
val sc = new SparkContext(conf)
val words = Array("one", "two", "two", "three", "three", "three")
val wordsRDD = sc.parallelize(words).map(word => (word, 1))
// groupByKey gathers all values for a key into one Iterable,
// then we sum each group and collect the result to the driver
val wordsCountWithGroup = wordsRDD.groupByKey().map(w => (w._1, w._2.sum)).collect()
wordsCountWithGroup.foreach(println)
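Since groupByKey yields an RDD[(K, Iterable[V])], the same count can also be written with mapValues, which keeps the key and transforms only the grouped values. A minimal sketch under the same setup (wordsCountWithGroup2 is a name introduced here for illustration):

// Equivalent to the map(w => (w._1, w._2.sum)) step above
val wordsCountWithGroup2 = wordsRDD.groupByKey().mapValues(_.sum).collect()
wordsCountWithGroup2.foreach(println)

Either way, groupByKey ships every (word, 1) pair across the network before any summing happens.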
3. reduceByKey
The reduceByKey function is better suited to large datasets. This is because Spark knows it can combine output values that share a key on each partition before moving the data. On a very large dataset, the difference between reduceByKey and groupByKey is magnified many times over.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("GroupAndReduce").setMaster("local")
val sc = new SparkContext(conf)
val words = Array("one", "two", "two", "three", "three", "three")
val wordsRDD = sc.parallelize(words).map(word => (word, 1))
// reduceByKey merges the values for each key with the given function,
// combining locally on each partition before any shuffle
val wordsCountWithReduce = wordsRDD.reduceByKey(_ + _).collect()
wordsCountWithReduce.foreach(println)
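Both programs print the same counts, e.g. (two,2), (one,1), (three,3) in some order; the difference is where the summing happens. reduceByKey is a special case of the more general aggregateByKey, and the sketch below (wordsCountWithAggregate is an illustrative name) produces the same result while making the two combine phases explicit:

// aggregateByKey spells out the zero value, the within-partition
// combiner (seqOp), and the cross-partition merger (combOp)
val wordsCountWithAggregate = wordsRDD
  .aggregateByKey(0)(_ + _, _ + _)
  .collect()
wordsCountWithAggregate.foreach(println)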