//wordCountsWithReduce
val words = Array("one", "two", "two", "three", "three", "three")
val wordPairsRDD = sc.parallelize(words).map(word => (word, 1))
val wordCountsWithReduce = wordPairsRDD.reduceByKey(_ + _).collect()
//wordCountsWithGroup
val wordCountsWithGroup = wordPairsRDD.groupByKey()
  .map(t => (t._1, t._2.sum)).collect()
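Run either way, the counts come out the same (key order may differ):
//Array((one,1), (two,2), (three,3))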
Both approaches produce the correct answer, but reduceByKey performs much better on large datasets: it combines values within each partition first and only then merges the partial results, whereas groupByKey ships every value across the shuffle and does all the combining at the end.
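The two phases are easiest to see with aggregateByKey, whose two function arguments map directly onto them; a minimal sketch of the same count (not part of the original example):
val wordCountsWithAggregate = wordPairsRDD.aggregateByKey(0)(
  (count, v) => count + v, //seqOp: combine within each partition
  (c1, c2) => c1 + c2      //combOp: merge the partial counts across partitions
).collect()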
And what on earth is combineByKey? Does it really have to be this complicated? It is the general-purpose per-key aggregator: you supply one function to create an accumulator from the first value seen for a key, one to fold further values in, and one to merge accumulators from different partitions.
val input = sc.parallelize(List(("coffee", 1), ("coffee", 2), ("panda", 4)))
val result = input.combineByKey(
  (v) => (v, 1),                                    //createCombiner: first value for a key becomes (sum, count)
  (acc: (Int, Int), v) => (acc._1 + v, acc._2 + 1), //mergeValue: fold another value into the accumulator
  (acc1: (Int, Int), acc2: (Int, Int)) =>           //mergeCombiners: merge accumulators across partitions
    (acc1._1 + acc2._1, acc1._2 + acc2._2)
).map { case (key, value) => (key, value._1 / value._2.toFloat) }
//collect as a map and print each entry
result.collectAsMap().map(println(_))
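Coffee averages (1 + 2) / 2 and panda 4 / 1, so this should print (entry order may vary):
//(coffee,1.5)
//(panda,4.0)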
//read a local text file
val input = sc.textFile("file:///usr/local/spark/examples/src/main/resources/people.txt")
//read small files whole; each record is a (path, content) pair
val input = sc.wholeTextFiles("file:///usr/local/spark/examples/src/main/resources/people.txt")
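Unlike textFile, which yields one record per line, wholeTextFiles yields one pair per file, with the file path as the key; a quick check (illustrative, not in the original):
input.keys.collect().foreach(println) //prints the path of each file read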
//Scala: average value per file
val input = sc.wholeTextFiles("file:///home/holden/happypanda")
val result = input.mapValues { y =>
  val nums = y.split(" ").map(x => x.toDouble)
  nums.sum / nums.size.toDouble
}
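mapValues leaves the keys (the file paths) untouched, so each result pairs a path with that file's average; one way to print them:
result.collect().foreach { case (file, avg) => println(s"$file: $avg") }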
val ex1 = sc.parallelize(List((1,2),(3,4),(3,6),(2,8)))
//return all values for key 3: 4 and 6
//Seq[Int] = WrappedArray(4, 6)
ex1.lookup(3)
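Looking up a key that is not present simply returns an empty Seq rather than throwing:
ex1.lookup(5) //empty Seq, no exception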
ex1.mapValues(x => x + 1).collect //equivalent: ex1.mapValues(_ + 1).collect
//scala.collection.Map[Int,Long] = Map(2 -> 1, 1 -> 1, 3 -> 2)
//how many values each key has (counts the elements per key)
ex1.countByKey()
//scala.collection.Map[Int,Int] = Map(2 -> 8, 1 -> 2, 3 -> 6)
//returns one entry per distinct key; a key with several values keeps only one, here the later one (note: differs from the given answer)
ex1.collectAsMap().size //size of the resulting map; size takes no parentheses
ex1.collectAsMap()
ex1.groupByKey().collect
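groupByKey gathers all values of a key into one buffer; the exact buffer type printed varies by Spark version, but the content should be:
//Array((1,CompactBuffer(2)), (2,CompactBuffer(8)), (3,CompactBuffer(4, 6)))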
//inspect the keys; keys takes no parentheses
ex1.keys.collect
//inspect the values; values takes no parentheses
ex1.values.collect
ex1.flatMapValues(x => x.to(5)).collect
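Each value expands to the range from itself up to 5; values already above 5 (6 and 8) produce nothing:
//Array((1,2), (1,3), (1,4), (1,5), (3,4), (3,5))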
//the result is still a pair RDD, so reduceByKey can be applied directly
ex1.flatMapValues(x => x.to(5)).reduceByKey(_ + _).collect
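Summing the expanded values per key (2+3+4+5 for key 1, 4+5 for key 3) should give (key order may vary):
//Array((1,14), (3,9))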