Getting Started with Spark: Computing the Median

The data is as follows:

1 2 3 4 5 6 8 9 11 12 13 15 18 20 22 23 25 27 29
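
There are 19 values, so the median is the 10th smallest value, which is 12; the program below should therefore print 12.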

The code is as follows:

import org.apache.spark.{SparkConf, SparkContext}

import scala.util.control.Breaks._

/**
 * Created by xuyao on 15-7-24.
 * Computes the median of data that is stored in a distributed fashion.
 * The data is split into K buckets and the number of elements in each bucket
 * is counted, along with the total number of elements. From the bucket counts
 * and the total count we can tell which bucket the median falls into and its
 * offset inside that bucket, and then extract the median itself.
 */
object Median {

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Median")
    val sc = new SparkContext(conf)
    // textFile reads strings, so the values have to be converted to Int
    val data = sc.textFile("data").flatMap(x => x.split(' ')).map(x => x.toInt)
    // Bucket the data by x / 4 (buckets of width 4; the data set here is small)
    val mappeddata = data.map(x => (x / 4, x)).sortByKey()
    // p_count is the number of elements in each bucket
    val p_count = data.map(x => (x / 4, 1)).reduceByKey(_ + _).sortByKey()
    p_count.foreach(println)
    // p_count is an RDD, so it cannot be used like a Scala Map;
    // collectAsMap brings it back to the driver as a Scala collection
    val scala_p_count = p_count.collectAsMap()
    // look up a bucket's count by its key
    println(scala_p_count(0))
    // sum_count is the total number of elements; count() would only give
    // the number of (key, count) pairs, not the number of elements
    val sum_count = p_count.map(x => x._2).sum().toInt
    println(sum_count)
    var temp = 0   // running count up to and including the bucket that holds the median
    var temp2 = 0  // count of all buckets before the one that holds the median
    var index = 0  // index of the bucket that holds the median
    var mid = 0
    if (sum_count % 2 != 0) {
      mid = sum_count / 2 + 1 // offset of the median within the whole data set
    } else {
      mid = sum_count / 2
    }
    val pcount = p_count.count()
    breakable {
      for (i <- 0 to pcount.toInt - 1) {
        temp = temp + scala_p_count(i)
        temp2 = temp - scala_p_count(i)
        if (temp >= mid) {
          index = i
          break
        }
      }
    }
    println(mid + " " + index + " " + temp + " " + temp2)
    // offset of the median inside its bucket
    val offset = mid - temp2
    // takeOrdered returns the first n elements of the RDD in ascending order
    val result = mappeddata.filter(x => x._1 == index).takeOrdered(offset)
    println(result(offset - 1)._2)
    sc.stop()
  }

}
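
Walking through the code with the sample data: the buckets (key = x / 4) are 0 → {1, 2, 3}, 1 → {4, 5, 6}, 2 → {8, 9, 11}, 3 → {12, 13, 15}, 4 → {18}, 5 → {20, 22, 23}, 6 → {25, 27} and 7 → {29}. The total count is 19, which is odd, so mid = 19 / 2 + 1 = 10. Accumulating the bucket counts gives 3, 6, 9, 12, ..., and the running sum first reaches 10 at bucket 3, so index = 3, temp2 = 9 and offset = 10 - 9 = 1. The first element of bucket 3 in sorted order is 12, which is the median.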

The output is as follows:

/usr/lib/jvm/java-7-sun/bin/java -Dspark.master=local -Didea.launcher.port=7535 -Didea.launcher.bin.path=/opt/idea/bin -Dfile.encoding=UTF-8 -classpath /usr/lib/jvm/java-7-sun/jre/lib/jfr.jar:/usr/lib/jvm/java-7-sun/jre/lib/javaws.jar:/usr/lib/jvm/java-7-sun/jre/lib/resources.jar:/usr/lib/jvm/java-7-sun/jre/lib/plugin.jar:/usr/lib/jvm/java-7-sun/jre/lib/jfxrt.jar:/usr/lib/jvm/java-7-sun/jre/lib/jsse.jar:/usr/lib/jvm/java-7-sun/jre/lib/charsets.jar:/usr/lib/jvm/java-7-sun/jre/lib/deploy.jar:/usr/lib/jvm/java-7-sun/jre/lib/management-agent.jar:/usr/lib/jvm/java-7-sun/jre/lib/rt.jar:/usr/lib/jvm/java-7-sun/jre/lib/jce.jar:/usr/lib/jvm/java-7-sun/jre/lib/ext/sunpkcs11.jar:/usr/lib/jvm/java-7-sun/jre/lib/ext/sunjce_provider.jar:/usr/lib/jvm/java-7-sun/jre/lib/ext/sunec.jar:/usr/lib/jvm/java-7-sun/jre/lib/ext/dnsns.jar:/usr/lib/jvm/java-7-sun/jre/lib/ext/zipfs.jar:/usr/lib/jvm/java-7-sun/jre/lib/ext/localedata.jar:/opt/IdeaProjects/SparkTest/target/scala-2.10/classes:/home/xuyao/.sbt/boot/scala-2.10.4/lib/scala-library.jar:/home/xuyao/spark/lib/spark-assembly-1.4.0-hadoop2.4.0.jar:/opt/idea/lib/idea_rt.jar com.intellij.rt.execution.application.AppMain Median
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/07/29 12:43:28 INFO SparkContext: Running Spark version 1.4.0
15/07/29 12:43:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/07/29 12:43:29 WARN Utils: Your hostname, hadoop resolves to a loopback address: 127.0.1.1; using 192.168.73.129 instead (on interface eth0)
15/07/29 12:43:29 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/07/29 12:43:29 INFO SecurityManager: Changing view acls to: xuyao
15/07/29 12:43:29 INFO SecurityManager: Changing modify acls to: xuyao
15/07/29 12:43:29 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(xuyao); users with modify permissions: Set(xuyao)
15/07/29 12:43:30 INFO Slf4jLogger: Slf4jLogger started
15/07/29 12:43:31 INFO Remoting: Starting remoting
15/07/29 12:43:32 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.73.129:58364]
15/07/29 12:43:32 INFO Utils: Successfully started service 'sparkDriver' on port 58364.
15/07/29 12:43:32 INFO SparkEnv: Registering MapOutputTracker
15/07/29 12:43:33 INFO SparkEnv: Registering BlockManagerMaster
15/07/29 12:43:33 INFO DiskBlockManager: Created local directory at /tmp/spark-329d9ad9-4ed6-4a79-97f3-254cab1a13b8/blockmgr-f9da5521-a9c0-4801-bffb-3a92f089d1cd
15/07/29 12:43:33 INFO MemoryStore: MemoryStore started with capacity 131.6 MB
15/07/29 12:43:33 INFO HttpFileServer: HTTP File server directory is /tmp/spark-329d9ad9-4ed6-4a79-97f3-254cab1a13b8/httpd-fd2adba3-06b9-4035-9c2b-6733e379207a
15/07/29 12:43:33 INFO HttpServer: Starting HTTP Server
15/07/29 12:43:33 INFO Utils: Successfully started service 'HTTP file server' on port 58175.
15/07/29 12:43:33 INFO SparkEnv: Registering OutputCommitCoordinator
15/07/29 12:43:38 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/07/29 12:43:38 INFO SparkUI: Started SparkUI at http://192.168.73.129:4040
15/07/29 12:43:39 INFO Executor: Starting executor ID driver on host localhost
15/07/29 12:43:39 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 56974.
15/07/29 12:43:39 INFO NettyBlockTransferService: Server created on 56974
15/07/29 12:43:39 INFO BlockManagerMaster: Trying to register BlockManager
15/07/29 12:43:39 INFO BlockManagerMasterEndpoint: Registering block manager localhost:56974 with 131.6 MB RAM, BlockManagerId(driver, localhost, 56974)
15/07/29 12:43:39 INFO BlockManagerMaster: Registered BlockManager
15/07/29 12:43:40 WARN SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
15/07/29 12:43:41 INFO MemoryStore: ensureFreeSpace(137512) called with curMem=0, maxMem=137948037
15/07/29 12:43:41 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 134.3 KB, free 131.4 MB)
15/07/29 12:43:41 INFO MemoryStore: ensureFreeSpace(12633) called with curMem=137512, maxMem=137948037
15/07/29 12:43:41 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 12.3 KB, free 131.4 MB)
15/07/29 12:43:41 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:56974 (size: 12.3 KB, free: 131.5 MB)
15/07/29 12:43:41 INFO SparkContext: Created broadcast 0 from textFile at Median.scala:15
15/07/29 12:43:41 INFO FileInputFormat: Total input paths to process : 1
15/07/29 12:43:41 INFO SparkContext: Starting job: foreach at Median.scala:20
15/07/29 12:43:41 INFO DAGScheduler: Registering RDD 6 (map at Median.scala:19)
15/07/29 12:43:41 INFO DAGScheduler: Registering RDD 7 (reduceByKey at Median.scala:19)
15/07/29 12:43:41 INFO DAGScheduler: Got job 0 (foreach at Median.scala:20) with 1 output partitions (allowLocal=false)
15/07/29 12:43:41 INFO DAGScheduler: Final stage: ResultStage 2(foreach at Median.scala:20)
15/07/29 12:43:41 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1)
15/07/29 12:43:41 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1)
15/07/29 12:43:41 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[6] at map at Median.scala:19), which has no missing parents
15/07/29 12:43:41 INFO MemoryStore: ensureFreeSpace(4168) called with curMem=150145, maxMem=137948037
15/07/29 12:43:41 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.1 KB, free 131.4 MB)
15/07/29 12:43:41 INFO MemoryStore: ensureFreeSpace(2376) called with curMem=154313, maxMem=137948037
15/07/29 12:43:41 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.3 KB, free 131.4 MB)
15/07/29 12:43:41 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:56974 (size: 2.3 KB, free: 131.5 MB)
15/07/29 12:43:41 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:874
15/07/29 12:43:41 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[6] at map at Median.scala:19)
15/07/29 12:43:41 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
15/07/29 12:43:42 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1399 bytes)
15/07/29 12:43:42 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
15/07/29 12:43:42 INFO HadoopRDD: Input split: file:/opt/IdeaProjects/SparkTest/data:0+49
15/07/29 12:43:42 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
15/07/29 12:43:42 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
15/07/29 12:43:42 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
15/07/29 12:43:42 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
15/07/29 12:43:42 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
15/07/29 12:43:42 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2001 bytes result sent to driver
15/07/29 12:43:42 INFO DAGScheduler: ShuffleMapStage 0 (map at Median.scala:19) finished in 0.435 s
15/07/29 12:43:42 INFO DAGScheduler: looking for newly runnable stages
15/07/29 12:43:42 INFO DAGScheduler: running: Set()
15/07/29 12:43:42 INFO DAGScheduler: waiting: Set(ShuffleMapStage 1, ResultStage 2)
15/07/29 12:43:42 INFO DAGScheduler: failed: Set()
15/07/29 12:43:42 INFO DAGScheduler: Missing parents for ShuffleMapStage 1: List()
15/07/29 12:43:42 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 411 ms on localhost (1/1)
15/07/29 12:43:42 INFO TaskSchedulerImpl: Removed TaskSet 
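
For a quick sanity check, here is a minimal sketch (not part of the original program) that computes the same median by fully sorting the RDD. It assumes the `data` RDD[Int] built in main above is in scope; for a data set this small a full sort is fine, while the bucket approach above avoids sorting every element.

    // Cross-check: compute the median via a full sort
    // (assumes the `data` RDD[Int] from main is in scope)
    val sorted = data.sortBy(identity).zipWithIndex().map { case (v, i) => (i, v) }
    val n = sorted.count()
    val median =
      if (n % 2 != 0) sorted.lookup(n / 2).head.toDouble // odd count: the middle element
      else (sorted.lookup(n / 2 - 1).head + sorted.lookup(n / 2).head) / 2.0 // even count: mean of the two middle elements
    println(median) // prints 12.0 for the sample data above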