Order Inversion in Spark


For the underlying idea of order inversion, see the earlier post on order inversion in MapReduce; the code below is an attempt to reproduce it in Spark.

package OrderInversion

import org.apache.spark.{SparkConf, SparkContext}

object OrderInversion {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("RelativeFrequency").setMaster("local")
    val sc = new SparkContext(sparkConf)

    val neighborWindow = 2
    val input = "input/OrderInversion.txt"
    val output = "output"
    // Broadcast the window size to worker nodes, if any
    val broadcastWindow = sc.broadcast(neighborWindow)

    val rawData = sc.textFile(input)

    // Map each line into (word, (neighbor, 1)) pairs within the window
    val pairs = rawData.flatMap(line => {
      val tokens = line.split("\\s+")
      for {
        i <- tokens.indices
        start = math.max(0, i - broadcastWindow.value)
        end = math.min(tokens.length - 1, i + broadcastWindow.value)
        j <- start to end if j != i
      } yield (tokens(i), (tokens(j), 1))
    })

    // Total neighbor count per word: (word, totalCount)
    val totalByKey = pairs.map(t => (t._1, t._2._2)).reduceByKey(_ + _)

    // Group all (neighbor, 1) pairs by word
    val grouped = pairs.groupByKey()

    // Collapse duplicate neighbors: (word, (neighbor, count))
    val uniquePairs = grouped.flatMapValues(_.groupBy(_._1).mapValues(_.unzip._2.sum))
    // Join per-neighbor counts with the per-word totals
    val joined = uniquePairs join totalByKey
    // Relative frequency = neighborCount / totalCount
    val orderInversion = joined.map(t => {
      ((t._1, t._2._1._1), t._2._1._2.toDouble / t._2._2.toDouble)
    })

    val formatResult_tab_separated = orderInversion.map(t => t._1._1 + "\t" + t._1._2 + "\t" + t._2)
    formatResult_tab_separated.saveAsTextFile(output)

    sc.stop()
  }
}
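To check the pairing and relative-frequency logic without a Spark cluster, here is a minimal pure-Scala sketch of the same computation on a single line of text; the helper names `neighborPairs` and `relativeFrequencies` are illustrative, not part of the job above.

```scala
// Generate (word, neighbor) pairs within the window, mirroring the flatMap step.
def neighborPairs(tokens: Array[String], window: Int): Seq[(String, String)] =
  for {
    i <- tokens.indices
    start = math.max(0, i - window)
    end = math.min(tokens.length - 1, i + window)
    j <- start to end if j != i
  } yield (tokens(i), tokens(j))

def relativeFrequencies(tokens: Array[String], window: Int): Map[(String, String), Double] = {
  val pairs = neighborPairs(tokens, window)
  // Total neighbors seen per word (the totalByKey step)
  val totals = pairs.groupBy(_._1).map { case (w, ps) => (w, ps.size) }
  // Count each (word, neighbor) pair, then divide by the word's total
  pairs.groupBy(identity).map { case (p, occ) => (p, occ.size.toDouble / totals(p._1)) }
}

val freqs = relativeFrequencies("a b c".split("\\s+"), 2)
// With window 2, "a" sees "b" and "c" once each, so each gets 1 / 2
println(freqs(("a", "b")))
```

In the Spark job itself, the same collapse could also be done without `groupByKey`, e.g. `pairs.map { case (w, (n, c)) => ((w, n), c) }.reduceByKey(_ + _)`, which combines counts map-side and shuffles less data.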
