Last time I used Spark Streaming to pull a real-time data stream from Kafka and implement a simple business computation. Over the past couple of days I decided to raise the complexity of that pipeline by bringing in the GraphX component to compute over complex graph relationships, hoping to eventually run real-time parallel computations such as label graphs and probabilistic graphs. Below is a simple requirement implemented on top of it: computing the out-neighbor relationships of each vertex within a batch interval. The example is as follows:
import kafka.serializer.StringDecoder
import org.apache.spark.streaming._
import org.apache.spark.streaming.kafka._
import org.apache.spark._
import org.apache.spark.rdd.RDD
import org.apache.spark.graphx._

object DirectKafkaGraphx {
  def main(args: Array[String]) {
    //System.setProperty("hadoop.home.dir", "E:\\software\\hadoop-2.5.2")
    //StreamingExamples.setStreamingLogLevels()

    // Default broker list and topic, overridden when both are passed on the command line
    var brokers = "101.271.251.121:9092"
    var topics = "page_visits"
    if (args.length >= 2) {
      brokers = args(0)
      topics = args(1)
    }

    // Create context with a 10 second batch interval
    val sparkConf = new SparkConf().setAppName("DirectKafkaGraphx")
    val ssc = new StreamingContext(sparkConf, Seconds(10))
    //ssc.checkpoint(".")

    val topicsSet = topics.split(",").toSet
    val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topicsSet)

    // Each message value is a CSV line; columns 1 and 2 hold the source and
    // destination vertex ids of an edge (VertexId in GraphX is a Long)
    val lines = messages.map(_._2)
    val words = lines.map(_.split(","))
    val cleanedDStream = words.transform { rdd =>
      rdd.map(x => Edge(x(1).toLong, x(2).toLong, 1))
    }
    cleanedDStream.print()

    // For each batch, build a graph from the edge RDD and collect the
    // out-neighbor id set of every vertex
    val graphDStream = cleanedDStream.transform { rdd =>
      Graph.fromEdges(rdd, "a").collectNeighborIds(EdgeDirection.Out).map(e => (e._1, e._2.toSet))
    }
    graphDStream.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
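The parsing step inside the first `transform` can be sketched without Spark at all. This is a minimal plain-Scala sketch, assuming (hypothetically) that each Kafka message value is a CSV line of the form `label,srcId,dstId`, where column 0 is ignored and columns 1 and 2 are the vertex ids; the `Edge` case class here is a stand-in for GraphX's `Edge(srcId, dstId, attr)`:

```scala
// Mirror of GraphX's Edge(srcId, dstId, attr) as a simple case class
case class ParsedEdge(srcId: Long, dstId: Long, attr: Int)

object EdgeParseSketch {
  // Same logic as rdd.map(x => Edge(x(1).toLong, x(2).toLong, 1)) above:
  // split the CSV line and take columns 1 and 2 as vertex ids
  def parse(line: String): ParsedEdge = {
    val x = line.split(",")
    ParsedEdge(x(1).toLong, x(2).toLong, 1)
  }

  def main(args: Array[String]): Unit = {
    val lines = Seq("pv,1,2", "pv,1,3", "pv,2,3")
    println(lines.map(parse))
  }
}
```

Malformed lines (fewer than three columns, non-numeric ids) would throw here just as they would inside the streaming job, so in practice a `Try`/filter step before the map is worth adding.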
A few notes on the main graph-computation steps involved in the code above:
1. Streaming's transform function exposes each batch of the DStream as an RDD, so GraphX's high-level API can be applied to it; unlike foreachRDD, which runs side effects only, transform returns a new DStream built from the result;
2. the companion object Graph is used to construct the graph from the edge RDD, producing a Graph object;
3. the various high-level functions on Graph are then used to solve the concrete requirement.
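To make point 3 concrete: what `collectNeighborIds(EdgeDirection.Out)` produces per batch can be sketched with plain Scala collections. This is only an illustrative sketch of the semantics, not Spark code; the real GraphX call returns a `VertexRDD[Array[VertexId]]`, which the job above then converts to sets:

```scala
object OutNeighborSketch {
  // Group edges by source vertex and collect the set of destination ids --
  // the same per-vertex result that collectNeighborIds(EdgeDirection.Out)
  // (followed by .toSet, as in the job above) yields for each batch
  def outNeighborSets(edges: Seq[(Long, Long)]): Map[Long, Set[Long]] =
    edges.groupBy(_._1).mapValues(_.map(_._2).toSet).toMap

  def main(args: Array[String]): Unit = {
    // Duplicate edge (1,2) collapses into the set, as in the streaming output
    val edges = Seq((1L, 2L), (1L, 3L), (2L, 3L), (1L, 2L))
    println(outNeighborSets(edges))
  }
}
```

One caveat this sketch makes visible: vertices with no outgoing edges simply do not appear in the result, so a zero-out-degree vertex never shows up in the printed DStream either.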