1. Introduction to outerJoinVertices
![](https://i-blog.csdnimg.cn/blog_migrate/bc6f2886c212e07bda8a70fa15eba04e.png#pic_center)
outerJoinVertices operates only on vertices; edges are left untouched. It joins the graph's vertex set with an external vertex RDD and reassigns the attribute of every matched vertex, while keeping the unmatched vertices as well, so the operation behaves like a left outer join:
```scala
def outerJoinVertices[U: ClassTag, VD2: ClassTag](other: RDD[(VertexId, U)])
    (mapFunc: (VertexId, VD, Option[U]) => VD2)(implicit eq: VD =:= VD2 = null)
  : Graph[VD2, ED]
```
mapFunc transforms each vertex attribute. When a vertex has no match in `other`, the third argument is `None`, just as in a `leftOuterJoin`.
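The Option semantics mapFunc sees can be sketched in plain Scala, without Spark, using two hypothetical maps as stand-ins for the vertex RDDs: a matched vertex gets `Some(u)`, an unmatched one gets `None`.

```scala
// Hypothetical stand-ins for the left graph's vertices and the joined RDD.
val left  = Map(1L -> "Spark", 2L -> "Hadoop", 4L -> "Zookeeper")
val right = Map(1L -> 30L, 2L -> 15L) // vertex 4 has no match

// Mirror mapFunc: pattern-match on the Option, like a left outer join.
val joined: Map[Long, Long] = left.map { case (id, _) =>
  right.get(id) match {
    case Some(salary) => id -> salary // matched: take the new value
    case None         => id -> 0L     // unmatched: fall back to a default
  }
}
```

Every key of `left` survives the join; only the values of matched keys are replaced.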
2. Example

1) Code
```scala
import org.apache.spark.graphx.{Edge, Graph, VertexId}
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object OuterJoinVerticesJob {
  def main(args: Array[String]): Unit = {
    // Set up the runtime environment
    val conf = new SparkConf().setAppName("OuterJoinVerticesJob").setMaster("local[4]")
    val sc = new SparkContext(conf)
    // Create the vertex RDD
    val usersVertices: RDD[(VertexId, (String, String))] = sc.parallelize(Array(
      (1L, ("Spark", "scala")), (2L, ("Hadoop", "java")),
      (3L, ("Kafka", "scala")), (4L, ("Zookeeper", "Java"))))
    // Create the edge RDD
    val usersEdges: RDD[Edge[String]] = sc.parallelize(Array(
      Edge(2L, 1L, "study"), Edge(3L, 2L, "train"),
      Edge(1L, 2L, "exercise"), Edge(4L, 1L, "None")))
    val salaryVertices: RDD[(VertexId, (String, Long))] = sc.parallelize(Array(
      (1L, ("Spark", 30L)), (2L, ("Hadoop", 15L)),
      (3L, ("Kafka", 10L)), (5L, ("parameter server", 40L))))
    val salaryEdges: RDD[Edge[String]] = sc.parallelize(Array(
      Edge(2L, 1L, "study"), Edge(3L, 2L, "train"),
      Edge(1L, 2L, "exercise"), Edge(5L, 1L, "None")))
    // Build the graphs
    val graph = Graph(usersVertices, usersEdges)
    val graph1 = Graph(salaryVertices, salaryEdges)
    // outerJoinVertices: matched vertices take the joined attribute,
    // unmatched ones (here vertex 4) fall back to 0
    val joinGraph = graph.outerJoinVertices(graph1.vertices) { (id, attr, deps) =>
      deps match {
        case Some(dep) => dep
        case None => 0
      }
    }
    joinGraph.vertices.collect.foreach(println)
    sc.stop()
  }
}
```
2) Result

```
(4,0)
(1,(Spark,30))
(2,(Hadoop,15))
(3,(Kafka,10))
```
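Two things stand out in the output. First, vertex 5, which exists only in graph1, is absent: outerJoinVertices keeps exactly the vertex set of the graph it is called on. Second, because mapFunc returns either a `(String, Long)` tuple or the `Int` 0, the resulting vertex type is widened to `Any`. A type-consistent variant can be sketched in plain Scala (using hypothetical maps as stand-ins for the two vertex sets): keep only the salary and default it to `0L`, so the joined vertex type is uniformly `Long`.

```scala
// Stand-ins for the users and salary vertex sets from the example above.
val users  = Map(1L -> ("Spark", "scala"), 2L -> ("Hadoop", "java"),
                 3L -> ("Kafka", "scala"), 4L -> ("Zookeeper", "Java"))
val salary = Map(1L -> ("Spark", 30L), 2L -> ("Hadoop", 15L),
                 3L -> ("Kafka", 10L), 5L -> ("parameter server", 40L))

// Keep only ids present in `users` (vertex 5 is dropped) and extract the
// salary, defaulting missing vertices to 0L for a uniform Long result.
val joined: Map[Long, Long] = users.map { case (id, _) =>
  id -> salary.get(id).map(_._2).getOrElse(0L)
}
```

In the real Spark job the same `salaryOpt.map(_._2).getOrElse(0L)` body could be used as mapFunc, yielding a `Graph[Long, String]` instead of `Graph[Any, String]`.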