When I ran into a case where two tables had to be joined on a LIKE condition, the better options were a map-side join, or Spark SQL's broadcast join.
```
sqlContext.sql(
  """
    |select * from left A, right B where A.url like concat(B.url, '%')
  """.stripMargin)
```
```
val importantBroad = sc.broadcast(important)
val primary = leftRdd.map { x =>
  var flag: Boolean = false
  var importantDomain: String = ""
  // scan the broadcast list and keep the first matching domain
  for (s <- importantBroad.value; if !flag) {
    if (x._3.contains(s)) {
      flag = true
      importantDomain = s
    }
  }
  (x._1, x._2, x._3, x._4, x._5, importantDomain)
}.toDF("id", "age", "domain", "userId", "time", "importantDomain")
```
This works on the same principle as a map-side join over Spark RDDs: the small collection is broadcast, so each executor shares a single copy of the data.
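The idea can be sketched without Spark at all: hold the small side as a plain in-memory collection and probe every record of the large side against it, just like the broadcast-variable loop above. The sample domains and rows below are made up for illustration.

```scala
object MapSideJoinSketch {
  // small side: the collection that would be broadcast to every executor
  val important = Seq("example.com", "test.org")

  // large side: (id, url) records that would live in the big RDD
  val leftRows = Seq((1, "www.example.com/a"), (2, "other.net/b"))

  // probe each large-side row against the small side; first match wins,
  // unmatched rows get an empty string, mirroring the RDD code above
  def joined: Seq[(Int, String, String)] =
    leftRows.map { case (id, url) =>
      val matched = important.find(s => url.contains(s)).getOrElse("")
      (id, url, matched)
    }

  def main(args: Array[String]): Unit =
    joined.foreach(println)
}
```

Because the small side never moves across the network per-record, this avoids the shuffle that a regular join would trigger.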
Spark SQL's broadcast join:
```
import org.apache.spark.sql.functions.broadcast
dataFrame.join(broadcast(idDF), "id")
```