Flink DataSet Custom API Guide (Scala) - 004
Join
join performs a SQL-style join: it matches the elements of two DataSets on a specified key and emits the matched pairs.
Program:
package code.book.batch.dataset.advance.api
import org.apache.flink.api.scala.{ExecutionEnvironment, _}
object JoinFunction001scala {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    // Authors: (author id, name, email)
    val authors = env.fromElements(
      Tuple3("A001", "zhangsan", "zhangsan@qq.com"),
      Tuple3("A001", "lisi", "lisi@qq.com"),
      Tuple3("A001", "wangwu", "wangwu@qq.com"))
    // Posts: (post id, author name)
    val posts = env.fromElements(
      Tuple2("P001", "zhangsan"),
      Tuple2("P002", "lisi"),
      Tuple2("P003", "wangwu"),
      Tuple2("P004", "lisi"))
    // Join on field 1 of authors (the name) and field 1 of posts (the author name)
    val text2 = authors.join(posts).where(1).equalTo(1)
    text2.print()
  }
}
Output:
((A001,wangwu,wangwu@qq.com),(P003,wangwu))
((A001,zhangsan,zhangsan@qq.com),(P001,zhangsan))
((A001,lisi,lisi@qq.com),(P002,lisi))
((A001,lisi,lisi@qq.com),(P004,lisi))
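Without an explicit join function, join simply emits each matched pair as a tuple, as shown above. The matching semantics can be sketched with plain Scala collections (no Flink required; `JoinSketch` is a name introduced here for illustration):

```scala
// Plain-Scala sketch of what join(...).where(1).equalTo(1) produces:
// an inner join pairing every author with every post that has the same name.
object JoinSketch {
  val authors = Seq(
    ("A001", "zhangsan", "zhangsan@qq.com"),
    ("A001", "lisi", "lisi@qq.com"),
    ("A001", "wangwu", "wangwu@qq.com"))
  val posts = Seq(
    ("P001", "zhangsan"), ("P002", "lisi"),
    ("P003", "wangwu"), ("P004", "lisi"))
  // Inner join on field index 1 of both sides (the name field)
  val joined = for {
    a <- authors
    p <- posts
    if a._2 == p._2
  } yield (a, p)
}
```

Note that "lisi" appears in two posts, so that author is paired twice, matching the four result rows printed above.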
CoGroup
coGroup groups the elements of two DataSets by key and hands both groups for each key to the same function, whereas groupBy can only group a single DataSet.
Program:
package code.book.batch.dataset.advance.api
import org.apache.flink.api.scala.{ExecutionEnvironment, _}
object CoGroupFunction001scala {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    // Authors: (author id, name, email)
    val authors = env.fromElements(
      Tuple3("A001", "zhangsan", "zhangsan@qq.com"),
      Tuple3("A001", "lisi", "lisi@qq.com"),
      Tuple3("A001", "wangwu", "wangwu@qq.com"))
    // Posts: (post id, author name)
    val posts = env.fromElements(
      Tuple2("P001", "zhangsan"),
      Tuple2("P002", "lisi"),
      Tuple2("P003", "wangwu"),
      Tuple2("P004", "lisi"))
    // CoGroup on field 1 of both sides; without an apply function,
    // each result is a pair of grouped arrays
    val text2 = authors.coGroup(posts).where(1).equalTo(1)
    text2.print()
  }
}
Output:
([Lscala.Tuple3;@6c2c1385,[Lscala.Tuple2;@5f354bcf)
([Lscala.Tuple3;@3daf7722,[Lscala.Tuple2;@78641d23)
([Lscala.Tuple3;@74589991,[Lscala.Tuple2;@146dfe6)
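The output is hard to read because, without an apply function, each result is a pair of arrays and print() shows their default toString. To get readable output, pass a function via apply (e.g. `.apply { (as, ps) => ... }`). The underlying grouping semantics can be sketched with plain Scala collections (`CoGroupSketch` is a name introduced here for illustration):

```scala
// Plain-Scala sketch of coGroup semantics: both datasets are grouped by key,
// and each key yields (key, all authors with that key, all posts with that key).
object CoGroupSketch {
  val authors = Seq(
    ("A001", "zhangsan", "zhangsan@qq.com"),
    ("A001", "lisi", "lisi@qq.com"),
    ("A001", "wangwu", "wangwu@qq.com"))
  val posts = Seq(
    ("P001", "zhangsan"), ("P002", "lisi"),
    ("P003", "wangwu"), ("P004", "lisi"))
  // Every key that appears on either side gets one co-group
  val keys = (authors.map(_._2) ++ posts.map(_._2)).distinct
  val coGrouped = keys.map { k =>
    (k, authors.filter(_._2 == k), posts.filter(_._2 == k))
  }
}
```

Unlike join, which emits one result per matched pair, coGroup emits one result per key, so "lisi" produces a single group holding one author and two posts.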