Original article: https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/scala_api_extensions.html
Scala API Extensions
To keep the Scala and Java APIs reasonably consistent, some features that allow greater expressiveness in Scala were left out of the standard APIs for both batch and streaming. If you want the full Scala experience, you can opt in to extensions that enhance the Scala API through implicit conversions.
To use all available extensions with the DataSet API, a single import is enough:
import org.apache.flink.api.scala.extensions._
These implicit conversions live in the org.apache.flink.api.scala.extensions package.
Likewise, a single import enables the extensions for the DataStream API:
import org.apache.flink.streaming.api.scala.extensions._
These implicit conversions live in the org.apache.flink.streaming.api.scala.extensions package.
Note: alternatively, you can import only the specific extensions you need, for example:
import org.apache.flink.streaming.api.scala.extensions.acceptPartialFunctions
Accept partial functions
Normally, neither the DataSet nor the DataStream API accepts anonymous pattern-matching functions to deconstruct tuples. For example, the following is not supported:
val data: DataSet[(Int, String, Double)] = // [...]
data.map {
  case (id, name, temperature) => // [...]
  // The previous line causes the following compilation error:
  // "The argument types of an anonymous function must be fully known. (SLS 8.5)"
}
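To see why the compiler gives up here, consider a minimal, Flink-free sketch. `FakeDataSet` and `MapFunction` below are toy stand-ins invented for this illustration (they are not Flink classes): the real `DataSet#map` is similarly overloaded to accept both a Scala function and a Java-style function interface, which is what prevents the compiler from inferring the parameter type of a bare `case` lambda. A delegating method with a single, unambiguous signature restores inference:

```scala
object WhyItFails {
  // Toy stand-in for a Java-style function interface.
  trait MapFunction[T, R] { def map(t: T): R }

  // Toy stand-in for a DataSet, with two map overloads mirroring the
  // shape of Flink's API (an assumption of this sketch).
  class FakeDataSet[T](private val elems: Seq[T]) {
    def map[R](f: T => R): FakeDataSet[R] = new FakeDataSet(elems.map(f))
    def map[R](f: MapFunction[T, R]): FakeDataSet[R] =
      new FakeDataSet(elems.map(f.map))

    // A single, unambiguous signature: `case` lambdas now infer their
    // parameter type, just like the mapWith extension method.
    def mapWith[R](f: T => R): FakeDataSet[R] = map(f)

    def toSeq: Seq[T] = elems
  }

  def main(args: Array[String]): Unit = {
    val data = new FakeDataSet(Seq((1, "one"), (2, "two")))
    // data.map { case (_, name) => name }   // does not compile: with two
    //                                       // overloads the lambda's
    //                                       // parameter type is unknown
    val names = data.mapWith { case (_, name) => name }.toSeq
    println(names) // List(one, two)
  }
}
```

This is the whole trick behind the extensions: each `…With` method forwards to the original operator but exposes exactly one function-typed parameter.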
The extensions introduce new methods on both the DataSet and DataStream APIs. Each has a one-to-one counterpart in the standard API, and these delegating methods do support anonymous pattern-matching functions:
DataSet API
Method | Original
---|---
mapWith | map (DataSet)
mapPartitionWith | mapPartition (DataSet)
flatMapWith | flatMap (DataSet)
filterWith | filter (DataSet)
reduceWith | reduce (DataSet, GroupedDataSet)
reduceGroupWith | reduceGroup (GroupedDataSet)
groupingBy | groupBy (DataSet)
sortGroupWith | sortGroup (GroupedDataSet)
combineGroupWith | combineGroup (GroupedDataSet)
projecting | apply (JoinDataSet, CrossDataSet)
projecting | apply (CoGroupDataSet)
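To illustrate the call style these methods enable, here is a minimal, Flink-free sketch where `reduceWith` and `groupingBy` delegate to ordinary `Seq` operations (the implicit class and the `sales` data are invented for this sketch; the real extensions delegate to the Flink operators listed above):

```scala
object DataSetNamingSketch {
  // Toy extension methods over a plain Seq of pairs, mirroring the
  // naming pattern of the DataSet extensions.
  implicit class TupleSeqOps[K, V](private val self: Seq[(K, V)]) extends AnyVal {
    def reduceWith(f: ((K, V), (K, V)) => (K, V)): (K, V) = self.reduce(f)
    def groupingBy[K2](f: ((K, V)) => K2): Map[K2, Seq[(K, V)]] = self.groupBy(f)
  }

  def main(args: Array[String]): Unit = {
    val sales = Seq(("books", 10.0), ("books", 5.0), ("games", 7.0))

    // Destructure both reduce arguments with patterns:
    val total = sales.reduceWith { case ((_, a1), (_, a2)) => ("all", a1 + a2) }
    println(total) // (all,22.0)

    // Group by a destructured key:
    val byCategory = sales.groupingBy { case (category, _) => category }
    println(byCategory.keys.toList.sorted)
  }
}
```

Note that both arguments of the reduce are deconstructed with a single `case` pattern, which is exactly what the standard `reduce` rejects when its overloads leave the lambda's parameter types unknown.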
DataStream API
Method | Original
---|---
mapWith | map (DataStream)
mapPartitionWith | mapPartition (DataStream)
flatMapWith | flatMap (DataStream)
filterWith | filter (DataStream)
keyingBy | keyBy (DataStream)
mapWith | map (ConnectedDataStream)
flatMapWith | flatMap (ConnectedDataStream)
keyingBy | keyBy (ConnectedDataStream)
reduceWith | reduce (KeyedDataStream, WindowedDataStream)
foldWith | fold (KeyedDataStream, WindowedDataStream)
applyWith | apply (WindowedDataStream)
projecting | apply (JoinedDataStream)
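The streaming-side methods follow the same pattern. Since the worked example later in this article only covers the DataSet API, here is a minimal, Flink-free sketch of the `keyingBy` call style over a plain `Seq` standing in for a stream (the `SensorReading` case class and the implicit class are invented for this sketch):

```scala
object StreamNamingSketch {
  // Hypothetical event type for this sketch.
  case class SensorReading(id: Int, temperature: Double)

  // Toy stand-in: keyingBy mirrors keyBy by taking a single total
  // function, so `case` lambdas infer their parameter type.
  implicit class ReadingOps(private val self: Seq[SensorReading]) extends AnyVal {
    def keyingBy[K](key: SensorReading => K): Map[K, Seq[SensorReading]] =
      self.groupBy(key)
    def mapWith[B](f: SensorReading => B): Seq[B] = self.map(f)
  }

  def main(args: Array[String]): Unit = {
    val readings =
      Seq(SensorReading(1, 20.5), SensorReading(2, 19.0), SensorReading(1, 21.0))

    // Extract the key by deconstructing the case class in place:
    val byId = readings.keyingBy { case SensorReading(id, _) => id }
    println(byId.keys.toList.sorted) // List(1, 2)
  }
}
```

In real Flink code, `keyingBy` returns a keyed stream rather than a `Map`; the sketch only demonstrates how the pattern-matching lambda reads at the call site.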
To use a single extension on its own, you can import just that one:
import org.apache.flink.api.scala.extensions.acceptPartialFunctions
The following snippet shows a minimal example of how to use these extension methods together (with the DataSet API):
object Main {
  import org.apache.flink.api.scala._
  import org.apache.flink.api.scala.extensions._

  case class Point(x: Double, y: Double)

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val ds = env.fromElements(Point(1, 2), Point(3, 4), Point(5, 6))
    ds.filterWith {
      case Point(x, _) => x > 1
    }.reduceWith {
      case (Point(x1, y1), Point(x2, y2)) => Point(x1 + x2, y1 + y2)
    }.mapWith {
      case Point(x, y) => (x, y)
    }.flatMapWith {
      case (x, y) => Seq("x" -> x, "y" -> y)
    }.groupingBy {
      case (id, value) => id
    }
  }
}