Flink Scala API Extensions: Study Notes and Translation

Original article: https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/scala_api_extensions.html

Scala API Extensions



In order to keep a fair amount of consistency between the Scala and Java APIs, some of the features that allow a higher level of expressiveness in Scala have been left out of the standard APIs for both batch and streaming. If you want to enjoy the full Scala experience, you can opt in to extensions that enhance the Scala API via implicit conversions.


To use all the available extensions for the DataSet API, all you need is a single import:

 

import org.apache.flink.api.scala.extensions._

These implicit conversions are implemented in the org.apache.flink.api.scala.extensions package.


For the DataStream API, a single import likewise enables the extensions:

import org.apache.flink.streaming.api.scala.extensions._
These implicit conversions are implemented in the org.apache.flink.streaming.api.scala.extensions package.


Note: alternatively, you can import individual extensions à la carte and use only the ones you prefer, e.g.:

import org.apache.flink.streaming.api.scala.extensions.acceptPartialFunctions

Accept partial functions

Normally, neither the DataSet API nor the DataStream API accepts anonymous pattern-matching functions to deconstruct tuples, case classes, or collections. For example, the following is not supported:

val data: DataSet[(Int, String, Double)] = // [...]
data.map {
  case (id, name, temperature) => // [...]
  // The previous line causes the following compilation error:
  // "The argument types of an anonymous function must be fully known. (SLS 8.5)"
}
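
Without the extensions you have to fall back on one of the usual workarounds, sketched below: annotate the parameter type, or wrap the pattern match in a total function. (This sketch assumes data is the DataSet[(Int, String, Double)] from above and that the standard org.apache.flink.api.scala._ import is in scope.)

// Workaround 1: name the parameter and use positional tuple accessors
data.map { t: (Int, String, Double) => t._3 }

// Workaround 2: wrap the pattern match in a total function,
// so the argument type is fully known before the match
data.map { t =>
  t match {
    case (id, name, temperature) => temperature
  }
}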

The extensions introduce new methods on both the DataSet and DataStream APIs, each with a one-to-one correspondence to a method in the standard API. These delegating methods do support anonymous pattern-matching functions:

DataSet API

mapWith (original: map on DataSet)

data.mapWith {
  case (_, value) => value.toString
}

mapPartitionWith (original: mapPartition on DataSet)

data.mapPartitionWith {
  case head #:: _ => head
}

flatMapWith (original: flatMap on DataSet)

data.flatMapWith {
  case (_, name, visitTimes) => visitTimes.map(name -> _)
}

filterWith (original: filter on DataSet)

data.filterWith {
  case Train(_, isOnTime) => isOnTime
}

reduceWith (original: reduce on DataSet, GroupedDataSet)

data.reduceWith {
  case ((_, amount1), (_, amount2)) => amount1 + amount2
}

reduceGroupWith (original: reduceGroup on GroupedDataSet)

data.reduceGroupWith {
  case id #:: value #:: _ => id -> value
}

groupingBy (original: groupBy on DataSet)

data.groupingBy {
  case (id, _, _) => id
}

sortGroupWith (original: sortGroup on GroupedDataSet)

grouped.sortGroupWith(Order.ASCENDING) {
  case House(_, value) => value
}

combineGroupWith (original: combineGroup on GroupedDataSet)

grouped.combineGroupWith {
  case header #:: amounts => amounts.sum
}

projecting (original: apply on JoinDataSet, CrossDataSet)

data1.join(data2).
  whereClause { case (pk, _) => pk }.
  isEqualTo { case (_, fk) => fk }.
  projecting {
    case ((pk, tx), (products, fk)) => tx -> products
  }

data1.cross(data2).projecting {
  case ((a, _), (_, b)) => a -> b
}

projecting (original: apply on CoGroupDataSet)

data1.coGroup(data2).
  whereClause { case (pk, _) => pk }.
  isEqualTo { case (_, fk) => fk }.
  projecting {
    case (head1 #:: _, head2 #:: _) => head1 -> head2
  }
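
To make the batch variants concrete, here is a small self-contained sketch of my own (not part of the original docs), assuming Flink 1.2 with the standard Scala imports:

import org.apache.flink.api.scala._
import org.apache.flink.api.scala.extensions._

object SalesJob {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    // (name, amount) pairs to aggregate per user
    val sales: DataSet[(String, Double)] = env.fromElements(
      ("alice", 10.0), ("bob", 20.0), ("alice", 5.0))

    sales
      .filterWith { case (_, amount) => amount > 1.0 }              // keep non-trivial sales
      .groupingBy { case (name, _) => name }                        // group by user name
      .reduceWith { case ((name, a1), (_, a2)) => (name, a1 + a2) } // sum amounts per user
      .print()                                                      // print() also triggers execution
  }
}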
DataStream API

mapWith (original: map on DataStream)

data.mapWith {
  case (_, value) => value.toString
}

mapPartitionWith (original: mapPartition on DataStream)

data.mapPartitionWith {
  case head #:: _ => head
}

flatMapWith (original: flatMap on DataStream)

data.flatMapWith {
  case (_, name, visits) => visits.map(name -> _)
}

filterWith (original: filter on DataStream)

data.filterWith {
  case Train(_, isOnTime) => isOnTime
}

keyingBy (original: keyBy on DataStream)

data.keyingBy {
  case (id, _, _) => id
}

mapWith (original: map on ConnectedDataStream)

data.mapWith(
  map1 = { case (_, value) => value.toString },
  map2 = { case (_, _, value, _) => value + 1 }
)

flatMapWith (original: flatMap on ConnectedDataStream)

data.flatMapWith(
  flatMap1 = { case (_, json) => parse(json) },
  flatMap2 = { case (_, _, json, _) => parse(json) }
)

keyingBy (original: keyBy on ConnectedDataStream)

data.keyingBy(
  key1 = { case (_, timestamp) => timestamp },
  key2 = { case (id, _, _) => id }
)

reduceWith (original: reduce on KeyedDataStream, WindowedDataStream)

data.reduceWith {
  case ((_, sum1), (_, sum2)) => sum1 + sum2
}

foldWith (original: fold on KeyedDataStream, WindowedDataStream)

data.foldWith(User(bought = 0)) {
  case (User(b), (_, items)) => User(b + items.size)
}

applyWith (original: apply on WindowedDataStream)

data.applyWith(0)(
  foldFunction = { case (sum, amount) => sum + amount },
  windowFunction = { case (k, w, sum) => /* [...] */ }
)

projecting (original: apply on JoinedDataStream)

data1.join(data2).
  whereClause { case (pk, _) => pk }.
  isEqualTo { case (_, fk) => fk }.
  projecting {
    case ((pk, tx), (products, fk)) => tx -> products
  }
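
To see the streaming variants in action, here is an analogous self-contained sketch of my own (again not from the original docs), assuming Flink 1.2 with the standard Scala streaming imports:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.scala.extensions._

object TemperatureJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // (sensorId, temperature) readings
    val readings: DataStream[(Int, Double)] =
      env.fromElements((1, 21.5), (2, 30.0), (1, 23.0))

    readings
      .filterWith { case (_, temp) => temp > 22.0 }   // drop cold readings
      .keyingBy { case (sensorId, _) => sensorId }    // key the stream by sensor id
      .reduceWith { case ((id, t1), (_, t2)) => (id, math.max(t1, t2)) } // running max per sensor
      .print()

    env.execute("per-sensor temperature maximum")
  }
}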

For more details on the semantics of each method, refer to the DataSet and DataStream API documentation.

To enable just the acceptPartialFunctions extension on its own, you can import it directly:


import org.apache.flink.api.scala.extensions.acceptPartialFunctions

The following snippet shows a minimal example of how to use these extension methods together (with the DataSet API):


object Main {
  import org.apache.flink.api.scala._            // ExecutionEnvironment and type-information implicits
  import org.apache.flink.api.scala.extensions._
  case class Point(x: Double, y: Double)
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val ds = env.fromElements(Point(1, 2), Point(3, 4), Point(5, 6))
    ds.filterWith {
      case Point(x, _) => x > 1
    }.reduceWith {
      case (Point(x1, y1), Point(x2, y2)) => Point(x1 + x2, y1 + y2)
    }.mapWith {
      case Point(x, y) => (x, y)
    }.flatMapWith {
      case (x, y) => Seq("x" -> x, "y" -> y)
    }.groupingBy {
      case (id, value) => id
    }
  }
}
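
Note that to compile and run the example you need Flink's Scala modules on the classpath. A build sketch in sbt (the coordinates and version here are my assumption for Flink 1.2; adjust to your setup):

// build.sbt (sketch; versions are an assumption, adjust as needed)
libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-scala" % "1.2.0",
  "org.apache.flink" %% "flink-streaming-scala" % "1.2.0"
)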









