(3) Flink Core Concepts and Programming Model

The Big Data Processing Flow

MapReduce: input -> map(reduce) -> output

Storm: input -> Spout/Bolt -> output

Spark: input -> transformation/action -> output

Flink: input -> transformation/sink -> output

DataSet and DataStream

Both abstractions are immutable: once created, you cannot add or remove elements; you can only derive new collections through transformations.

Batch processing: DataSet

Stream processing: DataStream
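A minimal sketch of this immutability idea (plain Python for illustration, not Flink's actual API; the class name `ImmutableDataSet` is invented): a transformation returns a new collection and leaves the original untouched.

```python
# Illustrative sketch of an immutable dataset abstraction (hypothetical,
# not Flink's API): map() derives a NEW dataset and never mutates self.
class ImmutableDataSet:
    def __init__(self, elements):
        self._elements = tuple(elements)  # frozen backing storage

    def map(self, fn):
        # Derive a new dataset from this one; self stays unchanged.
        return ImmutableDataSet(fn(x) for x in self._elements)

    def collect(self):
        return list(self._elements)

ds = ImmutableDataSet([1, 2, 3])
doubled = ds.map(lambda x: x * 2)
print(ds.collect())       # original unchanged: [1, 2, 3]
print(doubled.collect())  # new derived dataset: [2, 4, 6]
```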

Anatomy of a Flink Program

  1. Obtain an execution environment,
  2. Load/create the initial data,
  3. Specify transformations on this data,
  4. Specify where to put the results of your computations,
  5. Trigger the program execution
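The five steps above can be sketched with a toy execution environment (pure Python; names like `ToyEnvironment` and `from_elements` are invented for illustration and mimic, but are not, the real Flink API):

```python
# Toy sketch mirroring the five steps of a Flink program
# (hypothetical mini-framework, not Flink itself).
class ToyEnvironment:
    def __init__(self):
        self.plan = []    # operations are recorded here, not run yet
        self.sink = []

    def from_elements(self, *elements):   # 2. load/create the initial data
        self.plan.append(("source", list(elements)))
        return self

    def map(self, fn):                    # 3. specify transformations
        self.plan.append(("map", fn))
        return self

    def add_sink(self, out):              # 4. specify where to put results
        self.sink = out
        return self

    def execute(self):                    # 5. trigger the execution
        data = []
        for kind, arg in self.plan:
            if kind == "source":
                data = arg
            elif kind == "map":
                data = [arg(x) for x in data]
        self.sink.extend(data)

results = []
env = ToyEnvironment()                    # 1. obtain an execution environment
env.from_elements(1, 2, 3).map(lambda x: x + 1).add_sink(results)
env.execute()
print(results)  # [2, 3, 4]
```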

Lazy Evaluation

All Flink programs are executed lazily: when the program's main method runs, the data loading and transformations do not happen immediately. Rather, each operation is created and added to the program's plan. The operations are actually executed only when execution is explicitly triggered by an execute() call on the execution environment. Whether the program runs locally or on a cluster depends on the type of execution environment.

The lazy evaluation lets you construct sophisticated programs that Flink executes as one holistically planned unit.

Simply put, lazy evaluation suits a pipelined execution model: with the whole plan known up front, Flink can optimize the intermediate steps, which yields a substantial overall performance improvement.
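The lazy-evaluation idea can be demonstrated with a short sketch (plain Python, not Flink's implementation; `LazyPlan` is an invented name): each operation is merely recorded, and the side-effect log proves nothing runs until execute() is called.

```python
# Sketch of lazy evaluation (illustrative only, not Flink internals):
# map() records the operation in a plan; execute() runs the whole plan.
calls = []  # side-effect log to prove when functions actually run

class LazyPlan:
    def __init__(self):
        self.ops = []

    def map(self, fn):
        self.ops.append(fn)   # recorded, NOT executed yet
        return self

    def execute(self, data):
        # Only now do the recorded operations run. Seeing the full
        # plan before running is what enables holistic optimization.
        for fn in self.ops:
            data = [fn(x) for x in data]
        return data

plan = LazyPlan().map(lambda x: calls.append(x) or x * 10)
assert calls == []            # nothing has executed yet
out = plan.execute([1, 2])
assert out == [10, 20]        # results produced only on execute()
assert calls == [1, 2]        # the function ran exactly now
```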

References

Basic API Concepts–Flink v1.7
