Flink Streaming Connectors

Source: https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/connectors/

  1. Predefined Sources and Sinks
  2. Bundled Connectors
  3. Connectors in Apache Bahir
  4. Other Ways to Connect to Flink
    1. Data Enrichment via Async I/O
    2. Queryable State

 

Predefined Sources and Sinks

A few basic data sources and sinks are built into Flink and are always available.

The predefined data sources include reading from files, directories, and sockets, and ingesting data from collections and iterators.

The predefined data sinks support writing to files, to stdout and stderr, and to sockets.
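As a quick illustration, these built-in sources and sinks are available directly on the StreamExecutionEnvironment and DataStream. A minimal Java sketch; the paths, host, and port are placeholders, not values from the original text:

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class PredefinedSourcesAndSinks {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Predefined sources: a file, a socket, and an in-memory collection.
            DataStream<String> fromFile = env.readTextFile("file:///tmp/input.txt");  // placeholder path
            DataStream<String> fromSocket = env.socketTextStream("localhost", 9999);  // placeholder host/port
            DataStream<String> fromElements = env.fromElements("a", "b", "c");

            // Predefined sinks: stdout and a text file.
            fromElements.print();
            fromSocket.print();
            fromFile.writeAsText("file:///tmp/output");                               // placeholder path

            env.execute("Predefined sources and sinks");
        }
    }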


Bundled Connectors

Connectors provide code for interfacing with various third-party systems. Currently these systems are supported:

    • Apache Kafka (source/sink)
    • Apache Cassandra (sink)
    • Amazon Kinesis Streams (source/sink)
    • Elasticsearch (sink)
    • Hadoop FileSystem (sink)
    • RabbitMQ (source/sink)
    • Apache NiFi (source/sink)
    • Twitter Streaming API (source)

Keep in mind that to use one of these connectors in an application, additional third party components are usually required, e.g. servers for the data stores or message queues.


 

Note also that while the streaming connectors listed in this section are part of the Flink project and are included in source releases, they are not included in the binary distributions. Further instructions can be found in the corresponding subsections.

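As an example of a bundled connector, the sketch below reads from and writes to Apache Kafka with the universal Kafka connector (flink-connector-kafka). The topic names, broker address, and group id are placeholder assumptions:

    import java.util.Properties;

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

    public class KafkaConnectorSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.setProperty("group.id", "example-group");           // placeholder group id

            // Kafka as a source.
            DataStream<String> stream = env.addSource(
                    new FlinkKafkaConsumer<>("input-topic", new SimpleStringSchema(), props));

            // Kafka as a sink, wired in via addSink().
            stream.addSink(
                    new FlinkKafkaProducer<>("output-topic", new SimpleStringSchema(), props));

            env.execute("Kafka connector sketch");
        }
    }

Since connectors are not part of the binary distribution, the connector dependency (here flink-connector-kafka) has to be added to the project build.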


 

Connectors in Apache Bahir

Additional streaming connectors for Flink are being released through Apache Bahir, including:

    • Apache ActiveMQ (source/sink)
    • Apache Flume (sink)
    • Redis (sink)
    • Akka (sink)
    • Netty (source)

 


Data Enrichment via Async I/O

Using a connector isn't the only way to get data in and out of Flink. One common pattern is to query an external database or web service in a Map or FlatMap in order to enrich the primary datastream. Flink offers an API for Asynchronous I/O to make it easier to do this kind of enrichment efficiently and robustly.
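A minimal sketch of this pattern with the Async I/O API; the external lookup is simulated with a CompletableFuture, and the timeout and capacity values are arbitrary assumptions:

    import java.util.Collections;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.TimeUnit;

    import org.apache.flink.streaming.api.datastream.AsyncDataStream;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.async.ResultFuture;
    import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

    // Hypothetical enrichment: fetch a description for each id from an external service.
    public class AsyncEnrichment extends RichAsyncFunction<String, String> {

        @Override
        public void asyncInvoke(String id, ResultFuture<String> resultFuture) {
            // A real job would issue a non-blocking request to a database or
            // web-service client here; this CompletableFuture stands in for that call.
            CompletableFuture
                    .supplyAsync(() -> id + " -> enriched")
                    .thenAccept(value -> resultFuture.complete(Collections.singleton(value)));
        }

        // Wiring it into a pipeline (input is an existing DataStream<String>):
        public static DataStream<String> enrich(DataStream<String> input) {
            return AsyncDataStream.unorderedWait(
                    input, new AsyncEnrichment(),
                    1000, TimeUnit.MILLISECONDS, // timeout per request
                    100);                        // max in-flight requests
        }
    }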


 

Queryable State

When a Flink application pushes a lot of data to an external data store, this can become an I/O bottleneck. If the data involved has many fewer reads than writes, a better approach can be for an external application to pull from Flink the data it needs. The Queryable State interface enables this by allowing the state being managed by Flink to be queried on demand.
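A hedged sketch of the client side, assuming a running job has registered queryable state under the hypothetical name "word-counts" (for example via KeyedStream#asQueryableState or StateDescriptor#setQueryable); the host, port, and key are placeholders:

    import java.util.concurrent.CompletableFuture;

    import org.apache.flink.api.common.JobID;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
    import org.apache.flink.queryablestate.client.QueryableStateClient;

    public class QueryableStateSketch {
        public static void main(String[] args) throws Exception {
            // Connect to the queryable-state proxy of a TaskManager.
            QueryableStateClient client =
                    new QueryableStateClient("localhost", 9069); // placeholder host/port

            // Must match the descriptor used by the running job.
            ValueStateDescriptor<Long> descriptor =
                    new ValueStateDescriptor<>("word-counts", Long.class);

            CompletableFuture<ValueState<Long>> result = client.getKvState(
                    JobID.fromHexString(args[0]),    // id of the running job
                    "word-counts",                   // queryable state name
                    "flink",                         // placeholder key
                    BasicTypeInfo.STRING_TYPE_INFO,  // type of the key
                    descriptor);

            System.out.println("count = " + result.get().value());
        }
    }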

 

 

Data Sinks

Source: https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/datastream_api.html#data-sinks

Data sinks consume DataStreams and forward them to files, sockets, external systems, or print them. Flink comes with a variety of built-in output formats that are encapsulated behind operations on the DataStreams:


    • writeAsText() / TextOutputFormat - Writes elements line-wise as Strings. The Strings are obtained by calling the toString() method of each element.

    • writeAsCsv(...) / CsvOutputFormat - Writes tuples as comma-separated value files. Row and field delimiters are configurable. The value for each field comes from the toString() method of the objects.

    • print() / printToErr() - Prints the toString() value of each element on the standard out / standard error stream. Optionally, a prefix (msg) can be provided which is prepended to the output. This can help to distinguish between different calls to print. If the parallelism is greater than 1, the output will also be prepended with the identifier of the task which produced the output.

    • writeUsingOutputFormat() / FileOutputFormat - Method and base class for custom file outputs. Supports custom object-to-bytes conversion.

    • writeToSocket - Writes elements to a socket according to a SerializationSchema.

    • addSink - Invokes a custom sink function. Flink comes bundled with connectors to other systems (such as Apache Kafka) that are implemented as sink functions. A short sketch using a few of these methods follows below.
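To make the list above concrete, here is a minimal sketch combining a few of these methods (the output path is a placeholder, and per the note that follows, these write*() calls are for debugging only):

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class DataSinkSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            DataStream<String> words = env.fromElements("to", "be", "or", "not");

            words.print();                              // toString() of each element to stdout
            words.printToErr("debug");                  // same to stderr, prefixed with "debug"
            words.writeAsText("file:///tmp/words.txt"); // placeholder path

            env.execute("Data sink sketch");
        }
    }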

Note that the write*() methods on DataStream are mainly intended for debugging purposes. They do not participate in Flink's checkpointing, which means these functions usually have at-least-once semantics. How data is flushed to the target system depends on the implementation of the OutputFormat, so not all elements sent to the OutputFormat immediately show up in the target system, and in failure cases those records might be lost.

For reliable, exactly-once delivery of a stream into a file system, use the flink-connector-filesystem. Custom implementations through the .addSink(...) method can also participate in Flink's checkpointing for exactly-once semantics.

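For the file-system case, the flink-connector-filesystem module provides such a checkpoint-aware sink (BucketingSink); Flink 1.7 also added StreamingFileSink with the same guarantee. A minimal sketch using StreamingFileSink, with a placeholder path and an arbitrary checkpoint interval:

    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

    public class ExactlyOnceFileSink {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(10_000); // the sink commits files on checkpoints

            StreamingFileSink<String> sink = StreamingFileSink
                    .forRowFormat(new Path("file:///tmp/output"),          // placeholder path
                                  new SimpleStringEncoder<String>("UTF-8"))
                    .build();

            env.fromElements("a", "b", "c")
               .addSink(sink); // participates in checkpointing for exactly-once delivery

            env.execute("Exactly-once file sink sketch");
        }
    }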

Reprinted from: https://www.cnblogs.com/lixgjob/p/10597352.html
