CC00066.flink——|Hadoop&Flink.V05|——|Flink.v04|Flink SQL|Flink SQL output tables|Writing to Kafka|

This article walks through how to write data out to Kafka with Flink SQL, covering both the connector/table configuration and the accompanying Java code, as hands-on guidance for integrating Flink with Kafka.
1. Flink SQL output tables: writing to Kafka
### --- Writing to Kafka

~~~     # Write a table out to Kafka
DataStreamSource<String> data = env.addSource(new SourceFunction<String>() {
    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        int num = 0;
        while (true) {
            num++;
            ctx.collect("name" + num);
            Thread.sleep(1000);
        }
    }

    @Override
    public void cancel() {
    }
});

Table name = tEnv.fromDataStream(data, $("name"));

ConnectTableDescriptor descriptor = tEnv.connect(
        // declare the external system to connect to
        new Kafka()
                .version("universal")
                .topic("animal")
                .startFromEarliest()
                .property("bootstrap.servers", "hdp-2:9092")
        )
        // declare a format for this system
        .withFormat(
                // new Json()
                new Csv()
        )
        // declare the schema of the table
        .withSchema(
                new Schema()
                        // .field("rowtime", DataTypes.TIMESTAMP(3))
                        // .rowtime(new Rowtime()
                        //         .timestampsFromField("timestamp")
                        //         .watermarksPeriodicBounded(60000)
                        // )
                        // .field("user", DataTypes.BIGINT())
                        .field("message", DataTypes.STRING())
        );

// register the descriptor as a table with the given name
descriptor.createTemporaryTable("MyUserTable");
// write the source table into the Kafka-backed table
name.executeInsert("MyUserTable");
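
The `connect()` descriptor API used above is the legacy (pre-1.11, since deprecated) way of registering a Kafka sink. For reference only, here is a minimal sketch of what the equivalent DDL could look like on newer Flink versions, reusing the same topic `animal`, broker `hdp-2:9092`, and CSV format from the example; the table name `KafkaSink` is an illustrative placeholder, not part of the original post:

-- Equivalent Kafka sink table in the newer DDL style (assumed Flink 1.11+)
CREATE TABLE KafkaSink (
    message STRING
) WITH (
    'connector' = 'kafka',                           -- new-style Kafka connector
    'topic' = 'animal',                              -- same topic as above
    'properties.bootstrap.servers' = 'hdp-2:9092',   -- same broker as above
    'format' = 'csv'                                 -- same CSV format as above
);

After inserting into such a table, the output can be checked from the Kafka side, e.g. with the standard `kafka-console-consumer.sh` tool pointed at the `animal` topic.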
### --- Writing to MySQL (for reference)

CREATE TABLE MyUserTable (
    ...
) WITH (
    -- required: specify this table type is jdbc
    'connector.type' = 'jdbc',

    -- required: JDBC DB url
    'connector.url' = 'jdbc:mysql://localhost:3306/flink-test',

    -- required: jdbc table name
    'connector.table' = 'jdbc_table_name',

    -- optional: the class name of the JDBC driver to use to connect to this URL.
    -- If not set, it will automatically be derived from the URL.
    'connector.driver' = 'com.mysql.jdbc.Driver',

    -- optional: jdbc user name and password
    'connector.username' = 'name',
    'connector.password' = 'password',

    -- **followings are scan options, optional, used when reading from a table**

    -- optional: SQL query / prepared statement.
    -- If set, this will take precedence over the 'connector.table' setting
    'connector.read.query' = 'SELECT * FROM sometable',

    -- These options must all be specified if any of them is specified. In addition,
    -- partition.num must be specified. They describe how to partition the table when
    -- reading in parallel from multiple tasks. partition.column must be a numeric,
    -- date, or timestamp column from the table in question. Notice that lowerBound and
    -- upperBound are just used to decide the partition stride, not for filtering the
    -- rows in the table. So all rows in the table will be partitioned and returned.
    'connector.read.partition.column' = 'column_name',   -- optional: the column name used for partitioning the input
    'connector.read.partition.num' = '50',               -- optional: the number of partitions
    'connector.read.partition.lower-bound' = '500',      -- optional: the smallest value of the first partition
    'connector.read.partition.upper-bound' = '1000',     -- optional: the largest value of the last partition

    -- optional: gives the reader a hint as to the number of rows that should be fetched
    -- from the database per round trip when reading. If the value specified is zero, then
    -- the hint is ignored. The default value is zero.
    'connector.read.fetch-size' = '100',

    -- **followings are lookup options, optional, used in temporal join**

    -- optional: max number of rows of the lookup cache; over this value, the oldest rows will
    -- be eliminated. "cache.max-rows" and "cache.ttl" must both be specified if either
    -- of them is specified. The cache is not enabled by default.
    'connector.lookup.cache.max-rows' = '5000',

    -- optional: the max time to live for each row in the lookup cache; after this time, the oldest
    -- rows will be expired. "cache.max-rows" and "cache.ttl" must both be specified if either
    -- of them is specified. The cache is not enabled by default.
    'connector.lookup.cache.ttl' = '10s',

    -- optional: max retry times if a lookup against the database failed
    'connector.lookup.max-retries' = '3',

    -- **followings are sink options, optional, used when writing into the table**

    -- optional: flush max size (includes all append, upsert and delete records);
    -- over this number of records, data will be flushed. The default value is "5000".
    'connector.write.flush.max-rows' = '5000',

    -- optional: flush interval in milliseconds; after this time, asynchronous threads will flush data.
    -- The default value is "0s", which means no asynchronous flush thread will be scheduled.
    'connector.write.flush.interval' = '2s',

    -- optional: max retry times if writing records to the database failed
    'connector.write.max-retries' = '3'
)
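
As a companion to the DDL above, here is a minimal sketch of how such a JDBC sink table could be registered and written to from Java. The table environment `tEnv`, the source table `SourceTable`, and the trimmed-down option list are assumptions for illustration, not the author's original code; the legacy `connector.type = 'jdbc'` options also require the Flink JDBC connector and a MySQL driver on the classpath.

~~~     # Sketch: register the JDBC table via DDL, then insert into it
// Assumes a StreamTableEnvironment `tEnv` and a registered table `SourceTable` already exist.
tEnv.executeSql(
        "CREATE TABLE MyUserTable (" +
        "  message STRING" +
        ") WITH (" +
        "  'connector.type' = 'jdbc'," +
        "  'connector.url' = 'jdbc:mysql://localhost:3306/flink-test'," +
        "  'connector.table' = 'jdbc_table_name'," +
        "  'connector.username' = 'name'," +
        "  'connector.password' = 'password'" +
        ")");

// Write the query result into MySQL through the registered table.
tEnv.executeSql("INSERT INTO MyUserTable SELECT message FROM SourceTable");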
2. Code implementation
### --- Code implementation: writing a table to Kafka

package com.yanqi.tableql;

import org.apache.flink.api.java.tu
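
The full listing is cut off at this point in the source page. Purely as a sketch of what the complete class plausibly looks like when the snippet from section 1 is assembled into a runnable program: the class name `TableToKafka`, the exact import list, and the environment setup are assumptions, not the author's original code.

~~~     # Sketch: complete program writing a table to Kafka (assumed Flink 1.11, legacy connect() API)
package com.yanqi.tableql;

import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.descriptors.Csv;
import org.apache.flink.table.descriptors.Kafka;
import org.apache.flink.table.descriptors.Schema;

import static org.apache.flink.table.api.Expressions.$;

// Assumed class name; the body mirrors the snippet from section 1.
public class TableToKafka {

    public static void main(String[] args) throws Exception {
        // set up the streaming and table environments
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // unbounded source that emits "name1", "name2", ... once per second
        DataStreamSource<String> data = env.addSource(new SourceFunction<String>() {
            @Override
            public void run(SourceContext<String> ctx) throws Exception {
                int num = 0;
                while (true) {
                    num++;
                    ctx.collect("name" + num);
                    Thread.sleep(1000);
                }
            }

            @Override
            public void cancel() {
            }
        });

        // turn the stream into a table with a single column "name"
        Table name = tEnv.fromDataStream(data, $("name"));

        // register a Kafka-backed sink table (legacy connect() descriptor API)
        tEnv.connect(
                new Kafka()
                        .version("universal")
                        .topic("animal")
                        .startFromEarliest()
                        .property("bootstrap.servers", "hdp-2:9092"))
                .withFormat(new Csv())
                .withSchema(new Schema()
                        .field("message", DataTypes.STRING()))
                .createTemporaryTable("MyUserTable");

        // write the table into Kafka; executeInsert() submits the job
        name.executeInsert("MyUserTable");
    }
}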