Flink Learning 10 --- DataStream Sinks: Overview and RichSinkFunction

A sink is the component responsible for writing data that Flink has processed out to an external system.

I. Flink provides a number of ready-made sink implementations for DataStream, including:

1. writeAsText(): writes the elements line by line as strings, obtained by calling each element's toString() method.

2. print() / printToErr(): prints each element's toString() value to standard output or standard error.

3. Custom output: addSink() lets you write data to any third-party storage system.

Flink supports the corresponding sinks through its built-in connectors and through the Apache Bahir project. All three styles above are sketched together below.

For details, see: https://blog.csdn.net/zhuzuwei/article/details/107137295
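To make the three styles concrete, here is a minimal sketch that wires all of them onto one stream. It is illustrative only: the host, port, and output path are placeholders, and the addSink() body is left as a stub.

import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public class SinkStylesSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStreamSource<String> lines = env.socketTextStream("localhost", 8888); // placeholder source

        lines.writeAsText("/tmp/sink-demo");       // 1. each element's toString(), line by line
        lines.print();                             // 2. each element's toString() to stdout
        lines.addSink(new SinkFunction<String>() { // 3. custom sink for any external system
            @Override
            public void invoke(String value, Context context) {
                // hand the record to a third-party store here
            }
        });

        env.execute("SinkStylesSketch");
    }
}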

 

II. Sink fault-tolerance guarantees

| Sink | Semantic guarantee | Notes |
| --- | --- | --- |
| HDFS | Exactly-once | |
| Elasticsearch | At-least-once | |
| Kafka producer | At-least-once / Exactly-once | Kafka 0.9 and 0.10 provide at-least-once; Kafka 0.11 provides exactly-once |
| File | At-least-once | |
| Redis | At-least-once | |
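Taking the Kafka row as an example: with Kafka 0.11+ the producer can write transactionally, which is what gives the exactly-once guarantee. Below is a minimal sketch using the universal flink-connector-kafka; it assumes checkpointing is enabled (transactions commit on checkpoints), and the broker address and topic name are placeholders.

import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaExactlyOnceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // exactly-once Kafka writes are committed on checkpoints, so checkpointing must be on
        env.enableCheckpointing(10000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka-host:9092"); // placeholder broker
        // must not exceed the broker's transaction.max.timeout.ms (15 minutes by default)
        props.setProperty("transaction.timeout.ms", "60000");

        FlinkKafkaProducer<String> producer = new FlinkKafkaProducer<>(
                "wordcount-topic", // placeholder topic
                new KafkaSerializationSchema<String>() {
                    @Override
                    public ProducerRecord<byte[], byte[]> serialize(String element, Long timestamp) {
                        return new ProducerRecord<>("wordcount-topic",
                                element.getBytes(StandardCharsets.UTF_8));
                    }
                },
                props,
                FlinkKafkaProducer.Semantic.EXACTLY_ONCE);

        env.fromElements("hello", "flink").addSink(producer);
        env.execute("KafkaExactlyOnceSketch");
    }
}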

 

III. Demo

     So far every example has sunk its results with print(); here we demonstrate sinking to a text file and to a Redis database. First the job class (AddSinkReview.java):

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class AddSinkReview {
    public static void main(String[] args) throws Exception{
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // source: read lines from a socket (address elided in the original post)
        DataStreamSource<String> lines = env.socketTextStream("192.168.***.***", 8888);

        // split each incoming line on commas and emit a (word, 1) pair per word
        SingleOutputStreamOperator<Tuple2<String, Integer>> words = lines.flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
            @Override
            public void flatMap(String s, Collector<Tuple2<String, Integer>> collector) throws Exception {
                String[] words = s.split(",");
                for (int i = 0; i < words.length; i++) {
                    collector.collect(Tuple2.of(words[i], 1));
                }
            }
        });

        // word count: key by the word (tuple field 0) and sum the counts (field 1)
        SingleOutputStreamOperator<Tuple2<String, Integer>> summed = words.keyBy(0).sum(1);

        summed.print();

        // sink 1: write the summed stream as text (this creates a directory, see the note below)
        summed.writeAsText("C:\\Users\\Dell\\Desktop\\flinkTest\\sinkout1.txt", FileSystem.WriteMode.OVERWRITE);

        // a second stream of (redisKey, word, 1) triples for the Redis sink
        SingleOutputStreamOperator<Tuple3<String,String, Integer>> words2 = lines.flatMap(new FlatMapFunction<String, Tuple3<String,String, Integer>>() {
            @Override
            public void flatMap(String s, Collector<Tuple3<String,String, Integer>> collector) throws Exception {
                String[] words = s.split(",");
                for (int i = 0; i < words.length; i++) {
                    collector.collect(Tuple3.of("wordscount",words[i], 1));
                }
            }
        });

        SingleOutputStreamOperator<Tuple3<String,String, Integer>> summed2 = words2.keyBy(1).sum(2);

        String configPath = "C:\\Users\\Dell\\Desktop\\flinkTest\\config.txt";
        ParameterTool parameters = ParameterTool.fromPropertiesFile(configPath);
        // register the parameters as global job parameters so every operator can read them
        env.getConfig().setGlobalJobParameters(parameters);

        // sink 2: the custom Redis sink implemented below
        summed2.addSink(new MyRedisSinkFunction());

        env.execute("AddSinkReview");
    }
}
MyRedisSinkFunction.java:

import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import redis.clients.jedis.Jedis;

public class MyRedisSinkFunction extends RichSinkFunction<Tuple3<String, String, Integer>> {
    private transient Jedis jedis;


    @Override
    public void open(Configuration config) {
        // read the connection settings from the global job parameters registered in main()
        ParameterTool parameters = (ParameterTool) getRuntimeContext().getExecutionConfig().getGlobalJobParameters();
        String host = parameters.getRequired("redis.host");
        String password = parameters.get("redis.password", "");
        Integer port = parameters.getInt("redis.port", 6379);
        Integer timeout = parameters.getInt("redis.timeout", 5000);
        Integer db = parameters.getInt("redis.db", 0);
        jedis = new Jedis(host, port, timeout);
        if (!password.isEmpty()) { // auth() with an empty password would fail
            jedis.auth(password);
        }
        jedis.select(db);
    }

    @Override
    public void invoke(Tuple3<String, String, Integer> value, Context context) throws Exception {
        if (!jedis.isConnected()) {
            jedis.connect();
        }
        // save as a Redis hash: key = f0 ("wordscount"), field = f1 (the word), value = the count
        jedis.hset(value.f0, value.f1, String.valueOf(value.f2));
    }

    @Override
    public void close() throws Exception {
        if (jedis != null) {
            jedis.close();
        }
    }
}
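For completeness, the config.txt read by ParameterTool.fromPropertiesFile() in main() is a plain Java properties file. Only redis.host is required; the other keys fall back to the defaults in open(). The values below are placeholders, and running the job also assumes the Jedis client (redis.clients:jedis) is on the classpath:

redis.host=192.168.1.100
redis.password=
redis.port=6379
redis.timeout=5000
redis.db=0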

The sinkout1.txt path passed to writeAsText() does not end up as a single file: Flink creates a directory with that name containing four files, one per parallel subtask, each holding that subtask's share of the sink output.
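If you want one plain file instead, a simple workaround is to force the sink's parallelism down to 1 (a sketch against the demo code above):

        // with a single subtask, sinkout1.txt is written as one plain file
        summed.writeAsText("C:\\Users\\Dell\\Desktop\\flinkTest\\sinkout1.txt", FileSystem.WriteMode.OVERWRITE)
                .setParallelism(1);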

Each of these files contains that subtask's accumulated word counts.

The sink results also land in Redis, under the hash key wordscount.
