Flink: reading a Kafka topic, and creating the topic if it does not exist

This snippet shows how to check for a Kafka topic and create it on the fly from Java with Apache Flink, and then build a KafkaSource. If the given topic does not exist, the program creates it, optionally attaching a retention policy. It then configures the KafkaSource (bootstrap servers, group id, value deserializer, starting offsets) and returns a DataStreamSource.

The code is as follows:

// Imports needed by this snippet. JSONObject is assumed to be com.alibaba.fastjson.JSONObject;
// any JSON type exposing getString(String) works.
import com.alibaba.fastjson.JSONObject;
import com.google.common.collect.Lists;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.ArrayList;
import java.util.Collections;
import java.util.Properties;
import java.util.UUID;
import java.util.concurrent.ExecutionException;

    // Logger added for the LOG calls below; KafkaSourceUtil is a placeholder for the enclosing class name
    private static final Logger LOG = LoggerFactory.getLogger(KafkaSourceUtil.class);

    // Topics in this list are created with a retention policy (TTL) applied
    private static final ArrayList<String> createTTLTopics = Lists.newArrayList("real-time-screen-robot-work-count", "medical-device-status-history", "deviceStatusChange", "flink-test");

    /**
     * Custom builder that produces a KafkaSource and wraps it in a DataStreamSource.
     *
     * @param env        Flink stream execution environment
     * @param kafka      JSON config carrying the broker list under the "boot_strap" key
     * @param topic      topic to consume; created if it does not exist
     * @param groupId    Kafka consumer group id
     * @param sourceName name for the Flink source operator
     * @param offsets    starting-offsets initializer
     * @return a DataStreamSource&lt;String&gt; reading from the topic
     * @author sheng
     * @date 2022/5/31 11:15 AM
     */
    public static DataStreamSource<String> getNewKafkaSource(StreamExecutionEnvironment env,
                                                             JSONObject kafka,
                                                             String topic,
                                                             String groupId,
                                                             String sourceName,
                                                             OffsetsInitializer offsets) {
        // Flink 1.15+ requires this newer builder API to create a Kafka source
        // Check whether the topic exists and create it if it does not
        Properties adminProperties = new Properties();
        adminProperties.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getString("boot_strap"));
        // try-with-resources ensures the AdminClient is closed (the original leaked it)
        try (AdminClient adminClient = AdminClient.create(adminProperties)) {
            if (!adminClient.listTopics().names().get().contains(topic)) {
                // 10 partitions, replication factor 1
                NewTopic newTopic = new NewTopic(topic, 10, (short) 1);
                if (createTTLTopics.contains(topic)) {
                    // Pass any topic-level configs you need, e.g. a 24-hour retention
                    newTopic.configs(Collections.singletonMap("retention.ms", "86400000"));
                }
                adminClient.createTopics(Collections.singletonList(newTopic)).all().get();
                LOG.warn("Topic {} did not exist; created it.", topic);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException("Interrupted while checking/creating topic " + topic, e);
        } catch (ExecutionException e) {
            throw new RuntimeException("Failed to check/create topic " + topic, e);
        }
        KafkaSource<String> source = KafkaSource.<String>builder()
                // Discover newly added partitions every 60 s
                .setProperty("partition.discovery.interval.ms", "60000")
                // Random client-id prefix keeps client ids unique when several jobs share a broker
                .setClientIdPrefix(UUID.randomUUID().toString())
                .setBootstrapServers(kafka.getString("boot_strap"))
                .setTopics(topic)
                .setGroupId(groupId)
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .setStartingOffsets(offsets)
                .build();

        return env.fromSource(source, WatermarkStrategy.noWatermarks(), sourceName);
    }
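
A minimal usage sketch, assuming the method lives in a utility class called KafkaSourceUtil (a placeholder name) and that localhost:9092 stands in for your real broker address:

```java
import com.alibaba.fastjson.JSONObject;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSourceDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // getNewKafkaSource reads the broker list from the "boot_strap" key
        JSONObject kafka = new JSONObject();
        kafka.put("boot_strap", "localhost:9092"); // placeholder broker address

        // "flink-test" is in createTTLTopics, so if missing it is created with retention.ms=86400000
        DataStreamSource<String> stream = KafkaSourceUtil.getNewKafkaSource(
                env, kafka, "flink-test", "demo-group", "kafka-source", OffsetsInitializer.earliest());

        stream.print();
        env.execute("kafka-topic-autocreate-demo");
    }
}
```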

Sure, you can follow these steps:

1. Add the Kafka and MySQL dependencies to Flink (since Flink 1.15 the connector artifacts no longer carry a Scala suffix):

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-jdbc</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.23</version>
</dependency>
```

2. Create the Flink SQL execution environment (the Blink planner has been the only planner since Flink 1.14, so `useBlinkPlanner()` is no longer needed):

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
EnvironmentSettings settings = EnvironmentSettings.newInstance()
        .inStreamingMode()
        .build();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);
```

3. Register the Kafka source and the MySQL sink. Note that the plain 'kafka' connector rejects PRIMARY KEY declarations (use 'upsert-kafka' for keyed upsert semantics), so the key is declared only on the JDBC sink, where Flink requires it to be NOT ENFORCED:

```java
tableEnv.executeSql("CREATE TABLE kafka_source (\n" +
        "  id INT,\n" +
        "  name STRING,\n" +
        "  age INT\n" +
        ") WITH (\n" +
        "  'connector' = 'kafka',\n" +
        "  'topic' = 'test',\n" +
        "  'properties.bootstrap.servers' = 'localhost:9092',\n" +
        "  'properties.group.id' = 'testGroup',\n" +
        "  'format' = 'json',\n" +
        "  'scan.startup.mode' = 'earliest-offset'\n" +
        ")");

tableEnv.executeSql("CREATE TABLE mysql_sink (\n" +
        "  id INT,\n" +
        "  name STRING,\n" +
        "  age INT,\n" +
        "  PRIMARY KEY (id) NOT ENFORCED\n" +
        ") WITH (\n" +
        "  'connector' = 'jdbc',\n" +
        "  'url' = 'jdbc:mysql://localhost:3306/test',\n" +
        "  'table-name' = 'user',\n" +
        "  'driver' = 'com.mysql.cj.jdbc.Driver',\n" +
        "  'username' = 'root',\n" +
        "  'password' = 'root'\n" +
        ")");
```

4. Use Flink SQL to read from Kafka and write into MySQL. `executeSql` submits the INSERT job immediately, so a separate `env.execute()` call is not needed here:

```java
tableEnv.executeSql("INSERT INTO mysql_sink SELECT * FROM kafka_source");
```

With this in place, Flink SQL reads data from Kafka and writes it into the MySQL database.
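
To sanity-check the pipeline, you could push one JSON record into the test topic with the plain Kafka producer client; a minimal sketch (the broker address and record values are assumptions matching the table definitions above):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SampleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // same broker as in the kafka_source DDL
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // JSON shaped like the kafka_source schema: id INT, name STRING, age INT
            producer.send(new ProducerRecord<>("test", "{\"id\":1,\"name\":\"alice\",\"age\":30}"));
        }
    }
}
```

Once the job is running, the row should appear in the MySQL user table, upserted on id thanks to the primary key on the JDBC sink.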