Flink SQL: Monitoring Whether a Kafka Topic Is Receiving Data


1. The SQL job submitted via StreamX:

-- executed via StreamX

-- read from Kafka and create the source table
-- source topic: ihs_v2x_tgac_vhc_rt
CREATE TABLE `stg_traffic_participants_kafka` (
    `ts` TIMESTAMP(3) METADATA FROM 'timestamp'
) WITH (
    'connector' = 'kafka',
    'format' = 'json',
    'topic' = 'ihs_v2x_tgac_vhc_rt',
    'scan.startup.mode' = 'latest-offset',
    'properties.bootstrap.servers' = 'ip:port',
    'properties.group.id' = 'ihs_v2x_tgac_vhc_rt'
);

-- sink to Kafka
-- sink topic: test
CREATE TABLE `test` (
    `ts` TIMESTAMP(3) METADATA FROM 'timestamp'
) WITH (
    'connector' = 'kafka',
    'format' = 'json',
    'topic' = 'test',
    'properties.bootstrap.servers' = 'ip:port',
    'sink.partitioner' = 'round-robin'
);

INSERT INTO test
SELECT
    ts
FROM stg_traffic_participants_kafka;
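As a quick sanity check, the source table can also be queried directly instead of wiring up a sink. A minimal sketch, assuming the statement is run interactively from the Flink SQL client rather than StreamX:

```sql
-- If the topic is receiving data (and each record is a single JSON object),
-- rows carrying only the Kafka record timestamp start streaming back.
SELECT ts FROM stg_traffic_participants_kafka;
```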

2. When data arrives, the job fails with a deserialization error, and the error message contains the raw record, revealing the actual data format in the topic.

java.io.IOException: Failed to deserialize JSON '[{"alarmCode":"00000000000000000000000000000011","derection":"187","encrypt":"1","gpsTime":"2022-08-29 10:22:40","lat":"39.511066","lon":"115.997756","stateCode":"00000000000000000000000000000000","vec1":"18","vec2":"18","vehicleColor":"1","vehicleNo":"京KHM007"}]'.
	at org.apache.flink.formats.json.JsonRowDataDeserializationSchema.deserialize(JsonRowDataDeserializationSchema.java:105)
	at org.apache.flink.formats.json.JsonRowDataDeserializationSchema.deserialize(JsonRowDataDeserializationSchema.java:46)
	at org.apache.flink.api.common.serialization.DeserializationSchema.deserialize(DeserializationSchema.java:82)
	at org.apache.flink.streaming.connectors.kafka.table.DynamicKafkaDeserializationSchema.deserialize(DynamicKafkaDeserializationSchema.java:130)
	at org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:179)
	at org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher.runFetchLoop(KafkaFetcher.java:142)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:826)
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:66)
	at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:263)
Caused by: java.lang.ClassCastException: org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode cannot be cast to org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode
	at org.apache.flink.formats.json.JsonToRowDataConverters.lambda$createRowConverter$ef66fe9a$1(JsonToRowDataConverters.java:343)
	at org.apache.flink.formats.json.JsonToRowDataConverters.lambda$wrapIntoNullableConverter$de0b9253$1(JsonToRowDataConverters.java:375)
	at org.apache.flink.formats.json.JsonRowDataDeserializationSchema.deserialize(JsonRowDataDeserializationSchema.java:99)
	... 9 more
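The root cause is visible in the ClassCastException: each record in this topic is a JSON array (`[{...}]`), while the `json` format maps one Kafka record to one row and therefore expects a top-level JSON object, not an array. To consume such payloads without crashing the job, one option is to read each record as a plain string and defer parsing. A minimal sketch, assuming the `raw` format (available since Flink 1.12) is on the classpath:

```sql
-- Read every record verbatim; parsing of the JSON array is then deferred
-- to downstream SQL or a UDF.
CREATE TABLE `stg_traffic_participants_raw` (
    `payload` STRING
) WITH (
    'connector' = 'kafka',
    'format' = 'raw',
    'topic' = 'ihs_v2x_tgac_vhc_rt',
    'scan.startup.mode' = 'latest-offset',
    'properties.bootstrap.servers' = 'ip:port',
    'properties.group.id' = 'ihs_v2x_tgac_vhc_rt_raw'  -- hypothetical group id
);
```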

 

Going a step further, the data read from Kafka can also be written into MySQL. The steps are as follows:

1. Add the Kafka and MySQL dependencies to the Flink project:

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-jdbc_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.23</version>
</dependency>
```

2. Create the Flink SQL execution environment:

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
EnvironmentSettings settings = EnvironmentSettings.newInstance()
        .useBlinkPlanner()
        .inStreamingMode()
        .build();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);
```

3. Register the Kafka source and the MySQL sink. Note that the plain `kafka` connector does not support a PRIMARY KEY constraint (that is `upsert-kafka`), while the JDBC sink's primary key must be declared `NOT ENFORCED`:

```java
tableEnv.executeSql("CREATE TABLE kafka_source (\n" +
        "  id INT,\n" +
        "  name STRING,\n" +
        "  age INT\n" +
        ") WITH (\n" +
        "  'connector' = 'kafka',\n" +
        "  'topic' = 'test',\n" +
        "  'properties.bootstrap.servers' = 'localhost:9092',\n" +
        "  'properties.group.id' = 'testGroup',\n" +
        "  'format' = 'json',\n" +
        "  'scan.startup.mode' = 'earliest-offset'\n" +
        ")");

tableEnv.executeSql("CREATE TABLE mysql_sink (\n" +
        "  id INT,\n" +
        "  name STRING,\n" +
        "  age INT,\n" +
        "  PRIMARY KEY (id) NOT ENFORCED\n" +
        ") WITH (\n" +
        "  'connector' = 'jdbc',\n" +
        "  'url' = 'jdbc:mysql://localhost:3306/test',\n" +
        "  'table-name' = 'user',\n" +
        "  'driver' = 'com.mysql.cj.jdbc.Driver',\n" +
        "  'username' = 'root',\n" +
        "  'password' = 'root'\n" +
        ")");
```

4. Read from the Kafka source and write into the MySQL sink. `executeSql` submits the INSERT job on its own, so a separate `env.execute()` call is not needed (and would fail with an empty topology):

```java
tableEnv.executeSql("INSERT INTO mysql_sink SELECT * FROM kafka_source");
```

With this, Flink SQL reads data from Kafka and writes it into the MySQL database.
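For completeness, the `mysql_sink` table maps onto a physical MySQL table. A minimal sketch of the MySQL-side DDL, assuming the `test` database already exists:

```sql
CREATE TABLE `user` (
    id   INT PRIMARY KEY,
    name VARCHAR(255),  -- assumed length; adjust to the real schema
    age  INT
);
```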