1. Parameter Settings
The following parameters are required or strongly recommended:
1. Topic(s) to subscribe to
2. Deserialization schema
3. Consumer property: bootstrap servers (cluster address)
4. Consumer property: consumer group id (a default is generated if unset, but an explicit id is easier to manage)
5. Consumer property: offset reset policy, e.g. earliest/latest
6. Dynamic partition discovery (so Flink detects when the number of Kafka partitions changes/increases)
7. With checkpointing enabled, offsets are committed as part of each checkpoint and also written back to Kafka's default offsets topic
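The checklist above can be wired up in one place. A minimal sketch, assuming the legacy `FlinkKafkaConsumer` connector; the server address, topic name, and group id are placeholder values:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase;

public class KafkaConsumerSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // 7. With checkpointing on, offsets are committed back to Kafka on each checkpoint.
        env.enableCheckpointing(5000);

        Properties properties = new Properties();
        // 3. Cluster address (placeholder)
        properties.setProperty("bootstrap.servers", "localhost:9092");
        // 4. Consumer group id (placeholder)
        properties.setProperty("group.id", "my-flink-job");
        // 5. Offset reset policy used when no committed offset exists
        properties.setProperty("auto.offset.reset", "earliest");
        // 6. Dynamic partition discovery: check for new partitions every 30 seconds
        properties.setProperty(FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, "30000");

        FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>(
                "flinktest",               // 1. subscribed topic (placeholder)
                new SimpleStringSchema(),  // 2. deserialization schema
                properties);

        env.addSource(consumer).print();
        env.execute("kafka-consumer-setup");
    }
}
```

Note that the job only runs against a reachable Kafka cluster; the snippet is meant as a template for mapping each checklist item onto an API call.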
2. Parameter Descriptions
![Kafka consumer parameter descriptions](https://img-blog.csdnimg.cn/86d1f46f0797436791f67fcde7c2df79.png?x-oss-process=image/watermark,type_d3F5LXplbmhlaQ,shadow_50,text_Q1NETiBA5bmz5bmz5peg5aWH5bCP56CB5Yac,size_15,color_FFFFFF,t_70,g_se,x_16)
![Kafka consumer parameter descriptions (continued)](https://img-blog.csdnimg.cn/1c70dd3f954442cd81597ba2ec804c9a.png?x-oss-process=image/watermark,type_d3F5LXplbmhlaQ,shadow_50,text_Q1NETiBA5bmz5bmz5peg5aWH5bCP56CB5Yac,size_15,color_FFFFFF,t_70,g_se,x_16)
3. Kafka Watermark Strategy
![Kafka watermark strategy](https://img-blog.csdnimg.cn/2337a5d9b99348ffb037ac30d28b7288.png?x-oss-process=image/watermark,type_d3F5LXplbmhlaQ,shadow_50,text_Q1NETiBA5bmz5bmz5peg5aWH5bCP56CB5Yac,size_15,color_FFFFFF,t_70,g_se,x_16)
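The key point of the Kafka watermark strategy is that assigning the `WatermarkStrategy` on the consumer itself (rather than on the downstream stream) makes Flink generate watermarks per Kafka partition inside the source. A minimal sketch; the topic, servers, and 5-second out-of-orderness bound are placeholder values:

```java
import java.time.Duration;
import java.util.Properties;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaWatermarkExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092"); // placeholder address
        properties.setProperty("group.id", "watermark-demo");          // placeholder group id

        FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>(
                "flinktest", new SimpleStringSchema(), properties);

        // Per-partition watermarking: each Kafka partition tracks its own
        // watermark, and the source emits the minimum across partitions.
        consumer.assignTimestampsAndWatermarks(
                WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(5)));

        env.addSource(consumer).print();
        env.execute("kafka-watermark-demo");
    }
}
```

Assigning the strategy after `env.addSource(...)` would instead merge all partitions first and can produce less accurate watermarks when partitions progress at different speeds.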
4. Dynamic Partition and Topic Discovery

```java
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", kafka_server_dev);
properties.setProperty("group.id", "testtttt");
// Check for new partitions/topics every 30 seconds. This property must be
// set before the consumer is constructed, or it has no effect.
properties.setProperty(FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, 30 * 1000 + "");

// Subscribing by pattern means newly created topics matching the regex
// are picked up as well.
Pattern topicPattern = Pattern.compile("topic[0-9]");
FlinkKafkaConsumerBase<String> kafkaDataPattern = new FlinkKafkaConsumer<>(
        topicPattern,
        new SimpleStringSchema(),
        properties
).setStartFromEarliest();
```
5. Start Consuming from Specific Partition Offsets

```java
String topic = "odsEventDetail";
String groupId = "console-con-new-offline-final";

// Map each partition of the subscribed topic to the offset to start from.
// Partitions without an entry fall back to the default startup mode.
Map<KafkaTopicPartition, Long> specificStartOffsets = new HashMap<>();
specificStartOffsets.put(new KafkaTopicPartition(topic, 0), 23L);
specificStartOffsets.put(new KafkaTopicPartition(topic, 1), 31L);
specificStartOffsets.put(new KafkaTopicPartition(topic, 2), 43L);

FlinkKafkaConsumerBase<ObjectNode> kafkaSource = MyKafkaUtil.getKafkaSource_ObjectNode(topic, groupId)
        .setStartFromSpecificOffsets(specificStartOffsets);
DataStreamSource<ObjectNode> kafkaDS = env.addSource(kafkaSource);
```
6. Configuring Idleness (withIdleness)

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "");
properties.setProperty("group.id", "");

FlinkKafkaConsumer<String> kafkaData = new FlinkKafkaConsumer<>(
        "flinktest",
        new SimpleStringSchema(),
        properties
);

// A partition that emits no data for 5 minutes is marked idle, so it no
// longer holds back the overall watermark. The consumer accepts a
// WatermarkStrategy directly; no cast to AssignerWithPeriodicWatermarks
// is needed (and such a cast would fail at runtime).
kafkaData.assignTimestampsAndWatermarks(
        WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofMinutes(2))
                .withIdleness(Duration.ofMinutes(5))
);
```