While testing, on a whim I wanted to look at the not-yet-consumed messages in a new topic. I asked an AI about it, but it confidently talked nonsense and wasted some of my time, so I'm writing down what actually worked:
- Fix the startup failure when a topic is missing, with the error: Topic(s) [……] is/are not present and missingTopicsFatal is true
- Specify the offset range to consume from (see the sketch at the end of this post)
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

@Configuration
@Primary
public class CommonKafkaConfig extends KafkaProperties {

    @Value("${spring.kafka.concurrency}")
    public int concurrency;
    @Value("${spring.kafka.poll-timeout-ms}")
    public long pollTimeout;
    @Value("${spring.kafka.consumer.auto-offset-reset}")
    private String autoOffsetReset;
    @Value("${spring.kafka.consumer.auto-commit-interval-ms}")
    private String autoCommitInterval;
    @Value("${spring.kafka.consumer.bootstrap-servers}")
    private String mpSyncBootstrapServers;
    @Value("${spring.kafka.consumer.cloud-bootstrap-servers}")
    private String cloudSyncBootstrapServers;
    @Value("${spring.kafka.consumer.group-id}")
    private String mpSyncGroupId;
    // "poll", not "pool": this maps to Kafka's max.poll.records setting
    @Value("${spring.kafka.consumer.max-poll-records}")
    private String maxPollRecords;
    @Value("${spring.profiles.active}")
    String env;
    @Bean(name = "kafkaListenerContainerFactory")
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(concurrency);
        factory.getContainerProperties().setPollTimeout(pollTimeout);
        // Tolerate topics that do not exist yet; this fixes the startup error:
        // Topic(s) [……] is/are not present and missingTopicsFatal is true
        factory.getContainerProperties().setMissingTopicsFatal(false);
        return factory;
    }
    private ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    private Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, getBootStrap());
        // latest: consume only messages produced after the consumer starts.
        // earliest: when the group has no initial offset (or the offset is
        // invalid), start from the earliest available message.
        // Use the injected value instead of hardcoding "latest", so the
        // behavior can be switched from configuration.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, autoCommitInterval);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, mpSyncGroupId);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, maxPollRecords);
        return props;
    }
    // Choose the broker list according to the active profile.
    public String getBootStrap() {
        if (env.contains("cloud")) {
            return cloudSyncBootstrapServers;
        }
        return mpSyncBootstrapServers;
    }

    private Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, getBootStrap());
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }

    private ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean(name = "commonKafkaTemplate")
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
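For reference, a listener wired to this factory might look like the sketch below. The topic name and class are hypothetical stand-ins, not from the original project; similarly, injecting the commonKafkaTemplate bean and calling kafkaTemplate.send("demo.topic", "hello") would exercise the producer side.

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class DemoListener {

    // "demo.topic" is a made-up topic name used only for illustration.
    @KafkaListener(topics = "demo.topic", containerFactory = "kafkaListenerContainerFactory")
    public void onMessage(String message) {
        System.out.println("received: " + message);
    }
}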
Problem 1: the logs fill with a large number of [WARN] entries like this:
2024-04-02 16:58:32.906 WARN 15652 --- [ errorHandler-4-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-69, groupId=errorHandler] Error while fetching metadata with correlation id 973 : {mp.publish.grab.high.priority=UNKNOWN_TOPIC_OR_PARTITION, mp.publish.other=UNKNOWN_TOPIC_OR_PARTITION, mp.publish.grab.low.priority=UNKNOWN_TOPIC_OR_PARTITION}
You can add one line of configuration to logback.xml to suppress them (raising the logger's level to ERROR hides the WARN output):
<logger name="org.apache.kafka.clients.NetworkClient" level="ERROR"/>
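As for the second point, specifying the range of messages to consume: Spring Kafka lets a listener start from an explicit offset via @TopicPartition and @PartitionOffset, while Kafka itself has no built-in "end offset", so the upper bound of the range has to be enforced by hand. Below is a minimal sketch; the topic name, partition, and offsets are made-up values for illustration.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.PartitionOffset;
import org.springframework.kafka.annotation.TopicPartition;
import org.springframework.stereotype.Component;

@Component
public class OffsetRangeListener {

    // Hypothetical exclusive upper bound of the range.
    private static final long END_OFFSET = 200L;

    // Start at offset 100 of partition 0, ignoring the group's committed offset.
    @KafkaListener(
            containerFactory = "kafkaListenerContainerFactory",
            topicPartitions = @TopicPartition(
                    topic = "demo.topic",
                    partitionOffsets = @PartitionOffset(partition = "0", initialOffset = "100")))
    public void onMessage(ConsumerRecord<String, String> record) {
        if (record.offset() >= END_OFFSET) {
            return; // past the end of the range: skip
        }
        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
    }
}

This is also one way to inspect the not-yet-consumed messages of a new topic mentioned at the top: point the listener at the offset you care about and read forward from there.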