Kafka: commonly used non-basic core configuration settings

This post shows how to keep a Kafka project from failing to start when a topic is missing: adjust the `ConsumerConfig` and logging configuration, set `missingTopicsFatal` to `false`, and silence the `NetworkClient` warnings so the project runs normally.

While testing, on a whim, I wanted to consume the not-yet-consumed messages in a new topic. I asked an AI about it, but it confidently made things up and wasted some time, so I am writing the notes down here:

1. Fix the project failing to start when a topic is missing, with the error: Topic(s) [……] is/are not present and missingTopicsFatal is true
2. Specify where message consumption starts (see the listener sketch after the configuration class)

Both points are handled in the configuration class below:
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

@Configuration
@Primary
public class CommonKafkaConfig extends KafkaProperties {

    @Value("${spring.kafka.concurrency}")
    public int concurrency;

    @Value("${spring.kafka.poll-timeout-ms}")
    public long pollTimeout;

    @Value("${spring.kafka.consumer.auto-offset-reset}")
    private String autoOffsetReset;

    @Value("${spring.kafka.consumer.auto-commit-interval-ms}")
    private String autoCommitInterval;

    @Value("${spring.kafka.consumer.bootstrap-servers}")
    private String mpSyncBootstrapServers;

    @Value("${spring.kafka.consumer.cloud-bootstrap-servers}")
    private String cloudSyncBootstrapServers;

    @Value("${spring.kafka.consumer.group-id}")
    private String mpSyncGroupId;

    @Value("${spring.kafka.consumer.max-pool-records}")
    private String maxPoolRecords;

    @Value("${spring.profiles.active}")
    String env;

    @Bean(name = "kafkaListenerContainerFactory")
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(concurrency);
        factory.getContainerProperties().setPollTimeout(pollTimeout);
        // Tolerate missing topics; fixes the startup error:
        // Topic(s) [……] is/are not present and missingTopicsFatal is true
        factory.getContainerProperties().setMissingTopicsFatal(false);
        return factory;
    }

    private ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    private Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, getBootStrap());
        // latest: start from the newest messages;
        // earliest: when there is no initial offset or the offset is invalid,
        //           start from the earliest available message
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, autoCommitInterval);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, mpSyncGroupId);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, maxPoolRecords);
        return props;
    }

    // Pick the bootstrap servers based on the active profile (cloud vs. on-premise)
    public String getBootStrap() {
        if (env.contains("cloud")) return cloudSyncBootstrapServers;
        return mpSyncBootstrapServers;
    }

    private Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, getBootStrap());
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }

    private ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean(name = "commonKafkaTemplate")
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
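For point 2 above (choosing where consumption starts), auto-offset-reset only applies when the group has no committed offset. To replay from an explicit position, spring-kafka's ConsumerSeekAware lets a listener seek once its partitions are assigned. The sketch below is a minimal illustration assuming spring-kafka 2.3+; the topic name mp.publish.other is borrowed from the warning log shown later in this post, and the one-hour look-back is an arbitrary example, not part of the original configuration.

import java.time.Duration;
import java.util.Map;

import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.listener.ConsumerSeekAware;
import org.springframework.stereotype.Component;

@Component
public class GrabMessageListener implements ConsumerSeekAware {

    // Binds to the factory bean defined in CommonKafkaConfig above;
    // the topic name is only an illustration
    @KafkaListener(topics = "mp.publish.other",
            containerFactory = "kafkaListenerContainerFactory")
    public void onMessage(String message) {
        System.out.println("received: " + message);
    }

    // Called once partitions are assigned: seek every assigned partition to the
    // first offset whose timestamp falls within the last hour, so only recent
    // messages are replayed (the one-hour window is an arbitrary example)
    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
                                     ConsumerSeekCallback callback) {
        long oneHourAgo = System.currentTimeMillis() - Duration.ofHours(1).toMillis();
        callback.seekToTimestamp(assignments.keySet(), oneHourAgo);
    }
}

If the goal is simply to read a new topic from the very beginning, ConsumerSeekCallback also offers seekToBeginning, which avoids the timestamp arithmetic.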

Problem 1: the log contains a large number of [WARN] entries such as:

2024-04-02 16:58:32.906 WARN 15652 --- [ errorHandler-4-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-69, groupId=errorHandler] Error while fetching metadata with correlation id 973 : {mp.publish.grab.high.priority=UNKNOWN_TOPIC_OR_PARTITION, mp.publish.other=UNKNOWN_TOPIC_OR_PARTITION, mp.publish.grab.low.priority=UNKNOWN_TOPIC_OR_PARTITION}

You can add one line to logback.xml to suppress these logs:


<logger name="org.apache.kafka.clients.NetworkClient" level="ERROR"/>
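
For context, here is a minimal sketch of a logback.xml with that line in place; the appender name and pattern are generic placeholders, not taken from the original project:

<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- Raise NetworkClient to ERROR so the UNKNOWN_TOPIC_OR_PARTITION warnings are hidden -->
    <logger name="org.apache.kafka.clients.NetworkClient" level="ERROR"/>

    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>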
