springboot + kafka (Part 2): Sending and Listening to Queue Messages with kafka-client

1. Introduction to kafka-client

My previous post, on sending and listening to a single queue, used springboot + spring-kafka. Recently the project received a requirement to integrate Tencent CKafka, which supports Kafka only up to version V1.1.1, so I used kafka-client version 1.0.2 to make the integration easy. kafka-client is the official client library for calling Kafka.
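For reference, the client library is published to Maven Central under the group org.apache.kafka with the artifact name kafka-clients, so the dependency to declare in your build is kafka-clients version 1.0.2.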

2. Custom Configuration

2.1 Configuration file

Below is the custom YAML configuration; adjust the values to fit your own needs.

kafka:
  #=============== producer =======================
  bootstrap-servers: 127.0.0.1:9092
  session-out: 30000
  retry: 3
  # upper bound, in bytes, on each batch of messages sent together
  batch-size: 232840
  buffer-memory: 33554432
  username:
  password:
  #=============== consumer =======================
  consumer-groupId: user-group-0
  auto-commit-interval: 5000
  enable.auto.commit: true

2.2 Configuring the producer and consumer

Because SASL authentication was used while debugging, the configuration below supports both secured and plain connections; feel free to adapt it. The meaning of each field will be explained in a later post. This article focuses on integrating with Spring Boot and starting the listeners when the application starts. The injected fields and the two @Bean methods below all live in one @Configuration class.

  • Inject the custom configuration
    @Value("${kafka.bootstrap-servers}")
    private String servers;
    @Value("${kafka.session-out}")
    private String sessionOut;
    @Value("${kafka.retry}")
    private int retry;
    @Value("${kafka.batch-size}")
    private int batchSize;
    @Value("${kafka.buffer-memory}")
    private String bufferMemory;
    @Value("${kafka.password}")
    private String kafkaPassword;
    @Value("${kafka.username}")
    private String kafkaUserName;
    @Value("${kafka.enable.auto.commit}")
    private String enableAutoCommit;

    //    --------------consumer-----------------
    @Value("${kafka.consumer-groupId}")
    private String groupId;
    @Value("${kafka.auto-commit-interval}")
    private String autoCommitInterval;
  • Configure the consumer
    @Bean
    @Scope("prototype")
    public KafkaConsumer<String, String> kafkaConsumer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", servers);
        props.put("group.id", groupId);
        props.put("auto.commit.interval.ms", autoCommitInterval);
        props.put("session.timeout.ms", sessionOut);
        if (!StringUtils.isEmpty(kafkaUserName) && !StringUtils.isEmpty(kafkaPassword)) {
            // the JAAS values must be quoted, i.e. username="user" password="pass";
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"" + kafkaUserName + "\" password=\"" + kafkaPassword + "\";");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
        }

        props.put("enable.auto.commit", enableAutoCommit);
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }
  • Configure the producer
    @Bean
    public KafkaProducer<String, String> kafkaProducer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", servers);
        props.put("retries", retry);
        props.put("buffer.memory", bufferMemory);
        props.put("batch.size", batchSize);
        if (!StringUtils.isEmpty(kafkaUserName) && !StringUtils.isEmpty(kafkaPassword)) {
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"" + kafkaUserName + "\" password=\"" + kafkaPassword + "\";");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
        }

        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "1");
        props.put("linger.ms", 10);
        props.put("max.block.ms", 3000);
        return new KafkaProducer<>(props);
    }

}

3. Sending Messages over REST

Since messages are sent repeatedly, the KafkaProducer must stay open, so the close() call is commented out here. This is a bare-bones example; in real development the interface should be exposed through a proper controller.

   @Autowired
   private KafkaProducer<String, String> kafkaProducer;

   @RequestMapping("/send/{topic}/{message}")
   public String sendMessage(@PathVariable("topic") String topic, @PathVariable("message") String message) {
       asynSendRecord(topic, message);
       return message;
   }

   // send a message asynchronously
   public void asynSendRecord(String topic, String message) {
       ProducerRecord<String, String> record = new ProducerRecord<>(topic, message);
       log.info("record: {}", record.value());
       kafkaProducer.send(record, (recordMetadata, e) -> {
           if (e == null) {
               log.info("message sent: offset: {} timestamp: {} topic: {} partition: {}", recordMetadata.offset(), recordMetadata.timestamp(), recordMetadata.topic(), recordMetadata.partition());
           } else {
               // note: String.format uses %s, not {}; log the exception directly instead
               log.error("failed to send message", e);
           }
       });

       // kafkaProducer.close(); // kept open so the producer can be reused
   }
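
The callback above makes the send asynchronous. If a caller needs to block until the broker acknowledges the write, send() also returns a Future; below is a minimal sketch of a synchronous variant (the method name syncSendRecord and the 5-second bound are my own choices; it needs java.util.concurrent.TimeUnit and org.apache.kafka.clients.producer.RecordMetadata on the import list):

   // send a message and block until the broker acknowledges it
   public RecordMetadata syncSendRecord(String topic, String message) throws Exception {
       ProducerRecord<String, String> record = new ProducerRecord<>(topic, message);
       // Future#get blocks the caller; pass a timeout so a dead broker cannot hang the request forever
       return kafkaProducer.send(record).get(5, TimeUnit.SECONDS);
   }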

4. Listening For and Consuming Messages

    @Autowired
    private KafkaMessageService kafkaMessageService;

    public void onMessage(KafkaConsumer<String, String> kafkaConsumer, List<String> topic) {
        kafkaConsumer.subscribe(topic);
        log.info("started listening on topic {}", topic);
        try {
            while (true) {
                ConsumerRecords<String, String> records = kafkaConsumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    log.info("partition:{} offset = {}, key = {}, value = {}", record.partition(), record.offset(), record.key(), record.value());
                    try {
                        // the StringDeserializer already produced a String; no re-encoding needed
                        String messageData = record.value();
                        log.info("parsed message content: {}", messageData);
                        handle(record.topic(), messageData);
                    } catch (Exception e) {
                        log.error("error while processing message", e);
                    }
                }
            }
        } finally {
            // kafkaConsumer.close();
        }
    }
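
Because the while (true) loop never returns, the commented-out close() in the finally block is unreachable as written. The standard way to stop such a loop is KafkaConsumer#wakeup(), which is safe to call from another thread and makes a blocked poll() throw WakeupException. A minimal sketch (the stop method is my own addition):

    // call from a shutdown hook or another thread; the blocked poll() will throw WakeupException
    public void stop(KafkaConsumer<String, String> kafkaConsumer) {
        kafkaConsumer.wakeup();
    }

In onMessage, catch org.apache.kafka.common.errors.WakeupException around the loop and let the finally block call kafkaConsumer.close().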

5. Starting the Listeners on Spring Boot Startup

5.1 Injecting the consumer and listener objects

Here we implement the org.springframework.boot.ApplicationRunner interface and start listening on the topics inside the overridden run method.
When I first thought about starting the listeners, I injected the Spring consumer service and the listening handler like this:

    @Autowired
    KafkaListenMessageHandler kafkaListenMessageHandler;
    @Autowired
    KafkaConsumer<String, String> kafkaConsumer;

Later, because the business involved multiple topics and I had written all of them into a single handler, prototype scope was needed to get multiple instances. So @Scope("prototype") was added to both KafkaConsumer and KafkaListenMessageHandler, as shown below; KafkaListenMessageHandler is the bean that does the listening and consuming.

  • The KafkaConsumer bean
    @Bean
    @Scope("prototype")
    public KafkaConsumer<String, String> kafkaConsumer() {...}
  • The KafkaListenMessageHandler bean (a class is registered with @Component, not @Bean)
    @Component
    @Scope("prototype")
    public class KafkaListenMessageHandler {...}
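
With prototype scope, each lookup from the application context returns a fresh instance, which is what lets every listener thread own its own consumer. A quick illustrative check (the variable names are my own):

    // each getBean call on a prototype-scoped bean yields a new object
    KafkaConsumer<String, String> first = applicationContext.getBean(KafkaConsumer.class);
    KafkaConsumer<String, String> second = applicationContext.getBean(KafkaConsumer.class);
    log.info("distinct instances: {}", first != second); // true for prototype scope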

5.2 Invoking the listener from threads at startup

At first I did it like this, where k-a and k-b are the queues to listen on. But at startup kafkaConsumer and kafkaListenMessageHandler must also be one instance per listener: a field injected into a singleton is resolved only once, so both threads below share a single KafkaConsumer, and KafkaConsumer is not thread-safe. The following is an incorrect example:

new Thread(() -> kafkaListenMessageHandler.onMessage(kafkaConsumer,Arrays.asList("k-a"))).start();
new Thread(() -> kafkaListenMessageHandler.onMessage(kafkaConsumer,Arrays.asList("k-b"))).start();

Later I felt a thread pool was the better fit and switched to one. The number of threads can be adjusted to the business:

public static ExecutorService executorService = Executors.newFixedThreadPool(2);

@Override
public void run(ApplicationArguments args) {
    log.info("listener service starting!");
    executorService.execute(() -> {
        KafkaListenMessageHandler kafkaListenMessageHandler = SpringBeanUtils.getBean(KafkaListenMessageHandler.class);
        kafkaListenMessageHandler.onMessage(SpringBeanUtils.getBean("kafkaConsumer"), Arrays.asList("k-a"));
    });
    executorService.execute(() -> {
        KafkaListenMessageHandler kafkaListenMessageHandler = SpringBeanUtils.getBean(KafkaListenMessageHandler.class);
        kafkaListenMessageHandler.onMessage(SpringBeanUtils.getBean("kafkaConsumer"), Arrays.asList("k-b"));
    });
}
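
One thing the runner above never does is release its resources on shutdown. A minimal sketch of a cleanup hook, assuming the runner is itself a Spring bean (the shutdown method and the use of javax.annotation.PreDestroy are my additions):

@PreDestroy
public void shutdown() {
    // ideally wake up each consumer first (see the wakeup() note in section 4),
    // then stop the polling threads when the application context closes
    executorService.shutdownNow();
}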

5.3 The SpringBeanUtils#getBean method for fetching beans

@SuppressWarnings("unchecked")
@Component
public class SpringBeanUtils implements ApplicationContextAware {

    private static ApplicationContext applicationContext;
    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        SpringBeanUtils.applicationContext = applicationContext;
    }
    public static <T> T getBean(String beanName) {
        if (applicationContext.containsBean(beanName)) {
            return (T) applicationContext.getBean(beanName);
        } else {
            return null;
        }
    }
    public static <T> T getBean(Class<T> clazz) {
        return applicationContext.getBean(clazz);
    }
}
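
This helper works because SpringBeanUtils is itself a @Component: Spring calls setApplicationContext while the context is being built, so the static field is populated before ApplicationRunner#run executes, and the listener threads can safely look up their prototype-scoped beans.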

That is all the content. It was written late at night, so many details go unexplained; if you have questions, feel free to comment. Part 3 will cover the multi-queue listening and consumption promised earlier, and part 4 will cover automatic queue creation and modifying the config.
