Spring Boot + Kafka Integration Example

  I have worked with message middleware such as RocketMQ before; a recent work project uses Kafka, whose mechanics are similar to RocketMQ's, so I am posting the code and notes I collected here.

Running Kafka

  Kafka requires the JDK and Scala to be installed first. Scala is available at https://www.scala-lang.org/download/scala2.html (please find the JDK on your own). Then download Kafka from https://kafka.apache.org/downloads.html, choosing the binary release whose Scala version matches the one you installed.
  After downloading and unpacking, note that Windows scripts are bundled, so there is no need to spin up a Linux VM to run it. Create a logs folder inside the unpacked directory, then edit server.properties under the config folder and add one line:
  log.dirs=<unpacked path>/logs
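  For example, with Kafka unpacked to C:\kafka (a hypothetical path), the added line in config/server.properties would look like the snippet below; forward slashes (or escaped backslashes) are safer because backslash is an escape character in .properties files:

# hypothetical install location; forward slashes avoid .properties escaping issues
log.dirs=C:/kafka/logs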

  Run the following two commands in two separate Windows cmd windows:
  <unpacked path>\bin\windows\zookeeper-server-start.bat <unpacked path>\config\zookeeper.properties
  <unpacked path>\bin\windows\kafka-server-start.bat <unpacked path>\config\server.properties
  The first starts ZooKeeper, the second starts Kafka; I installed version 3.1.1 here.
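  As a quick sanity check (my addition, not part of the original steps), you can ask the broker to list its topics with the bundled CLI, assuming the default port 9092:

<unpacked path>\bin\windows\kafka-topics.bat --bootstrap-server 127.0.0.1:9092 --list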

Maven Dependencies

<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>2.3.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams</artifactId>
        <version>2.3.1</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
        <version>2.3.4.RELEASE</version>
    </dependency>
</dependencies>

配置类

@Configuration
@EnableKafka
public class KafkaConfig {

    @Bean
    public KafkaAdmin admin() {
        Map<String, Object> configs = new HashMap<String, Object>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        return new KafkaAdmin(configs);
    }

    @Bean
    public NewTopic topic1() {
        // This topic has 2 partitions and a replication factor of 1
        return new NewTopic("bar2", 2, (short)1);
    }


    // Producer configuration
    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<String, Object>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        // Wait for all in-sync replicas to acknowledge each write
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        // Wait up to 1 ms for more records before sending a batch
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<String, String>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<String, String>(producerFactory());
    }


    // Consumer configuration
    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<String, Object>();
        props.put("bootstrap.servers", "127.0.0.1:9092");
        props.put("group.id", "test");
        // Commit offsets automatically every second
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    @Bean
    public ConsumerFactory<String,String> consumerFactory(){
        return new DefaultKafkaConsumerFactory<String, String>(consumerConfigs());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<String, String>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }

}
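
  As an aside, spring-kafka 2.3 also provides a TopicBuilder for declaring topics; the following sketch (my illustration, not part of the original code) is equivalent to the topic1() bean above:

@Bean
public NewTopic topic1() {
    // Same topic as above: 2 partitions, replication factor 1
    return TopicBuilder.name("bar2")
            .partitions(2)
            .replicas(1)
            .build();
}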

Producer

public class Producer {

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        AnnotationConfigApplicationContext ctx =
                new AnnotationConfigApplicationContext(KafkaConfig.class);
        KafkaTemplate<String, String> kafkaTemplate =
                (KafkaTemplate<String, String>) ctx.getBean("kafkaTemplate");
        String data = "male";
        // Send a message with key "sex" to partition 0
        ListenableFuture<SendResult<String, String>> send =
                kafkaTemplate.send("bar2", 0, "sex", data);
        send.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            public void onFailure(Throwable throwable) {
                System.out.println("send failed: " + throwable.getMessage());
            }

            public void onSuccess(SendResult<String, String> result) {
                System.out.println("send succeeded: " + result.getRecordMetadata());
            }
        });
    }

}
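
  send() is asynchronous. If you would rather block until the broker acknowledges the write, you can call get() on the returned future instead of registering a callback; a minimal sketch:

// Blocks until the send completes, or throws if it fails
SendResult<String, String> result = kafkaTemplate.send("bar2", 0, "sex", data).get();
System.out.println("written at offset " + result.getRecordMetadata().offset());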

Consumer Listener

public class SimpleConsumerListener {

    @KafkaListener(id = "myContainer0", topics = { "bar2" })
    public void listen(ConsumerRecord<?, ?> record) {
        System.out.println(record.topic());
        System.out.println(record.key());
        System.out.println(record.value());
    }
    
}

  This class must be registered as a bean in KafkaConfig:

@Bean
public SimpleConsumerListener simpleConsumerListener(){
    return new SimpleConsumerListener();
}

  Running Producer.main() prints the topic, key, and value received by the listener (screenshot omitted).

Other Uses of @KafkaListener

  • Specifying which topics and partitions to consume (an offset-controlled variant is sketched after this example)
// Only receive messages from partition 0 of the bar2 topic
@KafkaListener(
        id = "myContainer1",
        topicPartitions = {
                @TopicPartition(topic = "bar2", partitions = { "0" })
        })
public void listen1(ConsumerRecord<?, ?> record) {
    // Business logic here
}
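
  If you also need to control the starting offset, @TopicPartition accepts partitionOffsets; below is a sketch with a hypothetical listener id that re-reads partition 0 from the beginning on every start:

@KafkaListener(
        id = "myContainerFromStart",
        topicPartitions = {
                @TopicPartition(topic = "bar2", partitionOffsets =
                        @PartitionOffset(partition = "0", initialOffset = "0"))
        })
public void listenFromStart(ConsumerRecord<?, ?> record) {
    // Partition 0 is sought back to offset 0 when the container starts
}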
  • Multithreaded consumption
    Consumption is single-threaded by default. Have Producer's main() send 2 messages:
// Send one message to partition 0 and one to partition 1
kafkaTemplate.send("bar2", 0, "sex", "male");
kafkaTemplate.send("bar2", 1, "sex", "female");

  The listener:

@KafkaListener(
        id = "myContainer0",
        topicPartitions = {
                @TopicPartition(topic = "bar2", partitions = { "0", "1" }),
        })
public void listen(ConsumerRecord<?, ?> record) {
    System.out.println(Thread.currentThread().getName());
    System.out.println(record.key());
    System.out.println(record.value());
}

  Both records are printed, but with the same thread name, confirming single-threaded consumption (screenshot omitted).

  Multithreaded consumption (2 threads):

// Controlled by the concurrency attribute; it should not exceed the partition count
@KafkaListener(
        id = "myContainer0",
        topicPartitions = {
                @TopicPartition(topic = "bar2", partitions = { "0", "1" })
        }, concurrency = "2")
public void listen(ConsumerRecord<?, ?> record) {
    System.out.println(Thread.currentThread().getName());
    System.out.println(record.key());
    System.out.println(record.value());
}

  (screenshot omitted: the two partitions are now consumed on two different threads)

  • Batch consumption
    In KafkaConfig's consumerConfigs(), add:
// Maximum number of records the consumer fetches per poll
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "3");

  In kafkaListenerContainerFactory(), add:

factory.setBatchListener(true);
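
  Putting the change together, the modified factory bean from the configuration class would now look like this:

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<String, String>();
    factory.setConsumerFactory(consumerFactory());
    // Deliver records to the listener as a List instead of one at a time
    factory.setBatchListener(true);
    return factory;
}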

  Now send a few more messages; Producer's main() sends 5:

kafkaTemplate.send("bar2", 1, "age", "0");
kafkaTemplate.send("bar2", 1, "age", "1");
kafkaTemplate.send("bar2", 0, "age", "2");
kafkaTemplate.send("bar2", 1, "age", "3");
kafkaTemplate.send("bar2", 0, "age", "4");

  The listener's parameter must be changed to a list:

@KafkaListener(
        id = "myContainer0",
        topicPartitions = {
                @TopicPartition(topic = "bar2", partitions = { "0", "1" })
        })
public void listen(List<ConsumerRecord<?, ?>> list) {
    System.out.println("batch size: " + list.size());
    for (int i = 0; i < list.size(); i++) {
        ConsumerRecord<?, ?> record = list.get(i);
        System.out.println(record.key());
        System.out.println(record.value());
    }
}

  The run shows the records arriving in batches of up to 3 (screenshot omitted).

  • Message retries and the dead letter topic
    When the listener method throws an exception, the message can be retried; once the retry count reaches the limit, the message is published to the dead letter topic, whose name is the original topic plus the suffix .DLT.

  Add the following to kafkaListenerContainerFactory() in KafkaConfig:

// Retry every 5 seconds, up to 3 times; after that the record is published to the dead letter topic
BackOff backOff = new FixedBackOff(5 * 1000L, 3L);
factory.setErrorHandler(new SeekToCurrentErrorHandler(
        new DeadLetterPublishingRecoverer(kafkaTemplate()), backOff));

  The listener class:

    @KafkaListener(
            id = "myContainer0",
            topicPartitions = {
                    @TopicPartition(topic = "bar2", partitions = { "0", "1" })
            })
    public void listen(ConsumerRecord<?, ?> record) {
        System.out.println(record.key());
        System.out.println(record.value());
        // Always fail so the record is retried and eventually dead-lettered
        throw new RuntimeException("simulated failure");
    }

    @KafkaListener(id = "myContainer2", topics = "bar2.DLT")
    public void listen2(ConsumerRecord<?, ?> record) {
        System.out.println("arrived in dead letter topic");
        System.out.println("topic: " + record.topic());
        System.out.println("key: " + record.key());
        System.out.println("value: " + record.value());
    }