Kafka Getting Started Notes

Notes on learning and applying Kafka

Definitions

Use cases

  • Buffering / peak shaving
  • Decoupling
  • Asynchronous communication

Two messaging modes

  • Point-to-point: each message is consumed by exactly one consumer
  • Publish/subscribe: each subscriber (consumer group) receives every message
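The two modes can be contrasted with a small in-memory model. This is an illustrative sketch only, not Kafka code: point-to-point hands each message to exactly one consumer, while publish/subscribe gives every subscriber a full copy.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative in-memory model of the two messaging modes; not Kafka code.
public class MessagingModes {

    // Point-to-point: each message goes to exactly one consumer (round-robin here).
    static List<List<String>> pointToPoint(List<String> messages, int consumers) {
        List<List<String>> inboxes = new ArrayList<>();
        for (int i = 0; i < consumers; i++) {
            inboxes.add(new ArrayList<>());
        }
        int next = 0;
        for (String message : messages) {
            inboxes.get(next).add(message);      // only one consumer receives it
            next = (next + 1) % consumers;
        }
        return inboxes;
    }

    // Publish/subscribe: every subscriber receives every message.
    static List<List<String>> publishSubscribe(List<String> messages, int subscribers) {
        List<List<String>> inboxes = new ArrayList<>();
        for (int i = 0; i < subscribers; i++) {
            inboxes.add(new ArrayList<>(messages)); // each subscriber gets a full copy
        }
        return inboxes;
    }
}
```

In Kafka both behaviors come from consumer groups: consumers sharing a group-id split the partitions (point-to-point), while separate groups each receive the full stream (publish/subscribe).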

1.1 Download Kafka

Official downloads page: https://kafka.apache.org/downloads

kafka_2.13-3.5.1.tgz


1.2 Upload to the server and extract

tar -zxvf kafka_2.13-3.5.1.tgz

1.3 Modify the configuration files

1.3.1 server.properties

broker.id=0

listeners=PLAINTEXT://192.168.48.128:9092            # listen on this machine's IP and port
advertised.listeners=PLAINTEXT://192.168.48.128:9092 # address advertised to clients

log.dirs=/wb/kafka/logs          # data directory (create it beforehand)
zookeeper.connect=localhost:2181 # ZooKeeper address and port; localhost:2181 for a single-node deployment

The legacy keys port, host.name and advertised.host.name are superseded by listeners/advertised.listeners in Kafka 3.x and can be omitted.

1.3.2 Modify the startup script

Lower the JVM heap settings (KAFKA_HEAP_OPTS in bin/kafka-server-start.sh) so the broker can start on machines with little memory.

1.4 Start ZooKeeper

Startup command:

/wb/kafka/kafka_2.13-3.5.1/bin/zookeeper-server-start.sh /wb/kafka/kafka_2.13-3.5.1/config/zookeeper.properties &

1.5 Start Kafka

sh /wb/kafka/kafka_2.13-3.5.1/bin/kafka-server-start.sh /wb/kafka/kafka_2.13-3.5.1/config/server.properties &

1.6 Verify the processes

ps -ef | grep zookeeper
ps -ef | grep kafka
jps

1.7 Common commands

Newer Kafka versions use --bootstrap-server; older versions use --zookeeper.

Note

  • If listeners is bound to a specific IP, use that IP (not localhost) in these commands.
# Create a topic
sh kafka-topics.sh --bootstrap-server localhost:9092 --create --topic heima --partitions 2 --replication-factor 1
# List topics
sh kafka-topics.sh --list --bootstrap-server localhost:9092
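The --partitions 2 above splits the topic into two partitions, and a record with a key always maps to one of them. A simplified sketch of key-based partition selection (Kafka actually hashes keys with murmur2 and uses a sticky partitioner for null keys; this hashCode version only illustrates the idea):

```java
// Simplified model of key-based partition selection. Kafka itself hashes keys
// with murmur2; plain hashCode() is used here only to illustrate the idea.
public class PartitionSketch {
    static int partitionFor(String key, int numPartitions) {
        // clear the sign bit so the result is non-negative, then take the modulo
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}
```

The point of the scheme is that the same key always lands in the same partition, which is what gives per-key ordering.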

1.8 Kafka Tool (Offset Explorer)

Download: https://www.kafkatool.com/download.html

Local installer: offsetexplorer_64bit.exe

1.9 Spring Boot integration with Kafka

  1. Spring Boot / Spring-Kafka version compatibility matrix: https://spring.io/projects/spring-kafka
| Spring for Apache Kafka Version | Spring Integration for Apache Kafka Version | kafka-clients       |
| ------------------------------- | ------------------------------------------- | ------------------- |
| 2.2.x                           | 3.1.x                                       | 2.0.0, 2.1.0        |
| 2.1.x                           | 3.0.x                                       | 1.0.x, 1.1.x, 2.0.0 |
| 2.0.x                           | 3.0.x                                       | 0.11.0.x, 1.0.x     |
| 1.3.x                           | 2.3.x                                       | 0.11.0.x, 1.0.x     |
| 1.2.x                           | 2.2.x                                       | 0.10.2.x            |
| 1.1.x                           | 2.1.x                                       | 0.10.0.x, 0.10.1.x  |
| 1.0.x                           | 2.0.x                                       | 0.9.x.x             |
| N/A*                            | 1.3.x                                       | 0.8.2.2             |

  2. Spring-Kafka official documentation: https://docs.spring.io/spring-kafka/docs/2.2.0.RELEASE/reference/html/

1.9.1 Maven dependency (pom.xml)

    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
        <version>2.9.11</version>
    </dependency>

1.9.2 Configuration file

spring:
  kafka:
    bootstrap-servers: 192.168.48.128:9092
    producer:
      # Number of times a message is retried after a send error
      retries: 0
      # Key serializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      # Value serializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer

      properties:
        # Interceptor classes; separate multiple entries with commas
        interceptor.classes: com.spring.demo.kafka.interceptor.CustomProducerInterceptor
    consumer:
      group-id: test
      # Key deserializer
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      # Value deserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      properties:
        # Interceptor classes; separate multiple entries with commas
        interceptor.classes: com.spring.demo.kafka.interceptor.CustomConsumerInterceptor
    listener:
      # Allow the application to start even when a listened-to topic does not exist
      missing-topics-fatal: false
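The StringSerializer/StringDeserializer pair configured above simply converts strings to and from UTF-8 bytes. A plain-Java sketch of that round trip (illustrative only; the real classes live in org.apache.kafka.common.serialization):

```java
import java.nio.charset.StandardCharsets;

// What the String(De)Serializer pair does conceptually: UTF-8 bytes on the wire.
public class StringCodecSketch {
    static byte[] serialize(String value) {
        return value == null ? null : value.getBytes(StandardCharsets.UTF_8);
    }

    static String deserialize(byte[] data) {
        return data == null ? null : new String(data, StandardCharsets.UTF_8);
    }
}
```

For structured payloads you would swap these for a JSON serializer on both sides; the producer and consumer settings must always agree.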

1.9.3 Configuration classes

Constants

/**
 * Kafka constants
 */
public interface KafkaConsts {
    /**
     * Default partition count
     */
    Integer DEFAULT_PARTITION_NUM = 3;

    /**
     * Topic name
     */
    String TOPIC_TEST = "test";
}

Configuration class

import com.spring.demo.constants.KafkaConsts;
import lombok.AllArgsConstructor;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.*;
import org.springframework.kafka.listener.ContainerProperties;

/**
 * Kafka configuration class
 */
@Configuration
@EnableConfigurationProperties({KafkaProperties.class})
@EnableKafka
@AllArgsConstructor
public class KafkaConfig {
    private final KafkaProperties kafkaProperties;

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(kafkaProperties.buildProducerProperties());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(KafkaConsts.DEFAULT_PARTITION_NUM);
        // Note: setBatchListener(true) requires @KafkaListener methods that accept a
        // List<ConsumerRecord<?, ?>>; the single-record consumer below needs false.
        factory.setBatchListener(false);
        factory.getContainerProperties().setPollTimeout(3000);
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(kafkaProperties.buildConsumerProperties());
    }

    @Bean("ackContainerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, String> ackContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        factory.setConcurrency(KafkaConsts.DEFAULT_PARTITION_NUM);
        return factory;
    }

}

1.9.4 Consumer


import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

import java.util.Optional;

/**
 * Kafka consumer
 */
@Component
@Slf4j
public class KafkaConsumer {

    @KafkaListener(topics = {"test-kafka"}, groupId = "test")
    public void consumer(ConsumerRecord<?, ?> record) {
        Optional<?> kafkaMessage = Optional.ofNullable(record.value());
        if (kafkaMessage.isPresent()) {
            Object message = kafkaMessage.get();
            log.info("record = {}", record);
            log.info("message = {}", message);
        }
    }
}

1.9.5 Producer

import lombok.RequiredArgsConstructor;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;


/**
 * Kafka producer
 */
@Component
@RequiredArgsConstructor
public class KafkaProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    /**
     * Send a Kafka message
     */
    public void send(String topic) {
        kafkaTemplate.send(topic, "123");
    }
}
}

1.9.6 Test endpoint

import com.spring.demo.core.annotation.Anonymous;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/test/kafka")
public class TestKafka {

    @Autowired
    private KafkaProducer producer;

    @Anonymous
    @GetMapping("/send/{topic}")
    public void send(@PathVariable String topic) {
        producer.send(topic);
    }
}

1.9.7 Result

Calling the endpoint sends a message to the given topic; the consumer then logs the received record and its value.

1.10 Extensions

1.10.1 Consumer interceptor

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerInterceptor;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.springframework.stereotype.Component;

import java.util.Map;

/**
 * Consumer interceptor
 */
@Slf4j
@Component
public class CustomConsumerInterceptor implements ConsumerInterceptor<Object, Object> {
    @Override
    public ConsumerRecords<Object, Object> onConsume(ConsumerRecords<Object, Object> records) {
        // Pre-process messages, log, or collect metrics here; this implementation
        // does not modify the message content.

//        records.forEach(record -> {
//            if (StringUtils.equals("test-kafka", record.topic())) {
//                String jsonStr = record.value().toString();
//                JSONObject jsonObject = JSONObject.parseObject(jsonStr);
//                String data = JSON.toJSONString(jsonObject.getOrDefault("data", StringUtils.EMPTY));
//                // Note: reassigning the lambda parameter has no effect on the records
//                // collection; to rewrite records, build a new ConsumerRecords instead.
//                record = new ConsumerRecord<>(record.topic(), record.partition(), record.offset(), record.key(), data);
//            }
//        });

        return records;
    }

    @Override
    public void close() {
        // no-op
    }

    @Override
    public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) {
        // no-op
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // no-op
    }
}

1.10.2 Producer interceptor

import com.alibaba.fastjson2.JSON;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.springframework.stereotype.Component;

import java.util.HashMap;
import java.util.Map;

/**
 * Producer interceptor
 */
@Slf4j
@Component
public class CustomProducerInterceptor implements ProducerInterceptor<String, String> {
    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // Custom logic before the message is sent
        log.info("Before sending message: {}", record.value());

        Map<String, Object> recordMap = new HashMap<>();
        recordMap.put("data", record.value());
        recordMap.put("author", "wb");
        recordMap.put("version", "1.0.0");
        // The message content (or other attributes) can be modified here
        return new ProducerRecord<>(record.topic(), record.partition(), record.key(), JSON.toJSONString(recordMap));
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // Custom logic after the send completes
        if (exception == null) {
            log.info("Message sent successfully, offset: {}", metadata.offset());
        } else {
            log.error("Failed to send message: {}", exception.getMessage());
        }
    }

    @Override
    public void close() {
        // no-op
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // no-op
    }
}
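The producer interceptor above wraps every payload in a {data, author, version} envelope via fastjson2, and the consumer side needs to unwrap it again. A dependency-free sketch of that round trip (hand-rolled string handling for illustration only; it assumes the payload contains no quote characters and is not a general JSON implementation):

```java
// Dependency-free sketch of the envelope built by the producer interceptor and
// the matching unwrap. Assumes the payload contains no '"' characters; this is
// an illustration, not a general JSON implementation.
public class EnvelopeSketch {
    static String wrap(String data) {
        return "{\"data\":\"" + data + "\",\"author\":\"wb\",\"version\":\"1.0.0\"}";
    }

    // Extract the "data" field from an envelope produced by wrap().
    static String unwrap(String envelope) {
        String marker = "\"data\":\"";
        int start = envelope.indexOf(marker) + marker.length();
        int end = envelope.indexOf('"', start);
        return envelope.substring(start, end);
    }
}
```

In the real project, fastjson2 (JSON.toJSONString / JSONObject.parseObject) does this work and handles escaping correctly; the sketch only shows the shape of the envelope.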

1.10.3 Start on boot

Create unit files for the ZooKeeper and Kafka services under /lib/systemd/system/.

1.10.3.1 zookeeper.service
[Unit]
Description=Zookeeper service
After=network.target

[Service]
Type=simple
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/java/jdk-11.0.1/bin"
User=root
Group=root
ExecStart=/wb/kafka/kafka_2.13-3.5.1/bin/zookeeper-server-start.sh /wb/kafka/kafka_2.13-3.5.1/config/zookeeper.properties
ExecStop=/wb/kafka/kafka_2.13-3.5.1/bin/zookeeper-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
1.10.3.2 kafka.service
[Unit]
Description=Apache Kafka server (broker)
After=network.target zookeeper.service

[Service]
Type=simple
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/java/jdk-11.0.1/bin"
User=root
Group=root
ExecStart=/wb/kafka/kafka_2.13-3.5.1/bin/kafka-server-start.sh /wb/kafka/kafka_2.13-3.5.1/config/server.properties
ExecStop=/wb/kafka/kafka_2.13-3.5.1/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
1.10.3.3 Reload the systemd configuration
systemctl daemon-reload
1.10.3.4 Start

Enable start on boot:

systemctl enable zookeeper

systemctl enable kafka

Disable start on boot:

systemctl disable kafka

systemctl disable zookeeper

Other commands:

# Status
systemctl status zookeeper
# Start
systemctl start zookeeper
# Stop
systemctl stop zookeeper