Spring Boot + Kafka: Standalone and Cluster Setup, an Integration Primer

Setting up Kafka (standalone)

Environment: a virtual machine running Docker.
References:
https://blog.csdn.net/qq_35394891/article/details/84349955
https://www.cnblogs.com/xiaohanlin/p/10078865.html

  1. Pull the images (Kafka depends on ZooKeeper, so both are needed)

docker pull wurstmeister/zookeeper
docker pull wurstmeister/kafka:2.12-2.3.1

  2. Start the containers

#Start ZooKeeper first
docker run -d --name zookeeper -p 2181:2181 -t wurstmeister/zookeeper
#Set KAFKA_ADVERTISED_LISTENERS below to the host's actual IP address; otherwise remote clients fail with the warning "Connection to node {} ({}) could not be established. Broker may not be available." and never connect

docker run -d --name kafka -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=192.168.35.3:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.35.3:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -t wurstmeister/kafka:2.12-2.3.1

  3. Test from inside the Kafka container

Here `kafka` is the container name (or ID); adjust the directory below to your own version, since a freshly pulled image is not guaranteed to match.

docker exec -it kafka /bin/bash
cd /opt/kafka_2.12-2.3.1/bin

Create a topic:

kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic mykafka

Run a console producer:

kafka-console-producer.sh --broker-list localhost:9092 --topic mykafka

Open another session the same way and run a console consumer:

kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mykafka --from-beginning

Messages sent on the producer side now show up on the consumer side.
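The same round trip can also be scripted non-interactively from inside the container's bin directory (a sketch; it assumes the broker and the `mykafka` topic from the steps above exist):

```shell
# Send one message without an interactive session
echo "smoke-test" | kafka-console-producer.sh --broker-list localhost:9092 --topic mykafka

# Read it back; --max-messages makes the consumer exit after one message
# instead of blocking forever
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mykafka \
  --from-beginning --max-messages 1
```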

Kafka cluster + ZooKeeper cluster with Docker

First install docker-compose and put it on the PATH:

curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose --version

Create a Docker bridge network; think of it as a shared hosts file for the containers:

docker network create --driver bridge --subnet 172.23.0.0/25 --gateway 172.23.0.1 zookeeper_network
Because of what seemed to be a version issue, I switched to the official ZooKeeper image:

docker pull zookeeper:3.4.14
docker pull wurstmeister/kafka:2.12-2.3.1
docker pull hlebalbau/kafka-manager

The docker-compose.yml:


version: '2'

services:
# three ZooKeeper nodes
  zoo1:
    image:  zookeeper:3.4.14 # image
    restart: always # restart policy
    container_name: zoo1
    hostname: zoo1
    ports:
    - "2181:2181"
    environment:
      ZOO_MY_ID: 1 # server id
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      cc17223:
        ipv4_address: 172.23.0.2

  zoo2:
    image:  zookeeper:3.4.14
    restart: always
    container_name: zoo2
    hostname: zoo2
    ports:
    - "2182:2181"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      cc17223:
        ipv4_address: 172.23.0.3

  zoo3:
    image:  zookeeper:3.4.14
    restart: always
    container_name: zoo3
    hostname: zoo3
    ports:
    - "2183:2181"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      cc17223:
        ipv4_address: 172.23.0.4

  kafka1:
    image: wurstmeister/kafka:2.12-2.3.1 # image
    restart: always
    container_name: kafka1
    hostname: kafka1
    ports:
    - 9092:9092
    - 9999:9999
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.35.3:9092 # address advertised to external clients
      KAFKA_ADVERTISED_HOST_NAME: kafka1
      KAFKA_HOST_NAME: kafka1
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_ADVERTISED_PORT: 9092 # port advertised to external clients
      KAFKA_BROKER_ID: 0
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      JMX_PORT: 9999 # JMX port
    volumes:
    - /etc/localtime:/etc/localtime
    links:
    - zoo1
    - zoo2
    - zoo3
    networks:
      cc17223:
        ipv4_address: 172.23.0.5

  kafka2:
    image: wurstmeister/kafka:2.12-2.3.1
    restart: always
    container_name: kafka2
    hostname: kafka2
    ports:
    - 9093:9092
    - 9998:9999
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.35.3:9093
      KAFKA_ADVERTISED_HOST_NAME: kafka2
      KAFKA_HOST_NAME: kafka2
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      JMX_PORT: 9999
    volumes:
    - /etc/localtime:/etc/localtime
    links:
    - zoo1
    - zoo2
    - zoo3
    networks:
      cc17223:
        ipv4_address: 172.23.0.6

  kafka3:
    image: wurstmeister/kafka:2.12-2.3.1
    restart: always
    container_name: kafka3
    hostname: kafka3
    ports:
    - 9094:9092
    - 9997:9999
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.35.3:9094
      KAFKA_ADVERTISED_HOST_NAME: kafka3
      KAFKA_HOST_NAME: kafka3
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_ADVERTISED_PORT: 9094
      KAFKA_BROKER_ID: 2
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      JMX_PORT: 9999
    volumes:
    - /etc/localtime:/etc/localtime
    links:
    - zoo1
    - zoo2
    - zoo3
    networks:
      cc17223:
        ipv4_address: 172.23.0.7

  kafka-manager:
    image: hlebalbau/kafka-manager
    restart: always
    container_name: kafka-manager
    hostname: kafka-manager
    ports:
    - 9000:9000
    links:
    - kafka1
    - kafka2
    - kafka3
    - zoo1
    - zoo2
    - zoo3
    environment:
      ZK_HOSTS: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_BROKERS: kafka1:9092,kafka2:9092,kafka3:9092
      APPLICATION_SECRET: letmein
      KAFKA_MANAGER_AUTH_ENABLED: "true" # enable authentication
      KAFKA_MANAGER_USERNAME: "admin" # username
      KAFKA_MANAGER_PASSWORD: "admin" # password
      KM_ARGS: -Djava.net.preferIPv4Stack=true
    networks:
      cc17223:
        ipv4_address: 172.23.0.8

networks:
  cc17223:
    external:
      name: zookeeper_network
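With the file above saved as docker-compose.yml, the cluster can be brought up and sanity-checked as follows (a sketch; it assumes a running Docker daemon, and the topic name `cluster-test` is just an example):

```shell
# From the directory containing docker-compose.yml: start all containers
docker-compose up -d
docker-compose ps

# Create a replicated topic from inside one broker to confirm that the
# three brokers can see each other through ZooKeeper
docker exec -it kafka1 /opt/kafka_2.12-2.3.1/bin/kafka-topics.sh \
  --create --zookeeper zoo1:2181 --replication-factor 3 --partitions 3 \
  --topic cluster-test
```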


After startup, first check whether zoo1/zoo2/zoo3 have elected a leader and followers; if so, ZooKeeper is up:
docker logs zoo1 (likewise zoo2, zoo3)
Then check kafka1/kafka2/kafka3 for connection-refused or other errors; a broken broker typically crashes and restarts in a loop:
docker logs kafka1 (likewise kafka2, kafka3)
However, apparently because of a version incompatibility, the kafka-manager web UI could not add the cluster, and a newer image would not start at all, so I skipped the web UI for now and used Kafka Tool instead.

Project code

Based on https://segmentfault.com/a/1190000015316875
I used IDEA + Lombok.

  1. Create a Spring Boot project
  2. Edit the configuration file (I use the YAML form):

spring:
  kafka:
    bootstrap-servers: 192.168.35.3:9092
    producer:
      retries: 0
      batch-size: 16384
      buffer-memory: 33554432
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      properties:
        linger.ms: 1

    consumer:
      enable-auto-commit: false
      auto-commit-interval: 100ms
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      properties:
        session.timeout.ms: 15000
      group-id: test-consume-group
    listener:
      missing-topics-fatal: false

kafka:
  topic:
    group-id: test-consume-group
    topic-name:
      - test
  3. Add a configuration properties class
package com.example.springbootmqkafka.config;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.io.Serializable;

/**
 * @author chen.chao
 * @version 1.0
 * @date 2020/4/2 16:13
 * @description
 */
@Configuration
@EnableConfigurationProperties(KafkaTopicConfiguration.KafkaTopicProperties.class)
public class KafkaTopicConfiguration {

    private final KafkaTopicProperties properties;

    /**
     * Topic 名称
     */
    public static final String TOPIC_TEST = "test";

    public KafkaTopicConfiguration(KafkaTopicProperties properties) {
        this.properties = properties;
    }

    @Bean
    public String[] kafkaTopicName() {
        return properties.getTopicName();
    }

    @Bean
    public String topicGroupId() {
        return properties.getGroupId();
    }

    @ConfigurationProperties("kafka.topic")
    static class KafkaTopicProperties implements Serializable {

        private String groupId;
        private String[] topicName;

        public String getGroupId() {
            return groupId;
        }

        public void setGroupId(String groupId) {
            this.groupId = groupId;
        }

        public String[] getTopicName() {
            return topicName;
        }

        public void setTopicName(String[] topicName) {
            this.topicName = topicName;
        }
    }
} 

  4. Add the business service
package com.example.springbootmqkafka.service;

import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

/**
 * @author chen.chao
 * @version 1.0
 * @date 2020/4/2 16:13
 * @description
 */
@Slf4j
@Service
public class IndicatorService {

    private final KafkaTemplate<String, String> kafkaTemplate;

    /**
     * Inject the KafkaTemplate; the generics match the String
     * key/value serializers configured in application.yml
     * @param kafkaTemplate Kafka template
     */
    @Autowired
    public IndicatorService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendMessage(String topic, String data) {
        log.info("kafka sendMessage start");
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, data);
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onFailure(Throwable ex) {
                log.error("kafka sendMessage error, topic = {}, data = {}", topic, data, ex);
            }

            @Override
            public void onSuccess(SendResult<String, String> result) {
                log.info("kafka sendMessage success topic = {}, data = {}", topic, data);
            }
        });
        log.info("kafka sendMessage end");
    }
}

  5. Add a web entry point
package com.example.springbootmqkafka.controller;

import com.alibaba.fastjson.JSON;
import com.example.springbootmqkafka.service.IndicatorService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

/**
 * @author chen.chao
 * @version 1.0
 * @date 2020/4/2 16:15
 * @description
 */
@RestController
public class ShopController {


    @Autowired
    private IndicatorService indicatorService;


    @GetMapping("/shop/order/{id}")
    public String shop(@PathVariable("id") Integer id, @RequestParam("name") String name) {
        // The topic must match the one the listener subscribes to ("test")
        indicatorService.sendMessage("test", JSON.toJSONString(new Order(id, name)));
        return "Order placed; confirmation details will follow by SMS";
    }

   static class Order{
        private Integer id;
        private String name;
        @Override
        public String toString() {
            return "Order{" +
                    "id=" + id +
                    ", name='" + name + '\'' +
                    '}';
        }

        public Order(Integer id, String name) {
            this.id = id;
            this.name = name;
        }

        public Integer getId() {
            return id;
        }

        public void setId(Integer id) {
            this.id = id;
        }

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }
    }

}

  6. Add a consumer

package com.example.springbootmqkafka.consumer;

import com.example.springbootmqkafka.config.KafkaTopicConfiguration;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

/**
 * @author chen.chao
 * @version 1.0
 * @date 2020/4/2 19:26
 * @description
 */
@Slf4j
@Component
public class OrderListener {


    @KafkaListener(topics = KafkaTopicConfiguration.TOPIC_TEST, groupId = "test-consume-group")
    public void handleMessage(ConsumerRecord<String, String> record) {
        try {
            String message = record.value();
            log.info("Received message: {}", message);
        } catch (Exception e) {
            log.error(e.getMessage(), e);
        }
    }

}

  7. Finally, start the application and visit http://localhost:8080/shop/order/2999?name=chenchao
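The same request can be fired from a terminal (assuming the application runs on the default port 8080):

```shell
# Place an order; the query string must be quoted so the shell
# does not interpret the ? and & characters
curl "http://localhost:8080/shop/order/2999?name=chenchao"
```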

2020-04-03 14:19:34.016  INFO 1120 --- [ntainer#0-0-C-1] c.e.s.consumer.OrderListener             : Received message: {"id":2999,"name":"chenchao"}

Summary and possible pitfalls

  1. Don't pick the very latest Kafka version: the newest broker may have no matching client yet, and messages then cannot be consumed.
  2. If a setup attempt fails, delete the ZooKeeper containers as well: data left behind in ZooKeeper makes it very likely that rebuilding Kafka, even with the correct steps, will not give the expected result.
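When a failed attempt needs a clean rebuild, remove the ZooKeeper containers along with Kafka. A sketch for the cluster setup above (note that `docker volume prune` removes ALL dangling volumes on the host, not only these):

```shell
# Remove every container from the cluster setup, ZooKeeper included
docker rm -f kafka1 kafka2 kafka3 zoo1 zoo2 zoo3 kafka-manager

# Drop dangling volumes that may hold stale ZooKeeper/Kafka state
docker volume prune -f
```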

Version compatibility reference

https://spring.io/projects/spring-kafka
