Kafka Usage

Docker Cluster Setup

Manual single-node installation:

https://www.cnblogs.com/answerThe/p/11267129.html

(This deployment depends on ZooKeeper.)

Docker Compose

  • Create the network
    docker network create zookeeper_default --driver bridge
    (192.168.0.227 is the host machine's local IP; substitute your own host IP for the zookeeperHostIp and KafkaHostIp placeholders in the compose file below.)
  • docker-compose.yml
version: '2'

networks:
  zookeeper:
    external: true
    name: zookeeper_default

services:
  kafka1:
    image: 'bitnami/kafka:latest'
    ports:
      - '9094:9094'
    container_name: kafka1
    networks:
      - zookeeper
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zookeeperHostIp:2181,zookeeperHostIp:2182,zookeeperHostIp:2183/kafka
      - KAFKA_BROKER_ID=1
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9094
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://KafkaHostIp:9094
    volumes:
      - /opt/kafka/kafka1/persistence:/bitnami/kafka
      - /etc/localtime:/etc/localtime

  kafka2:
    image: 'bitnami/kafka:latest'
    ports:
      - '9092:9092'
    container_name: kafka2
    networks:
      - zookeeper
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zookeeperHostIp:2181,zookeeperHostIp:2182,zookeeperHostIp:2183/kafka
      - KAFKA_BROKER_ID=2
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://KafkaHostIp:9092
    volumes:
      - /opt/kafka/kafka2/persistence:/bitnami/kafka
      - /etc/localtime:/etc/localtime

  kafka3:
    image: 'bitnami/kafka:latest'
    ports:
      - '9093:9093'
    container_name: kafka3
    networks:
      - zookeeper
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zookeeperHostIp:2181,zookeeperHostIp:2182,zookeeperHostIp:2183/kafka
      - KAFKA_BROKER_ID=3
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9093
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://KafkaHostIp:9093
    volumes:
      - /opt/kafka/kafka3/persistence:/bitnami/kafka
      - /etc/localtime:/etc/localtime
      
  kafka-manager:
    image: sheepkiller/kafka-manager:1.2.7
    ports:
      - '9000:9000'
    environment:
      - ZK_HOSTS=zookeeperHostIp:2181,zookeeperHostIp:2182,zookeeperHostIp:2183/kafka
      - APPLICATION_SECRET=ryan123
    volumes:
      - /opt/kafka-admin/confdir:/kafka-manager-1.2.7/conf
      - /etc/localtime:/etc/localtime
    networks:
      - zookeeper
  • docker-compose up reports that the application.conf configuration file is missing for kafka-manager; place the following application.conf into the mounted confdir (/opt/kafka-admin/confdir):
# Copyright 2015 Yahoo Inc. Licensed under the Apache License, Version 2.0
# See accompanying LICENSE file.

# This is the main configuration file for the application.
# ~~~~~

# Secret key
# ~~~~~
# The secret key is used to secure cryptographics functions.
# If you deploy your application to several instances be sure to use the same key!
play.crypto.secret="^<csmm5Fx4d=r2HEX8pelM3iBkFVv?k[mc;IZE<_Qoq8EkX_/7@Zt6dP05Pzea3U"
play.crypto.secret=${?APPLICATION_SECRET}

# The application languages
# ~~~~~
play.i18n.langs=["en"]

play.http.requestHandler = "play.http.DefaultHttpRequestHandler"
play.http.context = "/"
play.application.loader=loader.KafkaManagerLoader

kafka-manager.zkhosts="kafka-manager-zookeeper:2181"
kafka-manager.zkhosts=${?ZK_HOSTS}
pinned-dispatcher.type="PinnedDispatcher"
pinned-dispatcher.executor="thread-pool-executor"
application.features=["KMClusterManagerFeature","KMTopicManagerFeature","KMPreferredReplicaElectionFeature","KMReassignPartitionsFeature"]

akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "INFO"
}


basicAuthentication.enabled=false
basicAuthentication.username="admin"
basicAuthentication.password="password"
basicAuthentication.realm="Kafka-Manager"


kafka-manager.consumer.properties.file=${?CONSUMER_PROPERTIES_FILE}

Verification

  • Create a topic
    ./kafka-topics.sh --create --bootstrap-server 192.168.16.1:9092,192.168.16.1:9093,192.168.16.1:9094 --topic test123 --replica-assignment 0:1,1:2,2:0
    
  • Start a console consumer
    ./kafka-console-consumer.sh --bootstrap-server 192.168.16.1:9092,192.168.16.1:9093,192.168.16.1:9094 --topic test123 --from-beginning
    
  • Produce messages
    ./kafka-console-producer.sh --broker-list 192.168.16.1:9092,192.168.16.1:9093,192.168.16.1:9094 --topic test123
    
  • List consumer groups
    ./kafka-consumer-groups.sh --bootstrap-server 192.168.16.1:9092,192.168.16.1:9093,192.168.16.1:9094 --all-groups --list --all-topics
    
  • Kafka Manager
    (screenshots of the Kafka Manager web UI at port 9000)
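The --replica-assignment 0:1,1:2,2:0 flag used when creating the topic pins replica placement manually: each comma-separated group is one partition, each colon-separated number is a broker ID, and the first ID in a group is that partition's preferred leader. A dependency-free sketch of this mapping (class and method names are illustrative, not part of Kafka):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative parser for kafka-topics.sh --replica-assignment strings.
// "0:1,1:2,2:0" means: partition 0 -> brokers 0,1 (0 is the preferred
// leader), partition 1 -> brokers 1,2, partition 2 -> brokers 2,0.
public class ReplicaAssignment {

    // Returns one broker-ID list per partition; list index = partition number.
    public static List<List<Integer>> parse(String assignment) {
        List<List<Integer>> partitions = new ArrayList<>();
        for (String group : assignment.split(",")) {
            List<Integer> replicas = new ArrayList<>();
            for (String id : group.split(":")) {
                replicas.add(Integer.parseInt(id));
            }
            partitions.add(replicas);
        }
        return partitions;
    }

    public static void main(String[] args) {
        List<List<Integer>> p = parse("0:1,1:2,2:0");
        System.out.println("partitions = " + p.size());         // 3 partitions
        System.out.println("p0 leader  = " + p.get(0).get(0));  // broker 0
        System.out.println("p2 replicas = " + p.get(2));        // brokers 2 and 0
    }
}
```

Note that the replication factor here is implied by the group length (2), which is why each partition above lists exactly two brokers.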

Java Integration

  • POM dependency (kafka_2.10 0.8.2.0 is the legacy Scala client, matching the sample code below; it has long been superseded by org.apache.kafka:kafka-clients)
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.8.2.0</version>
    </dependency>
    
  • Consumer
    package org.example.kafka;
    
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;
    
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    
    /**
     * @Author: Ryan
     * @Date: 2022/1/21 14:05
     * @Description: Receives messages
     */
    public class KafkaConsumerDemo extends Thread {
    
        private final String topic;
    
        public KafkaConsumerDemo(String topic) {
            super();
            this.topic = topic;
        }
    
        @Override
        public void run() {
            ConsumerConnector consumer = createConsumer();
            Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
            topicCountMap.put(topic, 1); // one stream (one consumer thread) for this topic
            Map<String, List<KafkaStream<byte[], byte[]>>> messageStreams = consumer.createMessageStreams(topicCountMap);
            KafkaStream<byte[], byte[]> stream = messageStreams.get(topic).get(0); // the single stream requested above
            for (kafka.message.MessageAndMetadata<byte[], byte[]> messageAndMetadata : stream) {
                String message = new String(messageAndMetadata.message());
                System.out.println("Received: " + message);
            }
        }
    
        private ConsumerConnector createConsumer() {
            Properties properties = new Properties();
            // The 0.8 high-level consumer bootstraps via ZooKeeper; the connect string
            // must include the same /kafka chroot configured in KAFKA_ZOOKEEPER_CONNECT.
            properties.put("zookeeper.connect", "192.168.16.1:2181,192.168.16.1:2182,192.168.16.1:2183/kafka");
            properties.put("group.id", "group1");
            return Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));
        }
    
        public static void main(String[] args) {
            new KafkaConsumerDemo("test123").start(); // topic test123 created on the cluster above
        }
    
    }
    
    
  • Producer
    package org.example.kafka;
    
    import java.util.Properties;
    import java.util.concurrent.TimeUnit;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;
    import kafka.serializer.StringEncoder;
    
    /**
     * @Author: Ryan
     * @Date: 2022/1/21 14:05
     * @Description: Sends Kafka messages
     */
    public class KafkaProducerDemo extends Thread {
    
        private final String topic;
    
        public KafkaProducerDemo(String topic) {
            super();
            this.topic = topic;
        }
    
        @Override
        public void run() {
            Producer<Integer, String> producer = createProducer();
            int i = 0;
            while (true) {
                producer.send(new KeyedMessage<Integer, String>(topic, "message: " + i++));
                try {
                    TimeUnit.SECONDS.sleep(1);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    
        private Producer<Integer, String> createProducer() {
            Properties properties = new Properties();
            properties.put("serializer.class", StringEncoder.class.getName());
            // Unlike the 0.8 consumer, the 0.8 producer bootstraps from the
            // brokers directly, so no zookeeper.connect is needed here.
            properties.put("metadata.broker.list", "192.168.16.1:9092,192.168.16.1:9093,192.168.16.1:9094");
            return new Producer<Integer, String>(new ProducerConfig(properties));
        }
        
        public static void main(String[] args) {
            new KafkaProducerDemo("test123").start(); // topic test123 created on the cluster above
        }
    }
    
  • Full project code: JavaLearning
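The kafka_2.10 0.8.2.0 client above is long deprecated; new code should use org.apache.kafka:kafka-clients. Independent of client version, Kafka's ordering guarantee is per partition: the producer maps every message key to one fixed partition, so all messages sharing a key are consumed in send order. Below is a dependency-free sketch of that key-to-partition idea, using String.hashCode() as an illustrative stand-in for Kafka's real default partitioner (murmur2 over the serialized key bytes); the class name and helper are assumptions, not Kafka APIs:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of why Kafka guarantees per-key ordering: one key always maps
// to one partition, and each partition is an append-only ordered log.
public class KeyPartitioning {

    public static int partitionFor(String key, int numPartitions) {
        // Mask off the sign bit rather than using Math.abs, to avoid the
        // Integer.MIN_VALUE edge case (Kafka's partitioner does likewise).
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int numPartitions = 3;
        // Simulated partition logs: partition number -> messages in arrival order.
        Map<Integer, List<String>> log = new HashMap<>();
        for (String msg : new String[]{"order-1:a", "order-2:x", "order-1:b", "order-1:c"}) {
            String key = msg.split(":")[0];
            log.computeIfAbsent(partitionFor(key, numPartitions), k -> new ArrayList<>()).add(msg);
        }
        // All "order-1" messages land in the same partition, in send order.
        System.out.println(log);
    }
}
```

This also explains the hedge in the interview questions below: ordering is only guaranteed within a partition, never across a multi-partition topic.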

Interview Questions

  1. What are Kafka's main use cases, and in what scenarios is it used?
  2. What do ISR and AR stand for in Kafka? What does the expansion and shrinkage of the ISR refer to?
  3. What do HW, LEO, LSO, and LW stand for in Kafka?
  4. How does Kafka provide message ordering guarantees?
  5. Are you familiar with Kafka's partitioner, serializer, and interceptors? In what order does the producer apply them?
  6. What does the overall architecture of the Kafka producer client look like?
  7. How many threads does the producer client use, and what does each do?
  8. What design flaws did the old Scala consumer client have?
  9. "If a consumer group has more consumers than the topic has partitions, some consumers will receive no data" - is this statement correct? If so, is there any workaround?
  10. In which situations can messages be consumed more than once?
  11. In which situations can messages be missed by consumers?
  12. KafkaConsumer is not thread-safe; how do you implement multi-threaded consumption?
  13. Briefly describe the relationship between consumers and consumer groups.
  14. When you create (or delete) a topic with kafka-topics.sh, what does Kafka do behind the scenes?
  15. Can the partition count of a topic be increased? If so, how? If not, why not?
  16. Can the partition count of a topic be decreased? If so, how? If not, why not?
  17. How do you choose an appropriate partition count when creating a topic?
  18. What internal topics does Kafka currently have? What are their characteristics and purposes?
  19. What is a preferred replica, and what special role does it play?
  20. In which places does Kafka perform partition assignment? Outline the process and principles.
  21. Briefly describe Kafka's log directory structure.
  22. What index files does Kafka maintain?
  23. Given an offset, how does Kafka locate the corresponding message?
  24. Given a timestamp, how does Kafka locate the corresponding message?
  25. Discuss your understanding of Kafka's log retention.
  26. Discuss your understanding of Kafka's log compaction.
  27. Discuss your understanding of Kafka's underlying storage.
  28. How do Kafka's delayed operations work?
  29. What is the role of the Kafka controller?
  30. How does consumer rebalancing work? (hint: consumer coordinator and group coordinator)
  31. How is idempotence implemented in Kafka?
  32. How are transactions implemented in Kafka?
  33. What is an out-of-sync replica, and what countermeasures are there?
  34. With multiple replicas, how do the HW and LEO of each replica evolve?
  35. What improvements has Kafka made for reliability? (HW, leader epoch)
  36. Why doesn't Kafka support read-write separation?
  37. How would you implement a delay queue with Kafka?
  38. How would you implement a dead-letter queue and a retry queue with Kafka?
  39. How would you implement message auditing in Kafka?
  40. How would you implement message tracing in Kafka?
  41. How do you compute consumer Lag? (note the difference under read_uncommitted vs. read_committed)
  42. Which Kafka metrics deserve close attention?
  43. Which design choices give Kafka such high performance?
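For the Lag question above: under read_uncommitted, a partition's lag is the high watermark (HW) minus the consumer's committed offset, while under read_committed it is the last stable offset (LSO) minus that offset, and LSO trails HW while transactions are still open. A minimal worked example (method and field names are illustrative, not a Kafka API):

```java
// Sketch of consumer-lag computation under the two isolation levels:
//   read_uncommitted: Lag = HW  - ConsumerOffset
//   read_committed:   Lag = LSO - ConsumerOffset   (LSO <= HW)
public class LagCalc {

    public static long lag(long hw, long lso, long consumerOffset, boolean readCommitted) {
        return (readCommitted ? lso : hw) - consumerOffset;
    }

    public static void main(String[] args) {
        long hw = 100, lso = 90, offset = 80; // an open transaction holds LSO below HW
        System.out.println("read_uncommitted lag = " + lag(hw, lso, offset, false)); // 100 - 80 = 20
        System.out.println("read_committed   lag = " + lag(hw, lso, offset, true));  //  90 - 80 = 10
    }
}
```

So a read_committed consumer can legitimately report a smaller lag than a read_uncommitted one on the same partition, which matters when alerting on lag metrics.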