Deploying a Kafka Cluster with Docker (on a Single Server)

Preface

… …

Overview

In real production environments, Kafka is generally deployed as a cluster. A common architecture is shown below.
[Figure: typical Kafka cluster architecture]

A Kafka cluster consists of multiple brokers, each broker being a single Kafka instance. ZooKeeper manages the cluster's leader election and performs the rebalance when a consumer group's membership changes.
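This coordination is visible in ZooKeeper itself: each broker registers an ephemeral znode under /brokers/ids, and the elected controller is recorded under /controller. Once the cluster built below is running, you can inspect these znodes with the zkCli.sh shell shipped in the wurstmeister/zookeeper image (a quick sketch, not part of the original walkthrough; the ZooKeeper install directory inside the container may differ by image version):

~]# docker exec -it zoo1 ./bin/zkCli.sh -server localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /brokers/ids	# expect [1, 2, 3] once all three brokers are up
[zk: localhost:2181(CONNECTED) 1] get /controller	# shows which broker.id currently acts as controller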

Setting Up the Kafka Cluster
Deployment

Since Kafka depends on a ZooKeeper service (here we build a ZooKeeper cluster ourselves), we deploy both at the same time; alternatively, you can point Kafka at an already-deployed ZooKeeper service.

  • The three Kafka services deployed all connect to the ZooKeeper cluster
~]# mkdir -p /data/deploy/kafkaCluster
kafkaCluster]# cd /data/deploy/kafkaCluster
kafkaCluster]# cat > docker-compose.yml <<-EOF
version: '3.1'
services:
  zoo1:
    image: wurstmeister/zookeeper
    restart: always
    hostname: zoo1
    container_name: zoo1
    ports:
      - 2184:2181
    volumes:
      - /data/wangzunbin/volume/zkcluster/zoo1/data:/data:Z
      - /data/wangzunbin/volume/zkcluster/zoo1/datalog:/datalog:Z
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888

  zoo2:
    image: wurstmeister/zookeeper
    restart: always
    hostname: zoo2
    container_name: zoo2
    ports:
      - 2185:2181
    volumes:
      - /data/wangzunbin/volume/zkcluster/zoo2/data:/data:Z
      - /data/wangzunbin/volume/zkcluster/zoo2/datalog:/datalog:Z
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888

  zoo3:
    image: wurstmeister/zookeeper
    restart: always
    hostname: zoo3
    container_name: zoo3
    ports:
      - 2186:2181
    volumes:
      - /data/wangzunbin/volume/zkcluster/zoo3/data:/data:Z
      - /data/wangzunbin/volume/zkcluster/zoo3/datalog:/datalog:Z
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888

  kafka1:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka1
    container_name: kafka1
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:9092
      KAFKA_LISTENERS: PLAINTEXT://kafka1:9092
    volumes:
      - /data/wangzunbin/volume/kfkluster/kafka1/logs:/kafka:Z
    external_links:
      - zoo1
      - zoo2
      - zoo3

  kafka2:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka2
    container_name: kafka2
    ports:
      - 9093:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka2
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka2:9092
      KAFKA_LISTENERS: PLAINTEXT://kafka2:9092
    volumes:
      - /data/wangzunbin/volume/kfkluster/kafka2/logs:/kafka:Z
    external_links:
      - zoo1
      - zoo2
      - zoo3

  kafka3:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka3
    container_name: kafka3
    ports:
      - 9094:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka3
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka3:9092
      KAFKA_LISTENERS: PLAINTEXT://kafka3:9092
    volumes:
      - /data/wangzunbin/volume/kfkluster/kafka3/logs:/kafka:Z
    external_links:
      - zoo1
      - zoo2
      - zoo3
EOF
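One thing worth noting before bringing the stack up: KAFKA_ADVERTISED_LISTENERS points at the container hostnames (kafka1/kafka2/kafka3), which resolve only inside the compose network; this is why all of the tests later in this article run the Kafka CLI tools from inside the containers. If clients on the host or other machines need to connect, one common approach (a sketch, not part of the original setup, reusing the host IP 10.0.0.114 seen later and an arbitrary external port 9192) is a dual-listener configuration per broker, e.g. for kafka1:

    ports:
      - 9192:9192
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      # INSIDE carries inter-broker traffic on the compose network;
      # OUTSIDE is the address external clients are told to connect to
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9192
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka1:9092,OUTSIDE://10.0.0.114:9192
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE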
kafkaCluster]# docker-compose up -d
Creating network "kafkacluster_default" with the default driver
Creating kafka3 ... done
Creating zoo1   ... done
Creating zoo3   ... done
Creating zoo2   ... done
Creating kafka1 ... done
Creating kafka2 ... done
kafkaCluster]# docker ps
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                                                NAMES
081d20e76193        wurstmeister/kafka       "start-kafka.sh"         9 seconds ago       Up 7 seconds        0.0.0.0:9092->9092/tcp                               kafka1
ffe52bc09cb0        wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   9 seconds ago       Up 8 seconds        22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2185->2181/tcp   zoo2
92c4b741bfb9        wurstmeister/kafka       "start-kafka.sh"         9 seconds ago       Up 8 seconds        0.0.0.0:9093->9092/tcp                               kafka2
baa8ed8467d5        wurstmeister/kafka       "start-kafka.sh"         9 seconds ago       Up 7 seconds        0.0.0.0:9094->9092/tcp                               kafka3
7934bd26c161        wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   9 seconds ago       Up 7 seconds        22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2186->2181/tcp   zoo3
6eebd11bb5c6        wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   9 seconds ago       Up 7 seconds        22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2184->2181/tcp   zoo1
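Before moving on, you can optionally confirm that the ZooKeeper ensemble has formed and elected a leader. One quick check (a sketch, assuming nc is available on the host; the ZooKeeper 3.4.x in this image answers four-letter-word commands by default, which newer releases restrict) is to query each mapped port:

~]# for port in 2184 2185 2186; do echo srvr | nc 10.0.0.114 $port | grep Mode; done
Mode: follower
Mode: leader	# exactly one node should report leader; which one wins the election can vary
Mode: follower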
Testing and Verifying the Cluster
Create a topic for testing on the Broker 1 node
  • 1. On Broker 1, create a topic with 3 replicas and 5 partitions for testing

Because Kafka spreads a topic's partitions across different brokers, this topic's 5 partitions will be distributed over the 3 brokers: two brokers each get two partitions, and the remaining broker gets only one. This will be verified below.

kafkaCluster]# docker exec -it kafka1 /bin/bash
bash-4.4# cd /opt/kafka_2.13-2.7.0/
bash-4.4# kafka-topics.sh --create --zookeeper 10.0.0.114:2184 --replication-factor 3 --partitions 5 --topic TestTopic
Created topic TestTopic.
  • 2. View the details of the newly created topic
bash-4.4# kafka-topics.sh --describe --zookeeper 10.0.0.114:2184 --topic TestTopic
Topic: TestTopic	PartitionCount: 5	ReplicationFactor: 3	Configs: 
	Topic: TestTopic	Partition: 0	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: TestTopic	Partition: 1	Leader: 2	Replicas: 2,3,1	Isr: 2,3,1
	Topic: TestTopic	Partition: 2	Leader: 3	Replicas: 3,1,2	Isr: 3,1,2
	Topic: TestTopic	Partition: 3	Leader: 1	Replicas: 1,3,2	Isr: 1,3,2
	Topic: TestTopic	Partition: 4	Leader: 2	Replicas: 2,1,3	Isr: 2,1,3

As stated above ("the topic's 5 partitions are spread over the 3 brokers, two brokers each getting two partitions and the other getting only one"), the describe output is explained as follows:

Topic: TestTopic PartitionCount: 5 ReplicationFactor: 3		# TestTopic has 5 partitions and 3 replicas;
Topic: TestTopic Partition: 0 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3	# Leader: 1 means the leader replica of partition 0 is on the broker with broker.id = 1;
Replicas lists the brokers holding a replica of this partition, broker.id = 1,2,3 (both the leader replica and the follower replicas, whether or not they are alive);
Isr lists the replicas that are alive and in sync with the leader: broker.id = 1,2,3
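To see the difference between Replicas and Isr in practice, stop one broker and describe the topic again: the stopped broker.id stays listed in Replicas but drops out of Isr, and leadership of its partitions fails over to another in-sync replica (a quick experiment, not part of the original walkthrough):

~]# docker stop kafka3
~]# docker exec -it kafka1 kafka-topics.sh --describe --zookeeper 10.0.0.114:2184 --topic TestTopic
	# partitions previously led by broker 3 now show a different Leader, and 3 is gone from their Isr
~]# docker start kafka3	# once restarted, broker 3 catches up and rejoins the Isr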
Verifying the Kafka Cluster

The previous step created the topic TestTopic on Broker 1. Now open two more terminals, enter the kafka2 and kafka3 containers respectively, and check whether the topic has been synchronized to both.

  • 1. Check kafka2
~]# docker exec -it kafka2 /bin/bash
bash-4.4# cd /opt/kafka_2.13-2.7.0/bin/
bash-4.4# kafka-topics.sh --describe --zookeeper 10.0.0.114:2184 --topic TestTopic
Topic: TestTopic	PartitionCount: 5	ReplicationFactor: 3	Configs: 
	Topic: TestTopic	Partition: 0	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: TestTopic	Partition: 1	Leader: 2	Replicas: 2,3,1	Isr: 2,3,1
	Topic: TestTopic	Partition: 2	Leader: 3	Replicas: 3,1,2	Isr: 3,1,2
	Topic: TestTopic	Partition: 3	Leader: 1	Replicas: 1,3,2	Isr: 1,3,2
	Topic: TestTopic	Partition: 4	Leader: 2	Replicas: 2,1,3	Isr: 2,1,3
  • 2. Check kafka3
~]# docker exec -it kafka3 /bin/bash
bash-4.4# cd /opt/kafka_2.13-2.7.0/bin/
bash-4.4# kafka-topics.sh --describe --zookeeper 10.0.0.114:2184 --topic TestTopic
Topic: TestTopic	PartitionCount: 5	ReplicationFactor: 3	Configs: 
	Topic: TestTopic	Partition: 0	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: TestTopic	Partition: 1	Leader: 2	Replicas: 2,3,1	Isr: 2,3,1
	Topic: TestTopic	Partition: 2	Leader: 3	Replicas: 3,1,2	Isr: 3,1,2
	Topic: TestTopic	Partition: 3	Leader: 1	Replicas: 1,3,2	Isr: 1,3,2
	Topic: TestTopic	Partition: 4	Leader: 2	Replicas: 2,1,3	Isr: 2,1,3

As shown, kafka2 and kafka3 have synchronized the newly created topic.

  • 3. Run a producer on Broker 1, and run a consumer on each of Brokers 2 and 3

On broker 1 (working inside the kafka1 container entered earlier), run a producer and type a message:

bash-4.4# kafka-console-producer.sh --broker-list 10.0.0.114:9092 --topic TestTopic
>test	# type the producer message

On broker 2 (inside the kafka2 container entered earlier), run a consumer; it receives the message sent by the producer:

bash-4.4# kafka-console-consumer.sh --bootstrap-server 10.0.0.114:9093 --topic TestTopic --from-beginning
test	# the received message

On broker 3 (inside the kafka3 container entered earlier), run a consumer; it likewise receives the producer's message:

bash-4.4# kafka-console-consumer.sh --bootstrap-server 10.0.0.114:9094 --topic TestTopic --from-beginning
test	# the received message
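The consumer-group rebalance mentioned in the overview can be observed from here as well: if the two consumers join the same group instead of each reading everything independently, the 5 partitions are divided between them, and stopping either consumer triggers a rebalance that hands its partitions to the survivor (a sketch using a hypothetical group name test-group):

bash-4.4# kafka-console-consumer.sh --bootstrap-server 10.0.0.114:9093 --topic TestTopic --group test-group	# inside kafka2
bash-4.4# kafka-console-consumer.sh --bootstrap-server 10.0.0.114:9094 --topic TestTopic --group test-group	# inside kafka3
bash-4.4# kafka-consumer-groups.sh --bootstrap-server 10.0.0.114:9092 --describe --group test-group	# shows which consumer owns which partitions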
Conclusion

Kafka documentation (Chinese)
Kafka official documentation
wurstmeister/kafka image repository
