Deploying a Kafka Cluster with Docker (Across Multiple Servers)

Preface

… …

Overview

In production, Kafka is almost always deployed as a cluster; a common architecture looks like this:

[Figure: Kafka cluster architecture — multiple brokers coordinated by a ZooKeeper ensemble]

A Kafka cluster consists of multiple brokers, each broker being one Kafka instance. ZooKeeper manages leader election for the cluster and performs rebalance operations when consumer group membership changes.

Building the Kafka Cluster

The ZooKeeper and Kafka services are deployed on three servers; the services on each node are listed below:

Host IP       Services                       Notes
10.0.0.95     zoo1, kafka1, kafka-manager    kafka-manager is the application for managing the Kafka cluster
10.0.0.187    zoo2, kafka2
10.0.0.115    zoo3, kafka3
  • Docker's host network mode is used so that each container's services are exposed directly on the host's network; otherwise, services running in containers cannot form a Kafka cluster that spans multiple hosts.
Deployment

Because Kafka requires a ZooKeeper service (here a ZooKeeper ensemble is built directly), deploy the two together, or point Kafka at an already-deployed ZooKeeper service.

  • Deploy the zoo1, kafka1, and kafka-manager services on the host 10.0.0.95
    kafka-manager is used to administer the Kafka cluster; its usage is not covered further here.
~]# mkdir -p /data/deploy/kafkaCluster
~]# cd /data/deploy/kafkaCluster/
kafkaCluster]# cat > docker-compose.yml <<-EOF
version: '3.1'
services:
  zoo1:
    image: wurstmeister/zookeeper
    restart: always
    hostname: zoo1
    container_name: zoo1
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    volumes:
      - /data/wangzunbin/volume/zkcluster/zoo1/data:/data:Z
      - /data/wangzunbin/volume/zkcluster/zoo1/datalog:/datalog:Z
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=10.0.0.95:2888:3888;2181 server.2=10.0.0.187:2888:3888;2181 server.3=10.0.0.115:2888:3888;2181
    network_mode: host

  kafka1:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka1
    container_name: kafka1
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 10.0.0.95
      KAFKA_HOST_NAME: 10.0.0.95
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 10.0.0.95:2181,10.0.0.187:2181,10.0.0.115:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.0.0.95:9092
      KAFKA_LISTENERS: PLAINTEXT://10.0.0.95:9092
    volumes:
      - /data/wangzunbin/volume/kfkluster/kafka1/logs:/kafka:Z
    network_mode: host
EOF
kafkaCluster]# docker-compose up -d
kafkaCluster]# mkdir -p /data/deploy/kafkaCluster/kafka-manager
kafkaCluster]# cd /data/deploy/kafkaCluster/kafka-manager
kafka-manager]# cat > docker-compose.yml <<-EOF
version: '3.1'
services:
  kafka-manager:
    image: sheepkiller/kafka-manager
    restart: always
    hostname: kafka-manager
    container_name: kafka-manager
    ports:
      - 9000:9000
    environment:
      ZK_HOSTS: 10.0.0.95:2181,10.0.0.187:2181,10.0.0.115:2181
      KAFKA_BROKERS: 10.0.0.95:9092,10.0.0.187:9092,10.0.0.115:9092
      APPLICATION_SECRET: letmein
      KM_ARGS: -Djava.net.preferIPv4Stack=true
    network_mode: host
EOF
kafka-manager]# docker-compose up -d
  • Deploy the zoo2 and kafka2 services on the host 10.0.0.187
~]# mkdir -p /data/deploy/kafkaCluster
~]# cd /data/deploy/kafkaCluster/
kafkaCluster]# cat > docker-compose.yml <<-EOF
version: '3.1'
services:
  zoo2:
    image: wurstmeister/zookeeper
    restart: always
    hostname: zoo2
    container_name: zoo2
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    volumes:
      - /data/wangzunbin/volume/zkcluster/zoo2/data:/data:Z
      - /data/wangzunbin/volume/zkcluster/zoo2/datalog:/datalog:Z
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=10.0.0.95:2888:3888;2181 server.2=10.0.0.187:2888:3888;2181 server.3=10.0.0.115:2888:3888;2181
    network_mode: host

  kafka2:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka2
    container_name: kafka2
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 10.0.0.187
      KAFKA_HOST_NAME: 10.0.0.187
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: 10.0.0.95:2181,10.0.0.187:2181,10.0.0.115:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.0.0.187:9092
      KAFKA_LISTENERS: PLAINTEXT://10.0.0.187:9092
    volumes:
      - /data/wangzunbin/volume/kfkluster/kafka2/logs:/kafka:Z
    network_mode: host
EOF
kafkaCluster]# docker-compose up -d
  • Deploy the zoo3 and kafka3 services on the host 10.0.0.115
~]# mkdir -p /data/deploy/kafkaCluster
~]# cd /data/deploy/kafkaCluster/
kafkaCluster]# cat > docker-compose.yml <<-EOF
version: '3.1'
services:
  zoo3:
    image: wurstmeister/zookeeper
    restart: always
    hostname: zoo3
    container_name: zoo3
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    volumes:
      - /data/wangzunbin/volume/zkcluster/zoo3/data:/data:Z
      - /data/wangzunbin/volume/zkcluster/zoo3/datalog:/datalog:Z
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=10.0.0.95:2888:3888;2181 server.2=10.0.0.187:2888:3888;2181 server.3=10.0.0.115:2888:3888;2181
    network_mode: host

  kafka3:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka3
    container_name: kafka3
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 10.0.0.115
      KAFKA_HOST_NAME: 10.0.0.115
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: 10.0.0.95:2181,10.0.0.187:2181,10.0.0.115:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.0.0.115:9092
      KAFKA_LISTENERS: PLAINTEXT://10.0.0.115:9092
    volumes:
      - /data/wangzunbin/volume/kfkluster/kafka3/logs:/kafka:Z
    network_mode: host
EOF
kafkaCluster]# docker-compose up -d
Create a test topic on Broker 1 (the 10.0.0.95 server)
  • 1. On Broker 1, create a topic with a replication factor of 3 and 5 partitions for testing

Because Kafka spreads a topic's partitions across the brokers, this topic's 5 partitions will be distributed over the 3 brokers: two brokers receive two partitions each, and the remaining broker receives one. This is verified below.

kafkaCluster]# docker exec -it kafka1 /bin/bash
bash-4.4# cd /opt/kafka_2.13-2.7.0/bin/
bash-4.4# kafka-topics.sh --create --zookeeper 10.0.0.95:2181 --replication-factor 3 --partitions 5 --topic TestTopic
Created topic TestTopic.
  • 2. View the newly created topic's details
bash-4.4# kafka-topics.sh --describe --zookeeper 10.0.0.95:2181 --topic TestTopic
Topic: TestTopic	PartitionCount: 5	ReplicationFactor: 3	Configs: 
	Topic: TestTopic	Partition: 0	Leader: 3	Replicas: 3,1,2	Isr: 3,1,2
	Topic: TestTopic	Partition: 1	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: TestTopic	Partition: 2	Leader: 2	Replicas: 2,3,1	Isr: 2,3,1
	Topic: TestTopic	Partition: 3	Leader: 3	Replicas: 3,2,1	Isr: 3,2,1
	Topic: TestTopic	Partition: 4	Leader: 1	Replicas: 1,3,2	Isr: 1,3,2

As claimed above — "the topic's 5 partitions are spread across the 3 brokers, two brokers holding two partitions each and one broker holding one" — the output breaks down as follows:

Topic: TestTopic PartitionCount: 5 ReplicationFactor: 3		# TestTopic has 5 partitions and 3 replicas;
Topic: TestTopic Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2	# Leader: 3 means the leader replica of partition 0 is on the broker with broker.id = 3;
# Replicas lists the brokers holding a replica of this partition (leader and followers, alive or not): broker.id = 3,1,2
# Isr lists the replicas that are alive and in sync with the leader: broker.id = 3,1,2
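The 2/2/1 split above follows from round-robin leader placement. Below is a minimal Python sketch of the idea — a simplification, since Kafka's real assignment algorithm also starts from a random broker and shifts the follower lists, so actual leader numbers will vary:

```python
from collections import Counter

def assign_leaders(num_partitions, broker_ids, start=0):
    """Round-robin each partition's leader across the broker list
    (simplified model of Kafka's replica assignment)."""
    return {p: broker_ids[(start + p) % len(broker_ids)]
            for p in range(num_partitions)}

# 5 partitions over brokers 1..3, as in the TestTopic example.
leaders = assign_leaders(5, [1, 2, 3])
per_broker = Counter(leaders.values())
print(leaders)                      # {0: 1, 1: 2, 2: 3, 3: 1, 4: 2}
print(sorted(per_broker.values()))  # [1, 2, 2]: two brokers get 2 partitions, one gets 1
```

Whatever the starting broker, 5 partitions over 3 brokers always yields a 2/2/1 split, matching the describe output above.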
Verifying the Kafka Cluster

The previous step created the topic TestTopic on Broker 1 (the 10.0.0.95 server). Now open two more terminals, enter the kafka2 (10.0.0.187 server) and kafka3 (10.0.0.115 server) containers, and check whether the topic has been synchronized to both.

  • 1. Check kafka2
kafkaCluster]# docker exec -it kafka2 /bin/bash
bash-4.4# cd /opt/kafka_2.13-2.7.0/bin/
bash-4.4# kafka-topics.sh --describe --zookeeper 10.0.0.95:2181 --topic TestTopic
Topic: TestTopic	PartitionCount: 5	ReplicationFactor: 3	Configs: 
	Topic: TestTopic	Partition: 0	Leader: 3	Replicas: 3,1,2	Isr: 3,1,2
	Topic: TestTopic	Partition: 1	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: TestTopic	Partition: 2	Leader: 2	Replicas: 2,3,1	Isr: 2,3,1
	Topic: TestTopic	Partition: 3	Leader: 3	Replicas: 3,2,1	Isr: 3,2,1
	Topic: TestTopic	Partition: 4	Leader: 1	Replicas: 1,3,2	Isr: 1,3,2
  • 2. Check kafka3
kafkaCluster]# docker exec -it kafka3 /bin/bash
bash-4.4# cd /opt/kafka_2.13-2.7.0/bin/
bash-4.4# kafka-topics.sh --describe --zookeeper 10.0.0.95:2181 --topic TestTopic
Topic: TestTopic	PartitionCount: 5	ReplicationFactor: 3	Configs: 
	Topic: TestTopic	Partition: 0	Leader: 3	Replicas: 3,1,2	Isr: 3,1,2
	Topic: TestTopic	Partition: 1	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: TestTopic	Partition: 2	Leader: 2	Replicas: 2,3,1	Isr: 2,3,1
	Topic: TestTopic	Partition: 3	Leader: 3	Replicas: 3,2,1	Isr: 3,2,1
	Topic: TestTopic	Partition: 4	Leader: 1	Replicas: 1,3,2	Isr: 1,3,2

As shown, the kafka2 and kafka3 container services already see the newly created topic.
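The same check can be done programmatically. Here is a small sketch that parses the sample --describe output above (pasted as a string) and counts partitions per leader broker:

```python
import re
from collections import Counter

# Sample output of `kafka-topics.sh --describe --topic TestTopic` from above.
describe = """\
Topic: TestTopic  Partition: 0  Leader: 3  Replicas: 3,1,2  Isr: 3,1,2
Topic: TestTopic  Partition: 1  Leader: 1  Replicas: 1,2,3  Isr: 1,2,3
Topic: TestTopic  Partition: 2  Leader: 2  Replicas: 2,3,1  Isr: 2,3,1
Topic: TestTopic  Partition: 3  Leader: 3  Replicas: 3,2,1  Isr: 3,2,1
Topic: TestTopic  Partition: 4  Leader: 1  Replicas: 1,3,2  Isr: 1,3,2
"""

# Map each partition number to its leader's broker.id.
leaders = {int(p): int(l)
           for p, l in re.findall(r"Partition: (\d+)\s+Leader: (\d+)", describe)}
per_broker = Counter(leaders.values())
print(per_broker)  # brokers 3 and 1 lead 2 partitions each, broker 2 leads 1
```

Because every broker answers --describe from the shared cluster metadata, running this against output captured in kafka1, kafka2, or kafka3 yields the same counts.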

  • 3. Run a producer on Broker 1 (the 10.0.0.95 server), and a consumer each on Broker 2 (the 10.0.0.187 server) and Broker 3 (the 10.0.0.115 server)

On broker 1 (inside the kafka1 container entered earlier), run a producer and type a message:

bash-4.4# kafka-console-producer.sh --broker-list 10.0.0.95:9092 --topic TestTopic
>test	# type the producer message

On broker 2 (inside the kafka2 container entered earlier), run a consumer; it receives the message sent by the producer:

bash-4.4# kafka-console-consumer.sh --bootstrap-server 10.0.0.187:9092 --topic TestTopic --from-beginning
test	# received message

On broker 3 (inside the kafka3 container entered earlier), run a consumer; it likewise receives the producer's message:

bash-4.4# kafka-console-consumer.sh --bootstrap-server 10.0.0.115:9092 --topic TestTopic --from-beginning
test	# received message
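Which of the topic's 5 partitions a keyed record lands on is decided by the producer's partitioner; Kafka's default Java client hashes the key with murmur2 and takes the result modulo the partition count. A simplified sketch of that idea (using CRC32 instead of murmur2, so the partition numbers will differ from a real client's):

```python
import zlib

def pick_partition(key: bytes, num_partitions: int) -> int:
    """Map a message key onto a partition by hashing (simplified:
    real Kafka producers use murmur2, not crc32)."""
    return zlib.crc32(key) % num_partitions

# The same key always hashes to the same partition, which is what
# guarantees per-key ordering within a partition.
assert pick_partition(b"order-42", 5) == pick_partition(b"order-42", 5)
print(pick_partition(b"order-42", 5))  # a stable value in 0..4
```

Messages produced without a key (like the console-producer test above) are instead spread across partitions by the client, so consumers on any broker still receive them.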
Conclusion

Kafka documentation (Chinese translation)
Apache Kafka official documentation
wurstmeister/kafka image repository
