Docker + ELK 7.8 in Practice: Building a Single-Host ZooKeeper Cluster and Kafka Cluster with docker-compose

Prerequisite: Docker and docker-compose are already installed on the server.

Note: the Kafka cluster depends on the ZooKeeper cluster, so we build the ZooKeeper cluster first.

ZooKeeper cluster setup

1. Create the directory
mkdir -p /opt/elk7/zookeeper
cd /opt/elk7/zookeeper
2. Create the docker-compose.yml file

First create a dedicated Docker network; the Kafka cluster built later will join the same network.

# List all networks
docker network ls

# Remove the network if it already exists
docker network rm kafka-default

# Create the network; a subnet and gateway must be specified
docker network create --driver bridge --subnet 172.18.0.0/24 --gateway 172.18.0.1 kafka-default
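
To double-check that the network exists and uses the expected address range, you can inspect it (an optional verification step, not part of the original write-up):

# Print the IPAM configuration of the new network
docker network inspect kafka-default --format '{{json .IPAM.Config}}'
# The output should include "Subnet": "172.18.0.0/24" and "Gateway": "172.18.0.1"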

Run vi docker-compose.yml and enter the following content:

version: '3.3'
services:
  zk01:
    image: zookeeper
    restart: always
    hostname: zk01
    container_name: zk01
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    volumes:
      - /opt/elk7/zookeeper/zk01/data:/data
      - /opt/elk7/zookeeper/zk01/datalog:/datalog
      - /opt/elk7/zookeeper/zk01/logs:/logs
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zk01:2888:3888;2181 server.2=zk02:2888:3888;2181 server.3=zk03:2888:3888;2181
    networks:
      kafka-default:
        ipv4_address: 172.18.0.11

  zk02:
    image: zookeeper
    restart: always
    hostname: zk02
    container_name: zk02
    ports:
      - 2182:2181
      - 2889:2888
      - 3889:3888
    volumes:
      - /opt/elk7/zookeeper/zk02/data:/data
      - /opt/elk7/zookeeper/zk02/datalog:/datalog
      - /opt/elk7/zookeeper/zk02/logs:/logs
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zk01:2888:3888;2181 server.2=zk02:2888:3888;2181 server.3=zk03:2888:3888;2181
    networks:
      kafka-default:
        ipv4_address: 172.18.0.12

  zk03:
    image: zookeeper:latest
    restart: always
    hostname: zk03
    container_name: zk03
    ports:
      - 2183:2181
      - 2890:2888
      - 3890:3888
    volumes:
      - /opt/elk7/zookeeper/zk03/data:/data
      - /opt/elk7/zookeeper/zk03/datalog:/datalog
      - /opt/elk7/zookeeper/zk03/logs:/logs
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zk01:2888:3888;2181 server.2=zk02:2888:3888;2181 server.3=zk03:2888:3888;2181
    networks:
      kafka-default:
        ipv4_address: 172.18.0.13

networks:
  kafka-default:
    external:
      name: kafka-default
 Notes:
   1. docker-compose creates the host directories for the volume mappings automatically, so there is no need to create them by hand. Change the host paths to the locations you actually use.
   2. hostname: zk01: since this is a pseudo-cluster on a single machine, /etc/hosts was edited to add the three hostnames zk01, zk02 and zk03 (a sketch follows these notes); these names also resolve inside the containers.
   3. The ports in ZOO_SERVERS are the containers' internal ports; pay special attention to the semicolon after 3888 and the trailing client port 2181.
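
For reference, a minimal sketch of the /etc/hosts change described in note 2, assuming the server's own IP is 192.168.1.202 (a placeholder; substitute your machine's real address):

# Append host entries so zk01/zk02/zk03 resolve on the host itself
# 192.168.1.202 is a placeholder IP; replace it with your server's address
cat >> /etc/hosts <<'EOF'
192.168.1.202 zk01
192.168.1.202 zk02
192.168.1.202 zk03
EOF

On the host the three nodes are then reached through the different mapped client ports (2181, 2182, 2183), while inside the containers Docker's built-in DNS on the kafka-default network resolves the container names directly.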
3. Start ZooKeeper
# Start (append -d to run in the background)
docker-compose up -d
# Stop
docker-compose stop
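
Optionally, you can tail a node's log to watch leader election settle (a quick check, not part of the original steps):

# Follow the startup log of one node; press Ctrl+C to stop following
docker-compose logs -f zk01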

Once the containers are up, check their status with docker ps:

[root@kf202 zookeeper]# docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS                                                                              NAMES
eafbba82f51b        zookeeper:latest   "/docker-entrypoint.…"   17 minutes ago      Up 17 minutes       0.0.0.0:2181->2181/tcp, 0.0.0.0:2888->2888/tcp, 0.0.0.0:3888->3888/tcp, 8080/tcp   zk01
14b794171c94        zookeeper:latest   "/docker-entrypoint.…"   17 minutes ago      Up 17 minutes       8080/tcp, 0.0.0.0:2183->2181/tcp, 0.0.0.0:2890->2888/tcp, 0.0.0.0:3890->3888/tcp   zk03
e1f8faa64d6a        zookeeper:latest   "/docker-entrypoint.…"   17 minutes ago      Up 17 minutes       8080/tcp, 0.0.0.0:2182->2181/tcp, 0.0.0.0:2889->2888/tcp, 0.0.0.0:3889->3888/tcp   zk02
569ac3efa6aa        logstash:7.8.1     "/usr/local/bin/dock…"   3 days ago          Up 19 hours                                                                                            logstash
5dd146f7d16f        kibana:7.8.1       "/usr/local/bin/dumb…"   8 days ago          Up 8 days           0.0.0.0:5601->5601/tcp                                                             kibana
edf01440dbb2        elasticsearch:7.8.1                   "/tini -- /usr/local…"   8 days ago          Up 8 days           9200/tcp, 0.0.0.0:9203->9203/tcp, 9300/tcp, 0.0.0.0:9303->9303/tcp                 es-03
281a9e99e0d4        elasticsearch:7.8.1                   "/tini -- /usr/local…"   8 days ago          Up 8 days           9200/tcp, 0.0.0.0:9202->9202/tcp, 9300/tcp, 0.0.0.0:9302->9302/tcp                 es-02
1a0d40f6861a        elasticsearch:7.8.1                   "/tini -- /usr/local…"   8 days ago          Up 8 days           0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp                                     es-01

You can see that zk01, zk02 and zk03 all started successfully.

4. Verify
docker exec -it zk01 bash
cd bin
./zkServer.sh status

You should see output like the following:

root@zk02:/apache-zookeeper-3.6.1-bin/bin# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

Run the same command in the other containers to see which node is the leader and which are followers.
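
You can also query each node from the host through the mapped client ports (this assumes nc is installed on the host and that the srvr four-letter command is allowed, which is the ZooKeeper 3.5+ default):

# Ask each node for its role via the mapped ports 2181/2182/2183
echo srvr | nc localhost 2181 | grep Mode
echo srvr | nc localhost 2182 | grep Mode
echo srvr | nc localhost 2183 | grep Mode
# One node should report Mode: leader, the other two Mode: follower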

At this point the single-host ZooKeeper cluster on Docker is complete.

Kafka cluster setup

1. Create the directory
mkdir -p /opt/elk7/kafka
cd /opt/elk7/kafka
2. Create the docker-compose.yml file

The Docker network was already created during the ZooKeeper setup, so we simply reuse it here.

Run vi docker-compose.yml and enter the following content:

version: '3'

services:
  kafka1:
    image: wurstmeister/kafka
    restart: always
    container_name: kafka1
    hostname: kafka1
    ports:
    - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zk01:2181,zk02:2181,zk03:2181
    volumes:
    - /opt/elk7/kafka/kafka01/logs:/kafka
    external_links:
    - zk01
    - zk02
    - zk03
    networks:
      kafka-default:
        ipv4_address: 172.18.0.14

  kafka2:
    image: wurstmeister/kafka
    restart: always
    container_name: kafka2
    hostname: kafka2
    ports:
    - 9093:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka2
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zk01:2181,zk02:2181,zk03:2181
    volumes:
    - /opt/elk7/kafka/kafka02/logs:/kafka
    external_links:
    - zk01
    - zk02
    - zk03
    networks:
      kafka-default:
        ipv4_address: 172.18.0.15

  kafka3:
    image: wurstmeister/kafka
    restart: always
    container_name: kafka3
    hostname: kafka3
    ports:
    - 9094:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka3
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zk01:2181,zk02:2181,zk03:2181
    volumes:
    - /opt/elk7/kafka/kafka03/logs:/kafka
    external_links:
    - zk01
    - zk02
    - zk03
    networks:
      kafka-default:
        ipv4_address: 172.18.0.16

networks:
  kafka-default:
    external:
      name: kafka-default
3. Start Kafka
# Start (append -d to run in the background)
docker-compose up -d
# Stop
docker-compose stop

Once the containers are up, check their status with docker ps:

[root@kf202 kafka]# docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS                                                                              NAMES
bcaab7ab5a16        wurstmeister/kafka                    "start-kafka.sh"         About an hour ago   Up About an hour    0.0.0.0:9094->9092/tcp                                                             kafka3
86ce0941a790        wurstmeister/kafka                    "start-kafka.sh"         About an hour ago   Up About an hour    0.0.0.0:9093->9092/tcp                                                             kafka2
cfa0568e8701        wurstmeister/kafka                    "start-kafka.sh"         About an hour ago   Up About an hour    0.0.0.0:9092->9092/tcp                                                             kafka1

You can see that kafka1, kafka2 and kafka3 all started successfully.

4. Verify

Enter the container:

docker exec -it kafka1 bash

Check the environment variables:

bash-4.4# env
KAFKA_ADVERTISED_PORT=9092
KAFKA_HOME=/opt/kafka
LANG=C.UTF-8
KAFKA_ADVERTISED_HOST_NAME=kafka1
HOSTNAME=kafka1
JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk/jre
JAVA_VERSION=8u212
PWD=/home
HOME=/root
GLIBC_VERSION=2.31-r0
TERM=xterm
KAFKA_VERSION=2.6.0
SHLVL=1
KAFKA_ZOOKEEPER_CONNECT=zk01:2181,zk02:2181,zk03:2181
SCALA_VERSION=2.13
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin:/opt/kafka/bin
JAVA_ALPINE_VERSION=8.212.04-r0
_=/usr/bin/env
OLDPWD=/

You can see that Kafka is installed under /opt/kafka.
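
As a quick sanity check you can confirm that the KAFKA_* environment variables were rendered into the broker configuration (the wurstmeister image writes them into server.properties on startup; the grep pattern below just picks out the relevant properties):

# Inside the kafka1 container: check the rendered broker configuration
grep -E 'zookeeper.connect|advertised|broker.id' $KAFKA_HOME/config/server.properties

Since no KAFKA_BROKER_ID is set, the broker ids are auto-generated by Kafka (starting at 1001), which is why the describe output below shows ids such as 1001, 1002 and 1003.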

Create a test topic:

bash-4.4# $KAFKA_HOME/bin/kafka-topics.sh --create --topic topic --partitions 3 --zookeeper zk01:2181 --replication-factor 1 
Created topic topic.

Describe the topic from each node:

bash-4.4# $KAFKA_HOME/bin/kafka-topics.sh --zookeeper zk01:2181 --describe --topic topic
Topic: topic	PartitionCount: 3	ReplicationFactor: 1	Configs: 
	Topic: topic	Partition: 0	Leader: 1001	Replicas: 1001	Isr: 1001
	Topic: topic	Partition: 1	Leader: 1002	Replicas: 1002	Isr: 1002
	Topic: topic	Partition: 2	Leader: 1003	Replicas: 1003	Isr: 1003
bash-4.4# $KAFKA_HOME/bin/kafka-topics.sh --zookeeper zk02:2181 --describe --topic topic
Topic: topic	PartitionCount: 3	ReplicationFactor: 1	Configs: 
	Topic: topic	Partition: 0	Leader: 1001	Replicas: 1001	Isr: 1001
	Topic: topic	Partition: 1	Leader: 1002	Replicas: 1002	Isr: 1002
	Topic: topic	Partition: 2	Leader: 1003	Replicas: 1003	Isr: 1003
bash-4.4# $KAFKA_HOME/bin/kafka-topics.sh --zookeeper zk03:2181 --describe --topic topic
Topic: topic	PartitionCount: 3	ReplicationFactor: 1	Configs: 
	Topic: topic	Partition: 0	Leader: 1001	Replicas: 1001	Isr: 1001
	Topic: topic	Partition: 1	Leader: 1002	Replicas: 1002	Isr: 1002
	Topic: topic	Partition: 2	Leader: 1003	Replicas: 1003	Isr: 1003
[root@kf202 kafka]# docker exec -it kafka2 bash
bash-4.4# $KAFKA_HOME/bin/kafka-topics.sh --zookeeper zk01:2181 --describe --topic topic
Topic: topic	PartitionCount: 3	ReplicationFactor: 1	Configs: 
	Topic: topic	Partition: 0	Leader: 1001	Replicas: 1001	Isr: 1001
	Topic: topic	Partition: 1	Leader: 1002	Replicas: 1002	Isr: 1002
	Topic: topic	Partition: 2	Leader: 1003	Replicas: 1003	Isr: 1003
[root@kf202 kafka]# docker exec -it kafka3 bash
bash-4.4# $KAFKA_HOME/bin/kafka-topics.sh --zookeeper zk01:2181 --describe --topic topic
Topic: topic	PartitionCount: 3	ReplicationFactor: 1	Configs: 
	Topic: topic	Partition: 0	Leader: 1001	Replicas: 1001	Isr: 1001
	Topic: topic	Partition: 1	Leader: 1002	Replicas: 1002	Isr: 1002
	Topic: topic	Partition: 2	Leader: 1003	Replicas: 1003	Isr: 1003

You can see that the topic was created successfully and is visible from every Kafka node.
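You can also confirm that all three brokers registered themselves in ZooKeeper (a quick check run from the host; the ids are the auto-generated ones seen above):

# List the broker ids registered under /brokers/ids
docker exec -it zk01 bin/zkCli.sh -server localhost:2181 ls /brokers/ids
# Expect a list such as [1001, 1002, 1003]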

All topics can be listed with $KAFKA_HOME/bin/kafka-topics.sh --list --zookeeper zk01:2181.
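
To verify end-to-end message flow, you can run a console producer and consumer (a simple smoke test with the stock Kafka tools; run the two commands in separate terminals inside any Kafka containers):

# Terminal 1 (inside kafka1): each line you type is sent as a message
$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list kafka1:9092,kafka2:9092,kafka3:9092 --topic topic

# Terminal 2 (inside kafka2, for example): read the messages back from the beginning
$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server kafka1:9092,kafka2:9092,kafka3:9092 --topic topic --from-beginning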

This completes the single-host ZooKeeper cluster and Kafka cluster built with docker-compose.
Next we will have Filebeat ship its collected logs to Kafka, and have Logstash read them from Kafka.
