I. Installing RabbitMQ with docker-compose
1. Create the directory /data/rabbitmq on the server:
mkdir -p /data/rabbitmq
2. In /data/rabbitmq/, create a file named docker-compose.yml, open it, paste in the content below, and save:
vim docker-compose.yml
version: '3'
services:
  rabbitmq:
    image: rabbitmq:management
    container_name: rabbitmq
    restart: always
    volumes:
      - /data/rabbitmq/data:/var/lib/rabbitmq/
    ports:
      - 5672:5672
      - 15672:15672
    environment:
      RABBITMQ_DEFAULT_VHOST: '/'
      RABBITMQ_DEFAULT_USER: admin
      RABBITMQ_DEFAULT_PASS: icp@2022
# Press Esc, then type :wq to save and exit
:wq
3. Start the container:
docker-compose up -d
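Once the container is up, you can sanity-check it from the host. A minimal sketch, assuming you are on the docker host itself and kept the credentials from the compose file above (adjust HOST, user, and password if you changed them):

```shell
# Build the URL of the RabbitMQ management API's /api/overview endpoint,
# which returns cluster/node info as JSON when the broker is healthy.
HOST=localhost          # assumption: checking from the docker host itself
MGMT_PORT=15672         # management UI/API port published in docker-compose.yml
URL="http://$HOST:$MGMT_PORT/api/overview"
echo "$URL"
# To actually probe the broker (requires the container to be running):
#   curl -fsS -u 'admin:icp@2022' "$URL"
# A 200 response with JSON means RabbitMQ started correctly; the same
# credentials also log you into the web UI on port 15672.
```

The management image bundles the web UI, so the same port serves both the HTTP API and the browser console.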
II. Installing Kafka with docker-compose
1. Create the directories /opt/kafka and /opt/zookeeper on the server:
mkdir -p /opt/zookeeper && mkdir -p /opt/kafka
2. In /opt/kafka/, create a file named docker-compose.yml, open it, paste in the content below, and save:
vim docker-compose.yml
version: '3.7'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    volumes:
      - /opt/zookeeper/data:/data
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 2181:2181
    restart: always
  kafka_node1:
    image: wurstmeister/kafka
    container_name: kafka_node1
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    volumes:
      - /home/kafka/data:/kafka
    environment:
      KAFKA_CREATE_TOPICS: "test"
      KAFKA_BROKER_ID: 0
      KAFKA_LISTENERS: PLAINTEXT://kafka_node1:9092
      # Be sure to replace 172.16.88.25 with your own host IP
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.16.88.25:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_HEAP_OPTS: "-Xmx512M -Xms16M"
      # A segment whose last write is older than 24 hours (86400000 ms) is marked for deletion
      KAFKA_LOG_RETENTION_MS: 86400000
      # Roll a new segment every 100 MB (104857600 bytes)
      KAFKA_LOG_SEGMENT_BYTES: 104857600
      # Every 10 minutes (600000 ms), delete the segments that were marked for deletion
      KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: 600000
    restart: always
  kafka_manager:
    image: hlebalbau/kafka-manager:stable
    ports:
      - 9000:9000
    environment:
      ZK_HOSTS: "zookeeper:2181"
    depends_on:
      - zookeeper
      - kafka_node1
    restart: always
# Press Esc, then type :wq to save and exit
:wq
3. Start the containers:
docker-compose up -d
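The three log settings in the compose file are given in milliseconds and bytes, which are easy to misread. Their human-readable values can be checked with plain shell arithmetic:

```shell
# Convert the Kafka log settings from docker-compose.yml into human units.
RETENTION_MS=86400000       # KAFKA_LOG_RETENTION_MS
SEGMENT_BYTES=104857600     # KAFKA_LOG_SEGMENT_BYTES
CHECK_INTERVAL_MS=600000    # KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS

RETENTION_H=$((RETENTION_MS / 3600000))    # ms -> hours
SEGMENT_MB=$((SEGMENT_BYTES / 1048576))    # bytes -> MiB
CHECK_MIN=$((CHECK_INTERVAL_MS / 60000))   # ms -> minutes

echo "retention: ${RETENTION_H} hours"           # -> retention: 24 hours
echo "segment size: ${SEGMENT_MB} MB"            # -> segment size: 100 MB
echo "cleanup check: every ${CHECK_MIN} minutes" # -> cleanup check: every 10 minutes
```

So segments stop receiving writes at 100 MB, become deletable after 24 hours, and the cleaner looks for deletable segments every 10 minutes.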
4. Verification
(1) Enter the Kafka container by its container name:
docker exec -it kafka_node1 /bin/bash
(2) Create a topic named test:
kafka-topics.sh --create --topic test \
  --zookeeper zookeeper:2181 --replication-factor 1 \
  --partitions 1
(3) In a new session, enter the Kafka container and inspect the topic you just created:
kafka-topics.sh --zookeeper zookeeper:2181 \
  --describe --topic test
(4) In a new session, enter the Kafka container and start a producer to send a few messages:
kafka-console-producer.sh --topic=test \
  --broker-list kafka_node1:9092
(5) In a new session, enter the Kafka container and start a consumer to receive them:
kafka-console-consumer.sh \
  --bootstrap-server kafka_node1:9092 \
  --from-beginning --topic test
(6) If the consumer receives the messages, Kafka is up and running and you can proceed with local development and debugging.
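The topic steps above can also be driven from the host with docker exec instead of an interactive shell inside the container. A sketch, assuming the container name kafka_node1 from the compose file; with DRY_RUN=1 it only prints the commands, so the flow can be reviewed without a running cluster:

```shell
# kbin: either print (dry run) or execute a command inside kafka_node1.
DRY_RUN=1
kbin() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "+ $*"                        # dry run: show what would be executed
  else
    docker exec -i kafka_node1 "$@"    # real run: execute inside the container
  fi
}

# Create the topic, then describe it (mirrors steps (2) and (3) above).
kbin kafka-topics.sh --create --topic test \
  --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1
kbin kafka-topics.sh --zookeeper zookeeper:2181 --describe --topic test
```

Set DRY_RUN=0 once the cluster is up to run the same commands for real.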
5. Note: manual image-pull installation
(1) List the running containers:
docker ps
(2) Remove the existing kafka and zookeeper containers:
docker rm <container-id>
(3) Pull the kafka and zookeeper images again:
docker pull wurstmeister/zookeeper:latest
docker pull wurstmeister/kafka:2.12-2.3.1
(4) Run the containers.
Start zookeeper first:
docker run -d --name zookeeper -p 2181:2181 -v /data/zookeeper/data:/opt/zookeeper-3.4.13/data -v /etc/localtime:/etc/localtime:ro -t wurstmeister/zookeeper
Then run kafka:
docker run -d --name kafka -p 9092:9092 -v /data/kafka/logs/:/opt/kafka/logs -v /data/kafka/logs/:/kafka/kafka-logs -v /etc/localtime:/etc/localtime:ro -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=172.16.88.25:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://172.16.88.25:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -e KAFKA_LOG_DIRS=/kafka/kafka-logs -t wurstmeister/kafka:2.12-2.3.1
Notes: 1) the data directories are mapped out to the host; 2) the containers use the host's system time; 3) in the kafka run command, replace 172.16.88.25 with your own host IP.
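Hard-coding the host IP is easy to forget when the tutorial is reused on another machine. A small sketch that derives it instead; hostname -I is a Linux-ism, and the 127.0.0.1 fallback is an assumption for systems where it is unavailable:

```shell
# Detect the first host IP and build the advertised-listener value from it.
HOST_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
HOST_IP=${HOST_IP:-127.0.0.1}   # fallback when hostname -I is unavailable
ADV="PLAINTEXT://${HOST_IP}:9092"
echo "KAFKA_ADVERTISED_LISTENERS=$ADV"
# The value can then be passed to docker run without editing the command:
#   docker run ... -e KAFKA_ADVERTISED_LISTENERS="$ADV" ...
```

The advertised listener is what clients outside the container connect back to, which is why it must carry a host-reachable IP rather than 0.0.0.0 or the container name.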