Setting up a Kafka cluster with Docker on Linux

Environment:

Host IP: 192.168.50.113

Broker cluster layout:

broker1: 192.168.50.113:9091
broker2: 192.168.50.113:9092
broker3: 192.168.50.113:9093

Install ZooKeeper

http://blog.csdn.net/lylclz/article/details/78633074

Create the data directories (on the host)

mkdir -p /data1/kafka/9091/logs;
mkdir -p /data1/kafka/9092/logs;
mkdir -p /data1/kafka/9093/logs;
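The three mkdir commands above can also be written as one loop. A small sketch; `KAFKA_ROOT` is a hypothetical override that defaults to a temporary directory so the loop is safe to try anywhere (set `KAFKA_ROOT=/data1/kafka` to reproduce the layout above):

```shell
# Create one <port>/logs directory per broker.
# KAFKA_ROOT is an assumed variable, not part of the original setup;
# it defaults to /tmp/kafka-demo for illustration.
KAFKA_ROOT=${KAFKA_ROOT:-/tmp/kafka-demo}
for p in 9091 9092 9093; do
  mkdir -p "$KAFKA_ROOT/$p/logs"
done
```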

Create the Kafka container

docker run -it --name=kafka_container  --net=host -v /data1:/data1  centos /bin/bash

Download and install Kafka

yum -y install java;
yum -y install wget;
wget https://archive.apache.org/dist/kafka/0.9.0.0/kafka_2.10-0.9.0.0.tgz;
tar -zxvf kafka_2.10-0.9.0.0.tgz;
mv kafka_2.10-0.9.0.0 /usr/local/kafka;

Configuration files

vi /data1/kafka/9091/server.properties

broker.id=1
port=9091
host.name=192.168.50.113
num.network.threads=4
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data1/kafka/9091/logs
num.partitions=5
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=192.168.50.113:2181,192.168.50.113:2182,192.168.50.113:2183
zookeeper.connection.timeout.ms=6000
queued.max.requests=500
log.cleanup.policy=delete

vi /data1/kafka/9092/server.properties

broker.id=2
port=9092
host.name=192.168.50.113
num.network.threads=4
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data1/kafka/9092/logs
num.partitions=5
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=192.168.50.113:2181,192.168.50.113:2182,192.168.50.113:2183
zookeeper.connection.timeout.ms=6000
queued.max.requests=500
log.cleanup.policy=delete

vi /data1/kafka/9093/server.properties

broker.id=3
port=9093
host.name=192.168.50.113
num.network.threads=4
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data1/kafka/9093/logs
num.partitions=5
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=192.168.50.113:2181,192.168.50.113:2182,192.168.50.113:2183
zookeeper.connection.timeout.ms=6000
queued.max.requests=500
log.cleanup.policy=delete
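The three property files differ only in `broker.id`, `port`, and `log.dirs`, so they can be generated from one template in a loop. A sketch; `KAFKA_CONF_ROOT` is a hypothetical variable that defaults to a temporary directory for illustration (set `KAFKA_CONF_ROOT=/data1/kafka` to write the actual files used above):

```shell
# Generate the three server.properties files from a single template.
# KAFKA_CONF_ROOT is an assumed override, defaulting to /tmp/kafka-conf
# so the sketch can run without touching /data1.
KAFKA_CONF_ROOT=${KAFKA_CONF_ROOT:-/tmp/kafka-conf}
id=1
for port in 9091 9092 9093; do
  mkdir -p "$KAFKA_CONF_ROOT/$port/logs"
  cat > "$KAFKA_CONF_ROOT/$port/server.properties" <<EOF
broker.id=$id
port=$port
host.name=192.168.50.113
num.network.threads=4
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=$KAFKA_CONF_ROOT/$port/logs
num.partitions=5
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=192.168.50.113:2181,192.168.50.113:2182,192.168.50.113:2183
zookeeper.connection.timeout.ms=6000
queued.max.requests=500
log.cleanup.policy=delete
EOF
  id=$((id + 1))
done
```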

Start the brokers

broker1:

/usr/local/kafka/bin/kafka-server-start.sh /data1/kafka/9091/server.properties > /dev/null 2>/dev/null &

broker2:

/usr/local/kafka/bin/kafka-server-start.sh /data1/kafka/9092/server.properties > /dev/null 2>/dev/null &

broker3:

/usr/local/kafka/bin/kafka-server-start.sh /data1/kafka/9093/server.properties > /dev/null 2>/dev/null &

Verify

Normal output (all three brokers listening):

[root@192 9091]# netstat -apn | egrep "(9091|9092|9093)" | grep LISTEN
tcp6       0      0 192.168.50.113:9091     :::*                    LISTEN      1976/java
tcp6       0      0 192.168.50.113:9092     :::*                    LISTEN      2037/java
tcp6       0      0 192.168.50.113:9093     :::*                    LISTEN      2092/java 
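The same port check can be scripted. A small sketch using bash's built-in `/dev/tcp` redirection; `KAFKA_HOST` is a hypothetical override defaulting to the host IP above, and ports that are not up simply report "not reachable":

```shell
# Probe each broker port with a 1-second timeout.
# KAFKA_HOST is an assumed variable, not part of the original setup.
HOST=${KAFKA_HOST:-192.168.50.113}
checked=0
for p in 9091 9092 9093; do
  if timeout 1 bash -c "exec 3<>/dev/tcp/$HOST/$p" 2>/dev/null; then
    echo "broker port $p: listening"
  else
    echo "broker port $p: not reachable"
  fi
  checked=$((checked + 1))
done
```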
### Preparation

Before deploying a containerized Kafka cluster with Docker on Linux, Docker itself must be installed and configured. On CentOS, you can add the Aliyun Docker repository with `yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo` so that the subsequent Docker installation goes smoothly[^1].

Then make sure a suitable version of Docker and the docker-compose tool are installed. For example, on a server with a particular ID, size, and IP address (such as ecs-kafka1, running CentOS 7.6 with 2 vCPUs and 4 GiB of memory), download the docker-compose binary for your platform architecture and make it executable:

```bash
sudo curl -L https://github.com/docker/compose/releases/download/v2.21.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```

Repeat these steps on every server instance that will host a node of the Kafka cluster[^3].

### Create the directory structure and the service definition file

To simplify management and maintenance, create a dedicated workspace for the project's resources. Inside it, write a service definition file named `docker-compose.yml` that declares the individual service components and their dependencies. The example below shows how to use community-provided Kafka images to quickly stand up a basic message-queue system[^4].

#### docker-compose.yml example

```yaml
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper:latest
    ports:
      - "2181:2181"
  kafka1:
    image: wurstmeister/kafka:2.13-2.8.0
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      ALLOW_PLAINTEXT_LISTENER: "yes"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka2:
    image: wurstmeister/kafka:2.13-2.8.0
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka2:9093
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      ALLOW_PLAINTEXT_LISTENER: "yes"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka3:
    image: wurstmeister/kafka:2.13-2.8.0
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka3:9094
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      ALLOW_PLAINTEXT_LISTENER: "yes"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```

This configuration describes three independent but interconnected Kafka broker instances along with the single ZooKeeper instance they all depend on. Note that it uses the community-maintained wurstmeister images, which ship with sensible defaults preconfigured; for real production use, adjust the options as needed or choose a different image source.

### Start the cluster

Once everything is in place, change into the directory containing the YAML file and run `docker-compose up -d`; Docker Compose will pull the required images and bring up the whole stack automatically. You should then be able to observe the members communicating normally, which marks the completion of a basic Docker-based Kafka cluster.