kafka -- Basics -- 14 -- Kafka Deployment

Single-node deployment (Win10)


This guide deploys with docker-compose, so Docker and docker-compose must be installed first.
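
A quick way to confirm both prerequisites are in place (a minimal check; the exact versions will differ by environment):

docker --version
docker-compose --version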

1 Single-node deployment

1.1 kafka-single directory structure

----kafka-single
  ----docker-compose.yml

1.2 docker-compose.yml

version: "3"
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka 
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 10.0.8.11  # the host machine's IP, not the kafka container's IP
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

1.3 Start

Bring the containers up with docker-compose up -d, then check their status:

[root@lighthouse kafka-single]# docker-compose ps
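
Once the containers are up, another quick sanity check is to list topics from inside the kafka container itself. This is a sketch: it assumes the wurstmeister/kafka image keeps the Kafka scripts on its PATH, and it uses the service names from the docker-compose.yml above.

docker-compose exec kafka kafka-topics.sh --list --zookeeper zookeeper:2181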

1.4 Verify the deployment

1.4.1 Test with zkCli.sh

Configure zookeeper locally without starting it, mainly so that zkCli.sh can be used to check whether kafka started successfully (see steps 1.1-1.4 of the local zookeeper configuration notes).

(base) [root@lighthouse kafka-cluster]# zkCli.sh
Connecting to localhost:2181
...
...
...
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] get /cluster/id
{"version":"1","id":"_LrguUKmSSy9Mowvn3VAxQ"}
[zk: localhost:2181(CONNECTED) 1] ls /brokers/ids
[1, 2, 3]
[zk: localhost:2181(CONNECTED) 2] ls /brokers/topics
[__consumer_offsets, test]
[zk: localhost:2181(CONNECTED) 3] 
1.4.2 Test with client commands

The kafka_2.12-2.2.0 directory is obtained by downloading kafka_2.12-2.2.0.tgz and extracting it.

View the topic list

Open another terminal window and change into the kafka_2.12-2.2.0/bin directory:

(base) [root@lighthouse bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
test

Test with a producer and a consumer

Open another terminal window and change into the kafka_2.12-2.2.0/bin directory:

./kafka-console-producer.sh --broker-list localhost:9092 --topic test
>abc
>def
>opq

Open another terminal window and change into the kafka_2.12-2.2.0/bin directory:

(base) [root@yangkang bin]# ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
abc
def
opq

2 Cluster deployment

Use docker to start 3 zookeeper containers and 3 kafka containers on a single physical machine.

2.1 kafka-cluster directory structure

----kafka-cluster
  ----docker-compose.yml
  ----kafka
    ----kafka1
    ----kafka2
    ----kafka3
  ----zookeeper
    ----zoo1
      ----data
        ----myid
      ----datalog
      ----config
        ----zoo.cfg
    ----zoo2
      ----data
        ----myid
      ----datalog
      ----config
        ----zoo.cfg
    ----zoo3
      ----data
        ----myid
      ----datalog
      ----config
        ----zoo.cfg
  ----kafka_2.12-2.2.0

2.2 Network configuration

zookeeper   myid   ip              port mapping (host:container)
zoo1        1      172.16.238.10   2181:2181
zoo2        2      172.16.238.11   2182:2181
zoo3        3      172.16.238.12   2183:2181

kafka       broker_id   ip              port mapping (host:container)
kafka1      1           172.16.238.21   9092:9092
kafka2      2           172.16.238.22   9093:9092
kafka3      3           172.16.238.23   9094:9092

2.3 Configuration files

2.3.1 docker-compose.yml

version: '3.1'

services:
  zoo1:
    image: zookeeper
#    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
    networks:
      mynet:
        ipv4_address: 172.16.238.10
    volumes:
      - ./zookeeper/zoo1/data:/data
      - ./zookeeper/zoo1/datalog:/datalog
      - ./zookeeper/zoo1/config:/conf
  zoo2:
    image: zookeeper
#    restart: always
    hostname: zoo2
    ports:
      - 2182:2181
    networks:
      mynet:
        ipv4_address: 172.16.238.11
    volumes:
      - ./zookeeper/zoo2/data:/data
      - ./zookeeper/zoo2/datalog:/datalog
      - ./zookeeper/zoo2/config:/conf
  zoo3:
    image: zookeeper
#    restart: always
    hostname: zoo3
    ports:
      - 2183:2181
    networks:
      mynet:
        ipv4_address: 172.16.238.12
    volumes:
      - ./zookeeper/zoo3/data:/data
      - ./zookeeper/zoo3/datalog:/datalog
      - ./zookeeper/zoo3/config:/conf
  kafka1:
    image: wurstmeister/kafka
#    restart: always
    hostname: kafka1
    container_name: kafka1
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: 10.0.8.11  # the host machine's IP, not the kafka1 container's IP
      KAFKA_CREATE_TOPICS: "test:1:1"  # auto-creates a topic on startup, format "topic:partitions:replicas"
      KAFKA_MESSAGE_MAX_BYTES: 2000000  # maximum size of a single message in bytes
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
    volumes:
       - /var/run/docker.sock:/var/run/docker.sock
       - ./kafka/kafka1:/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    networks:
      mynet:
        ipv4_address: 172.16.238.21
  kafka2:
    image: wurstmeister/kafka
#    restart: always
    hostname: kafka2
    container_name: kafka2
    ports:
      - 9093:9092
    environment:
      KAFKA_BROKER_ID: 2 
      KAFKA_ADVERTISED_HOST_NAME: 10.0.8.11
      KAFKA_MESSAGE_MAX_BYTES: 2000000  # maximum size of a single message in bytes
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
    volumes:
       - /var/run/docker.sock:/var/run/docker.sock
       - ./kafka/kafka2:/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    networks:
      mynet:
        ipv4_address: 172.16.238.22
  kafka3:
    image: wurstmeister/kafka
#    restart: always
    hostname: kafka3
    container_name: kafka3
    ports:
      - 9094:9092
    environment:
      KAFKA_BROKER_ID: 3 
      KAFKA_ADVERTISED_HOST_NAME: 10.0.8.11 
      KAFKA_MESSAGE_MAX_BYTES: 2000000  # maximum size of a single message in bytes
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
    volumes:
       - /var/run/docker.sock:/var/run/docker.sock
       - ./kafka/kafka3:/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    networks:
      mynet:
        ipv4_address: 172.16.238.23
networks:
  mynet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
          gateway: 172.16.238.1

2.3.2 zoo1/config/zoo.cfg, zoo2/config/zoo.cfg, zoo3/config/zoo.cfg

These three configuration files have identical content:

dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
standaloneEnabled=true
admin.enableServer=true
clientPort=2181
server.1=172.16.238.10:2888:3888
server.2=172.16.238.11:2888:3888
server.3=172.16.238.12:2888:3888
4lw.commands.whitelist=*
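
Because 4lw.commands.whitelist=* is enabled, the role of each node can be checked from the host once the cluster is running, using the four-letter-word commands. A sketch, assuming nc is installed and using the host port mapping from section 2.2:

echo srvr | nc localhost 2181   # zoo1, output includes "Mode: leader" or "Mode: follower"
echo srvr | nc localhost 2182   # zoo2
echo srvr | nc localhost 2183   # zoo3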

2.3.3 myid

zoo1/data/myid

1

zoo2/data/myid

2

zoo3/data/myid

3
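
One way to create the directory layout from section 2.1 together with these myid files is a short shell snippet run inside kafka-cluster (a sketch; the zoo.cfg from 2.3.2 still needs to be copied into each config directory):

mkdir -p zookeeper/zoo{1,2,3}/{data,datalog,config} kafka/kafka{1,2,3}
echo 1 > zookeeper/zoo1/data/myid
echo 2 > zookeeper/zoo2/data/myid
echo 3 > zookeeper/zoo3/data/myid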

2.4 Start

Bring the containers up with docker-compose up -d, then check their status:

[root@lighthouse kafka-cluster]# docker-compose ps
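
If a broker later fails to show up under /brokers/ids, its container log is the first place to look (a sketch using the container names defined in the compose file):

docker-compose logs kafka1 | tail -n 50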

2.5 Verify the deployment

2.5.1 Test with zkCli.sh

Configure zookeeper locally without starting it, mainly so that zkCli.sh can be used to check whether the kafka cluster started successfully (see steps 1.1-1.4 of the local zookeeper configuration notes).

Open a terminal window and test through zoo1:

(base) [root@lighthouse kafka-cluster]# zkCli.sh
Connecting to localhost:2181
...
...
...
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] get /cluster/id
{"version":"1","id":"_LrguUKmSSy9Mowvn3VAxQ"}
[zk: localhost:2181(CONNECTED) 1] ls /brokers/ids
[1, 2, 3]
[zk: localhost:2181(CONNECTED) 2] ls /brokers/topics
[__consumer_offsets, test]
[zk: localhost:2181(CONNECTED) 3] 
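
From the same zkCli session, the registration data of an individual broker can also be inspected; for example broker 1, whose id was set by KAFKA_BROKER_ID in the compose file:

get /brokers/ids/1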

Open another terminal window and test through zoo2:

(base) [root@lighthouse kafka-cluster]# zkCli.sh
Connecting to localhost:2181
...
...
...
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] close
...
...
...
[zk: localhost:2181(CONNECTED) 1] connect localhost:2182
...
...
...
[zk: localhost:2182(CONNECTED) 2] get /cluster/id
{"version":"1","id":"_LrguUKmSSy9Mowvn3VAxQ"}
[zk: localhost:2182(CONNECTED) 3] ls /brokers/ids
[1, 2, 3]
[zk: localhost:2182(CONNECTED) 4] ls /brokers/topics
[__consumer_offsets, test]

Open another terminal window and test through zoo3:

Testing through zoo3 works the same way as testing through zoo2.

2.5.2 Test with client commands

The kafka_2.12-2.2.0 directory is obtained by downloading kafka_2.12-2.2.0.tgz and extracting it.

View the topic list

Open another terminal window and change into the kafka_2.12-2.2.0/bin directory:

(base) [root@lighthouse bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
test
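
Besides --list, --describe shows which broker holds the partition leader and replicas of a topic (run from the same kafka_2.12-2.2.0/bin directory; this Kafka version still addresses the tool via --zookeeper):

./kafka-topics.sh --describe --topic test --zookeeper localhost:2181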

Test with a producer and a consumer

Open another terminal window and change into the kafka_2.12-2.2.0/bin directory:

./kafka-console-producer.sh --broker-list localhost:9092 --topic test
>abc
>def
>opq

Open another terminal window and change into the kafka_2.12-2.2.0/bin directory:

(base) [root@yangkang bin]# ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
abc
def
opq
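
As a further check, the console consumer can join a named group, and kafka-consumer-groups.sh (also in kafka_2.12-2.2.0/bin) can then report its offsets. The group name testgroup is only an example; run the second command from another terminal:

./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --group testgroup
./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group testgroup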