Docker: ZooKeeper, Kafka and kafka-manager, with HBase and Hadoop running locally

Our current project has to keep development costs down, so Docker needs to be used to the fullest: for now only a single server is available.
The plan: run three ZooKeeper nodes and three Kafka brokers in Docker; HBase and Hadoop run locally on the host, without Docker.
Reference: https://www.cnblogs.com/idea360/p/12411859.html
Docker is already installed on the server. To speed up development we use docker-compose, so install it first:

sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Change 1.24.1 to whichever release you want to use; compatibility between the compose file format and the Docker Engine is as follows:

Compose file format version    Docker Engine version
3.4                            17.09.0+
3.3                            17.06.0+
3.2                            17.04.0+
3.1                            1.13.1+
3.0                            1.13.0+
2.3                            17.06.0+
2.2                            1.13.0+
2.1                            1.12.0+
2.0                            1.10.0+
1.0                            1.9.1+
Next, make the binary executable and link it:

sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose --version

If a version string is printed, the installation succeeded.
Next, write two compose files, one for ZooKeeper and one for Kafka. Below I refer to them as docker-compose-zk.yml and docker-compose-kafka.yml, but the names are up to you.
(Note: volumes entries can be added in both files to map data directories onto the host, e.g.
"/opt/kafka/kafka1/data/:/kafka"
I did this for the Kafka brokers below but not for ZooKeeper.)
First, the ZooKeeper file:

version: '3.4'

services:
  zoo1:
    image: zookeeper:3.4.10
    restart: always
    hostname: zoo1
    container_name: zoo1
    ports:
      - "2184:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888

  zoo2:
    image: zookeeper:3.4.10
    restart: always
    hostname: zoo2
    container_name: zoo2
    ports:
      - "2185:2181"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zoo3:2888:3888

  zoo3:
    image: zookeeper:3.4.10
    restart: always
    hostname: zoo3
    container_name: zoo3
    ports:
      - "2186:2181"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=0.0.0.0:2888:3888
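
Bring the ZooKeeper project up first and check that the ensemble has elected a leader. A minimal smoke test, assuming zkServer.sh is on the PATH inside the official zookeeper image (it is in 3.4.10):

docker-compose -f docker-compose-zk.yml up -d
docker exec zoo1 zkServer.sh status   # expect "Mode: follower" or "Mode: leader"
docker exec zoo2 zkServer.sh status
docker exec zoo3 zkServer.sh status

Next, the Kafka and kafka-manager file: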

version: '3.4'


services:

  kafka1:
    image: wurstmeister/kafka:2.11-0.11.0.3
    restart: unless-stopped
    container_name: kafka1
    ports:
      - "9093:9092"
    external_links:
      - zoo1
      - zoo2
      - zoo3
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: 172.21.0.3                     ## change: host IP
      KAFKA_ADVERTISED_PORT: 9093                                ## change: host-mapped port
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.21.0.3:9093    ## listener advertised to clients; change: host IP
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181,zoo2:2181,zoo3:2181"
      KAFKA_DELETE_TOPIC_ENABLE: 'true'
    volumes:
      - "/home/cdata/data1/docker/kafka/kafka1/docker.sock:/var/run/docker.sock"
      - "/home/cdata/data1/docker/kafka/kafka1/data/:/kafka"
    


  kafka2:
    image: wurstmeister/kafka:2.11-0.11.0.3
    restart: unless-stopped
    container_name: kafka2
    ports:
      - "9094:9092"
    external_links:
      - zoo1
      - zoo2
      - zoo3
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ADVERTISED_HOST_NAME: 172.21.0.3                     ## change: host IP
      KAFKA_ADVERTISED_PORT: 9094                                ## change: host-mapped port
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.21.0.3:9094    ## change: host IP
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181,zoo2:2181,zoo3:2181"
      KAFKA_DELETE_TOPIC_ENABLE: 'true'
    volumes:
      - "/home/cdata/data1/docker/kafka/kafka2/docker.sock:/var/run/docker.sock"
      - "/home/cdata/data1/docker/kafka/kafka2/data/:/kafka"
    

  kafka3:
    image: wurstmeister/kafka:2.11-0.11.0.3
    restart: unless-stopped
    container_name: kafka3
    ports:
      - "9095:9092"
    external_links:
      - zoo1
      - zoo2
      - zoo3
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ADVERTISED_HOST_NAME: 172.21.0.3                     ## change: host IP
      KAFKA_ADVERTISED_PORT: 9095                                ## change: host-mapped port
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.21.0.3:9095    ## change: host IP
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181,zoo2:2181,zoo3:2181"
      KAFKA_DELETE_TOPIC_ENABLE: 'true'
    volumes:
      - "/home/cdata/data1/docker/kafka/kafka3/docker.sock:/var/run/docker.sock"
      - "/home/cdata/data1/docker/kafka/kafka3/data/:/kafka"

  kafka-manager:
    image: sheepkiller/kafka-manager:latest
    restart: unless-stopped
    container_name: kafka-manager
    hostname: kafka-manager
    ports:
      - "9000:9000"
    links:            # containers created by this compose file
      - kafka1
      - kafka2
      - kafka3
    external_links:   # containers outside this compose file (the ZooKeeper project)
      - zoo1
      - zoo2
      - zoo3
    environment:
      ZK_HOSTS: zoo1:2181,zoo2:2181,zoo3:2181                 ## the dockerized ensemble, reached via the links above
      TZ: CST-8


After writing the yml files, start each project with docker-compose up -d; that completes the installation and brings everything up. kafka-manager is then reachable in a browser on port 9000:
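
With the names chosen above (the ZooKeeper project is already up from the earlier step):

docker-compose -f docker-compose-kafka.yml up -d
docker ps    # all seven containers should show as Up

One caveat: external_links by itself does not join the two projects' networks. If the brokers cannot resolve zoo1/zoo2/zoo3, attach both files to a shared external network, or start everything from a single compose file.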
Click Add Cluster and fill in the corresponding configuration (the Cluster Zookeeper Hosts field takes the ensemble, e.g. zoo1:2181,zoo2:2181,zoo3:2181).
Topics can then be created and queried from the UI.
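
The same check can be done from the command line; a sketch, assuming the Kafka scripts are on the PATH inside the wurstmeister image and using a topic name of my own choosing (on a 0.11 broker, topics are still managed through ZooKeeper):

docker exec kafka1 kafka-topics.sh --create --zookeeper zoo1:2181 \
    --replication-factor 3 --partitions 3 --topic smoke-test
docker exec kafka1 kafka-topics.sh --describe --zookeeper zoo1:2181 --topic smoke-test

The describe output should show the replicas spread across broker IDs 1, 2 and 3.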
Next comes the Hadoop installation: configure the relevant files, then start the NameNode and DataNode.
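
As a rough sketch (paths and ports are placeholders): point fs.defaultFS in core-site.xml at the local NameNode. I use 8020 here because kafka-manager already occupies port 9000 on this single machine:

<!-- core-site.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- 9000 would clash with kafka-manager above -->
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>

Then format HDFS once and start the daemons (Hadoop 2.x layout assumed):

hdfs namenode -format
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode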

Then install HBase. Once it is installed, the core configuration is:
hbase-site.xml
Bind it to the HDFS instance and to the dockerized ZooKeeper ensemble (the same one Kafka uses); a sketch follows.
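A minimal sketch, assuming your HBase version accepts host:port pairs in hbase.zookeeper.quorum (needed here because each ZooKeeper container is mapped to a different host port):

<!-- hbase-site.xml -->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <!-- must match fs.defaultFS above -->
    <value>hdfs://localhost:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <!-- the host-mapped ports of the three zookeeper containers -->
    <value>localhost:2184,localhost:2185,localhost:2186</value>
  </property>
</configuration>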

hbase-env.sh
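The essential lines look something like this; JAVA_HOME is a placeholder for your JDK path, and HBASE_MANAGES_ZK=false is the important one, so HBase uses the external dockerized ensemble instead of spawning its own ZooKeeper:

# hbase-env.sh
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
export HBASE_MANAGES_ZK=false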
