Restarting a single container in a docker-compose stack

This post explains how to restart a specific container (such as worker) without restarting the whole docker-compose environment. It gives detailed steps: using the `docker-compose restart` command, and, after a local code change, how to rebuild and restart only the worker container, covering stopping, removing, backing up, rebuilding, and starting it.

(Draft; details still to be filled in)

 

I have a docker-compose.yml file that defines several containers: redis, postgres, mysql, and worker.

While working on the project I frequently need to restart the worker to pick up updates. Is there a good way to restart a single container (for example worker) without restarting the others?

Solution: this is simple. Use the command:

docker-compose restart worker 

You can also set how long to wait (in seconds) for the container to stop before it is killed:

docker-compose restart -t 30 worker 
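
To confirm the restart actually took effect, checking the service status and tailing its logs is usually enough (a small sketch; the service name worker follows the example above):

docker-compose ps worker                    # the worker service should show as "Up" again
docker-compose logs -f --tail=50 worker     # follow the latest log output of the restarted container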

 

Steps:

There is another common case: you have changed the code locally and need to repackage and redeploy, but you do not want to stop and restart the whole docker-compose stack. In that situation you can rebuild and replace just the one service, following the steps below (a consolidated command sketch comes after the list).

1. First, list the running containers with docker ps (docker ps -a lists all containers, including ones that are not running). The command docker container ls -a does the same thing.

2. Stop the container you want to update: docker-compose stop worker (docker-compose stop with no service name stops all containers defined in the YAML file).

3. Remove the container stopped in step 2: docker container rm c2cbf59d4e3c (c2cbf59d4e3c is the worker container's ID).

4. List all images: docker images

5. Back up the image so it can be restored if anything goes wrong: docker save worker -o /home/bak/worker-bak.tar

6. Remove the image: docker rmi worker

7. Place the freshly packaged jar at the path referenced by the docker-compose setup, then build a new image from the Dockerfile: docker build . -f Dockerfile-Worker -t worker

8. Bring the service up: docker-compose up -d worker

9. Restart it: docker-compose restart worker
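
The same workflow collected into one shell sketch (assumptions: the service and the image are both named worker, the Dockerfile is Dockerfile-Worker, and the backup path matches step 5; adjust them to your project):

# Steps 2-3: stop and remove the old worker container
docker-compose stop worker
docker-compose rm -f worker        # same effect as docker container rm <id>, but by service name

# Steps 5-6: back up and remove the old image
docker save worker -o /home/bak/worker-bak.tar
docker rmi worker

# Step 7: rebuild the image from the updated jar
docker build . -f Dockerfile-Worker -t worker

# Step 8: recreate and start only the worker service
docker-compose up -d worker

Since docker-compose up -d already starts the new container, the extra restart in step 9 is normally not needed. If the service ever declares a build: section in docker-compose.yml (the reference file below only uses image:), docker-compose up -d --build worker can rebuild and recreate it in one step; and the backup from step 5 can be restored with docker load -i /home/bak/worker-bak.tar.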


 

Dockerfile-Worker

# Base image providing JDK 1.8 (here a locally built image named jdk1.8)
FROM jdk1.8
WORKDIR /app
# Copy the locally packaged jar into the image
COPY ./target/worker.jar app.jar
CMD java -Xmx512m -Duser.timezone=GMT+8 -jar app.jar
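
To sanity-check the image right after building it (same tag as in step 7; java -version simply overrides the CMD to confirm the JDK baked into the image):

docker build . -f Dockerfile-Worker -t worker
docker run --rm worker java -version     # overrides CMD and prints the JDK version inside the image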

docker-compose.yml (for reference only; still incomplete)

version: '3'
services:
  redis:
    image: redis
    container_name: docker_redis
    volumes:
      - ./datadir:/data
      - ./conf/redis.conf:/usr/local/etc/redis/redis.conf
      - ./logs:/logs
    ports:
      - "20520:6379"

  mysql-db:
    container_name: mysql-docker        # explicit container name
    image: mysql:5.7.16                 # image and tag
    restart: always
    command: --default-authentication-plugin=mysql_native_password # fixes the authentication error when connecting
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_ROOT_HOST: ${MYSQL_ROOT_HOST}
    volumes:
      - "./mysql/data:/var/lib/mysql"           # 挂载数据目录
      - "./mysql/config:/etc/mysql/conf.d"      # 挂载配置文件目录


  register-center:
    image: register-center
    volumes:
      - /data/logs:/app/logs
    mem_limit: 600m
    extra_hosts:
      - 'node1.nifi-dev.com:192.168.1.10'
      - 'node2.nifi-dev.com:192.168.1.10'
      - 'node3.nifi-dev.com:192.168.1.10'
    environment:
      - CUR_ENV=sit
    ports:
      - "20741:8761"
  config-center:
    image: config-center
    volumes:
      - /data/logs:/app/logs
      - /data/skywalking/agent:/agent
    mem_limit: 2000m
    extra_hosts:
      - 'node1.nifi-dev.com:192.168.1.10'
      - 'node2.nifi-dev.com:192.168.1.10'
      - 'node3.nifi-dev.com:192.168.1.10'
    depends_on:
      - register-center
    environment:
      - EUREKA_SERVER_LIST=http://register-center:8761/eureka/
    command: /wait-for.sh register-center:8761/eureka/apps -- java -javaagent:/agent/skywalking-agent.jar -Dskywalking.agent.service_name=config-center -Dskywalking.collector.backend_service=192.168.1.147:20764 -Xmx256m -jar /app/app.jar --server.port=8760

  worker:
    image: worker
    volumes:
      - /data/logs:/app/logs
    mem_limit: 600m
    extra_hosts:
      - 'node1.nifi-dev.com:192.168.1.10'
      - 'node2.nifi-dev.com:192.168.1.10'
      - 'node3.nifi-dev.com:192.168.1.10'
    environment:
      - CUR_ENV=sit
    ports:
      - "20741:8761"

 

 

Additionally: a docker-compose.yml for deploying a Kafka cluster

version: '3'
services:

  zoo1:
    image: zookeeper
    restart: always
    container_name: zoo1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
    volumes:
      - ./zoo1/data:/data
      - ./zoo1/datalog:/datalog

  zoo2:
    image: zookeeper
    restart: always
    container_name: zoo2
    ports:
      - "2182:2181"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
    volumes:
      - ./zoo2/data:/data
      - ./zoo2/datalog:/datalog

  zoo3:
    image: zookeeper
    restart: always
    container_name: zoo3
    ports:
       - "2183:2181"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
    volumes:
      - ./zoo3/data:/data
      - ./zoo3/datalog:/datalog
      
  kafka1:
    image: wurstmeister/kafka
    ports:
      - "20540:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.73
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.73:20540
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /kafka/logs
    volumes:
      - ./kafka1/logs:/kafka/logs
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    container_name: kafka1

  kafka2:
    image: wurstmeister/kafka
    ports:
      - "9093:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.73
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.73:9093
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_BROKER_ID: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /kafka/logs
    volumes:
      - ./kafka2/logs:/kafka/logs
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    container_name: kafka2
   
  kafka3:
    image: wurstmeister/kafka
    ports:
      - "9094:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.73
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.73:9094
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_ADVERTISED_PORT: 9094
      KAFKA_BROKER_ID: 3
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /kafka/logs
    volumes:
      - ./kafka3/logs:/kafka/logs
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    container_name: kafka3


  ## Image: an open-source web UI for managing the Kafka cluster
  kafka-manager:
    image: sheepkiller/kafka-manager
    environment:
        ZK_HOSTS: 192.168.1.73
    ports:
      - "21105:9000" 

 
