Docker Swarm Cluster Setup and Testing

Docker Swarm

Documentation

https://docs.docker.com/engine/swarm/

References

https://qiita.com/Brutus/items/b3dfe5957294caa82669

https://docs.docker.jp/swarm/overview.html

Architecture overview

A manager node can both manage the cluster and run containers.

A worker node can only run containers.

Setup

Docker environment setup

OS settings

# Disable SELinux and firewalld

# Network settings
[root@vm1 ~]# ip -br a | grep 0s8 | awk '{print $3}'
192.168.50.100/24

[root@vm2 ~]# ip -br a | grep 0s8 | awk '{print $3}'
192.168.50.120/24

Install Docker and Compose

# docker
[root@vm1 ~]# cat install-docker.sh
yum remove docker* -y
rm -rf /var/lib/docker
yum -y install wget
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io -y
docker --version
systemctl enable docker --now
docker run hello-world
[root@vm1 ~]# bash install-docker.sh

# Install docker-compose (optional)
[root@vm1 ~]# curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
[root@vm1 ~]# chmod +x /usr/local/bin/docker-compose

[root@vm1 ~]# docker -v
Docker version 20.10.12, build e91ed57

[root@vm1 ~]# docker-compose -v
docker-compose version 1.29.2, build 5becea4c

[root@vm2 ~]# docker -v
Docker version 20.10.12, build e91ed57

[root@vm2 ~]# docker-compose -v
docker-compose version 1.29.2, build 5becea4c

The docker0 network (no changes needed)

[root@vm1 ~]# docker network inspect bridge -f "{{.IPAM.Config}}"
[{192.168.80.0/24  192.168.80.1 map[]}]

[root@vm2 ~]# docker network inspect bridge -f "{{.IPAM.Config}}"
[{192.168.90.0/24  192.168.90.1 map[]}]

Swarm cluster setup

Initialize #1 (manager)

[root@vm1 ~]# docker swarm init --advertise-addr 192.168.50.100
Swarm initialized: current node (kdcrkd6sqteevq9jgy70fd0h0) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

# The join token can also be retrieved manually

[root@vm1 ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377

# List the cluster nodes

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12

The worker joins the cluster

[root@vm2 ~]# docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377
This node joined a swarm as a worker.

List the cluster nodes

docker node ls only works on a manager node:

[root@vm2 ~]# docker node ls
Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.

# On the manager (vm1):

ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active                          20.10.12

Add labels

Is this something like a DNS host alias?

[root@vm1 ~]# docker node update --label-add name=swarm-master-1 vm1
vm1

[root@vm1 ~]# docker node update --label-add name=swarm-master-2 vm2
vm2

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active                          20.10.12

View the labels

[root@vm1 ~]# docker node inspect vm1 -f "{{.Spec.Labels}}"
map[name:swarm-master-1]

[root@vm1 ~]# docker node inspect vm2 -f "{{.Spec.Labels}}"
map[HOSTNAME:master-2 name:master-2]

These labels don't seem to have any effect here?

[root@vm1 ~]# docker node update --help
Usage:  docker node update [OPTIONS] NODE
Update a node
Options:
      --availability string   Availability of the node ("active"|"pause"|"drain")
      --label-add list        Add or update a node label (key=value)
      --label-rm list         Remove a node label if exists
      --role string           Role of the node ("worker"|"manager")

[root@vm1 ~]# docker node update --label-add name=master-2 vm2
vm2
[root@vm1 ~]# echo $?
0
[root@vm1 ~]#

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active                          20.10.12

[root@vm1 ~]# docker node promote master-2
Error: No such node: master-2

[root@vm1 ~]# docker node update --label-add HOSTNAME=master-2 vm2
vm2
[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active                          20.10.12
[root@vm1 ~]# docker node promote master-2
Error: No such node: master-2
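Node labels are not alternative node names, which is why the promote attempts above fail. Their actual use is in service placement constraints. A hypothetical stack-file fragment (the deploy section is illustrative) pinning tasks to the node labeled above might look like:

```yaml
# compose v3 "deploy" section; node.labels.name refers to the label added above
deploy:
  placement:
    constraints:
      - node.labels.name == swarm-master-1
```

The CLI equivalent is the --constraint flag on docker service create, e.g. --constraint node.labels.name==swarm-master-1.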

Promote the worker to a manager

Note: docker node promote takes a node ID or hostname, not a label value, which is why the attempts with master-2 above failed.

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active         Reachable        20.10.12

Node #2 can now view the cluster state

[root@vm2 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0     vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq *   vm2        Ready     Active         Reachable        20.10.12

Inspect node details

[root@vm1 ~]# docker node inspect vm2 -f "{{.Spec.Labels}}"
map[HOSTNAME:master-2 name:master-2]
[root@vm1 ~]# docker node inspect vm2

Create a network

Use the overlay driver

[root@vm1 ~]# docker network create -d overlay --subnet=192.168.82.0/24 --gateway=192.168.82.1 --attachable swarm-net
xywzrf7ftwenaxbu0zmewh183

Verify

[root@vm1 ~]# docker network inspect swarm-net -f "{{.IPAM}}"
{default map[] [{192.168.82.0/24  192.168.82.1 map[]}]}

Create a service and verify it

Create

[root@vm1 ~]# docker service create --replicas 3 -p 10080:80 --network swarm-net --name nginx-cluster nginx
r4v6w094yxl370bynyzghh37a
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
[root@vm1 ~]#
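For reference, the same service could also be declared in a compose v3 stack file and deployed with docker stack deploy -c docker-compose.yml web (the stack name "web" is illustrative); a minimal sketch:

```yaml
version: "3.8"
services:
  nginx-cluster:
    image: nginx:latest
    ports:
      - "10080:80"        # published via the ingress routing mesh
    networks:
      - swarm-net
    deploy:
      replicas: 3
networks:
  swarm-net:
    external: true        # the overlay network created above
```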

Inspect

[root@vm1 ~]# docker service ls
ID             NAME            MODE         REPLICAS   IMAGE          PORTS
r4v6w094yxl3   nginx-cluster   replicated   3/3        nginx:latest   *:10080->80/tcp

[root@vm1 ~]# ss -ntl | grep 10080
LISTEN 0      128                *:10080            *:*

[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
3ddd0e479de6   nginx:latest   "/docker-entrypoint.…"   7 minutes ago   Up 7 minutes   80/tcp    nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l

[root@vm1 ~]# docker port nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l
[root@vm1 ~]# echo $?
0
[root@vm1 ~]#

# docker port prints nothing because the port is published through the swarm ingress routing mesh, not mapped on the individual container.

Access test

[root@vm1 ~]# curl 192.168.50.100:10080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

On node #2

[root@vm2 ~]# docker service  ls
ID             NAME            MODE         REPLICAS   IMAGE          PORTS
r4v6w094yxl3   nginx-cluster   replicated   3/3        nginx:latest   *:10080->80/tcp
[root@vm2 ~]# ss -ntl
State         Recv-Q        Send-Q               Local Address:Port                Peer Address:Port        Process
LISTEN        0             128                        0.0.0.0:22                       0.0.0.0:*
LISTEN        0             128                              *:10080                          *:*
LISTEN        0             128                              *:2377                           *:*
LISTEN        0             128                              *:7946                           *:*
LISTEN        0             128                           [::]:22                          [::]:*


[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
6b1e246bdc34   nginx:latest   "/docker-entrypoint.…"   8 minutes ago   Up 8 minutes   80/tcp    nginx-cluster.3.yofvioldzci3k4geve7lykyrs
0d6709372322   nginx:latest   "/docker-entrypoint.…"   8 minutes ago   Up 8 minutes   80/tcp    nginx-cluster.2.gp7szi6348r0gfakr6v1i42ga
[root@vm2 ~]#

# The remaining container runs on node #1
[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
3ddd0e479de6   nginx:latest   "/docker-entrypoint.…"   9 minutes ago   Up 9 minutes   80/tcp    nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l

The overlay network is also visible on node #2

[root@vm2 ~]# docker network ls | grep swarm-net
xywzrf7ftwen   swarm-net         overlay   swarm

Access node #2's IP to check the forwarding

[root@vm2 ~]# curl 192.168.50.120:10080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
...

Check how requests to a single host are balanced across containers

Modify the default web pages

# Container 1, on master 1
[root@vm1 ~]# docker exec -it nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l bash
root@3ddd0e479de6:/# echo '#1 in master 1' > /usr/share/nginx/html/index.html

# Containers 2 and 3, on master 2
[root@vm2 ~]# docker exec -it nginx-cluster.2.gp7szi6348r0gfakr6v1i42ga bash
root@0d6709372322:/# echo '#2 in master 2' > /usr/share/nginx/html/index.html
root@0d6709372322:/#

[root@vm2 ~]# docker exec -it nginx-cluster.3.yofvioldzci3k4geve7lykyrs bash
root@6b1e246bdc34:/#  echo '#3 in master 2' > /usr/share/nginx/html/index.html

Access test against #2's IP (round-robin)

[root@vm2 ~]# curl 192.168.50.120:10080
#3 in master 2
[root@vm2 ~]# curl 192.168.50.120:10080
#1 in master 1
[root@vm2 ~]# curl 192.168.50.120:10080
#2 in master 2
[root@vm2 ~]# curl 192.168.50.120:10080
#3 in master 2

Access test against #1's IP (round-robin)

[root@vm2 ~]# curl 192.168.50.100:10080
#2 in master 2
[root@vm2 ~]# curl 192.168.50.100:10080
#1 in master 1
[root@vm2 ~]# curl 192.168.50.100:10080
#3 in master 2
[root@vm2 ~]# curl 192.168.50.100:10080
#2 in master 2
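The round-robin behavior above comes from the service's ingress virtual IP rotating connections across the task slots. Conceptually it behaves like a modular counter over the task list; a toy sketch (the real mesh uses IPVS, this only mimics the rotation):

```shell
#!/usr/bin/env bash
# Toy model of round-robin selection over a service's tasks.
backends=(nginx-cluster.1 nginx-cluster.2 nginx-cluster.3)
i=0
next_backend() {
  # print the current slot, then advance the counter
  echo "${backends[i % ${#backends[@]}]}"
  i=$(( i + 1 ))
}
next_backend   # nginx-cluster.1
next_backend   # nginx-cluster.2
next_backend   # nginx-cluster.3
next_backend   # wraps around to nginx-cluster.1
```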

Verify HA

Shut down a node and see whether its containers are automatically moved to another available node.

# Before shutting down master 2

[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
6b1e246bdc34   nginx:latest   "/docker-entrypoint.…"   18 minutes ago   Up 18 minutes   80/tcp    nginx-cluster.3.yofvioldzci3k4geve7lykyrs
0d6709372322   nginx:latest   "/docker-entrypoint.…"   18 minutes ago   Up 18 minutes   80/tcp    nginx-cluster.2.gp7szi6348r0gfakr6v1i42ga
[root@vm2 ~]#

# Stop dockerd (to simulate a server crash)

[root@vm2 ~]# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket

[root@vm2 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)

After stopping #2, node state cannot be queried for a while; meanwhile docker.socket is trying to re-activate the dockerd service on #2

[root@vm1 ~]# docker node ls
Error response from daemon: rpc error: code = DeadlineExceeded desc = context deadline exceeded

After a while, dockerd is re-activated

[root@vm2 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-01-23 13:22:38 JST; 28s ago

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active         Reachable        20.10.12

Now try shutting the machine down

[root@vm2 ~]# shutdown -h now

With the host completely unreachable, #2's manager status becomes Unreachable

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active         Unreachable      20.10.12

With only one of the two managers left, service commands fail: the Raft quorum (a majority of managers) has been lost. Two managers tolerate zero failures, which is why an odd number of managers (3, 5, ...) is recommended.

[root@vm1 ~]# docker service ls
Error response from daemon: rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online.

[root@vm1 ~]# docker node ls
Error response from daemon: rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online.
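The errors above are quorum loss. The number of manager failures a swarm tolerates is floor((N-1)/2) for N managers; a quick sketch of that arithmetic:

```shell
#!/usr/bin/env bash
# Manager failures a Raft quorum of n managers tolerates: floor((n-1)/2).
quorum_tolerance() {
  echo $(( ($1 - 1) / 2 ))
}
quorum_tolerance 2   # 0 -> two managers are no safer than one
quorum_tolerance 3   # 1
quorum_tolerance 5   # 2
```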

After vm3 (added in the next section, "Verify HA (3 nodes → 2)") joined, the tasks displaced from #2 all ended up on #1:

[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
acc40b31df3f   nginx:latest   "/docker-entrypoint.…"   2 minutes ago    Up 2 minutes    80/tcp    nginx-cluster.3.ei48wkjvtmn53mfsjthlb52ef
d4635d8f2322   nginx:latest   "/docker-entrypoint.…"   2 minutes ago    Up 2 minutes    80/tcp    nginx-cluster.2.yyc2fmh73p23adbu44auuzf7r
3ddd0e479de6   nginx:latest   "/docker-entrypoint.…"   31 minutes ago   Up 31 minutes   80/tcp    nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active         Reachable        20.10.12
lp4i21pj0sij9yz81f7u8dzy7     vm3        Ready     Active                          20.10.12

Verify HA (3 nodes → 2)

A healthy cluster
[root@vm1 ~]# ip -br a | grep enp0s8 | awk '{print $3}'
192.168.50.100/24
[root@vm2 ~]# ip -br a | grep enp0s8 | awk '{print $3}'
192.168.50.120/24
[root@vm3 ~]# ip -br a | grep enp0s8 | awk '{print $3}'
192.168.50.130/24

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active         Reachable        20.10.12
lp4i21pj0sij9yz81f7u8dzy7     vm3        Ready     Active                          20.10.12

Create a service (6 containers)

[root@vm1 ~]# docker service rm r4v6w094yxl3
r4v6w094yxl3

[root@vm1 ~]# docker service create --replicas 6 -p 10080:80 --network swarm-net --name nginx-cluster nginx
kzdm5zhgt1eo9goxy0rjwmklm
overall progress: 6 out of 6 tasks
1/6: running   [==================================================>]
2/6: running   [==================================================>]
3/6: running   [==================================================>]
4/6: running   [==================================================>]
5/6: running   [==================================================>]
6/6: running   [==================================================>]
verify: Service converged
[root@vm1 ~]#

On each host

[root@vm1 ~]# docker service ls
ID             NAME            MODE         REPLICAS   IMAGE          PORTS
kzdm5zhgt1eo   nginx-cluster   replicated   6/6        nginx:latest   *:10080->80/tcp

[root@vm3 ~]# docker service ls
Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.


[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
31177737f781   nginx:latest   "/docker-entrypoint.…"   28 seconds ago   Up 25 seconds   80/tcp    nginx-cluster.5.9a7wd7yqo4lweayw3352kssru
791b93b799d8   nginx:latest   "/docker-entrypoint.…"   28 seconds ago   Up 25 seconds   80/tcp    nginx-cluster.2.bcrt9chkfrmbjyy30n4g3il71

[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
4a59afd4d2d2   nginx:latest   "/docker-entrypoint.…"   3 seconds ago   Up 2 seconds   80/tcp    nginx-cluster.1.c46oo5nzbchhbn6v07rft9d5b
bac746d67e4f   nginx:latest   "/docker-entrypoint.…"   4 seconds ago   Up 2 seconds   80/tcp    nginx-cluster.4.71wn1fz6yojbzunktws2qqslg


[root@vm3 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
f0985bc21d07   nginx:latest   "/docker-entrypoint.…"   7 seconds ago   Up 5 seconds   80/tcp    nginx-cluster.6.vvykeb5g04gihc5jg4la26903
d14c20577a85   nginx:latest   "/docker-entrypoint.…"   7 seconds ago   Up 5 seconds   80/tcp    nginx-cluster.3.qwccv5u0jlwfp5txp5593d8cc

Failover (worker)

Suppose #3 (a worker) goes down

[root@vm3 ~]# shutdown -h now

# The host is detected as Down, but AVAILABILITY stays Active: availability is an administrative setting (active/pause/drain), not a health status

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active         Reachable        20.10.12
lp4i21pj0sij9yz81f7u8dzy7     vm3        Down      Active                          20.10.12

# Service state: 2 replacement containers were started for the ones that went down, so 8 tasks are counted in total

[root@vm1 ~]# docker service ls
ID             NAME            MODE         REPLICAS   IMAGE          PORTS
kzdm5zhgt1eo   nginx-cluster   replicated   8/6        nginx:latest   *:10080->80/tcp

# Evenly distributed across the remaining 2 hosts

[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS     NAMES
fce351b79e7a   nginx:latest   "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp    nginx-cluster.3.fr0a93dgk3bhiqv49q84ebo4o
31177737f781   nginx:latest   "/docker-entrypoint.…"   3 minutes ago        Up 3 minutes        80/tcp    nginx-cluster.5.9a7wd7yqo4lweayw3352kssru
791b93b799d8   nginx:latest   "/docker-entrypoint.…"   3 minutes ago        Up 3 minutes        80/tcp    nginx-cluster.2.bcrt9chkfrmbjyy30n4g3il71

[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS     NAMES
3fee32911501   nginx:latest   "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp    nginx-cluster.6.p50b0kedcjmum7wrp6bprm6uf
4a59afd4d2d2   nginx:latest   "/docker-entrypoint.…"   3 minutes ago        Up 3 minutes        80/tcp    nginx-cluster.1.c46oo5nzbchhbn6v07rft9d5b
bac746d67e4f   nginx:latest   "/docker-entrypoint.…"   3 minutes ago        Up 3 minutes        80/tcp    nginx-cluster.4.71wn1fz6yojbzunktws2qqslg
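What is happening here is the orchestrator's reconcile loop: it continuously compares the desired replica count with the tasks observed running, starting or removing tasks to converge (the transient 8/6 above reflects replacements having started before the down node's stale tasks are purged). A conceptual sketch:

```shell
#!/usr/bin/env bash
# Conceptual reconcile step: desired replicas vs. tasks observed running.
reconcile() {
  local desired=$1 running=$2
  if   [ "$running" -lt "$desired" ]; then echo "start $(( desired - running ))"
  elif [ "$running" -gt "$desired" ]; then echo "remove $(( running - desired ))"
  else echo "converged"
  fi
}
reconcile 6 4   # start 2   (after vm3 went down)
reconcile 6 8   # remove 2  (stale tasks still counted)
reconcile 6 6   # converged
```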

After #3 (the worker) is repaired

# #3's original containers do not come back after the reboot: without a restart policy they stay stopped, and their tasks were already rescheduled elsewhere

[root@vm3 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

# After #3 comes back up, the service's task count drops from 8 back to 6
[root@vm1 ~]# docker service ls
ID             NAME              MODE         REPLICAS   IMAGE          PORTS
kzdm5zhgt1eo   nginx-cluster     replicated   6/6        nginx:latest   *:10080->80/tcp

# Node status
[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active         Reachable        20.10.12
lp4i21pj0sij9yz81f7u8dzy7     vm3        Ready     Active                          20.10.12

# A service created now is preferentially placed on the less-loaded #3 (is the default notion of load the number of running containers?)

[root@vm1 ~]# docker service create --replicas 2 -p 10081:80 --network swarm-net --name nginx-cluster-2 nginx
sfcq2vc5orxsgax5200mqtdbm
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
[root@vm1 ~]#


[root@vm1 ~]# docker service ls
ID             NAME              MODE         REPLICAS   IMAGE          PORTS
kzdm5zhgt1eo   nginx-cluster     replicated   6/6        nginx:latest   *:10080->80/tcp
sfcq2vc5orxs   nginx-cluster-2   replicated   2/2        nginx:latest   *:10081->80/tcp

# #3 received only one of the two new containers, so placement is apparently not purely by total container count
[root@vm3 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS     NAMES
e3465f370616   nginx:latest   "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp    nginx-cluster-2.1.twgr7jtplhr6c7i1cupymwf9k
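The observation above is consistent with the default "spread" placement: as far as is documented, the scheduler prefers nodes with the fewest tasks (ranking per service before overall load), subject to any constraints. A toy version of that ranking (node names and counts illustrative, not the real algorithm):

```shell
#!/usr/bin/env bash
# Toy "spread" ranking: pick the node with the fewest running tasks.
# Arguments are node=count pairs, e.g. vm1=3 vm2=3 vm3=1
pick_node() {
  local best="" best_count=0 pair node count
  for pair in "$@"; do
    node=${pair%%=*}; count=${pair##*=}
    if [ -z "$best" ] || [ "$count" -lt "$best_count" ]; then
      best=$node; best_count=$count
    fi
  done
  echo "$best"
}
pick_node vm1=3 vm2=3 vm3=1   # vm3
```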

Delete a single container
# Remove container 2
[root@vm1 ~]# docker rm $(docker stop nginx-cluster-2.2.qs4tlxp1ilbtyd95xhkusjg48)
nginx-cluster-2.2.qs4tlxp1ilbtyd95xhkusjg48

# A replacement is automatically created on another host
[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
719e0941efa4   nginx:latest   "/docker-entrypoint.…"   20 seconds ago   Up 15 seconds   80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

Delete the service

A service has no stop option and no start either; only create and rm.

[root@vm1 ~]# docker service --help
Usage:  docker service COMMAND
Manage services
Commands:
  create      Create a new service
  inspect     Display detailed information on one or more services
  logs        Fetch the logs of a service or task
  ls          List services
  ps          List the tasks of one or more services
  rm          Remove one or more services
  rollback    Revert changes to a service's configuration
  scale       Scale one or multiple replicated services
  update      Update a service

Remove the service

[root@vm1 ~]# docker service rm nginx-cluster
nginx-cluster
[root@vm1 ~]# docker service ls
ID             NAME              MODE         REPLICAS   IMAGE          PORTS
sfcq2vc5orxs   nginx-cluster-2   replicated   2/2        nginx:latest   *:10081->80/tcp

# All containers created by the service are deleted

[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@vm1 ~]# docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@vm1 ~]#

[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS              PORTS     NAMES
719e0941efa4   nginx:latest   "/docker-entrypoint.…"   2 minutes ago   Up About a minute   80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS              PORTS     NAMES
719e0941efa4   nginx:latest   "/docker-entrypoint.…"   2 minutes ago   Up About a minute   80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

Manually stop a container

[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
719e0941efa4   nginx:latest   "/docker-entrypoint.…"   4 minutes ago   Up 4 minutes   80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

[root@vm2 ~]# docker stop nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@vm2 ~]#

# A few seconds later, a new container is created

[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
598c665b7a8e   nginx:latest   "/docker-entrypoint.…"   19 seconds ago   Up 13 seconds   80/tcp    nginx-cluster-2.2.qmgbury0aixey2sozive6377l

[root@vm2 ~]# docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS                      PORTS     NAMES
598c665b7a8e   nginx:latest   "/docker-entrypoint.…"   24 seconds ago   Up 18 seconds               80/tcp    nginx-cluster-2.2.qmgbury0aixey2sozive6377l
719e0941efa4   nginx:latest   "/docker-entrypoint.…"   5 minutes ago    Exited (0) 24 seconds ago             nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

[root@vm2 ~]# docker service ls
ID             NAME              MODE         REPLICAS   IMAGE          PORTS
sfcq2vc5orxs   nginx-cluster-2   replicated   2/2        nginx:latest   *:10081->80/tcp

What if the old container is started again?

[root@vm2 ~]# docker start nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
598c665b7a8e   nginx:latest   "/docker-entrypoint.…"   2 minutes ago   Up 2 minutes   80/tcp    nginx-cluster-2.2.qmgbury0aixey2sozive6377l
719e0941efa4   nginx:latest   "/docker-entrypoint.…"   7 minutes ago   Up 1 second    80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

# The container can be started, but the swarm has already abandoned it

[root@vm2 ~]# docker service ls
ID             NAME              MODE         REPLICAS   IMAGE          PORTS
sfcq2vc5orxs   nginx-cluster-2   replicated   2/2        nginx:latest   *:10081->80/tcp

# Removing the service deletes the old container as well

[root@vm2 ~]# docker service rm $(docker service ls -q)
sfcq2vc5orxs

[root@vm2 ~]# docker service ls
ID        NAME      MODE      REPLICAS   IMAGE     PORTS
[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@vm2 ~]# docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@vm2 ~]#

Miscellaneous

Leave the swarm
[root@vm3 ~]# docker swarm leave
Error response from daemon: You are attempting to leave the swarm on a node that is participating as a manager. The only way to restore a swarm that has lost consensus is to reinitialize it with `--force-new-cluster`. Use `--force` to suppress this message.
[root@vm3 ~]# docker swarm leave --force

[root@vm3 ~]# docker node ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.

Delete the swarm

Once every node has run the leave command, the swarm no longer exists.

[root@vm1 ~]# docker swarm leave --force
Node left the swarm.

[root@vm1 ~]# docker node ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.

QA

Should every node be made a manager? (Probably not: each manager replicates the Raft state, and Docker recommends an odd number of managers, typically 3, 5, or 7.)

When only one node is left (manager or worker), can the cluster no longer be managed? (Management fails whenever a majority of managers is lost; recovery means restoring quorum or re-initializing with --force-new-cluster.)

Do you have to log in to each host and run docker ps to find where containers run? (No: docker service ps <service> on a manager lists every task together with the node it runs on.)

docker service exists to keep a service available at all times, so there are no stop or start subcommands, only create and rm. If a task container is stopped or removed with docker stop / docker rm, the service automatically creates and starts a new container (with a new ID) to replace it. A running container disappearing or stopping is treated as a failure, and a replacement is always created.
