Docker Swarm Cluster Explained (Part 2)

Cluster setup, scaling up, scaling down, and cluster monitoring

1. Lab Environment

server1 (manager)  172.25.2.1
server2 (worker)   172.25.2.2
server3 (worker)   172.25.2.3
Physical host (used for testing)  172.25.2.250

2. Building the Docker Swarm Cluster

(1) Initialize the swarm cluster on the manager node server1

[root@server1 ~]# docker swarm init
Swarm initialized: current node (0av4hrultxgojh9os0nfiycgo) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-285s2rlogan7siuapsqdsi9u3r2v22s7n2pnf6n8ldqs2ie8jl-1aebv5jv5wog2hcmwkkvsepdw 172.25.2.1:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
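
Note: if the join command above gets misplaced later, it can be printed again on the manager at any time, for example:

[root@server1 ~]# docker swarm join-token worker   # reprints the docker swarm join command for the worker role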
Install bridge-utils so that the brctl command is available, then check the bridges and networks on server1:

yum install -y bridge-utils

[root@server1 ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.024211545028	no		
docker_gwbridge		8000.0242fffe330d	no		vethb5c7536

[root@server1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
00c9ecf5b999        bridge              bridge              local
a7ec48b6f0ca        docker_gwbridge     bridge              local
78b8c58ab155        host                host                local
4609t88c0j29        ingress             overlay             swarm
485c668e0216        none                null                local
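
If you want a closer look at the overlay networking, the ingress network (and later our own webnet) can be inspected, for example:

docker network inspect ingress   # shows the overlay subnet and which containers/nodes are attached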

(2) Join server2 and server3 (worker nodes) to the cluster

Note: the workers join with the docker swarm join command (including the token) that was printed when the swarm was initialized on server1!

[root@server2 ~]# docker swarm join --token SWMTKN-1-285s2rlogan7siuapsqdsi9u3r2v22s7n2pnf6n8ldqs2ie8jl-1aebv5jv5wog2hcmwkkvsepdw 172.25.2.1:2377
This node joined a swarm as a worker.

[root@server3 ~]# docker swarm join --token SWMTKN-1-285s2rlogan7siuapsqdsi9u3r2v22s7n2pnf6n8ldqs2ie8jl-1aebv5jv5wog2hcmwkkvsepdw 172.25.2.1:2377
This node joined a swarm as a worker.

(3) On server1, check the node list to confirm that both workers were added

[root@server1 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
0av4hrultxgojh9os0nfiycgo *   server1             Ready               Active              Leader              18.09.7
ax2bs9nqqiu5u2r5o5kyy7blx     server2             Ready               Active                                  18.09.7
7zflkp1aetvsqub3lf8b0xl5w     server3             Ready               Active                                  18.09.7

# Note: make sure every node can resolve the other nodes' hostnames (local name resolution)!
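
A minimal /etc/hosts sketch for the name resolution mentioned above, using the addresses of this lab environment:

172.25.2.1  server1
172.25.2.2  server2
172.25.2.3  server3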

3. Deploying a Web Service on the Swarm Cluster

(1) Load the nginx image on all three nodes

docker pull nginx                     # pull from the registry
 
docker save -o nginx.tar nginx:latest # export the image to a tar archive

scp nginx.tar server2:`pwd`           # copy it to the worker nodes

scp nginx.tar server3:`pwd`

docker load -i nginx.tar              # on server2

docker load -i nginx.tar              # on server3
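
To confirm the image really landed on server2 and server3, a quick check like the following can be run on each node:

docker images nginx                   # should list nginx:latest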

(2) Deploy three nginx containers on the cluster to provide the service

Note: because the containers run across multiple hosts, the default bridge network cannot be used; we need an overlay network.

[root@server1 ~]# docker network create -d overlay webnet   ## create a network with the overlay driver
qfphmyucr1vgw2owzjh72vku6
[root@server1 ~]# docker service create --name web \
> --network webnet \
> --replicas 3 \
> -p 80:80 \
> nginx
# --replicas 3 sets the service to 3 tasks; the manager spreads the containers evenly across the three nodes.
# The service is attached to the overlay network we just created (cross-host traffic requires the overlay driver).
# You can also drop --network webnet and let the service use the default network!

Result:

[root@server1 ~]# docker network create -d overlay webnet
lvh7o1u3uut13qqb74mo7yn5u
[root@server1 ~]# docker service create --name web --network webnet   --no-resolve-image --replicas 3 -p 80:80 nginx:latest
mx9sc8hq9s41etyfnt4pl707g
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service converged 

# --no-resolve-image reportedly lets the service use the locally loaded image (the manager skips resolving the image digest from a registry)

# If you hit a "No such image" error, check that the image tag/version matches on every node!
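
To double-check what the service was created with (replicas, network, published port), its definition can be printed in a readable form:

[root@server1 ~]# docker service inspect --pretty web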

[root@server1 ~]# docker service ps web 
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
v7z7v8cpwlyu        web.1               nginx:latest        server3             Running             Running 44 seconds ago                       
zhhm80z0d5qx        web.2               nginx:latest        server2             Running             Running 44 seconds ago                       
u6m9rvkizspk        web.3               nginx:latest        server1             Running             Running 12 seconds ago 

Check whether port 80 is open:

tcp6       0      0 :::80                   :::*                    LISTEN      0          81279      2282/dockerd

tcp6       0      0 172.25.2.1:2377         172.25.2.2:53816        ESTABLISHED 0          30582      2282/dockerd 

tcp6       0      0 172.25.2.1:2377         172.25.2.3:34672        ESTABLISHED 0          32926      2282/dockerd

List the services in the cluster and the tasks of the web service:

[root@server1 ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
v6cal2a1q4lu        web                 replicated          3/3                 nginx:latest        *:80->80/tcp
[root@server1 ~]# docker service ps web 
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
v7z7v8cpwlyu        web.1               nginx:latest        server3             Running             Running 5 minutes ago                       
zhhm80z0d5qx        web.2               nginx:latest        server2             Running             Running 5 minutes ago                       
u6m9rvkizspk        web.3               nginx:latest        server1             Running             Running 4 minutes ago     

# All three tasks are running, one on each node

On all three nodes: copy an index page into the nginx default document root inside the local container

echo server1 > index.html   # on server1

docker cp index.html web.3.u6m9rvkizspkff4xopsxcf8n8:/usr/share/nginx/html

########################

[root@server2 ~]# cat index.html 
server2
[root@server2 ~]# docker cp index.html web.2.zhhm80z0d5qxk13yjz3h5oza3:/usr/share/nginx/html

[root@server3 ~]# cat index.html 
serevr3
[root@server3 ~]# docker cp index.html web.1.v7z7v8cpwlyumblemapuwoou1:/usr/share/nginx/html

# Note: the container name starting with web can be tab-completed

# Test 2: log in to each node and run docker ps -- the nginx containers are running on three different hosts!
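
If tab completion is not available, the full task/container name used by docker cp can also be looked up on each node, for example:

docker ps --filter name=web --format '{{.Names}}'   # prints the full name, e.g. web.3.u6m9rvkizspk...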

Test: verify load balancing

[root@server1 ~]# curl 172.25.2.1
server1
[root@server1 ~]# curl 172.25.2.1
serevr3
[root@server1 ~]# curl 172.25.2.1
server2
[root@server1 ~]# curl 172.25.2.1
server1
[root@server1 ~]# curl 172.25.2.1
serevr3
[root@server1 ~]# curl 172.25.2.1
server2
[root@server1 ~]# curl 172.25.2.1
server1
[root@server1 ~]# curl 172.25.2.1
serevr3
[root@server1 ~]# curl 172.25.2.1
server2
[root@server1 ~]# curl 172.25.2.1

# Note: access from the physical host can be blocked by the firewall and SELinux!

[12:39:34][kiosk@foundation15:~]$ for i in {1..10}; do curl 172.25.2.2; done
server1
serevr3
server2
server1
serevr3
server2
server1
serevr3
server2

# Note: during this test 172.25.2.1 was not reachable from the physical host, but 172.25.2.2 was! Because of the swarm routing mesh, a request to the published port on any reachable node is still balanced across all three tasks.

4. Scaling the Service Up and Down (increasing or decreasing the number of replicas)

for i in {1..10}; do curl 172.25.2.3/index.html ;done
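
To scale the service before re-running a test like the loop above, change the replica count; a minimal sketch (the replica counts here are only examples):

[root@server1 ~]# docker service scale web=6   # scale up: the manager schedules the extra tasks across the nodes
[root@server1 ~]# docker service ps web        # check where every task is running
[root@server1 ~]# docker service scale web=2   # scale down: surplus tasks are removed

The same can also be done with docker service update --replicas <n> web.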

 
