Environment preparation: 1) Set up a swarm mode cluster
root@docker1:/home/docker/xu/swarm# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
2bjtayk2pbbcl384ef9dxvyi0 * docker1 Down Active Leader
b4f80o1a32afg5buim66w71se docker2 Down Active
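For reference, a two-node cluster like the one above can be brought up roughly as follows (the advertise address and join token below are placeholders, not values from this environment):

```shell
# On docker1 (becomes the manager); the advertise address is a placeholder
docker swarm init --advertise-addr 192.168.1.10

# "docker swarm init" prints a "docker swarm join" command containing a token;
# run it on docker2 to join it as a worker (the token here is hypothetical)
docker swarm join --token <WORKER-TOKEN> 192.168.1.10:2377

# Back on the manager, verify that both nodes are listed
docker node ls
```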
2) Create an overlay network named mysql
root@docker1:/home/docker/xu/swarm# docker network ls
NETWORK ID NAME DRIVER SCOPE
1b1c3a449781 bridge bridge local
041a4a5c12d7 docker_gwbridge bridge local
a7c3520d5f74 host host local
10xf6wjdp7kc ingress overlay swarm
2t21vsd267ms mysql overlay swarm
a5b817842a3a none null local
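The mysql overlay network in the listing above can be created with a single command on a manager node (the explicit --subnet shown here is an assumption; Docker picks a subnet automatically if it is omitted):

```shell
# Create a swarm-scoped overlay network named mysql
docker network create --driver overlay --subnet 10.0.0.0/24 mysql

# Confirm it appears with scope "swarm"
docker network ls --filter name=mysql
```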
1. Swarm mode ships with a built-in overlay network named ingress, which is used for the swarm's VIP-based load balancing.
2. The official explanation of Swarm mode VIP load balancing:
The swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm. The swarm manager can automatically assign the service a PublishedPort or you can configure a PublishedPort for the service. You can specify any unused port. If you do not specify a port, the swarm manager assigns the service a port in the 30000-32767 range.
External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster whether or not the node is currently running the task for the service. All nodes in the swarm route ingress connections to a running task instance.
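As a quick illustration of the quoted behaviour, publishing a service without naming a host port lets the swarm manager assign a PublishedPort from the 30000-32767 range (the service name and image below are examples, not from this experiment):

```shell
# Publish target port 80 without specifying a host port;
# the swarm manager assigns a PublishedPort in the 30000-32767 range
docker service create --name web -p 80 nginx

# Inspect which published port was assigned
docker service inspect --format '{{json .Endpoint.Ports}}' web
```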
3. Routing Mesh is the key technology behind VIP load balancing. Its goal is to reserve the service's port on every host, so that the service is reachable from any machine in the cluster. It is implemented through the ingress network: as mentioned earlier, each task container gains an extra network interface, and inspecting the ingress network also reveals an additional endpoint named ingress-sbox.
root@docker1:/home/docker/xu/swarm# docker service inspect mysql
[
{
"ID": "ayxboy2ry1qlxojpslz9ujh0p",
"Version": {
"Index": 153
},
"CreatedAt": "2017-01-04T03:10:52.670583716Z",
"UpdatedAt": "2017-01-04T03:10:52.848750019Z",
"Spec": {
"Name": "mysql",
"TaskTemplate": {
"ContainerSpec": {
"Image": "docker1:5000/mysql",
"Env": [
"MYSQL_ROOT_PASSWORD=123456"
]
},
"Resources": {
"Limits": {},
"Reservations": {}
},
"RestartPolicy": {
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
"Mode": {
"Replicated": {
"Replicas": 3
}
},
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause"
},
"Networks": [
{
"Target": "2t21vsd267ms5b4q1p5loga6g"
}
],
"EndpointSpec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 3306,
"PublishedPort": 3306
}
]
}
},
"Endpoint": {
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 3306,
"PublishedPort": 3306
}
]
},
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 3306,
"PublishedPort": 3306
}
],
"VirtualIPs": [
{
"NetworkID": "10xf6wjdp7kc8fvw4g96eu4h5",
"Addr": "10.255.0.6/16" //对应的是ingress网络
},
{
"NetworkID": "2t21vsd267ms5b4q1p5loga6g",
"Addr": "10.0.0.2/24" //对应的是mysql网络
}
]
},
"UpdateStatus": {
"StartedAt": "0001-01-01T00:00:00Z",
"CompletedAt": "0001-01-01T00:00:00Z"
}
}
]
root@docker1:/home/docker/xu/swarm# docker network inspect ingress
[
{
"Name": "ingress",
"Id": "10xf6wjdp7kc8fvw4g96eu4h5",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.255.0.0/16",
"Gateway": "10.255.0.1"
}
]
},
"Internal": false,
"Containers": {
"f94593ea66347a64f44e81177f4fc444dc5ba9eeac97b36007ad38825381fc36": {
"Name": "mysql.1.dk3fhi1d01j3x8lzg26rehmva",
"EndpointID": "dbde133c6e6d3e9f15a911681e350f933e4d1e94e1cdc8fa1015aed1e33eefc0",
"MacAddress": "02:42:0a:ff:00:07",
"IPv4Address": "10.255.0.7/16",
"IPv6Address": ""
},
"ingress-sbox": {
"Name": "ingress-endpoint",
"EndpointID": "dfc75c3e1818a9aba28343cdda481fea11819bb46ac2d8b763a5914bbf87fe30",
"MacAddress": "02:42:0a:ff:00:03",
"IPv4Address": "10.255.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "256"
},
"Labels": {}
}
]
4. VIP load-balancing traffic flow
Host port (e.g. 8080) => ingress-sbox container (e.g. 10.255.0.3/16, per the ingress inspect output above) => IPVS distributes to the task containers.
A packet arriving at the host first flows into a special sandbox namespace. That sandbox shares the ingress network with our task containers, and iptables together with IPVS redirect the packet to a final container. This is how the service ends up reachable on port 8080 of every host.
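This traffic path can be observed directly on a host: Docker exposes the ingress-sbox network namespace under /var/run/docker/netns, and nsenter plus ipvsadm can list the rules inside it (ipvsadm must be installed on the host; the namespace path is the usual default and may vary):

```shell
# Enter the ingress-sbox namespace and show its interfaces
# (should include the 10.255.0.x address from the inspect output above)
nsenter --net=/var/run/docker/netns/ingress_sbox ip addr

# The iptables mangle table marks packets arriving on the published port...
nsenter --net=/var/run/docker/netns/ingress_sbox iptables -t mangle -nL PREROUTING

# ...and IPVS uses that firewall mark to balance across the task IPs
nsenter --net=/var/run/docker/netns/ingress_sbox ipvsadm -L -n
```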
5. DNS load balancing differs from VIP load balancing: it relies on a user-defined overlay network, such as the mysql network in this experiment.
Whether a service is created with DNS-based or VIP-based load balancing is controlled by the --endpoint-mode flag, for example:
1) DNS (dnsrr) mode
docker service create --network overlay-test --name mysql --replicas=3 --endpoint-mode=dnsrr dockertest1:5000/mysql
2) VIP mode
docker service create --network overlay-test -p 3306:3306 --name mysql --replicas=3 --endpoint-mode=vip dockertest1:5000/mysql
The -p flag is not allowed in dnsrr mode. Since vip is the default mode, --endpoint-mode can also be omitted when creating a VIP-mode service. When a service is created in dnsrr mode, its containers get no interface on the ingress network, as the following output shows:
root@docker1:/home/docker/xu/swarm# docker exec -ti 66d0 /bin/bash
root@66d0acd4ac01:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
272: eth0@if273: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.2/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe00:2/64 scope link
valid_lft forever preferred_lft forever
278: eth1@if279: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.4/16 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe12:4/64 scope link
valid_lft forever preferred_lft forever
Of the interfaces above: 1) lo is the local loopback interface; 2) eth1 attaches to the docker_gwbridge bridge, which makes the container's service reachable from the host (a telnet to 172.18.0.4 from the host reaches the service); 3) eth0 belongs to the mysql network created earlier, as docker network inspect mysql confirms:
root@docker1:/home/docker/xu/swarm# docker network inspect mysql
[
{
"Name": "mysql",
"Id": "2t21vsd267ms5b4q1p5loga6g",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Containers": {
"66d0acd4ac010a0e488483074fa8dc5c1be3631c0be0121567676df5bdc5cf38": {
"Name": "mysql.1.9d4pzyi0okzdzwh501j3u8d1r",
"EndpointID": "1463b3e479b582530db5e92513d90df25f8ab28917c48346576b9d73e9c12b33",
"MacAddress": "02:42:0a:00:00:02",
"IPv4Address": "10.0.0.2/24",
"IPv6Address": ""
},
"cfe34fdf93dd4739c5f71125527368646631d5c4d4a89b30f768d16d9c38ea12": {
"Name": "mysql.2.cstky0b52xnumtfgq98zngo3j",
"EndpointID": "8d2f38ee93bc4f4e57f4bb5201d9297ff36587d34592f14f46aaea690e72cc46",
"MacAddress": "02:42:0a:00:00:0f",
"IPv4Address": "10.0.0.15/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "257"
},
"Labels": {}
}
]
This service runs two tasks, whose containers have the IPs 10.0.0.2 and 10.0.0.15.
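Because this service was created with --endpoint-mode=dnsrr, resolving the service name from inside a container on the mysql network should return both task IPs as A records (in vip mode a single virtual IP would be returned instead). A quick check, using the first task's container ID from the output above:

```shell
# Query Docker's embedded DNS server for the service name;
# with dnsrr, each task IP (10.0.0.2 and 10.0.0.15) is returned as an A record.
# Note: nslookup may first need to be installed inside the mysql image.
docker exec -ti 66d0 nslookup mysql
```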
A dnsrr-mode service also has no VIP assigned; note the dnsrr endpoint configuration below:
root@docker1:/home/docker/xu/swarm# docker service inspect mysql
[
{
"ID": "68zci8gtfsprbuscfibad5ypz",
"Version": {
"Index": 862
},
"CreatedAt": "2017-01-04T05:21:53.149763995Z",
"UpdatedAt": "2017-01-04T05:21:53.149763995Z",
"Spec": {
"Name": "mysql",
"TaskTemplate": {
"ContainerSpec": {
"Image": "docker1:5000/mysql",
"Env": [
"MYSQL_ROOT_PASSWORD=123456"
]
},
"Resources": {
"Limits": {},
"Reservations": {}
},
"RestartPolicy": {
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
"Mode": {
"Replicated": {
"Replicas": 2
}
},
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause"
},
"Networks": [
{
"Target": "2t21vsd267ms5b4q1p5loga6g"
}
],
"EndpointSpec": {
"Mode": "dnsrr"
}
},
"Endpoint": {
"Spec": {}
},
"UpdateStatus": {
"StartedAt": "0001-01-01T00:00:00Z",
"CompletedAt": "0001-01-01T00:00:00Z"
}
}
]
That is all for Swarm mode networking for now; more will be shared in a future post.