docker overlay network test

Starting with version 1.9, docker introduced the overlay network (this article does not analyze the underlying technology in detail). It mainly addresses the shortcomings of earlier docker networking in cross-host communication. This article records the process of building and testing an overlay network, following the official guide.
The OS used here is CentOS 7 with kernel 3.10. Docker 1.9 requires kernel 3.19 or later for overlay networking, but since docker 1.10 the overlay driver also supports the 3.10 kernel, so docker 1.10.3 is used in this article.
The test environment consists of three VirtualBox VMs in total: one machine serves as the key-value store (etcd is used in this test), and the other two, net1 and net2, are used to test cross-host communication.
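
As a reference, a minimal way to bring up the key-value store on the etcd node might look like the following (a sketch assuming etcd 2.x; the name and data directory are arbitrary choices, and only the client URL on 172.28.0.2:4001 is what the docker daemons point at):

etcd --name kvstore --data-dir /var/lib/etcd \
  --listen-client-urls http://0.0.0.0:4001 \
  --advertise-client-urls http://172.28.0.2:4001
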
docker info:

[root@net1 vagrant]# docker  info
Containers: 1
 Running: 1
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 1.10.3
 Storage Driver: devicemapper
 Pool Name: docker-253:0-469034-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: /dev/vg-docker/data
 Metadata file: /dev/vg-docker/metadata
 Data Space Used: 41.16 MB
 Data Space Total: 10.74 GB
 Data Space Available: 10.7 GB
 Metadata Space Used: 761.9 kB
 Metadata Space Total: 10.63 GB
 Metadata Space Available: 10.63 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Library Version: 1.02.107-RHEL7 (2015-12-01)
Execution Driver: native-0.2
Logging Driver: json-file
Plugins: 
 Volume: local
 Network: null host overlay bridge
Kernel Version: 3.10.0-229.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 993.6 MiB
Name: net1
ID: TU6M:E6WM:PZDN:ULJX:EWKS:UPLQ:Z54D:XP52:64C7:Z4XN:TJ76:VG7O
WARNING: bridge-nf-call-ip6tables is disabled
Cluster store: etcd://172.28.0.2:4001
Cluster advertise: 172.28.0.3:0

Note: with the default loopback-device setup of devicemapper, the overlay network test ran into problems and the network could not be created successfully.
According to the official documentation, the following docker daemon parameters need to be configured:

--cluster-store=PROVIDER://URL Describes the location of the KV service.
--cluster-advertise=HOST_IP|HOST_IFACE:PORT
The IP address or interface of the HOST used for clustering.
--cluster-store-opt=KEY-VALUE OPTIONS
Options such as TLS certificate or tuning discovery Timers

docker daemon parameters:

/usr/bin/docker daemon -H fd:// --storage-driver=devicemapper --storage-opt dm.datadev=/dev/vg-docker/data --storage-opt dm.metadatadev=/dev/vg-docker/metadata  --cluster-store=etcd://172.28.0.2:4001 --cluster-advertise=eth1:0
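
To make these flags survive a restart, they can be baked into the service definition. The drop-in below is only a sketch under the assumption that docker is managed by systemd on this host; the file path is an assumption and not part of the original test:

# /etc/systemd/system/docker.service.d/overlay.conf (assumed path)
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// \
  --storage-driver=devicemapper \
  --storage-opt dm.datadev=/dev/vg-docker/data \
  --storage-opt dm.metadatadev=/dev/vg-docker/metadata \
  --cluster-store=etcd://172.28.0.2:4001 \
  --cluster-advertise=eth1:0

followed by systemctl daemon-reload && systemctl restart docker.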

Ports: 7946 is used for the control plane and 4789 for the data plane (VXLAN). They are opened with firewalld:

firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
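
The permanent rules only take effect after firewalld is reloaded; listing the open ports is a quick sanity check (these two commands were not part of the captured session):

firewall-cmd --reload
firewall-cmd --list-ports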

After the above parameters are configured, the network can be created.

docker network create -d overlay multihost

[root@net1 vagrant]# docker  network ls
NETWORK ID          NAME                DRIVER
15bb57daf277        multihost           overlay             
3cd7ab7018e9        docker_gwbridge     bridge              
a874aa0d9e0b        bridge              bridge              
9fe04ff37f6f        none                null                
010a53c2bf04        host                host 
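
Because the network definition lives in the etcd cluster store, the same multihost network is also visible from net2 (see the inspect output on net2 further below). The 10.0.0.0/24 range shown in the inspect output is assigned by the default IPAM driver; a specific range can also be given explicitly at creation time with --subnet, e.g. (not used in this test):

docker network create -d overlay --subnet=10.0.0.0/24 multihost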

[root@net1 vagrant]# docker  network inspect multihost
[
    {
        "Name": "multihost",
        "Id": "15bb57daf27731da102c8a5c5bf903e574daa33f5286e938009734a8cd5ce93c",
        "Scope": "global",
        "Driver": "overlay",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1/24"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]

[root@net2 vagrant]# docker network inspect multihost
[
    {
        "Name": "multihost",
        "Id": "15bb57daf27731da102c8a5c5bf903e574daa33f5286e938009734a8cd5ce93c",
        "Scope": "global",
        "Driver": "overlay",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1/24"
                }
            ]
        },
        "Containers": {
            "37162168dca4ad715d12f6bc78d1bf0678ff9128fe5d55178e39ed08e847f80a": {
                "Name": "tender_kirch",
                "EndpointID": "765451d1201d570c626470d16c92515de55f4ea1df2a58a03bef8e1767873897",
                "MacAddress": "02:42:0a:00:00:05",
                "IPv4Address": "10.0.0.5/24",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
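
For the ping test that follows, a container attached to the multihost network must already be running on net1 and holding 10.0.0.3. The docker info output above shows one running container on net1; the exact command was not captured, but it was presumably started the same way as the one on net2, e.g.:

docker run -it --rm=true --net=multihost centos /bin/bash    # on net1 (assumed)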


[root@net2 vagrant]# docker  run -it --rm=true --net=multihost centos /bin/bash
[root@37162168dca4 /]# ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.505 ms
64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.619 ms
64 bytes from 10.0.0.3: icmp_seq=3 ttl=64 time=0.632 ms
64 bytes from 10.0.0.3: icmp_seq=4 ttl=64 time=0.660 ms
64 bytes from 10.0.0.3: icmp_seq=5 ttl=64 time=0.663 ms

Examining the links inside the container:

[root@37162168dca4 /]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
11: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT 
    link/ether 02:42:0a:00:00:05 brd ff:ff:ff:ff:ff:ff
13: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT 
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
[root@37162168dca4 /]# ethtool -S eth1
NIC statistics:
     peer_ifindex: 14
[root@37162168dca4 /]# ethtool -S eth0
NIC statistics:
     peer_ifindex: 12
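
The peer_ifindex values reported by ethtool (12 for eth0, 14 for eth1) are the ifindex numbers of the host-side ends of the two veth pairs; they can be matched against the host's interface list with something like (a quick check, not in the original session):

ip -d link show | grep -E '^(12|14):'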

Bridges on the host:

[root@net2 vagrant]# brctl show
bridge name bridge id       STP enabled interfaces
docker0     8000.024297afd372   no      
docker_gwbridge     8000.0242117ceeda   no      veth2cef6db
ov-000100-15bb5     8000.96f96b0c7379   no      vetha6b50db
                            vx-000100-15bb5
[root@net2 vagrant]# ip -d link
12: vetha6b50db: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ov-000100-15bb5 state UP mode DEFAULT 
    link/ether be:c0:b4:39:e5:fc brd ff:ff:ff:ff:ff:ff promiscuity 1 
    veth addrgenmode eui64 
14: veth2cef6db: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP mode DEFAULT 
    link/ether ca:d6:b0:d9:e9:5c brd ff:ff:ff:ff:ff:ff promiscuity 1 
    veth addrgenmode eui64 
[root@net2 vagrant]# ip -d link show vx-000100-15bb5
10: vx-000100-15bb5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ov-000100-15bb5 state UNKNOWN mode DEFAULT 
    link/ether 96:f9:6b:0c:73:79 brd ff:ff:ff:ff:ff:ff promiscuity 1 
    vxlan id 256 srcport 0 0 dstport 4789 proxy l2miss l3miss ageing 300 addrgenmode eui64 
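
The vx- device is configured with proxy, l2miss and l3miss, which suggests that docker programs the VXLAN forwarding (fdb) and ARP entries itself, pointing at the peer host, instead of relying on multicast learning. These entries could be inspected on the host with, for example (not captured in this test):

bridge fdb show dev vx-000100-15bb5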

From what is observed inside the container, attaching to the overlay network involves two bridges. The ov-000100-15bb5 bridge has two devices attached: one end of a veth pair that connects the container (its eth0) to the bridge, and a vxlan device, whose vxlan id is 256 according to the output above.
In addition, there is the docker_gwbridge bridge, to which the container is also connected through a second veth pair (its eth1); the main purpose of this network is to make it convenient for containers to provide services to the outside world.
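
As a follow-up check (not captured above), the routing table inside the container should show the default route leaving via eth1 toward docker_gwbridge, while 10.0.0.0/24 stays on the overlay interface eth0:

ip route show      # inside the container; default route expected via eth1
ip addr show eth1  # address on the docker_gwbridge subnet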
