A deep dive into how the Docker native overlay network works

Link: http://techblog.d2-si.eu/2017/04/25/deep-dive-into-docker-overlay-networks-part-1.html

Introduction
At D2SI, we have been using Docker since its very beginning and have been helping many projects go into production. We believe that going into production requires a strong understanding of the technology to be able to debug complex issues, analyze unexpected behaviors or troubleshoot performance degradations. That is why we have tried to understand as best as we can the technical components used by Docker.

This blog post is focused on the Docker network overlays. The Docker network overlay driver relies on several technologies: network namespaces, VXLAN, Netlink and a distributed key-value store. This article will present each of these mechanisms one by one along with their userland tools and show hands-on how they interact together when setting up an overlay to connect containers.

This post is derived from the presentation I gave at DockerCon 2017 in Austin. The slides are available here.

All the code used in this post is available on GitHub.

Docker Overlay Networks
First, we are going to build an overlay network between Docker hosts. In our example, we will do this with three hosts: two running Docker and one running Consul. Docker will use Consul to store the overlay networks metadata that needs to be shared by all the Docker engines: container IPs, MAC addresses and location. Before Docker 1.12, Docker required an external Key-Value store (Etcd or Consul) to create overlay networks and Docker Swarms (now often referred to as “classic Swarm”). Starting with Docker 1.12, Docker can now rely on an internal Key-Value store to create Swarms and overlay networks (“Swarm mode” or “new swarm”). We chose to use Consul because it allows us to look into the keys stored by Docker and understand better the role of the Key-Value store. We are running Consul on a single node but in a real environment we would need a cluster of at least three nodes for resiliency.

In our example, the servers will have the following IP addresses:

consul: 10.0.0.5
docker0: 10.0.0.10
docker1: 10.0.0.11
Servers setup

Starting the Consul and Docker services
The first thing we need to do is to start a Consul server. To do this, we simply download Consul from here. We can then start a very minimal Consul service with the following command:

$ consul agent -server -dev -ui -client 0.0.0.0
We use the following flags:

server: start the consul agent in server mode
dev: create a standalone Consul server without any persistency
ui: start a small web interface allowing us to easily look at the keys stored by Docker and their values
client 0.0.0.0: bind all network interfaces for client access (default is 127.0.0.1 only)
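Before configuring the Docker engines, it is worth checking that the agent is up and reachable from the Docker hosts. A minimal sanity check, assuming curl is installed on the Docker nodes, could look like this:

consul:~$ consul members                                    # the server should list itself as alive
docker0:~$ curl -s http://10.0.0.5:8500/v1/status/leader    # should return the leader address, e.g. "10.0.0.5:8300"

If the curl call returns an empty string or times out, the Docker daemons will not be able to use Consul as their Key-Value store.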
To configure the Docker engines to use Consul as a Key-Value store, we start the daemons with the cluster-store option:

$ dockerd -H fd:// --cluster-store=consul://consul:8500 --cluster-advertise=eth0:2376
The cluster-advertise option specifies which IP to advertise in the cluster for a Docker host (despite its name, this option is mandatory). This command assumes that consul resolves to 10.0.0.5, as it does in our setup.
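The same configuration can be set in the daemon configuration file instead of on the command line. A possible sketch, assuming a Docker version that still supports the legacy cluster-store options and reads /etc/docker/daemon.json:

docker0:~$ cat /etc/docker/daemon.json
{
  "cluster-store": "consul://consul:8500",
  "cluster-advertise": "eth0:2376"
}
docker0:~$ sudo systemctl restart docker    # restart the daemon so the options are taken into account

Note that the daemon refuses to start if the same option is given both on the command line and in daemon.json, so use one or the other.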

If we look at the Consul UI, we can see that Docker has created some keys, but the network key http://consul:8500/v1/kv/docker/network/v1.0/network/ is still empty.

Consul content
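The same content can be browsed through the Consul HTTP API, which is sometimes handier than the UI. For example, to list the keys created by Docker (assuming, as before, that consul resolves to the Consul server):

docker0:~$ curl -s 'http://consul:8500/v1/kv/docker/?keys'                        # all keys under the docker/ prefix
docker0:~$ curl -s 'http://consul:8500/v1/kv/docker/network/v1.0/network/?keys'   # nothing yet: no overlay has been created

Once we create an overlay network below, new keys will show up under this network/ prefix.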

You can easily create the same environment in AWS using the Terraform setup in the GitHub repository. All the default configuration (in particular the AWS region to use) is in variables.tf. You will need to give a value to the key_pair variable, either on the command line (terraform apply -var key_pair=demo) or by modifying variables.tf. The three instances are configured through userdata: Consul and Docker are installed and started with the proper options, and an entry is added to /etc/hosts so that consul resolves to the IP address of the Consul server. When connecting to the consul or docker servers, use the public IP addresses (given in the Terraform outputs) and the "admin" user (the Terraform setup uses a Debian AMI).
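If you go the Terraform route, the workflow is the usual one; a typical sequence (the key pair name below is just an example) looks like this:

$ terraform init
$ terraform apply -var key_pair=demo
$ terraform output                       # public IP addresses of the consul and docker instances
$ ssh admin@<docker0-public-ip>          # replace with the address printed by terraform output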

Creating an Overlay
We can now create an overlay network between our two Docker nodes:

docker0:~$ docker network create --driver overlay --subnet 192.168.0.0/24 demonet
13fb802253b6f0a44e17e2b65505490e0c80527e1d78c4f5c74375aff4bf882a
We are using the overlay driver, and are choosing 192.168.0.0/24 as a subnet for the overlay (this parameter is optional but we want to have addresses very different from the ones on the hosts to simplify the analysis).

Let’s check that we configured our overlay correctly by listing networks on both hosts.

docker0:~$ docker network ls
NETWORK ID      NAME      DRIVER    SCOPE
eb096cb816c0    bridge    bridge    local
13fb802253b6    demonet   overlay   global
d538d58b17e7    host      host      local
f2ee470bb968    none      null      local

docker1:~$ docker network ls
NETWORK ID      NAME      DRIVER    SCOPE
eb7a05eba815    bridge    bridge    local
13fb802253b6    demonet   overlay   global
4346f6c422b2    host      host      local
5e8ac997ecfa    none      null      local
This looks good: both Docker nodes know the demonet network and it has the same id (13fb802253b6) on both hosts.
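We can also compare the detailed view of the network on both hosts; the full id, the driver and the subnet should be identical. One way to do this, assuming a Docker client recent enough to support --format on network inspect:

docker0:~$ docker network inspect demonet --format '{{.Id}} {{.Driver}} {{json .IPAM.Config}}'
docker1:~$ docker network inspect demonet --format '{{.Id}} {{.Driver}} {{json .IPAM.Config}}'

Both commands should print the same full network id, the overlay driver and the 192.168.0.0/24 subnet we asked for.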

Let’s now check that our overlay works by creating a container on docker0 and trying to ping it from docker1. On docker0, we create a C0 container, attach it to our overlay, explicitly give it an IP address (192.168.0.100) and make it sleep. On docker1 we create a container attached to the overlay network and running a ping command targeting C0.

docker0:~$ docker run -d --ip 192.168.0.100 --net demonet --name C0 debian sleep 3600

docker1:~$ docker run -it --rm --net demonet debian bash
root@e37bf5e35f83:/# ping 192.168.0.100
PING 192.168.0.100 (192.168.0.100): 56 data bytes
64 bytes from 192.168.0.100: icmp_seq=0 ttl=64 time=0.618 ms
64 bytes from 192.168.0.100: icmp_seq=1 ttl=64 time=0.483 ms
We can see that the connectivity between the two containers is working. However, if we try to ping C0 from the docker1 host itself, it does not work, because the host knows nothing about 192.168.0.0/24, which is isolated inside the overlay.

docker1:~$ ping 192.168.0.100
PING 192.168.0.100 (192.168.0.100) 56(84) bytes of data.
^C
--- 192.168.0.100 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3024ms
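This is expected: the host's routing table has no entry for the overlay subnet, so the ICMP packets follow the default route and never reach the overlay. A quick way to confirm this from the host (just a sanity check, nothing Docker-specific):

docker1:~$ ip route | grep 192.168.0     # prints nothing: no route towards the overlay subnet
docker1:~$ ip route get 192.168.0.100    # shows that the packet would leave through the default route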
Here is what we have built so far:

First overlay

Under the hood
Now that we have built an overlay let’s try and see what makes it work.

Network configuration of the containers
What is the network configuration of C0 on docker0? We can exec into the container to find out:

docker0:~$ docker exec C0 ip addr show
1: lo:

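The same information can also be gathered from the host, without exec'ing into the container, by entering its network namespace directly. A possible sketch, assuming util-linux's nsenter is available on the host:

docker0:~$ sandbox=$(docker inspect --format '{{.NetworkSettings.SandboxKey}}' C0)
docker0:~$ sudo nsenter --net=$sandbox ip addr show      # same interfaces as seen from inside C0
docker0:~$ sudo nsenter --net=$sandbox ip route show     # routes of the container's network namespace

The SandboxKey points to the namespace file Docker created for the container under /var/run/docker/netns.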