Docker Networking
1. Understanding docker0
Containers started without specifying a network are attached to docker0, and Docker assigns each one an available IP address by default.
Docker uses Linux bridging: on the host, docker0 is the bridge that Docker containers connect to.
All of Docker's network interfaces are virtual (veth pairs), and virtual interfaces forward traffic efficiently.
--link
Lets one container reach another by container name:
docker run -d -it --name centos03 --link centos01 centos
[root@bigdata ~]# docker exec -it centos03 ping centos01
PING centos01 (172.17.0.2) 56(84) bytes of data.
64 bytes from centos01 (172.17.0.2): icmp_seq=1 ttl=64 time=0.178 ms
64 bytes from centos01 (172.17.0.2): icmp_seq=2 ttl=64 time=0.088 ms
64 bytes from centos01 (172.17.0.2): icmp_seq=3 ttl=64 time=0.080 ms
^C
--- centos01 ping statistics ---
# the reverse direction, however, does not work
[root@bigdata ~]# docker exec -it centos01 ping centos03
Essence: --link simply adds an entry for centos01's IP address to centos03's /etc/hosts:
[root@b32d47daacf4 /]# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 centos01 ecc3cb7aa4cc
172.17.0.4 b32d47daacf4
[root@b32d47daacf4 /]# exit
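The mechanism behind --link is nothing more than that hosts-file entry. As a minimal sketch (no Docker involved), the same name-to-IP lookup can be simulated with awk against a throwaway hosts file:

```shell
# simulate resolving a container name from an /etc/hosts-style file;
# the file contents mirror the entry that --link injected above
hosts=$(mktemp)
printf '127.0.0.1 localhost\n172.17.0.2 centos01 ecc3cb7aa4cc\n' > "$hosts"
# print the IP of any line whose aliases include "centos01"
awk -v name=centos01 '{ for (i = 2; i <= NF; i++) if ($i == name) print $1 }' "$hosts"   # -> 172.17.0.2
rm -f "$hosts"
```

Note that --link is a legacy feature; user-defined networks (next section) are the recommended replacement.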
Custom networks
View all Docker networks with docker network ls.
Network modes:
bridge: bridged (Docker's default; user-created networks are also bridge mode)
host: share the host's network stack
none: no network configured
container: share another container's network namespace (rarely used, very limited)
Create a custom network
# custom network
# --driver bridge           network driver
# --subnet 192.168.0.0/16   subnet
# --gateway 192.168.0.1     gateway
# mynet                     name of the network to create
[root@bigdata ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
34bee5b668a9b16dd2bfa9c38b09c53878f36391cc72673cd59b1514ca919a3d
[root@bigdata ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
e2afe2f2b6ee bridge bridge local
f761228f33f2 host host local
34bee5b668a9 mynet bridge local
e4ae86616ee7 none null local
[root@bigdata ~]#
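A quick sanity check of the address space that /16 provides (pure shell arithmetic, no Docker needed):

```shell
# usable host addresses in a /16 subnet:
# 2^(32-16) total addresses, minus the network and broadcast addresses
prefix=16
hosts=$(( (1 << (32 - prefix)) - 2 ))
echo "$hosts"   # -> 65534
```

So mynet has room for tens of thousands of container IPs before the subnet runs out.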
Create two containers attached to the mynet network
[root@bigdata ~]# docker run -it -d -P --name centos-net01 --net mynet centos
[root@bigdata ~]# docker run -it -d -P --name centos-net02 --net mynet centos
Test connectivity (the two containers can reach each other by name).
Connecting networks
Create a container centos01 on the default docker0 network:
# IP: 172.17.0.2
[root@bigdata ~]# docker run -it -d -P --name centos01 centos
# test connectivity to centos-net01 (different networks; the name cannot even be resolved)
[root@bigdata ~]# docker exec -it centos01 ping centos-net01
ping: centos-net01: Name or service not known
[root@bigdata ~]#
Test: connect centos01 to the mynet network
[root@bigdata ~]# docker network connect mynet centos01
[root@bigdata ~]# docker exec -it centos01 ping centos-net01
PING centos-net01 (192.168.0.2) 56(84) bytes of data.
64 bytes from centos-net01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.196 ms
64 bytes from centos-net01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.080 ms
64 bytes from centos-net01.mynet (192.168.0.2): icmp_seq=3 ttl=64 time=0.122 ms
^C
--- centos-net01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 4ms
rtt min/avg/max/mdev = 0.080/0.132/0.196/0.049 ms
[root@bigdata ~]# docker exec -it centos01 ping centos-net02
PING centos-net02 (192.168.0.3) 56(84) bytes of data.
64 bytes from centos-net02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from centos-net02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.157 ms
^C
--- centos-net02 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 0.058/0.107/0.157/0.050 ms
[root@bigdata ~]#
Once connected, centos01 can reach the mynet containers; the container now holds two IPs, one on each network.
Hands-on: deploying a Redis cluster
Shell script
# create the network
[root@bigdata /]# docker network create redisnet --subnet 172.38.0.0/16
# generate six Redis configs via a script
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
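The heredoc loop can be dry-run locally without Docker by swapping /mydata for a temporary directory; this sketch (shortened config, throwaway temp path) shows how ${port} expands inside the heredoc:

```shell
# generate one sample node config into a temp dir and check the expansion
tmp=$(mktemp -d)
port=3
mkdir -p "$tmp/node-${port}/conf"
cat << EOF > "$tmp/node-${port}/conf/redis.conf"
port 6379
cluster-enabled yes
cluster-announce-ip 172.38.0.1${port}
EOF
grep cluster-announce-ip "$tmp/node-${port}/conf/redis.conf"   # -> cluster-announce-ip 172.38.0.13
rm -rf "$tmp"
```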
for port in $(seq 1 6); \
do \
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redisnet --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done
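The run loop gives each node a distinct host port by string interpolation. A plain-shell sketch of the resulting mapping plan (no Docker needed):

```shell
# host-to-container port plan produced by the run loop above
for port in $(seq 1 6); do
  echo "redis-${port}: host 637${port} -> container 6379, bus 1637${port} -> 16379"
done
# first line -> redis-1: host 6371 -> container 6379, bus 16371 -> 16379
```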
# create the cluster
[root@bigdata /]# docker exec -it redis-1 /bin/sh
/data # ls
appendonly.aof nodes.conf
/data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: aa2e634460065c7a8d69ea7f4cfac1a2e4c2a2fd 172.38.0.11:6379
slots:[0-5460] (5461 slots) master
M: d4db91e5cf27db7fd103a7c358c2447cd5a8831f 172.38.0.12:6379
slots:[5461-10922] (5462 slots) master
M: 54ffa9984cab9cff88074baeca402b7f0fb13c12 172.38.0.13:6379
slots:[10923-16383] (5461 slots) master
S: 5549f0fd5d9671a1a2ed3e5f5985cc4256600320 172.38.0.14:6379
replicates 54ffa9984cab9cff88074baeca402b7f0fb13c12
S: efec1d4e1540b42807e0e5b721eaa68ac59d179a 172.38.0.15:6379
replicates aa2e634460065c7a8d69ea7f4cfac1a2e4c2a2fd
S: be7c2d8b5af4234d6a8ced0020edd92c1a20d994 172.38.0.16:6379
replicates d4db91e5cf27db7fd103a7c358c2447cd5a8831f
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: aa2e634460065c7a8d69ea7f4cfac1a2e4c2a2fd 172.38.0.11:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 54ffa9984cab9cff88074baeca402b7f0fb13c12 172.38.0.13:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: d4db91e5cf27db7fd103a7c358c2447cd5a8831f 172.38.0.12:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: efec1d4e1540b42807e0e5b721eaa68ac59d179a 172.38.0.15:6379
slots: (0 slots) slave
replicates aa2e634460065c7a8d69ea7f4cfac1a2e4c2a2fd
S: 5549f0fd5d9671a1a2ed3e5f5985cc4256600320 172.38.0.14:6379
slots: (0 slots) slave
replicates 54ffa9984cab9cff88074baeca402b7f0fb13c12
S: be7c2d8b5af4234d6a8ced0020edd92c1a20d994 172.38.0.16:6379
slots: (0 slots) slave
replicates d4db91e5cf27db7fd103a7c358c2447cd5a8831f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
/data #
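Redis Cluster always distributes exactly 16384 hash slots across the masters. The three ranges printed above can be checked with shell arithmetic:

```shell
# sizes of the three slot ranges assigned to the masters
a=$(( 5460 - 0 + 1 ))        # Master[0]: slots 0-5460     -> 5461 slots
b=$(( 10922 - 5461 + 1 ))    # Master[1]: slots 5461-10922 -> 5462 slots
c=$(( 16383 - 10923 + 1 ))   # Master[2]: slots 10923-16383 -> 5461 slots
echo $(( a + b + c ))        # -> 16384
```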