Kubernetes (12) -- Network Model and Network Policies

I. The Kubernetes Network Model and CNI Plugins

Kubernetes defines a network model in which all containers can communicate over a single flat network plane, no matter which node in the cluster they run on; in other words, they all live in the same IP network. Note that in a Kubernetes cluster IP addresses are allocated per Pod object rather than per container, and all containers in the same Pod share one network namespace.
1. The Kubernetes network model

Container-to-container communication: containers inside the same Pod communicate with each other over the Pod's loopback interface (lo).

Pod-to-Pod communication: Pods communicate directly using their Pod IP addresses.

Pod-to-Service communication: a Pod reaches a Service via the Service's cluster IP, which does not belong to the same network as the Pod IPs; the translation is done by IPVS or iptables rules maintained by kube-proxy (see the example below).

Service-to-external-client communication: exposed via Ingress, NodePort, or LoadBalancer.
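For example, that translation can be observed directly on a node by listing the rules kube-proxy maintains; which command applies depends on the proxy mode kube-proxy is configured with:

[root@master ~]# iptables -t nat -L KUBE-SERVICES -n | head   # iptables proxy mode: per-Service NAT rules
[root@master ~]# ipvsadm -Ln                                  # IPVS proxy mode: Service IPs and their backend Pod IPs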

Kubernetes does not implement the Pod network itself inside the cluster; it relies on third-party network plugins through CNI (Container Network Interface).

flannel, Calico, and Canal are currently among the most popular third-party network plugins.

These plugins usually implement the Pod network in one of the following ways:

Virtual bridge
Multiplexing (MACVLAN)
Hardware switching (SR-IOV)

Whichever of these approaches is used, setting it up for containers by hand would take many steps, so Kubernetes supports CNI plugins to orchestrate the network and automate Pod and cluster network management. Every time a Pod is created or deleted, the kubelet calls the default CNI plugin to create a virtual interface, attach it to the underlying network, configure the Pod's IP address and routes, and map them into the Pod object's network namespace.

When configuring a Pod's network, the kubelet looks for CNI JSON configuration files in the default directory /etc/cni/net.d/, then uses the type field to find the corresponding plugin binary under /opt/cni/bin, such as "portmap" below. The CNI plugin then calls an IPAM (IP Address Management) plugin to configure an IP address for each interface:

[root@master dashboard]# cat /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

CNI mainly defines a specification for the container network model: it connects the container management system with network plugins, and the two communicate through JSON files like the one above to set up container networking. The core of CNI is the sequence followed when a container is created: first create the network namespace (netns), then call the CNI plugin to configure the network inside that netns, and only then start the process inside the container.
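To make that sequence concrete, here is a rough, hand-driven sketch of what a container runtime does through CNI. It uses the reference bridge and host-local plugins with a made-up standalone configuration, purely for illustration (it assumes the standard CNI reference plugins are installed under /opt/cni/bin); it is not the flannel configuration the cluster actually uses:

## 1. the runtime creates the network namespace first
ip netns add demo-ns
## 2. it then invokes the plugin binary, passing context via CNI_* environment variables and the network config on stdin
cat > /tmp/demo-net.conf <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni-demo",
  "isGateway": true,
  "ipam": { "type": "host-local", "subnet": "10.88.0.0/24" }
}
EOF
CNI_COMMAND=ADD CNI_CONTAINERID=demo001 CNI_NETNS=/var/run/netns/demo-ns \
CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge < /tmp/demo-net.conf
## 3. only after the netns has its interface and IP does the runtime start the container's process inside it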

Commonly used CNI network plugins include the following:

Flannel: provides an overlay network for Kubernetes. It encapsulates IP packets in tunnels (UDP tunneling via TUN/TAP in its original backend, or the kernel VXLAN module) and relies on etcd to keep track of subnet allocation. Drawback: it cannot enforce network policies for access control.
Calico: a BGP-based layer-3 network plugin that also supports network policies for access control. It runs a virtual router on every host, forwards packets with the Linux kernel, and implements firewalling with iptables. In effect, Calico turns every host into a router and connects the per-host networks together to provide cross-host communication.
Canal: a combined distribution of Flannel and Calico; it provides the CNI network plugin together with network policy support.
Others include Weave Net, Contiv, OpenContrail, Romana, NSX-T, kube-router, and so on. Flannel and Calico are currently the most popular choices.

2. The Flannel network plugin
By default the Docker engine on every node uses the same subnet on docker0, so containers on different nodes may be assigned the same address, which leads to address conflicts in cross-node communication. And if docker0 on different nodes used different subnets instead, packets still could not be delivered reliably, because the other nodes would have no routes for those subnets.

To solve this, Flannel reserves a dedicated network, e.g. 10.244.0.0/16, automatically assigns each node's container engine its own subnet from it, e.g. 10.244.1.0/24 and 10.244.2.0/24, and persists the allocation in etcd.

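Each node's lease can usually be inspected in /run/flannel/subnet.env, the file flanneld writes locally; the values below are illustrative and correspond to the 10.244.0.0/16 network used throughout this article:

[root@node1 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true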
Flannel handles the actual traffic with different backend network models. The available backends include:

VxLAN: encapsulates packets using the VXLAN module in the kernel. This is the backend flannel recommends; its packet format is shown in the figure below:
[Figure: VXLAN packet format]

host-gw: Host Gateway. Forwards packets by installing routes for the destination container networks directly on each node. It requires all nodes to be in the same layer-2 network and is suited to scenarios with high forwarding-performance requirements.
UDP: encapsulates packets in plain UDP datagrams for tunneling. Its performance is much lower than the other two backends, and it should only be used in environments where neither of them is available.

flannel configuration parameters

To track subnet allocation, flannel uses etcd to store the mapping between virtual IP networks and host IPs; the flanneld daemon running on each node watches etcd and sets up packet routing accordingly. By default flannel's configuration lives in etcd under the key /coreos.com/network/config and can be set or changed with an etcd client. The value of config is a JSON dictionary that may contain the following keys:
1) Network: the CIDR-format IPv4 network flannel uses globally, as a string. This key is required; all the others are optional.
2) SubnetLen: the prefix length used to split the Network into per-node subnets. When the Network's own prefix is shorter than 24 bits (e.g. /16), subnets are cut with a /24 mask by default.
3) SubnetMin: the first subnet that may be allocated to a node; defaults to the first subnet after splitting. String format.
4) SubnetMax: the last subnet that may be allocated to a node; defaults to the last subnet after splitting. String format.
5) Backend: the backend type flannel should use plus its backend-specific options, as a dictionary; the VxLAN, host-gw, and UDP backends each have their own parameters.
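Putting these keys together, a complete config value might look like the following (a sketch; on a kubeadm cluster the same JSON is kept in the kube-flannel-cfg ConfigMap as net-conf.json rather than written to etcd by hand, as shown later in this article):

{
  "Network": "10.244.0.0/16",
  "SubnetLen": 24,
  "SubnetMin": "10.244.1.0",
  "SubnetMax": "10.244.200.0",
  "Backend": {
    "Type": "vxlan"
  }
}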

flannel uses the VxLAN backend by default, but VxLAN direct routing and host-gw deliver better performance. The following sections look at each of these network models.

2.1 The VxLAN backend and direct routing
VxLAN (Virtual eXtensible Local Area Network) uses MAC-in-UDP encapsulation. Concretely:

1. The data frame from the virtual network is prefixed with a VXLAN header and encapsulated in a UDP datagram on the physical network.
2. The UDP datagram is transported across the physical network in the ordinary way.
3. At the destination host, the physical-network headers and the VXLAN header are stripped off and the inner frame is delivered to the target endpoint.
Communication between Pods on different nodes follows exactly this process, and neither endpoint is aware of the physical network at any point. The topology is shown in the figure below:
[Figure: cross-node Pod communication over the flannel VXLAN overlay]

flannel can be deployed directly with the YAML file published by the project:

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
## check flannel-related resources
[root@master dashboard]#  docker image ls |grep flannel
quay.io/coreos/flannel                                            v0.12.0-amd64       4e9f801d2217        10 months ago       52.8MB
[root@master dashboard]# kubectl get daemonset -n kube-system
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-flannel-ds-amd64     3         3         3       3            3           <none>                   256d
kube-flannel-ds-arm       0         0         0       0            0           <none>                   256d
kube-flannel-ds-arm64     0         0         0       0            0           <none>                   256d
kube-flannel-ds-ppc64le   0         0         0       0            0           <none>                   256d
kube-flannel-ds-s390x     0         0         0       0            0           <none>                   256d
kube-proxy                3         3         3       3            3           kubernetes.io/os=linux   256d

Once flannel is running, an additional network interface appears on each node host:

# the flannel.1 interface on the master node; its subnet is 10.244.0.0
[root@master kube_manifest]# ifconfig
......
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::f8b8:bff:fee4:4ebe  prefixlen 64  scopeid 0x20<link>
        ether fa:b8:0b:e4:4e:be  txqueuelen 0  (Ethernet)
        RX packets 24320  bytes 7363502 (7.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 29731  bytes 11204091 (10.6 MiB)
        TX errors 0  dropped 31 overruns 0  carrier 0  collisions 0
......
# the flannel.1 interface on node1; its subnet is 10.244.1.0
[root@node1 ~]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::4f0:55ff:fedf:ff77  prefixlen 64  scopeid 0x20<link>
        ether 06:f0:55:df:ff:77  txqueuelen 1000  (Ethernet)
        RX packets 383408  bytes 26505509 (25.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 405269  bytes 146719628 (139.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::f803:67ff:fecf:d005  prefixlen 64  scopeid 0x20<link>
        ether fa:03:67:cf:d0:05  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

.....
# the flannel.1 interface on node2; its subnet is 10.244.2.0
[root@node2 ~]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.2.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::8a4:d3ff:fedf:6222  prefixlen 64  scopeid 0x20<link>
        ether 0a:a4:d3:df:62:22  txqueuelen 1000  (Ethernet)
        RX packets 1172649  bytes 115975893 (110.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1261882  bytes 376854639 (359.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.2.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::f8d3:d4ff:fe99:fc09  prefixlen 64  scopeid 0x20<link>
        ether fa:d3:d4:99:fc:09  txqueuelen 0  (Ethernet)
        RX packets 28429  bytes 6632331 (6.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22146  bytes 7280132 (6.9 MiB)
        TX errors 0  dropped 26 overruns 0  carrier 0  collisions 0
......

From the output above we can see:

flannel creates a flannel.1 interface, which is dedicated to encapsulating the tunnel protocol; the Pod network handed to the cluster defaults to 10.244.0.0/16.
flannel assigns the 10.244.0.0/24 Pod subnet to the master node, 10.244.1.0/24 to node1, and 10.244.2.0/24 to node2; additional nodes would continue the pattern.
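The flannel.1 interface is a kernel VXLAN device (a VTEP); its VNI, local endpoint address, and UDP port can be checked with ip -d link (a quick sanity check; the exact output depends on the environment):

[root@master kube_manifest]# ip -d link show flannel.1

To observe cross-node Pod traffic, deploy a small test application: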

[root@master kube_manifest]# cat deploy-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec: 
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata: 
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
[root@master kube_manifest]# kubectl apply -f  deploy-demo.yaml 
deployment.apps/myapp-deploy created
[root@master kube_manifest]# kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
myapp-deploy-559ff5c66-2wftq   1/1     Running   0          39s   10.244.2.58   node2   <none>           <none>
myapp-deploy-559ff5c66-jxrl6   1/1     Running   0          39s   10.244.1.89   node1   <none>           <none>

As shown above, the two Pods run on the two worker nodes. Looking at the network interfaces on those nodes, each now also has a virtual interface named cni0: a virtual bridge created by flannel, used for Pod communication local to the node. Note: the cni0 bridge only handles node-local traffic.

## node1
[root@node1 ~]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::4f0:55ff:fedf:ff77  prefixlen 64  scopeid 0x20<link>
        ether 06:f0:55:df:ff:77  txqueuelen 1000  (Ethernet)
        RX packets 383408  bytes 26505509 (25.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 405269  bytes 146719628 (139.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

## node2
[root@node2 ~]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.2.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::8a4:d3ff:fedf:6222  prefixlen 64  scopeid 0x20<link>
        ether 0a:a4:d3:df:62:22  txqueuelen 1000  (Ethernet)
        RX packets 1172649  bytes 115975893 (110.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1261882  bytes 376854639 (359.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel creates a veth pair for every Pod, placing one end inside the container as its interface and attaching the other to the cni0 bridge. The bridge can be inspected with brctl:

## node1
[root@node1 ~]# brctl show cni0
bridge name	bridge id		STP enabled	interfaces
cni0		8000.06f055dfff77	no		veth0c6a888d
							veth5252acd7
							veth68e4d53e
							vetha9619287
## node2
[root@node2 ~]# brctl show cni0
bridge name	bridge id		STP enabled	interfaces
cni0		8000.0aa4d3df6222	no		veth246c35cf
							veth5686adb2
							veth8674777b
							veth940cd2a0
							veth9aa7e25f
# ping a Pod IP from its host
## node1
[root@node1 ~]# ping 10.244.1.89
PING 10.244.1.89 (10.244.1.89) 56(84) bytes of data.
64 bytes from 10.244.1.89: icmp_seq=1 ttl=64 time=0.128 ms
64 bytes from 10.244.1.89: icmp_seq=2 ttl=64 time=0.100 ms
64 bytes from 10.244.1.89: icmp_seq=3 ttl=64 time=0.107 ms
^C
--- 10.244.1.89 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
## node2
[root@node2 ~]# ping 10.244.2.58
PING 10.244.2.58 (10.244.2.58) 56(84) bytes of data.
64 bytes from 10.244.2.58: icmp_seq=1 ttl=64 time=0.109 ms
64 bytes from 10.244.2.58: icmp_seq=2 ttl=64 time=0.086 ms
64 bytes from 10.244.2.58: icmp_seq=3 ttl=64 time=0.094 ms
64 bytes from 10.244.2.58: icmp_seq=4 ttl=64 time=0.050 ms
64 bytes from 10.244.2.58: icmp_seq=5 ttl=64 time=0.101 ms
^C
--- 10.244.2.58 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4001ms
rtt min/avg/max/mdev = 0.050/0.088/0.109/0.020 ms

With the flannel VxLAN network in place, Pods on different hosts can also communicate normally, for example the Pod on node1 reaching the Pod on node2:

[root@master kube_manifest]#  kubectl exec -it myapp-deploy-559ff5c66-jxrl6 -- /bin/sh 
/ # ping 10.244.2.58
PING 10.244.2.58 (10.244.2.58): 56 data bytes
64 bytes from 10.244.2.58: seq=0 ttl=62 time=0.987 ms
64 bytes from 10.244.2.58: seq=1 ttl=62 time=0.335 ms
64 bytes from 10.244.2.58: seq=2 ttl=62 time=0.816 ms
64 bytes from 10.244.2.58: seq=3 ttl=62 time=1.588 ms
^C
--- 10.244.2.58 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.335/0.931/1.588 ms

Cross-host container communication works. Next, let's look at how it is implemented; check the routing table on the master:

[root@master kube_manifest]# ip route
default via 10.10.20.254 dev ens37 proto static metric 100 
10.10.20.0/24 dev ens37 proto kernel scope link src 10.10.20.207 metric 100 
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 
192.168.147.0/24 dev ens33 proto kernel scope link src 192.168.147.132 metric 100 

Packets destined for 10.244.1.0/24 and 10.244.2.0/24 are handed to the local flannel.1 interface, i.e. they enter the layer-2 tunnel and get encapsulated (VXLAN header -> UDP header -> IP header -> Ethernet header); when they reach the target node, the flannel.1 device there strips the encapsulation again. Let's confirm this with tcpdump:

# from inside the container on node1, ping the Pod on node2 and capture the traffic
[root@master kube_manifest]#  kubectl exec -it myapp-deploy-559ff5c66-jxrl6 -- /bin/sh 
/ # ping 10.244.2.58
PING 10.244.2.58 (10.244.2.58): 56 data bytes
64 bytes from 10.244.2.58: seq=0 ttl=62 time=3.107 ms
64 bytes from 10.244.2.58: seq=1 ttl=62 time=0.849 ms
64 bytes from 10.244.2.58: seq=2 ttl=62 time=0.487 ms
64 bytes from 10.244.2.58: seq=3 ttl=62 time=0.543 ms
64 bytes from 10.244.2.58: seq=4 ttl=62 time=0.403 ms
64 bytes from 10.244.2.58: seq=5 ttl=62 time=0.874 ms
64 bytes from 10.244.2.58: seq=6 ttl=62 time=0.570 ms
64 bytes from 10.244.2.58: seq=7 ttl=62 time=0.935 ms
64 bytes from 10.244.2.58: seq=8 ttl=62 time=0.928 ms
64 bytes from 10.244.2.58: seq=9 ttl=62 time=0.902 ms
64 bytes from 10.244.2.58: seq=10 ttl=62 time=0.835 ms
## capture on node1
[root@node1 ~]# tcpdump -i flannel.1 -nn host 10.244.2.58
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
12:27:09.800730 IP 10.244.1.89 > 10.244.2.58: ICMP echo request, id 5888, seq 47, length 64
12:27:09.801337 IP 10.244.2.58 > 10.244.1.89: ICMP echo reply, id 5888, seq 47, length 64
12:27:10.801066 IP 10.244.1.89 > 10.244.2.58: ICMP echo request, id 5888, seq 48, length 64
12:27:10.801540 IP 10.244.2.58 > 10.244.1.89: ICMP echo reply, id 5888, seq 48, length 64
12:27:11.801671 IP 10.244.1.89 > 10.244.2.58: ICMP echo request, id 5888, seq 49, length 64
12:27:11.802496 IP 10.244.2.58 > 10.244.1.89: ICMP echo reply, id 5888, seq 49, length 64
12:27:12.802093 IP 10.244.1.89 > 10.244.2.58: ICMP echo request, id 5888, seq 50, length 64
12:27:12.802464 IP 10.244.2.58 > 10.244.1.89: ICMP echo reply, id 5888, seq 50, length 64
12:27:13.802675 IP 10.244.1.89 > 10.244.2.58: ICMP echo request, id 5888, seq 51, length 64
12:27:13.803006 IP 10.244.2.58 > 10.244.1.89: ICMP echo reply, id 5888, seq 51, length 64
12:27:14.803072 IP 10.244.1.89 > 10.244.2.58: ICMP echo request, id 5888, seq 52, length 64
12:27:14.803498 IP 10.244.2.58 > 10.244.1.89: ICMP echo reply, id 5888, seq 52, length 64
12:27:15.803621 IP 10.244.1.89 > 10.244.2.58: ICMP echo request, id 5888, seq 53, length 64
12:27:15.804075 IP 10.244.2.58 > 10.244.1.89: ICMP echo reply, id 5888, seq 53, length 64
12:27:16.804157 IP 10.244.1.89 > 10.244.2.58: ICMP echo request, id 5888, seq 54, length 64
12:27:16.804536 IP 10.244.2.58 > 10.244.1.89: ICMP echo reply, id 5888, seq 54, length 64
## ping the Pod on node2 from the node1 host, and capture
[root@node1 ~]# ping 10.244.2.58
PING 10.244.2.58 (10.244.2.58) 56(84) bytes of data.
64 bytes from 10.244.2.58: icmp_seq=1 ttl=63 time=0.820 ms
64 bytes from 10.244.2.58: icmp_seq=2 ttl=63 time=0.810 ms
64 bytes from 10.244.2.58: icmp_seq=3 ttl=63 time=0.755 ms
64 bytes from 10.244.2.58: icmp_seq=4 ttl=63 time=0.746 ms
64 bytes from 10.244.2.58: icmp_seq=5 ttl=63 time=10.1 ms
64 bytes from 10.244.2.58: icmp_seq=6 ttl=63 time=0.970 ms
64 bytes from 10.244.2.58: icmp_seq=7 ttl=63 time=0.787 ms
64 bytes from 10.244.2.58: icmp_seq=8 ttl=63 time=0.797 ms
64 bytes from 10.244.2.58: icmp_seq=9 ttl=63 time=0.968 ms
64 bytes from 10.244.2.58: icmp_seq=10 ttl=63 time=0.787 ms
64 bytes from 10.244.2.58: icmp_seq=11 ttl=63 time=0.790 ms
64 bytes from 10.244.2.58: icmp_seq=12 ttl=63 time=0.567 ms
## capture on node2
[root@node2 ~]# tcpdump -i flannel.1 -nn host 10.244.2.58
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
12:31:49.658931 IP 10.244.1.0 > 10.244.2.58: ICMP echo request, id 41296, seq 6, length 64
12:31:49.659067 IP 10.244.2.58 > 10.244.1.0: ICMP echo reply, id 41296, seq 6, length 64
12:31:50.660284 IP 10.244.1.0 > 10.244.2.58: ICMP echo request, id 41296, seq 7, length 64
12:31:50.660396 IP 10.244.2.58 > 10.244.1.0: ICMP echo reply, id 41296, seq 7, length 64
12:31:51.660855 IP 10.244.1.0 > 10.244.2.58: ICMP echo request, id 41296, seq 8, length 64
12:31:51.660983 IP 10.244.2.58 > 10.244.1.0: ICMP echo reply, id 41296, seq 8, length 64
12:31:52.662022 IP 10.244.1.0 > 10.244.2.58: ICMP echo request, id 41296, seq 9, length 64
12:31:52.662106 IP 10.244.2.58 > 10.244.1.0: ICMP echo reply, id 41296, seq 9, length 64
12:31:53.663342 IP 10.244.1.0 > 10.244.2.58: ICMP echo request, id 41296, seq 10, length 64
12:31:53.663462 IP 10.244.2.58 > 10.244.1.0: ICMP echo reply, id 41296, seq 10, length 64
12:31:54.663851 IP 10.244.1.0 > 10.244.2.58: ICMP echo request, id 41296, seq 11, length 64
12:31:54.663975 IP 10.244.2.58 > 10.244.1.0: ICMP echo reply, id 41296, seq 11, length 64
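The captures above are taken on flannel.1, so they show the inner, already-decapsulated ICMP packets. To see the outer VXLAN envelope instead, capture on the physical interface and filter on flannel's VXLAN UDP port, which is 8472 by default on Linux (a sketch; adjust the interface name and port to your environment):

[root@node2 ~]# tcpdump -i ens37 -nn -e udp port 8472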

As the captures show, the packets all pass through the flannel.1 interface and are forwarded through the layer-2 tunnel. VXLAN is a network virtualization technology supported natively by the Linux kernel as a module; encapsulation and decapsulation happen in kernel space and build an overlay network that is, in effect, a virtual layer-2 network made up of the flannel.1 devices on all the hosts.
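In the VxLAN backend, flanneld also tells the kernel how to reach the peers in that virtual layer-2 network: it maintains an ARP entry for each remote flannel.1 address and a forwarding-database (FDB) entry mapping that MAC to the remote node's host IP (the VTEP), updating them as nodes join and leave. Both tables can be listed with iproute2 (a quick check; entries vary with cluster membership):

[root@node1 ~]# ip neigh show dev flannel.1      # remote flannel.1 IP -> MAC
[root@node1 ~]# bridge fdb show dev flannel.1    # remote MAC -> remote host (VTEP) IP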

Because of the extra encapsulation and decapsulation, VXLAN performs noticeably worse, which is why flannel also offers the host-gw mode: the host itself acts as the gateway, so apart from the local routing table there is no extra overhead and performance is close to Calico's. Since there is no overlay, forwarding relies entirely on routes, which makes the routing table grow with the cluster: each node owns one subnet and therefore contributes one route entry.

Although host-gw performs much better than VXLAN, it has a drawback: all physical nodes must be in the same layer-2 network, i.e. in the same subnet. That puts a large number of hosts into one broadcast domain, where a single broadcast can cause noticeable interference; and in private-cloud scenarios it is common for hosts not to share a subnet, in which case host-gw cannot be used at all.

VXLAN has one more capability: it can also behave like host-gw. When two nodes are in the same subnet they communicate host-gw style via direct routes; when they are not, i.e. there is a router between the node hosting the source Pod and the node hosting the destination Pod, the overlay (VXLAN encapsulation) is used instead. This combination of host-gw and VXLAN is VXLAN's Direct routing mode.

Configuring flannel VxLAN's Direct routing mode

Edit kube-flannel.yml and change flannel's ConfigMap as follows:

[root@master ~]# vim kube-flannel.yml 
......
 net-conf.json: |
    {
      "Network": "10.244.0.0/16",   # the default Pod network
      "Backend": {
        "Type": "vxlan",
        "Directrouting": true       # add this field to enable direct routing
      }
    }
......

[root@master kube_manifest]# kubectl apply -f flannel/kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel configured
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds created

# check the routing table
[root@master kube_manifest]# ip route
default via 10.10.20.254 dev ens37 proto static metric 100 
10.10.20.0/24 dev ens37 proto kernel scope link src 10.10.20.207 metric 100 
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 
10.244.1.0/24 via 10.10.20.210 dev ens37 
10.244.2.0/24 via 10.10.20.202 dev ens37 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 
192.168.147.0/24 dev ens33 proto kernel scope link src 192.168.147.132 metric 10

From the output above, packets for 10.244.1.0/24 and 10.244.2.0/24 now go straight out through the ens37 interface: that is Directrouting. If two nodes are in different subnets, flannel automatically falls back to VxLAN mode for the traffic between them.

At this point, running "iptables -nL" on every cluster node shows that flannel has generated the following two forwarding rules in the FORWARD chain of the iptables filter table. They explicitly allow all traffic to and from the 10.244.0.0/16 network, ensuring that any packet received or sent on a physical interface whose source or destination address falls in 10.244.0.0/16 can pass. This is a prerequisite for Direct Routing to work:

[root@master kube_manifest]# iptables -nL
......
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
......
ACCEPT     all  --  10.244.0.0/16        0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            10.244.0.0/16       
......

Repeat the ping tests between the previously created Pods and the hosts: no packets show up on the flannel.1 interface any more, while the ICMP packets can now be captured on ens37.

[root@node1 ~]# tcpdump -i flannel.1 -nn host 10.244.2.58
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel

[root@node2 ~]# tcpdump -i ens37 -nn host 10.244.2.58
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens37, link-type EN10MB (Ethernet), capture size 262144 bytes
13:02:26.094695 IP 10.244.1.89 > 10.244.2.58: ICMP echo request, id 7424, seq 0, length 64
13:02:27.094798 IP 10.244.1.89 > 10.244.2.58: ICMP echo request, id 7424, seq 1, length 64
13:02:28.095322 IP 10.244.1.89 > 10.244.2.58: ICMP echo request, id 7424, seq 2, length 64
13:02:29.095854 IP 10.244.1.89 > 10.244.2.58: ICMP echo request, id 7424, seq 3, length 64
13:02:30.096752 IP 10.244.1.89 > 10.244.2.58: ICMP echo request, id 7424, seq 4, length 64
13:02:31.097346 IP 10.244.1.89 > 10.244.2.58: ICMP echo request, id 7424, seq 5, length 64

The host-gw backend
Besides the two transport modes above, flannel also offers host-gw. The host-gw backend forwards Pod traffic over the nodes' layer-2 network simply by installing the necessary routes. It works much like the Directrouting mode, but without any VxLAN tunnel to fall back on.

Edit the kube-flannel manifest and change the network configuration in the data field of the kube-flannel-cfg ConfigMap as follows:

[root@master kube_manifest]#  vim kube-flannel.yml 
......
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }
......

[root@master kube_manifest]#  kubectl apply -f kube-flannel.yml 

Once applied, each node gets routes and iptables rules similar to those of directrouting, forwarding Pod traffic over layer 2 and avoiding the overhead of tunnel encapsulation. The limitation is that host-gw cannot forward packets between nodes that are not in the same layer-2 network.

# check the routing table: packets take the same paths as with Directrouting
[root@master kube_manifest]# ip route
default via 10.10.20.254 dev ens37 proto static metric 100 
10.10.20.0/24 dev ens37 proto kernel scope link src 10.10.20.207 metric 100 
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 
10.244.1.0/24 via 10.10.20.210 dev ens37 
10.244.2.0/24 via 10.10.20.202 dev ens37 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 
192.168.147.0/24 dev ens33 proto kernel scope link src 192.168.147.132 metric 10

Run a ping test:
[root@node1 ~]# ping 10.244.2.58

# capture on ens37
[root@node2 ~]# tcpdump -i ens37 -nn host 10.244.2.58
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens37, link-type EN10MB (Ethernet), capture size 262144 bytes
23:11:05.556972 IP 10.10.20.210 > 10.244.2.58: ICMP echo request, id 59528, seq 1, length 64
23:11:05.557794 IP 10.244.2.58 > 10.10.20.210: ICMP echo reply, id 59528, seq 1, length 64
23:11:06.558231 IP 10.10.20.210 > 10.244.2.58: ICMP echo request, id 59528, seq 2, length 64
23:11:06.558610 IP 10.244.2.58 > 10.10.20.210: ICMP echo reply, id 59528, seq 2, length 64

In this mode, packet forwarding works as follows:

1. The Pod on node1 (10.244.1.89/24) sends a packet to the Pod on node2 (10.244.2.58/24). It sees that the destination 10.244.2.58 is not in its own subnet, so the packet goes to the Pod's gateway, the cni0 bridge on node1 (whose host address is 10.10.20.210), and is handled by node1's routing table.

2. node1's route for 10.244.2.0/24 says the packet must be delivered to node2's physical NIC, so it is sent to node2 over the layer-2 network. node2 receives it, finds that the destination address belongs to one of its local virtual interfaces, and forwards the packet to the corresponding Pod.
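For reference, the corresponding host-gw routing entries on node1 would look roughly like the following (a sketch derived from the addresses used above, where 10.10.20.207 is the master and 10.10.20.202 is node2):

10.244.0.0/24 via 10.10.20.207 dev ens37
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
10.244.2.0/24 via 10.10.20.202 dev ens37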
The workflow is illustrated in the figure below:
[Figure: host-gw forwarding workflow]
Those are the three working modes of the Flannel network model. flannel itself, however, cannot implement network policies or traffic isolation for the Pod network; for that we turn to Canal, the project that combines flannel with Calico, to build network policy support.

II. Network Policies

A network policy (NetworkPolicy) is a Kubernetes resource. A NetworkPolicy selects Pods by label and specifies how other Pods, or the outside world, may communicate with those Pods.

Pod traffic has two directions: inbound (Ingress) and outbound (Egress). By default all Pods are non-isolated: traffic from any source can reach them without restriction. Once a NetworkPolicy is defined for a Pod, only the traffic the policy allows can reach it.

Network policy in Kubernetes is likewise implemented by the third-party network plugin, so policies can only be configured with a plugin that supports them, such as Calico, Canal, or kube-router.

Note: Kubernetes supports Egress network policies only since version 1.8; earlier versions support Ingress policies only.

2.1 Deploying Canal to provide network policy support
Calico can provide Kubernetes with both the network itself and network policies on its own, or it can be combined with flannel, where flannel provides the network and Calico only enforces network policy; this combination is referred to as Canal. When working with flannel, Calico's default manifests assume flannel's default Pod network 10.244.0.0/16, so kube-controller-manager must be started with --cluster-cidr set to that network and with --allocate-node-cidrs=true.
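On a kubeadm-based cluster these settings normally appear as flags in the kube-controller-manager static Pod manifest; a quick way to confirm them (a sketch, assuming the default kubeadm manifest path):

[root@master ~]# grep -E 'allocate-node-cidrs|cluster-cidr' /etc/kubernetes/manifests/kube-controller-manager.yaml
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16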

## set up RBAC
[root@master kube_manifest]# kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
clusterrole.rbac.authorization.k8s.io/calico created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created


# deploy Canal to provide network policies
[root@master kube_manifest]# kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/canal/canal.yaml
configmap/canal-config unchanged
serviceaccount/canal unchanged
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org unchanged
error: unable to recognize "https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/canal/canal.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"

## Error: the DaemonSet kind is not recognized under apiVersion extensions/v1beta1
## Download the canal YAML file locally first, then change the DaemonSet's apiVersion from "extensions/v1beta1" to "apps/v1"
[root@master kube_manifest]# wget https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/canal/canal.yaml

[root@master kube_manifest]# vim canal.yaml 
......
kind: DaemonSet
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
......
## redeploy
[root@master kube_manifest]# kubectl apply -f calico/canal.yaml 
configmap/canal-config unchanged
daemonset.apps/canal created
serviceaccount/canal unchanged
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org unchanged

[root@master kube_manifest]# kubectl get ds canal -n kube-system
NAME    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
canal   3         3         2       3            2           beta.kubernetes.io/os=linux   26m


## images required by canal; it is advisable to pull them in advance so the rollout does not drag on or exhaust resources:
quay.io/calico/node:v3.2.6
quay.io/calico/cni:v3.2.6
quay.io/coreos/flannel:v0.9.1

[root@master kube_manifest]# kubectl get pods -n kube-system -o wide |grep canal
canal-dfwk9                      3/3     Running             0          12m     192.168.147.134   node2    <none>           <none>
canal-lg6h8                      3/3     Running             0          12m     192.168.147.133   node1    <none>           <none>
canal-p8v5k                      3/3     Running             0          12m     192.168.147.132   master   <none>           <none>

Canal is deployed as a DaemonSet on every node, in the kube-system namespace. Note that Canal uses the Calico and flannel projects as they are, without modifying their code; Canal is just a deployment pattern for installing and configuring the two together.

2.2 Configuring network policies

In Kubernetes, Pods are the endpoints of inbound and outbound traffic, and they are also what network policies are applied to. A NetworkPolicy object selects a group of Pods with podSelector and defines, on top of that group, a set of rules governing inbound traffic, outbound traffic, or both; which directions take effect is controlled by spec.policyTypes, as illustrated below.

[Figure: a NetworkPolicy governing the ingress and egress traffic of a group of Pods]
By default no traffic control applies to a Pod and packets flow in and out freely. Once network policies are attached, a Pod becomes isolated as soon as any NetworkPolicy in its namespace selects it: from then on the Pod rejects every connection that the matching policies do not allow, while Pods that no policy selects continue to accept all traffic.

For a given set of Pods, inbound and outbound traffic is allowed by default until some rule matches it. Another point to note: if a direction is listed in spec.policyTypes but the corresponding Ingress or Egress field nested under networkpolicy.spec defines no rules at all, then all traffic in that direction is rejected. The basic structure of a network policy is as follows:

[root@master ~]# kubectl explain networkpolicy
KIND:     NetworkPolicy
VERSION:  networking.k8s.io/v1

DESCRIPTION:
     NetworkPolicy describes what network traffic is allowed for a set of Pods

FIELDS:
   apiVersion	<string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind	<string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata	<Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec	<Object>
     Specification of the desired behavior for this NetworkPolicy.

[root@master ~]# kubectl explain networkpolicy.spec
KIND:     NetworkPolicy
VERSION:  networking.k8s.io/v1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior for this NetworkPolicy.

     NetworkPolicySpec provides the specification of a NetworkPolicy

FIELDS:
   egress	<[]Object>
     List of egress rules to be applied to the selected pods. Outgoing traffic
     is allowed if there are no NetworkPolicies selecting the pod (and cluster
     policy otherwise allows the traffic), OR if the traffic matches at least
     one egress rule across all of the NetworkPolicy objects whose podSelector
     matches the pod. If this field is empty then this NetworkPolicy limits all
     outgoing traffic (and serves solely to ensure that the pods it selects are
     isolated by default). This field is beta-level in 1.8

   ingress	<[]Object>
     List of ingress rules to be applied to the selected pods. Traffic is
     allowed to a pod if there are no NetworkPolicies selecting the pod (and
     cluster policy otherwise allows the traffic), OR if the traffic source is
     the pod's local node, OR if the traffic matches at least one ingress rule
     across all of the NetworkPolicy objects whose podSelector matches the pod.
     If this field is empty then this NetworkPolicy does not allow any traffic
     (and serves solely to ensure that the pods it selects are isolated by
     default)

   podSelector	<Object> -required-
     Selects the pods to which this NetworkPolicy object applies. The array of
     ingress rules is applied to any pods selected by this field. Multiple
     network policies can select the same set of pods. In this case, the ingress
     rules for each are combined additively. This field is NOT optional and
     follows standard label selector semantics. An empty podSelector matches all
     pods in this namespace.

   policyTypes	<[]string>
     List of rule types that the NetworkPolicy relates to. Valid options are
     "Ingress", "Egress", or "Ingress,Egress". If this field is not specified,
     it will default based on the existence of Ingress or Egress rules; policies
     that contain an Egress section are assumed to affect Egress, and all
     policies (whether or not they contain an Ingress section) are assumed to
     affect Ingress. If you want to write an egress-only policy, you must
     explicitly specify policyTypes [ "Egress" ]. Likewise, if you want to write
     a policy that specifies that no egress is allowed, you must specify a
     policyTypes value that include "Egress" (since such a policy would not
     include an Egress section and would otherwise default to just [ "Ingress"
     ]). This field is beta-level in 1.8

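Putting these fields together, the overall shape of a NetworkPolicy manifest looks like the skeleton below (for orientation only; the concrete examples that follow fill in real selectors and rules):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy        # any name
  namespace: default          # the namespace whose Pods the policy governs
spec:
  podSelector:                # which Pods the policy applies to; {} selects every Pod in the namespace
    matchLabels:
      app: myapp
  policyTypes:                # which directions the rules below govern
  - Ingress
  - Egress
  ingress:                    # inbound rules: allowed sources (from) and ports
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 80
  egress:                     # outbound rules: allowed destinations (to) and ports
  - to:
    - ipBlock:
        cidr: 10.244.0.0/16
    ports:
    - protocol: TCP
      port: 80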

A network policy example:

Create two namespaces, dev and pro, then use network policy rules to control access to the Pods in them.

## create the namespaces
[root@master ~]# kubectl create namespace dev
namespace/dev created
[root@master ~]# kubectl create namespace pro
namespace/pro created
## define a network policy and apply it in the dev namespace
[root@master network-policy]# vim ingress-def.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  #namespace: dev
spec:
  podSelector: {}
  policyTypes:
   - Ingress   ## Ingress is explicitly listed but no ingress rules are defined, so all inbound traffic is denied
[root@master network-policy]# kubectl apply -f ingress-def.yaml -n dev
networkpolicy.networking.k8s.io/deny-all-ingress created
[root@master network-policy]# kubectl get netpol -n dev
NAME               POD-SELECTOR   AGE
deny-all-ingress   <none>         71s
## create a standalone (unmanaged) Pod in the dev namespace
[root@master network-policy]# vim pod-dev.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-dev
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1

[root@master network-policy]# kubectl apply -f pod-dev.yaml -n dev
pod/pod-dev created
[root@master network-policy]# kubectl get pods -n dev -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
pod-dev   1/1     Running   0          80m   10.244.2.3   node2   <none>           <none>
## Access test: the request hangs with no output at all; it is actually being blocked by the network policy defined above, which restricts inbound traffic.
[root@master network-policy]# curl 10.244.2.3


## create a Pod in the pro namespace and access it
[root@master network-policy]# kubectl apply -f pod-dev.yaml -n pro
pod/pod-dev created

## no network policy is defined in the pro namespace, so the Pod can be reached
[root@master network-policy]# kubectl get pods -n pro -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
pod-dev   1/1     Running   0          80m   10.244.2.4   node2   <none>           <none>
[root@master network-policy]#curl 10.244.2.4
Hello MyAPP | Version: v1 | <a href="hostname.html">Pod Name</a>

## modify the policy above to allow all ingress traffic
[root@master network-policy]# vim ingress-def-2.yaml 

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
  #namespace: dev
spec:
  podSelector: {}
  ingress:    ## the empty rule below matches, i.e. allows, all inbound traffic
   - {}
  policyTypes:
   - Ingress   ## Ingress is listed in policyTypes; the empty ingress rule above allows all inbound traffic

[root@master network-policy]# kubectl delete networkpolicy  deny-all-ingress -n dev
networkpolicy.networking.k8s.io "deny-all-ingress" deleted
[root@master network-policy]# kubectl apply -f ingress-def-2.yaml -n dev
networkpolicy.networking.k8s.io/allow-all-ingress created
[root@master network-policy]# kubectl get networkpolicy -n dev
NAME                POD-SELECTOR   AGE
allow-all-ingress   <none>         3s
## access the Pod in the dev namespace (10.244.2.3) again
[root@node1 ~]# curl 10.244.2.3
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

## allow only some Pods in the dev namespace to be reached, e.g. those labeled app=myapp
## first create additional Pods in the dev namespace carrying the label app=myapp
[root@master network-policy]# vim deploy-demo.yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
[root@master network-policy]# kubectl apply -f deploy-demo.yaml -n dev
deployment.apps/myapp-deploy created
[root@master network-policy]# kubectl get pod -o wide  -n dev
NAME                           READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
myapp-deploy-559ff5c66-44frj   1/1     Running   0          42s    10.244.2.5   node2   <none>           <none>
myapp-deploy-559ff5c66-8khwj   1/1     Running   0          42s    10.244.2.6   node2   <none>           <none>
pod-dev                        1/1     Running   0          151m   10.244.2.3   node2   <none>           <none>
## define the network policy rule
[root@master network-policy]# vim allow-netpol-demo.yaml

apiVersion: networking.k8s.io/v1   # API version
kind: NetworkPolicy                # resource kind
metadata:
  name: allow-myapp-ingress        # name of the NetworkPolicy
spec:                              # NetworkPolicy rule definition
  podSelector:                     # select Pods carrying the label app: myapp
    matchLabels:
      app: myapp
  ingress:
   - from:
     - ipBlock:                     # the network range allowed to access these Pods
         cidr: 10.244.0.0/16
         except:
          - 10.244.1.2/32
     ports:                         # allowed protocol and port
      - protocol: TCP
        port: 80
## first apply the deny-all-ingress policy so it takes effect
[root@master network-policy]# kubectl apply -f ingress-def.yaml -n dev
networkpolicy.networking.k8s.io/deny-all-ingress created
[root@master network-policy]# kubectl apply -f allow-netpol-demo.yaml -n dev
networkpolicy.networking.k8s.io/allow-myapp-ingress created
[root@master network-policy]# kubectl get netpol -n dev
NAME                  POD-SELECTOR   AGE
allow-myapp-ingress   app=myapp      6s
deny-all-ingress      <none>         18s
[root@master network-policy]# kubectl get pods -n dev -o wide
NAME                           READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
myapp-deploy-559ff5c66-44frj   1/1     Running   0          16m    10.244.2.5   node2   <none>           <none>
myapp-deploy-559ff5c66-8khwj   1/1     Running   0          16m    10.244.2.6   node2   <none>           <none>
pod-dev                        1/1     Running   0          166m   10.244.2.3   node2   <none>           <none>

## Access test: the Pod without the app=myapp label cannot be reached, while the two labeled Pods respond normally (and only on port 80)
[root@node1 ~]# curl 10.244.2.3
^C
[root@node1 ~]# curl 10.244.2.5
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@node1 ~]# curl 10.244.2.6
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
## accessing other ports on the Pods labeled app=myapp is also blocked; the connection just hangs
[root@node1 ~]# curl 10.244.2.6:443
^C
[root@node1 ~]# curl 10.244.2.5:443
# for comparison, when no policy is blocking the traffic, a closed port fails immediately instead of hanging, for example:
[root@node1 ~]# curl 10.244.2.5:443
curl: (7) Failed connect to 10.244.2.5:443; Connection refused

## define an Egress rule in the pro namespace
## deny all outbound traffic
[root@master network-policy]# vim egress-def.yaml 

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  #namespace: dev
spec:
  podSelector: {}
  policyTypes:
   - Egress   ## Egress is explicitly listed but no egress rules are defined, so all outbound traffic is denied
[root@master network-policy]# kubectl apply -f egress-def.yaml -n pro
networkpolicy.networking.k8s.io/deny-all-egress created
[root@master network-policy]# kubectl get netpol -n pro
NAME              POD-SELECTOR   AGE
deny-all-egress   <none>         26s
## create the Pod in the pro namespace (already present, hence unchanged)
[root@master network-policy]# kubectl apply -f pod-dev.yaml -n pro
pod/pod-dev unchanged
[root@master network-policy]# kubectl get pod -n pro
NAME      READY   STATUS    RESTARTS   AGE
pod-dev   1/1     Running   0          92m

## ping a Pod in another namespace (here 10.244.2.5 in dev): the ping hangs, showing the egress rule is in effect
[root@master network-policy]# kubectl exec pod-dev -it -n pro -- /bin/sh
/ # ping 10.244.2.5
PING 10.244.2.5 (10.244.2.5): 56 data bytes

## change the policy above to allow all outbound traffic
[root@master network-policy]# vim egress-def.yaml 

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  #namespace: dev
spec:
  podSelector: {}
  egress:   ## an empty rule allows all outbound traffic
  - {}
  policyTypes:
   - Egress 
[root@master network-policy]# kubectl apply -f egress-def.yaml -n pro
networkpolicy.networking.k8s.io/deny-all-egress configured
## open a shell in the Pod again and test outbound traffic (here to 10.244.1.14, a Pod in the kube-system namespace)
[root@master network-policy]# kubectl exec pod-dev -it -n pro -- /bin/sh
/ # ping 10.244.1.14
PING 10.244.1.14 (10.244.1.14): 56 data bytes
64 bytes from 10.244.1.14: seq=0 ttl=62 time=1.044 ms
64 bytes from 10.244.1.14: seq=1 ttl=62 time=1.762 ms
64 bytes from 10.244.1.14: seq=2 ttl=62 time=0.559 ms
64 bytes from 10.244.1.14: seq=3 ttl=62 time=0.785 ms
^C
--- 10.244.1.14 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.559/1.037/1.762 ms
