Kubernetes Network Isolation
Isolation Mechanism: NetworkPolicy
To use NetworkPolicy in a Kubernetes cluster, the CNI network plugin must ship its own NetworkPolicy controller that implements the Kubernetes NetworkPolicy API. Plugins that implement NetworkPolicy include Weave Net and Calico; Flannel does not. The controller watches NetworkPolicy objects in a control loop, reacts to create, update, and delete events, and then configures the corresponding iptables rules on each host.
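Before deploying anything, it helps to see the shape of the API object involved. The snippet below is only a minimal sketch (the policy name is arbitrary and not used later in this article): an empty podSelector with no ingress rules turns on ingress isolation for every Pod in the namespace, i.e. a default-deny whitelist that later policies can punch holes into.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # arbitrary example name
  namespace: default
spec:
  podSelector: {}              # empty selector = every Pod in the namespace
  policyTypes:
  - Ingress                    # no ingress rules are listed, so all inbound traffic is denied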
Deploying Weave Net
controlplane $ cat /opt/weave-kube.yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: weave-net
    labels:
      name: weave-net
    namespace: kube-system
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: weave-net
    labels:
      name: weave-net
  rules:
  - apiGroups:
    - ''
    resources:
    - pods
    - namespaces
    - nodes
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - networking.k8s.io
    resources:
    - networkpolicies
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - ''
    resources:
    - nodes/status
    verbs:
    - patch
    - update
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: weave-net
    labels:
      name: weave-net
  roleRef:
    kind: ClusterRole
    name: weave-net
    apiGroup: rbac.authorization.k8s.io
  subjects:
  - kind: ServiceAccount
    name: weave-net
    namespace: kube-system
- apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: weave-net
    labels:
      name: weave-net
    namespace: kube-system
  rules:
  - apiGroups:
    - ''
    resourceNames:
    - weave-net
    resources:
    - configmaps
    verbs:
    - get
    - update
  - apiGroups:
    - ''
    resources:
    - configmaps
    verbs:
    - create
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: weave-net
    labels:
      name: weave-net
    namespace: kube-system
  roleRef:
    kind: Role
    name: weave-net
    apiGroup: rbac.authorization.k8s.io
  subjects:
  - kind: ServiceAccount
    name: weave-net
    namespace: kube-system
- apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: weave-net
    labels:
      name: weave-net
    namespace: kube-system
  spec:
    minReadySeconds: 5
    selector:
      matchLabels:
        name: weave-net
    template:
      metadata:
        labels:
          name: weave-net
      spec:
        containers:
        - name: weave
          command:
          - /home/weave/launch.sh
          env:
          - name: IPALLOC_RANGE
            value: 10.32.0.0/24
          - name: HOSTNAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: spec.nodeName
          image: 'docker.io/weaveworks/weave-kube:2.6.0'
          readinessProbe:
            httpGet:
              host: 127.0.0.1
              path: /status
              port: 6784
          resources:
            requests:
              cpu: 10m
          securityContext:
            privileged: true
          volumeMounts:
          - name: weavedb
            mountPath: /weavedb
          - name: cni-bin
            mountPath: /host/opt
          - name: cni-bin2
            mountPath: /host/home
          - name: cni-conf
            mountPath: /host/etc
          - name: dbus
            mountPath: /host/var/lib/dbus
          - name: lib-modules
            mountPath: /lib/modules
          - name: xtables-lock
            mountPath: /run/xtables.lock
        - name: weave-npc
          env:
          - name: HOSTNAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: spec.nodeName
          image: 'docker.io/weaveworks/weave-npc:2.6.0'
          resources:
            requests:
              cpu: 10m
          securityContext:
            privileged: true
          volumeMounts:
          - name: xtables-lock
            mountPath: /run/xtables.lock
        hostNetwork: true
        hostPID: true
        restartPolicy: Always
        securityContext:
          seLinuxOptions: {}
        serviceAccountName: weave-net
        tolerations:
        - effect: NoSchedule
          operator: Exists
        volumes:
        - name: weavedb
          hostPath:
            path: /var/lib/weave
        - name: cni-bin
          hostPath:
            path: /opt
        - name: cni-bin2
          hostPath:
            path: /home
        - name: cni-conf
          hostPath:
            path: /etc
        - name: dbus
          hostPath:
            path: /var/lib/dbus
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
    updateStrategy:
      type: RollingUpdate
controlplane $ kubectl apply -f /opt/weave-kube.yaml
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
controlplane $ kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-9l9jz 1/1 Running 0 109s
coredns-fb8b8dccf-fzlhr 1/1 Running 0 109s
etcd-controlplane 1/1 Running 0 52s
kube-apiserver-controlplane 1/1 Running 0 61s
kube-controller-manager-controlplane 1/1 Running 0 53s
kube-proxy-xkpmr 1/1 Running 0 109s
kube-scheduler-controlplane 1/1 Running 1 51s
weave-net-mpg84 2/2 Running 1 26s
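The DaemonSet runs two containers per node: weave, which builds the overlay network, and weave-npc, the NetworkPolicy controller that translates policies into iptables rules. As an optional sanity check (these commands are a suggestion, not part of the captured session), the rollout and the controller logs can be inspected:
$ kubectl rollout status daemonset/weave-net -n kube-system
$ kubectl logs -n kube-system -l name=weave-net -c weave-npc --tail=20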
Deploying MongoDB for Testing
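The StatefulSet manifests for the two MongoDB Pods are not included in this article. A minimal sketch of what mongodb-standalone could look like is shown below; the image and the headless Service name are assumptions (the transcript only reveals Percona Server for MongoDB v4.0.23-18), while the app=database label is what the NetworkPolicy defined later selects on. mongodb-test is analogous but carries the label app=database-1, as the kubectl describe output further below shows.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-standalone
  namespace: default
spec:
  serviceName: mongodb-standalone   # a matching headless Service is assumed
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database               # the label selected by the NetworkPolicy below
    spec:
      containers:
      - name: mongod
        image: percona/percona-server-mongodb:4.0.23-18   # assumed image; only the version is visible in the mongo shell banner
        ports:
        - containerPort: 27017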
controlplane $ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mongodb-standalone-0 1/1 Running 0 8m51s 10.32.0.194 node01 <none> <none>
mongodb-test-0 1/1 Running 0 8s 10.32.0.195 node01 <none> <none>
# Connect from mongodb-test-0 to mongodb-standalone-0
controlplane $ kubectl exec -it mongodb-test-0 /bin/sh
sh-4.4$ bin/mongo --host 10.32.0.194:27017
Percona Server for MongoDB shell version v4.0.23-18
connecting to: mongodb://10.32.0.194:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("5020ce6f-04db-4294-b3eb-eeaffcfc930a") }
Percona Server for MongoDB server version: v4.0.23-18
...
2021-03-17T10:12:58.227+0000 I CONTROL [initandlisten]
> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
>
Deploying a NetworkPolicy
[ceph@k8s-master network]$ vim network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.32.1.0/24
    - namespaceSelector:
        matchLabels:
          name: holmes
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 27017
  egress:
  - to:
    - ipBlock:
        cidr: 10.32.0.0/24
    ports:
    - protocol: TCP
      port: 27017
[ceph@k8s-master network]$ kubectl apply -f network-policy.yaml
networkpolicy.networking.k8s.io/test-network-policy configured
[ceph@k8s-master network]$ kubectl get networkpolicy
NAME POD-SELECTOR AGE
test-network-policy app=database 23s
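To double-check how the API server interpreted the rules, the policy can also be printed in expanded form (an optional step, not part of the captured session):
$ kubectl describe networkpolicy test-network-policy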
The rules a NetworkPolicy defines are effectively a whitelist. The policyTypes field above declares both ingress (inbound) and egress (outbound) rules.
The ingress section defines from and ports, i.e. the whitelist of allowed sources and the allowed port. The source whitelist lists three parallel cases: ipBlock, namespaceSelector, and podSelector. The egress section defines to and ports, i.e. the whitelist of allowed destinations and the allowed port.
In summary, this NetworkPolicy object specifies the following isolation rules:
- The rules apply only to Pods in the default Namespace that carry the app=database label, and they restrict both ingress (inbound) and egress (outbound) traffic.
- Kubernetes rejects any request to an isolated Pod unless the request comes from one of the whitelisted sources below and targets port 27017 of the isolated Pod. The whitelisted sources are:
  - Pods in any Namespace that carries the name=holmes label (see the labeling note after this list);
  - Pods in the default Namespace that carry the app=database label;
  - any request whose source address falls within the 10.32.1.0/24 range.
- Kubernetes rejects any outbound request from an isolated Pod unless the destination address falls within the 10.32.0.0/24 range and the destination port is 27017.
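Note that the name=holmes whitelist entry only matches traffic once some Namespace actually carries that label. The label can be added with kubectl (the holmes namespace is used here purely as an example):
$ kubectl create namespace holmes           # only if it does not exist yet
$ kubectl label namespace holmes name=holmes
$ kubectl get namespace holmes --show-labels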
Retesting the Connection
# mongodb-test-0 does not match the NetworkPolicy whitelist (its label is app=database-1)
controlplane $ kubectl describe po mongodb-test-0
Name: mongodb-test-0
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: node01/172.17.0.72
Start Time: Wed, 17 Mar 2021 10:21:26 +0000
Labels: app=database-1
controller-revision-hash=mongodb-test-5fbb49574f
selector=mongodb-test
statefulset.kubernetes.io/pod-name=mongodb-test-0
Annotations: <none>
Status: Running
IP: 10.32.0.195
Controlled By: StatefulSet/mongodb-test
...
controlplane $ kubectl exec -it mongodb-test-0 /bin/sh
sh-4.4$ bin/mongo --host 10.32.0.194:27017
Percona Server for MongoDB shell version v4.0.23-18
connecting to: mongodb://10.32.0.194:27017/?gssapiServiceName=mongodb
2021-03-17T10:30:59.566+0000 E QUERY [js] Error: couldn't connect to server 10.32.0.194:27017, connection attempt failed: SocketException: Error connecting to 10.32.0.194:27017 :: caused by :: Connection timed out :
connect@src/mongo/shell/mongo.js:356:17
@(connect):2:6
exception: connect failed
sh-4.4$
# mongodb-standalone-0 can still connect to mongodb-test-0 (egress to 10.32.0.0/24 on port 27017 is allowed)
controlplane $ kubectl exec -it mongodb-standalone-0 /bin/sh
sh-4.4$ bin/mongo --host 10.32.0.195:27017
Percona Server for MongoDB shell version v4.0.23-18
connecting to: mongodb://10.32.0.195:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("ceb6eafa-43c1-4028-987f-0b4a3856a9ef") }
Percona Server for MongoDB server version: v4.0.23-18
...
2021-03-17T10:21:28.796+0000 I CONTROL [initandlisten]
> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
The experiment above shows that network isolation can be achieved cleanly within a single cluster. In a multi-tenant scenario, it is enough to configure a set of NetworkPolicy rules per tenant so that each tenant's application Pods can only reach the instance Pods that belong to that tenant.
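As a rough sketch of that idea (the namespace name tenant-a and the tenant label are assumptions, not taken from the experiment above), a per-tenant policy could look like this, admitting only tenant-a's own application Pods to its database Pods on port 27017:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-a-db-isolation   # hypothetical name
  namespace: tenant-a           # one namespace per tenant is assumed
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tenant: tenant-a      # only this tenant's application Pods may connect
    ports:
    - protocol: TCP
      port: 27017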