The kube-router args are configured as follows:
- "--run-router=true"
- "--run-firewall=false"
- "--run-service-proxy=false"
- "--enable-cni=false"
- "--enable-pod-egress=false"
- "--enable-ibgp=true"
- "--enable-overlay=true"
- "--peer-router-ips=<CHANGE ME>"
- "--peer-router-asns=<CHANGE ME>"
- "--cluster-asn=<CHANGE ME>"
- "--advertise-cluster-ip=true"
- "--advertise-external-ip=true"
- "--advertise-loadbalancer-ip=true"
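The three `<CHANGE ME>` placeholders must be filled in before deploying. A minimal sketch of doing this with `sed`, using hypothetical example values (192.168.8.1 as the upstream router's IP, 64512 as its ASN, 64513 as the cluster's ASN) — adjust them to your environment:

```shell
# The placeholder lines from the args above; in practice you would run the
# sed substitution against the full manifest file instead of this snippet.
args='- "--peer-router-ips=<CHANGE ME>"
- "--peer-router-asns=<CHANGE ME>"
- "--cluster-asn=<CHANGE ME>"'

# Substitute each placeholder with the (hypothetical) real value.
echo "$args" | sed \
  -e 's/--peer-router-ips=<CHANGE ME>/--peer-router-ips=192.168.8.1/' \
  -e 's/--peer-router-asns=<CHANGE ME>/--peer-router-asns=64512/' \
  -e 's/--cluster-asn=<CHANGE ME>/--cluster-asn=64513/'
```

The cluster ASN and the peer router's ASN must differ for eBGP peering; pick private ASNs (64512–65534) unless you have assigned ones.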
The complete YAML:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-router
    tier: node
  name: kube-router
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-router
      tier: node
  template:
    metadata:
      labels:
        k8s-app: kube-router
        tier: node
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: kube-router
      containers:
      - name: kube-router
        image: docker.io/cloudnativelabs/kube-router
        imagePullPolicy: Always
        args:
        - "--run-router=true"
        - "--run-firewall=false"
        - "--run-service-proxy=false"
        - "--enable-cni=false"
        - "--enable-pod-egress=false"
        - "--enable-ibgp=true"
        - "--enable-overlay=true"
        - "--peer-router-ips=<CHANGE ME>"
        - "--peer-router-asns=<CHANGE ME>"
        - "--cluster-asn=<CHANGE ME>"
        - "--advertise-cluster-ip=true"
        - "--advertise-external-ip=true"
        - "--advertise-loadbalancer-ip=true"
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        livenessProbe:
          httpGet:
            path: /healthz
            port: 20244
          initialDelaySeconds: 10
          periodSeconds: 3
        resources:
          requests:
            cpu: 250m
            memory: 250Mi
        securityContext:
          privileged: true
      hostNetwork: true
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      - effect: NoSchedule
        key: node.kubernetes.io/not-ready
        operator: Exists
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-router
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-router
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - pods
  - services
  - nodes
  - endpoints
  verbs:
  - list
  - get
  - watch
- apiGroups:
  - "networking.k8s.io"
  resources:
  - networkpolicies
  verbs:
  - list
  - get
  - watch
- apiGroups:
  - extensions
  resources:
  - networkpolicies
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-router
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-router
subjects:
- kind: ServiceAccount
  name: kube-router
  namespace: kube-system
kubectl -n kube-system get pods -l k8s-app=cilium
All pod routes have now switched to direct routing.
Verify the installation again.
In the output below we can see the routes that have been installed:
10.0.0.0/24 via 10.0.2.119 dev cilium_host src 10.0.2.119 mtu 1450
10.0.1.0/24 via 10.0.2.119 dev cilium_host src 10.0.2.119 mtu 1450
10.0.2.0/24 via 10.0.2.119 dev cilium_host src 10.0.2.119
10.0.3.0/24 via 10.0.2.119 dev cilium_host src 10.0.2.119 mtu 1450
- BGP routes: kube-router installs this type of route when it determines that a remote PodCIDR is reachable via a router known to the local host. Such a route forwards pod-to-pod traffic directly to that router, without any encapsulation.
- Note: my cluster has three nodes, hence the three routes below.
10.244.0.0/24 via 192.168.8.66 dev ens32 proto 17
10.244.1.0/24 via 192.168.8.65 dev ens32 proto 17
10.244.2.0/24 via 192.168.8.61 dev ens32 proto 17
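The three routes above all carry `proto 17`, the protocol number these BGP-learned routes were installed with, so filtering on it separates them from kernel and static routes. A small sketch using the sample route table from this cluster (on a live node you would read the table with `ip route` instead):

```shell
# Sample ip route output from the three-node cluster above.
routes='10.244.0.0/24 via 192.168.8.66 dev ens32 proto 17
10.244.1.0/24 via 192.168.8.65 dev ens32 proto 17
10.244.2.0/24 via 192.168.8.61 dev ens32 proto 17'

# Keep only routes whose protocol number is 17 and print each PodCIDR
# together with the next-hop node it is routed to.
echo "$routes" | awk '$NF == "17" {print $1, "->", $3}'
```

This prints one `PodCIDR -> next-hop` pair per node, which makes it easy to confirm that every node's PodCIDR is reachable via direct routing.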
Verification with the command from the official documentation
Run the following command to verify that your cluster has correct network connectivity:
[root@node66 ~]# cilium connectivity test
ℹ️ Monitor aggregation detected, will skip some flow validation steps
✨ [kubernetes] Creating namespace for connectivity check...
✨ [kubernetes] Deploying echo-same-node service...
✨ [kubernetes] Deploying same-node deployment...
✨ [kubernetes] Deploying client deployment...
✨ [kubernetes] Deploying client2 deployment...
✨ [kubernetes] Deploying echo-other-node service...
✨ [kubernetes] Deploying other-node deployment...
⌛ [kubernetes] Waiting for deployments [client client2 echo-same-node] to become ready...
⌛ [kubernetes] Waiting for deployments [echo-other-node] to become ready...
⌛ [kubernetes] Waiting for CiliumEndpoint for pod cilium-test/client-6488dcf5d4-jvt8j to appear...
⌛ [kubernetes] Waiting for CiliumEndpoint for pod cilium-test/client2-5998d566b4-cpztk to appear...
⌛ [kubernetes] Waiting for CiliumEndpoint for pod cilium-test/echo-other-node-f4d46f75b-h9m9d to appear...
⌛ [kubernetes] Waiting for CiliumEndpoint for pod cilium-test/echo-same-node-745bd5c77-jp8qf to appear...
⌛ [kubernetes] Waiting for Service cilium-test/echo-other-node to become ready...
⌛ [kubernetes] Waiting for Service cilium-test/echo-same-node to become ready...
⌛ [kubernetes] Waiting for NodePort 192.168.8.61:442 (cilium-test/echo-other-node) to become ready...
⌛ [kubernetes] Waiting for NodePort 192.168.8.61:58175 (cilium-test/echo-same-node) to become ready...
⌛ [kubernetes] Waiting for NodePort 192.168.8.65:442 (cilium-test/echo-other-node) to become ready...
⌛ [kubernetes] Waiting for NodePort 192.168.8.65:58175 (cilium-test/echo-same-node) to become ready...
✅ 69/69 tests successful (0 warnings)
Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉
Official documentation reference:
https://docs.cilium.io/en/v1.11/gettingstarted/kube-router/