This MetalLB deployment targets a cluster built on Docker and flannel.
Installing MetalLB is fairly straightforward.
Upstream offers three installation methods: plain manifest files (YAML), Helm 3, and Kustomize. I'll stick with the manifest files.
Many tutorials simplify the steps by pointing kubectl directly at the YAML's URL.
That is quick and convenient, but it leaves no local copy to archive or modify, so I download the YAML files first and deploy from the local copies.
The two deployment manifests for v0.12.1:

```bash
# download the two v0.12.1 deployment manifests
$ wget https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
$ wget https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
```
raw.githubusercontent.com may be unreachable from some networks, so I paste the file contents below.
namespace.yaml
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  labels:
    app: metallb
```
metallb.yaml
```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  labels:
    app: metallb
  name: controller
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities: []
  allowedHostPaths: []
  defaultAddCapabilities: []
  defaultAllowPrivilegeEscalation: false
  fsGroup:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  hostIPC: false
  hostNetwork: false
  hostPID: false
  privileged: false
  readOnlyRootFilesystem: true
  requiredDropCapabilities:
  - ALL
  runAsUser:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  volumes:
  - configMap
  - secret
  - emptyDir
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  labels:
    app: metallb
  name: speaker
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities:
  - NET_RAW
  allowedHostPaths: []
  defaultAddCapabilities: []
  defaultAllowPrivilegeEscalation: false
  fsGroup:
    rule: RunAsAny
  hostIPC: false
  hostNetwork: true
  hostPID: false
  hostPorts:
  - max: 7472
    min: 7472
  - max: 7946
    min: 7946
  privileged: true
  readOnlyRootFilesystem: true
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: metallb
  name: controller
  namespace: metallb-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: metallb
  name: speaker
  namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: metallb
  name: metallb-system:controller
rules:
- apiGroups:
  - ''
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ''
  resources:
  - services/status
  verbs:
  - update
- apiGroups:
  - ''
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - policy
  resourceNames:
  - controller
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: metallb
  name: metallb-system:speaker
rules:
- apiGroups:
  - ''
  resources:
  - services
  - endpoints
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups: ["discovery.k8s.io"]
  resources:
  - endpointslices
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ''
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - policy
  resourceNames:
  - speaker
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: metallb
  name: config-watcher
  namespace: metallb-system
rules:
- apiGroups:
  - ''
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: metallb
  name: pod-lister
  namespace: metallb-system
rules:
- apiGroups:
  - ''
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: metallb
  name: controller
  namespace: metallb-system
rules:
- apiGroups:
  - ''
  resources:
  - secrets
  verbs:
  - create
- apiGroups:
  - ''
  resources:
  - secrets
  resourceNames:
  - memberlist
  verbs:
  - list
- apiGroups:
  - apps
  resources:
  - deployments
  resourceNames:
  - controller
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: metallb
  name: metallb-system:controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:controller
subjects:
- kind: ServiceAccount
  name: controller
  namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: metallb
  name: metallb-system:speaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:speaker
subjects:
- kind: ServiceAccount
  name: speaker
  namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app: metallb
  name: config-watcher
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: config-watcher
subjects:
- kind: ServiceAccount
  name: controller
- kind: ServiceAccount
  name: speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app: metallb
  name: pod-lister
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-lister
subjects:
- kind: ServiceAccount
  name: speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app: metallb
  name: controller
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: controller
subjects:
- kind: ServiceAccount
  name: controller
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: metallb
    component: speaker
  name: speaker
  namespace: metallb-system
spec:
  selector:
    matchLabels:
      app: metallb
      component: speaker
  template:
    metadata:
      annotations:
        prometheus.io/port: '7472'
        prometheus.io/scrape: 'true'
      labels:
        app: metallb
        component: speaker
    spec:
      containers:
      - args:
        - --port=7472
        - --config=config
        - --log-level=info
        env:
        - name: METALLB_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: METALLB_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: METALLB_ML_BIND_ADDR
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        # needed when another software is also using memberlist / port 7946
        # when changing this default you also need to update the container ports definition
        # and the PodSecurityPolicy hostPorts definition
        #- name: METALLB_ML_BIND_PORT
        #  value: "7946"
        - name: METALLB_ML_LABELS
          value: "app=metallb,component=speaker"
        - name: METALLB_ML_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: memberlist
              key: secretkey
        image: quay.io/metallb/speaker:v0.12.1
        name: speaker
        ports:
        - containerPort: 7472
          name: monitoring
        - containerPort: 7946
          name: memberlist-tcp
        - containerPort: 7946
          name: memberlist-udp
          protocol: UDP
        livenessProbe:
          httpGet:
            path: /metrics
            port: monitoring
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /metrics
            port: monitoring
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_RAW
            drop:
            - ALL
          readOnlyRootFilesystem: true
      hostNetwork: true
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: speaker
      terminationGracePeriodSeconds: 2
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: metallb
    component: controller
  name: controller
  namespace: metallb-system
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: metallb
      component: controller
  template:
    metadata:
      annotations:
        prometheus.io/port: '7472'
        prometheus.io/scrape: 'true'
      labels:
        app: metallb
        component: controller
    spec:
      containers:
      - args:
        - --port=7472
        - --config=config
        - --log-level=info
        env:
        - name: METALLB_ML_SECRET_NAME
          value: memberlist
        - name: METALLB_DEPLOYMENT
          value: controller
        image: quay.io/metallb/controller:v0.12.1
        name: controller
        ports:
        - containerPort: 7472
          name: monitoring
        livenessProbe:
          httpGet:
            path: /metrics
            port: monitoring
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /metrics
            port: monitoring
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all
          readOnlyRootFilesystem: true
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
        fsGroup: 65534
      serviceAccountName: controller
      terminationGracePeriodSeconds: 0
```
With the official YAML files in hand, we also prepare the ConfigMap ahead of time.
The MetalLB repository on GitHub provides a reference config, and layer2 mode needs very little of it.
Only the most basic parameters have to be defined:
a range of IPs in the same subnet as the k8s nodes.
configmap-metallb.yaml
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.8.2.100-10.8.2.150
```
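As an aside, the v0.12 config format also accepts CIDR notation for a pool, and multiple pools can coexist in one ConfigMap. A minimal sketch, assuming the same `metallb-system/config` ConfigMap (the second pool name and its range here are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.8.2.100-10.8.2.150   # explicit range form
    - name: reserved            # illustrative second pool
      protocol: layer2
      addresses:
      - 10.8.3.0/24             # CIDR form covers the whole subnet
```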
Deployment breaks down into three steps:

1. apply the namespace
2. deploy the deployment and daemonset
3. apply the configmap

Let's begin:
```bash
# create the namespace
$ kubectl apply -f namespace.yaml
namespace/metallb-system created

$ kubectl get ns
NAME              STATUS   AGE
default           Active   8d
kube-node-lease   Active   8d
kube-public       Active   8d
kube-system       Active   8d
metallb-system    Active   8s
nginx-quic        Active   8d
```
```bash
# deploy the deployment and daemonset, plus the other resources they need
$ kubectl apply -f metallb.yaml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
role.rbac.authorization.k8s.io/controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
rolebinding.rbac.authorization.k8s.io/controller created
daemonset.apps/speaker created
deployment.apps/controller created

# the controller deployment watches the state of services
$ kubectl get deploy -n metallb-system
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
controller   1/1     1            1           86s

# the speaker runs as a daemonset on every node to negotiate the VIP and send/receive ARP and NDP packets
$ kubectl get ds -n metallb-system
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
speaker   3         3         3       3            3           kubernetes.io/os=linux   64s

$ kubectl get pod -n metallb-system -o wide
NAME                         READY   STATUS    RESTARTS   AGE    IP           NODE                                      NOMINATED NODE   READINESS GATES
controller-57fd9c5bb-svtjw   1/1     Running   0          117s   10.8.65.4    tiny-flannel-worker-8-11.k8s.tcinternal   <none>           <none>
speaker-bf79q                1/1     Running   0          117s   10.31.8.11   tiny-flannel-worker-8-11.k8s.tcinternal   <none>           <none>
speaker-fl5l8                1/1     Running   0          117s   10.31.8.12   tiny-flannel-worker-8-12.k8s.tcinternal   <none>           <none>
speaker-nw2fm                1/1     Running   0          117s   10.31.8.1    tiny-flannel-master-8-1.k8s.tcinternal    <none>           <none>
```
```bash
$ kubectl apply -f configmap-metallb.yaml
configmap/config created
```
Testing and verification:
Define a custom service for testing. The test image is nginx, which by default returns the requesting client's IP and port.
Set the type field of the service spec to LoadBalancer.
Leaving loadBalancerIP unspecified is recommended; MetalLB then assigns an address from the pool automatically.
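If a deterministic address is needed, MetalLB v0.12 does honor `spec.loadBalancerIP` as long as the requested address falls inside a configured pool. A hedged sketch (the address below is illustrative, picked from the pool defined earlier):

```yaml
spec:
  type: LoadBalancer
  loadBalancerIP: 10.8.2.101  # must belong to a MetalLB address pool, or the service stays pending
```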
nginx-quic-lb.yaml
```bash
$ cat > nginx-quic-lb.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-lb
  namespace: nginx-quic
spec:
  selector:
    matchLabels:
      app: nginx-lb
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx-lb
    spec:
      containers:
      - name: nginx-lb
        image: tinychen777/nginx-quic:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-lb-service
  namespace: nginx-quic
spec:
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # match for service access port
    targetPort: 80 # match for pod access port
  type: LoadBalancer
  loadBalancerIP:
EOF
```
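One caveat worth noting for this particular test image: with `externalTrafficPolicy: Cluster`, traffic forwarded across nodes is SNATed, so the nginx pod may report a node IP instead of the real client address. If preserving the client source IP matters, a `Local` variant of the service can be sketched as below (trade-off: only nodes running a ready pod answer for the VIP):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb-service
  namespace: nginx-quic
spec:
  externalTrafficPolicy: Local   # preserve the client source IP; no cross-node SNAT
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
```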
Create the test service:

```bash
$ kubectl apply -f nginx-quic-lb.yaml
namespace/nginx-quic created
deployment.apps/nginx-lb created
service/nginx-lb-service created
```
Check the service status: TYPE has changed to LoadBalancer, and EXTERNAL-IP shows 10.8.2.100, an address from the pool we defined.
```bash
# check the service status; TYPE is now LoadBalancer
$ kubectl get svc -n nginx-quic
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-lb-service   LoadBalancer   10.96.86.118   10.8.2.100    80:32251/TCP   16h
```
Now inspect the full state of nginx-lb-service on the cluster: the output shows the ClusterIP, LoadBalancer, and NodePort details, along with traffic-policy settings such as externalTrafficPolicy.
```bash
$ kubectl get svc -n nginx-quic nginx-lb-service -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginx-lb-service","namespace":"nginx-quic"},"spec":{"externalTrafficPolicy":"Cluster","internalTrafficPolicy":"Cluster","ports":[{"port":80,"protocol":"TCP","targetPort":80}],"selector":{"app":"nginx-lb"},"type":"LoadBalancer"}}
  creationTimestamp: "2023-01-31T09:47:45Z"
  name: nginx-lb-service
  namespace: nginx-quic
  resourceVersion: "9300"
  uid: 80e7d78a-0290-4352-bce9-d06147d299f4
spec:
  clusterIP: 10.96.86.118
  clusterIPs:
  - 10.96.86.118
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32251
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-lb
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.8.2.100
```
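Finally, if several address pools were configured, MetalLB v0.12 lets an individual service request a specific pool through the `metallb.universe.tf/address-pool` annotation. A minimal sketch (the pool name must match one defined in the MetalLB ConfigMap):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb-service
  namespace: nginx-quic
  annotations:
    metallb.universe.tf/address-pool: default  # name of a pool from the MetalLB config
spec:
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
```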