This document describes how to install a single control-plane Kubernetes v1.15 cluster on CentOS with kubeadm, and then deploy the external OpenStack cloud provider and the Cinder CSI plugin to use Cinder volumes as persistent volumes in Kubernetes.
Preparation in OpenStack
This cluster runs on OpenStack VMs, so let's create a few things in OpenStack first (a sketch with example commands follows this list):
- A project/tenant for this Kubernetes cluster
- A user in this project for Kubernetes, to query node information, attach volumes, and so on
- A private network and subnet
- A router for this private network, connected to a public network for floating IPs
- A security group for all Kubernetes VMs
- A VM as the control-plane node and a few VMs as worker nodes
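A minimal sketch of these steps with the openstack CLI is below. All names (k8s-project, k8s-user, k8s-net, and so on), the password, and the subnet range are hypothetical placeholders; your cloud's public network name and role names may differ:

```bash
# Project/tenant and a user for Kubernetes (names are placeholders)
openstack project create k8s-project
openstack user create --project k8s-project --password changeme k8s-user
# The role name may be "member" or "_member_" depending on your cloud
openstack role add --project k8s-project --user k8s-user member

# Private network, subnet, and a router attached to the public network
openstack network create k8s-net
openstack subnet create --network k8s-net --subnet-range 192.168.1.0/24 k8s-subnet
openstack router create k8s-router
openstack router set --external-gateway public k8s-router
openstack router add subnet k8s-router k8s-subnet

# One security group shared by all Kubernetes VMs
openstack security group create k8s-sg
```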
The security group will have the following rules to open the ports required by Kubernetes (the standard port list from the kubeadm documentation).
Control-Plane Node

| Protocol | Direction | Port Range | Purpose                 |
|----------|-----------|------------|-------------------------|
| TCP      | Inbound   | 6443       | Kubernetes API server   |
| TCP      | Inbound   | 2379-2380  | etcd server client API  |
| TCP      | Inbound   | 10250      | Kubelet API             |
| TCP      | Inbound   | 10251      | kube-scheduler          |
| TCP      | Inbound   | 10252      | kube-controller-manager |
Worker Node

| Protocol | Direction | Port Range  | Purpose           |
|----------|-----------|-------------|-------------------|
| TCP      | Inbound   | 10250       | Kubelet API       |
| TCP      | Inbound   | 30000-32767 | NodePort Services |
CNI ports on the control-plane and worker nodes
CNI-specific ports only need to be opened when that particular CNI plugin is used. In this guide, we use Weave Net, so only the Weave Net ports (TCP 6781-6784 and UDP 6783-6784) need to be opened in the security group.
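As a hedged example, the rules above could be added to the (hypothetical) k8s-sg security group from the earlier sketch like this; in practice you would normally restrict the remote IP prefix to your private subnet:

```bash
# Kubernetes control-plane and worker ports
openstack security group rule create --protocol tcp --dst-port 6443 k8s-sg
openstack security group rule create --protocol tcp --dst-port 2379:2380 k8s-sg
openstack security group rule create --protocol tcp --dst-port 10250:10252 k8s-sg
openstack security group rule create --protocol tcp --dst-port 30000:32767 k8s-sg

# Weave Net ports
openstack security group rule create --protocol tcp --dst-port 6781:6784 k8s-sg
openstack security group rule create --protocol udp --dst-port 6783:6784 k8s-sg
```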
The control-plane node needs at least 2 CPUs and 4 GB of RAM. After a VM is launched, verify its hostname and make sure it is the same as the node name in Nova. If the hostname is not resolvable, add it to /etc/hosts.
For example, if the VM is named master1 and its internal IP is 192.168.1.4, add it to /etc/hosts and set the hostname to master1:
echo "192.168.1.4 master1" >> /etc/hostshostnamectl set-hostname --static master1
Install Docker and Kubernetes
Next, we'll follow the official documentation to install Docker and Kubernetes with kubeadm.
Follow the steps in the container runtime documentation to install Docker.
Note that it is a best practice to use systemd as the cgroup driver for Kubernetes. If you use an internal container registry, add it to the Docker config.
```bash
# Install Docker CE
## Set up the repository
### Install required packages.
yum install yum-utils device-mapper-persistent-data lvm2

### Add Docker repository.
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

## Install Docker CE.
yum update && yum install docker-ce-18.06.2.ce

## Create /etc/docker directory.
mkdir /etc/docker

# Configure the Docker daemon (content as in the official container runtime docs)
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker
```
Follow the steps in the Installing kubeadm documentation to install kubeadm.
```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux in permissive mode (effectively disabling it)
# Caveat: In a production environment you may not want to disable SELinux, please refer to Kubernetes documents about SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# check if br_netfilter module is loaded
lsmod | grep br_netfilter
# if not, load it explicitly with
modprobe br_netfilter
```
The official documentation for creating a single control-plane cluster can be found in the Creating a single control-plane cluster with kubeadm documentation.
We'll mostly follow that documentation, but also add extra things for the cloud provider. To make things clearer, we'll use kubeadm-config.yml for the control-plane node. In this config we specify the external OpenStack cloud provider to use and where to find its configuration. We also enable the storage API in the API server's runtime config, so we can use OpenStack volumes as persistent volumes in Kubernetes.
```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "external"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: "v1.15.1"
apiServer:
  extraArgs:
    enable-admission-plugins: NodeRestriction
    runtime-config: "storage.k8s.io/v1=true"
controllerManager:
  extraArgs:
    external-cloud-volume-plugin: openstack
  extraVolumes:
  - name: "cloud-config"
    hostPath: "/etc/kubernetes/cloud-config"
    mountPath: "/etc/kubernetes/cloud-config"
    readOnly: true
    pathType: File
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.224.0.0/16"
  dnsDomain: "cluster.local"
```
Now we'll create the cloud config, /etc/kubernetes/cloud-config, for OpenStack. Note that the tenant here is the one we created for all Kubernetes VMs at the beginning; all VMs should be launched in this project/tenant. In addition, you need to create a user in this tenant for Kubernetes to do queries. The ca-file is the CA root certificate for OpenStack's API endpoint, for example https://openstack.cloud:5000/v3. At the time of writing the cloud provider doesn't allow insecure connections (skipping the CA check).
```ini
[Global]
region=RegionOne
username=username
password=password
auth-url=https://openstack.cloud:5000/v3
tenant-id=14ba698c0aec4fd6b7dc8c310f664009
domain-id=default
ca-file=/etc/kubernetes/ca.pem

[LoadBalancer]
subnet-id=b4a9a292-ea48-4125-9fb2-8be2628cb7a1
floating-network-id=bc8a590a-5d65-4525-98f3-f7ef29c727d5

[BlockStorage]
bs-version=v2

[Networking]
public-network-name=public
ipv6-support-disabled=false
```
The next step is to run kubeadm to bootstrap the control-plane node:
```bash
kubeadm init --config=kubeadm-config.yml
```
With the initialization completed, copy the admin config to .kube:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
At this stage, the control-plane node is created but not ready. All nodes have the taint node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule and are waiting to be initialized by the cloud controller manager.
```
# kubectl describe no master1
Name:         master1
Roles:        master
......
Taints:       node-role.kubernetes.io/master:NoSchedule
              node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
              node.kubernetes.io/not-ready:NoSchedule
......
```
Now deploy the OpenStack cloud controller manager into the cluster, following using controller manager with kubeadm.
Create a secret with the cloud-config for the openstack cloud provider:
```bash
kubectl create secret -n kube-system generic cloud-config --from-literal=cloud.conf="$(cat /etc/kubernetes/cloud-config)" --dry-run -o yaml > cloud-config-secret.yaml
kubectl apply -f cloud-config-secret.yaml
```
Get the CA certificate for OpenStack's API endpoint and put it in /etc/kubernetes/ca.pem.
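Ideally this CA bundle comes from your cloud operator. If you don't have it at hand, one way to inspect the certificate chain the endpoint presents (hypothetical host and port, matching the cloud-config above) is:

```bash
# Print the chain served by the Keystone endpoint; copy the root CA
# certificate from the output into /etc/kubernetes/ca.pem.
openssl s_client -connect openstack.cloud:5000 -showcerts </dev/null
```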
Create RBAC resources:
```bash
kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-roles.yaml
kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-role-bindings.yaml
```
We'll run the OpenStack cloud controller manager as a DaemonSet rather than a pod. The manager runs only on control-plane nodes, so if there are multiple control-plane nodes, multiple pods will run for high availability. Create openstack-cloud-controller-manager-ds.yaml containing the following manifest, then apply it.
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openstack-cloud-controller-manager
  namespace: kube-system
  labels:
    k8s-app: openstack-cloud-controller-manager
spec:
  selector:
    matchLabels:
      k8s-app: openstack-cloud-controller-manager
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: openstack-cloud-controller-manager
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      securityContext:
        runAsUser: 1001
      tolerations:
      - key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - effect: NoSchedule
        key: node.kubernetes.io/not-ready
      serviceAccountName: cloud-controller-manager
      containers:
        - name: openstack-cloud-controller-manager
          image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:v1.15.0
          args:
            - /bin/openstack-cloud-controller-manager
            - --v=1
            - --cloud-config=$(CLOUD_CONFIG)
            - --cloud-provider=openstack
            - --use-service-account-credentials=true
            - --address=127.0.0.1
          volumeMounts:
            - mountPath: /etc/kubernetes/pki
              name: k8s-certs
              readOnly: true
            - mountPath: /etc/ssl/certs
              name: ca-certs
              readOnly: true
            - mountPath: /etc/config
              name: cloud-config-volume
              readOnly: true
            - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
              name: flexvolume-dir
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
          resources:
            requests:
              cpu: 200m
          env:
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
      hostNetwork: true
      volumes:
      - hostPath:
          path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
          type: DirectoryOrCreate
        name: flexvolume-dir
      - hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
        name: k8s-certs
      - hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate
        name: ca-certs
      - name: cloud-config-volume
        secret:
          secretName: cloud-config
      - name: ca-cert
        secret:
          secretName: openstack-ca-cert
```
When the controller manager is running, it will query OpenStack to get information about the nodes and remove the taint. In the node info you'll see the UUID of the VM in OpenStack.
```
# kubectl describe no master1
Name:         master1
Roles:        master
......
Taints:       node-role.kubernetes.io/master:NoSchedule
              node.kubernetes.io/not-ready:NoSchedule
......
Message:      docker: network plugin is not ready: cni config uninitialized
......
PodCIDR:      10.224.0.0/24
ProviderID:   openstack:///548e3c46-2477-4ce2-968b-3de1314560a5
```
Now install your favourite CNI and the control-plane node will become ready.
For example, to install Weave Net, run this command:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '')"
Next we'll set up the worker nodes.
Firstly, install docker and kubeadm in the same way as on the control-plane node. To join them to the cluster we need the token and ca cert hash from the output of the control-plane installation. If they have expired or been lost, we can recreate them with these commands:
```bash
# check if token is expired
kubeadm token list

# re-create token and show join command
kubeadm token create --print-join-command
```
Create kubeadm-config.yml for the worker nodes with the above token and ca cert hash:
```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 192.168.1.7:6443
    token: 0c0z4p.dnafh6vnmouus569
    caCertHashes: ["sha256:fcb3e956a6880c05fc9d09714424b827f57a6fdc8afc44497180905946527adf"]
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "external"
```
apiServerEndpoint is the control-plane node; token and caCertHashes can be taken from the join command printed in the output of `kubeadm token create`.
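If you ever need to recompute the CA cert hash directly from the cluster CA, the standard recipe from the kubeadm reference documentation, run on the control-plane node, is:

```bash
# Hash of the public key of /etc/kubernetes/pki/ca.crt, as expected by caCertHashes
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```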
Run kubeadm and the worker nodes will join the cluster:
```bash
kubeadm join --config kubeadm-config.yml
```
At this stage we have a Kubernetes cluster with the external OpenStack cloud provider. The provider tells Kubernetes about the mapping between Kubernetes nodes and OpenStack VMs. If Kubernetes wants to attach a persistent volume to a pod, it can find out from the mapping which OpenStack VM the pod is running on, and attach the underlying OpenStack volume to that VM accordingly.
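You can inspect this mapping directly: each node's spec.providerID records the UUID of the backing OpenStack VM, as seen in the describe output earlier. For example:

```bash
# Show each node with the OpenStack VM it maps to
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER-ID:.spec.providerID
```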
Deploy Cinder CSI
The integration with Cinder is provided by the external Cinder CSI plugin, as described in the Cinder CSI documentation.
We'll perform the following steps to install the Cinder CSI plugin. Firstly, create a secret with the CA certificate for OpenStack's API endpoints. It's the same certificate file as the one we used in the cloud provider above.
```bash
kubectl create secret -n kube-system generic openstack-ca-cert --from-literal=ca.pem="$(cat /etc/kubernetes/ca.pem)" --dry-run -o yaml > openstack-ca-cert.yaml
kubectl apply -f openstack-ca-cert.yaml
```
Then create RBAC resources:
```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/release-1.15/manifests/cinder-csi-plugin/cinder-csi-controllerplugin-rbac.yaml
kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/manifests/cinder-csi-plugin/cinder-csi-nodeplugin-rbac.yaml
```
The Cinder CSI plugin consists of a controller plugin and a node plugin. The controller talks to the Kubernetes APIs and the Cinder APIs to create/attach/detach/delete Cinder volumes. The node plugin, in turn, runs on each worker node to bind a storage device (an attached volume) to a pod, and unbind it during deletion. Create cinder-csi-controllerplugin.yaml and apply it to create the csi controller:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: csi-cinder-controller-service
  namespace: kube-system
  labels:
    app: csi-cinder-controllerplugin
spec:
  selector:
    app: csi-cinder-controllerplugin
  ports:
    - name: dummy
      port: 12345
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-cinder-controllerplugin
  namespace: kube-system
spec:
  serviceName: "csi-cinder-controller-service"
  replicas: 1
  selector:
    matchLabels:
      app: csi-cinder-controllerplugin
  template:
    metadata:
      labels:
        app: csi-cinder-controllerplugin
    spec:
      serviceAccount: csi-cinder-controller-sa
      containers:
        - name: csi-attacher
          image: quay.io/k8scsi/csi-attacher:v1.0.1
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-provisioner
          image: quay.io/k8scsi/csi-provisioner:v1.0.1
          args:
            - "--provisioner=csi-cinderplugin"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-snapshotter
          image: quay.io/k8scsi/csi-snapshotter:v1.0.1
          args:
            - "--connection-timeout=15s"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /var/lib/csi/sockets/pluginproxy/
              name: socket-dir
        - name: cinder-csi-plugin
          image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0
          args:
            - /bin/cinder-csi-plugin
            - "--v=5"
            - "--nodeid=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--cloud-config=$(CLOUD_CONFIG)"
            - "--cluster=$(CLUSTER_NAME)"
          env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://csi/csi.sock
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
            - name: CLUSTER_NAME
              value: kubernetes
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: secret-cinderplugin
              mountPath: /etc/config
              readOnly: true
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/csi/sockets/pluginproxy/
            type: DirectoryOrCreate
        - name: secret-cinderplugin
          secret:
            secretName: cloud-config
        - name: ca-cert
          secret:
            secretName: openstack-ca-cert
```
Create cinder-csi-nodeplugin.yaml and apply it to create the csi nodes:
```yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-cinder-nodeplugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-cinder-nodeplugin
  template:
    metadata:
      labels:
        app: csi-cinder-nodeplugin
    spec:
      serviceAccount: csi-cinder-node-sa
      hostNetwork: true
      containers:
        - name: node-driver-registrar
          image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /registration/cinder.csi.openstack.org /registration/cinder.csi.openstack.org-reg.sock"]
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/lib/kubelet/plugins/cinder.csi.openstack.org/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: cinder-csi-plugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0
          args:
            - /bin/cinder-csi-plugin
            - "--nodeid=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--cloud-config=$(CLOUD_CONFIG)"
          env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://csi/csi.sock
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - name: kubelet-dir
              mountPath: /var/lib/kubelet
              mountPropagation: "Bidirectional"
            - name: pods-cloud-data
              mountPath: /var/lib/cloud/data
              readOnly: true
            - name: pods-probe-dir
              mountPath: /dev
              mountPropagation: "HostToContainer"
            - name: secret-cinderplugin
              mountPath: /etc/config
              readOnly: true
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/cinder.csi.openstack.org
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: kubelet-dir
          hostPath:
            path: /var/lib/kubelet
            type: Directory
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory
        - name: pods-cloud-data
          hostPath:
            path: /var/lib/cloud/data
            type: Directory
        - name: pods-probe-dir
          hostPath:
            path: /dev
            type: Directory
        - name: secret-cinderplugin
          secret:
            secretName: cloud-config
        - name: ca-cert
          secret:
            secretName: openstack-ca-cert
```
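Before moving on, it's worth confirming that both the controller and node plugins are up (pod names derive from the manifests above):

```bash
# Expect csi-cinder-controllerplugin-0 and one csi-cinder-nodeplugin pod per worker
kubectl -n kube-system get pods | grep csi-cinder
```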
When they are both running, create a storage class for Cinder:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sc-cinderplugin
provisioner: csi-cinderplugin
```
Then we can create a PVC with this class:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myvol
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-sc-cinderplugin
```
When the PVC is created, a Cinder volume is created correspondingly:
```
# kubectl get pvc
NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
myvol   Bound    pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad   1Gi        RWO            csi-sc-cinderplugin   3s
```
In OpenStack, the volume name will match the generated name of the Kubernetes persistent volume. In this example it is: pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad
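To cross-check from the OpenStack side, the volume can be looked up by that name (example name taken from the output above):

```bash
openstack volume list --name pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad
```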
Now we can create a pod with the PVC:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          hostPort: 8081
          protocol: TCP
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myvol
```
When the pod is running, the volume will be attached to the pod. If we go back to OpenStack, we can see that the Cinder volume is mounted to the worker node where the pod runs.
```
# openstack volume show 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f
+--------------------------------+--------------------------------------------------------------------+
| Field                          | Value                                                              |
+--------------------------------+--------------------------------------------------------------------+
| attachments                    | [{u'server_id': u'1c5e1439-edfa-40ed-91fe-2a0e12bc7eb4',           |
|                                | u'attachment_id': u'11a15b30-5c24-41d4-86d9-d92823983a32',         |
|                                | u'attached_at': u'2019-07-24T05:02:34.000000',                     |
|                                | u'host_name': u'compute-6',                                        |
|                                | u'volume_id': u'6b5f3296-b0eb-40cd-bd4f-2067a0d6287f',             |
|                                | u'device': u'/dev/vdb',                                            |
|                                | u'id': u'6b5f3296-b0eb-40cd-bd4f-2067a0d6287f'}]                   |
| availability_zone              | nova                                                               |
| bootable                       | false                                                              |
| consistencygroup_id            | None                                                               |
| created_at                     | 2019-07-24T05:02:18.000000                                         |
| description                    | Created by OpenStack Cinder CSI driver                             |
| encrypted                      | False                                                              |
| id                             | 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f                               |
| migration_status               | None                                                               |
| multiattach                    | False                                                              |
| name                           | pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad                           |
| os-vol-host-attr:host          | rbd:volumes@rbd#rbd                                                |
| os-vol-mig-status-attr:migstat | None                                                               |
| os-vol-mig-status-attr:name_id | None                                                               |
| os-vol-tenant-attr:tenant_id   | 14ba698c0aec4fd6b7dc8c310f664009                                   |
| properties                     | attached_mode='rw', cinder.csi.openstack.org/cluster='kubernetes'  |
| replication_status             | None                                                               |
| size                           | 1                                                                  |
| snapshot_id                    | None                                                               |
| source_volid                   | None                                                               |
| status                         | in-use                                                             |
| type                           | rbd                                                                |
| updated_at                     | 2019-07-24T05:02:35.000000                                         |
| user_id                        | 5f6a7a06f4e3456c890130d56babf591                                   |
+--------------------------------+--------------------------------------------------------------------+
```
Summary
In this walk-through, we deployed a Kubernetes cluster on OpenStack VMs and integrated it with OpenStack using the external OpenStack cloud provider. Then, on this Kubernetes cluster, we deployed the Cinder CSI plugin, which can create Cinder volumes and expose them in Kubernetes as persistent volumes.
Source: https://kubernetes.io/blog/2020/02/07/deploying-external-openstack-cloud-provider-with-kubeadm/