Installing Kubernetes 1.24+ with kubeadm and cri-dockerd

Last edited: 2024/3/26

Applies to Kubernetes versions after 1.24.

Single-node setup

  1. Check whether kubectl, kubelet, and kubeadm are already installed by running each command directly; if the shell reports that the command is not found, you can proceed:

    kubectl
    kubelet
    kubeadm
    

    If they were installed previously, first run kubeadm reset, then remove the packages with apt remove and snap remove:

    sudo kubeadm reset
    sudo apt remove kubectl kubelet kubeadm
    sudo snap remove kubectl kubelet kubeadm
    
  2. Disable the firewall

    Check the firewall status; `inactive` means it is not running:

    sudo ufw status
    

    Keep the firewall from starting at boot; this takes effect after a reboot:

    sudo ufw disable
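
    If you would rather keep ufw enabled, the required control-plane ports can be opened instead. A minimal sketch for a single-node cluster using flannel over VXLAN (port list per the Kubernetes documentation):

    sudo ufw allow 6443/tcp        # kube-apiserver
    sudo ufw allow 2379:2380/tcp   # etcd
    sudo ufw allow 10250/tcp       # kubelet API
    sudo ufw allow 10257/tcp       # kube-controller-manager
    sudo ufw allow 10259/tcp       # kube-scheduler
    sudo ufw allow 8472/udp        # flannel VXLAN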
    
  3. Make sure Docker is installed and that its cgroup driver is configured correctly, for example:

    Configure Docker:

    sudo mkdir -p /etc/docker
    sudo vi /etc/docker/daemon.json
    
    # Full reference example (kept commented out), including an optional NVIDIA container runtime:
    #{
    #  "registry-mirrors": ["https://2m9jza5s.mirror.aliyuncs.com"],
    #  "insecure-registries": ["localhost:32000"],
    #  "exec-opts": [ "native.cgroupdriver=systemd" ],
    #  "data-root": "/data/wzh/docker/image",
    #  "default-runtime": "nvidia",
    #    "runtimes": {
    #        "nvidia": {
    #            "path": "/usr/bin/nvidia-container-runtime",
    #            "runtimeArgs": []
    #        }
    #    }
    #}
    {
      "registry-mirrors": ["https://2m9jza5s.mirror.aliyuncs.com"],
      "insecure-registries": ["localhost:32000"],
      "exec-opts": [ "native.cgroupdriver=systemd" ],
      "data-root": "/data/wzh/docker/image"
    }
    

    "https://???.mirror.aliyuncs.com"配成自己的,见链接

    sudo systemctl restart docker
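
    Verify that Docker picked up the systemd cgroup driver (the expected output is shown as a comment):

    docker info | grep -i "cgroup driver"
    # Cgroup Driver: systemd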
    
  4. Install cri-dockerd
    The following applies to versions after 1.24.

    Go to https://github.com/Mirantis/cri-dockerd/releases

    Download the cri-dockerd package that matches your distribution.

    My machine runs Ubuntu 20.04 (focal), so I downloaded cri-dockerd_0.3.12.3-0.ubuntu-focal_amd64.deb.

    Then install it with apt; note the leading ./ so apt treats it as a local file:

    sudo apt install ./cri-dockerd_0.3.12.3-0.ubuntu-focal_amd64.deb
    

    Then enable and start cri-dockerd:

    sudo systemctl daemon-reload
    sudo systemctl enable cri-docker.socket
    sudo systemctl start cri-docker.socket cri-docker
    cri-dockerd --version
    ls -al /var/run/cri-dockerd.sock
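
    Optionally, once crictl is available (it is pulled in as a dependency of the kubeadm packages installed in the next step), you can check that the CRI endpoint responds. A sketch using the socket path shown above:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version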
    
  5. Install kubectl, kubelet, kubeadm

    # also check the kubernetes-cni package
    sudo apt install -y kubelet=1.28.2-00 kubectl=1.28.2-00 kubeadm=1.28.2-00
    # apt list kubernetes-cni -a   # lists the available versions
    # sudo journalctl -u kubelet   # inspect the kubelet logs
    # systemctl status kubelet     # check the kubelet status
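
    If the Kubernetes apt source is not configured yet, the pinned 1.28.2-00 packages above come from the legacy repositories. A minimal sketch using the Aliyun mirror (the exact repo and key handling depend on your environment; newer setups use pkgs.k8s.io instead):

    curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
    echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update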
    
  6. Disable swap

    sudo vi /etc/default/kubelet
    # add the following line
    KUBELET_EXTRA_ARGS="--fail-swap-on=false"
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet

    sudo vi /etc/fstab
    Comment out the line containing `/swap.img`.
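
    Commenting out the fstab entry only takes effect on the next boot; to turn swap off immediately as well:

    sudo swapoff -a
    free -h   # the Swap line should now show 0B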
    
  7. If something goes wrong, reset first:

    sudo kubeadm reset
    rm -rf ~/.kube
    sudo rm -rf /etc/cni/net.d
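
    With both the containerd and cri-dockerd sockets present, kubeadm may need to be told which runtime to reset. A sketch passing the socket from step 4 explicitly:

    sudo kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock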
    
  8. Configure containerd

    sudo vi /etc/containerd/config.toml
    # if you see this line:
    disabled_plugins = ["cri"]

    # comment it out with # or remove "cri" from the list:
    #disabled_plugins = ["cri"]

    disabled_plugins = []

    # restart the container runtime
    sudo systemctl restart containerd
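
    The same edit as a one-liner, assuming the default config.toml still contains a disabled_plugins line:

    sudo sed -i 's/^disabled_plugins.*/disabled_plugins = []/' /etc/containerd/config.toml
    sudo systemctl restart containerd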
    
  9. Configure the pause image source
    Stop the cri-docker service: sudo systemctl stop cri-docker

    Edit the unit file: sudo vi /usr/lib/systemd/system/cri-docker.service

    Find the ExecStart line and append --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 to it:

    ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
    

    Reload the unit files: sudo systemctl daemon-reload

    Start the cri-docker service: sudo systemctl start cri-docker
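
    Editing the unit file under /usr/lib is lost on package upgrades; an alternative sketch is a systemd drop-in (the empty ExecStart= line is required to clear the original value before overriding it):

    sudo systemctl edit cri-docker
    # in the editor that opens, add:
    # [Service]
    # ExecStart=
    # ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
    sudo systemctl restart cri-docker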

  10. Initialize with kubeadm. The --pod-network-cidr must match the Network field in flannel's net-conf.json below (10.244.0.0/16):

    sudo kubeadm init --kubernetes-version=v1.28.2 --apiserver-advertise-address=0.0.0.0 --image-repository registry.aliyuncs.com/google_containers --ignore-preflight-errors=Swap --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock
    
  11. If init fails, use the following to debug:

    sudo journalctl -xeu kubelet
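
    If the failure is with pulling control-plane images, they can be pre-pulled and the init retried. A sketch reusing the flags from the init command above:

    sudo kubeadm config images pull --kubernetes-version v1.28.2 --image-repository registry.aliyuncs.com/google_containers --cri-socket unix:///var/run/cri-dockerd.sock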
    
  12. When init succeeds, output like the following confirms it:

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 10.181.8.94:6443 --token 0desqq.a4oq0rwqyursqah9 \
            --discovery-token-ca-cert-hash sha256:7e181cd0f0a435adf7746b17b09b10dba5c9d83936e92fffdc1e67cbf4a9cc06
    

    Set up kubectl access:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  13. After init succeeds, check with kubectl:

    kubectl get pod -A

At this point two pods (the coredns ones) are still not up, because no pod network has been deployed yet.

  1. Configure the pod network

Create a file named flannel.yaml with the following content:

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
       #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: lizhenliang/flannel:v0.11.0-amd64
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.11.0-amd64
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

After creating the file, run kubectl apply -f flannel.yaml. The command returns quickly, but the pods take a little while to start; after a short wait you should see something like:

wzh@chen:~$ kubectl get pod -A
NAMESPACE      NAME                           READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-xqpqb          1/1     Running   0          11h
kube-system    coredns-7f6cbbb7b8-w5lp8       1/1     Running   0          12h
kube-system    coredns-7f6cbbb7b8-xmps6       1/1     Running   0          12h
kube-system    etcd-chen                      1/1     Running   0          12h
kube-system    kube-apiserver-chen            1/1     Running   0          12h
kube-system    kube-controller-manager-chen   1/1     Running   0          12h
kube-system    kube-proxy-c5tks               1/1     Running   0          12h
kube-system    kube-scheduler-chen            1/1     Running   0          12h
wzh@chen:~$ kubectl get nodes
NAME   STATUS   ROLES                  AGE   VERSION
chen   Ready    control-plane,master   13h   v1.28.2
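
Instead of polling kubectl get pod manually, you can also wait for the flannel pod to become Ready. A sketch using the app=flannel label from the manifest above:

kubectl -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=300s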

Now that the node is Ready, the taints on the master can be removed so that it also schedules regular workloads (append "-" to a taint key to delete it). The commands below remove the taints; you can check whether a node still carries taints with kubectl describe. Note that on 1.24+ kubeadm applies node-role.kubernetes.io/control-plane rather than the older node-role.kubernetes.io/master:

$ kubectl taint nodes --all node-role.kubernetes.io/control-plane-
$ kubectl taint nodes --all node-role.kubernetes.io/master-
$ kubectl taint nodes --all foo-   # example: remove a custom taint named foo, if present
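
To confirm the taints are gone (the node name here comes from the kubectl get nodes output above):

kubectl describe node chen | grep Taints
# Taints:             <none>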