Installing with kubeadm

What is kubeadm

I previously covered installing and using minikube, which is very simple and easy to use; at the time it was for learning and experimenting with k8s on a single machine. In a production cluster environment, however, minikube is of no use.

Installing and deploying a k8s cluster is complex and calls for specialist knowledge. To simplify this, the k8s community produced a tool dedicated to installing k8s on a cluster: kubeadm.

kubeadm is similar to minikube in that it packages the k8s components as images and containers, but its goal is cluster deployment.

Preparation

Since this is a cluster, at least two machines are needed. You can set up local virtual machines or buy cloud servers; for new users, cloud servers are not expensive. I bought two Tencent Cloud lightweight application servers. When signing up, note that the cloud servers must be in the same region and availability zone.

  1. Change the hostnames

    # On the master node, change the name to master; on the worker node, change it to worker01
    sudo vi /etc/hostname
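
    Editing /etc/hostname only takes effect after a reboot. On systemd-based distributions such as Ubuntu, you can also apply the name immediately with hostnamectl (a small sketch; run the matching command on each node):

    # on the master node
    sudo hostnamectl set-hostname master
    # on the worker node
    sudo hostnamectl set-hostname worker01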
    
  2. Install docker. My operating system is Ubuntu; for how to install docker, see my earlier article: Ubuntu上安装Docker_雨打夏夜的博客-CSDN博客

    Once docker is installed, configure it to use systemd as the cgroup driver, then restart docker:

    cat <<EOF | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF
    
    sudo systemctl enable docker
    sudo systemctl daemon-reload
    sudo systemctl restart docker
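
    To confirm the new cgroup driver is active, you can query docker (an optional sanity check):

    docker info | grep -i "cgroup driver"
    # expected: Cgroup Driver: systemd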
    
  3. Adjust iptables settings

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF
    
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    # setting ip_forward here is better than modifying /etc/sysctl.conf
    net.ipv4.ip_forward = 1
    EOF
    
    sudo sysctl --system
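
    The modules-load.d file only loads br_netfilter at the next boot. To load the module right away and confirm the settings took effect, you can run:

    sudo modprobe br_netfilter
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward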
    
  4. Disable the Linux swap partition; disabling it permanently is recommended

    sudo swapoff -a
    sudo sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
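
    To verify that swap is off, check that the Swap line shows all zeros (swapon --show should print nothing):

    free -h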
    

Installing kubeadm

  1. Switch the package sources; run each of the following in turn

    sudo apt install -y apt-transport-https ca-certificates curl
    
    curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
    
    cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
    deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
    EOF
    
    sudo apt update
    
  2. Install kubeadm, kubelet, and kubectl at a pinned version:

    sudo apt install -y kubeadm=1.23.3-00 kubelet=1.23.3-00 kubectl=1.23.3-00
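
    The -00 suffix is the Debian package revision. To see which versions the repository offers before pinning one, you can run:

    apt-cache madison kubeadm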
    
  3. After the installation completes, verify the versions are correct

    ubuntu@master:~$ kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:24:08Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
    
    ubuntu@master:~$ kubectl version --short
    Client Version: v1.23.3
    Server Version: v1.23.3
    
  4. Hold the versions

    sudo apt-mark hold kubeadm kubelet kubectl
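
    Holding the packages keeps a routine apt upgrade from pulling in a version the cluster was not initialized with. The hold can be confirmed with:

    apt-mark showhold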
    

Downloading the k8s Component Images

  1. Download the component images from the Aliyun registry using the script below. The file name is arbitrary, e.g. kubeadm.sh; put it in the current working directory.

    #!/bin/bash
    # Pull the k8s component images from the Aliyun mirror, then re-tag
    # them with the official k8s.gcr.io names that kubeadm expects.
    repo=registry.aliyuncs.com/google_containers

    for name in `kubeadm config images list --kubernetes-version v1.23.3`; do
        # strip the registry prefixes to get the bare image name
        src_name=${name#k8s.gcr.io/}
        src_name=${src_name#coredns/}

        # pull from the mirror, tag with the official name, drop the mirror tag
        docker pull $repo/$src_name
        docker tag $repo/$src_name $name
        docker rmi $repo/$src_name
    done
    
  2. Make the script executable

    chmod +x kubeadm.sh
    
  3. Run the script to download the images

    ./kubeadm.sh
    
  4. When the download finishes, check the images:

    ubuntu@master:~$ docker images
    REPOSITORY                                       TAG       IMAGE ID       CREATED         SIZE
    k8s.gcr.io/kube-apiserver                        v1.23.3   f40be0088a83   7 months ago    135MB
    k8s.gcr.io/kube-controller-manager               v1.23.3   b07520cd7ab7   7 months ago    125MB
    k8s.gcr.io/kube-scheduler                        v1.23.3   99a3486be4f2   7 months ago    53.5MB
    k8s.gcr.io/kube-proxy                            v1.23.3   9b7cc9982109   7 months ago    112MB
    k8s.gcr.io/etcd                                  3.5.1-0   25f8c7f3da61   10 months ago   293MB
    k8s.gcr.io/coredns/coredns                       v1.8.6    a4ca41631cc7   11 months ago   46.8MB
    k8s.gcr.io/pause                                 3.6       6270bb605e12   12 months ago   683kB
    

    Note: the image versions matter. After downloading, compare your IMAGE IDs with those above to confirm they are the same images; otherwise the installation may fail.

Installing the master Node

  1. Run the following command

    sudo kubeadm init \
        --pod-network-cidr=10.10.0.0/16 \
        --apiserver-advertise-address=10.0.4.12 \
        --kubernetes-version=v1.23.3
    

    Note: the value of apiserver-advertise-address should be your own machine's LAN IP, or the cloud server's private IP. 10.0.4.12 is my cloud server's private IP. If you are unsure of the address, you can look it up as shown below.
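
    Two common ways to list the machine's IPv4 addresses:

    hostname -I         # all addresses assigned to the host
    ip -4 addr show     # per-interface detail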

  2. When the installation finishes, you will see the following message; run the commands it lists

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  3. Another important part of the output is the command below; other nodes use it to join the cluster:

    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 10.0.4.12:6443 --token ka3rxd.i2qidqbz6n4rukem \
            --discovery-token-ca-cert-hash sha256:9ce266be50b57614e2a4b67db01eaf24fede7a3c5fdf6bb8b18a99348baed671 
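
    The join token expires after 24 hours by default. If it expires, or the output is lost, a fresh join command can be generated on the master at any time:

    sudo kubeadm token create --print-join-command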
    
  4. After the installation, check the version and node status with kubectl version and kubectl get node

    ubuntu@master:~$ kubectl version --short
    Client Version: v1.23.3
    Server Version: v1.23.3
    ubuntu@master:~$ kubectl get node
    NAME     STATUS     ROLES                  AGE     VERSION
    master   NotReady   control-plane,master   2m42s   v1.23.3
    

    The master node's status is "NotReady" because the network plugin is still missing and the cluster's internal network is not yet working.

The Flannel Network Plugin

  1. The Flannel network plugin lives in its Git repository: flannel/kube-flannel.yml at master · flannel-io/flannel · GitHub. The corresponding yml file is shown below:

    Note: I changed the "net-conf.json" field in the yml file, setting Network to the address range given to kubeadm via --pod-network-cidr.

    net-conf.json: |
     {
       "Network": "10.10.0.0/16",
       "Backend": {
         "Type": "vxlan"
       }
     }
    
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.10.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
  2. Put the yml file above in the current directory and run the following command:

    kubectl apply -f kube-flannel.yml
    

    After the command completes, wait a few minutes for the images to download and the containers to start. You can watch the plugin pods come up, as shown below.
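
    A quick way to follow the progress (-w watches for updates; Ctrl-C to stop):

    kubectl get pods -n kube-flannel -w
    # wait until the kube-flannel-ds pod on each node is Running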

  3. A few minutes later, check the node status again:

    ubuntu@master:~$ kubectl get node
    NAME     STATUS   ROLES                  AGE   VERSION
    master   Ready    control-plane,master   54m   v1.23.3
    

Installing the worker Node

  1. Following the "Downloading the k8s Component Images" section, download the images on the worker node first to speed up the installation

  2. Run the join command printed at the end of the master installation

    ubuntu@worker:~$ sudo kubeadm join 10.0.4.12:6443 --token ka3rxd.i2qidqbz6n4rukem         --discovery-token-ca-cert-hash sha256:9ce266be50b57614e2a4b67db01eaf24fede7a3c5fdf6bb8b18a99348baed671
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    W0918 16:18:01.313202   19199 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    

    Note: for the command above to work, check the master host's firewall settings and open port 6443. On your own virtual machines you can simply turn the firewall off; on cloud servers, allow port 6443 in the firewall/security group. You can test the connection first, as shown below.
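
    A minimal reachability check from the worker, assuming nc (netcat) is installed; substitute your master's internal IP:

    nc -vz 10.0.4.12 6443
    # "succeeded" means the apiserver port is reachable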

  3. On the master node, run kubectl get node:

    ubuntu@master:~$ kubectl get node
    NAME     STATUS   ROLES                  AGE     VERSION
    master   Ready    control-plane,master   64m     v1.23.3
    worker   Ready    <none>                 4m23s   v1.23.3
    

Testing

  1. Run nginx as a test:

    ubuntu@master:~$ kubectl run ngx --image=nginx:alpine
    pod/ngx created
    
  2. Check the pod; nginx is running on the worker node. A quick cross-node connectivity check follows below.

    ubuntu@master:~$ kubectl get pod -o wide
    NAME   READY   STATUS    RESTARTS   AGE   IP          NODE     NOMINATED NODE   READINESS GATES
    ngx    1/1     Running   0          18s   10.10.1.2   worker   <none>           <none>
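
    To confirm that the pod network works across nodes, the pod IP from the output above can be curled directly from the master (a small sketch; use the IP your cluster assigned):

    curl 10.10.1.2    # should return the nginx welcome page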
    

The installation is complete.
