Deploying Kubernetes v1.30 with kubeadm

💡 kubeadm is the official Kubernetes tool for quickly deploying a Kubernetes cluster. With each Kubernetes release, kubeadm may adjust some of its cluster-configuration practices.

Preparation

System configuration

Before installing, complete the following preparation.

The three Linux hosts are as follows:

Set the hostname on each host: hostnamectl set-hostname <hostname>

Hostname   OS           IP
master     CentOS 7.9   172.16.0.52
node1      CentOS 7.9   172.16.0.16
node2      CentOS 7.9   172.16.0.17
[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.0.52 master
172.16.0.16 node1
172.16.0.17 node2

Complete the following system configuration on every host.

If SELinux is enabled, disable it with the following commands (setenforce takes effect immediately; editing /etc/selinux/config makes the change persistent):

setenforce 0

vim /etc/selinux/config
SELINUX=disabled

If a firewall is enabled on the hosts, open the ports required by the Kubernetes components (see the "Ports and Protocols" documentation), or simply disable the host firewall:

[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld

Disable swap (to avoid the performance degradation caused by memory being swapped to disk):

[root@master ~]# swapoff -a
[root@master ~]# sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
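
To confirm that swap is fully disabled, check that swapon reports no devices and that free shows zero swap:

swapon -s
free -m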

Create /etc/modules-load.d/containerd.conf so that the kernel modules required by the container runtime are loaded automatically at boot:

[root@node2 ~]# cat << EOF > /etc/modules-load.d/containerd.conf
> overlay
> br_netfilter
> EOF

Run the following commands to load the modules immediately:

[root@node2 ~]# modprobe overlay
[root@node2 ~]# modprobe br_netfilter

Create the /etc/sysctl.d/99-kubernetes-cri.conf configuration file:

[root@master ~]# cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> user.max_user_namespaces=28633
> EOF

Apply the settings:

[root@master ~]# sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf

💡 In the file name /etc/sysctl.d/99-kubernetes-cri.conf, the "99" indicates the file's ordering. sysctl is the Linux tool for configuring kernel parameters, which it sets by writing to files under /proc/sys/. Configuration files placed in /etc/sysctl.d/ are loaded automatically at boot, one by one in alphabetical order of their file names. The numeric prefix controls the loading order, with a smaller number meaning higher priority (loaded earlier).
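
To verify that the parameters took effect, read them back:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward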

Prerequisites for enabling IPVS for kube-proxy

Since IPVS is already part of the mainline kernel, enabling IPVS mode for kube-proxy only requires loading the following kernel modules.

Create /etc/modules-load.d/ipvs.conf so that the required modules are loaded automatically after a node reboot:

[root@master ~]# cat > /etc/modules-load.d/ipvs.conf <<EOF
> ip_vs
> ip_vs_rr
> ip_vs_wrr
> ip_vs_sh
> EOF

Load the modules immediately:

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh

Use lsmod | grep -e ip_vs -e nf_conntrack to confirm that the required kernel modules are loaded:

[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  1 ip_vs
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack

Also make sure the ipset package is installed on every node. Installing the ipvsadm management tool is recommended as well, so you can inspect the IPVS proxy rules:

[root@master ~]# yum -y install ipset ipvsadm

If these prerequisites are not met, kube-proxy will fall back to iptables mode even when its configuration enables IPVS mode.
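
After the cluster is up, one way to confirm which proxy mode kube-proxy actually selected is to query its metrics endpoint on a node (127.0.0.1:10249 is the default metrics address; adjust if you changed it) or to list the IPVS rules with ipvsadm:

curl 127.0.0.1:10249/proxyMode
ipvsadm -Ln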

Deploying the container runtime containerd

Install the containerd container runtime on every node.

Download the containerd binary release. Note that the cri-containerd-(cni-)-<VERSION>-<OS>-<ARCH>.tar.gz packages have been deprecated since containerd 1.6, do not work correctly on some Linux distributions, and will be removed in containerd 2.0. Here we download the containerd-<VERSION>-<OS>-<ARCH>.tar.gz release and install runc and the CNI plugins separately later:

[root@master ~]# wget https://github.com/containerd/containerd/releases/download/v1.7.11/containerd-1.7.11-linux-amd64.tar.gz

Extract it into /usr/local:

[root@master ~]# tar -zxvf containerd-1.7.11-linux-amd64.tar.gz -C /usr/local/

Next, download and install runc separately from its GitHub releases. The binary is statically built and should work on any Linux distribution:

[root@master ~]# wget https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.amd64
[root@master ~]# install -m 755 runc.amd64 /usr/local/sbin/runc
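
Verify that runc is installed and reachable on the PATH:

runc --version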

Generate containerd's default configuration file:

[root@master ~]# mkdir -p /etc/containerd
[root@master ~]# containerd config default > /etc/containerd/config.toml

According to the "Container runtimes" documentation, on distributions that use systemd as the init system, using systemd as the container cgroup driver keeps nodes more stable under resource pressure. Therefore, configure containerd on every node to use the systemd cgroup driver.

Edit the generated configuration file /etc/containerd/config.toml:

[root@master ~]# vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Also change the sandbox image in /etc/containerd/config.toml:

[plugins."io.containerd.grpc.v1.cri"]
  ...
  # sandbox_image = "registry.k8s.io/pause:3.8"
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

To manage containerd with systemd, also download the containerd.service unit file from https://raw.githubusercontent.com/containerd/containerd/main/containerd.service and place it at /etc/systemd/system/containerd.service:

cat << EOF > /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

Enable containerd to start at boot, start it now, and check its status:

systemctl daemon-reload
systemctl enable containerd --now 
systemctl status containerd

Download and install the crictl tool:

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.29.0/crictl-v1.29.0-linux-amd64.tar.gz
tar -zxvf crictl-v1.29.0-linux-amd64.tar.gz
install -m 755 crictl /usr/local/bin/crictl

Test with crictl and make sure it prints version information without any errors:

[root@master ~]# crictl --runtime-endpoint=unix:///run/containerd/containerd.sock  version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.7.11
RuntimeApiVersion:  v1

Installing the CNI plugins

CNI (Container Network Interface) is a standard design and library that makes it easier to configure container networking when containers are created or destroyed. This step mainly installs the dependencies required by containerd's client tool nerdctl.

There are two client tools, crictl and nerdctl; nerdctl is recommended.

[root@master ~]# wget https://github.com/containernetworking/plugins/releases/download/v1.4.1/cni-plugins-linux-amd64-v1.4.1.tgz
[root@master ~]# mkdir -p /opt/cni/bin
[root@master ~]# tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.4.1.tgz
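
Confirm the plugin binaries were extracted:

ls /opt/cni/bin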

Installing nerdctl

Download the release from https://github.com/containerd/nerdctl/releases and install the binary:

[root@master ~]# tar xf nerdctl-1.7.6-linux-amd64.tar.gz
[root@master ~]# cp nerdctl /usr/local/bin/
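
A quick check that nerdctl can talk to containerd (assuming the default containerd socket) is to print its version and list the images in the k8s.io namespace used by Kubernetes:

nerdctl version
nerdctl --namespace k8s.io images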

Deploying Kubernetes with kubeadm

Installing kubeadm and kubelet

Install kubeadm and kubelet on every node.

Add the Kubernetes yum repository. The exclude parameter in the repository definition ensures that Kubernetes-related packages are not upgraded by yum update, because upgrading Kubernetes must follow a specific procedure. Note that this repository only contains packages for Kubernetes 1.30; for other minor versions, change the Kubernetes minor version in the URL to match the one you want (and check that the installation documentation you are reading matches the version you plan to install).

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

Install kubelet, kubeadm, and kubectl, and enable kubelet so it starts automatically at boot:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
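
Confirm the installed versions; all three should report 1.30.x:

kubeadm version -o short
kubelet --version
kubectl version --client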

Configuring crictl

# Run on all nodes
cat <<EOF|tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF

[root@master ~]# crictl ps -a
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD

Initializing the cluster (on the master node)

Dump the default init configuration to a YAML file:

[root@master ~]# kubeadm config print init-defaults >  kubeadm-config.yaml

Edit the default configuration file:

[root@master ~]# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.0.52  # address of the master node
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master   # name of the master node
  taints:   # taint the control-plane node so that ordinary pods are preferably not scheduled on it
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.30.0
networking:
  podSubnet: 10.244.0.0/16
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

List the required images (on the master node):

[root@master ~]# kubeadm config images list --config kubeadm-config.yaml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.30.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.30.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.30.0
registry.aliyuncs.com/google_containers/coredns:v1.11.1
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.12-0

Pull the images in advance (on the master node):

[root@master ~]# kubeadm config images pull --config kubeadm-config.yaml

Initialize the cluster (on the master node):

[root@master ~]# kubeadm init --config kubeadm-config.yaml

💡 Save this output; the join command at the end is needed later when the worker nodes join the cluster.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a
regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster. Run "kubectl apply
-f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following
on each as root:

kubeadm join 172.16.0.52:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:0101adf324789ddd352b3be9da0eaf8940e577caa029d687cf4956a51a4c4ea4

Configure kubectl access as instructed (on the master node):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
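
A quick sanity check that kubectl can reach the API server:

kubectl cluster-info
kubectl get nodes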

Joining the worker nodes

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
kubeadm join 172.16.0.52:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:0101adf324789ddd352b3be9da0eaf8940e577caa029d687cf4956a51a4c4ea4
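
If the bootstrap token has expired (the default TTL is 24h), generate a fresh join command on the master at any time:

kubeadm token create --print-join-command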

Accessing the cluster

[root@master ~]# kubectl get node
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   8m52s   v1.30.2
node1    NotReady   <none>          5s      v1.30.2
node2    NotReady   <none>          4m15s   v1.30.2

You will notice that the nodes are in the NotReady state. This is because no network plugin has been installed yet, so the cluster's internal networking is not yet functional.

Installing the Flannel network plugin

Kubernetes defines the CNI standard and there are many network plugins to choose from; here we use the most common one, Flannel. Its documentation can be found in its GitHub repository.

[root@master ~]# cat kube-flannel.yml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.25.4
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.4.1-flannel1
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.25.4
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock

Because we set podSubnet to 10.244.0.0/16 in the kubeadm configuration, which matches Flannel's default network, there is no need to change the subnet in the YAML.

Then install Flannel:

💡 Because Docker Hub is blocked and the Flannel images cannot be pulled directly, pull them on a machine with working internet access, push them to a Harbor registry, and then change the images in the YAML so they are pulled from Harbor instead.

[root@master ~]# cat kube-flannel.yml | grep image
        image: docker.io/flannel/flannel:v0.25.4
        image: docker.io/flannel/flannel-cni-plugin:v1.4.1-flannel1
        image: docker.io/flannel/flannel:v0.25.4
Replace the images:
docker.io/flannel/flannel:v0.25.4 -> harbor.aihuashen.com/k8s/flannel/flannel:v0.25.4
docker.io/flannel/flannel-cni-plugin:v1.4.1-flannel1 -> harbor.aihuashen.com/k8s/flannel/flannel-cni-plugin:v1.4.1-flannel1
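
A minimal way to perform the replacement with sed, assuming the Harbor project path shown above:

sed -i 's#docker.io/flannel#harbor.aihuashen.com/k8s/flannel#g' kube-flannel.yml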

[root@master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

[root@master ~]# kubectl get pod -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-6b78j   1/1     Running   0          3m12s
kube-flannel-ds-7lzgc   1/1     Running   0          3m12s
kube-flannel-ds-xgq58   1/1     Running   0          3m12s

Check the node status:

[root@master ~]# kubectl get node
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   50m   v1.30.2
node1    Ready    <none>          41m   v1.30.2
node2    Ready    <none>          45m   v1.30.2