Building Kubernetes 1.27.4 from Scratch on CentOS 7 with Containerd as the Container Runtime

I. Installing the Container Runtime

1. Upgrade the Linux kernel (the reason is explained in Section IV)

wget http://linux-mirrors.fnal.gov/linux/elrepo/archive/kernel/el7/x86_64/RPMS/kernel-lt-5.4.241-1.el7.elrepo.x86_64.rpm
wget http://linux-mirrors.fnal.gov/linux/elrepo/archive/kernel/el7/x86_64/RPMS/kernel-lt-devel-5.4.241-1.el7.elrepo.x86_64.rpm
yum install kernel-lt-5.4.241-1.el7.elrepo.x86_64.rpm kernel-lt-devel-5.4.241-1.el7.elrepo.x86_64.rpm
grub2-set-default  0
init 6
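
After the machine comes back up, it is worth confirming that the new kernel is actually in use (the exact version string depends on the package installed above):

uname -r
# expected to print something like 5.4.241-1.el7.elrepo.x86_64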

2. Write overlay and br_netfilter into /etc/modules-load.d/k8s.conf and load the corresponding kernel modules. This ensures the modules are loaded automatically at boot, satisfying the Kubernetes cluster's requirements for the container filesystem and networking.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

Load the modules manually so they are available in the current session:


sudo modprobe overlay
sudo modprobe br_netfilter
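
To confirm that both modules are loaded, a quick check:

lsmod | grep -e overlay -e br_netfilter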

3. Configure kernel parameters

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system
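
The effective values can be verified right away; all three parameters should report 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward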

The parameters have the following effects:

net.bridge.bridge-nf-call-iptables = 1: enables the hook between the Linux bridge module and iptables so that iptables rules are applied to bridged traffic. Kubernetes network plugins and the service proxy (kube-proxy) typically rely on iptables rules for traffic forwarding and network address translation (NAT); enabling this parameter ensures those rules take effect on bridged traffic.

net.bridge.bridge-nf-call-ip6tables = 1: the IPv6 counterpart of bridge-nf-call-iptables, applying ip6tables rules to bridged IPv6 traffic. Enable it if your cluster uses IPv6 addresses or IPv6 networking features.

net.ipv4.ip_forward = 1: enables IP forwarding on the Linux host. In a Kubernetes cluster, traffic must be forwarded by the hosts so that containers on different nodes can communicate and services can be reached across nodes; this parameter allows the host to forward received traffic to the target node or container.

4. Install Containerd

There are three main ways to install Containerd: as a standalone binary, from the Docker repository, or by building from source. To save time, this article installs it from the Docker repository.

The yum-config-manager command manages repository configuration for the YUM package manager and provides a convenient way to add, enable, disable, remove, and list YUM repositories. Install it and add the Docker repository with the following commands:

 sudo yum install -y yum-utils
 sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install Containerd:

yum install containerd.io

Edit the Containerd configuration file. The sandbox_image value can be changed to suit your environment; with the official registry, cluster initialization may fail due to network issues.
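
If /etc/containerd/config.toml does not already contain the full default configuration (the containerd.io package ships with a minimal one), it can be regenerated first; this is an optional preparatory step, not part of the original procedure:

containerd config default | sudo tee /etc/containerd/config.toml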

[root@k8s-master-1 ~]# vim /etc/containerd/config.toml
enabled_plugins = ["cri"]

disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    endpoint = "unix:///var/run/containerd/containerd.sock"
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "m.daocloud.io/registry.k8s.io/pause:3.9"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = false

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.internal.v1.tracing"]
    sampling_ratio = 1.0
    service_name = "containerd"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.service.v1.tasks-service"]
    rdt_config_file = ""

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""
    upperdir_label = false

Start containerd and enable it to start on boot:

systemctl enable --now containerd.service
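
A quick check that the service is up:

systemctl is-active containerd
# active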

Configure the endpoints that the crictl tool uses to communicate with the Containerd runtime and image services:

crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock --set image-endpoint=unix:///run/containerd/containerd.sock
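
To confirm that crictl can actually reach containerd over the configured socket (version numbers will differ in your environment):

crictl version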

That completes the installation of the container runtime.

II. Cluster Initialization

1. Configure the Aliyun Kubernetes YUM repository and install the required tools and components

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0

yum install -y --nogpgcheck kubelet-1.27.4 kubeadm-1.27.4 kubectl-1.27.4
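
An optional sanity check that the expected versions were installed:

kubeadm version -o short
kubectl version --client
kubelet --version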

2. Disable swap

sudo swapoff -a

This command disables swap only for the current session; to disable it permanently, comment out the swap entry in /etc/fstab. If swap is left enabled, kubelet will not run properly.
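
A minimal sketch of making the change permanent by commenting out the swap entry in /etc/fstab (back the file up first; the sed pattern assumes a conventional fstab layout):

sudo cp /etc/fstab /etc/fstab.bak
sudo sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab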

3. Start kubelet

systemctl enable --now kubelet

4. Add a hosts entry (adjust the interface name ens33 below to match your actual NIC)

LOCALIP=$(ifconfig | grep -A 2 ens33 | awk 'NR==2{print $2}')
echo $LOCALIP `hostname` >> /etc/hosts
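
You can verify that the hostname now resolves to the expected address:

getent hosts $(hostname)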

5. Run the cluster initialization

kubeadm init --image-repository "m.daocloud.io/registry.k8s.io" --apiserver-advertise-address <master node IP> --kubernetes-version 1.27.4

--image-repository "m.daocloud.io/registry.k8s.io": specifies the registry from which container images are pulled. To avoid initialization failures caused by network problems, the Kubernetes component images are pulled from this domestic mirror.

--apiserver-advertise-address <master node IP>: specifies the address the Kubernetes API Server advertises.

--kubernetes-version 1.27.4: specifies the Kubernetes version to install.

Other options are described in the official documentation: kubeadm init | Kubernetes.

6. When initialization completes, the following message is printed; the next steps follow its instructions

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  /docs/concepts/cluster-administration/addons/

This completes the installation and configuration of the Master node. However, the Master node carries a taint: it normally runs the Kubernetes control-plane components such as the API Server, Scheduler, and Controller Manager. These components are critical to managing and controlling the cluster and need high reliability and dedicated resources, so the taint keeps non-critical Pods off the Master node, avoiding resource contention and potential interference. For Pods to be scheduled normally, Worker nodes must also be configured; they need the same steps as the Master (except kubeadm init), i.e. everything up to step 4.
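
For reference, the control-plane taint can be inspected as follows (the node name matches this article's example; removing the taint is only advisable on single-node test clusters):

kubectl describe node k8s-master-1 | grep Taints
# Taints: node-role.kubernetes.io/control-plane:NoSchedule
# To allow normal Pods on the control-plane node (single-node test clusters only):
# kubectl taint nodes k8s-master-1 node-role.kubernetes.io/control-plane:NoSchedule-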

III. Worker Node Configuration

1. Obtain the join token from the Master node. The command below generates a complete join command; run its output on the Worker node.

[root@k8s-master-1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.10.135:6443 --token 03g5qj.jcnh0irkda3lx9xw --discovery-token-ca-cert-hash sha256:64a74784d6eb6fea498926df8a0ba13a75e1e90c638ee665bfe0a13c83335e97 

2. After the join succeeds, running the following command on the Master shows the output below, confirming that the Worker node has joined the cluster.

[root@k8s-master-1 ~]# kubectl get node
NAME           STATUS     ROLES           AGE    VERSION
k8s-master-1   NotReady   control-plane   3d8h   v1.27.4
k8s-node1      NotReady   <none>          3d8h   v1.27.4

Because no network plugin is installed yet, the nodes show as "NotReady": the container network is not ready, so the nodes cannot communicate normally with each other or with the control-plane components.

IV. Network Plugin Installation

Kubernetes network plugins include:

Flannel: a lightweight network plugin suited to simple network deployments.

Calico: a network plugin that provides network policy and security features.

Weave: a network plugin that provides networking and service discovery for containers.

Cilium: a network plugin built on eBPF that offers advanced networking and security capabilities.

This article installs Cilium as the cluster's network plugin. Cilium has two requirements: Kubernetes must be configured to use CNI, and the Linux kernel must be >= 4.9.17. CentOS 7 ships with kernel 3.10, on which Cilium cannot start properly; this is why the kernel was upgraded in the very first step.

1. Download the package for your operating system. It can be obtained from the cilium-cli releases page (GitHub - cilium/cilium-cli: CLI to install, manage & troubleshoot Kubernetes clusters running Cilium); the version below is only an example.

wget https://github.com/cilium/cilium-cli/releases/download/v0.15.0/cilium-linux-amd64.tar.gz
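
The archive then needs to be unpacked and the cilium binary placed on the PATH before the install step; a typical command (following the cilium-cli release instructions) is:

sudo tar xzvf cilium-linux-amd64.tar.gz -C /usr/local/bin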

2. Install

cilium install --version 1.14.0
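
Optionally, the CLI itself can wait for the rollout to finish before you inspect the Pods in the next step:

cilium status --wait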

3. Check the installation status. When all cilium Pods are in the Running state, the installation is complete.

[root@k8s-master-1 ~]# kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS        AGE
cilium-2jhc6                           1/1     Running   5 (109m ago)    3d8h
cilium-dzgvr                           1/1     Running   2 (175m ago)    3d8h
cilium-operator-76c55fc6b6-v678x       1/1     Running   13 (17m ago)    3d8h

4. Checking the node status again now shows that all nodes are healthy (Ready)

[root@k8s-master-1 ~]# kubectl get node
NAME           STATUS     ROLES           AGE    VERSION
k8s-master-1   Ready      control-plane   3d8h   v1.27.4
k8s-node1      Ready      <none>          3d8h   v1.27.4

At this point you can happily deploy applications.

5. A simple Nginx deployment. An Nginx application can be created with the following steps.

[root@k8s-master-1 ~]# vim nginx-dp.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80


[root@k8s-master-1 ~]# kubectl apply -f nginx-dp.yaml
deployment.apps/nginx-deployment created

6. Check the status. Nginx has started normally.

[root@k8s-master-1 ~]# kubectl get pod 
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-57d84f57dc-vq75z   1/1     Running   0          29s
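
As an optional extra check, not part of the original steps, the Deployment can be reached from the master node with a port-forward:

kubectl port-forward deployment/nginx-deployment 8080:80 &
sleep 2
curl -s http://127.0.0.1:8080 | head -n 4
# should print the beginning of the default Nginx welcome page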
