Setting up a multi-master, multi-node Kubernetes cluster (k8s 1.24 or later, container runtime: containerd)

1. Server planning

Cluster role    | IP            | Hostname | Installed components
Control node 1  | 172.16.32.133 | master1  | apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico
Control node 2  | 172.16.32.134 | master2  | apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico
Worker node 1   | 172.16.32.135 | node1    | kubelet, docker, kube-proxy, calico, coredns
Worker node 2   | 172.16.32.136 | node2    | kubelet, docker, kube-proxy, calico, coredns

k8s environment plan:
podSubnet (pod network): 10.244.0.0/16
serviceSubnet (service network): 10.96.0.0/12
# Configure a single master and a single node first, then add the remaining master and node
# Minimum server spec: 2 CPUs, 4 GB RAM

The demo environment runs Kubernetes v1.26.0.

2. Server initialization

1) Configure a static IP

Run on every node.

[root@master1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"	# 由dhcp修改为static
IPADDR="172.16.32.133"	# 设置IP地址
NETMASK="255.255.255.0"	# 设置子网掩码
GATEWAY="172.16.32.2"	# 设置网关:由ip route或netstat -rn等命令查询
DNS1="172.16.32.2"	# 设置DNS
DNS2="114.114.11.114"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens32"
UUID="71ffc482-d255-4de0-a06c-0a5c036e8e96"
DEVICE="ens32"
ONBOOT="yes"

# Restart the network service
[root@master1 ~]# service network restart

[Screenshots: network configuration for master1, master2, node1, and node2]

2) Switch the yum repos to a domestic mirror (Aliyun)

Run on every node.

# Change into the repo directory
[root@localhost ~]# cd /etc/yum.repos.d/

# Back up the default repo file
[root@localhost yum.repos.d]# cp CentOS-Base.repo CentOS-Base.repo-bak

# Empty the CentOS-Base.repo file
[root@localhost yum.repos.d]# echo '' > CentOS-Base.repo

# Edit CentOS-Base.repo
[root@localhost yum.repos.d]# vi CentOS-Base.repo
# Paste in the "CentOS-Base.repo" content shown below

CentOS-Base.repo content:

[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#released updates 
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/contrib/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/contrib/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/contrib/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

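As a quick sanity check (optional, assuming the Aliyun mirrors are reachable from the servers), rebuild the yum cache and confirm the new repos are active:

# Rebuild the yum metadata cache and list the enabled repos
[root@localhost yum.repos.d]# yum clean all && yum makecache
[root@localhost yum.repos.d]# yum repolist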

3) Install base packages

Run on every node.

[root@master1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet

Wait for the installation to finish; it can take a while depending on the server's network speed and other factors.

4) Set the hostnames

Run the matching command on each node.

[root@localhost ~]# hostnamectl set-hostname master1 && bash   # on 172.16.32.133
[root@localhost ~]# hostnamectl set-hostname master2 && bash   # on 172.16.32.134
[root@localhost ~]# hostnamectl set-hostname node1 && bash     # on 172.16.32.135
[root@localhost ~]# hostnamectl set-hostname node2 && bash     # on 172.16.32.136


5) Disable SELinux

Run on every node.

# Disable temporarily
[root@master1 ~]# setenforce 0
# Disable permanently
[root@master1 ~]# vim /etc/selinux/config
# Set SELINUX to disabled
SELINUX=disabled
# The permanent change only takes effect after the server is rebooted
# Check the current state
[root@master1 ~]# getenforce
# Permissive means setenforce 0 is in effect; after a reboot it shows Disabled
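If you prefer not to edit the file by hand, the same change can be made non-interactively (a small sketch; it assumes the default SELINUX=enforcing line):

# Switch SELINUX to disabled in /etc/selinux/config and confirm
[root@master1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@master1 ~]# grep '^SELINUX=' /etc/selinux/config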


6) Configure the hosts file

Run on every node.

# Add the following entries to /etc/hosts on all four machines
172.16.32.133 master1
172.16.32.134 master2
172.16.32.135 node1
172.16.32.136 node2
# After the change, the file contains:
[root@master1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.32.133 master1
172.16.32.134 master2
172.16.32.135 node1
172.16.32.136 node2


7) Configure passwordless SSH between the four hosts

Run on every node.

# Generate a key pair and copy the public key to every node (optional)
[root@master1 ~]# ssh-keygen	# press Enter through all prompts; no passphrase needed
[root@master1 ~]# ssh-copy-id master1	# copy the public key to master1
[root@master1 ~]# ssh-copy-id master2	# copy the public key to master2
[root@master1 ~]# ssh-copy-id node1	# copy the public key to node1
[root@master1 ~]# ssh-copy-id node2	# copy the public key to node2
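To confirm passwordless login works from master1 to every host, a short loop helps (a sketch; it relies on the /etc/hosts entries configured above):

# Each iteration should print the remote hostname without asking for a password
[root@master1 ~]# for h in master1 master2 node1 node2; do ssh $h hostname; done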


8) Disable swap

Run on every node.

# Disable temporarily
[root@master1 ~]# swapoff -a

# Disable permanently: comment out the swap line in /etc/fstab
[root@master1 ~]# vim /etc/fstab
# /dev/mapper/centos-swap swap                    swap    defaults        0 0
# The fstab change only takes effect after remounting or a reboot, so it is combined with the temporary swapoff above
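The fstab edit can also be scripted, followed by a check that no swap remains active (a sketch, assuming the swap entry looks like the commented line above):

# Comment out any swap line in /etc/fstab, turn swap off, and verify it shows 0
[root@master1 ~]# sed -ri '/\sswap\s/s/^/#/' /etc/fstab
[root@master1 ~]# swapoff -a && free -m | grep -i swap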


9) Adjust kernel parameters

Run on every node.

# Load the br_netfilter module
[root@docker ~]# modprobe br_netfilter
# Set the kernel parameters (create docker.conf and write them in)
[root@docker ~]# cat > /etc/sysctl.d/docker.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the parameters
[root@docker ~]# sysctl -p /etc/sysctl.d/docker.conf
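Note that modprobe alone does not persist across reboots. To have br_netfilter loaded automatically at boot, a modules-load drop-in like the following can be added (a sketch), and the settings verified:

# Load br_netfilter automatically at boot and verify the parameters
[root@master1 ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
[root@master1 ~]# lsmod | grep br_netfilter
[root@master1 ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward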


10) Disable the firewalld firewall

Run on every node.

[root@docker ~]# systemctl stop firewalld && systemctl disable firewalld
# or: systemctl disable firewalld --now


11) Configure the Aliyun Docker CE repo

Run on every node.

[root@master1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo


12) Configure the Aliyun repo for the Kubernetes packages

Run on every node.

[root@master1 ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0


13) Configure time synchronization

Run on every node.

# Install the NTP service
[root@docker ~]# yum -y install ntp ntpdate
# Sync the time (if a local time server exists, replace cn.pool.ntp.org with its IP)
[root@docker ~]# ntpdate cn.pool.ntp.org
# Add a cron job to sync once an hour
[root@docker ~]# crontab -e
# Enter:
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
# Verify with crontab -l


14) Install containerd.io 1.6.6

Run on every node.

1. Install the pinned containerd version

# Install the pinned containerd version
[root@master1 yum.repos.d]# yum -y install containerd.io-1.6.6
# Create the config directory and generate the default containerd config
[root@master1 yum.repos.d]# mkdir -p /etc/containerd/
[root@master1 yum.repos.d]# containerd config default > /etc/containerd/config.toml

# Edit /etc/containerd/config.toml as follows:
# change SystemdCgroup = false to SystemdCgroup = true
# change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
# Enable containerd at boot and start it now
[root@master1 yum.repos.d]# systemctl enable containerd --now
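The two edits can also be applied with sed instead of an editor (a sketch that assumes the defaults generated by containerd 1.6.6, as shown in the file below):

# Switch the cgroup driver to systemd and point the pause image at the Aliyun registry
[root@master1 yum.repos.d]# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
[root@master1 yum.repos.d]# sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml
# Confirm both changes
[root@master1 yum.repos.d]# grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml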

The resulting config.toml:

disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.internal.v1.tracing"]
    sampling_ratio = 1.0
    service_name = "containerd"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.service.v1.tasks-service"]
    rdt_config_file = ""

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""
    upperdir_label = false

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

  [plugins."io.containerd.tracing.processor.v1.otlp"]
    endpoint = ""
    insecure = false
    protocol = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0

2. Generate the crictl config

# Create the crictl config file
[root@master1 yum.repos.d]# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
# Restart containerd so the crictl config takes effect
[root@master1 yum.repos.d]# systemctl restart containerd

3. Configure the containerd registry mirror

# Configure the containerd registry mirror
# Edit /etc/containerd/config.toml
# Change config_path = "" to config_path = "/etc/containerd/certs.d"

# Create the mirror configuration
[root@master1 yum.repos.d]# mkdir -p /etc/containerd/certs.d/docker.io/
[root@master1 yum.repos.d]# vim /etc/containerd/certs.d/docker.io/hosts.toml
[host."https://axcmsqgw.mirror.aliyun.com"]
  capabilities = ["pull", "push"]
[host."https://docker.mirrors.ustc.edu.cn"]
  capabilities = ["pull", "push"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull", "push"]

# Restart containerd
[root@master1 yum.repos.d]# systemctl restart containerd
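To check that containerd picks up the mirror configuration, try a test pull through crictl (optional; busybox is only used here as an example image):

# Pull a small test image through containerd and confirm it appears
[root@master1 yum.repos.d]# crictl pull docker.io/library/busybox:1.28
[root@master1 yum.repos.d]# crictl images | grep busybox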


15) Install docker-ce

Run on every node.

# Install the latest docker-ce
[root@master1 ~]# yum install -y docker-ce

# Enable Docker at boot and start it
[root@master1 ~]# systemctl enable docker --now

# Configure Docker registry mirrors
[root@master1 ~]# vim  /etc/docker/daemon.json
{
 "registry-mirrors":["https://axcmsqgw.mirror.aliyun.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
 "exec-opts": ["native.cgroupdriver=systemd"]
}
# Restart Docker
[root@master1 ~]# systemctl restart docker
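After the restart it is worth confirming that Docker picked up the systemd cgroup driver and the mirror list:

# The cgroup driver should report systemd and the registry mirrors should be listed
[root@master1 ~]# docker info | grep -i 'cgroup driver'
[root@master1 ~]# docker info | grep -i -A5 'registry mirrors'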


16) Install the packages needed to initialize Kubernetes (1.26.0)

Run on every node.

# Install the pinned package versions
[root@master1 ~]# yum -y install kubectl-1.26.0 kubeadm-1.26.0 kubelet-1.26.0
# Enable kubelet at boot
[root@master1 ~]# systemctl enable kubelet
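A quick check confirms that all three components are at the expected 1.26.0 version:

# Verify the installed versions
[root@master1 ~]# kubeadm version -o short
[root@master1 ~]# kubelet --version
[root@master1 ~]# kubectl version --client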


3. Initialize the Kubernetes cluster with kubeadm

Run on master1.

1) Set the container runtime

# Set the container runtime endpoint for crictl
[root@master1 ~]# crictl config runtime-endpoint unix:///run/containerd/containerd.sock

# Generate a default kubeadm configuration
[root@master1 ~]# kubeadm config print init-defaults > kubeadm.yaml
# Adjust the config for this environment: change imageRepository, switch the kube-proxy mode to ipvs, and, because containerd is the runtime, set cgroupDriver to systemd
# advertiseAddress: 172.16.32.133 # this node's IP
# criSocket: unix:///var/run/containerd/containerd.sock  # containerd runtime socket
# name: master1 # this node's hostname
# imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # Aliyun image registry
# kubernetesVersion: 1.26.0 # Kubernetes version
# under networking:, add the pod subnet
# podSubnet: 10.244.0.0/16 # pod subnet (new)
# after the scheduler line, append the following
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

The full kubeadm.yaml:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.32.133
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master1
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
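Optionally, the control-plane images can be pre-pulled before running kubeadm init; this makes the init step faster and surfaces registry problems early:

# Pre-pull all images referenced by kubeadm.yaml (optional)
[root@master1 ~]# kubeadm config images pull --config kubeadm.yaml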

2) Initialize the cluster from kubeadm.yaml

# Initialize the cluster from kubeadm.yaml (this automatically pulls the required images)
[root@master1 ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
# When it finishes, list the pulled images:
[root@master1 ~]# crictl images    # or: ctr -n k8s.io images ls
IMAGE                                                                         TAG    
registry.aliyuncs.com/google_containers/pause                                 3.7    
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   v1.9.3 
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.5.6-0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.26.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.26.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.26.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.26.0
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.9  

# Export the images above into k8s_1.26.0.tar.gz (the k8s.io namespace must be specified, otherwise the images cannot be used)
# [root@master1 ~]# ctr -n k8s.io images export k8s_1.26.0.tar.gz registry.aliyuncs.com/google_containers/pause:3.7 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3 registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.0 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
# -n: specify the namespace
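# Optional: copy the image archive to the other nodes over the passwordless SSH set up
# earlier (a sketch; /root/ as the target path is only an example)
[root@master1 ~]# for h in master2 node1 node2; do scp k8s_1.26.0.tar.gz $h:/root/; done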

# Set up the kubectl config file; this authorizes kubectl to manage the cluster with the admin certificate
[root@master1 ~]# mkdir -p $HOME/.kube
[root@master1 ~]# cp  /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES           AGE    VERSION
master1   NotReady   control-plane   10m   v1.26.0

[Screenshots: initialization complete, image list, node list]

3) Load the images on the other servers

# The remaining nodes can import the images directly from the k8s_1.26.0.tar.gz archive
[root@node1 ~]# ctr -n k8s.io images import k8s_1.26.0.tar.gz
# List the images
[root@node1 ~]# crictl images

[Screenshots: transferring the archive from master1, importing the images on the other nodes]

4. Install the Kubernetes network plugin (Calico)

Run on master1.

# Install the Calico images on node1
# Option 1: pull the images with docker, save them to an archive, then import them into the k8s.io namespace with ctr (the approach used here)
# Option 2: copy calico.tar.gz to node1 and import it into the k8s.io namespace with ctr

# Pull the images
[root@master1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/zhangxiaoye/app:calico_cni_v3.18.0
[root@master1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/zhangxiaoye/app:calico_node_v3.18.0
[root@master1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/zhangxiaoye/app:calico_pod2daemon-flexvol_v3.18.0
[root@master1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/zhangxiaoye/app:calico_kube-controllers_v3.18.0

# Retag the images
[root@master1  ~]# docker tag registry.cn-hangzhou.aliyuncs.com/zhangxiaoye/app:calico_pod2daemon-flexvol_v3.18.0 docker.io/calico/pod2daemon-flexvol:v3.18.0
[root@master1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/zhangxiaoye/app:calico_node_v3.18.0 docker.io/calico/node:v3.18.0
[root@master1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/zhangxiaoye/app:calico_cni_v3.18.0 docker.io/calico/cni:v3.18.0
[root@master1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/zhangxiaoye/app:calico_kube-controllers_v3.18.0 docker.io/calico/kube-controllers:v3.18.0

# Save the images to an archive
[root@master1 ~]# docker save -o calico.tar.gz docker.io/calico/pod2daemon-flexvol:v3.18.0 docker.io/calico/node:v3.18.0 docker.io/calico/cni:v3.18.0 docker.io/calico/kube-controllers:v3.18.0

# Import the archive manually
[root@master1 ~]# ctr -n k8s.io images import calico.tar.gz
[root@master1 ~]# crictl images
IMAGE                                                                         TAG    
docker.io/calico/cni                                                          v3.18.0
docker.io/calico/kube-controllers                                             v3.18.0
docker.io/calico/node                                                         v3.18.0
docker.io/calico/pod2daemon-flexvol                                           v3.18.0

# Upload calico.yaml to master1 and install the Calico network plugin from it
[root@master1 ~]# kubectl apply -f  calico.yaml
# Note: the manifest can also be downloaded online from https://docs.projectcalico.org/manifests/calico.yaml
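# Optional: watch the Calico pods until they are Running before checking node status
# (a sketch; the label selectors below match the upstream calico.yaml manifest)
[root@master1 ~]# kubectl -n kube-system get pods -l k8s-app=calico-node
[root@master1 ~]# kubectl -n kube-system get pods -l k8s-app=calico-kube-controllers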

[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES           AGE   VERSION
master1   Ready    control-plane   22m   v1.26.0

[root@master1 ~]# kubectl get nodes -owide
NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
master1   Ready    control-plane   22m   v1.26.0   172.16.32.133   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.33

[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES           AGE   VERSION
master1   Ready    control-plane   61m   v1.26.0

[Screenshots: images in calico.tar.gz, cluster node status]

5. Add the first worker node

# On master1, print the join command:
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 172.16.32.133:6443 --token vk6vx7.qiump2589c4v0rdf --discovery-token-ca-cert-hash sha256:73ab57ac1666b74b86c4214927f473e6216da345cc612ac881a1694cbab2d96c

# Join node1 to the cluster:
[root@node1 ~]# kubeadm join 172.16.32.133:6443 --token vk6vx7.qiump2589c4v0rdf --discovery-token-ca-cert-hash sha256:73ab57ac1666b74b86c4214927f473e6216da345cc612ac881a1694cbab2d96c --ignore-preflight-errors=SystemVerification

# On master1, check the node status:
[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES           AGE    VERSION
master1   Ready      control-plane   30m    v1.26.0
node1     NotReady   <none>          114s   v1.26.0

# Optionally label node1 so a role shows up in the ROLES column
[root@master1 ~]# kubectl label nodes node1 node-role.kubernetes.io/node1=work
# Remove the label
[root@master1 ~]# kubectl label nodes node1 node-role.kubernetes.io/node1-

[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES           AGE     VERSION
master1   Ready      control-plane   31m    v1.26.0
node1     Ready   node1           3m8s   v1.26.0

[Screenshots: join command on master1, joining from node1, labeling node1]

6. Add the second worker node

# node2 must first complete all of section 2 (server initialization), and hosts entries plus passwordless SSH for it must be configured on master1 and node1
# On master1, print the join command:
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 172.16.32.133:6443 --token dvmtiu.lqhptuon50ti7lwo --discovery-token-ca-cert-hash sha256:73ab57ac1666b74b86c4214927f473e6216da345cc612ac881a1694cbab2d96c

# Join node2 to the cluster:
[root@node2 ~]# kubeadm join 172.16.32.133:6443 --token dvmtiu.lqhptuon50ti7lwo --discovery-token-ca-cert-hash sha256:73ab57ac1666b74b86c4214927f473e6216da345cc612ac881a1694cbab2d96c --ignore-preflight-errors=SystemVerification
# On master1, check the node status:
[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES           AGE     VERSION
master1   Ready    control-plane   70m     v1.26.0
node1     Ready    <none>          41m     v1.26.0
node2     Ready    <none>          2m25s   v1.26.0

[Screenshots: node2 joining the cluster, cluster status viewed from master1]

7. Add the second control-plane (master) node

1) Copy the certificates and import the images on master2

# master2 must first complete all of section 2 (server initialization), and hosts entries plus passwordless SSH for it must be configured on master1, node1, and node2
# Copy the certificates from master1 to master2
# On master2, create the certificate directories:
[root@master2 ~]# cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
# On master1, copy the certificates to master2:
[root@master1 ~]# scp /etc/kubernetes/pki/ca.crt master2:/etc/kubernetes/pki/  
[root@master1 ~]# scp /etc/kubernetes/pki/ca.key master2:/etc/kubernetes/pki/  
[root@master1 ~]# scp /etc/kubernetes/pki/sa.key master2:/etc/kubernetes/pki/  
[root@master1 ~]# scp /etc/kubernetes/pki/sa.pub master2:/etc/kubernetes/pki/
[root@master1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt master2:/etc/kubernetes/pki/ 
[root@master1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.key master2:/etc/kubernetes/pki/
[root@master1 ~]# scp /etc/kubernetes/pki/etcd/ca.crt master2:/etc/kubernetes/pki/etcd/
[root@master1 ~]# scp /etc/kubernetes/pki/etcd/ca.key master2:/etc/kubernetes/pki/etcd/

# On master2:
# Copy the image archives (k8s_1.26.0.tar.gz and calico.tar.gz) to master2 and import them:
[root@master2 ~]# ctr -n k8s.io images import k8s_1.26.0.tar.gz
[root@master2 ~]# ctr -n k8s.io images import calico.tar.gz
# On master1, check whether controlPlaneEndpoint is set in the kubeadm-config ConfigMap; edit it with kubectl:
[root@master1 ~]# kubectl -n kube-system edit cm kubeadm-config -o yaml   # add the field shown below
# Add this field under the ClusterConfiguration:
controlPlaneEndpoint: "172.16.32.133:6443" # master1's IP
# Restart kubelet:
[root@master1 ~]# systemctl restart kubelet
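To confirm the field was saved, read the ConfigMap back:

# controlPlaneEndpoint should now appear in the ClusterConfiguration
[root@master1 ~]# kubectl -n kube-system get cm kubeadm-config -o yaml | grep controlPlaneEndpoint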

[Screenshot: where to add controlPlaneEndpoint in the kubeadm-config ConfigMap]

2) Join master2 to the cluster

# On master1, generate the join command:
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 172.16.32.133:6443 --token ou8oxf.fcg1ntlw9zcgowxn --discovery-token-ca-cert-hash sha256:73ab57ac1666b74b86c4214927f473e6216da345cc612ac881a1694cbab2d96c

# On master2, run the join command with --control-plane to join as a control-plane node
[root@master2 ~]# kubeadm join 172.16.32.133:6443 --token ou8oxf.fcg1ntlw9zcgowxn --discovery-token-ca-cert-hash sha256:73ab57ac1666b74b86c4214927f473e6216da345cc612ac881a1694cbab2d96c --control-plane --ignore-preflight-errors=SystemVerification

# The following three commands must be run before the cluster can be managed from master2
[root@master2 ~]# mkdir -p $HOME/.kube
[root@master2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Check the cluster status from master1 or master2:
[root@master2 ~]# kubectl get nodes
NAME      STATUS   ROLES           AGE     VERSION
master1   Ready    control-plane   82m   v1.26.0
master2   Ready    control-plane   15s   v1.26.0
node1     Ready    <none>          53m   v1.26.0
node2     Ready    <none>          14m   v1.26.0
# master2 has now joined the cluster


8. Testing

Test whether a pod created in the cluster can access the network and resolve DNS.

# Upload busybox-1-28.tar.gz to node1 and node2 and import it manually
[root@node1 ~]# ctr -n k8s.io images import busybox-1-28.tar.gz
[root@master1 ~]# kubectl run busybox --image docker.io/library/busybox:1.28  --image-pull-policy=IfNotPresent --restart=Never --rm -it -- sh
/ # ping www.baidu.com
PING www.baidu.com (183.2.172.185): 56 data bytes
64 bytes from 183.2.172.185: seq=0 ttl=127 time=30.745 ms
# The ping succeeds, so pods have network access and the Calico plugin is working

/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
# 10.96.0.10 is the CoreDNS ClusterIP, so CoreDNS is configured correctly.
# Internal Service names are resolved through CoreDNS.

/ # exit   # leave the pod

# If IPs can be pinged but domain names fail to resolve, try restarting the CoreDNS pods
[root@master1 ~]# kubectl delete pods -l k8s-app=kube-dns -n kube-system
# Check the CoreDNS pods
[root@node1 ~]# kubectl get pods -l k8s-app=kube-dns -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-567c556887-9lcws   1/1     Running   0          100m
coredns-567c556887-t2lgp   1/1     Running   0          100m

# Delete a specific pod
[root@node1 ~]# kubectl delete pods xxx -n kube-system

