A complete k8s cluster setup walkthrough

I. Preparation

1. Disable SELinux

# Disable temporarily
setenforce 0
# Disable permanently
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Check whether SELinux is disabled
sestatus

2. Disable swap

# Disable temporarily from the command line
swapoff -a
# Disable permanently
vim /etc/fstab
Comment out the line(s) containing "swap", then reboot.
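
If you prefer not to edit the file by hand, a one-liner that comments out every active swap entry is a possible shortcut (a sketch; double-check /etc/fstab afterwards):

sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab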

3. Enable iptables forwarding and the br_netfilter module

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
bridge
br_netfilter
EOF

# modules-load.d only takes effect at boot, so load the modules now
modprobe bridge
modprobe br_netfilter

sysctl --system
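
A quick check that the settings took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward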

# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld

4. Change the hostname so that every server's hostname is unique

hostnamectl set-hostname server-xxxxx

# Map the new hostname to the server's IP
vim /etc/hosts
127.0.0.1 server-xxxxx
or
<LAN IP> server-xxxxx
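
For a multi-node cluster it is convenient to list every node in /etc/hosts on every machine. A sketch with hypothetical names and addresses:

cat >> /etc/hosts << EOF
10.1.3.178 k8s-master-1
10.1.3.179 k8s-worker-1
10.1.3.180 k8s-worker-2
EOF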

II. Installation

1. Install containerd

CentOS

yum install -y yum-utils device-mapper-persistent-data lvm2
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
yum makecache && yum -y install containerd.io

Ubuntu

apt install -y apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt update && apt install -y containerd.io

Debian

apt install -y apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/debian $(lsb_release -cs) stable"
apt update && apt install -y containerd.io

Adjust the containerd configuration

containerd config default > /etc/containerd/config.toml
sed -i 's/registry.k8s.io\/pause:[0-9].[0-9]/registry.aliyuncs.com\/google_containers\/pause:3.9/g' /etc/containerd/config.toml 
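
kubeadm 1.28 defaults the kubelet to the systemd cgroup driver, so it is usually also worth enabling SystemdCgroup in the generated config before restarting (a sketch; verify the resulting config.toml):

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml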

systemctl restart containerd

Configure containerd registry mirrors

vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://atomhub.openatom.cn"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io/library"]
        endpoint = ["https://atomhub.openatom.cn/library"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
        endpoint = ["https://registry.aliyuncs.com/google_containers"]

systemctl restart containerd

2. Install containerd offline

Download

wget https://github.com/containerd/containerd/releases/download/v1.7.21/containerd-1.7.21-linux-amd64.tar.gz
tar zxvf containerd-1.7.21-linux-amd64.tar.gz
chmod 755 bin/*
cp -n bin/* /usr/bin/
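
Note that the containerd release tarball does not include runc. If runc is not already present on the host, fetch a static build from the opencontainers/runc releases (the version below is only an example):

wget https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64 -O /usr/local/sbin/runc
chmod 755 /usr/local/sbin/runc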

Create and start the service


cat > /usr/lib/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

systemctl start containerd && systemctl enable containerd
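
A quick way to confirm the daemon is up (both client and server versions should be reported):

ctr version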

3. Install Docker

CentOS

yum install -y yum-utils device-mapper-persistent-data lvm2
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
yum makecache && yum -y install docker-ce

Ubuntu

apt install -y  apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt update && apt install -y docker-ce

Debian

apt install -y  apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/debian $(lsb_release -cs) stable"
apt update && apt install -y docker-ce

Adjust the Docker configuration

mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
    "registry-mirrors": [
        "http://mirrors.ustc.edu.cn/",
        "http://docker.jx42.com",
        "https://0c105db5188026850f80c001def654a0.mirror.swr.myhuaweicloud.com",
        "https://5tqw56kt.mirror.aliyuncs.com",
        "https://docker.1panel.live",
        "http://mirror.azure.cn/",
        "https://hub.rat.dev/",
        "https://docker.ckyl.me/",
        "https://docker.chenby.cn",
        "https://docker.hpcloud.cloud"
    ],
    "exec-opts":["native.cgroupdriver=systemd"]
}
EOF

systemctl enable docker && systemctl start docker
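
Confirm that the cgroup driver and registry mirrors were picked up:

docker info | grep -i "cgroup driver"
docker info | grep -iA12 "registry mirrors"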

4. Install Docker offline

CentOS

yum install -y yum-utils device-mapper-persistent-data lvm2
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
yum makecache && yum -y install conntrack cri-tools ebtables ethtool kubernetes-cni socat

Ubuntu

apt install -y  apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt update && apt install -y conntrack cri-tools ebtables ethtool kubernetes-cni socat

Debian

apt install -y  apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/debian $(lsb_release -cs) stable"
apt update && apt install -y conntrack cri-tools ebtables ethtool kubernetes-cni socat

Download and extract the static binary package

wget https://download.docker.com/linux/static/stable/x86_64/docker-27.2.0.tgz
tar zxvf docker-27.2.0.tgz  -C ./
cp -n ./docker/* /usr/bin/

Add the systemd unit files (auto-start configuration)

vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutStartSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Older systemd versions default to a LimitNOFILE of 1024:1024, which is insufficient for many
# applications including dockerd itself and will be inherited. Raise the hard limit, while
# preserving the soft limit for select(2).
LimitNOFILE=1024:524288

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target

vim /usr/lib/systemd/system/docker.socket

[Unit]
Description=Docker Socket for the API
PartOf=docker.service
 
[Socket]
# If /var/run is not implemented as a symlink to /run, you may need to
# specify ListenStream=/var/run/docker.sock instead.
ListenStream=/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target

Start

groupadd docker
systemctl daemon-reload
systemctl enable docker && systemctl start docker

5. Install cri-dockerd

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.15/cri-dockerd-0.3.15.amd64.tgz
tar -xf cri-dockerd-0.3.15.amd64.tgz
cp cri-dockerd/cri-dockerd /usr/bin/cri-dockerd
curl https://github.com/Mirantis/cri-dockerd/raw/master/packaging/systemd/cri-docker.service -L -o /usr/lib/systemd/system/cri-docker.service
curl https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket -L -o /usr/lib/systemd/system/cri-docker.socket 

# Adjust the cri-docker configuration
vim /usr/lib/systemd/system/cri-docker.service
# Edit ExecStart and add the pod-infra-container-image parameter
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9

systemctl daemon-reload
systemctl start cri-docker

# Inspect cri-docker; the crictl command only becomes available after Kubernetes (cri-tools) is installed
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info
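
To avoid passing --runtime-endpoint on every crictl call, you can point crictl at cri-dockerd once through its config file (a sketch):

cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
EOF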

6. Install IPVS

# CentOS
yum -y install ipvsadm ipset
# Ubuntu & Debian
apt -y install ipvsadm ipset

# If /etc/sysconfig/modules/ipvs.modules does not exist, create it:
mkdir -p /etc/sysconfig/modules
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
#modprobe -- nf_conntrack_ipv4   # kernels 4.x and later no longer ship the _ipv4 variant
modprobe -- nf_conntrack
EOF
 
chmod 755 /etc/sysconfig/modules/ipvs.modules
sh /etc/sysconfig/modules/ipvs.modules

# Check that the modules are loaded
lsmod | grep ip_vs
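
On Ubuntu/Debian (or any systemd distro), an alternative that also loads the modules at every boot is a modules-load.d drop-in (a sketch):

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load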

7. Install Kubernetes

CentOS

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum makecache
# List all installable kubelet versions
yum list --showduplicates kubelet
yum install -y kubelet-1.28.2-0 kubeadm-1.28.2-0 kubectl-1.28.2-0

Ubuntu

apt update && apt install -y apt-transport-https ca-certificates gnupg

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 

cat << EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt update
# List all installable kubelet versions
apt-cache madison kubelet
apt install -y kubelet=1.28.2-00 kubeadm=1.28.2-00 kubectl=1.28.2-00

Enable all components to start on boot

systemctl enable containerd
systemctl enable docker
systemctl enable cri-docker
systemctl enable kubelet
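
Optionally pin the versions so a routine system update does not upgrade the cluster components unexpectedly (a sketch; on CentOS you can instead add exclude=kubelet kubeadm kubectl to the repo file and pass --disableexcludes=kubernetes when you do want to upgrade):

# Ubuntu/Debian
apt-mark hold kubelet kubeadm kubectl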

8. Install Kubernetes offline

Install crictl (required by kubeadm/kubelet for the Container Runtime Interface, CRI)

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.28.0/crictl-v1.28.0-linux-amd64.tar.gz
tar zxvf crictl-v1.28.0-linux-amd64.tar.gz -C /usr/local/bin/

Install kubeadm, kubelet and kubectl, and add the kubelet systemd service

wget https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm
wget https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet
wget https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl
chmod 755 kubeadm kubelet kubectl
cp kubeadm kubelet kubectl /usr/bin/

curl -sSL https://raw.githubusercontent.com/kubernetes/release/v0.16.2/cmd/krel/templates/latest/kubelet/kubelet.service -o /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL https://raw.githubusercontent.com/kubernetes/release/v0.16.2/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf -o /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
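
A binary install does not enable the kubelet for you, so enable it and sanity-check the versions (the kubelet will crash-loop until kubeadm init runs, which is expected):

systemctl enable kubelet
kubeadm version
kubelet --version
crictl --version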

III. Cluster setup

1. Initialize the master node

# If the kubelet service is already running, stop it first
systemctl stop kubelet

# You can pull the images in advance to rule out image-pull problems
kubeadm config images pull \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.2 \
--cri-socket=unix:///var/run/cri-dockerd.sock

* If pulling the images is slow or something seems wrong, check the service logs.
# cri-docker service logs
journalctl -xefu cri-docker

# If this node was initialized before, or was initialized with the wrong parameters, reset it back to an uninitialized state
kubeadm reset -f \
--cri-socket=unix:///var/run/cri-dockerd.sock

# Start the initialization
kubeadm init \
--apiserver-advertise-address=<server internal IP> \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.2 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket=unix:///var/run/cri-dockerd.sock

# To skip cri-dockerd and let k8s talk to containerd directly, use this value instead:
--cri-socket=unix:///run/containerd/containerd.sock

The k8s network proxy defaults to iptables; switching to IPVS gives better performance.

kubectl edit -n kube-system cm kube-proxy
Change it to mode: "ipvs"

# Delete the kube-proxy pods; k8s recreates them automatically
kubectl get pod -n kube-system |grep kube-proxy| awk '{print $1}'|xargs kubectl -n kube-system delete pod

# Then check the logs; a line containing "Using ipvs Proxier" means the switch worked
kubectl get pod -n kube-system | grep kube-proxy
kubectl logs -n kube-system kube-proxy-xxxxx

# Inspect the forwarding rules
ipvsadm -Ln

Environment variable configuration

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown root:root $HOME/.kube/config
echo export KUBECONFIG=/etc/kubernetes/admin.conf >> /etc/profile

systemctl daemon-reload
systemctl restart kubelet

Check the master node

kubectl get nodes

You will see a single control-plane node, but in NotReady state; a network plugin needs to be installed.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
After applying it, wait a little while.

kubectl get nodes
The node should now show Ready.
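
You can also watch the Flannel pods come up (depending on the manifest version they land in the kube-flannel or kube-system namespace):

kubectl get pods -A | grep flannel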

Installing Calico, an alternative network plugin

  • Flannel vs Calico: the choice mostly depends on your needs. For a small cluster that does not require complex network features, Flannel is a good fit. If you need a powerful plugin that supports large clusters and rich network policies, Calico is likely the better choice.

  • Best practice: for clusters that may later scale out or need to integrate more devices and policies, Calico is recommended for its scalability and richer feature set. For small clusters or test environments, Flannel is the simpler option.

kubectl apply -f https://docs.tigera.io/archive/v3.25/manifests/calico.yaml

kubectl get pods --namespace=kube-system | grep calico-node
If the calico-node pods are shown with status Running, Calico has been installed successfully.

2. Join worker nodes to the cluster

Check for the following files on the worker node; if they are missing, copy them over from the master.

# Network plugin configuration
scp /etc/cni/net.d/* <worker-ip>:/etc/cni/net.d/

# Cluster admin config from the master
scp /etc/kubernetes/admin.conf <worker-ip>:/etc/kubernetes/

# kubelet startup parameters
scp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <worker-ip>:/etc/systemd/system/kubelet.service.d/

* /etc/cni/net.d/*: for example, if the master already has a network plugin installed and it is Flannel, copy /etc/cni/net.d/10-flannel.conflist:

{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

* If the network plugin is Calico, copy /etc/cni/net.d/10-calico.conflist instead (note that the nodename field is host-specific and must match the node's name):

{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "log_file_path": "/var/log/calico/cni/cni.log",
      "datastore_type": "kubernetes",
      "nodename": "server-180",
      "mtu": 0,
      "ipam": {
          "type": "calico-ipam"
      },
      "policy": {
          "type": "k8s"
      },
      "kubernetes": {
          "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {"portMappings": true}
    },
    {
      "type": "bandwidth",
      "capabilities": {"bandwidth": true}
    }
  ]
}

* If /etc/systemd/system/kubelet.service.d/10-kubeadm.conf does not exist on the master either, save it with the following content:

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

Add the worker node

# Run this on the master
kubeadm token create --print-join-command

This prints a kubeadm join command. Copy it and execute it on the new worker node; the node then joins the cluster as a worker.
* Note: you must append the cri-socket parameter to the generated kubeadm join command, for example:

kubeadm join 10.1.3.178:6443 --token z994lz.s0ogba045j84195c --discovery-token-ca-cert-hash sha256:89d69bc4b7c03bc8328713794c7aa4af798b0e65a64021a329bb9bf1d7afd23e --cri-socket=unix:///var/run/cri-dockerd.sock

# Do the same environment setup on the worker so kubectl can be used there as well; copy any missing files from the master
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown root:root $HOME/.kube/config
echo export KUBECONFIG=/etc/kubernetes/admin.conf >> /etc/profile
systemctl daemon-reload
systemctl restart kubelet

List all nodes that have joined the cluster

kubectl get nodes

3. CNI plugins

The CNI plugins are the low-level binaries behind the network plugin used above. If kubeadm init fails (or kubectl keeps hanging after a successful init), check the service logs with journalctl; you may see errors that some binary under /opt/cni/bin/ does not exist (portmap or flannel, for example). Check whether that executable is present in /opt/cni/bin/ and download it if not.

# The base plugins
mkdir -p /opt/cni/bin/
wget https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz
tar zxvf cni-plugins-linux-amd64-v1.5.1.tgz -C /opt/cni/bin/

# The flannel plugin
wget https://github.com/flannel-io/cni-plugin/releases/download/v1.5.1-flannel2/cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tgz
tar zxvf cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tgz -C /opt/cni/bin/
mv /opt/cni/bin/flannel-amd64 /opt/cni/bin/flannel

systemctl daemon-reload
systemctl restart containerd
systemctl restart docker
systemctl restart cri-docker
systemctl restart kubelet

4. Join additional master nodes to the cluster

On each additional master node, copy the certificates from the primary master.

mkdir -p /etc/kubernetes/pki/etcd

# Copy the following files from the primary master, keeping the same paths (see the scp sketch below)
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
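
A sketch of copying them in one go, run on the new master (<primary-master-ip> is a placeholder):

scp <primary-master-ip>:/etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} /etc/kubernetes/pki/
scp <primary-master-ip>:/etc/kubernetes/pki/etcd/{ca.crt,ca.key} /etc/kubernetes/pki/etcd/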

Check the kubeadm configuration

kubectl -n kube-system edit cm kubeadm-config -o yaml

# Check whether the controlPlaneEndpoint parameter is present; it sits roughly here:
kind: ClusterConfiguration
kubernetesVersion: v1.28.2
controlPlaneEndpoint: 10.1.3.187:6443

# If it is missing, add it
controlPlaneEndpoint: <primary-master-ip>:6443

Join from the other master nodes

# Generate the join command on the primary master
kubeadm token create --print-join-command

This prints a kubeadm join command. Copy it and execute it on the new master node.
* Note: as with the workers, append the cri-socket parameter to the generated kubeadm join command, for example:

kubeadm join 10.1.3.178:6443 --token z994lz.s0ogba045j84195c --discovery-token-ca-cert-hash sha256:89d69bc4b7c03bc8328713794c7aa4af798b0e65a64021a329bb9bf1d7afd23e --cri-socket=unix:///var/run/cri-dockerd.sock

# The steps are the same as adding a worker node, except the join command needs one extra parameter: --control-plane

kubeadm join 10.1.3.178:6443 --token z994lz.s0ogba045j84195c --discovery-token-ca-cert-hash sha256:89d69bc4b7c03bc8328713794c7aa4af798b0e65a64021a329bb9bf1d7afd23e --cri-socket=unix:///var/run/cri-dockerd.sock --control-plane
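
As an alternative to copying the certificates by hand, kubeadm can distribute them for you: the command below, run on the primary master, uploads the certs and prints a certificate key, which you then pass to the join command together with --control-plane (a sketch):

# On the primary master
kubeadm init phase upload-certs --upload-certs
# On the new master, append to the join command:
#   --control-plane --certificate-key <key printed above>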

Environment variable configuration

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo export KUBECONFIG=/etc/kubernetes/admin.conf >> /etc/profile

systemctl daemon-reload
systemctl restart kubelet

List all nodes that have joined the cluster

kubectl get nodes

IV. Core components

The k8s core components all run as pods; kubectl get pods -n kube-system shows them all.

Reference:

k8s–多master高可用集群环境搭建 (CSDN blog)

Master (control-plane) node

etcd (configuration store)

etcd is Kubernetes' default storage system; it holds all cluster data, so you need a backup plan for the etcd data.

kube-apiserver (the brain of the cluster)

kube-apiserver exposes the Kubernetes API; every resource request and operation goes through the interfaces it provides.
It offers the REST API for cluster management (including authentication/authorization, validation, and cluster state changes),
handles the data exchange between the other components, acting as the communication hub,
is the entry point for resource quota control,
and provides the cluster's security mechanisms.

kube-controller-manager (controller manager)

Runs the management controllers, the background threads that handle routine tasks in the cluster. Logically each controller is a separate process, but to reduce complexity they are compiled into a single binary and run in a single process.
It consists of a series of controllers that watch the overall cluster state through the apiserver and keep the cluster in the desired state:
● Node Controller
● Deployment Controller
● Service Controller
● Volume Controller
● Endpoint Controller
● Garbage Collector
● Namespace Controller
● Job Controller
● ResourceQuota Controller

Scheduler (watches node resource usage)

Its main job is to place pods onto suitable worker nodes, using
● predicates (filtering)
● priorities (scoring)

Worker node

kubelet (the per-node container agent)

Starts and tears down containers, owns the pod lifecycle, and runs on every node.
Put simply, kubelet periodically fetches the desired state of the pods on its node (which containers to run, how many replicas, how networking and storage should be configured, and so on) and calls the container runtime interface to reach that state.
It periodically reports the node's current state to the apiserver, for use during scheduling,
and cleans up images and containers so that images do not fill the disk and exited containers do not hold on to resources.

kube-proxy (network proxy and load balancer)

Runs on every node; it originally used iptables for its rules, and IPVS is now the popular, more convenient choice.
kube-proxy is the network proxy K8S runs on each node and is what backs the Service resource.
● Maps the cluster network onto the pod network (ClusterIP -> Pod IP)
● Supports three traffic-scheduling modes:
● userspace (deprecated)
● iptables (the current default)
● IPVS (recommended)
● Creates, deletes and updates the forwarding rules, reports its own changes to the apiserver, and watches the apiserver for rule changes from other kube-proxy instances to update itself. (The Endpoint Controller is what maintains the mapping between Services and Pods.)
kube-proxy implements Services: access from pods to Services inside the cluster, and access from NodePorts to Services from outside.

Note: the pod network is set up via the kubelet (and the CNI plugin), not directly by kube-proxy.

How the components work together:

User (via kubectl) -> API server (handles the request) -> Scheduler (picks a suitable node) -> Controller Manager (creates the various resources) -> etcd (state is written) -> the pod is created on the node the Scheduler selected.

V. Follow-up

1. Install Helm

https://github.com/helm/helm

Helm docs: Installing Helm

Helm is the package manager for Kubernetes; more and more components are being deployed with Helm.

wget https://get.helm.sh/helm-v3.15.4-linux-amd64.tar.gz
tar zxvf helm-v3.15.4-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/
chmod +x /usr/local/bin/helm
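
A quick check that the binary works, plus adding a first chart repository (the Bitnami repo below is just a common example):

helm version
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update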

2. Install the dashboard UI

https://github.com/kubernetes/dashboard

Installation via Helm

# Add the kubernetes-dashboard repository
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
# Deploy a "kubernetes-dashboard" release
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard

Installation without Helm

# Fetch the dashboard manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml -O kubernetes-dashboard.yaml

# Edit the yaml to expose a NodePort
spec:
  type: NodePort        # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30100   # added
  selector:
    k8s-app: kubernetes-dashboard

# Apply
kubectl apply -f kubernetes-dashboard.yaml

# Create the dashboard-admin service account
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
# Bind it with a clusterrolebinding
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# Create a login token
kubectl create token dashboard-admin -n kubernetes-dashboard

# Access
https://<server-ip>:30100
Paste the token created above into the token field to log in.

3. Completely remove k8s

# Reset the K8S cluster state
kubeadm reset -f --cri-socket=unix:///var/run/cri-dockerd.sock
# If IPVS was used
ipvsadm --clear

# Stop K8S
systemctl stop kubelet
systemctl stop cri-docker.socket cri-docker
systemctl stop docker.socket docker

# Remove the K8S-related packages
yum -y remove kubelet kubeadm kubectl docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras
# If Docker was installed offline
yum -y remove kubelet kubeadm kubectl containerd.io
rm -rf /usr/bin/docker* /usr/lib/systemd/system/docker.service /usr/lib/systemd/system/docker.socket

# Manually delete all images, containers and volumes
rm -rf /var/lib/docker
rm -rf /var/lib/containerd

# Remove the remaining files
rm -rf $HOME/.kube ~/.kube/ /etc/kubernetes/ /etc/systemd/system/kubelet.service.d /usr/lib/systemd/system/kubelet.service /usr/lib/systemd/system/cri-docker.service /usr/bin/kube* /etc/cni /opt/cni /var/lib/etcd /etc/docker/daemon.json /etc/containerd/config.toml /usr/lib/systemd/system/containerd.service

VI. Tips

1. Useful commands

# List images with the containerd CLI
ctr image list

# List the images k8s pulled into containerd (the k8s.io namespace)
ctr -n k8s.io image list

# List images through cri-dockerd
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock image

# Force-delete a pod
kubectl delete pod <pod> -n <namespace> --grace-period=0 --force

# Show the iptables forwarding rules
iptables -L

# Show the IPVS forwarding rules
ipvsadm -Ln

2. Pods are not scheduled onto master nodes

This is because master nodes carry a taint by default, and pods avoid tainted nodes unless they tolerate them. Remove the taint from the master nodes if you want workloads to run there.

# List all master nodes
kubectl get nodes | grep control-plane

# Look up the taint on a master node
kubectl describe node <node-name> | grep Taints

# Remove the taint; note the trailing minus sign, which means delete
kubectl taint node <node-name> <taint>-
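
For example, on kubeadm 1.28 the default control-plane taint is node-role.kubernetes.io/control-plane:NoSchedule, so the removal usually looks like:

kubectl taint node <node-name> node-role.kubernetes.io/control-plane:NoSchedule-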

3. Image pulls are too slow

Reference:

K8S Containerd导入Docker image镜像 (CSDN blog)

When K8S creates containers, some images inevitably fail to pull (for network or similar reasons).
Back when we used Docker Engine directly, we could pull a third-party copy of the image, tag it with the label the cluster expects, and let K8S pick it up locally.
Because Docker donated its container format and the runC runtime to the OCI (Open Container Initiative), the interfaces between container tools and the underlying implementations are standardized, so the same trick works with containerd.

Take speeding up the Calico network plugin pull as an example.

# Pull the images with Docker
docker pull calico/cni:v3.25.0
docker pull calico/node:v3.25.0
# Tag the images with the names k8s expects
docker tag calico/cni:v3.25.0 docker.io/calico/cni:v3.25.0
docker tag calico/node:v3.25.0 docker.io/calico/node:v3.25.0
# Save the images to tar files
docker save -o ./calico-cni.tar calico/cni:v3.25.0 docker.io/calico/cni:v3.25.0
docker save -o ./calico-node.tar calico/node:v3.25.0 docker.io/calico/node:v3.25.0

Then import the images. Note that they must be imported into the containerd namespace K8S uses, k8s.io, otherwise it will not find them.

# Import; the -n flag selects the namespace
ctr -n k8s.io image import calico-cni.tar
ctr -n k8s.io image import calico-node.tar
# Confirm the import
ctr -n k8s.io image list | grep calico
# crictl is the CRI tool defined by the Kubernetes community; confirm there as well
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock image | grep calico

# Apply the manifest
wget https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
kubectl apply -f calico.yaml

At this point K8S can find the images locally (remember to confirm that imagePullPolicy is set to IfNotPresent or Never).

4. Errors you may run into

Failed to start docker.service: Unit docker.service is masked

systemctl unmask docker.socket
systemctl unmask docker.service

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

modprobe bridge
modprobe br_netfilter
sysctl --system
