This article documents the process of deploying Kubernetes (K8s) 1.23 on top of Docker. If anything is unclear, feel free to discuss it in the comments; if you need any of the resources used here, ask in the comments as well and I will provide them for free, since this is all in the spirit of technical exchange.
Resource List

| Operating System | Hostname | Specs | IP |
| --- | --- | --- | --- |
| CentOS 7.3.1611 | master | 2C4G | 192.168.207.131 |
| CentOS 7.3.1611 | node1 | 2C4G | 192.168.207.165 |
| CentOS 7.3.1611 | node2 | 2C4G | 192.168.207.166 |
Basic Environment
Run on all nodes.
Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
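An optional quick check that the firewall is really stopped:
systemctl is-active firewalld # should print "inactive"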
Disable SELinux
setenforce 0
sed -i "s/.*SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
Disable swap
swapoff -a # temporary: takes effect for the current boot only
sed -i 's/.*swap.*/#&/g' /etc/fstab # permanent: comments out the swap entry in /etc/fstab
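To confirm swap is off:
free -m # the Swap line should show 0 total/used/free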
Add hosts entries
cat >> /etc/hosts << EOF
192.168.207.131 master
192.168.207.165 node1
192.168.207.166 node2
EOF
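A quick resolution check from any node:
getent hosts master node1 node2 # each hostname should resolve to the IP listed above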
Time synchronization
yum -y install chrony
systemctl enable chronyd --now
systemctl restart chronyd
chronyc sources -v
Pass bridged IPv4 traffic to iptables chains
modprobe overlay
modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
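Note that modprobe does not persist across reboots. A common way (an addition here, not strictly required for a single session) to load the modules at boot, plus a check that the settings took effect:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables # all three should print 1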
Prepare Docker
Run on all nodes.
Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install docker-ce docker-ce-cli containerd.io
# start and enable the service
systemctl start docker
systemctl enable docker
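An optional check that Docker is up:
docker version # both the Client and Server sections should print without errors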
Configure Docker
# set the cgroup driver to systemd (must match the kubelet's driver)
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# reload the daemon configuration and restart Docker
systemctl daemon-reload
systemctl restart docker
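Verify that the driver change took effect:
docker info | grep -i 'cgroup driver' # should print: Cgroup Driver: systemd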
Install the kubeadm Tools
Run on all nodes.
Configure the yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm
# version numbers are pinned here; change them if you need a different release
yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0
systemctl enable kubelet
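Confirm the installed versions before initializing:
kubeadm version -o short # should print v1.23.0
kubelet --version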
Initialize the Master Node
Run on the master node only.
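Optionally, pull the required images in advance (the preflight output below also hints at this), so the init step itself is faster:
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.0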
# --apiserver-advertise-address specifies the IP of the current node
# --kubernetes-version must match the version installed above
kubeadm init \
--apiserver-advertise-address=192.168.207.131 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
[root@master ~]# kubeadm init --apiserver-advertise-address=192.168.207.131 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 20.10
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.1.0.1 192.168.207.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.207.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.207.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.502636 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 062vz7.hu91efigyrsn7upi
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.207.131:6443 --token 062vz7.hu91efigyrsn7upi \
--discovery-token-ca-cert-hash sha256:1f8552dcd75ab054866a82b9218b9b20952758da76c28c00eba6c2811ed19074
Configure the Master Node
# After a successful init, run the following 3 commands from the output before you can operate the cluster
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
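kubectl should now work; note that the master will report NotReady until the CNI plugin is deployed in a later step:
kubectl get nodes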
Common Issues
# If kubelet reports the following errors, try updating systemd with: yum -y install systemd
Nov 24 16:39:53 master kubelet[24746]: E1124 16:39:53.511808 24746 node_container_manager_linux.go:61] "Failed to create cgroup" err="Cannot set property TasksAccounting, or unknown property." cgroupName=[kubepods]
Nov 24 16:39:53 master kubelet[24746]: E1124 16:39:53.511848 24746 kubelet.go:1431] "Failed to start ContainerManager" err="Cannot set property TasksAccounting, or unknown property."
# If the first initialization attempt fails, you can run kubeadm reset and try again
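A minimal reset sketch (this wipes the node's cluster state; the $HOME/.kube cleanup only applies on the master):
kubeadm reset -f
rm -rf /etc/cni/net.d $HOME/.kube/config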
Join Worker Nodes to the Cluster
Run on all worker nodes.
# The last command in the master's init output is the join command for worker nodes; copy it to each node and run it
[root@node1 ~]# kubeadm join 192.168.207.131:6443 --token 062vz7.hu91efigyrsn7upi \
> --discovery-token-ca-cert-hash sha256:1f8552dcd75ab054866a82b9218b9b20952758da76c28c00eba6c2811ed19074
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
If you have lost the join command, you can generate a new one on the master node:
[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.207.131:6443 --token wf2wz6.9f55svearl4em1lp --discovery-token-ca-cert-hash sha256:1f8552dcd75ab054866a82b9218b9b20952758da76c28c00eba6c2811ed19074
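You can list existing tokens and their expiry with the command below; bootstrap tokens expire after 24 hours by default:
kubeadm token list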
Deploy the Network Plugin (CNI)
Run on the master node.
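The kube-flannel.yaml manifest is assumed to already be on the master. One common way to fetch it (assuming access to GitHub; pin a release that supports your cluster version if needed):
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml -O kube-flannel.yaml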
[root@master ~]# kubectl apply -f kube-flannel.yaml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
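Wait for the flannel DaemonSet pods to reach Running on every node:
kubectl get pods -n kube-flannel -o wide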
Verification
Check node status
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 14m v1.23.0
node1 Ready <none> 4m50s v1.23.0
node2 Ready <none> 4m41s v1.23.0
Check cluster component status
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
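As an optional smoke test (assuming the nodes can pull the nginx image from Docker Hub), deploy and expose a test workload, then confirm the pod lands on a worker and gets a NodePort:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc -o wide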