It is recommended to deploy a version below 1.24 (from 1.24 on, kubelet drops dockershim, so Docker no longer works as the container runtime out of the box).
Deployment workflow overview
- Install kubelet, kubeadm, and docker on the master and all node machines first
- Run kubeadm init on the master to initialize the cluster
- Join each node to the cluster with kubeadm join
- The node status becomes Ready
VM configuration
Each VM should have at least 2 CPU cores and 2 GB of RAM.
Role | Hostname | IP |
---|---|---|
master | master | 192.168.11.6 |
node | node1 | 192.168.11.7 |
node | node2 | 192.168.11.8 |
Set the IP address, hostname, and hosts-file resolution
#Set the hostname (run each command on its own machine)
[root@master~]# hostnamectl set-hostname master
[root@node1~]# hostnamectl set-hostname node1
[root@node2~]# hostnamectl set-hostname node2
Map the IP of every host to its hostname (on all machines):
[root@master~]# vi /etc/hosts
192.168.11.6 master
192.168.11.7 node1
192.168.11.8 node2
[root@master ~]# reboot
Other settings on the nodes (all three machines)
Disable the firewall, SELinux, and swap (verify swap is off with free -m):
[root@master ~]# systemctl stop firewalld && systemctl disable firewalld
[root@master ~]# setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@master ~]# swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
[root@master ~]# systemctl stop NetworkManager && systemctl disable NetworkManager
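A quick verification that swap and SELinux are really off (free -m is the check mentioned above; getenforce is an extra sanity check):
[root@master ~]# free -m        #the Swap: line should read 0 0 0
[root@master ~]# getenforce     #prints Permissive now, Disabled after a reboot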
Configure a static IP address
[root@master ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
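A sketch of the static settings for this guide's master (GATEWAY and DNS1 are assumptions; adjust them, and IPADDR, per machine):
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.11.6
PREFIX=24
GATEWAY=192.168.11.1   #assumption: use your network's gateway
DNS1=114.114.114.114   #assumption: any reachable DNS server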
Adjust kernel parameters (see the sketch below; the link covers the details)
https://blog.csdn.net/u010383467/article/details/107771427
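The settings below are the ones kubeadm's preflight checks commonly require (a sketch; the linked article covers the full list):
[root@master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl --system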
yum
yum is the package manager of the RedHat family. Sync the server time and install the epel-release repository:
[root@master ~]# yum update
[root@master ~]# yum install -y wget ntp yum-utils epel-release
Install Docker (summarized; see the guide below)
Docker installation guide
https://blog.csdn.net/qq_19636353/article/details/103524120
## Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker package repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
## List the available versions
yum list docker-ce --showduplicates | sort -r
# Install the latest docker-ce community edition
yum install -y docker-ce
## Or install a specific version
yum install -y docker-ce-19.03.13
# Start the Docker service and enable it at boot
systemctl enable docker && systemctl start docker
Install Kubernetes
Check versions
List the available versions
[root@master hello]# yum list|grep kubernetes
[root@master ~]# yum list --showduplicates kubeadm|grep 1.18
#Installed versions
[root@master hello]# yum list installed | grep kubernetes
kubernetes-client.x86_64 1.5.2-0.7.git269f928.el7 @extras
kubernetes-master.x86_64 1.5.2-0.7.git269f928.el7 @extras
Create the repository file
Create the Kubernetes yum repository file and edit kubernetes.repo:
[root@master yum.repos.d]# vim kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
To simplify installation on the nodes, copy these two repo files to the node servers with scp:
[root@master yum.repos.d]# scp kubernetes.repo docker-ce.repo node1:/etc/yum.repos.d/
[root@master yum.repos.d]# scp kubernetes.repo docker-ce.repo node2:/etc/yum.repos.d/
Install the components: kubelet, kubeadm, and kubectl
Install the Kubernetes 1.18.16 components kubelet, kubeadm, and kubectl:
[root@master yum.repos.d]# yum install -y kubeadm-1.18.16 kubectl-1.18.16 kubelet-1.18.16
...
[root@master ~]# systemctl enable kubelet #for now we can only enable it at boot
[root@master ~]# systemctl status kubelet
Starting it at this point reports errors; check the logs:
[root@master ~]# systemctl stop kubelet #stop kubelet for now
[root@master ~]# tail /var/log/messages
Note: the kubelet service is in a failed state at this point because its main config file kubelet.conf is missing. This can be ignored for now; the file is generated once the master node has been initialized.
#After installation, list the files and configuration shipped by the package
[root@master ~]# rpm -ql kubelet
All of the steps so far are executed on both the master and the node machines.
Everything below is executed on the master (k8s-master) only.
Download the images (manually)
List the images required by Kubernetes 1.18.16:
[root@master ~]# kubeadm config images list
[root@master ~]# kubeadm config images list --kubernetes-version=v1.18.16
W0318 00:50:18.172197 1578 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.16
k8s.gcr.io/kube-controller-manager:v1.18.16
k8s.gcr.io/kube-scheduler:v1.18.16
k8s.gcr.io/kube-proxy:v1.18.16
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
Pull the images manually, from the Aliyun mirror registry:
[root@master1 ~]#
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.16 && \
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.16 && \
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.16 && \
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.16 && \
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 && \
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 && \
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
#Download complete
[root@master1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.18.16 f64b8b5e96a6 4 weeks ago 117MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.18.16 26e38f7f559a 4 weeks ago 173MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.18.16 b3c57ca578fb 4 weeks ago 162MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.18.16 5a84bb672db8 4 weeks ago 96.1MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 13 months ago 683kB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 13 months ago 43.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 16 months ago 288MB
#Re-tag the images to the k8s.gcr.io names kubeadm expects
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.16 k8s.gcr.io/kube-apiserver:v1.18.16
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.16 k8s.gcr.io/kube-controller-manager:v1.18.16
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.16 k8s.gcr.io/kube-scheduler:v1.18.16
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.16 k8s.gcr.io/kube-proxy:v1.18.16
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
#Remove the old tags
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.16
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.16
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.16
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.16
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
#Check the images
[root@master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.18.16 c3d62d6fe412 3 weeks ago 117MB
k8s.gcr.io/kube-apiserver v1.18.16 56acd67ea15a 3 weeks ago 173MB
k8s.gcr.io/kube-controller-manager v1.18.16 ffce5e64d915 3 weeks ago 162MB
k8s.gcr.io/kube-scheduler v1.18.16 0e0972b2b5d1 3 weeks ago 95.3MB
k8s.gcr.io/pause 3.2 80d28bedfe5d 5 months ago 683kB
k8s.gcr.io/coredns 1.6.7 67da37a9a360 6 months ago 43.8MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 9 months ago 288MB
780528005/hello 1 99d2892c37e7 2 days ago 639MB
#For comparison: the image list for k8s v1.26.1 (the registry has moved to registry.k8s.io)
[root@k8s-master kubelet.service.d]# kubeadm config images list
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.6-0
#Notes
k8s.gcr.io/pause is the pod infrastructure container; you never start it yourself, and the other containers in a pod use it as the template whose network and storage volumes they share.
Pay particular attention to the two add-ons in the list: CoreDNS and kube-proxy.
CoreDNS: the cluster DNS has gone through three generations: SkyDNS --> kube-dns (from 1.3) --> CoreDNS (from 1.11).
kube-proxy: runs as an add-on self-hosted on k8s and generates the iptables or ipvs rules for Service resources; ipvs mode became generally available in version 1.11.
Export the images and import them on the node machines, or simply download them there again, as sketched below.
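A sketch of the export/import route (assumes the master can ssh/scp to node1):
#Export every k8s.gcr.io image on the master into one tar archive
[root@master ~]# docker save -o k8s-images.tar $(docker images --format '{{.Repository}}:{{.Tag}}' | grep '^k8s.gcr.io')
#Copy it over and load it on the node
[root@master ~]# scp k8s-images.tar node1:/root/
[root@master ~]# ssh node1 "docker load -i /root/k8s-images.tar"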
kubeadm init: initialize the cluster
Initialize with explicit options:
--apiserver-advertise-address: the IP address kube-apiserver listens on, i.e. the master's address
--pod-network-cidr=10.244.0.0/16: the pod-to-pod network; flannel requires 10.244.0.0/16, and pod IPs are allocated from this range
--service-cidr=10.1.0.0/16: the Service network range
--image-repository: use the Aliyun image registry, e.g. --image-repository registry.aliyuncs.com/google_containers
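Putting the options together (a sketch; the run recorded below used the pre-pulled k8s.gcr.io images instead of --image-repository):
kubeadm init --kubernetes-version=1.18.16 \
    --apiserver-advertise-address=192.168.11.6 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.1.0.0/16 \
    --image-repository registry.aliyuncs.com/google_containers
The actual run: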
[root@master ~]# kubeadm init --kubernetes-version=1.18.16 --apiserver-advertise-address=192.168.11.6 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap
W0809 09:51:21.141205 24590 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.11.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.11.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0809 09:51:32.208586 24590 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0809 09:51:32.213279 24590 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 25.023211 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: qzcf6b.cc7l99mv55xzzy41
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.11.10:6443 --token qzcf6b.cc7l99mv55xzzy41 \
--discovery-token-ca-cert-hash sha256:ee4f790639a9fae897b34c7c253354add49258e1219bb3dbc4fbb3b2f1759bf1
[root@master ~]#
Note: the kubeadm join ... line above matters; it is the command every other node must use to join this cluster.
If you lose it, print it again with: kubeadm token create --print-join-command
Configure kubectl
kubectl is the command-line tool for managing a Kubernetes cluster. After the master finishes initializing, some configuration is needed before kubectl can be used; follow the commands printed in the init output:
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
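A quick check that kubectl can now reach the cluster (the master reports NotReady until the flannel add-on below is installed):
[root@master ~]# kubectl get nodes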
Handling errors reported by kubeadm init
WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
Fix: this warning is informational on 1.18 and can be ignored; if image pulls fail, pre-pull the required images manually (kubeadm config images pull).
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver.
Fix 1: switch Docker's cgroup driver to systemd
[root@master ~]# vi /usr/lib/systemd/system/docker.service
Append --exec-opt native.cgroupdriver=systemd to the ExecStart=/usr/bin/dockerd line
[root@master1 ~]# systemctl daemon-reload && systemctl restart docker
[root@master ~]# docker info | grep Cgroup
Cgroup Driver: systemd
Fix 2: https://www.cnblogs.com/linyouyi/p/11626241.html
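Fix 2 usually boils down to setting the driver in /etc/docker/daemon.json (a sketch based on common practice, not verified against the link; this overwrites any existing daemon.json):
[root@master ~]# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@master ~]# systemctl daemon-reload && systemctl restart docker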
Fix 3: edit /var/lib/kubelet/kubeadm-flags.env and change --cgroup-driver=cgroupfs to --cgroup-driver=systemd
Error 3: [ERROR CRI]: container runtime is not running (seen on newer releases where containerd is the runtime)
[root@master ~]# rm -rf /etc/containerd/config.toml
[root@master ~]# systemctl restart containerd
[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
Fix: give the VM at least 2 CPU cores.
Inspect the configuration files
#Where each component's configuration files live
[root@master kubernetes]# cd /etc/kubernetes/ && ls
admin.conf controller-manager.conf kubelet.conf manifests pki scheduler.conf
Check the pods in kube-system
[root@master1 kubernetes]# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-9p26v 0/1 Pending 0 10m
coredns-66bff467f8-v89k6 0/1 Pending 0 10m
etcd-master1 1/1 Running 0 11m
kube-apiserver-master1 1/1 Running 0 11m
kube-controller-manager-master1 1/1 Running 0 11m
kube-proxy-h8524 1/1 Running 0 10m
kube-scheduler-master1 1/1 Running 0 11m
Install the flannel pod network add-on
The coredns pods above stay Pending until a pod network add-on is installed. Download the flannel resource manifest and pull the flannel image from Docker Hub (on all machines).
#Add a hosts entry so raw.githubusercontent.com resolves
[root@master ~]# vi /etc/hosts
199.232.96.133 raw.githubusercontent.com
#Download the flannel resource manifest
[root@master ~]# yum -y install wget
[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
#Apply the flannel manifest to run it
[root@master ~]# kubectl apply -f kube-flannel.yml
#If the URL is reachable directly, you can apply it without downloading first
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
#Check flannel's running state
[root@master1 ~]# kubectl get ds -l app=flannel -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-flannel-ds 3 3 3 3 3 <none> 105m
About CNI
CNI (Container Network Interface) is a container networking standard driven by Google and CoreOS. It is not an implementation or a codebase in itself; think of it as a protocol. The standard grew out of rkt's networking proposal and takes flexibility, extensibility, IP allocation, multiple NICs, and similar concerns into account.
The protocol connects two components: the container management system and the network plugin. They communicate through JSON-format files, and the plugin does the concrete work: creating the container's network namespace, moving the network interfaces into that namespace, assigning IPs to the interfaces, and so on.
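For illustration, this is the kind of JSON the flannel manifest drops into /etc/cni/net.d/ on each node (a sketch of 10-flannel.conflist; exact fields vary across flannel releases):
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": { "hairpinMode": true, "isDefaultGateway": true }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}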
Introduction and deployment of the K8s network component Flannel
https://www.yisu.com/zixun/7746.html
Add the node machines
Join the nodes to the cluster using the join command printed during master initialization.
#Run on the consoles of node1 and node2
kubeadm join 192.168.11.10:6443 --token 6wzmpr.1uvhmkb75lg0kcwf \
--discovery-token-ca-cert-hash sha256:ee4f790639a9fae897b34c7c253354add49258e1219bb3dbc4fbb3b2f1759bf1
By default a token is valid for 24 hours. Once it expires it can no longer be used, and a new token must be generated for any node that joins later.
WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
#Generate a new token on the master, then run kubeadm join ... again
[root@master ~]# kubeadm token create
6wzmpr.1uvhmkb75lg0kcwf
# List the current tokens
[root@master ~]# kubeadm token list
Check the nodes from the master
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 2d17h v1.18.16
node1 Ready node 2d16h v1.18.16
node2 Ready <none> 1h v1.18.16
#Give node2 the role label node
[root@master ~]# kubectl label node node2 node-role.kubernetes.io/node=node
node/node2 labeled
Check cluster health
[root@master ~]# kubectl get cs    #or: kubectl get componentstatus
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
#If controller-manager and scheduler report Unhealthy, see:
https://www.gjie.cn/2618.html
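The commonly cited fix (a sketch; the link above describes it in full) is to comment out the --port=0 line in the two static pod manifests, after which kubelet recreates the pods:
[root@master ~]# vi /etc/kubernetes/manifests/kube-controller-manager.yaml   #comment out "- --port=0"
[root@master ~]# vi /etc/kubernetes/manifests/kube-scheduler.yaml            #comment out "- --port=0"
[root@master ~]# systemctl restart kubelet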
[root@master ~]# kubectl get ns #list the namespaces
NAME STATUS AGE
default Active 51m
kube-node-lease Active 51m
kube-public Active 51m
kube-system Active 51m
Check the pods; make sure they are all in the Running state.
[root@master ~]# kubectl get pods -A
[root@master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-66bff467f8-7pm6v 1/1 Running 1 3d12h 10.244.0.5 master <none> <none>
kube-system coredns-66bff467f8-bfcs6 1/1 Running 2 3d12h 10.244.0.6 master <none> <none>
kube-system etcd-master 1/1 Running 2 3d12h 192.168.11.10 master <none> <none>
kube-system kube-apiserver-master 1/1 Running 12 3d12h 192.168.11.10 master <none> <none>
kube-system kube-controller-manager-master 1/1 Running 1 36h 192.168.11.10 master <none> <none>
kube-system kube-flannel-ds-amd64-4gdws 1/1 Running 0 35h 192.168.11.12 node2 <none> <none>
kube-system kube-flannel-ds-amd64-krfw6 1/1 Running 2 3d10h 192.168.11.10 master <none> <none>
kube-system kube-flannel-ds-amd64-mshnh 1/1 Running 0 3d10h 192.168.11.11 node1 <none> <none>
kube-system kube-proxy-6l8wj 1/1 Running 0 35h 192.168.11.12 node2 <none> <none>
kube-system kube-proxy-89sjv 1/1 Running 1 3d12h 192.168.11.10 master <none> <none>
kube-system kube-proxy-gwl85 1/1 Running 3 3d12h 192.168.11.11 node1 <none> <none>
kube-system kube-scheduler-master 1/1 Running 1 36h 192.168.11.10 master <none> <none>
How kubelet reports its status
https://zhuanlan.zhihu.com/p/110980720
In a distributed system the server confirms that clients are alive through heartbeats. In k8s, kubelet likewise reports a heartbeat to the apiserver at fixed intervals, and the apiserver uses it to judge whether the node is alive; if a node goes too long without reporting a heartbeat, its status is set to NotReady and the containers on that host are marked NodeLost or Unknown. kubelet updates its own status to the apiserver periodically; the --node-status-update-frequency parameter sets the reporting frequency, 10s by default, and besides the heartbeat the report also carries some of the node's own data.
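The interval can also be set through the kubeadm-generated kubelet config file (the /var/lib/kubelet/config.yaml written during init above); a sketch of the relevant excerpt:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
nodeStatusUpdateFrequency: 10s   #the default; raise it to reduce apiserver load
Restart kubelet afterwards: systemctl restart kubelet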
Create resources
Deploying a Spring Boot project on Kubernetes
https://blog.csdn.net/qq_19636353/article/details/107812515
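As a minimal illustration (a sketch, not the linked article's manifests): a Deployment plus NodePort Service for the 780528005/hello:1 image that appeared in docker images earlier; containerPort 8080 is an assumption about that image:
[root@master ~]# cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: 780528005/hello:1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
  - port: 8080
    nodePort: 30080
EOF
Then check with kubectl get pods -l app=hello and curl any node IP on port 30080.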
About single-node deployment
Allow pods to be scheduled on the master node. (The node names below, k8s-master and k8s-node01, come from a differently named cluster; substitute your own node names, here master/node1/node2.)
#Allow pods to be scheduled on k8s-master (remove the master taint)
kubectl taint nodes k8s-master node-role.kubernetes.io/master-
#Forbid scheduling on k8s-master (restore the master taint)
kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule
#Allow pods to be scheduled on k8s-node01 (remove the master taint)
kubectl taint nodes k8s-node01 node-role.kubernetes.io/master-
#Forbid scheduling on k8s-node01 (add the master taint)
kubectl taint nodes k8s-node01 node-role.kubernetes.io/master=:NoSchedule
Ignore the error `error: taint "node-role.kubernetes.io/master" not found`; it only means the node did not carry that taint to begin with.
dashboard
Installing the Dashboard on Kubernetes 1.18
https://www.cnblogs.com/guoxiaobo/p/15025312.html
Installing the kubernetes dashboard for v1.18
https://www.jianshu.com/p/7c7bb60aaa46
References
Deploying a Kubernetes 1.18.6 cluster with kubeadm
https://blog.csdn.net/alanpo_/article/details/107823370
Deploying a kubernetes 1.18.6 cluster with kubeadm
https://blog.csdn.net/u010383467/article/details/107771427
Installing kubernetes 1.18.1 on CentOS 7
https://www.cnblogs.com/woailifang/p/12763847.html
Deploying kubernetes 1.22.2
https://www.cnblogs.com/sonyy/p/16670413.html
Building a kubernetes cluster with kubeadm, part 3: joining the node machines
https://www.cnblogs.com/yhaing/p/8568234.html
Installing k8s with kubeadm
https://blog.csdn.net/weixin_46837396/article/details/119777362
Kubernetes basics notes
https://www.jianshu.com/p/bf4badc6e10e
https://www.cnblogs.com/huhyoung/p/9657186.html
https://www.cnblogs.com/LouisZJ/articles/11187714.html
How to deploy a Kubernetes cluster
https://blog.csdn.net/weixin_44296862/article/details/108211749