k8s -- Installing and Deploying k8s with kubeadm

Setting up a Kubernetes cluster with kubeadm.

Reference

1. Prepare three hosts (virtual machines are fine); Ubuntu and CentOS are used here.

2. Open the required ports

Master node
6443, 2379~2380, 10250, 10251, 10255
Worker nodes
10250, 10255, 30000~32767
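
If you prefer to keep firewalld running instead of disabling it later (section 5.3), the ports above can be opened explicitly. A minimal sketch for the master node (the worker ports would be opened the same way):

# open the control-plane ports in firewalld (optional alternative to stopping firewalld)
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --reload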

3. Install Docker

Installing Docker on CentOS 7

4. Install kubectl

Install as described on the official site:

 curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
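
The downloaded binary still needs to be made executable and moved onto the PATH; a minimal sketch following the official steps:

chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
# verify the client works
kubectl version --client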

5. Install kubelet and kubeadm

5.1 The official docs point to Google's package repository, which is not reachable here, so a domestic (Aliyun) mirror is used instead.

On CentOS run the following commands:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm
systemctl enable kubelet && systemctl start kubelet

On Ubuntu run the following commands:

apt-get update && apt-get install -y apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm
Check the status of kubelet:
systemctl status kubelet -l

or:

systemctl status kubelet

5.2 Disable swap

swapoff -a
[root@localhost roy]#  cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sun Apr  3 13:25:12 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=72cb4874-ea8d-4aca-b235-3251431f92ba /boot                   xfs     defaults        0 0
#This line must be commented out; on my Ubuntu VM this line did not exist
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
[root@localhost roy]# echo "vm.swappiness = 0">> /etc/sysctl.conf  
[root@localhost roy]# swapoff -a 
#check the swap status
[root@localhost roy]# free -m
              total        used        free      shared  buff/cache   available
Mem:           2701        1310         105          21        1285        1213
Swap:             0           0           0

[root@localhost roy]# sysctl -w vm.swappiness=0
vm.swappiness = 0
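
Note that swapoff -a only disables swap until the next reboot; the swap line in /etc/fstab has to stay commented out for it to survive a reboot. Commenting it by hand as above works; a sed one-liner can also do it (a sketch, assuming the fstab swap entry contains the word "swap"):

# comment out any swap entries in /etc/fstab so swap stays off after a reboot
sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab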

5.3 Disable the firewall

# check the firewall status
[root@localhost roy]# firewall-cmd --state
running
[root@localhost roy]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost roy]# firewall-cmd --state
not running

5.4 Check and disable SELinux

Check its status:

[root@localhost roy]# sestatus
SELinux status:                 disabled

Check with getenforce or sestatus.
Option 1: disable it with sed
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && setenforce 0
Option 2: edit /etc/selinux/config and set
SELINUX=disabled

[root@localhost ~]# setenforce 0
#switch to permissive mode
[root@localhost ~]# getenforce
Permissive
[root@localhost ~]# setenforce 1
#switch to enforcing mode
[root@localhost ~]# getenforce
Enforcing

5.5 Configure kernel parameters so that bridged IPv4 traffic is passed to iptables chains (required by kube-proxy and most CNI plugins)

[root@localhost roy]# cat > /etc/sysctl.d/k8s.conf <<EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@localhost roy]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
vm.swappiness = 0
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
vm.swappiness = 0
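
These two sysctls only exist when the br_netfilter kernel module is loaded; if sysctl --system reports that the keys cannot be found, load the module first and make it load at boot (a small sketch):

modprobe br_netfilter
lsmod | grep br_netfilter
echo "br_netfilter" > /etc/modules-load.d/k8s.conf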

5.6 Add the Aliyun Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

5.7 Add host-to-IP mappings in /etc/hosts, because the pod network plugin is about to be installed and its manifest has to be downloaded

Look up the IPs of the domain:

18.140.226.100 docs.projectcalico.org
3.0.239.142 docs.projectcalico.org
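
The resolved addresses can then be appended to /etc/hosts so the Calico manifest URL resolves (these IPs were valid at the time of writing and may have changed; re-check them with nslookup before copying):

cat >> /etc/hosts <<EOF
18.140.226.100 docs.projectcalico.org
3.0.239.142 docs.projectcalico.org
EOF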

5.8 Configure the network: install Calico

Note: if kubeadm has been reset before, check whether the Calico containers are still running. If they are not, run this step on the master node again and then copy the master's updated /etc/kubernetes/admin.conf to the worker nodes; otherwise creating a Deployment later will fail with "Error 9" described below.

[root@localhost roy]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
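
To check whether the Calico components are actually running (for example after a kubeadm reset, as noted above), list the pods in kube-system and wait for them to become Ready:

kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get pods -n kube-system | grep calico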

5.9 Initialize the master

If the node has already been initialized once, just reset it first:

[root@localhost roy]# kubeadm reset

The Calico network plugin is used here.

[root@localhost roy]# kubeadm init    --image-repository registry.aliyuncs.com/google_containers  --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost.localdomain] and IPs [10.10.0.1 192.168.218.141]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [192.168.218.141 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [192.168.218.141 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.503528 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: b346aa.en8ll0rnl9k2d33n
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.218.141:6443 --token b346aa.en8ll0rnl9k2d33n \
	--discovery-token-ca-cert-hash sha256:8851afbcbeb1023360a43d367a1641f4c00d505a95759af115959af3f28551e2 

-------------------------- The section below is tentative content and can be ignored ------------------ start -------------

Initialize the master

kubeadm init --image-repository registry.aliyuncs.com/google_containers --ignore-preflight-errors=Swap

Configure the network

[root@localhost network-scripts]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
[root@localhost network-scripts]# vim /etc/hosts
[root@localhost network-scripts]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

-------------------------- End of the tentative content above; it can be ignored ------------------ end --------------------

5.10 Create the kubectl config as instructed by the init output

[root@localhost roy]# mkdir -p $HOME/.kube
[root@localhost roy]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/root/.kube/config'? y
[root@localhost roy]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# run the following so kubectl gets shell auto-completion
[root@localhost roy]# source <(kubectl completion bash)
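
source <(kubectl completion bash) only lasts for the current shell; to make completion available in every new session, append it to ~/.bashrc:

echo 'source <(kubectl completion bash)' >> ~/.bashrc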

View the cluster nodes

[root@localhost roy]# kubectl get node
NAME                    STATUS   ROLES                  AGE    VERSION
localhost.localdomain   Ready    control-plane,master   101s   v1.23.5
# localhost.localdomain is the master node
[root@localhost roy]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                            READY   STATUS              RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-6mh2v                         0/1     ContainerCreating   0          94s
kube-system   coredns-6d8c4cb4d-z2g84                         0/1     ContainerCreating   0          94s
kube-system   etcd-localhost.localdomain                      1/1     Running             7          107s
kube-system   kube-apiserver-localhost.localdomain            1/1     Running             0          110s
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running             0          109s
kube-system   kube-proxy-lscg2                                1/1     Running             0          94s
kube-system   kube-scheduler-localhost.localdomain            1/1     Running             41         107s

Install kubernetes-dashboard

[root@localhost roy]# wget  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
--2022-04-04 22:55:12--  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.108.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7566 (7.4K) [text/plain]
Saving to: 'recommended.yaml'

100%[========================================>] 7,566       7.57KB/s  in 1.0s   

2022-04-04 22:55:32 (7.57 KB/s) - 'recommended.yaml' saved [7566/7566]

[root@localhost roy]# kubectl create -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
Warning: spec.template.metadata.annotations[seccomp.security.alpha.kubernetes.io/pod]: deprecated since v1.19, non-functional in v1.25+; use the "seccompProfile" field instead
deployment.apps/dashboard-metrics-scraper created
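
The dashboard Service is ClusterIP by default, so it is not reachable from outside the cluster right away. One common way to access it is kubectl proxy plus a ServiceAccount token; the sketch below is an assumption on top of recommended.yaml (the "dashboard-admin" account name and the cluster-admin binding are choices made here, not part of the manifest):

# create an admin ServiceAccount for logging in to the dashboard
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# on k8s 1.23 the ServiceAccount's token is stored in an auto-created secret
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
# expose the API server locally, then open the dashboard URL in a browser:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
kubectl proxy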

6. Join a node to the cluster

6.1 If the worker node to be joined was previously part of another cluster, it has to be reset first

Switch to the root user:

sudo su

Note: a join token is only valid for 24 hours.

root@roy-ubuntu:/etc/kubernetes# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0405 13:03:57.987109   19652 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
root@roy-ubuntu:/etc/kubernetes# rm /etc/cni/net.d/* -f
root@roy-ubuntu:/etc/kubernetes# 
root@roy-ubuntu:/etc/kubernetes# systemctl daemon-reload
root@roy-ubuntu:/etc/kubernetes# systemctl restart kubelet
root@roy-ubuntu:/etc/kubernetes# kubeadm join 192.168.218.141:6443 --token b346aa.en8ll0rnl9k2d33n \
> --discovery-token-ca-cert-hash sha256:8851afbcbeb1023360a43d367a1641f4c00d505a95759af115959af3f28551e2
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@roy-ubuntu:/etc/kubernetes# 

6.2 Join the node to the cluster

root@roy-ubuntu:/etc/kubernetes# kubeadm join 192.168.218.141:6443 --token o30tbb.sqs8rjbfcmxlaa63 --discovery-token-ca-cert-hash sha256:d23042df89d215c0ec7f200f4f2ba72ebd071c8d62002e7222d187121014efa6
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

At this point kubectl cannot be used on the worker node:

root@roy-ubuntu:~/.kube/config# kubectl get nodes
W0621 09:23:05.602878 1420230 loader.go:221] Config not found: /etc/kubernetes/admin.conf
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Cause:

The kubectl command needs the kubernetes-admin credentials (a kubeconfig) to reach the API server.

Fix:

Copy the /etc/kubernetes/admin.conf file from the master node to the same directory on the worker node, then
configure the environment variable:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

Apply it immediately:

source ~/.bash_profile
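
Instead of pasting the file contents by hand (as done in the transcript below), the kubeconfig can also be copied over with scp; 192.168.218.141 is the master's IP used earlier in this setup:

# run on the worker node: copy the master's admin.conf into place
scp root@192.168.218.141:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf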

Full transcript of the procedure

root@roy-ubuntu:/etc/kubernetes# ls
kubelet.conf  manifests  pki
root@roy-ubuntu:/etc/kubernetes# touch admin.conf
#here the contents were copied and pasted over from the master
root@roy-ubuntu:/etc/kubernetes# vim admin.conf 
root@roy-ubuntu:/etc/kubernetes# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
root@roy-ubuntu:/etc/kubernetes# kubectl get pods
No resources found in default namespace.
root@roy-ubuntu:/etc/kubernetes# kubectl get nodes
NAME                    STATUS   ROLES                  AGE   VERSION
localhost.localdomain   Ready    control-plane,master   47h   v1.23.5
roy-ubuntu              Ready    <none>                 33h   v1.23.5


Errors:

1. Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)

Fix: run sudo su to switch to the root user.
Result on CentOS: (screenshots not preserved)
Result on Ubuntu:

root@roy-virtual-machine:/home/roy/Desktop# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2537  100  2537    0     0  11910      0 --:--:-- --:--:-- --:--:-- 11910
OK
root@roy-virtual-machine:/home/roy/Desktop# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
> deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
> EOF
root@roy-virtual-machine:/home/roy/Desktop# apt-get update
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease [9,383 B]
Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu focal InRelease        
Ign:3 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
Hit:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu focal-updates InRelease
Get:3 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages [54.7 kB]
Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ubuntu focal-backports InRelease
Hit:6 https://mirrors.tuna.tsinghua.edu.cn/ubuntu focal-security InRelease
Hit:7 https://download.docker.com/linux/ubuntu focal InRelease
Fetched 64.1 kB in 1s (53.3 kB/s)
Reading package lists... Done
root@roy-virtual-machine:/home/roy/Desktop# apt-get install -y kubelet kubeadm
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  conntrack cri-tools ebtables kubectl kubernetes-cni socat
Suggested packages:
  nftables
The following NEW packages will be installed:
  conntrack cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni
  socat
0 upgraded, 8 newly installed, 0 to remove and 15 not upgraded.
Need to get 77.7 MB of archives.
After this operation, 335 MB of additional disk space will be used.
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 cri-tools amd64 1.23.0-00 [15.3 MB]
Get:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu focal/main amd64 conntrack amd64 1:1.4.5-2 [30.3 kB]
Get:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu focal/main amd64 ebtables amd64 2.0.11-3build1 [80.3 kB]
Get:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu focal/main amd64 socat amd64 1.7.3.3-2 [323 kB]
Get:5 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.8.7-00 [25.0 MB]
Get:6 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubelet amd64 1.23.5-00 [19.5 MB]
Get:7 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubectl amd64 1.23.5-00 [8,931 kB]
Get:8 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.23.5-00 [8,582 kB]
Fetched 77.7 MB in 33s (2,382 kB/s)                                      
Selecting previously unselected package conntrack.
(Reading database ... 180069 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.5-2_amd64.deb ...
Unpacking conntrack (1:1.4.5-2) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.23.0-00_amd64.deb ...
Unpacking cri-tools (1.23.0-00) ...
Selecting previously unselected package ebtables.
Preparing to unpack .../2-ebtables_2.0.11-3build1_amd64.deb ...
Unpacking ebtables (2.0.11-3build1) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../3-kubernetes-cni_0.8.7-00_amd64.deb ...
Unpacking kubernetes-cni (0.8.7-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../4-socat_1.7.3.3-2_amd64.deb ...
Unpacking socat (1.7.3.3-2) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../5-kubelet_1.23.5-00_amd64.deb ...
Unpacking kubelet (1.23.5-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../6-kubectl_1.23.5-00_amd64.deb ...
Unpacking kubectl (1.23.5-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../7-kubeadm_1.23.5-00_amd64.deb ...
Unpacking kubeadm (1.23.5-00) ...
Setting up conntrack (1:1.4.5-2) ...
Setting up kubectl (1.23.5-00) ...
Setting up ebtables (2.0.11-3build1) ...
Setting up socat (1.7.3.3-2) ...
Setting up cri-tools (1.23.0-00) ...
Setting up kubernetes-cni (0.8.7-00) ...
Setting up kubelet (1.23.5-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubeadm (1.23.5-00) ...
Processing triggers for man-db (2.9.1-1) ...
root@roy-virtual-machine:/home/roy/Desktop# 


2. cgroup-driver mismatch

kubeadm init fails on the master node because Docker and the kubelet are configured with different cgroup drivers.

Configure Docker's cgroup driver:

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "data-root": "/data/docker"
}
EOF

Check Docker's cgroup driver:

docker info

or

 docker info|grep "Cgroup Driver" 

Modify Docker's cgroup driver:

vim /etc/docker/daemon.json

Check the kubelet's cgroup driver:

There are two config paths involved here; the relevant one should be the first: /var/lib/kubelet/config.yaml
Modify the kubelet config:

cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

After modifying, restart the services:

systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet

Key reference: the kubeadm init cgroup-driver issue above.
For error analysis, run journalctl -xeu kubelet and look at the failure messages:

journalctl -xeu kubelet | grep fail

3. k8s: The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http…

Modify the file below (create it if it does not exist):

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"

Then run:

systemctl daemon-reload 
systemctl restart kubelet

4. The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Modify the configuration file:

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"

systemctl daemon-reload
systemctl restart kubelet

5. [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1

 sysctl -w net.ipv4.ip_forward=1 
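
sysctl -w only changes the running kernel; to keep the setting across reboots, it can also be added to the k8s sysctl file from section 5.5:

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/k8s.conf
sysctl --system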

6. Joining a node fails with "Initial timeout of 40s passed"

If the node has joined a cluster before and needs to join again (for example, the cluster broke and kubeadm was reset, so the worker node is no longer part of the cluster), reset the node first:

kubeadm reset -f 

7.The connection to the server localhost:8080 was refused - did you specify the right host or port?

kubectl needs the cluster's admin kubeconfig; it was not bound on this machine (or the environment variable pointing to it was lost), and setting the variable again fixes the problem.
In my case it had been configured before; after a reboot the environment variable set earlier was apparently no longer in effect.

This happened after the cluster was already built on VMware: I shut down both virtual machines and VMware itself, and after restarting, kubectl get nodes on the worker node failed with this error. There was also another error before that: kubelet did not start because swap had been re-enabled; turning swap off again fixed it.
To fix the current problem:

#make the profile take effect
source /etc/profile

If you are doing this at initialization time, proceed as follows:

#adapt to your situation; this records how to set the environment variable on Linux
#Option 1: edit the file
#	   vim /etc/profile
#	   add a new line at the bottom: export KUBECONFIG=/etc/kubernetes/admin.conf
#Option 2: append to the file directly
#	echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile

8. 6443: connection refused (the VM was not shut down cleanly, which broke k8s's etcd)

The virtual machine was not shut down properly; after it was booted again, kubectl get node reported an error.

  1. Check the running state of the containers: docker ps -a
  2. First check whether the apiserver service is normal.
  3. If it is, see which containers exited; if a core component's container exited, look at that container's logs.

Look at the etcd container's logs.
The etcd and apiserver containers had exited, which means etcd had a problem.

The etcd log ended with: panic: failed to recover v3 backend from snapshot

Checking the relevant containers with docker confirmed that the two etcd-related containers had exited.

9. Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Scenario: this error appeared when running kubectl get node after initializing kubeadm.

[root@localhost kubernetes]# kubectl get node
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Fix: after kubeadm init, an admin.conf file is generated under /etc/kubernetes/; it needs to be copied to $HOME/.kube/config:

#mkdir -p $HOME/.kube   (create the directory first if it does not exist)
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

10. Token expiry

Tokens are valid for 24 hours.
To generate a new token:

kubeadm token generate
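
Note that kubeadm token generate only prints a random token value without registering it in the cluster. To create a usable token and print the matching join command in one step, kubeadm token create can be used on the master:

# creates a new bootstrap token and prints the full kubeadm join command for workers
kubeadm token create --print-join-command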

What does kubeadm reset actually do?

[root@localhost systemd]# kubeadm reset
Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
