1. Network configuration
1. Set the hostname
sudo vi /etc/hostname
On each machine, set the file content to that node's name, one of:
k8s-master
k8s-node1
k8s-node2
k8s-node3
k8s-node4
2. Map IPs to hostnames; this configuration is identical on every node.
sudo vi /etc/hosts
Add the following entries:
10.11.252.51 cufeinfo-master
10.11.252.44 cufeinfo-node1
10.11.252.45 cufeinfo-node2
10.11.252.46 cufeinfo-node3
10.11.252.50 cufeinfo-node4
Test name resolution:
ping cufeinfo-node1
ping cufeinfo-node2
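The per-host pings above can be run as one loop (a sketch, not from the original notes; the hostnames are the ones added to /etc/hosts):

```shell
# Quick check: ping each mapped hostname once; a node that reports FAILED
# either is not reachable or is not in /etc/hosts yet.
for h in cufeinfo-master cufeinfo-node1 cufeinfo-node2 cufeinfo-node3 cufeinfo-node4; do
    ping -c 1 -W 2 "$h" >/dev/null 2>&1 && echo "$h ok" || echo "$h FAILED"
done
```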
2. Configuring and installing docker-ce
Reference: https://www.linuxprobe.com/ubuntu-docker-ce.html
Online installation of docker-ce (not recommended here; prefer the manual installation below, since the online route can install a mismatched version).
Note: the domestic mirror currently provides 18.09, which does not match this k8s setup; Docker CE 18.06 is the recommended version.
Installation guide: https://blog.csdn.net/javalee5156/article/details/83583489
2. Security settings
It is not certain that every step below is strictly required:
1. Disable the firewall
sudo ufw disable
2. Turn off swap
# disable swap immediately
sudo swapoff -a
# permanently comment out the swap entries
sudo sed -i 's/.*swap.*/#&/' /etc/fstab
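The fstab edit can be verified with a quick grep (a sketch, not from the original notes):

```shell
# After the sed above, no uncommented swap entry should remain in /etc/fstab.
# grep -E '^[^#].*swap' matches lines that still start with something other than '#'.
if grep -E '^[^#].*swap' /etc/fstab >/dev/null 2>&1; then
    echo "WARNING: active swap entry still present in /etc/fstab"
else
    echo "fstab swap entries are commented out"
fi
```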
3. Disable selinux
Install the selinux utilities:
sudo apt install -y selinux-utils
Disable selinux:
sudo setenforce 0
# then reboot the operating system
Check whether selinux is off:
sudo getenforce
(Disabled means it is off)
## 2. Installing the k8s components
```bash
sudo apt-get update
sudo apt-get install -y apt-transport-https
# download the k8s signing key (switch to the superuser first)
su
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
```
3. Installing k8s
- Create the sources file:
sudo touch /etc/apt/sources.list.d/kubernetes.list
- Make it writable:
sudo chmod 666 /etc/apt/sources.list.d/kubernetes.list
- Then add the following line to it:
deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main
- Run:
sudo apt update
Refreshing the package lists will at first fail with a signature error:

itcast@master:~$ sudo apt update
Get:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease [8,993 B]
Err:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
Hit:2 http://mirrors.aliyun.com/ubuntu cosmic InRelease
Hit:3 http://mirrors.aliyun.com/ubuntu cosmic-updates InRelease
Hit:4 http://mirrors.aliyun.com/ubuntu cosmic-backports InRelease
Hit:5 http://mirrors.aliyun.com/ubuntu cosmic-security InRelease
Err:6 https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu cosmic InRelease
  Could not wait for server fd - select (11: Resource temporarily unavailable) [IP: 202.141.176.110 443]
Reading package lists... Done
W: GPG error: http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
E: The repository 'http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

The key part is:
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
Signature verification failed, so the key has to be imported. Note the key id after NO_PUBKEY:
6A030B21BA07F4FB
- Add the missing key
Run the following command, using the last 8 hex digits of the key id shown after NO_PUBKEY in the error:
gpg --keyserver keyserver.ubuntu.com --recv-keys BA07F4FB
Then run the following; if it prints OK, the key was added and installation can proceed:
gpg --export --armor BA07F4FB | sudo apt-key add -
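The last-8-digits step can also be scripted instead of copied by hand (a sketch, not from the original notes; the sample line is the error shown above):

```shell
# Extract the short (last-8-hex) key id straight from the apt error text.
err='NO_PUBKEY 6A030B21BA07F4FB'
key=$(echo "$err" | grep -o 'NO_PUBKEY [0-9A-F]*' | awk '{print substr($2, length($2)-7)}')
echo "$key"   # -> BA07F4FB (what gpg --recv-keys expects)
```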
- Run apt update again to refresh the package lists:
itcast@master:~$ sudo apt update
Hit:1 https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu cosmic InRelease
Hit:2 http://mirrors.aliyun.com/ubuntu cosmic InRelease
Hit:3 http://mirrors.aliyun.com/ubuntu cosmic-updates InRelease
Hit:4 http://mirrors.aliyun.com/ubuntu cosmic-backports InRelease
Hit:5 http://mirrors.aliyun.com/ubuntu cosmic-security InRelease
Get:6 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease [8,993 B]
Ign:7 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 Packages
Get:7 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 Packages [26.6 kB]
Fetched 26.6 kB in 42s (635 B/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
165 packages can be upgraded. Run 'apt list --upgradable' to see them.
No errors are reported this time, so the repository is configured correctly.
4.2 k8s network configuration
(1) Configure kernel parameters so that bridged IPv4 traffic is passed to iptables chains.
Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
(2) Apply the changes:
$ sudo modprobe br_netfilter
$ sudo sysctl -p /etc/sysctl.d/k8s.conf
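An alternative sketch for step (1): write the three settings with a heredoc into a local file, then install it with sudo (the privileged commands are commented out so the sketch can be run without root):

```shell
# Write the same three settings listed above into a local k8s.conf.
cat <<'EOF' > k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF
# sudo cp k8s.conf /etc/sysctl.d/k8s.conf
# sudo modprobe br_netfilter && sudo sysctl -p /etc/sysctl.d/k8s.conf
```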
4.3 Installing k8s
Note: switch to the root user first:
$ su
- Install Kubernetes (the version installed at this step is v1.13.1):
$ apt update && apt-get install -y kubelet=1.13.1-00 kubernetes-cni=0.6.0-00 kubeadm=1.13.1-00 kubectl=1.13.1-00
- Enable kubelet at boot, start it, then reboot:
$ sudo systemctl enable kubelet && sudo systemctl start kubelet
$ sudo shutdown -r now
Working installation of the k8s components:
# Direct installation from Google is blocked by network restrictions, so the Aliyun domestic mirror is used instead.
# Run the following commands in order, as the superuser (switch with `su` first):
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
# Write the sources file /etc/apt/sources.list.d/kubernetes.list (with vim, or with a heredoc).
# The referenced guide uses the Aliyun mirror:
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
# If the shell reports "deb: command not found", the heredoc was mistyped
# (e.g. `cat <` instead of `cat <<EOF >`); this variant with the USTC mirror also works:
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main
EOF
apt-get update
Install pinned versions:
apt-get install -y kubelet=1.20.5-00 kubectl=1.20.5-00 kubeadm=1.20.5-00 kubernetes-cni=0.8.7-00
Or check the available versions and install each component individually:
sudo apt-cache madison kubelet
apt-get install -y kubelet=1.20.5-00
sudo apt-cache madison kubectl
apt-get install -y kubectl=1.20.5-00
sudo apt-cache madison kubeadm
apt-get install -y kubeadm=1.20.5-00
Or install the latest versions:
apt-get install -y kubelet kubeadm kubectl
Installation complete.
Configure the node network:
On every machine, edit /etc/netplan/50-cloud-init.yaml and replace the DHCP address with a fixed IP:
vi /etc/netplan/50-cloud-init.yaml
For k8s-master the file becomes:
network:
  ethernets:
    ens33:
      addresses: [10.11.252.51/24]
      dhcp4: false
      gateway4: 10.11.252.2
      nameservers:
        addresses: [10.11.252.2]
      optional: true
  version: 2
# re-apply the IP configuration
netplan apply
The other nodes use the same file with only the address line changed, followed by netplan apply on each:
k8s-node1: addresses: [10.11.252.44/24]
k8s-node2: addresses: [10.11.252.45/24]
k8s-node3: addresses: [10.11.252.46/24]
k8s-node4: addresses: [10.11.252.50/24]
Check the available k8s versions:
sudo apt-cache madison kubelet
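A small sketch (not from the original notes) for picking the version string out of one line of `apt-cache madison` output; the madison format is "package | version | source", and the sample line below is illustrative:

```shell
# Extract the bare version from a madison-style line.
line='kubelet | 1.20.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages'
echo "$line" | awk -F'|' '{gsub(/ /, "", $2); print $2}'   # -> 1.20.5-00
```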
List the images the cluster needs:
kubeadm config images list
Result:
k8s.gcr.io/kube-apiserver:v1.20.5
k8s.gcr.io/kube-controller-manager:v1.20.5
k8s.gcr.io/kube-scheduler:v1.20.5
k8s.gcr.io/kube-proxy:v1.20.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
Pull the images (every node must pull them):
sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.5
sudo docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.5 k8s.gcr.io/kube-apiserver:v1.20.5
sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.5
sudo docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.5 k8s.gcr.io/kube-controller-manager:v1.20.5
sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.5
sudo docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.5 k8s.gcr.io/kube-scheduler:v1.20.5
sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.5
sudo docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.5 k8s.gcr.io/kube-proxy:v1.20.5
sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
sudo docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
sudo docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
sudo docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
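The seven pull/tag pairs above all follow one pattern, so a loop avoids typos (a sketch, not from the original notes; the commands are echoed for review first, and the image list matches the `kubeadm config images list` output above):

```shell
# Print the pull and tag command for every required image; remove the echo
# (or pipe into `sh`) to actually execute them.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.20.5 kube-controller-manager:v1.20.5 \
           kube-scheduler:v1.20.5 kube-proxy:v1.20.5 \
           pause:3.2 etcd:3.4.13-0 coredns:1.7.0; do
    echo "sudo docker pull $MIRROR/$img"
    echo "sudo docker tag $MIRROR/$img k8s.gcr.io/$img"
done
```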
Installing the master node:
(this command can be re-run if an attempt fails)
kubeadm init --pod-network-cidr 10.244.0.0/16
On k8s-master:
kubeadm init \
  --apiserver-advertise-address=10.11.252.51 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16
Alternatively, drive the init from a configuration file, kubeadm.conf:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.11.252.51
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
List the images this configuration will pull:
kubeadm config images list --config kubeadm.conf
# download every image this k8s version depends on
kubeadm config images pull --config ./kubeadm.conf
# initialize and start the control plane
$ sudo kubeadm init --config ./kubeadm.conf
On success the output ends with:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.11.252.45:6443 --token r6vfc4.nb7amo172q7akjim \
--discovery-token-ca-cert-hash sha256:614b6d5ed6b6f98eb51bfdbb65711d5131ffe8f0afd7c1fa049507b8e923ea54
As the output instructs, run the following to set up kubectl access to the cluster:
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Create the system service and start it:
# enable kubelet at boot
sudo systemctl enable kubelet
# start the k8s service
sudo systemctl start kubelet
The cluster state can now be checked with:
kubectl get nodes
kubectl get cs
A common problem at this point: kubectl get cs reports the scheduler and controller-manager as unhealthy.
Fix: comment out the `- --port=0` line in kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests.
sudo vim /etc/kubernetes/manifests/kube-controller-manager.yaml
sudo vim /etc/kubernetes/manifests/kube-scheduler.yaml
# comment out or delete the line: - --port=0
# then restart:
systemctl restart kubelet
kubectl get cs
At this point the cluster consists of just the master node.
Configure the flannel overlay network for internal communication (needed on both master and nodes):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Make sure podSubnet in kubeadm.conf matches the network configured in kube-flannel.yml.
Apply the configuration:
kubectl apply -f kube-flannel.yml
The node status should then change to Ready:
kubectl get nodes
Result:
root@cufeinfo1-desktop:/etc/kubernetes/manifests# kubectl get nodes
NAME                STATUS   ROLES                  AGE   VERSION
cufeinfo1-desktop   Ready    control-plane,master   28m   v1.20.5
If the node does not become Ready, the flannel image probably failed to download; pull it manually (the required tag depends on the flannel version in use):
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
Configure the worker nodes
On each node:
# enable kubelet at boot
sudo systemctl enable kubelet
# start the k8s service
sudo systemctl start kubelet
Copy the admin config from the master to each node (run on the master):
scp /etc/kubernetes/admin.conf cufeinfo-node1@cufeinfo-node1:/home/cufeinfo-node1/baas/file
scp /etc/kubernetes/admin.conf cufeinfo-node2@cufeinfo-node2:/home/cufeinfo-node2/baas/file
scp /etc/kubernetes/admin.conf cufeinfo-node3@cufeinfo-node3:/home/cufeinfo-node3/baas/file
scp /etc/kubernetes/admin.conf cufeinfo-node4@cufeinfo-node4:/home/cufeinfo-node4/baas/file
Join the nodes to the cluster; the token and hash below are the ones generated earlier by kubeadm init.
On each node:
As a regular (non-root) user:
mkdir -p $HOME/.kube
sudo cp -i $HOME/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
As root:
kubeadm join 10.11.252.45:6443 --token hieol5.o88aa2wdwi1zdwjz \
--discovery-token-ca-cert-hash sha256:8bf9403f30021ecc8f375377da2866cd25632eb84d309bcda4999758343eb955
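The token above expires (the TTL in kubeadm.conf is 24h). A fresh join command can be printed on the master with the standard subcommand `kubeadm token create --print-join-command`; its output has the shape assembled below (a sketch with placeholder values, not real credentials):

```shell
# Assemble the join command shape from its three parts (placeholders only).
apiserver="10.11.252.45:6443"; token="<token>"; hash="sha256:<hash>"
echo "kubeadm join $apiserver --token $token --discovery-token-ca-cert-hash $hash"
```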
Check whether the nodes have joined the k8s cluster (it takes a while for them to become Ready):
kubectl get nodes
Result:
cufeinfo-cluster-monitor   Ready    <none>                 3m5s    v1.20.5
cufeinfo1-desktop          Ready    control-plane,master   61m     v1.20.5
cufeinfo2-desktop          Ready    <none>                 6m42s   v1.20.5
After a reboot you may hit the following problem: The connection to the server 10.11.252.45:6443 was refused - did you specify the right host or port?
(the same fix also helps when a node is stuck in NotReady)
Fix:
sudo systemctl restart kubelet.service
Deploy an nginx application to test the cluster
Create a pod in the Kubernetes cluster and verify that it runs (on the master, as root):
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
# Result:
root@cufeinfo1-desktop:/etc/kubernetes/manifests# kubectl get pod,svc
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-6799fc88d8-mr4vf   0/1     ContainerCreating   0          50s
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        64m
service/nginx        NodePort    10.101.147.155   <none>        80:31862/TCP   22s
# note the NodePort assigned here: 31862
Once deployed, verify (as root):
curl 127.0.0.1:31862
Scale quickly to 3 replicas (on the master, as root):
kubectl scale deployment nginx --replicas=3
kubectl get pod,svc
Deploying an application from a yaml file
Write the configuration file mysql-rc.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1            # desired number of Pod replicas
  selector:
    app: mysql           # Pods carrying this label are managed by the RC
  template:              # template used to create the Pod replicas
    metadata:
      labels:
        app: mysql       # label on each replica, matching the RC selector
    spec:
      containers:        # container definitions for the Pod
      - name: mysql      # container name
        image: hub.c.163.com/library/mysql   # Docker image for the container
        ports:
        - containerPort: 3306   # port the application listens on
        env:                    # environment variables injected into the container
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
Load the file into the cluster, then wait a few minutes for the docker image download to finish.
(on the master as root, in the same directory as mysql-rc.yaml)
kubectl create -f mysql-rc.yaml
kubectl get pods
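While the image downloads, a rough readiness filter can count the pods whose STATUS column is not yet Running (a sketch, not from the original notes; the pod name in the sample line is hypothetical, and in practice the live output is piped in as shown in the comment):

```shell
# Count not-yet-Running pods. Live usage:
#   kubectl get pods --no-headers | awk '$3 != "Running" {n++} END {print n+0}'
printf 'mysql-abcde 0/1 ContainerCreating 0 50s\n' |
    awk '$3 != "Running" {n++} END {print n+0}'   # -> 1
```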
Cluster setup is complete.
Deploy the Dashboard
docker pull registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker tag registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Expose the service by changing type: ClusterIP to type: NodePort:
kubectl -n kube-system edit service kubernetes-dashboard
Check which node port was opened:
kubectl -n kube-system get service kubernetes-dashboard
Result:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.97.16.43 <none> 443:30139/TCP 2m45s
Note this port; it is used below.
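The node port can also be extracted from the listing instead of read by eye (a sketch, not from the original notes; it assumes the PORT(S) column is field 5 with the form 443:30139/TCP, as in the output above):

```shell
# Split the PORT(S) field on ':' and '/' and print the node port.
line='kubernetes-dashboard   NodePort   10.97.16.43   <none>   443:30139/TCP   2m45s'
echo "$line" | awk '{split($5, p, /[:\/]/); print p[2]}'   # -> 30139
```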
Open https://k8s-master:30139 in a browser to reach the dashboard login page.
The Kubernetes Dashboard login page offers two sign-in options:
Kubeconfig: select the kubeconfig file configured for cluster access; see the "Configure Access to Multiple Clusters" documentation for details.
Token: every service account has a secret holding a bearer token that can be used to sign in; see the authentication documentation for details.
Create a service account and bind it to the default cluster-admin cluster role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Result:
Name: dashboard-admin-token-7zq5n
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 281ff8b3-40a6-4476-b748-d6c082d417de
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkVrNDRrd1FNNXoxVWF4MDZDYnFHOGFGQm1Qd0w0NEFHcWJDcVRBOGJDY00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tN3pxNW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMjgxZmY4YjMtNDBhNi00NDc2LWI3NDgtZDZjMDgyZDQxN2RlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.q3kh8CQ2FmFUoCE_k01fer7_AHXhDbFf42JitSskCxuW5Az3RhDi6kD-Z0tybPtXvwEXx5zuMk87wBPdxuZm13KBA-Y6ZuXiEwHrrjkC32G9mTsSLbiC37l3rhRjMDjfqv0B3_1i7K5dkgqHggcD5QnEOEA-v5MpEYmi_8dooLJBFtEpEOE8TwkcIB0M3dfOKS5Kb3OSwQ3x9x5sDEVnHOwWiywVfUsZ6Wz7XlJ-ay4hq-oJAHyOoQ2ihjUDRR23WQ8rA1_YpptI519N963M6tbBqeYDPVmwXpkUuGiMDRyuS-ov_mormP0BdwHl08EZkeWYfhp4c3qoMFRyCqnvtg
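For the dashboard login box, only the token value is needed; it can be filtered out of the describe output (a sketch, not from the original notes; pipe the real command in as shown in the comment):

```shell
# Print only the token value from kubectl describe output. Live usage:
#   kubectl describe secrets -n kube-system <secret-name> | awk '/^token:/ {print $2}'
printf 'ca.crt: 1066 bytes\ntoken: <bearer-token>\n' | awk '/^token:/ {print $2}'
```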
If the following problem appears:
The connection to the server 10.11.252.45:6443 was refused - did you specify the right host or port?
Fix (a common cause is swap being re-enabled after a reboot):
sudo swapoff -a
exit
strace -eopenat kubectl version
su
kubectl get nodes