Testing a k8s Multi-Master Cluster

In the previous post we successfully built the multi-master cluster; now we can test the cluster's functionality.

[root@master1 ~]# cat ns.yml 
apiVersion: v1
kind: Namespace
metadata:
  name: mldong-test
  
[root@master1 ~]# cat nginx-deployment.yml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-pod
  namespace: mldong-test
  labels:
    app: nginx
spec:
  type: NodePort     # defaults to ClusterIP if type is not specified
  ports:
  - port: 80   # the Service port; targetPort and nodePort fall back to defaults if omitted
    targetPort: 80    # the port on the Pods that traffic is forwarded to
    nodePort: 32180   # the port exposed on every node; auto-allocated from 30000-32767 if omitted
    name: web
  selector:
    app: nginx-deployment      # must match the Deployment's Pod labels
    
#apiVersion: v1        # ClusterIP variant
#kind: Service
#metadata:
#  name: nginx
#  namespace: mldong-test
#spec:
#  type: ClusterIP
#  ports:
#  - port: 80
#    protocol: TCP
#    targetPort: 80
#  selector:
#    app: nginx-deployment

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: mldong-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - name: nginx
        image: nginx:latest 
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
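
With both files in place, apply the namespace first so the namespaced Service and Deployment have somewhere to land, then verify as below:

kubectl apply -f ns.yml
kubectl apply -f nginx-deployment.yml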
          
#kubectl get pods -n mldong-test
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5799775699-wd9sv   1/1     Running   0          28m
#kubectl get service -n mldong-test
NAME             TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx-nodeport   NodePort   172.21.1.95   <none>        80:32180/TCP   76s
Check resource (CPU and memory) usage for nodes or pods:

kubectl top <resource>     # e.g. kubectl top node, kubectl top pod
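
Note that kubectl top relies on the Metrics API, usually provided by the metrics-server add-on; if top fails, a quick sanity check (assuming the conventional deployment name in kube-system):

kubectl get deployment metrics-server -n kube-system
kubectl top node
kubectl top pod -n mldong-test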

Exec into a pod:

kubectl exec -ti <pod-name> -- /bin/bash      # list pods with kubectl get pod first and copy the name you need
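
For example, against the nginx Pod created above (the name comes from the earlier kubectl get pods output; minimal images may only ship sh, so a fallback is shown):

kubectl exec -ti nginx-deployment-5799775699-wd9sv -n mldong-test -- /bin/bash
kubectl exec -ti <pod-name> -- /bin/sh    # fallback for images without bash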

[root@master1 ~]# kubectl create deployment demoapp --image=ikubernetes/demoapp:v1.0 --replicas=3
deployment.apps/demoapp created
[root@master1 ~]# kubectl get pods
NAME                       READY   STATUS              RESTARTS   AGE
demoapp-55c5f88dcb-2cfk5   0/1     ContainerCreating   0          11s
demoapp-55c5f88dcb-f9qrn   0/1     ContainerCreating   0          11s
demoapp-55c5f88dcb-j4np2   0/1     ContainerCreating   0          11s

[root@master1 ~]# docker pull ikubernetes/demoapp:v1.0
v1.0: Pulling from ikubernetes/demoapp
c9b1b535fdd9: Pull complete 
3cbce035cd7c: Pull complete 
b83463f478a5: Pull complete 
34b1f286d5e2: Pull complete 
Digest: sha256:6698b205eb18fb0171398927f3a35fe27676c6bf5757ef57a35a4b055badf2c3
Status: Downloaded newer image for ikubernetes/demoapp:v1.0
docker.io/ikubernetes/demoapp:v1.0
[root@master1 ~]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
demoapp-55c5f88dcb-2cfk5   1/1     Running   0          3m55s
demoapp-55c5f88dcb-f9qrn   1/1     Running   0          3m55s
demoapp-55c5f88dcb-j4np2   1/1     Running   0          3m55s

[root@master1 ~]# kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP           NODE   NOMINATED NODE   READINESS GATES
demoapp-55c5f88dcb-2cfk5   1/1     Running   0          3h9m   10.244.2.6   host   <none>           <none>
demoapp-55c5f88dcb-f9qrn   1/1     Running   0          3h9m   10.244.2.5   host   <none>           <none>
demoapp-55c5f88dcb-j4np2   1/1     Running   0          3h9m   10.244.2.4   host   <none>           <none>
[root@master1 ~]# curl 10.244.2.5
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-55c5f88dcb-f9qrn, ServerIP: 10.244.2.5!
[root@master1 ~]# curl 10.244.2.6
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-55c5f88dcb-2cfk5, ServerIP: 10.244.2.6!
[root@master1 ~]# curl 10.244.2.4
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-55c5f88dcb-j4np2, ServerIP: 10.244.2.4!

[root@zabbix-server ~]# kubectl get cs   // check component health (all should be Healthy)
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
[root@zabbix-server ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   21h
kube-flannel      Active   19h
kube-node-lease   Active   21h
kube-public       Active   21h
kube-system       Active   21h

Check that the pods of all system components are Running:

kubectl get pod --all-namespaces -o wide
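
To surface only pods that are not Running, a field selector is a handy filter (sketch):

kubectl get pod -A --field-selector=status.phase!=Running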

#Use the following commands to find the NodePort assigned to the demoapp Service, shown as <service-port>:<node-port>, so it can be reached from outside the cluster
[root@master1 ~]#kubectl create service nodeport demoapp --tcp=80:80
[root@master1 ~]#kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
demoapp      NodePort    10.110.101.190   <none>        80:30037/TCP   102s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        67m

[root@master1 ~]#curl 10.110.101.190
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-78b49597cf-wcjkp, ServerIP: 10.244.2.3!
[root@master1 ~]#curl 10.110.101.190
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-78b49597cf-zmlmv, ServerIP: 10.244.1.4!
[root@master1 ~]#curl 10.110.101.190
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-78b49597cf-7pdww, ServerIP: 10.244.2.2!
      
#From outside the cluster, users can now reach the demoapp application at http://NodeIP:30037, e.g. by pointing a browser at http://<kubernetes-node>:30037.
[root@rocky8 ~]#curl 10.0.0.100:30037
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-78b49597cf-wcjkp, ServerIP: 10.244.2.3!
[root@rocky8 ~]#curl 10.0.0.101:30037
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.0, ServerName: demoapp-78b49597cf-7pdww, ServerIP: 10.244.2.2!
[root@rocky8 ~]#curl 10.0.0.102:30037
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.0, ServerName: demoapp-78b49597cf-zmlmv, ServerIP: 10.244.1.4!
# Scale up
[root@master1 ~]#kubectl scale deployment demoapp --replicas 5
deployment.apps/demoapp scaled
[root@master1 ~]#kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
demoapp-78b49597cf-44hqj   1/1     Running   0          41m
demoapp-78b49597cf-45jd8   1/1     Running   0          9s
demoapp-78b49597cf-49js5   1/1     Running   0          41m
demoapp-78b49597cf-9lw2z   1/1     Running   0          9s
demoapp-78b49597cf-jtwkt   1/1     Running   0          41m

# Scale down
[root@master1 ~]#kubectl scale deployment demoapp --replicas 2
deployment.apps/demoapp scaled
# you can watch the surplus pods being terminated
[root@master1 ~]#kubectl get pod
NAME                       READY   STATUS        RESTARTS   AGE
demoapp-78b49597cf-44hqj   1/1     Terminating   0
demoapp-78b49597cf-45jd8   1/1     Terminating   0
demoapp-78b49597cf-49js5   1/1     Running       0
demoapp-78b49597cf-9lw2z   1/1     Terminating   0
demoapp-78b49597cf-jtwkt   1/1     Running       0

# check again; the scale-down has completed
[root@master1 ~]#kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
demoapp-78b49597cf-49js5   1/1     Running   0          42m
demoapp-78b49597cf-jtwkt   1/1     Running   0          42m
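
The same change can also be made without the scale subcommand, e.g. by patching the Deployment spec directly (an equivalent sketch):

kubectl patch deployment demoapp -p '{"spec":{"replicas":2}}'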

Expose a service for external access

[root@zabbix-server ~]# cat nginx-pod.yaml   
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 80
[root@zabbix-server ~]# kubectl apply -f nginx-pod.yaml   # or: kubectl run nginx-pod --image=nginx, or kubectl create deployment nginx --image=nginx (a Deployment manages replicated Pods rather than a single static Pod)
pod/nginx-pod created
[root@zabbix-server ~]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
demoapp-55c5f88dcb-29ndh   1/1     Running   0          6m47s
demoapp-55c5f88dcb-cnhsj   1/1     Running   0          6m5s
demoapp-55c5f88dcb-j5jjw   1/1     Running   0          7m57s
nginx-pod                  1/1     Running   0          9s
[root@zabbix-server ~]# kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP            NODE   NOMINATED NODE   READINESS GATES
demoapp-55c5f88dcb-29ndh   1/1     Running   0          8m6s    10.244.2.9    host   <none>           <none>
demoapp-55c5f88dcb-cnhsj   1/1     Running   0          7m24s   10.244.2.10   host   <none>           <none>
demoapp-55c5f88dcb-j5jjw   1/1     Running   0          9m16s   10.244.2.8    host   <none>           <none>
nginx-pod                  1/1     Running   0          88s     10.244.2.11   host   <none>           <none>
[root@zabbix-server ~]# curl 10.244.2.11      # reach the Pod directly; at this point it is only reachable from inside the cluster via its Pod IP
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
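
If you need a Pod's IP in a script rather than reading it off -o wide, jsonpath works:

kubectl get pod nginx-pod -o jsonpath='{.status.podIP}'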

[root@master1 opt]#kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

kubectl            # the Kubernetes command-line tool
expose             # subcommand that exposes a resource as a new Service
deployment nginx   # the type and name of the resource to expose
--port=80          # the port the Service listens on for requests from inside the cluster
--type=NodePort    # the Service type
#NodePort maps the Service port onto a static port on every cluster node, so the Service can be reached from outside the cluster
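
To inspect the Service object that kubectl expose generated, including the auto-assigned nodePort:

kubectl get svc nginx -o yaml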

[root@master01 ~]#kubectl scale deployment nginx --replicas=3     # scale out the Pods
deployment.apps/nginx scaled

scale             # subcommand that changes the replica count of a resource (Deployment, ReplicaSet, StatefulSet, etc.)

deployment nginx  # the type and name of the resource to change
--replicas=3      # parameter that sets the Deployment's replica count to 3
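
To follow the scale-out as it happens (kubectl create deployment labels the Pods app=nginx by default, so the label selector below should match):

kubectl rollout status deployment/nginx
kubectl get pod -l app=nginx -w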

[root@zabbix-server ~]# kubectl expose pod nginx-pod --type=NodePort --port=80       # expose the Pod as a Service
service/nginx-pod exposed
[root@zabbix-server ~]# kubectl get svc     # list Services; a node port has been mapped on each host
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        21h
nginx-pod    NodePort    10.108.30.84   <none>        80:32210/TCP   24s

//access the nginx service from outside the cluster
[root@zabbix-server ~]# curl 192.168.190.36:32210
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Now try accessing it from outside the cluster, e.g. http://<node-ip>:32210 in a browser.

**Behind-the-scenes bonus**

Common operations issues

Issue 1: Handling failures pulling the control-plane images (kube-apiserver, etcd, etc.) during installation

yum list | grep kube
#kubeadm config images list
#kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
#run master_images.sh on the master node:
#!/bin/bash

images=(
    kube-apiserver:v1.17.3
    kube-proxy:v1.17.3
    kube-controller-manager:v1.17.3
    kube-scheduler:v1.17.3
    coredns:1.6.5
    etcd:3.4.3-0
    pause:3.1
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
#   docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName  k8s.gcr.io/$imageName
done

# Alternatively:
kubeadm config images list   # list the images required for initialization
#kubeadm config images pull --kubernetes-version=v1.24.3 --image-repository registry.aliyuncs.com/google_containers --cri-socket unix:///var/run/cri-dockerd.sock
#docker image save `docker image ls --format "{{.Repository}}:{{.Tag}}"` -o k8s-images-v1.24.3.tar
#gzip k8s-images-v1.24.3.tar   # adjust the version number to match
docker load -i k8s-images-v1.24.3.tar.gz   # load the image archive
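
The saved archive can then be shipped to offline nodes and loaded there; a sketch with an illustrative host and path:

scp k8s-images-v1.24.3.tar.gz root@k8s-node1:/root/
ssh root@k8s-node1 'docker load -i /root/k8s-images-v1.24.3.tar.gz'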

Issue 2: Worker node problems

[root@k8s-node2 ~]# journalctl -u kubelet -f            // follow the kubelet logs on the worker node

Issue 3: Token expired

// if you have forgotten the token, list the existing ones:
kubeadm token list

// create a new token (and print the full join command) by running this on a control-plane node:
kubeadm token create --print-join-command

// if you do not have the --discovery-token-ca-cert-hash value, compute it on a control-plane node with this command chain:
[root@zabbix-server ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null |  openssl dgst -sha256 -hex | sed 's/^.* //'

// example of joining a node:
kubeadm join 192.168.190.156:6443 --token n8mmg4.7pezadotuhs09lzs \
    --discovery-token-ca-cert-hash sha256:19aa1b069cb53ea16a94461a1c07fb06f02cbf6f32d6ab492b7b5397444279fb

The command chain above does the following: it extracts the public key from the CA certificate with openssl x509 -pubkey, converts the key to DER format, computes the SHA-256 digest of the DER-encoded key, and uses sed to strip the leading text so that only the bare hex value remains.

Issue 4: Fixing "[kubelet-check] Initial timeout of 40s passed."

Step 1:
In init-config.yaml, change advertiseAddress: 1.2.3.4 to advertiseAddress: 10.0.128.0, where 10.0.128.0 is the master node's IP address.
Step 2:
$ kubeadm reset

Step 3:
$ kubeadm init --config=init-config.yaml    # with advertiseAddress set to the master node's IP

Issue 5: kubectl cannot be used on worker nodes

This happens because worker nodes have no admin.conf. kubectl needs to run as the kubernetes-admin identity; /etc/kubernetes/admin.conf is generated on the control plane during the "kubeadm init" step, and worker nodes never receive it, the assumption being that worker nodes should not hold control-plane privileges. Copying admin.conf from the master to the worker nodes lets kubectl pass authentication and work there as well:
scp /etc/kubernetes/admin.conf root@k8s-node1:/etc/kubernetes/
scp /etc/kubernetes/admin.conf root@k8s-node2:/etc/kubernetes/
 
#add it to the environment, otherwise the setting is lost on every reboot
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
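
Then verify on the worker node:

kubectl get nodes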

 

Issue 6: [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1

Run sysctl -w net.ipv4.ip_forward=1
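
sysctl -w only lasts until the next reboot; to persist the setting, a common convention (file name illustrative) is:

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/k8s.conf
sysctl --system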

Issue 7: kubectl get cs reports the cluster as unhealthy; edit the following two files

[root@master1 opt]#vim /etc/kubernetes/manifests/kube-scheduler.yaml
......
16     - --bind-address=192.168.83.30 # change to the IP of the control-plane node master1
......
19 #    - --port=0                    # comment out this line
......
25         host: 127.0.0.1            # change the host under httpGet: to master1's IP
......
39         host: 127.0.0.1            # likewise, the host under httpGet:
......


[root@master1 opt]#vim /etc/kubernetes/manifests/kube-controller-manager.yaml
......
17     - --bind-address=192.168.83.30
......
26 #   - --port=0
......
37         host: 192.168.83.30
......
51         host: 192.168.83.30
......

// after the edits, restart the kubelet and check the cluster status again
[root@master1 opt]#systemctl restart kubelet
[root@master1 opt]#kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

Issue 8: Deleting and rebuilding a master node

kubectl drain paas-m-k8s-master-1 --delete-local-data --force --ignore-daemonsets
kubectl delete node paas-m-k8s-master-1

// clean up the etcd cluster
kubectl -n kube-system exec -it etcd-paas-m-k8s-master-2 -- /bin/sh
// view the member list
etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key member list
// remove the member for the node that was taken down

Rejoin the cluster:
kubeadm reset
Configure name resolution for the API-server domain by editing /etc/hosts.
// generate the join command
[root@paas-m-k8s-master-2 ~]# kubeadm init phase upload-certs --upload-certs
I0110 10:10:11.254956   12245 version.go:252] remote version is much newer: v1.29.0; falling back to: stable-1.18
W0110 10:10:13.812440   12245 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
23d8e27402b4f982d9ec894c37b1a3271c9f27bef2e653ca471426cc57025324

[root@paas-m-k8s-master-2 ~]# kubeadm token create --print-join-command
W0110 10:11:40.990463   14694 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join apiserver.cluster.local:6443 --token yubedv.0rg185no5jgqwn07     --discovery-token-ca-cert-hash sha256:be87c7200420224f1f8d439a5f058de7be88282eec1fc833b346b38c62ddf482

// master1 joins the cluster
kubeadm join apiserver.cluster.local:6443 \
--token yubedv.0rg185no5jgqwn07 \
--discovery-token-ca-cert-hash sha256:be87c7200420224f1f8d439a5f058de7be88282eec1fc833b346b38c62ddf482 \
--control-plane --certificate-key 23d8e27402b4f982d9ec894c37b1a3271c9f27bef2e653ca471426cc57025324

Issue 9: Domain name cannot be resolved

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get https://apiserver.cluster.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: dial tcp: lookup apiserver.cluster.local on 10.138.xx.xx:53: no such host

This is a name-resolution problem: apiserver.cluster.local cannot be looked up.

Fix:

Add an entry directly to /etc/hosts:

<surviving-master-ip> apiserver.cluster.local
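
For example (the IP is illustrative; use a reachable control-plane node's address):

echo "10.0.0.100 apiserver.cluster.local" >> /etc/hosts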

Issue 10: etcd health check fails

The cause is that a stale etcd member record still exists. View the member list:

etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key member list

Remove the stale member:

etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key member remove 7eab7c23b19f6778