CKA 1.21 Exam Questions

Exam Resources

https://kubernetes.io/docs/concepts/services-networking/network-policies/ 

https://kubernetes.io/zh/docs/concepts/services-networking/ingress/#the-ingress-resource

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim

https://kubernetes.io/zh/docs/concepts/storage/volumes/#emptydir

Enabling kubectl command completion

1. kubectl --help

        Find [completion] under [Settings Commands]

2. kubectl completion --help

        Find the example [source <(kubectl completion zsh)] and change it to [source <(kubectl completion bash)]

        Question 1 (weight: 4%)

Context
Create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.
Task
Create a new ClusterRole named deployment-clusterrole that only allows creating the following resource types:

  • Deployment
  • StatefulSet
  • DaemonSet

Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Limited to the namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token.

        Question 1 answer

student@k8s-client:~$ kubectl config use-context k8s
Switched to context "k8s".
student@k8s-client:~$ source <(kubectl completion bash)
student@k8s-client:~$

student@k8s-client:~$ kubectl create clusterrole deployment-clusterrole --resource="deployments,statefulsets,daemonsets" --verb="create"
clusterrole.rbac.authorization.k8s.io/deployment-clusterrole created

student@k8s-client:~$ kubectl describe clusterrole deployment-clusterrole
Name:         deployment-clusterrole
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources          Non-Resource URLs  Resource Names  Verbs
  ---------          -----------------  --------------  -----
  daemonsets.apps    []                 []              [create]
  deployments.apps   []                 []              [create]
  statefulsets.apps  []                 []              [create]

student@k8s-client:~$ kubectl create sa cicd-token -n app-team1
serviceaccount/cicd-token created
student@k8s-client:~$ kubectl create rolebinding chypass -n app-team1 --clusterrole=deployment-clusterrole --serviceaccount="app-team1:cicd-token"
rolebinding.rbac.authorization.k8s.io/chypass created

student@k8s-client:~$ kubectl describe rolebinding chypass -n app-team1
Name:         chypass
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  deployment-clusterrole
Subjects:
  Kind            Name        Namespace
  ----            ----        ---------
  ServiceAccount  cicd-token  app-team1
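
A declarative sketch of the same objects (the binding name chypass simply follows the example above). Note that a RoleBinding in app-team1, rather than a ClusterRoleBinding, keeps the grant scoped to that namespace, as the task requires:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-clusterrole
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: chypass
  namespace: app-team1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployment-clusterrole
subjects:
- kind: ServiceAccount
  name: cicd-token
  namespace: app-team1
```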

        Question 2 (weight: 4%)

Set the configuration context:
[student@node-1]$ kubectl config use-context ek8s
Task
Set the node named ek8s-node-0 as unavailable and reschedule all pods running on that node.

        Question 2 answer

student@k8s-client:~$ kubectl config use-context ek8s
Switched to context "ek8s".
student@k8s-client:~$ kubectl get nodes
NAME                         STATUS     ROLES                  AGE   VERSION
k8s-master.lab.example.com   Ready      control-plane,master   25d   v1.21.0
k8s-node1.lab.example.com    NotReady   <none>                 25d   v1.21.0
k8s-node2.lab.example.com    NotReady   <none>                 25d   v1.21.0

student@k8s-client:~$ kubectl cordon k8s-node2.lab.example.com 
node/k8s-node2.lab.example.com cordoned

student@k8s-client:~$ kubectl drain k8s-node2.lab.example.com --ignore-daemonsets --delete-emptydir-data
node/k8s-node2.lab.example.com already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-59zwx, kube-system/kube-proxy-b6mz6
evicting pod kubernetes-dashboard/kubernetes-dashboard-67484c44f6-7s5xd
evicting pod ingress-nginx/ingress-nginx-controller-6bb89cfc4f-xwzxk
evicting pod kubernetes-dashboard/dashboard-metrics-scraper-856586f554-9tr4p

        Question 3 (weight: 4%)

Set the configuration context:
[student@node-1]$ kubectl config use-context mk8s
Task
An existing Kubernetes cluster is running version 1.21.0. Upgrade all Kubernetes control-plane and node components on the master node only to version 1.21.1.
You can connect to the master node with:
[student@node-1]$ ssh mk8s-master-0
You can gain elevated privileges on that master node with:
[student@mk8s-master-0]$ sudo -i
Also upgrade kubelet and kubectl on the master node.
Be sure to drain the master node before upgrading and uncordon it afterwards. Do not upgrade the worker nodes, etcd, the container
manager, the CNI plugin, the DNS service, or any other addons.

        Question 3 answer

student@k8s-client:~$ kubectl config use-context mk8s
Switched to context "mk8s".
student@k8s-client:~$ kubectl get nodes
NAME                         STATUS     ROLES                  AGE   VERSION
k8s-master.lab.example.com   Ready      control-plane,master   26d   v1.21.0
k8s-node1.lab.example.com    Ready      <none>                 26d   v1.21.0
k8s-node2.lab.example.com    NotReady   <none>                 26d   v1.21.0
student@k8s-client:~$ kubectl cordon k8s-master.lab.example.com
node/k8s-master.lab.example.com cordoned
student@k8s-client:~$ kubectl drain k8s-master.lab.example.com --ignore-daemonsets 
node/k8s-master.lab.example.com already cordoned
error: unable to drain node "k8s-master.lab.example.com", aborting command...

There are pending nodes to be drained:
 k8s-master.lab.example.com
error: cannot delete Pods with local storage (use --delete-emptydir-data to override): kubernetes-dashboard/dashboard-metrics-scraper-856586f554-d6wxq, kubernetes-dashboard/kubernetes-dashboard-67484c44f6-6hsnz
student@k8s-client:~$ kubectl drain k8s-master.lab.example.com --ignore-daemonsets --delete-emptydir-data
node/k8s-master.lab.example.com already cordoned

...

node/k8s-master.lab.example.com drained
student@k8s-client:~$ ssh k8s-master.lab.example.com
The authenticity of host 'k8s-master.lab.example.com (192.168.126.100)' can't be established.
ECDSA key fingerprint is SHA256:ZIqAzzdITsStxxkhdeP4tdAX95MEiWv7nyPPdsnos1U.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'k8s-master.lab.example.com' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 4.15.0-156-generic x86_64)

...

student@k8s-master:~$ sudo -i
root@k8s-master:~# apt-cache policy kubectl
kubectl:
  Installed: 1.21.0-00

...

root@k8s-master:~# apt-get install kubectl=1.21.1-00 kubelet=1.21.1-00 kubeadm=1.21.1-00 -y

Reading package lists... Done

...

Setting up kubelet (1.21.1-00) ...
Setting up kubectl (1.21.1-00) ...
Setting up kubeadm (1.21.1-00) ...        

root@k8s-master:~# systemctl enable kubelet
root@k8s-master:~# systemctl restart kubelet
root@k8s-master:~# kubeadm upgrade apply v1.21.1
root@k8s-master:~# exit
logout
student@k8s-master:~$ exit
logout

student@k8s-client:~$ kubectl uncordon k8s-master.lab.example.com
node/k8s-master.lab.example.com uncordoned
student@k8s-client:~$ kubectl get nodes
NAME                         STATUS     ROLES                  AGE   VERSION
k8s-master.lab.example.com   Ready      control-plane,master   26d   v1.21.1
k8s-node1.lab.example.com    Ready      <none>                 26d   v1.21.0
k8s-node2.lab.example.com    NotReady   <none>                 26d   v1.21.0

        Question 4 (weight: 7%)

No configuration context change is required for this item. However, before working on it, make sure you have returned to the base node:
Task

First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 and save the snapshot to /data/backup/etcd-snapshot.db.
Creating a snapshot of the given instance is expected to complete within seconds. If the operation seems to hang, something is likely wrong with the command. Press CTRL+C to cancel, then retry.
Next, restore the previous snapshot located at /srv/data/etcd-snapshot-previous.db.
The following TLS certificates and key are provided for connecting to the server with etcdctl:

  • CA certificate: /opt/KUIN00601/ca.crt
  • Client certificate: /opt/KUIN00601/etcd-client.crt
  • Client key: /opt/KUIN00601/etcd-client.key

        Question 4 answer

# Declare the etcdctl API version to use
student@k8s-client:~$ export "ETCDCTL_API=3"

# Check the certificate-related flags
student@k8s-client:~$ etcdctl --help

student@k8s-client:~$ etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot save /data/backup/etcd-snapshot.db

student@k8s-client:~$ etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot restore /srv/data/etcd-snapshot-previous.db
2021-09-20 16:40:30.340006 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
 

        Question 5 (weight: 7%)

Set the configuration context:
[student@node-1]$ kubectl config use-context hk8s
Task
Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the existing namespace corp-net to connect to port 8080 of Pods in namespace echo.
Further ensure that the new NetworkPolicy:
● does not allow access to Pods that are not listening on port 8080;
● does not allow access from Pods that are not in namespace corp-net.


https://kubernetes.io/docs/concepts/services-networking/network-policies/ 

        Question 5 answer

student@k8s-client:~$ kubectl config use-context hk8s
Switched to context "hk8s".
student@k8s-client:~$ kubectl label ns corp-net ns=corp-net
namespace/corp-net labeled
student@k8s-client:~$ vim 5.yaml  (content based on https://kubernetes.io/docs/concepts/services-networking/network-policies/)

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: echo
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns: corp-net
    ports:
    - protocol: TCP
      port: 8080

student@k8s-client:~$ kubectl apply -f 5.yaml 
networkpolicy.networking.k8s.io/allow-port-from-namespace created

        Question 6 (weight: 7%)

Set the configuration context:
[student@node-1]$ kubectl config use-context k8s
Task
Reconfigure the existing deployment front-end, adding a port specification named http to expose port 80/tcp of the existing container nginx.
Create a new service named front-end-svc that exposes the container port http.
Configure the service to expose the individual Pods via a NodePort on the nodes they are scheduled on.

        Question 6 answer

student@k8s-client:~$ kubectl config use-context k8s
Switched to context "k8s".
student@k8s-client:~$ kubectl edit deployment front-end
deployment.apps/front-end edited
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: front-end
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
          name: http

        resources: {}
student@k8s-client:~$ kubectl expose deployment front-end --type=NodePort --port=80 --name=front-end-svc
service/front-end-svc exposed
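
For reference, the service created by kubectl expose corresponds roughly to this manifest (the selector app: front-end is taken from the deployment labels shown above; nodePort is left unset so the cluster assigns one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
spec:
  type: NodePort
  selector:
    app: front-end
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http
```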

        Question 7 (weight: 7%)

Set the configuration context:
[student@node-1]$ kubectl config use-context k8s
Task
Create a new nginx Ingress resource as follows:
● Name: pong
● Namespace: ing-internal
● Expose service hello on path /hello using service port 5678
You can check the availability of service hello with the following command, which should return hello:
[student@node-1]$ curl -kL <INTERNAL_IP>/hello
https://kubernetes.io/zh/docs/concepts/services-networking/ingress/#the-ingress-resource

        Question 7 answer

student@k8s-client:~$ kubectl config use-context k8s
Switched to context "k8s".
student@k8s-client:~$ kubectl create ingress pong --rule="/hello=hello:5678" -n ing-internal
ingress.networking.k8s.io/pong created
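
Equivalently, the Ingress can be written as a manifest (pathType Prefix is an assumption; the task does not specify one):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
```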

        Question 8 (weight: 4%)

Set the configuration context:
[student@node-1]$ kubectl config use-context k8s
Task
Scale the deployment webserver to 3 pods.

        Question 8 answer

student@k8s-client:~$ kubectl config use-context k8s
Switched to context "k8s".
student@k8s-client:~$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
front-end-5f64577768-77xw4   1/1     Running   0          19m
webserver-74d49b7f8c-st6v6   1/1     Running   0          2d18h
student@k8s-client:~$ kubectl scale deployment webserver --replicas=3
deployment.apps/webserver scaled

student@k8s-client:~$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
front-end-5f64577768-77xw4   1/1     Running   0          21m
webserver-74d49b7f8c-6d8rr   1/1     Running   0          72s
webserver-74d49b7f8c-st6v6   1/1     Running   0          2d18h
webserver-74d49b7f8c-z68pw   1/1     Running   0          72s
 

        Question 9 (weight: 4%)

Set the configuration context:
[student@node-1]$ kubectl config use-context k8s
Task
Schedule a pod as follows:
● Name: nginx-kusc00401
● Image: nginx
● Node selector: disk=spinning

        Question 9 answer

student@k8s-client:~$ kubectl config use-context k8s
Switched to context "k8s".
student@k8s-client:~$ kubectl run --image=nginx nginx-kusc00401 -o yaml --dry-run=client > 9.yaml
student@k8s-client:~$ vim 9.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-kusc00401
  name: nginx-kusc00401
spec:
  containers:
  - image: nginx
    name: nginx-kusc00401
    resources: {}
  nodeSelector:
    disk: spinning
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

student@k8s-client:~$ kubectl apply -f 9.yaml
pod/nginx-kusc00401 created

        Question 10 (weight: 4%)

Set the configuration context:
[student@node-1]$ kubectl config use-context k8s
Task
Check how many worker nodes are Ready (excluding nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt.

        Question 10 answer

student@k8s-client:~$ kubectl get nodes
NAME                         STATUS     ROLES                  AGE   VERSION
k8s-master.lab.example.com   Ready      control-plane,master   28d   v1.21.1
k8s-node1.lab.example.com    Ready      <none>                 28d   v1.21.0
k8s-node2.lab.example.com    NotReady   <none>                 28d   v1.21.0
student@k8s-client:~$ kubectl describe node k8s-master.lab.example.com | grep -i Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
student@k8s-client:~$ kubectl describe node k8s-node1.lab.example.com | grep -i Taint 
Taints:             <none>
student@k8s-client:~$ kubectl describe node k8s-node2.lab.example.com | grep -i Taint 
Taints:             node.kubernetes.io/unreachable:NoExecute
student@k8s-client:~$ echo 1 > /opt/KUSC00402/kusc00402.txt

        Question 11 (weight: 4%)

Set the configuration context:
[student@node-1]$ kubectl config use-context k8s
Task
Create a pod named kucc8, with one app container in the pod for each of the following images (there may be 1-4 images):
nginx + redis + memcached

        Question 11 answer

student@k8s-client:~$ kubectl config use-context k8s
Switched to context "k8s".
student@k8s-client:~$ kubectl run kucc8 --image=nginx --dry-run=client -o yaml > 11.yaml
student@k8s-client:~$ vim 11.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kucc8
  name: kucc8
spec:
  containers:
  - image: nginx
    name: nginx

  - image: redis
    name: redis

  - image: memcached
    name: memcached

    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

student@k8s-client:~$ kubectl apply -f 11.yaml
pod/kucc8 created
student@k8s-client:~$ kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
front-end-5f64577768-77xw4   1/1     Running   0          5h42m
kucc8                        3/3     Running   0          2m13s
nginx-kusc00401              1/1     Running   0          5h14m
webserver-74d49b7f8c-6d8rr   1/1     Running   0          5h21m
webserver-74d49b7f8c-st6v6   1/1     Running   0          2d23h
webserver-74d49b7f8c-z68pw   1/1     Running   0          5h21m

        Question 12 (weight: 4%)

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume/

Set the configuration context:
[student@node-1]$ kubectl config use-context hk8s
Task
Create a persistent volume named app-config with capacity 1Gi and access mode ReadWriteOnce. The volume type is hostPath, located at /srv/app-config.

        Question 12 answer

student@k8s-client:~$ kubectl config use-context hk8s
Switched to context "hk8s".
student@k8s-client:~$ kubectl get pv
No resources found
student@k8s-client:~$ vim 12.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/srv/app-config"

student@k8s-client:~$ kubectl apply -f 12.yaml 
persistentvolume/app-config created
student@k8s-client:~$ kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
app-config   1Gi        RWO            Retain           Available 

        Question 13 (weight: 7%)

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim
Set the configuration context:
[student@node-1]$ kubectl config use-context ok8s
Task
Create a new PersistentVolumeClaim:
● Name: pv-volume
● Class: csi-hostpath-sc
● Capacity: 10Mi
Create a new Pod that mounts the PersistentVolumeClaim as a volume:
● Name: web-server
● Image: nginx
● Mount path: /usr/share/nginx/html
Configure the new Pod to have ReadWriteOnce access to the volume.
Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim's capacity to 70Mi and record the change.

        Question 13 answer

student@k8s-client:~$ kubectl config use-context ok8s
Switched to context "ok8s".
student@k8s-client:~$ vim 13.yaml 

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: pv-volume
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

student@k8s-client:~$ kubectl apply -f 13.yaml 
persistentvolumeclaim/pv-volume created
pod/web-server created

student@k8s-client:~$ kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pv-volume   Bound    pvc-1a53adf4-911d-4a68-9cd8-43b39e9c2664   10Mi       RWO            csi-hostpath-sc   11m

student@k8s-client:~$ kubectl edit pvc pv-volume --record
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 70Mi
persistentvolumeclaim/pv-volume edited

        Question 14 (weight: 5%)

Set the configuration context:
[student@node-1]$ kubectl config use-context k8s
Task
Monitor the logs of pod foo and:
● extract the log lines corresponding to the error unable-to-access-website
● write those log lines to /opt/KUTR00101/foobar

        Question 14 answer

student@k8s-client:~$ kubectl logs foo | grep "unable-to-access-website" > /opt/KUTR00101/foobar
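
The filtering step can be tried on made-up sample text (the log lines below are hypothetical; on the exam the input comes from kubectl logs foo):

```shell
# Hypothetical sample log; only the matching line should survive the filter.
log='GET /index.html 200
error: unable-to-access-website
GET /health 200'

# Same grep as the answer above, applied to the sample instead of 'kubectl logs foo'.
filtered="$(printf '%s\n' "$log" | grep "unable-to-access-website")"
echo "$filtered"
```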

        Question 15 (weight: 7%)

Set the configuration context:
[student@node-1]$ kubectl config use-context k8s
Context
Without changing its existing containers, an existing Pod needs to be integrated into Kubernetes' built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good way to meet this requirement.
Task
Add a busybox sidecar container to the existing Pod big-corp-app. The new sidecar container must run the following command: /bin/sh -c tail -n+1 -f /var/log/big-corp-app.log
Use a volume mount named logs to make the file /var/log/big-corp-app.log available to the sidecar container.
Do not change the existing containers. Do not modify the log file's path; both containers must access the file at /var/log/big-corp-app.log.

Relevant reference:
https://kubernetes.io/zh/docs/concepts/storage/volumes/#emptydir

        Question 15 answer

student@k8s-client:~$ kubectl config use-context k8s
Switched to context "k8s".
student@k8s-client:~$ kubectl get pod big-corp-app -o yaml > 15.yaml
student@k8s-client:~$ cp 15.yaml 15.yaml.bk
student@k8s-client:~$ vim 15.yaml

apiVersion: v1
kind: Pod
metadata:
  name: big-corp-app
  namespace: default
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - |
      i=0; while true; do
        echo "$(date) INFO $i" >> /var/log/big-corp-app.log;
        i=$((i+1));
        sleep 1;
      done
    image: busybox
    name: big-corp-app

    volumeMounts:
    - mountPath: /var/log/
      name: logs

  - command: ["/bin/sh","-c","tail -n+1 -f /var/log/big-corp-app.log"]
    image: busybox
    name: sidecar
    volumeMounts:
    - mountPath: /var/log/
      name: logs

  volumes:
  - name: logs
    emptyDir: {}
student@k8s-client:~$ kubectl delete pod big-corp-app
pod "big-corp-app" deleted
student@k8s-client:~$ kubectl apply -f 15.yaml 
pod/big-corp-app created

student@k8s-client:~$ kubectl logs big-corp-app -c sidecar
Wed Sep 29 15:45:57 UTC 2021 INFO 0
Wed Sep 29 15:45:58 UTC 2021 INFO 1
Wed Sep 29 15:45:59 UTC 2021 INFO 2
Wed Sep 29 15:46:00 UTC 2021 INFO 3
Wed Sep 29 15:46:01 UTC 2021 INFO 4
...

        Question 16 (weight: 5%)

Set the configuration context:
[student@node-1]$ kubectl config use-context k8s
Task
Via the pod label name=cpu-utilizer, find the pod consuming a large amount of CPU at runtime, and write the name of the pod with the highest CPU consumption to the file
/opt/KUTR000401/KUTR00401.txt (which already exists).

        Question 16 answer

student@k8s-client:~$ kubectl config use-context k8s
Switched to context "k8s".
student@k8s-client:~$ kubectl top pod -l name=cpu-utilizer --sort-by=cpu
student@k8s-client:~$ echo "<name-of-top-pod>" > /opt/KUTR000401/KUTR00401.txt
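
The "pick the highest" step can be sketched on made-up sample output (the pod names and table below are hypothetical; on the exam the data comes from kubectl top pod -l name=cpu-utilizer):

```shell
# Hypothetical 'kubectl top pod' output: a header row plus one row per pod.
sample='NAME        CPU(cores)   MEMORY(bytes)
pod-a       12m          10Mi
pod-b       250m         12Mi'

# Skip the header, sort numerically on the CPU column (millicores, so the
# leading number is comparable), take the top row, print the pod name.
top_pod="$(printf '%s\n' "$sample" | tail -n +2 | sort -k2 -nr | head -1 | awk '{print $1}')"
echo "$top_pod"
```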

        Question 17 (weight: 13%)

Set the configuration context:
[student@node-1]$ kubectl config use-context wk8s
Task
A Kubernetes worker node named wk8s-node-0 is in NotReady state. Investigate why this is the case, and take appropriate measures to bring the node back to Ready state, ensuring that any changes are made permanent.
You can connect to the failed node with:
[student@node-1]$ ssh wk8s-node-0
You can gain elevated privileges on the node with:
[student@wk8s-node-0]$ sudo -i

        Question 17 answer

kubectl config use-context wk8s
ssh wk8s-node-0
sudo -i
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet
