CKA Official Test: Preparation, Exam Instructions & Practice Tasks

This article walks through preparing for the CKA exam: certificate details, exam rules, environment setup, and practice tasks with solutions. It covers RBAC roles, node management, service exposure, network policies, and other key topics, with the goal of familiarizing candidates with the exam workflow and its technical requirements.

1. Preparation

1.1 Certificate Details

https://training.linuxfoundation.cn/certificate/details/1

Notes:

  1. Register an account and fill in the invoice information (company or personal).
  2. After paying you must complete real-name verification, which takes about one day.
  3. When registering for the exam, it is recommended to write your name in English, e.g. "Zhen Su"; a passport is checked at exam time.
    (If you register under a Chinese name, prepare: national ID card + VISA credit card, or national ID card + international driving permit; both cards must carry your signature.)
  4. Pre-exam checks:
    • Set the Chrome browser to allow all cookies
    • Install the PSI Google Chrome Extension (a proxy may be needed to install it)
    • Check your network (a sustained download speed of about 2 MB/s)
  5. Book the exam at least 3 days in advance. If you book a slot but are not prepared to attend on time, your exam and retake eligibility are cancelled (the next exam might as well be in 2088).
  6. On exam day, log in to the exam page 20 minutes early and wait for the proctor.

1.2 Exam Rules

  1. At the top of the browser, share your desktop and camera (make sure your computer's microphone and camera work), and mute your computer.
  2. Close every application other than the Chrome browser.
  3. Check your room (1. no one may enter or leave during the exam; 2. the walls must not be transparent; 3. the background must not be a window; 4. no other people in the room; 5. check the desk you are testing on: no electronic devices on it).
  4. Check your wrists: no electronic devices.
  5. You may request a break during the exam, but the time still counts toward the 2-hour limit.
  6. Only the notepad provided under "Exam Controls" on the exam web page may be used.
  7. There are no copy/paste keyboard shortcuts in the exam; use the right mouse button.
  8. Do not talk during the exam.
  9. Keep your head facing the camera during the exam.
  10. The proctor will ask you to raise both hands from time to time.
  11. Access to kubernetes.io is allowed; links on the official site may jump to other sites, so avoid following such secondary redirects.
  12. At most two Chrome browser tabs may be open during the exam.
  13. On a MacBook, adjust "Security & Privacy" in advance: add Google Chrome under Screen Recording; this change requires closing and reopening the application.

2. Exam Instructions

2.1 Exam Environment

  • Each task in this exam must be completed on the designated cluster/configuration context.
  • To minimize switching, the tasks are grouped so that all questions for a given cluster appear consecutively. (In practice they are not actually displayed consecutively.)
  • The exam environment consists of seven clusters composed of varying numbers of nodes; the clusters used in the practice environment are shown below:

CKA Clusters

Cluster   Members               Practice-environment nodes
-         console               physical host
k8s       1 master, 2 workers   k8s-master; k8s-worker1, k8s-worker2
ek8s      1 master, 2 workers   ek8s-master; ek8s-worker1, ek8s-worker2
  • At the start of each task you will be given the command that ensures you are working on the correct cluster.
  • The nodes making up each cluster can be reached via ssh: ssh <nodename>
  • You can obtain elevated privileges on any node with: sudo -i
  • You can also use sudo to execute commands with elevated privileges at any time.
  • You must return to the base node (hostname node-1) after completing each task.
  • Nested ssh is not supported.
  • You can use kubectl with the appropriate context from the base node to work on any cluster; when connected to a cluster member via ssh, kubectl will only work against that particular cluster (see the sketch after this list).
  • For convenience, all environments, i.e. the base system and the cluster nodes, have the following additional command-line tools pre-installed and pre-configured:
    • kubectl, with alias k and Bash autocompletion
    • jq for YAML/JSON processing
    • tmux for terminal multiplexing
    • curl and wget for testing web services
    • man and man pages for additional documentation
  • Further instructions for connecting to cluster nodes will be provided in the relevant tasks.
  • If no explicit namespace is specified, the default namespace should be used.
  • If you need to destroy/recreate a resource to perform a specific task, it is your responsibility to back up the resource definition appropriately before destroying it.
  • The CKA and CKAD environments currently run Kubernetes v1.23.2. The CKA, CKS, and CKAD exam environments are aligned with the latest K8s minor version within roughly 4 to 8 weeks of its release date.
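
A minimal sketch of that context workflow (ck8s is only the context name used in the practice tasks below, not a fixed value):

$ kubectl config get-contexts        # list the available contexts
$ kubectl config use-context ck8s    # switch to the cluster named in the task
$ kubectl config current-context     # confirm where you are before running anything
$ k get nodes                        # the pre-configured alias k behaves like kubectl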

2.2 Resources Allowed During the Exam

During the exam, candidates may:

  • Review the exam content instructions presented in the command-line terminal

  • Review files installed by the distribution (i.e. /usr/share and its subdirectories)

  • Open one additional tab in their Chrome or Chromium browser in order to access assets at: https://kubernetes.io/docs/, https://github.com/kubernetes/, https://kubernetes.io/blog/ and their subdomains. This includes all available language translations of these pages (e.g. https://kubernetes.io/zh/docs/)

    Download the bookmarks file below and import it:

    https://gitee.com/suzhen99/k8s/blob/master/Bookmarks/CKA-Bookmark.html
    https://www.examslocal.com/linuxfoundation
    
  • Candidates may not open other tabs or navigate to other sites (including https://discuss.kubernetes.io/).

  • The allowed sites above may contain links to external sites. It is the candidate's responsibility not to click any link that causes them to navigate to a disallowed domain.

2.3 Exam Technical Instructions

  1. Root privileges can be obtained by running sudo -i

  2. Rebooting your server is permitted at any time

  3. Do not stop or tamper with the certerminal process, as this will end your exam

  4. Do not block incoming ports 8080/tcp, 4505/tcp, and 4506/tcp. This includes firewall rules found in the distribution's default firewall configuration files as well as interactive firewall commands

  5. Use Ctrl+Alt+W instead of Ctrl+W

    5.1 Ctrl+W is a keyboard shortcut that closes the current tab in Google Chrome

  6. Your exam terminal does not support Ctrl+C and Ctrl+V. To copy and paste text, use:

    6.1 Linux: select text to copy and middle-click to paste (if there is no middle button, pressing the left and right buttons together also works)

    6.2 Mac: Cmd+C to copy, Cmd+V to paste

    6.3 Windows: Ctrl+Insert to copy, Shift+Insert to paste

    6.4 In addition, you may find it helpful to use the notepad (see the top menu under "Exam Controls") to manipulate text before pasting it into the command line

  7. Installing the services and applications included in this exam may require modifying system security policies in order to complete successfully

  8. Only one terminal console is available during the exam. Terminal multiplexers such as GNU Screen and tmux can be used to create virtual consoles (see the tmux sketch below)
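
A minimal tmux sketch for virtual consoles, assuming the default key bindings:

$ tmux                # start a new session
  Ctrl-b %            # split the window into two panes
  Ctrl-b o            # jump between panes
  Ctrl-b d            # detach, leaving the session running
$ tmux attach         # reattach to it later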

2.4 Acceptable Testing Locations

The following is expected of an acceptable testing location:

  • A tidy workspace
    • No objects such as paper, writing instruments, electronic devices, or anything else on the work surface
    • No objects such as paper, trash bins, or anything else under the testing surface
  • Clean walls
    • No paper/printouts on the walls
    • Paintings and other wall decorations are acceptable
    • Candidates will be asked to remove non-decorative items before the exam starts
  • Lighting
    • The space must be well lit so that the proctor can see the candidate's face, hands, and surrounding work area
    • No bright lights or windows behind the candidate
  • Other
    • The candidate must remain within the camera frame during the exam
    • The space must be private and free of excessive noise; public places such as coffee shops, stores, and open-plan office environments are not allowed.

For more information about the policies, procedures, and rules during the exam, see the [Candidate Handbook].

2.5 Other Resources

If you need additional help, log in to https://trainingsupport.linuxfoundation.org with your LF account and use the search bar to find answers to your question, or choose your request type from the categories provided.

2.A Exam Tips

  • Configure shell completion

    $ echo 'source <(kubectl completion bash)' >> ~/.bashrc
    
  • Pay attention to which cluster you are operating on

  • Pay attention to which node you are operating on

  • Pay attention to which namespace (ns) you are operating in (see the sketch below)
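
A small sketch for these checks (the alias and completion lines follow the kubectl documentation; the exam environment already ships the alias):

$ alias k=kubectl
$ complete -o default -F __start_kubectl k        # make Bash completion work for the alias too
$ kubectl config view --minify | grep namespace   # namespace configured in the current context
$ hostname                                        # which node am I on right now?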

3. Practice Tasks

Task 1. RBAC - role based access control(4/25%)

Task weight: 4%
Set configuration context:

$ kubectl config use-context ck8s

Context:

You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.

Task:

  • Create a new ClusterRole named deployment-clusterrole, which only allows creating the following resource types:

    • Deployment
    • StatefulSet
    • DaemonSet
  • Create a new ServiceAccount named cicd-token in the existing namespace app-team1

  • bind the new ClusterRole deployment-clusterrole to the new Service Account cicd-token, limited to the namespace app-team1

Answer:

  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Create the ClusterRole (the trailing "s" on the resource names may be omitted)
$ kubectl create clusterrole --help

*$ kubectl create clusterrole deployment-clusterrole \
  --verb=create \
  --resource=Deployment,StatefulSet,DaemonSet
  3. Create the ServiceAccount
*$ kubectl --namespace app-team1 \
    create serviceaccount cicd-token
  4. Create the RoleBinding
$ kubectl create rolebinding --help

*$ kubectl create rolebinding cicd-token-deployment-clusterrole \
  --clusterrole=deployment-clusterrole \
  --serviceaccount=app-team1:cicd-token \
  --namespace=app-team1
  5. Verify
$ kubectl describe clusterrole deployment-clusterrole
Name:         deployment-clusterrole
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources          Non-Resource URLs  Resource Names  Verbs
  ---------          -----------------  --------------  -----
  `daemonsets.apps`    []                 []              [`create`]
  `deployments.apps`   []                 []              [`create`]
  `statefulsets.apps`  []                 []              [`create`]
  
$ kubectl -n app-team1 get serviceaccounts
NAME         SECRETS   AGE
`cicd-token` 1         16m
default      1         18m

$ kubectl -n app-team1 get rolebindings
NAME                                ROLE                                 AGE
cicd-token-deployment-clusterrole   ClusterRole/deployment-clusterrole   11m

$ kubectl -n app-team1 describe rolebindings cicd-token-deployment-clusterrole
Name:         cicd-token-deployment-clusterrole
Labels:       <none>
Annotations:  <none>
Role:
  Kind: `ClusterRole`
  Name: `deployment-clusterrole`
Subjects:
  Kind            Name        Namespace
  ----            ----        ---------
  ServiceAccount `cicd-token`  `app-team1`
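
For reference, a declarative equivalent of the ClusterRole and RoleBinding created above (a sketch only; the imperative commands are enough in the exam):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-clusterrole
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cicd-token-deployment-clusterrole
  namespace: app-team1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployment-clusterrole
subjects:
- kind: ServiceAccount
  name: cicd-token
  namespace: app-team1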

Task 2. drain - highly-available(4/25%)

Task weight: 4%
Set configuration context:

$ kubectl config use-context ck8s

Task:

Set the node named k8s-worker1 as unavailable and reschedule all the pods running on it

Answer:

  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Check the node status
$ kubectl get nodes
k8s-master    Ready    control-plane   9d    v1.24.1
k8s-worker1   Ready    <none>          9d    v1.24.1
k8s-worker2   Ready    <none>          9d    v1.24.1
  3. Drain the workloads and mark the node unschedulable
$ kubectl drain k8s-worker1
node/k8s-worker1 cordoned
error: unable to drain node "k8s-worker1" due to error:[cannot delete DaemonSet-managed Pods (use `--ignore-daemonsets` to ignore): kube-system/calico-node-g5wj7, kube-system/kube-proxy-8pv56, cannot delete Pods with local storage (use `--delete-emptydir-data` to override): kube-system/metrics-server-5fdbb498cc-k4mgt], continuing command...
There are pending nodes to be drained:
 k8s-worker1
cannot delete DaemonSet-managed Pods (use `--ignore-daemonsets` to ignore): kube-system/calico-node-g5wj7, kube-system/kube-proxy-8pv56
cannot delete Pods with local storage (use `--delete-emptydir-data` to override): kube-system/metrics-server-5fdbb498cc-k4mgt

*$ kubectl drain k8s-worker1 --ignore-daemonsets --delete-emptydir-data
  4. Verify
$ kubectl get nodes
NAME          STATUS                     ROLES                  AGE   VERSION
k8s-master    Ready                      control-plane,master   84m   v1.24.1
k8s-worker1   Ready,`SchedulingDisabled` <none>                 79m   v1.24.1
k8s-worker2   Ready                      <none>                 76m   v1.24.1

$ kubectl get pod -A -owide | grep worker1
kube-system    `calico-node-j6r9s`                          1/1     Running             1 (9d ago)    9d    192.168.147.129   k8s-worker1   <none>           <none>
kube-system    `kube-proxy-psz2g`                           1/1     Running             1 (9d ago)    9d    192.168.147.129   k8s-worker1   <none>           <none>

Task 3. upgrade - Kubeadm(7/25%)

Task weight: 7%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • Given an existing Kubernetes cluster running version 1.24.1, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.24.2

  • You are also expected to upgrade kubelet and kubectl on the master node

Be sure to drain the master node before upgrading it and uncordon it after the upgrade. Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service, or any other addons.

Answer:
  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Check the node status
$ kubectl get nodes
NAME          STATUS                     ROLES                  AGE   VERSION
`k8s-master`  Ready                      control-plane,master   97m  `v1.24.1`
k8s-worker1   Ready,SchedulingDisabled   <none>                 92m   v1.24.1
k8s-worker2   Ready                      <none>                 89m   v1.24.1
  3. Log in to k8s-master
*$ ssh root@k8s-master
  4. Run "kubeadm upgrade" (for the first control-plane node): upgrade kubeadm first:
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.24.2-00 && \
apt-mark hold kubeadm

  5. Verify the upgrade plan:
kubeadm version

kubeadm upgrade plan

  6. Choose the target version to upgrade to and run the appropriate command, for example:
kubeadm upgrade apply v1.24.2 --etcd-upgrade=false
:<<EOF
...
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: `y`
...
EOF
  7. Drain the node
kubectl drain k8s-master --ignore-daemonsets

  8. Upgrade kubelet and kubectl
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.24.2-00 kubectl=1.24.2-00 && \
apt-mark hold kubelet kubectl

  9. Restart the kubelet
systemctl daemon-reload 
systemctl restart kubelet

  10. Uncordon the node
kubectl uncordon k8s-master
  11. Verify the result
<Ctrl-D>

$ kubectl get nodes
NAME          STATUS                     ROLES                  AGE    VERSION
k8s-master    Ready                      control-plane,master   157m  `v1.24.2`
k8s-worker1   Ready,SchedulingDisabled   <none>                 152m   v1.24.1
k8s-worker2   Ready                      <none>                 149m   v1.24.1

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2",....

Task 4. snapshot - Implement etcd(7/25%)

Task weight: 7%

No configuration context change required for this item.

Task:

  • First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to /srv/backup/etcd-snapshot.db
Creating a snapshot of the given instance is expected to complete in seconds. If the operation seems to hang, something is likely wrong with your command. Use CTRL + C to cancel the operation and try again.
  • Next, restore an existing, previous snapshot located at /srv/data/etcd-snapshot-previous.db
The following TLS certificates/key are supplied for connecting to the server with etcdctl:
  • CA certificate: /opt/KUIN00601/ca.crt
  • Client certificate: /opt/KUIN00601/etcd-client.crt
  • Client key: /opt/KUIN00601/etcd-client.key

Hint: reference: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#built-in-snapshot
In the real exam this task runs on a separate cluster.
In the practice environment, do the etcd snapshot task on its own.

Answer:

  1. Back up (snapshot save)
$ ETCDCTL_API=3 etcdctl snapshot save --help

*$ ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/opt/KUIN00601/ca.crt \
  --cert=/opt/KUIN00601/etcd-client.crt \
  --key=/opt/KUIN00601/etcd-client.key \
  snapshot save /srv/backup/etcd-snapshot.db
  
  2. Restore (snapshot restore)
*$ sudo mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bk

*$ kubectl get pod -A
The connection to the server 192.168.147.128:6443 was refused - did you specify the right host or port?

*$ sudo mv /var/lib/etcd /var/lib/etcd.bk

*$ sudo chown $USER /srv/data/etcd-snapshot-previous.db

*$ sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/opt/KUIN00601/ca.crt \
  --cert=/opt/KUIN00601/etcd-client.crt \
  --key=/opt/KUIN00601/etcd-client.key \
  --data-dir /var/lib/etcd \
  snapshot restore /srv/data/etcd-snapshot-previous.db

*$ sudo mv /etc/kubernetes/manifests.bk /etc/kubernetes/manifests
  3. Verify
$ ETCDCTL_API=3 etcdctl snapshot status /srv/backup/etcd-snapshot.db
89703627, 14521, 1929, 4.3 MB

$ kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}

Task 5. network policy - network interface(7/20%)

Task weight: 7%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the existing namespace internal to connect to port 8080 of other pods in the same namespace

  • Ensure that the new NetworkPolicy:

    • does not allow access to pods not listening on port 8080
    • does not allow access from pods not in namespace internal

Hint:
  • Be sure to work out whether the rule governs ingress or egress traffic
  • In the real exam the corresponding namespace may not exist yet

Answer:

  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Confirm that the namespace internal exists
$ kubectl get namespaces internal
NAME       STATUS   AGE
internal   Active   41m
  3. Check the namespace labels
*$ kubectl get namespaces internal --show-labels 
NAME     STATUS   AGE    LABELS
internal Active   111s  `kubernetes.io/metadata.name=internal`
  4. Write the YAML
*$ echo set number et ts=2 cuc > ~/.vimrc

$ vim 5.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
# name: test-network-policy
  name: allow-port-from-namespace 
# namespace: default
  namespace: internal
spec:
# podSelector:
  podSelector: {}
#   matchLabels:
#     role: db
  policyTypes:
  - Ingress
# - Egress
  ingress:
  - from:
#   - ipBlock:
#       cidr: 172.17.0.0/16
#       except:
#       - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: internal
#   - podSelector:
#       matchLabels:
#         role: frontend
    ports:
    - protocol: TCP
#     port: 6379
      port: 8080
# egress:
# - to:
#   - ipBlock:
#       cidr: 10.0.0.0/24
#   ports:
#   - protocol: TCP
#     port: 5978
  5. Apply it
*$ kubectl apply -f 5.yml
  6. Verify
$ kubectl -n internal describe networkpolicies allow-port-from-namespace 
Name:         allow-port-from-namespace
Namespace:    internal
Created on:   YYYY-mm-dd 21:39:09 +0800 CST
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    To Port: 8080/TCP
    From:
      NamespaceSelector: kubernetes.io/metadata.name=internal
  Not affecting egress traffic
  Policy Types: Ingress

Task 6. expose - service types(7/20%)

Task weight: 7%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx.

  • Create a new service named front-end-svc exposing the container port http.

  • Configure the new service to also expose the individual pods via a NodePort on the nodes on which they are scheduled.

Answer:

  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Confirm the deployment
$ kubectl get deployments front-end
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
`front-end` 1/1     1            1           10m
  3. Look up how the ports spec is written (optional); searching the web docs is recommended
$ kubectl explain --help

$ kubectl explain pod.spec.containers

$ kubectl explain pod.spec.containers.ports

$ kubectl explain deploy.spec.template.spec.containers.ports
  4. Edit the deployment front-end
*$ kubectl edit deployments front-end
...omitted...
  template:
...omitted...
    spec:
      containers:
      - image: nginx
        # add these 3 lines
        ports:
        - name: http
          containerPort: 80
...omitted...
$ kubectl get deployments front-end
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
front-end  `1/1`    1            1           12m
  5. Create the Service (Option A: kubectl expose)
*$ kubectl expose deployment front-end \
   --port=80 --target-port=http \
   --name=front-end-svc \
   --type=NodePort

  6. Create the Service (Option B, recommended: YAML manifest)
*$ kubectl get deployments front-end --show-labels
NAME        READY   UP-TO-DATE   AVAILABLE   AGE   LABELS
front-end   0/1     1            0           37s  `app=front-end`

*$ vim 6.yml
apiVersion: v1
kind: Service
metadata:
# name: my-service
  name: front-end-svc
spec:
  # required by the task (confirm)
  type: NodePort
  selector:
#   app: MyApp
    app: front-end
  ports:
    - port: 80
      targetPort: http
*$ kubectl apply -f 6.yml
  7. Verify
$ kubectl get services front-end-svc
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
front-end-svc  `NodePort`  10.106.46.251   <none>        80:`32067`/TCP   39s

$ curl k8s-worker1:32067
...output omitted...
<title>Welcome to nginx!</title>
...output omitted...

Task 7. ingress nginx - Ingress controllers(7/20%)

Task weight: 7%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • Create a new nginx Ingress resource as follows:
    • Name: ping
    • Namespace: ing-internal
    • Exposing service hi on path /hi using service port 5678
The availability of service hello can be checked using the following command,which should return hi:
$ curl -kL <INTERNAL_IP>/hi

Answer:

  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Install an ingress controller (IngressClass)

    Option A: in the real exam

*$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml

    Option B: in the practice environment

  • Open in the browser:
    "https://github.com/kubernetes"
    ingress-nginx

    deploy/
    static/provider/
    cloud/
    deploy.yaml

  • Copy the file contents and paste them into d.yml

*$ vim d.yml

Practice environment: if Task 2 has been completed (containerd runtime), the hash after each image must be removed (see the sed sketch below)

...
        image: registry.k8s.io/ingress-nginx/controller:v1.5.1
...
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343
...
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343
...
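
A small sketch for stripping those digests in one pass, assuming they appear as @sha256:<hex> suffixes on the image: lines (adjust the file name if yours differs):

$ sed -i 's/@sha256:[0-9a-f]*//g' d.yml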

Practice environment: if Task 2 has not been done (docker runtime), there is no need to remove the hash after the images

*$ kubectl apply -f d.yml
  3. Verify the installation
*$ kubectl get ingressclasses
NAME    CONTROLLER             PARAMETERS   AGE
`nginx` k8s.io/ingress-nginx   <none>       11m

*$ kubectl get pod -A | grep ingress
ingress-nginx   ingress-nginx-admission-create-w2h4k        0/1     Completed   0              92s
ingress-nginx   ingress-nginx-admission-patch-k6pgk         0/1     Completed   1              92s
ingress-nginx   `ingress-nginx-controller-58b94f55c8-gl7gk`   1/1  `Running`    0              92s
  4. Write the YAML file
*$ vim 7.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
# name: minimal-ingress
  name: ping
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
# add this 1 line
  namespace: ing-internal
spec:
# ingressClassName: nginx-example
  ingressClassName: nginx
  rules:
  - http:
      paths:
#     - path: /testpath
      - path: /hi
        pathType: Prefix
        backend:
          service:
#           name: test
            name: hi
            port:
#             number: 80
              number: 5678
  5. Apply it
*$ kubectl apply -f 7.yml

$ kubectl -n ing-internal get ingress
NAME   CLASS   HOSTS   ADDRESS   PORTS   AGE
`ping` nginx   *                 80      11m
  6. Verify
*$ kubectl get pods -A -o wide | grep ingress
...output omitted...
ingress-nginx   `ingress-nginx-controller`-769f969657-4zfjv   1/1     Running     0             12m  `172.16.126.15`    k8s-worker2   <none>           <none>

*$ curl 172.16.126.15/hi
hi

Task 8. scale - scale applications(4/15%)

Task weight: 4%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • scale the deployment webserver to 6 pods

Answer:

  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Check the deployment
$ kubectl get deployments webserver 
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
webserver  `1/1`    1            1           30s
  3. Scale the replicas (Option A: kubectl edit)
*$ kubectl edit deployments webserver
...omitted...
spec:
  progressDeadlineSeconds: 600
# replicas: 1
  replicas: 6
...omitted...

Option B: kubectl scale

$ kubectl scale deployment webserver --replicas 6
  4. Verify
$ kubectl get deployments webserver -w
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
webserver  `6/6`    6            6           120s
<Ctrl-C>

Task 9. schedule - Pod scheduling(4/15%)

Task weight 4%
Set configuration context:

$ kubectl config use-context ck8s
Task:
  • Schedule a pod as follows:
    • Name: nginx-kusc00401
      • image: nginx
      • Node selector: disk=spinning

Hint: search the official docs for "nodeselector"
Assign Pods to Nodes | Kubernetes

Answer:

  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Create the pod
*$ vim 9.yml
apiVersion: v1
kind: Pod
metadata:
# name: nginx
  name: nginx-kusc00401
  labels:
    env: test
spec:
  containers:
  - name: nginx
    # as required by the task
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
#   disktype: ssd
    disk: spinning
  3. Apply it
*$ kubectl apply -f 9.yml
  4. Confirm
$ kubectl get pod nginx-kusc00401 -o wide -w
NAME              READY   STATUS    RESTARTS   AGE   IP              NODE          NOMINATED NODE   READINESS GATES
nginx-kusc00401   1/1    Running   0          11s   172.16.126.30   `k8s-worker2`  <none>           <none>
<Ctrl-C>

$ kubectl get nodes -l disk=spinning
NAME          STATUS   ROLES    AGE   VERSION
`k8s-worker2`   Ready    <none>   9d    v1.24.1
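
If you prefer not to type the manifest from scratch, a sketch that scaffolds it with a client-side dry run (then add the nodeSelector block by hand):

$ kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > 9.yml
$ vim 9.yml    # add nodeSelector: with disk: spinning under spec, then apply as above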

Task 10. describe - Pod scheduling(4/15%)

Task weight 4%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt

Hint:
  • Check carefully whether each taint's effect is NoSchedule

Answer:

  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Check the node status
$ kubectl get nodes
k8s-master    Ready                      control-plane   9d    v1.24.2
k8s-worker1   Ready,SchedulingDisabled   <none>          9d    v1.24.1
k8s-worker2   Ready                      <none>          9d    v1.24.1
  3. Check which nodes carry taints
*$ kubectl describe nodes | grep -i taints
Taints:             node-role.kubernetes.io/control-plane:`NoSchedule`
Taints:             node.kubernetes.io/unschedulable:`NoSchedule`
Taints:             <none>
  4. Write the result (a cross-check sketch follows)
*$ echo 1 > /opt/KUSC00402/kusc00402.txt
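
As an optional cross-check before writing the file, a sketch that lists each node next to its taint effects (compare with the Ready column of kubectl get nodes and count the Ready nodes whose line shows no NoSchedule):

$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].effect}{"\n"}{end}'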

Task 11. multi Containers(4/15%)

Task weight 4%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • Create a pod named kucc1 with a single app container for each of the following images running inside it (there may be between 1 and 4 images specified):

nginx + redis + memcached + consul

Answer:

  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Create the pod
*$ vim 11.yml 
apiVersion: v1
kind: Pod
metadata:
# name: myapp-pod
  name: kucc1
spec:
  containers:
# - name: myapp-container
  - name: nginx
#   image: busybox:1.28
    image: nginx
# add the following containers
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul
  3. Apply it
*$ kubectl apply -f 11.yml
  4. Verify
$ kubectl get pod kucc1 -w
NAME    READY   STATUS    RESTARTS   AGE
kucc1  `4/4`    Running   0          77s

Ctrl-C

Task 12. pv - persistent volumes(4/10%)

Task weight: 4%
Set configuration context:

Task:

  • Create a persistent volume with name app-data, of capacity 1Gi and access mode ReadWriteMany. The type of volume is hostPath and its location is /srv/app-data

Answer:

Search the docs site for "pv":
Configure a Pod to Use a PersistentVolume for Storage | Kubernetes

  1. Write the YAML
*$ vim 12.yml
apiVersion: v1
kind: PersistentVolume
metadata:
# name: task-pv-volume
  name: app-data
spec:
# storageClassName: manual
  capacity:
#   storage: 10Gi
    storage: 1Gi
  accessModes:
#   - ReadWriteOnce
    - ReadWriteMany
  hostPath:
#   path: "/mnt/data"
    path: "/srv/app-data"
# add the line below
    type: DirectoryOrCreate
  2. Apply it
*$ kubectl apply -f 12.yml
  3. Verify
$ kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
`app-data` `1Gi`      `RWX`           Retain           Available                                   4s

Task 13. pvc - persistent volume claims(7/10%)

Task weight: 7%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • Create a new PersistentVolumeClaim:

    • Name: pv-volume
    • Class: csi-hostpath-sc
    • Capacity: 10Mi
  • Create a new pod which mounts the PersistentVolumeClaim as a volume:

    • Name: web-server
    • image: nginx
    • Mount path: /usr/share/nginx/html
  • Configure the new pod to have ReadWriteOnce access on the volume.

  • Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim to a capacity of 70Mi and record that change.

Answer:

  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Create the PVC

    Search the docs site for "pvc":

    Configure a Pod to Use a PersistentVolume for Storage | Kubernetes

*$ vim 13pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
# name: claim1
  name: pv-volume
spec:
  accessModes:
    - ReadWriteOnce
# storageClassName: fast
  storageClassName: csi-hostpath-sc
  resources:
    requests:
#     storage: 30Gi
      storage: 10Mi
*$ kubectl apply -f 13pvc.yml

$ kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pv-volume  `Bound`   pvc-89935613-3af9-4193-9a68-116067cf1a34   10Mi       RWO            csi-hostpath-sc   6s

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS      REASON   AGE
app-data                                   1Gi        RWX            Retain           Available                                                  72m
pvc-89935613-3af9-4193-9a68-116067cf1a34   10Mi       RWO            Delete          `Bound`      default/pv-volume   csi-hostpath-sc            39s
  3. Create the pod
*$ vim 13pod.yml
apiVersion: v1
kind: Pod
metadata:
# name: task-pv-pod
  name: web-server
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
#       claimName: task-pv-claim
        claimName: pv-volume
  containers:
#   - name: task-pv-container
    - name: web-server
      image: nginx
#     ports:
#       - containerPort: 80
#         name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
*$ kubectl apply -f 13pod.yml
pod/web-server created

$ kubectl get pod web-server
NAME         READY   STATUS    RESTARTS   AGE
web-server   1/1    `Running`  0          9s
  4. Allow volume expansion on the StorageClass

    Search the docs site for "storageclass":

    Storage Classes | Kubernetes

*$ kubectl edit storageclasses csi-hostpath-sc
...output omitted...
# add this 1 line
allowVolumeExpansion: true

$ kubectl get storageclasses -A 
NAME             PROVISIONER     RECLAIMPOLICY  VOLUMEBINDINGMODE  ALLOWVOLUMEEXPANSION  AGE
csi-hostpath-sc   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           `true`                   5m51s
  5. Expand the PVC and record the change (a kubectl patch alternative is sketched after the edit below)

    https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands

*$ kubectl edit pvc pv-volume --record
...omitted...
spec:
...omitted...
#     storage: 10Mi
      storage: 70Mi
...omitted...
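
The task also allows kubectl patch; a sketch of the equivalent patch command (--record is deprecated in this release but still accepted):

*$ kubectl patch pvc pv-volume --record \
  -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'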
  6. Confirm (the practice environment may still show 10Mi; the exam environment displays the new value correctly)
$ kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pv-volume  Bound    pvc-9a5fb9b6-b127-4868-b936-cb4f17ef910e  `70Mi`      RWO            csi-hostpath-sc   31m

Task 14. logs - node logging(5/30%)

Task weight: 5%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • Monitor the logs of pod bar and:
  • Extract log lines corresponding to error unable-to-access-website
  • Write them to /opt/KUTR00101/bar

Answer:

  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Extract the matching log lines
*$ kubectl logs bar | grep unable-to-access-website > /opt/KUTR00101/bar
  3. Verify
$ cat /opt/KUTR00101/bar
YYYY-mm-dd 07:13:03,618: ERROR `unable-to-access-website`

Task 15. sidecar - Manage container stdout(7/30%)

Task weight 7%
Set configuration context:

$ kubectl config use-context ck8s

context:

Without changing its existing containers, an existing pod needs to be integrated into kubernetes’s built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good and common way to accomplish this requirement.

Task:

  • Add a busybox sidecar container named sidecar to the existing pod big-corp-app. The new sidecar container has to run the following command:

    /bin/sh -c tail -f /var/log/legacy-app.log
    
  • Use a volume mount named logs to make the file /var/log/legacy-app.log available to the sidecar container.

Don't modify the existing container.
Don't modify the path of the log file,both containers must access it at /var/log/legacy-app.log.

Answer:

  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Edit the YAML
*$ kubectl get pod big-corp-app -o yaml > 15.yml

*$ vim 15.yml
...omitted...
spec:
  containers:
...omitted...
    volumeMounts:
# existing container: add these 2 lines
    - name: logs
      mountPath: /var/log
# new container: add the lines below
  - name: sidecar
    image: busybox
    args: [/bin/sh, -c, 'tail -f /var/log/legacy-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
...omitted...
  volumes:
# add these 2 lines
  - name: logs
    emptyDir: {}
...omitted...
Method A: replace in place
*$ kubectl replace -f 15.yml --force
pod "big-corp-app" deleted
pod/big-corp-app replaced

Method B: delete and re-apply
*$ kubectl delete -f 15.yml
*$ kubectl apply -f 15.yml
  3. Confirm
$ kubectl get pod big-corp-app -w
NAME           READY   STATUS    RESTARTS   AGE
big-corp-app  `2/2`    Running   1          37s

$ kubectl logs -c sidecar big-corp-app

Task 16. top - monitor applications(5/30%)

Task weight: 5%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • From the pods with label name=cpu-loader, find those running high CPU workloads and write the name of the pod consuming the most CPU to the file /opt/KUTR00401/KUTR00401.txt (which already exists)

Answer:

  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Find the pod (a sorted variant is sketched after the output)
$ kubectl top pod -h

*$ kubectl top pod -l name=cpu-loader -A
NAMESPACE   NAME                          CPU(cores)   MEMORY(bytes)
default    `bar`                          `1m`         5Mi
default     cpu-loader-5b898f96cd-56jf5   0m           3Mi
default     cpu-loader-5b898f96cd-9zlt5   0m           4Mi
default     cpu-loader-5b898f96cd-bsvsb   0m           4Mi           
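
Optionally, a sketch that sorts the same listing by CPU so the heaviest consumer appears first:

$ kubectl top pod -l name=cpu-loader -A --sort-by=cpu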
  3. Write the result
*$ echo bar > /opt/KUTR00401/KUTR00401.txt

Task 17. Daemon - cluster component(13/30%)

Task weight: 13%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • A Kubernetes worker node named k8s-worker1 is in state NotReady. Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.
You can ssh to the failed node using:
$ ssh k8s-worker1
You can assume elevated privileges on the node with the following command:
$ sudo -i

Hint:
  • This task is related to Task 2
  • Practice environment setup: kiosk@k8s-master:~$ cka-setup 17

Answer:

  1. Switch to the correct Kubernetes cluster
*$ kubectl config use-context ck8s
  2. Check the node status
$ kubectl get nodes
NAME          STATUS                     ROLES                  AGE     VERSION
k8s-master    Ready                      control-plane,master   43d   v1.24.1
k8s-worker1  `NotReady`                  <none>                 43d   v1.24.1

*$ kubectl describe nodes k8s-worker1
...output omitted...
Conditions:
  Type                 Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----                 ------    -----------------                 ------------------                ------              -------
  NetworkUnavailable   False     Tue, 31 May YYYY 11:25:06 +0000   Tue, 31 May YYYY 11:25:06 +0000   CalicoIsUp          Calico is running on this node
  MemoryPressure       Unknown   Tue, 31 May YYYY 13:51:08 +0000   Tue, 31 May YYYY 13:53:42 +0000  `NodeStatusUnknown   Kubelet stopped posting node status.`
  DiskPressure         Unknown   Tue, 31 May YYYY 13:51:08 +0000   Tue, 31 May YYYY 13:53:42 +0000  `NodeStatusUnknown   Kubelet stopped posting node status.`
  PIDPressure          Unknown   Tue, 31 May YYYY 13:51:08 +0000   Tue, 31 May YYYY 13:53:42 +0000  `NodeStatusUnknown   Kubelet stopped posting node status.`
  Ready                Unknown   Tue, 31 May YYYY 13:51:08 +0000   Tue, 31 May YYYY 13:53:42 +0000  `NodeStatusUnknown   Kubelet stopped posting node status.`
...output omitted...
  3. Start the kubelet service
*$ ssh k8s-worker1

*$ sudo -i

*# systemctl enable --now kubelet.service
# systemctl status kubelet

Press q to exit the status view

Ctrl-D to exit sudo

Ctrl-D to exit ssh

  4. Confirm
*$ kubectl get nodes
NAME          STATUS                     ROLES                  AGE   VERSION
k8s-master    Ready                      control-plane,master   43d   v1.24.1
k8s-worker1  `Ready`,SchedulingDisabled  <none>                 43d   v1.24.1

A1. Preparing the Practice Environment (for exam candidates)

[VMware/k8s-master]

  • First restore the 1.24 snapshot
  • Insert the *.iso and tick "Connected"
sudo mount -o uid=1000 /dev/sr0 /media/

/media/cka-setup

kubectl get pod -A | grep -v Running
# once everything is Running, take a powered-off snapshot named CKA

A2. exam-grade

$ media/cka-grade

 Spend Time: up 1 hours, 1 minutes  Wed 01 Jun YYYY 04:58:06 PM UTC
================================================================================
 PASS	Task1.  - RBAC
 PASS	Task2.  - drain
 PASS	Task3.  - upgrade
 PASS	Task4.  - snapshot
 PASS	Task5.  - network-policy
 PASS	Task6.  - service
 PASS	Task7.  - ingress-nginx
 PASS	Task8.  - replicas
 PASS	Task9.  - schedule
 PASS	Task10. - NoSchedule
 PASS	Task11. - multi_pods
 PASS	Task12. - pv
 PASS	Task13. - Dynamic-Volume
 PASS	Task14. - logs
 PASS	Task15. - Sidecar
 PASS	Task16. - Metric
 PASS	Task17. - Daemon (kubelet, containerd, docker)
================================================================================
 The results of your CKA v1.24:  `PASS`	 Your score: `100`
$ media/cka-grade 1

 Spend Time: up 1 hours, 2 minutes  Wed 01 Jun YYYY 04:58:14 PM UTC
================================================================================
 `PASS`	Task1.  - RBAC
================================================================================
 The results of your CKA v1.24:  FAIL	 Your score: 4