1. RBAC (4%)
Context
Create a new ClusterRole for the deployment pipeline and bind it to a specific ServiceAccount, scoped to a specific namespace.
TASK
Create a new ClusterRole named deployment-clusterrole that only allows creating the following resource types:
- Deployment
- StatefulSet
- DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Limited to the namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token.
$ kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
$ kubectl create serviceaccount cicd-token -n app-team1
$ kubectl create rolebinding cicd-token-binding -n app-team1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
$ kubectl describe rolebindings.rbac.authorization.k8s.io -n app-team1 cicd-token-binding
Name:         cicd-token-binding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  deployment-clusterrole
Subjects:
  Kind            Name        Namespace
  ----            ----        ---------
  ServiceAccount  cicd-token  app-team1
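As an optional extra check, impersonating the ServiceAccount confirms the binding took effect; this uses only the objects created above and should answer yes for the allowed resources:
$ kubectl auth can-i create deployments -n app-team1 --as=system:serviceaccount:app-team1:cicd-token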
2. Drain a node (4%)
TASK
Set the node named ek8s-node-1 to unavailable and reschedule all Pods running on it.
$ kubectl cordon ek8s-node-1
$ kubectl drain ek8s-node-1 --ignore-daemonsets --force
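If the drain is blocked by pods that use emptyDir volumes, the same command also takes --delete-emptydir-data (named --delete-local-data on older kubectl versions); add it only if the error message asks for it:
$ kubectl drain ek8s-node-1 --ignore-daemonsets --delete-emptydir-data --force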
3. Cluster upgrade (7%)
TASK
The existing Kubernetes cluster is running version 1.20.0. Upgrade all Kubernetes control-plane and node components on the master node only to version 1.20.1.
Also upgrade kubelet and kubectl on the master node.
Be sure to drain the master node before the upgrade and uncordon it afterwards. Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service, or any other add-ons.
$ kubectl cordon master
$ kubectl drain master --ignore-daemonsets --force
$ apt-get update && apt-get install -y kubeadm=1.20.1-00
$ kubeadm upgrade plan
$ kubeadm upgrade apply v1.20.1 --etcd-upgrade=false
$ apt-get install -y kubelet=1.20.1-00 kubectl=1.20.1-00
$ systemctl daemon-reload && systemctl restart kubelet.service
$ kubectl uncordon master
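A quick check after uncordoning: the master node should be Ready again and report the new kubelet version.
$ kubectl get nodes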
4. etcd backup and restore (7%)
TASK
First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 and save the snapshot to /var/lib/backup/etcd-snapshot.db.
Then restore the existing previous snapshot located at /var/lib/backup/etcd-snapshot-previous.db.
The following TLS certificates and key are provided to connect to the server with etcdctl:
- CA certificate: /opt/KUIN00601/ca.crt
- Client certificate: /opt/KUIN00601/etcd-client.crt
- Client key: /opt/KUIN00601/etcd-client.key
$ ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key \
snapshot save /var/lib/backup/etcd-snapshot.db
$ ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 snapshot restore /var/lib/backup/etcd-snapshot-previous.db
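Note: snapshot restore is an offline operation, so it is common to restore into a fresh data directory and then point etcd at it; the directory below is only an example, and on a kubeadm cluster the hostPath of the etcd data volume in /etc/kubernetes/manifests/etcd.yaml would then be updated to match.
$ ETCDCTL_API=3 etcdctl snapshot restore /var/lib/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd-restore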
5. NetworkPolicy (4%)
TASK
In the existing namespace foobar, create a new NetworkPolicy named allow-port-from-namespace that allows the namespace corp-bar to access port 9200 of its Pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: foobar
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: corp-bar
    ports:
    - protocol: TCP
      port: 9200
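The namespaceSelector above assumes the corp-bar namespace carries the label project=corp-bar. Check its existing labels first and match them in the selector, or add the label if the question allows it:
$ kubectl get ns corp-bar --show-labels
$ kubectl label ns corp-bar project=corp-bar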
6. SVC (4%)
TASK
Reconfigure the existing deployment front-end and add a port specification named http that exposes port 80/tcp of the existing nginx container.
Create a new service named front-end-svc that exposes the container port http.
Configure the new service so that the individual Pods are exposed through a NodePort on the nodes they are scheduled to. The named port has to be added to the deployment first; see the sketch below.
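A minimal sketch of the ports section to add to the nginx container (for example with kubectl edit deployment front-end); port names must be lowercase, so the exam's "HTTP" becomes http:
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
With the port named http, --target-port=http is equivalent to --target-port=80 in the expose command below.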
$ kubectl expose deployment front-end --type=NodePort --name=front-end-svc --port=80 --target-port=80 --protocol=TCP
7. Ingress (7%)
TASK
Create a new nginx Ingress resource that follows the rules below:
Name: pong
Namespace: ing-internal
Expose service hello on path /hello, using service port 5678
Verification:
$ curl -kL <INTERNAL_IP>/hello
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
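On clusters where the controller only watches Ingresses with an explicit ingress class (common when no default IngressClass is set), the curl check may fail until a class is added; nginx here is an assumption about the installed controller:
spec:
  ingressClassName: nginx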
8. Scale a deployment (4%)
TASK
Scale the deployment web-server to 4 pods.
$ kubectl scale deployment web-server --replicas=4
9. Schedule a Pod via node label (4%)
TASK
Schedule a pod according to the following rules:
Name: nginx-kusc00401
Image: nginx
Node selector: disk=ssd
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx-kusc00401
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: ssd
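The nodeSelector only lets the pod schedule if some node actually carries the disk=ssd label; check for it, and label a node only if the question expects you to (node name below is a placeholder):
$ kubectl get nodes -l disk=ssd
$ kubectl label node <node-name> disk=ssd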
10. Count ready nodes (4%)
TASK
Check how many nodes are Ready (excluding nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt.
$ kubectl describe node $(kubectl get nodes --no-headers | awk '$2=="Ready"{print $1}') | grep -i taints | grep -v -i noschedule | wc -l > /opt/KUSC00402/kusc00402.txt
11. Create a multi-container pod (4%)
TASK
Create a pod named kucc8 that uses the following images (the question may specify 1 to 4 of them):
nginx + redis + memcached + consul
apiVersion: v1
kind: Pod
metadata:
  name: kucc8
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul
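Apply the manifest (kucc8.yaml is just a working file name) and wait until the pod reports all four containers ready:
$ kubectl apply -f kucc8.yaml
$ kubectl get pod kucc8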
12. PV (4%)
TASK
Create a PersistentVolume named app-config with a size of 1Gi and access mode ReadOnlyMany, using a hostPath volume mounted at the local path /srv/app-config.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: "/srv/app-config"
13. PVC (7%)
TASK
Create a PVC that meets the following requirements:
Name: pv-volume
Class: csi-hostpath-sc
Capacity: 10Mi
Create a pod that mounts the PVC:
Name: test
Image: nginx
Mount path: /usr/share/nginx/html
Set the volume access mode to ReadWriteOnce.
Finally, use kubectl edit or kubectl patch to change the PVC size to 70Mi.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: pv-volume
  containers:
  - name: test
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
$ kubectl edit pvc pv-volume --record
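The resize only goes through if the csi-hostpath-sc StorageClass has allowVolumeExpansion: true. As an alternative to kubectl edit, the same change expressed as a patch:
$ kubectl patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}' --record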
14. Output logs (5%)
TASK
Monitor the logs of pod bar
and write the log lines containing error to /opt/KUTR0001/bar.
$ kubectl logs bar | grep error >> /opt/KUTR0001/bar
15. Sidecar (13%)
TASK
Add a busybox sidecar container to the pod big-corp-app. The new sidecar container runs the following command:
/bin/sh -c 'tail -n+1 -f /var/log/big-corp-app.log'
Mount a volume named logs and make sure /var/log/big-corp-app.log is reachable from the sidecar.
Do not modify the existing container.
Do not modify the log path or the log file.
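Containers cannot be added to a running pod, so the usual workflow is to export the spec, edit it, and recreate the pod (big-corp-app.yaml is just a working file name), which yields a manifest along the lines of the one below:
$ kubectl get pod big-corp-app -o yaml > big-corp-app.yaml
$ kubectl delete pod big-corp-app
$ kubectl apply -f big-corp-app.yaml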
apiVersion: v1
kind: Pod
metadata:
  name: big-corp-app
spec:
  containers:
  # existing container, shown here only for completeness; do not modify it
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/big-corp-app.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log
  # new busybox sidecar that tails the same log file through the shared volume
  - name: busybox
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/big-corp-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
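After recreating the pod, the log should be readable through the sidecar container:
$ kubectl logs big-corp-app -c busybox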
16. top (5%)
TASK
Among the pods labeled name=cpu-loader, find the pod with the highest CPU usage and write its name to /opt/KUTR00401/KUTR00401.txt (the file already exists).
$ kubectl top pods -A -l name=cpu-loader --sort-by='cpu'
$ echo <name-of-the-top-pod> > /opt/KUTR00401/KUTR00401.txt
17. kubelet (13%)
TASK
The worker node named wk8s-node-0 is in NotReady state. Find and fix the problem.
$ ssh wk8s-node-0
$ sudo systemctl status kubelet.service
$ sudo systemctl start kubelet.service
$ sudo systemctl enable kubelet.service