CKA 2021 November Latest Answers


CKA 2021-11-18 latest answers

1
source <(kubectl completion bash)
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
kubectl create ns app-team1
kubectl -n app-team1 create serviceaccount cicd-token
kubectl -n app-team1 create rolebinding cicd-token-binding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
kubectl describe -n app-team1 rolebindings.rbac.authorization.k8s.io cicd-token-binding
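Optionally, verify the binding works (an extra sanity check, not required by the task):
kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1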
2
kubectl drain node1 --ignore-daemonsets
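If the drain is blocked by pods using emptyDir volumes or by unmanaged pods, extra flags may be needed (only add them if the plain command fails):
kubectl drain node1 --ignore-daemonsets --delete-emptydir-data --force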
3
Upgrade the cluster from 1.22.1 to 1.22.2.
Drain the master node first, then uncordon it after the upgrade.
During the exam, start with:
ssh master
sudo -i
source <(kubectl completion bash)
kubectl drain master --ignore-daemonsets
kubectl get nodes   # check the current node versions first
apt-cache show kubeadm | grep 1.22.2-00
apt-get update
apt-get install kubeadm=1.22.2-00
kubeadm version
kubeadm upgrade plan
kubeadm upgrade apply 1.22.2 --etcd-upgrade=false
apt-get install kubelet=1.22.2-00
kubelet --version
apt-get install kubectl=1.22.2-00
kubectl version --client
systemctl status kubelet
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon master
exit   # leave sudo -i and the ssh session, then verify from the student machine
kubectl get node
4
export ETCDCTL_API=3
etcdctl --endpoints=https://127.0.0.1:2379 --cacert="/opt/KUIN00601/ca.crt" --cert="/opt/KUIN00601/server.crt" --key="/opt/KUIN00601/server.key" snapshot save /srv/data/etcd-snapshot.db
Restore:
etcdctl snapshot restore --endpoints=https://127.0.0.1:2379 --cacert="/opt/KUIN00601/ca.crt" --cert="/opt/KUIN00601/server.crt" --key="/opt/KUIN00601/server.key" /var/lib/etcd-snapshot-previous.db
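A snapshot can be sanity-checked with etcdctl before or after restoring (an optional step, not part of the original answer):
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /srv/data/etcd-snapshot.db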
5
NetworkPolicy (networkpolicy.yaml)
https://kubernetes.io/zh/docs/concepts/services-networking/network-policies/#networkpolicy-resource

If the allowed traffic comes from a different namespace (networkpolicy.yaml):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: foobar
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: corp-bar
    ports:
    - protocol: TCP
      port: 5679

If the allowed traffic comes from within the same namespace (networkpolicy.yaml):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: foobar
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 9000
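To confirm either policy parsed as intended, an optional check:
kubectl describe networkpolicy allow-port-from-namespace -n foobar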
6
kubectl expose deployment front-end --port=80 --target-port=80 --protocol=TCP --type=NodePort --name=front-end-svc
kubectl get deployments.apps
kubectl edit deployments.apps front-end
In the pod template, under the nginx container's name: field, add these three lines:
ports:
- containerPort: 80
  name: http
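A quick check that the Service was created and picked up endpoints (an optional verification, not part of the original answer):
kubectl get svc front-end-svc -o wide
kubectl get endpoints front-end-svc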

7 ingress
https://kubernetes.io/zh/docs/concepts/services-networking/ingress/#the-ingress-resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678

kubectl get ingress -n ing-internal -owide   # get the IP, then verify with: curl -kL <IP>/hi
kubectl edit ingress pong -n ing-internal
8
kubectl scale deployment web-server --replicas=6

9 Schedule a pod
kubectl get nodes --show-labels
kubectl label node node2 disk=ssd   # ignore this line (practice setup only)
kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > pod_nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-kusc00401
  name: nginx-kusc00401
spec:
  containers:
  - image: nginx
    name: nginx-kusc00401
    resources: {}
  nodeSelector:
    disk: ssd
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
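Apply the manifest and confirm the pod landed on the labeled node (the apply step is implied by the task; the -o wide check is optional):
kubectl apply -f pod_nginx.yaml
kubectl get pod nginx-kusc00401 -o wide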

10
kubectl describe nodes|grep -i taint
kubectl describe nodes|grep -i taint|grep NoSchedule
Here 2 is the count; depending on the cluster there may be no nodes tainted NoSchedule at all.
echo 2 > /KUSC004002.TXT
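The same count can be cross-checked with a pair of one-liners (a sketch; verify the final number by hand during the exam):
kubectl get nodes --no-headers | grep -w Ready | wc -l
kubectl describe nodes | grep -i taint | grep -c NoSchedule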

11
kubectl run kucc8 --image=nginx --dry-run=client -oyaml > 11.yaml
Edit 11.yaml so the containers list matches the images the question asks for:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kucc8
  name: kucc8
spec:
  containers:
  - image: redis
    name: redis
  - image: consul
    name: consul
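Apply and optionally confirm all requested containers are present (the jsonpath check is an extra step, not from the original answer):
kubectl apply -f 11.yaml
kubectl get pod kucc8 -o jsonpath='{.spec.containers[*].name}'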

12
https://kubernetes.io/zh/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: "/srv/app-config"
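An optional check that the volume registered (it should show as Available until something claims it):
kubectl get pv app-config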
13 pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi

apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  volumes:
  - name: pv-volume
    persistentVolumeClaim:
      claimName: pv-volume
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: pv-volume

Then expand the claim by editing the storage value under spec:
kubectl edit pvc pv-volume --record
Change storage to 70Mi.
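An optional check that the claim bound and the resize took effect:
kubectl get pvc pv-volume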
14
kubectl run bar --image=nginx
Replace "access-website" in the command below with the string the question asks for:
kubectl logs bar | grep access-website > /opt/kutr00101/bar

15
https://kubernetes.io/zh/docs/concepts/cluster-administration/logging/
kubectl get pod 11-factor-app -oyaml > 15.yaml   # use the pod name given in the question
Add to the existing container:
volumeMounts:
- name: logs
  mountPath: /var/log
Add the sidecar container (fill in per the question):
- name: sidecar
  image: busybox
  args: [/bin/sh, -c, 'tail -n+1 -f /var/log/11-factor-app.log']
  volumeMounts:
  - name: logs
    mountPath: /var/log
Under volumes:, add:
- name: logs
  emptyDir: {}
kubectl delete -f 15.yaml
kubectl apply -f 15.yaml
kubectl exec 11-factor-app -c sidecar -- tail -f /var/log/11-factor-app.log
16
kubectl top pod -l name=cpu-user -A --sort-by=cpu
Once found: echo <pod-name> >> xxxx (the file path the question gives).

17
A node is NotReady; bring it back to Ready.
ssh to the faulty node
sudo -i
systemctl status kubelet
systemctl start kubelet    # starting it is enough
systemctl enable kubelet   # enable on boot
kubectl get node
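If kubelet refuses to start, its logs usually show why (a general troubleshooting step, not part of the original answer):
journalctl -u kubelet --no-pager | tail -n 50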
