k8s Exercises (Part 3)

1、Let us try that. Upgrade the application by setting the image on the deployment to kodekloud/webapp-color:v2

Do not delete and re-create the deployment. Only set the new image name for the existing deployment.

#Run the command 
kubectl edit deployment frontend 
#and modify the image to kodekloud/webapp-color:v2. Next, save and exit. The pods should be recreated with the new image.

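An equivalent one-liner is kubectl set image. The container name inside the deployment is not given in the question, so simple-webapp below is only an assumption; check the real name first:

#check the container name (assumed here to be simple-webapp)
kubectl describe deployment frontend | grep -A2 Containers
#then point that container at the new image
kubectl set image deployment/frontend simple-webapp=kodekloud/webapp-color:v2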
2、The reason the application failed is that we have not created the secret yet. Create a new secret named db-secret with the data given below.

You may follow any one of the methods discussed in lecture to create the secret.

Secret Name: db-secret

Secret 1: DB_Host=sql01

Secret 2: DB_User=root

Secret 3: DB_Password=password123

controlplane ~ ✖ kubectl create secret generic db-secret --from-literal=DB_Host=sql01 --from-literal=DB_User=root --from-literal=DB_Password=password123
secret/db-secret created

controlplane ~ ➜  kubectl get secrets db-secret 
NAME        TYPE     DATA   AGE
db-secret   Opaque   3      96s

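The application pod then has to consume the secret, for example as environment variables via envFrom. A minimal sketch of the relevant pod spec, assuming the application pod is called webapp-pod and keeps its existing image (both names are placeholders for whatever the lab environment uses):

apiVersion: v1
kind: Pod
metadata:
  name: webapp-pod                        # placeholder; reuse the existing pod definition
spec:
  containers:
  - name: webapp
    image: kodekloud/simple-webapp-mysql  # placeholder; keep the image the pod already uses
    envFrom:
    - secretRef:
        name: db-secret                   # injects DB_Host, DB_User and DB_Password as env vars

Since environment variables only take effect at container start, the pod has to be deleted and re-created after this change.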
3、Create a multi-container pod with 2 containers.

Use the spec given below.
If the pod goes into CrashLoopBackOff, add the command sleep 1000 to the lemon container (see the sketch after the spec below).

Name: yellow

Container 1 Name: lemon

Container 1 Image: busybox

Container 2 Name: gold

Container 2 Image: redis
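A manifest along these lines satisfies the spec; sleep 1000 keeps the busybox container alive so the pod does not end up in CrashLoopBackOff:

apiVersion: v1
kind: Pod
metadata:
  name: yellow
spec:
  containers:
  - name: lemon
    image: busybox
    command: ["sleep", "1000"]   # busybox exits immediately without a long-running command
  - name: gold
    image: redis

#create the pod (assuming the manifest was saved as yellow.yaml)
kubectl apply -f yellow.yaml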
4、The application outputs logs to the file /log/app.log. View the logs and try to identify the user having issues with Login.

Inspect the log file inside the pod.
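The file can be read in place with kubectl exec; the pod name webapp-1 below is only an assumption, substitute the pod that actually runs the application:

#dump the whole log
kubectl exec webapp-1 -- cat /log/app.log
#or narrow it down to login-related entries
kubectl exec webapp-1 -- grep -i login /log/app.log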
5、Edit the pod to add a sidecar container to send logs to Elastic Search. Mount the log volume to the sidecar container.

Only add a new container. Do not modify anything else. Use the spec provided below.

Name: app

Container Name: sidecar

Container Image: kodekloud/filebeat-configured

Volume Mount: log-volume

Mount Path: /var/log/event-simulator/

Existing Container Name: app

Existing Container Image: kodekloud/event-simulator

root@controlplane ~ ✖ cat /tmp/kubectl-edit-2237082501.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: elastic-stack
  labels:
    name: app
spec:
  containers:
  - name: app
    image: kodekloud/event-simulator
    volumeMounts:
    - mountPath: /log
      name: log-volume

  - name: sidecar
    image: kodekloud/filebeat-configured
    volumeMounts:
    - mountPath: /var/log/event-simulator/
      name: log-volume

  volumes:
  - name: log-volume
    hostPath:
      # directory location on host
      path: /var/log/webapp
      # this field is optional
      type: DirectoryOrCreate

6、We need to take node01 out for maintenance. Empty the node of all applications and mark it unschedulable.

Node node01 Unschedulable
Pods evicted from node01
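kubectl drain does both in one step: it cordons the node and then evicts its pods; --ignore-daemonsets is needed because DaemonSet-managed pods cannot be evicted:

kubectl drain node01 --ignore-daemonsets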
7、The maintenance tasks have been completed. Configure the node node01 to be schedulable again.
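Uncordoning makes the node schedulable again (pods that were evicted do not move back automatically; only newly scheduled pods can land on it):

kubectl uncordon node01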
8、hr-app is a critical app; we do not want it to be removed, and we do not want to schedule any more pods on node01.
Mark node01 as unschedulable so that no new pods are scheduled on this node.
Make sure that hr-app is not affected.
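Because hr-app must not be evicted, the node is only cordoned, which blocks new scheduling without touching the pods already running there (unlike drain):

kubectl cordon node01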
9、We will be upgrading the master node first. Drain the master node of workloads and mark it unschedulable.
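Assuming the master node is named controlplane, as in the etcd output below:

kubectl drain controlplane --ignore-daemonsets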
10、The master node in our cluster is planned for a regular maintenance reboot tonight. While we do not anticipate anything to go wrong, we are required to take the necessary backups. Take a snapshot of the ETCD database using the built-in snapshot functionality. (ETCD backup and restore: questions 10-11)

Store the backup file at location /opt/snapshot-pre-boot.db

root@controlplane /var/lib/etcd ➜  ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 \
> --cacert=/etc/kubernetes/pki/etcd/ca.crt \
> --cert=/etc/kubernetes/pki/etcd/server.crt \
> --key=/etc/kubernetes/pki/etcd/server.key \
> snapshot save /opt/snapshot-pre-boot.db
Snapshot saved at /opt/snapshot-pre-boot.db

root@controlplane /var/lib/etcd ➜  

root@controlplane /var/lib/etcd ➜  ls /opt/
cni  containerd  snapshot-pre-boot.db

11、Luckily we took a backup. Restore the original state of the cluster using the backup file.

First Restore the snapshot:

root@controlplane:~# ETCDCTL_API=3 etcdctl  --data-dir /var/lib/etcd-from-backup \
snapshot restore /opt/snapshot-pre-boot.db
2022-06-02 02:59:34.790712 I | mvcc: restore compact to 3826
2022-06-02 02:59:34.797951 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32

Note: in this example we restore the snapshot to a different directory, but on the same server where the backup was taken (the controlplane node). Therefore the only option the restore command needs is --data-dir.
Next, update /etc/kubernetes/manifests/etcd.yaml:
We have restored the etcd snapshot to a new path on the controlplane (/var/lib/etcd-from-backup), so the only change needed in the YAML file is to point the hostPath volume called etcd-data from the old directory (/var/lib/etcd) to the new directory (/var/lib/etcd-from-backup).

  volumes:
  - hostPath:
      path: /var/lib/etcd-from-backup
      type: DirectoryOrCreate
    name: etcd-data

With this change, /var/lib/etcd inside the container points to /var/lib/etcd-from-backup on the controlplane (which is what we want).

When this file is updated, the etcd pod is automatically re-created, because it is a static pod located under the /etc/kubernetes/manifests directory.

Note 1: When the etcd pod definition has changed, it restarts automatically, and so do kube-controller-manager and kube-scheduler. Wait 1-2 minutes for these pods to restart. You can run the command docker ps | grep etcd to see when the etcd container has restarted.

Note 2: If the etcd pod does not reach the 1/1 Ready state, restart it with kubectl delete pod -n kube-system etcd-controlplane and wait 1 minute.

Note 3: This is the simplest way to make sure etcd uses the restored data after the etcd pod is re-created. You do not have to change anything else.

If you do change --data-dir in the YAML file to /var/lib/etcd-from-backup, make sure the volumeMounts entry for etcd-data is updated as well, with the mount path pointing to /var/lib/etcd-from-backup (this whole step is optional and not required to complete the restore).
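For reference, the optional variant would touch roughly these fragments of /etc/kubernetes/manifests/etcd.yaml (a sketch; all other flags and fields stay unchanged):

    command:
    - etcd
    - --data-dir=/var/lib/etcd-from-backup
    # ...other etcd flags unchanged
    volumeMounts:
    - mountPath: /var/lib/etcd-from-backup
      name: etcd-data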
