Advanced etcd Backup and Restore: k8s + Velero + MinIO - Day 06

We previously covered etcd snapshot-level backup and restore, which is generally sufficient for day-to-day work. Snapshot-level backups and restores, however, are always cluster-wide: there is no way to back up or restore just one or a few specific pieces of data. That is where Velero comes in: it can back up and restore specified resources.

1. Velero

1.1 Velero overview

Official site: https://velero.io/

(1) Velero is an open-source, cloud-native disaster recovery and migration tool written in Go (originally developed by Heptio, which has since been acquired by VMware). It can safely back up, restore, and migrate Kubernetes cluster resources and data.


(2) Velero works with standard Kubernetes clusters, whether on a private platform or in the public cloud. Besides disaster recovery, it can also be used for resource migration, moving containerized applications from one cluster to another.


(3) Velero works by backing up Kubernetes data to object storage for durability and availability; the default backup retention period is 720 hours (30 days), and backups can be downloaded and restored whenever needed.

1.2 How Velero differs from etcd snapshot backups

(1) An etcd snapshot is a global backup (similar to a full MySQL backup): even if you only need to restore a single resource object (like restoring one MySQL database), you still have to roll the whole cluster back to the state of the snapshot (like a full MySQL restore), which affects the pods and services running in other namespaces (just as a full restore would affect the data of other MySQL databases).


(2) Velero can back up selectively, for example a single namespace or individual resource objects, and on restore it can bring back just that namespace or object without affecting pods and services running in other namespaces.
Velero backups can also be incremental for volume data: after the first backup, a subsequent backup only uploads data that has changed since the previous one, rather than re-uploading everything.


(3) Velero supports object storage backends such as Ceph and OSS.


(4) Velero supports scheduled tasks for periodic backups (although etcd snapshots can achieve the same thing with a CronJob); see the schedule sketch after this list.


(5) Velero can create snapshots of AWS EBS volumes and restore from them:
https://www.qloudx.com/velero-for-kubernetes-backup-restore-stateful-workloads-with-aws-ebs-snapshots/
https://github.com/vmware-tanzu/velero-plugin-for-aws
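
As a minimal sketch of such a scheduled backup (it uses the velero-system namespace and the awsuser kubeconfig set up later in this article; the schedule name is hypothetical):

# Back up the myserver namespace every day at 02:00, keeping each backup
# for 720h (Velero's default retention)
velero schedule create myserver-daily \
  --schedule="0 2 * * *" \
  --include-namespaces myserver \
  --ttl 720h \
  --kubeconfig=./awsuser.kubeconfig \
  --namespace velero-system

# List the schedule and the backups it produces
velero schedule get --kubeconfig=./awsuser.kubeconfig --namespace velero-system
velero backup get --kubeconfig=./awsuser.kubeconfig --namespace velero-system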

1.3 Velero architecture

(diagram: overall Velero architecture)

1.4 Backup workflow

Docs: https://velero.io/docs/v1.11/how-velero-works/

~# velero backup create myserver-ns-backup-${DATE} --include-namespaces myserver --kubeconfig=./awsuser.kubeconfig --namespace velero-system

(1) The Velero client calls the Kubernetes API server to create a Backup object.
(2) The BackupController, watching through the API server, picks up the new backup task and validates it.
(3) Once validation passes, the BackupController performs the backup, querying the API server for the data that needs to be backed up.
(4) The BackupController uploads the collected data to the configured object storage server.

By default, velero backup create makes disk snapshots of any persistent volumes. You can adjust the snapshot behavior with additional flags; run velero backup create --help to see the available ones. Snapshots can be disabled entirely with --snapshot-volumes=false.
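
For example, a resource-only backup that skips volume snapshots entirely (a sketch; the backup name is hypothetical):

# Back up only the Kubernetes resources, no volume snapshots
velero backup create myserver-ns-backup-nosnap \
  --include-namespaces myserver \
  --snapshot-volumes=false \
  --kubeconfig=./awsuser.kubeconfig \
  --namespace velero-system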

(diagram: backup workflow)

1.5 Restore workflow

(1) The Velero client calls the Kubernetes API server to create a Restore object.
(2) The RestoreController, watching through the API server, picks up the restore object and validates it.
(3) The RestoreController fetches the backup information from the object storage service, then runs some preprocessing on the backed-up resources to make sure they will work on the new cluster, for example validating that the backed-up API versions are usable on the target cluster.
(4) The RestoreController starts the restore process, restoring eligible resources one at a time.


By default, Velero performs a non-destructive restore, meaning it will not delete any data in the target cluster: if a resource from the backup already exists there, Velero skips it. You can configure Velero to use an update policy instead via the --existing-resource-policy restore flag; when this flag is set to update, Velero attempts to update the existing resources in the target cluster to match the ones from the backup.
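
A sketch of that flag in use (the backup name is hypothetical):

# Update resources that already exist in the target cluster to match the backup
velero restore create --from-backup myserver-ns-backup \
  --existing-resource-policy=update \
  --kubeconfig=./awsuser.kubeconfig \
  --namespace velero-system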

(diagram: restore workflow)

2. MinIO

Official site: https://www.minio.org.cn/index.shtml

2.1 Overview

MinIO provides a high-performance, S3-compatible object storage system that lets you build your own cloud storage service.
MinIO is Kubernetes-native and can be used as an object storage suite on every public cloud, every Kubernetes distribution, in private clouds, and at the edge.
MinIO is software-defined, requires no dedicated hardware, and is 100% open source under the GNU AGPL v3.

2.2 Write the MinIO Dockerfile

Here I choose to build my own MinIO image.

[root@containerd-build-image minio]# cat Dockerfile 
FROM tsk8s.top/baseimages/centos-base:7.9.2009

ADD minio /usr/local/bin/
ADD run_minio.sh /root

RUN chmod u+x /usr/local/bin/minio && \
    mkdir -p /data/ && \
    chmod u+x /root/run_minio.sh

CMD ["/root/run_minio.sh"]

2.3 Write the service startup script

[root@containerd-build-image minio]# cat run_minio.sh 
#!/bin/bash
/usr/local/bin/minio server /data --console-address :9090
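
Note that this script sets no credentials, so MinIO falls back to its built-in default account minioadmin/minioadmin (which is what the velero-auth.txt file later in this article uses). A hedged variant that makes the credentials explicit and overridable via container environment variables:

#!/bin/bash
# Default to minioadmin/minioadmin (MinIO's own defaults) unless the
# container is started with MINIO_ROOT_USER / MINIO_ROOT_PASSWORD set
export MINIO_ROOT_USER=${MINIO_ROOT_USER:-minioadmin}
export MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD:-minioadmin}
# exec so minio becomes PID 1 and receives container stop signals
exec /usr/local/bin/minio server /data --console-address :9090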

2.4 Write the image build script

[root@containerd-build-image minio]# cat build-image.sh
#!/bin/bash
nerdctl build -t tsk8s.top/tsk8s/minio:v1 . && \
nerdctl push tsk8s.top/tsk8s/minio:v1 && \
nerdctl rmi tsk8s.top/tsk8s/minio:v1

2.5 Download the MinIO binary

[root@containerd-build-image minio]# wget https://dl.min.io/server/minio/release/linux-amd64/minio

2.6 Build the image

[root@containerd-build-image minio]# sh build-image.sh

(screenshot: image build and push output)

3. Deploy MinIO to the k8s cluster

3.1 Write the Deployment YAML

[root@k8s-harbor01 minio]# pwd
/root/yaml/deployment/minio

[root@k8s-harbor01 minio]# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: minio
  name: minio
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: tsk8s.top/tsk8s/minio:v1
        #command:
        #- /bin/bash
        #- -c
        #args: 
        #- minio server /data --console-address :9090    
        volumeMounts:
        - name: nfs-minio
          mountPath: /data
      volumes:
      - name: nfs-minio
        nfs:
          server: 10.31.200.104
          path: /data/k8s-data/minio-data
      imagePullSecrets:
        - name: dockerhub-image-pull-key

3.2 Create the NFS persistence directory and deploy MinIO

[root@k8s-harbor01 minio]# mkdir -p /data/k8s-data/minio-data

[root@k8s-harbor01 minio]# kubectl apply -f deployment.yaml
[root@k8s-harbor01 minio]# kubectl get po -n myserver |grep minio
minio-5cc5fc9498-lmzwh   1/1     Running   0          41s

3.3 Create Services

3.3.1 Create a NodePort Service

[root@k8s-harbor01 minio]# cat nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: minio-nodeport
  namespace: myserver
spec:
  ports:
  - name: web # web console
    port: 80
    targetPort: 9090
  - name: tcp # S3 API (programmatic access)
    port: 8080
    targetPort: 9000
  type: NodePort
  selector:
    app: minio

[root@k8s-harbor01 minio]# kubectl apply -f nodeport.yaml
service/minio-nodeport created

[root@k8s-harbor01 minio]# kubectl get svc -n myserver
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
minio-nodeport   NodePort   10.100.61.157   <none>        80:32369/TCP,8080:30000/TCP   7m56s

3.3.2 Create a ClusterIP Service

[root@k8s-harbor01 minio]# cat svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: myserver
spec:
  ports:
  - name: tcp
    port: 9000
    targetPort: 9000
  selector:
    app: minio

[root@k8s-harbor01 minio]# kubectl apply -f svc.yaml
service/minio created

[root@k8s-harbor01 minio]# kubectl get svc -n myserver
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
minio            ClusterIP   10.100.39.202   <none>        9000/TCP                      6s
minio-nodeport   NodePort    10.100.61.157   <none>        80:32369/TCP,8080:30000/TCP   10m

3.3.3 Log in to the console and create a bucket

(screenshots: logging in to the MinIO web console through the NodePort and creating the velerodata bucket)
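
The bucket can also be created from the CLI with MinIO's mc client instead of the web console (a sketch; it assumes mc is installed, and <node-ip> stands for any node reachable on NodePort 30000, which maps to the S3 API port 9000):

# Register the MinIO endpoint under the alias "myminio"
mc alias set myminio http://<node-ip>:30000 minioadmin minioadmin
# Create the bucket Velero will write backups to
mc mb myminio/velerodata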

4. Deploy Velero

Mind the compatibility between the Velero version and your k8s version: https://github.com/vmware-tanzu/velero
Installation guide (with MinIO): https://velero.io/docs/v1.11/contributions/minio/

(screenshot: Velero/Kubernetes compatibility matrix)

4.1 Download Velero 1.11

Put it on a machine that can run kubectl.

[root@k8s-harbor01 minio]# cd /usr/local/src/
[root@k8s-harbor01 src]# wget https://github.com/vmware-tanzu/velero/releases/download/v1.11.0/velero-v1.11.0-linux-amd64.tar.gz
[root@k8s-harbor01 src]# ll -rth
total 35M
-rw-r--r-- 1 root root 35M Jul 12 11:30 velero-v1.11.0-linux-amd64.tar.gz

[root@k8s-harbor01 src]# tar xf velero-v1.11.0-linux-amd64.tar.gz
[root@k8s-harbor01 src]# ls
velero-v1.11.0-linux-amd64  velero-v1.11.0-linux-amd64.tar.gz
[root@k8s-harbor01 src]# cd velero-v1.11.0-linux-amd64/
[root@k8s-harbor01 velero-v1.11.0-linux-amd64]# ls
examples  LICENSE  velero

[root@k8s-harbor01 velero-v1.11.0-linux-amd64]# mv velero /usr/local/bin/
[root@k8s-harbor01 velero-v1.11.0-linux-amd64]# velero -h|head
Velero is a tool for managing disaster recovery, specifically for Kubernetes
cluster resources. It provides a simple, configurable, and operationally robust
way to back up your application state and associated data.

If you're familiar with kubectl, Velero supports a similar model, allowing you to
execute commands such as 'velero get backup' and 'velero create schedule'. The same
operations can also be performed as 'velero backup get' and 'velero schedule create'.

Usage:
  velero [command]

(screenshot: velero help output)

4.2 Configure the Velero client authentication environment

If you use an admin-level .kube/config directly, this step can be skipped.

4.2.1 Create the working directory

[root@k8s-harbor01 velero-v1.11.0-linux-amd64]# mkdir -p /data/velero
[root@k8s-harbor01 velero-v1.11.0-linux-amd64]# cd /data/velero

4.2.2 Create the credentials file for accessing MinIO

[root@k8s-harbor01 velero]# cat velero-auth.txt
[default]
aws_access_key_id = minioadmin
aws_secret_access_key = minioadmin

4.3 Create a dedicated backup/restore user

If you use an admin-level .kube/config directly, this step can be skipped.

4.3.1 Prepare the user CSR file for certificate signing

[root@k8s-harbor01 velero]# cat awsuser-csr.json
{
  "CN": "awsuser",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

4.3.2 Prepare the certificate signing tooling

[root@k8s-harbor01 velero]# wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64 
[root@k8s-harbor01 velero]# wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64 
[root@k8s-harbor01 velero]# wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64

[root@k8s-harbor01 velero]# ll -rth
total 40M
-rw-r--r-- 1 root root  76 Jul 12 11:40 velero-auth.txt
-rw-r--r-- 1 root root 220 Jul 12 11:44 awsuser-csr.json
-rw-rw-r-- 1 1037 1037 11M Jul 12 13:28 cfssljson_1.6.1_linux_amd64
-rw-rw-r-- 1 1037 1037 16M Jul 12 13:28 cfssl_1.6.1_linux_amd64
-rw-rw-r-- 1 1037 1037 13M Jul 12 13:28 cfssl-certinfo_1.6.1_linux_amd64

[root@k8s-harbor01 velero]# mv cfssl-certinfo_1.6.1_linux_amd64 cfssl-certinfo
[root@k8s-harbor01 velero]# mv cfssl_1.6.1_linux_amd64 cfssl
[root@k8s-harbor01 velero]# mv cfssljson_1.6.1_linux_amd64 cfssljson
[root@k8s-harbor01 velero]# cp cfssl-certinfo cfssl cfssljson /usr/local/bin/
[root@k8s-harbor01 velero]# chmod  a+x /usr/local/bin/cfssl*

4.3.3 Sign the certificate

For k8s versions >= 1.24.x:

[root@k8s-harbor01 velero]# cp /etc/kubeasz/clusters/k8s-cluster1/ssl/ca-config.json ./
[root@k8s-harbor01 velero]# cfssl gencert -ca=/etc/kubeasz/clusters/k8s-cluster1/ssl/ca.pem -ca-key=/etc/kubeasz/clusters/k8s-cluster1/ssl/ca-key.pem -config=./ca-config.json -profile=kubernetes ./awsuser-csr.json | cfssljson -bare awsuser
2023/07/12 13:42:33 [INFO] generate received request
2023/07/12 13:42:33 [INFO] received CSR
2023/07/12 13:42:33 [INFO] generating key: rsa-2048
2023/07/12 13:42:34 [INFO] encoded CSR
2023/07/12 13:42:34 [INFO] signed certificate with serial number 717922920253183599313985855243172711160642184434
2023/07/12 13:42:34 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s-harbor01 velero]# ll awsuser*
-rw-r--r-- 1 root root  997 Jul 12 13:42 awsuser.csr
-rw-r--r-- 1 root root  220 Jul 12 11:44 awsuser-csr.json
-rw------- 1 root root 1675 Jul 12 13:42 awsuser-key.pem
-rw-r--r-- 1 root root 1391 Jul 12 13:42 awsuser.pem

For k8s versions <= 1.23:

/usr/local/bin/cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubeasz/clusters/k8s-cluster1/ssl/ca-config.json -profile=kubernetes ./awsuser-csr.json | cfssljson -bare awsuser

4.3.4 Copy the certificates to the API server certificate path

[root@k8s-harbor01 velero]# mkdir -p /etc/kubernetes/ssl/ # My cluster was installed with kubeasz, which keeps the cluster certificates elsewhere, so this directory did not exist yet
[root@k8s-harbor01 velero]# cp awsuser-key.pem /etc/kubernetes/ssl/
[root@k8s-harbor01 velero]# cp awsuser.pem /etc/kubernetes/ssl/

4.3.5 Generate the cluster kubeconfig file

[root@k8s-harbor01 velero]# export KUBE_APISERVER="https://10.31.200.100:6443"
[root@k8s-harbor01 velero]# kubectl config set-cluster kubernetes \
> --certificate-authority=/etc/kubeasz/clusters/k8s-cluster1/ssl/ca.pem \
> --embed-certs=true \
> --server=${KUBE_APISERVER} \
> --kubeconfig=./awsuser.kubeconfig
Cluster "kubernetes" set.

[root@k8s-harbor01 velero]# ll -rt awsuser.kubeconfig
-rw------- 1 root root 1951 Jul 12 13:57 awsuser.kubeconfig

[root@k8s-harbor01 velero]# cat awsuser.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURtakNDQW9LZ0F3SUJBZ0lVZE1iYk5JTUJDTGhvMUgrUnRZcXkxOFJISTVFd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13TkRJME1UQTFOakF3V2hnUE1qRXlNekF6TXpFeE1EVTJNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJFd0R3WURWUVFJRXdoSVlXNW5XbWh2ZFRFTE1Ba0dBMVVFQnhNQ1dGTXhEREFLQmdOVgpCQW9UQTJzNGN6RVBNQTBHQTFVRUN4TUdVM2x6ZEdWdE1SWXdGQVlEVlFRREV3MXJkV0psY201bGRHVnpMV05oCk1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBOGF4RExLWWhZWjY4d3JlY0hJY2gKM3ZWZ2x2SG5GN05EUUJWeDg2Wk9uR3lQQXdlL0YwZGIxcmJ4U2g2blZNdStGU1huenE2K0JaWi9hSlFoaTBmaQp2Yi9KN2hKMzRkNGFNMTlUblhpUG16dlFpemR1R0hIeEprbkc4L2N5M25iQnM5WFJZQnJ4WkJwd1ZzTVZTMC9kCjFockFzeXhXSzh5bnVwUzE2ZmVlSnBPaUdRVTBUK3NrR0Jua3BQVXUyRjV6d3NpWWJpeWRDRWswQUl2b3c1ODIKRFNDVk9qMDhGbEx3cENkN3dFLzBJMEdDQXRpVlJwcHpYNzBkSnFRNklmTE5CbnpQWDJkSGJqaDBlVFlRUGE5TgpRUGhnV21KMjNPaE8rVlAzQVQrN1FBQlgrYU1vaDVWL2Ywc0NFbE14UGlyWlJvem1xb2EvVUpQOWxFc0pBOEZ1Cjl3SURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlYKSFE0RUZnUVVPelFkSEdhemxuSVY4WDlTV2Z0d21vSVNidTR3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUwwVQpDV1pXSWs1QlBod3psZjYybnhNNWY3YXpDYUZ2ektXVVA1UUFLeVRIbzZkRUxLaUFsSzhHU3dScVJmY2d1Y3BMCjg5ZXRWYkdCdnA1aG9yZkpPcEg5VmhRY1JiYzVSbHpmMy9ldFo5RExpRVNmOWtCMVh5NFRXVWZjQmMzWW00UVcKRXE1QjdLc2duajJSNzNzN1JLbTVCQXAyeUxpWTRtR0RrZHFVd3Irb2lQbHFJL0Fvam9PTUJ4QWxncS83ZVFlKwo5VWxQZFBwTDcwZmljbG5zR2hyTUNoTmNLZUovdGZ5WkZPL1A2YTVHVWlNSkdLbFRwMHUvanQweUlHckhnN0t2CjAzK3VxR3lnT3ByeUpZWVBzbGtEVE1OekJFb3V0Z3Fqazk5K0U0VGNzM3NZQUVYelh1Ymk3dEVweldwUE5FL0QKQWNESkNLRyt4N0VxQ3BpOC93RT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.31.200.100:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

4.3.6 Configure client certificate authentication

[root@k8s-harbor01 velero]# kubectl config set-credentials awsuser \
--client-certificate=/etc/kubernetes/ssl/awsuser.pem \
--client-key=/etc/kubernetes/ssl/awsuser-key.pem \
--embed-certs=true \
--kubeconfig=./awsuser.kubeconfig

[root@k8s-harbor01 velero]# cat awsuser.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURtakNDQW9LZ0F3SUJBZ0lVZE1iYk5JTUJDTGhvMUgrUnRZcXkxOFJISTVFd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13TkRJME1UQTFOakF3V2hnUE1qRXlNekF6TXpFeE1EVTJNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJFd0R3WURWUVFJRXdoSVlXNW5XbWh2ZFRFTE1Ba0dBMVVFQnhNQ1dGTXhEREFLQmdOVgpCQW9UQTJzNGN6RVBNQTBHQTFVRUN4TUdVM2x6ZEdWdE1SWXdGQVlEVlFRREV3MXJkV0psY201bGRHVnpMV05oCk1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBOGF4RExLWWhZWjY4d3JlY0hJY2gKM3ZWZ2x2SG5GN05EUUJWeDg2Wk9uR3lQQXdlL0YwZGIxcmJ4U2g2blZNdStGU1huenE2K0JaWi9hSlFoaTBmaQp2Yi9KN2hKMzRkNGFNMTlUblhpUG16dlFpemR1R0hIeEprbkc4L2N5M25iQnM5WFJZQnJ4WkJwd1ZzTVZTMC9kCjFockFzeXhXSzh5bnVwUzE2ZmVlSnBPaUdRVTBUK3NrR0Jua3BQVXUyRjV6d3NpWWJpeWRDRWswQUl2b3c1ODIKRFNDVk9qMDhGbEx3cENkN3dFLzBJMEdDQXRpVlJwcHpYNzBkSnFRNklmTE5CbnpQWDJkSGJqaDBlVFlRUGE5TgpRUGhnV21KMjNPaE8rVlAzQVQrN1FBQlgrYU1vaDVWL2Ywc0NFbE14UGlyWlJvem1xb2EvVUpQOWxFc0pBOEZ1Cjl3SURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlYKSFE0RUZnUVVPelFkSEdhemxuSVY4WDlTV2Z0d21vSVNidTR3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUwwVQpDV1pXSWs1QlBod3psZjYybnhNNWY3YXpDYUZ2ektXVVA1UUFLeVRIbzZkRUxLaUFsSzhHU3dScVJmY2d1Y3BMCjg5ZXRWYkdCdnA1aG9yZkpPcEg5VmhRY1JiYzVSbHpmMy9ldFo5RExpRVNmOWtCMVh5NFRXVWZjQmMzWW00UVcKRXE1QjdLc2duajJSNzNzN1JLbTVCQXAyeUxpWTRtR0RrZHFVd3Irb2lQbHFJL0Fvam9PTUJ4QWxncS83ZVFlKwo5VWxQZFBwTDcwZmljbG5zR2hyTUNoTmNLZUovdGZ5WkZPL1A2YTVHVWlNSkdLbFRwMHUvanQweUlHckhnN0t2CjAzK3VxR3lnT3ByeUpZWVBzbGtEVE1OekJFb3V0Z3Fqazk5K0U0VGNzM3NZQUVYelh1Ymk3dEVweldwUE5FL0QKQWNESkNLRyt4N0VxQ3BpOC93RT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.31.200.100:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: awsuser
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxVENDQXIyZ0F3SUJBZ0lVZmNERzhyVnBsY0NhQjMxbitsckhId0FlcFBJd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13TnpFeU1EVXpPREF3V2hnUE1qQTNNekEyTWprd05UTTRNREJhTUdJeEN6QUoKQmdOVkJBWVRBa05PTVJBd0RnWURWUVFJRXdkQ1pXbEthVzVuTVJBd0RnWURWUVFIRXdkQ1pXbEthVzVuTVF3dwpDZ1lEVlFRS0V3TnJPSE14RHpBTkJnTlZCQXNUQmxONWMzUmxiVEVRTUE0R0ExVUVBeE1IWVhkemRYTmxjakNDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2dnRGNpdGtwZVBRYjJlb2NnbzBwS1gKek9JRGFEVGU2MlBBRTdTZkE1TldBVDhHTEIvTFMrM3NlVEVBSm1nUmRneHQ3b0JManNnZzZFeHlVZjgwSnlSaAp3Y1pXSGFwaUdHSDVvTHRweHlOcFZxbUhyTVZCalFpVzR6eVUwS0g1U2FyaFVocTlBMENzUzNTenNGU3FvSWNOCjY2RTdYVWNoaDNzK0VkU25IUUpOMlM1cUVsOWhYZDlGUnFlU0x1Rjg1amVpdGczNkN3dHVFSHVlQ1BmdUg5d2YKa3FqbnJCak53WlNpWTZUWEdVSTFNN3A2dlh4a1pmWFV3aXZmb0ZnVFc3bU5vdG8wa0J2ZHRnWVU4ekJlM3ZpLwprM25vWW00dW5JUUJaRWhlV09UNkxmV0xMMWI5bHBNUTloSTlWM2U5N05mTnFERHpyN1Z4MkRlOWppckJ2VGNDCkF3RUFBYU4vTUgwd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3IKQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCVFNWdFlxM0JPNzYvbHR2N0w3QzJYeQpUeVZyVlRBZkJnTlZIU01FR0RBV2dCUTdOQjBjWnJPV2NoWHhmMUpaKzNDYWdoSnU3akFOQmdrcWhraUc5dzBCCkFRc0ZBQU9DQVFFQUdaQXVUZFhsUlBMUEZSY3lJTUNSVzBrZUlGcjU4bDZQc2s5N1FLTWV3dmFudXUwc3FFZGEKOFl3MTZidy8vb1RPR0ViSDI2MXh4RDdhcXhhMzRvOXJ0ZGNka2NWTE0vck5RSkNRUVltbkkwK1YvN2hLeEs5TQpOZWxEaFJrTDFWclN1cDdtQjN4MjQvbEdrV0c5QmJMeXBEbjRyYXRtSnJRakJ3Y3JhM24wd29jKzdZTDA3SXQyCjE1VGxZNGNaRG42VG43RnE0WTRqVSs3S1AzSzB3OEtNVWRnVkl0RVhYTm5HekpBZHVPa2djZUQ0bTVEUjgxNzEKS29nZ2pRZ2FPT0pFVXpQWlpNeTRWdUVPYVR3RnR6OG5FT2NGeVVhYkZFdEdjaHFMY3ladzdrcWkwa2VvZzljbApRYXRuODRxMTdBWUh5RU44bDZNa01hNm1DZjl6NFNlUm5nPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBcUNBTnlLMlNsNDlCdlo2aHlDalNrcGZNNGdOb05ON3JZOEFUdEo4RGsxWUJQd1lzCkg4dEw3ZXg1TVFBbWFCRjJERzN1Z0V1T3lDRG9USEpSL3pRbkpHSEJ4bFlkcW1JWVlmbWd1Mm5ISTJsV3FZZXMKeFVHTkNKYmpQSlRRb2ZsSnF1RlNHcjBEUUt4TGRMT3dWS3FnaHczcm9UdGRSeUdIZXo0UjFLY2RBazNaTG1vUwpYMkZkMzBWR3A1SXU0WHptTjZLMkRmb0xDMjRRZTU0STkrNGYzQitTcU9lc0dNM0JsS0pqcE5jWlFqVXp1bnE5CmZHUmw5ZFRDSzkrZ1dCTmJ1WTJpMmpTUUc5MjJCaFR6TUY3ZStMK1RlZWhpYmk2Y2hBRmtTRjVZNVBvdDlZc3YKVnYyV2t4RDJFajFYZDczczE4Mm9NUE92dFhIWU43Mk9Lc0c5TndJREFRQUJBb0gvWjhOZ3ZucGgveWhyYXB4OQpQUXYwNm9URjdyZ3JtSFg4MFpPNmxiK09kV0NmWkVacTlUU0RxRlJLZC9PZndKc0dRS1dZalpZVWlXL0Y3MmlmCk11TDlSWGNRSVZrRTlpT1U2T01vVUlMNFpPS2VtZ01pbnB2V2IzYXd5TE1RRU9mS2o2eUEvLytvQWtKVVQ3S1gKSWFvdGFTMHVRRlJqUFlOMEdwdzBaUXErM0t3dk1MNzI2bkZNTTNqZFVsOFdCTzRmREVUSkNucEc2TnFaMW4vNwp4T2xtVkcxczBrdTBmUWlyV3BTRWFGSjF0VEpra2ZKdFdselhJNHFPM3lXZEp6eVBMT3N3R3VMVFVCT2c4V1c1CktoZ3E4VERaSmt0VHE3R1hjZG4vQUs3L0NNTnRsQmZiaFNBbGpCcURHaWtsVlJvcnBDRnMxbkRYNlE3NHExVHcKVGhteEFvR0JBTW5xeHhYRDFLUk4xRWtQUDJCZmtFWDdTcFRrMFVFQnRHVjJQdUw5NlZxaS90QUEwQUVsdXJDSgpwWlVNVXJYdnhFMjd1cTBrMFhuT2YranNvSjZiNGpZVUNrbjFRL2VwcGFzK0JoTjBiMFFpSzFabEFvWU9BMnErCk9XaWtHdE1zYWNlbVpwZmh6a0lYeHV5QUtyQkUrNWVMY1EzK3dDTXU2UTlwSWF6aGZ5aEZBb0dCQU5Vb05JNzYKZkMwdStwc1g4TzVIa2FuK0pJMDNVTWZLVXNQUyt2dHAvR1U4cGdTdkJweTNqRlIzSSticmFROEZPYVdRQmIwNQpJVEVERVVuKzZmZUc5MGlmNEx1dytNODJiWnJ6dzBOSGtyWlN3ZkgvbW82cEMvd1VkS0xEa1V2Wlc4MXprSEV5CnlDeUJYV0paNU9KWlBrY2h2MVNUb2o0dk14cEN4cVJPRHIxTEFvR0JBS2NwOFBKcTh6T25uNVZ3d3laVlY0c28KZE9GNTRtZXdNcHBCWCtUckUzTlBPQ2dhVkJwdkV2VXVyK0FLbUxzNUtrcTBuZUxVZFh3allyQUNueU5RcU9IZQppM29aVE5EUUtYRHc0M1RkMDNRVDJjOG54d1FXdSt0Mld1N3YxYWw0dm9aa2s5RXdSMk5lYmZqRVR4TXB1U1VJCmMydUR1YXduSFJuK0Ryd2kzL1FGQW9HQkFJWklDOFErM0ZlQ2p2R0JoWkEybWZjaldWZDFEM2l0WnJKaWlTWTEKUUlGdVVaQUZ5djZUU3Q0ekovVGpQSTN2MXI4TUdmRjR1Z1lzVG9uMUF1T2lyTW1kbm0vZkx2OHE1S1dIQnUydApleWxNdlUxOG5wdGN0MllZWk5uY3BmM0ljbUxkZUpNM1VJOW85N0ZydkJzejZWM2FUclF6UlRRemU5Z0JWUzVRCjFrdzlBb0dBU0lCcFczdktLZ1dNMlJTcXdXNFZTNWRDeVlGWDhjeUErRTduM1piM3I2Rkd3OTJjRHl1MlhHZ1kKNlhSNXcwRGR5VWVhM1hqa1IvZldzZnl1M0xsMnBuRWdxcmdVKytUTW95bWxQWS9sTGlZNmNHZDJ1OGxxVml2Swp0QURtT3JtVndEVzBlZHdqSGZ3RzlXYS9zelRhMDhtU0NZdTlibnF2RnlhSzd2bTcwdTA9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

4.3.7 Set the context parameters, declaring the target k8s cluster

[root@k8s-harbor01 velero]# kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=awsuser \
--namespace=velero-system \
--kubeconfig=./awsuser.kubeconfig

[root@k8s-harbor01 velero]# cat awsuser.kubeconfig
... (some content omitted)
contexts:
- context:
    cluster: kubernetes
    namespace: velero-system
    user: awsuser
  name: kubernetes
... (some content omitted)

4.3.8 Set the default context, declaring the default cluster

[root@k8s-harbor01 velero]# kubectl config use-context kubernetes --kubeconfig=awsuser.kubeconfig
Switched to context "kubernetes".
[root@k8s-harbor01 velero]# !cat
cat awsuser.kubeconfig
... (some content omitted)
current-context: kubernetes
... (some content omitted)

4.3.9 Create and authorize the awsuser account

[root@k8s-harbor01 velero]# kubectl create ns velero-system

[root@k8s-harbor01 velero]# kubectl create clusterrolebinding awsuser --clusterrole=cluster-admin --user=awsuser
clusterrolebinding.rbac.authorization.k8s.io/awsuser created

# Parameter breakdown:
clusterrolebinding: the kind of object being created, binding a ClusterRole to a subject
awsuser: the name of the binding
cluster-admin: the ClusterRole (the permissions) being granted
--user=awsuser: grants the role to the user awsuser

# Verify
[root@k8s-harbor01 velero]# kubectl --kubeconfig=./awsuser.kubeconfig get no
NAME           STATUS                     ROLES    AGE   VERSION
k8s-master01   Ready,SchedulingDisabled   master   78d   v1.26.1
k8s-master02   Ready,SchedulingDisabled   master   78d   v1.26.1
k8s-master03   Ready,SchedulingDisabled   master   78d   v1.26.1
k8s-node01     Ready                      node     78d   v1.26.1
k8s-node02     Ready                      node     78d   v1.26.1
k8s-node03     Ready                      node     78d   v1.26.1

4.4 Install the Velero server

4.4.1 Run the installation

[root@k8s-harbor01 velero]# velero --kubeconfig ./awsuser.kubeconfig \
    install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.5.5 \
    --bucket velerodata \
    --secret-file ./velero-auth.txt \
    --use-volume-snapshots=false \
    --namespace velero-system \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.myserver:9000
# Note: s3Url=http://minio.myserver:9000 uses the in-cluster DNS name of the minio Service in the myserver namespace

# The velero image is fairly large and can take a while to pull; you can pre-pull it manually on the nodes
[root@k8s-harbor01 velero]# kubectl get po -n velero-system
NAME                     READY   STATUS    RESTARTS   AGE
velero-98bc8c975-4pnjs   1/1     Running   0          31m

4.4.2 Verify

[root@k8s-harbor01 velero]# velero backup -h
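
Besides the help output, it is worth checking that Velero can actually reach the MinIO backup location; the location should report as Available (a sketch using the same kubeconfig as above):

velero backup-location get \
  --kubeconfig=./awsuser.kubeconfig \
  --namespace velero-system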

(screenshot: velero backup help output)

5. Backup demo

5.1 Back up the default namespace

[root@k8s-harbor01 velero]# DATE=`date +%Y%m%d%H%M%S`
[root@k8s-harbor01 velero]# echo $DATE
20230712181436

# --include-cluster-resources=true backs up cluster-scoped resources as well, so global objects used by the default namespace (such as PVs) are not missed
# --include-namespaces default selects the namespace to back up
[root@k8s-harbor01 velero]# velero backup create default-backup-${DATE} \
--include-cluster-resources=true \
--include-namespaces default \
--kubeconfig=./awsuser.kubeconfig \
--namespace velero-system
Backup request "default-backup-20230712181436" submitted successfully.
Run `velero backup describe default-backup-20230712181436` or `velero backup logs default-backup-20230712181436` for more details.

5.2 Inspect the backup

(screenshots: the backup listed by velero backup get and the backup objects in the MinIO velerodata bucket)
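
The same checks can be done from the CLI (a sketch; velero backup download fetches the backup contents as a tarball for local inspection):

# List backups and show full details of the new one
velero backup get --kubeconfig=./awsuser.kubeconfig --namespace velero-system
velero backup describe default-backup-20230712181436 --details \
  --kubeconfig=./awsuser.kubeconfig --namespace velero-system
# Download the backup contents for inspection
velero backup download default-backup-20230712181436 \
  --kubeconfig=./awsuser.kubeconfig --namespace velero-system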

6. Backup and restore demos

6.1 Back up a namespace, delete a resource in it, then restore

6.1.1 Create a test pod

[root@k8s-harbor01 yaml]# cat temp-unubtu.yaml
apiVersion: v1
kind: Pod
metadata:
  name: temp-ubuntu
  namespace: myserver
spec:
  containers:
  - name: temp-ubuntu
    image: tsk8s.top/tsk8s/debian-shell:v1
    #command: ['']
  imagePullSecrets:
    - name: dockerhub-image-pull-key

[root@k8s-harbor01 yaml]# kubectl apply -f temp-unubtu.yaml
pod/temp-ubuntu created

[root@k8s-harbor01 yaml]# kubectl get po -n myserver
NAME                     READY   STATUS    RESTARTS   AGE
minio-5cc5fc9498-lmzwh   1/1     Running   0          26h
temp-ubuntu              1/1     Running   0          44s

6.1.2 Back up the data

[root@k8s-harbor01 ~]# velero backup create myserver-backup-`date +%F` --include-cluster-resources=true --include-namespaces myserver --kubeconfig=/data/velero/awsuser.kubeconfig --namespace velero-system

6.1.3 Check the backup

(screenshots: backup status and the backup objects in MinIO)

6.1.4 Delete the pod

[root@k8s-harbor01 ~]# kubectl get po -n myserver
NAME                     READY   STATUS    RESTARTS   AGE
minio-5cc5fc9498-lmzwh   1/1     Running   0          42h
temp-ubuntu              1/1     Running   0          16h
[root@k8s-harbor01 ~]# kubectl delete po -n myserver temp-ubuntu
pod "temp-ubuntu" deleted
[root@k8s-harbor01 ~]# kubectl get po -n myserver
NAME                     READY   STATUS    RESTARTS   AGE
minio-5cc5fc9498-lmzwh   1/1     Running   0          42h

6.1.5 Restore the data

[root@k8s-harbor01 ~]# velero restore create --from-backup myserver-backup-2023-07-12 --wait --kubeconfig=/data/velero/awsuser.kubeconfig --namespace velero-system
Restore request "myserver-backup-2023-07-12-20230713115829" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
....................
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe myserver-backup-2023-07-12-20230713115829` and `velero restore logs myserver-backup-2023-07-12-20230713115829`.

# Parameter breakdown
--from-backup myserver-backup-2023-07-12: restore data from the backup named myserver-backup-2023-07-12
--wait: keep the foreground process waiting until the restore completes
--namespace velero-system: the namespace where the Velero server runs
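
Restores can be listed and inspected the same way backups can (a sketch):

velero restore get \
  --kubeconfig=/data/velero/awsuser.kubeconfig --namespace velero-system
velero restore describe myserver-backup-2023-07-12-20230713115829 \
  --kubeconfig=/data/velero/awsuser.kubeconfig --namespace velero-system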

6.1.6 Verify the restored data

[root@k8s-harbor01 ~]# kubectl get po -n myserver
NAME                     READY   STATUS    RESTARTS   AGE
minio-5cc5fc9498-lmzwh   1/1     Running   0          44h
temp-ubuntu              1/1     Running   0          22s

6.2 Back up a namespace, delete the whole namespace, then restore

6.2.1 Create test data

[root@k8s-harbor01 ~]# kubectl create ns test
namespace/test created

# Create a test secret for image pulls, to check whether images can still be pulled after the namespace is deleted and restored
[root@k8s-harbor01 ~]# kubectl get secret -n test
NAME                       TYPE                             DATA   AGE
dockerhub-image-pull-key   kubernetes.io/dockerconfigjson   1      11s

# Create a test Deployment
[root@k8s-harbor01 deployment]# cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  namespace: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-deploy
  template:
    metadata:
      labels:
        app: test-deploy
    spec:
      containers:
      - name: test-deploy
        image: tsk8s.top/baseimages/debian:7
        imagePullPolicy: Always
        args: ["tail", "-f", "/etc/hosts"]
      imagePullSecrets:
        - name: dockerhub-image-pull-key

[root@k8s-harbor01 deployment]# kubectl apply -f deploy.yaml
deployment.apps/test-deployment created

[root@k8s-harbor01 deployment]# kubectl get po -n test
NAME                              READY   STATUS    RESTARTS   AGE
test-deployment-ddc496886-5sws8   1/1     Running   0          6s
test-deployment-ddc496886-px9nk   1/1     Running   0          6s
test-deployment-ddc496886-s6zbk   1/1     Running   0          6s

6.2.2 Back up the namespace

[root@k8s-harbor01 deployment]# velero backup create test-ns-backup-`date +%F` \
--include-cluster-resources=true \
--include-namespaces test \
--kubeconfig=/root/.kube/config \
--namespace velero-system
Backup request "test-ns-backup-2023-07-13" submitted successfully.
Run `velero backup describe test-ns-backup-2023-07-13` or `velero backup logs test-ns-backup-2023-07-13` for more details.

[root@k8s-harbor01 deployment]# velero backup get -n velero-system|grep test-ns-backup-`date +%F`
test-ns-backup-2023-07-13       Completed   0        0          2023-07-13 14:25:44 +0800 CST   29d       default            <none>

(screenshot: the backup in MinIO)

6.2.3 Delete the whole namespace

[root@k8s-harbor01 deployment]# kubectl delete ns test
namespace "test" deleted
[root@k8s-harbor01 deployment]# kubectl get ns |grep test
[root@k8s-harbor01 deployment]#

6.2.4 Restore the data

[root@k8s-harbor01 deployment]# velero restore create --from-backup test-ns-backup-2023-07-13 --wait \
--kubeconfig=/root/.kube/config \
--namespace velero-system
Restore request "test-ns-backup-2023-07-13-20230713143510" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
..................
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe test-ns-backup-2023-07-13-20230713143510` and `velero restore logs test-ns-backup-2023-07-13-20230713143510`.

(screenshots: restore completed and resources recreated)

6.2.5 Verify

(screenshot: pods and the image-pull secret back in the test namespace)

6.3 Back up and restore specific resource objects

This is used less often; backups are usually taken at the namespace or whole-cluster level.

6.3.1 Create test resources

[root@k8s-harbor01 pod]# cat temp-unubtu.yaml
apiVersion: v1
kind: Pod
metadata:
  name: temp-ubuntu
  namespace: test
spec:
  containers:
  - name: temp-ubuntu
    image: tsk8s.top/tsk8s/debian-shell:v1
    command: ['tail', '-f', '/etc/hosts']
  imagePullSecrets:
    - name: dockerhub-image-pull-key
[root@k8s-harbor01 pod]# kubectl apply -f temp-unubtu.yaml
pod/temp-ubuntu created
[root@k8s-harbor01 pod]# kubectl get po -n test
NAME                              READY   STATUS    RESTARTS   AGE
temp-ubuntu                       1/1     Running   0          4s
test-deployment-ddc496886-5sws8   1/1     Running   0          16m
test-deployment-ddc496886-px9nk   1/1     Running   0          16m
test-deployment-ddc496886-s6zbk   1/1     Running   0          16m

6.3.2 Back up the newly created pod and secret

[root@k8s-harbor01 pod]# velero backup create ns-test-pod-backup-`date +%F` --include-cluster-resources=true --ordered-resources 'pods=test/temp-ubuntu;secret=test/dockerhub-image-pull-key' --namespace velero-system --include-namespaces=test
Backup request "ns-test-pod-backup-2023-07-13" submitted successfully.
Run `velero backup describe ns-test-pod-backup-2023-07-13` or `velero backup logs ns-test-pod-backup-2023-07-13` for more details.


# Backing up several different resource types, or resources in multiple namespaces:
velero backup create ns-test-pod-backup-`date +%F` --include-cluster-resources=true --ordered-resources 'pods=$ns/pod_name,$ns/pod_name;service=$ns/$svc_name;sts=...' --namespace velero-system --include-namespaces=$ns1,$ns2,.....
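
Note that --ordered-resources only controls the order in which resources of a given kind are backed up; what gets selected here still comes from --include-namespaces=test, so the backup actually contains the whole test namespace. To genuinely restrict a backup to specific resource types, --include-resources (optionally combined with --selector) is the usual approach; a sketch with a hypothetical backup name:

# Back up only pods and secrets from the test namespace
velero backup create ns-test-filtered-backup \
  --include-namespaces test \
  --include-resources pods,secrets \
  --namespace velero-system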

(screenshot: backup completed)

6.3.3 Delete the pod and secret

[root@k8s-harbor01 pod]# kubectl delete secret -n test dockerhub-image-pull-key
secret "dockerhub-image-pull-key" deleted
[root@k8s-harbor01 pod]# kubectl delete po -n test temp-ubuntu
pod "temp-ubuntu" deleted

6.3.4 Restore the data

[root@k8s-harbor01 pod]# velero restore create --from-backup ns-test-pod-backup-2023-07-13 --wait \
--kubeconfig=/root/.kube/config \
--namespace velero-system

6.3.5 Verify the data

[root@k8s-harbor01 pod]# kubectl get po,secret -n test
NAME                                  READY   STATUS    RESTARTS   AGE
pod/temp-ubuntu                       1/1     Running   0          29s  # restored successfully
pod/test-deployment-ddc496886-5sws8   1/1     Running   0          30m
pod/test-deployment-ddc496886-px9nk   1/1     Running   0          30m
pod/test-deployment-ddc496886-s6zbk   1/1     Running   0          30m

NAME                              TYPE                             DATA   AGE
secret/dockerhub-image-pull-key   kubernetes.io/dockerconfigjson   1      29s  # restored successfully

6.4 Batch-back up multiple namespaces

6.4.1 Write the script

[root@k8s-harbor01 scripts]# cat velero-bakcup.sh
#!/bin/bash
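# NR>2 skips both the header line and the first namespace in the listing
# (default), which is why the output below starts at kube-node-lease;
# use NR>1 to include the default namespace as well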
NS_NAME=`kubectl get ns | awk '{if (NR>2){print}}' | awk '{print $1}'`
DATE=`date +%F`
cd /data/velero/

for i in $NS_NAME;do
  velero backup create ${i}-ns-backup-${DATE} \
  --include-cluster-resources=true \
  --include-namespaces ${i} \
  --kubeconfig=/root/.kube/config \
  --namespace velero-system
done

[root@k8s-harbor01 scripts]# sh velero-bakcup.sh
Backup request "kube-node-lease-ns-backup-2023-07-13" submitted successfully.
Run `velero backup describe kube-node-lease-ns-backup-2023-07-13` or `velero backup logs kube-node-lease-ns-backup-2023-07-13` for more details.
Backup request "kube-public-ns-backup-2023-07-13" submitted successfully.
Run `velero backup describe kube-public-ns-backup-2023-07-13` or `velero backup logs kube-public-ns-backup-2023-07-13` for more details.
Backup request "kube-system-ns-backup-2023-07-13" submitted successfully.
Run `velero backup describe kube-system-ns-backup-2023-07-13` or `velero backup logs kube-system-ns-backup-2023-07-13` for more details.
Backup request "myserver-ns-backup-2023-07-13" submitted successfully.
Run `velero backup describe myserver-ns-backup-2023-07-13` or `velero backup logs myserver-ns-backup-2023-07-13` for more details.
Backup request "test-ns-backup-2023-07-13" submitted successfully.
Run `velero backup describe test-ns-backup-2023-07-13` or `velero backup logs test-ns-backup-2023-07-13` for more details.
Backup request "velero-system-ns-backup-2023-07-13" submitted successfully.
Run `velero backup describe velero-system-ns-backup-2023-07-13` or `velero backup logs velero-system-ns-backup-2023-07-13` for more details.

6.4.2 Check the backups

(screenshot: backups for every namespace listed)
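
A matching batch-restore loop might look like this (a sketch; it assumes the ${ns}-ns-backup-$(date +%F) naming convention produced by the script above and restores every backup created today):

#!/bin/bash
DATE=`date +%F`
# List today's backups by name and restore each one
for b in `velero backup get --namespace velero-system | awk -v d="$DATE" 'NR>1 && $1 ~ d {print $1}'`;do
  velero restore create --from-backup ${b} --wait \
  --kubeconfig=/root/.kube/config \
  --namespace velero-system
done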
