[Source Code Analysis] Comparing how Kubernetes CSI and OpenStack Cinder create snapshots with Ceph RBD


Ceph version: v14.2.20 (Nautilus)
Cinder version: v3.5.0 (Queens)
ceph-csi version: release-v3.3

Cinder

Cinder is the block storage sub-project of the open-source OpenStack project. It is written in Python, and its main job is to provide virtual disks for virtual machine instances.

Source code:
openstack/cinder, branch queens

  • Creating a snapshot:
    The code lives in cinder/cinder/volume/drivers/rbd.py,
    method create_snapshot
    def create_snapshot(self, snapshot):
        """Creates an rbd snapshot."""
        with RBDVolumeProxy(self, snapshot.volume_name) as volume:
            snap = utils.convert_str(snapshot.name)
            volume.create_snap(snap)
            volume.protect_snap(snap)

The process is very simple:

  1. Create the snapshot directly.
  2. Protect the snapshot.
  • Creating a volume from a snapshot
    The code lives in cinder/cinder/volume/drivers/rbd.py,
    method create_volume_from_snapshot
    def create_volume_from_snapshot(self, volume, snapshot):
        """Creates a volume from a snapshot."""
        volume_update = self._clone(volume, self.configuration.rbd_pool,
                                    snapshot.volume_name, snapshot.name)
        if self.configuration.rbd_flatten_volume_from_snapshot:
            self._flatten(self.configuration.rbd_pool, volume.name)
        if int(volume.size):
            self._resize(volume)
        return volume_update

This process is also simple (the helpers it calls are sketched just below):
1. Clone the snapshot directly.
2. Depending on the rbd_flatten_volume_from_snapshot option, flatten the image cloned from the snapshot.
3. Resize the new volume if needed.
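Cinder's _clone, _flatten and _resize helpers are not shown above. As a rough, illustrative sketch (not the driver's actual code), they boil down to the following librbd calls through the Python rbd bindings; the pool and image names here are invented for the example:

import rados
import rbd

# Open the cluster and the pool holding the volumes (names are assumptions).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('volumes')

src_name, snap_name, dest_name = 'src-volume', 'snap-1', 'new-volume'

# 1. clone: copy-on-write clone of the (protected) snapshot into a new image
rbd.RBD().clone(ioctx, src_name, snap_name, ioctx, dest_name,
                features=rbd.RBD_FEATURE_LAYERING)

with rbd.Image(ioctx, dest_name) as dest:
    # 2. optional flatten: copy the parent data so the new image no longer
    #    depends on the snapshot
    dest.flatten()
    # 3. resize the image to the requested size, in bytes
    dest.resize(10 * 1024 ** 3)

ioctx.close()
cluster.shutdown()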

  • Cloning a volume
    The code lives in cinder/cinder/volume/drivers/rbd.py,
    method create_cloned_volume
    def create_cloned_volume(self, volume, src_vref):
        """Create a cloned volume from another volume.

        Since we are cloning from a volume and not a snapshot, we must first
        create a snapshot of the source volume.

        The user has the option to limit how long a volume's clone chain can be
        by setting rbd_max_clone_depth. If a clone is made of another clone
        and that clone has rbd_max_clone_depth clones behind it, the dest
        volume will be flattened.
        """
        src_name = utils.convert_str(src_vref.name)
        dest_name = utils.convert_str(volume.name)
        clone_snap = "%s.clone_snap" % dest_name

        # Do full copy if requested
        if self.configuration.rbd_max_clone_depth <= 0:
            with RBDVolumeProxy(self, src_name, read_only=True) as vol:
                vol.copy(vol.ioctx, dest_name)
                self._extend_if_required(volume, src_vref)
            return

        # Otherwise do COW clone.
        with RADOSClient(self) as client:
            src_volume = self.rbd.Image(client.ioctx, src_name)
            LOG.debug("creating snapshot='%s'", clone_snap)
            try:
                # Create new snapshot of source volume
                src_volume.create_snap(clone_snap)
                src_volume.protect_snap(clone_snap)
                # Now clone source volume snapshot
                LOG.debug("cloning '%(src_vol)s@%(src_snap)s' to "
                          "'%(dest)s'",
                          {'src_vol': src_name, 'src_snap': clone_snap,
                           'dest': dest_name})
                self.RBDProxy().clone(client.ioctx, src_name, clone_snap,
                                      client.ioctx, dest_name,
                                      features=client.features)
            except Exception as e:
                src_volume.unprotect_snap(clone_snap)
                src_volume.remove_snap(clone_snap)
                src_volume.close()
                msg = (_("Failed to clone '%(src_vol)s@%(src_snap)s' to "
                         "'%(dest)s', error: %(error)s") %
                       {'src_vol': src_name,
                        'src_snap': clone_snap,
                        'dest': dest_name,
                        'error': e})
                LOG.exception(msg)
                raise exception.VolumeBackendAPIException(data=msg)

            depth = self._get_clone_depth(client, src_name)
            # If dest volume is a clone and rbd_max_clone_depth reached,
            # flatten the dest after cloning. Zero rbd_max_clone_depth means
            # infinite is allowed.
            if depth >= self.configuration.rbd_max_clone_depth:
                LOG.info("maximum clone depth (%d) has been reached - "
                         "flattening dest volume",
                         self.configuration.rbd_max_clone_depth)
                dest_volume = self.rbd.Image(client.ioctx, dest_name)
                try:
                    # Flatten destination volume
                    LOG.debug("flattening dest volume %s", dest_name)
                    dest_volume.flatten()
                except Exception as e:
                    msg = (_("Failed to flatten volume %(volume)s with "
                             "error: %(error)s.") %
                           {'volume': dest_name,
                            'error': e})
                    LOG.exception(msg)
                    src_volume.close()
                    raise exception.VolumeBackendAPIException(data=msg)
                finally:
                    dest_volume.close()

                try:
                    # remove temporary snap
                    LOG.debug("remove temporary snap %s", clone_snap)
                    src_volume.unprotect_snap(clone_snap)
                    src_volume.remove_snap(clone_snap)
                except Exception as e:
                    msg = (_("Failed to remove temporary snap "
                             "%(snap_name)s, error: %(error)s") %
                           {'snap_name': clone_snap,
                            'error': e})
                    LOG.exception(msg)
                    src_volume.close()
                    raise exception.VolumeBackendAPIException(data=msg)

            try:
                volume_update = self._enable_replication_if_needed(volume)
            except Exception:
                self.RBDProxy().remove(client.ioctx, dest_name)
                src_volume.unprotect_snap(clone_snap)
                src_volume.remove_snap(clone_snap)
                err_msg = (_('Failed to enable image replication'))
                raise exception.ReplicationError(reason=err_msg,
                                                 volume_id=volume.id)
            finally:
                src_volume.close()

            self._extend_if_required(volume, src_vref)

        LOG.debug("clone created successfully")
        return volume_update

The logic here is fairly long, so let's focus only on the key part:

                src_volume.create_snap(clone_snap)
                src_volume.protect_snap(clone_snap)
                self.RBDProxy().clone(client.ioctx, src_name, clone_snap,
                                      client.ioctx, dest_name,
                                      features=client.features)

Setting aside error handling and non-default configuration, there are only three steps (sketched below):
1. Create a snapshot.
2. Protect the snapshot.
3. Clone the snapshot.
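For comparison with the ceph-csi flow later on, here is an illustrative librbd-level sketch of those three steps using the Python rbd bindings; the names are invented, and the real driver additionally flattens the destination when rbd_max_clone_depth is reached:

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('volumes')

src_name, dest_name = 'src-volume', 'cloned-volume'
clone_snap = '%s.clone_snap' % dest_name

with rbd.Image(ioctx, src_name) as src:
    # 1. create a snapshot on the source volume
    src.create_snap(clone_snap)
    # 2. protect it so it can be cloned (required before clone v2)
    src.protect_snap(clone_snap)

# 3. copy-on-write clone of the protected snapshot into the new volume
rbd.RBD().clone(ioctx, src_name, clone_snap, ioctx, dest_name,
                features=rbd.RBD_FEATURE_LAYERING)

ioctx.close()
cluster.shutdown()

Note that the clone_snap snapshot is kept; the cloned image depends on its parent snapshot for copy-on-write reads.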

Ceph-csi

CSI (Container Storage Interface) is the container storage interface introduced by the open-source Kubernetes project in version 1.9. It defines a standard storage management interface between Kubernetes and external storage systems, through which storage services are provided to containers.

ceph-csi is the CSI driver implementation for the open-source distributed storage project Ceph. It is written in Go and implements both RBD block storage and CephFS file storage.
Source code:
ceph-csi, branch release-v3.3

  • Creating a snapshot
    The code lives in internal/rbd/controllerserver.go.
    The entry point is CreateSnapshot; we go straight to the key function doSnapshotClone (the function name is a little odd; if this analysis is wrong, corrections are welcome).
func (cs *ControllerServer) doSnapshotClone(ctx context.Context, parentVol *rbdVolume, rbdSnap *rbdSnapshot, cr *util.Credentials) (bool, *rbdVolume, error) {
	// generate cloned volume details from snapshot
	cloneRbd := generateVolFromSnap(rbdSnap)
	defer cloneRbd.Destroy()
	// add image feature for cloneRbd
	f := []string{librbd.FeatureNameLayering, librbd.FeatureNameDeepFlatten}
	cloneRbd.imageFeatureSet = librbd.FeatureSetFromNames(f)
	ready := false

	err := cloneRbd.Connect(cr)
	if err != nil {
		return ready, cloneRbd, err
	}

	err = createRBDClone(ctx, parentVol, cloneRbd, rbdSnap, cr)
	if err != nil {
		util.ErrorLog(ctx, "failed to create snapshot: %v", err)
		return ready, cloneRbd, status.Error(codes.Internal, err.Error())
	}

	defer func() {
		if err != nil {
			if !errors.Is(err, ErrFlattenInProgress) {
				// cleanup clone and snapshot
				errCleanUp := cleanUpSnapshot(ctx, cloneRbd, rbdSnap, cloneRbd, cr)
				if errCleanUp != nil {
					util.ErrorLog(ctx, "failed to cleanup snapshot and clone: %v", errCleanUp)
				}
			}
		}
	}()

	if parentVol.isEncrypted() {
		cryptErr := parentVol.copyEncryptionConfig(&cloneRbd.rbdImage)
		if cryptErr != nil {
			util.WarningLog(ctx, "failed copy encryption "+
				"config for %q: %v", cloneRbd.String(), cryptErr)
			return ready, nil, status.Errorf(codes.Internal,
				err.Error())
		}
	}

	err = cloneRbd.createSnapshot(ctx, rbdSnap)
	if err != nil {
		// update rbd image name for logging
		rbdSnap.RbdImageName = cloneRbd.RbdImageName
		util.ErrorLog(ctx, "failed to create snapshot %s: %v", rbdSnap, err)
		return ready, cloneRbd, err
	}

	err = cloneRbd.getImageID()
	if err != nil {
		util.ErrorLog(ctx, "failed to get image id: %v", err)
		return ready, cloneRbd, err
	}
	var j = &journal.Connection{}
	// save image ID
	j, err = snapJournal.Connect(rbdSnap.Monitors, rbdSnap.RadosNamespace, cr)
	if err != nil {
		util.ErrorLog(ctx, "failed to connect to cluster: %v", err)
		return ready, cloneRbd, err
	}
	defer j.Destroy()

	err = j.StoreImageID(ctx, rbdSnap.JournalPool, rbdSnap.ReservedID, cloneRbd.ImageID)
	if err != nil {
		util.ErrorLog(ctx, "failed to reserve volume id: %v", err)
		return ready, cloneRbd, err
	}

	err = cloneRbd.flattenRbdImage(ctx, cr, false, rbdHardMaxCloneDepth, rbdSoftMaxCloneDepth)
	if err != nil {
		if errors.Is(err, ErrFlattenInProgress) {
			return ready, cloneRbd, nil
		}
		return ready, cloneRbd, err
	}
	ready = true
	return ready, cloneRbd, nil
}

The first step, createRBDClone(ctx, parentVol, cloneRbd, rbdSnap, cr), is fairly involved and needs to be unpacked.
The code lives in internal/rbd/snapshot.go.

func createRBDClone(ctx context.Context, parentVol, cloneRbdVol *rbdVolume, snap *rbdSnapshot, cr *util.Credentials) error {
	// create snapshot
	err := parentVol.createSnapshot(ctx, snap)
	if err != nil {
		util.ErrorLog(ctx, "failed to create snapshot %s: %v", snap, err)
		return err
	}

	snap.RbdImageName = parentVol.RbdImageName
	// create clone image and delete snapshot
	err = cloneRbdVol.cloneRbdImageFromSnapshot(ctx, snap)
	if err != nil {
		util.ErrorLog(ctx, "failed to clone rbd image %s from snapshot %s: %v", cloneRbdVol.RbdImageName, snap.RbdSnapName, err)
		err = fmt.Errorf("failed to clone rbd image %s from snapshot %s: %w", cloneRbdVol.RbdImageName, snap.RbdSnapName, err)
	}
	errSnap := parentVol.deleteSnapshot(ctx, snap)
	if errSnap != nil {
		util.ErrorLog(ctx, "failed to delete snapshot: %v", errSnap)
		delErr := deleteImage(ctx, cloneRbdVol, cr)
		if delErr != nil {
			util.ErrorLog(ctx, "failed to delete rbd image: %s with error: %v", cloneRbdVol, delErr)
		}
		return err
	}

	err = cloneRbdVol.getImageInfo()
	if err != nil {
		util.ErrorLog(ctx, "failed to get rbd image: %s details with error: %v", cloneRbdVol, err)
		delErr := deleteImage(ctx, cloneRbdVol, cr)
		if delErr != nil {
			util.ErrorLog(ctx, "failed to delete rbd image: %s with error: %v", cloneRbdVol, delErr)
		}
		return err
	}

	return nil
}

Ignoring the many validations and error paths, the normal flow is (a librbd-level sketch follows below):

1. Clone:
   a. Create a snapshot on the source image.
   b. Clone that snapshot.
   c. Delete the snapshot created in 1.a.
2. Create a snapshot on the image obtained from the clone in 1.b.
3. Flatten the image cloned in 1.b (subject to the clone-depth limits).

Open question: the source contains no snapshot protect/unprotect calls, yet cloning and snapshot deletion are performed anyway. This likely relies on RBD clone v2 (available since Ceph Mimic), which allows cloning from unprotected snapshots; discussion and corrections are welcome.
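To make the sequence concrete, here is an illustrative sketch of those steps using the librbd Python bindings. This is not ceph-csi's own code (ceph-csi uses the go-ceph bindings); the pool and image names are invented, and cloning an unprotected snapshot assumes RBD clone v2, i.e. Ceph Mimic or later:

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('replicapool')

parent, temp_snap, snap_image = 'pvc-volume', 'csi-snap', 'csi-snap-image'

# 1.a create a temporary snapshot on the source image
with rbd.Image(ioctx, parent) as src:
    src.create_snap(temp_snap)

# 1.b clone that snapshot into a new image; this image *is* the CSI snapshot
rbd.RBD().clone(ioctx, parent, temp_snap, ioctx, snap_image,
                features=rbd.RBD_FEATURE_LAYERING | rbd.RBD_FEATURE_DEEP_FLATTEN)

# 1.c delete the temporary snapshot on the source image
with rbd.Image(ioctx, parent) as src:
    src.remove_snap(temp_snap)

with rbd.Image(ioctx, snap_image) as snap_img:
    # 2. create a snapshot on the cloned image (used later for restores)
    snap_img.create_snap(temp_snap)
    # 3. flatten the cloned image when the clone-depth limit is reached
    snap_img.flatten()

ioctx.close()
cluster.shutdown()

Because the temporary snapshot on the parent is deleted right away, the CSI snapshot is represented by its own RBD image rather than by a long-lived snapshot on the source volume.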

  • Restoring a volume from a snapshot
    The code lives in internal/rbd/controllerserver.go,
    function createVolumeFromSnapshot
func (cs *ControllerServer) createVolumeFromSnapshot(ctx context.Context, cr *util.Credentials, secrets map[string]string, rbdVol *rbdVolume, snapshotID string) error {
	rbdSnap := &rbdSnapshot{}
	if acquired := cs.SnapshotLocks.TryAcquire(snapshotID); !acquired {
		util.ErrorLog(ctx, util.SnapshotOperationAlreadyExistsFmt, snapshotID)
		return status.Errorf(codes.Aborted, util.VolumeOperationAlreadyExistsFmt, snapshotID)
	}
	defer cs.SnapshotLocks.Release(snapshotID)

	err := genSnapFromSnapID(ctx, rbdSnap, snapshotID, cr, secrets)
	if err != nil {
		if errors.Is(err, util.ErrPoolNotFound) {
			util.ErrorLog(ctx, "failed to get backend snapshot for %s: %v", snapshotID, err)
			return status.Error(codes.InvalidArgument, err.Error())
		}
		return status.Error(codes.Internal, err.Error())
	}

	// update parent name(rbd image name in snapshot)
	rbdSnap.RbdImageName = rbdSnap.RbdSnapName
	// create clone image and delete snapshot
	err = rbdVol.cloneRbdImageFromSnapshot(ctx, rbdSnap)
	if err != nil {
		util.ErrorLog(ctx, "failed to clone rbd image %s from snapshot %s: %v", rbdVol, rbdSnap, err)
		return err
	}

	util.DebugLog(ctx, "create volume %s from snapshot %s", rbdVol.RequestName, rbdSnap.RbdSnapName)
	return nil
}

The process is straightforward: it simply clones the snapshot (which, in ceph-csi, is itself an RBD image created by the snapshot flow above). A minimal sketch follows.
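For illustration, reusing the invented names from the earlier sketch, restoring a snapshot is a single copy-on-write clone of the snapshot image produced by CreateSnapshot:

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('replicapool')

# 'csi-snap-image' and its 'csi-snap' snapshot come from the CreateSnapshot
# sketch above; 'restored-pvc' is the new volume requested by the user.
rbd.RBD().clone(ioctx, 'csi-snap-image', 'csi-snap', ioctx, 'restored-pvc',
                features=rbd.RBD_FEATURE_LAYERING)

ioctx.close()
cluster.shutdown()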

  • Cloning a volume
    The code lives in internal/rbd/clone.go,
    function createCloneFromImage
func (rv *rbdVolume) createCloneFromImage(ctx context.Context, parentVol *rbdVolume) error {
	// generate temp cloned volume
	tempClone := rv.generateTempClone()
	// snapshot name is same as temporary cloned image, This helps to
	// flatten the temporary cloned images as we cannot have more than 510
	// snapshots on an rbd image
	tempSnap := &rbdSnapshot{}
	tempSnap.RbdSnapName = tempClone.RbdImageName
	tempSnap.Pool = rv.Pool

	cloneSnap := &rbdSnapshot{}
	cloneSnap.RbdSnapName = rv.RbdImageName
	cloneSnap.Pool = rv.Pool

	var (
		errClone   error
		errFlatten error
		err        error
	)
	var j = &journal.Connection{}

	j, err = volJournal.Connect(rv.Monitors, rv.RadosNamespace, rv.conn.Creds)
	if err != nil {
		return status.Error(codes.Internal, err.Error())
	}
	defer j.Destroy()

	// create snapshot and temporary clone and delete snapshot
	err = createRBDClone(ctx, parentVol, tempClone, tempSnap, rv.conn.Creds)
	if err != nil {
		return err
	}

	defer func() {
		if err != nil || errClone != nil {
			cErr := cleanUpSnapshot(ctx, tempClone, cloneSnap, rv, rv.conn.Creds)
			if cErr != nil {
				util.ErrorLog(ctx, "failed to cleanup image %s or snapshot %s: %v", cloneSnap, tempClone, cErr)
			}
		}

		if err != nil || errFlatten != nil {
			if !errors.Is(errFlatten, ErrFlattenInProgress) {
				// cleanup snapshot
				cErr := cleanUpSnapshot(ctx, parentVol, tempSnap, tempClone, rv.conn.Creds)
				if cErr != nil {
					util.ErrorLog(ctx, "failed to cleanup image %s or snapshot %s: %v", tempSnap, tempClone, cErr)
				}
			}
		}
	}()
	// flatten clone
	errFlatten = tempClone.flattenRbdImage(ctx, rv.conn.Creds, false, rbdHardMaxCloneDepth, rbdSoftMaxCloneDepth)
	if errFlatten != nil {
		return errFlatten
	}
	// create snap of temp clone from temporary cloned image
	// create final clone
	// delete snap of temp clone
	errClone = createRBDClone(ctx, tempClone, rv, cloneSnap, rv.conn.Creds)
	if errClone != nil {
		// set errFlatten error to cleanup temporary snapshot and temporary clone
		errFlatten = errors.New("failed to create user requested cloned image")
		return errClone
	}
	err = rv.getImageID()
	if err != nil {
		util.ErrorLog(ctx, "failed to get volume id %s: %v", rv, err)
		return err
	}

	if parentVol.isEncrypted() {
		err = parentVol.copyEncryptionConfig(&rv.rbdImage)
		if err != nil {
			return fmt.Errorf("failed to copy encryption config for %q: %w", rv, err)
		}
	}
	err = j.StoreImageID(ctx, rv.JournalPool, rv.ReservedID, rv.ImageID)
	if err != nil {
		util.ErrorLog(ctx, "failed to store volume %s: %v", rv, err)
		return err
	}
	return nil
}

The createRBDClone call here is the same function already unpacked in the snapshot-creation section above (internal/rbd/snapshot.go), so its code is not repeated.

Again ignoring the error paths, the normal flow is (a librbd-level sketch follows this list):

1. Clone:
   a. Create a snapshot on the source image.
   b. Clone that snapshot; this is the temporary clone.
   c. Delete the snapshot created in 1.a.
2. Flatten the temporary clone obtained in 1.b.
3. Clone again:
   a. Create a snapshot on the image flattened in step 2.
   b. Clone that snapshot; this is the final, user-requested volume.
   c. Delete the snapshot created in 3.a.
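As an illustrative sketch (invented names, Python rbd bindings, clone v2 assumed so snapshots need not be protected), the whole volume-clone flow looks like this:

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('replicapool')

parent, temp_clone, final = 'pvc-source', 'pvc-dest-temp', 'pvc-dest'

def clone_via_snapshot(src, snap, dst):
    """Snapshot the source, clone the snapshot, then drop the snapshot."""
    with rbd.Image(ioctx, src) as img:
        img.create_snap(snap)
    rbd.RBD().clone(ioctx, src, snap, ioctx, dst,
                    features=rbd.RBD_FEATURE_LAYERING)
    with rbd.Image(ioctx, src) as img:
        img.remove_snap(snap)

# 1. clone the parent into a temporary image
#    (ceph-csi names the snapshot after the temporary image)
clone_via_snapshot(parent, temp_clone, temp_clone)

# 2. flatten the temporary clone so it no longer depends on the parent
with rbd.Image(ioctx, temp_clone) as img:
    img.flatten()

# 3. clone the temporary image into the user-requested volume
clone_via_snapshot(temp_clone, final, final)

ioctx.close()
cluster.shutdown()

The flatten in the middle is what keeps the clone chain short even though the final volume is a clone of a clone.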

Comparison

| Aspect | openstack-cinder-rbd | kubernetes-ceph-csi-rbd |
| --- | --- | --- |
| Implementation language | Python | Go |
| Complexity | Simple | Complex |
| Create snapshot | 1. Create the snapshot directly (and protect it) | 1. Clone (snapshot the source image, clone the snapshot, delete the snapshot); 2. Create a snapshot on the cloned image; 3. Flatten the cloned image |
| Restore from snapshot | 1. Clone the snapshot | 1. Clone the snapshot |
| Clone volume | 1. Create a snapshot; 2. Clone the snapshot | 1. Clone (snapshot the source image, clone the snapshot, delete the snapshot); 2. Flatten the temporary clone; 3. Clone again (snapshot the flattened image, clone the snapshot, delete the snapshot) |
| Coupling | Strongly coupled | Not coupled |