Dissecting FastMVSNet: A Brief Look at the Network Structure



Preface

In a previous post (FastMVSNet in practice), I walked through the test/inference pipeline of FastMVSNet and rewrote part of the code to make it easier to export results.
Since then I have done two more things:
1. Modified the dataset-processing code so it can handle self-collected data, where the self-collected data is organized in the DTU format.
2. Analyzed the main body of the network to understand how the code implements it.
This post summarizes both points.


I. Dataset preprocessing script: dataset.py

This write-up only covers testing, so we only need to look at class DTU_Test_Set(Dataset):

1. Reading the scan IDs of the test set

test_set = [1, 4, 9, 10, 11, 12, 13, 15, 23, 24, 29, 32, 33, 34, 48, 49, 62, 75, 77,
                110, 114, 118]
test_lighting_set = [3]  

Because FastMVSNet uses the rectified DTU images, each view is available under 7 lighting conditions, but only one of them is needed; the original code uses lighting index 3, as highlighted in red in the figure below.
(Figure: listing of the rectified image files, with the lighting index in the filename highlighted in red.)
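
As a quick reference, the lighting index appears directly in the rectified image filename. A tiny illustration of the naming pattern used later in dataset.py (the view index 1 here is just a placeholder):

# Illustration only: DTU rectified-image naming as used later in dataset.py.
lighting_ind = 3   # the single lighting condition kept for testing
view_id = 1        # placeholder view index
print("rect_{:03d}_{}_r5000.png".format(view_id, lighting_ind))
# -> rect_001_3_r5000.png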

2. Reading the dataset parameters

First, note that the network's parameters are defined in ./FastMVSNet/configs/dtu.yaml.
The dataset constructor is:

def __init__(self, root_dir, dataset_name,
                 num_view=3,
                 height=1152, width=1600,
                 num_virtual_plane=128,
                 interval_scale=1.6,
                 base_image_size=64,
                 depth_folder=""):

Note that ground-truth depth maps are not needed at test time, so depth_folder="". All other parameter changes are made in dtu.yaml.
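
To make the parameter flow concrete, here is a hedged sketch of how values loaded from dtu.yaml could be passed into the constructor; the import path, dataset_name and dictionary keys below are placeholders, not the repo's actual names:

from dataset import DTU_Test_Set   # assumed import; adjust to the repo layout

# Placeholder values standing in for what dtu.yaml provides.
cfg = {
    "root_dir": "/path/to/dtu",
    "num_view": 5,
    "height": 1152,
    "width": 1600,
    "num_virtual_plane": 128,
    "interval_scale": 1.6,
    "base_image_size": 64,
}

test_set = DTU_Test_Set(
    cfg["root_dir"], "test",
    num_view=cfg["num_view"],
    height=cfg["height"], width=cfg["width"],
    num_virtual_plane=cfg["num_virtual_plane"],
    interval_scale=cfg["interval_scale"],
    base_image_size=cfg["base_image_size"],
    depth_folder="",   # ground-truth depth is not needed at test time
)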

3. Reading pair.txt

The pair file stores the number of views and then, for each view, the reference image ID, the number of associated source images, and the associated source image IDs with their similarity scores.
(Figure: excerpt of pair.txt showing a reference ID followed by the source count, source IDs and scores.)

cluster_file_path = "Cameras/pair.txt"
self.cluster_file_path = osp.join(root_dir, self.cluster_file_path)
self.cluster_list = open(self.cluster_file_path).read().split()

This code reads all of that information into a flat list of whitespace-separated tokens (self.cluster_list).
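
To make the stride-22 indexing used below easier to follow: in the DTU pair.txt, each view occupies 22 tokens (the reference ID, the source count of 10, and then 10 pairs of source ID and score). A minimal sketch that turns the flat token list into a readable structure, assuming exactly that layout:

# Sketch: parse DTU-style pair.txt tokens into (ref_id, [(src_id, score), ...]).
# Assumes the standard DTU layout: 1 header token (view count), then 22 tokens per view.
def parse_pair_tokens(tokens):
    num_views = int(tokens[0])
    pairs = []
    for p in range(num_views):
        block = tokens[1 + 22 * p : 1 + 22 * (p + 1)]
        ref_id = int(block[0])
        num_src = int(block[1])
        sources = [(int(block[2 + 2 * i]), float(block[3 + 2 * i]))
                   for i in range(num_src)]
        pairs.append((ref_id, sources))
    return pairs

cluster_list = open("Cameras/pair.txt").read().split()
for ref_id, sources in parse_pair_tokens(cluster_list)[:2]:
    print(ref_id, sources[:3])   # first two reference views and their top sources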

4. Reading the images and the camera intrinsics/extrinsics

image_folder = osp.join(self.root_dir, "Eval/Rectified/scan{}".format(ind))
cam_folder = osp.join(self.root_dir, "Cameras")

1. Locating the reference image

for p in range(0, int(self.cluster_list[0])):  # loop over every image in the scan

This line reads the first token of the pair file, i.e. the number of images. Since each image needs its own depth map, the code loops over all of them:

for p in range(0, int(self.cluster_list[0])):
    ref_index = int(self.cluster_list[22 * p + 1])  # the reference ID sits every 22 tokens
    ref_image_path = osp.join(image_folder, "rect_{:03d}_{}_r5000.png".format(ref_index, lighting_ind))
    ref_cam_path = osp.join(cam_folder, "{:08d}_cam.txt".format(ref_index))

    view_image_paths.append(ref_image_path)
    view_cam_paths.append(ref_cam_path)

The code above collects the reference image and camera file paths.

2. Locating the source images

for p in range(0, int(self.cluster_list[0])):
    ref_index = int(self.cluster_list[22 * p + 1])  # the reference ID sits every 22 tokens
    ref_image_path = osp.join(image_folder, "rect_{:03d}_{}_r5000.png".format(ref_index, lighting_ind))
    ref_cam_path = osp.join(cam_folder, "{:08d}_cam.txt".format(ref_index))

    view_image_paths.append(ref_image_path)
    view_cam_paths.append(ref_cam_path)
    # below: collect the source image paths
    for view in range(self.num_view - 1):
        view_index = int(self.cluster_list[22 * p + 2 * view + 3])
        view_image_path = osp.join(image_folder, "rect_{:03d}_{}_r5000.png".format(view_index + 1, lighting_ind))
        view_cam_path = osp.join(cam_folder, "{:08d}_cam.txt".format(view_index))
        view_image_paths.append(view_image_path)
        view_cam_paths.append(view_cam_path)
    paths["view_image_paths"] = view_image_paths
    paths["view_cam_paths"] = view_cam_paths
        

Here num_view is the number of views fed to the network (the reference plus num_view - 1 source views); the sources are generally the neighbors with the highest pair scores. The paper uses 5 by default.
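
As a small sketch of that selection (the scores here are made up): pair.txt already lists the sources in descending score order, so keeping the best num_view - 1 of them is just a slice.

# Sketch with made-up scores: keep the reference plus the (num_view - 1) best sources.
num_view = 5
ref_id = 0
sources = [(10, 2346.4), (1, 2036.5), (9, 1243.9), (12, 1052.9), (11, 1000.8), (13, 703.6)]
selected = [ref_id] + [src_id for src_id, _ in sources[:num_view - 1]]
print(selected)   # [0, 10, 1, 9, 12]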

5. Adapting the preprocessing to self-collected data

The dataset code is written rather rigidly and does not generalize well to DTU-style test sets prepared for other networks, so the changes to the dataset-processing code start from two points:
1. Drop the lighting index, since other test sets come with only one lighting condition.
2. The original code reads the reference image ID at a fixed stride of 22 tokens, which is awkward for self-collected data. Instead, the pair file is read line by line inside the loop, so that one line holds the reference image ID and the following line holds the number of source images together with their IDs and similarity scores (a sketch of this idea is given after the list).
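
The modified code itself is not reproduced here (it will be uploaded later); the following is only a minimal sketch of such a line-based parser, assuming the layout described above plus a header line with the total view count:

# Hedged sketch (not the actual modified dataset code): read the pair file line by line.
# Assumed layout: line 0 = number of views; then for each view one line with the
# reference ID and one line with "num_src src_id_0 score_0 src_id_1 score_1 ...".
def parse_pair_file(path):
    with open(path) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    num_views = int(lines[0])
    pairs = []
    for p in range(num_views):
        ref_id = int(lines[1 + 2 * p])
        fields = lines[2 + 2 * p].split()
        num_src = int(fields[0])
        sources = [(int(fields[1 + 2 * i]), float(fields[2 + 2 * i]))
                   for i in range(num_src)]
        pairs.append((ref_id, sources))
    return pairs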

II. The main FastMVSNet network: model.py

1. Feature extraction network

The corresponding member in FastMVSNet.__init__ is:

self.coarse_img_conv = ImageConv(img_base_channels)

Its structure is shown below; the resulting feature map (conv2) is 1/4 of the input resolution in both height and width.

class ImageConv(nn.Module):
    def __init__(self, base_channels, in_channels=3):
        super(ImageConv, self).__init__()
        self.base_channels = base_channels
        self.out_channels = 8 * base_channels
        self.conv0 = nn.Sequential(
            Conv2d(in_channels, base_channels, 3, 1, padding=1),
            Conv2d(base_channels, base_channels, 3, 1, padding=1),
        )

        self.conv1 = nn.Sequential(
            Conv2d(base_channels, base_channels * 2, 5, stride=2, padding=2),
            Conv2d(base_channels * 2, base_channels * 2, 3, 1, padding=1),
            Conv2d(base_channels * 2, base_channels * 2, 3, 1, padding=1),
        )

        self.conv2 = nn.Sequential(
            Conv2d(base_channels * 2, base_channels * 4, 5, stride=2, padding=2),
            Conv2d(base_channels * 4, base_channels * 4, 3, 1, padding=1),
            nn.Conv2d(base_channels * 4, base_channels * 4, 3, padding=1, bias=False)
        )


    def forward(self, imgs):
        out_dict = {}

        conv0 = self.conv0(imgs)
        out_dict["conv0"] = conv0
        conv1 = self.conv1(conv0)
        out_dict["conv1"] = conv1
        conv2 = self.conv2(conv1)
        out_dict["conv2"] = conv2

        return out_dict
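
A quick shape check of ImageConv confirms the 1/4 × 1/4 claim (this assumes the class above and the repo's Conv2d wrapper are importable; plain nn.Conv2d layers would give the same spatial sizes):

import torch

# Shape-check sketch with a hypothetical 512 x 640 input and base_channels = 8.
net = ImageConv(8)
out = net(torch.randn(1, 3, 512, 640))
print(out["conv0"].shape)   # torch.Size([1, 8, 512, 640])   full resolution
print(out["conv1"].shape)   # torch.Size([1, 16, 256, 320])  1/2 (first stride-2 conv)
print(out["conv2"].shape)   # torch.Size([1, 32, 128, 160])  1/4 (second stride-2 conv)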

2. Depth propagation module

This step propagates the sparse high-resolution depth map into a dense one; both are still coarse depth maps that have not yet been refined.
The corresponding member is:

self.propagation_net = PropagationNet(img_base_channels)
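
The repo's PropagationNet is not reproduced here. The following is only a minimal sketch of the image-guided propagation idea it implements: a small CNN predicts per-pixel weights from the reference image, and each output depth is a softmax-weighted blend of the 3×3 depth neighborhood.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplePropagation(nn.Module):
    """Minimal sketch of image-guided depth propagation (not the repo's PropagationNet)."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.weight_net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 9, 3, padding=1),
        )

    def forward(self, depth, image):
        b, _, h, w = depth.shape
        weights = F.softmax(self.weight_net(image), dim=1)      # (B, 9, H, W)
        neighbors = F.unfold(depth, kernel_size=3, padding=1)   # (B, 9, H*W)
        neighbors = neighbors.view(b, 9, h, w)
        return (weights * neighbors).sum(dim=1, keepdim=True)   # (B, 1, H, W)

# Usage sketch with hypothetical sizes (depth and guidance image at the same resolution):
# dense = SimplePropagation()(torch.rand(1, 1, 128, 160), torch.rand(1, 3, 128, 160))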

3. Regularization network

self.coarse_vol_conv = VolumeConv(img_base_channels * 4, vol_base_channels)

Although the feature maps obtained earlier are at 1/4 × 1/4 resolution, the line ref_feature = ref_feature[:, :, ::2, ::2].contiguous() subsamples them to 1/8 × 1/8, so the cost volume that is built has size W/8 × H/8 × D × F.
The main point of this step is the 3D CNN: it uses transposed convolutions and adds the corresponding encoder features at each decoder stage, which yields a depth map at 1/8 × 1/8 resolution, unlike the lower-resolution result of Point-MVSNet.

The regularization 3D CNN is as follows:

class VolumeConv(nn.Module):
    def __init__(self, in_channels, base_channels):
        super(VolumeConv, self).__init__()
        self.in_channels = in_channels
        self.out_channels = base_channels * 8
        self.base_channels = base_channels
        self.conv1_0 = Conv3d(in_channels, base_channels * 2, 3, stride=2, padding=1)
        self.conv2_0 = Conv3d(base_channels * 2, base_channels * 4, 3, stride=2, padding=1)
        self.conv3_0 = Conv3d(base_channels * 4, base_channels * 8, 3, stride=2, padding=1)

        self.conv0_1 = Conv3d(in_channels, base_channels, 3, 1, padding=1)

        self.conv1_1 = Conv3d(base_channels * 2, base_channels * 2, 3, 1, padding=1)
        self.conv2_1 = Conv3d(base_channels * 4, base_channels * 4, 3, 1, padding=1)

        self.conv3_1 = Conv3d(base_channels * 8, base_channels * 8, 3, 1, padding=1)
        self.conv4_0 = Deconv3d(base_channels * 8, base_channels * 4, 3, 2, padding=1, output_padding=1)
        self.conv5_0 = Deconv3d(base_channels * 4, base_channels * 2, 3, 2, padding=1, output_padding=1)
        self.conv6_0 = Deconv3d(base_channels * 2, base_channels, 3, 2, padding=1, output_padding=1)

        self.conv6_2 = nn.Conv3d(base_channels, 1, 3, padding=1, bias=False)

    def forward(self, x):
        conv0_1 = self.conv0_1(x)

        conv1_0 = self.conv1_0(x)
        conv2_0 = self.conv2_0(conv1_0)
        conv3_0 = self.conv3_0(conv2_0)

        conv1_1 = self.conv1_1(conv1_0)
        conv2_1 = self.conv2_1(conv2_0)
        conv3_1 = self.conv3_1(conv3_0)

        conv4_0 = self.conv4_0(conv3_1)

        conv5_0 = self.conv5_0(conv4_0 + conv2_1)
        conv6_0 = self.conv6_0(conv5_0 + conv1_1)

        conv6_2 = self.conv6_2(conv6_0 + conv0_1)

        return conv6_2
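
A shape check of this U-Net (assuming the repo's Conv3d/Deconv3d wrappers): three stride-2 encoder stages shrink the volume by 8 and three transposed convolutions bring it back, so with dimensions divisible by 8 the single-channel output matches the input D × H/8 × W/8 grid. The forward pass later turns this 1-channel volume into a depth map via a softmax-weighted expectation over the depth hypotheses.

import torch

# Shape-check sketch: in_channels = 32 (img_base_channels * 4), base_channels = 8,
# and a hypothetical cost volume with D = 48, H/8 = 72, W/8 = 96.
vol_net = VolumeConv(32, 8)
out = vol_net(torch.randn(1, 32, 48, 72, 96))
print(out.shape)   # torch.Size([1, 1, 48, 72, 96])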

4. Gauss-Newton refinement

The Gauss-Newton layer refines the propagated depth map. The snippet below (taken from the forward pass) first builds a two-level feature pyramid (conv1, conv2) for every view; the refinement iterations then sample these pyramid features:

        if isGN:
            feature_pyramids = {}
            chosen_conv = ["conv1", "conv2"]
            for conv in chosen_conv:
                feature_pyramids[conv] = []
            for i in range(num_view):
                curr_img = img_list[:, i, :, :, :]
                curr_feature_pyramid = self.flow_img_conv(curr_img)
                for conv in chosen_conv:
                    feature_pyramids[conv].append(curr_feature_pyramid[conv])

            for conv in chosen_conv:
                feature_pyramids[conv] = torch.stack(feature_pyramids[conv], dim=1)

            if isTest:
                for conv in chosen_conv:
                    feature_pyramids[conv] = torch.detach(feature_pyramids[conv])
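
Before reading the full gn_update in Section III, it helps to look at the Gauss-Newton step in isolation. Because each pixel's depth is a single scalar, J^T J is a per-pixel scalar and the update reduces to a division. The sketch below shows that one-variable case with a generic residual; it is not the feature-metric residual the network actually minimizes.

import torch

def gauss_newton_step(depth, residual_fn, jacobian_fn, eps=1e-6):
    """One Gauss-Newton update for a per-pixel scalar depth (sketch only).

    residual_fn(depth) -> (B, C, N) residuals per pixel
    jacobian_fn(depth) -> (B, C, N) d(residual)/d(depth) per pixel
    """
    r = residual_fn(depth)            # (B, C, N)
    J = jacobian_fn(depth)            # (B, C, N)
    H = (J * J).sum(dim=1)            # (B, N)  J^T J, a scalar per pixel
    b = -(J * r).sum(dim=1)           # (B, N)  -J^T r
    return depth + b / (H + eps)

# Toy usage: the residual 2*d - 10 is linear, so one step lands on d = 5 for every pixel.
target = torch.full((1, 1, 4), 10.0)
depth = torch.zeros(1, 4)
refined = gauss_newton_step(depth,
                            residual_fn=lambda d: 2.0 * d.unsqueeze(1) - target,
                            jacobian_fn=lambda d: torch.full((1, 1, 4), 2.0))
print(refined)   # approximately tensor([[5., 5., 5., 5.]])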

III. Annotated source code, line by line

class FastMVSNet(nn.Module):
    def __init__(self,
                 img_base_channels=8,
                 vol_base_channels=8,
                 flow_channels=(64, 64, 16, 1),
                 k=16,
                 ):
        super(FastMVSNet, self).__init__()
        self.k = k

        self.feature_fetcher = FeatureFetcher()
        self.feature_grad_fetcher = FeatureGradFetcher()
        self.point_grad_fetcher = PointGrad()

        self.coarse_img_conv = ImageConv(img_base_channels)
        self.coarse_vol_conv = VolumeConv(img_base_channels * 4, vol_base_channels)
        self.propagation_net = PropagationNet(img_base_channels)
        self.flow_img_conv = ImageConv(img_base_channels)

    def forward(self, data_batch, img_scales, inter_scales, isGN, isTest=False):
        preds = collections.OrderedDict()
        img_list = data_batch["img_list"]
        cam_params_list = data_batch["cam_params_list"]

        cam_extrinsic = cam_params_list[:, :, 0, :3, :4].clone()  # (B, V, 3, 4)
        R = cam_extrinsic[:, :, :3, :3]
        t = cam_extrinsic[:, :, :3, 3].unsqueeze(-1)
        R_inv = torch.inverse(R)
        cam_intrinsic = cam_params_list[:, :, 1, :3, :3].clone()   # read the camera intrinsics/extrinsics 2022/5/23
        
        
        if isTest:
            cam_intrinsic[:, :, :2, :3] = cam_intrinsic[:, :, :2, :3] / 4.0

        depth_start = cam_params_list[:, 0, 1, 3, 0]
        depth_interval = cam_params_list[:, 0, 1, 3, 1]
        num_depth = cam_params_list[0, 0, 1, 3, 2].long()

        depth_end = depth_start + (num_depth - 1) * depth_interval     # set up the depth range 2022/5/23

        batch_size, num_view, img_channel, img_height, img_width = list(img_list.size())
        #print('batch_size==============')
        #print(batch_size)  ### debug 2022/5/24

        coarse_feature_maps = []
        for i in range(num_view):
            curr_img = img_list[:, i, :, :, :]
            curr_feature_map = self.coarse_img_conv(curr_img)["conv2"]
            coarse_feature_maps.append(curr_feature_map)    # feature scale: 1/4 x 1/4  2022/5/24

        feature_list = torch.stack(coarse_feature_maps, dim=1)

        feature_channels, feature_height, feature_width = list(curr_feature_map.size())[1:]

        depths = []
        for i in range(batch_size):     # batch_size = 1 at test time 2022/5/24
            depths.append(torch.linspace(depth_start[i], depth_end[i], num_depth, device=img_list.device) \
                          .view(1, 1, num_depth, 1))
        depths = torch.stack(depths, dim=0)  # (B, 1, 1, D, 1)   # depth hypotheses along the reference optical axis (the depth range) 2022/5/24

        feature_map_indices_grid = get_pixel_grids(feature_height, feature_width)
        # print("before:", feature_map_indices_grid.size())
        feature_map_indices_grid = feature_map_indices_grid.view(1, 3, feature_height, feature_width)[:, :, ::2, ::2].contiguous()
        # print("after:", feature_map_indices_grid.size())
        feature_map_indices_grid = feature_map_indices_grid.view(1, 1, 3, -1).expand(batch_size, 1, 3, -1).to(img_list.device)

        ref_cam_intrinsic = cam_intrinsic[:, 0, :, :].clone()
        uv = torch.matmul(torch.inverse(ref_cam_intrinsic).unsqueeze(1), feature_map_indices_grid)  # (B, 1, 3, FH*FW)

        cam_points = (uv.unsqueeze(3) * depths).view(batch_size, 1, 3, -1)  # (B, 1, 3, D*FH*FW)
        world_points = torch.matmul(R_inv[:, 0:1, :, :], cam_points - t[:, 0:1, :, :]).transpose(1, 2).contiguous() \
            .view(batch_size, 3, -1)  # (B, 3, D*FH*FW)     # back-projection for the homography warp 2022/5/24
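        # The lines above back-project every pixel of the subsampled feature grid:
        # K^-1 (u, v, 1)^T gives a viewing ray, multiplying it by each depth hypothesis
        # gives a 3D point in the reference camera frame, and R^-1 (X_cam - t) moves that
        # point into world coordinates so it can later be projected into every source view.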

        preds["world_points"] = world_points

        num_world_points = world_points.size(-1)
        assert num_world_points == feature_height * feature_width * num_depth / 4

        point_features = self.feature_fetcher(feature_list, world_points, cam_intrinsic, cam_extrinsic)   # homography-warp the features to build the feature volumes 2022/5/24
        ref_feature = coarse_feature_maps[0]
        #print("before ref feature:", ref_feature.size())
        ref_feature = ref_feature[:, :, ::2,::2].contiguous()   # subsample by taking every other element: the feature scale shrinks by another 1/2, now 1/8 x 1/8  2022/5/24
        #print("after ref feature:", ref_feature.size())
        ref_feature = ref_feature.unsqueeze(2).expand(-1, -1, num_depth, -1, -1)\
                        .contiguous().view(batch_size,feature_channels,-1)
        point_features[:, 0, :, :] = ref_feature
        #print(point_features[:, 0, :, :])   # this is the feature volume of the reference view  2022/5/24

        avg_point_features = torch.mean(point_features, dim=1)
        avg_point_features_2 = torch.mean(point_features ** 2, dim=1)

        point_features = avg_point_features_2 - (avg_point_features ** 2)
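        # Variance-based matching cost across views: Var(F) = E[F^2] - (E[F])^2.
        # A small variance means the features fetched from all views agree at this
        # depth hypothesis, i.e. the hypothesis is likely correct for that pixel.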

        cost_volume = point_features.view(batch_size, feature_channels, num_depth, feature_height // 2, feature_width // 2)  # reshape to 1/8 x 1/8  2022/5/24

        filtered_cost_volume = self.coarse_vol_conv(cost_volume).squeeze(1)   # 3D CNN  2022/5/24   #### transposed convolution essentially inserts zeros between the input elements before convolving; dilated convolution instead inserts gaps into the kernel, i.e. it skips over parts of the feature map during convolution.

        probability_volume = F.softmax(-filtered_cost_volume, dim=1)
        depth_volume = []
        for i in range(batch_size):
            depth_array = torch.linspace(depth_start[i], depth_end[i], num_depth, device=depth_start.device)
            depth_volume.append(depth_array)
        depth_volume = torch.stack(depth_volume, dim=0)  # (B, D)
        depth_volume = depth_volume.view(batch_size, num_depth, 1, 1).expand(probability_volume.shape)
        pred_depth_img = torch.sum(depth_volume * probability_volume, dim=1).unsqueeze(1)  # (B, 1, FH, FW)    sparse high-resolution coarse depth map 2022/5/24

        prob_map = get_propability_map(probability_volume, pred_depth_img, depth_start, depth_interval)

        # image guided depth map propagation
        pred_depth_img = F.interpolate(pred_depth_img, (feature_height, feature_width), mode="nearest")
        prob_map = F.interpolate(prob_map, (feature_height, feature_width), mode="bilinear")
        pred_depth_img = self.propagation_net(pred_depth_img, img_list[:, 0, :, :, :])      # dense depth map 2022/5/24

        preds["coarse_depth_map"] = pred_depth_img
        preds["coarse_prob_map"] = prob_map

        if isGN:
            feature_pyramids = {}
            chosen_conv = ["conv1", "conv2"]
            for conv in chosen_conv:
                feature_pyramids[conv] = []
            for i in range(num_view):
                curr_img = img_list[:, i, :, :, :]
                curr_feature_pyramid = self.flow_img_conv(curr_img)
                for conv in chosen_conv:
                    feature_pyramids[conv].append(curr_feature_pyramid[conv])

            for conv in chosen_conv:
                feature_pyramids[conv] = torch.stack(feature_pyramids[conv], dim=1)

            if isTest:
                for conv in chosen_conv:
                    feature_pyramids[conv] = torch.detach(feature_pyramids[conv])


            def gn_update(estimated_depth_map, interval, image_scale, it):
                nonlocal chosen_conv
                # print(estimated_depth_map.size(), image_scale)
                flow_height, flow_width = list(estimated_depth_map.size())[2:]
                if flow_height != int(img_height * image_scale):
                    flow_height = int(img_height * image_scale)
                    flow_width = int(img_width * image_scale)
                    estimated_depth_map = F.interpolate(estimated_depth_map, (flow_height, flow_width), mode="nearest")
                else:
                    # if it is the same size return directly
                    return estimated_depth_map
                    # pass
                
                if isTest:
                    estimated_depth_map = estimated_depth_map.detach()

                # GN step
                cam_intrinsic = cam_params_list[:, :, 1, :3, :3].clone()
                if isTest:
                    cam_intrinsic[:, :, :2, :3] *= image_scale
                else:
                    cam_intrinsic[:, :, :2, :3] *= (4 * image_scale)

                ref_cam_intrinsic = cam_intrinsic[:, 0, :, :].clone()
                feature_map_indices_grid = get_pixel_grids(flow_height, flow_width) \
                    .view(1, 1, 3, -1).expand(batch_size, 1, 3, -1).to(img_list.device)

                uv = torch.matmul(torch.inverse(ref_cam_intrinsic).unsqueeze(1),
                                  feature_map_indices_grid)  # (B, 1, 3, FH*FW)

                interval_depth_map = estimated_depth_map
                cam_points = (uv * interval_depth_map.view(batch_size, 1, 1, -1))
                world_points = torch.matmul(R_inv[:, 0:1, :, :], cam_points - t[:, 0:1, :, :]).transpose(1, 2) \
                    .contiguous().view(batch_size, 3, -1)  # (B, 3, D*FH*FW)

                grad_pts = self.point_grad_fetcher(world_points, cam_intrinsic, cam_extrinsic)

                R_tar_ref = torch.bmm(R.view(batch_size * num_view, 3, 3),
                                      R_inv[:, 0:1, :, :].repeat(1, num_view, 1, 1).view(batch_size * num_view, 3, 3))

                R_tar_ref = R_tar_ref.view(batch_size, num_view, 3, 3)
                d_pts_d_d = uv.unsqueeze(-1).permute(0, 1, 3, 2, 4).contiguous().repeat(1, num_view, 1, 1, 1)
                d_pts_d_d = R_tar_ref.unsqueeze(2) @ d_pts_d_d
                d_uv_d_d = torch.bmm(grad_pts.view(-1, 2, 3), d_pts_d_d.view(-1, 3, 1)).view(batch_size, num_view, 1,
                                                                                             -1, 2, 1)
                all_features = []
                for conv in chosen_conv:
                    curr_feature = feature_pyramids[conv]
                    c, h, w = list(curr_feature.size())[2:]
                    curr_feature = curr_feature.contiguous().view(-1, c, h, w)
                    curr_feature = F.interpolate(curr_feature, (flow_height, flow_width), mode="bilinear")
                    curr_feature = curr_feature.contiguous().view(batch_size, num_view, c, flow_height, flow_width)

                    all_features.append(curr_feature)

                all_features = torch.cat(all_features, dim=2)

                if isTest:
                    point_features, point_features_grad = \
                        self.feature_grad_fetcher.test_forward(all_features, world_points, cam_intrinsic, cam_extrinsic)
                else:
                    point_features, point_features_grad = \
                        self.feature_grad_fetcher(all_features, world_points, cam_intrinsic, cam_extrinsic)

                c = all_features.size(2)
                d_uv_d_d_tmp = d_uv_d_d.repeat(1, 1, c, 1, 1, 1)
                # print("d_uv_d_d tmp size:", d_uv_d_d_tmp.size())
                J = point_features_grad.view(-1, 1, 2) @ d_uv_d_d_tmp.view(-1, 2, 1)
                J = J.view(batch_size, num_view, c, -1, 1)[:, 1:, ...].contiguous()\
                    .permute(0, 3, 1, 2, 4).contiguous().view(-1, c * (num_view - 1), 1)

                # print(J.size())
                resid = point_features[:, 1:, ...] - point_features[:, 0:1, ...]
                first_resid = torch.sum(torch.abs(resid), dim=(1, 2))
                # print(resid.size())
                resid = resid.permute(0, 3, 1, 2).contiguous().view(-1, c * (num_view - 1), 1)

                J_t = torch.transpose(J, 1, 2)
                H = J_t @ J
                b = -J_t @ resid
                delta = b / (H + 1e-6)
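                # Since each pixel's depth is a single scalar, H = J_t @ J is a per-pixel
                # 1x1 matrix, so the Gauss-Newton solve above reduces to an element-wise
                # division (1e-6 guards against division by zero).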
                # #print(delta.size())
                _, _, h, w = estimated_depth_map.size()
                flow_result = estimated_depth_map  + delta.view(-1, 1, h, w)

                # check update results
                interval_depth_map = flow_result
                cam_points = (uv * interval_depth_map.view(batch_size, 1, 1, -1))
                world_points = torch.matmul(R_inv[:, 0:1, :, :], cam_points - t[:, 0:1, :, :]).transpose(1, 2) \
                    .contiguous().view(batch_size, 3, -1)  # (B, 3, D*FH*FW)

                point_features = \
                    self.feature_fetcher(all_features, world_points, cam_intrinsic, cam_extrinsic)

                resid = point_features[:, 1:, ...] - point_features[:, 0:1, ...]
                second_resid = torch.sum(torch.abs(resid), dim=(1, 2))
                # print(first_resid.size(), second_resid.size())

                # only accept good update
                flow_result = torch.where((second_resid < first_resid).view(batch_size, 1, flow_height, flow_width),
                                          flow_result, estimated_depth_map)
                return flow_result

            for i, (img_scale, inter_scale) in enumerate(zip(img_scales, inter_scales)):
                if isTest:
                    pred_depth_img = torch.detach(pred_depth_img)
                    print("update: {}".format(i))
                flow = gn_update(pred_depth_img, inter_scale* depth_interval, img_scale, i)
                preds["flow{}".format(i+1)] = flow
                pred_depth_img = flow       # depth map after the Gauss-Newton update 2022/5/24

        return preds


Summary

That is my current understanding of FastMVSNet. While working through the network, it became clear that by modifying the data-preprocessing part, the test pipeline can be run on self-collected data.
I will upload the modified code later.
