Deep Learning Notes 06: Face Recognition with PyTorch

I. My Environment:

1. Language: Python 3.9

2. IDE: PyCharm

3. Deep learning environment:

  • torch==2.1.2+cu118
  • torchvision==0.16.2+cu118

II. GPU Setup:

       This step can be skipped if you are running on CPU.

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

III. Importing the Data:

  • pathlib.Path converts the folder path string into a pathlib.Path object.
  • The glob() method collects all file paths under data_dir and stores them as a list in data_paths.
  • split() is applied to each path in data_paths to extract the class name each file belongs to, stored in classeNames.
  • Print classeNames to show the class name for each folder.
import os,PIL,random,pathlib

data_dir = './data/'
data_dir = pathlib.Path(data_dir)

data_paths  = list(data_dir.glob('*'))
classeNames = [str(path).split("\\")[1] for path in data_paths]  # "\\" assumes Windows-style paths; use path.name for a cross-platform alternative
print(classeNames)

Output:

['Angelina Jolie',
 'Brad Pitt',
 'Denzel Washington',
 'Hugh Jackman',
 'Jennifer Lawrence',
 'Johnny Depp',
 'Kate Winslet',
 'Leonardo DiCaprio',
 'Megan Fox',
 'Natalie Portman',
 'Nicole Kidman',
 'Robert Downey Jr',
 'Sandra Bullock',
 'Scarlett Johansson',
 'Tom Cruise',
 'Tom Hanks',
 'Will Smith']

IV. Data Visualization:

import matplotlib.pyplot as plt
from PIL import Image

# Path to the image folder
image_folder = './data/Angelina Jolie/'

# Collect all image files in the folder
image_files = [f for f in os.listdir(image_folder) if f.endswith((".jpg", ".png", ".jpeg"))]

# Create the Matplotlib figure
fig, axes = plt.subplots(2, 4, figsize=(16, 6))

# Load and display the images
for ax, img_file in zip(axes.flat, image_files):
    img_path = os.path.join(image_folder, img_file)
    img = Image.open(img_path)
    ax.imshow(img)
    ax.axis('off')

# Show the figure
plt.tight_layout()
plt.show()

Output:

(Figure: a 2×4 grid of sample images from the Angelina Jolie folder)

V. Splitting the Dataset

# For more on transforms.Compose, see: https://blog.csdn.net/qq_38251616/article/details/124878863
from torchvision import transforms, datasets

train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),  # resize every input image to a uniform size
    # transforms.RandomHorizontalFlip(), # random horizontal flip
    transforms.ToTensor(),          # convert PIL Image or numpy.ndarray to a tensor and scale to [0,1]
    transforms.Normalize(           # standardize --> approximately a standard normal (Gaussian) distribution, which helps the model converge
        mean=[0.485, 0.456, 0.406], 
        std=[0.229, 0.224, 0.225])  # these mean/std values were computed from random samples of the dataset
])

total_data = datasets.ImageFolder("./data/", transform=train_transforms)
print(total_data)
batch_size = 32

# 80/20 train/test split
train_size = int(0.8 * len(total_data))
test_size  = len(total_data) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])

train_dl = torch.utils.data.DataLoader(train_dataset,
                                       batch_size=batch_size,
                                       shuffle=True,
                                       num_workers=1)
test_dl = torch.utils.data.DataLoader(test_dataset,
                                      batch_size=batch_size,
                                      shuffle=True,
                                      num_workers=1)
for X, y in test_dl:
    print("Shape of X [N, C, H, W]: ", X.shape)
    print("Shape of y: ", y.shape, y.dtype)
    break

Output:

Shape of X [N, C, H, W]:  torch.Size([32, 3, 224, 224])
Shape of y:  torch.Size([32]) torch.int64

VI. Using the Official VGG-16 Model

        VGG-16 (Visual Geometry Group-16) is a deep convolutional neural network architecture proposed by the Visual Geometry Group at the University of Oxford for image classification and object recognition. Introduced in 2014 as one member of the VGG family, it drew wide attention for its strong results in the ImageNet image-recognition competition, demonstrating its effectiveness on large-scale recognition tasks.

  • Depth: VGG-16 consists of 13 convolutional layers and 3 fully connected layers (16 weight layers in total), giving it a relatively deep structure. This depth helps the network learn more abstract and complex features.
  • Convolutional design: every convolutional layer in VGG-16 uses a 3x3 kernel with stride 1, followed by a ReLU activation. Stacking several small kernels increases the network's nonlinear modeling capacity while reducing the parameter count, which lowers the risk of overfitting.
  • Pooling layers: after groups of convolutions, VGG-16 applies max pooling to shrink the spatial size of the feature maps, helping extract the most salient features and reducing computation.
  • Fully connected layers: the convolutional stack is followed by 3 fully connected layers; the last one outputs a vector whose length equals the number of classes, used for classification.

Network architecture diagram:

(Figure: VGG-16 network architecture)

import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))
    
# Load the pretrained model and fine-tune it
# (torchvision >= 0.13 uses the weights argument; pretrained=True is deprecated)
model = vgg16(weights=VGG16_Weights.DEFAULT).to(device) # load a pretrained vgg16 model

for param in model.parameters():
    param.requires_grad = False # freeze the backbone so that only the final layer is trained

# Replace layer 6 of the classifier module (i.e. (6): Linear(in_features=4096, out_features=1000, bias=True))
# See the model printout below
model.classifier._modules['6'] = nn.Linear(4096, len(classeNames)) # replace the final fully connected layer so it outputs the number of target classes
model.to(device)  
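
A quick sanity check (a minimal sketch, not from the original code) confirming that only the replaced layer remains trainable:

# list the parameters that will receive gradients
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # expect only ['classifier.6.weight', 'classifier.6.bias']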

Load and print the model

import torchsummary as summary

# print a layer-by-layer summary of the modified model built above
summary.summary(model, (3, 224, 224))

Output:

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 224, 224]           1,792
              ReLU-2         [-1, 64, 224, 224]               0
            Conv2d-3         [-1, 64, 224, 224]          36,928
              ReLU-4         [-1, 64, 224, 224]               0
         MaxPool2d-5         [-1, 64, 112, 112]               0
            Conv2d-6        [-1, 128, 112, 112]          73,856
              ReLU-7        [-1, 128, 112, 112]               0
            Conv2d-8        [-1, 128, 112, 112]         147,584
              ReLU-9        [-1, 128, 112, 112]               0
        MaxPool2d-10          [-1, 128, 56, 56]               0
           Conv2d-11          [-1, 256, 56, 56]         295,168
             ReLU-12          [-1, 256, 56, 56]               0
           Conv2d-13          [-1, 256, 56, 56]         590,080
             ReLU-14          [-1, 256, 56, 56]               0
           Conv2d-15          [-1, 256, 56, 56]         590,080
             ReLU-16          [-1, 256, 56, 56]               0
        MaxPool2d-17          [-1, 256, 28, 28]               0
           Conv2d-18          [-1, 512, 28, 28]       1,180,160
             ReLU-19          [-1, 512, 28, 28]               0
           Conv2d-20          [-1, 512, 28, 28]       2,359,808
             ReLU-21          [-1, 512, 28, 28]               0
           Conv2d-22          [-1, 512, 28, 28]       2,359,808
             ReLU-23          [-1, 512, 28, 28]               0
        MaxPool2d-24          [-1, 512, 14, 14]               0
           Conv2d-25          [-1, 512, 14, 14]       2,359,808
             ReLU-26          [-1, 512, 14, 14]               0
           Conv2d-27          [-1, 512, 14, 14]       2,359,808
             ReLU-28          [-1, 512, 14, 14]               0
           Conv2d-29          [-1, 512, 14, 14]       2,359,808
             ReLU-30          [-1, 512, 14, 14]               0
        MaxPool2d-31            [-1, 512, 7, 7]               0
AdaptiveAvgPool2d-32            [-1, 512, 7, 7]               0
           Linear-33                 [-1, 4096]     102,764,544
             ReLU-34                 [-1, 4096]               0
          Dropout-35                 [-1, 4096]               0
           Linear-36                 [-1, 4096]      16,781,312
             ReLU-37                 [-1, 4096]               0
          Dropout-38                 [-1, 4096]               0
           Linear-39                   [-1, 17]          69,649
================================================================
Total params: 134,330,193
Trainable params: 69,649
Non-trainable params: 134,260,544
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 218.77
Params size (MB): 512.43
Estimated Total Size (MB): 731.78
----------------------------------------------------------------

VII. Training Function

# Training loop
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)  # size of the training set
    num_batches = len(dataloader)   # number of batches (ceil(size / batch_size))

    train_loss, train_acc = 0, 0  # initialize training loss and accuracy
    
    for X, y in dataloader:  # fetch images and their labels
        X, y = X.to(device), y.to(device)
        
        # Compute the prediction error
        pred = model(X)          # network output
        loss = loss_fn(pred, y)  # loss between the network output and the ground-truth targets
        
        # Backpropagation
        optimizer.zero_grad()  # zero out the gradients
        loss.backward()        # backpropagate
        optimizer.step()       # update the parameters
        
        # Accumulate accuracy and loss
        train_acc  += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()
            
    train_acc  /= size
    train_loss /= num_batches

    return train_acc, train_loss

VIII. Test Function

def test(dataloader, model, loss_fn):
    size        = len(dataloader.dataset)  # size of the test set
    num_batches = len(dataloader)          # number of batches (ceil(size / batch_size))
    test_loss, test_acc = 0, 0
    
    # Disable gradient tracking during evaluation to save memory and compute
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)
            
            # Compute the loss
            target_pred = model(imgs)
            loss        = loss_fn(target_pred, target)
            
            test_loss += loss.item()
            test_acc  += (target_pred.argmax(1) == target).type(torch.float).sum().item()

    test_acc  /= size
    test_loss /= num_batches

    return test_acc, test_loss

IX. Model Training

if __name__ == "__main__":
    main()

Set the hyperparameters

  • loss_fn = nn.CrossEntropyLoss()   # create the loss function
  • learn_rate = 1e-4 # learning rate
  • opt = torch.optim.SGD(model.parameters(), lr=learn_rate)

Set up a dynamic learning rate

import time
import copy

def adjust_learning_rate(optimizer, epoch, start_lr):
    # Decay the LR to 0.92 of its value every 2 epochs
    lr = start_lr * (0.92 ** (epoch // 2))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr

def main():
    loss_fn = nn.CrossEntropyLoss()  # create the loss function
    epochs = 40
    train_loss = []
    train_acc = []
    test_loss = []
    test_acc = []
    learn_rate = 1e-4  # initial learning rate

    model = vgg16().to(device)  # the custom vgg16 class (see Section XIII below)
    optimizer = torch.optim.SGD(model.parameters(), lr=learn_rate)

    best_acc = 0  # track the best test accuracy as the criterion for the best model

    total_data = datasets.ImageFolder("./data/", transform=train_transforms)
    classes = list(total_data.class_to_idx)

    train_size = int(0.8 * len(total_data))
    test_size = len(total_data) - train_size
    train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])
    train_dl = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=1)
    test_dl = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True, num_workers=1)

    for epoch in range(epochs):
        t1 = int(time.time() * 1000)
        # Update the learning rate (when using the custom schedule)
        adjust_learning_rate(optimizer, epoch, learn_rate)

        model.train()
        epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)
        # scheduler.step()  # use this instead when calling PyTorch's built-in scheduler interface

        model.eval()
        epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

        # Keep a copy of the best model so far in best_model
        if epoch_test_acc > best_acc:
            best_acc = epoch_test_acc
            best_model = copy.deepcopy(model)

        train_acc.append(epoch_train_acc)
        train_loss.append(epoch_train_loss)
        test_acc.append(epoch_test_acc)
        test_loss.append(epoch_test_loss)

        # Read back the current learning rate
        lr = optimizer.state_dict()['param_groups'][0]['lr']
        t2 = int(time.time() * 1000)
        template = (
            'Epoch:{:2d}, duration:{}ms, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}')
        print(
            template.format(epoch + 1, t2 - t1, epoch_train_acc * 100, epoch_train_loss, epoch_test_acc * 100, epoch_test_loss, lr))
    print('Done')
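
For reference, the same decay rule can be expressed with PyTorch's built-in scheduler interface (a sketch of the alternative mentioned in the scheduler.step() comment above; StepLR with step_size=2 and gamma=0.92 mirrors the custom schedule):

from torch.optim.lr_scheduler import StepLR

# multiply the learning rate by 0.92 every 2 epochs
scheduler = StepLR(optimizer, step_size=2, gamma=0.92)
# then call scheduler.step() once per epoch instead of adjust_learning_rate()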

Output:

(Figure: per-epoch training log)

X. Visualizing Training Results

    plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese labels correctly
    plt.rcParams['axes.unicode_minus'] = False  # display minus signs correctly
    plt.rcParams['figure.dpi'] = 100  # figure resolution

    epochs_range = range(epochs)

    plt.figure(figsize=(12, 3))
    plt.subplot(1, 2, 1)

    plt.plot(epochs_range, train_acc, label='Training Accuracy')
    plt.plot(epochs_range, test_acc, label='Test Accuracy')
    plt.legend(loc='lower right')
    plt.title('Training and Test Accuracy')

    plt.subplot(1, 2, 2)
    plt.plot(epochs_range, train_loss, label='Training Loss')
    plt.plot(epochs_range, test_loss, label='Test Loss')
    plt.legend(loc='upper right')
    plt.title('Training and Test Loss')
    plt.show()

Output:

(Figure: training and test accuracy/loss curves)

XI. Prediction

torch.squeeze(): compresses tensor dimensions by removing dimensions of size 1.

torch.unsqueeze(): expands tensor dimensions by inserting a dimension of size 1 at the specified position.
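
A minimal sketch (not from the original post) showing what these two calls do to tensor shapes:

import torch

t = torch.zeros(3, 224, 224)     # a single image tensor: C, H, W
batched = t.unsqueeze(0)         # -> torch.Size([1, 3, 224, 224]): adds a batch dimension
print(batched.shape)
print(batched.squeeze(0).shape)  # -> torch.Size([3, 224, 224]): removes the size-1 dimension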

from PIL import Image 

classes = list(total_data.class_to_idx)

def predict_one_image(image_path, model, transform, classes):
    
    test_img = Image.open(image_path).convert('RGB')
    plt.imshow(test_img)  # display the image being predicted

    test_img = transform(test_img)
    img = test_img.to(device).unsqueeze(0)  # add the batch dimension expected by the model
    
    model.eval()
    output = model(img)

    _, pred = torch.max(output, 1)
    pred_class = classes[pred]
    print(f'Predicted class: {pred_class}')

predict_one_image(image_path='./data/Angelina Jolie/001_fe3347c0.jpg',
                  model=best_model,
                  transform=train_transforms,
                  classes=classes)

Output:

Predicted class: Angelina Jolie

XII. Model Evaluation

best_model.eval()
epoch_test_acc, epoch_test_loss = test(test_dl, best_model, loss_fn)
print(epoch_test_acc, epoch_test_loss)

Output:

0.16944444444444445 2.5215547680854797
# check whether this matches the highest accuracy we recorded
print(epoch_test_acc) # 0.16944444444444445

XIII. Improving Test-Set Accuracy

Change the optimizer:

opt = torch.optim.Adam(model.parameters(), lr=learn_rate)

        By hand-building the VGG-16 network, switching the optimizer, and adding BatchNormalization and dropout layers, the test-set accuracy reaches at most 49.2%.

Output:

Epoch: 1, duration:69608ms, Train_acc:14.9%, Train_loss:2.686, Test_acc:6.9%, Test_loss:2.941, Lr:1.00E-04
Epoch: 2, duration:80626ms, Train_acc:25.6%, Train_loss:2.261, Test_acc:23.1%, Test_loss:2.418, Lr:1.00E-04
Epoch: 3, duration:79576ms, Train_acc:39.4%, Train_loss:1.802, Test_acc:27.8%, Test_loss:2.141, Lr:9.20E-05
Epoch: 4, duration:79454ms, Train_acc:49.0%, Train_loss:1.492, Test_acc:33.1%, Test_loss:2.040, Lr:9.20E-05
...
Epoch:19, duration:78589ms, Train_acc:100.0%, Train_loss:0.002, Test_acc:49.2%, Test_loss:2.450, Lr:4.72E-05
Epoch:20, duration:78571ms, Train_acc:100.0%, Train_loss:0.001, Test_acc:48.1%, Test_loss:2.417, Lr:4.72E-05
Epoch:21, duration:78740ms, Train_acc:100.0%, Train_loss:0.001, Test_acc:49.2%, Test_loss:2.464, Lr:4.34E-05
Epoch:22, duration:79259ms, Train_acc:100.0%, Train_loss:0.001, Test_acc:48.1%, Test_loss:2.535, Lr:4.34E-05
Epoch:23, duration:78921ms, Train_acc:100.0%, Train_loss:0.001, Test_acc:48.6%, Test_loss:2.470, Lr:4.00E-05

        By hand-building the VGG-16 network, switching the optimizer, adding BatchNormalization and dropout layers, and replacing the fully connected head with global average pooling (making the model lighter), the test-set accuracy reaches at most 67.5%.

class vgg16(nn.Module):  # note: this class shadows the torchvision vgg16 imported earlier
    def __init__(self):
        super(vgg16, self).__init__()
        # Convolutional block 1
        self.block1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.BatchNorm2d(64),
            nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))
        )
        # Convolutional block 2
        self.block2 = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.BatchNorm2d(128),
            nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))
        )
        # Convolutional block 3
        self.block3 = nn.Sequential(
            nn.Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.BatchNorm2d(256),
            nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))
        )
        # Convolutional block 4
        self.block4 = nn.Sequential(
            nn.Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.BatchNorm2d(512),
            nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))
        )
        # Convolutional block 5
        self.block5 = nn.Sequential(
            nn.Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.BatchNorm2d(512),
            nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))
        )

        self.dropout = nn.Dropout(p=0.5)
        self.avgpool = nn.AdaptiveAvgPool2d(output_size=(1, 1))
        # Fully connected layer for classification
        self.classifier = nn.Sequential(
            nn.Linear(in_features=512, out_features=17),
        )

    def forward(self, x):
        x = self.block1(x)
        x = self.block2(x)
        x = self.block3(x)
        x = self.block4(x)
        x = self.block5(x)
        x = self.dropout(x)
        x = self.avgpool(x)
        x = torch.flatten(x, start_dim=1)
        x = self.classifier(x)

        return x
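
A quick back-of-the-envelope comparison (a sketch, not from the original post) of why the global-average-pooling head above is so much lighter than the original fully connected head:

# original VGG-16 head: flatten 512*7*7 features -> 4096 -> 4096 -> 17 classes
fc_head  = 512 * 7 * 7 * 4096 + 4096 * 4096 + 4096 * 17  # ~119.6M weights (biases ignored)
# GAP head above: pool to 512 features -> 17 classes
gap_head = 512 * 17                                       # 8,704 weights
print(fc_head, gap_head)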

Output:

Epoch: 1, duration:26824ms, Train_acc:13.2%, Train_loss:2.679, Test_acc:3.9%, Test_loss:3.005, Lr:1.00E-04
Epoch: 2, duration:22293ms, Train_acc:21.9%, Train_loss:2.354, Test_acc:20.3%, Test_loss:2.390, Lr:1.00E-04
Epoch: 3, duration:22289ms, Train_acc:28.8%, Train_loss:2.084, Test_acc:23.1%, Test_loss:2.347, Lr:9.20E-05
...
Epoch:36, duration:22374ms, Train_acc:100.0%, Train_loss:0.003, Test_acc:66.9%, Test_loss:1.012, Lr:2.42E-05
Epoch:37, duration:22353ms, Train_acc:100.0%, Train_loss:0.003, Test_acc:65.3%, Test_loss:1.032, Lr:2.23E-05
Epoch:38, duration:22360ms, Train_acc:100.0%, Train_loss:0.003, Test_acc:67.2%, Test_loss:1.042, Lr:2.23E-05
Epoch:39, duration:22407ms, Train_acc:100.0%, Train_loss:0.003, Test_acc:67.5%, Test_loss:1.023, Lr:2.05E-05
Epoch:40, duration:22449ms, Train_acc:100.0%, Train_loss:0.002, Test_acc:66.7%, Test_loss:1.002, Lr:2.05E-05

XIV. Saving and Loading the Model

# Save the model
PATH = './model.pth'  # file name for the saved parameters
torch.save(model.state_dict(), PATH)

# Load the parameters back into the model
model.load_state_dict(torch.load(PATH, map_location=device))

 Save the best model:

best_acc = 0
# Keep a copy of the best model so far in best_model
if epoch_test_acc > best_acc:
    best_acc = epoch_test_acc
    best_model = copy.deepcopy(model)
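
After training finishes, it is this best snapshot rather than the final epoch that should be persisted (a sketch; the './best_model.pth' path is an assumed example):

# save the weights of the best-performing snapshot, not the last epoch
torch.save(best_model.state_dict(), './best_model.pth')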

XV. Summary

 A summary of this PyTorch face-recognition project:

1. Calling the official VGG-16 model did not give good results; after consulting reference articles, hand-building a VGG-16 variant achieves noticeably better accuracy.

2. Benefits of 3x3 convolution kernels (see the sketch after this list):

  • Fewer parameters: a 3x3 kernel has 9 weights while a 5x5 kernel needs 25, so 3x3 kernels greatly reduce the network's parameter count and with it the risk of overfitting.
  • More nonlinearity: several stacked 3x3 kernels form an effective kernel with a larger receptive field, and because each convolution is followed by an activation, the stack has stronger nonlinear capacity. Repeated 3x3 convolutions in VGG thus act like a larger kernel while improving the network's feature-extraction ability.
  • Less computation: two stacked 3x3 convolutions with stride 1 have the same receptive field as a single 5x5 convolution, and three stacked 3x3 convolutions match a single 7x7, at lower cost. VGG therefore grows the receptive field with multiple 3x3 kernels without increasing computation, improving performance.
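
As a quick numeric check (a sketch; the channel count C is an arbitrary example), comparing the weights of two stacked 3x3 convolutions against one 5x5 convolution with the same receptive field:

C = 64                                  # input = output channels (example value)
params_two_3x3 = 2 * (3 * 3 * C * C)    # two stacked 3x3 convs: 18*C^2 = 73,728
params_one_5x5 = 5 * 5 * C * C          # one 5x5 conv:          25*C^2 = 102,400
print(params_two_3x3, params_one_5x5)   # same receptive field, ~28% fewer weights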

3. Batch normalization (a minimal ordering sketch follows this list):

  • nn.BatchNorm2d(256) is a PyTorch operation used in convolutional networks; it normalizes each channel of 2D inputs (such as images).
  • Batch Normalization standardizes each batch by its mean and variance so that every layer's output has a consistent distribution, which speeds up training and reduces overfitting. The 256 in nn.BatchNorm2d(256) is the number of channels being normalized, usually set to the number of channels of the layer's input or output.
  • nn.BatchNorm2d(256) is added as a layer of the network, conventionally after a convolution and before the ReLU; after training, each convolution's output is normalized by the BatchNorm layer, making training more stable.
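
A minimal sketch of the Conv -> BatchNorm -> ReLU ordering described above (the channel sizes are illustrative):

import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(128, 256, kernel_size=3, padding=1),
    nn.BatchNorm2d(256),  # normalizes each of the 256 output channels
    nn.ReLU(),
)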

4. Hand-building the VGG-16 network, switching the optimizer, and adding BatchNormalization and dropout layers raises the best test-set accuracy to 49.2%.

5. Additionally replacing the fully connected head with global average pooling (making the model lighter) raises the best test-set accuracy to 67.5%.
