PyTorch Transfer Learning: Training VGG16 and Testing the Model (Huawei Cloud ModelArts)

Dataset

The Kaggle Dogs vs. Cats competition: https://www.kaggle.com/c/dogs-vs-cats-redux-kernels-edition/data
The dataset comes in two parts, train and test. Split the images in train into two folders, cat and dog, one per class, and set aside a portion of the data as a validation set.
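The split above can be scripted. A minimal sketch, assuming the usual Kaggle file names (`cat.0.jpg`, `dog.0.jpg`, ...); the directory layout and the validation ratio are my own choices, not part of the original post:

```python
import os
import random
import shutil

def split_dataset(src_dir, dst_dir, valid_ratio=0.1, seed=0):
    """Copy cat.* / dog.* images from src_dir into
    dst_dir/{train,valid}/{cat,dog} folders."""
    random.seed(seed)
    for cls in ("cat", "dog"):
        files = [f for f in os.listdir(src_dir) if f.startswith(cls)]
        random.shuffle(files)
        n_valid = int(len(files) * valid_ratio)  # first n_valid files go to valid
        for i, name in enumerate(files):
            split = "valid" if i < n_valid else "train"
            out = os.path.join(dst_dir, split, cls)
            os.makedirs(out, exist_ok=True)
            shutil.copy(os.path.join(src_dir, name), out)
```

This produces exactly the folder structure that `ImageFolder` expects later in the post.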

Training tools

A notebook in Huawei Cloud ModelArts. Huawei Cloud has recently been offering free P100 instances to developers; each session is limited to one hour, but that is plenty for training a simple model like this one.

Training code

Data preparation

Since I am training on a Huawei Cloud server, the data first has to be downloaded from OBS to the server. Skip this step if you are not using Huawei Cloud.

from modelarts.session import Session
session = Session()
# download the dataset to the server
session.obs.download_dir(src_obs_dir="obs://cat-vs-dog20200426/train/", dst_local_dir="./data")
session.obs.download_dir(src_obs_dir="obs://cat-vs-dog20200426/valid/", dst_local_dir="./data")
session.obs.download_dir(src_obs_dir="obs://cat-vs-dog20200426/test/", dst_local_dir="./data")

Training code

Import the packages

import torch as t
import torchvision
from torchvision import datasets , transforms, models
from torch.autograd import Variable
import matplotlib.pyplot as plt
import pylab
import os
import time

Download the VGG16 pretrained weights from my OBS bucket (downloading them online is slow, so I uploaded them in advance). Skip this step if you are not using Huawei Cloud.

from modelarts.session import Session
session = Session()
session.obs.download_file(src_obs_file="obs://cat-vs-dog20200426/vgg16-397923af.pth", dst_local_dir="/home/ma-user/.torch/models/")

Check whether a GPU is available; if it is, the data and model will be moved to CUDA later.

use_gpu = t.cuda.is_available()
print(use_gpu)

Preprocess the data: open the datasets with ImageFolder and resize the images to (224, 224).

data_dir = "./data"
data_transform = {x:transforms.Compose([transforms.Resize([224,224]),
                                        transforms.ToTensor()])
                  for x in ["train","valid"]}
# open the dataset from folders, one subfolder per class
image_datasets = {x:datasets.ImageFolder(root= os.path.join(data_dir,x),
                                         transform=data_transform[x])
                  for x in ["train","valid"]}
dataloader = {x:t.utils.data.DataLoader(dataset = image_datasets[x],
                                        batch_size = 20,
                                        shuffle = True)
              for x in ["train","valid"]
              }

Define the model. VGG16 originally outputs 1000 classes; since this is a binary classification task, replace the final classifier so it outputs 2 classes.

model = models.vgg16(pretrained = True)

for param in model.parameters():
    param.requires_grad = False  # freeze the convolutional features

model.classifier = t.nn.Sequential(  # fully connected classifier
            t.nn.Linear(7*7*512, 4096),
            t.nn.ReLU(),
            t.nn.Dropout(p=0.5),  # guard against overfitting: zero each element with probability 0.5
            t.nn.Linear(4096, 4096),
            t.nn.ReLU(),
            t.nn.Dropout(p=0.5),
            t.nn.Linear(4096, 2)
        )
print(model)

The printed model:

VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU()
    (2): Dropout(p=0.5)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU()
    (5): Dropout(p=0.5)
    (6): Linear(in_features=4096, out_features=2, bias=True)
  )
)

Define the loss as cross-entropy and the optimizer as Adam, and move the model and loss to CUDA.

cost = t.nn.CrossEntropyLoss()
optimizer = t.optim.Adam(model.classifier.parameters(), lr=0.00001)  # only optimize the classifier layers

if use_gpu:
    model = model.cuda()
    cost = cost.cuda()
print(model)

Training loop
Only the fully connected layers are updated. The model converges well after a single epoch, and training takes only ten-odd minutes on a P100.

n_epochs = 1
time_open = time.time()
for epoch in range(n_epochs):
    print("Epoch {}/{}".format(epoch, n_epochs))
    print("-"*10)
    for phase in ["train", "valid"]:
        running_loss = 0.0
        running_correct = 0
        if phase == "train":
            print("Training......")
            model.train(True)
        else:
            print("Valid......")
            model.train(False)
        for batch, data in enumerate(dataloader[phase], 1):
            X, y = data
            if use_gpu:
                X, y = X.cuda(), y.cuda()
            output = model(X)
            _, pred = t.max(output.data, 1)
            optimizer.zero_grad()
            loss = cost(output, y)
            if phase == "train":  # update weights only in the training phase
                loss.backward()
                optimizer.step()
            running_loss += loss.item()
            running_correct += t.sum(pred == y.data).item()
            if batch % 200 == 0 and phase == "train":
                print("Batch:{},Train_loss:{:.4},Train_acc:{:.4}".format(
                    batch, running_loss/(batch*20), running_correct/(batch*20.0)))
        epoch_loss = running_loss/len(image_datasets[phase])
        epoch_acc = running_correct/len(image_datasets[phase])
        print("{} Loss:{:.4} Acc:{:.4}".format(phase, epoch_loss, epoch_acc))
        time_end = time.time()
        print(time_end - time_open)

Training results

Epoch 0/1
----------
Training......
Batch:200,Train_loss:0.01097,Train_acc:0.9225
Batch:400,Train_loss:0.008127,Train_acc:0.9395
Batch:600,Train_loss:0.006853,Train_acc:0.9481
Batch:800,Train_loss:0.006385,Train_acc:0.9508
Batch:1000,Train_loss:0.006009,Train_acc:0.9535

Saving the model
A model can be saved either as parameters only or as the whole model; since the Huawei Cloud server only supports the first approach, the saving code is:

# save and load only the model parameters (recommended)
t.save(model.state_dict(), 'cat-dog-vgg16.pth')
#model.load_state_dict(t.load('cat-dog-vgg16.pth'))

Testing code

The test code needs little commentary, so it is given directly below. It reads every file under test and writes the predictions to a CSV file.

from torchvision import transforms, datasets as ds ,models
import torchvision as tv
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image,ImageDraw
import torch as t
from torch.autograd import Variable


transform = transforms.Compose(
    [
        transforms.Resize([224,224]),
        transforms.ToTensor()
    ]
)
label_id_name_dict = \
         {
            0: "cat",
            1: "dog",
         }
use_gpu = t.cuda.is_available()
print(use_gpu)
test_image_name = './data/test/2.jpg'

# download the pretrained weights to the cloud server
from modelarts.session import Session
session = Session()
session.obs.download_file(src_obs_file="obs://cat-vs-dog20200426/vgg16-397923af.pth", dst_local_dir="/home/ma-user/.torch/models/")

model = models.vgg16(pretrained = True)
for param in model.parameters():
    param.requires_grad = False

model.classifier = t.nn.Sequential(  # fully connected classifier
            t.nn.Linear(7*7*512, 4096),
            t.nn.ReLU(),
            t.nn.Dropout(p=0.5),  # guard against overfitting: zero each element with probability 0.5
            t.nn.Linear(4096, 4096),
            t.nn.ReLU(),
            t.nn.Dropout(p=0.5),
            t.nn.Linear(4096, 2)
        )
#print(model)
# rebuild the network structure, then load the trained parameters into it
state_dict = t.load('cat-dog-vgg16.pth')
model.load_state_dict(state_dict)
if use_gpu:
    model = model.cuda()

def predict(model, test_image_name):
    test_image = Image.open(test_image_name)
    test_image_tensor = transform(test_image).unsqueeze(0)
    if use_gpu:
        test_image_tensor = test_image_tensor.cuda()
    model.eval()
    with t.no_grad():  # disable gradient tracking during inference
        out = model(test_image_tensor).data.cpu().numpy()
    idx = np.argmax(out, axis=1)
    label = label_id_name_dict[idx[0]]
    print(test_image_name + ":" + label)

test_image_name = './data/test/20.jpg'
predict(model, test_image_name)
import os
# walk the given directory and run predict on every file in it
def eachFile(filepath):
    pathDir = os.listdir(filepath)
    for allDir in pathDir:
        child = os.path.join(filepath, allDir)
        predict(model, child)
eachFile("./data/test/")
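Note that `os.listdir` returns file names in arbitrary order. If you want to process the test images in numeric id order (the Kaggle test files are named `1.jpg`, `2.jpg`, ...), sort them by their numeric stem first. A small sketch (the helper name is my own):

```python
import os

def sorted_image_files(filepath):
    """Return .jpg file names sorted by their numeric stem (1.jpg, 2.jpg, ..., 10.jpg)."""
    names = [n for n in os.listdir(filepath) if n.endswith(".jpg")]
    # plain string sort would give 1, 10, 2, ...; sort on the integer id instead
    return sorted(names, key=lambda n: int(os.path.splitext(n)[0]))
```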
import csv
# 1. create the file object
csvFile = open("./cat_dog_result.csv", 'w', newline='')
# 2. build a csv writer on top of it
writer = csv.writer(csvFile)
# 3. write the header row
writer.writerow(['id', 'label'])
# 4. write the predictions
def save_result_to_csv(model, filepath):
    pathDir = os.listdir(filepath)
    for allDir in pathDir:
        child = os.path.join(filepath, allDir)
        test_image = Image.open(child)
        test_image_tensor = transform(test_image).unsqueeze(0)
        if use_gpu:
            test_image_tensor = test_image_tensor.cuda()
        model.eval()
        with t.no_grad():
            out = model(test_image_tensor).data.cpu().numpy()
        idx = np.argmax(out, axis=1)
        writer.writerow([allDir[:-4], idx[0]])  # file name without extension, predicted class

save_result_to_csv(model, "./data/test/")
csvFile.close()
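One caveat: this Kaggle competition is scored by log loss on the predicted probability that the image is a dog, so submitting hard class indices (0/1) is heavily penalized whenever a prediction is wrong. If you want to submit probabilities instead, apply a softmax to the two logits and write P(dog). A pure-Python sketch of the conversion (the helper name and the clipping bound are my own choices):

```python
import math

def dog_probability(logits, eps=1e-7):
    """Convert [cat_logit, dog_logit] to a clipped P(dog) via softmax."""
    m = max(logits)  # subtract the max logit for numerical stability
    exps = [math.exp(v - m) for v in logits]
    p = exps[1] / sum(exps)
    # clip away from 0 and 1 so a confidently wrong prediction
    # does not produce an unbounded log loss
    return min(max(p, eps), 1.0 - eps)
```

For example, `dog_probability(out[0].tolist())` on the numpy output above gives a value suitable for the `label` column.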

# upload the results back to OBS; skip this if you are not using the cloud server
from modelarts.session import Session
session = Session()
session.obs.upload_file(src_local_file='./cat_dog_result.csv', dst_obs_dir='obs://cat-vs-dog20200426/VGG16/')
session.obs.upload_file(src_local_file='./cat-dog-vgg16.pth', dst_obs_dir='obs://cat-vs-dog20200426/VGG16/')
session.obs.upload_file(src_local_file='./cat-dog-vgg16.ipynb', dst_obs_dir='obs://cat-vs-dog20200426/VGG16/')

Extensions

Training other classification models works the same way: swap in the corresponding model and adjust its head slightly.
For example, resnet50 or densenet121:

# resnet50
model = models.resnet50(pretrained = True)
model.fc = t.nn.Linear(2048, 2)
optimizer = t.optim.Adam(model.fc.parameters(), lr=0.00001)  # only optimize the classifier layer

# densenet121
model = models.densenet121(pretrained = True)
model.classifier = t.nn.Sequential(  # fully connected classifier
            t.nn.Linear(1024, 4096),
            t.nn.ReLU(),
            t.nn.Dropout(p=0.5),  # guard against overfitting
            t.nn.Linear(4096, 2)
        )
optimizer = t.optim.Adam(model.classifier.parameters(), lr=0.00001)  # only optimize the classifier layers