MNIST Handwritten Digit Recognition and Optimization

MNIST handwritten digit recognition is the "Hello World" of AI. By dissecting this program, we can deepen our understanding of deep learning.

The code in this post comes from "《PaddlePaddle从入门到炼丹》四——卷积神经网络" (PaddlePaddle from Beginner to Alchemy, Part 4: Convolutional Neural Networks); the original post is linked below.

https://blog.csdn.net/qq_33200967/article/details/83506694

That post uses both deep fully connected neural networks and convolutional neural networks. The code runs on Baidu AI Studio with the GPU runtime. This article mainly records the digit recognition accuracy at different network depths, as a way of understanding how neural networks behave.

I. Deep Neural Networks

In this part, we change the network to have one, two, three, or four layers; a network with three or more layers is generally called a deep neural network.

Modify the code to select the multilayer perceptron as the classifier:

# Get the classifier
model = multilayer_perceptron(image)

1. Single-layer network

The network structure is input layer --> output layer. Modify the code as follows:

# Define the multilayer perceptron
def multilayer_perceptron(input):
    # First fully connected layer, ReLU activation (disabled here)
    #hidden1 = fluid.layers.fc(input=input, size=100, act='relu')
    # Second fully connected layer, ReLU activation (disabled here)
    #hidden2 = fluid.layers.fc(input=hidden1, size=100, act='relu')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=input, size=10, act='softmax')
    return fc

Below are the training and test accuracy figures:

Pass:0, Batch:0, Cost:3.12611, Accuracy:0.14844
Pass:0, Batch:100, Cost:0.57369, Accuracy:0.84375
Pass:0, Batch:200, Cost:0.34888, Accuracy:0.92188
Pass:0, Batch:300, Cost:0.35908, Accuracy:0.89844
Pass:0, Batch:400, Cost:0.46956, Accuracy:0.85938
Test:0, Cost:0.35950, Accuracy:0.93750
Pass:1, Batch:0, Cost:0.31775, Accuracy:0.93750
Pass:1, Batch:100, Cost:0.27652, Accuracy:0.92188
Pass:1, Batch:200, Cost:0.27065, Accuracy:0.92969
Pass:1, Batch:300, Cost:0.28560, Accuracy:0.89844
Pass:1, Batch:400, Cost:0.41429, Accuracy:0.86719
Test:1, Cost:0.31951, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.26233, Accuracy:0.94531
Pass:2, Batch:100, Cost:0.24122, Accuracy:0.92969
Pass:2, Batch:200, Cost:0.25647, Accuracy:0.92188
Pass:2, Batch:300, Cost:0.27161, Accuracy:0.91406
Pass:2, Batch:400, Cost:0.38802, Accuracy:0.85938
Test:2, Cost:0.30584, Accuracy:0.93750
Pass:3, Batch:0, Cost:0.23583, Accuracy:0.93750
Pass:3, Batch:100, Cost:0.22913, Accuracy:0.92969
Pass:3, Batch:200, Cost:0.24992, Accuracy:0.91406
Pass:3, Batch:300, Cost:0.26481, Accuracy:0.91406
Pass:3, Batch:400, Cost:0.36972, Accuracy:0.86719
Test:3, Cost:0.29924, Accuracy:0.93750
Pass:4, Batch:0, Cost:0.22000, Accuracy:0.93750
Pass:4, Batch:100, Cost:0.22292, Accuracy:0.93750
Pass:4, Batch:200, Cost:0.24542, Accuracy:0.91406
Pass:4, Batch:300, Cost:0.26016, Accuracy:0.91406
Pass:4, Batch:400, Cost:0.35566, Accuracy:0.85938
Test:4, Cost:0.29542, Accuracy:0.93750

Surprisingly, even a single-layer network reaches about 93% accuracy; this was quite unexpected.
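
This is less surprising once you notice that a single fully connected layer with a softmax output is exactly multinomial logistic regression on the raw pixels, which has long been a reasonable MNIST baseline. A quick sketch of the model's size, assuming the standard 28x28 input:

```python
# The single-layer model is multinomial logistic regression:
# one weight per (pixel, class) pair plus one bias per class.
def fc_params(n_in, n_out):
    return n_in * n_out + n_out

print(fc_params(28 * 28, 10))  # 7850
```

So the whole model has fewer than eight thousand parameters, yet it still separates the ten digit classes reasonably well.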

2. Two-layer network

The network structure is input layer -> hidden layer -> output layer.

The code is as follows:

# Define the multilayer perceptron
def multilayer_perceptron(input):
    # First fully connected layer, ReLU activation
    hidden1 = fluid.layers.fc(input=input, size=100, act='relu')
    # Second fully connected layer, ReLU activation (disabled here)
    #hidden2 = fluid.layers.fc(input=hidden1, size=100, act='relu')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=hidden1, size=10, act='softmax')
    return fc

Training and test accuracy:

Pass:0, Batch:0, Cost:3.01273, Accuracy:0.13281
Pass:0, Batch:100, Cost:0.47009, Accuracy:0.85156
Pass:0, Batch:200, Cost:0.25585, Accuracy:0.92969
Pass:0, Batch:300, Cost:0.28107, Accuracy:0.92188
Pass:0, Batch:400, Cost:0.42777, Accuracy:0.86719
Test:0, Cost:0.26124, Accuracy:0.93750
Pass:1, Batch:0, Cost:0.16402, Accuracy:0.96094
Pass:1, Batch:100, Cost:0.19026, Accuracy:0.93750
Pass:1, Batch:200, Cost:0.18443, Accuracy:0.94531
Pass:1, Batch:300, Cost:0.20063, Accuracy:0.96094
Pass:1, Batch:400, Cost:0.29557, Accuracy:0.90625
Test:1, Cost:0.19170, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.13216, Accuracy:0.97656
Pass:2, Batch:100, Cost:0.12720, Accuracy:0.96875
Pass:2, Batch:200, Cost:0.16064, Accuracy:0.96094
Pass:2, Batch:300, Cost:0.14157, Accuracy:0.97656
Pass:2, Batch:400, Cost:0.22292, Accuracy:0.92969
Test:2, Cost:0.16187, Accuracy:1.00000
Pass:3, Batch:0, Cost:0.10297, Accuracy:0.96875
Pass:3, Batch:100, Cost:0.10456, Accuracy:0.95312
Pass:3, Batch:200, Cost:0.15482, Accuracy:0.95312
Pass:3, Batch:300, Cost:0.10015, Accuracy:0.99219
Pass:3, Batch:400, Cost:0.18723, Accuracy:0.92969
Test:3, Cost:0.14261, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.08154, Accuracy:0.98438
Pass:4, Batch:100, Cost:0.09554, Accuracy:0.94531
Pass:4, Batch:200, Cost:0.14740, Accuracy:0.93750
Pass:4, Batch:300, Cost:0.08526, Accuracy:0.99219
Pass:4, Batch:400, Cost:0.15813, Accuracy:0.95312
Test:4, Cost:0.13024, Accuracy:1.00000

After three passes of training, the reported test accuracy reaches 100%, and the training accuracy also reaches about 99%.

3. Three-layer network

The network structure is input layer -> hidden layer -> hidden layer -> output layer.

The code is as follows:

# Define the multilayer perceptron
def multilayer_perceptron(input):
    # First fully connected layer, ReLU activation
    hidden1 = fluid.layers.fc(input=input, size=100, act='relu')
    # Second fully connected layer, ReLU activation
    hidden2 = fluid.layers.fc(input=hidden1, size=100, act='relu')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=hidden2, size=10, act='softmax')
    return fc

Accuracy on the training and test sets:

Pass:0, Batch:0, Cost:2.39492, Accuracy:0.11719
Pass:0, Batch:100, Cost:0.38125, Accuracy:0.85156
Pass:0, Batch:200, Cost:0.23671, Accuracy:0.93750
Pass:0, Batch:300, Cost:0.30749, Accuracy:0.91406
Pass:0, Batch:400, Cost:0.40188, Accuracy:0.87500
Test:0, Cost:0.21345, Accuracy:1.00000
Pass:1, Batch:0, Cost:0.12216, Accuracy:0.98438
Pass:1, Batch:100, Cost:0.16529, Accuracy:0.94531
Pass:1, Batch:200, Cost:0.17492, Accuracy:0.95312
Pass:1, Batch:300, Cost:0.12466, Accuracy:0.96875
Pass:1, Batch:400, Cost:0.31326, Accuracy:0.90625
Test:1, Cost:0.15984, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.08932, Accuracy:0.98438
Pass:2, Batch:100, Cost:0.10728, Accuracy:0.96094
Pass:2, Batch:200, Cost:0.13919, Accuracy:0.97656
Pass:2, Batch:300, Cost:0.07744, Accuracy:0.98438
Pass:2, Batch:400, Cost:0.24006, Accuracy:0.94531
Test:2, Cost:0.13407, Accuracy:0.93750
Pass:3, Batch:0, Cost:0.06845, Accuracy:0.98438
Pass:3, Batch:100, Cost:0.09714, Accuracy:0.95312
Pass:3, Batch:200, Cost:0.11451, Accuracy:0.96875
Pass:3, Batch:300, Cost:0.06234, Accuracy:0.98438
Pass:3, Batch:400, Cost:0.16804, Accuracy:0.96094
Test:3, Cost:0.11795, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.05074, Accuracy:0.99219
Pass:4, Batch:100, Cost:0.09888, Accuracy:0.96094
Pass:4, Batch:200, Cost:0.09145, Accuracy:0.96875
Pass:4, Batch:300, Cost:0.04734, Accuracy:0.98438
Pass:4, Batch:400, Cost:0.12088, Accuracy:0.96875
Test:4, Cost:0.10956, Accuracy:1.00000

Compared with the two-layer network, the three-layer network reaches high accuracy faster: the reported test accuracy already hits 100% after the first pass. However, the test accuracy then drops before returning to 100%, and I do not understand why.
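
One plausible explanation, offered here only as a guess: the Test lines appear to be computed on batches of 128 samples, so accuracy can only move in steps of 1/128 (about 0.78%), and a swing between 0.93750 and 1.00000 corresponds to just a few misclassified digits, well within run-to-run noise. A quick consistency check against values taken from the logs above:

```python
# The logged accuracies are all close to exact multiples of 1/128,
# consistent with evaluation on 128-sample batches.
batch_size = 128
logged = [0.93750, 0.96094, 0.98438, 1.00000]  # values from the logs above

for acc in logged:
    correct = acc * batch_size
    # each logged value rounds to a whole number of correct samples
    assert abs(correct - round(correct)) < 1e-3
    print(round(correct), "of", batch_size, "correct")
```

Averaging over the whole 10,000-image test set would likely give a smoother, lower number than these single-batch readings.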

4. Four-layer network

The four-layer network structure is input layer -> hidden layer -> hidden layer -> hidden layer -> output layer.

The code is as follows:

# Define the multilayer perceptron
def multilayer_perceptron(input):
    # First fully connected layer, ReLU activation
    hidden1 = fluid.layers.fc(input=input, size=100, act='relu')
    # Second fully connected layer, ReLU activation
    hidden2 = fluid.layers.fc(input=hidden1, size=100, act='relu')
    # Third fully connected layer, ReLU activation
    hidden3 = fluid.layers.fc(input=hidden2, size=100, act='relu')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=hidden3, size=10, act='softmax')
    return fc

Accuracy on the training and test data:

Pass:0, Batch:0, Cost:2.47239, Accuracy:0.11719
Pass:0, Batch:100, Cost:0.45294, Accuracy:0.85156
Pass:0, Batch:200, Cost:0.26287, Accuracy:0.92188
Pass:0, Batch:300, Cost:0.29945, Accuracy:0.89062
Pass:0, Batch:400, Cost:0.45551, Accuracy:0.85938
Test:0, Cost:0.23784, Accuracy:1.00000
Pass:1, Batch:0, Cost:0.16192, Accuracy:0.96094
Pass:1, Batch:100, Cost:0.19751, Accuracy:0.92969
Pass:1, Batch:200, Cost:0.16019, Accuracy:0.95312
Pass:1, Batch:300, Cost:0.16501, Accuracy:0.95312
Pass:1, Batch:400, Cost:0.29107, Accuracy:0.91406
Test:1, Cost:0.15401, Accuracy:1.00000
Pass:2, Batch:0, Cost:0.10620, Accuracy:0.97656
Pass:2, Batch:100, Cost:0.10332, Accuracy:0.95312
Pass:2, Batch:200, Cost:0.14469, Accuracy:0.96094
Pass:2, Batch:300, Cost:0.11048, Accuracy:0.96875
Pass:2, Batch:400, Cost:0.22040, Accuracy:0.94531
Test:2, Cost:0.12461, Accuracy:1.00000
Pass:3, Batch:0, Cost:0.07755, Accuracy:0.97656
Pass:3, Batch:100, Cost:0.07912, Accuracy:0.97656
Pass:3, Batch:200, Cost:0.12591, Accuracy:0.96094
Pass:3, Batch:300, Cost:0.09253, Accuracy:0.97656
Pass:3, Batch:400, Cost:0.17517, Accuracy:0.95312
Test:3, Cost:0.11185, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.06638, Accuracy:0.98438
Pass:4, Batch:100, Cost:0.07160, Accuracy:0.98438
Pass:4, Batch:200, Cost:0.11147, Accuracy:0.96094
Pass:4, Batch:300, Cost:0.08665, Accuracy:0.97656
Pass:4, Batch:400, Cost:0.15449, Accuracy:0.95312
Test:4, Cost:0.10896, Accuracy:1.00000

The four-layer network converges quickly and stably, and its accuracy is somewhat higher than the three-layer network's. But is deeper always better? I did not test further.

II. Convolutional Neural Networks

A convolutional neural network consists of one or more convolutional layers, pooling layers, and fully connected layers. Here, we test and compare networks with one to three conv+pool stages.

First, modify the code to set the classifier type to a CNN (Convolutional Neural Network):

# Get the classifier
#model = multilayer_perceptron(image)
model = convolutional_neural_network(image)

1. One convolution + pooling stage

The network structure is input layer -> convolutional layer -> pooling layer -> fully connected layer.

The code is as follows:

# Convolutional neural network
def convolutional_neural_network(input):
    # First convolutional layer: 32 filters of size 3*3, stride 1
    conv1 = fluid.layers.conv2d(input=input, num_filters=32, filter_size=3, stride=1)
    # First pooling layer: 2*2 max pooling, stride 1
    pool1 = fluid.layers.pool2d(input=conv1, pool_size=2, pool_stride=1, pool_type='max')
    # Second convolutional layer: 64 filters of size 3*3 (disabled here)
    #conv2 = fluid.layers.conv2d(input=pool1, num_filters=64, filter_size=3, stride=1)
    # Second pooling layer: 2*2 max pooling, stride 1 (disabled here)
    #pool2 = fluid.layers.pool2d(input=conv2, pool_size=2, pool_stride=1, pool_type='max')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=pool1, size=10, act='softmax')
    return fc
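
To see what the final fully connected layer actually receives, we can trace the spatial sizes through the network. Assuming conv2d and pool2d use zero padding (fluid's default), each layer's output side length is (input - kernel) / stride + 1:

```python
# Spatial size of a conv/pool output with no padding:
# out = (in - kernel) // stride + 1
def out_size(in_size, kernel, stride=1):
    return (in_size - kernel) // stride + 1

side = 28                     # MNIST images are 28x28
side = out_size(side, 3)      # conv1: 3x3 kernel, stride 1 -> 26
side = out_size(side, 2)      # pool1: 2x2 window, stride 1 -> 25
flattened = 32 * side * side  # 32 feature maps of 25x25
print(side, flattened)        # 25 20000
```

So the softmax layer classifies a 20,000-dimensional feature vector; note that with stride 1 the pooling layer barely shrinks the feature maps.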

Training and test accuracy:

Pass:0, Batch:0, Cost:3.51505, Accuracy:0.10156
Pass:0, Batch:100, Cost:0.34445, Accuracy:0.89844
Pass:0, Batch:200, Cost:0.21319, Accuracy:0.95312
Pass:0, Batch:300, Cost:0.23849, Accuracy:0.94531
Pass:0, Batch:400, Cost:0.47284, Accuracy:0.87500
Test:0, Cost:0.19661, Accuracy:1.00000
Pass:1, Batch:0, Cost:0.18348, Accuracy:0.97656
Pass:1, Batch:100, Cost:0.08346, Accuracy:0.96875
Pass:1, Batch:200, Cost:0.09432, Accuracy:0.96875
Pass:1, Batch:300, Cost:0.14066, Accuracy:0.96875
Pass:1, Batch:400, Cost:0.12486, Accuracy:0.96094
Test:1, Cost:0.12581, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.09072, Accuracy:0.97656
Pass:2, Batch:100, Cost:0.09942, Accuracy:0.96875
Pass:2, Batch:200, Cost:0.06352, Accuracy:0.98438
Pass:2, Batch:300, Cost:0.11163, Accuracy:0.97656
Pass:2, Batch:400, Cost:0.10208, Accuracy:0.96875
Test:2, Cost:0.13513, Accuracy:0.93750
Pass:3, Batch:0, Cost:0.09934, Accuracy:0.97656
Pass:3, Batch:100, Cost:0.04291, Accuracy:0.97656
Pass:3, Batch:200, Cost:0.04950, Accuracy:0.98438
Pass:3, Batch:300, Cost:0.09500, Accuracy:0.96875
Pass:3, Batch:400, Cost:0.06909, Accuracy:0.98438
Test:3, Cost:0.12898, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.08588, Accuracy:0.97656
Pass:4, Batch:100, Cost:0.03593, Accuracy:0.98438
Pass:4, Batch:200, Cost:0.04931, Accuracy:0.96875
Pass:4, Batch:300, Cost:0.08851, Accuracy:0.97656
Pass:4, Batch:400, Cost:0.09598, Accuracy:0.97656
Test:4, Cost:0.11659, Accuracy:1.00000

As we can see, with just one convolutional layer the training accuracy already exceeds that of the four-layer fully connected network; the performance is excellent.

2. Two convolutional layers

The network structure is input layer -> convolutional layer -> pooling layer -> convolutional layer -> pooling layer -> fully connected layer.

The code is as follows:

# Convolutional neural network
def convolutional_neural_network(input):
    # First convolutional layer: 32 filters of size 3*3, stride 1
    conv1 = fluid.layers.conv2d(input=input, num_filters=32, filter_size=3, stride=1)
    # First pooling layer: 2*2 max pooling, stride 1
    pool1 = fluid.layers.pool2d(input=conv1, pool_size=2, pool_stride=1, pool_type='max')
    # Second convolutional layer: 64 filters of size 3*3, stride 1
    conv2 = fluid.layers.conv2d(input=pool1, num_filters=64, filter_size=3, stride=1)
    # Second pooling layer: 2*2 max pooling, stride 1
    pool2 = fluid.layers.pool2d(input=conv2, pool_size=2, pool_stride=1, pool_type='max')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=pool2, size=10, act='softmax')
    return fc

Accuracy on the training and test sets:

Pass:0, Batch:0, Cost:4.55156, Accuracy:0.06250
Pass:0, Batch:100, Cost:0.21274, Accuracy:0.93750
Pass:0, Batch:200, Cost:0.13221, Accuracy:0.95312
Pass:0, Batch:300, Cost:0.14602, Accuracy:0.97656
Pass:0, Batch:400, Cost:0.21743, Accuracy:0.94531
Test:0, Cost:0.10561, Accuracy:0.93750
Pass:1, Batch:0, Cost:0.13267, Accuracy:0.96875
Pass:1, Batch:100, Cost:0.07436, Accuracy:0.96875
Pass:1, Batch:200, Cost:0.05657, Accuracy:0.98438
Pass:1, Batch:300, Cost:0.17919, Accuracy:0.96875
Pass:1, Batch:400, Cost:0.16327, Accuracy:0.97656
Test:1, Cost:0.09448, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.09776, Accuracy:0.98438
Pass:2, Batch:100, Cost:0.03945, Accuracy:0.98438
Pass:2, Batch:200, Cost:0.05310, Accuracy:0.98438
Pass:2, Batch:300, Cost:0.14646, Accuracy:0.97656
Pass:2, Batch:400, Cost:0.06727, Accuracy:0.96875
Test:2, Cost:0.09720, Accuracy:1.00000
Pass:3, Batch:0, Cost:0.06443, Accuracy:0.98438
Pass:3, Batch:100, Cost:0.09163, Accuracy:0.96875
Pass:3, Batch:200, Cost:0.01216, Accuracy:1.00000
Pass:3, Batch:300, Cost:0.10314, Accuracy:0.98438
Pass:3, Batch:400, Cost:0.08002, Accuracy:0.97656
Test:3, Cost:0.11338, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.02246, Accuracy:0.98438
Pass:4, Batch:100, Cost:0.01949, Accuracy:0.99219
Pass:4, Batch:200, Cost:0.06579, Accuracy:0.97656
Pass:4, Batch:300, Cost:0.15396, Accuracy:0.98438
Pass:4, Batch:400, Cost:0.03079, Accuracy:0.99219
Test:4, Cost:0.12499, Accuracy:1.00000

With two convolutional layers, the training accuracy stays very stably close to 100%, which is impressive. However, the test accuracy only reaches 0.9375 in the first two passes, and I am not sure why.

3. Three convolutional layers

The network structure is input layer -> convolutional layer -> pooling layer -> convolutional layer -> pooling layer -> convolutional layer -> pooling layer -> fully connected layer.

The code is as follows. The third convolutional layer is my own addition, and I am not sure whether it should use 128 filters. I chose 128 because the first layer uses 32 filters and the second uses 64, so I guessed the third should double again to 128, but I have no mathematical justification for this.

# Convolutional neural network
def convolutional_neural_network(input):
    # First convolutional layer: 32 filters of size 3*3, stride 1
    conv1 = fluid.layers.conv2d(input=input, num_filters=32, filter_size=3, stride=1)
    # First pooling layer: 2*2 max pooling, stride 1
    pool1 = fluid.layers.pool2d(input=conv1, pool_size=2, pool_stride=1, pool_type='max')
    # Second convolutional layer: 64 filters of size 3*3, stride 1
    conv2 = fluid.layers.conv2d(input=pool1, num_filters=64, filter_size=3, stride=1)
    # Second pooling layer: 2*2 max pooling, stride 1
    pool2 = fluid.layers.pool2d(input=conv2, pool_size=2, pool_stride=1, pool_type='max')
    # Third convolutional layer: 128 filters of size 3*3, stride 1
    conv3 = fluid.layers.conv2d(input=pool2, num_filters=128, filter_size=3, stride=1)
    # Third pooling layer: 2*2 max pooling, stride 1
    pool3 = fluid.layers.pool2d(input=conv3, pool_size=2, pool_stride=1, pool_type='max')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=pool3, size=10, act='softmax')
    return fc
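
Doubling the filter count per stage (32, 64, 128) is a common convention rather than a mathematical rule: as pooling shrinks the spatial resolution, the channel count is typically grown to preserve capacity. A rough parameter count for the three convolutional layers, assuming 3x3 kernels and a single-channel input:

```python
# Parameters in a conv layer: filters * (in_channels * k * k + 1 bias)
def conv_params(in_ch, filters, k=3):
    return filters * (in_ch * k * k + 1)

c1 = conv_params(1, 32)     # 320
c2 = conv_params(32, 64)    # 18496
c3 = conv_params(64, 128)   # 73856
print(c1, c2, c3)
```

The third layer alone adds about four times the parameters of the second, so the extra capacity could plausibly start to overfit or slow convergence on a dataset as small as MNIST.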

Accuracy on the training and test data:

Pass:0, Batch:0, Cost:6.76483, Accuracy:0.16406
Pass:0, Batch:100, Cost:0.13075, Accuracy:0.95312
Pass:0, Batch:200, Cost:0.18448, Accuracy:0.96875
Pass:0, Batch:300, Cost:0.21740, Accuracy:0.97656
Pass:0, Batch:400, Cost:0.40639, Accuracy:0.92969
Test:0, Cost:0.21052, Accuracy:0.93750
Pass:1, Batch:0, Cost:0.22828, Accuracy:0.96094
Pass:1, Batch:100, Cost:0.06976, Accuracy:0.97656
Pass:1, Batch:200, Cost:0.15817, Accuracy:0.96875
Pass:1, Batch:300, Cost:0.16659, Accuracy:0.98438
Pass:1, Batch:400, Cost:0.16523, Accuracy:0.96875
Test:1, Cost:0.14129, Accuracy:1.00000
Pass:2, Batch:0, Cost:0.15643, Accuracy:0.96875
Pass:2, Batch:100, Cost:0.04042, Accuracy:0.98438
Pass:2, Batch:200, Cost:0.09001, Accuracy:0.98438
Pass:2, Batch:300, Cost:0.19979, Accuracy:0.96094
Pass:2, Batch:400, Cost:0.26533, Accuracy:0.96094
Test:2, Cost:0.34692, Accuracy:0.93750
Pass:3, Batch:0, Cost:0.32040, Accuracy:0.97656
Pass:3, Batch:100, Cost:0.23548, Accuracy:0.96094
Pass:3, Batch:200, Cost:0.14403, Accuracy:0.96875
Pass:3, Batch:300, Cost:0.10629, Accuracy:0.97656
Pass:3, Batch:400, Cost:0.36311, Accuracy:0.94531
Test:3, Cost:0.34852, Accuracy:0.93750
Pass:4, Batch:0, Cost:0.12174, Accuracy:0.99219
Pass:4, Batch:100, Cost:0.15106, Accuracy:0.96875
Pass:4, Batch:200, Cost:0.17723, Accuracy:0.96875
Pass:4, Batch:300, Cost:0.18383, Accuracy:0.96875
Pass:4, Batch:400, Cost:0.17384, Accuracy:0.96094
Test:4, Cost:0.16233, Accuracy:1.00000

Strangely, the three-conv network performs no better than the two-conv one. Could there be a problem with my configuration?
