TensorFlow Getting Started (谭秉峰) (6): MNIST Handwritten Digit Recognition, Saving the Model with saver=tf.train.Saver() (GPU Training)

# Author: Zubin
# -*- coding: utf-8 -*-
# Sometimes we need to save the model, so it can be reused for later
# testing or recovered after a machine crash.
from MNIST_defiction import *
import tensorflow as tf

# Size of each mini-batch
batch_size = 100
# Total number of batches per epoch
n_batch = mnist.train.num_examples // batch_size
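# e.g. MNIST has 55,000 training images, so n_batch = 55000 // 100 = 550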

# MNIST dataset constants
INPUT_NODE = 784
# Number of neurons in the first (hidden) layer
L1_NODE = 500
# Number of output classes
OUTPUT_NODE = 10



'''
Define two placeholders. Each 28x28 grayscale image (28*28 = 784 pixels)
is flattened into a 784-dimensional vector, and there are 10 output classes.
A single hidden layer with L1_NODE (500) neurons is defined below.
'''

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
# Dropout keep probability: the fraction of neuron outputs kept during training
keep_prob = tf.placeholder(tf.float32)


# Build a simple neural network. Weights are initialized from a truncated
# normal distribution (stddev=0.1) rather than zeros, and biases to 0.1
# First (hidden) layer
W1 = tf.Variable(tf.truncated_normal([INPUT_NODE, L1_NODE], stddev=0.1))
b1 = tf.Variable(tf.constant(0.1, shape=[L1_NODE]))
L1_Output = tf.nn.relu(tf.matmul(x, W1) + b1)
L1_drop = tf.nn.dropout(L1_Output, keep_prob)

# Second layer (output layer). The logits are left linear here:
# softmax_cross_entropy_with_logits applies its own softmax, so no
# activation (e.g. relu) should be applied to the output layer
W2 = tf.Variable(tf.truncated_normal([L1_NODE, OUTPUT_NODE], stddev=0.1))
b2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))
prediction = tf.matmul(L1_drop, W2) + b2

# Quadratic (mean squared error) cost function:
# loss = tf.reduce_mean(tf.square(y - prediction))
# Cross-entropy cost function
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=prediction))
# Use the Adam adaptive optimizer
train_step = tf.train.AdamOptimizer(0.01).minimize(loss)
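# Note: 0.01 is 10x Adam's default learning rate (0.001); a smaller
# value is often more stable with this optimizer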

# Initialize all variables
init = tf.global_variables_initializer()

# Store the per-example comparison results in a list of booleans
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
# Accuracy: cast the booleans to floats and take the mean
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
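# e.g. if a batch's labels argmax to [2, 0] and predictions argmax to
# [2, 1], equal gives [True, False], cast gives [1.0, 0.0], mean = 0.5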

# Create the Saver node
saver = tf.train.Saver()
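# Note: by default tf.train.Saver keeps only the 5 most recent checkpoint
# files (max_to_keep=5); pass max_to_keep=None to retain all of them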


with tf.Session() as sess:
    sess.run(init)
    # Loop over epoch indices 0, 10, ..., 1000: 101 full passes over the
    # training set, checkpointing and evaluating after each pass
    for epoch in range(0, 1001, 10):
        # Periodically save the model so training can resume after a crash
        save_path = saver.save(sess, "./ckpt/my_model.ckpt")
        # One full pass over the training set, one mini-batch at a time
        for batch in range(n_batch):
            # batch_xs holds the images, batch_ys the one-hot labels
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 0.7})

        # After each full pass, measure accuracy with dropout disabled (keep_prob=1.0)
        train_acc = sess.run(accuracy, feed_dict={x: mnist.train.images, y: mnist.train.labels, keep_prob: 1.0})
        test_acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels, keep_prob: 1.0})
        print("After " + str(epoch) + " training steps,training accuracy is: " + str(train_acc) + ", testing accuracy is: " + str(test_acc))
    # Save the final trained model
    save_path = saver.save(sess, "./ckpt/my_model_final.ckpt")
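
With the checkpoints written, the model can be loaded back without retraining: rebuild the same graph and call saver.restore() instead of running the variable initializer. Below is a minimal restore sketch (an illustration, not part of the original script), assuming the graph-construction code above has already been executed so the variables and the accuracy op exist:

# Restore sketch: maps the saved values back onto the existing variables
with tf.Session() as sess:
    # restore() assigns the checkpointed weights directly, so there is
    # no need to run the initializer first
    saver.restore(sess, "./ckpt/my_model_final.ckpt")
    # alternatively, tf.train.latest_checkpoint("./ckpt") returns the
    # path of the most recent periodic checkpoint
    test_acc = sess.run(accuracy, feed_dict={x: mnist.test.images,
                                             y: mnist.test.labels,
                                             keep_prob: 1.0})
    print("Restored model testing accuracy: " + str(test_acc))

Each saver.save() call writes three files (my_model.ckpt.data-00000-of-00001, my_model.ckpt.index and my_model.ckpt.meta) plus a small "checkpoint" index file into the ./ckpt/ directory.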

Output:

G:\professional_software\Anaconda3\envs\tensorflow_cpu\python.exe D:/Pycharm/MNIST_DEV/MNIST_data/train_save_model.py
WARNING:tensorflow:From D:\Pycharm\MNIST_DEV\MNIST_defiction.py:6: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From G:\professional_software\Anaconda3\envs\tensorflow_cpu\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
Extracting D:/Pycharm/MNIST_DEV/MNIST_data/train-images-idx3-ubyte.gz
WARNING:tensorflow:From G:\professional_software\Anaconda3\envs\tensorflow_cpu\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
WARNING:tensorflow:From G:\professional_software\Anaconda3\envs\tensorflow_cpu\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting D:/Pycharm/MNIST_DEV/MNIST_data/train-labels-idx1-ubyte.gz
WARNING:tensorflow:From G:\professional_software\Anaconda3\envs\tensorflow_cpu\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
Extracting D:/Pycharm/MNIST_DEV/MNIST_data/t10k-images-idx3-ubyte.gz
Extracting D:/Pycharm/MNIST_DEV/MNIST_data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From G:\professional_software\Anaconda3\envs\tensorflow_cpu\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
Training data size:  55000
Validating data size:  5000
Testing data size:  10000
Example training data:  [0.         0.         0.         0.         0.         0.
 ... (784 normalized pixel values in [0, 1]; the full dump is abridged here) ...
 0.         0.         0.         0.        ]
Example training data label:  [0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
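(The one-hot label above has its 1 at index 7, so this example is the digit 7.)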
X shape: (100, 784)
Y shape: (100, 10)
WARNING:tensorflow:From D:/Pycharm/MNIST_DEV/MNIST_data/train_save_model.py:47: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.

See @{tf.nn.softmax_cross_entropy_with_logits_v2}.

2019-04-21 21:29:35.100531: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2019-04-21 21:29:35.454283: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties: 
name: GeForce GTX 1060 with Max-Q Design major: 6 minor: 1 memoryClockRate(GHz): 1.3415
pciBusID: 0000:01:00.0
totalMemory: 6.00GiB freeMemory: 4.96GiB
2019-04-21 21:29:35.454565: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0
2019-04-21 21:29:35.872464: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-04-21 21:29:35.872639: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971]      0 
2019-04-21 21:29:35.872734: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0:   N 
2019-04-21 21:29:35.872948: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 with Max-Q Design, pci bus id: 0000:01:00.0, compute capability: 6.1)
After 0 training steps,training accuracy is: 0.84541816, testing accuracy is: 0.844
After 10 training steps,training accuracy is: 0.96056366, testing accuracy is: 0.9556
After 20 training steps,training accuracy is: 0.9719091, testing accuracy is: 0.9633
After 30 training steps,training accuracy is: 0.9748, testing accuracy is: 0.9658
After 40 training steps,training accuracy is: 0.976, testing accuracy is: 0.9651
After 50 training steps,training accuracy is: 0.97385454, testing accuracy is: 0.9626
After 60 training steps,training accuracy is: 0.97778183, testing accuracy is: 0.9648
After 70 training steps,training accuracy is: 0.9775091, testing accuracy is: 0.9659
After 80 training steps,training accuracy is: 0.9817273, testing accuracy is: 0.9681
After 90 training steps,training accuracy is: 0.9797818, testing accuracy is: 0.9697
After 100 training steps,training accuracy is: 0.9838182, testing accuracy is: 0.9722
After 110 training steps,training accuracy is: 0.98283637, testing accuracy is: 0.9698
After 120 training steps,training accuracy is: 0.9836, testing accuracy is: 0.9699
After 130 training steps,training accuracy is: 0.9832, testing accuracy is: 0.9694
After 140 training steps,training accuracy is: 0.9829091, testing accuracy is: 0.9695
After 150 training steps,training accuracy is: 0.9851636, testing accuracy is: 0.9711
After 160 training steps,training accuracy is: 0.9831455, testing accuracy is: 0.971
After 170 training steps,training accuracy is: 0.98432726, testing accuracy is: 0.9706
After 180 training steps,training accuracy is: 0.9843636, testing accuracy is: 0.9702
After 190 training steps,training accuracy is: 0.98556364, testing accuracy is: 0.9708
After 200 training steps,training accuracy is: 0.9856, testing accuracy is: 0.9706
After 210 training steps,training accuracy is: 0.98385453, testing accuracy is: 0.9696
After 220 training steps,training accuracy is: 0.98507273, testing accuracy is: 0.9717
After 230 training steps,training accuracy is: 0.9842182, testing accuracy is: 0.9696
After 240 training steps,training accuracy is: 0.9840364, testing accuracy is: 0.9707
After 250 training steps,training accuracy is: 0.9858182, testing accuracy is: 0.9706
After 260 training steps,training accuracy is: 0.9868, testing accuracy is: 0.9722
After 270 training steps,training accuracy is: 0.9854182, testing accuracy is: 0.9702
After 280 training steps,training accuracy is: 0.98687273, testing accuracy is: 0.9721
After 290 training steps,training accuracy is: 0.98538184, testing accuracy is: 0.9691
After 300 training steps,training accuracy is: 0.9877818, testing accuracy is: 0.9748
After 310 training steps,training accuracy is: 0.986, testing accuracy is: 0.972
After 320 training steps,training accuracy is: 0.9873273, testing accuracy is: 0.9725
After 330 training steps,training accuracy is: 0.98685455, testing accuracy is: 0.9703
After 340 training steps,training accuracy is: 0.98767275, testing accuracy is: 0.9732
After 350 training steps,training accuracy is: 0.98661816, testing accuracy is: 0.9719
After 360 training steps,training accuracy is: 0.98734546, testing accuracy is: 0.9734
After 370 training steps,training accuracy is: 0.98834544, testing accuracy is: 0.9736
After 380 training steps,training accuracy is: 0.9877818, testing accuracy is: 0.97
After 390 training steps,training accuracy is: 0.9870727, testing accuracy is: 0.9722
After 400 training steps,training accuracy is: 0.9892909, testing accuracy is: 0.9722
After 410 training steps,training accuracy is: 0.98865455, testing accuracy is: 0.9726
After 420 training steps,training accuracy is: 0.9887818, testing accuracy is: 0.9735
After 430 training steps,training accuracy is: 0.98776364, testing accuracy is: 0.973
After 440 training steps,training accuracy is: 0.9879818, testing accuracy is: 0.9711
After 450 training steps,training accuracy is: 0.9895091, testing accuracy is: 0.9734
After 460 training steps,training accuracy is: 0.9891273, testing accuracy is: 0.9719
After 470 training steps,training accuracy is: 0.9893091, testing accuracy is: 0.9741
After 480 training steps,training accuracy is: 0.9891091, testing accuracy is: 0.9732
After 490 training steps,training accuracy is: 0.9890909, testing accuracy is: 0.9722
After 500 training steps,training accuracy is: 0.98996365, testing accuracy is: 0.9741
After 510 training steps,training accuracy is: 0.9902727, testing accuracy is: 0.9746
After 520 training steps,training accuracy is: 0.9878727, testing accuracy is: 0.971
After 530 training steps,training accuracy is: 0.99063635, testing accuracy is: 0.9727
After 540 training steps,training accuracy is: 0.9878727, testing accuracy is: 0.9708
After 550 training steps,training accuracy is: 0.988, testing accuracy is: 0.9709
After 560 training steps,training accuracy is: 0.9906727, testing accuracy is: 0.9729
After 570 training steps,training accuracy is: 0.98725456, testing accuracy is: 0.9674
After 580 training steps,training accuracy is: 0.9896909, testing accuracy is: 0.9719
After 590 training steps,training accuracy is: 0.9893091, testing accuracy is: 0.972
After 600 training steps,training accuracy is: 0.98832726, testing accuracy is: 0.9721
After 610 training steps,training accuracy is: 0.99036366, testing accuracy is: 0.9711
After 620 training steps,training accuracy is: 0.99014544, testing accuracy is: 0.9726
After 630 training steps,training accuracy is: 0.98974544, testing accuracy is: 0.9714
After 640 training steps,training accuracy is: 0.9902727, testing accuracy is: 0.9711
After 650 training steps,training accuracy is: 0.99021816, testing accuracy is: 0.9731
After 660 training steps,training accuracy is: 0.9900727, testing accuracy is: 0.9727
After 670 training steps,training accuracy is: 0.9894182, testing accuracy is: 0.9708
After 680 training steps,training accuracy is: 0.9893091, testing accuracy is: 0.9689
After 690 training steps,training accuracy is: 0.9906, testing accuracy is: 0.9711
After 700 training steps,training accuracy is: 0.9898546, testing accuracy is: 0.972
After 710 training steps,training accuracy is: 0.98956364, testing accuracy is: 0.9696
After 720 training steps,training accuracy is: 0.9885273, testing accuracy is: 0.9689
After 730 training steps,training accuracy is: 0.9900182, testing accuracy is: 0.9729
After 740 training steps,training accuracy is: 0.99038184, testing accuracy is: 0.9735
After 750 training steps,training accuracy is: 0.99227273, testing accuracy is: 0.9731
After 760 training steps,training accuracy is: 0.9912, testing accuracy is: 0.9725
After 770 training steps,training accuracy is: 0.9901091, testing accuracy is: 0.9714
After 780 training steps,training accuracy is: 0.9911091, testing accuracy is: 0.971
After 790 training steps,training accuracy is: 0.98834544, testing accuracy is: 0.9678
After 800 training steps,training accuracy is: 0.99085456, testing accuracy is: 0.97
After 810 training steps,training accuracy is: 0.99258184, testing accuracy is: 0.9741
After 820 training steps,training accuracy is: 0.9913091, testing accuracy is: 0.971
After 830 training steps,training accuracy is: 0.98856366, testing accuracy is: 0.9683
After 840 training steps,training accuracy is: 0.9907454, testing accuracy is: 0.971
After 850 training steps,training accuracy is: 0.99169093, testing accuracy is: 0.9736
After 860 training steps,training accuracy is: 0.9922, testing accuracy is: 0.9739
After 870 training steps,training accuracy is: 0.99094546, testing accuracy is: 0.9714
After 880 training steps,training accuracy is: 0.9911636, testing accuracy is: 0.9714
After 890 training steps,training accuracy is: 0.9903273, testing accuracy is: 0.9691
After 900 training steps,training accuracy is: 0.9911091, testing accuracy is: 0.9703
After 910 training steps,training accuracy is: 0.9893636, testing accuracy is: 0.9696
After 920 training steps,training accuracy is: 0.99076366, testing accuracy is: 0.9698
After 930 training steps,training accuracy is: 0.9892182, testing accuracy is: 0.9692
After 940 training steps,training accuracy is: 0.9909818, testing accuracy is: 0.9708
After 950 training steps,training accuracy is: 0.99085456, testing accuracy is: 0.9735
After 960 training steps,training accuracy is: 0.9907454, testing accuracy is: 0.9721
After 970 training steps,training accuracy is: 0.99045455, testing accuracy is: 0.9697
After 980 training steps,training accuracy is: 0.99192727, testing accuracy is: 0.9708
After 990 training steps,training accuracy is: 0.9896909, testing accuracy is: 0.9687
After 1000 training steps,training accuracy is: 0.9901818, testing accuracy is: 0.9693

Process finished with exit code 0

 

 
