CNN toolbox

My MATLAB is the 2013 release, which does not ship with a deep learning toolbox, so we need to download one. Here I use a toolbox written by someone else: https://github.com/rasmusbergpalm/DeepLearnToolbox. It contains detailed CNN code, and below I annotate some of it; if anything is off, corrections are very welcome.

Before diving in, we need a basic understanding of deep convolutional networks and of how each layer transforms its input, since the code below is written on that basis. I covered this in an earlier post, a brief introduction to deep convolutional networks (CNNs); as a beginner I also drew on material from other teachers. It is worth reading first, as it makes the code below easier to follow.

First, cnnmaintest.m

When running it you may see an "out of memory" error. This can happen if your machine does not have enough RAM, or while you are debugging the code yourself. I hit a similar error; stepping through the code line by line eventually produced the result.

clear all; close all; clc;
addpath('../data');
addpath('../util');
load mnist_uint8;
train_x = double(reshape(train_x',28,28,60000))/255;
test_x = double(reshape(test_x',28,28,10000))/255;
train_y = double(train_y');
test_y = double(test_y');
cnn.layers = {
    struct('type', 'i') %input layer
    struct('type', 'c', 'outputmaps', 6, 'kernelsize', 5) %convolution layer
    struct('type', 's', 'scale', 2) %sub sampling layer
    struct('type', 'c', 'outputmaps', 12, 'kernelsize', 5) %convolution layer
    struct('type', 's', 'scale', 2) %subsampling layer
};
% Pass the cnn configuration to cnnsetup, which builds the full CNN and returns it
cnn = cnnsetup(cnn, train_x, train_y);
opts.alpha = 1;  % learning rate
% Train on one batch of batchsize samples at a time, i.e. update the weights after
% every batchsize samples rather than after computing the error over the whole training set
opts.batchsize = 50;
opts.numepochs = 10;
% Feed in the training samples and train the CNN
cnn = cnntrain(cnn, train_x, train_y, opts);
% Then evaluate on the test samples
[er, bad] = cnntest(cnn, test_x, test_y);
%plot mean squared error
plot(cnn.rL);
%show test error
disp([num2str(er*100) '% error']);
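If you hit the out-of-memory error mentioned above, one workaround (a minimal sketch, reusing the variables from the script above) is to first verify the whole pipeline on a subset of the training data before running on all 60000 images:

train_x_small = train_x(:, :, 1:10000);  % first 10000 training images
train_y_small = train_y(:, 1:10000);
cnn = cnnsetup(cnn, train_x_small, train_y_small);     % 10000 is divisible by opts.batchsize = 50
cnn = cnntrain(cnn, train_x_small, train_y_small, opts);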

There are three layer types in this CNN: 'i' for the input layer, 'c' for a convolution layer, and 's' for a subsampling (pooling) layer.

In a 'c' layer, outputmaps is the number of feature maps produced by the convolution; for example, the first convolution layer above produces six feature maps.

kernelsize in a 'c' layer is simply the size of the patch (convolution kernel) used for the convolution.

scale in an 's' layer means pooling is performed over scale*scale regions.
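Putting these parameters together, the network above maps a 28x28 input down to 12 feature maps of size 4x4. A quick sanity check of the map sizes (not toolbox code, just the arithmetic):

mapsize = 28;                % MNIST input
mapsize = mapsize - 5 + 1;   % 'c', kernelsize 5 -> 24x24, 6 maps
mapsize = mapsize / 2;       % 's', scale 2      -> 12x12, 6 maps
mapsize = mapsize - 5 + 1;   % 'c', kernelsize 5 -> 8x8, 12 maps
mapsize = mapsize / 2;       % 's', scale 2      -> 4x4, 12 maps
fvnum   = mapsize^2 * 12;    % 4*4*12 = 192, the length of the flattened feature vector fed to the output layer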

Next come the cnnsetup and cnntrain functions called in the script above. Let's first look at

cnnsetup.m

This code builds the CNN. Its structure is described in detail in the earlier post introducing deep convolutional networks, which is worth consulting; once the structure is clear, this code is much easier to follow.

function net = cnnsetup(net, x, y)
    inputmaps = 1; % the input is the raw image, so there is one input map
    mapsize = size(squeeze(x(:, :, 1)));
    % The images are stored as a 3-D array, reshape(train_x',28,28,60000): the first
    % two dimensions are the image rows and columns, the third indexes the images.
    % squeeze(x(:, :, 1)) takes the first image and drops the singleton third
    % dimension, giving a 28x28 matrix; size then returns the rows and columns of a
    % training image.
    for l = 1 : numel(net.layers)
        % Build the CNN layer by layer from the net struct passed in; net.layers has
        % five elements, so the CNN has five layers.
        if strcmp(net.layers{l}.type, 's') % subsampling layer
            mapsize = mapsize / net.layers{l}.scale;
            % Divide by scale=2 to get the map size after pooling (24x24 -> 12x12 at
            % the first pooling layer). Note that mapsize always holds the size of the
            % previous layer's feature maps and is updated as the loop proceeds.
            assert(all(floor(mapsize)==mapsize), ['Layer ' num2str(l) ' size must be integer. Actual: ' num2str(mapsize)]);
            for j = 1 : inputmaps % inputmaps is the number of feature maps in the previous layer
                net.layers{l}.b{j} = 0; % initialize the biases to 0
            end
        end
        if strcmp(net.layers{l}.type, 'c')
            mapsize = mapsize - net.layers{l}.kernelsize + 1;
            % For a convolution layer, convolving with a kernelsize*kernelsize kernel
            % gives the size of the new maps.
            fan_out = net.layers{l}.outputmaps * net.layers{l}.kernelsize ^ 2;
            % number of connections into the next layer
            for j = 1 : net.layers{l}.outputmaps
                % number of connections from the previous layer
                fan_in = inputmaps * net.layers{l}.kernelsize ^ 2;
                for i = 1 : inputmaps
                    % Randomly initialize the kernels: there are outputmaps kernels per
                    % input map, i.e. every feature map of the previous layer is
                    % convolved with that many kernels. rand(n) produces an n-by-n
                    % matrix of values uniform in [0,1]; subtracting 0.5 and multiplying
                    % by 2 scales this to [-1, 1], and the final factor scales it to
                    % [-sqrt(6/(fan_in+fan_out)), sqrt(6/(fan_in+fan_out))].
                    % Because the weights are shared, the same kernel is used at every
                    % receptive-field position of a feature map, so only
                    % inputmaps * outputmaps kernels need to be stored.
                    net.layers{l}.k{i}{j} = (rand(net.layers{l}.kernelsize) - 0.5) * 2 * sqrt(6 / (fan_in + fan_out));
                end
                net.layers{l}.b{j} = 0; % initialize the biases to 0
            end
            % Only convolution layers change the number of feature maps; pooling does
            % not. This layer's outputmaps becomes the number of input maps of the next layer.
            inputmaps = net.layers{l}.outputmaps;
        end
    end
    % 'onum' is the number of labels, that's why it is calculated using size(y, 1).
    % If you have 20 labels the output of the network will be 20 neurons.
    % 'fvnum' is the number of neurons in the layer just before the output layer.
    % That layer is the last pooling layer, with inputmaps feature maps of size
    % mapsize, so it has inputmaps * prod(mapsize) neurons.
    % 'ffb' is the biases of the output neurons.
    % 'ffW' is the weights between the last layer and the output neurons. The last
    % layer is fully connected to the output layer, which is why ffW is onum-by-fvnum.
    fvnum = prod(mapsize) * inputmaps;
    onum = size(y, 1);

    net.ffb = zeros(onum, 1);
    net.ffW = (rand(onum, fvnum) - 0.5) * 2 * sqrt(6 / (onum + fvnum));
end
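To answer the "why?" in the comment above: sqrt(6 / (fan_in + fan_out)) is the normalized ("Xavier"/Glorot) initialization range. A small illustration (not toolbox code) for the first convolution layer, where inputmaps = 1, outputmaps = 6 and kernelsize = 5:

fan_in  = 1 * 5^2;                 % 25 connections feeding each output unit
fan_out = 6 * 5^2;                 % 150 connections leaving each input unit
r = sqrt(6 / (fan_in + fan_out));  % about 0.185
k = (rand(5) - 0.5) * 2 * r;       % one 5x5 kernel, uniform in [-r, r]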

cnntrain.m

net = cnnff(net, batch_x);
net = cnnbp(net, batch_y);
net = cnnapplygrads(net, opts);
cnntrain computes the gradients with back propagation. Below we look at these three functions in turn.
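The rest of cnntrain.m is essentially a mini-batch loop wrapped around these three calls. Roughly (a paraphrase; see the toolbox source for the exact code):

function net = cnntrain(net, x, y, opts)
    m = size(x, 3);                           % number of training images
    numbatches = m / opts.batchsize;
    net.rL = [];                              % smoothed loss curve, plotted later
    for i = 1 : opts.numepochs
        kk = randperm(m);                     % reshuffle the samples every epoch
        for l = 1 : numbatches
            idx = kk((l - 1) * opts.batchsize + 1 : l * opts.batchsize);
            batch_x = x(:, :, idx);
            batch_y = y(:, idx);
            net = cnnff(net, batch_x);        % forward pass
            net = cnnbp(net, batch_y);        % backward pass: deltas and gradients
            net = cnnapplygrads(net, opts);   % gradient descent step
            if isempty(net.rL)
                net.rL(1) = net.L;
            end
            net.rL(end + 1) = 0.99 * net.rL(end) + 0.01 * net.L;  % running average of the loss
        end
    end
end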

cnnff.m

function net = cnnff(net, x)
    n = numel(net.layers); % number of layers in the network
    net.layers{1}.a{1} = x; % the first layer is the input; here it holds a whole batch of images
    inputmaps = 1; % the input layer has a single feature map, the raw input image

    for l = 2 : n   %  for each layer
        if strcmp(net.layers{l}.type, 'c')
            %  !!below can probably be handled by insane matrix operations
            for j = 1 : net.layers{l}.outputmaps   %  for each output map
                % For each feature map of the previous layer, the size after a valid
                % convolution is (input map width - kernel width + 1) x (input map height - kernel height + 1).
                z = zeros(size(net.layers{l - 1}.a{1}) - [net.layers{l}.kernelsize - 1 net.layers{l}.kernelsize - 1 0]);
                % z accumulates one output feature map of this layer
                for i = 1 : inputmaps   %  for each input map
                    % Convolve each feature map of the previous layer (the input maps
                    % of this layer) with this layer's kernel and sum the results over
                    % all input maps. In other words, one output feature map is the
                    % position-wise sum of the convolutions of all previous-layer
                    % feature maps with their kernels. (In some papers and applications
                    % an output map is connected to only a subset of the input maps
                    % rather than all of them.)
                    z = z + convn(net.layers{l - 1}.a{i}, net.layers{l}.k{i}{j}, 'valid');
                end
                %  add bias, pass through nonlinearity
                net.layers{l}.a{j} = sigm(z + net.layers{l}.b{j});
                % Add the corresponding bias b and apply the sigmoid to get the
                % activation at every position; this is the layer's output feature map.
            end
            %  set number of input maps to this layers number of outputmaps
            inputmaps = net.layers{l}.outputmaps;
        elseif strcmp(net.layers{l}.type, 's') % subsampling layer
            %  downsample
            for j = 1 : inputmaps
                z = convn(net.layers{l - 1}.a{j}, ones(net.layers{l}.scale) / (net.layers{l}.scale ^ 2), 'valid');   %  !! replace with variable
                % To do mean pooling over scale=2 regions, convolve with a 2x2 kernel
                % whose entries are all 1/4.
                net.layers{l}.a{j} = z(1 : net.layers{l}.scale : end, 1 : net.layers{l}.scale : end, :);
                % The pooled map is read out of the convolution result with a stride of
                % scale=2, picking out the means of the non-overlapping windows.
            end
        end
    end
    % Flatten the feature maps of the last layer into one vector: the final extracted feature vector.
    %  concatenate all end layer feature maps into vector
    net.fv = [];
    for j = 1 : numel(net.layers{n}.a) % number of feature maps in the last layer
        sa = size(net.layers{n}.a{j}); % size of the j-th feature map
        net.fv = [net.fv; reshape(net.layers{n}.a{j}, sa(1) * sa(2), sa(3))];
    end
    %  feedforward into output perceptrons
    net.o = sigm(net.ffW * net.fv + repmat(net.ffb, 1, size(net.fv, 2)));
    % Final network output: sigmoid(W*X + b). Note this computes the outputs of all
    % batchsize samples at once.
end
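To see concretely what the mean-pooling branch above does, here is a minimal standalone sketch (not toolbox code) on a single 4x4 map with scale = 2:

a = magic(4) / 16;                            % toy 4x4 activation map
scale = 2;
z = convn(a, ones(scale) / scale^2, 'valid'); % 3x3 map of the means of every 2x2 window
pooled = z(1:scale:end, 1:scale:end);         % keep every 2nd row/column -> 2x2 map of non-overlapping window means
% Same values as averaging the four non-overlapping 2x2 blocks directly:
check = [mean(mean(a(1:2,1:2))) mean(mean(a(1:2,3:4))); ...
         mean(mean(a(3:4,1:2))) mean(mean(a(3:4,3:4)))];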

cnnbp.m

The subsampling layers in this toolbox have no trainable parameters, so no parameter gradients are computed for them; only the deltas are propagated through.

function net = cnnbp(net, y)
    n = numel(net.layers);
    net.e = net.o - y; % output error
    %  loss function
    net.L = 1/2 * sum(net.e(:) .^ 2) / size(net.e, 2); % cost, i.e. the mean squared error

    %%  backprop deltas
    net.od = net.e .* (net.o .* (1 - net.o));   %  output delta (error term)
    net.fvd = (net.ffW' * net.od);              %  feature vector delta
    if strcmp(net.layers{n}.type, 'c')          %  only conv layers has sigm function
        net.fvd = net.fvd .* (net.fv .* (1 - net.fv));
    end

    %  reshape feature vector deltas into output map style
    sa = size(net.layers{n}.a{1}); % size of the feature maps in the last layer (here "last layer" means the layer before the output layer)
    fvnum = sa(1) * sa(2); % the last layer's maps were flattened into a vector, so this is the feature dimension per map for one sample
    for j = 1 : numel(net.layers{n}.a) % number of feature maps in the last layer
        net.layers{n}.d{j} = reshape(net.fvd(((j - 1) * fvnum + 1) : j * fvnum, :), sa(1), sa(2), sa(3));
        % net.fvd holds the deltas of the flattened feature vectors of all samples
        % (flattened from the feature maps in cnnff.m), so here they are reshaped back
        % into feature-map form. d stores the deltas, i.e. the sensitivities (residuals).
    end

    for l = (n - 1) : -1 : 1 % for the layers before the output layer (their deltas are computed differently from the output layer's)
        if strcmp(net.layers{l}.type, 'c')
            for j = 1 : numel(net.layers{l}.a) % number of feature maps in this layer
                net.layers{l}.d{j} = net.layers{l}.a{j} .* (1 - net.layers{l}.a{j}) .* (expand(net.layers{l + 1}.d{j}, [net.layers{l + 1}.scale net.layers{l + 1}.scale 1]) / net.layers{l + 1}.scale ^ 2);
                % net.layers{l}.d{j} is the sensitivity map of the j-th map of layer l,
                % i.e. the delta of every neuron. expand upsamples the sensitivity map
                % of layer l+1, and the leading a .* (1 - a) factor is the sigmoid
                % derivative at this layer's activations.
            end
        elseif strcmp(net.layers{l}.type, 's')
            for i = 1 : numel(net.layers{l}.a) % number of feature maps in layer l
                z = zeros(size(net.layers{l}.a{1}));
                for j = 1 : numel(net.layers{l + 1}.a) % number of feature maps in layer l+1
                    z = z + convn(net.layers{l + 1}.d{j}, rot180(net.layers{l + 1}.k{i}{j}), 'full');
                end
                net.layers{l}.d{i} = z;
            end
        end
    end

    %%  calc gradients
    for l = 2 : n
        if strcmp(net.layers{l}.type, 'c')
            for j = 1 : numel(net.layers{l}.a)
                for i = 1 : numel(net.layers{l - 1}.a) % dk is the derivative of the error with respect to the kernel
                    net.layers{l}.dk{i}{j} = convn(flipall(net.layers{l - 1}.a{i}), net.layers{l}.d{j}, 'valid') / size(net.layers{l}.d{j}, 3);
                end
                % db is the derivative of the error with respect to the bias
                net.layers{l}.db{j} = sum(net.layers{l}.d{j}(:)) / size(net.layers{l}.d{j}, 3);
            end
        end
    end
    % gradients of the final perceptron (output) layer
    net.dffW = net.od * (net.fv)' / size(net.od, 2);
    net.dffb = mean(net.od, 2);

    function X = rot180(X)
        X = flipdim(flipdim(X, 1), 2);
    end
end
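The expand call in the conv-layer branch above comes from the toolbox's util directory; it upsamples the next layer's sensitivity map by replicating each element over its scale x scale pooling window. For a single 2-D map this amounts to a Kronecker product with ones(scale); a small sketch (not toolbox code):

d_next = [1 2; 3 4];                          % delta map of the following pooling layer
scale  = 2;
d_up   = kron(d_next, ones(scale)) / scale^2; % 4x4: each delta spread evenly over its 2x2 window
% d_up = [0.25 0.25 0.50 0.50
%         0.25 0.25 0.50 0.50
%         0.75 0.75 1.00 1.00
%         0.75 0.75 1.00 1.00]
% The conv layer's delta is then a .* (1 - a) .* d_up, i.e. the sigmoid derivative times the upsampled delta.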

cnnapplygrads.m

Apply the gradient updates layer by layer.

function net = cnnapplygrads(net, opts)
    for l = 2 : numel(net.layers)
        if strcmp(net.layers{l}.type, 'c')
            for j = 1 : numel(net.layers{l}.a)
                for ii = 1 : numel(net.layers{l - 1}.a)
                    % The usual weight update rule: W_new = W_old - alpha * dE/dW
                    % (the derivative of the error with respect to the weight)
                    net.layers{l}.k{ii}{j} = net.layers{l}.k{ii}{j} - opts.alpha * net.layers{l}.dk{ii}{j};
                end
                net.layers{l}.b{j} = net.layers{l}.b{j} - opts.alpha * net.layers{l}.db{j};
            end
        end
    end

    net.ffW = net.ffW - opts.alpha * net.dffW;
    net.ffb = net.ffb - opts.alpha * net.dffb;
end

cnntest.m

Compute the final result.

function [er, bad] = cnntest(net, x, y)
    %  feedforward
    net = cnnff(net, x); % forward pass to get the outputs
    [~, h] = max(net.o); % index of the largest output, i.e. the predicted label
    [~, a] = max(y);     % index of the largest expected output, i.e. the true label
    bad = find(h ~= a);  % indices where they differ, i.e. the misclassified samples
    er = numel(bad) / size(y, 2); % error rate
end








