Quick Start with Transfer Learning in MATLAB 2019a (Deep Learning Toolbox Series, Part 4)

This example shows how to use transfer learning to retrain AlexNet, a pretrained convolutional neural network, to classify a new set of images. Try this example to see how easy it is to get started with deep learning in MATLAB®.

Transfer learning is commonly used in deep learning applications. You can take a pretrained network and use it as a starting point to learn a new task. Fine-tuning a network with transfer learning is usually much faster and easier than training a network from scratch with randomly initialized weights, and you can quickly transfer the learned features to a new task using a smaller number of training images.

This example uses AlexNet. If the corresponding support package has not been downloaded, MATLAB throws an error whose message contains a download link: the red error text points to the Add-On Explorer, where you only need to sign in to install the package.
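If you want to verify the installation programmatically, here is a minimal sketch (the exact error text varies by release); it simply calls alexnet and, if the support package is missing, prints the error message containing the installation link:

% Minimal availability check (a sketch): alexnet throws an error with a
% download link if the support package is not installed.
try
    net = alexnet;
    disp('AlexNet support package is installed.')
catch err
    disp(err.message)   % the message points to the Add-On Explorer installer
end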

Example 1:

clear
clc
close all

%% Load the data and split it into 70% training and 30% validation images.
unzip('MerchData.zip');
imds = imageDatastore('MerchData','IncludeSubfolders',true,'LabelSource','foldernames');
[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized');

%% Load the pretrained AlexNet network.
net = alexnet;

%% Replace the final three layers to adapt the network to the new classes.
layersTransfer = net.Layers(1:end-3);
numClasses = numel(categories(imdsTrain.Labels));
layers = [
    layersTransfer
    fullyConnectedLayer(numClasses,'WeightLearnRateFactor',10,'BiasLearnRateFactor',10)
    softmaxLayer
    classificationLayer];

%% Set the training options and fine-tune the network.
options = trainingOptions('sgdm', ...
    'MiniBatchSize',10, ...
    'MaxEpochs',6, ...
    'InitialLearnRate',1e-4, ...
    'ValidationData',imdsValidation, ...
    'ValidationFrequency',3, ...
    'Verbose',false, ...
    'Plots','training-progress');

netTransfer = trainNetwork(imdsTrain,layers,options);

%% Classify the validation images and compute the accuracy.
YPred = classify(netTransfer,imdsValidation);
accuracy = mean(YPred == imdsValidation.Labels)

Results after training: [training-progress plot omitted]
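To see which classes the fine-tuned network confuses, you could additionally plot a confusion chart for the validation set; a minimal sketch (confusionchart requires R2018b or later, so it is available in R2019a):

% Optional: per-class validation results as a confusion chart (R2018b+).
figure
confusionchart(imdsValidation.Labels,YPred);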

Example 2:

clear
clc
close all
%% Load the data
% Unzip the sample images and load them as an image datastore.
% imageDatastore labels the images automatically based on folder names and
% stores the data as an ImageDatastore object. An image datastore lets you
% store large image data, including data that does not fit in memory.
% Split the data into 70% training data and 30% test data.
unzip('MerchData.zip');
imds = imageDatastore('MerchData', ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');

[imdsTrain,imdsTest] = splitEachLabel(imds,0.7,'randomized');

%% This very small data set now contains 55 training images and 20 test images. Display some sample images.
numTrainImages = numel(imdsTrain.Labels);
idx = randperm(numTrainImages,16);
figure
for i = 1:16
    subplot(4,4,i)
    I = readimage(imdsTrain,idx(i));
    imshow(I)
end


%% Load the pretrained network
net = alexnet;

%% Display the network architecture. The network has five convolutional layers and three fully connected layers.
net.Layers
%% The first layer, the image input layer, requires input images of size 227-by-227-by-3, where 3 is the number of color channels.
inputSize = net.Layers(1).InputSize

%% Extract image features
% Resize the images to the network input size, then extract features from
% the 'fc7' fully connected layer as row vectors.
augimdsTrain = augmentedImageDatastore(inputSize(1:2),imdsTrain);
augimdsTest = augmentedImageDatastore(inputSize(1:2),imdsTest);

layer = 'fc7';
featuresTrain = activations(net,augimdsTrain,layer,'OutputAs','rows');
featuresTest = activations(net,augimdsTest,layer,'OutputAs','rows');


%% Extract the class labels from the training and test data.
YTrain = imdsTrain.Labels;
YTest = imdsTest.Labels;

%% Fit the image classifier
% Use the features extracted from the training images as predictor variables
% and fit a multiclass support vector machine (SVM) using fitcecoc
% (Statistics and Machine Learning Toolbox).
classifier = fitcecoc(featuresTrain,YTrain);

%% Classify the test images
% Classify the test images using the trained SVM model and the features
% extracted from the test images.
YPred = predict(classifier,featuresTest);

% Display four sample test images with their predicted labels.
idx = [1 5 10 15];
figure
for i = 1:numel(idx)
    subplot(2,2,i)
    I = readimage(imdsTest,idx(i));
    label = YPred(idx(i));
    imshow(I)
    title(char(label))
end

% Calculate the classification accuracy on the test set. Accuracy is the
% fraction of labels that the classifier predicts correctly.
accuracy = mean(YPred == YTest)



The example above applies transfer learning to a small data set. [Screenshot of the data-set contents omitted.]

The labels of the test cases are: [screenshot omitted]
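Because the screenshots are not reproduced here, you can instead list the classes and per-class image counts directly from the datastore; a minimal sketch:

% List each label and the number of images per class in the data set.
countEachLabel(imds)

% Display the labels predicted for the test images.
YPred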
