Convolutional Neural Networks (CNN), Part 4: Learning Features from Handwritten Digits Exercise (Vectorization)


As part of an introduction to CNNs, I am posting my own implementations of the UFLDL exercises chapter by chapter, for reference and correction.

This post builds on the vectorized code from the Sparse Autoencoder exercise; sampleIMAGES.m and some parameters in train.m have been modified to read the MNIST dataset.

Note: loadMNISTImages.m returns a two-dimensional array (784×60000), one flattened 28×28 image per column.


train.m :

%% CS294A/CS294W Programming Assignment Starter Code

%  This file has been adapted to extract features from the MNIST
%  dataset. Some redundant explanations have been removed; see the
%  sparse autoencoder exercise for details.

%  Instructions
%  ------------
% 
%  This file contains code that helps you get started on the
%  programming assignment. You will need to complete the code in sampleIMAGES.m,
%  sparseAutoencoderCost.m and computeNumericalGradient.m. 
%  For the purpose of completing the assignment, you do not need to
%  change the code in this file. 
%
%%======================================================================
%% STEP 0: Here we provide the relevant parameters values that will
%  allow your sparse autoencoder to get good filters; you do not need to 
%  change the parameters below.

visibleSize = 28*28;   % number of input units 

hiddenSize = 14*14;     % number of hidden units

sparsityParam = 0.1;   % desired average activation of the hidden units
                       % (this was denoted by the Greek letter rho, which
                       % looks like a lower-case "p", in the lecture notes)
lambda = 3e-3;         % weight decay parameter
beta = 3;              % weight of the sparsity penalty term

%%======================================================================
%% STEP 1: Implement sampleIMAGES
%
%  patches : first 10000 images from the MNIST dataset

patches = sampleIMAGES;

%%======================================================================
%% STEP 2: You can start training your sparse
%  autoencoder with minFunc (L-BFGS).

%  Randomly initialize the parameters
theta = initializeParameters(hiddenSize, visibleSize);

%  Use minFunc to minimize the function
addpath minFunc/
options.Method = 'lbfgs'; % Here, we use L-BFGS to optimize our cost
                          % function. Generally, for minFunc to work, you
                          % need a function pointer with two outputs: the
                          % function value and the gradient. In our problem,
                          % sparseAutoencoderCost.m satisfies this.
options.maxIter = 400;	  % Maximum number of iterations of L-BFGS to run 
options.display = 'on';


[opttheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                   visibleSize, hiddenSize, ...
                                   lambda, sparsityParam, ...
                                   beta, patches), ...
                              theta, options);

%%======================================================================
%% STEP 3: Visualization 

W1 = reshape(opttheta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
display_network(W1', 12); 

print -djpeg weights.jpg   % save the visualization to a file 
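The visualization step reshapes each row of W1 (the learned weights feeding one hidden unit) into a 28×28 image and tiles them into a grid. display_network does roughly the following; this NumPy sketch (the name tile_filters and the per-filter normalization are my assumptions) shows the idea:

```python
import numpy as np

def tile_filters(W1, side=28, pad=1):
    """Tile each row of W1 as a side x side patch in one grid image,
    with a pad-pixel border between patches (background set to -1)."""
    n = W1.shape[0]
    grid = int(np.ceil(np.sqrt(n)))          # patches per row/column
    dim = grid * (side + pad) + pad
    out = np.full((dim, dim), -1.0)
    for k in range(n):
        r, c = divmod(k, grid)
        patch = W1[k].reshape(side, side)
        patch = patch / (np.abs(patch).max() + 1e-12)  # scale to [-1, 1]
        out[pad + r*(side+pad): pad + r*(side+pad) + side,
            pad + c*(side+pad): pad + c*(side+pad) + side] = patch
    return out
```

With hiddenSize = 14*14 = 196 filters of 784 weights each, this yields a 14×14 grid of digit-stroke-like features.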

sampleIMAGES.m :

function patches = sampleIMAGES()
% sampleIMAGES
% Returns 10000 patches for training

Filename = 'train-images.idx3-ubyte';
images = loadMNISTImages(Filename);    % load images from disk 

patchsize = 28;  % we'll use 28x28 patches 
numpatches = 10000;

% Initialize patches with zeros.  Your code will fill in this matrix--one
% column per patch, 10000 columns. 
patches = zeros(patchsize*patchsize, numpatches);

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Fill in the variable called "patches" using data 
%  from MNIST Dataset.  
%  
%  loadMNISTImages returns a 784 x [number of MNIST images] matrix; each
%  column is one raw MNIST image, already flattened.

% Each column of images is already a flattened 28x28 image, so the first
% numpatches columns can be copied in a single vectorized assignment
% (no per-patch loop needed).
patches = images(:, 1:numpatches);

end
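The vectorization the post relies on lives in sparseAutoencoderCost.m: the forward and backward passes over all 10000 patches are done as matrix products rather than per-example loops. As a cross-check, here is a NumPy sketch of that vectorized cost/gradient following the standard UFLDL formulas (the parameter packing order [W1, W2, b1, b2] matches initializeParameters; variable names are mine):

```python
import numpy as np

def sparse_autoencoder_cost(theta, visible, hidden, lam, rho, beta, data):
    """Vectorized sparse-autoencoder cost and gradient.
    data has shape (visible, m), one example per column."""
    m = data.shape[1]
    # unpack the flat parameter vector: [W1, W2, b1, b2]
    s = hidden * visible
    W1 = theta[:s].reshape(hidden, visible)
    W2 = theta[s:2*s].reshape(visible, hidden)
    b1 = theta[2*s:2*s + hidden].reshape(-1, 1)
    b2 = theta[2*s + hidden:].reshape(-1, 1)

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    a2 = sigmoid(W1 @ data + b1)      # hidden activations, (hidden, m)
    a3 = sigmoid(W2 @ a2 + b2)        # reconstructions, (visible, m)

    rho_hat = a2.mean(axis=1, keepdims=True)   # average activation per unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

    cost = (0.5 / m) * np.sum((a3 - data) ** 2) \
         + (lam / 2) * (np.sum(W1 ** 2) + np.sum(W2 ** 2)) + beta * kl

    # backprop, with the sparsity term folded into the hidden-layer delta
    d3 = (a3 - data) * a3 * (1 - a3)
    sparsity = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
    d2 = (W2.T @ d3 + sparsity) * a2 * (1 - a2)

    grad = np.concatenate([
        ((d2 @ data.T) / m + lam * W1).ravel(),
        ((d3 @ a2.T) / m + lam * W2).ravel(),
        d2.mean(axis=1),
        d3.mean(axis=1)])
    return cost, grad
```

A numerical gradient check (as in computeNumericalGradient.m) on a tiny network confirms the analytic gradient, which is the same sanity check the exercise prescribes.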

Results:

The iterations do take a long time, but after vectorization the running time on my laptop is acceptable:


548.561 s / 60 ≈ 9.14 minutes, short of the 15-20 minutes quoted in UFLDL; then again, many years have passed, and a modern laptop CPU (i7-5500U) is well beyond the hardware of that era.

