Tutorial: http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial
Exercise: http://deeplearning.stanford.edu/wiki/index.php/Exercise:Vectorization
Code
Only the parameter settings in train.m and the sample-set generation code in sampleIMAGES.m need to change here; the rest of the code was already vectorized in Exercise (1) and requires no modification.
Modifying the parameters (train.m)
%% STEP 0: Here we provide the relevant parameters values that will
% allow your sparse autoencoder to get good filters; you do not need to
% change the parameters below.
visibleSize = 28*28; % number of input units
hiddenSize = 196; % number of hidden units
sparsityParam = 0.1; % desired average activation of the hidden units.
% (This was denoted by the Greek alphabet rho, which looks like a lower-case "p",
% in the lecture notes).
lambda = 3e-3; % weight decay parameter
beta = 3; % weight of sparsity penalty term
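The roles of sparsityParam (rho) and beta above: rho is the target average activation for each hidden unit, and beta weights the KL-divergence sparsity penalty that pushes the actual average activations toward rho. A minimal NumPy sketch of that penalty term (variable names here are illustrative, not from the exercise code):

```python
import numpy as np

def sparsity_penalty(rho_hat, rho=0.1, beta=3.0):
    """KL-divergence sparsity penalty: beta * sum_j KL(rho || rho_hat_j),
    where rho_hat is the vector of average hidden-unit activations."""
    kl = rho * np.log(rho / rho_hat) \
        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))
    return beta * np.sum(kl)

# When every hidden unit's average activation equals rho, the penalty is zero;
# it grows as the activations drift away from the target.
rho_hat = np.full(196, 0.1)
print(sparsity_penalty(rho_hat))  # -> 0.0
```

With beta = 3 and rho = 0.1 as set above, this term dominates unless most hidden units stay mostly inactive, which is what produces the edge-like filters the exercise expects.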
Generating the sample set (sampleIMAGES.m)
Take the first 10,000 images of the MNIST dataset as the sample set:
% load images from disk
IMAGES = loadMNISTImages('train-images.idx3-ubyte');
patchsize = 28;
numpatches = 10000;
%% ---------- YOUR CODE HERE --------------------------------------
patches = IMAGES(:,1:numpatches); % take the first 10000 images as the sample set
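loadMNISTImages is provided with the starter code; it parses the idx3-ubyte format, which is a 16-byte big-endian header (magic number 2051, image count, rows, columns) followed by raw pixel bytes. For reference, a rough Python equivalent of the loading-and-slicing step above (a sketch, not the exercise's actual loader):

```python
import struct
import numpy as np

def load_mnist_images(path):
    """Parse an MNIST idx3-ubyte file into a (rows*cols, numImages)
    float array scaled to [0, 1], one image per column,
    mirroring what loadMNISTImages.m returns."""
    with open(path, 'rb') as f:
        magic, num, rows, cols = struct.unpack('>IIII', f.read(16))
        assert magic == 2051, 'not an idx3-ubyte image file'
        data = np.frombuffer(f.read(), dtype=np.uint8)
    return data.reshape(num, rows * cols).T.astype(np.float64) / 255.0

# images = load_mnist_images('train-images.idx3-ubyte')
# patches = images[:, :10000]   # first 10000 images as the sample set
```

Because each image is a full 28x28 column, patchsize matches the image size and no random patch sampling is needed, unlike the natural-image exercise.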
Note:
Since the sample set is fairly large, it is best to comment out STEP 3: Gradient Checking before running; otherwise training will be very slow.
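Gradient checking is slow because the central-difference numerical gradient needs two full cost evaluations per parameter, and this network has on the order of 2 * 784 * 196 weights plus biases. The idea, sketched in Python on a toy cost function (the exercise's computeNumericalGradient.m does the same thing with EPSILON = 1e-4):

```python
import numpy as np

def numerical_gradient(J, theta, eps=1e-4):
    """Central-difference gradient: (J(theta+e) - J(theta-e)) / (2*eps)
    per coordinate, i.e. 2 cost evaluations per parameter -- which is
    why it must be disabled once the model is large."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (J(theta + e) - J(theta - e)) / (2 * eps)
    return grad

# Toy check: J(theta) = sum(theta^2) has analytic gradient 2*theta.
theta = np.array([1.0, -2.0, 3.0])
num_grad = numerical_gradient(lambda t: np.sum(t ** 2), theta)
assert np.allclose(num_grad, 2 * theta, atol=1e-6)
```

A good practice, suggested by the tutorial, is to run the check once on a tiny configuration (few inputs, few hidden units) to validate sparseAutoencoderCost, then disable it for the full run.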
Results