Stanford UFLDL Tutorial - Exercise: Learning color features with Sparse Autoencoders


Learning color features with Sparse Autoencoders

In this exercise, you will implement a linear decoder (a sparse autoencoder whose output layer uses a linear activation function). You will then apply it to learn features on color images from the STL-10 dataset. These features will be used in a later exercise on convolution and pooling for classifying STL-10 images.

In the file linear_decoder_exercise.zip we have provided some starter code. You should write your code at the places indicated by "YOUR CODE HERE" in the files.

For this exercise, you will need to copy and modify sparseAutoencoderCost.m from the sparse autoencoder exercise.

Dependencies

You will need your implementation from the sparse autoencoder exercise (in particular, a working sparseAutoencoderCost.m), together with the starter code in linear_decoder_exercise.zip.

If you have not completed the sparse autoencoder exercise, we strongly suggest you complete it first.

Learning from color image patches

In all the exercises so far, you have been working only with grayscale images. In this exercise, you will get to work with RGB color images for the first time.

Conveniently, the fact that an image has three color channels (RGB), rather than a single gray channel, presents little difficulty for the sparse autoencoder. You can simply combine the intensities from all three color channels into one long vector, as if you were working with a grayscale image with 3x the number of pixels of the original image.
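The flattening described above can be sketched as follows. This is a NumPy illustration, not the exercise's MATLAB code; any consistent channel ordering works, as long as the same ordering is used for every patch.

```python
import numpy as np

# A hypothetical 8x8 RGB patch: height x width x channels.
patch = np.random.rand(8, 8, 3)

# Stack the R, G, and B channel intensities one after another into a
# single 192-dimensional vector (8 * 8 * 3 = 192), so the autoencoder
# sees it as if it were a grayscale image with 3x the pixels.
x = patch.transpose(2, 0, 1).reshape(-1)

print(x.shape)  # (192,)
```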

Step 0: Initialization

In this step, we initialize some parameters used in the exercise (see starter code for details).
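The actual parameter values live in the starter code; as a rough sketch, the input dimensionality follows directly from the patch size and channel count stated in this exercise (variable names here are illustrative, not the starter code's):

```python
# Dimensions stated in the exercise text.
image_channels = 3    # RGB
patch_dim = 8         # 8x8 patches
num_patches = 100000  # patches sampled from STL-10

# Each patch becomes one input vector to the autoencoder.
visible_size = patch_dim * patch_dim * image_channels
print(visible_size)  # 192
```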

Step 1: Modify your sparse autoencoder to use a linear decoder

Copy sparseAutoencoderCost.m to the directory for this exercise and rename it to sparseAutoencoderLinearCost.m. Rename the function sparseAutoencoderCost in the file to sparseAutoencoderLinearCost, and modify it to use a linear decoder. In particular, you should change the cost and gradients returned to reflect the change from a sigmoid to a linear decoder. After making this change, check your gradients numerically to ensure that they are correct.
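The essential change is in the output layer: with a linear activation f(z) = z, the output activation is a3 = z3 and f'(z3) = 1, so the output-layer error term loses the sigmoid-derivative factor. A minimal NumPy sketch (toy sizes, sparsity and weight-decay terms omitted) with a numerical gradient check:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy sizes for illustration only: 6 visible units, 4 hidden, 10 examples.
rng = np.random.default_rng(0)
n_vis, n_hid, m = 6, 4, 10
W1 = rng.normal(scale=0.1, size=(n_hid, n_vis)); b1 = np.zeros((n_hid, 1))
W2 = rng.normal(scale=0.1, size=(n_vis, n_hid)); b2 = np.zeros((n_vis, 1))
x = rng.normal(size=(n_vis, m))

# Forward pass: hidden layer stays sigmoid, output layer is LINEAR.
a2 = sigmoid(W1 @ x + b1)
a3 = W2 @ a2 + b2            # linear decoder: a3 = z3, not sigmoid(z3)

# Squared-error reconstruction cost.
cost = 0.5 * np.sum((a3 - x) ** 2) / m

# Backprop: f'(z3) = 1, so delta3 has no a3*(1-a3) factor.
delta3 = (a3 - x) / m                       # changed for the linear decoder
delta2 = (W2.T @ delta3) * a2 * (1 - a2)    # hidden layer is unchanged
W2grad = delta3 @ a2.T

# Central-difference check of one entry of W2grad.
eps = 1e-5
W2p = W2.copy(); W2p[0, 0] += eps
W2m = W2.copy(); W2m[0, 0] -= eps
Jp = 0.5 * np.sum((W2p @ a2 + b2 - x) ** 2) / m
Jm = 0.5 * np.sum((W2m @ a2 + b2 - x) ** 2) / m
num = (Jp - Jm) / (2 * eps)
print(abs(num - W2grad[0, 0]) < 1e-6)
```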

Step 2: Learn features on small patches

You will now use your sparse autoencoder to learn features on a set of 100,000 small 8x8 patches sampled from the larger 96x96 STL-10 images. (The STL-10 dataset comprises 5,000 training and 8,000 test examples, each a 96x96 labelled color image belonging to one of ten classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck.)
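The patch-sampling step can be sketched as picking a random image and a random 8x8 window within it, then flattening. This NumPy sketch uses random stand-in images and a reduced patch count; the real exercise loads the STL-10 patches via the starter code.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the STL-10 images: 20 random 96x96 RGB images.
images = rng.random((20, 96, 96, 3))

patch_dim, num_patches = 8, 1000   # the exercise uses 100,000 patches
patches = np.empty((patch_dim * patch_dim * 3, num_patches))
for i in range(num_patches):
    img = rng.integers(images.shape[0])
    r = rng.integers(96 - patch_dim + 1)   # random top-left corner
    c = rng.integers(96 - patch_dim + 1)
    patches[:, i] = images[img, r:r + patch_dim, c:c + patch_dim, :].reshape(-1)

print(patches.shape)  # (192, 1000)
```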

The code provided in this step trains your sparse autoencoder for 400 iterations with the default parameters initialized in step 0. This should take around 45 minutes. Your sparse autoencoder should learn features which when visualized, look like edges and "opponent colors," as in the figure below.

[Figure: CNN Features Good.png, the learned features, visualized as edge and "opponent color" detectors]

If your parameters are improperly tuned (the default parameters should work), or if your implementation of the autoencoder is buggy, you might instead get images that look like one of the following:

[Figures: Cnn Features Bad1.png and Cnn Features Bad2.png, examples of features learned with a buggy or mistuned implementation]

The learned features will be saved to STL10Features.mat, which will be used in the later exercise on convolution and pooling.

from: http://ufldl.stanford.edu/wiki/index.php/Exercise:Learning_color_features_with_Sparse_Autoencoders
