When fine-tuning applies

https://www.quora.com/How-do-I-fine-tune-a-Caffe-pre-trained-model-to-do-image-classification-on-my-own-dataset

Basic concepts and an example:

Deep nets (CNNs) like AlexNet, VGGNet, or GoogLeNet are trained to classify images into different categories. Before deep nets became the dominant approach, the typical method for classification was to extract features from the images (e.g. with HOG or bag-of-words) and train an SVM on those precomputed features. In a deep net, by contrast, both the features and the weights for classifying into different classes are learned end to end; you don't need a separate feature-extraction method.

So when you train an AlexNet, it learns the feature representation as well as the weights for classifying the image. You just feed in an image and get back the assigned class.

The idea is that the features learned for classifying the 1000 ImageNet object categories should be sufficient to classify a different set of object categories.

This is not always the case; it depends on the dataset, the type of images, and the classification task.

If your images look similar to the original training images, you can reuse the feature-representation part of the deep net instead of learning it again. This is the idea of fine-tuning.

So what you do is copy the feature-representation layers as-is from the network that was already trained, and learn only the new weights required to classify those features into the categories your dataset has.

Implementation-level details in Caffe:

I assume here that you know how to create LMDB files for your new dataset.
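As a sketch, LMDB creation is typically done with Caffe's `convert_imageset` tool; the paths, list file, and resize dimensions below are assumptions for illustration:

```shell
# train.txt is assumed to list one "relative/path/image.jpg label" pair per line.
# Images are resized to 256x256 (a common choice for AlexNet-style training).
convert_imageset --resize_height=256 --resize_width=256 --shuffle \
    /path/to/images/ train.txt train_lmdb
```

Repeat with a separate list file (e.g. val.txt) to build the validation LMDB.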

If you want to fine-tune AlexNet, you copy the first 7 of its 8 layers as-is and change only the last layer, i.e. fc8, a fully connected layer.

The changes you need to make are in train_val.prototxt.

Take the train_val.prototxt file from AlexNet and rename the last layer from fc8 to, say, fc8_tune.

You must change the name of this layer; otherwise Caffe copies the pretrained fc8 weights into it. Be careful about this.

You also need to change the train.sh file to load the pretrained weights:

$TOOLS/caffe train --solver=quick_solver.prototxt --weights=bvlc_googlenet.caffemodel 2> train.log

This example fine-tunes GoogLeNet; change the weights file accordingly (e.g. bvlc_alexnet.caffemodel for AlexNet).

And you need to update the number of classes (num_output) in the renamed fc8 layer.
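A minimal sketch of the renamed last layer in train_val.prototxt; the name fc8_tune and the 20-class num_output are assumptions for a hypothetical new dataset:

```protobuf
layer {
  name: "fc8_tune"        # renamed: a new name means Caffe does NOT copy the old fc8 weights
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_tune"
  inner_product_param {
    num_output: 20        # number of classes in the new dataset (assumed 20 here)
  }
}
```

All other layers keep their original names, so their weights are loaded from the pretrained .caffemodel.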

If you have any more questions, please ask.


Steps:

1) prepare the input data
2) edit the data (input) layers and the fc8 layer in imagenet_train.prototxt and imagenet_val.prototxt
3) run finetune_net imagenet_solver.prototxt caffe_reference_imagenet_model in a terminal
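The steps above can be sketched as follows; the solver and model file names come from the list, while the paths are assumptions. finetune_net is the legacy interface; newer Caffe versions use caffe train --weights instead:

```shell
# Legacy interface (older Caffe releases):
finetune_net imagenet_solver.prototxt caffe_reference_imagenet_model

# Modern equivalent (assumed current directory contains both files):
caffe train \
    --solver=imagenet_solver.prototxt \
    --weights=caffe_reference_imagenet_model 2> train.log
```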


Question:

Check failed: error == cudaSuccess (2 vs. 0) out of memory

The message is clear: your GPU card is out of memory, so you will need to reduce batch_size.
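batch_size lives in the data layers of train_val.prototxt. A sketch of where to reduce it; the source path and values are assumptions:

```protobuf
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  data_param {
    source: "train_lmdb"  # assumed LMDB path from the data-preparation step
    backend: LMDB
    batch_size: 128       # halve this (64, 32, ...) until training fits in GPU memory
  }
}
```

Reducing batch_size lowers per-iteration memory at the cost of noisier gradients; you may want to lower the learning rate correspondingly.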

