Translated from: http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html
Only the main parts are translated here.
step1: download the dataset from the shell (the downloader itself is a Python script):
python examples/finetune_flickr_style/assemble_data.py --workers=-1 --images=2000 --seed 831486
After this command finishes, the training/test data is saved under the data/flickr_style folder.
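As a side note, the script is assumed to write Caffe-style image-list files (one "IMAGE_PATH LABEL" pair per line, the format Caffe's ImageData layer reads). A minimal sketch of pulling the label out of such a line; the sample path is made up:

```shell
# label_of: print the integer label (the last whitespace-separated field)
# of one image-list line. Illustration only, not part of the Caffe tools.
label_of() {
  echo "$1" | awk '{print $NF}'
}

label_of "data/flickr_style/images/123.jpg 7"
# prints: 7
```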
step2: download the pretrained model:
./scripts/download_model_binary.py models/bvlc_reference_caffenet
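A hedged sanity check you could run after step 2. The path matches the command in step 3 below; the helper function is just an illustration, not part of Caffe:

```shell
# model_ok: print "ok" if the file exists and is non-empty, "missing" otherwise.
model_ok() {
  if [ -s "$1" ]; then echo ok; else echo missing; fi
}

model_ok models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel
```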
step3: fine-tune the downloaded model
./build/tools/caffe train -solver models/finetune_flickr_style/solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel -gpu 0
Argument descriptions:
train: run Caffe in training mode
-solver: path to solver.prototxt, the file that holds the optimization hyperparameters
-weights: initialize the network from the caffemodel downloaded in step 2, so training continues (fine-tunes) from those weights instead of starting from scratch
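The file passed via -solver is a plain-text protobuf. A hedged sketch of the kind of fields a fine-tuning solver.prototxt contains; the values below are illustrative, not copied from the shipped file:

```
net: "models/finetune_flickr_style/train_val.prototxt"
base_lr: 0.001        # typically lowered vs. training from scratch
lr_policy: "step"
gamma: 0.1
stepsize: 20000
max_iter: 100000
snapshot_prefix: "models/finetune_flickr_style/finetune_flickr_style"
solver_mode: GPU
```

Note that -weights copies parameters by layer name: layers whose names match the pretrained net inherit its weights, while renamed layers (such as a new final classifier sized for the style labels) are freshly initialized and learned from the new data.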
---------------------------------------------------------------------------------------------------
Summary of fine-tuning:
Fine-tuning can be feasible when training from scratch would not be for lack of time or data. Even in CPU mode each pass through the training set takes ~100 s. GPU fine-tuning is of course faster still and can learn a useful model in minutes or hours instead of days or weeks. Furthermore, note that the model has only trained on < 2,000 instances. Transfer learning a new task like style recognition from the ImageNet pretraining can require much less data than training from scratch.
In other words: when there is not enough time or data to train from scratch, fine-tuning is still feasible. A GPU trains far faster than a CPU, and starting from a model pretrained on ImageNet needs far less data than training a model from zero.