With the lmdb files and the mean file ready from the previous step, we now take GoogLeNet as an example and modify the network definition, then train the model.
Copy the bvlc_googlenet folder from caffe-master\models to caffe-master\examples\imagenet. (Our lmdb files and mean file are already there, so it is convenient to keep everything together.)
Open train_val.prototxt and make the following changes:
1. Modify the data layers:
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mirror: true
    crop_size: 224
    mean_file: "examples/imagenet/mydata_mean.binaryproto" # the mean file
    #mean_value: 104 # comment these three lines out
    #mean_value: 117
    #mean_value: 123
  }
  data_param {
    source: "examples/imagenet/mydata_train_lmdb" # training-set lmdb
    batch_size: 32 # adjust to your GPU memory
    backend: LMDB
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    mirror: false
    crop_size: 224
    mean_file: "examples/imagenet/mydata_mean.binaryproto" # the mean file
    #mean_value: 104
    #mean_value: 117
    #mean_value: 123
  }
  data_param {
    source: "examples/imagenet/mydata_val_lmdb" # validation-set lmdb
    batch_size: 50 # batch_size * test_iter (in the solver) should roughly equal the validation-set size
    backend: LMDB
  }
}
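The test-phase batch_size comment above encodes a simple relation: batch_size multiplied by the solver's test_iter should cover the whole validation set. A quick sketch of that arithmetic (pure Python; the helper name is my own):

```python
import math

def pick_test_iter(num_val_images, test_batch_size):
    """Number of test iterations needed so that
    test_batch_size * test_iter covers the whole validation set."""
    return math.ceil(num_val_images / test_batch_size)

# 50000 validation images at batch_size 50 -> test_iter 1000
print(pick_test_iter(50000, 50))  # -> 1000
```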
GoogLeNet has three loss branches, so three places must be changed; most other networks have a single output, in which case changing one place is enough.
If you are fine-tuning, also rename these output layers. (Weights in a caffemodel are matched to layers by name; since num_output has changed, the saved weights no longer fit these layers, and renaming them forces Caffe to initialize them from scratch.)
layer {
  name: "loss1/classifier"
  type: "InnerProduct"
  bottom: "loss1/fc"
  top: "loss1/classifier"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 1000 # change to the number of classes in your dataset
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "loss2/classifier"
  type: "InnerProduct"
  bottom: "loss2/fc"
  top: "loss2/classifier"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 1000 # change to the number of classes in your dataset
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "loss3/classifier"
  type: "InnerProduct"
  bottom: "pool5/7x7_s1"
  top: "loss3/classifier"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 1000 # change to the number of classes in your dataset
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
Next, open deploy.prototxt and change the corresponding output layer in the same way:
layer {
  name: "loss3/classifier"
  type: "InnerProduct"
  bottom: "pool5/7x7_s1"
  top: "loss3/classifier"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 1000 # change to the number of classes in your dataset
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
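Because num_output must be edited in each classifier layer (and again in deploy.prototxt), it is easy to miss one. A throwaway sketch (my own helper, not part of Caffe) that rewrites every occurrence of the old class count in a prototxt string; matching the literal 1000 avoids touching the num_output fields of convolution layers:

```python
import re

def set_num_output(prototxt_text, num_classes):
    """Replace every 'num_output: 1000' (the original class count)
    with the new class count. Convolution layers also have num_output
    fields, so we match the old value exactly rather than any number."""
    return re.sub(r'num_output:\s*1000\b', 'num_output: %d' % num_classes,
                  prototxt_text)

print(set_num_output("inner_product_param { num_output: 1000 }", 5))
```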
If you are fine-tuning, rename this layer to match the new name you used in train_val.prototxt.
Next, open solver.prototxt and modify:
net: "examples/imagenet/bvlc_googlenet/train_val.prototxt" # make sure this path is correct
test_iter: 1000 # as noted above: test_iter * test-phase batch_size ≈ validation-set size
test_interval: 4000 # run a test pass every 4000 iterations
test_initialization: false
display: 40
average_loss: 40
base_lr: 0.01
lr_policy: "step"
stepsize: 320000 # lower the learning rate every 320000 iterations
gamma: 0.96
max_iter: 10000000 # total number of training iterations
momentum: 0.9
weight_decay: 0.0002
snapshot: 40000
snapshot_prefix: "examples/imagenet/bvlc_googlenet" # snapshots are saved under examples/imagenet with names like bvlc_googlenet_iter_***.caffemodel
solver_mode: GPU
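With lr_policy: "step", Caffe computes the learning rate as base_lr * gamma^floor(iter / stepsize). A small sketch of how the solver values above decay the rate:

```python
def step_lr(base_lr, gamma, stepsize, iteration):
    """Caffe 'step' policy: lr = base_lr * gamma ^ floor(iter / stepsize)."""
    return base_lr * gamma ** (iteration // stepsize)

# with base_lr 0.01, gamma 0.96, stepsize 320000:
for it in (0, 320000, 640000):
    print(it, step_lr(0.01, 0.96, 320000, it))
```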
Now go back to caffe-master\examples\imagenet, open train_caffenet.sh, and modify it:
(For fine-tuning, also add -weights **/**/**.caffemodel to the command, where the path points at the caffemodel you want to fine-tune from.)
#!/usr/bin/env sh
./build/tools/caffe train \
-solver examples/imagenet/bvlc_googlenet/solver.prototxt -gpu 0
(With multiple GPUs, choose which to use via the -gpu flag.)
Then run the script from the caffe-master root to start training:
cd caffe-master
./examples/imagenet/train_caffenet.sh
The trained caffemodel can then be used for image classification. You will need four files: (1) the labels.txt generated earlier, (2) the mydata_mean.binaryproto generated earlier, (3) the trained caffemodel, and (4) the modified deploy.prototxt. For the detailed procedure, see: http://blog.csdn.net/sinat_30071459/article/details/50974695
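At classification time the network's output is just a vector of scores, which must be mapped back through labels.txt. A minimal sketch of that last step (it assumes labels.txt holds one class name per line, with the line number as the class index; adjust to however your labels file was actually generated):

```python
def load_labels(lines):
    """One class name per line; the line number is the class index."""
    return [line.strip() for line in lines if line.strip()]

def top1(scores, labels):
    """Return the label whose score is highest."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best]

labels = load_labels(["cat\n", "dog\n", "bird\n"])
print(top1([0.1, 0.7, 0.2], labels))  # -> dog
```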