1. Function call flow
main() --> GetBrewFunction() --> train() --> solve()
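As a rough illustration of this flow: tools/caffe.cpp keeps a map from subcommand name to handler function, GetBrewFunction() looks the handler up, and main() invokes it. A minimal sketch (names follow tools/caffe.cpp, but the RegisterBrewFunction macro and error handling are omitted):

#include <map>
#include <string>

typedef int (*BrewFunction)();                     // every subcommand has this signature
static std::map<std::string, BrewFunction> g_brew_map;

static int train() {
  // builds a Solver from the --solver flag and calls solver->Solve()
  return 0;
}

static BrewFunction GetBrewFunction(const std::string& name) {
  return g_brew_map[name];                         // e.g. name == "train"
}

int main(int argc, char** argv) {
  g_brew_map["train"] = train;                     // upstream registers via a macro
  if (argc < 2) return 1;
  return GetBrewFunction(argv[1])();               // "caffe train --solver=..." lands here
}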
2. The following resource can take MNIST training to 99.5% accuracy
https://github.com/shicai/Caffe_Manual
3. Chinese Caffe developer forum
http://caffecn.cn/?/explore/
4. A good blog series
http://blog.csdn.net/langb2014/article/category/5998589/3
5. A walk through the prototxt files, using MNIST (LeNet) as the example
Data layer
layer {
  name: "mnist"
  type: "Data"                    # data layer
  transform_param {
    scale: 0.00390625             # scale pixel values by 1/256, into the range [0, 1)
  }
  data_param {
    source: "mnist_train_lmdb"    # path of the database to load
    backend: LMDB
    batch_size: 64
  }
  top: "data"                     # data blob
  top: "label"                    # label blob
}
layer {
  # ...layer definition...
  include { phase: TRAIN }        # this layer is used only in the TRAIN phase and skipped in TEST
}
Convolution layer
layer {
  name: "conv1"
  type: "Convolution"
  # the param blocks are matched to the layer's learnable blobs by order
  # (they need no names): the first applies to the weights, the second to the biases
  param { lr_mult: 1 }            # weight learning rate = 1x the solver's base learning rate
  param { lr_mult: 2 }            # bias learning rate = 2x the solver's base learning rate
  convolution_param {
    num_output: 20                # 20 output feature maps
    kernel_size: 5
    stride: 1                     # convolution stride
    weight_filler {
      type: "xavier"              # fillers randomly initialize the weights; "xavier" chooses
                                  # the scale automatically from the number of input and
                                  # output neurons
    }
    bias_filler {
      type: "constant"            # simple constant initialization, default value 0
    }
  }
  bottom: "data"
  top: "conv1"
}
Pooling layer
layer {
  name: "pool1"
  type: "Pooling"
  pooling_param {
    kernel_size: 2
    stride: 2                     # kernel_size = 2 and stride = 2, so adjacent pooling regions do not overlap
    pool: MAX
  }
  bottom: "conv1"
  top: "pool1"
}
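To sanity-check the shapes: with no padding, a conv/pool layer's output side length is (input - kernel_size) / stride + 1. Starting from a 28x28 MNIST image:

conv1: (28 - 5) / 1 + 1 = 24  ->  20 feature maps of 24x24
pool1: (24 - 2) / 2 + 1 = 12  ->  20 feature maps of 12x12

This matches the "Top shape" lines in the training log further below.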
Fully connected layer
layer {
  name: "ip1"
  type: "InnerProduct"            # type name of the fully connected (inner product) layer
  param { lr_mult: 1 }            # weights
  param { lr_mult: 2 }            # biases
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
  bottom: "pool2"
  top: "ip1"
}
ReLU layer
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"                   # bottom and top share the same name, so the ReLU is
  top: "ip1"                      # computed in place to save memory; only element-wise
                                  # layers such as ReLU can safely reuse a name like this
}
Adding another fully connected layer after the ReLU
layer {
  name: "ip2"
  type: "InnerProduct"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 10                # 10 output classes, one per digit
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
  bottom: "ip1"
  top: "ip2"
}
Loss layer
layer {
  name: "loss"
  type: "SoftmaxWithLoss"         # softmax followed by the multinomial logistic loss
  bottom: "ip2"                   # predictions
  bottom: "label"                 # ground-truth labels
  top: "loss"                     # present in the stock prototxt; loss layers can also auto-create it
}
Also: the Accuracy layer
It exists only in the TEST phase; every test_interval iterations it reports the accuracy of the current model.
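For reference, the Accuracy layer as it appears in lenet_train_test.prototxt:

layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }
}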
Training the model
caffe.bin train --solver=examples/mnist/lenet_solver.prototxt
Testing the model (prediction)
caffe.bin test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel
caffe.bin commands:
train: train or fine-tune a model
test: score a model on a test set
device_query: show GPU diagnostic information
time: benchmark model execution time
Options:
-gpu
-iterations
-model
-sighup_effect
-sigint_effect
-snapshot
-solver
-weights (pretrained weights for fine-tuning, a *.caffemodel file; see the example below)
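For example, passing -weights together with -solver to the train command fine-tunes from a pretrained model instead of starting from random initialization (the MNIST paths are reused here purely as placeholders):

caffe.bin train -solver examples/mnist/lenet_solver.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel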
Notes on solver.prototxt
# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
# Note: this parameter changes with the dataset; pick test_iter so that
# test_iter * test batch_size = number of test images (here 100 * 100 = 10,000).
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
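# The "inv" policy computes: lr = base_lr * (1 + gamma * iter)^(-power).
# Sanity check against the training log below:
#   lr(100) = 0.01 * (1 + 0.0001 * 100)^(-0.75) ≈ 0.00992565
# which is exactly the "lr = 0.00992565" reported at iteration 100.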
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
# solver mode: CPU or GPU
solver_mode: GPU
Notes on the training-log output
I1203 net.cpp:66] Creating Layer conv1
I1203 net.cpp:76] conv1 <- data
I1203 net.cpp:101] conv1 -> conv1
I1203 net.cpp:116] Top shape: 20 24 24 // output shape: 20 feature maps of size 24x24 (28 - 5 + 1 = 24)
I1203 net.cpp:127] conv1 needs backward computation.
I1203 solver.cpp:204] Iteration 100, lr = 0.00992565
I1203 solver.cpp:66] Iteration 100, loss = 0.26044
...
I1203 solver.cpp:84] Testing net
I1203 solver.cpp:111] Test score #0: 0.9785 // accuracy on the test set
I1203 solver.cpp:111] Test score #1: 0.0606671 // loss on the test set
Caffe data structures
A BlobProto object carries blob data between memory and disk.
Conversion example:
{
  Blob<float> a;
  a.Reshape(1, 2, 3, 4);                          // N x C x H x W
  BlobProto bp;
  a.ToProto(&bp, true);                           // serialize; true also writes the diff
  WriteProtoToBinaryFile(bp, "a.blob");           // memory -> disk
  BlobProto bp2;
  ReadProtoFromBinaryFileOrDie("a.blob", &bp2);   // disk -> memory
  Blob<float> b;
  b.FromProto(bp2, true);                         // true reshapes b to match bp2
}
Data structure description file: caffe.proto
Following the usual C++ approach one could define these data formats directly as structs, so why insist on the ProtoBuffer format? In short, plain structs are inconvenient in a few ways. First, serializing and deserializing a struct requires extra hand-written code, which makes it hard to standardize the interface. Second, when a struct contains variable-length data (typically held through a pointer to some block of memory), guaranteeing data integrity takes much more careful work. ProtoBuffer hides exactly the parts of this work that are most error-prone and handles them automatically, which makes programs more robust.
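For a concrete feel, here is an abridged BlobProto definition in the spirit of caffe.proto (only the legacy 4-D shape fields and the data/diff arrays are shown; newer Caffe versions add a BlobShape field and double-precision variants):

message BlobProto {
  optional int32 num = 1 [default = 0];       // legacy 4-D shape: N
  optional int32 channels = 2 [default = 0];  // C
  optional int32 height = 3 [default = 0];    // H
  optional int32 width = 4 [default = 0];     // W
  repeated float data = 5 [packed = true];    // flattened blob contents
  repeated float diff = 6 [packed = true];    // gradients, written when write_diff is true
}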
按照C++的思路,可以直接使用结构体来定义数据形式,为什么一定要用ProtoBuffer这种格式呐?简单来说,结构体存在一些使用不方便的地方,首先,结构体的序列化和反序列化操作需要额外的编程实现,难以做到接口的标准化;其次,结构体中包含变长数据时(一般用指向某个内存地址的指针),需要更加细致的工作保证数据的完整性。而protobuffer将编程最容易出问题的地方加以隐藏,让机器自动处理,提高了程序的健壮性。