Time to train on the LMDB built in the previous step; the network is just the simple LeNet (lenet_lr). Whew!
Network definition:
name: "LeNet"
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/my_imagenet/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/my_imagenet/mnist_val_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
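A small sketch of what the `transform_param` scale of 0.00390625 actually does (this is exactly 1/256): it maps raw 8-bit pixel values in [0, 255] into roughly [0, 1) before they reach conv1.

```python
# The scale in transform_param is 1/256, normalizing 8-bit pixels.
scale = 0.00390625
assert scale == 1 / 256

for pixel in (0, 128, 255):
    # each raw pixel value is multiplied by the scale factor
    print(pixel, "->", pixel * scale)  # 0 -> 0.0, 128 -> 0.5, 255 -> 0.99609375
```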
Solver parameters:
# network definition
net: "examples/my_imagenet/lenet_lr.prototxt"
# test batches per test pass (100 iterations x batch_size 100 = 10000 test images)
test_iter: 100
# run a test pass every 500 training iterations
test_interval: 500
# base learning rate
base_lr: 0.01
# momentum
momentum: 0.9
# weight decay (regularization)
weight_decay: 0.0005
# learning rate policy
lr_policy: "inv"
gamma: 0.01
power: 0.75
# display progress every 100 iterations
display: 100
# maximum number of iterations
max_iter: 10000
# snapshot the model to disk every 5000 iterations
snapshot: 5000
# snapshot file prefix
snapshot_prefix: "examples/my_imagenet/lenet"
# train on CPU (switch to GPU if available)
solver_mode: CPU
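A quick sketch of how the "inv" lr_policy decays the learning rate over training, assuming Caffe's formula lr = base_lr * (1 + gamma * iter)^(-power) with the values from the solver above:

```python
# Values taken from the solver file above.
base_lr, gamma, power = 0.01, 0.01, 0.75

def inv_lr(it):
    # Caffe "inv" policy: lr shrinks smoothly as iterations grow.
    return base_lr * (1 + gamma * it) ** (-power)

print(inv_lr(0))      # 0.01 at iteration 0
print(inv_lr(10000))  # much smaller rate by max_iter
```

So the effective rate at iteration 10000 is roughly 30x smaller than at the start, which is why late training updates are gentle.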
Shell script to run training:
#!/usr/bin/env sh
set -e
cd ~/caffe-master
build/tools/caffe train --solver=examples/my_imagenet/lenet_lr_solver.prototxt "$@"
Shell script for testing (prediction):
# "test" runs forward passes only; no parameter updates
# -model:      model definition prototxt
# -weights:    pretrained weights file
# -iterations: test iterations; samples tested = iterations * batch_size
./build/tools/caffe.bin test \
    -model examples/my_imagenet/lenet_lr.prototxt \
    -weights examples/my_imagenet/lenet_iter_10000.caffemodel \
    -iterations 1
Shell script for classification:
# arguments: model prototxt, trained weights, mean image, labels file, input image
./build/examples/cpp_classification/classification.bin \
    examples/my_imagenet/lenet_lr.prototxt \
    examples/my_imagenet/lenet_iter_10000.caffemodel \
    data/mydata/imagenet_mean.binaryproto \
    data/mydata/train.txt \
    examples/my_imagenet/train14.JPEG

Note that classification.bin normally expects a deploy-style prototxt (an Input layer instead of Data layers), so the train/test prototxt may need adapting for this to run.
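Conceptually, the last step of classification is just a softmax over the 10 outputs of ip2 followed by picking the top label. A minimal sketch with made-up scores (the values here are hypothetical, not from the trained model):

```python
import math

# Hypothetical ip2 outputs: one raw score per class (10 classes).
scores = [1.2, 0.3, -0.5, 2.1, 0.0, -1.0, 0.4, 0.8, -0.2, 0.1]

# Softmax: subtract the max first for numerical stability.
exps = [math.exp(s - max(scores)) for s in scores]
total = sum(exps)
probs = [e / total for e in exps]

# The predicted class is simply the index with the highest probability.
best = max(range(len(probs)), key=probs.__getitem__)
print("predicted class:", best)  # index 3 holds the largest score
```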
There is also a script that computes the mean image; I believe the mean gets subtracted from each input to center/normalize the data:
#!/usr/bin/env sh
EXAMPLE=examples/my_imagenet
DATA=data/mydata
TOOLS=build/tools
$TOOLS/compute_image_mean \
$EXAMPLE/mnist_train_lmdb \
$DATA/imagenet_mean.binaryproto
echo "Done."
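What compute_image_mean does, conceptually: average each pixel position over the whole training set, producing one "mean image" that can later be subtracted from every input. A toy sketch with two 2x2 grayscale "images":

```python
# Two toy 2x2 images standing in for the training set.
images = [
    [[10, 20], [30, 40]],
    [[20, 40], [60, 80]],
]

h, w = 2, 2
# Average each pixel position across all images.
mean = [[sum(img[r][c] for img in images) / len(images) for c in range(w)]
        for r in range(h)]
print(mean)  # [[15.0, 30.0], [45.0, 60.0]]
```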
Whew.
What it looks like while running:
(my hand apparently made it into the screenshot, lol)
I'll check the results in a bit; the program is still running.
Network structure (shapes written as h x w x channels x n, assuming 32 x 32 input; conv/pool output size = (in - kernel) / stride + 1):
data      32 x 32 x 1  x n
conv1     5 x 5 kernel, 20 outputs, stride 1
data      28 x 28 x 20 x n
pool1     2 x 2 MAX, stride 2
data      14 x 14 x 20 x n
conv2     5 x 5 kernel, 50 outputs, stride 1 (each output channel sums over all 20 input channels, so the result has 50 channels, not 20 x 50)
data      10 x 10 x 50 x n
pool2     2 x 2 MAX, stride 2
data      5 x 5 x 50 x n
ip1       fully connected, 500 outputs
data      500 x n
relu1     (in-place)
ip2       fully connected, 10 outputs
data      10 x n
accuracy / loss (scalars)
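The shape walkthrough above can be checked mechanically with the standard output-size formula for unpadded conv/pool layers:

```python
def out_size(size, kernel, stride):
    # Spatial output size of a conv/pool layer with no padding.
    return (size - kernel) // stride + 1

s = 32                                      # assumed 32 x 32 input, as above
s = out_size(s, 5, 1); print("conv1:", s)   # conv1: 28
s = out_size(s, 2, 2); print("pool1:", s)   # pool1: 14
s = out_size(s, 5, 1); print("conv2:", s)   # conv2: 10
s = out_size(s, 2, 2); print("pool2:", s)   # pool2: 5
```

(Caffe rounds pooling sizes up rather than down, but with these kernel/stride values the two conventions agree.)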