Caffe Environment Setup and Usage
I. Setting up the Caffe environment on Ubuntu 17.10 and later
From Ubuntu 17.10 onwards, Caffe ships as a prebuilt package in the distribution, so there is no longer any need to compile and configure it yourself. See the link below for details. Here I use Ubuntu 18.04 as the example and give a brief walkthrough.
Reference link:
Concretely, it comes down to
apt-get install caffe-cpu
or
apt-get install caffe-cuda
Either way, a precompiled version is installed.
I am using Ubuntu 18.04, and the package installed without problems.
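A minimal check that the installation went through (just one way to verify; output will vary with your setup):
which caffe            # the command-line tool installed by the package
dpkg -l | grep caffe   # the caffe packages that are installed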
II. Getting started
Taking the training and configuration of CIFAR-10 as the example: the official sources include some reference files, but they have to be modified here, otherwise they will not work.
1. Downloading the CIFAR-10 dataset
The standard way is ==./data/cifar10/get_cifar10.sh==, but my network connection is poor and that download is far too slow, so I did not use it.
Instead, I fetched the data directly from the command line as described below, which leaves the CIFAR-10 binary batches under data/cifar10.
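A minimal sketch of that manual download (run from the working directory that contains data/ and examples/; the URL is the same one get_cifar10.sh uses):
mkdir -p data/cifar10 examples/cifar10
cd data/cifar10
wget http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz
tar -xzf cifar-10-binary.tar.gz
# convert_cifar_data expects the .bin batch files directly under data/cifar10
mv cifar-10-batches-bin/* . && rm -rf cifar-10-batches-bin
cd ../..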
2. Generating LMDB files from the CIFAR-10 dataset
The script for this step is ==./examples/cifar10/create_cifar10.sh==.
Looking at the commands inside it, there is a problem: because we installed the precompiled packages rather than building from source, all of the CIFAR-10 related tools live under /usr/bin, so the paths in the original examples/cifar10 script do not work and need to be changed.
The executables it refers to, such as convert_cifar_data.bin under the local build directory, simply do not exist in this setup.
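You can confirm where the packaged tools actually live, for example:
which convert_cifar_data compute_image_mean caffe
# all three should resolve to /usr/bin/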
Change create_cifar10.sh into the following script.
#!/usr/bin/env sh
# This script converts the cifar data into leveldb format.
set -e
EXAMPLE=examples/cifar10
DATA=data/cifar10
DBTYPE=lmdb
echo "Creating $DBTYPE..."
rm -rf $EXAMPLE/cifar10_train_$DBTYPE $EXAMPLE/cifar10_test_$DBTYPE
/usr/bin/convert_cifar_data $DATA $EXAMPLE $DBTYPE
echo "Computing image mean..."
/usr/bin/compute_image_mean -backend=$DBTYPE \
$EXAMPLE/cifar10_train_$DBTYPE $EXAMPLE/mean.binaryproto
echo "Done."
Only the locations of the two executables were changed.
Then run the script to generate the databases, roughly as follows.
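A minimal way to run it, again from the working directory that contains examples/ and data/:
chmod +x examples/cifar10/create_cifar10.sh
./examples/cifar10/create_cifar10.sh
ls examples/cifar10   # should now list mean.binaryproto and the two lmdb directories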
In the output, mean.binaryproto is generated, which the later steps depend on.
The cifar10_test_lmdb and cifar10_train_lmdb databases are also created at this point, with nothing extra left over.
3. Configuring the CIFAR-10 training files
Here I build a small network of my own: two 5x5 convolution layers with 32 outputs each, each followed by a ReLU, then a fully connected layer with 10 outputs, trained with a softmax loss and evaluated with an accuracy layer.
The concrete definition (two_conv_train_test.prototxt) is as follows.
name: "two_conv"
layer {
name: "cifar"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
mean_file: "examples/cifar10/mean.binaryproto"
}
data_param {
source: "examples/cifar10/cifar10_train_lmdb"
batch_size: 100
backend: LMDB
}
}
layer {
name: "cifar"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
mean_file: "examples/cifar10/mean.binaryproto"
}
data_param {
source: "examples/cifar10/cifar10_test_lmdb"
batch_size: 100
backend: LMDB
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 32
kernel_size: 5
stride: 1
weight_filler {
type: "gaussian"
std: 0.0001
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "conv1"
top: "conv1"
}
layer {
name: "conv2"
type: "Convolution"
bottom: "conv1"
top: "conv2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 32
kernel_size: 5
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu2"
type: "ReLU"
bottom: "conv2"
top: "conv2"
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "conv2"
top: "ip1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 10
weight_filler {
type: "gaussian"
std: 0.1
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "accuracy"
type: "Accuracy"
bottom: "ip1"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "ip1"
bottom: "label"
top: "loss"
}
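Before touching the solver, you can sanity-check that this net definition parses and runs by timing it with the packaged caffe tool (this assumes the two lmdb databases and mean.binaryproto from the previous step are already in place):
/usr/bin/caffe time --model=examples/cifar10/two_conv_train_test.prototxt --iterations=10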
Next, for the training process, two_conv_solver.prototxt also needs to be modified, otherwise training will fail.
The modified file is shown below.
# The train/test net protocol buffer definition
net: "examples/cifar10/two_conv_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of CIFAR-10, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.001
momentum: 0.9
weight_decay: 0.004
# The learning rate policy
lr_policy: "fixed"
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 1000
snapshot_prefix: "examples/cifar10/two_conv"
# solver mode: CPU or GPU
solver_mode: CPU
We also need the launch script two_conv_train.sh; a sketch of it is given below.
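A minimal sketch of two_conv_train.sh using the packaged binary (the essential part is pointing /usr/bin/caffe at the solver defined above):
#!/usr/bin/env sh
set -e
# train the two-conv network with the solver above; extra arguments are passed through to caffe
/usr/bin/caffe train --solver=examples/cifar10/two_conv_solver.prototxt $@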
Finally, start the training.
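For example, from the working directory:
chmod +x examples/cifar10/two_conv_train.sh
./examples/cifar10/two_conv_train.sh
# with snapshot: 1000 and the snapshot_prefix above, weights such as
# examples/cifar10/two_conv_iter_1000.caffemodel are written every 1000 iterations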