Caffe study notes 5 -- Alex's CIFAR-10 tutorial, Caffe style

This is the third example in the Examples section of the official Caffe site: http://caffe.berkeleyvision.org/gathered/examples/cifar10.html

This example reproduces the results of Alex Krizhevsky's cuda-convnet; the model definition, parameters, and training procedure all follow the cuda-convnet settings.

Dataset description: CIFAR-10 contains 60,000 32x32 color images in 10 classes, with 6,000 images per class: 50,000 training images and 10,000 test images. The classes are: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.

Now on to training.

1. Fetch the data:

Change into the Caffe root directory and run:

./data/cifar10/get_cifar10.sh
./examples/cifar10/create_cifar10.sh

Let's look at what these two scripts do.

get_cifar10.sh: downloads the data

#!/usr/bin/env sh
# This script downloads the CIFAR10 (binary version) data and unzips it.

DIR="$( cd "$(dirname "$0")" ; pwd -P )"
cd $DIR

echo "Downloading..."

wget --no-check-certificate http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz  # download the data

echo "Unzipping..."

tar -xf cifar-10-binary.tar.gz && rm -f cifar-10-binary.tar.gz  # unpack the archive, then delete it
mv cifar-10-batches-bin/* . && rm -rf cifar-10-batches-bin

# Creation is split out because leveldb sometimes causes segfault
# and needs to be re-created.

echo "Done."
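After the download, data/cifar10 contains the binary batch files (data_batch_1.bin through data_batch_5.bin, plus test_batch.bin). Each record in these files is one label byte followed by 3072 pixel bytes (a 3x32x32 RGB image, stored channel by channel). As a sanity check, here is a minimal Python sketch that parses one batch file; it assumes you run it from the Caffe root:

import numpy as np

# Each CIFAR-10 binary record: 1 label byte + 3072 pixel bytes (3x32x32, RGB, planar).
raw = np.fromfile('data/cifar10/data_batch_1.bin', dtype=np.uint8)
records = raw.reshape(-1, 3073)
labels = records[:, 0]                          # class indices 0-9
images = records[:, 1:].reshape(-1, 3, 32, 32)
print(labels.shape, images.shape)               # (10000,) (10000, 3, 32, 32)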

create_cifar10.sh: converts the data into LMDB and computes the image mean. Running it produces the train and test LMDB databases plus the mean file mean.binaryproto.

#!/usr/bin/env sh
# This script converts the cifar data into lmdb (or leveldb) format.

EXAMPLE=examples/cifar10
DATA=data/cifar10
DBTYPE=lmdb  # convert to lmdb

echo "Creating $DBTYPE..."

rm -rf $EXAMPLE/cifar10_train_$DBTYPE $EXAMPLE/cifar10_test_$DBTYPE

# Call convert_cifar_data.bin (built from examples/cifar10/convert_cifar_data.cpp);
# it takes three arguments: the input data directory, the output directory, and the
# DB format (lmdb or leveldb).
./build/examples/cifar10/convert_cifar_data.bin $DATA $EXAMPLE $DBTYPE

echo "Computing image mean..."

# Call compute_image_mean (built from tools/compute_image_mean.cpp) to compute the
# mean image over the training set; it takes the DB backend, the converted training
# DB, and the output binaryproto file for the mean.
./build/tools/compute_image_mean -backend=$DBTYPE \
  $EXAMPLE/cifar10_train_$DBTYPE $EXAMPLE/mean.binaryproto

echo "Done."
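To verify the generated mean file, you can parse mean.binaryproto with pycaffe. A minimal sketch, assuming pycaffe is built and on your PYTHONPATH and you run from the Caffe root:

import caffe
import numpy as np

# mean.binaryproto stores a serialized BlobProto; convert it to a numpy array.
blob = caffe.proto.caffe_pb2.BlobProto()
with open('examples/cifar10/mean.binaryproto', 'rb') as f:
    blob.ParseFromString(f.read())
mean = caffe.io.blobproto_to_array(blob)[0]  # shape (3, 32, 32)
print(mean.shape, mean.mean())               # per-pixel mean over the training set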

2. The model:

The network definition:

name: "CIFAR10_quick_test"
input: "data"
input_shape {
  dim: 1  # num (batch size)
  dim: 3  # number of channels
  dim: 32 # image height and width
  dim: 32
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1 # learning-rate multiplier for the weights
  }
  param {
    lr_mult: 2 # learning-rate multiplier for the bias
  }
  convolution_param {
    num_output: 32
    pad: 2 # pad the input with 2 pixels on each side
    kernel_size: 5
    stride: 1
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX  #Max Pooling
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "pool1"
  top: "pool1"
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: AVE # average pooling
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 64
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layer {
  name: "relu3"
  type: "ReLU" # 使用ReLU激励函数,这里需要注意的是,本层的bottom和top都是conv3.
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "pool3"
  type: "Pooling"
  bottom: "conv3"
  top: "pool3"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool3"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 64
  }
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}
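To see how the 3x32x32 input shrinks as it flows through these layers, note that Caffe computes a convolution output size as floor((n + 2*pad - kernel)/stride) + 1, while pooling rounds up (ceil) instead of down. A small Python sketch tracing the shapes of the layers above:

import math

def conv_out(n, kernel, pad, stride):
    # convolution: floor((n + 2*pad - kernel) / stride) + 1
    return (n + 2 * pad - kernel) // stride + 1

def pool_out(n, kernel, stride, pad=0):
    # pooling rounds up: ceil((n + 2*pad - kernel) / stride) + 1
    return int(math.ceil(float(n + 2 * pad - kernel) / stride)) + 1

n = 32                    # data:  3 x 32 x 32
n = conv_out(n, 5, 2, 1)  # conv1: 32 x 32 x 32
n = pool_out(n, 3, 2)     # pool1: 32 x 16 x 16
n = conv_out(n, 5, 2, 1)  # conv2: 32 x 16 x 16
n = pool_out(n, 3, 2)     # pool2: 32 x 8 x 8
n = conv_out(n, 5, 2, 1)  # conv3: 64 x 8 x 8
n = pool_out(n, 3, 2)     # pool3: 64 x 4 x 4
print(64 * n * n)         # 1024 inputs feed into ip1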

The solver definition:

# reduce the learning rate after 8 epochs (4000 iters) by a factor of 10

# The train/test net protocol buffer definition
net: "examples/cifar10/cifar10_quick_train_test.prototxt"   #使用网络模型
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100 # number of test iterations
# Carry out testing every 500 training iterations.
test_interval: 500 # test every 500 training iterations
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.001 # base learning rate
momentum: 0.9
weight_decay: 0.004
# The learning rate policy
lr_policy: "fixed"
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 4000
# snapshot intermediate results
snapshot: 4000 # save intermediate snapshots
snapshot_format: HDF5 # snapshot file format
snapshot_prefix: "examples/cifar10/cifar10_quick"
# solver mode: CPU or GPU
solver_mode: GPU
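Note that each parameter's effective learning rate is base_lr multiplied by its lr_mult, so with base_lr 0.001 the weights above learn at 0.001 and the biases at 0.002. Besides the train_quick.sh script used in the next step, the solver can also be driven from pycaffe; a minimal sketch, assuming pycaffe is built and the LMDBs from step 1 exist:

import caffe

caffe.set_mode_gpu()  # or caffe.set_mode_cpu() to override solver_mode
solver = caffe.get_solver('examples/cifar10/cifar10_quick_solver.prototxt')
solver.solve()        # runs the full max_iter schedule, testing every test_interval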

On the effect of the learning rate, Simon Haykin writes in Neural Networks and Learning Machines (Chinese edition, p. 86): back-propagation provides an approximation to the trajectory in weight space computed by the method of steepest descent. The smaller the learning-rate parameter, the smaller the changes to the network's synaptic weights from one iteration to the next, and the smoother the trajectory in weight space; this improvement, however, comes at the cost of slower learning. If, on the other hand, the learning rate is made too large to speed up learning, the resulting large weight changes can make the network unstable.
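Haykin's point is easy to see on a toy objective. A hypothetical one-dimensional sketch, minimizing f(w) = w^2 (gradient 2w) by steepest descent: any step size below 1.0 converges, while anything above it makes the iterates oscillate and grow.

def descend(lr, w=1.0, steps=5):
    # steepest descent on f(w) = w^2, whose gradient is 2*w
    for _ in range(steps):
        w -= lr * 2 * w
    return w

print(descend(0.1))  # 0.32768  -- small steps: smooth but slow convergence
print(descend(0.4))  # 0.00032  -- larger steps: faster convergence
print(descend(1.1))  # -2.48832 -- too large: oscillates and diverges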


3. Training

Run

./examples/cifar10/train_quick.sh

to train the network. The console output is similar in form to the MNIST example, so it is not repeated here.

The final test accuracy is about 75%.
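Once training finishes, the snapshot can be loaded together with the deploy definition shown in section 2 (cifar10_quick.prototxt in the Caffe tree) to classify single images. A hypothetical sketch: the snapshot file name is an assumption that depends on your solver settings (with snapshot_format: HDF5 and max_iter: 4000 it should be cifar10_quick_iter_4000.caffemodel.h5; check what training actually wrote), and the random image stands in for a real, mean-subtracted one.

import numpy as np
import caffe

net = caffe.Net('examples/cifar10/cifar10_quick.prototxt',                 # deploy net from section 2
                'examples/cifar10/cifar10_quick_iter_4000.caffemodel.h5',  # trained weights (assumed name)
                caffe.TEST)

image = np.random.rand(3, 32, 32).astype(np.float32)  # placeholder for a real image
net.blobs['data'].data[0] = image
out = net.forward()
print(out['prob'][0].argmax())  # predicted class index, 0-9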




