Training VGG on ImageNet
Some old notes of mine; might as well post them here.
Preparing the data
For the official download page, see here.
Training set: ILSVRC2012_img_train.tar
Validation set: ILSVRC2012_img_val.tar
Extracting the data
sudo tar -xvf ILSVRC2012_img_train.tar -C ./train
sudo tar -xvf ILSVRC2012_img_val.tar -C ./val
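Note that tar -C does not create its target directory, so if ./train and ./val do not exist yet, create them before running the commands above:
sudo mkdir -p ./train ./val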
The val archive extracts to all 50,000 validation images in one flat directory, about 6.3 GB.
The train archive extracts to 1000 tar files, one per class, 138 GB in total. Each of these 1000 sub-tars must be extracted again; an extraction script unzip.sh is given below:
dir=/home/satisfie/imagenet/train  # satisfie is my username
cd $dir                            # work inside the train directory
for x in `ls *.tar`
do
  filename=`basename $x .tar`      # mind the spaces
  mkdir $filename
  tar -xvf $x -C ./$filename
done
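Once the loop finishes, a quick sanity check (a sketch; the expected counts match the file lists described in the next section):
ls /home/satisfie/imagenet/train | wc -l                   # expect 1000 class folders
find /home/satisfie/imagenet/train -name '*.JPEG' | wc -l  # expect 1281167 images
ls /home/satisfie/imagenet/val | wc -l                     # expect 50000 images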
With an i7 6700K and my 500 GB SSD, extraction is very fast. At this point the raw data is ready:
/home/satisfie/imagenet/train: 1000 folders, each containing the JPEG images of one class
/home/satisfie/imagenet/val: the 50,000 validation images
Next, download the labels and the other metadata.
Downloading the auxiliary data
From the Caffe root directory, run ./data/ilsvrc12/get_ilsvrc_aux.sh to download the auxiliary files, including:
det_synset_words.txt
synset_words.txt — the 1000 class folder names together with the real object names, e.g. "n01440764 tench Tinca tinca"; each is treated as one class during training.
synsets.txt — the 1000 class folder names, e.g. "n01440764".
train.txt — the name and label of every training image, e.g. "n01440764/n01440764_10026.JPEG 0"; 1,281,167 images in total.
val.txt — same format, 50,000 images in total, e.g. "ILSVRC2012_val_00000001.JPEG 65".
test.txt — same format, for the test set, 100,000 images in total.
imagenet_mean.binaryproto — the per-channel image mean.
imagenet.bet.pickle
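As a quick illustration of how the labels tie together (my assumption here: a label in train.txt/val.txt is a 0-based index into the line order of synsets.txt and synset_words.txt, which get_ilsvrc_aux.sh places under data/ilsvrc12/), label 65 from the val.txt example can be looked up like this:
# label 65 is 0-based, sed line numbers are 1-based, so print line 66
sed -n '66p' data/ilsvrc12/synset_words.txt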
Training the model
Preparing the training data
Converting to the LMDB database format takes a lot of extra disk space and does not support operations like shuffling, so here the original images are read directly with an ImageData layer; see the prototxt below. In train_new.txt every image name is prefixed with its absolute path so the images can be found. A sed command does the job:
sed 's/^/\/home\/satisfie\/imagenet\/val\/&/g' val.txt >val_new.txt
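The training list gets the same treatment, only with the train directory as the prefix:
sed 's/^/\/home\/satisfie\/imagenet\/train\/&/g' train.txt > train_new.txt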
VGG_train_val.prototxt
name: "VGG_ILSVRC_16_layers"
layer {
name: "data"
type: "ImageData"
include {
phase: TRAIN
}
transform_param {
#crop_size: 224
mean_value: 104
mean_value: 117
mean_value: 123
mirror: true
}
image_data_param {
source: "/home/satisfie/imagenet/train_new.txt"
batch_size: 8
new_height: 224
new_width: 224
}
top: "data"
top: "label"
}
layer {
name: "data"
type: "ImageData"
include {
phase: TEST
}
transform_param {
#crop_size: 224
mean_value: 104
mean_value: 117
mean_value: 123
mirror: false
}
image_data_param {
source: "/home/satisfie/imagenet/val_new.txt"
batch_size: 4
new_height: 224
new_width: 224
}
top: "data"
top: "label"
}
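# Note on the two data layers above: crop_size is commented out, so
# new_height/new_width resize every image to 224x224 directly, and the
# mean_value triplet (104, 117, 123) is the per-channel BGR mean commonly
# used in place of imagenet_mean.binaryproto.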
layer {
bottom: "data"
top: "conv1_1"
name: "conv1_1"
type: "Convolution"
convolution_param {
num_output: 64
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
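# Convention in most layers of this file: the first param block governs the
# weights (lr_mult: 1, decay_mult: 1), the second the bias (lr_mult: 2,
# decay_mult: 0), i.e. biases learn at twice the base rate and skip weight decay.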
layer {
bottom: "conv1_1"
top: "conv1_1"
name: "relu1_1"
type: "ReLU"
}
layer {
bottom: "conv1_1"
top: "conv1_2"
name: "conv1_2"
type: "Convolution"
convolution_param {
num_output: 64
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
layer {
bottom: "conv1_2"
top: "conv1_2"
name: "relu1_2"
type: "ReLU"
}
layer {
bottom: "conv1_2"
top: "pool1"
name: "pool1"
type: "Pooling"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
bottom: "pool1"
top: "conv2_1"
name: "conv2_1"
type: "Convolution"
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 0
}
param {
lr_mult: 0
}
}
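# Unlike the other layers, conv2_1 sets lr_mult: 0 in both param blocks,
# which freezes its weights and bias (no updates during training).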
layer {
bottom: "conv2_1"
top: "conv2_1"
name: "relu2_1"
type: "ReLU"
}
layer {
bottom: "conv2_1"
top: "conv2_2"
name: "conv2_2"
type: "Convolution"
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
layer {
bottom: "conv2_2"
top: "conv2_2"
name: "relu2_2"
type: "ReLU"
}
layer {
bottom: "conv2_2"
top: "pool2"
name: "pool2"
type: "Pooling"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
bottom: "pool2"
top: "conv3_1"
name: "conv3_1"
type: "Convolution"
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
layer {
bottom: "conv3_1"
top: "conv3_1"
name: "relu3_1"
type: "ReLU"
}
layer {
bottom: "conv3_1"
top: "conv3_2"
name: "conv3_2"
type: "Convolution"
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
layer {
bottom: "conv3_2"
top: "conv3_2"
name: "relu3_2"
type: "ReLU"
}
layer {
bottom: "conv3_2"
top: "conv3_3"
name: "conv3_3"
type: "Convolution"
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
layer {
bottom: "conv3_3"
top: "conv3_3"
name: "relu3_3"
type: "ReLU"
}
layer {
bottom: "conv3_3"
top: "pool3"
name: "pool3"
type: "Pooling"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
bottom: "pool3"
top: "conv4_1"
name: "conv4_1"
type: "Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
layer {
bottom: "conv4_1"
top: "conv4_1"
name: "relu4_1"
type: "ReLU"
}
layer {
bottom: "conv4_1"
top: "conv4_2"
name: "conv4_2"
type: "Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
layer {
bottom: "conv4_2"
top: "conv4_2"
name: "relu4_2"
type: "ReLU"
}
layer {
bottom: "conv4_2"
top: "conv4_3"
name: "conv4_3"
type: "Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
layer {
bottom: "conv4_3"
top: "conv4_3"
name: "relu4_3"
type: "ReLU"
}
layer {
bottom: "conv4_3"
top: "pool4"
name: "pool4"
type: "Pooling"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
bottom: "pool4"
top: "conv5_1"
name: "conv5_1"
type: "Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
layer {
bottom: "conv5_1"
top: "conv5_1"
name: "relu5_1"
type: "ReLU"
}
layer {
bottom: "conv5_1"
top: "conv5_2"
name: "conv5_2"
type: "Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
layer {
bottom: "conv5_2"
top: "conv5_2"
name: "relu5_2"
type: "ReLU"
}
layer {
bottom: "conv5_2"
top: "conv5_3"
name: "conv5_3"
type: "Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
layer {
bottom: "conv5_3"
top: "conv5_3"
name: "relu5_3"
type: "ReLU"
}
layer {
bottom: "conv5_3"
top: "pool5"
name: "pool5"
type: "Pooling"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
bottom: "pool5"
top: "fc6"
name: "fc6"
type: "InnerProduct"
inner_product_param {
num_output: 4096
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
layer {
bottom: "fc6"
top: "fc6"
name: "relu6"
type: "ReLU"
}
layer {
bottom: "fc6"
top: "fc6"
name: "drop6"
type: "Dropout"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
bottom: "fc6"
top: "fc7"
name: "fc7"
type: "InnerProduct"
inner_product_param {
num_output: 4096
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
layer {
bottom: "fc7"
top: "fc7"
name: "relu7"
type: "ReLU"
}
layer {
bottom: "fc7"
top: "fc7"
name: "drop7"
type: "Dropout"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
name: "fc8"
bottom: "fc7"
top: "fc8"
type: "InnerProduct"
inner_product_param {
num_output: 1000
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "fc8"
bottom: "label"
top: "loss/loss"
}
layer {
name: "accuracy/top1"
type: "Accuracy"
bottom: "fc8"
bottom: "label"
top: "accuracy@1"
include: { phase: TEST }
accuracy_param {
top_k: 1
}
}
layer {
name: "accuracy/top5"
type: "Accuracy"
bottom: "fc8"
bottom: "label"
top: "accuracy@5"
include: { phase: TEST }
accuracy_param {
top_k: 5
}
}
solver.prototxt
net: "models/vgg/train_val.prototxt"
test_iter: 10000
test_interval: 40000
test_initialization: false
display: 200
base_lr: 0.0001
lr_policy: "step"
stepsize: 320000
gamma: 0.96
max_iter: 10000000
momentum: 0.9
weight_decay: 0.0005
snapshot: 800000
snapshot_prefix: "models/vgg/vgg"
solver_mode: GPU
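A few sanity checks on these numbers (my own arithmetic, using the batch sizes from the prototxt above, 8 for TRAIN and 4 for TEST): test_iter: 10000 evaluates only 40,000 of the 50,000 validation images per test pass; 12,500 iterations would cover the whole set.
echo $(( 1281167 / 8 ))  # full training batches per epoch (prints 160145)
echo $(( 10000 * 4 ))    # images per test pass: 40000, less than the 50000 in val
echo $(( 50000 / 4 ))    # test_iter needed to cover the full val set: 12500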
Finetuning
The model is big: with the 4 GB of VRAM on a GTX 980, the batch size can only be set as small as 8 or 16. A model this large really needs a multi-GPU server, so instead I finetune directly from the released model (VGG_ILSVRC_16_layers.caffemodel, with its companion VGG_ILSVRC_16_layers_deploy.prototxt). The launch script:
#!/usr/bin/env sh
set -e
TOOLS=./build/tools
GLOG_logtostderr=0 GLOG_log_dir=models/vgg/Log/ \
$TOOLS/caffe train \
--solver=models/vgg/solver.prototxt \
--weights=models/vgg/VGG_ILSVRC_16_layers.caffemodel
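If training gets interrupted, Caffe can resume from a solver snapshot. A hedged example, assuming the snapshot settings above have produced the file named below:
./build/tools/caffe train \
    --solver=models/vgg/solver.prototxt \
    --snapshot=models/vgg/vgg_iter_800000.solverstate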