Notes on Caffe train / test / solver / deploy prototxt Files (Part 3)

Reposted from: http://blog.csdn.net/lg1259156776/article/details/52550865

1. A neural network is trained by minimizing a loss function, so during training the last layer is a loss layer (LOSS). During testing we judge the network by its accuracy, so there the last layer is an accuracy layer (ACCURACY).

But when we actually use the trained model, what we need is the network's output for a given input. For a classification problem we want the classification result, i.e. the final layer should output class probabilities; the LOSS and ACCURACY layers used in the training and testing phases are no longer needed.

The two structures, the training-time network and the deployment-time network, can be visualized with $CAFFE_ROOT/python/draw_net.py applied to $CAFFE_ROOT/models/bvlc_reference_caffenet/train_val.prototxt and $CAFFE_ROOT/models/bvlc_reference_caffenet/deploy.prototxt, respectively.
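For example (run from $CAFFE_ROOT; draw_net.py takes the network definition and an output image path, and it needs the pydot package installed; the output filenames here are only illustrative):

python python/draw_net.py models/bvlc_reference_caffenet/train_val.prototxt caffenet_train_val.png
python python/draw_net.py models/bvlc_reference_caffenet/deploy.prototxt caffenet_deploy.png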

 

We normally put the train and test networks in the same .prototxt, whose data layers must specify the data source; the deployment-time .prototxt instead only needs to declare the dimensions of the input image (batch size, channels, height, width). Compare the data layers of $CAFFE_ROOT/models/bvlc_reference_caffenet/train_val.prototxt and $CAFFE_ROOT/models/bvlc_reference_caffenet/deploy.prototxt.
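For reference, the deploy file replaces the Data layers with a bare input declaration; the CaffeNet deploy.prototxt of that era begins like this (a batch of 10 images, 3 channels, 227×227 crops):

name: "CaffeNet"
input: "data"
input_dim: 10
input_dim: 3
input_dim: 227
input_dim: 227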

During training, solver.prototxt refers to train_val.prototxt:

./build/tools/caffe train -solver ./models/bvlc_reference_caffenet/solver.prototxt
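The two files are linked by the net field at the top of solver.prototxt:

net: "models/bvlc_reference_caffenet/train_val.prototxt"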

To extract features with the network trained above, the model definition used is deploy.prototxt:

./build/tools/extract_features.bin models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel models/bvlc_reference_caffenet/deploy.prototxt
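As written, this command is incomplete: extract_features.bin also expects the name(s) of the blob(s) to extract, the output dataset path(s), the number of mini-batches, and the DB backend. A fuller invocation might look like this (the blob name fc7, the output path, and the batch count are illustrative):

./build/tools/extract_features.bin models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel models/bvlc_reference_caffenet/deploy.prototxt fc7 examples/_temp/features 10 lmdb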

2.

*_train_test.prototxt: the network definition file used for training and testing.

*_deploy.prototxt: the model definition file used at deployment.

Writing the deploy.prototxt file: note that the type of the output layer changes, from SoftmaxWithLoss during training to Softmax at deployment. Also, to keep the training output and the deployment output apart, the output blob is named loss during training and prob at deployment.

deploy.prototxt code (the annotated train/test listing below marks what to delete or change; the resulting deploy output is shown after the listing):
 
name: "CIFAR10_quick"
 layer {               #该层去掉
  name: "cifar"
   type: "Data"
   top: "data"
   top: "label"
   include {
     phase: TRAIN
   }
   transform_param {
     mean_file: "examples/cifar10/mean.binaryproto"
   }
   data_param {
     source: "examples/cifar10/cifar10_train_lmdb"
     batch_size: 100
     backend: LMDB
   }
 }
 layer {             #该层去掉
  name: "cifar"
   type: "Data"
   top: "data"
   top: "label"
   include {
     phase: TEST
   }
   transform_param {
     mean_file: "examples/cifar10/mean.binaryproto"
   }
   data_param {
     source: "examples/cifar10/cifar10_test_lmdb"
     batch_size: 100
     backend: LMDB
   }
 }
 layer {                        #将下方的weight_filler、bias_filler全部删除
  name: "conv1"
   type: "Convolution"
   bottom: "data"
   top: "conv1"
   param {
     lr_mult: 1
   }
   param {
     lr_mult: 2
   }
   convolution_param {
     num_output: 32
     pad: 2
     kernel_size: 5
     stride: 1
     weight_filler {
       type: "gaussian"
       std: 0.0001
     }
     bias_filler {
       type: "constant"
     }
   }
 }
 layer {
   name: "pool1"
   type: "Pooling"
   bottom: "conv1"
   top: "pool1"
   pooling_param {
     pool: MAX
     kernel_size: 3
     stride: 2
   }
 }
 layer {
   name: "relu1"
   type: "ReLU"
   bottom: "pool1"
   top: "pool1"
 }
 layer {                         #weight_filler、bias_filler删除
  name: "conv2"
   type: "Convolution"
   bottom: "pool1"
   top: "conv2"
   param {
     lr_mult: 1
   }
   param {
     lr_mult: 2
   }
   convolution_param {
     num_output: 32
     pad: 2
     kernel_size: 5
     stride: 1
     weight_filler {
       type: "gaussian"
       std: 0.01
     }
     bias_filler {
       type: "constant"
     }
   }
 }
 layer {
   name: "relu2"
   type: "ReLU"
   bottom: "conv2"
   top: "conv2"
 }
 layer {
   name: "pool2"
   type: "Pooling"
   bottom: "conv2"
   top: "pool2"
   pooling_param {
     pool: AVE
     kernel_size: 3
     stride: 2
   }
 }
 layer {                         #weight_filler、bias_filler删除
  name: "conv3"
   type: "Convolution"
   bottom: "pool2"
   top: "conv3"
   param {
     lr_mult: 1
   }
   param {
     lr_mult: 2
   }
   convolution_param {
     num_output: 64
     pad: 2
     kernel_size: 5
     stride: 1
     weight_filler {
       type: "gaussian"
       std: 0.01
     }
     bias_filler {
       type: "constant"
     }
   }
 }
 layer {
   name: "relu3"
   type: "ReLU"
   bottom: "conv3"
   top: "conv3"
 }
 layer {
   name: "pool3"
   type: "Pooling"
   bottom: "conv3"
   top: "pool3"
   pooling_param {
     pool: AVE
     kernel_size: 3
     stride: 2
   }
 }
 layer {                       #weight_filler、bias_filler删除
  name: "ip1"
   type: "InnerProduct"
   bottom: "pool3"
   top: "ip1"
   param {
     lr_mult: 1
   }
   param {
     lr_mult: 2
   }
   inner_product_param {
     num_output: 64
     weight_filler {
       type: "gaussian"
       std: 0.1
     }
     bias_filler {
       type: "constant"
     }
   }
 }
 layer {                              # weight_filler、bias_filler删除
  name: "ip2"
   type: "InnerProduct"
   bottom: "ip1"
   top: "ip2"
   param {
     lr_mult: 1
   }
   param {
     lr_mult: 2
   }
   inner_product_param {
     num_output: 10
     weight_filler {
       type: "gaussian"
       std: 0.1
     }
     bias_filler {
       type: "constant"
     }
   }
 }
 layer {                                  #将该层删除
  name: "accuracy"
   type: "Accuracy"
   bottom: "ip2"
   bottom: "label"
   top: "accuracy"
   include {
     phase: TEST
   }
 }
 layer {                                 #修改
  name: "loss"       #---loss  修改为  prob
   type: "SoftmaxWithLoss"             # SoftmaxWithLoss 修改为 softmax
   bottom: "ip2"
   bottom: "label"          #去掉
  top: "loss"
 }
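After these edits, and with the two Data layers replaced by an input declaration, the deploy file begins and ends as follows (a sketch assuming a single 3×32×32 CIFAR-10 image as input; the intermediate layers stay as above, minus the fillers):

name: "CIFAR10_quick"
input: "data"
input_dim: 1
input_dim: 3
input_dim: 32
input_dim: 32
# ... conv / pool / ip layers as above, with weight_filler and bias_filler removed ...
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}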


 
