Caffe Notes (3): Interpreting the LeNet-5 Network Definition Parameters

Reference blogs:
https://blog.csdn.net/la_fe_/article/details/84937122
https://blog.csdn.net/cyh_24/article/details/51537709

lenet_solver parameters explained

# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01         # base learning rate
momentum: 0.9         # momentum
weight_decay: 0.0005  # weight-decay (L2 regularization) term
# The learning rate policy: "inv" decays the rate over training as
# base_lr * (1 + gamma * iter) ^ (-power)
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100          # print training status to the terminal every 100 iterations
# The maximum number of iterations
max_iter: 10000       # maximum number of training iterations
# snapshot intermediate results
snapshot: 5000        # save a snapshot of the weights every 5000 iterations
snapshot_prefix: "examples/mnist/lenet"   # path prefix for saved snapshot files
solver_mode: GPU
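
The "inv" policy above shrinks the learning rate smoothly as training progresses. A minimal sketch of the schedule, using the formula Caffe applies for lr_policy: "inv" with the gamma and power values from this solver file:

```python
# Caffe's "inv" learning-rate policy:
#   lr(iter) = base_lr * (1 + gamma * iter) ** (-power)
base_lr = 0.01
gamma = 0.0001
power = 0.75

def inv_lr(iteration):
    """Learning rate at a given iteration under the "inv" policy."""
    return base_lr * (1 + gamma * iteration) ** (-power)

print(inv_lr(0))      # 0.01 -- starts at base_lr
print(inv_lr(10000))  # decayed rate at the final iteration (~0.0059)
```

At max_iter = 10000 the multiplier is (1 + 0.0001 * 10000)^(-0.75) = 2^(-0.75), so the rate has dropped to roughly 59% of base_lr.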

LeNet-5 network model parameters explained

name: "LeNet"          # name of the whole network
layer {                # data layer for the training phase
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {     # input normalization
    scale: 0.00390625   # = 1/256, maps pixel values from [0, 255] into [0, 1)
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"  # location of the training set
    batch_size: 64    # training batch size
    backend: LMDB     # data backend format
  }
}
}
layer {                     # data layer for the test phase
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
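
The scale factor in both data layers is exactly 1/256, so every raw pixel value in [0, 255] lands just inside [0, 1). A quick check:

```python
# transform_param { scale: 0.00390625 } multiplies every input pixel by 1/256
scale = 0.00390625
assert scale == 1 / 256

pixel_max = 255
print(pixel_max * scale)  # 0.99609375 -- the brightest pixel stays below 1.0
```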
layer {            # convolution layer
  name: "conv1"
  type: "Convolution"
  bottom: "data"   # takes the "data" blob as input
  top: "conv1"     # outputs the "conv1" blob
  param {
    lr_mult: 1     # learning-rate multiplier for the weights
  }
  param {
    lr_mult: 2     # learning-rate multiplier for the bias
  }
  convolution_param {
    num_output: 20   # number of output feature maps (channels)
    kernel_size: 5   # 5x5 convolution kernels
    stride: 1        # stride of 1
    weight_filler {
      type: "xavier"   # initial weight scale set automatically from the
                       # number of input/output neurons
    }
    bias_filler {
      type: "constant"   # initialize biases to a constant (default 0)
    }
  }
}
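
The "xavier" filler draws weights from a uniform distribution whose range shrinks as the layer's fan-in grows, keeping activation variance roughly constant across layers. A hedged numpy sketch of Caffe's default variant (which scales by fan-in only; the function name is mine):

```python
import numpy as np

def xavier_fill(shape, rng=np.random.default_rng(0)):
    """Sketch of Caffe's default "xavier" filler: uniform on
    [-sqrt(3/fan_in), +sqrt(3/fan_in)], fan_in = inputs per output neuron."""
    fan_in = int(np.prod(shape[1:]))  # channels * kernel_h * kernel_w
    limit = np.sqrt(3.0 / fan_in)
    return rng.uniform(-limit, limit, size=shape)

# conv1 weights: 20 output maps, 1 input channel, 5x5 kernels -> fan_in = 25
w = xavier_fill((20, 1, 5, 5))
print(w.shape)  # (20, 1, 5, 5), every entry within +/- sqrt(3/25)
```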
layer {     # pooling layer
  name: "pool1"
  type: "Pooling"  
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX       # max pooling
    kernel_size: 2
    stride: 2
  }
}
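
With the parameters above, the spatial sizes can be worked out by hand: a convolution outputs (input + 2*pad - kernel)/stride + 1, and this pooling layer halves each dimension. A small sketch for a 28x28 MNIST image (note: Caffe actually rounds pooling sizes up rather than down, but the two agree here because 24 divides evenly):

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Output spatial size of a convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride):
    """Output spatial size of a pooling layer (floor rounding, no padding)."""
    return (size - kernel) // stride + 1

s = 28                      # MNIST input: 28x28
s = conv_out(s, kernel=5)   # conv1: (28 - 5)/1 + 1 = 24
s = pool_out(s, 2, 2)       # pool1: (24 - 2)/2 + 1 = 12
print(s)  # 12 -- pool1 emits 20 feature maps of 12x12
```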
# (the conv2 and pool2 layers, analogous to conv1 and pool1, are omitted here)
layer {             # fully connected layer
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"   # input
  top: "ip1"        # output
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500   # number of output neurons
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {  # loss layer
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
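
SoftmaxWithLoss fuses two steps: a softmax over the class scores coming out of ip2, and a multinomial logistic (cross-entropy) loss against the ground-truth label, averaged over the batch. A minimal numpy sketch of the forward computation:

```python
import numpy as np

def softmax_with_loss(scores, labels):
    """Forward pass of softmax + cross-entropy loss, averaged over a batch.
    scores: (batch, num_classes) raw scores (e.g. the "ip2" blob);
    labels: (batch,) integer class labels."""
    shifted = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

scores = np.array([[2.0, 0.5, 0.1], [0.2, 3.0, 0.3]])
labels = np.array([0, 1])
print(softmax_with_loss(scores, labels))  # small loss: both samples predicted correctly
```

Subtracting the row maximum before exponentiating does not change the softmax result but prevents overflow for large scores, which is why Caffe's implementation does the same.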

Visualizing the network model:

Caffe ships a Python script, draw_net.py, that renders a prototxt as an image. Run:

python draw_net.py lenet.prototxt a.png --rankdir=BT