Dissecting the Caffe Source Code: Net --- The NetParameter Message

The previous articles in this series analyzed the Blob and Layer source code in Caffe, introduced caffe.proto (the file in which Caffe's parameter data structures are defined), and showed how each Layer is registered with Caffe. This article moves on to the Net layer.
Before analyzing Net itself, we first need to understand its input parameter, NetParameter.

NetParameter

Like the other parameter data structures, NetParameter is defined in the caffe.proto file. Its overall structure is as follows:

message NetParameter {
  optional string name = 1;  // the network name
  repeated string input = 3;  // names of the network's input blobs; a net may have several inputs
  repeated BlobShape input_shape = 8;  // shape (dimension) information of each input blob
  // Dimension information used by old versions; replaced by input_shape in
  // current versions.
  repeated int32 input_dim = 4;
  // Whether to force every layer in the network to run the backward pass.
  // If false, whether a layer runs backward is inferred from the network
  // structure and the learning rates.
  optional bool force_backward = 5 [default = false];
  // The network state: phase, level, and stage. Some layers can be filtered
  // in or out by matching the rules in their include/exclude fields against
  // this state.
  optional NetState state = 6;
  // Whether to print debugging information while running Net::Forward,
  // Net::Backward, and Net::Update.
  optional bool debug_info = 7 [default = false];
  repeated LayerParameter layer = 100;  // the parameters of each layer in the network
  // Layer parameters in the old (V1) format; deprecated, use layer instead.
  repeated V1LayerParameter layers = 2;
}

The NetParameter definition itself is fairly compact; most of the information lives in the repeated layer field, whose LayerParameter entries carry the parameters of every known layer type.
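Before turning to LayerParameter, here is a minimal prototxt fragment showing the top-level fields in use; the net name and shape values are invented for illustration. It relies on the old-style net-level inputs, which newer files express with an Input layer, as in the example later in this article:

name: "ExampleNet"
input: "data"
input_shape {
  dim: 1
  dim: 3
  dim: 224
  dim: 224
}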

LayerParameter

The LayerParameter message, shown below, defines the parameter data structure shared by all layers:

message LayerParameter {
  optional string name = 1; // the layer name
  optional string type = 2; // the layer type
  repeated string bottom = 3; // the name of each bottom blob
  repeated string top = 4; // the name of each top blob

  // The train / test phase for computation.
  optional Phase phase = 10;

  // The amount of weight to assign each top blob in the objective.
  // Each layer assigns a default value, usually of either 0 or 1,
  // to each top blob.
  repeated float loss_weight = 5;

  // Specifies training parameters (multipliers on global learning constants,
  // and the name and other settings used for weight sharing).
  repeated ParamSpec param = 6;

  // The blobs containing the numeric parameters of the layer.
  repeated BlobProto blobs = 7;

  // Specifies whether to backpropagate to each bottom. If unspecified,
  // Caffe will automatically infer whether each input needs backpropagation
  // to compute parameter gradients. If set to true for some inputs,
  // backpropagation to those inputs is forced; if set false for some inputs,
  // backpropagation to those inputs is skipped.
  //
  // The size must be either 0 or equal to the number of bottoms.
  repeated bool propagate_down = 11;

  // Rules controlling whether and when a layer is included in the network,
  // based on the current NetState.  You may specify a non-zero number of rules
  // to include OR exclude, but not both.  If no include or exclude rules are
  // specified, the layer is always included.  If the current NetState meets
  // ANY (i.e., one or more) of the specified rules, the layer is
  // included/excluded.
  repeated NetStateRule include = 8;
  repeated NetStateRule exclude = 9;

  // Parameters for data pre-processing.
  optional TransformationParameter transform_param = 100;

  // Parameters shared by loss layers.
  optional LossParameter loss_param = 101;

  // Layer type-specific parameters.
  //
  // Note: certain layers may have more than one computational engine
  // for their implementation. These layers include an Engine type and
  // engine parameter for selecting the implementation.
  // The default for the engine is set by the ENGINE switch at compile-time.
  optional AccuracyParameter accuracy_param = 102;
  optional ArgMaxParameter argmax_param = 103;
  optional BatchNormParameter batch_norm_param = 139;
  optional BiasParameter bias_param = 141;
  optional ClipParameter clip_param = 148;
  optional ConcatParameter concat_param = 104;
  optional ContrastiveLossParameter contrastive_loss_param = 105;
  optional ConvolutionParameter convolution_param = 106;
  optional CropParameter crop_param = 144;
  optional DataParameter data_param = 107;
  optional DropoutParameter dropout_param = 108;
  optional DummyDataParameter dummy_data_param = 109;
  optional EltwiseParameter eltwise_param = 110;
  optional ELUParameter elu_param = 140;
  optional EmbedParameter embed_param = 137;
  optional ExpParameter exp_param = 111;
  optional FlattenParameter flatten_param = 135;
  optional HDF5DataParameter hdf5_data_param = 112;
  optional HDF5OutputParameter hdf5_output_param = 113;
  optional HingeLossParameter hinge_loss_param = 114;
  optional ImageDataParameter image_data_param = 115;
  optional InfogainLossParameter infogain_loss_param = 116;
  optional InnerProductParameter inner_product_param = 117;
  optional InputParameter input_param = 143;
  optional LogParameter log_param = 134;
  optional LRNParameter lrn_param = 118;
  optional MemoryDataParameter memory_data_param = 119;
  optional MVNParameter mvn_param = 120;
  optional ParameterParameter parameter_param = 145;
  optional PoolingParameter pooling_param = 121;
  optional PowerParameter power_param = 122;
  optional PReLUParameter prelu_param = 131;
  optional PythonParameter python_param = 130;
  optional RecurrentParameter recurrent_param = 146;
  optional ReductionParameter reduction_param = 136;
  optional ReLUParameter relu_param = 123;
  optional ReshapeParameter reshape_param = 133;
  optional ScaleParameter scale_param = 142;
  optional SigmoidParameter sigmoid_param = 124;
  optional SoftmaxParameter softmax_param = 125;
  optional SPPParameter spp_param = 132;
  optional SliceParameter slice_param = 126;
  optional SwishParameter swish_param = 147;
  optional TanHParameter tanh_param = 127;
  optional ThresholdParameter threshold_param = 128;
  optional TileParameter tile_param = 138;
  optional WindowDataParameter window_data_param = 129;
}
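As a concrete illustration of the include/exclude rules above, the prototxt fragment below restricts a data layer to the training phase; the LMDB path and batch size are made-up placeholders:

layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  data_param {
    source: "examples/train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}

When the net is instantiated with phase TEST, this layer is filtered out; a sibling layer carrying include { phase: TEST } would take its place.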

The prototxt File

NetParameter only describes the data structure; the network itself is defined in a prototxt file, which specifies not only the topology but also the parameters of each layer. For example:

name: "CaffeNet"
layer {  
name: "data"  
type: "Input"  
top: "data"  
input_param { shape: { dim: 10 dim: 3 dim: 227 dim: 227 } }
}
layer {  
name: "conv1"  
type: "Convolution"  
bottom: "data"  
top: "conv1"  
convolution_param {    
num_output: 96    
kernel_size: 11    
stride: 4  }
}

A real network, of course, consists of many layers, so a prototxt file typically contains many layer blocks, chained together by matching each layer's bottom names to the top names of earlier layers, as in the fragment below.
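For instance, continuing the CaffeNet snippet above, a ReLU layer (added here purely for illustration) can consume conv1's output; because its bottom and top name the same blob, Caffe performs the computation in place:

layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}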

The Caffe Data-Structure Framework

Putting NetParameter and LayerParameter together, the related data structures form the framework below:

(Figure: diagram of the NetParameter and LayerParameter data-structure hierarchy.)
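In outline, the containment relationships are:

NetParameter
  name, input, input_shape, state, ...
  layer (repeated LayerParameter)
    name, type, bottom, top
    blobs (repeated BlobProto)
    include / exclude (repeated NetStateRule)
    one optional type-specific message per layer kind
    (ConvolutionParameter, PoolingParameter, ...)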
