Adding a Custom Layer and Its ProtoBuffer Parameters to Caffe [Reposted from 卜居: https://blog.csdn.net/kkk584520/article/details/52721838]

On a speeding train, unable to sleep. Outside the rain drizzles on, and my thoughts drift toward the horizon.


Flipping back through an earlier chat, I remembered I still owed a reader a blog post.


So I spent some time tidying up a simple example, from when I was learning Caffe, of adding a custom Layer together with custom ProtoBuffer parameters. I hope it is useful to beginners.


This post is based on my new book 《深度学习:21 天实战 Caffe》 (Deep Learning: 21 Days of Hands-on Caffe); readers are welcome to leave comments to discuss the answers to the book's exercises. Now, on to the main content.


When working with Caffe you will often run into needs like these: an existing Layer does not fit my application; I need such-and-such a feature that the original code does not provide; or it is provided but too slow, and I have a better implementation.


Option 1: the quick-and-dirty fix (swap it out in place)


If you are unhappy with the implementation of ConvolutionLayer, just edit the relevant files directly: $CAFFE_ROOT/include/caffe/layers/conv_layer.hpp and $CAFFE_ROOT/src/caffe/layers/conv_layer.cpp (or conv_layer.cu), replacing im2col + GEMM with your own implementation (for example, one based on the Winograd algorithm).

Pros: fast iteration, no deep knowledge of the Caffe framework required; rough, but quick and effective.

Cons: the code is hard to maintain, can never be merged into the caffe master branch, and easily confuses anyone else using it (about as helpful as #define TRUE false).


Option 2: a slightly gentler fix (one codebase, many faces)

Similar to Option 1, except that a preprocessor macro decides which implementation is used. For example, you can keep the default ConvolutionLayer implementation and add a block like this to the code:


  
  
#ifdef SWITCH_MY_IMPLEMENTATION
// your implementation
#else
// default implementation
#endif

Then, in the code that needs to use this Layer, add the macro definition:

#define SWITCH_MY_IMPLEMENTATION
  
  

and your implementation will be used. Code that does not define the macro keeps using the original implementation.
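As a toy illustration of the mechanism (a self-contained sketch of my own, not Caffe code), the same function body can be switched at compile time by passing the macro as a compiler flag:

// toy_switch.cpp: self-contained illustration of compile-time switching.
// Default build:  g++ toy_switch.cpp -o toy_switch
// Custom build:   g++ -DSWITCH_MY_IMPLEMENTATION toy_switch.cpp -o toy_switch
#include <cstdio>

void forward() {
#ifdef SWITCH_MY_IMPLEMENTATION
  std::printf("custom implementation\n");   // your code path
#else
  std::printf("default implementation\n");  // original code path
#endif
}

int main() {
  forward();
  return 0;
}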


Pros: you can switch flexibly between the new and old implementations.

Cons: every switch requires a recompile.


Option 3: the elegant way (a winding mountain road)

Suppose the same Layer functionality has several implementations and you want to switch between them flexibly without recompiling. How can that be done?

This is where the ProtoBuffer machinery has to come in.

First, split your implementation, like any normal Layer class, into a declaration and an implementation, placed in a .hpp and in .cpp/.cu files respectively. Give the Layer a new name that distinguishes it from the original implementation. Put the .hpp under $CAFFE_ROOT/include/caffe/layers/ and the .cpp and .cu under $CAFFE_ROOT/src/caffe/layers/; that way, when you run make under $CAFFE_ROOT, these files are picked up by the build automatically, sparing you the tedium of editing build settings by hand.

Next, in $CAFFE_ROOT/src/caffe/proto/caffe.proto, add a new LayerParameter option, so that when you write train.prototxt, test.prototxt, or deploy.prototxt you can describe the new Layer there. This makes it easy to modify the network structure and to swap in other Layers with the same functionality.

Finally, and this is the step most easily overlooked, register a factory function for the new Layer with the Layer factory; otherwise you may hit an error like this at runtime:

F1002 01:51:22.656038 1954701312 layer_factory.hpp:81] Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: AllPass (known types: AbsVal, Accuracy, ArgMax, BNLL, BatchNorm, BatchReindex, Bias, Concat, ContrastiveLoss, Convolution, Crop, Data, Deconvolution, Dropout, DummyData, ELU, Eltwise, Embed, EuclideanLoss, Exp, Filter, Flatten, HDF5Data, HDF5Output, HingeLoss, Im2col, ImageData, InfogainLoss, InnerProduct, Input, LRN, Log, MVN, MemoryData, MultinomialLogisticLoss, PReLU, Pooling, Power, ReLU, Reduction, Reshape, SPP, Scale, Sigmoid, SigmoidCrossEntropyLoss, Silence, Slice, Softmax, SoftmaxWithLoss, Split, TanH, Threshold, Tile, WindowData)
*** Check failure stack trace: ***
    @        0x10243154e  google::LogMessage::Fail()
    @        0x102430c53  google::LogMessage::SendToLog()
    @        0x1024311a9  google::LogMessage::Flush()
    @        0x1024344d7  google::LogMessageFatal::~LogMessageFatal()
    @        0x10243183b  google::LogMessageFatal::~LogMessageFatal()
    @        0x102215356  caffe::LayerRegistry<>::CreateLayer()
    @        0x102233ccf  caffe::Net<>::Init()
    @        0x102235996  caffe::Net<>::Net()
    @        0x102118d8b  time()
    @        0x102119c9a  main
    @     0x7fff851285ad  start
    @                0x4  (unknown)
Abort trap: 6



Below is a concrete example that walks through the Option 3 workflow.

We will implement a new Layer called AllPassLayer. As the name suggests, it is an all-pass Layer; the term is borrowed from the all-pass filter in signal processing, which passes the signal from input to output without distortion.

This Layer is not useful in itself, but adding your own processing on top of it is very easy. It was also chosen for experimental convenience: the all-pass layer's Forward/Backward functions are so simple that no calculus or differentiation background is needed. You can insert the layer anywhere in an existing network without affecting training or prediction accuracy.


First, the header file:


  
  
#ifndef CAFFE_ALL_PASS_LAYER_HPP_
#define CAFFE_ALL_PASS_LAYER_HPP_

#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/layers/neuron_layer.hpp"

namespace caffe {

template <typename Dtype>
class AllPassLayer : public NeuronLayer<Dtype> {
 public:
  explicit AllPassLayer(const LayerParameter& param)
      : NeuronLayer<Dtype>(param) {}
  virtual inline const char* type() const { return "AllPass"; }

 protected:
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);
  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);
};

}  // namespace caffe

#endif  // CAFFE_ALL_PASS_LAYER_HPP_


Next, the source file:


  
  
#include <algorithm>
#include <vector>

#include "caffe/layers/all_pass_layer.hpp"

#include <iostream>

using namespace std;

#define DEBUG_AP(str) cout << str << endl

namespace caffe {

template <typename Dtype>
void AllPassLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  const Dtype* bottom_data = bottom[0]->cpu_data();
  Dtype* top_data = top[0]->mutable_cpu_data();
  const int count = bottom[0]->count();
  // Identity mapping: copy the input blob to the output blob unchanged.
  for (int i = 0; i < count; ++i) {
    top_data[i] = bottom_data[i];
  }
  DEBUG_AP("Here is All Pass Layer, forwarding.");
  DEBUG_AP(this->layer_param_.all_pass_param().key());
}

template <typename Dtype>
void AllPassLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down,
    const vector<Blob<Dtype>*>& bottom) {
  if (propagate_down[0]) {
    const Dtype* bottom_data = bottom[0]->cpu_data();
    const Dtype* top_diff = top[0]->cpu_diff();
    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();
    const int count = bottom[0]->count();
    // Gradient of the identity: pass the top diff straight down to the bottom diff.
    for (int i = 0; i < count; ++i) {
      bottom_diff[i] = top_diff[i];
    }
  }
  DEBUG_AP("Here is All Pass Layer, backwarding.");
  DEBUG_AP(this->layer_param_.all_pass_param().key());
}

#ifdef CPU_ONLY
STUB_GPU(AllPassLayer);
#endif

INSTANTIATE_CLASS(AllPassLayer);
REGISTER_LAYER_CLASS(AllPass);

}  // namespace caffe


For time reasons I did not implement the GPU-mode forward and backward, so the example in this post only supports CPU_ONLY mode.
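If you did want GPU support later, a minimal sketch of what an all_pass_layer.cu could look like is below. This is my own addition, not part of the original example; it assumes caffe_copy from caffe/util/math_functions.hpp and the INSTANTIATE_LAYER_GPU_FUNCS macro. The CPU_ONLY stub in the .cpp can stay, since it is only compiled in CPU-only builds.

#include <vector>

#include "caffe/layers/all_pass_layer.hpp"
#include "caffe/util/math_functions.hpp"

namespace caffe {

// Hypothetical GPU pass-through: copy data (forward) and diffs (backward)
// between device buffers with caffe_copy.
template <typename Dtype>
void AllPassLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  caffe_copy(bottom[0]->count(), bottom[0]->gpu_data(),
             top[0]->mutable_gpu_data());
}

template <typename Dtype>
void AllPassLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  if (propagate_down[0]) {
    caffe_copy(top[0]->count(), top[0]->gpu_diff(),
               bottom[0]->mutable_gpu_diff());
  }
}

INSTANTIATE_LAYER_GPU_FUNCS(AllPassLayer);

}  // namespace caffe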


Edit caffe.proto, find the LayerParameter message, and add one entry:


  
  
message LayerParameter {
  optional string name = 1; // the layer name
  optional string type = 2; // the layer type
  repeated string bottom = 3; // the name of each bottom blob
  repeated string top = 4; // the name of each top blob

  // The train / test phase for computation.
  optional Phase phase = 10;

  // The amount of weight to assign each top blob in the objective.
  // Each layer assigns a default value, usually of either 0 or 1,
  // to each top blob.
  repeated float loss_weight = 5;

  // Specifies training parameters (multipliers on global learning constants,
  // and the name and other settings used for weight sharing).
  repeated ParamSpec param = 6;

  // The blobs containing the numeric parameters of the layer.
  repeated BlobProto blobs = 7;

  // Specifies on which bottoms the backpropagation should be skipped.
  // The size must be either 0 or equal to the number of bottoms.
  repeated bool propagate_down = 11;

  // Rules controlling whether and when a layer is included in the network,
  // based on the current NetState. You may specify a non-zero number of rules
  // to include OR exclude, but not both. If no include or exclude rules are
  // specified, the layer is always included. If the current NetState meets
  // ANY (i.e., one or more) of the specified rules, the layer is
  // included/excluded.
  repeated NetStateRule include = 8;
  repeated NetStateRule exclude = 9;

  // Parameters for data pre-processing.
  optional TransformationParameter transform_param = 100;

  // Parameters shared by loss layers.
  optional LossParameter loss_param = 101;

  // Layer type-specific parameters.
  //
  // Note: certain layers may have more than one computational engine
  // for their implementation. These layers include an Engine type and
  // engine parameter for selecting the implementation.
  // The default for the engine is set by the ENGINE switch at compile-time.
  optional AccuracyParameter accuracy_param = 102;
  optional ArgMaxParameter argmax_param = 103;
  optional BatchNormParameter batch_norm_param = 139;
  optional BiasParameter bias_param = 141;
  optional ConcatParameter concat_param = 104;
  optional ContrastiveLossParameter contrastive_loss_param = 105;
  optional ConvolutionParameter convolution_param = 106;
  optional CropParameter crop_param = 144;
  optional DataParameter data_param = 107;
  optional DropoutParameter dropout_param = 108;
  optional DummyDataParameter dummy_data_param = 109;
  optional EltwiseParameter eltwise_param = 110;
  optional ELUParameter elu_param = 140;
  optional EmbedParameter embed_param = 137;
  optional ExpParameter exp_param = 111;
  optional FlattenParameter flatten_param = 135;
  optional HDF5DataParameter hdf5_data_param = 112;
  optional HDF5OutputParameter hdf5_output_param = 113;
  optional HingeLossParameter hinge_loss_param = 114;
  optional ImageDataParameter image_data_param = 115;
  optional InfogainLossParameter infogain_loss_param = 116;
  optional InnerProductParameter inner_product_param = 117;
  optional InputParameter input_param = 143;
  optional LogParameter log_param = 134;
  optional LRNParameter lrn_param = 118;
  optional MemoryDataParameter memory_data_param = 119;
  optional MVNParameter mvn_param = 120;
  optional PoolingParameter pooling_param = 121;
  optional PowerParameter power_param = 122;
  optional PReLUParameter prelu_param = 131;
  optional PythonParameter python_param = 130;
  optional ReductionParameter reduction_param = 136;
  optional ReLUParameter relu_param = 123;
  optional ReshapeParameter reshape_param = 133;
  optional ScaleParameter scale_param = 142;
  optional SigmoidParameter sigmoid_param = 124;
  optional SoftmaxParameter softmax_param = 125;
  optional SPPParameter spp_param = 132;
  optional SliceParameter slice_param = 126;
  optional TanHParameter tanh_param = 127;
  optional ThresholdParameter threshold_param = 128;
  optional TileParameter tile_param = 138;
  optional WindowDataParameter window_data_param = 129;
  optional AllPassParameter all_pass_param = 155;
}

Note that the new field number must not clash with a field number already used by another Layer (upstream caffe.proto keeps a comment near the top of LayerParameter noting the next available layer-specific ID; if present, it is worth updating it too).


Still in caffe.proto, add the AllPassParameter message declaration; its position does not matter. I defined a single parameter that can be used to read a preset value from the prototxt.



  
  
message AllPassParameter {
  optional float key = 1 [default = 0];
}

In the .cpp code, the preset value from the prototxt is read via this expression:

this->layer_param_.all_pass_param().key()
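The all-pass layer above only prints that value. If you wanted the parameter to actually drive the computation, a hypothetical variation of Forward_cpu (my sketch, not the original code) could use it as a constant gain:

// Hypothetical variation: scale the output by the key read from the prototxt.
// The original AllPassLayer only prints the value; this just shows how a
// parameter fetched from layer_param_ can influence the computation.
template <typename Dtype>
void AllPassLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  const Dtype* bottom_data = bottom[0]->cpu_data();
  Dtype* top_data = top[0]->mutable_cpu_data();
  const Dtype key = this->layer_param_.all_pass_param().key();
  const int count = bottom[0]->count();
  for (int i = 0; i < count; ++i) {
    top_data[i] = bottom_data[i] * key;
  }
}

Of course, the layer would then no longer be all-pass, and Backward_cpu would have to apply the same scaling to the diff.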

Run make clean under $CAFFE_ROOT, then rebuild with make all. If you want it to compile cleanly on the first try, write disciplined code and keep a sharp nose for the usual mistakes so you can avoid them.


Everything is in place; all that is missing is the prototxt.


That is easy: let's write the simplest possible deploy.prototxt, with no data layer and no softmax layer, just for fun.


  
  
name: "AllPassTest"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 10 dim: 3 dim: 227 dim: 227 } }
}
layer {
  name: "ap"
  type: "AllPass"
  bottom: "data"
  top: "conv1"
  all_pass_param {
    key: 12.88
  }
}


Note that what you write after type: here should be the name of the new class you declared in the .hpp, with the trailing "Layer" removed.

Above, the key parameter is preset to 12.88. Yes, that number made you think of Liu Xiang, didn't it?


To verify that the Layer can be created and that its forward and backward passes run correctly, we run the caffe time command with the prototxt we just wrote:

$ ./build/tools/caffe.bin time -model deploy.prototxt
I1002 02:03:41.667682 1954701312 caffe.cpp:312] Use CPU.
I1002 02:03:41.671360 1954701312 net.cpp:49] Initializing net from parameters:
name: "AllPassTest"
state {
  phase: TRAIN
}
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param {
    shape {
      dim: 10
      dim: 3
      dim: 227
      dim: 227
    }
  }
}
layer {
  name: "ap"
  type: "AllPass"
  bottom: "data"
  top: "conv1"
  all_pass_param {
    key: 12.88
  }
}
I1002 02:03:41.671463 1954701312 layer_factory.hpp:77] Creating layer data
I1002 02:03:41.671484 1954701312 net.cpp:91] Creating Layer data
I1002 02:03:41.671499 1954701312 net.cpp:399] data -> data
I1002 02:03:41.671555 1954701312 net.cpp:141] Setting up data
I1002 02:03:41.671566 1954701312 net.cpp:148] Top shape: 10 3 227 227 (1545870)
I1002 02:03:41.671592 1954701312 net.cpp:156] Memory required for data: 6183480
I1002 02:03:41.671605 1954701312 layer_factory.hpp:77] Creating layer ap
I1002 02:03:41.671620 1954701312 net.cpp:91] Creating Layer ap
I1002 02:03:41.671630 1954701312 net.cpp:425] ap <- data
I1002 02:03:41.671644 1954701312 net.cpp:399] ap -> conv1
I1002 02:03:41.671663 1954701312 net.cpp:141] Setting up ap
I1002 02:03:41.671674 1954701312 net.cpp:148] Top shape: 10 3 227 227 (1545870)
I1002 02:03:41.671685 1954701312 net.cpp:156] Memory required for data: 12366960
I1002 02:03:41.671695 1954701312 net.cpp:219] ap does not need backward computation.
I1002 02:03:41.671705 1954701312 net.cpp:219] data does not need backward computation.
I1002 02:03:41.671710 1954701312 net.cpp:261] This network produces output conv1
I1002 02:03:41.671720 1954701312 net.cpp:274] Network initialization done.
I1002 02:03:41.671746 1954701312 caffe.cpp:320] Performing Forward
Here is All Pass Layer, forwarding.
12.88
I1002 02:03:41.679689 1954701312 caffe.cpp:325] Initial loss: 0
I1002 02:03:41.679714 1954701312 caffe.cpp:326] Performing Backward
I1002 02:03:41.679738 1954701312 caffe.cpp:334] *** Benchmark begins ***
I1002 02:03:41.679746 1954701312 caffe.cpp:335] Testing for 50 iterations.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.681139 1954701312 caffe.cpp:363] Iteration: 1 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.682394 1954701312 caffe.cpp:363] Iteration: 2 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.683653 1954701312 caffe.cpp:363] Iteration: 3 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.685096 1954701312 caffe.cpp:363] Iteration: 4 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.686326 1954701312 caffe.cpp:363] Iteration: 5 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.687713 1954701312 caffe.cpp:363] Iteration: 6 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.689038 1954701312 caffe.cpp:363] Iteration: 7 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.690251 1954701312 caffe.cpp:363] Iteration: 8 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.691548 1954701312 caffe.cpp:363] Iteration: 9 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.692805 1954701312 caffe.cpp:363] Iteration: 10 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.694056 1954701312 caffe.cpp:363] Iteration: 11 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.695264 1954701312 caffe.cpp:363] Iteration: 12 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.696761 1954701312 caffe.cpp:363] Iteration: 13 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.698225 1954701312 caffe.cpp:363] Iteration: 14 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.699653 1954701312 caffe.cpp:363] Iteration: 15 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.700945 1954701312 caffe.cpp:363] Iteration: 16 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.702761 1954701312 caffe.cpp:363] Iteration: 17 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.704056 1954701312 caffe.cpp:363] Iteration: 18 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.706471 1954701312 caffe.cpp:363] Iteration: 19 forward-backward time: 2 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.708784 1954701312 caffe.cpp:363] Iteration: 20 forward-backward time: 2 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.710043 1954701312 caffe.cpp:363] Iteration: 21 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.711272 1954701312 caffe.cpp:363] Iteration: 22 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.712528 1954701312 caffe.cpp:363] Iteration: 23 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.713964 1954701312 caffe.cpp:363] Iteration: 24 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.715248 1954701312 caffe.cpp:363] Iteration: 25 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.716487 1954701312 caffe.cpp:363] Iteration: 26 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.717725 1954701312 caffe.cpp:363] Iteration: 27 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.718962 1954701312 caffe.cpp:363] Iteration: 28 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.720289 1954701312 caffe.cpp:363] Iteration: 29 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.721837 1954701312 caffe.cpp:363] Iteration: 30 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.723042 1954701312 caffe.cpp:363] Iteration: 31 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.724261 1954701312 caffe.cpp:363] Iteration: 32 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.725587 1954701312 caffe.cpp:363] Iteration: 33 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.726771 1954701312 caffe.cpp:363] Iteration: 34 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.728013 1954701312 caffe.cpp:363] Iteration: 35 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.729249 1954701312 caffe.cpp:363] Iteration: 36 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.730716 1954701312 caffe.cpp:363] Iteration: 37 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.732275 1954701312 caffe.cpp:363] Iteration: 38 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.733809 1954701312 caffe.cpp:363] Iteration: 39 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.735049 1954701312 caffe.cpp:363] Iteration: 40 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.737144 1954701312 caffe.cpp:363] Iteration: 41 forward-backward time: 2 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.739090 1954701312 caffe.cpp:363] Iteration: 42 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.741575 1954701312 caffe.cpp:363] Iteration: 43 forward-backward time: 2 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.743450 1954701312 caffe.cpp:363] Iteration: 44 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.744732 1954701312 caffe.cpp:363] Iteration: 45 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.745970 1954701312 caffe.cpp:363] Iteration: 46 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.747185 1954701312 caffe.cpp:363] Iteration: 47 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.748430 1954701312 caffe.cpp:363] Iteration: 48 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.749826 1954701312 caffe.cpp:363] Iteration: 49 forward-backward time: 1 ms.
Here is All Pass Layer, forwarding.
12.88
Here is All Pass Layer, backwarding.
12.88
I1002 02:03:41.751124 1954701312 caffe.cpp:363] Iteration: 50 forward-backward time: 1 ms.
I1002 02:03:41.751147 1954701312 caffe.cpp:366] Average time per layer:
I1002 02:03:41.751157 1954701312 caffe.cpp:369]       data	forward: 0.00108 ms.
I1002 02:03:41.751183 1954701312 caffe.cpp:372]       data	backward: 0.001 ms.
I1002 02:03:41.751194 1954701312 caffe.cpp:369]         ap	forward: 1.37884 ms.
I1002 02:03:41.751205 1954701312 caffe.cpp:372]         ap	backward: 0.01156 ms.
I1002 02:03:41.751220 1954701312 caffe.cpp:377] Average Forward pass: 1.38646 ms.
I1002 02:03:41.751231 1954701312 caffe.cpp:379] Average Backward pass: 0.0144 ms.
I1002 02:03:41.751240 1954701312 caffe.cpp:381] Average Forward-Backward: 1.42 ms.
I1002 02:03:41.751250 1954701312 caffe.cpp:383] Total Time: 71 ms.
I1002 02:03:41.751260 1954701312 caffe.cpp:384] *** Benchmark ends ***

As you can see, the Layer is created correctly, loads the preset parameter, and runs its forward and backward functions.

In practice, for a Layer that implements a real algorithm you should also write test cases to guarantee correctness. Since we chose an extremely simple all-pass Layer, that step can be skipped here; I save a little effort and you save a little reading time.
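For reference, such a test would normally live under src/caffe/test/, where the build picks up test_*.cpp files automatically. A minimal CPU-only sketch (my addition; the file name test_all_pass_layer.cpp and its contents are hypothetical, following the style of the existing layer tests) might look like this:

#include <vector>

#include "gtest/gtest.h"

#include "caffe/blob.hpp"
#include "caffe/filler.hpp"
#include "caffe/layers/all_pass_layer.hpp"
#include "caffe/test/test_caffe_main.hpp"

namespace caffe {

template <typename Dtype>
class AllPassLayerTest : public CPUDeviceTest<Dtype> {
 protected:
  AllPassLayerTest()
      : blob_bottom_(new Blob<Dtype>(2, 3, 4, 5)),
        blob_top_(new Blob<Dtype>()) {
    // Fill the bottom blob with random Gaussian values.
    FillerParameter filler_param;
    GaussianFiller<Dtype> filler(filler_param);
    filler.Fill(blob_bottom_);
    blob_bottom_vec_.push_back(blob_bottom_);
    blob_top_vec_.push_back(blob_top_);
  }
  virtual ~AllPassLayerTest() {
    delete blob_bottom_;
    delete blob_top_;
  }
  Blob<Dtype>* const blob_bottom_;
  Blob<Dtype>* const blob_top_;
  vector<Blob<Dtype>*> blob_bottom_vec_;
  vector<Blob<Dtype>*> blob_top_vec_;
};

TYPED_TEST_CASE(AllPassLayerTest, TestDtypes);

TYPED_TEST(AllPassLayerTest, TestForwardIsIdentity) {
  LayerParameter layer_param;
  AllPassLayer<TypeParam> layer(layer_param);
  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);
  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);
  // The all-pass layer should copy its input to its output unchanged.
  for (int i = 0; i < this->blob_bottom_->count(); ++i) {
    EXPECT_EQ(this->blob_bottom_->cpu_data()[i],
              this->blob_top_->cpu_data()[i]);
  }
}

}  // namespace caffe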


Thanks to every reader for the valuable suggestions and feedback; they are a priceless supervised-learning dataset, the back-prop signal that keeps driving my updates.

Happy National Day holiday, everyone!
