Caffe Source Code Study (6): Custom Layers


After the earlier introductory posts on google protocol buffer, Blob, SyncedMemory and shared_ptr, layer, and data layer, we can now write a custom layer by imitating the layers that already exist in Caffe. So the next step is to write a custom layer under the Caffe deep learning framework. Note first that different versions of Caffe may differ slightly, so a custom layer should be written following the official guide. For convenience, here is the guide as of this writing (20160605):

Developing new layers

1. Add a class declaration for your layer to include/caffe/layers/your_layer.hpp.
(1) Include an inline implementation of type overriding the method virtual inline const char* type() const { return "YourLayerName"; } replacing YourLayerName with your layer’s name.
(2) Implement the {*}Blobs() methods to specify blob number requirements; see /caffe/include/caffe/layers.hpp to enforce strict top and bottom Blob counts using the inline {*}Blobs() methods.
(3) Omit the *_gpu declarations if you’ll only be implementing CPU code.
2. Implement your layer in src/caffe/layers/your_layer.cpp.
(1) (optional) LayerSetUp for one-time initialization: reading parameters, fixed-size allocations, etc.
(2) Reshape for computing the sizes of top blobs, allocating buffers, and any other work that depends on the shapes of bottom blobs
(3) Forward_cpu for the function your layer computes
(4) Backward_cpu for its gradient (Optional – a layer can be forward-only)
3. (Optional) Implement the GPU versions Forward_gpu and Backward_gpu in layers/your_layer.cu.
4. If needed, declare parameters in proto/caffe.proto, using (and then incrementing) the “next available layer-specific ID” declared in a comment above message LayerParameter. (A sketch of such a declaration is shown after this list.)
5. Instantiate and register your layer in your cpp file with the macro provided in layer_factory.hpp. Assuming that you have a new layer MyAwesomeLayer, you can achieve it with the following command:

INSTANTIATE_CLASS(MyAwesomeLayer);
REGISTER_LAYER_CLASS(MyAwesome);

6. Note that you should put the registration code in your own cpp file, so your implementation of a layer is self-contained.
7. Optionally, you can also register a Creator if your layer has multiple engines. For an example on how to define a creator function and register it, see GetConvolutionLayer in caffe/layer_factory.cpp.
8. Write tests in test/test_your_layer.cpp. Use test/test_gradient_check_util.hpp to check that your Forward and Backward implementations are in numerical agreement.
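
As a concrete illustration of step 4, here is a rough sketch of what the caffe.proto changes could look like for the mysquare layer described below. The message name MySquareParameter, the field n, and the ID 147 are illustrative assumptions of this post; the real ID must be taken from the "next available layer-specific ID" comment in your own copy of caffe.proto (and that comment incremented afterwards):

// Inside message LayerParameter (147 is only a placeholder ID):
optional MySquareParameter mysquare_param = 147;

// Alongside the other *Parameter messages:
message MySquareParameter {
  // Exponent for the element-wise computation y = x^n.
  optional float n = 1 [default = 2];
}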

Forward-Only Layers
If you want to write a layer that you will only ever include in a test net, you do not have to code the backward pass. For example, you might want a layer that measures performance metrics at test time that haven’t already been implemented. Doing this is very simple. You can write an inline implementation of Backward_cpu (or Backward_gpu) together with the definition of your layer in include/caffe/your_layer.hpp that looks like:

virtual void Backward_cpu(const vector<Blob<Dtype>*>& top, const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  NOT_IMPLEMENTED;
}

The NOT_IMPLEMENTED macro (defined in common.hpp) throws an error log saying “Not implemented yet”. For examples, look at the accuracy layer (accuracy_layer.hpp) and threshold layer (threshold_layer.hpp) definitions.

The official guide is already quite detailed. The custom layer below mainly uses what is covered in steps 1, 2, 3, 4, 5, and 6: it implements the simple element-wise computation y = x^n, where n is read from the prototxt file. Steps 7 and 8 are not implemented yet; I will update this post later if needed. Finally, the layer's computation is verified for correctness through the MATLAB interface.
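
For reference, a net prototxt would then use such a layer roughly as follows. The type string "MySquare" and the mysquare_param block match the sketches in this post and are assumptions, not official Caffe names:

layer {
  name: "mysquare1"
  type: "MySquare"
  bottom: "data"
  top: "mysquare1"
  mysquare_param {
    n: 3
  }
}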

Following my usual habit, the source code comes first, then the summary.

1. Source code

mysquare.hpp (this code was written by imitating the absval_layer and dropout_layer already implemented in Caffe, so some of their original content is kept. Initially the goal was to implement y = x^2; it was then extended so that a parameter can be read from the prototxt, implementing y = x^n with n read from the prototxt, hence the name mysquare.)

#ifndef MYSQUARE_LAYER_HPP
#define MYSQUARE_LAYER_HPP
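
// (The original post is cut off at this point; what follows is a hedged
// reconstruction of the rest of the header, modeled on
// include/caffe/layers/absval_layer.hpp as the text above describes.
// The class name MySquareLayer and the member n_ are assumptions.)

#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/proto/caffe.pb.h"

#include "caffe/layers/neuron_layer.hpp"

namespace caffe {

// Element-wise power layer: y = x^n, with n read from the prototxt.
template <typename Dtype>
class MySquareLayer : public NeuronLayer<Dtype> {
 public:
  explicit MySquareLayer(const LayerParameter& param)
      : NeuronLayer<Dtype>(param) {}
  // One-time setup: read the exponent n from mysquare_param.
  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);

  virtual inline const char* type() const { return "MySquare"; }
  virtual inline int ExactNumBottomBlobs() const { return 1; }
  virtual inline int ExactNumTopBlobs() const { return 1; }

 protected:
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);
  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);

  Dtype n_;  // exponent read from mysquare_param in the prototxt
};

}  // namespace caffe

#endif  // MYSQUARE_LAYER_HPP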
