Usage of the DeconvolutionLayer in Caffe

A note up front: the terminological difference between "deconvolution" and "transposed convolution" is not discussed here; both are referred to as deconvolution below (see http://blog.csdn.net/u013250416/article/details/78247818). In my understanding, a convolution maps a large feature map to a smaller one, while a deconvolution maps a small feature map to a larger one. The following introduces the usage of the DeconvolutionLayer in Caffe.



1. Definition

The latest Caffe version on GitHub already includes the DeconvolutionLayer: see src/caffe/layers/deconv_layer.cpp, deconv_layer.cu, and include/caffe/layers/deconv_layer.hpp. It differs from the ConvolutionLayer only in how output_shape is computed.

For convolution:

output = (input + 2 * p - k) / s + 1

For deconvolution:

output = (input - 1) * s + k - 2 * p

(With dilation d, the kernel size k above is replaced by the kernel extent d * (k - 1) + 1, as in the code below.)
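As a quick sanity check, the deconvolution formula undoes the convolution one whenever (input + 2p - k) is exactly divisible by s. A minimal sketch in Python (the function names are my own, not part of Caffe):

```python
def conv_out(i, k, s, p, d=1):
    # kernel extent accounts for dilation, as in Caffe's compute_output_shape()
    ke = d * (k - 1) + 1
    return (i + 2 * p - ke) // s + 1

def deconv_out(i, k, s, p, d=1):
    ke = d * (k - 1) + 1
    return s * (i - 1) + ke - 2 * p

# a 4x4 / stride-2 / pad-1 conv halves 224 -> 112;
# the matching deconv maps 112 back to 224
print(conv_out(224, 4, 2, 1))    # 112
print(deconv_out(112, 4, 2, 1))  # 224
```

Note that when the division in the convolution formula truncates (e.g. odd input with stride 2), the deconvolution is no longer an exact inverse, so output sizes should be checked explicitly when stacking such layers.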


conv_layer.cpp:

template <typename Dtype>
void ConvolutionLayer<Dtype>::compute_output_shape() {
  const int* kernel_shape_data = this->kernel_shape_.cpu_data();
  const int* stride_data = this->stride_.cpu_data();
  const int* pad_data = this->pad_.cpu_data();
  const int* dilation_data = this->dilation_.cpu_data();
  this->output_shape_.clear();
  for (int i = 0; i < this->num_spatial_axes_; ++i) {
    // i + 1 to skip channel axis
    const int input_dim = this->input_shape(i + 1);
    const int kernel_extent = dilation_data[i] * (kernel_shape_data[i] - 1) + 1;
    const int output_dim = (input_dim + 2 * pad_data[i] - kernel_extent)
        / stride_data[i] + 1;
    this->output_shape_.push_back(output_dim);
  }
}

deconv_layer.cpp:

template <typename Dtype>
void DeconvolutionLayer<Dtype>::compute_output_shape() {
  const int* kernel_shape_data = this->kernel_shape_.cpu_data();
  const int* stride_data = this->stride_.cpu_data();
  const int* pad_data = this->pad_.cpu_data();
  const int* dilation_data = this->dilation_.cpu_data();
  this->output_shape_.clear();
  for (int i = 0; i < this->num_spatial_axes_; ++i) {
    // i + 1 to skip channel axis
    const int input_dim = this->input_shape(i + 1);
    const int kernel_extent = dilation_data[i] * (kernel_shape_data[i] - 1) + 1;
    const int output_dim = stride_data[i] * (input_dim - 1)
        + kernel_extent - 2 * pad_data[i];
    this->output_shape_.push_back(output_dim);
  }
}



2. Usage

When using NetSpec in Python to generate a network prototxt, layers.Deconvolution does not accept the usual keyword arguments directly; the parameters must be passed explicitly through convolution_param.

Otherwise, passing parameters the same way as for a ConvolutionLayer raises an error such as: AttributeError: 'LayerParameter' object has no attribute 'stride'. For reference, see: https://github.com/shelhamer/fcn.berkeleyvision.org/blob/master/voc-fcn32s/net.py#L58-L61

ConvolutionLayer:

conv = L.Convolution(relu, kernel_size=ks, stride=stride,
                     num_output=nout, pad=pad, bias_term=False,
                     weight_filler=dict(type='xavier'),
                     bias_filler=dict(type='constant'))


DeconvolutionLayer:

conv = L.Deconvolution(relu, convolution_param=dict(kernel_size=ks, stride=stride,
                       num_output=nout, pad=pad, bias_term=False,
                       weight_filler=dict(type='xavier'),
                       bias_filler=dict(type='constant')))
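For reference, the NetSpec call above serializes to a prototxt layer of roughly the following form (the layer/blob names and field values here are illustrative, not from the original post):

```protobuf
layer {
  name: "upscore"
  type: "Deconvolution"
  bottom: "score"
  top: "upscore"
  convolution_param {
    num_output: 21
    kernel_size: 4
    stride: 2
    pad: 1
    bias_term: false
    weight_filler { type: "xavier" }
  }
}
```

Note that the layer type is "Deconvolution" but the parameters still live in convolution_param, since the built-in layer reuses ConvolutionParameter.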



3. Caffe.proto configuration

In caffe.proto, no DeconvolutionParameter is defined; the built-in Deconvolution layer simply reuses ConvolutionParameter (which is why convolution_param is used above).

If a dedicated parameter message is wanted, it can be added as follows.

In LayerParameter:

message LayerParameter {
  ......
  optional ConvolutionParameter convolution_param = 106;
  // Deconvolution
  optional DeconvolutionParameter deconvolution_param = 147;
  ......
}
Add a message DeconvolutionParameter:

message DeconvolutionParameter {
  optional uint32 num_output = 1; // The number of outputs for the layer
  optional bool bias_term = 2 [default = true]; // whether to have bias terms

  // Pad, kernel size, and stride are all given as a single value for equal
  // dimensions in all spatial dimensions, or once per spatial dimension.
  repeated uint32 pad = 3; // The padding size; defaults to 0
  repeated uint32 kernel_size = 4; // The kernel size
  repeated uint32 stride = 6; // The stride; defaults to 1
  // Factor used to dilate the kernel, (implicitly) zero-filling the resulting
  // holes. (Kernel dilation is sometimes referred to by its use in the
  // algorithme à trous from Holschneider et al. 1987.)
  repeated uint32 dilation = 18; // The dilation; defaults to 1

  // For 2D convolution only, the *_h and *_w versions may also be used to
  // specify both spatial dimensions.
  optional uint32 pad_h = 9 [default = 0]; // The padding height (2D only)
  optional uint32 pad_w = 10 [default = 0]; // The padding width (2D only)
  optional uint32 kernel_h = 11; // The kernel height (2D only)
  optional uint32 kernel_w = 12; // The kernel width (2D only)
  optional uint32 stride_h = 13; // The stride height (2D only)
  optional uint32 stride_w = 14; // The stride width (2D only)

  optional uint32 group = 5 [default = 1]; // The group size for group conv

  optional FillerParameter weight_filler = 7; // The filler for the weight
  optional FillerParameter bias_filler = 8; // The filler for the bias
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 15 [default = DEFAULT];

  // The axis to interpret as "channels" when performing convolution.
  // Preceding dimensions are treated as independent inputs;
  // succeeding dimensions are treated as "spatial".
  // With (N, C, H, W) inputs, and axis == 1 (the default), we perform
  // N independent 2D convolutions, sliding C-channel (or (C/g)-channels, for
  // groups g>1) filters across the spatial axes (H, W) of the input.
  // With (N, C, D, H, W) inputs, and axis == 1, we perform
  // N independent 3D convolutions, sliding (C/g)-channels
  // filters across the spatial axes (D, H, W) of the input.
  optional int32 axis = 16 [default = 1];

  // Whether to force use of the general ND convolution, even if a specific
  // implementation for blobs of the appropriate number of spatial dimensions
  // is available. (Currently, there is only a 2D-specific convolution
  // implementation; for input blobs with num_axes != 2, this option is
  // ignored and the ND implementation will be used.)
  optional bool force_nd_im2col = 17 [default = false];
}



