Deep dive into the caffe source code, part 5: a detailed walkthrough of the caffe convolution layer code

   In this post I walk through the source code of the caffe convolution layer. A note up front: the convolution layer is relatively complex and has quite a few parameters, so if you spot omissions or mistakes in this post, please point them out; I would be sincerely grateful.

   Before digging into the code, I want to clarify how convolution kernels are defined, because many readers (myself included, at first) misunderstand what a kernel is in a CNN. I spell out the misconception here as a warning:

   An example: a convolutional layer in a CNN takes 64 input feature maps (channels) and produces 256 output feature maps. How many kernels does this layer contain? When I first encountered CNNs I believed the layer contained 256 kernels, each of them two-dimensional; each kernel would be convolved with every input channel, and the results would be summed to form that kernel's output feature map.

That understanding is wrong!

   The correct view of the example above is that the layer contains 64*256 two-dimensional kernels: each output feature map has its own set of 64 distinct kernels, one per input channel; each of those kernels is convolved with its corresponding input channel, and the 64 results are summed to produce that single output feature map.
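   To make the counting concrete, the short program below computes the number of 2-D kernels and learnable weights for this example. The 3x3 kernel size is an assumption made purely for illustration; the layout mirrors how caffe stores a layer's weights in a single blob of shape num_output x channels x kernel_h x kernel_w (with group = 1).

#include <cstdio>

int main() {
  // Illustrative numbers only: 64 input channels, 256 output channels,
  // and an assumed 3x3 kernel.
  const int channels = 64, num_output = 256, kernel_h = 3, kernel_w = 3;
  // One weight blob shaped num_output x channels x kernel_h x kernel_w (group = 1).
  const long num_2d_kernels = static_cast<long>(num_output) * channels;   // 256 * 64 = 16384
  const long num_weights = num_2d_kernels * kernel_h * kernel_w;          // 16384 * 9 = 147456
  std::printf("2-D kernels: %ld, learnable weights: %ld\n", num_2d_kernels, num_weights);
  return 0;
}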

   I have taken care to avoid that misconception in the code commentary below; please keep it in mind. Now, let's start with the code of conv_layer.hpp:

#ifndef CAFFE_CONV_LAYER_HPP_
#define CAFFE_CONV_LAYER_HPP_

#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/proto/caffe.pb.h"

#include "caffe/layers/base_conv_layer.hpp"

namespace caffe {

/**
 * @brief Convolves the input image with a bank of learned filters,
 *        and (optionally) adds biases.
 *
 *   Caffe convolves by reduction to matrix multiplication. This achieves
 *   high-throughput and generality of input and filter dimensions but comes at
 *   the cost of memory for matrices. This makes use of efficiency in BLAS.
 *
 *   The input is "im2col" transformed to a channel K' x H x W data matrix
 *   for multiplication with the N x K' x H x W filter matrix to yield a
 *   N' x H x W output matrix that is then "col2im" restored. K' is the
 *   input channel * kernel height * kernel width dimension of the unrolled
 *   inputs so that the im2col matrix has a column for each input region to
 *   be filtered. col2im restores the output spatial structure by rolling up
 *   the output channel N' columns of the output matrix.
 */
template <typename Dtype>
class ConvolutionLayer : public BaseConvolutionLayer<Dtype> {
 public:
  /**
   * @param param provides ConvolutionParameter convolution_param,
   *    with ConvolutionLayer options:
   *  - num_output. The number of filters.
   *  - kernel_size / kernel_h / kernel_w. The filter dimensions, given by
   *  kernel_size for square filters or kernel_h and kernel_w for rectangular
   *  filters.
   *  - stride / stride_h / stride_w (\b optional, default 1). The filter
   *  stride, given by stride_size for equal dimensions or stride_h and stride_w
   *  for different strides. By default the convolution is dense with stride 1.
   *  - pad / pad_h / pad_w (\b optional, default 0). The zero-padding for
   *  convolution, given by pad for equal dimensions or pad_h and pad_w for
   *  different padding. Input padding is computed implicitly instead of
   *  actually padding.
   *  - dilation (\b optional, default 1). The filter
   *  dilation, given by dilation_size for equal dimensions for different
   *  dilation. By default the convolution has dilation 1.
   *  - group (\b optional, default 1). The number of filter groups. Group
   *  convolution is a method for reducing parameterization by selectively
   *  connecting input and output channels. The input and output channel dimensions must be divisible
   *  by the number of groups. For group @f$ \geq 1 @f$, the
   *  convolutional filters' input and output channels are separated s.t. each
   *  group takes 1 / group of the input channels and makes 1 / group of the
   *  output channels. Concretely 4 input channels, 8 output channels, and
   *  2 groups separate input channels 1-2 and output channels 1-4 into the
   *  first group and input channels 3-4 and output channels 5-8 into the second
   *  group.
   *  - bias_term (\b optional, default true). Whether to have a bias.
   *  - engine: convolution has CAFFE (matrix multiplication) and CUDNN (library
   *    kernels + stream parallelism) engines.
   */
  explicit ConvolutionLayer(const LayerParameter& param)
      : BaseConvolutionLayer<Dtype>(param) {}//the constructor body is empty

  virtual inline const char* type() const { return "Convolution"; }

 protected:
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);//CPU forward pass of the convolution layer
  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);//GPU forward pass of the convolution layer
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);//CPU backward pass of the convolution layer
  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);//GPU backward pass of the convolution layer
  virtual inline bool reverse_dimensions() { return false; }//whether the dimensions should be reversed (deconvolution); always false for convolution
  virtual void compute_output_shape();//computes the spatial shape of the layer's output
};

}  // namespace caffe

#endif  // CAFFE_CONV_LAYER_HPP_
   Besides declaring the forward and backward passes, conv_layer.hpp defines reverse_dimensions(), which simply returns false here (it returns true only for deconvolution), and declares compute_output_shape(), which computes the spatial shape of the layer's output. Now let's move on to conv_layer.cpp, where these declarations are implemented. Here is the code of conv_layer.cpp:

#include <vector>

#include "caffe/layers/conv_layer.hpp"

namespace caffe {

template <typename Dtype>
void ConvolutionLayer<Dtype>::compute_output_shape() {//computes the spatial shape of the layer's output
  const int* kernel_shape_data = this->kernel_shape_.cpu_data();//kernel size
  const int* stride_data = this->stride_.cpu_data();//stride
  const int* pad_data = this->pad_.cpu_data();//padding
  const int* dilation_data = this->dilation_.cpu_data();//kernel dilation
  this->output_shape_.clear();
  for (int i = 0; i < this->num_spatial_axes_; ++i) {
    // i + 1 to skip channel axis
    const int input_dim = this->input_shape(i + 1);//height and width of the input blob are read here
    const int kernel_extent = dilation_data[i] * (kernel_shape_data[i] - 1) + 1;//effective (dilated) kernel extent
    const int output_dim = (input_dim + 2 * pad_data[i] - kernel_extent)
        / stride_data[i] + 1;//height and width of the blob produced by the convolution
    this->output_shape_.push_back(output_dim);
  }
}

template <typename Dtype>
void ConvolutionLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
  const Dtype* weight = this->blobs_[0]->cpu_data();//read the layer's weights: blobs_[0] holds the weights and blobs_[1] holds the biases
  for (int i = 0; i < bottom.size(); ++i) {
    const Dtype* bottom_data = bottom[i]->cpu_data();//read the data of the bottom blob
    Dtype* top_data = top[i]->mutable_cpu_data();
    for (int n = 0; n < this->num_; ++n) {//num_ is the batch size, so the images are processed one at a time
      this->forward_cpu_gemm(bottom_data + n * this->bottom_dim_, weight,
          top_data + n * this->top_dim_);
      if (this->bias_term_) {//if the bias term is enabled
        const Dtype* bias = this->blobs_[1]->cpu_data();
        this->forward_cpu_bias(top_data + n * this->top_dim_, bias);//add the bias
      }
    }
  }
}

template <typename Dtype>
void ConvolutionLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  const Dtype* weight = this->blobs_[0]->cpu_data();//read the weights
  Dtype* weight_diff = this->blobs_[0]->mutable_cpu_diff();//get the weight gradient buffer
  for (int i = 0; i < top.size(); ++i) {
    const Dtype* top_diff = top[i]->cpu_diff();//gradient of each top blob
    const Dtype* bottom_data = bottom[i]->cpu_data();//data of each bottom blob
    Dtype* bottom_diff = bottom[i]->mutable_cpu_diff();//gradient of each bottom blob
    // Bias gradient, if necessary.
    if (this->bias_term_ && this->param_propagate_down_[1]) {//if the bias term is enabled and its gradient should be propagated
      Dtype* bias_diff = this->blobs_[1]->mutable_cpu_diff();//gradient of this layer's bias
      for (int n = 0; n < this->num_; ++n) {
        this->backward_cpu_bias(bias_diff, top_diff + n * this->top_dim_);//accumulate the bias gradient for each image in the batch
      }
    }
    if (this->param_propagate_down_[0] || propagate_down[i]) {
      for (int n = 0; n < this->num_; ++n) {
        // gradient w.r.t. weight. Note that we will accumulate diffs.
        if (this->param_propagate_down_[0]) {//if the weight gradient is needed, back-propagate it
          this->weight_cpu_gemm(bottom_data + n * this->bottom_dim_,
              top_diff + n * this->top_dim_, weight_diff);
        }
        // gradient w.r.t. bottom data, if necessary.
        if (propagate_down[i]) {//if the data gradient should be propagated to this blob, back-propagate it
          this->backward_cpu_gemm(top_diff + n * this->top_dim_, weight,
              bottom_diff + n * this->bottom_dim_);
        }
      }
    }
  }
}

#ifdef CPU_ONLY
STUB_GPU(ConvolutionLayer);
#endif

INSTANTIATE_CLASS(ConvolutionLayer);

}  // namespace caffe

   In compute_output_shape we compute the spatial size of the layer's output feature maps. Does the formula look familiar? output size = (input size + 2*pad - effective kernel size) / stride + 1, where the effective (dilated) kernel size is dilation*(kernel size - 1) + 1.
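   As a quick sanity check, here is a tiny program that evaluates this formula for an assumed 5x5 input, 3x3 kernel, pad 0, stride 2 and dilation 1 (the same configuration used in the im2col illustration further below):

#include <cstdio>

int main() {
  // Assumed configuration, for illustration only.
  const int input_dim = 5, kernel = 3, pad = 0, stride = 2, dilation = 1;
  // Same arithmetic as ConvolutionLayer<Dtype>::compute_output_shape().
  const int kernel_extent = dilation * (kernel - 1) + 1;                      // 3
  const int output_dim = (input_dim + 2 * pad - kernel_extent) / stride + 1;  // 2
  std::printf("output_dim = %d\n", output_dim);  // prints: output_dim = 2
  return 0;
}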

The remaining functions define the layer's forward and backward interfaces.

   At this point you may feel the convolution layer code is surprisingly simple. That is because most of the work is encapsulated behind base_conv_layer.hpp, so let's open base_conv_layer.hpp and see what is defined there. As usual, the source of base_conv_layer.hpp first:

#ifndef CAFFE_BASE_CONVOLUTION_LAYER_HPP_
#define CAFFE_BASE_CONVOLUTION_LAYER_HPP_

#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/im2col.hpp"

namespace caffe {

/**
 * @brief Abstract base class that factors out the BLAS code common to
 *        ConvolutionLayer and DeconvolutionLayer.
 */
template <typename Dtype>
class BaseConvolutionLayer : public Layer<Dtype> {
 public:
  explicit BaseConvolutionLayer(const LayerParameter& param)
      : Layer<Dtype>(param) {}//the constructor body is empty
  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);//convolution layer setup; see the .cpp analysis below
  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);//shapes the layer's outputs; see the .cpp analysis below

  virtual inline int MinBottomBlobs() const { return 1; }
  virtual inline int MinTopBlobs() const { return 1; }
  virtual inline bool EqualNumBottomTopBlobs() const { return true; }//a convolution layer never changes the number of blobs; it typically changes their channel count

 protected:
  // Helper functions that abstract away the column buffer and gemm arguments.
  // The last argument in forward_cpu_gemm is so that we can skip the im2col if
  // we just called weight_cpu_gemm with the same input.
  void forward_cpu_gemm(const Dtype* input, const Dtype* weights,
      Dtype* output, bool skip_im2col = false);//CPU forward pass for the data
  void forward_cpu_bias(Dtype* output, const Dtype* bias);//CPU forward pass for the bias
  void backward_cpu_gemm(const Dtype* input, const Dtype* weights,
      Dtype* output);//CPU backward pass for the data gradient
  void weight_cpu_gemm(const Dtype* input, const Dtype* output, Dtype*
      weights);//CPU backward pass for the weight gradient
  void backward_cpu_bias(Dtype* bias, const Dtype* input);//CPU backward pass for the bias gradient

#ifndef CPU_ONLY
  void forward_gpu_gemm(const Dtype* col_input, const Dtype* weights,
      Dtype* output, bool skip_im2col = false);//GPU forward pass for the data
  void forward_gpu_bias(Dtype* output, const Dtype* bias);//GPU forward pass for the bias
  void backward_gpu_gemm(const Dtype* input, const Dtype* weights,
      Dtype* col_output);//GPU backward pass for the data gradient
  void weight_gpu_gemm(const Dtype* col_input, const Dtype* output, Dtype*
      weights);//GPU backward pass for the weight gradient
  void backward_gpu_bias(Dtype* bias, const Dtype* input);//GPU backward pass for the bias gradient
#endif

  /// @brief The spatial dimensions of the input.
  //returns the height or width of the layer's input blob; note that i starts from 1, i.e. from the axis right after the channel axis
  inline int input_shape(int i) {
    return (*bottom_shape_)[channel_axis_ + i];
  }
  // reverse_dimensions should return true iff we are implementing deconv, so
  // that conv helpers know which dimensions are which.
  virtual bool reverse_dimensions() = 0;//tells whether this is a deconvolution (simply set to false in conv_layer.hpp)
  // Compute height_out_ and width_out_ from other parameters.
  virtual void compute_output_shape() = 0;//computes the layer's output shape

  /// @brief The spatial dimensions of a filter kernel.
  Blob<int> kernel_shape_;//the kernel shape: height * width
  /// @brief The spatial dimensions of the stride.
  Blob<int> stride_;//the stride
  /// @brief The spatial dimensions of the padding.
  Blob<int> pad_;//zero-padding applied to the borders during convolution
  /// @brief The spatial dimensions of the dilation.
  Blob<int> dilation_;//the kernel dilation parameters
  /// @brief The spatial dimensions of the convolution input.
  Blob<int> conv_input_shape_;//shape of the convolution input
  /// @brief The spatial dimensions of the col_buffer.
  vector<int> col_buffer_shape_;//shape of the column buffer that holds the im2col-transformed data of one image
  /// @brief The spatial dimensions of the output.
  vector<int> output_shape_;//shape of the output
  const vector<int>* bottom_shape_;//pointer to the shape of the input blob

  int num_spatial_axes_;//number of spatial dimensions the convolution operates on; usually 2 for 2-D convolution
  int bottom_dim_;//size of one input image in the bottom blob (channels * height * width)
  int top_dim_;//size of one output image in the top blob (channels * height * width)

  int channel_axis_;//index of the channel axis, normally 1; the kernels run over the channels of the input blob
  int num_;//number of images the convolution operates on (the batch size)
  int channels_;//number of channels of a single input blob
  int group_;//size of the convolution groups
  int out_spatial_dim_;//spatial size (height * width) of one output feature map
  int weight_offset_;//offset between the weights of consecutive groups; relevant when group_ > 1
  int num_output_;//number of output channels of this convolution layer
  bool bias_term_;//whether a bias is used
  bool is_1x1_;//whether this is a 1*1 convolution, i.e. 1*1 kernel, stride 1 and pad 0
  bool force_nd_im2col_;//whether n-dimensional im2col is forced

 private:
  // wrap im2col/col2im so we don't have to remember the (long) argument lists
  inline void conv_im2col_cpu(const Dtype* data, Dtype* col_buff) {//CPU helper that unrolls the kernel-sized windows of the input feature maps into the columns of a matrix
    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {
      im2col_cpu(data, conv_in_channels_,
          conv_input_shape_.cpu_data()[1], conv_input_shape_.cpu_data()[2],
          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],
          pad_.cpu_data()[0], pad_.cpu_data()[1],
          stride_.cpu_data()[0], stride_.cpu_data()[1],
          dilation_.cpu_data()[0], dilation_.cpu_data()[1], col_buff);
    } else {
      im2col_nd_cpu(data, num_spatial_axes_, conv_input_shape_.cpu_data(),
          col_buffer_shape_.data(), kernel_shape_.cpu_data(),
          pad_.cpu_data(), stride_.cpu_data(), dilation_.cpu_data(), col_buff);
    }
  }
  inline void conv_col2im_cpu(const Dtype* col_buff, Dtype* data) {//CPU helper that folds the column matrix back into feature maps
    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {
      col2im_cpu(col_buff, conv_in_channels_,
          conv_input_shape_.cpu_data()[1], conv_input_shape_.cpu_data()[2],
          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],
          pad_.cpu_data()[0], pad_.cpu_data()[1],
          stride_.cpu_data()[0], stride_.cpu_data()[1],
          dilation_.cpu_data()[0], dilation_.cpu_data()[1], data);
    } else {
      col2im_nd_cpu(col_buff, num_spatial_axes_, conv_input_shape_.cpu_data(),
          col_buffer_shape_.data(), kernel_shape_.cpu_data(),
          pad_.cpu_data(), stride_.cpu_data(), dilation_.cpu_data(), data);
    }
  }
#ifndef CPU_ONLY
  inline void conv_im2col_gpu(const Dtype* data, Dtype* col_buff) {//GPU helper that unrolls the kernel-sized windows of the input feature maps into the columns of a matrix
    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {
      im2col_gpu(data, conv_in_channels_,
          conv_input_shape_.cpu_data()[1], conv_input_shape_.cpu_data()[2],
          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],
          pad_.cpu_data()[0], pad_.cpu_data()[1],
          stride_.cpu_data()[0], stride_.cpu_data()[1],
          dilation_.cpu_data()[0], dilation_.cpu_data()[1], col_buff);
    } else {
      im2col_nd_gpu(data, num_spatial_axes_, num_kernels_im2col_,
          conv_input_shape_.gpu_data(), col_buffer_.gpu_shape(),
          kernel_shape_.gpu_data(), pad_.gpu_data(),
          stride_.gpu_data(), dilation_.gpu_data(), col_buff);
    }
  }
  inline void conv_col2im_gpu(const Dtype* col_buff, Dtype* data) {//GPU helper that folds the column matrix back into feature maps
    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {
      col2im_gpu(col_buff, conv_in_channels_,
          conv_input_shape_.cpu_data()[1], conv_input_shape_.cpu_data()[2],
          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],
          pad_.cpu_data()[0], pad_.cpu_data()[1],
          stride_.cpu_data()[0], stride_.cpu_data()[1],
          dilation_.cpu_data()[0], dilation_.cpu_data()[1], data);
    } else {
      col2im_nd_gpu(col_buff, num_spatial_axes_, num_kernels_col2im_,
          conv_input_shape_.gpu_data(), col_buffer_.gpu_shape(),
          kernel_shape_.gpu_data(), pad_.gpu_data(), stride_.gpu_data(),
          dilation_.gpu_data(), data);
    }
  }
#endif

  int num_kernels_im2col_;//number of work items for the GPU im2col kernel (conv_in_channels_ * conv_out_spatial_dim_)
  int num_kernels_col2im_;//number of work items for the GPU col2im kernel (the size of one image)
  int conv_out_channels_;//number of output channels of the convolution
  int conv_in_channels_;//number of input channels of the convolution
  int conv_out_spatial_dim_;//spatial size of one output channel of the convolution
  int kernel_dim_;//number of weights in one filter: (input channels / group) * kernel height * kernel width
  int col_offset_;//offset between groups in the column buffer: kernel_dim_ * conv_out_spatial_dim_
  int output_offset_;//offset between groups in the output, i.e. one group's output volume

  Blob<Dtype> col_buffer_;//buffer holding the im2col matrix: every kernel-sized window of one image unrolled into a column
  Blob<Dtype> bias_multiplier_;//vector of ones used to broadcast the bias
};

}  // namespace caffe

#endif  // CAFFE_BASE_CONVOLUTION_LAYER_HPP_
   Quite a few members are declared in the code above. First come LayerSetUp and Reshape, which are analysed in detail in the corresponding .cpp file. Next are the CPU and GPU forward- and backward-pass helpers, followed by input_shape, which is called from compute_output_shape in conv_layer.cpp to obtain the spatial size of the layer's input feature maps. After that, a large set of member variables describes the layer's inputs and outputs, weights and operating parameters, and finally there are helpers that convert between feature maps and the unrolled column matrix. One tip: every member whose name contains "offset" is related to convolution groups; these variables matter when group_ > 1. To illustrate the conversion between feature maps and the column matrix, I drew the figure below.

[Figure: im2col of a 5*5 input with a 3*3 kernel, stride 2 and pad 0, showing each window unrolled into one column]
   The figure above shows the original image and the corresponding column matrix for a convolution with a 5*5 input, a 3*3 kernel, pad 0 in both directions and stride 2 in both directions. im2col takes each kernel-sized window of the input image, unrolls it into a column vector and stores the columns side by side; col2im performs the reverse mapping.
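   For readers who prefer code to pictures, here is a minimal, single-image sketch of the im2col idea (a simplified stand-in, not caffe's actual im2col_cpu, whose loop structure differs): each kernel-sized window becomes one column of a matrix with channels*kernel_h*kernel_w rows. With the assumed 5*5 input, 3*3 kernel, pad 0 and stride 2, the result is a 9 x 4 column matrix.

#include <cstdio>
#include <vector>

// Simplified sketch of the im2col transform for a single image.
void im2col_sketch(const std::vector<float>& im, int channels, int height, int width,
                   int kernel_h, int kernel_w, int pad, int stride,
                   std::vector<float>& col) {
  const int out_h = (height + 2 * pad - kernel_h) / stride + 1;
  const int out_w = (width + 2 * pad - kernel_w) / stride + 1;
  col.assign(static_cast<size_t>(channels) * kernel_h * kernel_w * out_h * out_w, 0.f);
  for (int c = 0; c < channels; ++c)
    for (int kh = 0; kh < kernel_h; ++kh)
      for (int kw = 0; kw < kernel_w; ++kw) {
        const int row = (c * kernel_h + kh) * kernel_w + kw;  // one row per kernel element
        for (int oh = 0; oh < out_h; ++oh)
          for (int ow = 0; ow < out_w; ++ow) {                // one column per output position
            const int ih = oh * stride - pad + kh;
            const int iw = ow * stride - pad + kw;
            const bool inside = ih >= 0 && ih < height && iw >= 0 && iw < width;
            col[(row * out_h + oh) * out_w + ow] =
                inside ? im[(c * height + ih) * width + iw] : 0.f;  // zero-padding
          }
      }
}

int main() {
  // Assumed example: 1 channel, 5x5 input, 3x3 kernel, pad 0, stride 2.
  std::vector<float> im(25);
  for (int i = 0; i < 25; ++i) im[i] = static_cast<float>(i);
  std::vector<float> col;
  im2col_sketch(im, 1, 5, 5, 3, 3, 0, 2, col);
  // 1*3*3 = 9 rows, 2*2 = 4 columns.
  std::printf("col matrix has %zu entries (9 rows x 4 columns)\n", col.size());
  return 0;
}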

   Now let's look at the code of base_conv_layer.cpp:

#include <algorithm>
#include <vector>

#include "caffe/filler.hpp"
#include "caffe/layers/base_conv_layer.hpp"
#include "caffe/util/im2col.hpp"
#include "caffe/util/math_functions.hpp"

namespace caffe {

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
  // Configure the kernel size, padding, stride, and inputs.
  ConvolutionParameter conv_param = this->layer_param_.convolution_param();//read the layer parameters
  force_nd_im2col_ = conv_param.force_nd_im2col();//read the flag that forces n-dimensional im2col
  /*channel_axis_ is read from the axis parameter of the layer definition and defaults to 1, meaning the
  summation runs over the channel axis: for a (N,C,H,W) input blob, the kernels belonging to one output
  channel each convolve one input channel in 2-D, and the per-channel results are summed to form a single
  output feature map*/
  channel_axis_ = bottom[0]->CanonicalAxisIndex(conv_param.axis());
  const int first_spatial_axis = channel_axis_ + 1;//index of the first spatial axis of the input, usually H (height)
  const int num_axes = bottom[0]->num_axes();//number of axes of the bottom blob
  num_spatial_axes_ = num_axes - first_spatial_axis;//number of spatial dimensions the convolution operates on
  CHECK_GE(num_spatial_axes_, 0);//the number of spatial dimensions must be non-negative
  vector<int> bottom_dim_blob_shape(1, num_spatial_axes_ + 1);//shape used to initialize the convolution input descriptor, normally 3-D (C,H,W)
  vector<int> spatial_dim_blob_shape(1, std::max(num_spatial_axes_, 1));//shape used to initialize the kernel descriptor
  // Setup filter kernel dimensions (kernel_shape_).
  kernel_shape_.Reshape(spatial_dim_blob_shape);//initialize the kernel shape (height * width)
  int* kernel_shape_data = kernel_shape_.mutable_cpu_data();//pointer to the kernel shape data
  /*check whether kernel_h/kernel_w are set explicitly for a 2-D convolution; if so, use them (in that case
  kernel_size must not be set as well, otherwise it is an error). If they are not set, the kernel shape is
  taken from the kernel_size parameter; kernels are usually square*/
  if (conv_param.has_kernel_h() || conv_param.has_kernel_w()) {
    CHECK_EQ(num_spatial_axes_, 2)
        << "kernel_h & kernel_w can only be used for 2D convolution.";
    CHECK_EQ(0, conv_param.kernel_size_size())
        << "Either kernel_size or kernel_h/w should be specified; not both.";
    kernel_shape_data[0] = conv_param.kernel_h();
    kernel_shape_data[1] = conv_param.kernel_w();
  } else {
    const int num_kernel_dims = conv_param.kernel_size_size();
    CHECK(num_kernel_dims == 1 || num_kernel_dims == num_spatial_axes_)
        << "kernel_size must be specified once, or once per spatial dimension "
        << "(kernel_size specified " << num_kernel_dims << " times; "
        << num_spatial_axes_ << " spatial dims).";
      for (int i = 0; i < num_spatial_axes_; ++i) {
        kernel_shape_data[i] =
            conv_param.kernel_size((num_kernel_dims == 1) ? 0 : i);
      }
  }
  //check that the kernel dimensions (height and width) are valid
  for (int i = 0; i < num_spatial_axes_; ++i) {
    CHECK_GT(kernel_shape_data[i], 0) << "Filter dimensions must be nonzero.";
  }
  // Setup stride dimensions (stride_).
  stride_.Reshape(spatial_dim_blob_shape);//initialize the stride; for a 2-D convolution the stride has two components
  int* stride_data = stride_.mutable_cpu_data();//pointer to the stride data
  /*check whether stride_h/stride_w are set explicitly for a 2-D convolution; if so, use them. Otherwise the
  stride is taken from the layer's stride parameter in the network definition; if that is also missing it
  defaults to kDefaultStride, i.e. 1. Usually a single stride value is given, shared by the height and
  width directions.*/
  if (conv_param.has_stride_h() || conv_param.has_stride_w()) {
    CHECK_EQ(num_spatial_axes_, 2)
        << "stride_h & stride_w can only be used for 2D convolution.";
    CHECK_EQ(0, conv_param.stride_size())
        << "Either stride or stride_h/w should be specified; not both.";
    stride_data[0] = conv_param.stride_h();
    stride_data[1] = conv_param.stride_w();
  } else {
    const int num_stride_dims = conv_param.stride_size();
    CHECK(num_stride_dims == 0 || num_stride_dims == 1 ||
          num_stride_dims == num_spatial_axes_)
        << "stride must be specified once, or once per spatial dimension "
        << "(stride specified " << num_stride_dims << " times; "
        << num_spatial_axes_ << " spatial dims).";
    const int kDefaultStride = 1;
    for (int i = 0; i < num_spatial_axes_; ++i) {
      stride_data[i] = (num_stride_dims == 0) ? kDefaultStride :
          conv_param.stride((num_stride_dims == 1) ? 0 : i);
      CHECK_GT(stride_data[i], 0) << "Stride dimensions must be nonzero.";
    }
  }
  // Setup pad dimensions (pad_).
  /*check whether pad_h/pad_w are set explicitly; if so, use them. Otherwise the padding is taken from the
  layer's pad parameter in the network definition; if that is also missing it defaults to kDefaultPad,
  i.e. 0. Usually a single pad value is given, shared by the height and width directions.*/
  pad_.Reshape(spatial_dim_blob_shape);
  int* pad_data = pad_.mutable_cpu_data();
  if (conv_param.has_pad_h() || conv_param.has_pad_w()) {
    CHECK_EQ(num_spatial_axes_, 2)
        << "pad_h & pad_w can only be used for 2D convolution.";
    CHECK_EQ(0, conv_param.pad_size())
        << "Either pad or pad_h/w should be specified; not both.";
    pad_data[0] = conv_param.pad_h();
    pad_data[1] = conv_param.pad_w();
  } else {
    const int num_pad_dims = conv_param.pad_size();
    CHECK(num_pad_dims == 0 || num_pad_dims == 1 ||
          num_pad_dims == num_spatial_axes_)
        << "pad must be specified once, or once per spatial dimension "
        << "(pad specified " << num_pad_dims << " times; "
        << num_spatial_axes_ << " spatial dims).";
    const int kDefaultPad = 0;
    for (int i = 0; i < num_spatial_axes_; ++i) {
      pad_data[i] = (num_pad_dims == 0) ? kDefaultPad :
          conv_param.pad((num_pad_dims == 1) ? 0 : i);
    }
  }
  /*check whether a kernel dilation is specified; if so, use it. Otherwise the dilation is taken from the
  layer's dilation parameter in the network definition; if that is missing it defaults to kDefaultDilation,
  i.e. 1, meaning the kernel is not dilated.*/
  // Setup dilation dimensions (dilation_).
  dilation_.Reshape(spatial_dim_blob_shape);
  int* dilation_data = dilation_.mutable_cpu_data();
  const int num_dilation_dims = conv_param.dilation_size();
  CHECK(num_dilation_dims == 0 || num_dilation_dims == 1 ||
        num_dilation_dims == num_spatial_axes_)
      << "dilation must be specified once, or once per spatial dimension "
      << "(dilation specified " << num_dilation_dims << " times; "
      << num_spatial_axes_ << " spatial dims).";
  const int kDefaultDilation = 1;
  for (int i = 0; i < num_spatial_axes_; ++i) {
    dilation_data[i] = (num_dilation_dims == 0) ? kDefaultDilation :
                       conv_param.dilation((num_dilation_dims == 1) ? 0 : i);
  }
  // Special case: im2col is the identity for 1x1 convolution with stride 1
  // and no padding, so flag for skipping the buffer and transformation.
  //determine whether this is a 1*1 convolution
  is_1x1_ = true;
  for (int i = 0; i < num_spatial_axes_; ++i) {
    is_1x1_ &=
        kernel_shape_data[i] == 1 && stride_data[i] == 1 && pad_data[i] == 0;
    if (!is_1x1_) { break; }
  }
  // Configure output channels and groups.
  channels_ = bottom[0]->shape(channel_axis_);//number of channels of a single input blob
  num_output_ = this->layer_param_.convolution_param().num_output();//number of output channels of the layer
  CHECK_GT(num_output_, 0);//the number of output channels must be positive
  group_ = this->layer_param_.convolution_param().group();//size of the convolution groups
  CHECK_EQ(channels_ % group_, 0);//the number of input channels must be divisible by the number of groups
  CHECK_EQ(num_output_ % group_, 0)//the number of output channels must be divisible by the number of groups
      << "Number of output should be multiples of group.";
  if (reverse_dimensions()) {//for deconvolution the roles of the input and output channels are swapped; otherwise they are not
    conv_out_channels_ = channels_;
    conv_in_channels_ = num_output_;
  } else {
    conv_out_channels_ = num_output_;
    conv_in_channels_ = channels_;
  }
  // Handle the parameters: weights and biases.
  // - blobs_[0] holds the filter weights
  // - blobs_[1] holds the biases (optional)
  vector<int> weight_shape(2);//shape of the layer's learnable weights
  weight_shape[0] = conv_out_channels_;//the first dimension is the number of output channels, i.e. each output channel has its own set of kernels; think of it as num
  weight_shape[1] = conv_in_channels_ / group_;//the second dimension is the number of input channels divided by the number of groups; think of it as channel
  for (int i = 0; i < num_spatial_axes_; ++i) {
    weight_shape.push_back(kernel_shape_data[i]);//the third and fourth dimensions are the kernel height and width
  }
  bias_term_ = this->layer_param_.convolution_param().bias_term();//whether a bias is used
  vector<int> bias_shape(bias_term_, num_output_);//shape of the bias; if bias_term_ is true (1), then bias_shape[0] = num_output_
  if (this->blobs_.size() > 0) {
    CHECK_EQ(1 + bias_term_, this->blobs_.size())//check that blobs_ has the expected number of entries
        << "Incorrect number of weight blobs.";
    if (weight_shape != this->blobs_[0]->shape()) {//if weight_shape differs from the shape of blobs_[0], report an error
      Blob<Dtype> weight_shaped_blob(weight_shape);
      LOG(FATAL) << "Incorrect weight shape: expected shape "
          << weight_shaped_blob.shape_string() << "; instead, shape was "
          << this->blobs_[0]->shape_string();
    }
    if (bias_term_ && bias_shape != this->blobs_[1]->shape()) {//if bias_shape differs from the shape of blobs_[1], report an error
      Blob<Dtype> bias_shaped_blob(bias_shape);
      LOG(FATAL) << "Incorrect bias shape: expected shape "
          << bias_shaped_blob.shape_string() << "; instead, shape was "
          << this->blobs_[1]->shape_string();
    }
    LOG(INFO) << "Skipping parameter initialization";
  } else {//if blobs_.size() == 0, size blobs_ according to whether a bias is used
    if (bias_term_) {
      this->blobs_.resize(2);
    } else {
      this->blobs_.resize(1);
    }
    // Initialize and fill the weights:
    // output channels x input channels per-group x kernel height x kernel width
    this->blobs_[0].reset(new Blob<Dtype>(weight_shape));//allocate blobs_[0] with shape weight_shape
    shared_ptr<Filler<Dtype> > weight_filler(GetFiller<Dtype>(
        this->layer_param_.convolution_param().weight_filler()));//read the weight filler from the layer definition; the default filler is constant with value 0
    weight_filler->Fill(this->blobs_[0].get());//fill the weights
    // If necessary, initialize and fill the biases.
    if (bias_term_) {
      this->blobs_[1].reset(new Blob<Dtype>(bias_shape));//if a bias is used, allocate blobs_[1] and read the bias filler from the layer definition; the default filler is constant with value 0
      shared_ptr<Filler<Dtype> > bias_filler(GetFiller<Dtype>(
          this->layer_param_.convolution_param().bias_filler()));
      bias_filler->Fill(this->blobs_[1].get());//fill the bias
    }
  }
  kernel_dim_ = this->blobs_[0]->count(1);//the number of weights in one filter: (input channels / group) * kernel height * kernel width
  weight_offset_ = conv_out_channels_ * kernel_dim_ / group_;//offset between the weights of consecutive groups, i.e. (conv_out_channels_ / group_) * kernel_dim_
  // Propagate gradients to the parameters (as directed by backward pass).
  this->param_propagate_down_.resize(this->blobs_.size(), true);//enable gradient propagation to the weights and (optionally) the bias
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
  const int first_spatial_axis = channel_axis_ + 1;//index of the first spatial axis the convolution operates on, usually height
  /*check that the number of axes of the bottom blob equals the index of the first spatial axis plus the
  number of spatial dimensions the convolution operates on*/
  CHECK_EQ(bottom[0]->num_axes(), first_spatial_axis + num_spatial_axes_)
      << "bottom num_axes may not change.";
  num_ = bottom[0]->count(0, channel_axis_);//number of images fed into the layer (the batch size)
  CHECK_EQ(bottom[0]->shape(channel_axis_), channels_)//check that the number of input channels is valid
  // TODO: generalize to handle inputs of different shapes.
  for (int bottom_id = 1; bottom_id < bottom.size(); ++bottom_id) {
    CHECK(bottom[0]->shape() == bottom[bottom_id]->shape())//if several bottom blobs are given, check that they all have the same shape
        << "All inputs must have the same shape.";
  }
  // Shape the tops.
  bottom_shape_ = &bottom[0]->shape();//shape of the layer's input blob
  compute_output_shape();//compute the spatial shape of the layer's output
  vector<int> top_shape(bottom[0]->shape().begin(),//top_shape starts with the leading axes of the input blob, i.e. its num
      bottom[0]->shape().begin() + channel_axis_);
  top_shape.push_back(num_output_);//append the number of output channels
  for (int i = 0; i < num_spatial_axes_; ++i) {
    top_shape.push_back(output_shape_[i]);//append the spatial dimensions of the output
  }
  for (int top_id = 0; top_id < top.size(); ++top_id) {
    top[top_id]->Reshape(top_shape);//reshape every top blob accordingly
  }
  if (reverse_dimensions()) {
    /*for deconvolution, conv_out_spatial_dim_ is the spatial size of one channel of bottom[0], which plays the role of the convolution output*/
    conv_out_spatial_dim_ = bottom[0]->count(first_spatial_axis);
  } else {
    /*otherwise, conv_out_spatial_dim_ is the spatial size of one channel of top[0]*/
    conv_out_spatial_dim_ = top[0]->count(first_spatial_axis);
  }
  col_offset_ = kernel_dim_ * conv_out_spatial_dim_;//col_offset_ is the offset between groups in the column buffer
  output_offset_ = conv_out_channels_ * conv_out_spatial_dim_ / group_;//output_offset_ is the offset between groups in the output, i.e. one group's output volume
  // Setup input dimensions (conv_input_shape_).
  vector<int> bottom_dim_blob_shape(1, num_spatial_axes_ + 1);//shape used to initialize the convolution input descriptor, normally 3-D (C,H,W)
  conv_input_shape_.Reshape(bottom_dim_blob_shape);//initialize the convolution input shape blob, usually of size 3
  int* conv_input_shape_data = conv_input_shape_.mutable_cpu_data();
  for (int i = 0; i < num_spatial_axes_ + 1; ++i) {//fill in the convolution input dimensions, normally in the order channel->height->width
    if (reverse_dimensions()) {
      conv_input_shape_data[i] = top[0]->shape(channel_axis_ + i);
    } else {
      conv_input_shape_data[i] = bottom[0]->shape(channel_axis_ + i);
    }
  }
  // The im2col result buffer will only hold one image at a time to avoid
  // overly large memory usage. In the special case of 1x1 convolution
  // it goes lazily unused to save memory.
  col_buffer_shape_.clear();
  col_buffer_shape_.push_back(kernel_dim_ * group_);//the first dimension of col_buffer_shape_ is (total input channels * kernel height * kernel width)
  for (int i = 0; i < num_spatial_axes_; ++i) {//the remaining dimensions are the spatial dimensions of the convolution output
    if (reverse_dimensions()) {
      col_buffer_shape_.push_back(input_shape(i + 1));
    } else {
      col_buffer_shape_.push_back(output_shape_[i]);
    }
  }
  col_buffer_.Reshape(col_buffer_shape_);//allocate the column buffer
  bottom_dim_ = bottom[0]->count(channel_axis_);//bottom_dim_ is the size of one input image (channels * height * width)
  top_dim_ = top[0]->count(channel_axis_);//top_dim_ is the size of one output image (channels * height * width)
  num_kernels_im2col_ = conv_in_channels_ * conv_out_spatial_dim_;//number of work items for the GPU im2col kernel
  num_kernels_col2im_ = reverse_dimensions() ? top_dim_ : bottom_dim_;//number of work items for the GPU col2im kernel (the size of one image)
  // Set up the all ones "bias multiplier" for adding biases by BLAS
  out_spatial_dim_ = top[0]->count(first_spatial_axis);//spatial size of one output channel
  if (bias_term_) {//if a bias is used, initialize the bias multiplier blob
    //the bias multiplier has one entry per spatial position of an output channel; every entry is 1, so a
    //single gemm can broadcast each channel's bias over all spatial positions
    vector<int> bias_multiplier_shape(1, out_spatial_dim_);
    bias_multiplier_.Reshape(bias_multiplier_shape);
    caffe_set(bias_multiplier_.count(), Dtype(1),//set all the multipliers to 1
        bias_multiplier_.mutable_cpu_data());
  }
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::forward_cpu_gemm(const Dtype* input,//CPU forward pass for the data
    const Dtype* weights, Dtype* output, bool skip_im2col) {
  const Dtype* col_buff = input;
  if (!is_1x1_) {
    if (!skip_im2col) {//im2col unrolls the kernel-sized windows of the input feature maps into side-by-side columns
      conv_im2col_cpu(input, col_buffer_.mutable_cpu_data());
    }
    col_buff = col_buffer_.cpu_data();
  }
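  /* Per group, the matrix product below computes
     output[M x N] = weights[M x K] * col_buff[K x N], with
     M = conv_out_channels_ / group_, N = conv_out_spatial_dim_, K = kernel_dim_. */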
  for (int g = 0; g < group_; ++g) {
    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, conv_out_channels_ /
        group_, conv_out_spatial_dim_, kernel_dim_,
        (Dtype)1., weights + weight_offset_ * g, col_buff + col_offset_ * g,
        (Dtype)0., output + output_offset_ * g);
  }
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::forward_cpu_bias(Dtype* output,//CPU forward pass for the bias
    const Dtype* bias) {
  caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num_output_,
      out_spatial_dim_, 1, (Dtype)1., bias, bias_multiplier_.cpu_data(),
      (Dtype)1., output);
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::backward_cpu_gemm(const Dtype* output,//CPU backward pass for the data gradient
    const Dtype* weights, Dtype* input) {
  Dtype* col_buff = col_buffer_.mutable_cpu_data();
  if (is_1x1_) {
    col_buff = input;
  }
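  /* Per group, the matrix product below computes the gradient of the column buffer,
     col_buff[K x N] = weights^T[K x M] * output[M x N] (output holds the top gradient), with
     M = conv_out_channels_ / group_, N = conv_out_spatial_dim_, K = kernel_dim_. */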
  for (int g = 0; g < group_; ++g) {
    caffe_cpu_gemm<Dtype>(CblasTrans, CblasNoTrans, kernel_dim_,
        conv_out_spatial_dim_, conv_out_channels_ / group_,
        (Dtype)1., weights + weight_offset_ * g, output + output_offset_ * g,
        (Dtype)0., col_buff + col_offset_ * g);
  }
  if (!is_1x1_) {
    conv_col2im_cpu(col_buff, input);//fold the columns back into the image-shaped gradient
  }
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::weight_cpu_gemm(const Dtype* input,//CPU backward pass for the weight gradient
    const Dtype* output, Dtype* weights) {
  const Dtype* col_buff = input;
  if (!is_1x1_) {
    conv_im2col_cpu(input, col_buffer_.mutable_cpu_data());
    col_buff = col_buffer_.cpu_data();
  }
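  /* Per group, the matrix product below accumulates the weight gradient,
     weights[M x N] += output[M x K] * col_buff^T[K x N] (output holds the top gradient), with
     M = conv_out_channels_ / group_, N = kernel_dim_, K = conv_out_spatial_dim_;
     beta = 1, so the gradient accumulates over the images of the batch. */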
  for (int g = 0; g < group_; ++g) {
    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasTrans, conv_out_channels_ / group_,
        kernel_dim_, conv_out_spatial_dim_,
        (Dtype)1., output + output_offset_ * g, col_buff + col_offset_ * g,
        (Dtype)1., weights + weight_offset_ * g);
  }
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::backward_cpu_bias(Dtype* bias,//CPU backward pass for the bias gradient
    const Dtype* input) {
  caffe_cpu_gemv<Dtype>(CblasNoTrans, num_output_, out_spatial_dim_, 1.,
      input, bias_multiplier_.cpu_data(), 1., bias);
}

#ifndef CPU_ONLY

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::forward_gpu_gemm(const Dtype* input,//GPU forward pass for the data
    const Dtype* weights, Dtype* output, bool skip_im2col) {
  const Dtype* col_buff = input;
  if (!is_1x1_) {
    if (!skip_im2col) {
      conv_im2col_gpu(input, col_buffer_.mutable_gpu_data());
    }
    col_buff = col_buffer_.gpu_data();
  }
  for (int g = 0; g < group_; ++g) {
    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, conv_out_channels_ /
        group_, conv_out_spatial_dim_, kernel_dim_,
        (Dtype)1., weights + weight_offset_ * g, col_buff + col_offset_ * g,
        (Dtype)0., output + output_offset_ * g);
  }
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::forward_gpu_bias(Dtype* output,//GPU forward pass for the bias
    const Dtype* bias) {
  caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num_output_,
      out_spatial_dim_, 1, (Dtype)1., bias, bias_multiplier_.gpu_data(),
      (Dtype)1., output);
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::backward_gpu_gemm(const Dtype* output,//GPU backward pass for the data gradient
    const Dtype* weights, Dtype* input) {
  Dtype* col_buff = col_buffer_.mutable_gpu_data();
  if (is_1x1_) {
    col_buff = input;
  }
  for (int g = 0; g < group_; ++g) {
    caffe_gpu_gemm<Dtype>(CblasTrans, CblasNoTrans, kernel_dim_,
        conv_out_spatial_dim_, conv_out_channels_ / group_,
        (Dtype)1., weights + weight_offset_ * g, output + output_offset_ * g,
        (Dtype)0., col_buff + col_offset_ * g);
  }
  if (!is_1x1_) {
    conv_col2im_gpu(col_buff, input);
  }
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::weight_gpu_gemm(const Dtype* input,//GPU backward pass for the weight gradient
    const Dtype* output, Dtype* weights) {
  const Dtype* col_buff = input;
  if (!is_1x1_) {
    conv_im2col_gpu(input, col_buffer_.mutable_gpu_data());
    col_buff = col_buffer_.gpu_data();
  }
  for (int g = 0; g < group_; ++g) {
    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasTrans, conv_out_channels_ / group_,
        kernel_dim_, conv_out_spatial_dim_,
        (Dtype)1., output + output_offset_ * g, col_buff + col_offset_ * g,
        (Dtype)1., weights + weight_offset_ * g);
  }
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::backward_gpu_bias(Dtype* bias,//GPU backward pass for the bias gradient
    const Dtype* input) {
  caffe_gpu_gemv<Dtype>(CblasNoTrans, num_output_, out_spatial_dim_, 1.,
      input, bias_multiplier_.gpu_data(), 1., bias);
}

#endif  // !CPU_ONLY

INSTANTIATE_CLASS(BaseConvolutionLayer);

}  // namespace caffe

   Detailed comments are embedded in the code above. In base_conv_layer.cpp, LayerSetUp first initializes the kernel size, stride, padding and dilation, and then uses these parameters to set up the shapes of the layer's learnable parameters. The Reshape function lays out the layer's output and also initializes a bias multiplier: a vector of ones with one entry per spatial position of an output channel, which lets a single gemm call broadcast each output channel's bias over every spatial position of its feature map; this detail is worth noting. The CPU and GPU forward and backward helpers defined afterwards rely heavily on the routines in caffe/util/math_functions.hpp; at this level it is enough to understand which quantity (data, data gradient, weight gradient or bias gradient) each of them handles and in which direction (forward or backward).
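   To see what that bias gemm does, here is a minimal sketch of the computation performed by forward_cpu_bias, written with plain loops and assumed toy sizes (2 output channels, out_spatial_dim_ = 3); it illustrates the broadcast, it is not caffe's actual BLAS call:

#include <cstdio>

int main() {
  // Assumed toy sizes: 2 output channels, 3 spatial positions per channel.
  const int num_output = 2, out_spatial_dim = 3;
  const float bias[2] = {0.5f, -1.0f};
  const float bias_multiplier[3] = {1.f, 1.f, 1.f};  // caffe fills this blob with ones
  float output[6] = {0.f};  // pretend this already holds the convolution result
  // Equivalent of: output += bias (num_output x 1) * bias_multiplier (1 x out_spatial_dim)
  for (int c = 0; c < num_output; ++c)
    for (int s = 0; s < out_spatial_dim; ++s)
      output[c * out_spatial_dim + s] += bias[c] * bias_multiplier[s];
  for (int i = 0; i < num_output * out_spatial_dim; ++i)
    std::printf("%.1f ", output[i]);  // 0.5 0.5 0.5 -1.0 -1.0 -1.0
  std::printf("\n");
  return 0;
}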

   With that, the walkthrough of the convolution layer is nearly complete. The low-level details are quite tightly encapsulated: the element-level forward and backward computations of the convolution are not spelled out here, but from the structure of the layer you can infer what the underlying operations do; I plan to cover that code in a later post. Reading the convolution layer source left me with deep respect for the hacker spirit and craftsmanship of the caffe developers, who managed to write code with real elegance.

   Working through the convolution layer also reinforced my feeling that reading source code is an effective way to improve as an engineer: you not only follow the authors' reasoning, you also pick up disciplined coding habits. As a deep-learning beginner I will inevitably have made mistakes and omissions in this analysis; please do point them out and I will correct them.

   You are welcome to read my follow-up posts on the caffe source code; your support and encouragement are my greatest motivation!


written by jiong

How can the shine of life be seen without persisting to the end?
