These two classes are the cornerstones of the Caffe framework; as the names suggest, deep learning in Caffe revolves around them. As before, let's look at the concrete implementation in the code.
1. Layer
Layers come in five broad categories, each further subdivided by function, but all of them inherit from a single base class, Layer. The five categories are:
Data Layers
Common Layers
Activation / Neuron Layers
Loss Layers
Vision Layers
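As an illustration of how the categories fit together in practice, here is a schematic net definition drawing one layer from each category (layer-specific parameters such as convolution kernel size are omitted, so this is a sketch of the structure, not a runnable prototxt):

```prototxt
layer { name: "data"  type: "Data"            top: "data" top: "label" }  # Data Layer
layer { name: "conv1" type: "Convolution"     bottom: "data"  top: "conv1" }  # Vision Layer
layer { name: "relu1" type: "ReLU"            bottom: "conv1" top: "conv1" }  # Activation / Neuron Layer
layer { name: "fc1"   type: "InnerProduct"    bottom: "conv1" top: "fc1" }    # Common Layer
layer { name: "loss"  type: "SoftmaxWithLoss" bottom: "fc1" bottom: "label" top: "loss" }  # Loss Layer
```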
The main member variables and functions of the base Layer class, best read together with Caffe's own English comments:
protected:
/** The protobuf that stores the layer parameters */
LayerParameter layer_param_;
/** The phase: TRAIN or TEST */
Phase phase_;
/** The vector that stores the learnable parameters as a set of blobs. */
vector<shared_ptr<Blob<Dtype> > > blobs_;  // blobs_[0] is the weights, blobs_[1] is the bias
/** Vector indicating whether to compute the diff of each param blob. */
vector<bool> param_propagate_down_;  // whether each parameter blob is updated from the back-propagated gradients
/** The vector that indicates whether each top blob has a non-zero weight in
* the objective function. */
vector<Dtype> loss_;  // presumably non-zero only for the final loss layers such as SoftmaxWithLoss; the loss weight assigned to each top blob
/** Device context */
DeviceContext *device_context_;
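The blobs_[0]/blobs_[1] convention above is worth making concrete. The following is a standalone sketch (MiniBlob and MiniLayer are illustrative stand-ins, not Caffe's actual classes) of how a layer's learnable parameters are stored as a vector of blobs, each holding a data array and a matching diff array for gradients:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Illustrative stand-in for caffe::Blob: a parameter holds its values
// (data) and the gradients accumulated for it (diff), same shape.
template <typename Dtype>
struct MiniBlob {
  std::vector<Dtype> data, diff;
  explicit MiniBlob(size_t n) : data(n, Dtype(0)), diff(n, Dtype(0)) {}
  size_t count() const { return data.size(); }
};

// Illustrative stand-in for the relevant part of caffe::Layer.
template <typename Dtype>
struct MiniLayer {
  std::vector<std::shared_ptr<MiniBlob<Dtype> > > blobs_;
  std::vector<bool> param_propagate_down_;

  // e.g. a fully connected layer with `in` inputs and `out` outputs:
  // blobs_[0] holds the in*out weight matrix, blobs_[1] the out biases.
  void SetUp(size_t in, size_t out) {
    blobs_.push_back(std::make_shared<MiniBlob<Dtype> >(in * out));  // weights
    blobs_.push_back(std::make_shared<MiniBlob<Dtype> >(out));       // bias
    param_propagate_down_.assign(blobs_.size(), true);  // update all params
  }
};
```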
/** @brief Using the CPU device, compute the layer output. */
virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) = 0;
/**
* @brief Using the GPU device, compute the layer output.
* Fall back to Forward_cpu() if unavailable.
*/
virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  // LOG(WARNING) << "Using CPU code as backup.";
  Forward_cpu(bottom, top);
}
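Note the design here: Forward_cpu is pure virtual, but Forward_gpu has a default body that simply delegates to the CPU path, so a layer without a GPU kernel still works everywhere. A standalone sketch of this fallback pattern (FakeLayer and ScaleLayer are made-up names, not Caffe classes):

```cpp
#include <cassert>
#include <vector>

// Base class: the GPU entry point falls back to the CPU implementation
// by default, mirroring Layer::Forward_gpu in the listing above.
struct FakeLayer {
  virtual ~FakeLayer() {}
  virtual void Forward_cpu(const std::vector<float>& bottom,
                           std::vector<float>* top) = 0;
  virtual void Forward_gpu(const std::vector<float>& bottom,
                           std::vector<float>* top) {
    Forward_cpu(bottom, top);  // no GPU kernel: reuse the CPU code
  }
};

// A layer that only bothers to implement the CPU path (top = 2 * bottom);
// callers may still invoke Forward_gpu and get correct results.
struct ScaleLayer : FakeLayer {
  void Forward_cpu(const std::vector<float>& bottom,
                   std::vector<float>* top) override {
    top->clear();
    for (float v : bottom) top->push_back(2.f * v);
  }
};
```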
/**
* @brief Using the CPU device, compute the gradients for any parameters and
* for the bottom blobs if propagate_down is true.
*/
virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down,
    const vector<Blob<Dtype>*>& bottom) = 0;