caffe -- net.cpp analysis


http://blog.csdn.net/mrhiuser/article/details/52345469
http://blog.csdn.net/langb2014/article/details/50987593
The Net class is a member of the Solver class. net.cpp defines all operations on a Net, including:
  • Init
  • GetLearningRateAndWeightDecay
  • ForwardPrefilled
  • Backward
  • ShareTrainedLayersWith
  • CopyTrainedLayersFrom
  • ToProto
  • Update
  • has_blob
  • blob_by_name
  • has_layer
  • layer_by_name

Part 1. Overview of the code

The Init() function mainly calls the following helpers:

1. FilterNet(in_param, &filtered_param);

This function removes the layers in the model definition file (*.prototxt) that do not match the current NetState rules. For example, for the LeNet network in caffe's examples/mnist, if the net is only used for the forward (TEST) pass, the data layer whose include rule specifies the TRAIN phase must be dropped.
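
To make the rule concrete, here is a minimal sketch of the keep/drop decision FilterNet applies to each layer, simplified to phase-only rules (the real StateMeetsRule, shown later in this document, also checks level and stage); the helper name KeepLayer is made up for illustration and is not Caffe code:

#include "caffe/proto/caffe.pb.h"

// Simplified, phase-only version of the keep/drop decision made by FilterNet.
bool KeepLayer(const caffe::NetState& state, const caffe::LayerParameter& layer) {
  // A layer with no include rules is kept by default and dropped only if an exclude rule matches.
  bool kept = (layer.include_size() == 0);
  for (int j = 0; kept && j < layer.exclude_size(); ++j) {
    if (layer.exclude(j).has_phase() && layer.exclude(j).phase() == state.phase()) {
      kept = false;
    }
  }
  // A layer with include rules is kept only if at least one include rule matches.
  for (int j = 0; !kept && j < layer.include_size(); ++j) {
    if (layer.include(j).has_phase() && layer.include(j).phase() == state.phase()) {
      kept = true;
    }
  }
  return kept;
}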

2. InsertSplits(filtered_param, &param);

When one lower layer's output blob feeds multiple higher layers, this function inserts a Split layer, producing a new network. The main reason is that the gradients propagated back to that blob from the multiple consumers must be accumulated.

For example, in the LeNet network the data layer's top label blob feeds two layers, the accuracy layer and the loss layer, so a Split layer has to be inserted after the data layer.
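
The accumulation that the split performs can be sketched as follows. This only illustrates the idea (plain arrays instead of Blob objects, a made-up function name); it is not Caffe's actual SplitLayer backward code:

#include <vector>

// Backward pass of a split, sketched: every consumer wrote its gradient into its own
// copy of the blob (one top diff per consumer); the split sums them into the single
// bottom diff of the original blob.
void SplitBackwardSketch(const std::vector<const float*>& top_diffs,
                         float* bottom_diff, int count) {
  for (int i = 0; i < count; ++i) {
    bottom_diff[i] = 0;
  }
  for (size_t t = 0; t < top_diffs.size(); ++t) {
    for (int i = 0; i < count; ++i) {
      bottom_diff[i] += top_diffs[t][i];  // accumulate the gradient from each consumer
    }
  }
}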

3. layers_.push_back();

This line converts the current layer's parameters into a shared_ptr<Layer<Dtype>>, creates the concrete layer, and pushes it into layers_.
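
LayerRegistry<Dtype>::CreateLayer() looks the layer's type string up in a registry of creator functions. A minimal self-contained sketch of that pattern (illustrative type and function names, not Caffe's actual registry code):

#include <map>
#include <memory>
#include <string>

struct LayerBase { virtual ~LayerBase() {} };   // stand-in for Layer<Dtype>
struct ConvolutionLayer : LayerBase {};         // stand-in for a concrete layer type

typedef std::shared_ptr<LayerBase> (*Creator)();
std::map<std::string, Creator>& Registry() {
  static std::map<std::string, Creator> r;      // layer type string -> creator function
  return r;
}
std::shared_ptr<LayerBase> CreateConvolution() { return std::make_shared<ConvolutionLayer>(); }

std::shared_ptr<LayerBase> CreateLayer(const std::string& type) {
  return Registry()[type]();                    // look up the creator and construct the layer
}

int main() {
  Registry()["Convolution"] = &CreateConvolution;  // in Caffe, the REGISTER_LAYER_CLASS macro plays this role
  std::shared_ptr<LayerBase> layer = CreateLayer("Convolution");
  (void)layer;
}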

4. AppendBottom();

This function registers the current layer's bottom blobs. Because the network is built by stacking layers, the current layer's bottom (input) blob is simply the previous layer's top (output) blob, so this function does not actually create a new blob; it only pushes a pointer to the previous layer's top blob into bottom_vecs_ (see the sketch after item 5 below).

5. AppendTop();

This function creates the current layer's top blobs. It actually news a Blob object and pushes a pointer to the top blob into top_vecs_.
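
The difference between items 4 and 5 is only who allocates: AppendTop news a Blob and stores the owning shared_ptr in blobs_, while AppendBottom looks an existing blob up by name and stores a raw alias. A small stand-alone sketch of that bookkeeping (simplified local containers, not the real Net members):

#include <map>
#include <memory>
#include <string>
#include <vector>

struct Blob {};  // stand-in for Blob<Dtype>

int main() {
  std::vector<std::shared_ptr<Blob> > blobs_;       // owns every intermediate blob
  std::map<std::string, int> blob_name_to_idx;
  std::vector<std::vector<Blob*> > top_vecs_(2), bottom_vecs_(2);

  // "AppendTop" for layer 0: allocate a new blob and remember it by name.
  blobs_.push_back(std::make_shared<Blob>());
  blob_name_to_idx["data"] = 0;
  top_vecs_[0].push_back(blobs_[0].get());

  // "AppendBottom" for layer 1: no allocation, just alias layer 0's top by name.
  const int blob_id = blob_name_to_idx["data"];
  bottom_vecs_[1].push_back(blobs_[blob_id].get());
}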

6. layers_[layer_id]->SetUp();

The previous steps created the concrete layer and its bottom and top blobs. This line sets the layer up: SetUp() allocates the data memory for the created blobs and, if necessary, reshapes the layer's bottom and top blobs.
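
For reference, SetUp in Caffe's layer.hpp is essentially the sequence sketched below: check the blob counts, run the layer-specific setup, reshape the top blobs (which is where their memory gets sized), and record the loss weights. The method names match the ones Caffe's Layer declares, but the types and bodies here are stand-ins, so treat this as a paraphrase rather than the exact source:

#include <vector>

struct Blob {};  // stand-in for Blob<Dtype>

// Hypothetical layer showing the SetUp sequence.
struct SketchLayer {
  float loss_;
  void CheckBlobCounts(const std::vector<Blob*>&, const std::vector<Blob*>&) {}
  void LayerSetUp(const std::vector<Blob*>&, const std::vector<Blob*>&) {}
  void Reshape(const std::vector<Blob*>&, const std::vector<Blob*>&) {}
  void SetLossWeights(const std::vector<Blob*>&) { loss_ = 0; }

  void SetUp(const std::vector<Blob*>& bottom, const std::vector<Blob*>& top) {
    CheckBlobCounts(bottom, top);  // verify the expected number of bottom/top blobs
    LayerSetUp(bottom, top);       // layer-specific one-time setup
    Reshape(bottom, top);          // size the top blobs -- this is where their memory is shaped
    SetLossWeights(top);           // record loss_weight into loss_, read back by Net::Init via loss()
  }
};

int main() {
  Blob data, out;
  std::vector<Blob*> bottom(1, &data), top(1, &out);
  SketchLayer layer;
  layer.SetUp(bottom, top);
}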

7. AppendParam();

Some layers have parameters; for example, convolution and fully-connected (InnerProduct) layers have weights and biases. This function only updates the parameter-related bookkeeping; the actual parameter blobs were already created inside the SetUp() call mentioned above. For instance, it pushes pointers to the layer's parameter blobs into params_.
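
The only subtle part of AppendParam is weight sharing: a parameter blob that is given a (non-empty) name in ParamSpec is owned by the first layer that declares it, and later layers using the same name just point at the owner (the full code appears further down in this document). A toy sketch of that owner lookup, with made-up container names:

#include <map>
#include <string>
#include <vector>

int main() {
  std::vector<int> param_owners;                 // -1 = this layer owns the blob
  std::map<std::string, int> param_names_index;  // shared name -> owning net-wide param id

  const char* names[] = { "shared_w", "", "shared_w" };  // params of three layers
  for (int net_param_id = 0; net_param_id < 3; ++net_param_id) {
    const std::string name = names[net_param_id];
    if (name.empty() || param_names_index.count(name) == 0) {
      param_owners.push_back(-1);                // anonymous or first occurrence: owner
      if (!name.empty()) param_names_index[name] = net_param_id;
    } else {
      param_owners.push_back(param_names_index[name]);  // later occurrence: share with owner
    }
  }
  // param_owners is now { -1, -1, 0 }: the third blob shares the first one's data.
}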

Part 2. Detailed annotations on the Net::Init() code.

template <typename Dtype>
void Net<Dtype>::Init(const NetParameter& in_param) {
  CHECK(Caffe::root_solver() || root_net_)
      << "root_net_ needs to be set for all non-root solvers";
  // Set phase from the state.
  phase_ = in_param.state().phase();
  // Filter layers based on their include/exclude rules and
  // the current NetState.
  NetParameter filtered_param;
  
  /* Remove the layers in in_param that do not match the current NetState rules */
  FilterNet(in_param, &filtered_param);
  LOG_IF(INFO, Caffe::root_solver())
      << "Initializing net from parameters: " << std::endl
      << filtered_param.DebugString();
  // Create a copy of filtered_param with splits added where necessary.
  NetParameter param;
  /*
   * Call InsertSplits(): when one lower layer's output blob feeds multiple
   * higher layers, insert a Split layer, producing a new network.
   */
  InsertSplits(filtered_param, &param);
/*
 * The code above only reads the *.prototxt file and determines the network name
 * and how blobs are connected by name. The code below creates the layers and the
 * blobs between them: AppendTop() instantiates the intermediate blobs,
 * layer->SetUp() allocates memory for those blobs, and AppendParam() registers
 * the parameter blobs.
 */
  // Basically, build all the layers and set up their connections.
  name_ = param.name();
  map<string, int> blob_name_to_idx;
  set<string> available_blobs;
  memory_used_ = 0;  
  // For each layer, set up its input and output 
  bottom_vecs_.resize(param.layer_size());  // pointers to each layer's input (bottom) blobs
  top_vecs_.resize(param.layer_size());  // pointers to each layer's output (top) blobs
  bottom_id_vecs_.resize(param.layer_size());  // ids of each layer's input (bottom) blobs
  param_id_vecs_.resize(param.layer_size());  // ids of each layer's parameter blobs
  top_id_vecs_.resize(param.layer_size());  // ids of each layer's output (top) blobs
  bottom_need_backward_.resize(param.layer_size());  // bool flags: whether each layer's bottom blobs need backward

  // (one big for loop) process each layer
  for (int layer_id = 0; layer_id < param.layer_size(); ++layer_id) {
    // For non-root solvers, whether this layer is shared from root_net_.
    bool share_from_root = !Caffe::root_solver()
        && root_net_->layers_[layer_id]->ShareInParallel();// ???
    // Inherit phase from net if unset.
    // If the current layer has no phase set, inherit the net's phase
    if (!param.layer(layer_id).has_phase()) {
      param.mutable_layer(layer_id)->set_phase(phase_);
    }
    // Setup layer.
    // param.layer(layer_id) returns the parameters of the current layer:
    const LayerParameter& layer_param = param.layer(layer_id); 
    if (layer_param.propagate_down_size() > 0) {
      CHECK_EQ(layer_param.propagate_down_size(),
          layer_param.bottom_size())
          << "propagate_down param must be specified "
          << "either 0 or bottom_size times ";
    }
    if (share_from_root) {
      LOG(INFO) << "Sharing layer " << layer_param.name() << " from root net";
      layers_.push_back(root_net_->layers_[layer_id]);
      layers_[layer_id]->SetShared(true);
    } else {
      /*
       * Convert the current layer's parameters into a shared_ptr<Layer<Dtype>>,
       * create the concrete layer, and push it into layers_.
       */
      layers_.push_back(LayerRegistry<Dtype>::CreateLayer(layer_param));
    }
    // Push the current layer's name into layer_names_: vector<string> layer_names_
    layer_names_.push_back(layer_param.name());
    LOG_IF(INFO, Caffe::root_solver())
        << "Creating Layer " << layer_param.name();
    bool need_backward = false;

    // Figure out this layer's input and output 
    // Now wire up the current layer, in two steps: handle the bottom blobs, then the top blobs.
    // Input (bottom) blobs
    for (int bottom_id = 0; bottom_id < layer_param.bottom_size();
         ++bottom_id) {
      const int blob_id = AppendBottom(param, layer_id, bottom_id,
                                       &available_blobs, &blob_name_to_idx);
      // If a blob needs backward, this layer should provide it.
      /*
       * blob_need_backward_: for every non-parameter blob in the whole network, whether it
       * needs backward. Note that "every non-parameter blob" means all the top blobs visited
       * in AppendTop(), not the top+bottom of every layer, because one layer's top is the
       * next layer's bottom: the network is stacked layer by layer.
       */
      need_backward |= blob_need_backward_[blob_id];
    }
    // Output (top) blobs
    int num_top = layer_param.top_size();
    for (int top_id = 0; top_id < num_top; ++top_id) {
      AppendTop(param, layer_id, top_id, &available_blobs, &blob_name_to_idx);
      // Collect Input layer tops as Net inputs.
      if (layer_param.type() == "Input") {
        const int blob_id = blobs_.size() - 1;
        net_input_blob_indices_.push_back(blob_id);
        net_input_blobs_.push_back(blobs_[blob_id].get());
      }
    }
    // If the layer specifies that AutoTopBlobs() -> true and the LayerParameter
    // specified fewer than the required number (as specified by
    // ExactNumTopBlobs() or MinTopBlobs()), allocate them here.
    Layer<Dtype>* layer = layers_[layer_id].get();
    if (layer->AutoTopBlobs()) {
      const int needed_num_top =
          std::max(layer->MinTopBlobs(), layer->ExactNumTopBlobs());
      for (; num_top < needed_num_top; ++num_top) {
        // Add "anonymous" top blobs -- do not modify available_blobs or
        // blob_name_to_idx as we don't want these blobs to be usable as input
        // to other layers.
        AppendTop(param, layer_id, num_top, NULL, NULL);
      }
    }
    // After this layer is connected, set it up.
    if (share_from_root) {
      // Set up size of top blobs using root_net_
      const vector<Blob<Dtype>*>& base_top = root_net_->top_vecs_[layer_id];
      const vector<Blob<Dtype>*>& this_top = this->top_vecs_[layer_id];
      for (int top_id = 0; top_id < base_top.size(); ++top_id) {
        this_top[top_id]->ReshapeLike(*base_top[top_id]);
        LOG(INFO) << "Created top blob " << top_id << " (shape: "
            << this_top[top_id]->shape_string() <<  ") for shared layer "
            << layer_param.name();
      }
    } else {
      // SetUp() allocates the memory for the blobs created in AppendTop()
      layers_[layer_id]->SetUp(bottom_vecs_[layer_id], top_vecs_[layer_id]);
    }
    LOG_IF(INFO, Caffe::root_solver())
        << "Setting up " << layer_names_[layer_id];
	
    // Each iteration updates the vector blob_loss_weights_
    for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {
      // blob_loss_weights_: every time a layer is visited, blob_loss_weights_ is resized,
      // and then the Layer template class's loss() is called to return the loss_weight.
      if (blob_loss_weights_.size() <= top_id_vecs_[layer_id][top_id]) {
        blob_loss_weights_.resize(top_id_vecs_[layer_id][top_id] + 1, Dtype(0));
      }
      // The basic elements stored in top_id_vecs_ are blob_ids -> every new blob is assigned
      // a blob_id, but these blob_ids may repeat (e.g. for in-place computation).
      blob_loss_weights_[top_id_vecs_[layer_id][top_id]] = layer->loss(top_id);
      // loss() returns the loss_weight -> the Layer template class's SetUp() calls SetLossWeights()
      // to set its private member loss_, which actually stores the loss_weight.
      LOG_IF(INFO, Caffe::root_solver())
          << "Top shape: " << top_vecs_[layer_id][top_id]->shape_string();
	  
      if (layer->loss(top_id)) {
        LOG_IF(INFO, Caffe::root_solver())
            << "    with loss weight " << layer->loss(top_id);
      }
      // Accumulate the memory required
      memory_used_ += top_vecs_[layer_id][top_id]->count();
    }
    LOG_IF(INFO, Caffe::root_solver())
        << "Memory required for data: " << memory_used_ * sizeof(Dtype);

    /*
     * The following handles each layer's parameter blobs, mainly via AppendParam(),
     * which appends the param blobs and their ids to params_, param_id_vecs_, etc.
     */
    const int param_size = layer_param.param_size();
    // Number of blobs_ inside the layer, i.e. how many learnable parameter blobs this layer has;
    // e.g. convolution and InnerProduct layers each have two (weight and bias).
    const int num_param_blobs = layers_[layer_id]->blobs().size();
    // param_size is the number of ParamSpec param entries in the LayerParameter object layer_param;
    // num_param_blobs is the number of learnable parameter blobs in a Layer; param_size <= num_param_blobs.
    CHECK_LE(param_size, num_param_blobs)
        << "Too many params specified for layer " << layer_param.name();
    ParamSpec default_param_spec;
    for (int param_id = 0; param_id < num_param_blobs; ++param_id) {
      const ParamSpec* param_spec = (param_id < param_size) ? &layer_param.param(param_id) : &default_param_spec;
      const bool param_need_backward = param_spec->lr_mult() != 0;
      // param_need_backward decides whether need_backward becomes true; once any iteration
      // sets need_backward to true, it stays true after this for loop ends.
      need_backward |= param_need_backward;
      layers_[layer_id]->set_param_propagate_down(param_id,
                                                  param_need_backward);
    }
    /*
     * Append the parameter blobs. If the current layer has no parameter blobs
     * (num_param_blobs == 0), e.g. ReLU, the loop body is never entered and nothing is appended.
     * AppendParam only does the bookkeeping for adding the current layer's parameter blobs;
     * it does not modify any backward-related attributes.
     */
    for (int param_id = 0; param_id < num_param_blobs; ++param_id) {
      AppendParam(param, layer_id, param_id);
    }
    // Finally, set the backward flag
    layer_need_backward_.push_back(need_backward);
    /*
     * In AppendTop() above, a false (the default) was pushed into blob_need_backward_
     * for every top blob of the current layer. Below, if this layer needs backward,
     * blob_need_backward_ is updated accordingly.
     */
    if (need_backward) {
      for (int top_id = 0; top_id < top_id_vecs_[layer_id].size(); ++top_id) {
        blob_need_backward_[top_id_vecs_[layer_id][top_id]] = true;
      }
    }
  }
  /* At this point all layers have been created and set up; the code below walks the net backwards and corrects the backward settings. */
  
  // Go through the net backwards to determine which blobs contribute to the
  // loss.  We can skip backward computation for blobs that don't contribute
  // to the loss.
  // Also checks if all bottom blobs don't need backward computation (possible
  // because the skip_propagate_down param) and so we can skip backward
  // computation for the entire layer
  /*
   * Note that the backward settings above were made in forward order; the code below
   * corrects those forward-order results in backward order.
   * Whether a layer needs backward computation depends mainly on two things:
   *   (1) whether the layer's top blobs participate in the loss computation;
   *   (2) whether the layer's bottom blobs need backward computation
   *       (e.g. a Data layer usually does not need backward computation).
   */
  set<string> blobs_under_loss;
  set<string> blobs_skip_backp;
  // Walk the layers in reverse, from last to first
  for (int layer_id = layers_.size() - 1; layer_id >= 0; --layer_id) {
    bool layer_contributes_loss = false;
    bool layer_skip_propagate_down = true;
    /*
     * If true, the current layer's bottom blobs do not need backward computation,
     * i.e. the layer itself does not need backward computation.
     * The meaning of this local variable is exactly the opposite of propagate_down
     * as defined in message LayerParameter in caffe.proto.
     */
    for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {
      // blob_names_: the names of all non-parameter blobs in the whole network
      const string& blob_name = blob_names_[top_id_vecs_[layer_id][top_id]];
      if (layers_[layer_id]->loss(top_id) ||
          (blobs_under_loss.find(blob_name) != blobs_under_loss.end())) {
        layer_contributes_loss = true;
      }
      if (blobs_skip_backp.find(blob_name) == blobs_skip_backp.end()) {
        layer_skip_propagate_down = false;
      }
      if (layer_contributes_loss && !layer_skip_propagate_down)
        break;
    }
    // If this layer can skip backward computation, also all his bottom blobs
    // don't need backpropagation
    if (layer_need_backward_[layer_id] && layer_skip_propagate_down) {
      layer_need_backward_[layer_id] = false;
      for (int bottom_id = 0; bottom_id < bottom_vecs_[layer_id].size();
               ++bottom_id) {
        // bottom_need_backward_: whether each layer's bottom blobs need backward, across the whole network
        bottom_need_backward_[layer_id][bottom_id] = false;
      }
    }
    if (!layer_contributes_loss) { layer_need_backward_[layer_id] = false; }
    if (Caffe::root_solver()) {
      if (layer_need_backward_[layer_id]) {
        LOG(INFO) << layer_names_[layer_id] << " needs backward computation.";
      } else {
        LOG(INFO) << layer_names_[layer_id]
            << " does not need backward computation.";
      }
    }
    // Correct the results of the forward-order settings
    for (int bottom_id = 0; bottom_id < bottom_vecs_[layer_id].size();
         ++bottom_id) {
      if (layer_contributes_loss) {
        const string& blob_name =
            blob_names_[bottom_id_vecs_[layer_id][bottom_id]];
        blobs_under_loss.insert(blob_name);  // add a new element to blobs_under_loss
      } else {
        bottom_need_backward_[layer_id][bottom_id] = false;
      }
      if (!bottom_need_backward_[layer_id][bottom_id]) {
        const string& blob_name =
                   blob_names_[bottom_id_vecs_[layer_id][bottom_id]];
        blobs_skip_backp.insert(blob_name);
      }
    }
  }
  // Handle force_backward if needed.
  if (param.force_backward()) {
    for (int layer_id = 0; layer_id < layers_.size(); ++layer_id) {
      layer_need_backward_[layer_id] = true;
      for (int bottom_id = 0;
           bottom_id < bottom_need_backward_[layer_id].size(); ++bottom_id) {
        bottom_need_backward_[layer_id][bottom_id] =
            bottom_need_backward_[layer_id][bottom_id] ||
            layers_[layer_id]->AllowForceBackward(bottom_id);
        blob_need_backward_[bottom_id_vecs_[layer_id][bottom_id]] =
            blob_need_backward_[bottom_id_vecs_[layer_id][bottom_id]] ||
            bottom_need_backward_[layer_id][bottom_id];
      }
      for (int param_id = 0; param_id < layers_[layer_id]->blobs().size();
           ++param_id) {
        layers_[layer_id]->set_param_propagate_down(param_id, true);
      }
    }
  }
  // In the end, all remaining blobs are considered output blobs.
  for (set<string>::iterator it = available_blobs.begin();
      it != available_blobs.end(); ++it) {
    LOG_IF(INFO, Caffe::root_solver())
        << "This network produces output " << *it;
    net_output_blobs_.push_back(blobs_[blob_name_to_idx[*it]].get());
    net_output_blob_indices_.push_back(blob_name_to_idx[*it]);
  }
  for (size_t blob_id = 0; blob_id < blob_names_.size(); ++blob_id) {
    // First use of blob_names_index_ (a map); elements are added one by one
    blob_names_index_[blob_names_[blob_id]] = blob_id;
  }
  for (size_t layer_id = 0; layer_id < layer_names_.size(); ++layer_id) {
    // First use of layer_names_index_ (a map); elements are added one by one
    layer_names_index_[layer_names_[layer_id]] = layer_id;
  }
  ShareWeights();
  debug_info_ = param.debug_info();
  LOG_IF(INFO, Caffe::root_solver()) << "Network initialization done.";
}
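
To see where Init fits, here is a minimal usage sketch: constructing a Net from a prototxt (which runs Init) and doing one forward pass. The file name is made up, and depending on the Caffe version the forward call is Forward() or the older ForwardPrefilled() used in the second listing below:

#include <vector>
#include "caffe/caffe.hpp"

int main() {
  caffe::Caffe::set_mode(caffe::Caffe::CPU);
  // Constructing the Net runs Net::Init() on the filtered, split-augmented NetParameter.
  caffe::Net<float> net("lenet_deploy.prototxt", caffe::TEST);  // file name is illustrative
  float loss = 0;
  // One forward pass over the whole net; older versions call this ForwardPrefilled().
  const std::vector<caffe::Blob<float>*>& out = net.Forward(&loss);
  (void)out;
  return 0;
}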

Another analysis (of an older, MPI-patched version of net.cpp):

    #include <algorithm>  
    #include <map>  
    #include <set>  
    #include <string>  
    #include <utility>  
    #include <vector>  
      
    #include "caffe/common.hpp"  
    #include "caffe/layer.hpp"  
    #include "caffe/net.hpp"  
    #include "caffe/proto/caffe.pb.h"  
    #include "caffe/util/insert_splits.hpp"  
    #include "caffe/util/io.hpp"  
    #include "caffe/util/math_functions.hpp"  
    #include "caffe/util/upgrade_proto.hpp"  
      
    #include "caffe/util/channel.hpp"  
    #include "caffe/util/mpi_functions.hpp"  
      
    #include "caffe/test/test_caffe_main.hpp"  
    #include "caffe/vision_layers.hpp"  
      
    namespace caffe {  
    /*
    Function: calls Init to initialize the network
    Input: NetParameter& param
    Output: none
    */
    template <typename Dtype>  
    Net<Dtype>::Net(const NetParameter& param) {  
      Init(param);  
    }  
    /*
    Function: calls Init to initialize the network
    Input: string& param_file
    Output: none
    */
    template <typename Dtype>  
    Net<Dtype>::Net(const string& param_file, Phase phase) {  
      NetParameter param;  
      ReadNetParamsFromTextFileOrDie(param_file, &param);  
      param.mutable_state()->set_phase(phase);  
      Init(param);  
    }  
    /*
    Function: initialize the network
    Input: NetParameter& in_param
    Output: none
    Steps:
    <1> Call InsertSplits() to build the new network param from in_param
    <2> Define name_, blob_name_to_idx, available_blobs, num_layers
    <3> param.input_size() returns the number of input blobs;
        param.input(i) is the name of the i-th input blob;
        param.layers_size() returns the number of layers in the network.
    <4> For each input blob:
        allocate a space as large as the current blob, e.g. input_dim=[12 55 66 39 20 24 48 64]
        means the first blob has the four dimensions 12 55 66 39 and the second 20 24 48 64;
        then blob_pointer points to this space
        push blob_pointer into blobs_: vector<shared_ptr<Blob<Dtype>>> blobs_
        push blob_name into blob_names_: vector<string> blob_names_
        push param.force_backward() into blob_need_backward_: vector<bool> blob_need_backward_
        push i into net_input_blob_indices_: net_input_blob_indices_ -> vector
        push blob_pointer.get() into net_input_blobs_
        note the difference between
        vector<shared_ptr<Blob<Dtype>>> blobs_ and
        vector<Blob<Dtype>*> net_input_blobs_:
        calling .get() on a shared_ptr yields a Blob* pointer
        map<string, int> blob_name_to_idx
        initialize set<string> available_blobs with the name of every input blob
        accumulate the memory required: memory_used += blob_pointer->count()

    <5> bottom_vecs_ stores each layer's input blob pointers: vector<vector<Blob<Dtype>*> >
        bottom_id_vecs_ stores each layer's input (bottom) blob ids: vector<vector<int> >
        top_vecs_ stores each layer's output (top) blobs: vector<vector<Blob<Dtype>*> >
        top_id_vecs_ stores each layer's output (top) blob ids: vector<vector<int> >
        all four are resized to the number of layers, param.layers_size()
    <6> For the i-th layer (one big for loop):
        param.layers(i) returns the parameters of the current layer:
        layer_param = param.layers(i)
        convert the current layer's parameters into a shared_ptr<Layer<Dtype>> and push it into layers_
        push the current layer's name into layer_names_: vector<string> layer_names_
        decide whether the current layer needs backward: need_backward = param.force_backward()

        Now wire up the current layer, in two steps: handle the bottom blobs, then the top blobs.
        For the j-th bottom blob:
            layer_param.bottom_size() is the number of input blobs of the current layer
            layer_param.bottom(j) is the name of the j-th input blob
            read the current blob's id; blob_name_to_idx was already filled for the input layer:
            blob_name_to_idx[blob_name] = i
            log the current blob's name
            store the pointer to the j-th input blob: bottom_vecs_[i].push_back(blobs_[blob_id].get())
            store the id of the j-th input blob: bottom_id_vecs_[i].push_back(blob_id)
            update need_backward
            erase the j-th blob's name from available_blobs

        For the j-th top blob:
            layer_param.top_size() is the number of output blobs of the current layer
            layer_param.top(j) is the name of the j-th output blob
            check whether the computation is in-place
            log the current blob's name
            allocate a new blob and let blob_pointer point to it
            push this pointer into blobs_
            store blob_name, force_backward and the index in the corresponding containers
            insert the current blob's name into available_blobs
            top_vecs_[i]: for the i-th layer, push the current blob's pointer
            top_id_vecs_[i]: for the i-th layer, push the current blob's id
        log the information of the current layer's top blobs
        accumulate the memory required
        decide whether the current layer i needs backward

    <7> All blobs whose names remain in available_blobs are output blobs of the net; store them in net_output_blobs_
    <8> Build the map from each blob's name to its index: blob_names_index_
    <9> Build the map from each layer's name to its index: layer_names_index_
    <10> Call GetLearningRateAndWeightDecay
    */  
    template <typename Dtype>  
    void Net<Dtype>::Init(const NetParameter& in_param) {  
      // Set phase from the state.  
      phase_ = in_param.state().phase();  
      // Filter layers based on their include/exclude rules and  
      // the current NetState.  
      NetParameter filtered_param;  
      FilterNet(in_param, &filtered_param);  
      LOG(INFO) << "Initializing net from parameters: " << std::endl  
                << filtered_param.DebugString();  
      // Create a copy of filtered_param with splits added where necessary.  
      NetParameter param;  
      InsertSplits(filtered_param, &param);  
      // Basically, build all the layers and set up their connections.  
      name_ = param.name();  
      map<string, int> blob_name_to_idx;  // blob_name_to_idx is a map; its keys are unique  
      set<string> available_blobs;  // available_blobs is a set; its keys are unique  
      CHECK(param.input_dim_size() == 0 || param.input_shape_size() == 0)  
          << "Must specify either input_shape OR deprecated input_dim, not both.";  
      if (param.input_dim_size() > 0) {  
        // Deprecated 4D dimensions.  
        CHECK_EQ(param.input_size() * 4, param.input_dim_size())  
            << "Incorrect input blob dimension specifications.";  
      } else {  
        CHECK_EQ(param.input_size(), param.input_shape_size())  
            << "Exactly one input_shape must be specified per input.";  
      }  
      memory_used_ = 0;  
      // set the input blobs  
      for (int input_id = 0; input_id < param.input_size(); ++input_id) {  
        const int layer_id = -1;  // inputs have fake layer ID -1  
        AppendTop(param, layer_id, input_id, &available_blobs, &blob_name_to_idx);  
      }  
      DLOG(INFO) << "Memory required for data: " << memory_used_ * sizeof(Dtype);  
      // For each layer, set up its input and output  
      bottom_vecs_.resize(param.layer_size());  
      top_vecs_.resize(param.layer_size());  
      bottom_id_vecs_.resize(param.layer_size());  
      param_id_vecs_.resize(param.layer_size());  
      top_id_vecs_.resize(param.layer_size());  
      bottom_need_backward_.resize(param.layer_size());  
      
      for (int layer_id = 0; layer_id < param.layer_size(); ++layer_id) {  
        // Inherit phase from net if unset.  
        if (!param.layer(layer_id).has_phase()) {  
          param.mutable_layer(layer_id)->set_phase(phase_);  // the argument phase_ is the net's phase; set the layer's phase from it  
        }  
        // Setup BN params implicitly.  
        if (param.layer(layer_id).type() == "BN") {  
          LayerParameter* layer_param = param.mutable_layer(layer_id);  
          if (layer_param->param_size() > 2) {  
            LOG(FATAL) << "Layer " << layer_param->name()  
                       << " must have no more than two specified params";  
          }  
          while (layer_param->param_size() < 4) {  
            ParamSpec* param = layer_param->add_param();  
            if (layer_param->param_size() <= 2) {  
              param->set_lr_mult(1);  
              param->set_decay_mult(0);  
            } else {  
              param->set_lr_mult(0);  
              param->set_decay_mult(0);  
            }  
          }  
        }  
        // Setup layer.  
        const LayerParameter& layer_param = param.layer(layer_id);  
        // Check whether the number of propagate_down entries in the LayerParameter is valid  
        if (layer_param.propagate_down_size() > 0) {  
          CHECK_EQ(layer_param.propagate_down_size(),  
              layer_param.bottom_size())  
              << "propagate_down param must be specified "  
              << "either 0 or bottom_size times ";  
        }  
        layers_.push_back(LayerRegistry<Dtype>::CreateLayer(layer_param));  
        layer_names_.push_back(layer_param.name());  
        LOG(INFO) << "Creating Layer " << layer_param.name();  
        bool need_backward = false;  
      
        // Figure out this layer's input and output  
        #ifdef USE_MPI  
        vector<bool> source_layer_need_sync;  
        for (int bottom_id = 0; bottom_id < layer_param.bottom_size();  
             ++bottom_id) {  
      
          const int blob_id = AppendBottom(param, layer_id, bottom_id,  
                                           &available_blobs, &blob_name_to_idx);  
          int src_layer_id = top_layer_indices_[blob_id].first;  
          if (src_layer_id>=0) source_layer_need_sync.push_back(layers_[src_layer_id]->need_sync());  
          if (source_layer_need_sync.size()>0){  
            CHECK_EQ(source_layer_need_sync.back(), source_layer_need_sync[0])  
              <<" blob "<<layer_param.bottom(0)  
              <<" and blob "<< layer_param.bottom(bottom_id)  
              <<" are from layers with different paralle mode. This is not supported.";  
          }  
          // If a blob needs backward, this layer should provide it.  
    /* blob_need_backward_: for every non-parameter blob in the whole network, whether it needs backward.
       Note that "every non-parameter blob" means all the top blobs visited in AppendTop(), not the top+bottom
       of every layer, because one layer's top is the next layer's bottom: the network is stacked layer by layer. */  
          need_backward |= blob_need_backward_[blob_id];  
        }  
      
        if (layers_[layer_id]->is_gathering()){  
          layers_[layer_id]->set_need_sync(false);  
        } else {  
          if(layers_[layer_id]->is_scattering()){  
            layers_[layer_id]->set_need_sync(true);  
          } else {  
            if ((source_layer_need_sync.size() > 0)) {  
              layers_[layer_id]->set_need_sync(source_layer_need_sync[0]);  
              LOG(INFO) << "This layer is inheriting previous layer's sync mode: " << source_layer_need_sync[0];  
            }  
          }  
        }  
        #else  
        for (int bottom_id = 0; bottom_id < layer_param.bottom_size();  
             ++bottom_id) {  
          const int blob_id = AppendBottom(param, layer_id, bottom_id,  
                                           &available_blobs, &blob_name_to_idx);  
          // If a blob needs backward, this layer should provide it.  
          need_backward |= blob_need_backward_[blob_id];  
        }  
        #endif  
      
        int num_top = layer_param.top_size();  
        for (int top_id = 0; top_id < num_top; ++top_id) {  
          AppendTop(param, layer_id, top_id, &available_blobs, &blob_name_to_idx);  
        }  
        // If the layer specifies that AutoTopBlobs() -> true and the LayerParameter  
        // specified fewer than the required number (as specified by  
        // ExactNumTopBlobs() or MinTopBlobs()), allocate them here.  
        Layer<Dtype>* layer = layers_[layer_id].get();  
        if (layer->AutoTopBlobs()) {  
          const int needed_num_top =  
              std::max(layer->MinTopBlobs(), layer->ExactNumTopBlobs());  
          for (; num_top < needed_num_top; ++num_top) {  
            // Add "anonymous" top blobs -- do not modify available_blobs or  
            // blob_name_to_idx as we don't want these blobs to be usable as input  
            // to other layers.  
            AppendTop(param, layer_id, num_top, NULL, NULL);  
          }  
        }  
        // After this layer is connected, set it up.  
        LOG(INFO) << "Setting up " << layer_names_[layer_id];  
    // Each iteration updates the vector blob_loss_weights_  
        layers_[layer_id]->SetUp(bottom_vecs_[layer_id], top_vecs_[layer_id]);  
        for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {  
    // blob_loss_weights_: every time a layer is visited, blob_loss_weights_ is resized, and then the Layer template class's loss() is called to return the loss_weight  
          if (blob_loss_weights_.size() <= top_id_vecs_[layer_id][top_id]) {  
            blob_loss_weights_.resize(top_id_vecs_[layer_id][top_id] + 1, Dtype(0));  
          }  
    // The basic elements stored in top_id_vecs_ are blob_ids -> every new blob is assigned a blob_id, but these blob_ids may repeat  
          blob_loss_weights_[top_id_vecs_[layer_id][top_id]] = layer->loss(top_id);  
    // loss() returns the loss_weight -> the Layer template class's SetUp() calls SetLossWeights() to set its private member loss_, which actually stores the loss_weight  
          LOG(INFO) << "Top shape: " << top_vecs_[layer_id][top_id]->shape_string();  
          if (layer->loss(top_id)) {  
            LOG(INFO) << "    with loss weight " << layer->loss(top_id);  
          }  
          memory_used_ += top_vecs_[layer_id][top_id]->count();  
        }  
        DLOG(INFO) << "Memory required for data: " << memory_used_ * sizeof(Dtype);  
        const int param_size = layer_param.param_size();  
        const int num_param_blobs = layers_[layer_id]->blobs().size();  
    // param_size is the number of ParamSpec param entries in the LayerParameter object layer_param;  
    // num_param_blobs is the number of learnable parameter blobs in a Layer; param_size <= num_param_blobs  
        CHECK_LE(param_size, num_param_blobs)  
            << "Too many params specified for layer " << layer_param.name();  
        ParamSpec default_param_spec;  
        for (int param_id = 0; param_id < num_param_blobs; ++param_id) {  
          const ParamSpec* param_spec = (param_id < param_size) ?  
              &layer_param.param(param_id) : &default_param_spec;  
          const bool param_need_backward = param_spec->lr_mult() > 0;  // true if this parameter needs backward  
          need_backward |= param_need_backward;  
    // param_need_backward decides whether need_backward becomes true; once any iteration sets need_backward to true, it stays true after this for loop ends  
          layers_[layer_id]->set_param_propagate_down(param_id,  
                                                      param_need_backward);  
    // Set whether a Layer's parameter blob needs its diff computed in backward ---> set_param_propagate_down is a method of the Layer template class.  
        }  
        for (int param_id = 0; param_id < num_param_blobs; ++param_id) {  
     // Append the parameter blobs. If the current layer has no parameter blobs (num_param_blobs == 0), e.g. ReLU, the loop body is never entered and nothing is appended.  
     // AppendParam only does the bookkeeping for adding the current layer's parameter blobs; it does not modify any backward-related attributes.  
          AppendParam(param, layer_id, param_id);  
        }  
        // Finally, set the backward flag  
        layer_need_backward_.push_back(need_backward);  
    // In AppendTop() above, a false (the default) was pushed into blob_need_backward_ for every top blob of the current layer. Below, if this layer needs backward, blob_need_backward_ is updated accordingly.  
        if (need_backward) {  
          for (int top_id = 0; top_id < top_id_vecs_[layer_id].size(); ++top_id) {  
            blob_need_backward_[top_id_vecs_[layer_id][top_id]] = true;  
      
            //special treatment for "Gather" layer  
            //This layer should be transparent to bp inferring.  
            if (strcmp(layers_[layer_id]->type(), "Gather")==0){  
              blob_need_backward_[top_id_vecs_[layer_id][top_id]]  
                  = blob_need_backward_[bottom_id_vecs_[layer_id][top_id]];  
            }  
          }  
        }  
      }  
      // Go through the net backwards to determine which blobs contribute to the  
      // loss.  We can skip backward computation for blobs that don't contribute  
      // to the loss.  
      // Also checks if all bottom blobs don't need backward computation (possible  
      // because the skip_propagate_down param) and so we can skip backward  
      // computation for the entire layer  
    // Note that the backward settings above were made in forward order; the code below corrects those forward-order results in backward order.  
    // Whether a layer needs backward computation depends mainly on two things: (1) whether the layer's top blobs participate in the loss computation; (2) whether the layer's bottom blobs need backward computation (e.g. a Data layer usually does not need backward computation)  
      set<string> blobs_under_loss;  
      set<string> blobs_skip_backp;  
      for (int layer_id = layers_.size() - 1; layer_id >= 0; --layer_id) {  
        bool layer_contributes_loss = false;  
        bool layer_skip_propagate_down = true;  
    // If true, the current layer's bottom blobs do not need backward computation, i.e. the layer itself does not need backward computation.  
    // The meaning of this local variable is exactly the opposite of propagate_down as defined in message LayerParameter in caffe.proto.  
        for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {  
          // blob_names_: the names of all non-parameter blobs in the whole network  
          const string& blob_name = blob_names_[top_id_vecs_[layer_id][top_id]];  
          if (layers_[layer_id]->loss(top_id) ||  
              (blobs_under_loss.find(blob_name) != blobs_under_loss.end())) {  
            layer_contributes_loss = true;  
          }  
          if (blobs_skip_backp.find(blob_name) == blobs_skip_backp.end()) {  
            layer_skip_propagate_down = false;  
          }  
          if (layer_contributes_loss && !layer_skip_propagate_down)  
            break;  // break out of the for loop over this layer's top blobs  
        }  
        // If this layer can skip backward computation, also all his bottom blobs  
        // don't need backpropagation  
        if (layer_need_backward_[layer_id] && layer_skip_propagate_down) {  
          layer_need_backward_[layer_id] = false;  
          for (int bottom_id = 0; bottom_id < bottom_vecs_[layer_id].size();  
                   ++bottom_id) {  
    // bottom_need_backward_: whether each layer's bottom blobs need backward, across the whole network  
            bottom_need_backward_[layer_id][bottom_id] = false;  
          }  
        }  
        if (!layer_contributes_loss) { layer_need_backward_[layer_id] = false; }  
        if (layer_need_backward_[layer_id]) {  
          LOG(INFO) << layer_names_[layer_id] << " needs backward computation.";  
        } else {  
          LOG(INFO) << layer_names_[layer_id]  
                    << " does not need backward computation.";  
        }  
        for (int bottom_id = 0; bottom_id < bottom_vecs_[layer_id].size();  // correct the results of the forward-order settings  
             ++bottom_id) {  
          if (layer_contributes_loss) {  
            const string& blob_name =  
                blob_names_[bottom_id_vecs_[layer_id][bottom_id]];  
            blobs_under_loss.insert(blob_name);  // add a new element to blobs_under_loss  
          } else {  
            bottom_need_backward_[layer_id][bottom_id] = false;  
          }  
          if (!bottom_need_backward_[layer_id][bottom_id]) {  
            const string& blob_name =  
                       blob_names_[bottom_id_vecs_[layer_id][bottom_id]];  
            blobs_skip_backp.insert(blob_name);  // add a new element to blobs_skip_backp  
          }  
        }  
      }  
      // Handle force_backward if needed; force_backward() is an accessor of NetParameter  
      if (param.force_backward()) {  
        for (int layer_id = 0; layer_id < layers_.size(); ++layer_id) {  
          layer_need_backward_[layer_id] = true;  
          for (int bottom_id = 0;  
               bottom_id < bottom_need_backward_[layer_id].size(); ++bottom_id) {  
            bottom_need_backward_[layer_id][bottom_id] =  
                bottom_need_backward_[layer_id][bottom_id] ||  
                layers_[layer_id]->AllowForceBackward(bottom_id);  
            blob_need_backward_[bottom_id_vecs_[layer_id][bottom_id]] =  
                blob_need_backward_[bottom_id_vecs_[layer_id][bottom_id]] ||  
                bottom_need_backward_[layer_id][bottom_id];  
          }  
          for (int param_id = 0; param_id < layers_[layer_id]->blobs().size();  
               ++param_id) {  
            layers_[layer_id]->set_param_propagate_down(param_id, true);  
          }  
        }  
      }  
      // In the end, all remaining blobs are considered output blobs.  
      for (set<string>::iterator it = available_blobs.begin();  
          it != available_blobs.end(); ++it) {  
        LOG(INFO) << "This network produces output " << *it;  
        net_output_blobs_.push_back(blobs_[blob_name_to_idx[*it]].get());  
        net_output_blob_indices_.push_back(blob_name_to_idx[*it]);  
      }  
      for (size_t blob_id = 0; blob_id < blob_names_.size(); ++blob_id) {  
        blob_names_index_[blob_names_[blob_id]] = blob_id;  
    // First use of blob_names_index_ (a map); elements are added one by one  
      }  
      for (size_t layer_id = 0; layer_id < layer_names_.size(); ++layer_id) {  
        layer_names_index_[layer_names_[layer_id]] = layer_id;  
    // First use of layer_names_index_ (a map); elements are added one by one  
      }  
      GetLearningRateAndWeightDecay();  
      debug_info_ = param.debug_info();  
      LOG(INFO) << "Network initialization done.";  
      LOG(INFO) << "Memory required for data: " << memory_used_ * sizeof(Dtype);  
    }  
    // FilterNet(): given the current phase/level/stage, remove the layers whose rules exclude them  
    template <typename Dtype>  
    void Net<Dtype>::FilterNet(const NetParameter& param,  
        NetParameter* param_filtered) {  
      NetState net_state(param.state());  
      param_filtered->CopyFrom(param);  
      param_filtered->clear_layer();  
      for (int i = 0; i < param.layer_size(); ++i) {  
        const LayerParameter& layer_param = param.layer(i);  
        const string& layer_name = layer_param.name();  
        CHECK(layer_param.include_size() == 0 || layer_param.exclude_size() == 0)  
              << "Specify either include rules or exclude rules; not both.";  
        // If no include rules are specified, the layer is included by default and  
        // only excluded if it meets one of the exclude rules.  
        bool layer_included = (layer_param.include_size() == 0);  
        for (int j = 0; layer_included && j < layer_param.exclude_size(); ++j) {  
          if (StateMeetsRule(net_state, layer_param.exclude(j), layer_name)) {  
            layer_included = false;  // no include rules: meeting any one exclude rule drops the layer  
          }  
        }  
        for (int j = 0; !layer_included && j < layer_param.include_size(); ++j) {  
          if (StateMeetsRule(net_state, layer_param.include(j), layer_name)) {  
            layer_included = true;  // with include rules: meeting any one include rule keeps the layer  
          }  
        }  
        if (layer_included) {  
          param_filtered->add_layer()->CopyFrom(layer_param);  
        }  
      }  
    }  
    // StateMeetsRule(): does the net's state satisfy the NetStateRule?  
    template <typename Dtype>  
    bool Net<Dtype>::StateMeetsRule(const NetState& state,  
        const NetStateRule& rule, const string& layer_name) {  
      // Check whether the rule is broken due to phase.  
      if (rule.has_phase()) {  
          if (rule.phase() != state.phase()) {  
            LOG(INFO) << "The NetState phase (" << state.phase()  
              << ") differed from the phase (" << rule.phase()  
              << ") specified by a rule in layer " << layer_name;  
            return false;  
          }  
      }  
      // Check whether the rule is broken due to min level.  
      if (rule.has_min_level()) {  
        if (state.level() < rule.min_level()) {  
          LOG(INFO) << "The NetState level (" << state.level()  
              << ") is above the min_level (" << rule.min_level()  
              << ") specified by a rule in layer " << layer_name;  
          return false;  
        }  
      }  
      // Check whether the rule is broken due to max level.  
      if (rule.has_max_level()) {  
        if (state.level() > rule.max_level()) {  
          LOG(INFO) << "The NetState level (" << state.level()  
              << ") is above the max_level (" << rule.max_level()  
              << ") specified by a rule in layer " << layer_name;  
          return false;  
        }  
      }  
      // Check whether the rule is broken due to stage. The NetState must  
      // contain ALL of the rule's stages to meet it.  
      for (int i = 0; i < rule.stage_size(); ++i) {  
        // Check that the NetState contains the rule's ith stage.  
        bool has_stage = false;  
        for (int j = 0; !has_stage && j < state.stage_size(); ++j) {  
          if (rule.stage(i) == state.stage(j)) { has_stage = true; }  
        }  
        if (!has_stage) {  
          LOG(INFO) << "The NetState did not contain stage '" << rule.stage(i)  
                    << "' specified by a rule in layer " << layer_name;  
          return false;  
        }  
      }  
      // Check whether the rule is broken due to not_stage. The NetState must  
      // contain NONE of the rule's not_stages to meet it.  
      for (int i = 0; i < rule.not_stage_size(); ++i) {  
        // Check that the NetState contains the rule's ith not_stage.  
        bool has_stage = false;  
        for (int j = 0; !has_stage && j < state.stage_size(); ++j) {  
          if (rule.not_stage(i) == state.stage(j)) { has_stage = true; }  
        }  
        if (has_stage) {  
          LOG(INFO) << "The NetState contained a not_stage '" << rule.not_stage(i)  
                    << "' specified by a rule in layer " << layer_name;  
          return false;  
        }  
      }  
      return true;  
    }  
      
    // Helper for Net::Init: add a new input or top blob to the net.  (Inputs have  
    // layer_id == -1, tops have layer_id >= 0.)  
    template <typename Dtype>  
    void Net<Dtype>::AppendTop(const NetParameter& param, const int layer_id,  
                               const int top_id, set<string>* available_blobs,  
                               map<string, int>* blob_name_to_idx) {  
      shared_ptr<LayerParameter> layer_param((layer_id >= 0) ?  
        (new LayerParameter(param.layer(layer_id))) : NULL);  
      const string& blob_name = layer_param ?  
          (layer_param->top_size() > top_id ?  
              layer_param->top(top_id) : "(automatic)") : param.input(top_id);  
      // Check if we are doing in-place computation  
      if (blob_name_to_idx && layer_param && layer_param->bottom_size() > top_id &&  
          blob_name == layer_param->bottom(top_id)) {  
        // In-place computation  
        LOG(INFO) << layer_param->name() << " -> " << blob_name << " (in-place)";  
        top_vecs_[layer_id].push_back(blobs_[(*blob_name_to_idx)[blob_name]].get());  
        top_id_vecs_[layer_id].push_back((*blob_name_to_idx)[blob_name]);  
      } else if (blob_name_to_idx &&  
                 blob_name_to_idx->find(blob_name) != blob_name_to_idx->end()) {  
        // If we are not doing in-place computation but have duplicated blobs,  
        // raise an error.  
        LOG(FATAL) << "Duplicate blobs produced by multiple sources.";  
      } else {  
        // Normal output.  
        if (layer_param) {  
          LOG(INFO) << layer_param->name() << " -> " << blob_name;  
        } else {  
          LOG(INFO) << "Input " << top_id << " -> " << blob_name;  
        }  
        shared_ptr<Blob<Dtype> > blob_pointer(new Blob<Dtype>());  
    // blobs_ only stores the intermediate results; blob_id is updated every time a top blob is visited  
        const int blob_id = blobs_.size();  
        blobs_.push_back(blob_pointer);  
        blob_names_.push_back(blob_name);  
        blob_need_backward_.push_back(false);  
        top_layer_indices_.push_back(make_pair(layer_id, blob_id));  
    /*
    blob_name_to_idx is a local variable; it acts as a bridge between the current layer's top blobs
    and the next layer's bottom blobs. Its (name, id) pairs are inserted into the map as the network
    is built layer by layer from the start, and both the names and the ids are unique: the name is the
    key (uniqueness is required by the map data structure), and the ids are unique as well (0, 1, 2, ...).
    Like blobs_, blob_name_to_idx is updated every time a top blob is visited in the "Normal output" case.
    */  
    // Add a new element --> the subscript operator can insert new elements into an (associative) map container  
        if (blob_name_to_idx) { (*blob_name_to_idx)[blob_name] = blob_id; }  
        if (layer_id == -1) {  
          // Set the (explicitly specified) dimensions of the input blob.  
          if (param.input_dim_size() > 0) {  
            blob_pointer->Reshape(param.input_dim(top_id * 4),  
                                  param.input_dim(top_id * 4 + 1),  
                                  param.input_dim(top_id * 4 + 2),  
                                  param.input_dim(top_id * 4 + 3));  
          } else {  
            blob_pointer->Reshape(param.input_shape(top_id));  
          }  
          net_input_blob_indices_.push_back(blob_id);  
    // When layer_id == -1, i.e. the current layer is the input layer, a new element is added to net_input_blob_indices_, i.e. a new input blob is added  
          net_input_blobs_.push_back(blob_pointer.get());  
        } else {  
          top_id_vecs_[layer_id].push_back(blob_id);  
    // When layer_id != -1, i.e. the current layer is not the input layer, a new element is added to top_id_vecs_ / top_vecs_, i.e. a new top blob is added  
          top_vecs_[layer_id].push_back(blob_pointer.get());  
        }  
      
      }  
      if (available_blobs) { available_blobs->insert(blob_name); }  
    }  
      
    // Helper for Net::Init: add a new bottom blob to the net.  
    template <typename Dtype>  
    int Net<Dtype>::AppendBottom(const NetParameter& param, const int layer_id,  
        const int bottom_id, set<string>* available_blobs,  
        map<string, int>* blob_name_to_idx) {  
      const LayerParameter& layer_param = param.layer(layer_id);  
      const string& blob_name = layer_param.bottom(bottom_id);  
      if (available_blobs->find(blob_name) == available_blobs->end()) {  
        LOG(FATAL) << "Unknown blob input " << blob_name  
                   << " (at index " << bottom_id << ") to layer " << layer_id;  
      }  
    // blob_name_to_idx is a map with unique keys; it was already filled when the blob's producing  
    // (or input) layer was processed --> (*blob_name_to_idx)[blob_name] = blob_id  
      const int blob_id = (*blob_name_to_idx)[blob_name];  
      LOG(INFO) << layer_names_[layer_id] << " <- " << blob_name;  
    // Stores the bottom blob pointers of every layer in the network; what is actually stored is the previous layer's top, because the network is stacked layer by layer  
      bottom_vecs_[layer_id].push_back(blobs_[blob_id].get());  // get() on the shared_ptr extracts the intermediate blob stored in blobs_  
      bottom_id_vecs_[layer_id].push_back(blob_id);  
      available_blobs->erase(blob_name);  
      bool propagate_down = true;  
      // Check if the backpropagation on bottom_id should be skipped  
      if (layer_param.propagate_down_size() > 0)  
        propagate_down = layer_param.propagate_down(bottom_id);  
      const bool need_backward = blob_need_backward_[blob_id] &&  
                              propagate_down;  // propagate_down == true means this bottom participates in BP; otherwise skip BP  
      bottom_need_backward_[layer_id].push_back(need_backward);  
      return blob_id;  
    }  
      
    template <typename Dtype>  
    void Net<Dtype>::AppendParam(const NetParameter& param, const int layer_id,  
                                 const int param_id) {  
    // The Layer template class's layer_param() method returns its LayerParameter member  
      const LayerParameter& layer_param = layers_[layer_id]->layer_param();  
      const int param_size = layer_param.param_size();  
      string param_name =  
          (param_size > param_id) ? layer_param.param(param_id).name() : "";  
      if (param_name.size()) {  
    // vector<string> param_display_names_: param_name here is the name member of ParamSpec; if a non-empty name exists, push the name into this vector, otherwise push the param_id  
        param_display_names_.push_back(param_name);  
      } else {  
        ostringstream param_display_name;  
        param_display_name << param_id;  
        param_display_names_.push_back(param_display_name.str());  
      }  
    // params_: the parameter blobs of the whole network, regardless of whether a parameter has a non-empty name or is shared!  
      const int net_param_id = params_.size();  // Append the parameter blob; net_param_id and param_id_vecs_ are updated on every iteration  
      params_.push_back(layers_[layer_id]->blobs()[param_id]);  
    // param_id_vecs_: its basic elements are net_param_ids; net_param_id and param_id_vecs_ are updated for every parameter blob visited  
      param_id_vecs_[layer_id].push_back(net_param_id);  
    // param_layer_indices_: its elements are pairs of the current layer_id and param_id. vector<pair<int, int> > param_layer_indices_  
      param_layer_indices_.push_back(make_pair(layer_id, param_id));  
      if (!param_size || !param_name.size() || (param_name.size() &&  
          param_names_index_.find(param_name) == param_names_index_.end())) {  
        // This layer "owns" this parameter blob -- it is either anonymous  
        // (i.e., not given a param_name) or explicitly given a name that we  
        // haven't already seen.  
    /* param_owners_ is a vector storing each parameter's "owner" --> -1 means the current Layer is
    this parameter's owner. If param_name is non-empty and can be found in param_names_index_, the
    parameter already exists in one or more earlier layers, i.e. it is shared among multiple layers.
    The comment on name in message ParamSpec in caffe.proto says: "To share a parameter between two
    layers, give it a (non-empty) name", so a parameter shared among multiple layers has a non-empty name.
    */  
        param_owners_.push_back(-1);  
    // record param_name  
        if (param_name.size()) {  
    /*
    map<string, int> param_names_index_ maps the network's non-empty parameter names to indices.
    Note that this name is the name in ParamSpec, and since "To share a parameter between two layers,
    give it a (non-empty) name", the pairs stored in this map are <parameter_name that will be shared, its index>.
    */  
          param_names_index_[param_name] = net_param_id;  
    /*
    map<string, int> param_names_index_: although net_param_id is updated on every iteration,
    it is only recorded in param_names_index_ when param_name.size() > 0.
    */  
        }  
      } else {  
        // Named param blob with name we've seen before: share params  
    // Because "To share a parameter between two layers, give it a (non-empty) name", this line fetches the net_param_id of the shared parameter's owner  
        const int owner_net_param_id = param_names_index_[param_name];  
        param_owners_.push_back(owner_net_param_id);  
    // Only the pair<layer_id, param_id> of the shared parameters, i.e. those with a non-empty name, is fetched here  
        const pair<int, int>& owner_index =  
            param_layer_indices_[owner_net_param_id];  
        const int owner_layer_id = owner_index.first;  
        const int owner_param_id = owner_index.second;  
        LOG(INFO) << "Sharing parameters '" << param_name << "' owned by "  
                  << "layer '" << layer_names_[owner_layer_id] << "', param "  
                  << "index " << owner_param_id;  
    // Get the current layer's current parameter blob  
        Blob<Dtype>* this_blob = layers_[layer_id]->blobs()[param_id].get();  
    // Get the corresponding parameter blob of the owner layer  
        Blob<Dtype>* owner_blob =  
            layers_[owner_layer_id]->blobs()[owner_param_id].get();  
        const int param_size = layer_param.param_size();  
        if (param_size > param_id && (layer_param.param(param_id).share_mode() ==  
                                      ParamSpec_DimCheckMode_PERMISSIVE)) {  
          // Permissive dimension checking -- only check counts are the same.  
          CHECK_EQ(this_blob->count(), owner_blob->count())  
              << "Shared parameter blobs must have the same count.";  
        } else {  
          // Strict dimension checking -- all dims must be the same.  
          CHECK(this_blob->shape() == owner_blob->shape());  
        }  
        layers_[layer_id]->blobs()[param_id]->ShareData(  
            *layers_[owner_layer_id]->blobs()[owner_param_id]);  
      }  
    }  
    /*
    Function: collect the learning rates and weight decays, i.e. fill params_lr_ and params_weight_decay_
    Input: none
    Output: none
    Steps: for every layer, and for each of its parameter blobs:
    1. look up the blob's ParamSpec if one was specified, otherwise use a default ParamSpec
    2. push the ParamSpec's lr_mult into params_lr_
    3. push the ParamSpec's decay_mult into params_weight_decay_
    (params_ itself was already filled by AppendParam during Init.)
    */  
    template <typename Dtype>  
    void Net<Dtype>::GetLearningRateAndWeightDecay() {  
      LOG(INFO) << "Collecting Learning Rate and Weight Decay.";  
      ParamSpec default_param_spec;  
      for (int i = 0; i < layers_.size(); ++i) {  
        vector<shared_ptr<Blob<Dtype> > >& layer_blobs = layers_[i]->blobs();  
        for (int j = 0; j < layer_blobs.size(); ++j) {  
          const ParamSpec* param_spec =  
              (layers_[i]->layer_param().param_size() > j) ?  
              &layers_[i]->layer_param().param(j) : &default_param_spec;  
          params_lr_.push_back(param_spec->lr_mult());  
          params_weight_decay_.push_back(param_spec->decay_mult());  
        }  
      }  
    }  
      
    template <typename Dtype>  
    Dtype Net<Dtype>::ForwardFromTo(int start, int end) {  
      CHECK_GE(start, 0);  
      CHECK_LT(end, layers_.size());  
      Dtype loss = 0;  
      if (debug_info_) {  
        for (int i = 0; i < net_input_blobs_.size(); ++i) {  
          InputDebugInfo(i);  
        }  
      }  
      for (int i = start; i <= end; ++i) {  
        // LOG(ERROR) << "Forwarding " << layer_names_[i];  
        Dtype layer_loss = layers_[i]->Forward(bottom_vecs_[i], top_vecs_[i]);  
        loss += layer_loss;  
        if (debug_info_) { ForwardDebugInfo(i); }  
      }  
      
    #ifdef USE_CUDNN  
      if (Caffe::mode() == Caffe::GPU)  
        CuDNNConvolutionLayer<Dtype>::RuntimeOptimize(1000);  
    #endif  
      return loss;  
    }  
      
    template <typename Dtype>  
    Dtype Net<Dtype>::ForwardFrom(int start) {  
      return ForwardFromTo(start, layers_.size() - 1);  
    }  
      
    template <typename Dtype>  
    Dtype Net<Dtype>::ForwardTo(int end) {  
      return ForwardFromTo(0, end);  
    }  
    /*
    Function: run a forward pass using the input blobs that have already been filled (ForwardPrefilled)
    Input: Dtype* loss
    Output: net_output_blobs_, the output blobs after the forward pass (a vector)
    */  
    template <typename Dtype>  
    const vector<Blob<Dtype>*>& Net<Dtype>::ForwardPrefilled(Dtype* loss) {  
      if (loss != NULL) {  
        *loss = ForwardFromTo(0, layers_.size() - 1);  
      } else {  
        ForwardFromTo(0, layers_.size() - 1);  
      }  
      return net_output_blobs_;  
    }  
    /*
    Function: copy the given input blobs into net_input_blobs_, then run the forward pass and compute the loss
    Input: the input blobs of the whole network
    Output: the output blobs of the whole network
    */  
    template <typename Dtype>  
    const vector<Blob<Dtype>*>& Net<Dtype>::Forward(  
        const vector<Blob<Dtype>*> & bottom, Dtype* loss) {  
      // Copy bottom to internal bottom  
      for (int i = 0; i < bottom.size(); ++i) {  
        net_input_blobs_[i]->CopyFrom(*bottom[i]);  
      }  
      return ForwardPrefilled(loss);  
    }  
    /*
    Function: an overload of Forward in which the input blobs are passed in as a serialized string
    */  
    template <typename Dtype>  
    string Net<Dtype>::Forward(const string& input_blob_protos, Dtype* loss) {  
      BlobProtoVector blob_proto_vec;  
      if (net_input_blobs_.size()) {  
        blob_proto_vec.ParseFromString(input_blob_protos);  
        CHECK_EQ(blob_proto_vec.blobs_size(), net_input_blobs_.size())  
            << "Incorrect input size.";  
        for (int i = 0; i < blob_proto_vec.blobs_size(); ++i) {  
          net_input_blobs_[i]->FromProto(blob_proto_vec.blobs(i));  
        }  
      }  
      ForwardPrefilled(loss);  
      blob_proto_vec.Clear();  
      for (int i = 0; i < net_output_blobs_.size(); ++i) {  
        net_output_blobs_[i]->ToProto(blob_proto_vec.add_blobs());  
      }  
      string output;  
      blob_proto_vec.SerializeToString(&output);  
      return output;  
    }  
      
    template <typename Dtype>  
    void Net<Dtype>::BackwardFromTo(int start, int end) {  
      CHECK_GE(end, 0);  
      CHECK_LT(start, layers_.size());  
      
      for (int i = start; i >= end; --i) {  
        if (layer_need_backward_[i]) {  
          layers_[i]->Backward(  
              top_vecs_[i], bottom_need_backward_[i], bottom_vecs_[i]);  
          if (debug_info_) { BackwardDebugInfo(i); }  
      
    #ifdef USE_MPI  
          if ((Caffe::parallel_mode() == Caffe::MPI) && (Caffe::remaining_sub_iter() == 0)) {  
            for (int n = 0; n < param_layer_indices_.size(); ++n) {  
              bool ready_for_sync = false;  
      
              //decide whether we need to sync the gradient of this blob  
              if ((param_layer_indices_[n].first == i)) {  
                if (param_owners_[n] == -1) {  
                  ready_for_sync = true;  
                } else {  
                  // this blob is a shared one, we need to make sure no more gradients will be  
                  // accumulated to it before transmission  
                  int owner_id = param_owners_[n];  
                  ready_for_sync = true;  
                  for (int m = n - 1; m >= 0; --m) {  
                    if ((param_owners_[m] == owner_id) && (param_layer_indices_[m].first >= end)) {  
                      // there are still layers holding this shared blob,  
                      // not secure the do the transmission  
                      ready_for_sync = false;  
                      break;  
                    }  
                  }  
                }  
              }  
              //sync gradient  
              if (ready_for_sync && layers_[i]->need_sync())  
                caffe_iallreduce(  
                    this->params_[n]->mutable_cpu_diff(),  
                    this->params_[n]->count()  
                );  
            }  
          }  
    #endif //USE_MPI  
      
        }  
      }  
    }  
      
    template <typename Dtype>  
    void Net<Dtype>::InputDebugInfo(const int input_id) {  
      const Blob<Dtype>& blob = *net_input_blobs_[input_id];  
      const string& blob_name = blob_names_[net_input_blob_indices_[input_id]];  
      const Dtype data_abs_val_mean = blob.asum_data() / blob.count();  
      LOG(INFO) << "    [Forward] "  
         << "Input " << blob_name << " data: " << data_abs_val_mean;  
    }  
      
    template <typename Dtype>  
    void Net<Dtype>::ForwardDebugInfo(const int layer_id) {  
      for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {  
        const Blob<Dtype>& blob = *top_vecs_[layer_id][top_id];  
        const string& blob_name = blob_names_[top_id_vecs_[layer_id][top_id]];  
        const Dtype data_abs_val_mean = blob.asum_data() / blob.count();  
        LOG(INFO) << "    [Forward] "  
           << "Layer " << layer_names_[layer_id] << ", top blob " << blob_name  
           << " data: " << data_abs_val_mean;  
      }  
      for (int param_id = 0; param_id < layers_[layer_id]->blobs().size();  
           ++param_id) {  
        const Blob<Dtype>& blob = *layers_[layer_id]->blobs()[param_id];  
        const int net_param_id = param_id_vecs_[layer_id][param_id];  
        const string& blob_name = param_display_names_[net_param_id];  
        const Dtype data_abs_val_mean = blob.asum_data() / blob.count();  
        LOG(INFO) << "    [Forward] "  
           << "Layer " << layer_names_[layer_id] << ", param blob " << blob_name  
           << " data: " << data_abs_val_mean;  
      }  
    }  
      
    template <typename Dtype>  
    void Net<Dtype>::BackwardDebugInfo(const int layer_id) {  
      const vector<Blob<Dtype>*>& bottom_vec = bottom_vecs_[layer_id];  
      for (int bottom_id = 0; bottom_id < bottom_vec.size(); ++bottom_id) {  
        if (!bottom_need_backward_[layer_id][bottom_id]) { continue; }  
        const Blob<Dtype>& blob = *bottom_vec[bottom_id];  
        const string& blob_name = blob_names_[bottom_id_vecs_[layer_id][bottom_id]];  
        const Dtype diff_abs_val_mean = blob.asum_diff() / blob.count();  
        LOG(INFO) << "    [Backward] "  
            << "Layer " << layer_names_[layer_id] << ", bottom blob " << blob_name  
            << " diff: " << diff_abs_val_mean;  
      }  
      for (int param_id = 0; param_id < layers_[layer_id]->blobs().size();  
           ++param_id) {  
        if (!layers_[layer_id]->param_propagate_down(param_id)) { continue; }  
        const Blob<Dtype>& blob = *layers_[layer_id]->blobs()[param_id];  
        const Dtype diff_abs_val_mean = blob.asum_diff() / blob.count();  
        LOG(INFO) << "    [Backward] "  
            << "Layer " << layer_names_[layer_id] << ", param blob " << param_id  
            << " diff: " << diff_abs_val_mean;  
      }  
    }  
      
    template <typename Dtype>  
    void Net<Dtype>::UpdateDebugInfo(const int param_id) {  
      const Blob<Dtype>& blob = *params_[param_id];  
      const int param_owner = param_owners_[param_id];  
      const string& layer_name = layer_names_[param_layer_indices_[param_id].first];  
      const string& param_display_name = param_display_names_[param_id];  
      const Dtype diff_abs_val_mean = blob.asum_diff() / blob.count();  
      if (param_owner < 0) {  
        const Dtype data_abs_val_mean = blob.asum_data() / blob.count();  
        LOG(INFO) << "    [Update] Layer " << layer_name  
            << ", param " << param_display_name  
            << " data: " << data_abs_val_mean << "; diff: " << diff_abs_val_mean;  
      } else {  
        const string& owner_layer_name =  
            layer_names_[param_layer_indices_[param_owner].first];  
        LOG(INFO) << "    [Update] Layer " << layer_name  
            << ", param blob " << param_display_name  
            << " (owned by layer " << owner_layer_name << ", "  
            << "param " << param_display_names_[param_owners_[param_id]] << ")"  
            << " diff: " << diff_abs_val_mean;  
      }  
    }  
    /* 
    Purpose: share certain layers from another ("other") network. 
    Steps, for the i-th layer of the other network (the source layer): 
    1. Get a Layer pointer to the i-th layer. 
    2. Read the name of the i-th (source) layer. 
    3. Look up the target layer by that name. If it is not found, i.e. target_layer_id == layer_names_.size(), the i-th layer of the other net is ignored (it does not need to be shared into this network). 
    4. If it is found, the i-th layer of the other net is shared into this network: read all blobs of the target layer into target_blobs, then 
        (a) check that the target layer and the source layer have the same number of blobs, 
        (b) check that each pair of blobs has the same shape, 
        (c) call ShareData so the target layer's blobs share the source layer's blob data. 
     
    */  
    template <typename Dtype>  
    void Net<Dtype>::ShareTrainedLayersWith(const Net* other) {  
      int num_source_layers = other->layers().size();  
      for (int i = 0; i < num_source_layers; ++i) {  
        Layer<Dtype>* source_layer = other->layers()[i].get();  
        const string& source_layer_name = other->layer_names()[i];  
        int target_layer_id = 0;  
        while (target_layer_id != layer_names_.size() &&  
            layer_names_[target_layer_id] != source_layer_name) {  
          ++target_layer_id;  
        }  
        if (target_layer_id == layer_names_.size()) {  
          DLOG(INFO) << "Ignoring source layer " << source_layer_name;  
          continue;  
        }  
        DLOG(INFO) << "Copying source layer " << source_layer_name;  
        vector<shared_ptr<Blob<Dtype> > >& target_blobs =  
            layers_[target_layer_id]->blobs();  
        CHECK_EQ(target_blobs.size(), source_layer->blobs().size())  
            << "Incompatible number of blobs for layer " << source_layer_name;  
        for (int j = 0; j < target_blobs.size(); ++j) {  
          Blob<Dtype>* source_blob = source_layer->blobs()[j].get();  
          CHECK(target_blobs[j]->shape() == source_blob->shape());  
          target_blobs[j]->ShareData(*source_blob);  
        }  
      }  
    }  
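
A minimal usage sketch of ShareTrainedLayersWith (hedged: the prototxt file names and the training loop are placeholders; the point is that sharing aliases the parameter data instead of copying it):

#include "caffe/net.hpp"

int main() {
  // A train net and a test net whose layers carry the same names.
  caffe::Net<float> train_net("lenet_train.prototxt", caffe::TRAIN);
  caffe::Net<float> test_net("lenet_test.prototxt", caffe::TEST);
  // ... train train_net ...
  // The test net's parameter blobs now point at the train net's data;
  // nothing is copied, so later updates to train_net are seen immediately.
  test_net.ShareTrainedLayersWith(&train_net);
  return 0;
}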
      
    template <typename Dtype>  
    void Net<Dtype>::BackwardFrom(int start) {  
      BackwardFromTo(start, 0);  
    }  
      
    template <typename Dtype>  
    void Net<Dtype>::BackwardTo(int end) {  
      BackwardFromTo(layers_.size() - 1, end);  
    }  
    /* 
    Purpose: run backpropagation over the whole network. 
    */  
    template <typename Dtype>  
    void Net<Dtype>::Backward() {  
      BackwardFromTo(layers_.size() - 1, 0);  
      if (debug_info_) {  
        Dtype asum_data = 0, asum_diff = 0, sumsq_data = 0, sumsq_diff = 0;  
        for (int i = 0; i < params_.size(); ++i) {  
          if (param_owners_[i] >= 0) { continue; }  
          asum_data += params_[i]->asum_data();  
          asum_diff += params_[i]->asum_diff();  
          sumsq_data += params_[i]->sumsq_data();  
          sumsq_diff += params_[i]->sumsq_diff();  
        }  
        const Dtype l2norm_data = std::sqrt(sumsq_data);  
        const Dtype l2norm_diff = std::sqrt(sumsq_diff);  
        LOG(ERROR) << "    [Backward] All net params (data, diff): "  
            << "L1 norm = (" << asum_data << ", " << asum_diff << "); "  
            << "L2 norm = (" << l2norm_data << ", " << l2norm_diff << ")";  
      }  
    }  
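
For context, a hedged sketch of where Backward() sits inside one training iteration (the real loop lives in the solver; learning-rate scaling of the diffs is omitted here):

#include "caffe/net.hpp"

// One bare-bones training step, driven the way a solver would drive it.
void TrainStep(caffe::Net<float>& net) {
  float loss = 0;
  net.ForwardPrefilled(&loss);  // forward pass: fills every top blob and the loss
  net.Backward();               // backward pass: fills the diff of every parameter blob
  net.Update();                 // apply data -= diff on the owned parameters (see Update() below)
}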
      
    template <typename Dtype>  
    void Net<Dtype>::Reshape() {  
      for (int i = 0; i < layers_.size(); ++i) {  
        layers_[i]->Reshape(bottom_vecs_[i], top_vecs_[i]);  
      }  
      
    #ifdef USE_CUDNN  
      if (Caffe::mode() == Caffe::GPU)  
        CuDNNConvolutionLayer<Dtype>::RuntimeOptimize(1000);  
    #endif  
    }  
    /* 
    Purpose: same idea as ShareTrainedLayersWith, except that FromProto is used to 
    copy the source layer's blobs into the target layer's blobs (a copy, not a share). 
    */  
    template <typename Dtype>  
    void Net<Dtype>::CopyTrainedLayersFrom(const NetParameter& param) {  
      int num_source_layers = param.layer_size();  
      for (int i = 0; i < num_source_layers; ++i) {  
        const LayerParameter& source_layer = param.layer(i);  
        const string& source_layer_name = source_layer.name();  
        int target_layer_id = 0;  
        while (target_layer_id != layer_names_.size() &&  
            layer_names_[target_layer_id] != source_layer_name) {  
          ++target_layer_id;  
        }  
        if (target_layer_id == layer_names_.size()) {  
          DLOG(INFO) << "Ignoring source layer " << source_layer_name;  
          continue;  
        }  
        DLOG(INFO) << "Copying source layer " << source_layer_name;  
        vector<shared_ptr<Blob<Dtype> > >& target_blobs =  
            layers_[target_layer_id]->blobs();  
        CHECK_EQ(target_blobs.size(), source_layer.blobs_size())  
            << "Incompatible number of blobs for layer " << source_layer_name;  
        for (int j = 0; j < target_blobs.size(); ++j) {  
          const bool kReshape = false;  
          target_blobs[j]->FromProto(source_layer.blobs(j), kReshape);  
        }  
      }  
    }  
    /* 
    Purpose: read a NetParameter param from a file, then call CopyTrainedLayersFrom(). 
    */  
    template <typename Dtype>  
    void Net<Dtype>::CopyTrainedLayersFrom(const string trained_filename) {  
      NetParameter param;  
      ReadNetParamsFromBinaryFileOrDie(trained_filename, &param);  
      CopyTrainedLayersFrom(param);  
    }  
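
A typical deployment-time usage (hedged sketch; the prototxt and caffemodel paths are placeholders):

#include "caffe/net.hpp"

int main() {
  caffe::Net<float> net("lenet_deploy.prototxt", caffe::TEST);
  // Copies weights by layer name from the snapshot; layers whose names do
  // not appear in the snapshot are simply skipped.
  net.CopyTrainedLayersFrom("lenet_iter_10000.caffemodel");
  return 0;
}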
    /* 
    Purpose: serialize the network's parameters into a NetParameter (prototxt) message. 
    Steps: 
    1. Set the network name: param->set_name(name_). 
    2. Add the names of the network input blobs. 
    3. For the i-th layer: 
        (a) add the names of its bottom blobs, 
        (b) add the names of its top blobs, 
        (c) write the layer itself to the proto. 
     
    */  
    template <typename Dtype>  
    void Net<Dtype>::ToProto(NetParameter* param, bool write_diff) const {  
      param->Clear();  
      param->set_name(name_);  
      // Add bottom and top  
      for (int i = 0; i < net_input_blob_indices_.size(); ++i) {  
        param->add_input(blob_names_[net_input_blob_indices_[i]]);  
      }  
      DLOG(INFO) << "Serializing " << layers_.size() << " layers";  
      for (int i = 0; i < layers_.size(); ++i) {  
        LayerParameter* layer_param = param->add_layer();  
    // bottom_id_vecs_ stores the bottom blob ids of every layer in the whole network  
        for (int j = 0; j < bottom_id_vecs_[i].size(); ++j) {  
          layer_param->add_bottom(blob_names_[bottom_id_vecs_[i][j]]);  
        }  
        for (int j = 0; j < top_id_vecs_[i].size(); ++j) {  
          layer_param->add_top(blob_names_[top_id_vecs_[i][j]]);  
        }  
        layers_[i]->ToProto(layer_param, write_diff);  
      }  
    }  
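
A hedged sketch of how ToProto is typically used to snapshot a net (WriteProtoToBinaryFile comes from caffe/util/io.hpp; the output path is a placeholder):

#include "caffe/net.hpp"
#include "caffe/util/io.hpp"

void Snapshot(const caffe::Net<float>& net) {
  caffe::NetParameter net_param;
  net.ToProto(&net_param, false);  // false: do not serialize the diffs
  caffe::WriteProtoToBinaryFile(net_param, "snapshot.caffemodel");
}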
    /* 
    Purpose: update the values of the blobs in params_. 
    */  
    template <typename Dtype>  
    void Net<Dtype>::Update() {  
      // First, accumulate the diffs of any shared parameters into their owner's  
      // diff. (Assumes that the learning rate, weight decay, etc. have already been  
      // accounted for in the current diff.)  
      for (int i = 0; i < params_.size(); ++i) {  
        if (param_owners_[i] < 0) { continue; }  
        if (debug_info_) { UpdateDebugInfo(i); }  
        const int count = params_[i]->count();  
        const Dtype* this_diff;  
        Dtype* owner_diff;  
        switch (Caffe::mode()) {  
        case Caffe::CPU:  
          this_diff = params_[i]->cpu_diff();  
          owner_diff = params_[param_owners_[i]]->mutable_cpu_diff();  
          caffe_add(count, this_diff, owner_diff, owner_diff);  
          break;  
        case Caffe::GPU:  
    #ifndef CPU_ONLY  
          this_diff = params_[i]->gpu_diff();  
          owner_diff = params_[param_owners_[i]]->mutable_gpu_diff();  
          caffe_gpu_add(count, this_diff, owner_diff, owner_diff);  
    #else  
          NO_GPU;  
    #endif  
          break;  
        default:  
          LOG(FATAL) << "Unknown caffe mode: " << Caffe::mode();  
        }  
      }  
      // Now, update the owned parameters.  
      for (int i = 0; i < params_.size(); ++i) {  
        if (param_owners_[i] >= 0) { continue; }  
        if (debug_info_) { UpdateDebugInfo(i); }  
        params_[i]->Update();  
      }  
    }  
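
A toy illustration (plain C++, not Caffe code) of the owner/shared bookkeeping Update() relies on: shared entries first fold their diff into their owner's diff, then only owners apply the update:

#include <vector>

struct ToyParam { float data; float diff; int owner; };  // owner == -1 means "owns itself"

void ToyUpdate(std::vector<ToyParam>& params) {
  // 1. accumulate the diffs of shared parameters into their owner's diff
  for (size_t i = 0; i < params.size(); ++i) {
    if (params[i].owner >= 0) params[params[i].owner].diff += params[i].diff;
  }
  // 2. only the owners apply the update
  for (size_t i = 0; i < params.size(); ++i) {
    if (params[i].owner < 0) params[i].data -= params[i].diff;
  }
}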
    /* 
    Purpose: check whether a blob named blob_name exists. 
    */  
    template <typename Dtype>  
    bool Net<Dtype>::has_blob(const string& blob_name) const {  
      return blob_names_index_.find(blob_name) != blob_names_index_.end();  
    }  
    /* 
    Purpose: given a blob name, return a pointer to that blob. 
    */  
    template <typename Dtype>  
    const shared_ptr<Blob<Dtype> > Net<Dtype>::blob_by_name(  
        const string& blob_name) const {  
      shared_ptr<Blob<Dtype> > blob_ptr;  
      if (has_blob(blob_name)) {  
        blob_ptr = blobs_[blob_names_index_.find(blob_name)->second];  
      } else {  
        blob_ptr.reset((Blob<Dtype>*)(NULL));  
        LOG(WARNING) << "Unknown blob name " << blob_name;  
      }  
      return blob_ptr;  
    }  
    /* 
    Purpose: check whether a layer named layer_name exists. 
    */  
    template <typename Dtype>  
    bool Net<Dtype>::has_layer(const string& layer_name) const {  
      return layer_names_index_.find(layer_name) != layer_names_index_.end();  
    }  
      
    /* 
    Purpose: given a layer name, return a pointer to that layer. 
    */  
    template <typename Dtype>  
    const shared_ptr<Layer<Dtype> > Net<Dtype>::layer_by_name(  
        const string& layer_name) const {  
      shared_ptr<Layer<Dtype> > layer_ptr;  
      if (has_layer(layer_name)) {  
        layer_ptr = layers_[layer_names_index_.find(layer_name)->second];  
      } else {  
        layer_ptr.reset((Layer<Dtype>*)(NULL));  
        LOG(WARNING) << "Unknown layer name " << layer_name;  
      }  
      return layer_ptr;  
    }  
      
    INSTANTIATE_CLASS(Net);  
      
    }  // namespace caffe  


Another walkthrough of Net<Dtype>::Init() and its helper functions (AppendBottom, AppendTop, AppendParam):

template <typename Dtype>
void Net<Dtype>::Init(const NetParameter& in_param) {
//in_param: the NetParameter passed in from solver.cpp
  CHECK(Caffe::root_solver() || root_net_)
      << "root_net_ needs to be set for all non-root solvers";
  // Set phase from the state.
  phase_ = in_param.state().phase();
  //phase_ = caffe::TRAIN
  // Filter layers based on their include/exclude rules and
  // the current NetState.
  NetParameter filtered_param;
  FilterNet(in_param, &filtered_param);
  //This function checks in_param: layers whose include/exclude rules are satisfied
  //are copied into filtered_param and the rest are dropped. In other words, it removes
  //the filtered-out layers from in_param and copies what remains into filtered_param.
  LOG_IF(INFO, Caffe::root_solver())
      << "Initializing net from parameters: " << std::endl
      << filtered_param.DebugString();
  // Create a copy of filtered_param with splits added where necessary.
  NetParameter param;
  InsertSplits(filtered_param, &param);
  //reads the new (split-augmented) network description from filtered_param into param
  // Basically, build all the layers and set up their connections.
  name_ = param.name();
  map<string, int> blob_name_to_idx;
  set<string> available_blobs;
  //for the std::set container, see http://blog.csdn.net/wangran51/article/details/8836160
  memory_used_ = 0;
  // For each layer, set up its input and output
  bottom_vecs_.resize(param.layer_size());//resize bottom_vecs_; the dump below shows it before and after the call
  // bottom_vecs_ = std::vector of length 0, capacity 0
// bottom_vecs_ = std::vector of length 9, capacity 9 = {
//  std::vector of length 0, capacity 0, std::vector of length 0, capacity 0, 
//  std::vector of length 0, capacity 0, std::vector of length 0, capacity 0, 
//  std::vector of length 0, capacity 0, std::vector of length 0, capacity 0, 
// std::vector of length 0, capacity 0, std::vector of length 0, capacity 0, 
//  std::vector of length 0, capacity 0}
//the nine elements correspond to the nine layers of the train net, one slot per layer

  top_vecs_.resize(param.layer_size());
  bottom_id_vecs_.resize(param.layer_size());
  param_id_vecs_.resize(param.layer_size());
  top_id_vecs_.resize(param.layer_size());
  bottom_need_backward_.resize(param.layer_size());
  //as a rule of thumb, members whose names end with '_' hold the intermediate state built up while Init() runs
  for (int layer_id = 0; layer_id < param.layer_size(); ++layer_id) {
  //process each layer in turn
    // For non-root solvers, whether this layer is shared from root_net_.
    bool share_from_root = !Caffe::root_solver()
        && root_net_->layers_[layer_id]->ShareInParallel();
    // Inherit phase from net if unset.
    if (!param.layer(layer_id).has_phase()) {
      param.mutable_layer(layer_id)->set_phase(phase_);
    }
    // Setup layer.
    const LayerParameter& layer_param = param.layer(layer_id);//see the LayerParameter message in caffe.proto
    if (layer_param.propagate_down_size() > 0) {
    //propagate_down:Specifies on which bottoms the backpropagation should 
    //be skipped. The size must be either 0 or equal to the number of bottoms.
      CHECK_EQ(layer_param.propagate_down_size(),
          layer_param.bottom_size())
          << "propagate_down param must be specified "
          << "either 0 or bottom_size times ";
    }
    if (share_from_root) {
      LOG(INFO) << "Sharing layer " << layer_param.name() << " from root net";
      layers_.push_back(root_net_->layers_[layer_id]);
      layers_[layer_id]->SetShared(true);
    } else {
      layers_.push_back(LayerRegistry<Dtype>::CreateLayer(layer_param));
      //create the layer from layer_param and push it into layers_ (details in the next post)
    }
    layer_names_.push_back(layer_param.name());
    LOG_IF(INFO, Caffe::root_solver())
        << "Creating Layer " << layer_param.name();
    bool need_backward = false;

    // Figure out this layer's input and output
    for (int bottom_id = 0; bottom_id < layer_param.bottom_size();
         ++bottom_id) {
      //the layer itself was created above; now wire up its bottom/top blobs
      const int blob_id = AppendBottom(param, layer_id, bottom_id,
                                       &available_blobs, &blob_name_to_idx);
      //see Appendix 1 (AppendBottom)
      // If a blob needs backward, this layer should provide it.
      need_backward |= blob_need_backward_[blob_id];
    }
    int num_top = layer_param.top_size();
    for (int top_id = 0; top_id < num_top; ++top_id) {
      AppendTop(param, layer_id, top_id, &available_blobs, &blob_name_to_idx);
      //see Appendix 2 (AppendTop)
      // Collect Input layer tops as Net inputs.
      if (layer_param.type() == "Input") {
        const int blob_id = blobs_.size() - 1;
        net_input_blob_indices_.push_back(blob_id);
        net_input_blobs_.push_back(blobs_[blob_id].get());
      }
    }
    // If the layer specifies that AutoTopBlobs() -> true and the LayerParameter
    // specified fewer than the required number (as specified by
    // ExactNumTopBlobs() or MinTopBlobs()), allocate them here.
    Layer<Dtype>* layer = layers_[layer_id].get();
    //vector<shared_ptr<Layer<Dtype> > > layers_;
    if (layer->AutoTopBlobs()) {
      const int needed_num_top =
          std::max(layer->MinTopBlobs(), layer->ExactNumTopBlobs());
      for (; num_top < needed_num_top; ++num_top) {
        // Add "anonymous" top blobs -- do not modify available_blobs or
        // blob_name_to_idx as we don't want these blobs to be usable as input
        // to other layers.
        AppendTop(param, layer_id, num_top, NULL, NULL);
      }
    }
    // After this layer is connected, set it up.
    if (share_from_root) {
      // Set up size of top blobs using root_net_
      const vector<Blob<Dtype>*>& base_top = root_net_->top_vecs_[layer_id];
      const vector<Blob<Dtype>*>& this_top = this->top_vecs_[layer_id];
      for (int top_id = 0; top_id < base_top.size(); ++top_id) {
        this_top[top_id]->ReshapeLike(*base_top[top_id]);
        LOG(INFO) << "Created top blob " << top_id << " (shape: "
            << this_top[top_id]->shape_string() <<  ") for shared layer "
            << layer_param.name();
      }
    } else {
      layers_[layer_id]->SetUp(bottom_vecs_[layer_id], top_vecs_[layer_id]);
      //SetUp() is described in the next post; there is too much to cover here
    }
    LOG_IF(INFO, Caffe::root_solver())
        << "Setting up " << layer_names_[layer_id];

        //update the blob_loss_weights_ vector
    for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {
      if (blob_loss_weights_.size() <= top_id_vecs_[layer_id][top_id]) {
        blob_loss_weights_.resize(top_id_vecs_[layer_id][top_id] + 1, Dtype(0));
        //grow blob_loss_weights_ so it can be indexed by top_id_vecs_[layer_id][top_id]
      }
      blob_loss_weights_[top_id_vecs_[layer_id][top_id]] = layer->loss(top_id);
      //loss(top_id) returns the loss_weight --> the Layer template's SetUp() calls
      //SetLossWeights() to set the private member loss_, which in fact stores the loss_weight
      LOG_IF(INFO, Caffe::root_solver())
          << "Top shape: " << top_vecs_[layer_id][top_id]->shape_string();
          //  top_vecs_[0][0]->shape_string() = "64 1 28 28 (50176)"
      if (layer->loss(top_id)) {
        LOG_IF(INFO, Caffe::root_solver())
            << "    with loss weight " << layer->loss(top_id);
      }
      memory_used_ += top_vecs_[layer_id][top_id]->count();
    }
    LOG_IF(INFO, Caffe::root_solver())
        << "Memory required for data: " << memory_used_ * sizeof(Dtype);
    const int param_size = layer_param.param_size();
    const int num_param_blobs = layers_[layer_id]->blobs().size();
    //param_size is the number of ParamSpec param entries in layer_param, while
    //num_param_blobs is the number of learnable parameter blobs owned by the layer; param_size <= num_param_blobs
    CHECK_LE(param_size, num_param_blobs)
        << "Too many params specified for layer " << layer_param.name();
    ParamSpec default_param_spec;
    for (int param_id = 0; param_id < num_param_blobs; ++param_id) {
      const ParamSpec* param_spec = (param_id < param_size) ?
          &layer_param.param(param_id) : &default_param_spec;
      const bool param_need_backward = param_spec->lr_mult() != 0;
      //whether this parameter is back-propagated depends on its lr_mult: if it is 0, no gradient flows to it
      need_backward |= param_need_backward;
      //need_backward is OR-ed with param_need_backward; once any iteration sets it
      //to true, it stays true after this for loop ends
      layers_[layer_id]->set_param_propagate_down(param_id,
                                                  param_need_backward);
    }
    for (int param_id = 0; param_id < num_param_blobs; ++param_id) {
      AppendParam(param, layer_id, param_id);//see Appendix 3 (AppendParam)
    }
    // Finally, set the backward flag
    layer_need_backward_.push_back(need_backward);
    if (need_backward) {
      for (int top_id = 0; top_id < top_id_vecs_[layer_id].size(); ++top_id) {
        blob_need_backward_[top_id_vecs_[layer_id][top_id]] = true;
      }
    }
  }
  //end of the big loop that processes every layer (see Appendix 4)

  // Go through the net backwards to determine which blobs contribute to the
  // loss.  We can skip backward computation for blobs that don't contribute
  // to the loss.
  // Also checks if all bottom blobs don't need backward computation (possible
  // because the skip_propagate_down param) and so we can skip backward
  // computation for the entire layer
  set<string> blobs_under_loss;
  set<string> blobs_skip_backp;
  //these two sets may look puzzling at first; their meaning becomes clear below
  //traverse the layers from the top of the net (closest to the loss) back down
  for (int layer_id = layers_.size() - 1; layer_id >= 0; --layer_id) {
    bool layer_contributes_loss = false;
    bool layer_skip_propagate_down = true;
    //if this stays true, none of the layer's bottom blobs needs backward computation,
    //i.e. the whole layer can skip backward computation. Note that its meaning is the
    //opposite of propagate_down in message LayerParameter in caffe.proto.
    //for every top blob of this layer
    for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {
      const string& blob_name = blob_names_[top_id_vecs_[layer_id][top_id]];
      if (layers_[layer_id]->loss(top_id) ||
          (blobs_under_loss.find(blob_name) != blobs_under_loss.end())) {
           //blobs_under_loss is populated further down in this loop body,
           //by the layers already visited (those closer to the loss)
        layer_contributes_loss = true;
      }
      if (blobs_skip_backp.find(blob_name) == blobs_skip_backp.end()) {
        layer_skip_propagate_down = false;
      }
      if (layer_contributes_loss && !layer_skip_propagate_down)
        break;
    }
    // If this layer can skip backward computation, also all his bottom blobs
    // don't need backpropagation
    if (layer_need_backward_[layer_id] && layer_skip_propagate_down) {
      layer_need_backward_[layer_id] = false;
      for (int bottom_id = 0; bottom_id < bottom_vecs_[layer_id].size();
               ++bottom_id) {
        bottom_need_backward_[layer_id][bottom_id] = false;
      }
    }
    if (!layer_contributes_loss) { layer_need_backward_[layer_id] = false; }
    if (Caffe::root_solver()) {
      if (layer_need_backward_[layer_id]) {
        LOG(INFO) << layer_names_[layer_id] << " needs backward computation.";
      } else {
        LOG(INFO) << layer_names_[layer_id]
            << " does not need backward computation.";
      }
    }
    for (int bottom_id = 0; bottom_id < bottom_vecs_[layer_id].size();
         ++bottom_id) {
      if (layer_contributes_loss) {
        const string& blob_name =
            blob_names_[bottom_id_vecs_[layer_id][bottom_id]];
        blobs_under_loss.insert(blob_name);
        //if the current layer contributes to the loss, record its bottom blob names in blobs_under_loss
      } else {
        bottom_need_backward_[layer_id][bottom_id] = false;
      }
      if (!bottom_need_backward_[layer_id][bottom_id]) {
        const string& blob_name =
                   blob_names_[bottom_id_vecs_[layer_id][bottom_id]];
        blobs_skip_backp.insert(blob_name);
        //if this bottom blob does not need backpropagation, record its name in blobs_skip_backp
      }
    }
  }
  // Handle force_backward if needed.
  if (param.force_backward()) {
    for (int layer_id = 0; layer_id < layers_.size(); ++layer_id) {
      layer_need_backward_[layer_id] = true;
      for (int bottom_id = 0;
           bottom_id < bottom_need_backward_[layer_id].size(); ++bottom_id) {
        bottom_need_backward_[layer_id][bottom_id] =
            bottom_need_backward_[layer_id][bottom_id] ||
            layers_[layer_id]->AllowForceBackward(bottom_id);
        blob_need_backward_[bottom_id_vecs_[layer_id][bottom_id]] =
            blob_need_backward_[bottom_id_vecs_[layer_id][bottom_id]] ||
            bottom_need_backward_[layer_id][bottom_id];
      }
      for (int param_id = 0; param_id < layers_[layer_id]->blobs().size();
           ++param_id) {
        layers_[layer_id]->set_param_propagate_down(param_id, true);
      }
    }
  }
  // In the end, all remaining blobs are considered output blobs.
  for (set<string>::iterator it = available_blobs.begin();
      it != available_blobs.end(); ++it) {
    LOG_IF(INFO, Caffe::root_solver())
        << "This network produces output " << *it;
    net_output_blobs_.push_back(blobs_[blob_name_to_idx[*it]].get());
    net_output_blob_indices_.push_back(blob_name_to_idx[*it]);
  }

//blob_names_.size() = 9
  for (size_t blob_id = 0; blob_id < blob_names_.size(); ++blob_id) {
    blob_names_index_[blob_names_[blob_id]] = blob_id;
    //fill blob_names_index_ one entry at a time
  }

//layer_names_.size()= 9
  for (size_t layer_id = 0; layer_id < layer_names_.size(); ++layer_id) {
    layer_names_index_[layer_names_[layer_id]] = layer_id;
  }
/*
(gdb) p blob_names_index_
$95 = std::map with 9 elements = {["conv1"] = 2, ["conv2"] = 4, ["data"] = 0, 
  ["ip1"] = 6, ["ip2"] = 7, ["label"] = 1, ["loss"] = 8, ["pool1"] = 3, 
  ["pool2"] = 5}
(gdb) p  layer_names_index_
$96 = std::map with 9 elements = {["conv1"] = 1, ["conv2"] = 3, ["ip1"] = 5, 
  ["ip2"] = 7, ["loss"] = 8, ["mnist"] = 0, ["pool1"] = 2, ["pool2"] = 4, 
  ["relu1"] = 6}
*/
  ShareWeights();
  debug_info_ = param.debug_info();
  LOG_IF(INFO, Caffe::root_solver()) << "Network initialization done.";
}
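
A hedged sketch showing that the bookkeeping built by Init() (layer_names_, blob_names_, ...) is reachable through the public accessors once the constructor returns (the prototxt path is a placeholder):

#include <glog/logging.h>
#include "caffe/net.hpp"

void DumpNetStructure() {
  caffe::Net<float> net("lenet_train_test.prototxt", caffe::TRAIN);
  for (size_t i = 0; i < net.layer_names().size(); ++i)
    LOG(INFO) << "layer " << i << ": " << net.layer_names()[i];
  for (size_t i = 0; i < net.blob_names().size(); ++i)
    LOG(INFO) << "blob  " << i << ": " << net.blob_names()[i];
}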

template <typename Dtype>
void Net<Dtype>::FilterNet(const NetParameter& param,
    NetParameter* param_filtered) {
  NetState net_state(param.state());
  param_filtered->CopyFrom(param);
  param_filtered->clear_layer();
  for (int i = 0; i < param.layer_size(); ++i) {
    const LayerParameter& layer_param = param.layer(i);
    const string& layer_name = layer_param.name();
    CHECK(layer_param.include_size() == 0 || layer_param.exclude_size() == 0)
          << "Specify either include rules or exclude rules; not both.";
    // If no include rules are specified, the layer is included by default and
    // only excluded if it meets one of the exclude rules.
    bool layer_included = (layer_param.include_size() == 0);
    for (int j = 0; layer_included && j < layer_param.exclude_size(); ++j) {
      if (StateMeetsRule(net_state, layer_param.exclude(j), layer_name)) {
        layer_included = false;
      }
    }
    for (int j = 0; !layer_included && j < layer_param.include_size(); ++j) {
      if (StateMeetsRule(net_state, layer_param.include(j), layer_name)) {
        layer_included = true;
      }
    }
    if (layer_included) {
      param_filtered->add_layer()->CopyFrom(layer_param);
    }
  }
}
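
A hedged sketch of FilterNet's effect, assuming it is exposed as the static helper declared in net.hpp (the layer names here are made up): a layer whose include rule demands TRAIN is dropped when the net state says TEST.

#include "caffe/net.hpp"
#include "caffe/proto/caffe.pb.h"

void FilterExample() {
  caffe::NetParameter in_param;
  in_param.mutable_state()->set_phase(caffe::TEST);
  caffe::LayerParameter* data_layer = in_param.add_layer();
  data_layer->set_name("mnist");
  data_layer->set_type("Data");
  data_layer->add_include()->set_phase(caffe::TRAIN);  // TRAIN-only layer

  caffe::NetParameter filtered;
  caffe::Net<float>::FilterNet(in_param, &filtered);
  // filtered.layer_size() == 0: the TRAIN-only layer was filtered out.
}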

template <typename Dtype>
bool Net<Dtype>::StateMeetsRule(const NetState& state,
    const NetStateRule& rule, const string& layer_name) {
  // Check whether the rule is broken due to phase.
  if (rule.has_phase()) {
      if (rule.phase() != state.phase()) {
        LOG_IF(INFO, Caffe::root_solver())
            << "The NetState phase (" << state.phase()
            << ") differed from the phase (" << rule.phase()
            << ") specified by a rule in layer " << layer_name;
        return false;
      }
  }
  // Check whether the rule is broken due to min level.
  if (rule.has_min_level()) {
    if (state.level() < rule.min_level()) {
      LOG_IF(INFO, Caffe::root_solver())
          << "The NetState level (" << state.level()
          << ") is above the min_level (" << rule.min_level()
          << ") specified by a rule in layer " << layer_name;
      return false;
    }
  }
  // Check whether the rule is broken due to max level.
  if (rule.has_max_level()) {
    if (state.level() > rule.max_level()) {
      LOG_IF(INFO, Caffe::root_solver())
          << "The NetState level (" << state.level()
          << ") is above the max_level (" << rule.max_level()
          << ") specified by a rule in layer " << layer_name;
      return false;
    }
  }
  // Check whether the rule is broken due to stage. The NetState must
  // contain ALL of the rule's stages to meet it.
  for (int i = 0; i < rule.stage_size(); ++i) {
    // Check that the NetState contains the rule's ith stage.
    bool has_stage = false;
    for (int j = 0; !has_stage && j < state.stage_size(); ++j) {
      if (rule.stage(i) == state.stage(j)) { has_stage = true; }
    }
    if (!has_stage) {
      LOG_IF(INFO, Caffe::root_solver())
          << "The NetState did not contain stage '" << rule.stage(i)
          << "' specified by a rule in layer " << layer_name;
      return false;
    }
  }
  // Check whether the rule is broken due to not_stage. The NetState must
  // contain NONE of the rule's not_stages to meet it.
  for (int i = 0; i < rule.not_stage_size(); ++i) {
    // Check that the NetState contains the rule's ith not_stage.
    bool has_stage = false;
    for (int j = 0; !has_stage && j < state.stage_size(); ++j) {
      if (rule.not_stage(i) == state.stage(j)) { has_stage = true; }
    }
    if (has_stage) {
      LOG_IF(INFO, Caffe::root_solver())
          << "The NetState contained a not_stage '" << rule.not_stage(i)
          << "' specified by a rule in layer " << layer_name;
      return false;
    }
  }
  return true;
}
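
A hedged sketch of StateMeetsRule on a hand-built NetState and NetStateRule (the stage name and layer name are made up):

#include "caffe/net.hpp"
#include "caffe/proto/caffe.pb.h"

void RuleExample() {
  caffe::NetState state;
  state.set_phase(caffe::TEST);
  state.set_level(1);
  state.add_stage("deploy");

  caffe::NetStateRule rule;
  rule.set_phase(caffe::TEST);
  rule.set_min_level(0);
  rule.add_stage("deploy");

  // true: phase matches, level 1 >= min_level 0, and stage "deploy" is present.
  bool met = caffe::Net<float>::StateMeetsRule(state, rule, "my_layer");
  (void)met;
}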

Appendix 1: AppendBottom
// Helper for Net::Init: add a new bottom blob to the net.
template <typename Dtype>
int Net<Dtype>::AppendBottom(const NetParameter& param, const int layer_id,
    const int bottom_id, set<string>* available_blobs,
    map<string, int>* blob_name_to_idx) {
  const LayerParameter& layer_param = param.layer(layer_id);
  const string& blob_name = layer_param.bottom(bottom_id);
  if (available_blobs->find(blob_name) == available_blobs->end()) {
    LOG(FATAL) << "Unknown bottom blob '" << blob_name << "' (layer '"
               << layer_param.name() << "', bottom index " << bottom_id << ")";
  }
  const int blob_id = (*blob_name_to_idx)[blob_name];
  LOG_IF(INFO, Caffe::root_solver())
      << layer_names_[layer_id] << " <- " << blob_name;
  bottom_vecs_[layer_id].push_back(blobs_[blob_id].get());
  //use shared_ptr::get() to fetch the raw pointer of the blob stored in blobs_
  bottom_id_vecs_[layer_id].push_back(blob_id);
  available_blobs->erase(blob_name);
  bool need_backward = blob_need_backward_[blob_id];
  // Check if the backpropagation on bottom_id should be skipped
  if (layer_param.propagate_down_size() > 0) {
    need_backward = layer_param.propagate_down(bottom_id);
    //if propagate_down is true, this bottom takes part in backpropagation; otherwise BP is skipped for it
  }  
  bottom_need_backward_[layer_id].push_back(need_backward);
  return blob_id;
}

Appendix 2: AppendTop
// Helper for Net::Init: add a new top blob to the net.
template <typename Dtype>
void Net<Dtype>::AppendTop(const NetParameter& param, const int layer_id,
                           const int top_id, set<string>* available_blobs,
                           map<string, int>* blob_name_to_idx) {
  shared_ptr<LayerParameter> layer_param( 
      new LayerParameter(param.layer(layer_id)));
      //param.layer(layer_id): the LayerParameter of the layer_id-th layer
  const string& blob_name = (layer_param->top_size() > top_id) ?
      layer_param->top(top_id) : "(automatic)";
  // Check if we are doing in-place computation
  if (blob_name_to_idx && layer_param->bottom_size() > top_id &&
      blob_name == layer_param->bottom(top_id)) {
    // In-place computation
    LOG_IF(INFO, Caffe::root_solver())
        << layer_param->name() << " -> " << blob_name << " (in-place)";
    top_vecs_[layer_id].push_back(blobs_[(*blob_name_to_idx)[blob_name]].get());
    top_id_vecs_[layer_id].push_back((*blob_name_to_idx)[blob_name]);
  } else if (blob_name_to_idx &&
             blob_name_to_idx->find(blob_name) != blob_name_to_idx->end()) {
    // If we are not doing in-place computation but have duplicated blobs,
    // raise an error.
    LOG(FATAL) << "Top blob '" << blob_name
               << "' produced by multiple sources.";
  } else {
    // Normal output.
    if (Caffe::root_solver()) {
      LOG(INFO) << layer_param->name() << " -> " << blob_name;
      //layer_param->name() is the layer name; blob_name is the name of the top (or bottom) blob
    }
    shared_ptr<Blob<Dtype> > blob_pointer(new Blob<Dtype>());
    //new a Blob and wrap it in a shared_ptr (blob_pointer)
    const int blob_id = blobs_.size();
    blobs_.push_back(blob_pointer);
    //blobs_ starts as an empty vector (length 0, capacity 0);
    //after pushing blob_pointer it becomes a vector of length 1, capacity 1 = {{px =
    //0x6af420, pn = {pi_ = 0x6af480}}}
    //in short, blobs_ is just a vector of smart pointers to Blob objects, and the
    //shared_ptr to the freshly created blob is appended to it
    blob_names_.push_back(blob_name);
    blob_need_backward_.push_back(false);
    if (blob_name_to_idx) { (*blob_name_to_idx)[blob_name] = blob_id; }
    //*blob_name_to_idx= std::map with 1 elements = {["data"] = 0}
/*
blob_name_to_idx is a local variable; it acts as a bridge between the top blobs of the current layer and the bottom blobs of the next layer.
Its (name, id) pairs are inserted layer by layer as the net is built, and both are unique: the name is the map key (uniqueness is enforced by the map), and the ids are simply 0, 1, 2, ...
Like blobs_, blob_name_to_idx is updated every time a top blob is handled in the "Normal output" case. Reference: http://www.itdaan.com/blog/2016/03/26/726330.html
*/
    /// top_vecs stores the vectors containing the output for each layer
    //vector<vector<Blob<Dtype>*> > top_vecs_;
    //vector<vector<int> > top_id_vecs_;
    top_id_vecs_[layer_id].push_back(blob_id);
    top_vecs_[layer_id].push_back(blob_pointer.get());
  }
  if (available_blobs) { available_blobs->insert(blob_name); }
}
/*
Summary: AppendTop does the following:
1. news a Blob and wraps it in a shared_ptr;
2. pushes the blob pointer, its name, etc. into blobs_ and the related vectors;
3. updates the map blob_name_to_idx and the set available_blobs.
At this stage the net is only being wired up; no data is processed yet, only the framework is assembled.
*/
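
A toy illustration (plain C++, not Caffe code) of the bookkeeping just summarized: every new top blob gets the next id in blob_name_to_idx and becomes available as a bottom for later layers, which AppendBottom later erases.

#include <map>
#include <set>
#include <string>

void ToyAppendTop(const std::string& blob_name,
                  std::map<std::string, int>* blob_name_to_idx,
                  std::set<std::string>* available_blobs) {
  const int blob_id = static_cast<int>(blob_name_to_idx->size());  // ids run 0, 1, 2, ...
  (*blob_name_to_idx)[blob_name] = blob_id;   // bridge: name -> id for later bottoms
  available_blobs->insert(blob_name);         // this blob may now feed a later layer
}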

Appendix 3: AppendParam
    template <typename Dtype>  
    void Net<Dtype>::AppendParam(const NetParameter& param, const int layer_id,  
                                 const int param_id) {  
      const LayerParameter& layer_param = layers_[layer_id]->layer_param();//Layer::layer_param() returns the layer's LayerParameter member  
      const int param_size = layer_param.param_size();  
      string param_name =  
          (param_size > param_id) ? layer_param.param(param_id).name() : "";  
      if (param_name.size()) {  
        param_display_names_.push_back(param_name);//vector<string> param_display_names_: param_name is the name member of the ParamSpec; if a non-empty name is given it is pushed here, otherwise the param_id is pushed below  
      } else {  
        ostringstream param_display_name;  
        param_display_name << param_id;  
        param_display_names_.push_back(param_display_name.str());  
      }  
      //Append the parameter blob. net_param_id and param_id_vecs_ are updated on every iteration.  
      const int net_param_id = params_.size();//vector<shared_ptr<Blob<Dtype> > > params_ holds ALL parameters of the network; net_param_id is this parameter's network-wide id, regardless of whether it has a non-empty name or takes part in sharing  
      params_.push_back(layers_[layer_id]->blobs()[param_id]);//push the current layer's current parameter blob into params_  
      param_id_vecs_[layer_id].push_back(net_param_id);//per-layer view of the network's parameters; the stored values are indices into params_  
      param_layer_indices_.push_back(make_pair(layer_id, param_id));//vector<pair<int, int> > param_layer_indices_; each element is the (layer_id, param_id) pair of a parameter  
      //Fetch the ParamSpec for this param_id; if param_id >= param_size, fall back to default_param_spec. Note that param_size <= num_param_blobs.  
      ParamSpec default_param_spec;  
      const ParamSpec* param_spec = (layer_param.param_size() > param_id) ?  
          &layer_param.param(param_id) : &default_param_spec;  
      if (!param_size || !param_name.size() || (param_name.size() &&  
          param_names_index_.find(param_name) == param_names_index_.end())) {  
        // This layer "owns" this parameter blob -- it is either anonymous  
        // (i.e., not given a param_name) or explicitly given a name that we  
        // haven't already seen.  
        // Conversely, if param_name is non-empty AND can already be found in param_names_index_, the parameter already exists in some earlier layer(s), i.e. it is shared among multiple layers.  
        // As the comment on name in message ParamSpec (caffe.proto) says: "To share a parameter between two layers, give it a (non-empty) name" -- a parameter shared by several layers therefore has a non-empty name.  
        param_owners_.push_back(-1);//vector<int> param_owners_ records each parameter's "owner"; -1 means the current layer owns this parameter itself  
        //register param_name  
        if (param_name.size()) {  
          //map<string, int> param_names_index_ maps the network's non-empty parameter names to their indices.  
          //The name here is the ParamSpec name, and since "To share a parameter between two layers, give it a (non-empty) name", the map effectively stores pairs <shareable parameter name, its index>.  
          param_names_index_[param_name] = net_param_id;//although net_param_id changes on every iteration, it is only recorded here when param_name is non-empty  
        }  
        //register the learnable parameter  
        const int learnable_param_id = learnable_params_.size();//vector<Blob<Dtype>*> learnable_params_  
        learnable_params_.push_back(params_[net_param_id].get());//push the learnable parameter (the Layer template stores its learnable parameters in its blobs_ member), then push learnable_param_id below  
        learnable_param_ids_.push_back(learnable_param_id);//vector<int> learnable_param_ids_  
        has_params_lr_.push_back(param_spec->has_lr_mult());//vector<bool> has_params_lr_  
        has_params_decay_.push_back(param_spec->has_decay_mult());  
        params_lr_.push_back(param_spec->lr_mult());//vector<float> params_lr_  
        params_weight_decay_.push_back(param_spec->decay_mult());  
      } else {  
        // Named param blob with name we've seen before: share params  
        const int owner_net_param_id = param_names_index_[param_name];//since "To share a parameter between two layers, give it a (non-empty) name", this line looks up the net_param_id of the shared parameter's "owner"  
        param_owners_.push_back(owner_net_param_id);//vector<int> param_owners_  
        const pair<int, int>& owner_index =  
            param_layer_indices_[owner_net_param_id];//the <layer_id, param_id> pair of the owner, i.e. of the shared parameter with this non-empty name  
        const int owner_layer_id = owner_index.first;  
        const int owner_param_id = owner_index.second;  
        LOG_IF(INFO, Caffe::root_solver()) << "Sharing parameters '" << param_name  
            << "' owned by "  
            << "layer '" << layer_names_[owner_layer_id] << "', param "  
            << "index " << owner_param_id;  
        Blob<Dtype>* this_blob = layers_[layer_id]->blobs()[param_id].get();//the current layer's current parameter blob  
        Blob<Dtype>* owner_blob =  
            layers_[owner_layer_id]->blobs()[owner_param_id].get();//the corresponding parameter blob of the owner layer  
        const int param_size = layer_param.param_size();  
        if (param_size > param_id && (layer_param.param(param_id).share_mode() ==  
                                      ParamSpec_DimCheckMode_PERMISSIVE)) {  
          // Permissive dimension checking -- only check counts are the same.  
          CHECK_EQ(this_blob->count(), owner_blob->count())  
              << "Cannot share param '" << param_name << "' owned by layer '"  
              << layer_names_[owner_layer_id] << "' with layer '"  
              << layer_names_[layer_id] << "'; count mismatch.  Owner layer param "  
              << "shape is " << owner_blob->shape_string() << "; sharing layer "  
              << "shape is " << this_blob->shape_string();  
        } else {  
          // Strict dimension checking -- all dims must be the same.  
          CHECK(this_blob->shape() == owner_blob->shape())  
              << "Cannot share param '" << param_name << "' owned by layer '"  
              << layer_names_[owner_layer_id] << "' with layer '"  
              << layer_names_[layer_id] << "'; shape mismatch.  Owner layer param "  
              << "shape is " << owner_blob->shape_string() << "; sharing layer "  
              << "expects shape " << this_blob->shape_string();  
        }  
        //Take the owner layer's learnable_param_id and push it into this layer's learnable_param_ids_.  
        //Note that the parameter blob itself is NOT pushed into learnable_params_ again (only the id is recorded), so the shared parameter blob is not duplicated between the owner and the sharing layer.  
        const int learnable_param_id = learnable_param_ids_[owner_net_param_id];//vector<int> learnable_param_ids_ ; vector<float> params_lr_;  
        learnable_param_ids_.push_back(learnable_param_id);  
        if (param_spec->has_lr_mult()) {  
          if (has_params_lr_[learnable_param_id]) {  
            CHECK_EQ(param_spec->lr_mult(), params_lr_[learnable_param_id])  
                << "Shared param '" << param_name << "' has mismatched lr_mult.";  
          } else {  
            has_params_lr_[learnable_param_id] = true;  
            params_lr_[learnable_param_id] = param_spec->lr_mult();  
          }  
        }  
        if (param_spec->has_decay_mult()) {  
          if (has_params_decay_[learnable_param_id]) {  
            CHECK_EQ(param_spec->decay_mult(),  
                     params_weight_decay_[learnable_param_id])  
                << "Shared param '" << param_name << "' has mismatched decay_mult.";  
          } else {  
            has_params_decay_[learnable_param_id] = true;  
            params_weight_decay_[learnable_param_id] = param_spec->decay_mult();  
          }  
        }  
      }  
    }  
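
At the blob level, "sharing" a named parameter amounts to the aliasing below (hedged sketch; Net::ShareWeights() performs the equivalent calls for every parameter whose owner is another layer):

#include "caffe/blob.hpp"

void ShareExample() {
  caffe::Blob<float> owner(1, 1, 5, 5);
  caffe::Blob<float> sharer(1, 1, 5, 5);
  sharer.ShareData(owner);  // sharer's data now aliases owner's data
  sharer.ShareDiff(owner);  // gradients accumulate in a single diff buffer
}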
    PS: this analysis draws on http://blog.csdn.net/iamzhangzhuping/article/details/50537240


