Caffe C++ in Practice: Extracting Features from a Face Image with a Trained Model (Single Image)

I recently started reading the Caffe 1.x source code. Many thanks to Zhao Yongke's book "Deep Learning: 21 Days of Practical Caffe" (《深度学习:21天实战Caffe》) for the guidance, together with the many insightful posts of others online. After collecting and organizing this material, I hope that readers who, like me, came to the topic late can pick up the essentials quickly and apply them to real projects. Every post in this Caffe practice series is built around a complete C++ example and doubles as my own study notes.

1. Overall structure of Caffe

The Caffe framework is built around five components: Blob, Solver, Net, Layer, and Proto (the structure diagram from the original source is omitted here). Solver drives the training of a deep network; each Solver holds one training network object and one test network object. Each network consists of a number of Layers. The input and output feature maps of a Layer are represented as input Blobs and output Blobs. Blob is the structure in which Caffe actually stores data: an N-dimensional array, usually used in Caffe as a flattened 4-D array whose dimensions are the batch size N, the number of feature-map channels C, the feature-map height H, and the feature-map width W. Proto is based on Google's open-source Protocol Buffers project, an XML-like data exchange format: you only declare an object's data members in the schema, and serialization and deserialization are then available in many languages. Caffe uses it to define, store, and read network model structures [1].

The point of quoting this long passage is to have a clear picture of what each data structure means before reading the code. To summarize briefly: Blob holds the actual data (so the initial image input and the final output also live in Blobs); an input Blob and an output Blob are connected by a Layer; many Layers form a Net; and the Net is driven by a Solver, which computes losses and gradients to fine-tune and update the parameters.
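To make the NCHW layout concrete, here is a minimal sketch (not part of the original post) of how a Blob's shape and flat memory offset are queried; Blob<float>, offset(), count() and shape_string() are standard parts of the Caffe Blob API.

#include <iostream>
#include <caffe/blob.hpp>

int main() {
  // N=2, C=3, H=4, W=5: one flat array of 2*3*4*5 = 120 floats in NCHW order.
  caffe::Blob<float> blob(2, 3, 4, 5);
  float* data = blob.mutable_cpu_data();
  // offset(n, c, h, w) maps the 4-D index to the flat index ((n*C + c)*H + h)*W + w.
  data[blob.offset(1, 2, 3, 4)] = 1.0f;
  std::cout << "count = " << blob.count()           // 120
            << ", shape = " << blob.shape_string()  // "2 3 4 5 (120)"
            << std::endl;
  return 0;
}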

2. A few notes on how the Caffe code is organized

Let us first recall how a neural network is processed: the input data goes through a forward pass (Forward); the output is compared with the labeled ground-truth samples to compute a loss (Loss); a backward pass (Backward) computes gradients; the weights are adjusted to reduce the loss; and once training converges the trained model is saved. Before reading on, you might ask yourself: if you had to design this yourself, what data relationships would you set up? A small sketch of this cycle in Caffe terms follows.
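As a rough sketch only (assuming a net object has already been constructed, as shown later in this post), one iteration of that cycle looks like this in Caffe terms:

float loss = 0;
net->Forward(&loss);   // forward pass through every layer; accumulates the loss
net->Backward();       // backward pass; fills each blob's diff with gradients
// a Solver (e.g. SGD) then uses those diffs to update the learnable blobs,
// and the loop repeats until the loss converges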

Now let us look at the (Google-style) structure of the Caffe source. Clearly we need a data class to hold the data; Caffe defines the N-dimensional Blob for this, which I will not dwell on here. Just as clearly, the forward and backward passes decompose into per-Layer computations. The Layer constructor is shown below: the result is that a Layer object is bound to a set of parameter Blobs (typically the layer's weights and bias).

explicit Layer(const LayerParameter& param)
    : layer_param_(param) {
      // Set phase and copy blobs (if there are any).
      phase_ = param.phase();// set the phase: TRAIN or TEST
      if (layer_param_.blobs_size() > 0) {
        blobs_.resize(layer_param_.blobs_size());
        for (int i = 0; i < layer_param_.blobs_size(); ++i) {
          blobs_[i].reset(new Blob<Dtype>());// create the Blob objects holding this layer's parameters
          blobs_[i]->FromProto(layer_param_.blobs(i));
        }
      }
    }

The key forward-pass method is declared and defined in layer.hpp. A layer receives its bottom (input) Blobs and its top (output) Blobs; note that both are passed by reference (as vectors of pointers). Depending on the mode, it dispatches to Forward_cpu(bottom, top) or Forward_gpu(bottom, top). Forward_cpu is a pure virtual function that every concrete layer must implement; Forward_gpu has a default implementation that simply falls back to Forward_cpu.

// Declaration:
inline Dtype Forward(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
// Definition:
template <typename Dtype>
inline Dtype Layer<Dtype>::Forward(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) 
{
  Dtype loss = 0;// accumulated loss value
  Reshape(bottom, top);
  switch (Caffe::mode()) 
  {
  case Caffe::CPU:
    Forward_cpu(bottom, top);
    for (int top_id = 0; top_id < top.size(); ++top_id)
    {
      if (!this->loss(top_id)) { continue; }
      const int count = top[top_id]->count();
      const Dtype* data = top[top_id]->cpu_data();
      const Dtype* loss_weights = top[top_id]->cpu_diff();
      loss += caffe_cpu_dot(count, data, loss_weights);
    }
    break;
  case Caffe::GPU:
    Forward_gpu(bottom, top);
#ifndef CPU_ONLY
    for (int top_id = 0; top_id < top.size(); ++top_id)
    {
      if (!this->loss(top_id)) { continue; }
      const int count = top[top_id]->count();
      const Dtype* data = top[top_id]->gpu_data();
      const Dtype* loss_weights = top[top_id]->gpu_diff();
      Dtype blob_loss = 0;
      caffe_gpu_dot(count, data, loss_weights, &blob_loss);
      loss += blob_loss;
    }
#endif
    break;
  default:
    LOG(FATAL) << "Unknown caffe mode.";
  }
  return loss;
}

// Pure virtual: each concrete layer must implement the CPU forward pass in its subclass.
virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) = 0;

virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) 
{
    // LOG(WARNING) << "Using CPU code as backup.";
    return Forward_cpu(bottom, top);
 }
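To see what "implemented in a subclass" looks like, here is a hedged sketch of a deliberately trivial, hypothetical layer (PlusOneLayer is not an actual Caffe layer) that fills in the pure-virtual CPU hooks; real layers additionally register themselves with the layer factory and usually provide GPU versions.

#include <vector>
#include <caffe/blob.hpp>
#include <caffe/layer.hpp>

template <typename Dtype>
class PlusOneLayer : public caffe::Layer<Dtype> {
 public:
  explicit PlusOneLayer(const caffe::LayerParameter& param)
      : caffe::Layer<Dtype>(param) {}
  virtual inline const char* type() const { return "PlusOne"; }
  // The top blob has the same shape as the bottom blob.
  virtual void Reshape(const std::vector<caffe::Blob<Dtype>*>& bottom,
                       const std::vector<caffe::Blob<Dtype>*>& top) {
    top[0]->ReshapeLike(*bottom[0]);
  }

 protected:
  // Forward: top = bottom + 1, element by element.
  virtual void Forward_cpu(const std::vector<caffe::Blob<Dtype>*>& bottom,
                           const std::vector<caffe::Blob<Dtype>*>& top) {
    const Dtype* in = bottom[0]->cpu_data();
    Dtype* out = top[0]->mutable_cpu_data();
    for (int i = 0; i < bottom[0]->count(); ++i) { out[i] = in[i] + Dtype(1); }
  }
  // Backward: d(x + 1)/dx = 1, so the bottom diff is just a copy of the top diff.
  virtual void Backward_cpu(const std::vector<caffe::Blob<Dtype>*>& top,
                            const std::vector<bool>& propagate_down,
                            const std::vector<caffe::Blob<Dtype>*>& bottom) {
    if (!propagate_down[0]) { return; }
    const Dtype* top_diff = top[0]->cpu_diff();
    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();
    for (int i = 0; i < top[0]->count(); ++i) { bottom_diff[i] = top_diff[i]; }
  }
};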

The code above only handles the forward and backward computation of a single layer. How are the layers chained together? That is the job of Net. Opening net.hpp, let us look at one of its constructors. During initialization the net calls the FilterNet function in net.cpp, which applies each layer's NetStateRule: by comparing the rule against the actual NetState it decides whether the layer is included or excluded. The Init function then builds the network from the filtered definition.
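As a concrete (and hypothetical) illustration of such a rule, a layer in a prototxt can restrict itself to the TEST phase like this; the layer name and blob names below are made up for illustration:

layer {
  name: "accuracy"         # hypothetical layer name
  type: "Accuracy"
  bottom: "fc1"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }  # FilterNet keeps this layer only when NetState.phase == TEST
}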

/*
See the definitions in caffe.proto:
// The current state of a network
message NetState 
{
  optional Phase phase = 1 [default = TEST];
  optional int32 level = 2 [default = 0];
  repeated string stage = 3; // the stage strings are matched against NetStateRule, defined below:
}

//NetStateRule describes a rule, set in a layer definition, that decides whether the layer is added to the network
message NetStateRule 
{
  //Phase is an enum taking values {TRAIN, TEST}, the two phases of a network (training and testing)
  optional Phase phase = 1;

  //The StateMeetsRule function in net.cpp checks whether a NetState satisfies a NetStateRule;
  //only layers whose rules are satisfied are included in the current network,
  //hence the minimum and maximum level defined here:
  optional int32 min_level = 2;
  optional int32 max_level = 3;

  // Customizable sets of stages to include and exclude:
  // the NetState's stage strings must contain every stage listed here and none of the not_stage entries,
  // i.e. stages to require go in stage, stages to exclude from the current network go in not_stage
  repeated string stage = 4;
  repeated string not_stage = 5;
}

*/
template <typename Dtype>
Net<Dtype>::Net(const string& param_file, Phase phase,
    const int level, const vector<string>* stages) 
	{
  NetParameter param;
  ReadNetParamsFromTextFileOrDie(param_file, &param);
  // Set phase, stages and level
  param.mutable_state()->set_phase(phase);
  if (stages != NULL) 
  {
    for (int i = 0; i < stages->size(); i++) 
	{
      param.mutable_state()->add_stage((*stages)[i]);
    }
  }
  param.mutable_state()->set_level(level);
  Init(param);
}

template <typename Dtype>
void Net<Dtype>::Init(const NetParameter& in_param) 
{
  // Set phase from the state.
  phase_ = in_param.state().phase();
  // Filter layers according to their include/exclude rules and the current NetState.
  NetParameter filtered_param;
  FilterNet(in_param, &filtered_param);// keep only the layers whose rules are satisfied
  LOG_IF(INFO, Caffe::root_solver())
      << "Initializing net from parameters: " << std::endl
      << filtered_param.DebugString();
  // Create a copy of filtered_param with splits added where necessary.
  NetParameter param;
  /*
  * Call InsertSplits(): when one bottom-layer output blob feeds several higher layers,
  * a split layer is inserted, producing a new network definition.
  **/
  InsertSplits(filtered_param, &param);

  // Basically, build all the layers and set up their connections.
  /*
  * The part above only determines, from the *.prototxt file, the network name and how the
  * blob names connect. The part below creates the layers and the blobs between them:
  * AppendTop() instantiates the intermediate blobs, and layer->SetUp() allocates their memory.
  **/
  name_ = param.name();
  map<string, int> blob_name_to_idx;
  set<string> available_blobs;
  memory_used_ = 0;
  // For each layer, set up its input and output
  bottom_vecs_.resize(param.layer_size());// pointers to each layer's input (bottom) blobs
  top_vecs_.resize(param.layer_size());// pointers to each layer's output (top) blobs
  bottom_id_vecs_.resize(param.layer_size());// ids of each layer's input (bottom) blobs
  param_id_vecs_.resize(param.layer_size());// ids of each layer's parameter blobs
  top_id_vecs_.resize(param.layer_size());// ids of each layer's output (top) blobs
  bottom_need_backward_.resize(param.layer_size());// whether each bottom blob needs backpropagation

  /*
    1. Initialize the bottom blobs: associate the addresses in bottom_vecs_ with blobs_[blob_id],
       and bottom_id_vecs_ with blob_id_;
    2. A data input layer only has top blobs and no bottom blobs, so it skips that loop.
  **/

  for (int layer_id = 0; layer_id < param.layer_size(); ++layer_id) 
  {
    // Inherit phase from net if unset.
    if (!param.layer(layer_id).has_phase()) 
	{
      param.mutable_layer(layer_id)->set_phase(phase_);
    }
    // Setup layer.
    const LayerParameter& layer_param = param.layer(layer_id);
    if (layer_param.propagate_down_size() > 0) 
	{
      CHECK_EQ(layer_param.propagate_down_size(),
          layer_param.bottom_size())
          << "propagate_down param must be specified "
          << "either 0 or bottom_size times ";
    }
    
    /*
     * Convert the current layer's parameters into a shared_ptr<Layer<Dtype>>, create the
     * concrete layer, and push it onto layers_.
     **/
    layers_.push_back(LayerRegistry<Dtype>::CreateLayer(layer_param));
    layer_names_.push_back(layer_param.name());
    LOG_IF(INFO, Caffe::root_solver())
        << "Creating Layer " << layer_param.name();
    bool need_backward = false;

    /*
      1. Initialize the bottom blobs: associate the addresses in bottom_vecs_ with blobs_[blob_id],
         and bottom_id_vecs_ with blob_id_;
      2. A data input layer only has top blobs and no bottom blobs, so it skips this loop.
    **/
    for (int bottom_id = 0; bottom_id < layer_param.bottom_size();
         ++bottom_id) {
      /*
        1. In a net, bottoms and tops are initialized alternately: the previous layer's top is the
           next layer's bottom, so the available_blobs/blob_name_to_idx entries produced for the
           previous layer's tops become the next layer's bottom parameters.
        2. AppendBottom associates bottom_vecs_ with blobs_[id] and bottom_id_vecs_ with blob_id_.
      **/
      const int blob_id = AppendBottom(param, layer_id, bottom_id,
                                       &available_blobs, &blob_name_to_idx);
      // If a blob needs backward, this layer should provide it.
      //blob_need_backward_[blob_id] is the value propagated from the previous layer's top blob,
      //OR-ed with the current layer's bottom_need_backward_[layer_id][bottom_id];
      //need_backward is the final verdict on whether this layer does backward computation:
      //it is the OR of all blob_need_backward_ and param_need_backward values.
      need_backward |= blob_need_backward_[blob_id];
    }
    int num_top = layer_param.top_size();
	
    /*
      Initialize the top blobs: associate the addresses in top_vecs_ with blobs_[blob_id],
      and top_id_vecs_ with blob_id_; AppendTop also creates the new blobs.
    **/
    for (int top_id = 0; top_id < num_top; ++top_id) {
      //Through AppendTop and AppendBottom, bottom_vecs_ and top_vecs_ are linked together.
      //AppendTop adds a layer's output blobs to available_blobs, and AppendBottom removes the
      //previous layer's output blobs from it, so after all layers are traversed what remains
      //are the output blobs of the whole net.
      AppendTop(param, layer_id, top_id, &available_blobs, &blob_name_to_idx);
      // Collect Input layer tops as Net inputs.
      if (layer_param.type() == "Input") {
        //For the net's input layer, every top blob created by AppendTop increases blobs_.size()
        //by 1; since blobs_ grows from 0, blobs_.size() - 1 is the id of this net input blob.
        const int blob_id = blobs_.size() - 1;
        net_input_blob_indices_.push_back(blob_id);
        net_input_blobs_.push_back(blobs_[blob_id].get());
      }
    }
    // If the layer specifies that AutoTopBlobs() -> true and the LayerParameter
    // specified fewer than the required number (as specified by
    // ExactNumTopBlobs() or MinTopBlobs()), allocate them here.
    Layer<Dtype>* layer = layers_[layer_id].get();
	
    //Add top blobs until this layer has the required number of them.
    if (layer->AutoTopBlobs()) {
      const int needed_num_top =
          std::max(layer->MinTopBlobs(), layer->ExactNumTopBlobs());

      //Only when the layer's existing number of top blobs (num_top) is smaller than the number
      //required by its parameters (needed_num_top) do we auto-generate blobs to fill the gap.
      for (; num_top < needed_num_top; ++num_top) {
        // Add "anonymous" top blobs -- do not modify available_blobs or
        // blob_name_to_idx as we don't want these blobs to be usable as input
        // to other layers.
        AppendTop(param, layer_id, num_top, NULL, NULL);
      }
    }	
	
    /*
      After this layer is connected, set it up.
      This initializes the shape of every top blob (the bottom blobs were already associated
      with existing blobs above, so they need no reshaping here).
    **/
    if (share_from_root) {
      // Set up size of top blobs using root_net_
      const vector<Blob<Dtype>*>& base_top = root_net_->top_vecs_[layer_id];
      const vector<Blob<Dtype>*>& this_top = this->top_vecs_[layer_id];
      for (int top_id = 0; top_id < base_top.size(); ++top_id) {
        this_top[top_id]->ReshapeLike(*base_top[top_id]);
        LOG(INFO) << "Created top blob " << top_id << " (shape: "
            << this_top[top_id]->shape_string() <<  ") for shared layer "
            << layer_param.name();
      }
    } else { 
      //The caffe::root_solver, and any non-shared layer of a non-root solver, take this branch.
      layers_[layer_id]->SetUp(bottom_vecs_[layer_id], top_vecs_[layer_id]);
    }
    LOG_IF(INFO, Caffe::root_solver())
        << "Setting up " << layer_names_[layer_id];	
	
    /*
     Initialize blob_loss_weights_, which stores the loss weights. It covers the top blobs of
     every layer, but only the final loss output layer has non-zero values.
    **/
    for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {
      if (blob_loss_weights_.size() <= top_id_vecs_[layer_id][top_id]) {
 
        //top_id_vecs_[layer_id][top_id] is the id in the whole net's ordering (not the per-layer
        //index starting at 0); every top blob has a blob_loss_weights_[id] entry holding its loss
        //weight, which is 0 for everything except the net's final output blobs.
        blob_loss_weights_.resize(top_id_vecs_[layer_id][top_id] + 1, Dtype(0));
      }
      blob_loss_weights_[top_id_vecs_[layer_id][top_id]] = layer->loss(top_id);
      LOG_IF(INFO, Caffe::root_solver())
          << "Top shape: " << top_vecs_[layer_id][top_id]->shape_string();
      if (layer->loss(top_id)) {
        LOG_IF(INFO, Caffe::root_solver())
            << "    with loss weight " << layer->loss(top_id);
      }
      memory_used_ += top_vecs_[layer_id][top_id]->count();
    }
    LOG_IF(INFO, Caffe::root_solver())
        << "Memory required for data: " << memory_used_ * sizeof(Dtype);	
	
    /*
      Initialize the parameters: typically the weights live in one blob and the bias in another.
      This layer's param_need_backward (taken from the LayerParameter) and its
      blob_need_backward_ together determine the layer's need_backward, which in turn
      determines layer_need_backward_ for this layer.
    **/
    //Number of parameters already defined in the LayerParameter (may be fewer than the actual number)
    const int param_size = layer_param.param_size();
    //Actual number of parameter blobs of this layer
    const int num_param_blobs = layers_[layer_id]->blobs().size();
    CHECK_LE(param_size, num_param_blobs)
        << "Too many params specified for layer " << layer_param.name();
    ParamSpec default_param_spec;
    for (int param_id = 0; param_id < num_param_blobs; ++param_id) {
      const ParamSpec* param_spec = (param_id < param_size) ?
          &layer_param.param(param_id) : &default_param_spec;
      //lr_mult is the learning-rate multiplier; a value of 0 means this parameter blob is fixed (no backward)
      const bool param_need_backward = param_spec->lr_mult() != 0; 
      //need_backward is the final verdict on whether this layer does backward computation:
      //it is the OR of all blob_need_backward_ and param_need_backward values.
      need_backward |= param_need_backward;
      layers_[layer_id]->set_param_propagate_down(param_id,
                                                  param_need_backward);
    }
	
    //A layer usually has two parameter blobs: the first stores the weights, the second the bias.
    for (int param_id = 0; param_id < num_param_blobs; ++param_id) {
      AppendParam(param, layer_id, param_id);
    }
    // Finally, set the backward flag
    //As soon as any bottom blob or any param blob of this layer supports backward,
    //need_backward is true.
    layer_need_backward_.push_back(need_backward);
    if (need_backward) {
      //Once this layer supports backward, all of its top blobs must support backward too.
      for (int top_id = 0; top_id < top_id_vecs_[layer_id].size(); ++top_id) {
        //Set blob_need_backward_ to true for the corresponding top blob id; this result is
        //passed on to the next layer's bottom blob.
        blob_need_backward_[top_id_vecs_[layer_id][top_id]] = true;
      }
    }
  }
  // Go through the net backwards to determine which blobs contribute to the
  // loss.  We can skip backward computation for blobs that don't contribute
  // to the loss.
  // Also checks if all bottom blobs don't need backward computation (possible
  // because of the skip_propagate_down param) and so we can skip backward
  // computation for the entire layer
  set<string> blobs_under_loss;
  set<string> blobs_skip_backp;
  
  //Loop over the layers in reverse and mark the layers and bottom blobs that do not need backward computation.
  for (int layer_id = layers_.size() - 1; layer_id >= 0; --layer_id) {
    bool layer_contributes_loss = false;
    bool layer_skip_propagate_down = true;
	
    ///
    //Loop over this layer's top blobs to decide whether the layer contributes to the loss
    //and whether it needs backward computation.
    ///
    for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {
      const string& blob_name = blob_names_[top_id_vecs_[layer_id][top_id]];
 
      //If the current layer is a final output layer, or the current top blob contributes to the
      //final loss, set layer_contributes_loss to true. The true value originates at the net's
      //final output layer; for every other layer it is derived by checking whether one of its
      //top blobs appears in blobs_under_loss, which was populated when the layer above
      //processed its bottom blobs.
      if (layers_[layer_id]->loss(top_id) ||
          (blobs_under_loss.find(blob_name) != blobs_under_loss.end())) {
        layer_contributes_loss = true;
      }
      if (blobs_skip_backp.find(blob_name) == blobs_skip_backp.end()) {
        layer_skip_propagate_down = false;
      }
	  
      //As soon as one blob in the layer contributes to the loss and one blob needs backward,
      //the layer's two flags have their final values and we can leave the loop early.
      if (layer_contributes_loss && !layer_skip_propagate_down)
        break;
    }
    // If this layer can skip backward computation, also all his bottom blobs
    // don't need backpropagation	
    //If both conditions in the if below hold they contradict each other, so this layer does no backward computation.
    if (layer_need_backward_[layer_id] && layer_skip_propagate_down) {
      layer_need_backward_[layer_id] = false;
      for (int bottom_id = 0; bottom_id < bottom_vecs_[layer_id].size();
               ++bottom_id) {
        bottom_need_backward_[layer_id][bottom_id] = false;
      }
    }
    if (!layer_contributes_loss) { layer_need_backward_[layer_id] = false; }
    if (Caffe::root_solver()) {
      if (layer_need_backward_[layer_id]) {
        LOG(INFO) << layer_names_[layer_id] << " needs backward computation.";
      } else {
        LOG(INFO) << layer_names_[layer_id]
            << " does not need backward computation.";
      }
    }
    for (int bottom_id = 0; bottom_id < bottom_vecs_[layer_id].size();
         ++bottom_id) {
      if (layer_contributes_loss) {
        const string& blob_name =
            blob_names_[bottom_id_vecs_[layer_id][bottom_id]];
        //record the blob in blobs_under_loss
        blobs_under_loss.insert(blob_name);
      } else {
        //If the layer does not contribute to the loss, it needs no backward computation.
        bottom_need_backward_[layer_id][bottom_id] = false; 
      }
      if (!bottom_need_backward_[layer_id][bottom_id]) {
        const string& blob_name =
                   blob_names_[bottom_id_vecs_[layer_id][bottom_id]];
        blobs_skip_backp.insert(blob_name);
      }
    }
  }
  
  
  //If the net requires force_backward, set layer_need_backward_ to true for every layer;
  //blob_need_backward_ is then determined by layers_[layer_id]->AllowForceBackward(bottom_id).
 
  if (param.force_backward()) {
    for (int layer_id = 0; layer_id < layers_.size(); ++layer_id) {
      layer_need_backward_[layer_id] = true;
      for (int bottom_id = 0;
           bottom_id < bottom_need_backward_[layer_id].size(); ++bottom_id) {
        bottom_need_backward_[layer_id][bottom_id] =
            bottom_need_backward_[layer_id][bottom_id] ||
            layers_[layer_id]->AllowForceBackward(bottom_id);
        blob_need_backward_[bottom_id_vecs_[layer_id][bottom_id]] =
            blob_need_backward_[bottom_id_vecs_[layer_id][bottom_id]] ||
            bottom_need_backward_[layer_id][bottom_id];
      }
      for (int param_id = 0; param_id < layers_[layer_id]->blobs().size();
           ++param_id) {
        layers_[layer_id]->set_param_propagate_down(param_id, true);
      }
    }
  }
  // In the end, all remaining blobs are considered output blobs.
  //AppendBottom already removed the bottom blobs from available_blobs, so what remains are
  //the topmost top blobs, i.e. the net's output blobs.
  for (set<string>::iterator it = available_blobs.begin(); 
      it != available_blobs.end(); ++it) {
    LOG_IF(INFO, Caffe::root_solver())
        << "This network produces output " << *it;
    net_output_blobs_.push_back(blobs_[blob_name_to_idx[*it]].get());
    net_output_blob_indices_.push_back(blob_name_to_idx[*it]);
  }
  for (size_t blob_id = 0; blob_id < blob_names_.size(); ++blob_id) {
    blob_names_index_[blob_names_[blob_id]] = blob_id;
  }
  for (size_t layer_id = 0; layer_id < layer_names_.size(); ++layer_id) {
    layer_names_index_[layer_names_[layer_id]] = layer_id;
  }
  ShareWeights();
  debug_info_ = param.debug_info();
  LOG_IF(INFO, Caffe::root_solver()) << "Network initialization done.";
}

Once the network is built, how are the forward and backward passes actually run? It turns out Net also defines Forward(Dtype* loss = NULL) and Backward() methods, and looking closer we find that these functions simply call the forward and backward computations of each Layer in turn.

template <typename Dtype>
const vector<Blob<Dtype>*>& Net<Dtype>::Forward(Dtype* loss) {
  if (loss != NULL) {
    *loss = ForwardFromTo(0, layers_.size() - 1);
  } else {
    ForwardFromTo(0, layers_.size() - 1); // execution continues here when no loss pointer is given
  }
  return net_output_blobs_;
}

template <typename Dtype>
Dtype Net<Dtype>::ForwardFromTo(int start, int end) {
  CHECK_GE(start, 0);
  CHECK_LT(end, layers_.size());
  Dtype loss = 0;
  for (int i = start; i <= end; ++i) {
    for (int c = 0; c < before_forward_.size(); ++c) {
      before_forward_[c]->run(i);
    }
    //Here we are back at the per-layer forward computation defined earlier.
    Dtype layer_loss = layers_[i]->Forward(bottom_vecs_[i], top_vecs_[i]);
    loss += layer_loss;
    if (debug_info_) { ForwardDebugInfo(i); }
    for (int c = 0; c < after_forward_.size(); ++c) {
      after_forward_[c]->run(i);
    }
  }
  return loss;
}

In fact, as far as feature extraction goes, we are essentially done: extracting a feature is nothing more than running one forward pass on an image and taking the output of a chosen layer as the image's feature. In other words: build a Net, feed the image into the input Blob, run the Layer forward computations, and read out the data stored in the desired Blob. Thinking one step further, the Solver component trains the network in much the same way: it drives the Net through the layers' forward and backward passes and keeps updating the Blob data until convergence. So, naturally, the Solver class simply holds a Net object. The Solver will be analyzed in detail in a later post, so I will not go into it here.

//The Solver directly holds a net object
shared_ptr<Net<Dtype> > net_;
inline shared_ptr<Net<Dtype> > net() { return net_; }
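For completeness, a hedged sketch of how a Solver is typically created and run; the file name solver.prototxt is a placeholder, while ReadSolverParamsFromTextFileOrDie, SolverRegistry and Solve() are standard Caffe APIs:

#include <caffe/caffe.hpp>

int main() {
  caffe::SolverParameter solver_param;
  // Parse the solver configuration (learning rate, snapshot policy, path to the net prototxt, ...).
  caffe::ReadSolverParamsFromTextFileOrDie("solver.prototxt", &solver_param);
  // The registry instantiates the concrete solver type (SGD, Adam, ...) named in the configuration.
  boost::shared_ptr<caffe::Solver<float> >
      solver(caffe::SolverRegistry<float>::CreateSolver(solver_param));
  // Solve() repeatedly runs net->Forward() / net->Backward() and applies the parameter updates.
  solver->Solve();
  return 0;
}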

3. Feature extraction: the code

Right, let us restate the feature-extraction workflow:

Main steps of feature extraction:
1. Build the Net.
2. Convert the input image into the net's input Blob data structure.
3. Run the Layer forward computations once.
4. Read out the Blob data of the chosen layer: this is the feature.

The main method below follows exactly this workflow. Most of the code is adapted from the code that ships with Caffe, and it is worth reading extract_features.cpp carefully. Since Caffe makes heavy use of shared_ptr, readers unfamiliar with it may want to look up how shared_ptr works first.

/*
 * create by wangbaojia
 * 2018.10.9
 * email:wangbaojia_hrbeu@163.com
 *
 * */

#include <caffe/blob.hpp>
#include <caffe/layer.hpp>
#include <caffe/net.hpp>
#include <caffe/common.hpp>

#include <opencv2/opencv.hpp>

#include <caffeconfig.h>

#define  CPU_ONLY // use the CPU only

using namespace std;
using namespace caffe;
using namespace boost;
using namespace cv;

CaffeConfig caffeConfig; // provides the configuration values (model paths, layer name, image path)


//Wrap the input layer: bind one cv::Mat header per channel to the input Blob's memory
void WrapInputLayer(std::vector<cv::Mat>* input_channels,boost::shared_ptr<caffe::Net<float> > &net_)
{
  Blob<float>* input_layer = net_->input_blobs()[0];

  int width = input_layer->width();
  int height = input_layer->height();
  float* input_data = input_layer->mutable_cpu_data();
  for (int i = 0; i < input_layer->channels(); ++i)
  {
    cv::Mat channel(height, width, CV_32FC1, input_data);// the Mat header wraps the input_data pointer (no copy)
    input_channels->push_back(channel);
    input_data += width * height;
  }
}

//Prepare the input data (convert, resize, and split it into the wrapped channels)
void Preprocess(const cv::Mat& img,std::vector<cv::Mat>* input_channels, int num_channels_,
                cv::Size input_geometry_, boost::shared_ptr<caffe::Net<float> > &net_)
{
  /* Convert the input image to the input image format of the network. */
  cv::Mat sample;
  if (img.channels() == 3 && num_channels_ == 1)
    cv::cvtColor(img, sample, cv::COLOR_BGR2GRAY);
  else if (img.channels() == 4 && num_channels_ == 1)
    cv::cvtColor(img, sample, cv::COLOR_BGRA2GRAY);
  else if (img.channels() == 4 && num_channels_ == 3)
    cv::cvtColor(img, sample, cv::COLOR_BGRA2BGR);
  else if (img.channels() == 1 && num_channels_ == 3)
    cv::cvtColor(img, sample, cv::COLOR_GRAY2BGR);
  else
    sample = img;

  cv::Mat sample_resized;
  if (sample.size() != input_geometry_)
    cv::resize(sample, sample_resized, input_geometry_);
  else
    sample_resized = sample;

  cv::Mat sample_float;
  if (num_channels_ == 3)
    sample_resized.convertTo(sample_float, CV_32FC3);
  else
    sample_resized.convertTo(sample_float, CV_32FC1);

  //cv::Mat sample_normalized;
  //cv::subtract(sample_float, mean_, sample_normalized);

  /* This operation will write the separate BGR planes directly to the
   * input layer of the network because it is wrapped by the cv::Mat
   * objects in input_channels. */
  //cv::split(sample_normalized, *input_channels);
  cv::split(sample_float, *input_channels);// split the multi-channel Mat into the wrapped per-channel Mats

  CHECK(reinterpret_cast<float*>(input_channels->at(0).data)
        == net_->input_blobs()[0]->cpu_data())
    << "Input channels are not wrapping the input layer of the network.";
}


int main()
{

    Phase phase = TEST;// TEST phase: compute feature vectors with the already trained caffemodel
    #ifdef CPU_ONLY
        Caffe::set_mode(Caffe::CPU);
    #else
        Caffe::set_mode(Caffe::GPU);
    #endif
    boost::shared_ptr<caffe::Net<float>> feature_net; // the network
    feature_net.reset(new Net<float>(caffeConfig.getCaffePrototxt(),phase));// create the network object in TEST phase (reset() installs the new object in the shared_ptr)
    feature_net->CopyTrainedLayersFrom(caffeConfig.getCaffeModel());// load the network weights and biases from the .caffemodel file

    //Query the input shape that the model expects
    Blob<float> *input_layer = feature_net->input_blobs()[0];// input layer: index 0 is the input data blob
    int imageBatchNum = input_layer->num();
    int imageChannels = input_layer->channels();// number of channels
    int imageWidth = input_layer->width();// image width
    int imageHeight = input_layer->height();// image height
    cout<<"imageBatchNum="<<imageBatchNum<<"; imageChannels="<< imageChannels<< "; imageWidth="<<imageWidth<<"; imageHeight="<<imageHeight <<endl;

    //Reshape the input Blob for a single image, in N,C,H,W order
    input_layer->Reshape(1, imageChannels, imageHeight, imageWidth);
    /* Forward dimension change to all layers. */
    feature_net->Reshape();

    //The input image to be processed
    cv::Size modelImageSize = cv::Size(imageWidth,imageHeight);
    string imageInputPath = caffeConfig.getImagePath();
    Mat imageSrc = cv::imread(imageInputPath,-1);// load the image unchanged (keeps the alpha channel if present)
    if(imageSrc.empty())
    {
        printf("image load failed...\n");
        return -1;
    }
    //cv::cvtColor(imageSrc,imageSrc,COLOR_BGR2GRAY);
    //cv::resize(imageSrc,imageSrc,modelImageSize);// resize the loaded image to the model's input size

    //Set up the input data
    std::vector<cv::Mat> input_channels;
    WrapInputLayer(&input_channels, feature_net);
    Preprocess(imageSrc, &input_channels, imageChannels, modelImageSize,feature_net);

    //float *inputData = input_layer->mutable_cpu_data();// this would return a writable pointer to the input data

    /*
     * Forward pass, then read the chosen layer's blob data -------> the network's result
     * */
    feature_net->Forward();// run the forward pass

    /*
    //List the names of all blobs in the net
    vector<string> blobNameVec = feature_net->blob_names();
    for(int i=0; i<blobNameVec.size();i++)
    {
        cout<<"Blob #"<<i<<" : "<<blobNameVec[i]<<endl;

    }
    */
    //Choose the blob to read out: the second-to-last layer
    vector<string> feature_layer_name = caffeConfig.getFeatureLayerName();
    string FS_LAYER_NAME = feature_layer_name.at(0);
    if(!feature_net->has_blob(FS_LAYER_NAME))
    {
        printf("%s layer is not exist..\n",FS_LAYER_NAME.c_str());
        return -1;
    }

    long long  count = feature_net->blob_by_name(FS_LAYER_NAME)->count(1);// total number of blob elements for one image: c*h*w
    //Extract the feature values from the blob's data
    caffe::shared_ptr<Blob<float>> blobFeature = feature_net->blob_by_name(FS_LAYER_NAME);
    const float *data = blobFeature->cpu_data();// cpu_data() gives read-only access to the CPU data
    //const float* dataFeature = feature_net->blob_by_name(FS_LAYER_NAME)->cpu_data();

    vector<float> feature_value;// the feature vector
    feature_value.clear();
    feature_value.assign(data, data + count);

    for(int i = 0;i < feature_value.size(); i++ )
    {
        cout<<feature_value[i]<<" ";// print the feature vector
        if((i+1)%10 == 0 )
        {
            cout<<endl;
        }
    }
    cout<<endl<<"feature_value.size="<<feature_value.size()<<endl;

    return 0;

}

The neatest trick in this code is the WrapInputLayer() method: as its name suggests, it binds the network's input layer and the input image to the same memory. A small sketch of this aliasing follows, and after it the configuration class used above:
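A minimal check of that aliasing (a sketch only, assuming feature_net has already been set up as in main() above): a value written through one of the wrapped cv::Mat headers is immediately visible through the Blob's own data pointer, because no copy is made.

std::vector<cv::Mat> input_channels;
WrapInputLayer(&input_channels, feature_net);
input_channels[0].at<float>(0, 0) = 42.0f;                      // write through the wrapped Mat
CHECK_EQ(feature_net->input_blobs()[0]->cpu_data()[0], 42.0f);  // the same value is in the input Blob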

// caffeconfig.h (declaration of the configuration class)
#ifndef CAFFECONFIG_H
#define CAFFECONFIG_H

#include <iostream>
#include <vector>

using namespace std;


class CaffeConfig
{
public:
    CaffeConfig();
    string getCaffeModel();
    string getCaffePrototxt();
    vector<string> getFeatureLayerName();
    string getImagePath();

};

#endif // CAFFECONFIG_H

// caffeconfig.cpp (implementation of the configuration class)
#include "caffeconfig.h"

CaffeConfig::CaffeConfig()
{

}

string CaffeConfig::getCaffeModel()
{
    string caffeModelPath = "../CaffeLearn2/FaceRecognition/arcface50-caffe/face.caffemodel";
    return caffeModelPath;

}

string CaffeConfig::getCaffePrototxt()
{
    string caffePrototxtPath = "../CaffeLearn2/FaceRecognition/arcface50-caffe/face.prototxt";
    return caffePrototxtPath;
}

vector<string> CaffeConfig::getFeatureLayerName()
{
    vector<string> featureLayerName;
    string value1 = "fc1";
    featureLayerName.push_back(value1);
    return featureLayerName;
}

string CaffeConfig::getImagePath()
{
    string imagePath = "../CaffeLearn2/FaceRecognition/demo.png";
    return imagePath;
}

The output is shown below:

[screenshot of the printed feature values omitted]

4. Source code download

Click to download

5. References

[1] Xue Yunfeng (薛云峰), "An Analysis of the Caffe Deep Learning Framework Source Code" (深度学习框架Caffe源码解析)

[2] "Caffe: An Analysis of the Net Class (1)" (Caffe: Net类解析(1))

[3] "Caffe Source Code Analysis: How a Caffe Layer Works" (caffe源码解析—caffe layer的工作原理理解)
