FaceNet face recognition on NVIDIA NX

Part 1: Training
https://github.com/davidsandberg/facenet
Training produces a .meta checkpoint, which is then converted with tf2onnx. There are two routes: directly meta -> onnx, or meta -> pb -> onnx.
Install tf2onnx with:

sudo conda install --channel https://conda.anaconda.org/conda-forge tf2onnx

Method 1:

python -m tf2onnx.convert --checkpoint model-20220303-112436.meta --inputs Placeholder:0 --outputs output:0 --output onnxModel.onnx --opset 11

Error:

ValueError: Node 'gradients/InceptionResnetV1/Bottleneck/BatchNorm/cond/FusedBatchNorm_1_grad/FusedBatchNormGrad' has an _output_shapes attribute inconsistent with the GraphDef for output #3: Dimension 0 in both shapes must be equal, but are 0 and 128. Shapes are [0] and [128]

I did not find the cause of this error; the failing node is a gradient (training-only) op, and my suspicion is that the inputs passed to the converter are wrong.

Method 2:
First convert the meta checkpoint to a pb with a script:
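The facenet repo ships a freeze_graph.py for this step. Below is a minimal sketch of the same idea, not the repo's actual script; the checkpoint prefix and output node name are taken from the commands elsewhere in this post and should be adapted:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

meta_path = './model-20220303-112436.meta'
ckpt_prefix = './model-20220303-112436.ckpt-5'   # prefix of the .data/.index files, not the .data file itself
output_node = 'InceptionResnetV1/Bottleneck/BatchNorm/Reshape_1'

with tf.compat.v1.Session() as sess:
    # load the graph definition and restore the trained weights
    saver = tf.compat.v1.train.import_meta_graph(meta_path, clear_devices=True)
    saver.restore(sess, ckpt_prefix)
    # bake the variables into constants, keeping only what the output node needs
    frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, [output_node])
    with tf.io.gfile.GFile('./model.pb', 'wb') as f:
        f.write(frozen.SerializeToString())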

Then convert the pb to ONNX:

pip install tf2onnx
python -m tf2onnx.convert --saved-model ./zl/model.pb --output ./model.onnx --opset 7

This kept failing because model.pb could not be found:

OSError: SavedModel file does not exist at: ./model.pb/{saved_model.pbtxt|saved_model.pb}

Changed the path and the file name (model.pb renamed to saved_model.pb):

python -m tf2onnx.convert --saved-model ./zl --output ./model.onnx --opset 7

Another error:

RuntimeError: MetaGraphDef associated with tags 'serve' could not be found in SavedModel, with available tags '[set()]'. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`.
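As the message suggests, the tag-sets actually present in the SavedModel directory can be listed with saved_model_cli (directory name as used above):

saved_model_cli show --dir ./zl --all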

This looked like an environment problem; after sorting it out and treating the file as a frozen graph (--input instead of --saved-model, since the .pb is a plain GraphDef rather than a SavedModel), the following runs:

python3 -m tf2onnx.convert --input saved_model.pb --inputs batch_join:0[1,160,160,3],batch_join:1[1,160,160,3],image_batch:0[1,160,160,3],input:0[1,160,160,3] --outputs InceptionResnetV1/Bottleneck/BatchNorm/Reshape_1:0 --output model.onnx

The conversion succeeded! But the structure of the converted model is not correct. Looking back, the structure of the pb model was already wrong, and I have not found out why.
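To see what actually ended up in the converted model, the onnx Python package can validate and print the graph (a small helper added here for illustration; file names follow the commands above):

import onnx

model = onnx.load('./model.onnx')                   # output of the tf2onnx command above
onnx.checker.check_model(model)                     # basic structural validity check
print(onnx.helper.printable_graph(model.graph))     # readable dump of nodes, inputs and outputs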

Another attempt:

python -m tf2onnx.convert --input model.pb --inputs X:0 --outputs output:0 --output model.onnx --verbose

This method requires the input and output node names to be specified explicitly (the node-listing snippet further down is one way to find them).

python3 -m tf2onnx.convert --input saved_model.pb --inputs batch_size:0 --outputs InceptionResnetV1/Bottleneck/BatchNorm/Reshape_1:0 --output facenetconv_0304.onnx

Error:

Traceback (most recent call last):
  File "/home/zl/anaconda3/envs/facenet_modelcvt/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/zl/anaconda3/envs/facenet_modelcvt/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/zl/anaconda3/envs/facenet_modelcvt/lib/python3.9/site-packages/tf2onnx/convert.py", line 633, in <module>
    main()
  File "/home/zl/anaconda3/envs/facenet_modelcvt/lib/python3.9/site-packages/tf2onnx/convert.py", line 227, in main
    graph_def, inputs, outputs = tf_loader.from_graphdef(args.graphdef, args.inputs, args.outputs)
  File "/home/zl/anaconda3/envs/facenet_modelcvt/lib/python3.9/site-packages/tf2onnx/tf_loader.py", line 367, in from_graphdef
    frozen_graph = tf_optimize(input_names, output_names, frozen_graph)
  File "/home/zl/anaconda3/envs/facenet_modelcvt/lib/python3.9/site-packages/tf2onnx/tf_loader.py", line 710, in tf_optimize
    graph_def = tf_optimize_grappler(input_names, output_names, graph_def, fold_constant)
  File "/home/zl/anaconda3/envs/facenet_modelcvt/lib/python3.9/site-packages/tf2onnx/tf_loader.py", line 689, in tf_optimize_grappler
    meta_graph = tf.compat.v1.train.export_meta_graph(graph_def=graph_def)
  File "/home/zl/anaconda3/envs/facenet_modelcvt/lib/python3.9/site-packages/tensorflow/python/training/saver.py", line 1590, in export_meta_graph
    meta_graph_def, _ = meta_graph.export_scoped_meta_graph(
  File "/home/zl/anaconda3/envs/facenet_modelcvt/lib/python3.9/site-packages/tensorflow/python/framework/meta_graph.py", line 1035, in export_scoped_meta_graph
    scoped_meta_graph_def = create_meta_graph_def(
  File "/home/l/anaconda3/envs/facenet_modelcvt/lib/python3.9/site-packages/tensorflow/python/framework/meta_graph.py", line 577, in create_meta_graph_def
    meta_graph_def.graph_def.MergeFrom(graph_def)
TypeError: Parameter to MergeFrom() must be instance of same class: expected tensorflow.GraphDef got tensorflow.GraphDef.

Unresolved. (An "expected tensorflow.GraphDef got tensorflow.GraphDef" MergeFrom error usually means two different builds of the TensorFlow protobuf classes are loaded in the same environment, e.g. mismatched tensorflow/protobuf installs.)

Python helper snippets

Argument handling for the meta -> pb conversion script (so it can be run from PyCharm with default paths instead of from the terminal):

import argparse

def parse_arguments():
    parser = argparse.ArgumentParser()
    # The leading '--' makes these optional arguments with defaults, so nothing has to be
    # passed on the command line; a positional parser.add_argument('output_file', type=str)
    # would have to be supplied when running from the terminal.
    parser.add_argument('--model_dir', type=str, default='/home/zl/model-20220303-112436.ckpt-5.data-00000-of-00001')
    parser.add_argument('--output_file', type=str, default='/home/zl/model.pb')
    return parser.parse_args()

if __name__ == '__main__':
    main(parse_arguments())   # main() is the conversion script's entry point
# List every node name in the frozen .pb, to find the input/output node names needed by tf2onnx.
import tensorflow as tf

model_dir = './model.pb'

def create_graph():
    # load the GraphDef and import it into the default graph
    with tf.io.gfile.GFile(model_dir, 'rb') as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

create_graph()
tensor_name_list = [tensor.name for tensor in tf.compat.v1.get_default_graph().as_graph_def().node]
print("input_name:" + tensor_name_list[0])    # the first node is normally the input placeholder
print("all_nodes:" + str(tensor_name_list))   # dump everything so the real output node can be located

Part 2: Deployment

Method 1:
deepstream 5.0 + yolo + facenet
Reference: https://github.com/shubham-shahh/mtcnn_facenet_cpp_tensorRT/tree/develop

Method 2:
mtcnn + facenet
Reference: https://github.com/shubham-shahh/mtcnn_facenet_cpp_tensorRT/tree/develop

Method 3:
Reference: https://github.com/shubham-shahh/mtcnn_facenet_cpp_tensorRT

Here I use Method 2 together with the facenet model from Method 3:

#Move to sample app directory
cd /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test

#build
CUDA_VER=10.2 make

#Run test app
./deepstream-infer-tensor-meta-app -t infer /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264

I am using DeepStream 6.0, where some interfaces differ, so deepstream_infer_tensor_meta_test.cpp has to be modified to build.
I added the following feature comparison:

#include <fstream>
#include <sstream>
#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>
#include <cmath>

//#define INFER_PGIE_CONFIG_FILE  "/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/dstest2_pgie_config.txt"
#define INFER_PGIE_CONFIG_FILE  "./dstest2_pgie_config_face.txt"

std::ifstream feature_diku;               /* feature library: one name line + one 128-float line per face */
float *pcFeatureData = NULL;
std::vector<float> embeddedFace(128);     /* reference embedding loaded from feature.txt */

/* Euclidean (L2) distance between two embeddings */
float vectors_distance(const std::vector<float>& a, const std::vector<float>& b) {
    std::vector<double> auxiliary;
    std::transform (a.begin(), a.end(), b.begin(), std::back_inserter(auxiliary),
                    [](float element1, float element2) {return pow((element1-element2),2);});
    float loopSum = 0.;
    for(auto it=auxiliary.begin(); it!=auxiliary.end(); ++it) loopSum += *it;
    return std::sqrt(loopSum);
}
static GstPadProbeReturn
sgie_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  static guint use_device_mem = 0;
  std::stringstream strIn;
  int i = 0;
  float f32FeatTemp = 0.;
  std::string strFeature;
  const char *s_sub = "jpeg";        /* name lines in feature.txt contain the image file name */
  std::string person_name;

  /* NOTE: the feature library is (re)loaded on every buffer here; in a real
     deployment it should be loaded once at start-up. */
  if (pcFeatureData == NULL)
    pcFeatureData = (float *) malloc (128 * sizeof (float));
  memset (pcFeatureData, 0, 128 * sizeof (float));

  feature_diku.open ("./feature.txt");
  if (!feature_diku.is_open ()) {
    g_print ("error reading feature.txt\n");
  }
  /* feature.txt alternates a "<name>.jpeg" line and a line of 128 floats */
  while (getline (feature_diku, strFeature)) {
    if (strstr (strFeature.c_str (), s_sub) != NULL) {
      person_name = strFeature;
    } else {
      strIn.clear ();                /* reset the stream state before parsing the next line */
      strIn.str (strFeature);
      i = 0;
      while (strIn >> f32FeatTemp) {
        *(pcFeatureData + i) = f32FeatTemp;
        ++i;
      }
      std::copy_n (pcFeatureData, 128, embeddedFace.begin ());
    }
  }
  feature_diku.close ();

  NvDsBatchMeta *batch_meta =
      gst_buffer_get_nvds_batch_meta (GST_BUFFER (info->data));
  
  /* Iterate each frame metadata in batch */
  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    /* Iterate object metadata in frame */
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;

      /* Iterate user metadata in object to search SGIE's tensor data */
      for (NvDsMetaList * l_user = obj_meta->obj_user_meta_list; l_user != NULL;
          l_user = l_user->next) {
        NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
        if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
          continue;

        /* convert to tensor metadata */
        NvDsInferTensorMeta *meta =
            (NvDsInferTensorMeta *) user_meta->user_meta_data;
        for (unsigned int i = 0; i < meta->num_output_layers; i++) {
          NvDsInferLayerInfo *info = &meta->output_layers_info[i];

          info->buffer = meta->out_buf_ptrs_host[i];
          /* if the tensor is still in device memory, copy it back to the host
             before reading it */
          if (use_device_mem && meta->out_buf_ptrs_dev[i]) {
            cudaMemcpy (meta->out_buf_ptrs_host[i], meta->out_buf_ptrs_dev[i],
                info->inferDims.numElements * 4, cudaMemcpyDeviceToHost);
          }

          /* the SGIE output layer is the 128-d facenet embedding */
          float *array = (float *) info->buffer;
          std::vector<float> currEmbedding(128);
          std::cout << "Shape " << info->inferDims.numElements << std::endl;
          std::copy_n(array, 128, currEmbedding.begin());

          /* L2 distance between this face and the reference embedding */
          float currDistance = vectors_distance(currEmbedding, embeddedFace);
          g_print ("distance=%f\n", currDistance);
          /* with several reference faces, the match would be the smallest distance:
             if (currDistance < minDistance) { minDistance = currDistance; winner = j; } */
        }
    ...
    }
    }
  
  }

  use_device_mem = 1 - use_device_mem;
  return GST_PAD_PROBE_OK;
}
  //queue_src_pad = gst_element_get_static_pad (queue2, "src");
  //gst_pad_add_probe (queue_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
  //    pgie_pad_buffer_probe, NULL, NULL);

The model conversion is done locally on the host machine.

#Convert the Keras facenet model (.h5) to a TensorFlow .pb
cd ./mtcnn_facenet_cpp_tensorRT/ModelConversion
python3 h5topb.py --input_path ./kerasmodel/facenet_keras_128.h5 --output_path ./tensorflowmodel/facenet.pb

#Convert Tensorflow model to an ONNX model. This will take approx 50 mins and this has to be done on the host device
python3 -m tf2onnx.convert --input ./tensorflowmodel/facenet.pb --inputs input_1:0[1,160,160,3] --inputs-as-nchw input_1:0 --outputs Bottleneck_BatchNorm/batchnorm_1/add_1:0 --output onnxmodel/facenetconv.onnx

#Convert ONNX model to a model that can take dynamic input size
python3 dynamic_conv.py --input_path ./onnxmodel/facenetconv.onnx --output_path ./dynamiconnxmodel/dynamicfacenetmodel.onnx
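
dynamic_conv.py is the repo's script; the underlying idea, marking the batch dimension as symbolic, can be sketched with the onnx API like this (illustration only, not the repo's implementation):

import onnx

model = onnx.load('./onnxmodel/facenetconv.onnx')
# mark the first (batch) dimension of every graph input and output as symbolic
for tensor in list(model.graph.input) + list(model.graph.output):
    tensor.type.tensor_type.shape.dim[0].dim_param = 'batch'
onnx.checker.check_model(model)
onnx.save(model, './dynamiconnxmodel/dynamicfacenetmodel.onnx')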