Training Mask-RCNN with the TensorFlow Object Detection API (Part 2)

Continuing from the previous post.

5. Training the model

First, download a pre-trained model from the official detection model zoo:

https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md

I downloaded the mask_rcnn_inception_v2_coco model. After extracting the archive, the folder contains the pre-trained checkpoint (model.ckpt.data-00000-of-00001, model.ckpt.index, model.ckpt.meta) together with frozen_inference_graph.pb, pipeline.config, and a saved_model directory.

Next, edit pipeline.config to match your actual training classes. The fields that need changing are marked below; I suggest leaving the other hyperparameters alone at first and tuning them later through experiments.

model {
  faster_rcnn {
    number_of_stages: 3
    num_classes: 90   ### change: number of classes = number of segmentation classes + 1 (for background)
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 800 ### adjust to your own images
        max_dimension: 1365  ### adjust to your own images
      }
    }
    feature_extractor {
      type: "faster_rcnn_inception_v2"
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        height_stride: 16
        width_stride: 16
        scales: 0.25
        scales: 0.5
        scales: 1.0
        scales: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 1.0
        aspect_ratios: 2.0
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.00999999977648
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.699999988079
    first_stage_max_proposals: 100
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
        use_dropout: false
        dropout_keep_probability: 1.0
        conv_hyperparams {
          op: CONV
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            truncated_normal_initializer {
              stddev: 0.00999999977648
            }
          }
        }
        predict_instance_masks: true
        mask_prediction_conv_depth: 0
        mask_height: 15
        mask_width: 15
        mask_prediction_num_conv_layers: 2
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.300000011921
        iou_threshold: 0.600000023842
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
    second_stage_mask_prediction_loss_weight: 4.0
  }
}
train_config {
  batch_size: 1
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  optimizer {
    momentum_optimizer {
      learning_rate {
        manual_step_learning_rate {
          initial_learning_rate: 0.000199999994948
          schedule {
            step: 0
            learning_rate: 0.000199999994948
          }
          schedule {
            step: 900000
            learning_rate: 1.99999994948e-05
          }
          schedule {
            step: 1200000
            learning_rate: 1.99999999495e-06
          }
        }
      }
      momentum_optimizer_value: 0.899999976158
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt" ### path to the pre-trained checkpoint
  from_detection_checkpoint: true
  num_steps: 200000
}
train_input_reader {
  label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt" ### path to the training label map
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/mscoco_train.record" ### path to the training TFRecords
  }
}
eval_config {
  num_examples: 8000
  max_evals: 10
  use_moving_averages: false
}
eval_input_reader {
  label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"  ### path to the validation label map
  shuffle: false
  num_readers: 1
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/mscoco_val.record" ### path to the validation TFRecords
  }
}
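For reference, the file pointed to by label_map_path is a plain-text protobuf label map that assigns each class name an id starting from 1 (id 0 is reserved for background). A minimal sketch for a single pig class would look like this:

item {
  id: 1
  name: 'pig'
}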

Finally, feed the prepared files to the API. Change into the <git clone path>/models/research/object_detection/ directory and start training by running:

python model_main.py --model_dir=./log --pipeline_config_path=./pipeline.config

where

--model_dir: directory where checkpoints of the trained model are saved

--pipeline_config_path: full path to the pipeline.config file edited above

Then run

tensorboard --logdir=./log

and open the printed link in a browser to view the curves of the training and validation metrics.

After training succeeds, the model directory will contain files like the following (the exact number of files depends on how many steps you train and on the checkpoint-save interval).
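For reference, a typical model directory looks roughly like this (the step numbers and the events file name will differ on your machine):

checkpoint
graph.pbtxt
model.ckpt-543.data-00000-of-00001
model.ckpt-543.index
model.ckpt-543.meta
events.out.tfevents...
pipeline.config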

6. Model conversion (ckpt to pb)

From the <git clone path>/models/research/object_detection/ directory, run:

python export_inference_graph.py --input_type=image_tensor \
--pipeline_config_path=/home/xah/mask-rcnn_tensorflow/mask_rcnn_test/training/mask_rcnn_inception_v2_coco.config \
--trained_checkpoint_prefix=/home/xah/tensorflow_model/models/research/object_detection/save_model/model.ckpt-543 \
--output_directory=/home/xah/mask-rcnn_tensorflow/mask_rcnn_test/pbmodel

where

--pipeline_config_path: path to the model config file (.config) that you edited earlier

--trained_checkpoint_prefix: path prefix of the ckpt checkpoint produced by training

--output_directory: directory where the converted pb model is stored

After the conversion succeeds, the following files are generated under --output_directory.
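For reference, the output directory typically contains:

checkpoint
frozen_inference_graph.pb
model.ckpt.data-00000-of-00001
model.ckpt.index
model.ckpt.meta
pipeline.config
saved_model/saved_model.pb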

Note: the frozen_inference_graph.pb generated here will be needed later.

7. Model deployment (C++)

The converted pb model can be loaded through the interface provided by the dnn module of OpenCV 4.0, but one more file is required: graph.pbtxt.

The script tf_text_graph_mask_rcnn.py can be found in the dnn samples of OpenCV 4.0 (samples/dnn):

python tf_text_graph_mask_rcnn.py  \
--input=/the/path/of/pbmodel/frozen_inference_graph.pb \
--output=/output/path/mask_rcnn_test/pbmodel/graph.pbtxt \
--config=/input/path/of/config/mask_rcnn_test/pigtrain/mask_rcnn_inception_v2_pig.config

graph.pbtxt is generated at the path specified by --output.

The deployment code can use the following example:


#include <fstream>
#include <sstream>
#include <iostream>
#include <string>

#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>


using namespace cv;
using namespace dnn;
using namespace std;

// Initialize the parameters
float confThreshold = 0.5; // Confidence threshold
float maskThreshold = 0.3; // Mask threshold

vector<string> classes;
vector<Scalar> colors;

// Draw the predicted bounding box
void drawBox(Mat& frame, int classId, float conf, Rect box, Mat& objectMask);

// Postprocess the neural network's output for each frame
void postprocess(Mat& frame, const vector<Mat>& outs);

int main()
{
	// Load names of classes
	string classesFile = "./pbmodel/pig.names";
	ifstream ifs(classesFile.c_str());
	string line;
	while (getline(ifs, line)) classes.push_back(line);

	colors.push_back(Scalar(255.0, 255.0, 255.0, 255.0));
	

	// Give the configuration and weight files for the model
	String textGraph = "./pbmodel/graph.pbtxt";
	String modelWeights = "./pbmodel/frozen_inference_graph.pb";

	// Load the network
	Net net = readNetFromTensorflow(modelWeights, textGraph);
	net.setPreferableBackend(DNN_BACKEND_DEFAULT);
	net.setPreferableTarget(DNN_TARGET_CPU);

	// Open a video file or an image file or a camera stream.
	string str, outputFile;
	VideoCapture cap(0); // change the index if your camera uses a different device id
	//VideoWriter video;
	Mat frame, blob;

	// Create a window
	static const string kWinName = "Deep learning object detection in OpenCV";
	namedWindow(kWinName, WINDOW_NORMAL);

	// Process frames.
	while (waitKey(1) < 0)
	{
		// get frame from the video
		cap >> frame;

		// Stop the program if reached end of video
		if (frame.empty()) 
		{
			cout << "Done processing !!!" << endl;
			cout << "Output file is stored as " << outputFile << endl;
			waitKey(3000);
			break;
		}
		// Create a 4D blob from a frame.
		blobFromImage(frame, blob, 1.0, Size(frame.cols, frame.rows), Scalar(), true, false);
		//blobFromImage(frame, blob);

		//Sets the input to the network
		net.setInput(blob);

		// Runs the forward pass to get output from the output layers
		std::vector<String> outNames(2);
		outNames[0] = "detection_out_final";
		outNames[1] = "detection_masks";
		vector<Mat> outs;
		net.forward(outs, outNames);

		// Extract the bounding box and mask for each of the detected objects
		postprocess(frame, outs);

		// Put efficiency information. The function getPerfProfile returns the overall time for inference(t) and the timings for each of the layers(in layersTimes)
		vector<double> layersTimes;
		double freq = getTickFrequency() / 1000;
		double t = net.getPerfProfile(layersTimes) / freq;
		string label = format("Mask-RCNN on 2.5 GHz Intel Core i7 CPU, Inference time for a frame : %0.0f ms", t);
		putText(frame, label, Point(0, 15), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 0, 0));

		// Write the frame with the detection boxes
		Mat detectedFrame;
		frame.convertTo(detectedFrame, CV_8U);

		imshow(kWinName, frame);

	}
	cap.release();
	return 0;
}

// For each frame, extract the bounding box and mask for each detected object
void postprocess(Mat& frame, const vector<Mat>& outs)
{
	Mat outDetections = outs[0];
	Mat outMasks = outs[1];

	// Output size of masks is NxCxHxW where
	// N - number of detected boxes
	// C - number of classes (excluding background)
	// HxW - segmentation shape
	const int numDetections = outDetections.size[2];
	const int numClasses = outMasks.size[1];

	outDetections = outDetections.reshape(1, outDetections.total() / 7);
	for (int i = 0; i < numDetections; ++i)
	{
		float score = outDetections.at<float>(i, 2);
		if (score > confThreshold)
		{
			// Extract the bounding box
			int classId = static_cast<int>(outDetections.at<float>(i, 1));
			int left = static_cast<int>(frame.cols * outDetections.at<float>(i, 3));
			int top = static_cast<int>(frame.rows * outDetections.at<float>(i, 4));
			int right = static_cast<int>(frame.cols * outDetections.at<float>(i, 5));
			int bottom = static_cast<int>(frame.rows * outDetections.at<float>(i, 6));

			left = max(0, min(left, frame.cols - 1));
			top = max(0, min(top, frame.rows - 1));
			right = max(0, min(right, frame.cols - 1));
			bottom = max(0, min(bottom, frame.rows - 1));
			Rect box = Rect(left, top, right - left + 1, bottom - top + 1);

			// Extract the mask for the object
			Mat objectMask(outMasks.size[2], outMasks.size[3], CV_32F, outMasks.ptr<float>(i, classId));

			// Draw bounding box, colorize and show the mask on the image
			drawBox(frame, classId, score, box, objectMask);

		}
	}
}

// Draw the predicted bounding box, colorize and show the mask on the image
void drawBox(Mat& frame, int classId, float conf, Rect box, Mat& objectMask)
{
	//Draw a rectangle displaying the bounding box
	rectangle(frame, Point(box.x, box.y), Point(box.x + box.width, box.y + box.height), Scalar(255, 178, 50), 3);

	//Get the label for the class name and its confidence
	string label = format("%.2f", conf);
	if (!classes.empty())
	{
		CV_Assert(classId < (int)classes.size());
		label = classes[classId] + ":" + label;
	}

	//Display the label at the top of the bounding box
	int baseLine;
	Size labelSize = getTextSize(label, FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);
	box.y = max(box.y, labelSize.height);
	rectangle(frame, Point(box.x, box.y - round(1.5*labelSize.height)), Point(box.x + round(1.5*labelSize.width), box.y + baseLine), Scalar(255, 255, 255), FILLED);
	putText(frame, label, Point(box.x, box.y), FONT_HERSHEY_SIMPLEX, 0.75, Scalar(0, 0, 0), 1);

	Scalar color = colors[classId%colors.size()];

	// Resize the mask, threshold, color and apply it on the image
	resize(objectMask, objectMask, Size(box.width, box.height));
	Mat mask = (objectMask > maskThreshold);
	Mat coloredRoi = (0.3 * color + 0.7 * frame(box));
	coloredRoi.convertTo(coloredRoi, CV_8UC3);

	// Draw the contours on the image
	vector<Mat> contours;
	Mat hierarchy;
	mask.convertTo(mask, CV_8U);
	findContours(mask, contours, hierarchy, RETR_CCOMP, CHAIN_APPROX_SIMPLE);
	drawContours(coloredRoi, contours, -1, color, 5, LINE_8, hierarchy, 100);
	coloredRoi.copyTo(frame(box), mask);

}
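To build the sample, a minimal sketch (assuming the source file is saved as mask_rcnn_demo.cpp, a name chosen here for illustration, and OpenCV 4 is visible to pkg-config) is:

g++ -std=c++11 mask_rcnn_demo.cpp -o mask_rcnn_demo `pkg-config --cflags --libs opencv4`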

The textGraph and modelWeights in the code above are the graph.pbtxt and frozen_inference_graph.pb files noted earlier. As for classesFile, the .names file simply stores the class names, one per line; you can hardcode the names in the code (a sketch follows below), or create the file yourself as I did:

background
pig

Add more class names as your actual dataset requires.
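If you prefer to hardcode the class names instead of reading a .names file, a minimal replacement for the getline loop in main is:

	// hardcoded class list matching the .names file above
	classes = vector<string>{ "background", "pig" };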

8. Results

 

The training mainly targeted foreground-subject segmentation, so the results are as shown below. Because the dataset is quite small, the quality still leaves room for improvement!

The workflow for training and deploying Mask-RCNN is roughly as described in these two posts. I hope it is useful!
