Detectron2 Starter Tutorial: Faster R-CNN Object Detection on a Custom Dataset

Introduction to Detectron2

Detectron2 is Facebook AI Research's next-generation library providing state-of-the-art detection and segmentation algorithms. It is the successor of Detectron and maskrcnn-benchmark, and it powers many computer vision research projects and production applications at Facebook.

In short, Detectron2 is a framework that offers simple, fast implementations of many of Facebook's computer vision research results. To see which models are supported, check out their Model Zoo and the GitHub repository.

In this article, we will build a Faster R-CNN model for object detection, using a custom dataset, which is the more general case in practice.

Code Walkthrough

First, import the packages we will need.

import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()

# import some common libraries
import numpy as np
import os, json, cv2, random

# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.structures import BoxMode

from tqdm import tqdm

Preparing the Dataset

Since this is a custom dataset, we first need to register it.

def get_balloon_dicts(img_dir):
    json_file = os.path.join(img_dir, "via_region_data.json")

    with open(json_file) as f:
        imgs_anns = json.load(f)

    dataset_dicts = []
    dataset_dicts = []
    for idx, v in tqdm(enumerate(imgs_anns.values())):
        record = {}
        
        filename = os.path.join(img_dir, v["filename"])
        height, width = cv2.imread(filename).shape[:2]
        
        record["file_name"] = filename
        record["image_id"] = idx
        record["height"] = height
        record["width"] = width
      
        annos = v["regions"]
        objs = []
        for _, anno in annos.items():
            assert not anno["region_attributes"]
            anno = anno["shape_attributes"]
            px = anno["all_points_x"]
            py = anno["all_points_y"]

            obj = {
                "bbox": [np.min(px), np.min(py), np.max(px), np.max(py)],
                "bbox_mode": BoxMode.XYXY_ABS,
                "category_id": 0,
            }
            objs.append(obj)
        record["annotations"] = objs
        dataset_dicts.append(record)
    return dataset_dicts

for d in ["train", "val"]:
    DatasetCatalog.register("balloon_" + d, lambda d=d: get_balloon_dicts("/home/faster_rcnn/datasets/balloon/" + d))
    MetadataCatalog.get("balloon_" + d).set(thing_classes=["balloon"])

Here we define a function, get_balloon_dicts, which is used by DatasetCatalog.register.

The function takes a string pointing to the directory containing the dataset images and the annotation file, and returns a list of formatted annotation records.

As an example, here is one element of the annotation file before formatting:

"24_Soldier_Firing_Soldier_Firing_24_281": {
        "fileref": "",
        "size": 0,
        "filename": "24_Soldier_Firing_Soldier_Firing_24_281.jpg",
        "base64_img_data": "",
        "file_attributes": {},
        "regions": {
            "0": {
                "shape_attributes": {
                    "name": "polygon",
                    "all_points_x": [
                        387,
                        401
                    ],
                    "all_points_y": [
                        300,
                        315
                    ]
                },
                "region_attributes": {}
            },
            "1": {
                "shape_attributes": {
                    "name": "polygon",
                    "all_points_x": [
                        712,
                        730
                    ],
                    "all_points_y": [
                        414,
                        435
                    ]
                },
                "region_attributes": {}
            }
        }
    },

Looking back at the code, get_balloon_dicts builds one record dict per image. Each record contains file_name, image_id, height, and width (the basic image information), plus annotations, which is objs, a list of obj dicts. Each obj corresponds to one box in the image and contains bbox, bbox_mode, and category_id: the box coordinates, the box annotation format, and the class of the object inside the box, respectively.
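The per-region conversion done inside get_balloon_dicts can be sketched in plain Python. The polygon points below are made up purely for illustration:

```python
# Minimal sketch of the per-region conversion in get_balloon_dicts.
# The polygon is collapsed to its axis-aligned bounding box in XYXY_ABS
# format: [x_min, y_min, x_max, y_max] in absolute pixel coordinates.
region = {
    "shape_attributes": {
        "name": "polygon",
        "all_points_x": [387, 401, 395],  # made-up sample points
        "all_points_y": [300, 315, 308],
    },
    "region_attributes": {},
}

anno = region["shape_attributes"]
px, py = anno["all_points_x"], anno["all_points_y"]
obj = {
    "bbox": [min(px), min(py), max(px), max(py)],
    "bbox_mode": "XYXY_ABS",  # stands in for BoxMode.XYXY_ABS here
    "category_id": 0,         # every balloon region maps to class 0
}
print(obj["bbox"])  # → [387, 300, 401, 315]
```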

MetadataCatalog.get("balloon_" + d).set(thing_classes=["balloon"])

sets the label names for the classes; here there is only one class, balloon.
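One detail worth noting in the registration loop is the `lambda d=d:` default argument. Without it, every lambda would close over the same loop variable, and all registered datasets would end up loading the last value of d. A small standalone sketch of the difference:

```python
# Late binding: each lambda looks up d when called, after the loop has
# finished, so all of them see the final value "val".
late = [lambda: "balloon_" + d for d in ["train", "val"]]
print([f() for f in late])   # → ['balloon_val', 'balloon_val']

# Default-argument capture: each lambda freezes its own copy of d at
# definition time, which is what DatasetCatalog.register needs here.
bound = [lambda d=d: "balloon_" + d for d in ["train", "val"]]
print([f() for f in bound])  # → ['balloon_train', 'balloon_val']
```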


Training

from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("balloon_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")  # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 2  # This is the real "batch size" commonly known to deep learning people
cfg.SOLVER.BASE_LR = 0.000025  # pick a good LR
cfg.SOLVER.MAX_ITER = 270000    # adjust to your dataset; the official Colab tutorial uses only 300 iterations for the toy balloon dataset
cfg.SOLVER.STEPS = []        # do not decay learning rate
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512   # the "RoI head batch size" (default: 512); a smaller value such as 128 is faster and often good enough for a toy dataset
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # only one class (balloon); see https://detectron2.readthedocs.io/tutorials/datasets.html#update-the-config-for-new-datasets
# NOTE: this config is the number of classes; a few popular unofficial tutorials incorrectly use num_classes + 1 here.

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg) 
trainer.resume_or_load(resume=False)
trainer.train()

Here we set a number of config options and start training. By default, the results are written to the output folder, and the trained weights are saved as model_final.pth.
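Note that Detectron2 measures training length in iterations rather than epochs, where each iteration consumes IMS_PER_BATCH images. A rough conversion can be sketched as follows (the dataset size of 61 training images is the balloon dataset's, used here purely for illustration):

```python
# Rough iteration <-> epoch conversion for Detectron2's solver settings.
ims_per_batch = 2       # cfg.SOLVER.IMS_PER_BATCH
max_iter = 270000       # cfg.SOLVER.MAX_ITER
dataset_size = 61       # images in balloon_train (illustrative assumption)

images_seen = max_iter * ims_per_batch          # total images processed
epochs = images_seen / dataset_size             # full passes over the data
print(f"~{epochs:.0f} epochs")
```

This is why the same MAX_ITER value means very different amounts of training on datasets of different sizes.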


Evaluation

cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")  # path to the model we just trained
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7   # set a custom testing threshold   
predictor = DefaultPredictor(cfg)

from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader
evaluator = COCOEvaluator("balloon_val", output_dir="./output")
val_loader = build_detection_test_loader(cfg, "balloon_val")
print(inference_on_dataset(predictor.model, val_loader, evaluator))

First we load the trained weights, "model_final.pth", and set the score threshold to 0.7; the higher the threshold, the stricter the filtering.
Finally, the evaluation result is printed; a correct run should produce COCO-style AP metrics similar to the figure below.
(figure omitted: COCO evaluation output)
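Conceptually, SCORE_THRESH_TEST simply discards detections whose confidence falls below the threshold. The filtering step can be sketched as follows (the boxes and scores are made-up values):

```python
# Sketch of score-threshold filtering applied to predicted boxes.
score_thresh = 0.7  # cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST

detections = [  # (bbox, score) pairs; values made up for illustration
    ([387, 300, 401, 315], 0.92),
    ([712, 414, 730, 435], 0.55),
    ([100, 100, 150, 150], 0.71),
]

kept = [d for d in detections if d[1] >= score_thresh]
print(len(kept))  # → 2: the 0.55 detection is discarded
```

Raising the threshold therefore trades recall for precision: fewer boxes survive, but those that do are higher-confidence.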

References

https://github.com/facebookresearch/detectron2
https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5#scrollTo=gKkz6CkaL6Y2

