Custom object detection datasets for PaddleDetection / mmdetection / Detectron2

Annotating in COCO format

Organize the dataset in COCO format; most object detection frameworks support COCO directly. The COCO annotation format is documented at COCO - Common Objects in Context.

The format is dictionaries and lists nested inside each other. For detection, the root is a dictionary with five keys: info, images, annotations, licenses, and categories. info is a dictionary; images is a list of image dictionaries; annotations is a list of annotation dictionaries; licenses is a list of license dictionaries; categories is a list of category dictionaries. The whole structure is saved as JSON.

Root dictionary
{
"info": info, 
"images": [image], 
"annotations": [annotation], 
"licenses": [license],
"categories": [category],
}

Contents
info{
"year": int, 
"version": str, 
"description": str, 
"contributor": str, 
"url": str, 
"date_created": datetime,
}

image{
"id": int, 
"width": int, 
"height": int, 
"file_name": str, 
"license": int, 
"flickr_url": str, 
"coco_url": str, 
"date_captured": datetime,
}

license{
"id": int, 
"name": str, 
"url": str,
}

Annotation for object detection:
annotation{
"id": int, 
"image_id": int, 
"category_id": int, 
"segmentation": RLE or [polygon], 
"area": float, 
"bbox": [x,y,width,height], 
"iscrowd": 0 or 1,
}

categories[{
"id": int, 
"name": str, 
"supercategory": str,
}]
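Putting the pieces together, a minimal annotation file can be built and dumped with plain Python. The image and box values below are made up purely for illustration:

```python
import json

# Minimal COCO-style detection annotation file (illustrative values only).
coco = {
    "info": {"year": 2022, "version": "1.0", "description": "greenhouse det",
             "contributor": "", "url": "", "date_created": "2022-01-01"},
    "licenses": [{"id": 1, "name": "private", "url": ""}],
    "images": [{"id": 1, "width": 768, "height": 768,
                "file_name": "wf1_000000.tif", "license": 1,
                "flickr_url": "", "coco_url": "", "date_captured": ""}],
    "annotations": [{"id": 1, "image_id": 1, "category_id": 1,
                     "segmentation": [],
                     "bbox": [100.0, 120.0, 50.0, 40.0],  # [x, y, width, height]
                     "area": 50.0 * 40.0,                 # box area for a plain bbox
                     "iscrowd": 0}],
    "categories": [{"id": 1, "name": "factory", "supercategory": "building"}],
}

with open("train.json", "w") as f:
    json.dump(coco, f)
```

Note that annotation ids and image ids must each be unique across the whole file, and every category_id must appear in categories.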

Once the dataset is annotated in COCO format, using it is straightforward.

The folder layout is as follows:

greenhouse
├── annotations
│   ├── train.json
│   └── valid.json
└── images
    ├── wf1_000000.tif
    ├── wf1_000001.tif
    ├── wf1_000002.tif
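Before training, it is worth verifying that every file_name referenced in the json actually exists under images/, and spotting images that were never labeled. A small self-contained checker, assuming the layout above:

```python
import json
import os

def check_coco_images(ann_path, img_dir):
    """Return (missing, unlabeled): files referenced in the json but absent
    on disk, and files on disk never referenced in the json."""
    with open(ann_path) as f:
        referenced = {img["file_name"] for img in json.load(f)["images"]}
    on_disk = set(os.listdir(img_dir))
    return sorted(referenced - on_disk), sorted(on_disk - referenced)
```

Run it on annotations/train.json and the images folder before launching training; a non-empty "missing" list will otherwise surface later as file-not-found errors mid-epoch.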

PaddleDetection object detection dataset

After organizing the data as above, create a new config file configs/datasets/greenhouse_det.yml with the content below. It specifies the dataset root (dataset_dir), the image folder (image_dir), and the annotation file (anno_path). Keep data_fields at its default.

metric: COCO
num_classes: 1

TrainDataset:
  !COCODataSet
    image_dir: images
    anno_path: annotations/train.json
    dataset_dir: dataset/greenhouse
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

EvalDataset:
  !COCODataSet
    image_dir: images
    anno_path: annotations/valid.json
    dataset_dir: dataset/greenhouse

TestDataset:
  !ImageFolder
    anno_path: annotations/valid.json
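num_classes in the yml must match the number of entries in the categories list of the annotation file; a mismatch is a common source of shape errors at the head of the network. A quick way to read it off the json (plain Python, no PaddleDetection dependency):

```python
import json

def num_classes_from_coco(ann_path):
    """Count the categories declared in a COCO annotation file."""
    with open(ann_path) as f:
        return len(json.load(f)["categories"])
```

For the greenhouse dataset above this should return 1, matching num_classes: 1 in the yml.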

mmdetection object detection dataset

The dataset is organized the same way; mmdetection supports COCO-format annotations directly. One folder holds the images, another holds the JSON annotations. Very simple.

What matters is configuring the data and paths in the config file. Quite a few fields need changing, which is not very friendly; if training fails to start, the config file is almost certainly the problem. Using the m version of YOLOX as an example, the spots that need attention are marked with △ in the comments to help you avoid pitfalls.

_base_ = ['../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py']
load_from = "configs/yolox/yolox_m.pth"
# model settings
model = dict(
    type='YOLOX',
    backbone=dict(type='CSPDarknet', deepen_factor=0.67, widen_factor=0.75),
    neck=dict(
        type='YOLOXPAFPN',
        in_channels=[192, 384, 768], out_channels=192, num_csp_blocks=2),
    bbox_head=dict(type='YOLOXHead', num_classes=1, in_channels=192, feat_channels=192),
    train_cfg=dict(assigner=dict(type='SimOTAAssigner', center_radius=2.5)),
    # In order to align the source code, the threshold of the val phase is
    # 0.01, and the threshold of the test phase is 0.001.
    test_cfg=dict(score_thr=0.01, nms=dict(type='nms', iou_threshold=0.65)))

# dataset settings 
# △ 1 dataset root path
data_root = '/home/wang/mycode/greenhouse/'
dataset_type = 'CocoDataset'

img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)

img_scale = (768, 768)

train_pipeline = [
    # dict(type='Mosaic', img_scale=img_scale, pad_val=114.0),
    # dict(
    #     type='MixUp',
    #     img_scale=img_scale,
    #     ratio_range=(0.8, 1.6),
    #     pad_val=114.0),
    dict(
        type='PhotoMetricDistortion',
        brightness_delta=32,
        contrast_range=(0.5, 1.5),
        saturation_range=(0.5, 1.5),
        hue_delta=18),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Resize', keep_ratio=True),
    dict(type='Pad', pad_to_square=True, pad_val=114.0),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
# △ 2 class names, used when plotting results
classes = ("factory",)
train_dataset = dict(
    type='MultiImageMixDataset',
    dataset=dict(
        type=dataset_type,
        # △ 3 class configuration
        classes=classes,
        # △ 4 annotation file path
        ann_file=data_root + 'annotations/train.json',
        # △ 5 image path prefix; joined with file_name from the json to locate each image
        img_prefix=data_root + 'images/',
        pipeline=[
            dict(type='LoadImageFromFile', to_float32=True),
            dict(type='LoadAnnotations', with_bbox=True)
        ],
        filter_empty_gt=False,
    ),
    pipeline=train_pipeline,
    dynamic_scale=img_scale)

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=img_scale,
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Pad', size=img_scale, pad_val=114.0),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img'])
        ])
]

data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=train_dataset,
    # △ 6 class configuration
    classes=classes,
    val=dict(
        # △ 7 class configuration
        classes=classes,
        type=dataset_type,
        # △ 8 careful here
        ann_file=data_root + 'annotations/valid.json',
        # △ 9 careful here
        img_prefix=data_root + 'images/',
        pipeline=test_pipeline),
    test=dict(
        # △ 10 class configuration
        classes=classes,
        type=dataset_type,
        # △ 11 careful here
        ann_file=data_root + 'annotations/valid.json',
        # △ 12 careful here
        img_prefix=data_root + 'images/',
        pipeline=test_pipeline))

# optimizer
# default 8 gpu
optimizer = dict(
    type='SGD',
    lr=0.002,
    momentum=0.9,
    weight_decay=5e-4,
    nesterov=True,
    paramwise_cfg=dict(norm_decay_mult=0., bias_decay_mult=0.))
optimizer_config = dict(grad_clip=None)

# learning policy
lr_config = dict(
    _delete_=True,
    policy='YOLOX',
    warmup='exp',
    by_epoch=False,
    warmup_by_epoch=True,
    warmup_ratio=1,
    warmup_iters=1,  # 1 epoch (warmup_by_epoch=True, so this counts epochs)
    num_last_epochs=15,
    min_lr_ratio=0.01)
runner = dict(type='EpochBasedRunner', max_epochs=30)

resume_from = None
interval = 5

custom_hooks = [
    dict(type='YOLOXModeSwitchHook', num_last_epochs=15, priority=48),
    dict(
        type='SyncRandomSizeHook',
        ratio_range=(14, 26),
        img_scale=img_scale,
        interval=interval,
        priority=48),
    dict(
        type='SyncNormHook',
        num_last_epochs=15,
        interval=interval,
        priority=48),
    dict(type='ExpMomentumEMAHook', resume_from=resume_from, priority=49)
]
checkpoint_config = dict(interval=interval)
evaluation = dict(interval=interval, metric='bbox')
log_config = dict(interval=50)
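One pitfall worth understanding: mmdetection's CocoDataset maps the (possibly non-contiguous) category ids in the json to contiguous training labels 0..num_classes-1, keeping only the categories named in the classes tuple. A rough sketch of that mapping, for illustration only (not mmdetection's actual code):

```python
def build_cat2label(categories, classes):
    """Map COCO category ids to contiguous training labels, keeping only
    the categories whose names appear in `classes`."""
    wanted = {c["id"] for c in categories if c["name"] in classes}
    return {cat_id: label for label, cat_id in enumerate(sorted(wanted))}
```

This is why a class name misspelled in the classes tuple silently drops all of that class's annotations rather than raising an error.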

Detectron2 custom object detection dataset

Registering COCO-format detection data is fairly simple. Taking greenhouse detection as the example, call register_coco_instances directly, passing the dataset name, the json path, and the image folder path. For a fully custom format, refer to the official docs: Use Custom Datasets — detectron2 0.5 documentation.

from detectron2.data.datasets import register_coco_instances
from detectron2.data import MetadataCatalog, DatasetCatalog

dataset_dir = "/home/wang/detection/dataset/"

train_anno = dataset_dir + "annotations/train.json"
register_coco_instances("greenhouse_train", {}, train_anno, dataset_dir+"images")

val_anno = dataset_dir + "annotations/val.json"
register_coco_instances("greenhouse_val", {}, val_anno, dataset_dir+"images")
