Training Your Own Dataset with Detectron2

How to use your own COCO-format dataset

I previously read https://blog.csdn.net/weixin_43823854/article/details/108980188, but that write-up is a bit long-winded; the per-class colors it sets up can be ignored.

  1. In data/datasets/builtin.py, in the call to register_coco_instances, stop passing the metadata (use an empty dict instead):
def register_all_coco(root):
    for dataset_name, splits_per_dataset in _PREDEFINED_SPLITS_COCO.items():
        for key, (image_root, json_file) in splits_per_dataset.items():
            # Assume pre-defined datasets live in `./datasets`.
            register_coco_instances(
                key,
                #  _get_builtin_metadata(dataset_name),
                {},
                os.path.join(root, json_file) if "://" not in json_file else json_file,
                os.path.join(root, image_root),
            )

  2. In detectron2/data/datasets/builtin.py, add your dataset name, image directory, and json path to the _PREDEFINED_SPLITS_COCO dict. Note that the train and val splits are defined separately, e.g. for a poker dataset:
_PREDEFINED_SPLITS_COCO["coco"] = {
    "coco_2014_train": ("coco/train2014", "coco/annotations/instances_train2014.json"),
    "coco_2014_val": ("coco/val2014", "coco/annotations/instances_val2014.json"),
    "coco_2014_minival": ("coco/val2014", "coco/annotations/instances_minival2014.json"),
    "coco_2014_minival_100": ("coco/val2014", "coco/annotations/instances_minival2014_100.json"),
    "coco_2014_valminusminival": (
        "coco/val2014",
        "coco/annotations/instances_valminusminival2014.json",
    ),
    "coco_2017_train": ("coco/train2017", "coco/annotations/instances_train2017.json"),
    "coco_2017_val": ("coco/val2017", "coco/annotations/instances_val2017.json"),
    "coco_2017_test": ("coco/test2017", "coco/annotations/image_info_test2017.json"),
    "coco_2017_test-dev": ("coco/test2017", "coco/annotations/image_info_test-dev2017.json"),
    "coco_2017_val_100": ("coco/val2017", "coco/annotations/instances_val2017_100.json"),
    "poker_train": ("poker/images", "poker/instances_train2017.json"),
    "poker_val": ("poker/images", "poker/instances_val2017.json"),
}

That's it; these names can now be referenced in a yaml config.
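Before writing the yaml, a quick sanity check that the registration works (a sketch, assuming the poker splits defined above and the default datasets root):

from detectron2.data import DatasetCatalog, MetadataCatalog

dataset_dicts = DatasetCatalog.get("poker_train")        # triggers load_coco_json for this split
print(len(dataset_dicts), "images")
print(MetadataCatalog.get("poker_train").thing_classes)  # filled in from the json's categories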

A simple Faster R-CNN config

_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "weights/faster_rcnn_R_50_FPN_3x.pkl"
  # WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 50
  ROI_HEADS:
    NUM_CLASSES: 6
DATASETS:
  TRAIN: ("poker_train",)
  TEST: ("poker_val",)
SOLVER:
  IMS_PER_BATCH: 4 
  BASE_LR: 0.005
  STEPS: (2000, 3000)
  MAX_ITER: 3500 
DATALOADER:
  NUM_WORKERS: 2
TEST:
  EVAL_PERIOD: 500

ID mapping

In detectron2/evaluation/coco_evaluation.py, the _eval_predictions function first checks that each predicted category id lies within 0 to 79 (the contiguous range for COCO), then applies a reverse mapping before saving the results; the thing_dataset_id_to_contiguous_id it uses is generated by the load_coco_json function.
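Roughly what that reverse mapping does (a sketch, not the exact source; here metadata and coco_results stand in for the evaluator's self._metadata and its accumulated predictions):

# sketch of the contiguous-id -> dataset-id re-mapping in _eval_predictions
dataset_id_to_contiguous_id = metadata.thing_dataset_id_to_contiguous_id
num_classes = len(dataset_id_to_contiguous_id)
reverse_id_mapping = {v: k for k, v in dataset_id_to_contiguous_id.items()}
for result in coco_results:
    assert result["category_id"] < num_classes                        # the 0..79 check mentioned above
    result["category_id"] = reverse_id_mapping[result["category_id"]]  # back to the json's original ids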

A good example

https://www.dlology.com/blog/how-to-train-detectron2-with-custom-coco-datasets/
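Tutorials like that one typically register the dataset directly in the training script instead of editing builtin.py; a minimal sketch of that approach (the name and paths below are placeholders for your own data):

from detectron2.data.datasets import register_coco_instances

# placeholder name/paths; point these at your own COCO-format json and image folder
register_coco_instances("poker_train", {}, "datasets/poker/instances_train2017.json", "datasets/poker/images")
register_coco_instances("poker_val", {}, "datasets/poker/instances_val2017.json", "datasets/poker/images")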

info

How are --config-file and --num-gpus parsed?

detectron2/engine/defaults.py
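The usual pattern is default_argument_parser() for --config-file / --num-gpus / --eval-only / opts, plus launch() for multi-GPU. A sketch of how tools/train_net.py wires them together (details vary between versions):

from detectron2.engine import default_argument_parser, launch

def main(args):
    # build the cfg from args.config_file and args.opts here, then train or evaluate
    ...

if __name__ == "__main__":
    args = default_argument_parser().parse_args()
    launch(
        main,
        args.num_gpus,                  # worker processes per machine
        num_machines=args.num_machines,
        machine_rank=args.machine_rank,
        dist_url=args.dist_url,
        args=(args,),
    )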

Where are the COCO paths defined?

detectron2/data/datasets/builtin.py, around line 51

Where is the number of COCO classes defined?

detectron2/config/defaults.py
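The relevant defaults look roughly like this (from memory of detectron2's defaults.py, so check against your version):

# detectron2/config/defaults.py (excerpt)
_C.MODEL.ROI_HEADS.NUM_CLASSES = 80     # box heads (Faster/Mask R-CNN)
_C.MODEL.RETINANET.NUM_CLASSES = 80     # one-stage RetinaNet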

Where should the datasets live?

datasets/README.md says you can export DETECTRON2_DATASETS=/path/to/datasets; the default is ./datasets.
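builtin.py reads that environment variable when it registers the built-in datasets, roughly:

# near the bottom of detectron2/data/datasets/builtin.py (approximate, version-dependent)
_root = os.path.expanduser(os.getenv("DETECTRON2_DATASETS", "datasets"))
register_all_coco(_root)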

How should the yaml be written?

Don't force everything into a single yaml; either inherit via _BASE_ or override on top of an existing config, otherwise the weights loading can go wrong.
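For reference, this is the order in which a config gets assembled in code: the hard-coded defaults first, then the yaml (merge_from_file resolves its _BASE_ chain before applying its own keys), then any key/value overrides. The paths below just reuse the ones from this post:

from detectron2.config import get_cfg

cfg = get_cfg()  # starts from detectron2/config/defaults.py
cfg.merge_from_file("configs/PokerDet/faster_rcnn_R_50_FPN_1x.yaml")             # follows _BASE_ first
cfg.merge_from_list(["MODEL.WEIGHTS", "weights/faster_rcnn_R_50_FPN_3x.pkl"])    # CLI-style override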

usage

multi gpu inference

python train_net.py --num-gpus 4 --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml  --eval-only MODEL.WEIGHTS /path/to/checkpoint_file 

multi gpu train

python tools/train_net.py --num-gpus 2 --config-file configs/PokerDet/faster_rcnn_R_50_FPN_1x.yaml

troubleshooting

RuntimeError: nvrtc: error: invalid value for --gpu-architecture (-arch)

pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
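That error usually means the installed PyTorch wheel was built for a CUDA version that does not know your GPU architecture. After reinstalling, a quick check:

import torch

# confirm the wheel's CUDA build and that the GPU is visible
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
print(torch.cuda.get_device_capability(0))  # the compute capability nvrtc complained about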

The sizes under TEST

TEST:
  EVAL_PERIOD: 0 
  AUG:
    ENABLED: True
    FLIP: false
    MIN_SIZES: (1218,) # (608,)
    MAX_SIZE: 1800 # 900

Those should just be the short edge (MIN_SIZES) and the long-edge cap (MAX_SIZE). I was curious what the image size is when AUG is disabled; it follows the settings under INPUT:

# Size of the smallest side of the image during testing
_C.INPUT.MIN_SIZE_TEST = 800
# Maximum size of the side of the image during testing
_C.INPUT.MAX_SIZE_TEST = 1333

The sizes under INPUT

INPUT:
  FORMAT: RGB
  CUSTOM_AUG: EfficientDetResizeCrop
  TRAIN_SIZE: 1280 # 640
  TEST_SIZE: 1280 # 640
  MIN_SIZE_TEST: 1216 #608
  MAX_SIZE_TEST: 1800 #900

MIN_SIZE_TEST is what the default ResizeShortestEdge uses, and ResizeShortestEdge is what you get in most cases.

projects/CenterNet2/centernet/data/custom_build_augmentation.py

def build_custom_augmentation(cfg, is_train):
    """
    Create a list of default :class:`Augmentation` from config.
    Now it includes resizing and flipping.

    Returns:
        list[Augmentation]
    """
    if cfg.INPUT.CUSTOM_AUG == 'ResizeShortestEdge':
        if is_train:
            min_size = cfg.INPUT.MIN_SIZE_TRAIN
            max_size = cfg.INPUT.MAX_SIZE_TRAIN
            sample_style = cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING
        else:
            min_size = cfg.INPUT.MIN_SIZE_TEST
            max_size = cfg.INPUT.MAX_SIZE_TEST
            sample_style = "choice"
        augmentation = [T.ResizeShortestEdge(min_size, max_size, sample_style)]
    elif cfg.INPUT.CUSTOM_AUG == 'EfficientDetResizeCrop':
        if is_train:
            scale = cfg.INPUT.SCALE_RANGE
            size = cfg.INPUT.TRAIN_SIZE
        else:
            scale = (1, 1)
            size = cfg.INPUT.TEST_SIZE
        augmentation = [EfficientDetResizeCrop(size, scale)]
    else:
        assert 0, cfg.INPUT.CUSTOM_AUG

    if is_train:
        augmentation.append(T.RandomFlip())
    return augmentation

In detectron2 itself, detectron2/data/detection_utils.py has a build_augmentation that likewise builds augmentation = [T.ResizeShortestEdge(min_size, max_size, sample_style)].

In projects/CenterNet2/train_net.py, do_train builds the mapper as:

    mapper = DatasetMapper(cfg, True) if cfg.INPUT.CUSTOM_AUG == '' else \
        DatasetMapper(cfg, True, augmentations=build_custom_augmentation(cfg, True))

In its main function:

        if cfg.TEST.AUG.ENABLED:
            logger.info("Running inference with test-time augmentation ...")
            model = GeneralizedRCNNWithTTA(cfg, model, batch_size=1)

When TEST.AUG.ENABLED is False, it goes straight to do_test -> EfficientDetResizeCrop -> cfg.INPUT.TRAIN_SIZE and cfg.INPUT.TEST_SIZE.

In stock detectron2, the aug in DefaultPredictor uses cfg.INPUT.MIN_SIZE_TEST, and so does build_augmentation in do_test.
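For comparison, the test-time resize in DefaultPredictor looks roughly like this (detectron2/engine/defaults.py, approximate):

# inside DefaultPredictor.__init__
self.aug = T.ResizeShortestEdge(
    [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST
)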
