A summary of problems encountered when training SOT datasets with mmtracking

I recently wanted to train a single-object-tracking (SOT) dataset with mmtracking, and ran into various problems both while setting up the environment and during training.

First, the environment. After repeated testing, the following package versions are needed for everything to run properly.

mmtracking version 0.14.0
pip install openmim==0.3.7
mim install mmengine==0.10.2
mim install mmdet==2.28.2
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu117/torch2.0.0/index.html  (the URL depends on your CUDA and torch versions)
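To confirm that the pinned versions actually ended up installed, you can compare the output of `pip freeze` against the pins. The helper below is a minimal sketch (the `check_versions` function and the sample lines are mine, not part of mmtracking):

```python
# Minimal sketch: compare `pip freeze` output against the pinned versions.
PINNED = {
    "mmtrack": "0.14.0",
    "openmim": "0.3.7",
    "mmengine": "0.10.2",
    "mmdet": "2.28.2",
}

def check_versions(freeze_lines, pinned=PINNED):
    """Return {package: (pinned, installed)} for every pin that is not met."""
    installed = {}
    for line in freeze_lines:
        if "==" in line:
            name, ver = line.strip().split("==", 1)
            installed[name.lower()] = ver
    return {
        name: (pinned_ver, installed.get(name))
        for name, pinned_ver in pinned.items()
        if installed.get(name) != pinned_ver
    }

# Example: mmdet is at the wrong version; mmcv-full is present but not pinned.
lines = [
    "mmtrack==0.14.0",
    "openmim==0.3.7",
    "mmengine==0.10.2",
    "mmdet==2.26.0",
    "mmcv-full==1.7.1",
]
print(check_versions(lines))  # {'mmdet': ('2.28.2', '2.26.0')}
```

Run `pip freeze` and feed its lines into `check_versions`; an empty dict means all four pins are satisfied.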

I prepared two datasets: one is UAV123, the other is OTB100.

With the UAV123 dataset, the SiameseRPN algorithm fails with: ValueError: cannot convert float NaN to integer. The cause is unknown.

For how to download the datasets, see: Dataset Preparation — MMTracking 0.14.0 documentation

However, UAV123 works fine with the STARK algorithm, so it is most likely a dataset issue.

In the end I went with the OTB100 dataset. The official SiameseRPN and STARK config files combine several open-source datasets, so if you only have OTB100, modify the data section of the config to:

data = dict(
    samples_per_gpu=28,
    workers_per_gpu=4,
    persistent_workers=True,
    samples_per_epoch=60000,
    train=dict(
        type='OTB100Dataset',
        ann_file=data_root +
        'otb100/annotations/otb100_infos.txt',
        img_prefix=data_root + 'otb100/data',
        pipeline=train_pipeline,
        split='train',
        test_mode=False),
    val=dict(
        type='OTB100Dataset',
        ann_file=data_root + 'otb100/annotations/otb100_infos.txt',
        img_prefix=data_root + 'otb100/data',
        pipeline=test_pipeline,
        split='test',
        test_mode=True),
    test=dict(
        type='OTB100Dataset',
        ann_file=data_root + 'otb100/annotations/otb100_infos.txt',
        img_prefix=data_root + 'otb100/data',
        pipeline=test_pipeline,
        split='test',
        test_mode=True))
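For comparison, the official configs mix several training datasets via a concat-style dataset. The fragment below is a sketch from memory; the RandomSampleConcatDataset type, the dataset names, paths, and sampling weights are illustrative and should be checked against the configs shipped with mmtracking 0.14.0:

```python
# Sketch: how the official configs combine several training datasets.
# RandomSampleConcatDataset draws samples from each entry of dataset_cfgs
# according to dataset_sampling_weights. Names/paths below are illustrative.
data_root = 'data/'
train_pipeline = []  # placeholder; use the real train_pipeline from the config

train = dict(
    type='RandomSampleConcatDataset',
    dataset_sampling_weights=[1, 1],
    dataset_cfgs=[
        dict(
            type='GOT10kDataset',
            ann_file=data_root + 'got10k/annotations/got10k_train_infos.txt',
            img_prefix=data_root + 'got10k',
            pipeline=train_pipeline,
            split='train',
            test_mode=False),
        dict(
            type='LaSOTDataset',
            ann_file=data_root + 'lasot/annotations/lasot_train_infos.txt',
            img_prefix=data_root + 'lasot/LaSOTBenchmark',
            pipeline=train_pipeline,
            split='train',
            test_mode=False),
    ])
```

With only OTB100 on disk, the single-dataset train=dict(...) form above is all that is needed.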

Training then fails with: AssertionError: 354 is not equal to 354-6+1. I looked at mmtrack/datasets/otb_dataset.py and found that it special-cases the Board and Tiger1 sequences. If you look closely at data/otb100/annotations/otb100_infos.txt, you will find this entry:

Tiger1/img,Tiger1/groundtruth_rect.txt,6,354

Every other sequence starts from frame 1; only this one starts from frame 6, so end − start + 1 (349) no longer matches the 354 annotated frames, hence the assertion. The simplest fix is to delete these two lines from data/otb100/annotations/otb100_infos.txt:

Tiger1/img,Tiger1/groundtruth_rect.txt,6,354
Board/img,Board/groundtruth_rect.txt,1,698
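Instead of editing the file by hand, the two entries can be filtered out with a few lines of Python. This is a small sketch (the `drop_entries` helper is mine; the annotation path is the one used in the configs above):

```python
def drop_entries(lines, skip=("Board/img", "Tiger1/img")):
    """Keep only the annotation lines that do not start with a skipped sequence."""
    return [ln for ln in lines if not ln.startswith(skip)]

# Demo on a few lines in the otb100_infos.txt format:
# <img dir>,<groundtruth file>,<start frame>,<end frame>
sample = [
    "Basketball/img,Basketball/groundtruth_rect.txt,1,725",
    "Board/img,Board/groundtruth_rect.txt,1,698",
    "Tiger1/img,Tiger1/groundtruth_rect.txt,6,354",
]
print(drop_entries(sample))  # ['Basketball/img,Basketball/groundtruth_rect.txt,1,725']

# To apply in place:
# from pathlib import Path
# p = Path("data/otb100/annotations/otb100_infos.txt")
# p.write_text("\n".join(drop_entries(p.read_text().splitlines())) + "\n")
```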

Start training again, and both algorithms run without errors!

Appendix: the complete siamese_rpn_r50_20e.py config file:

cudnn_benchmark = False
deterministic = True
seed = 1
find_unused_parameters = True
crop_size = 511
exemplar_size = 127
search_size = 255

# model settings
model = dict(
    type='SiamRPN',
    backbone=dict(
        type='SOTResNet',
        depth=50,
        out_indices=(1, 2, 3),
        frozen_stages=4,
        strides=(1, 2, 1, 1),
        dilations=(1, 1, 2, 4),
        norm_eval=True,
        init_cfg=dict(
            type='Pretrained',
            checkpoint=  # noqa: E251
            'https://download.openmmlab.com/mmtracking/pretrained_weights/sot_resnet50.model'  # noqa: E501
        )),
    neck=dict(
        type='ChannelMapper',
        in_channels=[512, 1024, 2048],
        out_channels=256,
        kernel_size=1,
        norm_cfg=dict(type='BN'),
        act_cfg=None),
    head=dict(
        type='SiameseRPNHead',
        anchor_generator=dict(
            type='SiameseRPNAnchorGenerator',
            strides=[8],
            ratios=[0.33, 0.5, 1, 2, 3],
            scales=[8]),
        in_channels=[256, 256, 256],
        weighted_sum=True,
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[0., 0., 0., 0.],
            target_stds=[1., 1., 1., 1.]),
        loss_cls=dict(
            type='CrossEntropyLoss', reduction='sum', loss_weight=1.0),
        loss_bbox=dict(type='L1Loss', reduction='sum', loss_weight=1.2)),
    train_cfg=dict(
        rpn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.6,
                neg_iou_thr=0.3,
                min_pos_iou=0.6,
                match_low_quality=False),
            sampler=dict(
                type='RandomSampler',
                num=64,
                pos_fraction=0.25,
                add_gt_as_proposals=False),
            num_neg=16,
            exemplar_size=exemplar_size,
            search_size=search_size)),
    test_cfg=dict(
        exemplar_size=exemplar_size,
        search_size=search_size,
        context_amount=0.5,
        center_size=7,
        rpn=dict(penalty_k=0.05, window_influence=0.42, lr=0.38)))

data_root = 'data/'
train_pipeline = [
    dict(
        type='PairSampling',
        frame_range=100,
        pos_prob=0.8,
        filter_template_img=False),
    dict(type='LoadMultiImagesFromFile', to_float32=True),
    dict(type='SeqLoadAnnotations', with_bbox=True, with_label=False),
    dict(
        type='SeqCropLikeSiamFC',
        context_amount=0.5,
        exemplar_size=exemplar_size,
        crop_size=crop_size),
    dict(
        type='SeqShiftScaleAug',
        target_size=[exemplar_size, search_size],
        shift=[4, 64],
        scale=[0.05, 0.18]),
    dict(type='SeqColorAug', prob=[1.0, 1.0]),
    dict(type='SeqBlurAug', prob=[0.0, 0.2]),
    dict(type='VideoCollect', keys=['img', 'gt_bboxes', 'is_positive_pairs']),
    dict(type='ConcatSameTypeFrames'),
    dict(type='SeqDefaultFormatBundle', ref_prefix='search')
]
test_pipeline = [
    dict(type='LoadImageFromFile', to_float32=True),
    dict(type='LoadAnnotations', with_bbox=True, with_label=False),
    dict(
        type='MultiScaleFlipAug',
        scale_factor=1,
        flip=False,
        transforms=[
            dict(type='VideoCollect', keys=['img', 'gt_bboxes']),
            dict(type='ImageToTensor', keys=['img'])
        ])
]
# dataset settings
data = dict(
    samples_per_gpu=28,
    workers_per_gpu=4,
    persistent_workers=True,
    samples_per_epoch=60000,
    train=dict(
        type='OTB100Dataset',
        ann_file=data_root +
        'otb100/annotations/otb100_infos.txt',
        img_prefix=data_root + 'otb100/data',
        pipeline=train_pipeline,
        split='train',
        test_mode=False),
    val=dict(
        type='OTB100Dataset',
        ann_file=data_root + 'otb100/annotations/otb100_infos.txt',
        img_prefix=data_root + 'otb100/data',
        pipeline=test_pipeline,
        split='test',
        test_mode=True),
    test=dict(
        type='OTB100Dataset',
        ann_file=data_root + 'otb100/annotations/otb100_infos.txt',
        img_prefix=data_root + 'otb100/data',
        pipeline=test_pipeline,
        split='test',
        test_mode=True))
# optimizer
optimizer = dict(
    type='SGD',
    lr=0.005,
    momentum=0.9,
    weight_decay=0.0001,
    paramwise_cfg=dict(
        custom_keys=dict(backbone=dict(lr_mult=0.1, decay_mult=1.0))))
optimizer_config = dict(
    type='SiameseRPNOptimizerHook',
    backbone_start_train_epoch=10,
    backbone_train_layers=['layer2', 'layer3', 'layer4'],
    grad_clip=dict(max_norm=10.0, norm_type=2))
# learning policy
lr_config = dict(
    policy='SiameseRPN',
    lr_configs=[
        dict(type='step', start_lr_factor=0.2, end_lr_factor=1.0, end_epoch=5),
        dict(type='log', start_lr_factor=1.0, end_lr_factor=0.1, end_epoch=20),
    ])
# checkpoint saving
checkpoint_config = dict(interval=1)
evaluation = dict(
    metric=['track'],
    interval=1,
    start=10,
    rule='greater',
    save_best='success')
# yapf:disable
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        # dict(type='TensorboardLoggerHook')
    ])
# yapf:enable
# runtime settings
total_epochs = 20
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]

The stark_st1_r50_500e.py config:

cudnn_benchmark = False
deterministic = True
seed = 1

# model setting
model = dict(
    type='Stark',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=3,
        strides=(1, 2, 2),
        dilations=[1, 1, 1],
        out_indices=[2],
        frozen_stages=1,
        norm_eval=True,
        norm_cfg=dict(type='BN', requires_grad=False),
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
    neck=dict(
        type='ChannelMapper',
        in_channels=[1024],
        out_channels=256,
        kernel_size=1,
        act_cfg=None),
    head=dict(
        type='StarkHead',
        num_querys=1,
        transformer=dict(
            type='StarkTransformer',
            encoder=dict(
                type='DetrTransformerEncoder',
                num_layers=6,
                transformerlayers=dict(
                    type='BaseTransformerLayer',
                    attn_cfgs=[
                        dict(
                            type='MultiheadAttention',
                            embed_dims=256,
                            num_heads=8,
                            attn_drop=0.1,
                            dropout_layer=dict(type='Dropout', drop_prob=0.1))
                    ],
                    ffn_cfgs=dict(
                        feedforward_channels=2048,
                        embed_dims=256,
                        ffn_drop=0.1),
                    operation_order=('self_attn', 'norm', 'ffn', 'norm'))),
            decoder=dict(
                type='DetrTransformerDecoder',
                return_intermediate=False,
                num_layers=6,
                transformerlayers=dict(
                    type='BaseTransformerLayer',
                    attn_cfgs=dict(
                        type='MultiheadAttention',
                        embed_dims=256,
                        num_heads=8,
                        attn_drop=0.1,
                        dropout_layer=dict(type='Dropout', drop_prob=0.1)),
                    ffn_cfgs=dict(
                        feedforward_channels=2048,
                        embed_dims=256,
                        ffn_drop=0.1),
                    operation_order=('self_attn', 'norm', 'cross_attn', 'norm',
                                     'ffn', 'norm'))),
        ),
        positional_encoding=dict(
            type='SinePositionalEncoding', num_feats=128, normalize=True),
        bbox_head=dict(
            type='CornerPredictorHead',
            inplanes=256,
            channel=256,
            feat_size=20,
            stride=16),
        loss_bbox=dict(type='L1Loss', loss_weight=5.0),
        loss_iou=dict(type='GIoULoss', loss_weight=2.0)),
    test_cfg=dict(
        search_factor=5.0,
        search_size=320,
        template_factor=2.0,
        template_size=128,
        update_intervals=[200]))

data_root = 'data/'
train_pipeline = [
    dict(
        type='TridentSampling',
        num_search_frames=1,
        num_template_frames=2,
        max_frame_range=[200],
        cls_pos_prob=0.5,
        train_cls_head=False),
    dict(type='LoadMultiImagesFromFile', to_float32=True),
    dict(type='SeqLoadAnnotations', with_bbox=True, with_label=False),
    dict(type='SeqGrayAug', prob=0.05),
    dict(
        type='SeqRandomFlip',
        share_params=True,
        flip_ratio=0.5,
        direction='horizontal'),
    dict(
        type='SeqBboxJitter',
        center_jitter_factor=[0, 0, 4.5],
        scale_jitter_factor=[0, 0, 0.5],
        crop_size_factor=[2, 2, 5]),
    dict(
        type='SeqCropLikeStark',
        crop_size_factor=[2, 2, 5],
        output_size=[128, 128, 320]),
    dict(type='SeqBrightnessAug', jitter_range=0.2),
    dict(
        type='SeqRandomFlip',
        share_params=False,
        flip_ratio=0.5,
        direction='horizontal'),
    dict(
        type='SeqNormalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='CheckPadMaskValidity', stride=16),
    dict(
        type='VideoCollect',
        keys=['img', 'gt_bboxes', 'padding_mask'],
        meta_keys=('valid')),
    dict(type='ConcatSameTypeFrames', num_key_frames=2),
    dict(type='SeqDefaultFormatBundle', ref_prefix='search')
]

img_norm_cfg = dict(mean=[0, 0, 0], std=[1, 1, 1], to_rgb=True)
test_pipeline = [
    dict(type='LoadImageFromFile', to_float32=True),
    dict(type='LoadAnnotations', with_bbox=True, with_label=False),
    dict(
        type='MultiScaleFlipAug',
        scale_factor=1,
        flip=False,
        transforms=[
            dict(type='Normalize', **img_norm_cfg),
            dict(type='VideoCollect', keys=['img', 'gt_bboxes']),
            dict(type='ImageToTensor', keys=['img'])
        ])
]
# dataset settings
data = dict(
    samples_per_gpu=16,
    workers_per_gpu=8,
    persistent_workers=True,
    samples_per_epoch=60000,
    train=dict(
        type='OTB100Dataset',
        ann_file=data_root +
        'otb100/annotations/otb100_infos.txt',
        img_prefix=data_root + 'otb100/data',
        pipeline=train_pipeline,
        split='train',
        test_mode=False),
    val=dict(
        type='OTB100Dataset',
        ann_file=data_root + 'otb100/annotations/otb100_infos.txt',
        img_prefix=data_root + 'otb100/data',
        pipeline=test_pipeline,
        split='test',
        test_mode=True),
    test=dict(
        type='OTB100Dataset',
        ann_file=data_root + 'otb100/annotations/otb100_infos.txt',
        img_prefix=data_root + 'otb100/data',
        pipeline=test_pipeline,
        split='test',
        test_mode=True))

# optimizer
optimizer = dict(
    type='AdamW',
    lr=0.0001,
    weight_decay=0.0001,
    paramwise_cfg=dict(
        custom_keys=dict(backbone=dict(lr_mult=0.1, decay_mult=1.0))))
optimizer_config = dict(grad_clip=dict(max_norm=0.1, norm_type=2))
# learning policy
lr_config = dict(policy='step', step=[400])
# checkpoint saving
checkpoint_config = dict(interval=100)
evaluation = dict(
    metric=['track'],
    interval=100,
    start=501,
    rule='greater',
    save_best='success')
# yapf:disable
log_config = dict(
    interval=1,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='TensorboardLoggerHook')
    ])
# yapf:enable
# runtime settings
total_epochs = 500
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
