Training MMAction2 on raw video frames

Finding the problem

A while ago I was running MMAction2. I started from the config in another blogger's MMAction2 training guide, which trains from extracted video frames, but with that config I could not get the RawframeDataset route to run, so I had to switch to VideoDataset and train on whole videos instead.

That was a hassle, though: I would have had to convert my own image (frame) dataset into videos before feeding it in. So I later went back to work out how to run the rawframe route directly.

Rawframe training now runs for me, so here is a write-up of what I did.

The fix

It turns out the official repo updated the rawframe training setup last year, so the old config no longer works (it fails with an error about incorrect initialization); the specific changes can be found in that update.

Below is a config file that runs rawframe training for me:

_base_ = ['../../_base_/default_runtime.py']

# model settings
model = dict(
    type='Recognizer3D',
    backbone=dict(
        type='TimeSformer',
        pretrained=  # noqa: E251
        'https://download.openmmlab.com/mmaction/recognition/timesformer/vit_base_patch16_224.pth',  # noqa: E501
        num_frames=8,
        img_size=224,
        patch_size=16,
        embed_dims=768,
        in_channels=3,
        dropout_ratio=0.,
        transformer_layers=None,
        attention_type='divided_space_time',
        norm_cfg=dict(type='LN', eps=1e-6)),
    cls_head=dict(
        type='TimeSformerHead',
        num_classes=13,  # set this to the number of classes in your own dataset
        in_channels=768,
        average_clips='prob'),
    data_preprocessor=dict(
        type='ActionDataPreprocessor',
        mean=[127.5, 127.5, 127.5],
        std=[127.5, 127.5, 127.5],
        format_shape='NCTHW')
    )

# dataset settings
dataset_type = 'RawframeDataset'
data_root = './data/rawframes_train/'
data_root_val = './data/rawframes_val/'
ann_file_train = './data/rawframes_train.txt'
ann_file_val = './data/rawframes_val.txt'
ann_file_test = './data/rawframes_val.txt'
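# each line of a rawframe annotation file is: <frame_dir> <total_frames> <label>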

train_pipeline = [
    dict(type='SampleFrames', clip_len=8, frame_interval=32, num_clips=1),
    dict(type='RawFrameDecode'),
    dict(type='RandomRescale', scale_range=(256, 320)),
    dict(type='RandomCrop', size=224),
    dict(type='Flip', flip_ratio=0.5),
    dict(type='FormatShape', input_format='NCTHW'),
    dict(type='PackActionInputs')
]
val_pipeline = [
    dict(
        type='SampleFrames',
        clip_len=8,
        frame_interval=32,
        num_clips=1,
        test_mode=True),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(-1, 256)),
    dict(type='CenterCrop', crop_size=224),
    dict(type='FormatShape', input_format='NCTHW'),
    dict(type='PackActionInputs')
]
test_pipeline = [
    dict(
        type='SampleFrames',
        clip_len=8,
        frame_interval=32,
        num_clips=1,
        test_mode=True),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(-1, 224)),
    dict(type='ThreeCrop', crop_size=224),
    dict(type='FormatShape', input_format='NCTHW'),
    dict(type='PackActionInputs')
]
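# Note: RawFrameDecode loads frames by file-name template; RawframeDataset's
# default filename_tmpl is 'img_{:05}.jpg'. If your extracted frames follow a
# different naming pattern, pass filename_tmpl=... in the dataset dicts below.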
# train dataloader, structured the same way as in the video-based configs
train_dataloader = dict(
    batch_size=8,
    num_workers=8,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        ann_file=ann_file_train,
        data_prefix=dict(img=data_root),
        pipeline=train_pipeline))

# test dataloader: no shuffling, and the dataset runs in test mode
test_dataloader = dict(
    batch_size=8,
    num_workers=8,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        ann_file=ann_file_test,
        data_prefix=dict(img=data_root_val),
        pipeline=test_pipeline,
        test_mode=True))

# val dataloader: no shuffling, and the dataset runs in test mode
val_dataloader = dict(
    batch_size=8,
    num_workers=8,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        ann_file=ann_file_val,
        data_prefix=dict(img=data_root_val),
        pipeline=val_pipeline,
        test_mode=True))

val_evaluator = dict(type='AccMetric')
test_evaluator = val_evaluator

train_cfg = dict(
    type='EpochBasedTrainLoop', max_epochs=15, val_begin=1, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')

# learning policy
optim_wrapper = dict(
    optimizer=dict(
        type='SGD', lr=0.005, momentum=0.9, weight_decay=1e-4, nesterov=True),
    paramwise_cfg=dict(
        custom_keys={
            '.backbone.cls_token': dict(decay_mult=0.0),
            '.backbone.pos_embed': dict(decay_mult=0.0),
            '.backbone.time_embed': dict(decay_mult=0.0)
        }),
    clip_grad=dict(max_norm=40, norm_type=2))

param_scheduler = [
    dict(
        type='MultiStepLR',
        begin=0,
        end=15,
        by_epoch=True,
        milestones=[5, 10],
        gamma=0.1)
]

default_hooks = dict(checkpoint=dict(interval=5))

auto_scale_lr = dict(enable=False, base_batch_size=64)
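
Before launching a full run, it can help to load the config with mmengine and check that the paths and class count resolve to what you expect. A minimal sketch, assuming the file is saved somewhere like configs/recognition/timesformer/my_rawframes_config.py inside the MMAction2 repo (a hypothetical path, chosen so the relative _base_ import resolves):

from mmengine.config import Config

# hypothetical path; keep the file two levels below configs/ so that
# _base_ = ['../../_base_/default_runtime.py'] resolves correctly
cfg = Config.fromfile('configs/recognition/timesformer/my_rawframes_config.py')

# confirm the annotation files, frame directories and class count are right
print(cfg.train_dataloader.dataset.ann_file)
print(cfg.train_dataloader.dataset.data_prefix)
print(cfg.model.cls_head.num_classes)

Training itself is launched the usual way, e.g. python tools/train.py configs/recognition/timesformer/my_rawframes_config.py.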

The main change is that the old single data dict is now split into three separate dataloaders (train/val/test), and the learning-rate schedule, optimizer and so on also have new names and a new way of being configured. The part you need to modify is:

# dataset settings
dataset_type = 'RawframeDataset'
data_root = './data/rawframes_train/'
data_root_val = './data/rawframes_val/'
ann_file_train = './data/rawframes_train.txt'
ann_file_val = './data/rawframes_val.txt'
ann_file_test = './data/rawframes_val.txt'

Replace this part with your own dataset paths. For how to build the rawframe dataset and generate the txt annotation files, see the MMAction2 training guide post I mentioned at the beginning.
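
If you already have the frame folders and only need the annotation txt files, here is a minimal sketch of generating one in the <frame_dir> <total_frames> <label> format that RawframeDataset expects; the directory layout, label mapping and file names below are assumptions for illustration, not something from the original post:

import os

def make_rawframe_annotation(rawframes_root, label_map, out_txt):
    """Write one line per frame directory: <frame_dir> <total_frames> <label>."""
    lines = []
    for video_dir in sorted(os.listdir(rawframes_root)):
        full_dir = os.path.join(rawframes_root, video_dir)
        if not os.path.isdir(full_dir):
            continue
        # count the extracted frames in this video's folder
        num_frames = len([f for f in os.listdir(full_dir) if f.endswith('.jpg')])
        # label_map is your own mapping from a video folder name to its class id
        lines.append(f'{video_dir} {num_frames} {label_map[video_dir]}')
    with open(out_txt, 'w') as f:
        f.write('\n'.join(lines) + '\n')

# hypothetical usage, matching the paths defined in the config above:
# make_rawframe_annotation('./data/rawframes_train', label_map, './data/rawframes_train.txt')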
