Testing a model's FPS with mmdetection 3.1.0

mmdetection 3.1 provides a way to benchmark a model's FPS:

The prefix `python -m torch.distributed.launch --nproc_per_node=1 --master_port=29500` sets up distributed execution; for the details I referred to the following post:

"Problems and solutions for benchmark.py in mmdetection 3.x" (CSDN blog)

I did not use that full command in my tests; dropping everything after `-m` (i.e. the whole `torch.distributed.launch ...` prefix) also works. The main changes are as follows:
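For reference, the two forms of the command look like this. `CONFIG` and `CHECKPOINT` are placeholders for your own config and weight files, and the script path assumes the standard mmdetection repository layout; when using the distributed launcher, `--launcher pytorch` is normally passed as well:

```shell
# Full form: single-process "distributed" launch, as in the linked post
python -m torch.distributed.launch --nproc_per_node=1 --master_port=29500 \
    tools/analysis_tools/benchmark.py CONFIG --checkpoint CHECKPOINT --launcher pytorch

# Simplified form (what I actually used): drop the torch.distributed.launch prefix
python tools/analysis_tools/benchmark.py CONFIG --checkpoint CHECKPOINT
```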

1. Modify the benchmark.py file

Since we are measuring FPS, the relevant part of benchmark.py is the `--task` argument: it accepts `inference`, `dataloader`, and `dataset`, and FPS measurement corresponds to the default `inference` task.

The full benchmark.py is as follows:

# Copyright (c) OpenMMLab. All rights reserved.
import argparse
import os
from mmengine import MMLogger
from mmengine.config import Config, DictAction
from mmengine.dist import init_dist
from mmengine.registry import init_default_scope
from mmengine.utils import mkdir_or_exist
from mmdet.utils.benchmark import (DataLoaderBenchmark, DatasetBenchmark,
                                   InferenceBenchmark)


def parse_args():
    parser = argparse.ArgumentParser(description='MMDet benchmark')
    parser.add_argument('config', help='test config file path')
    parser.add_argument('--checkpoint', help='checkpoint file')
    parser.add_argument(
        '--task',
        choices=['inference', 'dataloader', 'dataset'],
        default='inference',
        help='Which task do you want to go to benchmark')
    parser.add_argument(
        '--repeat-num',
        type=int,
        default=1,
        help='number of repeat times of measurement for averaging the results')
    parser.add_argument(
        '--max-iter', type=int, default=2000, help='num of max iter')
    parser.add_argument(
        '--log-interval', type=int, default=50, help='interval of logging')
    parser.add_argument(
        '--num-warmup', type=int, default=5, help='Number of warmup')
    parser.add_argument(
        '--fuse-conv-bn',
        action='store_true',
        help='Whether to fuse conv and bn, this will slightly increase '
        'the inference speed')
    parser.add_argument(
        '--dataset-type',
        choices=['train', 'val', 'test'],
        default='test',
        help='Benchmark dataset type. only supports train, val and test')
    parser.add_argument(
        '--work-dir',
        help='the directory to save the file containing benchmark metrics')
    parser.add_argument(
        '--cfg-options',
        nargs='+',
        action=DictAction,
        help='override some settings in the used config, the key-value pair '
        'in xxx=yyy format will be merged into config file. If the value to '
        'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
        'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
        'Note that the quotation marks are necessary and that no white space '
        'is allowed.')
    parser.add_argument(
        '--launcher',
        choices=['none', 'pytorch', 'slurm', 'mpi'],
        default='none',
        help='job launcher')
    parser.add_argument('--local_rank', type=int, default=0)
    args = parser.parse_args()

    if 'LOCAL_RANK' not in os.environ:
        os.environ['LOCAL_RANK'] = str(args.local_rank)
    return args


def inference_benchmark(args, cfg, distributed, logger):
    benchmark = InferenceBenchmark(
        cfg,
        args.checkpoint,
        distributed,
        args.fuse_conv_bn,
        args.max_iter,
        args.log_interval,
        args.num_warmup,
        logger=logger)
    return benchmark


def dataloader_benchmark(args, cfg, distributed, logger):
    benchmark = DataLoaderBenchmark(
        cfg,
        distributed,
        args.dataset_type,
        args.max_iter,
        args.log_interval,
        args.num_warmup,
        logger=logger)
    return benchmark


def dataset_benchmark(args, cfg, distributed, logger):
    benchmark = DatasetBenchmark(
        cfg,
        args.dataset_type,
        args.max_iter,
        args.log_interval,
        args.num_warmup,
        logger=logger)
    return benchmark


def main():
    args = parse_args()
    cfg = Config.fromfile(args.config)
    if args.cfg_options is not None:
        cfg.merge_from_dict(args.cfg_options)

    init_default_scope(cfg.get('default_scope', 'mmdet'))

    distributed = False
    if args.launcher != 'none':
        init_dist(args.launcher, **cfg.get('env_cfg', {}).get('dist_cfg', {}))
        distributed = True

    log_file = None
    if args.work_dir:
        log_file = os.path.join(args.work_dir, 'benchmark.log')
        mkdir_or_exist(args.work_dir)

    logger = MMLogger.get_instance(
        'mmdet', log_file=log_file, log_level='INFO')

    benchmark = eval(f'{args.task}_benchmark')(args, cfg, distributed, logger)
    benchmark.run(args.repeat_num)


if __name__ == '__main__':
    main()
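Note the `eval(f'{args.task}_benchmark')` line at the end of `main()`: the `--task` string selects one of the three `*_benchmark` factory functions by name. The same dispatch can be written without `eval` using a dict; the sketch below uses stand-in functions that just return a label, not the real benchmark factories:

```python
# Dict-based dispatch, a safer alternative to eval(f'{args.task}_benchmark').
# The three functions here are stand-ins for the real factory functions.
def inference_benchmark(args):
    return 'InferenceBenchmark'

def dataloader_benchmark(args):
    return 'DataLoaderBenchmark'

def dataset_benchmark(args):
    return 'DatasetBenchmark'

BENCHMARKS = {
    'inference': inference_benchmark,
    'dataloader': dataloader_benchmark,
    'dataset': dataset_benchmark,
}

# Selecting by the --task string, exactly like the eval-based version:
print(BENCHMARKS['inference'](None))  # -> InferenceBenchmark
```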

2. Debugging

1. AttributeError: 'NoneType' object has no attribute 'Process'

The cause: when benchmark.py runs `from mmdet.utils.benchmark import (DataLoaderBenchmark, DatasetBenchmark, InferenceBenchmark)`, the imported module contains a line with the following logic:
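A minimal sketch of that guarded import, reconstructed from the error; the exact code in `mmdet/utils/benchmark.py` may differ in detail:

```python
# Reconstruction of the guarded-import pattern (assumption: the real code
# in mmdet/utils/benchmark.py follows this common style).
try:
    import psutil  # used to report process memory during benchmarking
except ImportError:
    psutil = None  # later calls like psutil.Process() then raise
                   # AttributeError: 'NoneType' object has no attribute 'Process'

# The same failure mode, demonstrated with a module that is certainly absent:
try:
    import definitely_not_installed_module
except ImportError:
    definitely_not_installed_module = None

print(definitely_not_installed_module)  # -> None
```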

When psutil is not installed, that `import psutil` fails, so the name is silently left as `None`, and any later attribute access on it raises the AttributeError above.

Fix: run `pip install psutil` in the terminal.

2. mmdetection error: `benchmark.py: error: unrecognized arguments` (the checkpoint file is not recognized)

The cause: the checkpoint path was passed without the `--checkpoint` flag. benchmark.py takes the config file positionally, but the checkpoint must be given explicitly via `--checkpoint`.

The final complete command (using CenterNet as an example):

python tools/analysis_tools/benchmark.py work_dirs/centernet-update_r18_fpn_8xb8-amp-lsj-200e_coco/centernet-update_r18_fpn_8xb8-amp-lsj-200e_coco.py --checkpoint work_dirs/centernet-update_r18_fpn_8xb8-amp-lsj-200e_coco/epoch_30.pth
