A First-Hand Experience with Pose Estimation (Filling the Gaps) - Get to Know It!

Sorry, everyone: this series went on hiatus. I have finally finished my graduation project, so here is the last installment I wrote back then.

TL;DR
This article was written when mmpose was only at version 0.13.0. If you follow this walkthrough against the latest version on GitHub, the installation and the steps may no longer be entirely accurate.


5 Exporting a model to ONNX

Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves.

We'll set this aside for now.

6 Customizing Runtime Settings

  • Customize Optimization Methods
    • Customize optimizer supported by PyTorch
    • Customize self-implemented optimizer
        1. Define a new optimizer
        2. Add the optimizer to registry
        3. Specify the optimizer in the config file
    • Customize optimizer constructor
    • Additional settings
  • Customize Training Schedules
  • Customize Workflow
    • Customize self-implemented hooks
        1. Implement a new hook
        2. Register the new hook
        3. Modify the config
    • Use hooks implemented in MMCV
    • Modify default runtime hooks
      • Checkpoint config
      • Log config
      • Evaluation config

6.1 Customizing Optimization Methods

---- From here on, Section 6.1 is basically the same as the content in Section 4!

Customizing optimizers supported by PyTorch

All optimizers in PyTorch are supported here; you only need to modify the optimizer field of the config slightly.

For example, to use Adam:

optimizer = dict(type='Adam', lr=0.0003, weight_decay=0.0001)

Or, to get the same effect as this PyTorch call:

# in pytorch
torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)

# in mmpose
optimizer = dict(type='Adam', lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)

Customizing a self-implemented optimizer
  1. Define your own optimizer.

Create a new folder mmpose/core/optimizer.

Then create a new file mmpose/core/optimizer/my_optimizer.py:

from .registry import OPTIMIZERS
from torch.optim import Optimizer


@OPTIMIZERS.register_module()
class MyOptimizer(Optimizer):

    def __init__(self, a, b, c):
        # The docs leave the body as a stub; a real optimizer would build
        # parameter groups and defaults here.
        pass

  2. Import it in mmpose/core/optimizer/__init__.py so that it gets registered:
from .my_optimizer import MyOptimizer

The other approach, quoted from the docs:

Use custom_imports in the config to manually import it

custom_imports = dict(imports=['mmpose.core.optimizer.my_optimizer'], allow_failed_imports=False)

The module mmpose.core.optimizer.my_optimizer will be imported at the beginning of the program and the class MyOptimizer is then automatically registered. Note that only the package containing the class MyOptimizer should be imported. mmpose.core.optimizer.my_optimizer.MyOptimizer cannot be imported directly.

  3. Use it in the config file!
optimizer = dict(type='MyOptimizer', a=a_value, b=b_value, c=c_value)

Customizing the optimizer constructor

from mmcv.utils import build_from_cfg

from mmcv.runner.optimizer import OPTIMIZER_BUILDERS, OPTIMIZERS
from mmpose.utils import get_root_logger
from .my_optimizer import MyOptimizer


@OPTIMIZER_BUILDERS.register_module()
class MyOptimizerConstructor:

    def __init__(self, optimizer_cfg, paramwise_cfg=None):
        # Keep the optimizer config and the optional per-parameter settings.
        self.optimizer_cfg = optimizer_cfg
        self.paramwise_cfg = {} if paramwise_cfg is None else paramwise_cfg

    def __call__(self, model):
        # Mirror the default constructor: attach the model parameters and
        # build the optimizer; a real implementation would also apply
        # self.paramwise_cfg (e.g. per-layer learning rates) here.
        optimizer_cfg = self.optimizer_cfg.copy()
        optimizer_cfg['params'] = model.parameters()
        my_optimizer = build_from_cfg(optimizer_cfg, OPTIMIZERS)
        return my_optimizer

The default optimizer constructor is implemented here; it can also serve as a template for new optimizer constructors.

---- Up to this point, Section 6.1 has been basically the same as Section 4!

Additional settings

Tricks not implemented by the optimizer should be implemented through an optimizer constructor (e.g., setting parameter-wise learning rates) or hooks.
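
For example, parameter-wise learning rates can often be configured without writing a custom constructor, via the paramwise_cfg understood by mmcv's DefaultOptimizerConstructor (the key 'backbone' below is illustrative; it has to match a substring of your model's parameter names):

optimizer = dict(
    type='Adam',
    lr=5e-4,
    # DefaultOptimizerConstructor matches custom_keys against parameter
    # names and multiplies the learning rate of matches by lr_mult.
    paramwise_cfg=dict(custom_keys={'backbone': dict(lr_mult=0.1)}))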

We list some common settings that could stabilize or accelerate training. Feel free to create a PR or an issue for more settings.

  • Use gradient clip to stabilize training: Some models need gradient clipping to stabilize the training process. An example is as below:

    optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
    
  • Use momentum schedule to accelerate model convergence: We support a momentum scheduler that modifies the model's momentum according to the learning rate, which can make the model converge faster. The momentum scheduler is usually used together with the LR scheduler; for example, the following config is used in 3D detection to accelerate convergence. For more details, please refer to the implementations of CyclicLrUpdater and CyclicMomentumUpdater.

    lr_config = dict(
        policy='cyclic',
        target_ratio=(10, 1e-4),
        cyclic_times=1,
        step_ratio_up=0.4,
    )
    momentum_config = dict(
        policy='cyclic',
        target_ratio=(0.85 / 0.95, 1),
        cyclic_times=1,
        step_ratio_up=0.4,
    )
    

6.2 Customizing Training Schedules

By default we use a step learning rate schedule in the config files; this calls StepLRHook in MMCV. Many other learning rate schedules are supported, such as CosineAnnealing and Poly. Here are some examples:

  • Poly schedule:

    lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False)
    
  • CosineAnnealing schedule:

    lr_config = dict(
        policy='CosineAnnealing',
        warmup='linear',
        warmup_iters=1000,
        warmup_ratio=1.0 / 10,
        min_lr_ratio=1e-5)
    

6.3 Customizing Workflow

By default, we recommend using EpochEvalHook to do evaluation after each training epoch, but the val workflow can still be used as an alternative.

Workflow is a list of (phase, epochs) to specify the running order and epochs. By default it is set to be

workflow = [('train', 1)]

which means running 1 epoch for training. Sometimes users may want to check some metrics (e.g. loss, accuracy) of the model on the validation set. In that case, we can set the workflow as

[('train', 1), ('val', 1)]

so that 1 epoch for training and 1 epoch for validation will be run iteratively.

Note:

  1. The parameters of model will not be updated during val epoch.
  2. Keyword total_epochs in the config only controls the number of training epochs and will not affect the validation workflow.
  3. Workflows [('train', 1), ('val', 1)] and [('train', 1)] will not change the behavior of EpochEvalHook, because EpochEvalHook is called by after_train_epoch, while the validation workflow only affects hooks called through after_val_epoch. Therefore, the only difference between [('train', 1), ('val', 1)] and [('train', 1)] is that the runner will calculate losses on the validation set after each training epoch (see the sketch below).
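
To make notes 1-2 concrete, here is a toy, self-contained sketch of how an epoch-based runner walks the workflow list (illustrative only; mmcv's EpochBasedRunner does this with real dataloaders and hooks). Only train epochs count toward the epoch budget:

# Toy re-implementation of the workflow loop (illustrative, not mmcv code).
def run(workflow, max_epochs):
    epoch = 0
    while epoch < max_epochs:
        for phase, n in workflow:
            for _ in range(n):
                if phase == 'train':
                    if epoch >= max_epochs:
                        return
                    epoch += 1
                print(f'{phase} epoch (train epochs done: {epoch})')

run([('train', 1), ('val', 1)], max_epochs=2)
# train epoch (train epochs done: 1)
# val epoch (train epochs done: 1)
# train epoch (train epochs done: 2)
# val epoch (train epochs done: 2)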

6.4 Customizing Hooks

Customizing self-implemented hooks
  1. Implement a new hook.

Here we give an example of creating a new hook in MMPose and using it in training.

from mmcv.runner import HOOKS, Hook


@HOOKS.register_module()
class MyHook(Hook):

    def __init__(self, a, b):
        pass

    def before_run(self, runner):
        pass

    def after_run(self, runner):
        pass

    def before_epoch(self, runner):
        pass

    def after_epoch(self, runner):
        pass

    def before_iter(self, runner):
        pass

    def after_iter(self, runner):
        pass

Depending on the functionality of the hook, the users need to specify what the hook will do at each stage of the training in before_run, after_run, before_epoch, after_epoch, before_iter, and after_iter.
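
As a concrete illustration (my own toy example, not from the docs; the name CheckLossHook and its interval argument are made up), here is the template filled in as a hook that warns when the training loss stops being finite:

from mmcv.runner import HOOKS, Hook


@HOOKS.register_module()
class CheckLossHook(Hook):
    """Toy hook: warn if the training loss becomes non-finite."""

    def __init__(self, interval=50):
        self.interval = interval

    def after_iter(self, runner):
        # runner.outputs holds the dict returned by the model's train_step;
        # standard MMPose models put a scalar 'loss' tensor in it.
        if self.every_n_iters(runner, self.interval):
            loss = runner.outputs.get('loss')
            if loss is not None and not bool(loss.isfinite()):
                runner.logger.warning(
                    f'Non-finite loss at iteration {runner.iter}')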

  2. Import it in mmpose/core/utils/__init__.py (assuming the hook lives in mmpose/core/utils/my_hook.py):
from .my_hook import MyHook

The alternative approach:

Use custom_imports in the config to manually import it

custom_imports = dict(imports=['mmpose.core.utils.my_hook'], allow_failed_imports=False)
  3. Modify the config:
custom_hooks = [
    dict(type='MyHook', a=a_value, b=b_value)
]

You can also set the priority of the hook by adding the key priority with value 'NORMAL' or 'HIGHEST', as below:

custom_hooks = [
    dict(type='MyHook', a=a_value, b=b_value, priority='NORMAL')
]

By default the hook’s priority is set as NORMAL during registration.

Using hooks implemented in MMCV

If the hook is already implemented in MMCV, you can directly modify the config to use the hook, as below:

custom_hooks = [
    dict(type='MMCVHook', a=a_value, b=b_value, priority='NORMAL')
]
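
Here 'MMCVHook' is only a placeholder type. A concrete instance (assuming an mmcv version that ships EMAHook, which maintains an exponential moving average of the model weights) could look like:

custom_hooks = [
    dict(type='EMAHook', momentum=0.0002, priority='NORMAL')
]
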
Modifying default runtime hooks

There are some common hooks that are not registered through custom_hooks but have been registered by default when importing MMCV. They are:

  • log_config
  • checkpoint_config
  • evaluation
  • lr_config
  • optimizer_config
  • momentum_config

Among those hooks, only the logger hook has VERY_LOW priority; the others' priority is NORMAL. The tutorials above already cover how to modify optimizer_config, momentum_config, and lr_config. Here we show what we can do with log_config, checkpoint_config, and evaluation.

Checkpoint config

The MMCV runner will use checkpoint_config to initialize CheckpointHook.

checkpoint_config = dict(interval=1)

The users could set max_keep_ckpts to save only a small number of checkpoints, or decide whether to store the state dict of the optimizer via save_optimizer. More details of the arguments are here.
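
For instance, to keep only the three most recent checkpoints and skip saving the optimizer state (the values here are illustrative):

checkpoint_config = dict(interval=1, max_keep_ckpts=3, save_optimizer=False)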

Log config

The log_config wraps multiple logger hooks and makes it possible to set intervals. MMCV currently supports WandbLoggerHook, MlflowLoggerHook, and TensorboardLoggerHook. The detailed usage can be found in the doc.

log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='TensorboardLoggerHook')
    ])
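
As a variation (a sketch: the project name is a placeholder, and wandb needs to be installed first), Weights & Biases logging can be enabled by adding WandbLoggerHook to the same list:

log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        # init_kwargs are forwarded to wandb.init().
        dict(type='WandbLoggerHook',
             init_kwargs=dict(project='my-pose-experiments'))
    ])
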
Evaluation config

The config of evaluation will be used to initialize the EvalHook. Except for the key interval, other arguments such as metric will be passed to dataset.evaluate().

evaluation = dict(interval=1, metric='mAP')
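
Other keyword arguments accepted by the hook can be set the same way; for instance (assuming a version whose eval hook supports save_best, as recent MMPose configs do), to also keep the best checkpoint according to AP:

evaluation = dict(interval=1, metric='mAP', save_best='AP')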

7 Useful Tools

These are the scripts in the tools/ folder.

7.1 Log Analysis

tools/analysis/analyze_logs.py plots loss/pose accuracy curves given a training log file. Run pip install seaborn first to install the dependency.

[Figure: example accuracy curve plotted by analyze_logs.py]

python tools/analysis/analyze_logs.py plot_curve ${JSON_LOGS} [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]

Examples:

  • Plot the mse loss of some run.

    python tools/analysis/analyze_logs.py plot_curve log.json --keys mse_loss --legend mse_loss
    
  • Plot the acc of some run, and save the figure to a pdf.

    python tools/analysis/analyze_logs.py plot_curve log.json --keys acc_pose --out results.pdf
    
  • Compare the acc of two runs in the same figure.

    python tools/analysis/analyze_logs.py plot_curve log1.json log2.json --keys acc_pose --legend run1 run2
    

You can also compute the average training speed.

python tools/analysis/analyze_logs.py cal_train_time ${JSON_LOGS} [--include-outliers]
  • Compute the average training speed for a config file

    python tools/analysis/analyze_logs.py cal_train_time log.json
    

    The output is expected to be like the following.

    -----Analyze train time of log.json-----
    slowest epoch 114, average time is 0.9662
    fastest epoch 16, average time is 0.7532
    time std over epochs is 0.0426
    average iter time: 0.8406 s/iter
    

7.2 Model Computational Complexity

tools/analysis/get_flops.py is a script adapted from flops-counter.pytorch to compute the FLOPs and parameter count of a given model.

python tools/analysis/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]

We will get a result like this:

==============================
Input shape: (1, 3, 256, 192)
Flops: 8.9 GMac
Params: 28.04 M
==============================

Note: This tool is still experimental and we do not guarantee that the number is absolutely correct. You may use the result for simple comparisons, but double check it before you adopt it in technical reports or papers. In other words, it isn't fully polished yet: fine for rough comparisons, but not quite good enough to put straight into an academic paper!

(1) FLOPs are related to the input shape, while parameters are not. The default input shape is (1, 3, 340, 256) for the 2D recognizer and (1, 3, 32, 340, 256) for the 3D recognizer. (2) Some operators, such as GN and custom operators, are not counted in FLOPs. Refer to mmcv.cnn.get_model_complexity_info() for details.
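
Since the FLOPs depend on the input shape, you can pass it explicitly; for example (the config path is a placeholder, and --shape is assumed to take the spatial size as in recent mmpose versions):

python tools/analysis/get_flops.py configs/my_pose_config.py --shape 256 192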

Also, the deformable convolutions I used in my own model could not be measured correctly with this tool.

7.3 Model Conversion

We'll set this aside for now, too.

7.4 Miscellaneous

Print the whole config.

tools/analysis/print_config.py prints the whole config verbatim, expanding all its imports.

python tools/analysis/print_config.py ${CONFIG} [-h] [--options ${OPTIONS [OPTIONS...]}]

References

https://blog.csdn.net/weixin_43013761/article/details/108147598

Usage tutorial:

https://mmpose.readthedocs.io/en/latest/install.html


Postscript

mmpose now has its own official Chinese documentation, and it has somehow reached version 0.27.0, more than double the 0.13.0 I was writing about. I took a look today: it is very well done and nicely written, and many methods have been added, such as 3D, Transformer-based, and animal pose estimation, haha. Of course, some methods that used to be there are gone too (I won't say which).

I'm actually not from a computer science background, so I hope this blog hasn't misled anyone or led anyone down a detour; if it has, then we are fellow travelers in misfortune! Hugs! These past few days, while sorting out my graduation materials, I uploaded part of my thesis code to GitHub as SSR-Pose. It's not a strong reference, but if you don't mind, I'll keep posting field notes of a rookie hopping around at the doorstep of CV, and write a summary of my graduate school life (finger heart).

Also, that GitHub repo was written for my junior labmates, so my tone there is rather stern; they don't necessarily know I'm goofing around over here. Let me keep a bit of dignity as their senior. Shh 🤫
