Source code: https://github.com/open-mmlab/mmaction2
1. Environment Setup:
Ubuntu 16.04 + Anaconda: Python 3.7, PyTorch 1.5.0, CUDA 10.1
mmaction2 v0.13.0: https://github.com/open-mmlab/mmaction2/releases/tag/v0.13.0
mmcv-full==1.3.0
2. Installation Steps
- Create a virtual environment: conda create -n py37 python=3.7
- Activate the environment: conda activate py37
- Install torch==1.5.0 (this article uses an offline install package)
- Install mmcv-full (match your CUDA and torch versions): pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.5.0/index.html
- Clone the mmaction2 repository: git clone https://github.com/open-mmlab/mmaction2.git
- Build and install:
- cd mmaction2
- pip install -r requirements/build.txt
- python setup.py develop
- Start Python and import mmcv and mmaction to confirm the installation succeeded.
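The mmcv-full find-links URL in the install step above encodes the CUDA and torch versions. A minimal sketch of how that URL is assembled; the version strings here are the ones used in this article, substitute your own (e.g. from `torch.version.cuda` and `torch.__version__`):

```python
# Assemble the mmcv-full find-links URL from CUDA and torch versions.
# "cu101" and "1.5.0" match this article's setup; change them to match yours.
cuda = "cu101"
torch_ver = "1.5.0"

url = (f"https://download.openmmlab.com/mmcv/dist/"
       f"{cuda}/torch{torch_ver}/index.html")
print(url)
```

Passing the printed URL to `pip install mmcv-full -f <url>` makes pip pick a prebuilt wheel compiled against that exact CUDA/torch pair, avoiding a slow (and error-prone) source build.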
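The final verification step can also be scripted instead of run interactively. A small sketch using the standard library's `find_spec`, which only probes importability, so a missing package is reported rather than raising an exception:

```python
import importlib.util

def is_importable(pkg: str) -> bool:
    """Return True if `pkg` can be found on the current Python path."""
    return importlib.util.find_spec(pkg) is not None

# Check the two packages installed above.
for pkg in ("mmcv", "mmaction"):
    print(f"{pkg}: {'OK' if is_importable(pkg) else 'MISSING'}")
```

If either package prints MISSING, re-check the mmcv-full wheel URL and the `python setup.py develop` step.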
3. Functional Test
- cd demo
- Open demo.ipynb; this article converts it to a .py file for easier testing:
```python
from mmaction.apis import init_recognizer, inference_recognizer

config_file = '../configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
# download the checkpoint from model zoo and put it in `checkpoints/`
checkpoint_file = '../checkpoints/tsn_r50_256p_1x1x8_100e_kinetics400_rgb.pth'

model = init_recognizer(config_file, checkpoint_file, device='cpu')

# test a single video and show the result:
video = 'demo.mp4'
label = 'label_map_k400.txt'
results = inference_recognizer(model, video, label)

# show the results
for result in results:
    print(f'{result[0]}: ', result[1])
```
- Download the checkpoint matching the loaded network (renamed here): tsn_r50_256p_1x1x8_100e_kinetics400_rgb.pth (from: https://github.com/open-mmlab/mmaction2/blob/master/configs/recognition/tsn/README.md)
- Run the test script: python demo_test.py
- The recognition results are printed
- To be continued