I. Basic Python environment setup
1. Python environment
conda create -n mmdetection python=3.7
conda activate mmdetection  #activate the environment
2. cudatoolkit and cuDNN
#When training on the GPU there is no need to install CUDA system-wide; it is enough to install cudatoolkit and cuDNN inside the virtual environment with the two commands below
conda install cudatoolkit==11.1.1
conda install cudnn==8.2.0.53
3. PyTorch (1.8, GPU build)
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 -c pytorch -c conda-forge
II. Clone the repository
1. Clone the mmdetection code from GitHub and set the Python environment created above as the project interpreter, so that the interpreter settings page shows that environment.
GitHub repository: open-mmlab/mmdetection (OpenMMLab Detection Toolbox and Benchmark), https://github.com/open-mmlab/mmdetection
git clone https://github.com/open-mmlab/mmdetection.git
2. After the configuration is done, create a new .py file in the project directory to test whether torch is usable:
import torch
print(torch.__version__)          # PyTorch version
print(torch.version.cuda)         # CUDA version PyTorch was built against
print(torch.cuda.is_available())  # should print True when the GPU can be used
Result: the PyTorch version, the CUDA version it was built against, and True should be printed.
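Optionally, the check can be extended to confirm that cuDNN and the GPU are visible to PyTorch; a minimal sketch (the printed values depend on your machine):
import torch
print(torch.backends.cudnn.is_available())  # True if cuDNN can be used
print(torch.backends.cudnn.version())       # cuDNN version seen by PyTorch
print(torch.cuda.get_device_name(0))        # name of GPU 0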
III. Configure the mmdetection environment
1. Install MMEngine and MMCV with MIM (run in the activated environment)
pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.0"
Note: do not install mmcv and mmcv-full at the same time; mmcv ships in two variants (mmcv-full and mmcv), and installing both will cause errors.
Run pip list in the environment to check that the packages were installed successfully.
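The same check can also be done from Python; a minimal sketch, run inside the activated environment:
import mmengine
import mmcv
# both imports should succeed and the versions should match what pip list reports
print(mmengine.__version__)
print(mmcv.__version__)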
2. Configure in the PyCharm terminal (it is fine as long as no error is reported)
pip install -r requirements/build.txt
#install the required dependency packages
python setup.py develop
#after this, any changes made to the project source code take effect immediately, without re-running the install command
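To confirm that the editable install worked, mmdet should now be importable from anywhere in the environment; a small sketch (it assumes mmengine's collect_env helper, which ships with mmengine):
import mmdet
from mmengine.utils.dl_utils import collect_env  # environment summary helper from mmengine
print(mmdet.__version__)          # version of the cloned repository
for name, value in collect_env().items():
    print(f'{name}: {value}')     # Python / CUDA / PyTorch details, useful when reporting issues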
IV. Verify that mmdetection is installed correctly (using Faster R-CNN as an example)
mmdetection already provides demo/demo.jpg for this verification.
1. In the project root, create a directory test_demo, and inside it another directory checkpoints.
Put the corresponding downloaded weight file into checkpoints: http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth
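If you prefer to fetch the weight file from Python rather than the browser, a sketch using torch.hub's downloader (it assumes you run it from the project root; the URL is the one above):
import os
import torch
url = ('http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/'
       'faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth')
os.makedirs('test_demo/checkpoints', exist_ok=True)
# download into test_demo/checkpoints/, keeping the original file name
torch.hub.download_url_to_file(url, os.path.join('test_demo/checkpoints', os.path.basename(url)))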
2. Create a Python file test.py in the project root:
import mmcv
from mmdet.apis import init_detector, inference_detector
from mmdet.registry import VISUALIZERS

# config file shipped with the repository
config_file = 'configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py'
# download the checkpoint from the model zoo and put it under `test_demo/checkpoints/`
# URL: http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth
checkpoint_file = 'C:/Anaconda/Python_code/mmdetection/test_demo/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
device = 'cuda:0'

# initialize the detector
# build the model from a config file and a checkpoint file
model = init_detector(config_file, checkpoint_file, device=device)

# run inference on the demo image (read in RGB order for visualization)
image = mmcv.imread('demo/demo.jpg', channel_order='rgb')
result = inference_detector(model, image)

# init the visualizer (execute this block only once)
visualizer = VISUALIZERS.build(model.cfg.visualizer)
# the dataset_meta is loaded from the checkpoint and
# then passed to the model in init_detector
visualizer.dataset_meta = model.dataset_meta

# show the results
visualizer.add_datasample(
    'result',
    image,
    data_sample=result,
    draw_gt=False,
    wait_time=0,
    out_file='outputs/result.png')  # optionally, write to an output file
visualizer.show()
Note: do not use the old show_result / show_result_pyplot API to display the image; that function is no longer supported in this version.
Result: the detected boxes and labels are drawn on demo/demo.jpg, displayed in a window, and also saved to outputs/result.png.
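If no GPU is available, the same verification can be run on the CPU; a minimal sketch, assuming it is executed from the project root (only the device and the relative checkpoint path differ from test.py above):
from mmdet.apis import init_detector, inference_detector
config_file = 'configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py'
checkpoint_file = 'test_demo/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
model = init_detector(config_file, checkpoint_file, device='cpu')  # CPU-only inference
result = inference_detector(model, 'demo/demo.jpg')
print(result)  # a DetDataSample holding the predicted boxes, labels and scores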