I've recently been training Cascade R-CNN; if anyone is working on the same thing, feel free to message me (please include your contact information).
This post covers configuring the mmdetection environment on Windows, and training a VOC-format dataset using Cascade R-CNN as the example. If anything here is wrong, please let me know in the comments. Thanks!
This is the latest official MMDetection installation guide. It differs in several places from older blog posts, so read it carefully:
GET STARTED — MMDetection 3.2.0 documentation
1. First, create a virtual environment:
conda create -n mmlab2 python=3.7
2. Activate the virtual environment:
conda activate mmlab2
3. Install PyTorch. Adjust the versions below to match your own machine (the CUDA version in particular):
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
4. At this point you can test whether PyTorch imports correctly:
python
import torch
torch.__version__
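The `+cu113` suffix in the wheel name encodes the CUDA build, and `torch.__version__` reports the same string (e.g. `1.12.1+cu113`). As a rough sketch of what to look for in that string, here is a stdlib-only helper (`check_build` is a made-up name for illustration, not part of PyTorch; in a real session you would pass `torch.__version__` to it):

```python
# Sketch: check a PyTorch version string such as torch.__version__ ("1.12.1+cu113")
# against the wheel we asked pip for. Stdlib only; `check_build` is illustrative.

def check_build(version: str, expected_release: str, expected_cuda: str) -> bool:
    """Return True if `version` matches the requested release and CUDA tag."""
    release, _, local = version.partition('+')  # "1.12.1+cu113" -> "1.12.1", "cu113"
    return release == expected_release and local == expected_cuda

if __name__ == '__main__':
    print(check_build('1.12.1+cu113', '1.12.1', 'cu113'))  # True: CUDA 11.3 build
    print(check_build('1.12.1+cpu', '1.12.1', 'cu113'))    # False: CPU-only wheel
```

If the suffix says `cpu` instead of `cu113`, pip fell back to the CPU wheel and GPU training will not work; reinstall with the `--extra-index-url` shown above.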
5. Install the mmdetection-related packages:
pip install mmdet==3.2.0
pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.0"
I tried this many times; the versions must match each other exactly.
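The `"mmcv>=2.0.0"` constraint above is the one that trips people up: mmdet 3.x requires mmcv 2.x, and a leftover mmcv 1.x install will fail at import time. As a minimal sketch of what such a lower-bound check means (pip and mim use the full PEP 440 rules via the `packaging` library; this toy comparator only handles plain `X.Y.Z` versions):

```python
# Sketch: a tiny version comparator illustrating the "mmcv>=2.0.0" constraint.
# Toy code for plain X.Y.Z strings only; real tools use PEP 440 via `packaging`.

def parse(version: str) -> tuple:
    """Turn '2.0.0' into (2, 0, 0) so versions compare tuple-wise."""
    return tuple(int(part) for part in version.split('.'))

def satisfies_min(installed: str, minimum: str) -> bool:
    """True if installed >= minimum, e.g. the mmcv>=2.0.0 requirement."""
    return parse(installed) >= parse(minimum)

if __name__ == '__main__':
    print(satisfies_min('2.1.0', '2.0.0'))  # True
    print(satisfies_min('1.7.2', '2.0.0'))  # False: mmcv 1.x will not work with mmdet 3.x
```

If in doubt, `pip show mmcv` (or `mim list`) reports the installed version so you can verify it meets the bound.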
6. Download mmdetection from GitHub:
GitHub - open-mmlab/mmdetection: OpenMMLab Detection Toolbox and Benchmark
Close the current terminal, open cmd from inside the downloaded folder, and re-activate the virtual environment, as shown in the figure:
7. After activating the virtual environment, run the following:
pip install -r requirements/build.txt
pip install -v -e .
Note the trailing `.` in the line above: it tells pip to install the package from the current directory, and `-e` makes it an editable install, so changes to the source take effect without reinstalling.
8. The environment is now installed; next, test it. Newer versions of mmdet are tested by running a script, so first download a checkpoint. Copying the link below into a browser downloads the Cascade R-CNN weights directly:
https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco/cascade_rcnn_r50_fpn_1x_coco_20200316-3dc56deb.pth
Alternatively, open the page below and pick whichever checkpoint you need.
Note: create a checkpoints folder inside mmdetection-main and save the downloaded weights there, as shown in Figure 1.
https://github.com/open-mmlab/mmdetection/tree/main
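The manual download above can also be scripted with the standard library; a minimal sketch, assuming the checkpoint URL from step 8 and a `checkpoints/` folder next to the script (`fetch_checkpoint` and `checkpoint_path` are illustrative helper names, not mmdetection APIs):

```python
# Sketch: download the Cascade R-CNN checkpoint into checkpoints/ using only the stdlib.
import urllib.request
from pathlib import Path

CKPT_URL = ('https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/'
            'cascade_rcnn_r50_fpn_1x_coco/'
            'cascade_rcnn_r50_fpn_1x_coco_20200316-3dc56deb.pth')

def checkpoint_path(url: str, dest_dir: str = 'checkpoints') -> Path:
    """Local path the checkpoint will be saved to (filename taken from the URL)."""
    return Path(dest_dir) / url.rsplit('/', 1)[-1]

def fetch_checkpoint(url: str = CKPT_URL, dest_dir: str = 'checkpoints') -> Path:
    """Download the file unless it is already present; return its path."""
    target = checkpoint_path(url, dest_dir)
    target.parent.mkdir(parents=True, exist_ok=True)
    if not target.exists():
        urllib.request.urlretrieve(url, target)  # network access required
    return target

if __name__ == '__main__':
    # Only computes the destination path here; call fetch_checkpoint() to download.
    print(checkpoint_path(CKPT_URL))
```

Keeping the filename from the URL matters because the config-matching checkpoint name (including the `-3dc56deb` hash) is what the test script in step 9 expects to find under `checkpoints/`.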
9. Create a new file test1.py inside the mmdetection-main folder and copy in the following code:
# Environment sanity-check script; requires the checkpoint downloaded into `checkpoints/`
from mmdet.apis import init_detector, inference_detector
from mmdet.utils import register_all_modules
from mmdet.registry import VISUALIZERS
import mmcv


def main():
    config_file = './configs/cascade_rcnn/cascade-rcnn_r50_fpn_1x_coco.py'
    # download the checkpoint from the model zoo and put it in `checkpoints/`
    # url: https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco/cascade_rcnn_r50_fpn_1x_coco_20200316-3dc56deb.pth
    checkpoint_file = './checkpoints/cascade_rcnn_r50_fpn_1x_coco_20200316-3dc56deb.pth'
    device = 'cuda:0'
    register_all_modules()
    # init a detector
    model = init_detector(config_file, checkpoint_file, device=device)
    # run inference on the demo image
    img = mmcv.imread('./demo/demo.jpg', channel_order='rgb')
    result = inference_detector(model, img)
    # init the visualizer (execute this block only once)
    visualizer = VISUALIZERS.build(model.cfg.visualizer)
    # the dataset_meta is loaded from the checkpoint and
    # then passed to the model in init_detector
    visualizer.dataset_meta = model.dataset_meta
    # draw and show the result
    visualizer.add_datasample(
        'result',
        img,
        data_sample=result,
        draw_gt=False,
        wait_time=0,
    )
    visualizer.show()


if __name__ == '__main__':
    main()
If the script runs successfully, an image with detections will pop up, which means the environment is configured correctly. Hooray!
The follow-up steps are covered in another post, linked below!