Installing and Using the Weakly Supervised Learning Framework Detectron2 / DRN-WSOD-pytorch on a Server / Windows

I have recently been doing research on weakly supervised learning and related analysis. Detectron2 turned out to be a good framework, and its model zoo offers many kinds of pretrained models that can be used directly. However, I ran into quite a few pitfalls while installing, configuring, and using it, so I am sharing them here.

I recommend installing on Linux Ubuntu 16.04 or later; virtual machines do not work well.

My environment:

GPU: 4 x RTX 2080

Linux: Ubuntu 16.04 x64, Docker pre-installed

NVIDIA driver: 440.82

CUDA: 10.1

User: no sudo privileges

Server installation guide:

1. It is strongly recommended to install Anaconda first
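
On a server account without sudo, Anaconda can be installed entirely under your home directory. A minimal sketch (the installer filename/version below is just an example; pick a current one from the Anaconda archive):

# Download and install Anaconda into $HOME, no root privileges needed
wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh
bash Anaconda3-2020.02-Linux-x86_64.sh -b -p $HOME/anaconda3   # -b: silent install, -p: install prefix
source $HOME/anaconda3/bin/activate                            # activate the base environment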

2. Install PyTorch

Make sure the NVIDIA driver and CUDA are already installed, then go to the PyTorch website and select the matching version. The conda installation method is recommended here, so that the environments under conda can use PyTorch without installing it again:

https://pytorch.org/get-started/locally/

If your CUDA version is older, you need to look up the matching build yourself under the previous versions page:

https://pytorch.org/get-started/previous-versions/
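
For my setup (CUDA 10.1), the conda command given by the PyTorch site looked roughly like this (the exact versions conda resolves may differ; take the precise line from the selector on pytorch.org):

# PyTorch + torchvision built against CUDA 10.1, from the official pytorch channel
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch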

3. Download and configure Detectron2

Git repository: GitHub - shenyunhang/DRN-WSOD-pytorch (Enabling Deep Residual Networks for Weakly Supervised Object Detection). Create a suitable project directory and clone the repository with:

git clone https://github.com/shenyunhang/DRN-WSOD-pytorch.git

Below are the official installation instructions and requirements, for reference.

Installation

Our Colab Notebook has step-by-step instructions that install detectron2. The Dockerfile also installs detectron2 with a few simple commands.

Requirements

  • Linux or macOS with Python ≥ 3.6
  • PyTorch ≥ 1.4 and torchvision that matches the PyTorch installation. You can install them together at pytorch.org to make sure of this
  • OpenCV is optional and needed by demo and visualization

Build Detectron2 from Source

gcc & g++ ≥ 5 are required. ninja is recommended for faster build. After having them, run:

python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
# (add --user if you don't have permission)

# Or, to install it from a local clone:
git clone https://github.com/facebookresearch/detectron2.git
python -m pip install -e detectron2

# Or if you are on macOS
CC=clang CXX=clang++ python -m pip install ......

To rebuild detectron2 that's built from a local clone, use rm -rf build/ **/*.so to clean the old build first. You often need to rebuild detectron2 after reinstalling PyTorch.
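
Before building, a quick sanity check helps confirm that the toolchain and the PyTorch install meet the requirements above (the commands are standard; the versions shown in comments are from my environment):

gcc --version       # gcc/g++ >= 5 required
nvcc --version      # CUDA toolkit version (10.1 here)
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"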

Install Pre-Built Detectron2 (Linux only)

Choose from this table:

| CUDA | torch 1.5 | torch 1.4 |
| ---- | --------- | --------- |
| 10.2 | install   |           |
| 10.1 | install   | install   |
| 10.0 |           | install   |
| 9.2  | install   | install   |
| cpu  | install   | install   |

(each "install" entry is a link to the corresponding pip command in the original page)

Note that:

  1. The pre-built package has to be used with corresponding version of CUDA and official PyTorch release. It will not work with a different version of PyTorch or a non-official build of PyTorch.
  2. Such installation is out-of-date w.r.t. master branch of detectron2. It may not be compatible with the master branch of a research project that uses detectron2 (e.g. those in projects or meshrcnn).
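
As a concrete example, the "install" entry for CUDA 10.1 + torch 1.5 expands to a pip command roughly like the one below (the wheel index URL follows the pattern from detectron2's INSTALL.md of that period and should be verified against the table; for the DRN-WSOD fork itself you still have to build from source, as done next):

# Pre-built detectron2 for Linux, CUDA 10.1, torch 1.5 (URL pattern assumed -- verify before use)
python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.5/index.html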

It turns out that installation requires running:

python3 -m pip install -e .

which simply installs the package defined by setup.py in the repository root, in editable (develop) mode.

If errors occur during installation, install the dependencies required by setup.py one by one, for example as sketched below.
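
On a machine without sudo, the dependencies can be installed into the user site, e.g. (the package names below are typical entries from detectron2's setup.py at that time; check your local copy for the authoritative list):

# Install common detectron2 dependencies into the user site-packages
python3 -m pip install --user fvcore pycocotools yacs tabulate termcolor cloudpickle matplotlib tqdm tensorboard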

After installation, let's test what it can do; here I simply use demo.py under demo/ for the experiment.

Below is part of GETTING_STARTED.md:

Getting Started with Detectron2

This document provides a brief intro of the usage of builtin command-line tools in detectron2.

For a tutorial that involves actual coding with the API, see our Colab Notebook which covers how to run inference with an existing model, and how to train a builtin model on a custom dataset.

For more advanced tutorials, refer to our documentation.

Inference Demo with Pre-trained Models

  1. Pick a model and its config file from model zoo, for example, mask_rcnn_R_50_FPN_3x.yaml.
  2. We provide demo.py that is able to run builtin standard models. Run it with:
cd demo/
python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
  --input input1.jpg input2.jpg \
  [--other-options]
  --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl

The configs are made for training, therefore we need to specify MODEL.WEIGHTS to a model from model zoo for evaluation. This command will run the inference and show visualizations in an OpenCV window.

For details of the command line arguments, see demo.py -h or look at its source code to understand its behavior. Some common arguments are:

  • To run on your webcam, replace --input files with --webcam.
  • To run on a video, replace --input files with --video-input video.mp4.
  • To run on cpu, add MODEL.DEVICE cpu after --opts.
  • To save outputs to a directory (for images) or a file (for webcam or video), use --output.
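
For example, combining several of the options listed above, the same model can be run on a video on CPU with the result written to a file (file names here are placeholders):

python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
  --video-input video.mp4 --output out.mp4 \
  --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl MODEL.DEVICE cpu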

Another blog post, "detectron2 + ubuntu + cpu" (on 走看看), offers a partial translation of this document that you can refer to.

Note: downloading the model file online does not work well here; with or without a proxy, the download kept failing for me.

So instead, download the .pkl pretrained model from the model zoo directly into demo/: https://github.com/shenyunhang/DRN-WSOD-pytorch/blob/DRN-WSOD/MODEL_ZOO.md. Make sure the model id matches the config file. A direct-download sketch is shown below; then also place the image you want to detect in demo/ and run:
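
The detectron2:// prefix in MODEL.WEIGHTS maps to https://dl.fbaipublicfiles.com/detectron2/, so the weight file used above can also be fetched with wget (verify the exact link against the model zoo page):

cd demo/
# Same path as in the detectron2:// URL from the official example
wget https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl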

python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml --input ies.jpg --output 1.jpg --opts MODEL.WEIGHTS model_final_f10217.pkl

[12/22 01:35:08 detectron2]: Arguments: Namespace(confidence_threshold=0.5, config_file='../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml', input=['ies.jpg'], opts=['MODEL.WEIGHTS', 'model_final_f10217.pkl'], output='1.jpg', video_input=None, webcam=False)
[12/22 01:35:12 fvcore.common.checkpoint]: [Checkpointer] Loading from model_final_f10217.pkl ...
[12/22 01:35:12 fvcore.common.checkpoint]: Reading a file from 'Detectron2 Model Zoo'
  0%|                                                                                                                                    | 0/1 [00:00<?, ?it/s]/home/zhf/PJ/wsod/DRN-WSOD-pytorch/detectron2/layers/wrappers.py:226: UserWarning: This overload of nonzero is deprecated:
    nonzero()
Consider using one of the following signatures instead:
    nonzero(*, bool as_tuple) (Triggered internally at  /opt/conda/conda-bld/pytorch_1607370141920/work/torch/csrc/utils/python_arg_parser.cpp:882.)
  return x.nonzero().unbind(1)
[12/22 01:35:13 detectron2]: ies.jpg: detected 10 instances in 0.36s
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  2.49it/s]

When finished, 1.jpg (as given by --output) is the output file. Copy it out and take a look; the result is decent, and the input image can be of any size!

I will analyze the code structure of detectron2 in a later post.
