YOLOv2 in PyTorch

NOTE: This project is no longer maintained and may not be compatible with the newest PyTorch (after 0.4.0).

This is a PyTorch implementation of YOLOv2.

This project is mainly based on darkflow and darknet.

I used a Cython extension for postprocessing and multiprocessing.Pool for image preprocessing.

Testing an image in VOC2007 takes about 13~20 ms.
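As a rough illustration of the Pool-based preprocessing pattern mentioned above (this is not the repository's actual code; load_and_resize and the file paths are placeholders):

import multiprocessing

import cv2
import numpy as np

def load_and_resize(path, size=(416, 416)):
    # Read an image, convert BGR -> RGB and resize to the network input size.
    im = cv2.imread(path)
    im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
    im = cv2.resize(im, size)
    return im.astype(np.float32) / 255.0

if __name__ == '__main__':
    paths = ['000001.jpg', '000002.jpg']  # placeholder image paths
    with multiprocessing.Pool(processes=4) as pool:
        # imap yields preprocessed images lazily and in order
        for im in pool.imap(load_and_resize, paths):
            print(im.shape)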

For details about YOLO and YOLOv2 please refer to their project page and the paper:
YOLO9000: Better, Faster, Stronger by Joseph Redmon and Ali Farhadi.

NOTE 1:
This is still an experimental project. VOC07 test mAP is about 0.71 (trained on VOC07+12 trainval, reported by @cory8249). See issue1 and issue23 for more details about training.

NOTE 2:
I recommend writing your own dataloader using torch.utils.data.Dataset, since multiprocessing.Pool.imap won't stop even when there is not enough memory. An example of a dataloader for VOCDataset: issue71.
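A minimal sketch of such a dataloader, assuming the image paths, boxes, and class labels have already been parsed into Python lists; the class and variable names below are illustrative, not taken from the repository (see issue71 for a complete VOCDataset example):

import cv2
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class VOCLikeDataset(Dataset):
    # Assumes image paths, boxes and class labels were parsed beforehand.
    def __init__(self, image_paths, boxes, labels, size=(416, 416)):
        self.image_paths = image_paths  # list of image file paths
        self.boxes = boxes              # list of N x 4 numpy arrays
        self.labels = labels            # list of length-N numpy arrays
        self.size = size

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        im = cv2.imread(self.image_paths[idx])
        im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
        im = cv2.resize(im, self.size).astype(np.float32) / 255.0
        # C x H x W tensor plus the raw annotations for the loss
        return torch.from_numpy(im).permute(2, 0, 1), self.boxes[idx], self.labels[idx]

# DataLoader bounds the number of worker processes and keeps the
# variable-size annotations as per-image lists:
# loader = DataLoader(VOCLikeDataset(paths, boxes, labels), batch_size=16,
#                     num_workers=4,
#                     collate_fn=lambda batch: tuple(map(list, zip(*batch))))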

Installation and demo

Clone this repository

git clone git@github.com:longcw/yolo2-pytorch.git

Build the reorg layer (tf.extract_image_patches)

cd yolo2-pytorch

./make.sh

Download the trained model yolo-voc.weights.h5 and set the model path in demo.py.

Run the demo: python demo.py.

Training YOLOv2

You can train YOLOv2 on any dataset. Here we train it on VOC2007/2012.

Download the training, validation, test data and VOCdevkit

wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar

wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar

wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCdevkit_08-Jun-2007.tar

Extract all of these tars into one directory named VOCdevkit

tar xvf VOCtrainval_06-Nov-2007.tar
tar xvf VOCtest_06-Nov-2007.tar
tar xvf VOCdevkit_08-Jun-2007.tar

It should have this basic structure

$VOCdevkit/ # development kit

$VOCdevkit/VOCcode/ # VOC utility code

$VOCdevkit/VOC2007 # image sets, annotations, etc.

# ... and several other directories ...

Since the program loads the data from yolo2-pytorch/data by default, you can set the data path as follows.

cd yolo2-pytorch
mkdir data
cd data
ln -s $VOCdevkit VOCdevkit2007

Download the pretrained darknet19 model and set the path in yolo2-pytorch/cfgs/exps/darknet19_exp1.py.

(optional) Training with TensorBoard.
To use TensorBoard, set use_tensorboard = True in yolo2-pytorch/cfgs/config.py and install TensorboardX (https://github.com/lanpa/tensorboard-pytorch). The TensorBoard log will be saved in training/runs.
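The training script handles this logging itself; as a general illustration of how TensorboardX writes scalars to that directory (the tag name and values below are placeholders):

from tensorboardX import SummaryWriter

writer = SummaryWriter('training/runs')      # same directory the trainer uses
for step in range(100):
    loss = 1.0 / (step + 1)                  # placeholder value
    writer.add_scalar('train/loss', loss, step)
writer.close()

The curves can then be viewed with tensorboard --logdir training/runs.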

Run the training program: python train.py.

Evaluation

Set the path of the trained_model in yolo2-pytorch/cfgs/config.py.

cd yolo2-pytorch
mkdir output
python test.py

Training on your own data

The forward pass requires that you supply four arguments to the network (a short construction sketch follows this list):

im_data - image data. This should be in the format C x H x W, where C corresponds to the color channels of the image and H and W are the height and width respectively. Color channels should be in RGB format. Use the imcv2_recolor function provided in utils/im_transform.py to preprocess your image. Also, make sure that images have been resized to 416 x 416 pixels.

gt_boxes - A list of numpy arrays, where each one is of size N x 4, where N is the number of ground-truth objects in the image. The four values in each row should correspond to x_bottom_left, y_bottom_left, x_top_right, and y_top_right.

gt_classes - A list of numpy arrays, where each array contains the integer class label of each bounding box provided in gt_boxes.

dontcare - a list of lists
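A rough sketch of assembling these four arguments for a single image, assuming it is run from the yolo2-pytorch root so that utils/im_transform.py is importable; the image path, box coordinates, class index, and the commented-out forward call are illustrative assumptions rather than values or a signature taken from the repository:

import cv2
import numpy as np
import torch
from utils.im_transform import imcv2_recolor

im = cv2.imread('example.jpg')                   # placeholder image path
im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)         # color channels in RGB
im = cv2.resize(im, (416, 416))                  # resize to 416 x 416 pixels
im = imcv2_recolor(im)                           # preprocessing from utils/im_transform.py
im_data = torch.from_numpy(im).permute(2, 0, 1).float()   # C x H x W

# one N x 4 array per image: [x_bottom_left, y_bottom_left, x_top_right, y_top_right]
gt_boxes = [np.array([[48., 240., 195., 371.]], dtype=np.float32)]   # example values
gt_classes = [np.array([11], dtype=np.int64)]    # one class index per box (example value)
dontcare = [[]]                                  # no "don't care" regions for this image

# assumed call with `net` being the constructed network; check darknet.py in
# your checkout for the exact forward signature:
# output = net(im_data.unsqueeze(0), gt_boxes, gt_classes, dontcare)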

License: MIT license (MIT)
