YOLOv3 Study Notes

Mastering deep learning techniques is essential.

0. YOLOv3 Background

  • YOLOv3 implementations
    Two well-maintained versions are currently available:
    the ultralytics version (https://github.com/ultralytics/yolov3)
    the eriklindernoren version, "A minimal PyTorch implementation of YOLOv3"
    (see also https://github.com/search?q=yolo&type=repositories)
  • YOLOv3 is one of the classic releases in the YOLO series

1. Downloading YOLOv3

Link to GitHub:
https://github.com/eriklindernoren/PyTorch-YOLOv3
Download the zip file, unzip it wherever you want to keep it, open the README, and follow its steps in order.

Quick download of the COCO dataset:
MS COCO is large, so a command-line download accelerator such as aria2 is convenient:

sudo apt-get install aria2
aria2c -c <URL>   # <URL> is one of the official download addresses below; -c resumes interrupted downloads

train2017:http://images.cocodataset.org/zips/train2017.zip
val2017:http://images.cocodataset.org/zips/val2017.zip
test2017:http://images.cocodataset.org/zips/test2017.zip
annotations_trainval2017:http://images.cocodataset.org/annotations/annotations_trainval2017.zip
stuff_annotations_trainval2017:http://images.cocodataset.org/annotations/stuff_annotations_trainval2017.zip
image_info_test2017:http://images.cocodataset.org/annotations/image_info_test2017.zip
train2014:http://images.cocodataset.org/zips/train2014.zip
val2014:http://images.cocodataset.org/zips/val2014.zip
Official site: http://cocodataset.org/#download
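As a sketch of how the links above can be fed to aria2, the loop below builds one aria2c command per archive (the `-x 8` connection count is my own choice, not from the original post). It only prints the commands so they can be reviewed first; pipe the output to `sh` to actually download.

```shell
# COCO archives listed above (subset shown; add the rest as needed)
urls="http://images.cocodataset.org/zips/train2017.zip
http://images.cocodataset.org/zips/val2017.zip
http://images.cocodataset.org/annotations/annotations_trainval2017.zip"

cmds=""
for u in $urls; do
    # -c resumes an interrupted download, -x 8 opens 8 connections per file
    cmds="$cmds
aria2c -c -x 8 $u"
done

echo "$cmds"    # review first; run `echo "$cmds" | sh` to download
```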

Reference: "Quick download of the COCO dataset"


Early versions of the Ultralytics YOLOv3
(https://github.com/ultralytics/yolov3/tree/e8c205b4122871710429b2e245dda9d2c40929b4/data)
used the COCO2014 dataset, organized the same way as the darknet version.
Source: https://blog.csdn.net/a_piece_of_ppx/article/details/123435369


2. Data Organization

1. Clone the COCO API
git clone https://github.com/pdollar/coco
cd coco

COCO API - http://cocodataset.org/

COCO is a large image dataset designed for object detection, segmentation, person keypoints detection, stuff segmentation, and caption generation. This package provides Matlab, Python, and Lua APIs that assist in loading, parsing, and visualizing the annotations in COCO. Please visit http://cocodataset.org/ for more information on COCO, including the data, paper, and tutorials. The exact format of the annotations is also described on the COCO website. The Matlab and Python APIs are complete; the Lua API provides only basic functionality.

In addition to this API, please download both the COCO images and annotations in order to run the demos and use the API. Both are available on the project website.
-Please download, unzip, and place the images in: coco/images/
-Please download and place the annotations in: coco/annotations/
For substantially more details on the API please see http://cocodataset.org/#download.

After downloading the images and annotations, run the Matlab, Python, or Lua demos for example usage.

To install:
-For Matlab, add coco/MatlabApi to the Matlab path (OSX/Linux binaries provided)
-For Python, run "make" under coco/PythonAPI
-For Lua, run "luarocks make LuaAPI/rocks/coco-scm-1.rockspec" under coco/

2. Download and unzip
train2014.zip
val2014.zip

unzip -q train2014.zip
unzip -q val2014.zip

3. Organizing the data
cd into the data folder and run: git clone https://github.com/pdollar/coco
Afterwards a coco folder appears under data. cd into coco and run mkdir images to create a folder named images.

Download the dataset from the URLs above and unzip it into the images folder.
To count the files in the current directory (including subdirectories):
ls -lR | grep "^-" | wc -l

Download four files into the coco folder: two zip archives and two .part files. Unzip the archives:
tar xvf labels.tgz
unzip -q instances_train-val2014.zip   # produces the annotations folder

// Set Up Image Lists
paste <(awk "{print \"$PWD\"}" <5k.part) 5k.part | tr -d '\t' > 5k.txt
paste <(awk "{print \"$PWD\"}" <trainvalno5k.part) trainvalno5k.part | tr -d '\t' > trainvalno5k.txt
This generates two files, 5k.txt and trainvalno5k.txt.

Set Up Image Lists (personally I think this just builds the dataset path lists; paste is a merge command)
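To see what the paste/tr pipeline produces, here is a small Python emulation (my own illustration, not code from the repo; the sample paths are made up): paste joins $PWD and each line of the .part file with a tab, and tr then deletes the tab, so every path in the file gets the working directory glued onto its front.

```python
# Emulate: paste <(awk "{print \"$PWD\"}" <5k.part) 5k.part | tr -d '\t' > 5k.txt
def make_image_list(part_lines, cwd):
    """Prefix every line of a .part file with cwd (no separator)."""
    return [cwd + line for line in part_lines]

part = ["/images/val2014/COCO_val2014_000000000164.jpg",
        "/images/val2014/COCO_val2014_000000000192.jpg"]
for path in make_image_list(part, "/data/coco"):
    print(path)   # absolute paths under /data/coco/images/val2014/
```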


# current path: $PWD
echo "current_path:"$PWD
# parent path: $(dirname $PWD)
echo "parent_path:"$(dirname $PWD)
# nest dirname calls for higher levels: $(dirname $(dirname $PWD))
echo "parent_parent_path:"$(dirname $(dirname $PWD))
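The same parent-path lookups can be done in Python with os.path.dirname (my own illustration; the cwd value is made up):

```python
import os

# os.path.dirname strips the last path component, mirroring $(dirname $PWD)
cwd = "/data/coco/images"
print("current_path:", cwd)
print("parent_path:", os.path.dirname(cwd))                           # /data/coco
print("parent_parent_path:", os.path.dirname(os.path.dirname(cwd)))  # /data
```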


3. Environment Requirements

tqdm
matplotlib
(among others; see the repo's requirements.txt)
See also: "A Summary of Commonly Used Python Packages"

4. Key Code in train.py

4.1 logger = Logger(args.logdir) # Tensorboard logger

import os
import datetime
from torch.utils.tensorboard import SummaryWriter
class Logger(object):
    def __init__(self, log_dir, log_hist=True):
        """Create a summary writer logging to log_dir."""
        if log_hist:    # Check whether a new folder should be created for each log
            log_dir = os.path.join(
                log_dir,
                datetime.datetime.now().strftime("%Y_%m_%d__%H_%M_%S"))
        self.writer = SummaryWriter(log_dir)

    def scalar_summary(self, tag, value, step):
        """Log a scalar variable."""
        self.writer.add_scalar(tag, value, step)

    def list_of_scalars_summary(self, tag_value_pairs, step):
        """Log scalar variables."""
        for tag, value in tag_value_pairs:
            self.writer.add_scalar(tag, value, step)
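The only non-obvious part of Logger is the timestamped sub-directory created when log_hist=True, which gives each training run its own TensorBoard folder. A stdlib-only sketch of that naming scheme (the helper name is mine; the format string matches the class above):

```python
import os
import datetime

def timestamped_log_dir(log_dir, log_hist=True):
    """Reproduce Logger's directory naming: one sub-folder per run."""
    if log_hist:
        log_dir = os.path.join(
            log_dir,
            datetime.datetime.now().strftime("%Y_%m_%d__%H_%M_%S"))
    return log_dir

print(timestamped_log_dir("logs"))   # e.g. logs/2024_01_31__12_00_00
```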

For details, see "Using TensorBoard: the SummaryWriter class (PyTorch)".

SummaryWriter writes log output asynchronously.


4.2 model = load_model(args.model, args.pretrained_weights)

def load_model(model_path, weights_path=None):
    """Loads the yolo model from file.

    :param model_path: Path to model definition file (.cfg)
    :type model_path: str
    :param weights_path: Path to weights or checkpoint file (.weights or .pth)
    :type weights_path: str
    :return: Returns model
    :rtype: Darknet
    """
    device = torch.device("cuda" if torch.cuda.is_available()
                          else "cpu")  # Select device for inference
    model = Darknet(model_path).to(device)

    model.apply(weights_init_normal)

    # If pretrained weights are specified, start from checkpoint or weight file
    if weights_path:
        if weights_path.endswith(".pth"):
            # Load checkpoint weights
            model.load_state_dict(torch.load(weights_path, map_location=device))
        else:
            # Load darknet weights
            model.load_darknet_weights(weights_path)

    return model
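The branch above dispatches on the weight file's extension: a .pth file is a PyTorch state_dict checkpoint restored via torch.load, while anything else is treated as a raw darknet .weights binary. As a standalone illustration of just that decision (the function name and labels are mine, not the repo's):

```python
def weight_format(weights_path):
    """Decide how load_model should restore weights.

    .pth  -> PyTorch state_dict checkpoint (torch.load + load_state_dict)
    else  -> raw darknet .weights binary (model.load_darknet_weights)
    None  -> keep the random weights_init_normal initialization
    """
    if weights_path is None:
        return "random_init"
    if weights_path.endswith(".pth"):
        return "pytorch_checkpoint"
    return "darknet_binary"

print(weight_format("yolov3.weights"))   # darknet_binary
print(weight_format("ckpt_99.pth"))      # pytorch_checkpoint
```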