Training YOLOv2 and YOLOv3 on Windows

YOLO on Windows

1. What do you need?

1.1 A computer with an Nvidia GPU will speed up the whole process a lot; otherwise it may take you several days to get your model well trained. If you have no Nvidia graphics card, try yolo-tiny instead of YOLOv2 or YOLOv3.

1.2 Install MSVS (MSVS 2015 or MSVS 2013 is recommended), CUDA, cuDNN (optional), and OpenCV. According to AlexeyAB’s advice on GitHub, you should install MSVS before CUDA. Here are some reference blogs that may help: Blog 1, Blog 2.

Download  MSVS 2015

Download  CUDA

Download  cuDNN

Download OpenCV (don’t forget to add its path to the system PATH after you install OpenCV)

Download  Nvidia Driver

2. How to compile on Windows?

After you set up the environment, you can compile darknet. Here are some useful references: Blog 1, Blog 2; both are based on this.
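If you use AlexeyAB’s darknet repository, and assuming CUDA and OpenCV are already installed, the build roughly goes like this (see the repository’s README for details):

  • open build\darknet\darknet.sln in MSVS 2015
  • set the solution configuration to Release and the platform to x64
  • if your OpenCV or CUDA version differs from the project defaults, adjust the include and library directories in the project properties (similar to the Yolo_mark settings shown later)
  • to use cuDNN, add CUDNN to C/C++ -> Preprocessor -> Preprocessor Definitions
  • build the darknet project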

 

After compiling, you’ll get darknet.exe (under \darknet-master\build\darknet\x64), and then you can start to train YOLO on your own data.

3. How to train YOLO

3.1 Prepare your pictures

Rename the pictures in numeric order (you can start from 0); voc_label.py lives under build\darknet\x64\data\voc. In this way, you can easily divide the data into a training set and a validation set later; a small renaming sketch follows.
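A minimal renaming sketch, assuming the original pictures sit in a hypothetical raw_images/ folder and should end up numbered from 0 in JPEGImages/:

import os
import shutil

src_dir = 'raw_images/'    # hypothetical folder holding the original pictures
dst_dir = 'JPEGImages/'    # folder the later scripts expect
os.makedirs(dst_dir, exist_ok=True)

# copy the pictures with sequential numeric names: 0.jpg, 1.jpg, 2.jpg, ...
for i, name in enumerate(sorted(os.listdir(src_dir))):
    ext = os.path.splitext(name)[1]
    shutil.copy(os.path.join(src_dir, name), os.path.join(dst_dir, str(i) + ext))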

3.2 Label pictures

3.2.1 labelImg

I used a great tool, labelImg, to get this done. If you don’t want to download a bunch of stuff and compile it yourself, use this link, password: cnn6. However, I’m not sure whether the hotkeys will work on your computer, and for now the linked version can only export .xml annotations (a properly installed version can export .txt files directly).

The script below splits the numbered images into a training list and a validation list (ImageSets/Main/train.txt and ImageSets/Main/val.txt); a separate sketch for converting the .xml annotations into YOLO-format .txt labels follows after it.

import os

if __name__ == '__main__':
    source_folder = 'JPEGImages/'          # folder with the renamed, numbered images
    dest = 'ImageSets/Main/train.txt'      # training list
    dest2 = 'ImageSets/Main/val.txt'       # validation list
    file_list = os.listdir(source_folder)
    train_file = open(dest, 'a')
    val_file = open(dest2, 'a')
    for file_obj in file_list:
        file_name, file_extend = os.path.splitext(file_obj)
        file_num = int(file_name)          # works because the images are numbered
        if file_num < 1800:                # divide data into train set and validation set
            train_file.write(file_name + '\n')
        else:
            val_file.write(file_name + '\n')
    train_file.close()
    val_file.close()
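If you still need to turn the labelImg .xml annotations into YOLO-format .txt labels, a minimal converter sketch is shown below (the class list and the Annotations/ and labels/ folder names are assumptions; adjust them to your data):

import os
import xml.etree.ElementTree as ET

classes = ['class_a', 'class_b', 'class_c']   # placeholder names; must match data/voc.names

def convert_annotation(xml_path, txt_path):
    # parse one Pascal-VOC-style .xml file and write a YOLO .txt label file
    root = ET.parse(xml_path).getroot()
    size = root.find('size')
    w = float(size.find('width').text)
    h = float(size.find('height').text)
    with open(txt_path, 'w') as out:
        for obj in root.iter('object'):
            name = obj.find('name').text
            if name not in classes:
                continue
            box = obj.find('bndbox')
            xmin = float(box.find('xmin').text)
            ymin = float(box.find('ymin').text)
            xmax = float(box.find('xmax').text)
            ymax = float(box.find('ymax').text)
            # YOLO format: class_id x_center y_center width height, all relative to image size
            x = (xmin + xmax) / 2.0 / w
            y = (ymin + ymax) / 2.0 / h
            bw = (xmax - xmin) / w
            bh = (ymax - ymin) / h
            out.write('%d %.6f %.6f %.6f %.6f\n' % (classes.index(name), x, y, bw, bh))

if __name__ == '__main__':
    xml_dir = 'Annotations/'   # where labelImg saved the .xml files
    txt_dir = 'labels/'        # where the .txt labels should go
    os.makedirs(txt_dir, exist_ok=True)
    for f in os.listdir(xml_dir):
        if f.endswith('.xml'):
            convert_annotation(os.path.join(xml_dir, f),
                               os.path.join(txt_dir, os.path.splitext(f)[0] + '.txt'))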

3.2.2 Yolo mark

Windows & Linux GUI for marking bounding boxes of objects in images for training YOLOv3 and v2. To compile on Windows, open yolo_mark.sln in MSVS 2013/2015, compile it as x64 & Release, and run the file x64/Release/yolo_mark.cmd. Change the paths in yolo_mark.sln to the OpenCV 2.x/3.x installed on your computer:

  • (right click on project) -> properties -> C/C++ -> General -> Additional Include Directories: C:\opencv_3.0\opencv\build\include;
  • (right click on project) -> properties -> Linker -> General -> Additional Library Directories: C:\opencv_3.0\opencv\build\x64\vc14\lib;

To test, simply run x64/Release/yolo_mark.cmd.

Screenshot of the Yolo_mark GUI: https://camo.githubusercontent.com/e1e33a7ef92dfc86ab8929dd0e8e96395cbcab5c/68747470733a2f2f686162726173746f726167652e6f72672f66696c65732f3232392f6630362f3237372f32323966303632373766636334393237393334326237656466616262623437612e6a7067

3.3 Change the config files

Two config files should be changed, cfg/voc.data and cfg/yolo-voc.cfg, and one file, data/voc.names, should be created.

3.3.1 data/voc.names

Write the object names in this file, one per line.
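For example, for the 3-class detector assumed throughout this post (the names below are placeholders; use your own), data/voc.names is simply:

car
person
bicycle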

3.3.2 cfg/voc.data

The following is for reference (a minimal sketch; the paths are examples, and train/valid must point to text files listing the full paths of the training and validation images):
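classes = 3
train = data/voc/train.txt
valid = data/voc/val.txt
names = data/voc.names
backup = backup/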

3.3.3 cfg/yolo-voc.cfg (for YOLOv2)

Change the classes in [region] and the filters in the last [convolutional]:

classes=3

filters=40  # filters = (classes + coords + 1) * num = (3 + 4 + 1) * 5 = 40; num=5 is the number of bounding boxes predicted per grid cell
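For reference, a sketch of how the end of cfg/yolo-voc.cfg looks for 3 classes (lines not shown here, such as the anchors, stay as shipped):

......

[convolutional]
size=1
stride=1
pad=1
filters=40  # (classes + coords + 1) * num = (3 + 4 + 1) * 5
activation=linear

[region]
classes=3
coords=4
num=5
......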

3.3.4 cfg/yolov3.cfg (for YOLOv3)

filters = (classes + 5) * 3; in this case filters = (3 + 5) * 3 = 24. Change filters in each of the three [convolutional] layers directly before the three [yolo] layers, and classes in each [yolo] layer.

If the GPU memory is very small, set random=0 to turn off multi-scale training; otherwise set random=1.

[net]
# Testing
# batch=1
# subdivisions=1
# Training
batch=64
subdivisions=8

......

[convolutional]
size=1
stride=1
pad=1
filters=24  # filters = (classes + 5) * 3
activation=linear

[yolo]
mask = 6,7,8
anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
classes=3
num=9
jitter=.3
ignore_thresh = .5
truth_thresh = 1
random=0  # set random=0 to turn off multi-scale training if GPU memory is small, else random=1

......

[convolutional]
size=1
stride=1
pad=1
filters=24  # filters = (classes + 5) * 3
activation=linear

[yolo]
mask = 3,4,5
anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
classes=3  # 20 in the original VOC cfg
num=9
jitter=.3
ignore_thresh = .5
truth_thresh = 1
random=0  # 1 in the original cfg

......

[convolutional]
size=1
stride=1
pad=1
filters=24  # filters = (classes + 5) * 3
activation=linear

[yolo]
mask = 0,1,2
anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
classes=3
num=9
jitter=.3
ignore_thresh = .5
truth_thresh = 1
random=0  # 1 in the original cfg

 

3.4 Start to train

Download the pre-trained weights (darknet19_448.conv.23 for YOLOv2, yolov3.weights for YOLOv3) and put them in the directory build\darknet\x64.

Start training by using the command line:

Yolov2

darknet.exe detector train data/voc.data yolo-voc.cfg darknet19_448.conv.23  

Yolov3

darknet.exe detector train data/voc.data yolov3.cfg yolov3.weights 

 

Trained weights will be saved in the directory build\darknet\x64\backup, and if your training is interrupted, change the weights file in the command above so you can continue.
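For example, with AlexeyAB’s fork, which periodically writes files such as yolov3_last.weights into backup\ (the exact file names depend on your cfg name and iteration count), resuming could look like:

darknet.exe detector train data/voc.data yolov3.cfg backup/yolov3_last.weights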

4. Test your model

Yolov2

For images:

darknet.exe detector test data/voc.data yolo-voc.cfg yolo-voc.weights  

then enter the path of the image when prompted

For real-time object detection:

darknet.exe detector demo data/voc.data yolo-voc.cfg yolo-voc.weights  

 

Yolov3

For images:

darknet.exe detector test data/coco.data yolov3.cfg yolov3.weights -thresh 0.25

then enter the path of the image when prompted

For real-time object detection:

darknet.exe detector demo data/voc.data yolov3.cfg yolov3.weights  
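To run the demo on a video file instead of the default camera, append the path of the video (the file name below is only an example):

darknet.exe detector demo data/voc.data yolov3.cfg yolov3.weights test.mp4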