Training a dataset with Darknet (Linux) ---- detailed and ready to use
Download darknet
git clone https://github.com/AlexeyAB/darknet.git
Edit the Makefile and change
GPU=0 CUDNN=0 to
GPU=1 CUDNN=1
If darknet.so cannot be found, also set LIBSO=1
(this assumes CUDA and cuDNN are already configured)
Build with make; make clean removes the build output
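The Makefile edits above can be scripted rather than done by hand. A minimal sketch; Makefile.sample is a stand-in file created here so the commands are verifiable, so point the same sed at the real Makefile in the darknet directory:

```shell
# Stand-in for the real Makefile (only the flag lines matter here)
cat > Makefile.sample <<'EOF'
GPU=0
CUDNN=0
OPENCV=0
LIBSO=0
EOF

# Flip the three flags in place
sed -i 's/^GPU=0/GPU=1/; s/^CUDNN=0/CUDNN=1/; s/^LIBSO=0/LIBSO=1/' Makefile.sample
cat Makefile.sample
```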
Converting xml to txt
This xml-to-txt converter has two advantages:
1. It detects the dataset classes automatically
2. It reads the size of each image directly, avoiding errors from xml files whose size field is 0
import os
import xml.etree.ElementTree as ET
from PIL import Image

# Class names
CLASSES = ['vertical', 'horizontal', 'single']

# Input xml/image directories and output txt directory
xml_input = r"C:\Users\GW\Desktop\data\xmls"
imgs_input = r'C:\Users\GW\Desktop\data\imgs'
txt_output = r'C:\Users\GW\Desktop\data\txt'  # create the txt directory manually if it does not exist
# Automatically collect the class names that actually appear in the xml files
def auto_classes(filenames):
    global CLASSES
    found = []
    for image_id in filenames:
        tree = ET.parse(os.path.join(xml_input, image_id))
        root = tree.getroot()
        for obj in root.iter("object"):
            found.append(obj.find("name").text)
    # Deduplicate and store back into the global list
    # (the original version built a local list and discarded the set() result)
    CLASSES = sorted(set(found))
def convert(size, box):
    # Convert a bbox from (xmin, xmax, ymin, ymax) corner format to
    # normalized (center x, center y, width, height) format
    dw = 1. / size[0]
    dh = 1. / size[1]
    # Center point
    x = (box[0] + box[1]) / 2.0
    y = (box[2] + box[3]) / 2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    # Normalize by the image size
    x = x * dw
    w = w * dw
    y = y * dh
    h = h * dh
    return (x, y, w, h)
def convert_annotation(image_id):
    # Convert the xml file for image_id into a detection label file (txt)
    # containing the class id and the normalized bbox (center x, center y, W, H)
    tree = ET.parse(os.path.join(xml_input, image_id))
    root = tree.getroot()
    image_id = image_id.split(".")[0]  # image id without extension
    # Read the image size directly (fixes xml files whose size field is 0)
    img_path = os.path.join(imgs_input, image_id + '.png')
    with Image.open(img_path) as image:
        width, height = image.size
    file_path = os.path.join(txt_output, image_id + '.txt')
    # Open the txt once per image in write mode: all boxes are written in a
    # single pass, and re-running the script does not append duplicates
    with open(file_path, "w") as out_file:
        for obj in root.iter("object"):
            difficult = obj.find("difficult").text
            obj_cls = obj.find("name").text
            if obj_cls not in CLASSES or int(difficult) == 1:
                continue
            cls_id = CLASSES.index(obj_cls)
            xmlbox = obj.find("bndbox")
            points = (float(xmlbox.find("xmin").text),
                      float(xmlbox.find("xmax").text),
                      float(xmlbox.find("ymin").text),
                      float(xmlbox.find("ymax").text))
            bb = convert((width, height), points)
            out_file.write(str(cls_id) + " " + " ".join(str(a) for a in bb) + "\n")
def make_label_txt():
    # Create one image_id.txt per image_id.xml with the extracted bbox info
    filenames = os.listdir(xml_input)
    auto_classes(filenames)
    print(CLASSES)
    for file in filenames:
        convert_annotation(file)

if __name__ == "__main__":
    # Run the extraction and conversion
    make_label_txt()
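As a quick sanity check of the normalization, here is a standalone copy of convert with hand-checkable numbers (a 200x100 box centered in a 400x200 image maps to all-0.5 coordinates):

```python
def convert(size, box):
    # box = (xmin, xmax, ymin, ymax); size = (width, height)
    dw, dh = 1.0 / size[0], 1.0 / size[1]
    x = (box[0] + box[1]) / 2.0 * dw   # normalized center x
    y = (box[2] + box[3]) / 2.0 * dh   # normalized center y
    w = (box[1] - box[0]) * dw         # normalized width
    h = (box[3] - box[2]) * dh         # normalized height
    return (x, y, w, h)

print(convert((400, 200), (100, 300, 50, 150)))  # → (0.5, 0.5, 0.5, 0.5)
```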
Training the dataset
Go into the data directory and create a weights directory; pretrained weights can be downloaded from the official site:
wget https://pjreddie.com/media/files/yolov3.weights
Create a train folder for the training data and a test folder for the test data; each png must sit in the same directory as its txt label file, otherwise training will report errors
Create obj.names and obj.data to hold the class names and file paths, for example:
obj.names
vertical
horizontal
single
obj.data
classes = 3
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = data/weights  # directory where updated weight files are saved
Also create train.txt and test.txt, listing the training and test image paths:
find `pwd`/train/imgs -name \*.png > train.txt
find `pwd`/test -name \*.png > test.txt
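If you prefer staying in Python, the same lists can be produced there; this is a minimal sketch that mirrors the find commands above (the directory names in the usage comment are placeholders for your own layout):

```python
import os

def list_images(img_dir, out_txt, ext=".png"):
    """Collect absolute paths of all images under img_dir into out_txt,
    one path per line (same result as the `find` commands above)."""
    paths = []
    for root, _dirs, files in os.walk(img_dir):
        for name in sorted(files):
            if name.endswith(ext):
                paths.append(os.path.abspath(os.path.join(root, name)))
    with open(out_txt, "w") as f:
        f.write("\n".join(paths) + "\n")
    return paths

# Usage (placeholder paths):
# list_images("data/train/imgs", "data/train.txt")
# list_images("data/test", "data/test.txt")
```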
Adjusting the configuration file (cfg):
Common settings:
batch=64            (64 images per batch)
subdivisions=16     (the batch is split into 16 forward passes, i.e. 64/16 = 4 images at a time; larger chunks easily run out of GPU memory)
width=416
height=416
angle=0             (random rotation angle, 0~180)
learning_rate=0.001 (learning rate)
burn_in=1000
max_batches = 50200
steps=10000,12000   (iterations at which the learning rate changes)
scales=0.1,0.1      (at each step the learning rate is multiplied by 0.1)
[convolutional]
size=1
stride=1
pad=1
filters=24 #3*(classes+5)
activation=linear
[yolo]
mask = 0,1,2
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326 # anchor (prior) boxes; yolov3 derives these by clustering and they can be regenerated
classes=3
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
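The classes/filters pairing (filters = 3*(classes+5) in the [convolutional] block directly above each [yolo] block, three places in yolov3.cfg) can be patched mechanically instead of by hand. A minimal sketch, assuming a standard yolov3-style cfg layout; patch_cfg is a hypothetical helper, not part of darknet:

```python
def patch_cfg(cfg_text, num_classes):
    """Set classes= in every [yolo] block and filters=3*(classes+5)
    in the [convolutional] block immediately preceding it."""
    filters = 3 * (num_classes + 5)
    lines = cfg_text.splitlines()
    last_filters_idx = None  # index of the most recent filters= line
    for i, line in enumerate(lines):
        s = line.strip()
        if s.startswith("filters="):
            last_filters_idx = i
        elif s == "[yolo]":
            if last_filters_idx is not None:
                lines[last_filters_idx] = f"filters={filters}"
        elif s.startswith("classes="):
            lines[i] = f"classes={num_classes}"
    return "\n".join(lines)

cfg = "[convolutional]\nfilters=255\n[yolo]\nclasses=80"
print(patch_cfg(cfg, 3).splitlines())
# → ['[convolutional]', 'filters=24', '[yolo]', 'classes=3']
```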
darknet detector commands:
./darknet detector test <data_cfg> <models_cfg> <weights> <test_file> [-thresh] [-out]
./darknet detector train <data_cfg> <models_cfg> <weights> [-thresh] [-gpu] [-gpus] [-clear]
./darknet detector valid <data_cfg> <models_cfg> <weights> [-out] [-thresh]
./darknet detector recall <data_cfg> <models_cfg> <weights> [-thresh]
Relevant options:
-clear: discard the training state stored in the weights and train from scratch
-thresh: only show detections whose confidence is at least the threshold (default 0.005)
-out: output file name; results are written to the results folder by default
-gpus 0,1: train on multiple GPUs (use nvidia-smi to check GPU ids and status)
-dont_show: do not open a display window
-map: compute and display mAP during training
See src/detector.c for the full list
Generating anchor boxes (then update anchors in the cfg to match)
./darknet detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416
-num_of_clusters: number of anchor boxes
-width -height: network input size
Training the model
./darknet detector train data/obj.data cfg/yolov3.cfg data/weights/yolov3.weights
Run in the background:
nohup ./darknet detector train data/obj.data cfg/yolov3.cfg data/weights/yolov3.weights > out.txt 2>&1 &  # without redirection nohup writes to nohup.out
Testing the model
./darknet detector test data/obj.data cfg/yolov3.cfg data/weights/yolov3.weights data/test/xxx.png
Training output (excerpt):
18723/50200: loss=0.4 hours left=46.5
18723: 0.386321, 0.518175 avg loss, 0.001000 rate, 5.657589 seconds, 1198272 images, 46.511484 hours left
v3 (mse loss, Normalizer: (iou: 0.75, obj: 1.00, cls: 1.00) Region 82 Avg (IOU: 0.851339), count: 8, class_loss = 0.000016, iou_loss = 0.109539, total_loss = 0.109555
v3 (mse loss, Normalizer: (iou: 0.75, obj: 1.00, cls: 1.00) Region 94 Avg (IOU: 0.757716), count: 10, class_loss = 0.592243, iou_loss = 0.450531, total_loss = 1.042775
v3 (mse loss, Normalizer: (iou: 0.75, obj: 1.00, cls: 1.00) Region 106 Avg (IOU: 0.766373), count: 8, class_loss = 0.768172, iou_loss = 0.879061, total_loss = 1.647233
total_bbox = 6280049, rewritten_bbox = 0.409360 %
Line 1: trained batches / total batches, average loss, and estimated time remaining (extrapolated from the current training speed, so treat it as a rough guide)
Avg IOU: mean intersection-over-union between predicted and ground-truth boxes in the current iteration; larger is better, ideal value 1
Class: classification accuracy on labeled objects; larger is better, ideal value 1
Obj: larger is better, ideal value 1
No obj: smaller is better
0.5R: recall at an IOU threshold of 0.5; recall = detected positives / actual positives
0.75R: recall at an IOU threshold of 0.75
count: number of positive samples
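To track loss over a long run (e.g. from the out.txt produced by the nohup command), the per-iteration summary line in the excerpt above can be parsed. A minimal sketch assuming exactly that line format:

```python
import re

# Matches "18723: 0.386321, 0.518175 avg loss, ..."
LINE_RE = re.compile(r"^\s*(\d+): ([\d.]+), ([\d.]+) avg loss")

def parse_avg_loss(log_lines):
    """Yield (iteration, average loss) pairs from darknet training output."""
    for line in log_lines:
        m = LINE_RE.match(line)
        if m:
            yield int(m.group(1)), float(m.group(3))

sample = "18723: 0.386321, 0.518175 avg loss, 0.001000 rate, 5.657589 seconds"
print(list(parse_avg_loss([sample])))  # → [(18723, 0.518175)]
```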
Special notes
Data format: darknet expects yolo txt labels, produced here with xml_to_txt.py; the size field inside an xml may default to 0, which is why the script reads the size from the image itself
cfg file:
Comment out the Testing lines and uncomment the Training lines
#Testing
#batch=1
#subdivisions=1
Set classes to the number of classes in the dataset
Set filters (the number of convolution kernels) to 3*(classes+5): change the last filters in the [convolutional] block directly above each [yolo] block, 3 places in total
If rewritten_bbox = 0.000000 % stays at zero, check whether each txt label file contains only one line of parameters; the converter may have opened the txt in 'w' (overwrite) mode inside the object loop
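That overwrite mistake is easy to spot by histogramming line counts across the label files: if every file has exactly one line, only the last box per image survived. A small sketch (label_line_counts is a hypothetical helper for this check):

```python
import os
from collections import Counter

def label_line_counts(txt_dir):
    """Return a Counter mapping lines-per-file -> number of label files.
    If the histogram has only the key 1, suspect 'w' mode in the converter."""
    counts = Counter()
    for name in os.listdir(txt_dir):
        if name.endswith(".txt"):
            with open(os.path.join(txt_dir, name)) as f:
                counts[sum(1 for _ in f)] += 1
    return counts
```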