Reference blogs:
(1) https://blog.csdn.net/plSong_CSDN/article/details/85194719
(2) https://blog.csdn.net/u012746060/article/details/81183006
(3) https://blog.csdn.net/weixin_42142612/article/details/83142213
(4) https://www.cnblogs.com/justcoder/p/10520997.html
Steps (15 in total):
1. Download the source code: https://github.com/qqwweee/keras-yolo3
2. Download the weights file into the project directory: https://pjreddie.com/media/files/yolov3.weights
# On Linux:
wget https://pjreddie.com/media/files/yolov3.weights
Convert the .weights file into a Keras-compatible h5 weights file, saved under the model_data folder:
python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5
3. Detect images with the existing h5 weights file
You can download test images from https://github.com/AlexeyAB/darknet/tree/master/data into the project directory, or use your own images.
Run:
python yolo_video.py --image
You will then be prompted for the name of the image to detect; after entering it, the detection result is displayed.
4. Video detection with a webcam
Video detection requires CUDA 9.0+, h5py and OpenCV 3.x, all of which are easy to install with Anaconda.
Again, run in the project directory:
python yolo_video.py
This performs video detection with the webcam. You can also add the --input
parameter to detect a video file instead; its value is the path of the video file to detect.
The above covers detection with pre-trained weights.
Note: to detect with the tiny model, first edit the contents of yolov3-tiny.cfg (two places, similar to step 9), then run:
python convert.py -w yolov3-tiny.cfg yolov3-tiny.weights model_data/yolo_weights.h5
5. Set up the VOC2007 dataset file structure
The file structure is VOCdevkit/VOC2007 containing Annotations, ImageSets/Main and JPEGImages. You can create it yourself, or download the VOC2007 dataset and delete the file contents.
Note: the dataset does not include test.py; you need to copy it into the VOC2007 folder yourself.
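The layout can also be created from the keras-yolo3 root with a short script (a sketch; the folder names follow the VOC2007 convention used in the later steps):

```python
import os

# Standard VOC2007 sub-folders used in this tutorial:
#   Annotations     - the LabelImg .xml files (step 6)
#   ImageSets/Main  - the split lists written by test.py (step 7)
#   JPEGImages      - the training images (step 5)
for sub in ('Annotations', 'ImageSets/Main', 'JPEGImages'):
    os.makedirs(os.path.join('VOCdevkit', 'VOC2007', sub), exist_ok=True)
```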
Then copy all of your images into the "……\keras-yolo3\VOCdevkit\VOC2007\JPEGImages" directory, preferably all in jpg format.
(For the reason why, see: https://blog.csdn.net/plSong_CSDN/article/details/86134407)
6. Annotate the images:
Use LabelImg (https://github.com/tzutalin/labelImg) to annotate the training images, generating xml labels; put all the labels (.xml) into the Annotations folder.
Note: for using labelImg on Windows 10, see:
(Zhihu: https://zhuanlan.zhihu.com/p/102385949 or CSDN: https://blog.csdn.net/weixin_44791964/article/details/103481681?ops_request_misc=%257B%2522request%255Fid%2522%253A%2522159767831019724843321104%2522%252C%2522scm%2522%253A%252220140713.130102334.pc%255Fblog.%2522%257D&request_id=159767831019724843321104&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_v2~rank_blog_v1-1-103481681.pc_v2_rank_blog_v1&utm_term=label&spm=1018.2118.3001.4187)
Note: handy shortcuts while labeling: "w": activate the bounding box tool; "d": next image; "a": previous image.
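Each LabelImg annotation is a Pascal VOC xml file; the class names and boxes can be read back out with the standard library (a sketch; the sample xml content is made up):

```python
import xml.etree.ElementTree as ET

# A minimal made-up example of what LabelImg writes per image.
sample = """<annotation>
  <filename>000001.jpg</filename>
  <object>
    <name>dog</name>
    <bndbox><xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax></bndbox>
  </object>
</annotation>"""

root = ET.fromstring(sample)
for obj in root.iter('object'):
    name = obj.find('name').text
    box = [int(obj.find('bndbox/' + tag).text) for tag in ('xmin', 'ymin', 'xmax', 'ymax')]
    print(name, box)  # dog [48, 240, 195, 371]
```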
7. Copy the "test.py" file into "……\keras-yolo3\VOCdevkit\VOC2007" and run it; the files test.txt, train.txt, trainval.txt and val.txt are generated automatically in the "……\keras-yolo3\VOCdevkit\VOC2007\ImageSets\Main" folder.
(This splits the dataset: with the percentages below, 80% goes to the training set (train.txt), 16% to the test set (test.txt) and 4% to the validation set (val.txt); the .txt files are saved under ImageSets/Main.)
test.py code (from https://github.com/EddyGao/make_VOC2007/blob/master/make_main_txt.py):
import os
import random

# Fractions controlling the split: 20% of the xml files form the trainval pool;
# 80% of that pool goes to test.txt and the rest to val.txt, while everything
# outside the pool becomes the training set.
trainval_percent = 0.2
train_percent = 0.8
xmlfilepath = 'Annotations'
txtsavepath = 'ImageSets/Main'
total_xml = os.listdir(xmlfilepath)

num = len(total_xml)
indices = range(num)
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(indices, tv)
train = random.sample(trainval, tr)

ftrainval = open('ImageSets/Main/trainval.txt', 'w')
ftest = open('ImageSets/Main/test.txt', 'w')
ftrain = open('ImageSets/Main/train.txt', 'w')
fval = open('ImageSets/Main/val.txt', 'w')

for i in indices:
    name = total_xml[i][:-4] + '\n'  # xml file name without the .xml extension
    if i in trainval:
        ftrainval.write(name)
        if i in train:
            ftest.write(name)
        else:
            fval.write(name)
    else:
        ftrain.write(name)

ftrainval.close()
ftrain.close()
fval.close()
ftest.close()
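With the percentages above (trainval_percent = 0.2, train_percent = 0.8), the split sizes work out as follows for, say, 100 annotations:

```python
# Reproduce the split arithmetic from test.py for num = 100 xml files.
num = 100
trainval_percent = 0.2
train_percent = 0.8

tv = int(num * trainval_percent)   # 20 names -> trainval.txt
tr = int(tv * train_percent)       # 16 of those -> test.txt
print(tv - tr)   # remaining 4 of the pool -> val.txt
print(num - tv)  # the other 80 -> train.txt
```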
At this point the VOC2007 dataset is complete; however, yolo3 does not use this dataset directly.
8. Run voc_annotation.py in the root directory; before running, change classes to your own classes.
Afterwards, three extra txt files (2007_train.txt, 2007_test.txt, 2007_val.txt) are generated in the main directory:
(each line contains: img_path, box coordinates (x_min, y_min, x_max, y_max) and the class index (0 1 2 ...))
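These lines can be parsed back apart easily (a sketch; the sample path and box values are made up):

```python
def parse_annotation_line(line):
    """Split one voc_annotation.py output line into (image_path, boxes)."""
    parts = line.strip().split()
    image_path = parts[0]
    # Each remaining token is "x_min,y_min,x_max,y_max,class_id".
    boxes = [tuple(int(v) for v in token.split(',')) for token in parts[1:]]
    return image_path, boxes

path, boxes = parse_annotation_line(
    'VOCdevkit/VOC2007/JPEGImages/000001.jpg 48,240,195,371,0 8,12,352,498,3')
print(path)   # VOCdevkit/VOC2007/JPEGImages/000001.jpg
print(boxes)  # [(48, 240, 195, 371, 0), (8, 12, 352, 498, 3)]
```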
Prepare for training
9. Modify the yolo3 network configuration:
Open yolov3.cfg and search for "yolo". There are three [yolo] matches, so three places to modify, and at each of them three values need changing: filters, classes and random.
filters: 3*(5+len(classes)); # here 3*(5+8) = 39
classes: len(classes) = 8, # here the eight classes are "dog", "bicycle", "car", "eagle", "giraffe", "zebra", "horse", "person"
random: 1 by default; set it to 0 if GPU memory is small.
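After editing, one of the three places might look like this (a sketch showing only the changed keys; all other keys in the [convolutional] and [yolo] sections stay as in the original yolov3.cfg):

```
[convolutional]
filters=39       # 3*(5+classes) = 3*(5+8) = 39

[yolo]
classes=8        # your own number of classes
random=0         # default 1; 0 uses less GPU memory
```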
10. Check the dependency files under the model_data folder:
(1) model_data/voc_classes.txt: must match your classes; usually needs manual editing;
(2) model_data/coco_classes.txt: must match your classes; usually needs manual editing;
(3) model_data/yolo_anchors.txt: just confirm it exists;
(4) model_data/yolo_weights.h5: converted from yolov3.weights; download yolov3.weights first, then run: python convert.py yolov3.cfg yolov3.weights model_data/yolo_weights.h5
(5) a logs folder exists (create it yourself if not).
11. Create a "\keras-yolo3\logs\000" folder under "……\keras-yolo3" to store the trained weights files.
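A quick sanity check for steps 10 and 11 (a sketch; run it from the keras-yolo3 root):

```python
import os

# Files that training expects under model_data (step 10):
required = [
    'model_data/voc_classes.txt',
    'model_data/coco_classes.txt',
    'model_data/yolo_anchors.txt',
    'model_data/yolo_weights.h5',
]
for path in required:
    print(path, 'OK' if os.path.exists(path) else 'MISSING')

# The folder that will hold the trained weights (step 11):
os.makedirs('logs/000', exist_ok=True)
```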
12. train.py was modified slightly into train_new.py; just copy it over the original file (check the details yourself) and run it directly. A loss of around 20 is already usable (batch_size=10, epoch=1000).
train_new.py
"""
Retrain the YOLO model for your own dataset.
"""
import numpy as np
import keras.backend as K
from keras.layers import Input, Lambda
from keras.models import Model
from keras.callbacks import TensorBoard, ModelCheckpoint, EarlyStopping
from yolo3.model import preprocess_true_boxes, yolo_body, tiny_yolo_body, yolo_loss
from yolo3.utils import get_random_data


def _main():
    annotation_path = '2007_train.txt'
    log_dir = 'logs/000/'
    classes_path = 'model_data/voc_classes.txt'
    anchors_path = 'model_data/yolo_anchors.txt'
    class_names = get_classes(classes_path)
    anchors = get_anchors(anchors_path)
    input_shape = (416, 416)  # multiple of 32, hw
    model = create_model(input_shape, anchors, len(class_names))
    train(model, annotation_path, input_shape, anchors, len(class_names), log_dir=log_dir)


def train(model, annotation_path, input_shape, anchors, num_classes, log_dir='logs/'):
    # The model computes its own loss as an output, so the Keras loss
    # just passes that output through.
    model.compile(optimizer='adam', loss={
        'yolo_loss': lambda y_true, y_pred: y_pred})
    logging = TensorBoard(log_dir=log_dir)
    checkpoint = ModelCheckpoint(log_dir + "ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5",
                                 monitor='val_loss', save_weights_only=True, save_best_only=True, period=1)
    batch_size = 10
    val_split = 0.1
    with open(annotation_path) as f:
        lines = f.readlines()
    np.random.shuffle(lines)
    num_val = int(len(lines) * val_split)
    num_train = len(lines) - num_val
    print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))

    model.fit_generator(data_generator_wrap(lines[:num_train], batch_size, input_shape, anchors, num_classes),
                        steps_per_epoch=max(1, num_train // batch_size),
                        validation_data=data_generator_wrap(lines[num_train:], batch_size, input_shape, anchors, num_classes),
                        validation_steps=max(1, num_val // batch_size),
                        epochs=500,
                        initial_epoch=0,
                        callbacks=[logging, checkpoint])  # without this, the checkpoints are never written
    model.save_weights(log_dir + 'trained_weights.h5')


def get_classes(classes_path):
    with open(classes_path) as f:
        class_names = f.readlines()
    class_names = [c.strip() for c in class_names]
    return class_names


def get_anchors(anchors_path):
    with open(anchors_path) as f:
        anchors = f.readline()
    anchors = [float(x) for x in anchors.split(',')]
    return np.array(anchors).reshape(-1, 2)


def create_model(input_shape, anchors, num_classes, load_pretrained=False, freeze_body=False,
                 weights_path='model_data/yolo_weights.h5'):
    K.clear_session()  # get a new session
    image_input = Input(shape=(None, None, 3))
    h, w = input_shape
    num_anchors = len(anchors)
    y_true = [Input(shape=(h // {0: 32, 1: 16, 2: 8}[l], w // {0: 32, 1: 16, 2: 8}[l],
                           num_anchors // 3, num_classes + 5)) for l in range(3)]

    model_body = yolo_body(image_input, num_anchors // 3, num_classes)
    print('Create YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes))

    if load_pretrained:
        model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
        print('Load weights {}.'.format(weights_path))
        if freeze_body:
            # Do not freeze 3 output layers.
            num = len(model_body.layers) - 7
            for i in range(num):
                model_body.layers[i].trainable = False
            print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers)))

    model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
                        arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})(
        [*model_body.output, *y_true])
    model = Model([model_body.input, *y_true], model_loss)
    return model


def data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes):
    n = len(annotation_lines)
    np.random.shuffle(annotation_lines)
    i = 0
    while True:
        image_data = []
        box_data = []
        for b in range(batch_size):
            i %= n
            image, box = get_random_data(annotation_lines[i], input_shape, random=True)
            image_data.append(image)
            box_data.append(box)
            i += 1
        image_data = np.array(image_data)
        box_data = np.array(box_data)
        y_true = preprocess_true_boxes(box_data, input_shape, anchors, num_classes)
        yield [image_data, *y_true], np.zeros(batch_size)


def data_generator_wrap(annotation_lines, batch_size, input_shape, anchors, num_classes):
    n = len(annotation_lines)
    if n == 0 or batch_size <= 0:
        return None
    return data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes)


if __name__ == '__main__':
    _main()
13. Start training: python train_new.py (GPU training is fairly fast; it finished in about three hours.) (Here the loss would not come down and test results were poor, with objects going undetected, because the homemade dataset was tiny: only 8 images.) (According to comments under other blogs, you can lower the score and IOU values.)
Image testing
14. Rename the trained weights file trained_weights.h5 in the .....\keras-yolo3\logs\000 directory to "yolo.h5" and copy it into the "……\keras-yolo3\model_data" directory.
15. Put a test image in the "……\keras-yolo3\" directory and run: python yolo_video.py --image
Note: you can modify the image-prediction function in yolo.py so that all detected images are saved into outdir:
'''
# Original version:
def detect_img(yolo):
    while True:
        img = input('Input image filename:')
        try:
            image = Image.open(img)
        except:
            print('Open Error! Try again!')
            continue
        else:
            r_image = yolo.detect_image(image)
            r_image.show()
    yolo.close_session()
'''
import os
import glob

def detect_img(yolo):
    path = r"D:\VOCdevkit\VOC2007\JPEGImages\*.jpg"     # input images
    outdir = r"D:\VOCdevkit\VOC2007\SegmentationClass"  # where results are saved
    for jpgfile in glob.glob(path):
        img = Image.open(jpgfile)
        img = yolo.detect_image(img)
        img.save(os.path.join(outdir, os.path.basename(jpgfile)))
    yolo.close_session()
Real-time video object detection with yolo3
IPWebCam (IP camera):
https://blog.csdn.net/qq_41621362/article/details/94554660
Command:
python yolo_video.py --input http://192.168.1.101:8080/video?dummy=param.mjpg
Computing the mAP metric:
https://blog.csdn.net/plsong_csdn/article/details/89502117
https://blog.csdn.net/weixin_41243159/article/details/103748428
Fixing a yolo3-keras detection error: TypeError: function takes exactly 1 argument (3 given)
https://blog.csdn.net/qq_27871973/article/details/84252488
The error says a PIL function received one argument where three were given. To record the fix: the failure is in yolo.py, where the detected regions are drawn with bounding boxes and class labels at test time. The author's test images (coco, etc.) are RGB, so three channel values are passed to the rectangle function and nothing goes wrong; with a grayscale test image, however, the call fails. The hint in reference [1] helped resolve this, many thanks!
As for the cause: when drawing a rectangle on an image, this function requires the colour-space dimensionality of the input image to match that of the rectangle colour. For example, a grayscale input (without channel expansion) can only be drawn on with a grayscale colour, while an RGB input can be drawn on with an RGB colour.
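One way around the error is therefore to convert the image to RGB before drawing, so the 3-tuple colour matches the image mode (a sketch using Pillow; the image size and box coordinates are made up):

```python
from PIL import Image, ImageDraw

img = Image.new('L', (100, 100))   # 'L' = single-channel grayscale test image
if img.mode != 'RGB':
    img = img.convert('RGB')       # expand to 3 channels so an RGB colour is valid
draw = ImageDraw.Draw(img)
draw.rectangle([10, 10, 60, 60], outline=(255, 0, 0))  # would raise TypeError on an 'L' image
print(img.mode)
```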