Setting Up YOLOv3 for Object Detection and Training Your Own Model

A detailed, step-by-step guide to building the YOLOv3 object detection framework and training a model on your own data.

Installing the labelImg annotation tool on Ubuntu 18.04 (PyQt5)
# No need to run this inside a virtual environment
sudo apt-get install pyqt5-dev-tools
sudo pip3 install lxml
git clone https://github.com/tzutalin/labelImg.git
cd labelImg
make qt5py3      # 'make all' would pick up PyQt4 first
python3 labelImg.py  # launch labelImg
Using the annotation tool
Open imports a single image.
Open Dir opens a folder; use Next Image and Prev Image to step through all images.
Change Save Dir changes the directory the XML files are saved to.
Verify Image marks an image as verified (recorded in its XML file).
Save writes the XML annotation file.
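
To see what Save actually writes, here is a minimal sketch that parses one of labelImg's PASCAL VOC XML files with the standard library (the file name Annotations/dog.xml is a hypothetical example):
import xml.etree.ElementTree as ET

tree = ET.parse('Annotations/dog.xml')   # hypothetical annotation file
root = tree.getroot()

print(root.find('filename').text)        # the image these boxes belong to
for obj in root.iter('object'):
    name = obj.find('name').text         # class label, e.g. "dabao"
    box = obj.find('bndbox')
    print(name, box.find('xmin').text, box.find('ymin').text,
          box.find('xmax').text, box.find('ymax').text)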

Project setup

Project source code: git clone https://github.com/qqwweee/keras-yolo3
Download the YOLOv3 pretrained weights from the YOLO site: https://pjreddie.com/media/files/yolov3.weights and place the file in the project root.
Tiny YOLOv3 weights are available at: https://pjreddie.com/darknet/yolo/

CPU environment (installing CPU TensorFlow on Ubuntu)

# Option 1
Download a prebuilt wheel from https://github.com/lakshayg/tensorflow-build   --------> did not work for me
# Option 2
pip install tensorflow-cpu==1.15.0 -i https://pypi.douban.com/simple/  # install directly from the command line
# Option 3 (the one currently in use)
pip install tensorflow==1.12.0
pip install Keras==2.2.4
pip install Pillow==8.2.0

GPU environment (requires the GPU build of TensorFlow); training on a GPU is recommended, as it is much faster

cuda 8.0
python 3.6
conda install tensorflow-gpu==1.12.0
conda install keras

Working with conda

conda create -n zs python=3.6   # create a virtual environment named zs with Python 3.6
conda remove -n zs --all        # delete the virtual environment zs
conda activate zs               # activate the environment
conda deactivate                # leave the environment

Running the default model

python yolo_video.py --image

Steps to train your own model

  • Annotate your images with labelImg.

    (screenshot placeholder)

  • Generate the XML annotation files (one per image).

    (screenshot placeholder)

  • Build your dataset directory structure (a sketch of the layout follows below).

    (screenshot placeholders)
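
Judging from the paths used by convert_to_txt.py and voc_annotation.py below, the dataset follows the standard PASCAL VOC layout; a sketch of the expected tree:

VOCdevkit/
└── VOC2007/
    ├── Annotations/       # XML files written by labelImg
    ├── ImageSets/Main/    # train/val/test split lists (generated by convert_to_txt.py)
    └── JPEGImages/        # the .jpg images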

  • The convert_to_txt.py script (running it produces the split files shown above)
    import os
    import random

    trainval_percent = 0.1  # 10% of the data goes into the trainval pool
    train_percent = 0.9     # nearly everything ends up in train.txt, because train.py
                            # does its own train/val split; no need to hold much out here
    xmlfilepath = 'Annotations'        # run this script from inside VOCdevkit/VOC2007
    txtsavepath = 'ImageSets/Main'
    total_xml = os.listdir(xmlfilepath)

    num = len(total_xml)
    indices = range(num)    # renamed from 'list' to avoid shadowing the built-in
    tv = int(num * trainval_percent)
    tr = int(tv * train_percent)
    trainval = random.sample(indices, tv)
    train = random.sample(trainval, tr)

    ftrainval = open('ImageSets/Main/trainval.txt', 'w')
    ftest = open('ImageSets/Main/test.txt', 'w')
    ftrain = open('ImageSets/Main/train.txt', 'w')
    fval = open('ImageSets/Main/val.txt', 'w')

    for i in indices:
        name = total_xml[i][:-4] + '\n'   # strip the .xml extension
        if i in trainval:
            ftrainval.write(name)
            if i in train:
                ftest.write(name)
            else:
                fval.write(name)
        else:
            ftrain.write(name)            # the remaining 90% goes to train.txt

    ftrainval.close()
    ftrain.close()
    fval.close()
    ftest.close()
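
    A quick, optional sanity check (a sketch; run from inside VOCdevkit/VOC2007) to confirm the split sizes:
    # Print how many image IDs landed in each split file.
    for split in ('trainval', 'train', 'val', 'test'):
        with open('ImageSets/Main/%s.txt' % split) as f:
            print(split, len(f.read().split()))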
    
  • Convert the annotation data by running voc_annotation.py (located in the project root). Before running it, edit classes to the classes you want to detect.
    import xml.etree.ElementTree as ET
    from os import getcwd
    
    sets=[('2007', 'train'), ('2007', 'val'), ('2007', 'test')]
    
    # classes = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]
    classes = ["dabao"]   #这里是我定义的大宝,此处修改
    
    
    def convert_annotation(year, image_id, list_file):
        in_file = open('VOCdevkit/VOC%s/Annotations/%s.xml'%(year, image_id))
        tree=ET.parse(in_file)
        root = tree.getroot()
    
        for obj in root.iter('object'):
            difficult = obj.find('difficult').text
            cls = obj.find('name').text
            if cls not in classes or int(difficult)==1:
                continue
            cls_id = classes.index(cls)
            xmlbox = obj.find('bndbox')
            b = (int(xmlbox.find('xmin').text), int(xmlbox.find('ymin').text), int(xmlbox.find('xmax').text), int(xmlbox.find('ymax').text))
            list_file.write(" " + ",".join([str(a) for a in b]) + ',' + str(cls_id))
    
    wd = getcwd()
    
    for year, image_set in sets:
        image_ids = open('VOCdevkit/VOC%s/ImageSets/Main/%s.txt'%(year, image_set)).read().strip().split()
        list_file = open('%s_%s.txt'%(year, image_set), 'w')
        for image_id in image_ids:
            list_file.write('%s/VOCdevkit/VOC%s/JPEGImages/%s.jpg'%(wd, year, image_id))
            convert_annotation(year, image_id, list_file)
            list_file.write('\n')
        list_file.close()
    
  • Files generated after running voc_annotation.py (see the example line after this list)
    • Location of the generated annotation files

      (screenshot placeholder)

    • Data format of the generated 2007_train.txt (the training file)

      (screenshot placeholder)

    • The generated 2007_val.txt

      (screenshot placeholder)
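
    Reading off what voc_annotation.py writes, each line contains one absolute image path followed by space-separated boxes, each encoded as xmin,ymin,xmax,ymax,class_id. A made-up example line:

      /home/user/keras-yolo3/VOCdevkit/VOC2007/JPEGImages/000001.jpg 48,240,195,371,0 8,12,352,498,0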

  • Create the class file my_classes.txt (the file name is up to you, but it must match classes_path in train.py)

    (screenshot placeholder)
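
    get_classes() in train.py reads one class name per line, so for the single-class example above the file simply contains:

      dabao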

  • Create the weights file: the Darknet YOLO model must be converted to a Keras model
    python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5
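    A quick check that the conversion worked (a sketch; assumes the command above wrote model_data/yolo.h5) is to load the file back and count its parameters:
    from keras.models import load_model

    model = load_model('model_data/yolo.h5', compile=False)  # convert.py saves a full Keras model
    print(model.count_params())  # YOLOv3 should report roughly 62 million parameters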
    
  • Modify the training config yolov3.cfg. Searching for yolo in this file gives three matches, all changed the same way; the first match is shown below, with three commented lines to edit per match, i.e. 9 edits in total.

(screenshot placeholder)

##### What to change #####
[convolutional]
size=1
stride=1
pad=1
filters=18  # 3*(5+len(classes)); I train one class, so 3*(5+1) = 18
activation=linear


[yolo]
mask = 6,7,8
anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
classes=1 # one class
num=9
jitter=.3
ignore_thresh = .5
truth_thresh = 1
random=1 # set this to 0 if GPU memory is tight
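
The filters value follows from the YOLO head layout: each of the 3 anchors per scale predicts 4 box offsets, 1 objectness score, and one score per class. A small helper to compute it for any class count:
def yolo_filters(num_classes, anchors_per_scale=3):
    """Filters for the convolutional layer feeding each [yolo] block."""
    return anchors_per_scale * (5 + num_classes)

print(yolo_filters(1))   # 18, the single-class value used above
print(yolo_filters(20))  # 75, the stock 20-class VOC value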
  • Suggested settings (for reference)
    val_split = 0.1 # train/validation split ratio
    batch_size = 5 # samples per batch
    epochs = 300 # number of training epochs
    
  • Modify the training script train.py
    """
    Retrain the YOLO model for your own dataset.
    """
    # Keep TensorFlow from grabbing all of the GPU memory
    import tensorflow as tf
    from keras.backend.tensorflow_backend import set_session
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.3
    set_session(tf.Session(config=config))
    
    import numpy as np
    import keras.backend as K
    from keras.layers import Input, Lambda
    from keras.models import Model
    from keras.optimizers import Adam
    from keras.callbacks import TensorBoard, ModelCheckpoint, ReduceLROnPlateau, EarlyStopping
    
    from yolo3.model import preprocess_true_boxes, yolo_body, tiny_yolo_body, yolo_loss
    from yolo3.utils import get_random_data
    
    
    def _main():
        annotation_path = '2007_train.txt'   # change this to your training file
        log_dir = 'logs/000/'  # customize this path if you like
        classes_path = 'model_data/my_classes.txt'  # change this to your class file
        anchors_path = 'model_data/yolo_anchors.txt'
        # anchors_path = 'model_data/dabao.txt'
        class_names = get_classes(classes_path)
        num_classes = len(class_names)
        anchors = get_anchors(anchors_path)
    
        input_shape = (416,416) # multiple of 32, hw
    
        is_tiny_version = len(anchors)==6 # default setting
        if is_tiny_version:
            model = create_tiny_model(input_shape, anchors, num_classes,
                freeze_body=2, weights_path='model_data/tiny_yolo_weights.h5')
        else:
            model = create_model(input_shape, anchors, num_classes,
                freeze_body=2, weights_path='model_data/yolo_weights.h5') # make sure you know what you freeze
    
        logging = TensorBoard(log_dir=log_dir)
        checkpoint = ModelCheckpoint(log_dir + 'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5',
            monitor='val_loss', save_weights_only=True, save_best_only=True, period=3)
        reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3, verbose=1)
        early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1)
    
        val_split = 0.25  # adjust this to the size of your dataset
        with open(annotation_path) as f:
            lines = f.readlines()
        np.random.seed(10101)
        np.random.shuffle(lines)
        np.random.seed(None)
        num_val = int(len(lines)*val_split)
        num_train = len(lines) - num_val
    
        # Train with frozen layers first, to get a stable loss.
        # Adjust num epochs to your dataset. This step is enough to obtain a not bad model.
        if True:
            model.compile(optimizer=Adam(lr=1e-3), loss={
                # use custom yolo_loss Lambda layer.
                'yolo_loss': lambda y_true, y_pred: y_pred})
    
            batch_size = 4   # tune this to your GPU memory; lower it if memory is tight
            print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))
            model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes),
                    steps_per_epoch=max(1, num_train//batch_size),
                    validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes),
                    validation_steps=max(1, num_val//batch_size),
                    epochs=50,
                    initial_epoch=0,
                    callbacks=[logging, checkpoint])
            model.save_weights(log_dir + 'trained_weights_stage_1.h5')
    
        # Unfreeze and continue training, to fine-tune.
        # Train longer if the result is not good.
        if True:
            for i in range(len(model.layers)):
                model.layers[i].trainable = True
            model.compile(optimizer=Adam(lr=1e-4), loss={'yolo_loss': lambda y_true, y_pred: y_pred}) # recompile to apply the change
            print('Unfreeze all of the layers.')
    
            batch_size = 4 # note that more GPU memory is required after unfreezing the body
            print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))
            model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes),
                steps_per_epoch=max(1, num_train//batch_size),
                validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes),
                validation_steps=max(1, num_val//batch_size),
                epochs=100,
                initial_epoch=50,
                callbacks=[logging, checkpoint, reduce_lr, early_stopping])
            model.save_weights(log_dir + 'trained_weights_final.h5')  # the final trained weights; log_dir is set above
    
        # Further training if needed.
    
    
    def get_classes(classes_path):
        '''loads the classes'''
        with open(classes_path) as f:
            class_names = f.readlines()
        class_names = [c.strip() for c in class_names]
        return class_names
    
    def get_anchors(anchors_path):
        '''loads the anchors from a file'''
        with open(anchors_path) as f:
            anchors = f.readline()
        anchors = [float(x) for x in anchors.split(',')]
        return np.array(anchors).reshape(-1, 2)
    
    
    def create_model(input_shape, anchors, num_classes, load_pretrained=True, freeze_body=2,
                weights_path='model_data/yolo_weights.h5'):
        '''create the training model'''
        K.clear_session() # get a new session
        image_input = Input(shape=(None, None, 3))
        h, w = input_shape
        num_anchors = len(anchors)
    
        y_true = [Input(shape=(h//{0:32, 1:16, 2:8}[l], w//{0:32, 1:16, 2:8}[l], \
            num_anchors//3, num_classes+5)) for l in range(3)]
    
        model_body = yolo_body(image_input, num_anchors//3, num_classes)
        print('Create YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes))
    
        if load_pretrained:
            model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
            print('Load weights {}.'.format(weights_path))
            if freeze_body in [1, 2]:
                # Freeze darknet53 body or freeze all but 3 output layers.
                num = (185, len(model_body.layers)-3)[freeze_body-1]
                for i in range(num): model_body.layers[i].trainable = False
                print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers)))
    
        model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
            arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})(
            [*model_body.output, *y_true])
        model = Model([model_body.input, *y_true], model_loss)
    
        return model
    
    def create_tiny_model(input_shape, anchors, num_classes, load_pretrained=True, freeze_body=2,
                weights_path='model_data/tiny_yolo_weights.h5'):
        '''create the training model, for Tiny YOLOv3'''
        K.clear_session() # get a new session
        image_input = Input(shape=(None, None, 3))
        h, w = input_shape
        num_anchors = len(anchors)
    
        y_true = [Input(shape=(h//{0:32, 1:16}[l], w//{0:32, 1:16}[l], \
            num_anchors//2, num_classes+5)) for l in range(2)]
    
        model_body = tiny_yolo_body(image_input, num_anchors//2, num_classes)
        print('Create Tiny YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes))
    
        if load_pretrained:
            model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
            print('Load weights {}.'.format(weights_path))
            if freeze_body in [1, 2]:
                # Freeze the darknet body or freeze all but 2 output layers.
                num = (20, len(model_body.layers)-2)[freeze_body-1]
                for i in range(num): model_body.layers[i].trainable = False
                print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers)))
    
        model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
            arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.7})(
            [*model_body.output, *y_true])
        model = Model([model_body.input, *y_true], model_loss)
    
        return model
    
    def data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes):
        '''data generator for fit_generator'''
        n = len(annotation_lines)
        i = 0
        while True:
            image_data = []
            box_data = []
            for b in range(batch_size):
                if i==0:
                    np.random.shuffle(annotation_lines)
                image, box = get_random_data(annotation_lines[i], input_shape, random=True)
                image_data.append(image)
                box_data.append(box)
                i = (i+1) % n
            image_data = np.array(image_data)
            box_data = np.array(box_data)
            y_true = preprocess_true_boxes(box_data, input_shape, anchors, num_classes)
            yield [image_data, *y_true], np.zeros(batch_size)
    
    def data_generator_wrapper(annotation_lines, batch_size, input_shape, anchors, num_classes):
        n = len(annotation_lines)
        if n==0 or batch_size<=0: return None
        return data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes)
    
    if __name__ == '__main__':
        _main()
    
    
  • Train the model
    # Run train.py to start training
    python train.py
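
    While training runs you can watch the loss curves with TensorBoard, since train.py registers a TensorBoard callback: point it at the log directory (tensorboard --logdir logs/000) and open the printed URL in a browser.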
    


  • Use the model. Once training is done, yolo.py runs detection with your trained model.
    • Modify yolo.py to use your trained model

      # -*- coding: utf-8 -*-
      """
      Class definition of YOLO_v3 style detection model on image and video
      """
      # Keep TensorFlow from grabbing all of the GPU memory
      import tensorflow as tf
      from keras.backend.tensorflow_backend import set_session
      config = tf.ConfigProto()
      config.gpu_options.per_process_gpu_memory_fraction = 0.3
      set_session(tf.Session(config=config))
      
      import colorsys
      import os
      from timeit import default_timer as timer
      
      import numpy as np
      from keras import backend as K
      from keras.models import load_model
      from keras.layers import Input
      from PIL import Image, ImageFont, ImageDraw
      
      from yolo3.model import yolo_eval, yolo_body, tiny_yolo_body
      from yolo3.utils import letterbox_image
      import os
      from keras.utils import multi_gpu_model
      
      class YOLO(object):
          _defaults = {
              # "model_path": 'model_data/yolo.h5',
              "model_path": 'trained_weights_final.h5',  # 此处修改成你的训练模型
              "anchors_path": 'model_data/yolo_anchors.txt',
              # "classes_path": 'model_data/coco_classes.txt',
              "classes_path": 'model_data/my_classes.txt',  # 此处修改成你的类别
              "score" : 0.3,   # 此处可根据你的识别情况做修改
              "iou" : 0.45,
              "model_image_size": (416, 416),
              "gpu_num" : 1,
          }
      
          @classmethod
          def get_defaults(cls, n):
              if n in cls._defaults:
                  return cls._defaults[n]
              else:
                  return "Unrecognized attribute name '" + n + "'"
      
          def __init__(self, **kwargs):
              self.__dict__.update(self._defaults) # set up default values
              self.__dict__.update(kwargs) # and update with user overrides
              self.class_names = self._get_class()
              self.anchors = self._get_anchors()
              self.sess = K.get_session()
              self.boxes, self.scores, self.classes = self.generate()
      
          def _get_class(self):
              classes_path = os.path.expanduser(self.classes_path)
              with open(classes_path) as f:
                  class_names = f.readlines()
              class_names = [c.strip() for c in class_names]
              return class_names
      
          def _get_anchors(self):
              anchors_path = os.path.expanduser(self.anchors_path)
              with open(anchors_path) as f:
                  anchors = f.readline()
              anchors = [float(x) for x in anchors.split(',')]
              return np.array(anchors).reshape(-1, 2)
      
          def generate(self):
              model_path = os.path.expanduser(self.model_path)
              assert model_path.endswith('.h5'), 'Keras model or weights must be a .h5 file.'
      
              # Load model, or construct model and load weights.
              num_anchors = len(self.anchors)
              num_classes = len(self.class_names)
              is_tiny_version = num_anchors==6 # default setting
              try:
                  self.yolo_model = load_model(model_path, compile=False)
              except:
                  self.yolo_model = tiny_yolo_body(Input(shape=(None,None,3)), num_anchors//2, num_classes) \
                      if is_tiny_version else yolo_body(Input(shape=(None,None,3)), num_anchors//3, num_classes)
                  self.yolo_model.load_weights(self.model_path) # make sure model, anchors and classes match
              else:
                  assert self.yolo_model.layers[-1].output_shape[-1] == \
                      num_anchors/len(self.yolo_model.output) * (num_classes + 5), \
                      'Mismatch between model and given anchor and class sizes'
      
              print('{} model, anchors, and classes loaded.'.format(model_path))
      
              # Generate colors for drawing bounding boxes.
              hsv_tuples = [(x / len(self.class_names), 1., 1.)
                            for x in range(len(self.class_names))]
              self.colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
              self.colors = list(
                  map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)),
                      self.colors))
              np.random.seed(10101)  # Fixed seed for consistent colors across runs.
              np.random.shuffle(self.colors)  # Shuffle colors to decorrelate adjacent classes.
              np.random.seed(None)  # Reset seed to default.
      
              # Generate output tensor targets for filtered bounding boxes.
              self.input_image_shape = K.placeholder(shape=(2, ))
              if self.gpu_num>=2:
                  self.yolo_model = multi_gpu_model(self.yolo_model, gpus=self.gpu_num)
              boxes, scores, classes = yolo_eval(self.yolo_model.output, self.anchors,
                      len(self.class_names), self.input_image_shape,
                      score_threshold=self.score, iou_threshold=self.iou)
              return boxes, scores, classes
      
          def detect_image(self, image):
              start = timer()
      
              if self.model_image_size != (None, None):
                  assert self.model_image_size[0]%32 == 0, 'Multiples of 32 required'
                  assert self.model_image_size[1]%32 == 0, 'Multiples of 32 required'
                  boxed_image = letterbox_image(image, tuple(reversed(self.model_image_size)))
              else:
                  new_image_size = (image.width - (image.width % 32),
                                    image.height - (image.height % 32))
                  boxed_image = letterbox_image(image, new_image_size)
              image_data = np.array(boxed_image, dtype='float32')
      
              print(image_data.shape)
              image_data /= 255.
              image_data = np.expand_dims(image_data, 0)  # Add batch dimension.
      
              out_boxes, out_scores, out_classes = self.sess.run(
                  [self.boxes, self.scores, self.classes],
                  feed_dict={
                      self.yolo_model.input: image_data,
                      self.input_image_shape: [image.size[1], image.size[0]],
                      K.learning_phase(): 0
                  })
      
              print('Found {} boxes for {}'.format(len(out_boxes), 'img'))
      
              font = ImageFont.truetype(font='font/FiraMono-Medium.otf',
                          size=np.floor(3e-2 * image.size[1] + 0.5).astype('int32'))
              thickness = (image.size[0] + image.size[1]) // 300
      
              for i, c in reversed(list(enumerate(out_classes))):
                  predicted_class = self.class_names[c]
                  box = out_boxes[i]
                  score = out_scores[i]
      
                  label = '{} {:.2f}'.format(predicted_class, score)
                  draw = ImageDraw.Draw(image)
                  label_size = draw.textsize(label, font)
      
                  top, left, bottom, right = box
                  top = max(0, np.floor(top + 0.5).astype('int32'))
                  left = max(0, np.floor(left + 0.5).astype('int32'))
                  bottom = min(image.size[1], np.floor(bottom + 0.5).astype('int32'))
                  right = min(image.size[0], np.floor(right + 0.5).astype('int32'))
                  print(label, (left, top), (right, bottom))
      
                  if top - label_size[1] >= 0:
                      text_origin = np.array([left, top - label_size[1]])
                  else:
                      text_origin = np.array([left, top + 1])
      
                  # My kingdom for a good redistributable image drawing library.
                  for i in range(thickness):
                      draw.rectangle(
                          [left + i, top + i, right - i, bottom - i],
                          outline=self.colors[c])
                  draw.rectangle(
                      [tuple(text_origin), tuple(text_origin + label_size)],
                      fill=self.colors[c])
                  draw.text(text_origin, label, fill=(0, 0, 0), font=font)
                  del draw
      
              end = timer()
              print(end - start)
              return image
      
          def close_session(self):
              self.sess.close()
      
      def detect_video(yolo, video_path, output_path=""):
          import cv2
          vid = cv2.VideoCapture(video_path)
          if not vid.isOpened():
              raise IOError("Couldn't open webcam or video")
          video_FourCC    = int(vid.get(cv2.CAP_PROP_FOURCC))
          video_fps       = vid.get(cv2.CAP_PROP_FPS)
          video_size      = (int(vid.get(cv2.CAP_PROP_FRAME_WIDTH)),
                              int(vid.get(cv2.CAP_PROP_FRAME_HEIGHT)))
          isOutput = True if output_path != "" else False
          if isOutput:
              print("!!! TYPE:", type(output_path), type(video_FourCC), type(video_fps), type(video_size))
              out = cv2.VideoWriter(output_path, video_FourCC, video_fps, video_size)
          accum_time = 0
          curr_fps = 0
          fps = "FPS: ??"
          prev_time = timer()
          while True:
              return_value, frame = vid.read()
              image = Image.fromarray(frame)
              image = yolo.detect_image(image)
              result = np.asarray(image)
              curr_time = timer()
              exec_time = curr_time - prev_time
              prev_time = curr_time
              accum_time = accum_time + exec_time
              curr_fps = curr_fps + 1
              if accum_time > 1:
                  accum_time = accum_time - 1
                  fps = "FPS: " + str(curr_fps)
                  curr_fps = 0
              cv2.putText(result, text=fps, org=(3, 15), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
                          fontScale=0.50, color=(255, 0, 0), thickness=2)
              cv2.namedWindow("result", cv2.WINDOW_NORMAL)
              cv2.imshow("result", result)
              if isOutput:
                  out.write(result)
              if cv2.waitKey(1) & 0xFF == ord('q'):
                  break
          yolo.close_session()
      
      if __name__ == '__main__':
          yolo=YOLO()
          path = './img/1.jpg'
          try:
              image = Image.open(path)
          except:
              print('Open Error! Try again!')
          else:
              r_image = yolo.detect_image(image)
              r_image.save("./img/cup2.jpg")
              r_image.show()
          yolo.close_session()
      
    • Test the model

      import sys
      import argparse
      from yolo import YOLO, detect_video
      from PIL import Image
      
      
      if __name__ == '__main__':
          config = {
              "model_path": "logs/000/trained_weights_final.h5", # 加载模型(你的模型)
              "score": 0.3, # 超出这个值的预测才会被显示
              "iou": 0.5, # 交并比
          }
          yolo = YOLO(**config)
          image = Image.open("./img/dabao1.jpg")
          r_image = yolo.detect_image(image)
          r_image.save("./img/dabao1_detect.jpg")
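
      To run the model over a whole folder of images, a small loop around YOLO.detect_image works (a sketch; the img/ and out/ directory names are assumptions):
      import os
      from PIL import Image
      from yolo import YOLO

      yolo = YOLO(**config)                # reuse the config dict from above
      os.makedirs('out', exist_ok=True)    # hypothetical output folder
      for fname in os.listdir('img'):      # hypothetical input folder
          if fname.lower().endswith(('.jpg', '.jpeg', '.png')):
              image = Image.open(os.path.join('img', fname))
              yolo.detect_image(image).save(os.path.join('out', fname))
      yolo.close_session()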
      
    • Results

      (screenshot placeholder)

  • Reference links
    https://pjreddie.com/darknet/yolo/   # official YOLO site
    # The posts below were consulted for this article; thanks to their authors for sharing
    https://blog.csdn.net/qinchang1/article/details/89608058
    https://my.oschina.net/u/876354/blog/1927881
    https://www.it610.com/article/1277379316287553536.htm
    https://www.cnblogs.com/WindrunnerMax/p/12782939.html

  • On GPU memory being used up (GPU environments), see this link:
    https://blog.csdn.net/sinat_26917383/article/details/75633754  # thanks to the author for sharing
    
    # Keras tends to claim all GPU memory by default; you can limit this by reconfiguring the backend's GPU options.
    import tensorflow as tf
    from keras.backend.tensorflow_backend import set_session
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.3
    set_session(tf.Session(config=config))
    
  • Making full use of the CPU with TensorFlow (CPU environments)
    num_cores = 4
    
    config = tf.ConfigProto(intra_op_parallelism_threads=num_cores, inter_op_parallelism_threads=num_cores,
                            allow_soft_placement=True, device_count={'CPU': 4})
    session = tf.Session(config=config)
    K.set_session(session)
    
    '''
    # Notes
    device_count caps the number of CPUs the TF session may use; raise it if you have many cores.
    inter_op_parallelism_threads and intra_op_parallelism_threads set how many threads the session uses for parallel ops; smaller values mean less thread reuse and potentially more CPU cores engaged. A value of 0 lets TF pick a suitable value automatically.
    allow_soft_placement=True: machines differ in their CPU/GPU setup; with this enabled, TF falls back to whatever GPU or CPU is available when the requested device cannot be used.
    '''
    
