YOLOv1 Code Reproduction

0. About the Weight Files

        A reply to everyone asking for the weight files: I have refactored the code and added pre-trained weights, but because the YOLO model contains fully connected layers its weight file exceeds 1 GB and I have no way to upload it, so please train it yourselves.


1. YOLO v1 Overview

        Two-stage detectors split detection-and-recognition into two steps: candidate-region extraction and object recognition. Because a candidate-region extraction step precedes the actual classification and box regression, two-stage detectors achieve high recognition accuracy and precise candidate boxes, but at a very large performance cost. YOLOv1, the work that opened the YOLO line, creatively drops the candidate-region (Proposal Region) extraction: the input image is simply divided into a grid, and each grid cell is responsible for predicting the objects whose centers fall inside it. Precisely because the Proposal Region step is missing, its regression accuracy is comparatively lower. YOLO v1 is end-to-end and predicts directly rather than extracting candidate regions first; proposal-based pipelines, which classify each candidate region after it has been selected, look more like a classification problem, whereas YOLO casts detection as a regression problem in which the target regions are estimated directly by the model.

One-stage vs. two-stage:

One-stage pros: fast inference, fast training; low background false-detection rate.
One-stage cons: lower localization accuracy and recall; poor small-object detection.
Two-stage pros: high accuracy; precise localization and high recall.
Two-stage cons: slow inference, slow training; higher background false-detection rate.

2. YOLOv1 Network Structure

        In the authors' implementation of YOLO v1, the input image size is fixed at 448*448. After 24 convolutional layers the network produces a 7*7*1024 feature map, which the 2 fully connected layers then turn into the final prediction; the 7*7 layout matches the idea of dividing the original image into S*S cells, and every tensor on the feature map carries the high-level semantic information needed by the subsequent prediction task.

        As the figure shows, YOLO v1 divides an image into S*S cells, which the authors call grid cells. For a 448*448 image, the convolutional layers output a 7*7*1024 feature map; each 1*1*1024 tensor on it corresponds to the features extracted from one grid cell of the original image, with different channels encoding different semantic abstractions. Each grid cell predicts two Bounding Boxes plus the class of the object the cell is responsible for, and finally an NMS (non-maximum suppression) step removes redundant Bounding Boxes to produce the detection result.
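
The NMS step is not shown in the code in this post, so here is a minimal sketch of the idea under a simplified interface of my own (not the repository's function): repeatedly keep the highest-scoring box and drop any remaining box whose IoU with it exceeds a threshold.

# Minimal NMS sketch for one class; boxes are [xmin, ymin, xmax, ymax, score].
def nms(boxes, iou_thresh=0.5):
    def iou(a, b):
        lx, ly = max(a[0], b[0]), max(a[1], b[1])
        rx, ry = min(a[2], b[2]), min(a[3], b[3])
        if rx < lx or ry < ly:
            return 0.0
        inter = (rx - lx) * (ry - ly)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)  # best score first
    keep = []
    while boxes:
        best = boxes.pop(0)          # keep the highest-scoring remaining box
        keep.append(best)
        boxes = [b for b in boxes if iou(best, b) < iou_thresh]  # drop overlaps
    return keep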

       As the figure shows, the YOLO v1 network consists of 24 convolutional layers, 4 max-pooling layers, and 2 fully connected layers: the convolution and pooling layers extract features and the fully connected layers do the prediction. The last fully connected layer outputs 7*7*30, where 7*7 corresponds to the 7*7 grid cells the original image is divided into.

Pre-training model definition:

import torch.nn as nn
import torch

class Convention(nn.Module):
    def __init__(self,in_channels,out_channels,conv_size,conv_stride,padding,need_bn = True):
        super(Convention,self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, conv_size, conv_stride, padding, bias=False if need_bn else True)
        self.leaky_relu = nn.LeakyReLU(inplace=True,negative_slope=1e-1)
        self.need_bn = need_bn
        if need_bn:
            self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, x):
        return self.bn(self.leaky_relu(self.conv(x))) if self.need_bn else self.leaky_relu(self.conv(x))

    def weight_init(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                torch.nn.init.kaiming_normal_(m.weight.data)
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

class YOLO_Feature(nn.Module):

    def __init__(self, classes_num=80):
        super(YOLO_Feature,self).__init__()

        self.Conv_Feature = nn.Sequential(
            Convention(3, 64, 7, 2, 3),
            nn.MaxPool2d(2, 2),

            Convention(64, 192, 3, 1, 1),
            nn.MaxPool2d(2, 2),

            Convention(192, 128, 1, 1, 0),
            Convention(128, 256, 3, 1, 1),
            Convention(256, 256, 1, 1, 0),
            Convention(256, 512, 3, 1, 1),
            nn.MaxPool2d(2, 2),

            Convention(512, 256, 1, 1, 0),
            Convention(256, 512, 3, 1, 1),
            Convention(512, 256, 1, 1, 0),
            Convention(256, 512, 3, 1, 1),
            Convention(512, 256, 1, 1, 0),
            Convention(256, 512, 3, 1, 1),
            Convention(512, 256, 1, 1, 0),
            Convention(256, 512, 3, 1, 1),
            Convention(512, 512, 1, 1, 0),
            Convention(512, 1024, 3, 1, 1),
            nn.MaxPool2d(2, 2),
        )

        self.Conv_Semanteme = nn.Sequential(
            Convention(1024, 512, 1, 1, 0),
            Convention(512, 1024, 3, 1, 1),
            Convention(1024, 512, 1, 1, 0),
            Convention(512, 1024, 3, 1, 1),
        )

        self.avg_pool = nn.AdaptiveAvgPool2d(1)

        self.linear = nn.Linear(1024, classes_num)

    def forward(self, x):
        x = self.Conv_Feature(x)
        x = self.Conv_Semanteme(x)
        x = self.avg_pool(x)
        # batch_size * channel * width * height
        x = x.permute(0, 2, 3, 1)
        x = torch.flatten(x, start_dim=1, end_dim=3)
        x = self.linear(x)
        return x

    # weight initialization
    def initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                torch.nn.init.kaiming_normal_(m.weight.data)
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
            elif isinstance(m, nn.Linear):
                torch.nn.init.kaiming_normal_(m.weight.data)
                m.bias.data.zero_()
            elif isinstance(m, Convention):
                m.weight_init()
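
A quick smoke test of my own (not from the repository) confirms the classifier's output shape; a 256*256 input is downsampled to an 8*8 map before the global average pooling:

# Hypothetical smoke test, assuming the YOLO_Feature class above is defined.
import torch

net = YOLO_Feature(classes_num=80)
net.initialize_weights()
dummy = torch.randn(2, 3, 256, 256)   # a fake batch of two 256*256 RGB images
print(net(dummy).shape)               # torch.Size([2, 80])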

Note: I upgraded the first 20 plain convolutions of YOLOv1 into Conv+BN layers, using BN to speed up convergence. Today BN + residual connections are the standard recipe for CNN feature extraction; note, though, that in a GAN generator LayerNorm tends to be the better fit, and each domain has its own appropriate normalization.

Pre-training on object crops from the COCO dataset (though if you have the resources, ImageNet pre-training is still recommended):

Note: I previously trained on the ImageNet-Tiny dataset with very poor results. Inspecting the data showed that in Tiny-style datasets the object to classify occupies a very small fraction of the whole image, and since YOLOv1's backbone ends in global average pooling this makes convergence problems worse. For example, suppose both labels are fish species: in one image a fish fills the frame, in the other a person holds a fish. After global pooling the second image mixes in many human features; if we now pull those features toward the fish class, then once we meet an image whose target is a person we must pull the same features toward the person class, so the network keeps swinging between the two decisions ("a person is a fish" vs. "a person is a person") and cannot converge.

My workaround: use the existing COCO bounding-box annotations to crop out the image regions that carry the least extraneous information, and train on those regions.

Pros: less extraneous information, so the network trains and converges more easily.

Cons: the images are simpler so the task becomes easier, and the network may never learn to use background context to help judge objects.

import cv2
import os
import time
import random
import imagesize
import numpy as np
from utils import image
from torch.utils.data import Dataset
import torchvision.transforms as transforms

class coco_classify_dataset(Dataset):
    def __init__(self,imgs_path = "../DataSet/COCO2017/Train/Imgs", txts_path = "../DataSet/COCO2017/Train/Labels", is_train = True, edge_threshold=200, class_num=80, input_size=256):  # input_size: input image size
        img_names = os.listdir(txts_path)
        self.is_train = is_train

        self.transform_common = transforms.Compose([
            transforms.ToTensor(),  # height * width * channel -> channel * height * width
            transforms.Normalize(mean=(0.408, 0.448, 0.471), std=(0.242, 0.239, 0.234))  # normalization makes gradient explosion less likely
        ])

        self.input_size = input_size
        self.train_data = []  # [img_path,[[coord, class_id]]]

        for img_name in img_names:
            img_path = os.path.join(imgs_path, img_name.replace(".txt", ".jpg"))
            txt_path = os.path.join(txts_path, img_name)

            coords = []

            with open(txt_path, 'r') as label_txt:
                for label in label_txt:
                    label = label.replace("\n", "").split(" ")
                    class_id = int(label[4])

                    if class_id >= class_num:
                        continue

                    xmin = round(float(label[0]))
                    ymin = round(float(label[1]))
                    xmax = round(float(label[2]))
                    ymax = round(float(label[3]))

                    if (xmax - xmin) < edge_threshold or (ymax - ymin) < edge_threshold:
                        continue

                    coords.append([xmin, ymin, xmax, ymax, class_id])

            if len(coords) != 0:
                self.train_data.append([img_path, coords])

    def __getitem__(self, item):

        img_path, coords = self.train_data[item]
        img = cv2.imread(img_path)
        random.seed(int(time.time()))
        random_index = random.randint(0, len(coords) - 1)
        xmin, ymin, xmax, ymax, class_index = coords[random_index]

        img = img[ymin: ymax, xmin: xmax]

        if self.is_train:
            transform_seed = random.randint(0, 2)

            if transform_seed == 0:  # original image
                img = image.resize_image_without_annotation(img, self.input_size, self.input_size)

            elif transform_seed == 1:  # resize + center crop
                img, coords = image.center_crop_with_coords(img, coords)
                img, coords = image.resize_image_with_coords(img, self.input_size, self.input_size, coords)

            else:  # brightness adjustment (the YOLO paper calls this "exposure")
                img = image.resize_image_without_annotation(img, self.input_size, self.input_size)
                img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
                H, S, V = cv2.split(img)
                cv2.merge([np.uint8(H), np.uint8(S), np.uint8(np.clip(V * 1.5, 0, 255))], dst=img)  # clip to avoid uint8 overflow
                cv2.cvtColor(src=img, dst=img, code=cv2.COLOR_HSV2BGR)

        else:
            img = image.resize_image_without_annotation(img, self.input_size, self.input_size)

        img = self.transform_common(img)
        return img, class_index

    def __len__(self):
        return len(self.train_data)
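
A minimal usage sketch of my own for wiring the dataset into a DataLoader (the paths are the constructor defaults and must point at your local COCO crops):

# Hypothetical usage, assuming the coco_classify_dataset class above and local data.
from torch.utils.data import DataLoader

train_set = coco_classify_dataset(imgs_path="../DataSet/COCO2017/Train/Imgs",
                                  txts_path="../DataSet/COCO2017/Train/Labels",
                                  is_train=True)
loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)
imgs, labels = next(iter(loader))
print(imgs.shape, labels.shape)  # torch.Size([32, 3, 256, 256]) torch.Size([32])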

Note: I apply data augmentation during training only; validation runs without augmentation.

Training script:

#------0.common variable definition------
import torch
import argparse
import torch.nn as nn
from tqdm import tqdm
import torch.optim as optim
from utils.model import accuracy
from tensorboardX import SummaryWriter
from torch.utils.data import DataLoader
from utils.model import feature_map_visualize
from YOLO.PreTrain.YOLO_Feature import YOLO_Feature
from YOLO.PreTrain.COCO_Classify_DataSet import coco_classify_dataset
if torch.cuda.is_available():
    device = torch.device('cuda:0')
    torch.backends.cudnn.benchmark = True
else:
    device = torch.device('cpu')

if __name__ == "__main__":

    # 1.training parameters
    parser = argparse.ArgumentParser(description="YOLO_Feature train config")
    parser.add_argument('--batch_size', type=int, help="YOLO_Feature train batch_size", default=32)
    parser.add_argument('--num_workers', type=int, help="YOLO_Feature train num_worker num", default=4)
    parser.add_argument('--lr', type=float, help="lr", default=3e-4)
    parser.add_argument('--weight_decay', type=float, help="weight_decay", default=0.0005)
    parser.add_argument('--epoch_num', type=int, help="YOLO_Feature train epoch_num", default=200)
    parser.add_argument('--epoch_interval', type=int, help="save YOLO_Feature interval", default=10)
    parser.add_argument('--class_num', type=int, help="YOLO_Feature train class_num", default=80)
    parser.add_argument('--train_imgs', type=str, help="YOLO_Feature train train_imgs", default="../../DataSet/COCO2017/Train/Imgs")
    parser.add_argument('--train_labels', type=str, help="YOLO_Feature train train_labels", default="../../DataSet/COCO2017/Train/Labels")
    parser.add_argument('--val_imgs', type=str, help="YOLO_Feature train val_imgs", default="../../DataSet/COCO2017/Val/Imgs")
    parser.add_argument('--val_labels', type=str, help="YOLO_Feature train val_labels", default="../../DataSet/COCO2017/Val/Labels")
    parser.add_argument('--grad_visualize', type=bool, help="YOLO_Feature train grad visualize", default=False)
    parser.add_argument('--feature_map_visualize', type=bool, help="YOLO_Feature train feature map visualize", default=False)
    parser.add_argument('--restart', type=bool, help="YOLO_Feature train from scratch?", default=True)
    parser.add_argument('--pre_weight_file', type=str, help="YOLO_Feature pre weight path", default="./weights/YOLO_Feature_20.pth")
    args = parser.parse_args()

    batch_size = args.batch_size
    num_workers = args.num_workers
    epoch_num = args.epoch_num
    epoch_interval = args.epoch_interval
    class_num = args.class_num

    if args.restart == True:
        lr = args.lr
        param_dict = {}
        epoch = 0
        epoch_val_loss_min = 999999999

    else:
        param_dict = torch.load(args.pre_weight_file, map_location=torch.device("cpu"))
        optimal_dict = param_dict['optimal']
        epoch = param_dict['epoch']
        epoch_val_loss_min = param_dict['epoch_val_loss_min']

    # 2.dataset
    train_dataSet = coco_classify_dataset(imgs_path=args.train_imgs,txts_path=args.train_labels, is_train=True, edge_threshold=200)
    val_dataSet = coco_classify_dataset(imgs_path=args.val_imgs,txts_path=args.val_labels, is_train=False, edge_threshold=200)

    # 3-4.network - optimizer
    yolo_feature = YOLO_Feature(classes_num=class_num)
    if args.restart == True:
        yolo_feature.initialize_weights()
        optimizer = optim.Adam(params=yolo_feature.parameters(), lr=args.lr, weight_decay=args.weight_decay)
    else:
        yolo_feature.load_state_dict(param_dict['model'])
        optimizer = param_dict['optimizer']
    yolo_feature.to(device=device, non_blocking=True)

    # 5.loss
    loss_function = nn.CrossEntropyLoss().to(device=device)

    # 6.train and record
    input_size = 256
    writer = SummaryWriter(logdir='./log', filename_suffix=' [' + str(epoch) + '~' + str(epoch + epoch_interval) + ']')

    while epoch < epoch_num:

        epoch_train_loss = 0
        epoch_val_loss = 0
        epoch_train_top1_acc = 0
        epoch_train_top5_acc = 0
        epoch_val_top1_acc = 0
        epoch_val_top5_acc = 0

        train_loader = DataLoader(dataset=train_dataSet, batch_size=batch_size, shuffle=True, num_workers=num_workers,
                                  pin_memory=True)
        train_len = train_loader.__len__()
        yolo_feature.train()
        with tqdm(total=train_len) as tbar:

            for batch_index, batch_train in enumerate(train_loader):
                train_data = batch_train[0].float().to(device=device, non_blocking=True)
                label_data = batch_train[1].long().to(device=device, non_blocking=True)
                net_out = yolo_feature(train_data)
                loss = loss_function(net_out, label_data)
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()
                batch_loss = loss.item() * batch_size
                epoch_train_loss = epoch_train_loss + batch_loss

                # compute accuracy
                net_out = net_out.detach()
                [top1_acc, top5_acc] = accuracy(net_out, label_data)
                top1_acc = top1_acc.item()
                top5_acc = top5_acc.item()

                epoch_train_top1_acc = epoch_train_top1_acc + top1_acc
                epoch_train_top5_acc = epoch_train_top5_acc + top5_acc

                tbar.set_description(
                    "train: class_loss:{} top1-acc:{} top5-acc:{}".format(round(loss.item(), 4), round(top1_acc, 4),
                                                                          round(top5_acc, 4), refresh=True))
                tbar.update(1)

                if args.feature_map_visualize:
                    feature_map_visualize(train_data[0][0], writer, yolo_feature)
                # print("batch_index : {} ; batch_loss : {}".format(batch_index, batch_loss))
            print(
                "train-mean: batch_loss:{} batch_top1_acc:{} batch_top5_acc:{}".format(round(epoch_train_loss / train_loader.__len__(), 4), round(
                    epoch_train_top1_acc / train_loader.__len__(), 4), round(
                    epoch_train_top5_acc / train_loader.__len__(), 4)))

        # lr_reduce_scheduler.step()

        val_loader = DataLoader(dataset=val_dataSet, batch_size=batch_size, shuffle=True, num_workers=num_workers,
                                pin_memory=True)
        val_len = val_loader.__len__()
        yolo_feature.eval()
        with tqdm(total=val_len) as tbar:
            with torch.no_grad():
                for batch_index, batch_train in enumerate(val_loader):
                    train_data = batch_train[0].float().to(device=device, non_blocking=True)
                    label_data = batch_train[1].long().to(device=device, non_blocking=True)
                    net_out = yolo_feature(train_data)
                    loss = loss_function(net_out, label_data)
                    batch_loss = loss.item() * batch_size
                    epoch_val_loss = epoch_val_loss + batch_loss

                    # compute accuracy
                    net_out = net_out.detach()
                    [top1_acc, top5_acc] = accuracy(net_out, label_data)
                    top1_acc = top1_acc.item()
                    top5_acc = top5_acc.item()

                    epoch_val_top1_acc = epoch_val_top1_acc + top1_acc
                    epoch_val_top5_acc = epoch_val_top5_acc + top5_acc

                    tbar.set_description(
                        "val: class_loss:{} top1-acc:{} top5-acc:{}".format(round(loss.item(), 4), round(top1_acc, 4),
                                                                            round(top5_acc, 4), refresh=True))
                    tbar.update(1)

                if args.feature_map_visualize:
                    feature_map_visualize(train_data[0][0], writer, yolo_feature)
                # print("batch_index : {} ; batch_loss : {}".format(batch_index, batch_loss))
            print(
                "val-mean: batch_loss:{} batch_top1_acc:{} batch_top5_acc:{}".format(round(epoch_val_loss / val_loader.__len__(), 4), round(
                    epoch_val_top1_acc / val_loader.__len__(), 4), round(
                    epoch_val_top5_acc / val_loader.__len__(), 4)))
        epoch = epoch + 1

        if epoch_val_loss < epoch_val_loss_min:
            epoch_val_loss_min = epoch_val_loss
            optimal_dict = yolo_feature.state_dict()

        if epoch % epoch_interval == 0:
            param_dict['model'] = yolo_feature.state_dict()
            param_dict['optimizer'] = optimizer
            param_dict['epoch'] = epoch
            param_dict['optimal'] = optimal_dict
            param_dict['epoch_val_loss_min'] = epoch_val_loss_min
            torch.save(param_dict, './weights/YOLO_Feature_' + str(epoch) + '.pth')
            writer.close()
            writer = SummaryWriter(logdir='log', filename_suffix='[' + str(epoch) + '~' + str(epoch + epoch_interval) + ']')

        avg_train_sample_loss = epoch_train_loss / batch_size / train_loader.__len__()
        avg_val_sample_loss = epoch_val_loss / batch_size / val_loader.__len__()

        print("epoch:{}, train_sample_avg_loss:{}, val_sample_avg_loss:{}".format(epoch, avg_train_sample_loss, avg_val_sample_loss))

        if args.grad_visualize:
            for i, (name, layer) in enumerate(yolo_feature.named_parameters()):
                if 'bn' not in name:
                    writer.add_histogram(name + '_grad', layer.grad, epoch)  # log the gradients, not the weights

        writer.add_scalar('Train/Loss_sample', avg_train_sample_loss, epoch)
        writer.add_scalar('Train/Batch_Acc_Top1', round(epoch_train_top1_acc / train_loader.__len__(), 4), epoch)
        writer.add_scalar('Train/Batch_Acc_Top5', round(epoch_train_top5_acc / train_loader.__len__(), 4), epoch)

        writer.add_scalar('Val/Loss_sample', avg_val_sample_loss, epoch)
        writer.add_scalar('Val/Batch_Acc_Top1', round(epoch_val_top1_acc / val_loader.__len__(), 4), epoch)
        writer.add_scalar('Val/Batch_Acc_Top5', round(epoch_val_top5_acc / val_loader.__len__(), 4), epoch)

    writer.close()

3. YOLOv1 Output Structure


        As the figure shows, since the authors used the VOC dataset (20 classes) to train and evaluate YOLO v1, in the predicted output tensor the first two 5-dimensional groups each hold one Bounding Box's object confidence together with its center coordinates and width/height, and the trailing 20 dimensions hold the probabilities of the 20 classes.

        IoU (Intersection over Union)


            In object detection, IoU is an important metric: it measures how close two boxes are (how much they overlap) as the ratio between the area of their intersection and the area of their union.

Computing the intersection of two rectangles: see "223. Rectangle Area" (The Shawshank Redemption, CSDN blog).

def iou(self, box1, box2):  # compute the IoU of two boxes
    # box: lx = top-left x, ly = top-left y, rx = bottom-right x, ry = bottom-right y (image x grows rightward, y grows downward)
    # 1. top-left and bottom-right corners of the intersection rectangle
    interLX = max(box1[0],box2[0])
    interLY = max(box1[1],box2[1])
    interRX = min(box1[2],box2[2])
    interRY = min(box1[3],box2[3])

    # 2. areas of the two rectangles
    Area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    Area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])

    # 3. no intersection
    if interRX < interLX or interRY < interLY:
        return 0

    # 4. compute IoU
    interSection = (interRX - interLX) * (interRY - interLY)
    return interSection / (Area1 + Area2 - interSection)
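
As a standalone sanity check of the same computation (self dropped so the snippet runs on its own):

# Self-contained version of the IoU routine above, plus a worked example.
def iou(box1, box2):
    inter_lx, inter_ly = max(box1[0], box2[0]), max(box1[1], box2[1])
    inter_rx, inter_ry = min(box1[2], box2[2]), min(box1[3], box2[3])
    if inter_rx < inter_lx or inter_ry < inter_ly:
        return 0.0
    inter = (inter_rx - inter_lx) * (inter_ry - inter_ly)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

# Two 100*100 boxes offset by 50 px: intersection 2500, union 17500.
print(iou([0, 0, 100, 100], [50, 50, 150, 150]))  # 0.1428...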

        Confidence: the authors use a definition that reflects both whether an object is present and how accurate the localization is

                                                                   Confidence = Pr(obj)*IOU_{truth}^{pred}

        Pr(obj) is the probability that some object's center falls inside the current grid cell, i.e. that the region this grid cell is responsible for contains an object. IOU_{truth}^{pred} is the IoU between the Bounding Box used for prediction and the Truth Box of the object it should predict, which reflects how accurate the predicted Bounding Box is.

        The predicted Pr(obj) expresses how likely the region covered by the Bounding Box is to contain an object; when building the Ground Truth, we set Pr(obj) = 1 if an object's center lies in the grid cell and Pr(obj) = 0 otherwise.

        Localization: predicting the Bounding Box position and size directly would reduce the model's generalization ability, because:

                1. Predicting the center position and size in absolute image coordinates makes the predictions range over [0, 447]; such drastic scale variation hinders convergence and makes training very unstable.

                2. If object scales differ greatly between the training and test images, the model's recognition ability on the test data falls far short.

        The authors therefore regress relative offsets. Each grid cell predicts the objects whose center falls inside it, so an object's center is guaranteed to lie within the cell; dividing the offset between the center and the cell's top-left corner by the cell's own width and height gives a relative ratio. Likewise, since an object always lies inside the image, dividing its width and height by the image's width and height also gives relative ratios, as shown below:

        The network's raw outputs are squashed into (0, 1) with a sigmoid; when building the Ground Truth we simply compute these ratios according to the convention above.
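
To make this encoding concrete, here is a small illustrative helper of my own (not the repository's code) that turns a pixel-space box on a 448*448 image into the offsets and relative sizes described above:

# Hypothetical encoder for the YOLO v1 box parameterization.
def encode_box(xmin, ymin, xmax, ymax, img_size=448, grid_size=64):
    cx, cy = (xmin + xmax) / 2, (ymin + ymax) / 2
    col, row = int(cx // grid_size), int(cy // grid_size)  # responsible grid cell
    tx = (cx - col * grid_size) / grid_size                # center offset in [0, 1)
    ty = (cy - row * grid_size) / grid_size
    tw = (xmax - xmin) / img_size                          # size relative to the image
    th = (ymax - ymin) / img_size
    return row, col, (tx, ty, tw, th)

print(encode_box(100, 150, 300, 350))  # cell (3, 3), all four targets in (0, 1)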

Training object detection on the VOC dataset

The YOLOv1 network used for detection:

import torch
import torch.nn as nn
from YOLO.PreTrain.YOLO_Feature import Convention

class YOLOv1(nn.Module):

    def __init__(self,B=2,classes_num=20):
        super(YOLOv1,self).__init__()
        self.B = B
        self.classes_num = classes_num

        self.Conv_Feature = nn.Sequential(
            Convention(3, 64, 7, 2, 3),
            nn.MaxPool2d(2, 2),

            Convention(64, 192, 3, 1, 1),
            nn.MaxPool2d(2, 2),

            Convention(192, 128, 1, 1, 0),
            Convention(128, 256, 3, 1, 1),
            Convention(256, 256, 1, 1, 0),
            Convention(256, 512, 3, 1, 1),
            nn.MaxPool2d(2, 2),

            Convention(512, 256, 1, 1, 0),
            Convention(256, 512, 3, 1, 1),
            Convention(512, 256, 1, 1, 0),
            Convention(256, 512, 3, 1, 1),
            Convention(512, 256, 1, 1, 0),
            Convention(256, 512, 3, 1, 1),
            Convention(512, 256, 1, 1, 0),
            Convention(256, 512, 3, 1, 1),
            Convention(512, 512, 1, 1, 0),
            Convention(512, 1024, 3, 1, 1),
            nn.MaxPool2d(2, 2),
        )

        self.Conv_Semanteme = nn.Sequential(
            Convention(1024, 512, 1, 1, 0),
            Convention(512, 1024, 3, 1, 1),
            Convention(1024, 512, 1, 1, 0),
            Convention(512, 1024, 3, 1, 1),
        )

        self.Conv_Back = nn.Sequential(
            Convention(1024, 1024, 3, 1, 1, need_bn=False),
            Convention(1024, 1024, 3, 2, 1, need_bn=False),
            Convention(1024, 1024, 3, 1, 1, need_bn=False),
            Convention(1024, 1024, 3, 1, 1, need_bn=False),
        )

        self.Fc = nn.Sequential(
            nn.Linear(7*7*1024,4096),
            nn.LeakyReLU(inplace=True, negative_slope=1e-1),
            nn.Linear(4096,7 * 7 * (B*5 + classes_num)),
        )

        self.sigmoid = nn.Sigmoid()
        self.softmax = nn.Softmax(dim=3)

    def forward(self, x):
        x = self.Conv_Feature(x)
        x = self.Conv_Semanteme(x)
        x = self.Conv_Back(x)
        # batch_size * channel * height * weight -> batch_size * height * weight * channel
        x = x.permute(0, 2, 3, 1)
        x = torch.flatten(x, start_dim=1, end_dim=3)
        x = self.Fc(x)
        x = x.view(-1, 7, 7, (self.B * 5 + self.classes_num))
        # sigmoid squashes box offsets/sizes/confidences into (0, 1); softmax normalizes the class scores
        bnd_coord = self.sigmoid(x[:,:,:,0 : self.B * 5])
        bnd_cls = self.softmax(x[:,:,:, self.B * 5 : ])
        bnd = torch.cat([bnd_coord, bnd_cls], dim=3)
        return bnd

    # weight initialization: init all layers, then copy in matching pre-trained parameters
    def initialize_weights(self, net_param_dict):
        for name, m in self.named_modules():
            if isinstance(m, nn.Conv2d):
                torch.nn.init.kaiming_normal_(m.weight.data)
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
            elif isinstance(m, nn.Linear):
                torch.nn.init.kaiming_normal_(m.weight.data)
                m.bias.data.zero_()
            elif isinstance(m, Convention):
                m.weight_init()

        self_param_dict = self.state_dict()
        for name, layer in self.named_parameters():
            if name in net_param_dict:
                self_param_dict[name] = net_param_dict[name]
        self.load_state_dict(self_param_dict)
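
A quick shape check of the detection head (my own smoke test, assuming the class definition above): the output is S*S*(B*5 + classes) = 7*7*30 for B=2 boxes and 20 VOC classes.

# Hypothetical smoke test for the YOLOv1 class defined above.
import torch

yolo = YOLOv1(B=2, classes_num=20)
x = torch.randn(1, 3, 448, 448)
print(yolo(x).shape)  # torch.Size([1, 7, 7, 30])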

VOC detection dataset class:

from torch.utils.data import Dataset
import os
import cv2
import xml.etree.ElementTree as ET
import torchvision.transforms as transforms
import numpy as np
import random
import torch
from utils import image

class VOC_Detection_Set(Dataset):
    def __init__(self, imgs_path="../DataSet/VOC2007+2012/Train/JPEGImages",
                 annotations_path="../DataSet/VOC2007+2012/Train/Annotations",
                 classes_file="../DataSet/VOC2007+2012/class.data", is_train = True, class_num=20,
                 label_smooth_value=0.05, input_size=448, grid_size=64, loss_mode="mse"):  # input_size: input image size
        self.label_smooth_value = label_smooth_value
        self.class_num = class_num
        self.imgs_name = os.listdir(imgs_path)
        self.input_size = input_size
        self.grid_size = grid_size
        self.is_train = is_train
        self.transform_common = transforms.Compose([
            transforms.ToTensor(),  # height * width * channel -> channel * height * width
            transforms.Normalize(mean=(0.408, 0.448, 0.471), std=(0.242, 0.239, 0.234))  # normalization makes gradient explosion less likely
        ])
        self.imgs_path = imgs_path
        self.annotations_path = annotations_path
        self.class_dict = {}
        self.loss_mode = loss_mode

        class_index = 0
        with open(classes_file, 'r') as file:
            for class_name in file:
                class_name = class_name.replace('\n', '')
                self.class_dict[class_name] = class_index  # build the class-name -> index mapping
                class_index = class_index + 1

    def __getitem__(self, item):

        img_path = os.path.join(self.imgs_path, self.imgs_name[item])
        annotation_path = os.path.join(self.annotations_path, self.imgs_name[item].replace(".jpg", ".xml"))
        img = cv2.imread(img_path)
        tree = ET.parse(annotation_path)
        annotation_xml = tree.getroot()

        objects_xml = annotation_xml.findall("object")
        coords = []

        for object_xml in objects_xml:
            bnd_xml = object_xml.find("bndbox")
            class_name = object_xml.find("name").text
            if class_name not in self.class_dict:  # not one of the classes we handle
                continue
            xmin = round((float)(bnd_xml.find("xmin").text))
            ymin = round((float)(bnd_xml.find("ymin").text))
            xmax = round((float)(bnd_xml.find("xmax").text))
            ymax = round((float)(bnd_xml.find("ymax").text))
            class_id = self.class_dict[class_name]
            coords.append([xmin, ymin, xmax, ymax, class_id])

        coords.sort(key=lambda coord : (coord[2] - coord[0]) * (coord[3] - coord[1]) )

        if self.is_train:

            transform_seed = random.randint(0, 4)

            if transform_seed == 0:  # original image
                img, coords = image.resize_image_with_coords(img, self.input_size, self.input_size, coords)
                img = self.transform_common(img)

            elif transform_seed == 1:  # resize + center crop
                img, coords = image.center_crop_with_coords(img, coords)
                img, coords = image.resize_image_with_coords(img, self.input_size, self.input_size, coords)
                img = self.transform_common(img)

            elif transform_seed == 2:  # translation
                img, coords = image.transplant_with_coords(img, coords)
                img, coords = image.resize_image_with_coords(img, self.input_size, self.input_size, coords)
                img = self.transform_common(img)

            else:  # exposure adjustment
                img, coords = image.resize_image_with_coords(img, self.input_size, self.input_size, coords)
                img = image.exposure(img, gamma=0.5)
                img = self.transform_common(img)

        else:
            img, coords = image.resize_image_with_coords(img, self.input_size, self.input_size, coords)
            img = self.transform_common(img)

        ground_truth, ground_mask_positive, ground_mask_negative = self.getGroundTruth(coords)
        return img, [ground_truth, ground_mask_positive, ground_mask_negative, img_path]


    def __len__(self):
        return len(self.imgs_name)

    def getGroundTruth(self, coords):

        feature_size = self.input_size // self.grid_size
        ground_mask_positive = np.full(shape=(feature_size, feature_size, 1), fill_value=False, dtype=bool)
        ground_mask_negative = np.full(shape=(feature_size, feature_size, 1), fill_value=True, dtype=bool)

        if self.loss_mode == "mse":
            ground_truth = np.zeros([feature_size, feature_size, 10 + self.class_num + 2])
        else:
            ground_truth = np.zeros([feature_size, feature_size, 10 + 1])

        for coord in coords:

            xmin, ymin, xmax, ymax, class_id = coord

            ground_width = (xmax - xmin)
            ground_height = (ymax - ymin)

            center_x = (xmin + xmax) / 2
            center_y = (ymin + ymax) / 2

            index_row = (int)(center_y * feature_size)
            index_col = (int)(center_x * feature_size)

            # classification label with label smoothing
            if self.loss_mode == "mse":
                # one-hot encode, then smooth the one-hot vector
                class_list = np.full(shape=self.class_num, fill_value=1.0, dtype=float)
                delta = 0.01
                class_list = class_list * delta / (self.class_num - 1)
                class_list[class_id] = 1.0 - delta
            elif self.loss_mode == "cross_entropy":
                class_list = [class_id]
            else:
                raise Exception("the loss mode can't be support now!")

            # localization targets
            ground_box = [center_x * feature_size - index_col, center_y * feature_size - index_row,
                          ground_width, ground_height, 1,
                          round(xmin * self.input_size), round(ymin * self.input_size),
                          round(xmax * self.input_size), round(ymax * self.input_size),
                          round(ground_width * self.input_size * ground_height * self.input_size)
                          ]
            ground_box.extend(class_list)
            ground_box.extend([index_col, index_row])

            ground_truth[index_row][index_col] = np.array(ground_box)
            ground_mask_positive[index_row][index_col] = True
            ground_mask_negative[index_row][index_col] = False

        return ground_truth, torch.BoolTensor(ground_mask_positive), torch.BoolTensor(ground_mask_negative)

[Note]: In YOLO v1, although each grid cell predicts two bounding boxes, only one of them ends up effective, so at most 7*7*1 = 49 objects can be detected. For simplicity, when the centers of multiple objects fall into the same grid cell (a very low-probability event), my implementation keeps the last object assigned to that cell as the one the cell is responsible for. The mask operations spend a little extra GPU memory to compute the positive- and negative-sample losses quickly.
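
Here is the mask trick from the note in miniature (the shapes are illustrative): a boolean mask over the grid broadcasts across the channel dimension, so one masked_select gathers all positive cells and another all negative cells.

import torch

S, C = 7, 30                                   # grid size, channels per cell
pred = torch.randn(1, S, S, C)                 # fake network output
mask_pos = torch.zeros(1, S, S, 1, dtype=torch.bool)
mask_pos[0, 3, 3, 0] = True                    # pretend one cell holds an object
mask_neg = ~mask_pos

positives = torch.masked_select(pred, mask_pos).view(-1, C)
negatives = torch.masked_select(pred, mask_neg).view(-1, C)
print(positives.shape, negatives.shape)        # torch.Size([1, 30]) torch.Size([48, 30])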

4. YOLO v1 Loss Function

        The loss function is the "conductor's baton" of a deep network: it sets the task and the learning direction for the whole model by back-propagating the error between predictions and ground truth to guide parameter updates.

        We treat Bounding Boxes that contain an object as positive samples and Bounding Boxes that do not as negative samples. In the implementation, positives and negatives are decided by the IoU between each Bounding Box and the real object box (Ground Truth): the box with the largest IoU against the Ground Truth is the positive sample, and the remaining boxes are negatives.

        The YOLO v1 loss therefore has two parts, one for positive samples (Bounding Boxes responsible for predicting an object) and one for negative samples (Bounding Boxes not responsible for any object). Positive samples have confidence target 1 and negatives 0; the positive-sample loss contains confidence, box-regression, and class terms, while the negative-sample loss contains only a confidence term.

        [Note]: to clarify: we pre-allocate S*S*B Bounding Boxes, but some of them will not hit any object at all. The boxes that do predict an object are the positives and the rest are the negatives. In the era when YOLO v1 was created, detection datasets did not yet contain especially dense objects, so there are rather many negative samples.

        The YOLO v1 loss consists of 5 parts, all using squared error:

        (1) The first part is the loss on the positive samples' center coordinates, weighted by the \lambda_{coord} parameter. Its default value of 5 raises the weight of the localization loss, preventing the problem that early in training the overwhelming number of negatives makes the positives' contribution to back-propagation so weak that the model becomes unstable and training diverges.

                                                   \lambda_{coord}\sum_{i=0}^{S*S}\sum_{j=0}^{B}1_{ij}^{obj}[(x_{i}-\hat{x}_{i})^{2}+(y_{i}-\hat{y}_{i})^{2}]

        \lambda_{coord}: hyperparameter weighting the localization loss within the total loss

        \sum_{i=0}^{S*S}: each of the S*S grid cells contains Bounding Boxes

        \sum_{j=0}^{B}: each grid cell contains B Bounding Boxes

        1_{ij}^{obj}: 1 when the j-th Bounding Box of the i-th grid cell is responsible for predicting that cell's object, 0 otherwise

        (x_{i}-\hat{x}_{i})^{2}+(y_{i}-\hat{y}_{i})^{2}: squared distance between the object's center and the center predicted by the Bounding Box

        (2) The second part is the positive samples' width/height loss. By taking square roots of the widths and heights, YOLO v1 reduces the network's sensitivity to object scale and raises the weight of small objects' size errors within the overall size loss. After all, for a large Bounding Box a small deviation matters little, while for a small Bounding Box the same deviation matters a great deal.

                                                   \lambda_{coord}\sum_{i=0}^{S*S}\sum_{j=0}^{B}1_{ij}^{obj}[(\sqrt{w_{i}}-\sqrt{\hat{w}_{i}})^{2}+(\sqrt{h_{i}}-\sqrt{\hat{h}_{i}})^{2}]

        (\sqrt{w_{i}}-\sqrt{\hat{w}_{i}})^{2}+(\sqrt{h_{i}}-\sqrt{\hat{h}_{i}})^{2}: the gap between the object's width/height and the Bounding Box's predicted width/height. The square root is used because small-scale objects are very sensitive to scale error: a target of size 10 predicted as 20 is a 100% error, while a target of size 100 predicted as 110 is only a 10% error.
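
A quick numeric check of the square-root trick with the example values above (my own illustration):

import math

small = (math.sqrt(20) - math.sqrt(10)) ** 2    # box of 10 predicted as 20 -> ~1.72
large = (math.sqrt(110) - math.sqrt(100)) ** 2  # box of 100 predicted as 110 -> ~0.24
print(small, large)  # the small box's error dominates the loss, as intended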

        (3) The third part is the positive samples' confidence loss.

                                                  \sum_{i=0}^{S*S}\sum_{j=0}^{B}1_{ij}^{obj}(C_{i}-\hat{C}_{i})^{2}

        (C_{i}-\hat{C}_{i})^{2}: squared error between the confidence of a Bounding Box containing an object and the corresponding Ground Truth confidence

        (4) The fourth part is the negative samples' confidence loss, weighted by \lambda_{noobj} (default 0.5).

                                                 \lambda_{noobj}\sum_{i=0}^{S*S}\sum_{j=0}^{B}1_{ij}^{noobj}(C_{i}-\hat{C}_{i})^{2}

          \lambda_{noobj}: negatives are usually abundant; to keep the network focused on learning to localize positives correctly, the negatives' loss weight is reduced

          (C_{i}-\hat{C}_{i})^{2}: squared error between the confidence of a Bounding Box containing no object and the corresponding Ground Truth confidence

          (5) The fifth part is the positive samples' classification loss.

                                                \sum_{i=0}^{S*S}1_{i}^{obj}\sum_{c\in classes}(p_{i}(c)-\hat{p}_{i}(c))^{2}

           1_{i}^{obj}: whether some object's center falls inside this grid cell

           \sum_{c\in classes}(p_{i}(c)-\hat{p}_{i}(c))^{2}: squared error computed for every class probability
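
Putting the five parts together, the complete objective minimized during training is:

        Loss = \lambda_{coord}\sum_{i=0}^{S*S}\sum_{j=0}^{B}1_{ij}^{obj}[(x_{i}-\hat{x}_{i})^{2}+(y_{i}-\hat{y}_{i})^{2}]
             + \lambda_{coord}\sum_{i=0}^{S*S}\sum_{j=0}^{B}1_{ij}^{obj}[(\sqrt{w_{i}}-\sqrt{\hat{w}_{i}})^{2}+(\sqrt{h_{i}}-\sqrt{\hat{h}_{i}})^{2}]
             + \sum_{i=0}^{S*S}\sum_{j=0}^{B}1_{ij}^{obj}(C_{i}-\hat{C}_{i})^{2}
             + \lambda_{noobj}\sum_{i=0}^{S*S}\sum_{j=0}^{B}1_{ij}^{noobj}(C_{i}-\hat{C}_{i})^{2}
             + \sum_{i=0}^{S*S}1_{i}^{obj}\sum_{c\in classes}(p_{i}(c)-\hat{p}_{i}(c))^{2}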

YOLOv1 loss function implementation:

import math
import torch
import torch.nn as nn

class YOLOv1_Loss(nn.Module):

    def __init__(self, S=7, B=2, Classes=20, l_coord=5, pos_conf=1, pos_cls=1, l_noobj=0.5):
        # boxes containing an object are weighted by l_coord; boxes without an object by l_noobj
        super(YOLOv1_Loss, self).__init__()
        self.S = S
        self.B = B
        self.Classes = Classes
        self.l_coord = l_coord
        self.pos_conf = pos_conf
        self.pos_cls = pos_cls
        self.l_noobj = l_noobj

    def iou_force(self, bounding_box, ground_box, gridX, gridY, img_size=448, grid_size=64):  # compute the IoU of two boxes
        # predict_box: [centerX, centerY, width, height]
        # ground_box : [centerX / self.grid_cell_size - indexJ,centerY / self.grid_cell_size - indexI,(xmax-xmin)/self.img_size,(ymax-ymin)/self.img_size,1,xmin,ymin,xmax,ymax,(xmax-xmin)*(ymax-ymin)
        # 1. convert predict_box to top-left / bottom-right corner coordinates; restore to integers first to avoid floating-point error
        # do not share references
        # [xmin,ymin,xmax,ymax]

        predict_box = list([0, 0, 0, 0])
        predict_box[0] = (int)(gridX + bounding_box[0] * grid_size)
        predict_box[1] = (int)(gridY + bounding_box[1] * grid_size)
        predict_box[2] = (int)(bounding_box[2] * img_size)
        predict_box[3] = (int)(bounding_box[3] * img_size)

        predict_coord = list([max(0, predict_box[0] - predict_box[2] / 2),
                              max(0, predict_box[1] - predict_box[3] / 2),
                              min(img_size - 1, predict_box[0] + predict_box[2] / 2),
                              min(img_size - 1, predict_box[1] + predict_box[3] / 2)])

        predict_Area = (predict_coord[2] - predict_coord[0]) * (predict_coord[3] - predict_coord[1])

        ground_coord = list([ground_box[5].item(), ground_box[6].item(), ground_box[7].item(), ground_box[8].item()])
        ground_Area = (ground_coord[2] - ground_coord[0]) * (ground_coord[3] - ground_coord[1])

        # stored as xmin ymin xmax ymax

        # 2. intersection: max of the left edges, min of the right edges, max of the top edges, min of the bottom edges
        CrossLX = max(predict_coord[0], ground_coord[0])
        CrossRX = min(predict_coord[2], ground_coord[2])
        CrossUY = max(predict_coord[1], ground_coord[1])
        CrossDY = min(predict_coord[3], ground_coord[3])

        if CrossRX < CrossLX or CrossDY < CrossUY:  # no intersection
            return 0

        interSection = (CrossRX - CrossLX) * (CrossDY - CrossUY)

        return interSection / (predict_Area + ground_Area - interSection)

    def iou(self, bounding_boxes, ground_boxes, img_size=448, grid_size=64, device=torch.device("cuda:0")):  # batched IoU of predicted and ground-truth boxes
        # predict_box: [centerX, centerY, width, height]
        # ground_box : [xmin,ymin,xmax,ymax,gridX, gridY]
        # 1. convert predict_box to top-left / bottom-right corner coordinates; restore to integers first to avoid floating-point error
        # do not share references

        gridX = ground_boxes[:,4]
        gridY = ground_boxes[:,5]
        # [center_x, center_y, width, height]

        center_x = ((gridX + bounding_boxes[:,0]) * grid_size).unsqueeze(1).int()
        center_y = ((gridY + bounding_boxes[:,1]) * grid_size).unsqueeze(1).int()
        width = (bounding_boxes[:,2] * img_size).unsqueeze(1).int()
        height = (bounding_boxes[:,3] * img_size).unsqueeze(1).int()

        predict_boxes = torch.cat([center_x, center_y, width, height], dim=1)
        # [xmin,ymin,xmax,ymax]

        predict_coords = torch.cat([torch.max(torch.Tensor([0]).to(device=device), predict_boxes[:,0] - predict_boxes[:,2] / 2).unsqueeze(1),
                                    torch.max(torch.Tensor([0]).to(device=device), predict_boxes[:,1] - predict_boxes[:,3] / 2).unsqueeze(1),
                                    torch.min(torch.Tensor([img_size - 1]).to(device=device), predict_boxes[:,0] + predict_boxes[:,2] / 2).unsqueeze(1),
                                    torch.min(torch.Tensor([img_size - 1]).to(device=device), predict_boxes[:,1] + predict_boxes[:,3] / 2).unsqueeze(1)], dim=1)

        predict_areas = (predict_coords[:,2] - predict_coords[:,0]) * (predict_coords[:,3] - predict_coords[:,1])
        ground_area = (ground_boxes[:,2] - ground_boxes[:,0]) * (ground_boxes[:,3] - ground_boxes[:,1])

        # 2. intersection: max of the left edges, min of the right edges, max of the top edges, min of the bottom edges
        cross_lx = torch.max(predict_coords[:,0], ground_boxes[:,0])
        cross_rx = torch.min(predict_coords[:,2], ground_boxes[:,2])
        cross_uy = torch.max(predict_coords[:,1], ground_boxes[:,1])
        cross_dy = torch.min(predict_coords[:,3], ground_boxes[:,3])

        inter_section = torch.where((cross_rx < cross_lx) | (cross_dy < cross_uy), 0, ((cross_rx - cross_lx) * (cross_dy - cross_uy)).long())

        return inter_section / (predict_areas + ground_area - inter_section)

    def forward(self, bounding_boxes, ground_labels, grid_size=64, img_size=448, device=torch.device("cuda:0")):  # input: S * S * (B * 5 + Classes)

        # loss accumulators: positive localization, positive/negative confidence, classification
        loss = 0
        loss_coord = 0
        loss_positive_conf = 0
        loss_negative_conf = 0
        loss_classes = 0
        mseLoss = nn.MSELoss(reduction='sum')  # size_average=False is deprecated; reduction='sum' is the modern equivalent
        batch_size = len(bounding_boxes)
        #print("bs:{}".format(batch_size))
        # optimize backward
        ground_truth, ground_mask_positive, ground_mask_negative, img_path = ground_labels
        predict_positive_boxes = torch.masked_select(bounding_boxes, ground_mask_positive).view(-1, 10 + self.Classes)
        predict_negative_boxes = torch.masked_select(bounding_boxes, ground_mask_negative).view(-1, 10 + self.Classes)
        ground_positive_boxes = torch.masked_select(ground_truth, ground_mask_positive).view(-1, 10 + self.Classes + 2)
        #print("pos mask{} neg mask{}".format(ground_mask_positive, ground_mask_negative))

        # positive samples
        predict_boxes_one = predict_positive_boxes[:, 0:5]
        predict_boxes_two = predict_positive_boxes[:, 5:10]
        #print("predict_boxes_one:{} predict_boxes_two:{}".format(predict_boxes_one, predict_boxes_two))
        ground_boxes = torch.cat([ground_positive_boxes[:,5:9], ground_positive_boxes[:,self.B * 5 + self.Classes:]], dim=1)
        boxes_one_iou = self.iou(predict_boxes_one, ground_boxes)
        boxes_two_iou = self.iou(predict_boxes_two, ground_boxes)

        #print("one:{} two:{}".format(boxes_one_iou, boxes_two_iou))
        positive_location = torch.where(boxes_one_iou > boxes_two_iou, 0, 1)
        positive_iou = torch.where(boxes_one_iou > boxes_two_iou, boxes_one_iou, boxes_two_iou)
        iou_sum = positive_iou.sum()
        #print("loc:{}".format(positive_location))

        object_num = len(positive_location)
        grid_positive_mask = torch.zeros(size=(object_num, 10)).to(device=device)
        grid_negative_mask = torch.ones(size=(object_num, 10)).to(device=device)

        # classification
        ground_class = ground_positive_boxes[:, self.B * 5: self.B * 5 + self.Classes]
        predict_class = predict_positive_boxes[:, self.B * 5: self.B * 5 + self.Classes]
        # print("ground_class:{} predict_class:{}".format(ground_class, predict_class))
        # classes = self.pos_cls * torch.pow(ground_class - predict_class, 2).sum()
        # loss = loss + classes
        # loss_classes += classes.item()
        loss_class = self.pos_cls * mseLoss(ground_class, predict_class) / batch_size
        loss = loss + loss_class

        for location_idx in range(object_num):
            if positive_location[location_idx] == 0:
                grid_positive_mask[location_idx][0:5] = torch.ones(size=(5,))
                grid_negative_mask[location_idx][0:5] = torch.zeros(size=(5,))
            else:
                grid_positive_mask[location_idx][5:10] = torch.ones(size=(5,))
                grid_negative_mask[location_idx][5:10] = torch.zeros(size=(5,))

        predict_grid_positive_box = torch.masked_select(predict_positive_boxes[:,0:10], grid_positive_mask.bool()).view(-1, 5)
        predict_grid_negative_box = torch.masked_select(predict_positive_boxes[:,0:10], grid_negative_mask.bool()).view(-1, 5)

        # positive samples:
        # localization
        loss_coord = self.l_coord * (mseLoss(predict_grid_positive_box[:,0:2], ground_positive_boxes[:,0:2]) + mseLoss(torch.sqrt(predict_grid_positive_box[:,2:4] + 1e-8), torch.sqrt(ground_positive_boxes[:,2:4] + 1e-8))) / batch_size
        loss = loss + loss_coord
        # positive confidence (target 1 for every responsible box)
        loss_positive_conf = self.pos_conf * mseLoss(predict_grid_positive_box[:,4], torch.ones_like(predict_grid_positive_box[:,4])) / batch_size
        loss = loss + loss_positive_conf
        # negative confidence (target 0)
        predict_negative_boxes = torch.cat([predict_negative_boxes[:,0:5], predict_negative_boxes[:,5:10], predict_grid_negative_box], dim=0)
        loss_negative_conf = self.l_noobj * mseLoss(predict_negative_boxes[:,4], torch.zeros_like(predict_negative_boxes[:,4])) / batch_size
        loss = loss + loss_negative_conf

        return loss, loss_coord.item(), loss_positive_conf.item(), loss_negative_conf.item(), loss_class.item(), iou_sum.item(), object_num

Note: a single grid cell can indeed contain the centers of several objects. In my view the best policy is to keep the Ground Truth with the largest area, which lowers the training difficulty, since YOLOv1 is inherently weak at detecting small objects anyway.

YOLOv1 detection training:

#---------------step0:Common Definition-----------------
import os
import torch
import argparse
from utils.model import feature_map_visualize
from DataSet.VOC_DataSet import VOC_Detection_Set
from torch.utils.data import DataLoader
from YOLO.Train.YOLOv1_Model import YOLOv1
from utils import model
from YOLO.Train.YOLOv1_LossFunction import YOLOv1_Loss
import torch.optim as optim
from tensorboardX import SummaryWriter
from tqdm import tqdm
import warnings

if torch.cuda.is_available():
    device = torch.device('cuda:0')
    torch.backends.cudnn.benchmark = True
else:
    device = torch.device('cpu')
os.environ["KMP_DUPLICATE_LIB_OK"] = "True"
warnings.filterwarnings("ignore")

if __name__ == "__main__":
    # 1.training parameters
    parser = argparse.ArgumentParser(description="YOLOv1 train config")
    parser.add_argument('--num_workers', type=int, help="train num_workers num", default=4)
    parser.add_argument('--B', type=int, help="YOLOv1 predict box num every grid", default=2)
    parser.add_argument('--class_num', type=int, help="YOLOv1 predict class num", default=20)
    parser.add_argument('--lr', type=float, help="start lr", default=1e-3)
    parser.add_argument('--lr_mul_factor_epoch_1', type=float, help="lr mul factor when full YOLOv1 train in epoch1", default=1.04)
    parser.add_argument('--lr_epoch_2', type=float, help="lr when full YOLOv1 train in epoch2", default=0.001)
    parser.add_argument('--lr_epoch_77', type=float, help="lr when full YOLOv1 train in epoch77", default=0.0001)
    parser.add_argument('--lr_epoch_107', type=float, help="lr when full YOLOv1 train in epoch107", default=0.00001)
    parser.add_argument('--batch_size', type=int, help="YOLOv1 train batch size", default=32)
    parser.add_argument('--train_imgs', type=str, help="YOLOv1 train train_imgs", default="../../DataSet/VOC2007/Train/JPEGImages")
    parser.add_argument('--train_labels', type=str, help="YOLOv1 train train_labels", default="../../DataSet/VOC2007/Train/Annotations")
    parser.add_argument('--val_imgs', type=str, help="YOLOv1 train val_imgs", default="../../DataSet/VOC2007/Val/JPEGImages")
    parser.add_argument('--val_labels', type=str, help="YOLOv1 train val_labels", default="../../DataSet/VOC2007/Val/Annotations")
    parser.add_argument('--voc_classes_path', type=str, help="voc classes path", default="../../DataSet/VOC2007/class.data")
    parser.add_argument('--weight_decay', type=float, help="optim weight_decay", default=5e-4)
    parser.add_argument('--momentum', type=float, help="optim momentum", default=0.9)
    parser.add_argument('--pre_weight_file', type=str, help="YOLOv1 BackBone pre-train path", default="../PreTrain/weights/YOLO_Feature_150.pth")
    parser.add_argument('--epoch_interval', type=int, help="save YOLOv1 weight epoch interval", default=10)
    parser.add_argument('--epoch_unfreeze', type=int, help="YOLOv1 backbone unfreeze epoch", default=10)
    parser.add_argument('--epoch_num', type=int, help="YOLOv1 train epoch num", default=200)
    parser.add_argument('--grad_visualize', type=bool, help="YOLOv1 train grad visualize", default=True)
    parser.add_argument('--feature_map_visualize', type=bool, help="YOLOv1 train feature map visualize", default=False)
    parser.add_argument('--restart', type=bool, default=True)
    parser.add_argument('--weight_file', type=str, help="YOLOv1 weight path", default="./weights/YOLO_V1_110.pth")
    args = parser.parse_args()

    num_workers = args.num_workers
    class_num = args.class_num
    batch_size = args.batch_size
    lr_mul_factor_epoch_1 = args.lr_mul_factor_epoch_1
    lr_epoch_2 = args.lr_epoch_2
    lr_epoch_77 = args.lr_epoch_77
    lr_epoch_107 = args.lr_epoch_107
    batch_size = args.batch_size
    weight_decay = args.weight_decay
    momentum = args.momentum
    epoch_interval = args.epoch_interval
    epoch_unfreeze = args.epoch_unfreeze
    loss_mode = "mse"

    if args.restart == True:
        pre_weight_file = args.pre_weight_file
        pre_param_dict = torch.load(pre_weight_file, map_location=torch.device("cpu"))
        lr = args.lr
        param_dict = {}
        epoch = 0
        epoch_val_loss_min = 999999999

    else:
        weight_file = args.weight_file
        param_dict = torch.load(weight_file, map_location=torch.device("cpu"))
        epoch = param_dict['epoch']
        epoch_val_loss_min = param_dict['epoch_val_loss_min']

    # 2.dataset
    train_dataSet = VOC_Detection_Set(imgs_path=args.train_imgs,
                                      annotations_path=args.train_labels,
                                      classes_file=args.voc_classes_path, class_num=class_num, is_train=True, loss_mode=loss_mode)
    val_dataSet = VOC_Detection_Set(imgs_path=args.val_imgs,
                                    annotations_path=args.val_labels,
                                    classes_file=args.voc_classes_path, class_num=class_num, is_train=False, loss_mode=loss_mode)

    # 3-4.network + optimizer
    YOLO = YOLOv1().to(device=device, non_blocking=True)
    if args.restart == True:
        YOLO.initialize_weights(pre_param_dict["optimal"])  # load the backbone weights saved by the pre-training script (its best-val 'optimal' entry)
        optimizer_SGD = optim.SGD(YOLO.parameters(), lr=lr, weight_decay=weight_decay, momentum=momentum)
        optimal_dict = {}
    else:
        YOLO.load_state_dict(param_dict['model']) #load yolov1 train weight
        optimizer_SGD = param_dict['optim']
        optimal_dict = param_dict['optimal']
    if epoch < epoch_unfreeze:
        model.set_freeze_by_idxs(YOLO, [0, 1])

    # 5.loss
    loss_function = YOLOv1_Loss().to(device=device, non_blocking=True)

    # 6.train and record
    writer = SummaryWriter(logdir='./log', filename_suffix=' [' + str(epoch) + '~' + str(epoch + epoch_interval) + ']')

    train_loader = DataLoader(train_dataSet, batch_size=batch_size, shuffle=True, num_workers=num_workers, pin_memory=True)
    val_loader = DataLoader(val_dataSet, batch_size=batch_size, shuffle=True, num_workers=num_workers, pin_memory=True)

    while epoch < args.epoch_num:

        epoch_train_loss = 0
        epoch_val_loss = 0
        epoch_train_iou = 0
        epoch_val_iou = 0
        epoch_train_object_num = 0
        epoch_val_object_num = 0
        epoch_train_loss_coord = 0
        epoch_val_loss_coord = 0
        epoch_train_loss_pos_conf = 0
        epoch_train_loss_neg_conf = 0
        epoch_val_loss_pos_conf = 0
        epoch_val_loss_neg_conf = 0
        epoch_train_loss_classes = 0
        epoch_val_loss_classes = 0

        train_len = train_loader.__len__()
        YOLO.train()
        with tqdm(total=train_len) as tbar:

            for batch_idx, [train_data, label_data] in enumerate(train_loader):
                optimizer_SGD.zero_grad()
                train_data = train_data.float().to(device=device, non_blocking=True)
                label_data[0] = label_data[0].float().to(device=device, non_blocking=True)
                label_data[1] = label_data[1].to(device=device, non_blocking=True)
                label_data[2] = label_data[2].to(device=device, non_blocking=True)

                loss = loss_function(bounding_boxes=YOLO(train_data), ground_labels=label_data)
                sample_avg_loss = loss[0]
                epoch_train_loss_coord = epoch_train_loss_coord + loss[1] * batch_size
                epoch_train_loss_pos_conf = epoch_train_loss_pos_conf + loss[2] * batch_size
                epoch_train_loss_neg_conf = epoch_train_loss_neg_conf + loss[3] * batch_size
                epoch_train_loss_classes = epoch_train_loss_classes + loss[4] * batch_size
                epoch_train_iou = epoch_train_iou + loss[5]
                epoch_train_object_num = epoch_train_object_num + loss[6]

                sample_avg_loss.backward()
                optimizer_SGD.step()

                batch_loss = sample_avg_loss.item() * batch_size
                epoch_train_loss = epoch_train_loss + batch_loss

                tbar.set_description(
                    "train: coord_loss:{} pos_conf_loss:{} neg_conf_loss:{} class_loss:{} avg_iou:{}".format(
                        round(loss[1], 4),
                        round(loss[2], 4),
                        round(loss[3], 4),
                        round(loss[4], 4),
                        round(loss[5] / loss[6], 4)), refresh=True)
                tbar.update(1)

                if epoch == epoch_unfreeze + 1:
                    lr = min(lr * lr_mul_factor_epoch_1, lr_epoch_2)
                    for param_group in optimizer_SGD.param_groups:
                        param_group["lr"] = lr

            if args.feature_map_visualize:
                feature_map_visualize(train_data[0][0], writer, YOLO)
            # print("batch_index : {} ; batch_loss : {}".format(batch_index, batch_loss))

            print("train-batch-mean loss:{} coord_loss:{} pos_conf_loss:{} neg_conf_loss:{} class_loss:{} iou:{}".format(
                round(epoch_train_loss / train_len, 4),
                round(epoch_train_loss_coord / train_len, 4),
                round(epoch_train_loss_pos_conf / train_len, 4),
                round(epoch_train_loss_neg_conf / train_len, 4),
                round(epoch_train_loss_classes / train_len, 4),
                round(epoch_train_iou / epoch_train_object_num, 4)))


        val_len = val_loader.__len__()
        YOLO.eval()
        with tqdm(total=val_len) as tbar:
            with torch.no_grad():

                for batch_idx, [val_data, label_data] in enumerate(val_loader):
                    val_data = val_data.float().to(device=device, non_blocking=True)
                    label_data[0] = label_data[0].float().to(device=device, non_blocking=True)
                    label_data[1] = label_data[1].to(device=device, non_blocking=True)
                    label_data[2] = label_data[2].to(device=device, non_blocking=True)
                    loss = loss_function(bounding_boxes=YOLO(val_data), ground_labels=label_data)
                    sample_avg_loss = loss[0]
                    epoch_val_loss_coord = epoch_val_loss_coord + loss[1] * batch_size
                    epoch_val_loss_pos_conf = epoch_val_loss_pos_conf + loss[2] * batch_size
                    epoch_val_loss_neg_conf = epoch_val_loss_neg_conf + loss[3] * batch_size
                    epoch_val_loss_classes = epoch_val_loss_classes + loss[4] * batch_size
                    epoch_val_iou = epoch_val_iou + loss[5]
                    epoch_val_object_num = epoch_val_object_num + loss[6]
                    batch_loss = sample_avg_loss.item() * batch_size
                    epoch_val_loss = epoch_val_loss + batch_loss

                    tbar.set_description("val: coord_loss:{} pos_conf_loss:{} neg_conf_loss:{} class_loss:{} iou:{}".format(
                        round(loss[1], 4),
                        round(loss[2], 4),
                        round(loss[3], 4),
                        round(loss[4], 4),
                        round(loss[5]/ loss[6], 4)), refresh=True)
                    tbar.update(1)

                if args.feature_map_visualize:
                    feature_map_visualize(val_data[0][0], writer, YOLO)
                # print("batch_index : {} ; batch_loss : {}".format(batch_index, batch_loss))
            print("val-batch-mean loss:{} coord_loss:{} pos_conf_loss:{} neg_conf_loss:{} class_loss:{} iou:{}".format(
                round(epoch_val_loss / val_len, 4),
                round(epoch_val_loss_coord / val_len, 4),
                round(epoch_val_loss_pos_conf / val_len, 4),
                round(epoch_val_loss_neg_conf / val_len, 4),
                round(epoch_val_loss_classes / val_len, 4),
                round(epoch_val_iou / epoch_val_object_num, 4)))

        epoch = epoch + 1
        print("epoch : {} ; loss : {}".format(epoch, epoch_train_loss))

        if epoch == epoch_unfreeze:
            model.unfreeze_by_idxs(YOLO, [0, 1])

        if epoch == 2 + epoch_unfreeze:
            lr = lr_epoch_2
            for param_group in optimizer_SGD.param_groups:
                param_group["lr"] = lr
        elif epoch == 77 + epoch_unfreeze:
            lr = lr_epoch_77
            for param_group in optimizer_SGD.param_groups:
                param_group["lr"] = lr
        elif epoch == 107 + epoch_unfreeze:
            lr = lr_epoch_107
            for param_group in optimizer_SGD.param_groups:
                param_group["lr"] = lr

        if epoch_val_loss < epoch_val_loss_min:
            epoch_val_loss_min = epoch_val_loss
            # clone the tensors: state_dict() returns references to the live weights
            optimal_dict = {k: v.clone() for k, v in YOLO.state_dict().items()}

        if epoch % epoch_interval == 0:
            param_dict['model'] = YOLO.state_dict()
            param_dict['optimizer'] = optimizer_SGD.state_dict()
            param_dict['epoch'] = epoch
            param_dict['optimal'] = optimal_dict
            param_dict['epoch_val_loss_min'] = epoch_val_loss_min
            torch.save(param_dict, './weights/YOLOv1_' + str(epoch) + '.pth')
            writer.close()
            writer = SummaryWriter(logdir='log', filename_suffix='[' + str(epoch) + '~' + str(epoch + epoch_interval) + ']')

        if args.grad_visualize:
            for i, (name, layer) in enumerate(YOLO.named_parameters()):
                if 'bn' not in name and layer.grad is not None:
                    writer.add_histogram(name + '_grad', layer.grad.cpu().data.numpy(), epoch)
        '''
        for name, layer in YOLO.named_parameters():
            writer.add_histogram(name + '_grad', layer.grad.cpu().data.numpy(), epoch)
            writer.add_histogram(name + '_data', layer.cpu().data.numpy(), epoch)
        '''

        writer.add_scalar('Train/Loss_sum', epoch_train_loss, epoch)
        writer.add_scalar('Train/Loss_coord', epoch_train_loss_coord, epoch)
        writer.add_scalar('Train/Loss_pos_conf', epoch_train_loss_pos_conf, epoch)
        writer.add_scalar('Train/Loss_neg_conf', epoch_train_loss_neg_conf, epoch)
        writer.add_scalar('Train/Loss_classes', epoch_train_loss_classes, epoch)
        writer.add_scalar('Train/Epoch_iou', epoch_train_iou / epoch_train_object_num, epoch)

        writer.add_scalar('Val/Loss_sum', epoch_val_loss, epoch)
        writer.add_scalar('Val/Loss_coord', epoch_val_loss_coord, epoch)
        writer.add_scalar('Val/Loss_pos_conf', epoch_val_loss_pos_conf, epoch)
        writer.add_scalar('Val/Loss_neg_conf', epoch_val_loss_neg_conf, epoch)
        writer.add_scalar('Val/Loss_classes', epoch_val_loss_classes, epoch)
        writer.add_scalar('Val/Epoch_iou', epoch_val_iou / epoch_val_object_num, epoch)

    writer.close()

[Note]: In YOLOv1, whether a predicted box is a positive or a negative sample is decided at prediction time by its IoU with the ground truth. If an object's center falls inside a cell, then of the two boxes predicted by that cell, the one with the larger IoU against the ground-truth box is made responsible for fitting it, i.e. it becomes the positive sample; the other box becomes a negative sample.
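
A minimal sketch of this responsible-box assignment (assuming a compute_iou helper that scores two [x1, y1, x2, y2] boxes; the names here are illustrative, not the project's actual API):

import torch

def assign_responsible_box(pred_boxes, gt_box, compute_iou):
    # pred_boxes: tensor [2, 4], the two boxes one grid cell predicts
    # gt_box: tensor [4], the ground-truth box whose center lies in this cell
    ious = torch.tensor([compute_iou(pred_boxes[0], gt_box),
                         compute_iou(pred_boxes[1], gt_box)])
    positive_idx = int(torch.argmax(ious))  # larger IoU -> positive sample
    negative_idx = 1 - positive_idx         # the other box -> negative sample
    return positive_idx, negative_idx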

        In addition, at the start of training the backbone is frozen for the first 10 epochs so that the prediction part is trained first; after that, the prediction part and the feature-extraction part are trained jointly.
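
A minimal sketch of this freeze/unfreeze pattern (assuming the backbone layers live in an attribute such as Conv_Feature; that attribute name is an assumption about the detection model, not confirmed by the source):

def set_backbone_requires_grad(model, requires_grad):
    # freeze or unfreeze every parameter of the backbone feature extractor
    for param in model.Conv_Feature.parameters():
        param.requires_grad = requires_grad

set_backbone_requires_grad(YOLO, False)   # frozen for the first 10 epochs
# ... 10 epochs later ...
set_backbone_requires_grad(YOLO, True)    # joint training afterwards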

5. Post-processing YOLOv1 predictions -- the NMS algorithm

Typically, a detector's raw output contains many Bounding Boxes per target; the usual practice is to run Non-Maximum Suppression (NMS) over all of them, removing redundant boxes and keeping the best ones.

Algorithm: NMS

Input: a set p of Bounding Boxes, an IoU threshold, and a confidence threshold.

Output: the deduplicated set q of Bounding Boxes.

1. Remove from p every Bounding Box whose confidence is below the confidence threshold.

2. Pick the box with the highest confidence in p, move it out of p and into q, then compute the IoU between this box and every Bounding Box remaining in p, and remove those whose IoU with it exceeds the IoU threshold.

3. Repeat step 2 until p is empty.

4. Output q as the result set.

NMS:

import numpy as np

# bounding_boxes is expected to be already decoded, one row per grid cell:
# [x, y, dx, dy, c1, x, y, dx, dy, c2, p_class_0, ..., p_class_19]
def NMS(bounding_boxes, confidence_threshold, iou_threshold):
    # boxRow : x y dx dy c
    # 1. Initial filtering: for each grid cell, keep only the higher-confidence
    #    one of its two predicted bounding boxes
    boxes = []
    for boxRow in bounding_boxes:
        # skip the cell when neither of its two boxes reaches the confidence threshold
        if boxRow[4] < confidence_threshold and boxRow[9] < confidence_threshold:
            continue
        # class probabilities predicted by this cell
        classes = boxRow[10:]
        class_probability_index = np.argmax(classes)
        class_probability = classes[class_probability_index]
        # keep the box with the larger confidence, including its confidence score
        if boxRow[4] > boxRow[9]:
            box = list(boxRow[0:5])
        else:
            box = list(boxRow[5:10])
        # box : x y dx dy c class_probability_index class_probability
        box.append(class_probability_index)
        box.append(class_probability)
        boxes.append(box)

    # 2. Loop until the candidate set is empty
    predicted_boxes = []
    # sort the candidates by confidence, highest first
    boxes = sorted(boxes, key=lambda x: x[4], reverse=True)
    while len(boxes) != 0:
        # the box with the highest remaining confidence is always kept
        chosen_box = boxes.pop(0)
        predicted_boxes.append(chosen_box)
        # discard the remaining boxes whose IoU with the chosen box exceeds the threshold
        boxes = [box for box in boxes if iou(box, chosen_box) <= iou_threshold]

    return predicted_boxes
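
A hypothetical call, assuming the network output has already been decoded into a (49, 30) NumPy array of per-cell rows and that the project's iou helper is in scope (the threshold values below are illustrative):

# decoded_output: np.ndarray of shape (49, 30), one row per grid cell
results = NMS(decoded_output, confidence_threshold=0.2, iou_threshold=0.5)
for x, y, dx, dy, conf, cls_idx, cls_prob in results:
    # class-specific score = objectness confidence * class probability
    print("class {} at ({}, {}) score {:.2f}".format(cls_idx, x, y, conf * cls_prob))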

6. YOLO v1 analysis

    1. Strengths of the YOLO v1 network

① Low-channel 1*1 convolutions are paired with the 3*3 convolutions to compress the feature channels and reduce computation; the extra convolution layer also raises the model's non-linear expressive power (see the first sketch after this list).

② Dropout and data augmentation are used during training to keep the network from overfitting (see the second sketch after this list).

③ No Anchor mechanism is introduced; instead, box size and position are regressed directly in each region. By exploiting the positional information a region itself carries, and the fact that object scales lie within the range the network can regress, detection is turned into a regression problem.

④ YOLO v1 predicts the object class and the objectness confidence separately, which simplifies the problem. Experiments show that YOLO v1's background false-detection rate is lower than Fast R-CNN's, and that its error comes mainly from localization, as shown in Figure 4-7:
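
A minimal sketch of the 1*1 bottleneck pattern from point ① (the channel sizes mirror the 512->256->512 blocks of the backbone; the snippet itself is illustrative):

import torch
import torch.nn as nn

# the 1*1 conv compresses 512 channels to 256 before the 3*3 conv expands them back,
# which makes the 3*3 conv much cheaper than a direct 512->512 convolution
bottleneck = nn.Sequential(
    nn.Conv2d(512, 256, kernel_size=1, stride=1, padding=0),
    nn.LeakyReLU(0.1, inplace=True),
    nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1),
    nn.LeakyReLU(0.1, inplace=True),
)

x = torch.randn(1, 512, 14, 14)
print(bottleneck(x).shape)  # torch.Size([1, 512, 14, 14])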
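
For point ②, the YOLOv1 paper applies dropout with rate 0.5 after the first fully connected layer; a minimal sketch of such a head (layer sizes follow the 7*7*30 output convention, the exact head in this project may differ):

import torch.nn as nn

detection_head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(7 * 7 * 1024, 4096),
    nn.LeakyReLU(0.1, inplace=True),
    nn.Dropout(p=0.5),            # regularization against overfitting
    nn.Linear(4096, 7 * 7 * 30),  # reshaped to 7*7*30 downstream
)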

    2. Weaknesses of YOLO v1

① Each region predicts only two boxes, and the two share a single class vector, so YOLO v1 can detect only a limited number of objects and performs poorly on small objects and on objects close to one another. In practice, of the 7*7*2 = 98 predicted bounding boxes, at most 49 are effective, i.e. YOLO v1 can predict at most 49 objects per image.

② Because no Anchor mechanism is introduced and box shapes are learned directly from the data, the model generalizes poorly to objects with new or unusual aspect ratios. In addition, the large downsampling ratio limits the precision of box regression.

③ In the v1 loss, large and small objects carry the same localization-loss weight, so at the same relative localization error a large object produces a larger loss while a small object's loss takes only a small share of the total. In reality, however, a small absolute error on a small bounding box hurts IoU far more than the same error on a large one, leading to imprecise localization. The authors were aware of this; to keep YOLO v1 simple, they regress the square root of the scale instead, which raises the relative weight of small-object scale errors (a numeric illustration follows).
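
A small numeric illustration of the square-root trick (the widths are made up): the same 5-pixel error on widths of 20 and 200 gives identical squared losses, while the square-root version weighs the small box far more heavily.

import math

for w_true, w_pred in [(20, 25), (200, 205)]:
    plain = (w_true - w_pred) ** 2
    rooted = (math.sqrt(w_true) - math.sqrt(w_pred)) ** 2
    print(w_true, plain, round(rooted, 4))
# the plain squared loss is 25 for both boxes, but the sqrt loss is
# ~0.2786 for the 20-pixel box vs ~0.0309 for the 200-pixel box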

    3. YOLO v1 versus other networks

        Compared with traditional methods such as DPM, YOLO is more accurate; compared with the family of Two-stage algorithms represented by Fast R-CNN, YOLO loses a little accuracy but wins overwhelmingly on FPS. This balance of real-time speed and accuracy made industrial object detection with deep learning feasible.

7. Personal training optimizations (removed; the code now essentially follows the YOLOv1 paper)

    1. Fully convolutional structure

       To avoid the feature-map scrambling caused by reshaping convolutional output into a flat tensor, I also experimented with a fully convolutional structure for YOLO v1's prediction, using 1*1 convolutions for feature compression rather than direct downsampling, so that more useful features are preserved.
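
A minimal sketch of what such a fully convolutional prediction head could look like (my reading of the idea, not the repository's exact code): 1*1 convolutions produce the 30 per-cell predictions directly on the 7*7 map, with no flatten-and-reshape step.

import torch
import torch.nn as nn

conv_head = nn.Sequential(
    nn.Conv2d(1024, 256, kernel_size=1),  # 1*1 conv channel compression
    nn.LeakyReLU(0.1, inplace=True),
    nn.Conv2d(256, 30, kernel_size=1),    # 30 predictions per grid cell
)

x = torch.randn(1, 1024, 7, 7)
print(conv_head(x).shape)  # torch.Size([1, 30, 7, 7]) -- spatial layout preserved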

    2. Multi-step learning-rate schedule

       In deep learning the learning rate is usually large early on: it speeds up training and helps the optimizer escape saddle points and some local optima. Later, once the network has stably converged towards some minimum (in practice likely still a local one, since deep learning is not a convex problem and we can hardly expect the exact optimum, only a reasonably good solution), the learning rate should be reduced so that the network neither diverges nor keeps oscillating around the minimum, and instead descends along it.
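
The training loop above adjusts the learning rate by hand at epochs 2, 77 and 107; the same multi-step schedule could be written with PyTorch's built-in scheduler (a sketch; the milestones and gamma are illustrative, not the exact values used here):

import torch

optimizer = torch.optim.SGD(YOLO.parameters(), lr=1e-2, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[2, 77, 107], gamma=0.1)

for epoch in range(num_epochs):
    # ... train one epoch ...
    scheduler.step()  # drops the lr by 10x at each milestone epoch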

    3. Monitoring training with Tensorboard

       To monitor the training process better, I added Tensorboard support to the project.

    4. Next steps

       My plan is to reproduce a functionally complete network first; later I will add features such as dataset augmentation and keep optimizing the network's compute speed and GPU memory usage.

    5. Current network status

[Figure: convergence of the fully convolutional network]

[Figure: convergence of the original YOLO v1 network]

Project reproduction on GitHub: after several rounds of refactoring the original repository grew too large to upload, so everything has been moved to a new repository

GitHub - ProgrammerZhujinming/YOLO
