YOLOv5 Minimal Inference Code

Simplified code for loading a YOLOv5 model and running inference: everything fits in a single file, with no dependency on other project files.

If you hit the error below, download the stripped-down models folder. Download: Yolov5最简推理代码-深度学习文档类资源-CSDN下载

File "D:\ProgramData\Anaconda3\lib\site-packages\torch\serialization.py", line 853, in _load
result = unpickler.load()
ModuleNotFoundError: No module named 'models'

This error occurs because torch saved the model as a pickled object: the checkpoint records references to the defining modules and the network structure, but not the code that implements the computation. Unpickling the checkpoint therefore requires the model definitions in the models package to be importable.
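
If the inference script does not sit right next to the downloaded models folder, one workaround is to make the directory that contains models importable before loading the checkpoint. The sketch below is only illustrative; the path is a placeholder for wherever the folder was extracted.

import sys
sys.path.insert(0, r'D:\path\to\yolov5_root')  # placeholder: directory that contains the models/ package
import torch
ckpt = torch.load('best.pt', map_location='cpu')  # unpickling can now resolve modules such as models.yolo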

File "D:\ProgramData\Anaconda3\lib\site-packages\torch\serialization.py", line 853, in _load
result = unpickler.load()
ModuleNotFoundError: No module named 'models'

The minimal YOLOv5 inference code is shown below:

# -*- coding: utf-8 -*-
"""
Created on Fri Apr  1 16:12:34 2022

@author: suiyingy
"""

import cv2
import torch
import torchvision
import numpy as np
import time
import os

def xywh2xyxy(x):
    # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
    y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
    y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x
    y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y
    y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x
    y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y
    return y

def xyxy2xywh(x):
    # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right
    y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
    y[:, 0] = (x[:, 0] + x[:, 2]) / 2  # x center
    y[:, 1] = (x[:, 1] + x[:, 3]) / 2  # y center
    y[:, 2] = x[:, 2] - x[:, 0]  # width
    y[:, 3] = x[:, 3] - x[:, 1]  # height
    return y

def box_iou(box1, box2):
    # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
    """
    Return intersection-over-union (Jaccard index) of boxes.
    Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
    Arguments:
        box1 (Tensor[N, 4])
        box2 (Tensor[M, 4])
    Returns:
        iou (Tensor[N, M]): the NxM matrix containing the pairwise
            IoU values for every element in boxes1 and boxes2
    """

    def box_area(box):
        # box = 4xn
        return (box[2] - box[0]) * (box[3] - box[1])

    area1 = box_area(box1.T)
    area2 = box_area(box2.T)

    # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
    inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
    return inter / (area1[:, None] + area2 - inter)  # iou = inter / (area1 + area2 - inter)


def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False,
                        labels=(), max_det=300):
    """Runs Non-Maximum Suppression (NMS) on inference results

    Returns:
         list of detections, on (n,6) tensor per image [xyxy, conf, cls]
    """

    nc = prediction.shape[2] - 5  # number of classes
    xc = prediction[..., 4] > conf_thres  # candidates

    # Checks
    assert 0 <= conf_thres <= 1, f'Invalid Confidence threshold {conf_thres}, valid values are between 0.0 and 1.0'
    assert 0 <= iou_thres <= 1, f'Invalid IoU {iou_thres}, valid values are between 0.0 and 1.0'

    # Settings
    min_wh, max_wh = 2, 7680  # (pixels) minimum and maximum box width and height
    max_nms = 30000  # maximum number of boxes into torchvision.ops.nms()
    time_limit = 10.0  # seconds to quit after
    redundant = True  # require redundant detections
    multi_label &= nc > 1  # multiple labels per box (adds 0.5ms/img)
    merge = False  # use merge-NMS

    t = time.time()
    output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]
    for xi, x in enumerate(prediction):  # image index, image inference
        # Apply constraints
        x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0  # width-height
        x = x[xc[xi]]  # confidence

        # Cat apriori labels if autolabelling
        if labels and len(labels[xi]):
            lb = labels[xi]
            v = torch.zeros((len(lb), nc + 5), device=x.device)
            v[:, :4] = lb[:, 1:5]  # box
            v[:, 4] = 1.0  # conf
            v[range(len(lb)), lb[:, 0].long() + 5] = 1.0  # cls
            x = torch.cat((x, v), 0)

        # If none remain process next image
        if not x.shape[0]:
            continue

        # Compute conf
        x[:, 5:] *= x[:, 4:5]  # conf = obj_conf * cls_conf

        # Box (center x, center y, width, height) to (x1, y1, x2, y2)
        box = xywh2xyxy(x[:, :4])

        # Detections matrix nx6 (xyxy, conf, cls)
        if multi_label:
            i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
            x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
        else:  # best class only
            conf, j = x[:, 5:].max(1, keepdim=True)
            x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]

        # Filter by class
        if classes is not None:
            x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]

        # Apply finite constraint
        # if not torch.isfinite(x).all():
        #     x = x[torch.isfinite(x).all(1)]

        # Check shape
        n = x.shape[0]  # number of boxes
        if not n:  # no boxes
            continue
        elif n > max_nms:  # excess boxes
            x = x[x[:, 4].argsort(descending=True)[:max_nms]]  # sort by confidence

        # Batched NMS
        c = x[:, 5:6] * (0 if agnostic else max_wh)  # classes
        boxes, scores = x[:, :4] + c, x[:, 4]  # boxes (offset by class), scores
        i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
        if i.shape[0] > max_det:  # limit detections
            i = i[:max_det]
        if merge and (1 < n < 3E3):  # Merge NMS (boxes merged using weighted mean)
            # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
            iou = box_iou(boxes[i], boxes) > iou_thres  # iou matrix
            weights = iou * scores[None]  # box weights
            x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True)  # merged boxes
            if redundant:
                i = i[iou.sum(1) > 1]  # require redundancy

        output[xi] = x[i]
        if (time.time() - t) > time_limit:
            break  # time limit exceeded

    return output

def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
    # Rescale coords (xyxy) from img1_shape to img0_shape
    if ratio_pad is None:  # calculate from img0_shape
        gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])  # gain  = old / new
        pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2  # wh padding
    else:
        gain = ratio_pad[0][0]
        pad = ratio_pad[1]

    coords[:, [0, 2]] -= pad[0]  # x padding
    coords[:, [1, 3]] -= pad[1]  # y padding
    coords[:, :4] /= gain
    clip_coords(coords, img0_shape)
    return coords

def clip_coords(boxes, shape):
    # Clip bounding xyxy bounding boxes to image shape (height, width)
    if isinstance(boxes, torch.Tensor):  # faster individually
        boxes[:, 0].clamp_(0, shape[1])  # x1
        boxes[:, 1].clamp_(0, shape[0])  # y1
        boxes[:, 2].clamp_(0, shape[1])  # x2
        boxes[:, 3].clamp_(0, shape[0])  # y2
    else:  # np.array (faster grouped)
        boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, shape[1])  # x1, x2
        boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, shape[0])  # y1, y2

def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32):
    # Resize and pad image while meeting stride-multiple constraints
    shape = im.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    # Scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    if not scaleup:  # only scale down, do not scale up (for better val mAP)
        r = min(r, 1.0)

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding
    if auto:  # minimum rectangle
        dw, dh = np.mod(dw, stride), np.mod(dh, stride)  # wh padding
    elif scaleFill:  # stretch
        dw, dh = 0.0, 0.0
        new_unpad = (new_shape[1], new_shape[0])
        ratio = new_shape[1] / shape[1], new_shape[0] / shape[0]  # width, height ratios

    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
    return im, ratio, (dw, dh)



def preprocess(path, img_size, stride, auto):
    img_bgr = cv2.imread(path)  # BGR
    assert img_bgr is not None, f'Image Not Found {path}'
    # Padded resize
    img_rgb = letterbox(img_bgr, img_size, stride=stride, auto=auto)[0]

    # Convert
    img_rgb = img_rgb.transpose((2, 0, 1))[::-1]  # HWC to CHW, BGR to RGB
    img_rgb = np.ascontiguousarray(img_rgb)
    return img_rgb, img_bgr



class Detect():
    """Minimal YOLOv5 detector: loads a .pt checkpoint, runs inference on single images and saves txt labels."""
    def __init__(self, weights='yolov5s.pt'):
        self.device = 'cpu'
        self.weights = weights
        self.model = None
        self.imgsz = (640, 640)
        self.conf_thres = 0.25
        self.iou_thres = 0.45
        self.save_txt = True
        self.save_conf = True
        self.save_dir = './detect/'
        if not os.path.exists(self.save_dir):
            os.makedirs(self.save_dir) 
        if torch.cuda.is_available():  # use the first GPU whenever one is available
            self.device = torch.device('cuda:0')
        
        self.init_model()
        self.stride = max(int(self.model.stride.max()), 32)
        
        
        
    def init_model(self):
        ckpt = torch.load(self.weights, map_location=self.device)  # load
        ckpt = (ckpt.get('ema') or ckpt['model']).float()  # FP32 model
        fuse = True
        self.model = ckpt.fuse().eval() if fuse else ckpt.eval() # fused or un-fused model in eval mode
        self.model.float()
        
    def infer_image(self, image_path):
        im, im0 = preprocess(image_path, img_size=self.imgsz, stride=self.stride, auto=True)
        im = torch.from_numpy(im).to(self.device).float() / 255
        if len(im.shape) == 3:
            im = im[None]  # expand for batch dim

        # Inference
        pred = self.model(im, augment=False, visualize=False)[0]
        # NMS
        pred = non_max_suppression(pred, self.conf_thres, self.iou_thres, None, False, max_det=1000)
        det = pred[0]
        fn = os.path.basename(image_path)  # handles both '/' and '\' separated paths
        txt_path = self.save_dir + fn.rsplit('.', 1)[0] + '.txt'
        gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
    
        if len(det):
            # Rescale boxes from img_size to im0 size
            det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()
            # Write results
            for *xyxy, conf, cls in reversed(det):
                if self.save_txt:  # Write to file
                    xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                    line = (cls, *xywh, conf) if self.save_conf else (cls, *xywh)  # label format
                    with open(txt_path, 'a') as f:
                        f.write(('%g ' * len(line)).rstrip() % line + '\n')


if __name__ == "__main__":
    detect = Detect('best.pt')
    detect.infer_image('test.png')
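
With save_txt and save_conf enabled, every detection is appended to ./detect/<image name>.txt as one line in the normalized format class x_center y_center width height confidence. The entry point above handles a single image; as a minimal sketch, it could be replaced with a loop over a folder of images (the test_images folder name is only an example):

import glob

if __name__ == "__main__":
    detect = Detect('best.pt')
    for image_path in glob.glob('test_images/*.png'):  # example folder of test images
        detect.infer_image(image_path)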

More 3D and 2D perception algorithms, as well as quantitative finance algorithms, are covered on the "乐乐感知学堂" WeChat public account, which is updated continuously.
