Deploying the YOLOv5 Project on RK3588s Devices (RMYC 2023 Technical Share)

Preface

(Background on the author's competition; feel free to skip.)

In past RMYC seasons, most teams' vision systems relied on RoboMaster's official AI education kit. As the competition has gotten harder, the kit can no longer keep up with the technical requirements, and its high price has kept many teams from fielding even basic vision. After happening upon a YOLOv5 deployment on the Orange Pi 5, the author spent two weeks working out how to deploy YOLOv5 on the RK3588s and successfully wrote a program that automatically aligns the RMYC engineering robot when collecting projectiles.

The author stepped into plenty of pitfalls along the way, so this article was written as a reference for everyone.

Result demo:

 https://www.bilibili.com/video/BV1og4y1u799/?spm_id_from=333.999.0.0&vd_source=d232228705f824c1efbcaad8e6566462

Before deploying, please download the required files 🔗

Get the YOLOv5 v5.0 project (it must be the v5.0 codebase):

git clone https://github.com/airockchip/yolov5.git

Get the rknn-toolkit2 toolkit; check that your OS and Python version meet the project's installation requirements:

git clone https://github.com/rockchip-linux/rknn-toolkit2.git

1. Training Platform Environment Setup (x86 Ubuntu 20.04)

First, create a Python 3.8 virtual environment with Anaconda:

conda create -n yolov5-v5.0 python=3.8

Activate the environment and enter the yolov5 project directory (renamed to yolov5-v5.0 here):

conda activate yolov5-v5.0
cd yolov5-v5.0

Install the third-party packages yolov5-v5.0 needs, temporarily using the Tsinghua mirror:

pip3 install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

Upload your prepared dataset to the models directory over SFTP.

Enter the models directory, then create and edit a bottles.yaml (the file name is up to you; the extension must be .yaml):

cd models
vim bottles.yaml

train: ./models/bottle/images   # path to training images
val: ./models/bottle/images     # path to validation images
nc: 16                          # number of classes

names: ["dog","person","cat","tv","car","meatballs","marinara sauce",
        "tomato soup","chicken noodle soup","french onion soup",
        "chicken breast","ribs","pulled pork","hamburger","cavity","bottle"]

Adjust these parameters for your own dataset, then run train.py to start training:

python train.py --data bottles.yaml --cfg yolov5s.yaml --weights ./yolov5s.pt --epoch 50 --batch-size 8 --device 0

Below are fixes for some common training errors.

Since the code needs frequent editing, from here on I use remote development via VS Code's Remote extension.

Once it is installed, enter your password to connect to the Ubuntu host. Now let's work through the error messages.

Problem: Can't get attribute 'SPPF' on module 'models.common'

The error message is shown below. The YOLO team added the SPPF module in v6.0, and newer pretrained weights reference it, but the v5.0 code does not define it, hence the error. The fix is to add this class back into common.py in the models folder.

 File "train.py", line 543, in <module>
    train(hyp, opt, device, tb_writer)
  File "train.py", line 71, in train
    run_id = torch.load(weights).get('wandb_id') if weights.endswith('.pt') and os.path.isfile(weights) else None
  File "/home/gavin/anaconda3/envs/yolov5-v5.0/lib/python3.8/site-packages/torch/serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/home/gavin/anaconda3/envs/yolov5-v5.0/lib/python3.8/site-packages/torch/serialization.py", line 1172, in _load
    result = unpickler.load()
  File "/home/gavin/anaconda3/envs/yolov5-v5.0/lib/python3.8/site-packages/torch/serialization.py", line 1165, in find_class
    return super().find_class(mod_name, name)
AttributeError: Can't get attribute 'SPPF' on <module 'models.common' from '/home/gavin/桌面/Yolo/yolov5-v5.0/models/common.py'>

Enter the models directory, open common.py, and paste the following code into the file (remember to save):

class SPPF(nn.Module):
    # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
    def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
        super().__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_ * 4, c2, 1, 1)
        self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
 
    def forward(self, x):
        x = self.cv1(x)
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
            y1 = self.m(x)
            y2 = self.m(y1)
            return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))
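
Note that SPPF uses the warnings module; if the top of your common.py does not already import it, add:

import warnings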

Problem: module 'numpy' has no attribute 'int'

The error message is shown below. np.int was deprecated in NumPy 1.20 and removed in NumPy 1.24, so we need to install NumPy 1.22 instead.

File "/home/gavin/anaconda3/envs/yolov5-v5.0/lib/python3.8/site-packages/numpy/__init__.py", line 305, in __getattr__
    raise AttributeError(__former_attrs__[attr])
AttributeError: module 'numpy' has no attribute 'int'.
`np.int` was a deprecated alias for the builtin `int`. To avoid this error in existing code, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
    https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

Uninstall the NumPy currently in the virtual environment and install NumPy 1.22 from the Tsinghua mirror:

pip3 uninstall numpy
pip3 install numpy==1.22 -i https://pypi.tuna.tsinghua.edu.cn/simple
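
A quick sanity check that the pinned version is now the active one:

python3 -c "import numpy; print(numpy.__version__)"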

Problem: result type Float can't be cast to the desired output type long int

This is presumably version drift around yolov5-v5.0 (newer PyTorch is stricter about implicit float-to-long casts); in any case, loss.py in the utils directory needs patching.

Traceback (most recent call last):
  File "train.py", line 543, in <module>
    train(hyp, opt, device, tb_writer)
  File "train.py", line 304, in train
    loss, loss_items = compute_loss(pred, targets.to(device))  # loss scaled by batch_size
  File "/home/gavin/桌面/Yolo/yolov5-v5.0/utils/loss.py", line 117, in __call__
    tcls, tbox, indices, anchors = self.build_targets(p, targets)  # targets
  File "/home/gavin/桌面/Yolo/yolov5-v5.0/utils/loss.py", line 211, in build_targets
    indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1)))  # image, anchor, grid indices
RuntimeError: result type Float can't be cast to the desired output type long int

In loss.py, around line 178, find the line  anchors = self.anchors[i]  and replace it with:

 anchors, shape = self.anchors[i], p[i].shape 

Then, around line 213, find the line  indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1)))  and replace it with:

indices.append((b, a, gj.clamp_(0, shape[2] - 1), gi.clamp_(0, shape[3] - 1)))  # image, anchor, grid

If everything checks out, the model should now train successfully.

Never power the machine off during training; you can check the GPU usage of your NVIDIA card with the nvidia-smi command.
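
For continuous monitoring you can wrap it in watch, for example:

watch -n 1 nvidia-smi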

When training finishes, note the output location it prints (for yolov5 v5.0 this is typically runs/train/exp*/weights/best.pt).

2. Model Conversion Platform Environment Setup (x86 Ubuntu 20.04)

We have now trained our own dataset with the yolov5-v5.0 project and obtained the weights file; next we convert the .pt file into a .rknn file deployable on the RK3588s NPU.

Before exporting the model, we first need to modify yolo.py so that the exported graph stops at the three raw detection heads; the grid/sigmoid decoding then happens on the CPU in post-processing, which is what the RKNN demo code expects.

In yolo.py, find the forward method of class Detect(nn.Module) and change

def forward(self, x):
    z = []  # inference output
    for i in range(self.nl):
        x[i] = self.m[i](x[i])  # conv
        bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
        x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
        if not self.training:  # inference
            if self.dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
                self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)
            if isinstance(self, Segment):  # (boxes + masks)
                xy, wh, conf, mask = x[i].split((2, 2, self.nc + 1, self.no - self.nc - 5), 4)
                xy = (xy.sigmoid() * 2 + self.grid[i]) * self.stride[i]  # xy
                wh = (wh.sigmoid() * 2) ** 2 * self.anchor_grid[i]  # wh
                y = torch.cat((xy, wh, conf.sigmoid(), mask), 4)
            else:  # Detect (boxes only)
                xy, wh, conf = x[i].sigmoid().split((2, 2, self.nc + 1), 4)
                xy = (xy * 2 + self.grid[i]) * self.stride[i]  # xy
                wh = (wh * 2) ** 2 * self.anchor_grid[i]  # wh
                y = torch.cat((xy, wh, conf), 4)
            z.append(y.view(bs, self.na * nx * ny, self.no))
    return x if self.training else (torch.cat(z, 1),) if self.export else (torch.cat(z, 1), x)

to:

def forward(self, x):
    z = []  # inference output
    for i in range(self.nl):
        x[i] = self.m[i](x[i])  # conv
    return x

Next, we convert the .pt file to a .onnx file. Activate the virtual environment and install the onnx package:

conda activate yolov5-v5.0
conda install onnx

Start the conversion (remember to adjust the paths for your setup); when you see the message "Export complete", the export succeeded:

python models/export.py --weights 'bottles/weights/best.pt' --img-size 640 --batch-size 1

You can now see over SFTP that the model (best.onnx) was exported successfully!

Next, we set up the environment needed by the RKNN model conversion project, rknn-toolkit2-master:

conda create -n rknn python=3.8
conda activate rknn

Enter the doc directory of the rknn-toolkit2-master project:

cd rknn-toolkit2-master/doc

Install rknn-toolkit2's dependencies:

pip3 install -r requirements_cp38-1.4.0.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

Move to the packages directory and install the rknn-toolkit2 wheel:

cd ..
cd packages
pip3 install rknn_toolkit2-1.4.0_22dcfef4-cp38-cp38-linux_x86_64.whl -i https://pypi.tuna.tsinghua.edu.cn/simple

Verify that the installation succeeded.
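
A minimal import check will do:

python3 -c "from rknn.api import RKNN; print('rknn-toolkit2 OK')"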

Enter the yolov5 example folder:

cd ..
cd examples/onnx/yolov5
vim test.py

The parts of test.py that need modification:

ONNX_MODEL = 'yolov5s.onnx'
RKNN_MODEL = 'yolov5s.rknn'
IMG_PATH = './bus.jpg'
DATASET = './dataset.txt'

QUANTIZE_ON = True

OBJ_THRESH = 0.25
NMS_THRESH = 0.45
IMG_SIZE = 640

CLASSES = ("person", "bicycle", "car", "motorbike ", "aeroplane ", "bus ", "train", "truck ", "boat", "traffic light",
           "fire hydrant", "stop sign ", "parking meter", "bench", "bird", "cat", "dog ", "horse ", "sheep", "cow", "elephant",
           "bear", "zebra ", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite",
           "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife ",
           "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza ", "donut", "cake", "chair", "sofa",
           "pottedplant", "bed", "diningtable", "toilet ", "tvmonitor", "laptop	", "mouse	", "remote ", "keyboard ", "cell phone", "microwave ",
           "oven ", "toaster", "sink", "refrigerator ", "book", "clock", "vase", "scissors ", "teddy bear ", "hair drier", "toothbrush ")

Here, ONNX_MODEL must be changed to the exported best.onnx, and RKNN_MODEL is the file name for the converted RKNN model.

CLASSES must be changed to your own model's classes; my own modification (for reference) is the same 16-class tuple used again in Part 3:

CLASSES = ('dog', "person", "cat", "tv", "car", "meatballs", "marinara sauce", "tomato soup", "chicken noodle soup",
"french onion soup", "chicken breast", "ribs", "pulled pork", "hamburger", "cavity", "bottle")

Pay special attention to the rknn.config() call: you must append the target_platform argument, otherwise the model is converted for the default target and will be built for deployment on RK3568:

    # pre-process config
    print('--> Config model')
    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]])
    print('done')

Adding "rk3588" as the target_platform argument avoids the problem; the modified call looks like this:
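
    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')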

Once the modifications are done, run test.py to start converting the model:

python test.py

When the run finishes, you will find the generated best.rknn model in the current folder.

3. RK3588s Deployment Setup (ARM64 Debian 11.6)

This deployment uses an Orange Pi 5 as the example, but in principle any RK3588-based device works the same way. Before starting, download the rknn-toolkit2 toolkit onto the board and copy over the .rknn model converted earlier.

Create a Python 3.9 virtual environment and activate it:

conda create -n rknn python=3.9
conda activate rknn

Install the rknn_toolkit_lite2 package with pip:

cd rknn-toolkit2-master/rknn_toolkit_lite2/packages
pip3 install rknn_toolkit_lite2-1.4.0-cp39-cp39-linux_aarch64.whl -i https://pypi.tuna.tsinghua.edu.cn/simple

Verify that the installation succeeded.
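
Again a minimal import check, this time for the lite runtime:

python3 -c "from rknnlite.api import RKNNLite; print('rknn_toolkit_lite2 OK')"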

Jupyter Lab Setup

We will need to debug remotely later, and Jupyter Lab is a convenient tool for that.

Jupyter Lab is the successor to Jupyter Notebook, adding multiple file editors, terminals, Markdown support, a file manager, and more, helping data science teams collaborate, version, and share projects.

Because it is accessed through a web browser, remote debugging with Jupyter Lab is very convenient.

Install Jupyter Lab with pip:

pip3 install jupyterlab -i https://pypi.tuna.tsinghua.edu.cn/simple

Generate Jupyter Lab's configuration file:

jupyter lab --generate-config

Edit the configuration file:

vim ~/.jupyter/jupyter_notebook_config.py

Add the following at the very bottom of the file:

# allow access from any IP
c.NotebookApp.ip = '*'
# do not open a browser automatically when Jupyter Lab starts
c.NotebookApp.open_browser = False
# listening port, set to 8888 here (change as you like)
c.NotebookApp.port = 8888
# login password (set your own)
c.NotebookApp.password = "*********"
# allow remote access
c.NotebookApp.allow_remote_access = True
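
Note that c.NotebookApp.password expects a hashed password rather than plain text; one way to generate the hash (assuming the classic notebook package is available) is:

python3 -c "from notebook.auth import passwd; print(passwd())"

and paste the printed hash string into the config.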

Once the configuration is done, start Jupyter Lab:

jupyter lab

Before opening Jupyter Lab, connect your PC and the dev board to the same LAN. Then browse to the board's ip:port/lab, e.g. 192.168.1.112:8888/lab, enter your password, and you can start using Jupyter Lab. If you don't want Jupyter Lab to shut down whenever the SSH connection drops, use the screen utility.

Installing screen:

sudo apt-get update
sudo apt-get install screen
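
A typical workflow is to start a named session, launch Jupyter Lab inside it, and detach, so it survives an SSH disconnect:

screen -S jupyter   # create a session named "jupyter"
jupyter lab         # start Jupyter Lab inside the session
# detach with Ctrl+A then D; reattach later with:
screen -r jupyter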

Enter the yolov5 example directory of the rknn-toolkit2 project and place the best.rknn model there:

cd rknn-toolkit2-master/examples/onnx/yolov5
vim model_test.py

First we write a short script to check that the converted model imports correctly (remember to run it in the Python virtual environment where rknnlite is installed):

from rknnlite.api import RKNNLite as RKNN
if __name__ == '__main__':
    # Create RKNN object
    rknn = RKNN()
    rknn.load_rknn("best.rknn")
    # Init runtime environment
    print('--> Init runtime environment')
    ret = rknn.init_runtime(core_mask=RKNN.NPU_CORE_0)
    if ret != 0:
        print('Init runtime environment failed!')
    else:
        print('done')

The program output shows the model imports normally, confirming that the conversion is correct. Next we test the official yolov5 demo code (test.py); unmodified, it reads:

import os
import urllib
import traceback
import time
import sys
import numpy as np
import cv2
from rknn.api import RKNN

ONNX_MODEL = 'yolov5s.onnx'
RKNN_MODEL = 'yolov5s.rknn'
IMG_PATH = './bus.jpg'
DATASET = './dataset.txt'

QUANTIZE_ON = True

OBJ_THRESH = 0.25
NMS_THRESH = 0.45
IMG_SIZE = 640

CLASSES = ("person", "bicycle", "car", "motorbike ", "aeroplane ", "bus ", "train", "truck ", "boat", "traffic light",
           "fire hydrant", "stop sign ", "parking meter", "bench", "bird", "cat", "dog ", "horse ", "sheep", "cow", "elephant",
           "bear", "zebra ", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite",
           "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife ",
           "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza ", "donut", "cake", "chair", "sofa",
           "pottedplant", "bed", "diningtable", "toilet ", "tvmonitor", "laptop	", "mouse	", "remote ", "keyboard ", "cell phone", "microwave ",
           "oven ", "toaster", "sink", "refrigerator ", "book", "clock", "vase", "scissors ", "teddy bear ", "hair drier", "toothbrush ")


def sigmoid(x):
    return 1 / (1 + np.exp(-x))


def xywh2xyxy(x):
    # Convert [x, y, w, h] to [x1, y1, x2, y2]
    y = np.copy(x)
    y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x
    y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y
    y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x
    y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y
    return y


def process(input, mask, anchors):

    anchors = [anchors[i] for i in mask]
    grid_h, grid_w = map(int, input.shape[0:2])

    box_confidence = sigmoid(input[..., 4])
    box_confidence = np.expand_dims(box_confidence, axis=-1)

    box_class_probs = sigmoid(input[..., 5:])

    box_xy = sigmoid(input[..., :2])*2 - 0.5

    col = np.tile(np.arange(0, grid_w), grid_w).reshape(-1, grid_w)
    row = np.tile(np.arange(0, grid_h).reshape(-1, 1), grid_h)
    col = col.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    row = row.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    grid = np.concatenate((col, row), axis=-1)
    box_xy += grid
    box_xy *= int(IMG_SIZE/grid_h)

    box_wh = pow(sigmoid(input[..., 2:4])*2, 2)
    box_wh = box_wh * anchors

    box = np.concatenate((box_xy, box_wh), axis=-1)

    return box, box_confidence, box_class_probs


def filter_boxes(boxes, box_confidences, box_class_probs):
    """Filter boxes with box threshold. It's a bit different with origin yolov5 post process!

    # Arguments
        boxes: ndarray, boxes of objects.
        box_confidences: ndarray, confidences of objects.
        box_class_probs: ndarray, class_probs of objects.

    # Returns
        boxes: ndarray, filtered boxes.
        classes: ndarray, classes for boxes.
        scores: ndarray, scores for boxes.
    """
    boxes = boxes.reshape(-1, 4)
    box_confidences = box_confidences.reshape(-1)
    box_class_probs = box_class_probs.reshape(-1, box_class_probs.shape[-1])

    _box_pos = np.where(box_confidences >= OBJ_THRESH)
    boxes = boxes[_box_pos]
    box_confidences = box_confidences[_box_pos]
    box_class_probs = box_class_probs[_box_pos]

    class_max_score = np.max(box_class_probs, axis=-1)
    classes = np.argmax(box_class_probs, axis=-1)
    _class_pos = np.where(class_max_score >= OBJ_THRESH)

    boxes = boxes[_class_pos]
    classes = classes[_class_pos]
    scores = (class_max_score* box_confidences)[_class_pos]

    return boxes, classes, scores


def nms_boxes(boxes, scores):
    """Suppress non-maximal boxes.

    # Arguments
        boxes: ndarray, boxes of objects.
        scores: ndarray, scores of objects.

    # Returns
        keep: ndarray, index of effective boxes.
    """
    x = boxes[:, 0]
    y = boxes[:, 1]
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]

    areas = w * h
    order = scores.argsort()[::-1]

    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)

        xx1 = np.maximum(x[i], x[order[1:]])
        yy1 = np.maximum(y[i], y[order[1:]])
        xx2 = np.minimum(x[i] + w[i], x[order[1:]] + w[order[1:]])
        yy2 = np.minimum(y[i] + h[i], y[order[1:]] + h[order[1:]])

        w1 = np.maximum(0.0, xx2 - xx1 + 0.00001)
        h1 = np.maximum(0.0, yy2 - yy1 + 0.00001)
        inter = w1 * h1

        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        inds = np.where(ovr <= NMS_THRESH)[0]
        order = order[inds + 1]
    keep = np.array(keep)
    return keep


def yolov5_post_process(input_data):
    masks = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
    anchors = [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
               [59, 119], [116, 90], [156, 198], [373, 326]]

    boxes, classes, scores = [], [], []
    for input, mask in zip(input_data, masks):
        b, c, s = process(input, mask, anchors)
        b, c, s = filter_boxes(b, c, s)
        boxes.append(b)
        classes.append(c)
        scores.append(s)

    boxes = np.concatenate(boxes)
    boxes = xywh2xyxy(boxes)
    classes = np.concatenate(classes)
    scores = np.concatenate(scores)

    nboxes, nclasses, nscores = [], [], []
    for c in set(classes):
        inds = np.where(classes == c)
        b = boxes[inds]
        c = classes[inds]
        s = scores[inds]

        keep = nms_boxes(b, s)

        nboxes.append(b[keep])
        nclasses.append(c[keep])
        nscores.append(s[keep])

    if not nclasses and not nscores:
        return None, None, None

    boxes = np.concatenate(nboxes)
    classes = np.concatenate(nclasses)
    scores = np.concatenate(nscores)

    return boxes, classes, scores


def draw(image, boxes, scores, classes):
    """Draw the boxes on the image.

    # Argument:
        image: original image.
        boxes: ndarray, boxes of objects.
        classes: ndarray, classes of objects.
        scores: ndarray, scores of objects.
        all_classes: all classes name.
    """
    for box, score, cl in zip(boxes, scores, classes):
        top, left, right, bottom = box
        print('class: {}, score: {}'.format(CLASSES[cl], score))
        print('box coordinate left,top,right,down: [{}, {}, {}, {}]'.format(top, left, right, bottom))
        top = int(top)
        left = int(left)
        right = int(right)
        bottom = int(bottom)

        cv2.rectangle(image, (top, left), (right, bottom), (255, 0, 0), 2)
        cv2.putText(image, '{0} {1:.2f}'.format(CLASSES[cl], score),
                    (top, left - 6),
                    cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 0, 255), 2)


def letterbox(im, new_shape=(640, 640), color=(0, 0, 0)):
    # Resize and pad image while meeting stride-multiple constraints
    shape = im.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    # Scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding

    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
    return im, ratio, (dw, dh)


if __name__ == '__main__':

    # Create RKNN object
    rknn = RKNN(verbose=True)

    # pre-process config
    print('--> Config model')
    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]])
    print('done')

    # Load ONNX model
    print('--> Loading model')
    ret = rknn.load_onnx(model=ONNX_MODEL)
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    # Build model
    print('--> Building model')
    ret = rknn.build(do_quantization=QUANTIZE_ON, dataset=DATASET)
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')

    # Export RKNN model
    print('--> Export rknn model')
    ret = rknn.export_rknn(RKNN_MODEL)
    if ret != 0:
        print('Export rknn model failed!')
        exit(ret)
    print('done')

    # Init runtime environment
    print('--> Init runtime environment')
    ret = rknn.init_runtime()
    # ret = rknn.init_runtime('rk3566')
    if ret != 0:
        print('Init runtime environment failed!')
        exit(ret)
    print('done')

    # Set inputs
    img = cv2.imread(IMG_PATH)
    # img, ratio, (dw, dh) = letterbox(img, new_shape=(IMG_SIZE, IMG_SIZE))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))

    # Inference
    print('--> Running model')
    outputs = rknn.inference(inputs=[img])
    np.save('./onnx_yolov5_0.npy', outputs[0])
    np.save('./onnx_yolov5_1.npy', outputs[1])
    np.save('./onnx_yolov5_2.npy', outputs[2])
    print('done')

    # post process
    input0_data = outputs[0]
    input1_data = outputs[1]
    input2_data = outputs[2]

    input0_data = input0_data.reshape([3, -1]+list(input0_data.shape[-2:]))
    input1_data = input1_data.reshape([3, -1]+list(input1_data.shape[-2:]))
    input2_data = input2_data.reshape([3, -1]+list(input2_data.shape[-2:]))

    input_data = list()
    input_data.append(np.transpose(input0_data, (2, 3, 0, 1)))
    input_data.append(np.transpose(input1_data, (2, 3, 0, 1)))
    input_data.append(np.transpose(input2_data, (2, 3, 0, 1)))

    boxes, classes, scores = yolov5_post_process(input_data)

    img_1 = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
    if boxes is not None:
        draw(img_1, boxes, scores, classes)
    # show output
    # cv2.imshow("post process result", img_1)
    # cv2.waitKey(0)
    # cv2.destroyAllWindows()

    rknn.release()

Now let's modify the code above into a simple detection demo test program. All of the following code changes are for reference only!

First, change:

from rknn.api import RKNN

to the lite-runtime import, and additionally import matplotlib for displaying results:

from rknnlite.api import RKNNLite as RKNN
from matplotlib import pyplot as plt

Change CLASSES from the default COCO classes:

CLASSES = ("person", "bicycle", "car", "motorbike ", "aeroplane ", "bus ", "train", "truck ", "boat", "traffic light",
           "fire hydrant", "stop sign ", "parking meter", "bench", "bird", "cat", "dog ", "horse ", "sheep", "cow", "elephant",
           "bear", "zebra ", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite",
           "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife ",
           "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza ", "donut", "cake", "chair", "sofa",
           "pottedplant", "bed", "diningtable", "toilet ", "tvmonitor", "laptop	", "mouse	", "remote ", "keyboard ", "cell phone", "microwave ",
           "oven ", "toaster", "sink", "refrigerator ", "book", "clock", "vase", "scissors ", "teddy bear ", "hair drier", "toothbrush ")

to our own model's classes. Although we only need to detect the single class "bottle", CLASSES must list all 16 classes the model was trained on, in training order:

CLASSES = ('dog', "person", "cat", "tv", "car", "meatballs", "marinara sauce", "tomato soup", "chicken noodle soup",
"french onion soup", "chicken breast","ribs", "pulled pork", "hamburger", "cavity", "bottle")

Add an rknn_init() function that creates the RKNN object, loads the model, and selects how many NPU cores to use for inference:

def rknn_init(number):
    QUANTIZE_ON = True

    OBJ_THRESH = 0.25
    NMS_THRESH = 0.45
    IMG_SIZE = 640

    CLASSES = ('dog', "person", "cat", "tv", "car", "meatballs", "marinara sauce", "tomato soup", "chicken noodle soup","french onion soup", "chicken breast","ribs", "pulled pork", "hamburger", "cavity", "bottle")

    rknn = RKNN()  # create the RKNNLite object
    number = int(number)

    rknn.load_rknn("best.rknn")  # load the converted model
    # Init runtime environment
    print('--> Init runtime environment')
    if number == 1:
        ret = rknn.init_runtime(core_mask=RKNN.NPU_CORE_0)  # single-core NPU inference
    elif number == 2:
        ret = rknn.init_runtime(core_mask=RKNN.NPU_CORE_0_1)  # dual-core NPU inference
    elif number == 3:
        ret = rknn.init_runtime(core_mask=RKNN.NPU_CORE_0_1_2)  # triple-core NPU inference
    else:
        ret = rknn.init_runtime(core_mask=RKNN.NPU_CORE_0)  # default: single core
    if ret != 0:
        print('Init runtime environment failed!')
        exit(ret)
    print('done')
    return rknn

Next, add a detect_bottles() function that runs inference on a frame and extracts the results:

def detect_bottles(image, rknn):
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (IMG_SIZE, IMG_SIZE))
    t1 = time.time()
    outputs = rknn.inference(inputs=[image])  # run the frame through the NPU
    print(time.time() - t1)  # print the inference time

    # post process
    input0_data = outputs[0]
    input1_data = outputs[1]
    input2_data = outputs[2]

    input0_data = input0_data.reshape([3, -1]+list(input0_data.shape[-2:]))
    input1_data = input1_data.reshape([3, -1]+list(input1_data.shape[-2:]))
    input2_data = input2_data.reshape([3, -1]+list(input2_data.shape[-2:]))

    input_data = list()
    input_data.append(np.transpose(input0_data, (2, 3, 0, 1)))
    input_data.append(np.transpose(input1_data, (2, 3, 0, 1)))
    input_data.append(np.transpose(input2_data, (2, 3, 0, 1)))
    boxes, classes, scores = yolov5_post_process(input_data)  # decode boxes, classes and confidences
    img_1 = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

    if boxes is not None:  # if anything was detected:
        draw(img_1, boxes, scores, classes)  # draw the boxes onto the image
    return img_1

Finally, modify the draw() function (this version only draws the boxes, without printing or labeling them):

def draw(image, boxes, scores, classes):
    """Draw the boxes on the image.

    # Argument:
        image: original image.
        boxes: ndarray, boxes of objects.
        classes: ndarray, classes of objects.
        scores: ndarray, scores of objects.
    """
    for box, score, cl in zip(boxes, scores, classes):
        top, left, right, bottom = box
        top = int(top)
        left = int(left)
        right = int(right)
        bottom = int(bottom)
        if score >= 0.3:  # only draw detections with at least 30% confidence
            cv2.rectangle(image, (top, left), (right, bottom), (255, 0, 0), 2)  # draw the box
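
If you also need the target's position for aiming logic like the bottle-alignment program mentioned in the preface, a hypothetical variant of draw() could collect and return the box centers (a sketch, not the author's original code):

def draw_and_locate(image, boxes, scores, classes):
    # Hypothetical helper: draw each confident box and return
    # (class name, score, center point) tuples for downstream aiming logic.
    targets = []
    for box, score, cl in zip(boxes, scores, classes):
        x1, y1, x2, y2 = map(int, box)
        if score >= 0.3:
            cv2.rectangle(image, (x1, y1), (x2, y2), (255, 0, 0), 2)
            targets.append((CLASSES[cl], float(score), ((x1 + x2) // 2, (y1 + y2) // 2)))
    return targets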

The modifications are done. Create a new notebook in Jupyter Lab and run:

from test import *

Now pass an image through for a test inference.

Read the image and initialize the RKNN object:

image = cv2.imread("1.jpg")  # read the test image
rknn = rknn_init(3)  # use all three NPU cores for inference

Run inference on the image and display the result:

result = detect_bottles(image, rknn)  # get the inference result
plt.imshow(result)
plt.show()  # display the result inside Jupyter Lab

 

At this point we can run yolov5 inference on the RK3588s NPU, and the whole deployment walkthrough ends here. Good luck with your deployments!
