RK3588 YOLOv8 Instance Segmentation Deployment

Environment:
YOLOv8 version: 8.0.151

1. Train your own model (details omitted)

from ultralytics import YOLO

if __name__ == '__main__':
    # Load the model
    model = YOLO(r'yolov8-seg.yaml')  # train from scratch, without pretrained weights
    # model = YOLO(r'yolov8-seg.yaml').load("yolov8n-seg.pt")  # train from pretrained weights
    # Training parameters ----------------------------------------------------------------------------------------------
    model.train(
        data=r'coco128bak-seg.yaml',
        epochs=300,  # (int) number of training epochs
        batch=64,  # (int) images per batch (-1 for auto batch size)
        device='',  # (int | str | list, optional) device to run on, e.g. cuda device=0 or device=0,1,2,3 or device=cpu
        workers=8,  # (int) number of dataloader worker threads (per DDP process)
    )

2. Convert the .pt model to ONNX, then to RKNN

a. Edit the model field in ultralytics/cfg/default.yaml:
model: path/to/your/trained/weights/best.pt

b. Run ultralytics/engine/exporter.py to convert the .pt model to ONNX; the path of the generated model is printed to the console.
python ultralytics/engine/exporter.py
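
If you prefer not to edit default.yaml, the stock ultralytics API can also export to ONNX directly. A minimal sketch, with the weights path as a placeholder (note: the RKNN demo expects the branch-split output heads produced by Rockchip's patched exporter, so use whichever route matches your ultralytics fork):

from ultralytics import YOLO

model = YOLO('runs/segment/train/weights/best.pt')  # placeholder path to your trained weights
model.export(format='onnx', imgsz=640, opset=12)  # prints the path of the generated .onnx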

c. Use rknn-toolkit2 to convert the ONNX model to RKNN. Installing the toolkit is omitted here; see [RK git](https://github.com/rockchip-linux/rknn-toolkit2).
Example conversion command: python convert.py yolov8-seg.onnx rk3588
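
For reference, a minimal sketch of what such a convert.py does with the rknn-toolkit2 Python API (dataset.txt and the file paths are placeholders; the model_zoo script adds more options):

from rknn.api import RKNN

rknn = RKNN(verbose=True)
# Normalization must match training: YOLOv8 feeds 0-255 pixels scaled to 0-1,
# which matches the input scale=0.003922 (1/255) in the runtime log below.
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
rknn.load_onnx(model='yolov8-seg.onnx')
rknn.build(do_quantization=True, dataset='./dataset.txt')  # INT8 quantization needs a list of calibration images
rknn.export_rknn('yolov8-seg.rknn')
rknn.release()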

3. Cross-compile the C++ deployment code
Rockchip already provides reference deployment code in the rknn_model_zoo repository.
Clone it and point the cross-compiler toolchain path in rknn_model_zoo/build-linux.sh at your own toolchain; that is all it takes to build the official demo. Then run:

chmod 777 build-linux.sh
./build-linux.sh -t rk3588 -a aarch64 -d yolov8_seg

The build output is packaged under install/. Edit install/rk3588_linux_aarch64/rknn_yolov8_seg_demo/model/coco_80_labels_list.txt to list your own class names (one per line), then copy the converted RKNN model together with this demo directory onto the RK3588 board.

4. Deploy on the RK3588 board
On the board, run the program with its expected arguments:


./rknn_yolov8_seg_demo model/yolov8-seg.rknn test.jpg

Sample output:
load lable ./model/coco_80_labels_list.txt
model input num: 1, output num: 13
input tensors:
  index=0, name=images, n_dims=4, dims=[1, 640, 640, 3], n_elems=1228800, size=1228800, fmt=NHWC, type=INT8, qnt_type=AFFINE, zp=-128, scale=0.003922
output tensors:
  index=0, name=375, n_dims=4, dims=[1, 64, 80, 80], n_elems=409600, size=409600, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=-3, scale=0.113323
  index=1, name=onnx::ReduceSum_383, n_dims=4, dims=[1, 10, 80, 80], n_elems=64000, size=64000, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=-128, scale=0.003346
  index=2, name=388, n_dims=4, dims=[1, 1, 80, 80], n_elems=6400, size=6400, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=-128, scale=0.003505
  index=3, name=354, n_dims=4, dims=[1, 32, 80, 80], n_elems=204800, size=204800, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=-9, scale=0.023833
  index=4, name=395, n_dims=4, dims=[1, 64, 40, 40], n_elems=102400, size=102400, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=7, scale=0.094574
  index=5, name=onnx::ReduceSum_403, n_dims=4, dims=[1, 10, 40, 40], n_elems=16000, size=16000, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=-128, scale=0.003012
  index=6, name=407, n_dims=4, dims=[1, 1, 40, 40], n_elems=1600, size=1600, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=-128, scale=0.003036
  index=7, name=361, n_dims=4, dims=[1, 32, 40, 40], n_elems=51200, size=51200, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=-5, scale=0.022464
  index=8, name=414, n_dims=4, dims=[1, 64, 20, 20], n_elems=25600, size=25600, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=3, scale=0.068694
  index=9, name=onnx::ReduceSum_422, n_dims=4, dims=[1, 10, 20, 20], n_elems=4000, size=4000, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=-128, scale=0.002477
  index=10, name=426, n_dims=4, dims=[1, 1, 20, 20], n_elems=400, size=400, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=-128, scale=0.002487
  index=11, name=368, n_dims=4, dims=[1, 32, 20, 20], n_elems=12800, size=12800, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=-3, scale=0.018722
  index=12, name=347, n_dims=4, dims=[1, 32, 160, 160], n_elems=819200, size=819200, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=-117, scale=0.025950
model is NHWC input fmt
model input height=640, width=640, channel=3
origin size=3984x1840 crop size=3984x1840
input image: 3984 x 1840, subsampling: 4:2:0, colorspace: YCbCr, orientation: 1
scale=0.160643 dst_box=(0 172 639 465) allow_slight_change=1 _left_offset=0 _top_offset=172 padding_w=0 padding_h=346
src width=3984 height=1840 fmt=0x1 virAddr=0x0x7fbbf9e010 fd=0
dst width=640 height=640 fmt=0x1 virAddr=0x0x38ec4000 fd=0
src_box=(0 0 3983 1839)
dst_box=(0 172 639 465)
color=0x72
rga_api version 1.10.0_[2]
fill dst image (x y w h)=(0 0 640 640) with color=0x72727272
 RgaCollorFill(1756) RGA_COLORFILL fail: Invalid argument
 RgaCollorFill(1757) RGA_COLORFILL fail: Invalid argument
230 im2d_rga_impl rga_task_submit(2100): Failed to call RockChipRga interface, please use 'dmesg' command to view driver error log.
230 im2d_rga_impl rga_dump_channel_info(1452): src_channel: 
  rect[x,y,w,h] = [0, 0, 0, 0]
  image[w,h,ws,hs,f] = [0, 0, 0, 0, rgba8888]
  buffer[handle,fd,va,pa] = [0, 0, 0, 0]
  color_space = 0x0, global_alpha = 0x0, rd_mode = 0x0

230 im2d_rga_impl rga_dump_channel_info(1452): dst_channel: 
  rect[x,y,w,h] = [0, 0, 640, 640]
  image[w,h,ws,hs,f] = [640, 640, 640, 640, rgb888]
  buffer[handle,fd,va,pa] = [108, 0, 0, 0]
  color_space = 0x0, global_alpha = 0xff, rd_mode = 0x1

230 im2d_rga_impl rga_dump_opt(1502): opt version[0x0]:

230 im2d_rga_impl rga_dump_opt(1503): set_core[0x0], priority[0]

230 im2d_rga_impl rga_dump_opt(1506): color[0x72727272] 
230 im2d_rga_impl rga_dump_opt(1515): 

230 im2d_rga_impl rga_task_submit(2109): acquir_fence[-1], release_fence_ptr[0x0], usage[0x280000]

rknn_run
carpet @ (2975 902 3965 1475) 0.768
wire @ (1170 547 2203 1425) 0.768
null @ (946 31 1145 168) 0.512
write_image path: out.png width=3984 height=1840 channel=3 data=0x7fbbf9e010
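
For reference, the scale and dst_box values in the log come from letterbox preprocessing: the image is resized with its aspect ratio preserved, centered, and padded to 640x640 with fill color 0x72 (the demo additionally aligns the resized size for RGA, per allow_slight_change=1, hence the logged 465 instead of 467). A minimal sketch of the computation:

def letterbox_params(src_w, src_h, dst_w=640, dst_h=640):
    # Scale so the whole source image fits inside the destination.
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    # Center the resized image; the remainder is padding.
    left, top = (dst_w - new_w) // 2, (dst_h - new_h) // 2
    return scale, (left, top, left + new_w - 1, top + new_h - 1)

print(letterbox_params(3984, 1840))  # -> (0.1606..., (0, 172, 639, 467))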

References:
https://doc.embedfire.com/linux/rk356x/Ai/zh/latest/README.html
Reference YOLOv8 code: https://gitee.com/ysxgitee/rk3588_npu_yolov8_2.git
