Deploying on Jetson-series boards: generating a .engine file

Generating the engine on a Jetson-series board

1. Download tensorrtx onto the NX

The tensorrtx branch must match the YOLOv5 version that was used for training (here, yolov5-v5.0).

git clone -b yolov5-v5.0 https://github.com/wang-xinyu/tensorrtx.git
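
For reference, the matching YOLOv5 training code can be obtained the same way (assuming the model was trained with the ultralytics v5.0 release; use whatever tag your model actually came from):

```
# YOLOv5 v5.0 training code, the release matching the yolov5-v5.0 branch of tensorrtx
git clone -b v5.0 https://github.com/ultralytics/yolov5.git
```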

2. Modify the parameters in yololayer.h


Set CLASS_NUM to the number of classes your model was trained on.

static constexpr int CLASS_NUM = 80; 
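
For example, for a 2-class model the relevant constants in yololayer.h would look roughly as follows (a sketch; the class count of 2 is only illustrative, and the 640x640 input size is the default of the yolov5-v5.0 branch):

```
// tensorrtx/yolov5/yololayer.h (excerpt)
static constexpr int CLASS_NUM = 2;    // set to the number of classes your model was trained on
static constexpr int INPUT_H = 640;    // network input height (default in yolov5-v5.0)
static constexpr int INPUT_W = 640;    // network input width
```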

3. Build tensorrtx/yolov5

cd tensorrtx/yolov5
mkdir build
cd build
cmake ..
make
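
If the build succeeds, the build directory should now contain the yolov5 executable and the compiled TensorRT plugin library:

```
# still inside tensorrtx/yolov5/build
ls
# expect to see, among other files, the yolov5 executable and the plugin shared library
```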

4. Generate the .wts file on the machine where YOLOv5 was trained

Save the following script as gen_wts.py in the root of the YOLOv5 training repository (it imports utils.torch_utils from that repo):

import sys
import argparse
import os
import struct
import torch
from utils.torch_utils import select_device


def parse_args():
    parser = argparse.ArgumentParser(description='Convert .pt file to .wts')
    parser.add_argument('-w', '--weights', required=True, help='Input weights (.pt) file path (required)')
    parser.add_argument('-o', '--output', help='Output (.wts) file path (optional)')
    args = parser.parse_args()
    if not os.path.isfile(args.weights):
        raise SystemExit('Invalid input file')
    if not args.output:
        args.output = os.path.splitext(args.weights)[0] + '.wts'
    elif os.path.isdir(args.output):
        args.output = os.path.join(
            args.output,
            os.path.splitext(os.path.basename(args.weights))[0] + '.wts')
    return args.weights, args.output


pt_file, wts_file = parse_args()

# Initialize
device = select_device('cpu')
# Load model
model = torch.load(pt_file, map_location=device)  # load to FP32
model = model['ema' if model.get('ema') else 'model'].float()

# update anchor_grid info
anchor_grid = model.model[-1].anchors * model.model[-1].stride[..., None, None]
delattr(model.model[-1], 'anchor_grid')  # model.model[-1] is the Detect layer
# register_buffer() stores anchor_grid in the state_dict so it is written to the weight file
model.model[-1].register_buffer("anchor_grid", anchor_grid)

model.to(device).eval()

with open(wts_file, 'w') as f:
    f.write('{}\n'.format(len(model.state_dict().keys())))
    for k, v in model.state_dict().items():
        vr = v.reshape(-1).cpu().numpy()
        f.write('{} {} '.format(k, len(vr)))
        for vv in vr:
            f.write(' ')
            f.write(struct.pack('>f', float(vv)).hex())
        f.write('\n')

Run from the command line:

python gen_wts.py -w xxxx.pt -o xxxx.wts
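
As written by the script above, the first line of the .wts file holds the number of tensors, followed by one line per tensor: the tensor name, its element count, and the values as big-endian float32 hex strings. A quick sanity check (the file name is illustrative):

```
# sanity-check a generated .wts file (file name is illustrative)
with open('xxxx.wts') as f:
    num_tensors = int(f.readline())
    name, count = f.readline().split()[:2]
print(num_tensors, 'tensors; first tensor:', name, 'with', count, 'values')
```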

5. Copy the .wts file into the tensorrtx/yolov5/build directory

6. Run the following in the build directory to generate the engine

sudo ./yolov5 -s hat.wts hat.engine s

Note the last argument s: it indicates which YOLOv5 model variant the weights were trained with (s, m, l, ...).
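
With the engine serialized, the same executable can run detection on a folder of images as a quick test (in the yolov5-v5.0 branch the detection mode is selected with -d, as far as I recall; the samples directory is just an example):

```
# deserialize the engine and run detection on the images in ../samples
sudo ./yolov5 -d hat.engine ../samples
```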
7. (Optional) Copy the engine and plugin library into DeepStream

The steps above (cloning tensorrtx, setting CLASS_NUM in yololayer.h, building, and running ./yolov5 -s) leave the engine file in the build directory. [1] To use it with DeepStream, clone the Yolov5-in-Deepstream-5.0 project under the DeepStream sources tree and copy the engine together with the plugin library libmyplugin.so into its Deepstream_5.0 directory:

```
cd /opt/nvidia/deepstream/deepstream-5.1/sources/
sudo chmod -R 777 /opt/nvidia/deepstream/deepstream-5.1/sources/
git clone https://github.com/Glory-Peng/Yolov5-in-Deepstream-5.0.git
cp /home/nano/tensorrtx/yolov5/build/yolov5s.engine /opt/nvidia/deepstream/deepstream-5.1/sources/Yolov5-in-Deepstream-5.0/Deepstream_5.0/
cp /home/nano/tensorrtx/yolov5/build/libmyplugin.so /opt/nvidia/deepstream/deepstream-5.1/sources/Yolov5-in-Deepstream-5.0/Deepstream_5.0/
```

After these steps, yolov5s.engine and libmyplugin.so are available in Yolov5-in-Deepstream-5.0/Deepstream_5.0/, where the DeepStream application expects them. [2][3]

References
[1] Jetson Nano部署YOLOv5与Tensorrtx加速——(自己走一遍全过程记录), https://blog.csdn.net/Mr_LanGX/article/details/128094428
[2][3] yolov5在jetson nano上的部署 deepstream, https://blog.csdn.net/Pcl2001/article/details/125957727