The torch-onnx-ncnn pipeline (YOLOv8)

Preface

This approach is fairly simple: the param file does not need to be edited, the post-processing is straightforward, and the whole pipeline has been verified to run end to end.

1. Modify the Detect module for the pt-to-onnx export

Two parts need to be modified.

part1

nn.modules.Detect or nn.modules.head.Detect (the exact location depends on the version of the ultralytics code you downloaded)

    def forward(self, x):
        """Concatenates and returns predicted bounding boxes and class probabilities."""
        shape = x[0].shape  # BCHW
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        
        
        if self.training:
            return x
        elif self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape

        x_cat = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2)
        if self.export and self.format in ('saved_model', 'pb', 'tflite', 'edgetpu', 'tfjs'):  # avoid TF FlexSplitV ops
            box = x_cat[:, :self.reg_max * 4]
            cls = x_cat[:, self.reg_max * 4:]
        else:
            box, cls = x_cat.split((self.reg_max * 4, self.nc), 1)
        
        dfl_output = self.dfl(box)
        # the stride multiplication is moved out of the model into the post-processing step
        dbox = dist2bbox(dfl_output, self.anchors.unsqueeze(0), xywh=True, dim=1)  # * self.strides
        y = torch.cat((dbox, cls.sigmoid()), 1)
        return y if self.export else (y, x, x_cat)
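
With this change the exported graph no longer multiplies dbox by self.strides, so the boxes coming out of the onnx/ncnn model are expressed in feature-grid units and have to be scaled back by 8/16/32 in the post-processing (see section 4).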

part2

def dist2bbox(distance, anchor_points, xywh=True, dim=-1):
    """Transform distance(ltrb) to box(xywh or xyxy)."""
    # changed chunk to split
    # lt, rb = distance.chunk(2, dim)
    lt, rb = distance.split((2, 2), dim)
    x1y1 = anchor_points - lt
    x2y2 = anchor_points + rb
    if xywh:
        c_xy = (x1y1 + x2y2) / 2
        wh = x2y2 - x1y1
        return torch.cat((c_xy, wh), dim)  # xywh bbox
    return torch.cat((x1y1, x2y2), dim)  # xyxy bbox
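
The split call produces exactly the same two halves as the chunk it replaces, so the change does not affect the model's outputs. A quick standalone sanity check in plain PyTorch:

import torch

d = torch.randn(1, 4, 8400)        # ltrb distances along dim 1
a1, a2 = d.chunk(2, 1)
b1, b2 = d.split((2, 2), 1)
assert torch.equal(a1, b1) and torch.equal(a2, b2)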

2. Convert pt to onnx

The export step itself is not elaborated here, but note that the ultralytics source has now been modified, so you must generate the model with the export.py under that modified checkout; otherwise the exported onnx will not contain the changes!
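
For reference, a minimal export sketch using the Python API is shown below. It is only an illustration: yolov8n.pt is a placeholder for your own weights, and it assumes the script is run from inside the modified ultralytics checkout so that the patched Detect and dist2bbox are the ones actually imported (the same caveat as running that checkout's export.py).

from ultralytics import YOLO   # must resolve to the modified source tree, not a pip-installed copy

model = YOLO("yolov8n.pt")     # placeholder weights
model.export(format="onnx", imgsz=640, opset=12)

With the modified head, the exported model has a single output of shape (1, 4 + nc, 8400) for a 640x640 input: four rows of xywh box coordinates in feature-grid units, followed by the per-class sigmoid scores.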

3. Generate the ncnn model with the onnx2ncnn command
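
With the onnx file in hand, use the converter that ships with ncnn's tools; a typical invocation looks like onnx2ncnn yolov8n.onnx yolov8n.param yolov8n.bin (the file names here are placeholders). As noted in the preface, the generated param file can be used as-is, without hand edits.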

4. Post-processing C++ code

#include <iostream>
#include <vector>

#include "mat.h"   // ncnn::Mat (adjust the include path to your ncnn install)

// max_cls was not included in the original snippet; assumed behaviour: return the
// highest class score at column `index` and write the winning class channel to max_index.
static float max_cls(const std::vector<float*>& cls_val, int index, int& max_index)
{
    float max_val = -1.f;
    max_index = 0;
    for (int c = 0; c < (int)cls_val.size(); c++)
    {
        if (cls_val[c][index] > max_val)
        {
            max_val = cls_val[c][index];
            max_index = c;
        }
    }
    return max_val;
}

void postfunc(ncnn::Mat output)
{
    int oh = output.h;                 // 4 + class_num rows
    int ow = output.w;                 // 8400 candidate columns
    int model_w = 640, model_h = 640;
    int class_num = 80;
    // output layout: (4 + class_num) x 8400, i.e. 84 x 8400 for an 80-class model
    float* x = (float*)(output.data) + ow * 0;
    float* y = (float*)(output.data) + ow * 1;
    float* w = (float*)(output.data) + ow * 2;
    float* h = (float*)(output.data) + ow * 3;

    std::vector<float*> cls_val(class_num);
    for (int cls_index = 0; cls_index < (int)cls_val.size(); cls_index++)
    {
        cls_val[cls_index] = (float*)(output.data) + ow * (4 + cls_index);
    }

    // strides of the original model; the multiplication was removed from Detect.forward,
    // so it has to be applied here
    const int stride[3] = {8, 16, 32};
    const int num_s8  = model_w / 8 * model_h / 8;    // 6400 columns from the stride-8 level
    const int num_s16 = model_w / 16 * model_h / 16;  // 1600 columns from the stride-16 level

    for (int index = 0; index < ow; index++)
    {
        int max_index = 0;
        float max_val = max_cls(cls_val, index, max_index);
        if (max_val > 0.7f)
        {
            std::cout << "Index:" << index << ",cls:" << max_val << ",max cls:" << max_index << std::endl;

            // columns are ordered stride-8 (6400), then stride-16 (1600), then stride-32 (400)
            int s;
            if (index < num_s8)
                s = stride[0];
            else if (index < num_s8 + num_s16)
                s = stride[1];
            else
                s = stride[2];

            float target_x = x[index] * s;
            float target_y = y[index] * s;
            float target_w = w[index] * s;
            float target_h = h[index] * s;

            std::cout << "x:" << target_x
                      << " y:" << target_y
                      << " w:" << target_w
                      << " h:" << target_h
                      << std::endl;
        }
    }
    // NMS still needs to be applied to the surviving boxes
    return;
}
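
Before wiring this into ncnn, it can help to cross-check the decoding logic against the raw ONNX output in Python. The sketch below is only a rough reference: the file name and the 0.7 threshold mirror the values used above, the dummy input stands in for a real preprocessed image, and the stride layout assumes the usual 640x640 input (6400 + 1600 + 400 columns).

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolov8n.onnx", providers=["CPUExecutionProvider"])
img = np.zeros((1, 3, 640, 640), dtype=np.float32)           # placeholder for a real preprocessed image
out = sess.run(None, {sess.get_inputs()[0].name: img})[0]    # shape (1, 4 + nc, 8400)

# per-column stride: 6400 columns at stride 8, 1600 at stride 16, 400 at stride 32
strides = np.concatenate([np.full(s * s, 640 // s, dtype=np.float32) for s in (80, 40, 20)])

boxes = out[0, :4] * strides       # cx, cy, w, h in input-image pixels
scores = out[0, 4:]                # per-class sigmoid scores
best = scores.max(axis=0)
keep = best > 0.7
print(boxes[:, keep].T)            # candidate boxes before NMS
print(best[keep])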