Target Locking for Visible-Light Inspection Based on AidLux

1. The SORT algorithm

SORT is one of the most widely used non-deep-learning object-tracking algorithms. Because the drone footage in this project suffers from missed and duplicated shots, SORT is used to lock onto targets: each target is assigned a persistent, unique ID across frames, which is what "target locking" means here.

import numpy as np

# Requires the KalmanBoxTracker class and the associate_detections_to_trackers
# helper from the SORT project (both are sketched further below).

class Sort(object):
    def __init__(self, max_age=1, min_hits=3, iou_threshold=0.3):
        """
        Sets key parameters for SORT
        """
        self.max_age = max_age  # a track is removed once time_since_update > max_age
        self.min_hits = min_hits  # a track is reported once hit_streak >= min_hits (or during the first few frames)
        self.iou_threshold = iou_threshold
        self.trackers = []
        self.frame_count = 0

    def update(self, dets=np.empty((0, 5))):
        """
        Params:
        dets - a numpy array of detections in the format [[x1,y1,x2,y2,score],[x1,y1,x2,y2,score],...]
        Requires: this method must be called once for each frame even with empty detections (use np.empty((0, 5)) for frames without detections).
        Returns a similar array, where the last column is the object ID.

        NOTE: The number of objects returned may differ from the number of detections provided.
        """
        self.frame_count += 1
        # get predicted locations from existing trackers.
        trks = np.zeros((len(self.trackers), 5))
        to_del = []
        ret = []
        for t, trk in enumerate(trks):
            pos = self.trackers[t].predict()[0]
            trk[:] = [pos[0], pos[1], pos[2], pos[3], 0]
            if np.any(np.isnan(pos)):
                to_del.append(t)
        trks = np.ma.compress_rows(np.ma.masked_invalid(trks))
        for t in reversed(to_del):
            self.trackers.pop(t)
        matched, unmatched_dets, unmatched_trks = associate_detections_to_trackers(dets, trks, self.iou_threshold)

        # update matched trackers with assigned detections
        for m in matched:
            self.trackers[m[1]].update(dets[m[0], :])

        # create and initialize new trackers for unmatched detections
        for i in unmatched_dets:
            trk = KalmanBoxTracker(dets[i,:])
            self.trackers.append(trk)
        i = len(self.trackers)
        for trk in reversed(self.trackers):
            d = trk.get_state()[0]
            if (trk.time_since_update < 1) and (trk.hit_streak >= self.min_hits or self.frame_count <= self.min_hits):
                ret.append(np.concatenate((d, [trk.id+1])).reshape(1,-1)) # +1 as MOT benchmark requires positive
            i -= 1
            # remove dead tracklet
            if(trk.time_since_update > self.max_age):
                self.trackers.pop(i)
        if(len(ret)>0):
            return np.concatenate(ret)
        return np.empty((0,5))
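
The Sort class above relies on a KalmanBoxTracker that is not shown in this post. Below is a minimal sketch of it, following the reference SORT implementation (abewley/sort) and assuming the filterpy package is available; it keeps only what Sort.update actually uses (predict, update, get_state, time_since_update, hit_streak, id).

import numpy as np
from filterpy.kalman import KalmanFilter


def convert_bbox_to_z(bbox):
    """[x1, y1, x2, y2] -> [center_x, center_y, area, aspect_ratio] column vector."""
    w = bbox[2] - bbox[0]
    h = bbox[3] - bbox[1]
    x = bbox[0] + w / 2.
    y = bbox[1] + h / 2.
    return np.array([x, y, w * h, w / float(h)]).reshape((4, 1))


def convert_x_to_bbox(x):
    """Kalman state -> [x1, y1, x2, y2] row vector."""
    w = np.sqrt(x[2] * x[3])
    h = x[2] / w
    return np.array([x[0] - w / 2., x[1] - h / 2., x[0] + w / 2., x[1] + h / 2.]).reshape((1, 4))


class KalmanBoxTracker(object):
    """One tracked box, modeled with a constant-velocity Kalman filter (state: x, y, s, r, dx, dy, ds)."""
    count = 0

    def __init__(self, bbox):
        self.kf = KalmanFilter(dim_x=7, dim_z=4)
        self.kf.F = np.array([[1, 0, 0, 0, 1, 0, 0], [0, 1, 0, 0, 0, 1, 0], [0, 0, 1, 0, 0, 0, 1],
                              [0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 0, 1, 0],
                              [0, 0, 0, 0, 0, 0, 1]])
        self.kf.H = np.array([[1, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0],
                              [0, 0, 0, 1, 0, 0, 0]])
        self.kf.R[2:, 2:] *= 10.    # trust area/aspect-ratio measurements less
        self.kf.P[4:, 4:] *= 1000.  # high uncertainty for the unobserved velocities
        self.kf.P *= 10.
        self.kf.Q[-1, -1] *= 0.01
        self.kf.Q[4:, 4:] *= 0.01
        self.kf.x[:4] = convert_bbox_to_z(bbox)
        self.time_since_update = 0
        self.hit_streak = 0
        self.id = KalmanBoxTracker.count
        KalmanBoxTracker.count += 1

    def update(self, bbox):
        """Correct the filter with a matched detection."""
        self.time_since_update = 0
        self.hit_streak += 1
        self.kf.update(convert_bbox_to_z(bbox))

    def predict(self):
        """Advance the state one frame and return the predicted box."""
        if (self.kf.x[6] + self.kf.x[2]) <= 0:  # keep the predicted area non-negative
            self.kf.x[6] *= 0.0
        self.kf.predict()
        if self.time_since_update > 0:
            self.hit_streak = 0
        self.time_since_update += 1
        return convert_x_to_bbox(self.kf.x)

    def get_state(self):
        """Return the current box estimate as [x1, y1, x2, y2]."""
        return convert_x_to_bbox(self.kf.x)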

SORT consists of three main components: object detection, Kalman filtering for motion prediction, and the Hungarian algorithm for data association; the association step is sketched below.
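
The association step pairs detections with predicted tracker boxes by IoU. Here is a minimal sketch of associate_detections_to_trackers that uses scipy's linear_sum_assignment as the Hungarian solver; the original project may use a different solver (e.g. the lap package), so treat this as an illustration of the idea rather than the exact code.

import numpy as np
from scipy.optimize import linear_sum_assignment


def iou_batch(bb_test, bb_gt):
    """Pairwise IoU between two sets of boxes in [x1, y1, x2, y2, ...] format."""
    bb_gt = np.expand_dims(bb_gt, 0)
    bb_test = np.expand_dims(bb_test, 1)
    xx1 = np.maximum(bb_test[..., 0], bb_gt[..., 0])
    yy1 = np.maximum(bb_test[..., 1], bb_gt[..., 1])
    xx2 = np.minimum(bb_test[..., 2], bb_gt[..., 2])
    yy2 = np.minimum(bb_test[..., 3], bb_gt[..., 3])
    inter = np.maximum(0., xx2 - xx1) * np.maximum(0., yy2 - yy1)
    area_test = (bb_test[..., 2] - bb_test[..., 0]) * (bb_test[..., 3] - bb_test[..., 1])
    area_gt = (bb_gt[..., 2] - bb_gt[..., 0]) * (bb_gt[..., 3] - bb_gt[..., 1])
    return inter / (area_test + area_gt - inter)


def associate_detections_to_trackers(detections, trackers, iou_threshold=0.3):
    """Hungarian matching of detections to predicted tracker boxes on an IoU cost."""
    if len(trackers) == 0:
        return np.empty((0, 2), dtype=int), np.arange(len(detections)), np.empty((0,), dtype=int)
    iou_matrix = iou_batch(detections, trackers)
    # maximizing total IoU is the same as minimizing negative IoU
    row_ind, col_ind = linear_sum_assignment(-iou_matrix)
    matches, unmatched_dets, unmatched_trks = [], [], []
    for d in range(len(detections)):
        if d not in row_ind:
            unmatched_dets.append(d)
    for t in range(len(trackers)):
        if t not in col_ind:
            unmatched_trks.append(t)
    for r, c in zip(row_ind, col_ind):
        if iou_matrix[r, c] < iou_threshold:  # reject low-overlap assignments
            unmatched_dets.append(r)
            unmatched_trks.append(c)
        else:
            matches.append([r, c])  # [detection index, tracker index]
    matches = np.array(matches, dtype=int) if matches else np.empty((0, 2), dtype=int)
    return matches, np.array(unmatched_dets, dtype=int), np.array(unmatched_trks, dtype=int)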

2. Target locking with YOLOv8 and SORT

For developers, preparing datasets, training models, and deploying them in an AI project is still far from trivial. AidLux turns an Android device into an edge-computing device that hosts both Android and a Linux environment natively (without virtualization), supports mainstream AI frameworks, is easy to deploy to, and exposes dedicated interfaces for scheduling compute resources, which greatly lowers the barrier to putting AI applications into production.
I therefore implemented YOLOv8 + SORT target locking on this platform; the core code was provided by my instructor.

import time
import cv2
import numpy as np
import aidlite_gpu
from cvs import *  # AidLux camera/display wrapper (provides cvs)

if __name__ == '__main__':
    mot_tracker = Sort(max_age = 1,  # tracks with time_since_update > max_age are removed
                       min_hits = 3,  # tracks with hit_streak >= min_hits become confirmed
                       iou_threshold = 0.3) # create instance of the SORT tracker
    # TFLite model
    model_path = '/home/YOLOv8_AidLux/models/8086_best_float32.tflite'
    # define input/output buffer sizes (in bytes, float32 = 4 bytes)
    in_shape = [1 * 640 * 640 * 3 * 4]  # HWC, float32
    out_shape = [1 * 8400 * 52 * 4]  # 8400 candidate cells, 52 = 48 (num_classes) + 4 (xywh), float32

    # initialize AidLite
    aidlite = aidlite_gpu.aidlite()
    # load the model
    res = aidlite.ANNModel(model_path, in_shape, out_shape, 4, 0)
    print(res)

    ''' read frames from the phone's rear camera '''
    cap = cvs.VideoCapture(0)
    frame_id = 0
    while True:
        frame = cap.read()
        if frame is None:
            continue
        frame_id += 1
        # only run detection on every third frame to keep the pipeline real-time
        if frame_id % 3 != 0:
            continue
        time0 = time.time()
        # preprocessing: resize to the 640x640 model input and normalize
        img = preprocess_img(frame, target_shape=(640, 640), div_num=255, means=None, stds=None)

        aidlite.setInput_Float32(img, 640, 640)
        # inference
        aidlite.invoke()
        preds = aidlite.getOutput_Float32(0)
        preds = preds.reshape(1, 52, 8400)
        preds = detect_postprocess(preds, frame.shape, [640, 640, 3], conf_thres=0.25, iou_thres=0.45)
        print('1 batch takes {} s'.format(time.time() - time0))
        if len(preds) != 0:
            preds[:, :4] = scale_boxes([640, 640], preds[:, :4], frame.shape)
            ''' SORT target locking '''
            preds_out = preds[:, :5]  # slice to an ndarray of [x1, y1, x2, y2, conf] rows
            trackers = mot_tracker.update(preds_out)  # predict -> associate -> update
            ''' draw results '''
            for d in trackers:  # each d is [x1, y1, x2, y2, track_id]
                cv2.putText(frame, str(int(d[4])), (int(d[0]), int(d[1])), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 1)
                cv2.rectangle(frame, (int(d[0]), int(d[1])), (int(d[2]), int(d[3])), (0, 0, 255), thickness=2)

        cvs.imshow(frame)
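
The helpers preprocess_img and scale_boxes called above are not shown in this post (nor is detect_postprocess, which handles confidence filtering and NMS). Purely as an illustration of what the first two are expected to do, here is a hypothetical sketch assuming a plain resize with no letterbox padding and boxes in [x1, y1, x2, y2] model-input coordinates; the actual helpers in the project may differ.

import cv2
import numpy as np


def preprocess_img(frame, target_shape=(640, 640), div_num=255, means=None, stds=None):
    """Hypothetical preprocessing: resize a BGR frame to the model input size and normalize to float32."""
    img = cv2.resize(frame, target_shape)          # plain resize, no letterbox padding assumed
    img = img.astype(np.float32) / div_num         # scale pixel values to [0, 1]
    if means is not None:
        img -= np.array(means, dtype=np.float32)   # optional per-channel mean subtraction
    if stds is not None:
        img /= np.array(stds, dtype=np.float32)    # optional per-channel std division
    return img


def scale_boxes(model_shape, boxes, frame_shape):
    """Hypothetical mapping of [x1, y1, x2, y2] boxes from model-input coordinates back to the frame."""
    boxes[:, [0, 2]] *= frame_shape[1] / model_shape[1]  # width ratio
    boxes[:, [1, 3]] *= frame_shape[0] / model_shape[0]  # height ratio
    boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, frame_shape[1])
    boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, frame_shape[0])
    return boxes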

3. Implementation

The results are shown in the accompanying video.

Judging from the video, the algorithm locks onto targets well.
