Python Implementation: Multi-Object Tracking + Trajectory Drawing - OpenCV Function Call Test

       In my earlier post, "C++ Implementation: Multi-Object Tracking + Trajectory Drawing - OpenCV Function Call Test", I implemented multi-object tracking in C++, drew each target's trajectory, and attached a label to every target. This post ports that program to Python; the complete code is at the end of the article.

 

Environment: PyCharm Professional 2019.1 + opencv-python 4.2.0.32 + opencv-contrib-python 4.2.0.32 + Python 3.7.0 + Anaconda

 

       I had only studied Python briefly before and had barely used it, so this was pretty much a fresh start, and I ran into a number of beginner-level issues, for example:

  • My PyCharm licence had expired... I found instructions for a permanent crack but couldn't be bothered, so I just used an activation code.
  • Calls such as tracker = cv2.TrackerBoosting_create() were flagged with "cannot find reference 'TrackerBoosting_create' in '__init__.py'". I first took this for an error, but after searching I learned it is only a PyCharm inspection warning: the function simply is not listed in the package's __init__.py stubs, apparently because of how opencv-contrib-python is installed. The code runs fine, so I moved on.
  • Some OpenCV calls differ between Python and C++; the most obvious one is video >> frame, which has no direct Python equivalent: you call video.read() instead (see the sketch after this list).
  • I will just keep getting more comfortable with Python as I go...
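
To make the second and third points concrete, here is a minimal sketch of my own (not taken from the C++ post) showing the Python-side equivalents. The file name 'test.mp4' is only a placeholder. It also shows one way to create a tracker that works both with opencv-contrib-python 4.2.x, where the factory functions sit at the top level of cv2, and with 4.5.1 and later, where the legacy trackers (and MultiTracker) moved to cv2.legacy:

import cv2

# C++ pulls frames with "video >> frame"; in Python, read() returns a success flag and the frame
cap = cv2.VideoCapture('test.mp4')  # placeholder path
while cap.isOpened():
    success, frame = cap.read()
    if not success:  # end of file or read error
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(30) == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()


def create_mosse_tracker():
    # opencv-contrib-python <= 4.4 exposes the factory at the top level of cv2;
    # from 4.5.1 onwards it lives in cv2.legacy instead
    if hasattr(cv2, 'TrackerMOSSE_create'):
        return cv2.TrackerMOSSE_create()
    return cv2.legacy.TrackerMOSSE_create()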

       There is not much else worth explaining here; for the detailed walkthrough, see my C++ implementation post.

Features:
       1. Step through the video frame by frame to pick the tracking start point
       2. Select targets by drawing boxes with the mouse (a compact alternative using cv2.selectROIs is sketched after this list)
       3. Supports both local video files and a live camera
       4. Attach a label to each target
       5. Draw each target's trajectory
       6. Save both the raw video and the video with trajectories to disk
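
Feature 2 is implemented below by calling cv2.selectROI once per target in a loop; OpenCV also provides cv2.selectROIs, which lets you drag several boxes in one session and returns them all at once. A minimal sketch of that alternative, under the assumption that frame is an image already read from the video or camera (the file name below is only a placeholder):

import cv2
from random import randint

frame = cv2.imread('first_frame.png')  # placeholder; any frame read from the video works

# drag one box per target, press SPACE or ENTER after each box, press ESC to finish
boxes = cv2.selectROIs('Select targets', frame, True, False)  # showCrosshair=True, fromCenter=False
colors = [(randint(64, 255), randint(64, 255), randint(64, 255)) for _ in boxes]

# draw each selection in its own random colour, the same scheme the tracking code uses
for (x, y, w, h), color in zip(boxes, colors):
    cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
cv2.imshow('Select targets', frame)
cv2.waitKey(0)
cv2.destroyAllWindows()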

Latest complete code:

from __future__ import print_function
import sys
import cv2
from random import randint
import time
import os

trackerTypes = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'MOSSE', 'CSRT']


def createTrackerByName(trackerType):
    # Create a tracker based on tracker name
    if trackerType == trackerTypes[0]:
        tracker = cv2.TrackerBoosting_create()
    elif trackerType == trackerTypes[1]:
        tracker = cv2.TrackerMIL_create()
    elif trackerType == trackerTypes[2]:
        tracker = cv2.TrackerKCF_create()
    elif trackerType == trackerTypes[3]:
        tracker = cv2.TrackerTLD_create()
    elif trackerType == trackerTypes[4]:
        tracker = cv2.TrackerMedianFlow_create()
    elif trackerType == trackerTypes[5]:
        tracker = cv2.TrackerMOSSE_create()
    elif trackerType == trackerTypes[6]:
        tracker = cv2.TrackerCSRT_create()
    else:
        tracker = None
        print('Incorrect tracker name')
        print('Available trackers are:')
        for t in trackerTypes:
            print(t)

    return tracker


No = 0
if __name__ == '__main__':

    vecPoints = []

    while True:

        vecPoints.clear()

        while True:
            print('\n------------------------------------------------------------------\n'
                  '\n>> Available algorithms:  BOOSTING  MIL  KCF  TLD  MEDIANFLOW  MOSSE  CSRT'
                  '\n>> Type the algorithm to test and press Enter, or type exit to quit.')

            tType = input('>> ')

            if tType == 'exit':
                sys.exit(0)
            if tType == 'BOOSTING':
                print('>> BOOSTING selected!')
                break
            elif tType == 'MIL':
                print('>> MIL selected!')
                break
            elif tType == 'KCF':
                print('>> KCF selected!')
                break
            elif tType == 'TLD':
                print('>> TLD selected!')
                break
            elif tType == 'MEDIANFLOW':
                print('>> MEDIANFLOW selected!')
                break
            elif tType == 'MOSSE':
                print('>> MOSSE selected!')
                break
            elif tType == 'CSRT':
                print('>> CSRT selected!')
                break
            else:
                print('>> Invalid input, please try again!')
                continue

        print('>> Enter 1 to play a local video file'
              '\n>> Enter 2 to use the live camera')

        judgement = input('>> ')

        if judgement == '1':
            while True:
                print('\n+--------------------+'
                      '\n|  1. Pedestrian_1   |'
                      '\n|  2. Pedestrian_2   |'
                      '\n|  3. Pedestrian_3   |'
                      '\n|  4. Car            |'
                      '\n|  5. Car chase      |'
                      '\n|  6. David          |'
                      '\n|  7. Jumping        |'
                      '\n|  8. Motocross      |'
                      '\n|  9. Panda          |'
                      '\n|  10. Volkswagen    |'
                      '\n+--------------------+'
                      '\n\n>> Enter the number of the video to play (e.g. 4)')

                videoNo = input('>> ')

                if videoNo == '1':
                    videoName = 'pedestrian1.mpg'
                    print('>> "Pedestrian_1" selected!')
                    break
                elif videoNo == '2':
                    videoName = 'pedestrian2.mpg'
                    print('>> "Pedestrian_2" selected!')
                    break
                elif videoNo == '3':
                    videoName = 'pedestrian3.mpg'
                    print('>> "Pedestrian_3" selected!')
                    break
                elif videoNo == '4':
                    videoName = 'car.mpg'
                    print('>> "Car" selected!')
                    break
                elif videoNo == '5':
                    videoName = 'carchase.mpg'
                    print('>> "Car chase" selected!')
                    break
                elif videoNo == '6':
                    videoName = 'david.mpg'
                    print('>> "David" selected!')
                    break
                elif videoNo == '7':
                    videoName = 'jumping.mpg'
                    print('>> "Jumping" selected!')
                    break
                elif videoNo == '8':
                    videoName = 'motocross.mpg'
                    print('>> "Motocross" selected!')
                    break
                elif videoNo == '9':
                    videoName = 'panda.mpg'
                    print('>> "Panda" selected!')
                    break
                elif videoNo == '10':
                    videoName = 'volkswagen.mpg'
                    print('>> "Volkswagen" selected!')
                    break
                else:
                    print('>> Invalid number, please try again!')
                    continue

            video = cv2.VideoCapture('.\\datasets\\' + videoName)

            if not video.isOpened():
                print('>> Failed to open the video')
                continue

            print('\n+------------------------------------------+'
                  '\n|  Press c to advance one frame            |'
                  '\n|  Press q to start selecting targets      |'
                  '\n|  Press space to play and track           |'
                  '\n|  Press q during playback to quit         |'
                  '\n+------------------------------------------+\n')

            time_t = time.strftime('%Y.%m.%d %H-%M-%S', time.localtime(time.time()))
            # NOTE: adjust this output directory for your own machine
            outDir = 'E:\\targetTracking\\pyMultiTracker\\saveVideo\\Video-' + time_t
            os.makedirs(outDir)
            outFile_1 = outDir + '\\videoNoTrack.avi'
            outFile_2 = outDir + '\\videoWithTrack.avi'

            s = (int(video.get(cv2.CAP_PROP_FRAME_WIDTH)), int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)))
            r = video.get(cv2.CAP_PROP_FPS)

            # fourcc is given as 0 here; cv2.VideoWriter_fourcc(*'XVID') is a common explicit choice for .avi output
            write_1 = cv2.VideoWriter(outFile_1, 0, r, s, True)
            write_2 = cv2.VideoWriter(outFile_2, 0, r, s, True)

            success, frame = video.read()

            if not success:
                print('>> Failed to read the video')
                continue

            cv2.imshow('Tracker', frame)
            while True:
                key = cv2.waitKey(1)
                if key == ord('c') or key == ord('C'):
                    success, nextFrame = video.read()
                    if not success:
                        print('>> Reached the end of the video')
                        break
                    frame = nextFrame
                    cv2.imshow('Tracker', frame)
                    write_1.write(frame)
                    write_2.write(frame)
                if key == ord('q') or key == ord('Q'):
                    break

            cv2.destroyWindow('Tracker')

            bboxes = []
            colors = []

            while True:
                # draw bounding boxes over objects
                # selectROI draws the box from the top-left corner by default;
                # pass fromCenter=True if you prefer to drag from the centre
                bbox = cv2.selectROI('Tracker', frame)
                bboxes.append(bbox)
                colors.append((randint(64, 255), randint(64, 255), randint(64, 255)))
                print("Press q to quit selecting boxes and start tracking")
                print("Press any other key to select next object")
                k = cv2.waitKey(0) & 0xFF
                if (k == 113):  # q is pressed
                    break

            # print('Selected bounding boxes {}'.format(bboxes))

            # Create MultiTracker object
            multiTracker = cv2.MultiTracker_create()

            # Initialize MultiTracker
            for bbox in bboxes:
                multiTracker.add(createTrackerByName(tType), frame, bbox)
                temp = []
                vecPoints.append(temp)

            print('>> Starting playback')

            # Process video and track objects
            while video.isOpened():
                success, frame = video.read()
                if not success:
                    break

                write_1.write(frame)
                # write_2.write(frame)

                # get updated location of objects in subsequent frames
                success, boxes = multiTracker.update(frame)

                # draw tracked objects
                for i, newbox in enumerate(boxes):
                    p1 = (int(newbox[0]), int(newbox[1]))
                    p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
                    cv2.rectangle(frame, p1, p2, colors[i], 2, 1)

                # record each box centre; the coordinates are doubled because cv2.line
                # below is called with shift=1 (fixed-point with one fractional bit),
                # which scales them back down and draws with half-pixel precision
                for i, newbox in enumerate(boxes):
                    vecPoints[i].append((int(newbox[0] + (newbox[2] * 0.5)) * 2, int(newbox[1] + (newbox[3] * 0.5)) * 2))

                # draw the accumulated trajectory of every target
                if len(vecPoints) > 0:
                    for i in range(len(vecPoints)):
                        for j in range(len(vecPoints[i]) - 1):
                            cv2.line(frame, vecPoints[i][j], vecPoints[i][j + 1], colors[i], 1, 8, 1)

                for i, newbox in enumerate(boxes):
                    cv2.putText(frame, 'id_'+str(i+1), (int(newbox[0]), int(newbox[1])-3), cv2.FONT_HERSHEY_PLAIN, 1, colors[i], 1)

                # show frame
                cv2.imshow('Tracker', frame)

                # write_1.write(frame)
                write_2.write(frame)

                # quit on q (call waitKey once per frame so key presses are not missed)
                key = cv2.waitKey(30)
                if key == ord('q') or key == ord('Q'):
                    break

            write_1.release()
            write_2.release()
            print('\n>> Videos saved.')
            cv2.destroyWindow('Tracker')
            print('>> Playback finished')

        elif judgement == '2':

            video = cv2.VideoCapture(0)

            if not video.isOpened():
                print('>> An error occurred; please check whether the camera is disconnected!')
                continue

            time_t = time.strftime('%Y.%m.%d %H-%M-%S', time.localtime(time.time()))
            # NOTE: adjust this output directory for your own machine
            outDir = 'E:\\targetTracking\\pyMultiTracker\\saveVideo\\Camera-' + time_t
            os.makedirs(outDir)
            outFile_1 = outDir + '\\videoNoTrack.avi'
            outFile_2 = outDir + '\\videoWithTrack.avi'

            s = (int(video.get(cv2.CAP_PROP_FRAME_WIDTH)), int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)))
            r = video.get(cv2.CAP_PROP_FPS)
            if r <= 0:
                r = 30.0  # some webcams report 0 FPS, which would break the VideoWriter

            write_1 = cv2.VideoWriter(outFile_1, 0, r, s, True)
            write_2 = cv2.VideoWriter(outFile_2, 0, r, s, True)
            # print(outDir)

            print('>> Press space to capture the frame for target selection')

            while True:
                success, frame = video.read()
                if not success:
                    print('>> An error occurred; please check whether the camera is disconnected!')
                    break

                cv2.imshow('Tracker', frame)

                write_1.write(frame)
                write_2.write(frame)

                # cv2.imwrite('E:\\targetTracking\\pyMultiTracker\\saveVideo\\pic\\' + str(No) + '.bmp', frame)
                # No = No + 1

                if cv2.waitKey(1) == ord(' '):
                    break

            bboxes = []
            colors = []

            while True:
                # draw bounding boxes over objects
                # selectROI draws the box from the top-left corner by default;
                # pass fromCenter=True if you prefer to drag from the centre
                bbox = cv2.selectROI('Tracker', frame)
                bboxes.append(bbox)
                colors.append((randint(64, 255), randint(64, 255), randint(64, 255)))
                print("Press q to quit selecting boxes and start tracking")
                print("Press any other key to select next object")
                k = cv2.waitKey(0) & 0xFF
                if (k == 113):  # q is pressed
                    break

            # print('Selected bounding boxes {}'.format(bboxes))

            # Create MultiTracker object
            multiTracker = cv2.MultiTracker_create()

            # Initialize MultiTracker
            for bbox in bboxes:
                multiTracker.add(createTrackerByName(tType), frame, bbox)
                temp = []
                vecPoints.append(temp)

            print('>> Starting playback')

            # Process video and track objects
            while video.isOpened():
                success, frame = video.read()
                if not success:
                    break

                write_1.write(frame)
                # print('before-' + str(id(frame)))  # leftover debug output

                # get updated location of objects in subsequent frames
                success, boxes = multiTracker.update(frame)

                # draw tracked objects
                for i, newbox in enumerate(boxes):
                    p1 = (int(newbox[0]), int(newbox[1]))
                    p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
                    cv2.rectangle(frame, p1, p2, colors[i], 2, 1)

                # record each box centre; the coordinates are doubled because cv2.line
                # below is called with shift=1 (fixed-point with one fractional bit),
                # which scales them back down and draws with half-pixel precision
                for i, newbox in enumerate(boxes):
                    vecPoints[i].append(
                        (int(newbox[0] + (newbox[2] * 0.5)) * 2, int(newbox[1] + (newbox[3] * 0.5)) * 2))

                # draw the accumulated trajectory of every target
                if len(vecPoints) > 0:
                    for i in range(len(vecPoints)):
                        for j in range(len(vecPoints[i]) - 1):
                            cv2.line(frame, vecPoints[i][j], vecPoints[i][j + 1], colors[i], 1, 8, 1)

                for i, newbox in enumerate(boxes):
                    cv2.putText(frame, 'id_' + str(i + 1), (int(newbox[0]), int(newbox[1]) - 3), cv2.FONT_HERSHEY_PLAIN,
                                1, colors[i], 1)

                # show frame
                cv2.imshow('Tracker', frame)

                write_2.write(frame)
                # print('after-' + str(id(frame)))  # leftover debug output

                # cv2.imwrite('E:\\targetTracking\\pyMultiTracker\\saveVideo\\pic\\' + str(No) + '.bmp', frame)
                # No = No + 1

                # quit on q (call waitKey once per frame so key presses are not missed)
                key = cv2.waitKey(30)
                if key == ord('q') or key == ord('Q'):
                    break

            write_1.release()
            write_2.release()
            print('\n>> Videos saved.')
            cv2.destroyWindow('Tracker')
            print('\n>> Playback finished\n')

        else:
            print('>> Invalid input')

    sys.exit(1)

 
