Notes on stereo obstacle avoidance with OpenCV

  • For cost reasons, the company asked me to build a traditional stereo-vision obstacle-avoidance system. No way around it, so let's dig in.

  • Stereo obstacle-avoidance pipeline

    • The first step after getting the cameras is calibration. The simplest route is the MATLAB 2021b calibration toolbox; I also ticked the Skew option (the one just above Tangential Distortion). (An OpenCV-only alternative is sketched just before the main script below.)
    • Because it is a stereo rig, the images then need to be undistorted and rectified (row-aligned). This is all standard API usage; the code makes it clear.
    • After reading the video stream, compute the disparity map with the SGBM API, turn it into depth (Z = f·B/d, with focal length f, baseline B, disparity d), and reproject it into the 3D camera frame with the API to get 3D coordinates, from which the Z coordinate (distance) follows.
    • For obstacle avoidance, distance alone is not enough: the depth map also needs a series of processing steps to recover the shape of each obstacle.
    • I won't walk through every API call I used; see the code. At the end it produces the obstacle information.
  • **Problem:** it is very slow. I found that the compute step (disparity computation from the left/right images) is the bottleneck, and I'm still looking for ways to speed it up (see the sketch after the code listing).

  • The code is below (after a short aside on calibration).
    Clicking on the depth-map window prints the coordinates and distance of that pixel (copied off the internet (doge)).
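
Aside: if you would rather not depend on MATLAB, the same calibration can be done with OpenCV itself. The snippet below is only a minimal sketch: the 9x6 chessboard, the 25 mm square size, and the left_images/right_images folders are placeholder assumptions, not my actual setup, and note that OpenCV's pinhole model fixes skew at zero (unlike the Skew option I ticked in MATLAB), so its intrinsics will not contain the small skew term hard-coded further down.

import cv2
import numpy as np
import glob

pattern = (9, 6)   # inner chessboard corners per row/column (assumption)
square = 25.0      # chessboard square size in mm (assumption)

# Object points of one board view, in board coordinates (Z = 0 plane).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, left_points, right_points = [], [], []
for lf, rf in zip(sorted(glob.glob("left_images/*.png")), sorted(glob.glob("right_images/*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:
        obj_points.append(objp)
        left_points.append(cl)
        right_points.append(cr)

size = gl.shape[::-1]  # (width, height)

# Calibrate each camera on its own, then solve the stereo extrinsics (R, T)
# with the intrinsics held fixed.
_, K1, D1, _, _ = cv2.calibrateCamera(obj_points, left_points, size, None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_points, right_points, size, None, None)
_, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
    obj_points, left_points, right_points, K1, D1, K2, D2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print(K1, D1, K2, D2, R, T)

And now the main script: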

import cv2
import numpy as np
import time
import random
import math


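# Calibration results exported from MATLAB: intrinsics [[fx, skew, cx], [0, fy, cy], [0, 0, 1]],
# distortion coefficients [k1, k2, p1, p2, k3], and the stereo extrinsics R, T
# (T is in millimetres; the baseline is roughly 120 mm).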
left_camera_matrix = np.array([[1.041391360906410e+03, -0.497373958828725, 6.529633882697765e+02], 
                                [0, 1.041945256148336e+03, 5.011346138132716e+02], 
                                [0, 0, 1]])
left_distortion = np.array([[-0.038865939454924, 0.141434746390239, 0.002010222102729, 0.002010222102729, 0]])

right_camera_matrix = np.array([[1.040740304791424e+03, -0.745746463223811, 6.214512440664031e+02], 
                                [0, 1.041706604301214e+03, 4.936376604155090e+02], 
                                [0, 0, 1]])
right_distortion = np.array([[-0.042478429626748, 0.149801715580196, 4.679743572389913e-04, 3.157700181865255e-04, 0]])

R = np.array([[0.999998129286225, -8.830788262094638e-04, -0.001720928771787],
              [8.814580091946381e-04, 0.999999167496347, -9.423578887644861e-04],
              [0.001721759515406, 9.408391994334367e-04, 0.999998075181034]])
T = np.array([-1.202972148785817e+02, 0.244005292586265, 0.294722278253404])

size = (1280, 720)

R1, R2, P1, P2, Q, validPixROI1, validPixROI2 = cv2.stereoRectify(left_camera_matrix, left_distortion,
                                                                  right_camera_matrix, right_distortion, size, R,
                                                                  T)

# Rectification lookup maps: they relate points in the raw images to points in the rectified images.
left_map1, left_map2 = cv2.initUndistortRectifyMap(left_camera_matrix, left_distortion, R1, P1, size, cv2.CV_16SC2)
right_map1, right_map2 = cv2.initUndistortRectifyMap(right_camera_matrix, right_distortion, R2, P2, size, cv2.CV_16SC2)
# Q is the 4x4 disparity-to-depth reprojection matrix used by reprojectImageTo3D below.
print(Q)

# Open the recorded side-by-side stereo video (left | right halves of a 2560x720 frame).
capture = cv2.VideoCapture("./Capture00004.avi")

def onmouse_pick_points(event, x, y, flags, param):
    # Print the 3D coordinates and Euclidean distance of the clicked pixel.
    if event == cv2.EVENT_LBUTTONDOWN:
        threeD = param
        print('\nPixel coordinates x = %d, y = %d' % (x, y))
        print("World coordinates (x, y, z):", threeD[y][x][0] / 1000.0, threeD[y][x][1] / 1000.0, threeD[y][x][2] / 1000.0, "m")

        distance = math.sqrt(threeD[y][x][0] ** 2 + threeD[y][x][1] ** 2 + threeD[y][x][2] ** 2)
        distance = distance / 1000.0  # mm -> m
        print("Distance:", distance, "m")

# Create the SGBM matcher once, outside the loop, instead of re-creating it every frame.
blockSize = 8
img_channels = 3
stereo = cv2.StereoSGBM_create(minDisparity=1, numDisparities=112,
                               blockSize=blockSize,
                               P1=8 * img_channels * blockSize * blockSize,
                               P2=32 * img_channels * blockSize * blockSize,
                               disp12MaxDiff=-1, preFilterCap=1, uniquenessRatio=10,
                               speckleWindowSize=100, speckleRange=100,
                               mode=cv2.STEREO_SGBM_MODE_HH)

# Create the display windows once so the mouse callback inside the loop
# attaches to an existing window.
cv2.namedWindow("dis", cv2.WINDOW_NORMAL)
cv2.namedWindow("imagel", cv2.WINDOW_NORMAL)
cv2.namedWindow("before_threeD", cv2.WINDOW_NORMAL)

fps = 0.0
while True:
    t1 = time.time()
    ret, img = capture.read()
    if not ret or img is None:  # stop when the video ends
        break
    # Each frame is a side-by-side stereo pair: left half | right half.
    imgl = img[0:720, 0:1280]
    imgr = img[0:720, 1280:2560]

    imgl = cv2.cvtColor(imgl,cv2.COLOR_BGR2GRAY)
    imgr = cv2.cvtColor(imgr,cv2.COLOR_BGR2GRAY)

    imgl_rectified = cv2.remap(imgl,left_map1,left_map2,cv2.INTER_LINEAR)
    imgr_rectified = cv2.remap(imgr,right_map1,right_map2,cv2.INTER_LINEAR)

    imagel = cv2.cvtColor(imgl_rectified,cv2.COLOR_GRAY2BGR)
    imager = cv2.cvtColor(imgr_rectified,cv2.COLOR_GRAY2BGR)

    # Raw SGBM disparity (int16, fixed-point scaled by 16).
    disparity = stereo.compute(imgl_rectified, imgr_rectified)
    # Normalized 8-bit disparity for display and post-processing.
    disp = cv2.normalize(disparity, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
    # Reproject to 3D. SGBM disparities are scaled by 16, so the reprojected
    # points are multiplied back by 16; units follow T (millimetres).
    threeD = cv2.reprojectImageTo3D(disparity, Q, handleMissingValues=True)
    threeD = threeD * 16

    # Clicking on the "before_threeD" window prints coordinates and distance.
    cv2.setMouseCallback("before_threeD", onmouse_pick_points, threeD)


    # Keep only points whose height Y (in metres) lies in the band of interest;
    # ground and overhead points are zeroed out. (Vectorized with NumPy instead
    # of the original per-pixel Python loop.)
    Y = threeD[:, :, 1] / 1000.0
    disp[(Y < -1.0) | (Y > 0.55)] = 0

    # Close then open to fill holes and remove small speckles, then median-filter.
    k = np.ones((10, 10), dtype=np.uint8)
    disp = cv2.morphologyEx(disp, op=cv2.MORPH_CLOSE, kernel=k)
    disp = cv2.morphologyEx(disp, op=cv2.MORPH_OPEN, kernel=k)
    # disparity = cv2.GaussianBlur(disparity, (15, 15), 0, 0)
    disp = cv2.medianBlur(disp, 15)

    before_threeD = disp.copy()
    # Binarize the cleaned disparity and extract external contours as candidate obstacles.
    ret, disp = cv2.threshold(disp, 20, 255, type=cv2.THRESH_BINARY)
    contours, hierarchy = cv2.findContours(disp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for i in range(len(contours)):
        area = cv2.contourArea(contours[i])
        # Ignore blobs that are too small (noise) or that cover nearly the whole frame.
        if 15000 < area < 230400:
            x, y, w, h = cv2.boundingRect(contours[i])
            # Distance of the box centre, converted from millimetres to metres.
            cx, cy = x + int(w / 2), y + int(h / 2)
            distance = math.sqrt(threeD[cy][cx][0] ** 2 + threeD[cy][cx][1] ** 2 + threeD[cy][cx][2] ** 2)
            distance = distance / 1000.0
            cv2.rectangle(imagel, (x, y), (x + w, y + h), (255, 255, 0), 2)
            cv2.putText(imagel, str(distance), (x, y + 50), fontFace=cv2.FONT_HERSHEY_COMPLEX_SMALL, fontScale=2, color=(225, 0, 0), thickness=2)
            cv2.circle(imagel, (cx, cy), 3, (255, 0, 0), -1)

    # Running average of the frame rate.
    fps = (fps + (1. / (time.time() - t1))) / 2

    cv2.imshow("dis", disp)
    cv2.imshow("imagel", imagel)
    cv2.imshow("before_threeD", before_threeD)
    if cv2.waitKey(10) & 0xff == ord('q'):
        break
    print(fps)
capture.release()
cv2.destroyAllWindows()
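
On the speed problem: the two cheapest things to try are (a) switching the matcher mode from cv2.STEREO_SGBM_MODE_HH to the lighter cv2.STEREO_SGBM_MODE_SGBM_3WAY, and (b) running compute on downscaled images and upsampling the disparity afterwards. The helper below is a minimal sketch of option (b); the 0.5 scale factor is only an assumption to experiment with, not a measured optimum, and numDisparities could be reduced as well (it must stay a multiple of 16).

def compute_disparity_downscaled(matcher, left_rect, right_rect, scale=0.5):
    # Run SGBM at reduced resolution, then upsample the disparity and rescale
    # its values so they stay consistent with the full-resolution Q matrix.
    h, w = left_rect.shape[:2]
    small_l = cv2.resize(left_rect, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    small_r = cv2.resize(right_rect, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    disp_small = matcher.compute(small_l, small_r)  # int16, fixed-point scaled by 16
    disp_full = cv2.resize(disp_small, (w, h), interpolation=cv2.INTER_NEAREST)
    return (disp_full / scale).astype(disp_small.dtype)  # disparities in full-res pixel units

In the main loop, disparity = stereo.compute(imgl_rectified, imgr_rectified) would become disparity = compute_disparity_downscaled(stereo, imgl_rectified, imgr_rectified); everything downstream (normalize, reprojectImageTo3D, the Y-band filter) stays the same.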
  • I'll update this post once the C++ port is done; suggestions from experienced folks are very welcome~