Real-Time Distortion Correction

This article introduces the nonlinear distortion of camera imaging, including radial, tangential, and thin prism distortion, and describes the correction methods available in OpenCV, such as cv2.undistort and cv2.initUndistortRectifyMap + cv2.remap. It also covers the hand-eye calibration workflow and multithreaded/multiprocess implementations of real-time distortion correction for better throughput, plus possible fixes for RTSP stream decoding errors.

Camera Distortion

Because of manufacturing imperfections and noise introduced during imaging, a real camera usually does not satisfy the ideal pinhole model; such an imaging model is called a nonlinear model. The nonlinear model contains three main kinds of distortion: radial distortion, tangential distortion, and thin prism distortion. Let the ideal (undistorted) image point be $(x_u, y_u)$ and the actual (distorted) image point be $(x_d, y_d)$; the nonlinear distortion model is then

$$
\begin{cases}
x_d = x_u + \delta_x \\
y_d = y_u + \delta_y
\end{cases}
$$

  • a) Radial distortion

Radial distortion means that an image point is shifted inward or outward from its ideal position, i.e. the error appears along the radial direction.

$$
\begin{cases}
\delta_x = x_u\,(k_1 r_u^2 + k_2 r_u^4 + k_3 r_u^6 + \dots) \\
\delta_y = y_u\,(k_1 r_u^2 + k_2 r_u^4 + k_3 r_u^6 + \dots)
\end{cases}
$$

where $r_u^2 = x_u^2 + y_u^2$, and $k_1, k_2, k_3$ are the radial distortion coefficients.

  • b) Tangential distortion

Tangential distortion arises from assembly errors in the optical system: the lens elements cannot be perfectly coaxial, so image points are displaced in the tangential direction.

$$
\begin{cases}
\delta_x = p_1 (3x_u^2 + y_u^2) + 2 p_2 x_u y_u \\
\delta_y = p_2 (3y_u^2 + x_u^2) + 2 p_1 x_u y_u
\end{cases}
$$

where $p_1, p_2$ are the tangential distortion coefficients.

  • c) Thin prism distortion

Thin prism distortion is an image deformation caused by manufacturing errors of the lens and of the imaging sensor array.

$$
\begin{cases}
\delta_x = s_1 (x_u^2 + y_u^2) \\
\delta_y = s_2 (x_u^2 + y_u^2)
\end{cases}
$$

where $s_1, s_2$ are the thin prism distortion coefficients.

  • Distortion parameters

    • Radial distortion: $k_1, k_2, k_3$

    • Tangential distortion: $p_1, p_2$

    • Thin prism distortion: $s_1, s_2$

  • (OpenCV usually considers only the first five parameters: $k_1, k_2, p_1, p_2, k_3$.) Among lens distortions, radial distortion (mainly its first two orders) and tangential distortion dominate, accounting for roughly 95% of the total, while thin prism distortion only introduces a small amount of additional radial and tangential distortion. In most cases it is therefore enough to model radial and tangential distortion and ignore thin prism distortion (a small numeric sketch of the five-parameter model follows).
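
To make the model concrete, here is a minimal numeric sketch that applies five OpenCV-style coefficients $(k_1, k_2, p_1, p_2, k_3)$ to a normalized image point, following the formulas above; the coefficient values and the sample point are made up purely for illustration.

def distort_point(xu, yu, k1, k2, p1, p2, k3):
    """Apply the radial + tangential model above to a normalized point (x_u, y_u)."""
    r2 = xu * xu + yu * yu                                  # r_u^2
    radial = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3          # k1*r^2 + k2*r^4 + k3*r^6
    dx = xu * radial + p1 * (3 * xu * xu + yu * yu) + 2 * p2 * xu * yu
    dy = yu * radial + p2 * (3 * yu * yu + xu * xu) + 2 * p1 * xu * yu
    # Note: OpenCV's documentation writes the tangential terms with p1/p2 in the
    # opposite roles (2*p1*x*y + p2*(r^2 + 2*x^2)); the formula here follows this article.
    return xu + dx, yu + dy                                 # (x_d, y_d)

# Made-up coefficients: a point near the corner of the normalized image moves noticeably.
print(distort_point(0.5, 0.4, k1=-0.30, k2=0.10, p1=0.001, p2=-0.001, k3=0.0))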

Hand-Eye Calibration

  • Grab frames from the video stream
  • Press 'S' to interactively save a calibration-board image
  • Once enough calibration images have been collected, press 'C' to calibrate
  • Press 'Q' to quit
import os
import cv2
import calib  # the author's calibration module (invoked below; implementation not shown here)

def grab_web_cam(rtsp, path_img):
    VideoCapture = cv2.VideoCapture(rtsp)
    if not VideoCapture.isOpened():
        print("Error open video!")
        exit()
    cv2.namedWindow("frame", cv2.WINDOW_NORMAL)
    save_num = 0
    while VideoCapture.isOpened():
        ret, frame = VideoCapture.read()
        if not ret:
            break

        cv2.imshow("frame", frame)
        k = cv2.waitKey(1) & 0xFF        # read the key once per frame, then branch on it
        if k == ord('q'):                # quit
            break
        elif k == ord('s'):              # save the current frame as a calibration image
            cv2.imencode(".jpg", frame)[1].tofile(os.path.join(path_img, "img_%04d.jpg" % save_num))
            save_num += 1
            print("save image:[%d]" % save_num)
        elif k == ord('c'):              # run calibration on the saved images
            calib.calib(path_img, path_cfg)
    VideoCapture.release()
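
The calib.calib call above refers to the author's own calibration module, which the article does not show. Below is a minimal sketch of what such a routine might look like, assuming a standard chessboard target; the 9x6 corner count, the *.jpg glob, and saving to an .npz file with the keys mtx / dist / new_mtx / roi are assumptions, chosen to match the read_campara loader used later in this article.

import os, glob
import cv2
import numpy as np

def calib(path_img, path_cfg, board_size=(9, 6)):
    """Calibrate from the saved chessboard images and write mtx/dist/new_mtx/roi to an .npz file."""
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    # 3D corner coordinates of the board in its own plane (z = 0), in board-square units.
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)

    obj_points, img_points, img_gray = [], [], None
    for fname in glob.glob(os.path.join(path_img, "*.jpg")):
        img_gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(img_gray, board_size, None)
        if found:
            corners = cv2.cornerSubPix(img_gray, corners, (11, 11), (-1, -1), criteria)
            obj_points.append(objp)
            img_points.append(corners)

    # imageSize is (width, height); shape[::-1] converts from (rows, cols).
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, img_gray.shape[::-1], None, None)
    h, w = img_gray.shape[:2]
    new_mtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 0, (w, h))
    np.savez(path_cfg, mtx=mtx, dist=dist, new_mtx=new_mtx, roi=np.array(roi))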

Real-Time Distortion Correction

cv2.undistort & cv2.initUndistortRectifyMap + cv2.remap

  • mtx, dist, and new_mtx are the camera intrinsic matrix, the distortion coefficients, and the optimized (new) camera matrix, respectively

  • (1) Direct correction

    frame_rect = cv2.undistort(frame, mtx, dist, None, new_mtx)
    


  • (2) Compute the mapping first, then correct

    mapx, mapy = cv2.initUndistortRectifyMap(mtx, dist, None, new_mtx, (int(frame_width), int(frame_height)), 5)
    img_dst = cv2.remap(frame, mapx, mapy, interpolation=cv2.INTER_NEAREST, dst=None, borderMode=cv2.BORDER_CONSTANT, borderValue=(0, 0, 0))
    


  • When correcting only a few images, the two approaches perform about the same
  • When correcting many images, call initUndistortRectifyMap once to obtain the maps mapx and mapy, then reuse them as the remap inputs for every correction
  • For real-time distortion correction, the frame-rate difference is significant (see the timing sketch below)
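
As a rough illustration of the last point, the timing sketch below corrects the same frame 100 times with each approach; the parameter file path and the test image are placeholders, and the absolute numbers depend on resolution and hardware.

import time
import cv2
import numpy as np

with np.load("./cfg/cam2.npz") as X:                      # assumed parameter file layout
    mtx, dist, new_mtx = X["mtx"], X["dist"], X["new_mtx"]

frame = cv2.imread("sample.jpg")                          # any frame from the calibrated camera
h, w = frame.shape[:2]

t0 = time.time()
for _ in range(100):                                      # (1) full undistort every iteration
    cv2.undistort(frame, mtx, dist, None, new_mtx)
t_undistort = time.time() - t0

mapx, mapy = cv2.initUndistortRectifyMap(mtx, dist, None, new_mtx, (w, h), 5)
t0 = time.time()
for _ in range(100):                                      # (2) maps computed once, remap reused
    cv2.remap(frame, mapx, mapy, cv2.INTER_NEAREST)
t_remap = time.time() - t0

print("undistort: %.3fs   remap: %.3fs" % (t_undistort, t_remap))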

Display Region

  • newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (img_gray.shape[1], img_gray.shape[0]), alpha, (width, height))

    • Optimizes the camera intrinsics (camera matrix); optional, but improves accuracy
    • alpha = 1: all source pixels are kept, so black border pixels appear in the corrected image
    • alpha = 0: invalid pixels are cropped away as far as possible so that only valid pixels remain; alpha acts as a scale between these two extremes
  • Border filling

    • Distortion leaves empty regions at the edges of the corrected image

    • Different filling effects can be chosen via the borderMode argument of remap

  • Valid-region cropping

    • getOptimalNewCameraMatrix also returns the valid ROI
    • Crop the corrected image to this ROI (see the sketch after this list)
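
A short sketch of the alpha choice and the ROI crop, using the same assumed .npz parameter layout and a placeholder test image:

import cv2
import numpy as np

with np.load("./cfg/cam2.npz") as X:                      # assumed parameter file layout
    mtx, dist = X["mtx"], X["dist"]

frame = cv2.imread("sample.jpg")                          # any frame from the calibrated camera
h, w = frame.shape[:2]

# alpha = 1: keep every source pixel (black filled borders become visible).
new_mtx_1, roi_1 = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
# alpha = 0: scale so that (almost) only valid pixels remain.
new_mtx_0, roi_0 = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 0, (w, h))

rect = cv2.undistort(frame, mtx, dist, None, new_mtx_1)
x, y, rw, rh = roi_1                                      # valid-pixel ROI returned above
rect_cropped = rect[y:y + rh, x:x + rw]                   # crop away the empty border region
cv2.imwrite("rect_full.jpg", rect)
cv2.imwrite("rect_cropped.jpg", rect_cropped)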

Error:[h264 @ 000001bf04177660] error while decoding MB 11 84, bytestream -17

  • After grabbing the RTSP stream, the following error is occasionally reported:

    [h264 @ 000001bf04177660] error while decoding MB 11 84, bytestream -17
    
  • Analysis: besides intra-frame compression, H.264 uses an I-frame / P-frame / B-frame strategy to compress across consecutive frames. If the reference frames that H.264 decoding depends on are missing (for example, dropped on the network), the stream cannot be decoded correctly

  • Possible solutions

    • (1) Check the network-camera read and re-open the capture when it fails:

          while (VideoCapture.isOpened()):
              ret, frame = VideoCapture.read()
              if not ret:
                  VideoCapture = cv2.VideoCapture(rtsp)
                  print("lost, have to reinitialization!")  
                  continue
      
    • (2) Separate frame grabbing from correction, using multithreading or multiprocessing (a further, commonly suggested mitigation is sketched right after this list)
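
A further mitigation that is often suggested for this class of decode error (not covered in the original article, and assuming OpenCV was built with the FFmpeg backend) is to force RTSP over TCP so that packets are not silently dropped on a lossy network:

import os
import cv2

# Must be set before the VideoCapture is created; format is "key;value|key;value".
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;tcp"

cap = cv2.VideoCapture('rtsp://admin:123456@192.168.*.*', cv2.CAP_FFMPEG)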

Multithreaded Real-Time Correction

  • Frame grabbing
  • Correction + display
import cv2, os, time
import numpy as np
import threading, queue

def read_campara(path):
    with np.load(path) as X:
        data = [X[i] for i in ('mtx', 'dist', 'new_mtx', 'roi')]
        mtx, dist, new_mtx, roi = data
    return mtx, dist, new_mtx, roi

# Consumer thread: pull frames from the queue, remap them, and display original vs. corrected side by side.
def show(fps_enable = True, roi_enable=True):
    x, y, w, h = roi
    cv2.namedWindow('undistort', cv2.WINDOW_NORMAL)
    while True:
        if not q.empty():
            frame = q.get()

            if fps_enable:
                timer = cv2.getTickCount()

            frame_rect = cv2.remap(frame, mapx, mapy, interpolation=cv2.INTER_NEAREST, dst=None,
                                borderMode=cv2.BORDER_CONSTANT, borderValue=(0, 0, 0))
            if fps_enable:
                fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer)
                print("FPS:{}".format(fps))

            if roi_enable:
                rect_roi = frame_rect[y:y + h, x:x + w]
                frame_roi = frame[y:y + h, x:x + w]
                img_stack = np.hstack([frame_roi, rect_roi])
            else:
                img_stack = np.hstack([frame, frame_rect])  # src

            if fps_enable:
                cv2.putText(img_stack, "FPS : " + str(int(fps)), (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 3)

            cv2.imshow('undistort', img_stack)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break

# Producer thread: grab frames from the RTSP stream and push every `interval`-th frame into the queue.
def undistort_show_threads(rtsp, path_cfg, interval = 3):
    VideoCapture = cv2.VideoCapture(rtsp)
    while (VideoCapture.isOpened()):
        ret, frame = VideoCapture.read()
        if not ret:
            VideoCapture = cv2.VideoCapture(rtsp)
            print("lost, have to reinitialization!")  
            continue
        pos = VideoCapture.get(cv2.CAP_PROP_POS_FRAMES)
        if pos % interval != 0:
            continue
        q.put(frame)
    VideoCapture.release()


if __name__ == "__main__":
    path_cfg = r"./cfg/cam2.npz"
    rtsp = 'rtsp://admin:123456@192.168.*.*'

    frame_width = 2560
    frame_height = 1440
    mtx, dist, new_mtx, roi = read_campara(path_cfg)
    mapx, mapy = cv2.initUndistortRectifyMap(mtx, dist, None, new_mtx, (int(frame_width), int(frame_height)), 5)

    q = queue.Queue()  # unbounded here; the multiprocess version below caps it at 4 to limit memory use
    t1 = threading.Thread(target=undistort_show_threads, args=(rtsp, path_cfg))
    t2 = threading.Thread(target = show)

    t1.start()
    t2.start()

    t1.join()
    t2.join()

Multiprocess Real-Time Correction

  • Frame grabbing
  • Correction + display
import cv2, multiprocessing
import numpy as np


def read_campara(path):
    with np.load(path) as X:
        data = [X[i] for i in ('mtx', 'dist', 'new_mtx', 'roi')]
        mtx, dist, new_mtx, roi = data
    return mtx, dist, new_mtx, roi


# Consumer process: pull frames from the shared queue, remap them, and display original vs. corrected side by side.
def show(queue, roi, mapx, mapy, fps_enable = True, roi_enable = True):
    x, y, w, h = roi
    cv2.namedWindow('undistort', cv2.WINDOW_NORMAL)
    while True:
        if not queue.empty():
            frame = queue.get()
            if fps_enable:
                timer = cv2.getTickCount()

            frame_rect = cv2.remap(frame, mapx, mapy, interpolation=cv2.INTER_NEAREST, dst=None,
                                borderMode=cv2.BORDER_CONSTANT, borderValue=(0, 0, 0))

            if fps_enable:
                fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer)
                print("FPS:{}".format(fps))

            if roi_enable:
                rect_roi = frame_rect[y:y + h, x:x + w]
                frame_roi = frame[y:y + h, x:x + w]
                img_stack = np.hstack([frame_roi, rect_roi])
            else:
                img_stack = np.hstack([frame, frame_rect])  # src

            if fps_enable:
                cv2.putText(img_stack, "FPS : " + str(int(fps)), (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 3)

            cv2.imshow('undistort', img_stack)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break


# Producer process: grab frames from the RTSP stream and push every `interval`-th frame into the shared queue.
def undistort_show_processes(queue, rtsp, path_cfg, interval = 3):
    VideoCapture = cv2.VideoCapture(rtsp)
    while (VideoCapture.isOpened()):
        ret, frame = VideoCapture.read()
        if not ret:
            VideoCapture = cv2.VideoCapture(rtsp)
            print("lost, have to reinitialization!")  
            continue
        pos = VideoCapture.get(cv2.CAP_PROP_POS_FRAMES)
        if pos % interval != 0:
            continue
        queue.put(frame)
    VideoCapture.release()



if __name__ == "__main__":
    path_cfg = r"./cfg/cam2.npz"
    rtsp = 'rtsp://admin:123455@192.168.*.*'

    frame_width = 2560
    frame_height = 1440
    mtx, dist, new_mtx, roi = read_campara(path_cfg)
    mapx, mapy = cv2.initUndistortRectifyMap(mtx, dist, None, new_mtx, (int(frame_width), int(frame_height)), 5)

    multiprocessing.set_start_method(method='spawn')  # 'spawn' starts fresh interpreter processes (the default on Windows)
    queue = multiprocessing.Queue(maxsize=4)
    processes = []
    processes.append(multiprocessing.Process(target = undistort_show_processes, args=(queue, rtsp, path_cfg)))
    processes.append(multiprocessing.Process(target = show, args = (queue, roi, mapx, mapy)))

    for process in processes:
        process.daemon = True
        process.start()

    for process in processes:
        process.join()

References

opencv read error:[h264 @ 0x8f915e0] error while decoding MB 53 20, bytestream -7
