1. Stereo Ranging Principle
(Figure: stereo ranging geometry; image source linked in the original post.)
$O_L$ and $O_R$ are the optical centers of the left and right camera lenses, and $b$ is the distance between them (the baseline). The left and right image planes both have length $L$ and are parallel to the baseline (the line $O_LO_R$). A point $P(X,Y,Z)$ in the field of view projects to $P_L$ and $P_R$ on the left and right image planes; $X_L$ and $X_R$ are the physical distances from each image point to the left edge of its image plane, and the focal length is $f$. Triangle $PP_LP_R$ is similar to triangle $PO_LO_R$, so

$$\frac{|P_LP_R|}{|O_LO_R|}=\frac{Z-f}{Z}$$
that is,

$$\frac{Z-f}{Z}=\frac{b-(X_L-\frac{L}{2})-(\frac{L}{2}-X_R)}{b}=\frac{b-(X_L-X_R)}{b}$$
Solving for $Z$:

$$Z=\frac{fb}{X_L-X_R}=\frac{f_xb}{u_L-u_R}=\frac{f_xb}{d}$$

where $f_x=\frac{f}{dx}$, $d=u_L-u_R$ is the pixel disparity, and $dx$ is the physical size of a single pixel (unit size). Likewise, let the target position in pixel coordinates be $(u,v)$ and the principal point be $(u_0,v_0)$; then
$$\frac{(u-u_0)dx}{f}=\frac{u-u_0}{f_x}=\frac{X}{Z}$$

$$\frac{(v-v_0)dy}{f}=\frac{v-v_0}{f_y}=\frac{Y}{Z}$$
It follows that

$$X=\frac{b(u-u_0)}{d},\qquad Y=\frac{f_x}{f_y}\frac{b(v-v_0)}{d}$$

and in general $\frac{f_x}{f_y}=1$.
2. Stereo Calibration
1. Take the left camera's coordinate system as the main coordinate system. For a point $P$ in space, let $P_W$ be its coordinates in the world frame, $P_L$ its coordinates in the left camera frame, and $P_R$ its coordinates in the right camera frame. From monocular calibration we know there exist rotation matrices $R_L,R_R$ and translation vectors $T_L,T_R$ such that

$$P_L=R_LP_W+T_L$$

$$P_R=R_RP_W+T_R$$
Combining the two and eliminating $P_W$ gives

$$P_R=R_RR_L^{-1}P_L+T_R-R_RR_L^{-1}T_L=RP_L+T$$

Monocular calibration of each camera yields the extrinsic parameters $R_L,T_L,R_R,T_R$, and from them the transformation $R,T$ between the two camera coordinate systems.
2. Epipolar geometry
As shown in the figure:
In the left camera frame, $O_LO_R=T$, $O_LP=P_L$, and $O_RP=P_L-T$; therefore, in the $O_R$ frame,

$$P_R=R(P_L-T)$$

$$\Rightarrow R^{-1}P_R=P_L-T$$

$$\Leftrightarrow R^TP_R=P_L-T$$

(since $R$ is orthogonal, $R^{-1}=R^T$).
In the $O_L$ frame, the three points $O_L,O_R,P$ are coplanar, so the scalar triple product of the vectors $O_LP,O_LO_R,O_RP$ is zero, i.e.

$$(R^TP_R)^T(T\times P_L)=0$$
$$\Leftrightarrow P_R^TR(T\times P_L)=0$$

$$\Rightarrow P_R^TRSP_L=0$$

where

$$S=\left[ \begin{array}{ccc} 0& -T_z & T_y\\ T_z & 0 & -T_x\\ -T_y & T_x & 0\\ \end{array} \right]$$
The essential matrix is $E=RS$; it depends only on the relative pose of the two cameras and on nothing else. Next, from the camera imaging model, let $M_l,M_r$ be the intrinsic matrices of the left and right cameras. Then
$$\left[ \begin{array}{c} u\\ v\\ 1\\ \end{array} \right]=p_r=\frac{1}{Z_r}M_rP_r \Rightarrow P_r=Z_rM_r^{-1}p_r$$
and similarly

$$P_l=Z_lM_l^{-1}p_l$$
Substituting into the epipolar constraint,

$$Z_r(M_r^{-1}p_r)^TE(Z_lM_l^{-1}p_l)=0$$

which simplifies to

$$p_r^TM_r^{-T}EM_l^{-1}p_l=0$$
The fundamental matrix

$$F=M_r^{-T}EM_l^{-1}$$

relates the pixel positions of the same point on the imaging planes of the two views.
3. Bouguet's method
Split the $R$ from step 1 into rotation matrices $R_l,R_r$ that rotate the left and right cameras by half each; the goal is to minimize the distortion caused by reprojection while maximizing the common viewing area.
i. Decompose $R$: since $R=R_rR_l^{-1}$, take

$$R_r=R^{1/2},\qquad R_l=R^{-1/2}$$
ii. Rotating each camera by half brings the two optical axes parallel.
iii. Construct a transformation that makes the baseline parallel to the image plane:
$$e_1=\frac{T}{|T|}=\frac{(T_x,T_y,T_z)^T}{|T|}$$

$$e_2=\frac{(-T_y,T_x,0)^T}{\sqrt{T_x^2+T_y^2}}$$

$$e_3=e_1\times e_2$$

which gives

$$R_{rect}=\left[ \begin{array}{c} e_1^T\\ e_2^T\\ e_3^T\\ \end{array} \right]$$
Note: this construction is simply the change-of-basis matrix between the current frame and the target frame. The target x-axis is the direction of the line joining the two optical centers, the y-axis is parallel to the image plane and perpendicular to x, and with a right-handed system two axes determine the third; the unit vectors along the three axes form the transition matrix between the two frames.
Finally,

$$R_l^{'}=R_{rect}R_l,\qquad R_r^{'}=R_{rect}R_r$$
Reference blog: https://www.cnblogs.com/zyly/p/9373991.html#_label1
(Very well written: theory, code, and explanations.)
3. Code
Mainly based on this blog: https://www.cnblogs.com/polly333/p/5013505.html
I adjusted it for my own parameters; the pitfalls I hit are noted below.
The pipeline:
1. Compute the inner corner points of all chessboard images. The corner-detection function:
/*
* image        current image, grayscale
* patternSize  inner-corner grid size // in my tests, swapping width and height also worked
* corners      vector<Point2f>, outputs all the inner corners
*/
bool findChessboardCorners( InputArray image, Size patternSize, OutputArray corners,
int flags = CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE );
2. Compute the stereo calibration parameters.
/*
* objectPoints   vector<vector<Point3f>>, real-world coordinates of the inner corners for all images
* imagePoints1   pixel coordinates of the inner corners in the left images
* imagePoints2   pixel coordinates of the inner corners in the right images
* cameraMatrix1  left camera intrinsic matrix
* distCoeffs1    left camera distortion coefficients
* cameraMatrix2  right camera intrinsic matrix
* distCoeffs2    right camera distortion coefficients
* imageSize      image resolution
* R              rotation matrix between the two cameras
* T              translation vector between the two cameras
* E              essential matrix, determined by the relative pose of the two cameras
* F              fundamental matrix, determined by the relative pose plus each camera's intrinsics
* flags          calibration mode
* criteria       termination criteria for the iteration
*/
double stereoCalibrate( InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints1,
InputArrayOfArrays imagePoints2, InputOutputArray cameraMatrix1,
InputOutputArray distCoeffs1, InputOutputArray cameraMatrix2,
InputOutputArray distCoeffs2,Size imageSize, OutputArray R,OutputArray T, OutputArray E, OutputArray F,
int flags = CALIB_FIX_INTRINSIC, TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 1e-6) );
Note: this function can also estimate both cameras' intrinsics and distortion coefficients by itself, but the results are quite poor. It is much better to calibrate each camera's intrinsics and distortion separately first, pass them in, and set flags to CV_CALIB_USE_INTRINSIC_GUESS; the results improve a lot.
3. Compute the rectification parameters.
/*
* cameraMatrix1  left camera intrinsics
* distCoeffs1    left camera distortion coefficients
* cameraMatrix2  right camera intrinsics
* distCoeffs2    right camera distortion coefficients
* imageSize      image size
* R/T            rotation and translation between the two cameras
* R1,R2          rectification rotations for the two cameras
* P1,P2          projection matrices for the two cameras
* Q              disparity-to-depth mapping matrix
* flags=CALIB_ZERO_DISPARITY  principal points at the same pixel position
* alpha=-1       scaling factor; remapped images show only valid pixels
* newImageSize   new image size
* validPixROI1/2 cropped image regions after projection
*/
void stereoRectify( InputArray cameraMatrix1, InputArray distCoeffs1,
InputArray cameraMatrix2, InputArray distCoeffs2,
Size imageSize, InputArray R, InputArray T,
OutputArray R1, OutputArray R2,
OutputArray P1, OutputArray P2,
OutputArray Q, int flags = CALIB_ZERO_DISPARITY,
double alpha = -1, Size newImageSize = Size(),
CV_OUT Rect* validPixROI1 = 0, CV_OUT Rect* validPixROI2 = 0 );
From the stereo ranging derivation in section 1,
$$Z=\frac{f_xb}{d}$$

$$X=\frac{b(u-u_0)}{d},\qquad Y=\frac{b(v-v_0)}{d}$$
so that, in homogeneous form,

$$\left[ \begin{array}{c} X\\ Y\\ Z\\ W \end{array} \right]=\left[ \begin{array}{cccc} 1&0&0&-u_0\\ 0&1&0&-v_0\\ 0&0&0&f_x\\ 0&0&\frac{1}{b}&0 \end{array} \right]\left[ \begin{array}{c} u\\ v\\ d\\ 1 \end{array} \right]$$

The $Q$ computed above is exactly this matrix:

$$Q=\left[ \begin{array}{cccc} 1&0&0&-u_0\\ 0&1&0&-v_0\\ 0&0&0&f_x\\ 0&0&\frac{1}{b}&0 \end{array} \right]$$
$R_1,R_2$ are the rotation matrices that project each camera's undistorted image plane onto the common image plane, i.e. the $R_l^{'},R_r^{'}$ computed in the Bouguet rectification step above. $P_1,P_2$ are the projection matrices of the two cameras; they map points from world coordinates to pixel coordinates.
4. From each camera's intrinsics, distortion coefficients, and rectification rotation, compute the projection maps, i.e. for every pixel of the rectified image, its corresponding position in the original image.
/*
* cameraMatrix     camera intrinsics
* distCoeffs       camera distortion coefficients
* R                rectification rotation
* newCameraMatrix  new camera projection matrix
* size             image size
* m1type           data type of the two output maps
* map1             first output map
* map2             second output map
*/
void initUndistortRectifyMap( InputArray cameraMatrix, InputArray distCoeffs,
InputArray R, InputArray newCameraMatrix,
Size size, int m1type, OutputArray map1, OutputArray map2 );
(Figure: the rectification pipeline diagram from the official OpenCV documentation.)
5. Apply the projection maps to rectify the original images.
/*
* src            input original image
* dst            output rectified image
* map1/2         projection maps computed above
* interpolation  interpolation method
*/
void remap( InputArray src, OutputArray dst,
InputArray map1, InputArray map2,
int interpolation, int borderMode = BORDER_CONSTANT,
const Scalar& borderValue = Scalar());
Full code:
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <vector>
#include <string>
#include <algorithm>
#include <iostream>
#include <iterator>
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
using namespace cv;
using namespace std;
static void StereoCalib(const vector<string>& imagelist, Size boardSize, bool useCalibrated = true, bool showRectified = true)
{
if (imagelist.size() % 2 != 0)
{
cout << "Error: the image list contains odd (non-even) number of elements\n";
return;
}
bool displayCorners = true;
const int maxScale = 2;
const float squareSize = 1.f; // Set this to your actual square size
// ARRAY AND VECTOR STORAGE:
//containers for the image-plane coordinates and the world-frame coordinates
vector<vector<Point2f> > imagePoints[2];
vector<vector<Point3f> > objectPoints;
Size imageSize;
int i, j, k, nimages = (int)imagelist.size() / 2;
nimages = 12; //hard-coded number of image pairs used here
//number of views per side, e.g. 10 images in the list means 5 left and 5 right
imagePoints[0].resize(nimages);
imagePoints[1].resize(nimages);
vector<string> goodImageList;
std::vector<cv::Mat> left;
std::vector<cv::Mat> right;
for (int i = 0; i < nimages; i++) //load the locally captured chessboard images
{
std::string path_l = "C:\\Users\\user\\Desktop\\双目标定\\data\\left\\" + std::to_string(i) + ".png";
std::string path_r = "C:\\Users\\user\\Desktop\\双目标定\\data\\right\\" + std::to_string(i) + ".png";
cv::Mat left_img = cv::imread(path_l, 0);
cv::Mat right_img = cv::imread(path_r, 0);
left.push_back(left_img);
right.push_back(right_img);
}
for (i = j = 0; i < nimages; i++)
{
std::cout << i << std::endl;
for (k = 0; k < 2; k++)
{
//read the images one by one
const string& filename = imagelist[i * 2 + k];
Mat img;// = imread(filename, 0);
if (k == 0)
{
img = left[i].clone();
}
else
{
img = right[i].clone();
}
if (img.empty())
break;
if (imageSize == Size())
imageSize = img.size();
else if (img.size() != imageSize)
{
cout << "The image " << filename << " has the size different from the first image size. Skipping the pair\n";
break;
}
bool found = false;
//take a reference to the corner vector for the current view (left or right)
vector<Point2f>& corners = imagePoints[k][j];
for (int scale = 1; scale <= maxScale; scale++)
{
Mat timg;
//the image must be an 8-bit grayscale or color image
if (scale == 1)
timg = img;
else
resize(img, timg, Size(), scale, scale);
//boardSize is the number of inner corners per chessboard row and column
found = findChessboardCorners(timg, boardSize, corners,
CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_NORMALIZE_IMAGE);
if (found)
{
//if detection ran on an upscaled image, scale the corners back
if (scale > 1)
{
Mat cornersMat(corners);
cornersMat *= 1. / scale;
}
break;
}
}
if (displayCorners)
{
//cout << filename << endl;
Mat cimg, cimg1;
cvtColor(img, cimg, COLOR_GRAY2BGR);
drawChessboardCorners(cimg, boardSize, corners, found);
double sf = 640. / MAX(img.rows, img.cols);
resize(cimg, cimg1, Size(), sf, sf);
cv::imshow("corners", cimg1);
char c = (char)waitKey(500);
if (c == 27 || c == 'q' || c == 'Q') //Allow ESC to quit
exit(-1);
}
else
putchar('.');
if (!found)
break;
cornerSubPix(img, corners, Size(11, 11), Size(-1, -1),
TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS,
30, 0.01));
}
if (k == 2)
{
//goodImageList.push_back(imagelist[i * 2]);
//goodImageList.push_back(imagelist[i * 2 + 1]);
j++;
}
}
cout << j << " pairs have been successfully detected.\n";
nimages = j;
if (nimages < 2)
{
cout << "Error: too little pairs to run the calibration\n";
return;
}
imagePoints[0].resize(nimages);
imagePoints[1].resize(nimages);
// world coordinates of the chessboard corners
objectPoints.resize(nimages);
for (i = 0; i < nimages; i++)
{
for (j = 0; j < boardSize.height; j++)
for (k = 0; k < boardSize.width; k++)
//directly as float; coordinates are (row, column)
objectPoints[i].push_back(Point3f(j * squareSize, k * squareSize, 0));
}
cout << "Running stereo calibration ...\n";
//create the intrinsic matrices
Mat cameraMatrix[2], distCoeffs[2];
Mat R, T, E, F;
vector<Mat> Rl, Tl, Rr, Tr;
// Mat cameraMatrix, distCoeffs;// , R, T;//内参矩阵,畸变系数,旋转量,偏移量
//vector<Mat> R, T;
double dis1 = cv::calibrateCamera(objectPoints, imagePoints[0], left[0].size(), cameraMatrix[0], distCoeffs[0], Rl, Tl);
double dis2 = cv::calibrateCamera(objectPoints, imagePoints[1], right[0].size(), cameraMatrix[1], distCoeffs[1], Rr, Tr);
//solve for the stereo calibration parameters
//the intrinsics and distortion coefficients OpenCV estimates directly from the corners
//are very poor; calibrate each camera separately first and pass the results in
/****
*
* objectPoints       real-world corner coordinates
* imagePoints[0]     pixel coordinates of the left inner corners
* imagePoints[1]     pixel coordinates of the right inner corners
* cameraMatrix[0/1]  left/right camera intrinsics
* distCoeffs[0/1]    left/right camera distortion parameters
* imageSize          size of each image
* R, T               rotation and translation between the two cameras
* E                  essential matrix
* F                  fundamental matrix
* CV_CALIB_USE_INTRINSIC_GUESS  use the supplied intrinsics as the initial guess
* termination criteria for the iteration
****/
double rms = stereoCalibrate(objectPoints, imagePoints[0], imagePoints[1],
cameraMatrix[0], distCoeffs[0],
cameraMatrix[1], distCoeffs[1],
imageSize, R, T, E, F,///
CV_CALIB_USE_INTRINSIC_GUESS,
// CV_CALIB_FIX_ASPECT_RATIO +
// CV_CALIB_ZERO_TANGENT_DIST +
// CV_CALIB_SAME_FOCAL_LENGTH +
// CV_CALIB_RATIONAL_MODEL +
// CV_CALIB_FIX_K3 + CV_CALIB_FIX_K4 + CV_CALIB_FIX_K5,
TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5)
);
cout << "done with RMS error=" << rms << endl;
// CALIBRATION QUALITY CHECK
// because the output fundamental matrix implicitly
// includes all the output information,
// we can check the quality of calibration using the
// epipolar geometry constraint: m2^t*F*m1=0
//compute the calibration error
double err = 0;
int npoints = 0;
vector<Vec3f> lines[2];
for (i = 0; i < nimages; i++)
{
int npt = (int)imagePoints[0][i].size();
Mat imgpt[2];
for (k = 0; k < 2; k++)
{
imgpt[k] = Mat(imagePoints[k][i]); //convert the corner vector to a Mat
//undistort the image point coordinates using the intrinsics and distortion parameters
undistortPoints(imgpt[k], imgpt[k], cameraMatrix[k], distCoeffs[k], Mat(), cameraMatrix[k]);
//compute the corresponding epipolar lines
computeCorrespondEpilines(imgpt[k], k + 1, F, lines[k]);
}
//error between the computed epipolar lines and the actual points: distance from each
//right-image point to the epiline from the left image, plus the symmetric term
for (j = 0; j < npt; j++)
{
double errij = fabs(imagePoints[0][i][j].x * lines[1][j][0] +
imagePoints[0][i][j].y * lines[1][j][1] + lines[1][j][2]) +
fabs(imagePoints[1][i][j].x * lines[0][j][0] +
imagePoints[1][i][j].y * lines[0][j][1] + lines[0][j][2]);
err += errij;
}
npoints += npt;
}
cout << "average reprojection err = " << err / npoints << endl;
Mat R1, R2, P1, P2, Q;
Rect validRoi[2];
//stereo rectification from the intrinsics and the inter-camera rotation/translation
/*
* cameraMatrix[0/1]  left/right camera intrinsics
* distCoeffs[0/1]    left/right camera distortion coefficients
* imageSize          image size
* R,T                rotation/translation between the two cameras
* R1,R2              rectification transforms for the two cameras
* P1,P2              projection matrices for the two cameras
* Q                  disparity-to-depth mapping matrix
* CALIB_ZERO_DISPARITY  principal points at the same pixel position
* 0                  scaling factor //with 0 the image is scaled and shifted so remap shows only valid pixels
* imageSize          new image size
* cropped image regions after projection
*/
stereoRectify(cameraMatrix[0], distCoeffs[0],
cameraMatrix[1], distCoeffs[1],
imageSize, R, T, R1, R2, P1, P2, Q,
CALIB_ZERO_DISPARITY, 0, imageSize, &validRoi[0], &validRoi[1]);
// OpenCV can handle left-right
// or up-down camera arrangements
bool isVerticalStereo = fabs(P2.at<double>(1, 3)) > fabs(P2.at<double>(0, 3));
// COMPUTE AND DISPLAY RECTIFICATION
if (!showRectified)
return;
Mat rmap[2][2];
// IF BY CALIBRATED (CALIBRATE'S METHOD)
//with a calibrated rig there is no need to compute the rectification homographies
if (useCalibrated)
{
// we already computed everything
}
// OR ELSE HARTLEY'S METHOD
else
// use intrinsic parameters of each camera, but
// compute the rectification transformation directly
// from the fundamental matrix
{
vector<Point2f> allimgpt[2];
for (k = 0; k < 2; k++)
{
for (i = 0; i < nimages; i++)
std::copy(imagePoints[k][i].begin(), imagePoints[k][i].end(), back_inserter(allimgpt[k]));
}
F = findFundamentalMat(Mat(allimgpt[0]), Mat(allimgpt[1]), FM_8POINT, 0, 0);
Mat H1, H2;
stereoRectifyUncalibrated(Mat(allimgpt[0]), Mat(allimgpt[1]), F, imageSize, H1, H2, 3);
R1 = cameraMatrix[0].inv() * H1 * cameraMatrix[0];
R2 = cameraMatrix[1].inv() * H2 * cameraMatrix[1];
P1 = cameraMatrix[0];
P2 = cameraMatrix[1];
}
//Precompute maps for cv::remap()
//rectify the images according to the projection matrices of the two cameras
initUndistortRectifyMap(cameraMatrix[0], distCoeffs[0], R1, P1, imageSize, CV_16SC2, rmap[0][0], rmap[0][1]);
initUndistortRectifyMap(cameraMatrix[1], distCoeffs[1], R2, P2, imageSize, CV_16SC2, rmap[1][0], rmap[1][1]);
cv::FileStorage fs("map.yml", CV_STORAGE_WRITE);
if (fs.isOpened())
{
fs << "mapl0" << rmap[0][0] << "mapl1" << rmap[0][1] << "mapr0" << rmap[1][0] << "mapr1" << rmap[1][1];
fs.release();
std::cout << "map save done\n";
}
else
{
std::cout << "save map fail\n";
}
Mat canvas;
double sf;
int w, h;
if (!isVerticalStereo) //build a canvas holding the two views side by side
{
sf = 600. / MAX(imageSize.width, imageSize.height);
w = cvRound(imageSize.width * sf);
h = cvRound(imageSize.height * sf);
canvas.create(h, w * 2, CV_8UC3);
}
else
{
sf = 300. / MAX(imageSize.width, imageSize.height);
w = cvRound(imageSize.width * sf);
h = cvRound(imageSize.height * sf);
canvas.create(h * 2, w, CV_8UC3);
}
for (i = 0; i < nimages; i++)
{
for (k = 0; k < 2; k++)
{
Mat img , rimg, cimg;
if (k == 0)
{
img = left[i].clone();
}
else
{
img = right[i].clone();
}
remap(img, rimg, rmap[k][0], rmap[k][1], CV_INTER_LINEAR);
cvtColor(rimg, cimg, COLOR_GRAY2BGR);
Mat canvasPart = !isVerticalStereo ? canvas(Rect(w * k, 0, w, h)) : canvas(Rect(0, h * k, w, h));
resize(cimg, canvasPart, canvasPart.size(), 0, 0, CV_INTER_AREA);
if (useCalibrated)
{
Rect vroi(cvRound(validRoi[k].x * sf), cvRound(validRoi[k].y * sf),
cvRound(validRoi[k].width * sf), cvRound(validRoi[k].height * sf));
rectangle(canvasPart, vroi, Scalar(0, 0, 255), 3, 8);
}
}
if (!isVerticalStereo)
for (j = 0; j < canvas.rows; j += 16) //draw horizontal lines
line(canvas, Point(0, j), Point(canvas.cols, j), Scalar(0, 255, 0), 1, 8);
else
for (j = 0; j < canvas.cols; j += 16) //draw vertical lines
line(canvas, Point(j, 0), Point(j, canvas.rows), Scalar(0, 255, 0), 1, 8);
imshow("rectified", canvas);
char c = (char)waitKey();
if (c == 27 || c == 'q' || c == 'Q')
break;
}
}
/*
* rectify images using the saved calibration result
*/
cv::Mat intrinsic_l, coef_l, R1, P1, intrinsic_r, coef_r, R2, P2;
float fx;
cv::Mat mapl0, mapl1, mapr0, mapr1;
void param_init()
{
cv::FileStorage fs("./map.yml", CV_STORAGE_READ);
if (fs.isOpened())
{
fs["mapl0"] >> mapl0;
fs["mapl1"] >> mapl1;
fs["mapr0"] >> mapr0;
fs["mapr1"] >> mapr1;
fs.release();
std::cout << "map load success\n";
}
else
{
std::cout << "map load fail\n";
}
}
void img_rectify(cv::Mat& src, int flag)
{
cv::Mat map0, map1;
if (flag == 0)
{
map0 = mapl0.clone();
map1 = mapl1.clone();
}
else
{
map0 = mapr0.clone();
map1 = mapr1.clone();
}
cv::Mat new_img;
remap(src, new_img, map0, map1, CV_INTER_LINEAR);
src = new_img.clone();
}
void test()
{
std::vector<cv::Mat> left;
std::vector<cv::Mat> right;
for (int i = 0; i < 4; i++)
{
cv::Mat lef = cv::imread("C:\\Users\\user\\Desktop\\双目标定\\tes\\data\\left\\" + std::to_string(i) + ".png",0);
cv::Mat rig = cv::imread("C:\\Users\\user\\Desktop\\双目标定\\tes\\data\\right\\" + std::to_string(i) + ".png",0);
img_rectify(lef, 0);
img_rectify(rig, 1);
left.push_back(lef);
right.push_back(rig);
}
int j = 0;
cv::Mat merge = cv::Mat(480, 640 * 2, CV_8UC1);
cv::Mat l = merge(cv::Rect(0, 0, 640, 480));
cv::Mat r = merge(cv::Rect(640, 0, 640, 480));
left[0].copyTo(l);
right[0].copyTo(r);
cvtColor(merge, merge, COLOR_GRAY2BGR);
//merge.convertTo(merge, CV_8UC3);
for (int j = 0; j < merge.rows; j += 16) //draw horizontal lines
line(merge, Point(0, j), Point(merge.cols, j), Scalar(0, 255, 0), 1, 8);
// line(canvas, Point(j, 0), Point(j, canvas.rows), Scalar(0, 255, 0), 1, 8);
imshow("rectified", merge);
}
int main()
{
param_init();
test();
return 0;
}
int main1(int argc, char** argv)
{
//newImg2();
Size boardSize;
string imagelistfn;
bool showRectified = true;
boardSize.height = 8;
boardSize.width = 6;
vector<string> imagelist;
StereoCalib(imagelist, boardSize, true, showRectified);
return 0;
}