The Scaramuzza/ocam Camera Intrinsic Model Explained

This post introduces the model in detail, starting from Scaramuzza's papers.

Reference paper 1: A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion (2006; Mei's paper is from 2007)

Reference paper 2: A Toolbox for Easily Calibrating Omnidirectional Cameras (2006)

Reference paper 3: Omnidirectional Camera (2008)

1. Overview of the papers

All three papers have Davide Scaramuzza as first author, and all concern the Scaramuzza omnidirectional camera model: paper-1 introduces the imaging model itself, paper-2 details the calibration procedure for that model (computing the camera intrinsics/extrinsics), and paper-3 is a survey of omnidirectional camera intrinsic models.

In the papers, the author claims that the Scaramuzza intrinsic model can handle both kinds of omnidirectional cameras at once: (1) catadioptric (camera + mirror) and (2) purely dioptric (fisheye). The key reason is that the model uses a Taylor polynomial to relate the vector from a 3D point to the optical center O with that point's image coordinates (u, v), dispensing with fx / fy entirely (unlike the fisheye and Mei models).

Note: in the papers, Scaramuzza's model is formulated from 2D --> 3D, which is the reverse of the derivation used by other models.
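The polynomial back-projection idea can be sketched numerically. This is a minimal Python illustration with made-up coefficients, not the calibrated values of any real camera:

```python
import numpy as np

# Minimal sketch of the Scaramuzza back-projection idea, with hypothetical
# coefficients. An (already affine-corrected) pixel (u, v) maps to the ray
# (u, v, f(rho)), where rho = sqrt(u^2 + v^2) and f is a Taylor polynomial
# whose coefficients replace fx/fy entirely.
ss = [-200.0, 0.0, 1e-3]            # hypothetical a0, a1, a2 (a1 is ~0 in this model)
u, v = 100.0, 50.0                  # example image coordinates
rho = np.hypot(u, v)
z = sum(a * rho**i for i, a in enumerate(ss))
ray = np.array([u, v, z])
ray /= np.linalg.norm(ray)          # normalize onto the unit sphere
```

The sign of a0 determines whether the ray points toward negative or positive Z, matching the mirror/lens geometry.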


Motivated by this observation, we propose a calibration procedure which uses a generalized parametric model of the sensor, which is suitable to different kinds of omnidirectional vision systems, both catadioptric and dioptric.
The proposed method requires the camera to observe a planar pattern shown at a few different locations. Either the camera or the planar pattern can be freely moved. No a-priori knowledge of the motion is required, nor a specific model of the omnidirectional sensor.
The developed procedure is based on the assumption that the circular external boundary of the mirror or of the fish-eye lens (respectively in the catadioptric and dioptric case) is visible in the image.
Moreover, we assume that the image formation function, which manages the projection of a 3D real point onto a pixel of the image plane, can be described by a Taylor series expansion. The expansion coefficients, which constitute our calibration parameters, are estimated by solving a two-step least-squares linear minimization problem. Finally, the order of the series is determined by minimizing the reprojection error of the calibration points.

Note:

  • In paper-2, the visibility of the circular external boundary is no longer required; the author proposes an improvement that computes cx, cy, and A by other means.

2. Why a single viewpoint matters for omnidirectional cameras

  • Undistortion: pinhole-style perspective projection, so straight lines stay straight
  • Makes the machinery of epipolar geometry applicable

As noted in [1, 3, 11], it is highly desirable that such imaging systems have a single viewpoint [4, 6]. That is, there exists a single center of projection, so that every pixel in the sensed images measures the irradiance of the light passing through the same viewpoint in one particular direction. The reason a single viewpoint is so desirable is that it permits the generation of geometrically correct perspective images from the pictures captured by the omnidirectional camera. Moreover, it allows applying the known theory of epipolar geometry, which easily permits to perform ego-motion estimation and structure-from-motion from image correspondences only.

3. Prior omnidirectional camera calibration

These fall into two categories, each with drawbacks:

  • (1) Special requirements on the calibration target; each method only calibrates a specific kind of omnidirectional camera, so it is not general
  • (2) Self-calibration demands very accurate corner extraction, but omnidirectional images are heavily distorted, which makes corners hard to extract

Previous works on omnidirectional camera calibration can be classified into two different categories. The first one includes methods which exploit prior knowledge about the scene, such as the presence of calibration patterns [5, 7] or plumb lines [8]. The second group covers techniques that do not use this knowledge. This includes calibration methods from pure rotation [7] or planar motion of the camera [9], and self-calibration procedures, which are performed from point correspondences and epipolar constraint through minimizing an objective function [10, 11].

4. 2D --> 3D

  • A and t represent an affine transformation, with A = [c d ; e 1] and t = [cx, cy]; fx and fy no longer appear, having been absorbed into g
  • The 2D point and the 3D point share the same x and y coordinates; f only models a quadric surface that supplies the Z coordinate. The resulting (x, y, Z) represents only the ray OP (equivalently, the intersection of that ray with the quadric surface), not the true 3D coordinates, so (x, y, Z) is usually normalized onto the unit sphere

2D --> 3D code:

function M=cam2world(m, ocam_model)

n_points = size(m,2);
ss = ocam_model.ss;
xc = ocam_model.xc;
yc = ocam_model.yc;
width = ocam_model.width;
height = ocam_model.height;
c = ocam_model.c;
d = ocam_model.d;
e = ocam_model.e;

A = [c,d;
     e,1];
T = [xc;yc]*ones(1,n_points);

m = A^-1*(m-T);
M = getpoint(ss,m);
M = normc(M); %normalizes coordinates so that they have unit length (projection onto the unit sphere)

function w=getpoint(ss,m)

% Given an image point it returns the 3D coordinates of its correspondent optical
% ray

w = [m(1,:) ; m(2,:) ; polyval(ss(end:-1:1),sqrt(m(1,:).^2+m(2,:).^2)) ];
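For readers who prefer NumPy, the cam2world pipeline above can be sketched as follows. This is a direct transliteration of the MATLAB logic, with parameter names following the ocam_model fields, not an official implementation:

```python
import numpy as np

def cam2world(m, ss, xc, yc, c, d, e):
    """Back-project pixel coordinates m (2xN) to unit-norm viewing rays (3xN).

    ss holds the polynomial coefficients a0..aN, lowest order first,
    as in the MATLAB toolbox."""
    A = np.array([[c, d], [e, 1.0]])
    t = np.array([[xc], [yc]])
    mc = np.linalg.solve(A, m - t)          # undo the affine transformation
    rho = np.hypot(mc[0], mc[1])            # distance from the image center
    z = np.polyval(ss[::-1], rho)           # polyval wants highest order first
    M = np.vstack([mc, z[np.newaxis, :]])
    return M / np.linalg.norm(M, axis=0)    # project onto the unit sphere
```

For a pixel exactly at the image center (with c = 1, d = e = 0), rho is zero and the recovered ray is (0, 0, ±1), with the sign given by a0.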

5. 3D --> 2D

  • 2D --> 3D uses ρ as the variable of the polynomial approximation
  • 3D --> 2D uses θ, the angle between the incoming ray and the image plane. This resembles the fisheye model (where θ is the angle between the ray and the Z axis), except the polynomial order is higher (typically 12th order or more for a 190° fisheye)

Code:

function m = world2cam_fast(M, ocam_model)

ss = ocam_model.ss;
xc = ocam_model.xc;
yc = ocam_model.yc;
width = ocam_model.width;
height = ocam_model.height;
c = ocam_model.c;
d = ocam_model.d;
e = ocam_model.e;
pol = ocam_model.pol;

npoints = size(M, 2);
theta = zeros(1,npoints);

NORM = sqrt(M(1,:).^2 + M(2,:).^2);

ind0 = find( NORM == 0); %these are the scene points which are along the z-axis
NORM(ind0) = eps; %this will avoid division by ZERO later

theta = atan( M(3,:)./NORM );

rho = polyval( pol , theta ); %Distance in pixel of the reprojected points from the image center

x = M(1,:)./NORM.*rho ; % as in the fisheye model, the azimuth of the incident ray (its angle to the x-axis) equals the point's azimuth in pixel coordinates
y = M(2,:)./NORM.*rho ;

%Add center coordinates
m(1,:) = x*c + y*d + xc;
m(2,:) = x*e + y   + yc;
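As a cross-check, the forward projection can likewise be sketched in NumPy. Again this is a transliteration of the MATLAB above, where pol is the inverse polynomial ρ(θ) with coefficients highest order first:

```python
import numpy as np

def world2cam(M, pol, xc, yc, c, d, e):
    """Project 3D points M (3xN) to pixel coordinates (2xN)."""
    norm = np.hypot(M[0], M[1])
    norm = np.where(norm == 0, np.finfo(float).eps, norm)  # points on the z-axis
    theta = np.arctan(M[2] / norm)       # angle between the ray and the image plane
    rho = np.polyval(pol, theta)         # pixel distance from the image center
    x = M[0] / norm * rho
    y = M[1] / norm * rho
    return np.vstack([x * c + y * d + xc,  # apply the affine transformation
                      x * e + y + yc])
```

A point on the x-axis (θ = 0) lands at distance pol's constant term from the image center, along the image's x direction.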

6. Undistorting the omnidirectional image

  • When undistorting an omnidirectional image, a field of view (FOV) must be chosen

In this experiment, we test the accuracy of the estimated sensor model by rectifying all calibration images. Rectification determines a transformation of the original distorted image such that the new image appears as taken by a conventional perspective camera. In general, it is impossible to rectify the whole omnidirectional image because of a view field larger than 180°. However, it is possible to perform rectification on image regions which cover a smaller field of view. As a result, linearity is preserved in the rectified image. As you can see in Fig. 6, curved edges of a sample pattern in the original image (Fig. 6) appear straight after rectification (Fig. 7).
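The rectification described above can be sketched as a lookup-map construction: place a virtual pinhole camera with the chosen FOV at the single viewpoint, back-project every output pixel to a ray, and re-project it through the omnidirectional model to find which source pixel to sample. Here build_rectify_map is an illustrative name, and its world2cam argument stands in for any forward-projection function; neither is a toolbox function:

```python
import numpy as np

def build_rectify_map(world2cam, out_w, out_h, fov_deg):
    """Return (map_x, map_y): for each rectified pixel, the source pixel to sample.

    world2cam is any function mapping 3xN rays to 2xN distorted-image pixels."""
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)   # virtual pinhole focal length
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    rays = np.stack([(u - out_w / 2) / f,               # pinhole rays at z = 1
                     (v - out_h / 2) / f,
                     np.ones((out_h, out_w))])
    src = world2cam(rays.reshape(3, -1))
    return src[0].reshape(out_h, out_w), src[1].reshape(out_h, out_w)
```

A wider fov_deg covers more of the omnidirectional image but stretches heavily near the edges, which is exactly why the FOV must be chosen explicitly.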

7. Main content of paper-2

  • cx and cy are solved for iteratively
  • the final nonlinear optimization also solves for A along the way

The contributions of the present work are the following. First, we simplify the camera model by reducing the number of parameters. Next, we refine the calibration output by using a 4-step least squares linear minimization, followed by a nonlinear refinement, which is based on the maximum likelihood criterion. By doing so, we improve the accuracy of the previous technique and allow calibration to be done with a smaller number of images. Then, in contrast with our previous work, we no longer need the circular boundary of the mirror to be visible in the image. In that work, we used the appearance of the mirror boundary to compute both the position of the center of the omnidirectional image and the affine transformation. Conversely, here, these parameters are automatically computed using only the points the user selected.

In this paper, we evaluate the performance and the robustness of the calibration by applying the technique to simulated data. Then, we calibrate a real catadioptric camera, and show the accuracy of the result by projecting the color information of the image onto real 3D points extracted by a 3D sick laser range finder. Finally, we provide a Matlab Toolbox [14] which implements the procedure described here.

References

  1. OCamCalib: Omnidirectional Camera Calibration Toolbox for Matlab