Camera Models (from the OpenCV documentation)

Original post · 2015-07-09 11:02:31

1> pinhole model:

The functions in this section use a so-called pinhole camera model. In this model, a scene view is formed by projecting 3D points into the image plane using a perspective transformation.

s  \; m' = A [R|t] M'

or

s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}

where:

  • (X, Y, Z) are the coordinates of a 3D point in the world coordinate space
  • (u, v) are the coordinates of the projection point in pixels
  • A is a camera matrix, or a matrix of intrinsic parameters
  • (c_x, c_y) is a principal point that is usually at the image center
  • f_x, f_y are the focal lengths expressed in pixel units.

Thus, if an image from the camera is scaled by a factor, all of these parameters should be scaled (multiplied/divided, respectively) by the same factor. The matrix of intrinsic parameters does not depend on the scene viewed. So, once estimated, it can be re-used as long as the focal length is fixed (in the case of a zoom lens). The joint rotation-translation matrix [R|t] is called a matrix of extrinsic parameters. It is used to describe the camera motion around a static scene, or vice versa, rigid motion of an object in front of a still camera. That is, [R|t] translates coordinates of a point (X, Y, Z) to a coordinate system fixed with respect to the camera. The transformation above is equivalent to the following (when z \ne 0):

\begin{array}{l}
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + t \\
x' = x/z \\
y' = y/z \\
u = f_x \, x' + c_x \\
v = f_y \, y' + c_y
\end{array}
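
As a quick illustration, the chain above can be written directly in MATLAB. This is a minimal sketch; the pose, intrinsics, and 3D point are made-up values, not taken from this article:

% minimal pinhole projection sketch (no distortion, illustrative values)
R  = eye(3);             % rotation, world -> camera
t  = [0; 0; 5];          % translation
fx = 600; fy = 600;      % focal lengths in pixels
cx = 320; cy = 240;      % principal point
M  = [0.1; -0.2; 0.3];   % a 3D point in world coordinates

Xc = R * M + t;          % camera-frame coordinates (x, y, z)
xp = Xc(1) / Xc(3);      % x' = x/z
yp = Xc(2) / Xc(3);      % y' = y/z
u  = fx * xp + cx;       % pixel coordinates
v  = fy * yp + cy;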

Real lenses usually have some distortion, mostly radial distortion and slight tangential distortion. So, the above model is extended as:

\begin{array}{l}
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + t \\
x' = x/z \\
y' = y/z \\
x'' = x' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + 2 p_1 x' y' + p_2 (r^2 + 2 x'^2) \\
y'' = y' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' \\
\text{where} \quad r^2 = x'^2 + y'^2 \\
u = f_x \, x'' + c_x \\
v = f_y \, y'' + c_y
\end{array}

k_1, k_2, k_3, k_4, k_5, and k_6 are radial distortion coefficients. p_1 and p_2 are tangential distortion coefficients. Higher-order coefficients are not considered in OpenCV. In the functions below the coefficients are passed or returned as the

(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]])

vector. That is, if the vector contains four elements, it means that k_3 = 0. The distortion coefficients do not depend on the scene viewed. Thus, they also belong to the intrinsic camera parameters. And they remain the same regardless of the captured image resolution. If, for example, a camera has been calibrated on images of 320x240 resolution, absolutely the same distortion coefficients can be used for 640x480 images from the same camera while f_x, f_y, c_x, and c_y need to be scaled appropriately.
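
As a concrete illustration, here is a short MATLAB sketch that applies the rational radial plus tangential model above to one normalized point; the coefficient values are made up for the example, not from a real calibration:

% radial (rational) + tangential distortion of one normalized point
k  = [-0.1, 0.01, 0, 0, 0, 0];   % k_1..k_6 (k_4..k_6 = 0 -> plain polynomial)
p  = [0.001, -0.0005];           % p_1, p_2
xp = 0.2;  yp = -0.1;            % normalized coordinates x' = x/z, y' = y/z

r2 = xp^2 + yp^2;
radial = (1 + k(1)*r2 + k(2)*r2^2 + k(3)*r2^3) / ...
         (1 + k(4)*r2 + k(5)*r2^2 + k(6)*r2^3);
xpp = xp*radial + 2*p(1)*xp*yp + p(2)*(r2 + 2*xp^2);   % x''
ypp = yp*radial + p(1)*(r2 + 2*yp^2) + 2*p(2)*xp*yp;   % y''
% then u = f_x*xpp + c_x and v = f_y*ypp + c_y as before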

The functions below use the above model to do the following:

  • Project 3D points to the image plane given intrinsic and extrinsic parameters.
  • Compute extrinsic parameters given intrinsic parameters, a few 3D points, and their projections.
  • Estimate intrinsic and extrinsic camera parameters from several views of a known calibration pattern (every view is described by several 3D-2D point correspondences).
  • Estimate the relative position and orientation of the stereo camera “heads” and compute the rectification transformation that makes the camera optical axes parallel.


2> fisheye model:

Definitions: Let P be a point in 3D of coordinates X in the world reference frame (stored in the matrix X). The coordinate vector of P in the camera reference frame is:


Xc = R X + T

where R is the rotation matrix corresponding to the rotation vector om: R = rodrigues(om). Call x, y and z the 3 coordinates of Xc:


x = Xc_1 \\ y = Xc_2 \\ z = Xc_3

The pinhole projection coordinates of P are [a; b] where


a = x / z \quad \text{and} \quad b = y / z \\ r^2 = a^2 + b^2 \\ \theta = \mathrm{atan}(r)

Fisheye distortion:


\theta_d = \theta (1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8)

The distorted point coordinates are [x'; y'] where

(The OpenCV documentation is somewhat unclear here; I verified and corrected it as follows, with MATLAB verification code attached below.)

x' = (\theta_d / r) \, a \\
y' = (\theta_d / r) \, b

Finally, conversion into pixel coordinates: the final pixel coordinate vector [u; v] is given by:


u = f_x \, x' + c_x \\
v = f_y \, y' + c_y
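
Putting the whole chain together, here is the same computation wrapped as a small MATLAB helper. The function name fisheye_project is my own (save it as fisheye_project.m); it is a sketch that mirrors the per-point loop in the verification script below, and it assumes r > 0 (a point off the optical axis):

function uv = fisheye_project(Xw, R, t, A, K)
% Project one 3D world point Xw (3x1) through the fisheye model.
% R, t: extrinsics; A: intrinsic matrix; K: coefficients k_1..k_4.
    Xc = R * Xw + t;                     % world -> camera frame
    a = Xc(1) / Xc(3);                   % pinhole normalization
    b = Xc(2) / Xc(3);
    r = sqrt(a^2 + b^2);                 % assumes r > 0
    theta = atan(r);
    theta_d = theta * (1 + K(1)*theta^2 + K(2)*theta^4 + ...
                           K(3)*theta^6 + K(4)*theta^8);
    uv = [A(1,1) * (theta_d / r) * a + A(1,3);
          A(2,2) * (theta_d / r) * b + A(2,3)];
end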


3> other material

http://wenku.baidu.com/link?url=waSqjF9HJ4BGMGoeL4bLIntrZ24B48jXczOoYz7PBYkoqn8jxZ8HGL8STzvFVdnl8WWEgOg8tcVFoZ4jO_Izo907_DbvLktrpbyd4SQmBMO


http://wenku.baidu.com/view/580fa337ee06eff9aef807cc.html



OpenCV fisheye model verification (MATLAB):

clear
close all

% extrinsic rotation matrix R
R = [0.8988261790903926, 0.4188302467301371, 0.129200325873188;
  -0.4187798435070649, 0.9076282961426588, -0.02888457570005586;
  -0.1293636056005076, -0.02814427943910706, 0.9911977386379015];

% extrinsic translation vector t
t = [-0.402431, 0.0388337, 0.671309]';

% intrinsic camera matrix A
A = [594.1656343384788, 0, 643.4646451030211;
      0, 593.6065468136707, 371.2638324096167;
      0, 0, 1];

% fisheye distortion coefficients k_1..k_4
K = [-0.04192856403922697;
     -0.002158383400516276;
      0.001463386066034605;
     -0.005204957317263106];

% detected image points (u, v) of 40 pattern corners, interleaved
img_data = [ 327.005707, 401.706879, 382.578613, 368.528595, 447.612915, 331.631134, 521.767090, 291.437500, ...
             603.254089, 249.857986, 688.284241, 209.167130, 772.313904, 171.579849, 851.017456, 138.804169, ...
             921.380676, 111.622528, 982.589966, 89.692650, 355.885986, 474.680847, 413.861481, 445.651489, ...
             481.566345, 412.371521, 558.414246, 374.775757, 642.492310, 334.675598, 729.559509, 293.751709, ...
             814.828247, 254.507523, 893.690674, 218.945618, 963.500610, 187.922989, 1023.213501, 161.938385, ...
             389.184540, 547.380920, 449.031677, 523.005493, 518.651978, 494.009918, 597.481384, 460.122589, ...
             682.705994, 422.229462, 770.243408, 381.848572, 855.282410, 341.607635, 933.055847, 303.314911, ...
             1001.264832, 268.784271, 1059.156372, 238.558731, 424.892181, 617.114441, 486.681976, 597.320923, ...
             557.592102, 572.413391, 636.631287, 542.460144, 721.497192, 507.358459, 807.830017, 468.430420, ...
             891.032349, 427.681854, 966.609009, 387.922577, 1032.822144, 350.344391, 1088.560547, 316.416199 ]; 

% corresponding planar object points (X, Y), interleaved; Z = 0
obj_data = [ 0.000000, 0.000000,0.100000, 0.000000,0.200000, 0.000000,0.300000, 0.000000, ... 
             0.400000, 0.000000,0.500000, 0.000000,0.600000, 0.000000,0.700000, 0.000000, ...
             0.800000, 0.000000,0.900000, 0.000000,0.000000, 0.100000,0.100000, 0.100000, ...
             0.200000, 0.100000,0.300000, 0.100000,0.400000, 0.100000,0.500000, 0.100000, ...
             0.600000, 0.100000,0.700000, 0.100000,0.800000, 0.100000,0.900000, 0.100000, ...
             0.000000, 0.200000,0.100000, 0.200000,0.200000, 0.200000,0.300000, 0.200000, ...
             0.400000, 0.200000,0.500000, 0.200000,0.600000, 0.200000,0.700000, 0.200000, ...
             0.800000, 0.200000,0.900000, 0.200000,0.000000, 0.300000,0.100000, 0.300000, ...
             0.200000, 0.300000,0.300000, 0.300000,0.400000, 0.300000,0.500000, 0.300000, ...
             0.600000, 0.300000,0.700000, 0.300000,0.800000, 0.300000,0.900000, 0.300000];
         
%% import data

img_point = zeros(2, 40);
obj_point = zeros(3, 40);

for n = 1: 40    % unpack interleaved (x, y) pairs into point matrices
   img_point(1, n) = img_data(2*n - 1);
   img_point(2, n) = img_data(2*n);
   obj_point(1, n) = obj_data(2*n - 1);
   obj_point(2, n) = obj_data(2*n);
   obj_point(3, n) = 0.0;
end

figure(1); hold on;
plot3(obj_point(1,:), obj_point(2,:), obj_point(3,:), 'r*');
grid on;

figure(2); hold on;
plot(img_point(1, :), img_point(2, :), 'r*');
axis equal;

% transform object points into the camera frame: Xc = R*X + t
for n = 1: 40
    obj_point(:, n) =  R * obj_point(:, n) + t;
end

figure(1); hold on;
plot3(obj_point(1, :), obj_point(2, :), obj_point(3, :), 'b*');
axis equal;


%% with no distortion
% project with A; the third row of A is [0 0 1], so temp(3,:) holds z
temp = A*obj_point;

temp(1, :) = temp(1,:)./ temp(3,:);   % perspective division
temp(2, :) = temp(2,:)./ temp(3,:); 
temp(3, :) = temp(3,:)./ temp(3,:); 

figure(2)
hold on;
plot(temp(1, :), temp(2, :), 'b*');
axis equal;


%% with distortion
for n = 1:40
    a = obj_point(1, n) /obj_point(3, n);   % pinhole normalization a = x/z
    b = obj_point(2, n) /obj_point(3, n);   % b = y/z
    
    r = sqrt(a^2 + b^2);
    
    theta = atan(r);
    
    % fisheye distortion of the incidence angle
    theta_d = theta* (1 + K(1) * theta^2 + K(2) * theta^4 + K(3)*theta^6  + K(4)*theta^8);
    
    % conversion into pixel coordinates
    temp(1,n) = A(1,1)*(theta_d / r) * a + A(1,3); 
    temp(2,n) = A(2,2)*(theta_d / r) * b + A(2,3);
end

figure(2)
hold on;
plot(temp(1, :), temp(2, :), 'g*');
axis equal;

[Figure 1: 3D points before (red) and after (blue) the rotation and translation]


[Figure 2] Blue: projection without fisheye distortion; green: projection with fisheye distortion; red: feature points from the original image. (The more closely the red and green points coincide, the more accurate the model.)

[Figure: zoomed-in comparison of the errors]

