Matching Calibrated Cameras with OpenGL

While recently building a dataset, I thought I could get it done simply with Unity, but it turned out that the projection matrix Unity computes differs from the one used in calibration, and my experiment results were stuck for a long time. After getting past the firewall I found this blog post: http://jamesgregson.blogspot.com/2011/11/matching-calibrated-cameras-with-opengl.html — bookmarking it here in case it gets blocked later.

When working with calibrated cameras it is often useful to be able to display things on screen for debugging purposes. However, the camera model used by OpenGL is quite different from the calibration parameters produced by, for example, OpenCV. The linear intrinsic parameters that OpenCV provides are the following:

$$
K = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix}
$$

where (from http://en.wikipedia.org/wiki/Camera_resectioning) $\gamma$ is the skew between the x and y axes, $(u_0, v_0)$ is the image principal point, and $\alpha = f \, m_x$, $\beta = f \, m_y$, with $f$ being the focal length and $m_x$, $m_y$ being scale factors relating pixels to distance. Multiplying a camera-space point $(x, y, z)^T$ by this matrix and dividing by the resulting z-coordinate then gives the point projected into the image.
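As a quick illustration (my own sketch, not part of the original post; the intrinsic values below are made up for the example), projecting a camera-space point with this matrix using Eigen looks like:

    #include <Eigen/Dense>
    #include <iostream>

    int main(){
        // example intrinsic values (assumed for illustration only)
        double alpha = 800.0, beta = 800.0, skew = 0.0;
        double u0 = 320.0, v0 = 240.0;

        Eigen::Matrix3d K;
        K << alpha, skew, u0,
             0.0,   beta, v0,
             0.0,   0.0,  1.0;

        // a point in camera coordinates
        Eigen::Vector3d p_cam( 0.1, -0.05, 2.0 );

        // project and divide by the resulting z-coordinate to get pixel coordinates
        Eigen::Vector3d p_img = K*p_cam;
        p_img /= p_img.z();

        std::cout << "pixel: (" << p_img.x() << ", " << p_img.y() << ")" << std::endl;
        return 0;
    }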
The OpenGL parameters are quite different. Generally the projection is set using the glFrustum command, which takes the left, right, top, bottom, near and far clip plane locations as parameters and maps these into "normalized device coordinates", which range over [-1, 1]. The normalized device coordinates are then transformed by the current viewport, which maps them onto the final image plane. Because of these differences, obtaining an OpenGL projection matrix that matches a given set of intrinsic parameters is somewhat complicated.
Roughly following this post (update: a much-improved version from Kyle, the post's author, is available here), the following code will produce an OpenGL projection matrix and viewport. I have tested this code against the OS X OpenGL implementation (using gluProject) to verify that, for randomly generated intrinsic parameters, the corresponding OpenGL frustum and viewport reproduce the x and y coordinates of the projected point. The code works by multiplying a perspective projection matrix by an orthographic projection to map into normalized device coordinates, and setting the appropriate box for the glViewport command.

#include <Eigen/Dense>

void build_opengl_projection_for_intrinsics( Eigen::Matrix4d &frustum, int *viewport,
        double alpha, double beta, double skew, double u0, double v0,
        int img_width, int img_height, double near_clip, double far_clip ){

    // These parameters define the final viewport that is rendered into by
    // the camera.
    double L = 0;
    double R = img_width;
    double B = 0;
    double T = img_height;

    // near and far clipping planes, these only matter for the mapping from
    // world-space z-coordinate into the depth coordinate for OpenGL
    double N = near_clip;
    double F = far_clip;

    // set the viewport parameters
    viewport[0] = L;
    viewport[1] = B;
    viewport[2] = R-L;
    viewport[3] = T-B;

    // construct an orthographic matrix which maps from projected
    // coordinates to normalized device coordinates in the range
    // [-1, 1]. OpenGL then maps coordinates in NDC to the current
    // viewport
    Eigen::Matrix4d ortho = Eigen::Matrix4d::Zero();
    ortho(0,0) =  2.0/(R-L); ortho(0,3) = -(R+L)/(R-L);
    ortho(1,1) =  2.0/(T-B); ortho(1,3) = -(T+B)/(T-B);
    ortho(2,2) = -2.0/(F-N); ortho(2,3) = -(F+N)/(F-N);
    ortho(3,3) =  1.0;

    // construct a projection matrix, this is identical to the
    // projection matrix computed for the intrinsics, except an
    // additional row is inserted to map the z-coordinate to
    // OpenGL.
    Eigen::Matrix4d tproj = Eigen::Matrix4d::Zero();
    tproj(0,0) = alpha;  tproj(0,1) = skew; tproj(0,2) = u0;
    tproj(1,1) = beta;   tproj(1,2) = v0;
    tproj(2,2) = -(N+F); tproj(2,3) = -N*F;
    tproj(3,2) = 1.0;

    // resulting OpenGL frustum is the product of the orthographic
    // mapping to normalized device coordinates and the augmented
    // camera intrinsic matrix
    frustum = ortho*tproj;
}
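A possible usage sketch (my addition, with hypothetical calibration values rather than anything from the post):

    Eigen::Matrix4d frustum;
    int viewport[4];

    // hypothetical intrinsics and image size, for illustration only
    double alpha = 800.0, beta = 800.0, skew = 0.0;
    double u0 = 320.0, v0 = 240.0;
    int img_width = 640, img_height = 480;

    build_opengl_projection_for_intrinsics( frustum, viewport,
        alpha, beta, skew, u0, v0, img_width, img_height, 0.1, 100.0 );

    // the returned viewport box goes straight to glViewport; loading the
    // frustum matrix into the projection stack is shown below
    glViewport( viewport[0], viewport[1], viewport[2], viewport[3] );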

It seems the author also uses the Eigen library. I once installed it to get Matlab-like matrix functionality in C++ but never got anywhere with it; it looks well worth studying.

The code uses the Eigen linear algebra library, which conveniently stores matrices in column-major order, so applying the resulting frustum matrix is as simple as:

glMatrixMode(GL_PROJECTION);

glLoadMatrixd( &frustum(0,0) );

Note that the matrix is column-major.
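If the matrix ever comes from a row-major container instead (an assumption of mine, not a case the post covers), the transpose-loading call available since OpenGL 1.3 avoids a manual transpose:

    // sketch: loading a hypothetical row-major copy of the frustum matrix
    Eigen::Matrix<double,4,4,Eigen::RowMajor> frustum_rm = frustum;
    glMatrixMode( GL_PROJECTION );
    glLoadTransposeMatrixd( frustum_rm.data() );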
