Lens Distortion Correction

by Shehrzad Qureshi
Senior Engineer, BDTI
May 14, 2011

A typical processing pipeline for computer vision is given in Figure 1 below:

[Figure 1: A typical computer vision processing pipeline]

The focus of this article is on the lens correction block. In less-than-ideal optical systems, such as those found in cheaper smartphones and tablets, incoming frames tend to be distorted toward their edges. The most common types of lens distortion are barrel distortion, pincushion distortion, or some combination of the two[1]. Figure 2 illustrates the types of distortion encountered in vision systems, and in this article we discuss strategies and implementations for correcting this type of lens distortion.

[Figure 2: (a) undistorted checkerboard test pattern, (b) and (c) the same pattern under barrel and pincushion distortion]

These types of lens aberrations can cause problems for vision algorithms because machine vision generally relies on straight edges (for example, lane finding in automotive applications, or various inspection systems). The general effect of both barrel and pincushion distortion is to project what should be straight lines as curves. Correcting for these distortions is computationally expensive because it is a per-pixel operation. However, the correction process is also highly regular and “embarrassingly data parallel,” which makes it amenable to FPGA or GPU acceleration. The FPGA solution can be particularly attractive as there are now cameras on the market with FPGAs in the camera itself that can be programmed to perform this type of processing[2].

Calibration Procedure

The rectilinear correction procedure can be summarized as warping a distorted image (see Figures 2b and 2c) to remove the lens distortion, thus taking the frame back to its undistorted projection (Figure 2a). In other words, we must first estimate the lens distortion function, and then invert it so as to compensate the incoming image frame. The compensated image will be referred to as the undistorted image.

Both types of lens aberrations discussed so far are radial distortions that increase in magnitude as we move farther away from the image center. In order to correct for this distortion at runtime, we first must estimate the coefficients of a parameterized form of the distortion function during a calibration procedure that is specific to a given optics train. The detailed mathematics behind this parameterization is beyond the scope of this article, and is covered thoroughly elsewhere[3]. Suffice it to say that if we have:

  • (xd, yd) = original distorted point coordinates
  • (xc, yc) = image center
  • (xu, yu) = undistorted (corrected) point coordinates

then the goal is to measure the radial distortion model

  (xu, yu) = (xc, yc) + L(r)·((xd, yd) − (xc, yc)),   with   L(r) = k0 + k1·r + k2·r² + …

where r is the distance of the distorted point (xd, yd) from the image center and the ki are the radial distortion coefficients.
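
For illustration, a minimal sketch of evaluating this model for a single pixel might look like the following (undistort_point() and its two-term polynomial are illustrative placeholders; the coefficient values come from the calibration step described below):

#include <math.h>

/* Illustrative helper: apply the radial model
 *   (xu, yu) = (xc, yc) + L(r)*((xd, yd) - (xc, yc)),  L(r) = k0 + k1*r + k2*r^2
 * where the coefficients k[] come from calibration. */
static void undistort_point(double xd, double yd,     /* distorted coordinates  */
                            double xc, double yc,     /* image center           */
                            const double k[3],        /* radial coefficients    */
                            double *xu, double *yu)   /* corrected coordinates  */
{
    double dx = xd - xc, dy = yd - yc;
    double r  = sqrt(dx*dx + dy*dy);
    double L  = k[0] + k[1]*r + k[2]*r*r;
    *xu = xc + L*dx;
    *yu = yc + L*dy;
}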


The purpose of the calibration procedure is to estimate these radial distortion coefficients, which can be done using a gradient descent optimizer[3]. Typically one images a test pattern with collinear features that are easily extracted automatically with sub-pixel accuracy. The checkerboard in Figure 2a is one such test pattern that the author has used several times in the past. Figure 3 summarizes the calibration process:

[Figure 3: The calibration process: image the test pattern, extract straight-line features (blue lines), and fit the distortion model to obtain the undistort coefficients]

The basic idea is to image a distorted test pattern, extract the coordinates of features that are known to lie on straight lines, and feed these coordinates into an optimizer such as the one described in[3], which emits the lens undistortion warp coefficients. These coefficients are used at run-time to correct for the measured lens distortion.

In the provided example we can use a Harris corner detector to automatically find the corners of the checkerboard pattern. The OpenCV library has a robust corner detection function that can be used for this purpose[4]. The following snippet of OpenCV code can be used to extract the interior corners of the calibration image of Figure 2a:

const int nSquaresAcross=9;
const int nSquaresDown=7;
const int nCorners=(nSquaresAcross-1)*(nSquaresDown-1);
CvSize szcorners = cvSize(nSquaresAcross-1,nSquaresDown-1);
std::vector<CvPoint2D32f> vCornerList(nCorners);
/* find corners to pixel accuracy */
int cornerCount = 0;
const int N = cvFindChessboardCorners(pImg /* 8-bit IplImage* */, szcorners,
                                      &vCornerList[0], &cornerCount,
                                      CV_CALIB_CB_ADAPTIVE_THRESH);
/* should check that N != 0 and cornerCount == nCorners */
/* sub-pixel refinement (pImgGray: single-channel grayscale copy of pImg) */
cvFindCornerSubPix(pImgGray, &vCornerList[0], cornerCount, cvSize(5,5),
                   cvSize(-1,-1),
                   cvTermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 30, 0.1));

Segments corresponding to what should be straight lines are then constructed from the point coordinates stored in the STL vCornerList container, as sketched below. For best results, multiple lines should be used, and both vertical and horizontal line segments are necessary for the optimization procedure to converge to a viable solution (see the blue lines in Figure 3).
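
The grouping itself is a short loop. The following sketch assumes the row-major corner ordering that cvFindChessboardCorners normally returns, and collects each row and each column of interior corners into its own candidate line segment:

/* Group the detected corners into horizontal and vertical line segments.
 * Assumes vCornerList holds the corners row by row, nCols per row. */
const int nCols = nSquaresAcross - 1;   /* corners per row    */
const int nRows = nSquaresDown  - 1;    /* corners per column */
std::vector< std::vector<CvPoint2D32f> > vLines;

/* horizontal lines: one segment per row of corners */
for (int r = 0; r < nRows; ++r)
    vLines.push_back(std::vector<CvPoint2D32f>(vCornerList.begin() + r*nCols,
                                               vCornerList.begin() + (r+1)*nCols));

/* vertical lines: one segment per column of corners */
for (int c = 0; c < nCols; ++c) {
    std::vector<CvPoint2D32f> vCol;
    for (int r = 0; r < nRows; ++r)
        vCol.push_back(vCornerList[r*nCols + c]);
    vLines.push_back(vCol);
}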

Finally, we are ready to determine the radial distortion coefficients. There are numerous camera calibration packages (including one in OpenCV), but a particularly good open-source ANSI C library is available online[5]. Essentially, the line segment coordinates are fed into an optimizer which determines the undistort coefficients by minimizing the error between the radial distortion model and the training data. These coefficients can then be stored in a lookup table for run-time image correction.
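
One common way to formulate this error (a sketch; the algebraic method of[3] minimizes a closely related measure of line straightness) is to fit a straight line a·x + b·y + c = 0, with a² + b² = 1, to the corrected points of each segment and minimize the total squared point-to-line distance over the candidate coefficients k:

  E(k) = Σ over segments  Σ over corners in that segment  ( a·xu + b·yu + c )²

where (xu, yu) is the corrected position of a corner under the candidate coefficients and (a, b, c) is the line best fitting that segment's corrected corners. A perfectly corrected image drives every segment's residual to zero, since its corners then lie exactly on straight lines.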

Lens Distortion Correction (Warping)

The referenced calibration library[5] also includes a very informative online demo. The demo illustrates the procedure briefly described here. Application of the radial distortion coefficients to correct for the lens aberrations basically boils down to an image warp operation. That is, for each pixel in the (undistorted) frame, we compute the distance from the image center and evaluate a polynomial that gives us the pixel coordinates from which to fill in the corrected pixel intensity. Because the polynomial evaluation will more than likely fall in between integer pixel coordinates, some form of interpolation must be used. The simplest and cheapest interpolant is the so-called “nearest neighbor,” which, as its name implies, simply picks the nearest pixel, but this technique results in poor image quality. At a bare minimum bilinear interpolation should be employed, and oftentimes higher-order bicubic interpolants are called for.
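
As a minimal sketch of this inverse warp (grayscale only; dst_to_src() is a hypothetical helper standing in for the calibrated destination-to-source polynomial, which can be fit directly at calibration time or obtained by numerically inverting the model above):

#include <math.h>

/* Hypothetical dst_to_src(): evaluates the calibrated destination-to-source
 * radial polynomial, returning the (sub-pixel) source coordinates. */
void dst_to_src(double x, double y, double *sx, double *sy);

/* Inverse warp of an 8-bit, row-major grayscale image with bilinear sampling. */
void warp_bilinear(const unsigned char *src, unsigned char *dst,
                   int width, int height)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            double sx, sy;
            dst_to_src((double)x, (double)y, &sx, &sy);

            int x0 = (int)floor(sx), y0 = (int)floor(sy);
            if (x0 < 0 || y0 < 0 || x0 >= width - 1 || y0 >= height - 1) {
                dst[y*width + x] = 0;          /* default fill value */
                continue;
            }
            double fx = sx - x0, fy = sy - y0;
            /* bilinear blend of the four neighboring source pixels */
            double p = (1-fx)*(1-fy)*src[ y0   *width + x0    ]
                     +    fx *(1-fy)*src[ y0   *width + x0 + 1]
                     + (1-fx)*   fy *src[(y0+1)*width + x0    ]
                     +    fx *   fy *src[(y0+1)*width + x0 + 1];
            dst[y*width + x] = (unsigned char)(p + 0.5);
        }
    }
}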

The amount of computation per frame can become quite large, particularly if we are dealing with color frames. The saving grace of this operation is its inherent parallelism (each pixel is completely independent of its neighbors and hence can be corrected in parallel). This parallelism and the highly regular nature of the computations lend themselves readily to accelerators, either via FPGAs[6,7] or GPUs[8,9]. The source code provided in[5] includes a lens correction function with full ANSI C source.

A faster software implementation than [5] can be realized using the OpenCV cvRemap() function[4]. The input arguments to this function are the source and destination images, the destination-to-source pixel mapping (expressed as two floating-point arrays), interpolation options, and a default fill value (in case there are any holes in the rectilinear-corrected image). At calibration time, we evaluate the distortion model polynomials just once and then store the pixel mapping to disk or memory. At run time the software simply calls cvRemap(), which is optimized and can accommodate color frames, to correct the lens distortion.
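
A sketch of this approach follows (assumptions: pSrc and pDst are same-sized 8-bit IplImage frames, width and height hold the frame dimensions, and dst_to_src() is the same hypothetical destination-to-source mapping sketched above). The map arrays are built once at calibration time and reused for every frame:

/* Calibration time: build the destination-to-source maps once. */
CvMat *pMapX = cvCreateMat(height, width, CV_32FC1);
CvMat *pMapY = cvCreateMat(height, width, CV_32FC1);
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        double sx, sy;
        dst_to_src((double)x, (double)y, &sx, &sy);
        CV_MAT_ELEM(*pMapX, float, y, x) = (float)sx;
        CV_MAT_ELEM(*pMapY, float, y, x) = (float)sy;
    }
}

/* Run time: correct each incoming frame with a single call. */
cvRemap(pSrc, pDst, pMapX, pMapY,
        CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0));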


[1] Wikipedia page:  http://en.wikipedia.org/wiki/Distortion_(optics)

[2] http://www.hunteng.co.uk/info/fpgaimaging.htm

[3] L. Alvarez, L. Gomez, R. Sendra. An Algebraic Approach to Lens Distortion by Line Rectification, Journal of Mathematical Imaging and Vision, Vol. 39 (1), July 2009, pp. 36-50.

[4] Bradski, Gary and Kaehler, Adrian. Learning OpenCV (Computer Vision with the OpenCV Library), O’Reilly, 2008.

[5] http://www.ipol.im/pub/algo/ags_algebraic_lens_distortion_estimation/

[6] Daloukas, K.; Antonopoulos, C.D.; Bellas, N.; Chai, S.M. "Fisheye lens distortion correction on multicore and hardware accelerator platforms," Parallel & Distributed Processing (IPDPS), 2010 IEEE International Symposium on, pp. 1-10, 19-23 April 2010.

[7] J. Jiang, S. Schmidt, W. Luk and D. Rueckert, "Parameterizing reconfigurable designs for image warping," Proc. SPIE, vol. 4867, p. 86, 2002.

[8] http://visionexperts.blogspot.com/2010/07/image-warping-using-texture-fetches.html

[9] Rosner, J.; Fassold, H.; Bailer, W.; Schallauer, P. "Fast GPU-based Image Warping and Inpainting for Frame Interpolation," Proceedings of the Computer Graphics, Computer Vision and Mathematics (GraVisMa) Workshop, Plzen, CZ, 2010.


