C++ OpenCV camera calibration source code

Some of the debugging code that checks the input parameter types has been removed from the source listings below to keep them shorter and easier to read.

Chessboard corner detection (findChessboardCorners)

bool cv::findChessboardCorners(InputArray   image,
                               Size         patternSize,
                               OutputArray  corners,
                               int          flags = CALIB_CB_ADAPTIVE_THRESH+CALIB_CB_NORMALIZE_IMAGE 
);

Input parameters:
    image: the chessboard image, either color or grayscale;
    patternSize: the number of inner corners per chessboard row and column;
    flags: defaults to CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE
    CALIB_CB_ADAPTIVE_THRESH uses adaptive thresholding to convert the image to black and white, rather than a fixed threshold level (computed from the average image brightness).
    CALIB_CB_NORMALIZE_IMAGE normalizes the image gamma with equalizeHist before applying fixed or adaptive thresholding.
    CALIB_CB_FILTER_QUADS uses additional criteria (such as contour area, perimeter, square-like shape) to filter out false quads extracted at the contour-retrieval stage.
    CALIB_CB_FAST_CHECK runs a fast check on the image for chessboard corners and shortcuts the call if none are found. This can dramatically speed up the call in the degenerate case where no chessboard is observed.

Output parameters:
    corners: the detected corner points.
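
A minimal usage sketch (the image file name and board size below are made-up values; the cornerSubPix refinement step is optional but customary before calibration):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("board.jpg");         // hypothetical calibration image
    if (img.empty()) return -1;
    cv::Size patternSize(9, 6);                    // inner corners per row/column (assumed board)
    std::vector<cv::Point2f> corners;

    bool found = cv::findChessboardCorners(img, patternSize, corners,
                     cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE);
    if (found)
    {
        cv::Mat gray;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
        // refine the corners to sub-pixel accuracy before passing them to calibrateCamera
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
        cv::drawChessboardCorners(img, patternSize, corners, found);
    }
    return 0;
}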

Camera calibration (calibrateCamera)

double cv::calibrateCamera  (InputArrayOfArrays     objectPoints,
                             InputArrayOfArrays     imagePoints,
                             Size   imageSize,
                             InputOutputArray   cameraMatrix,
                             InputOutputArray   distCoeffs,
                             OutputArrayOfArrays    rvecs,
                             OutputArrayOfArrays    tvecs,
                             int    flags = 0,
                             TermCriteria   criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, DBL_EPSILON)
                            );

Input parameters:
    objectPoints: 3D coordinates of the calibration-board corners;
    imagePoints : the corresponding 2D coordinates in the images;
    imageSize   : the image size;
    cameraMatrix: the camera intrinsic matrix;
    distCoeffs  : the lens distortion coefficients;
    rvecs       : extrinsic rotation vectors (one per view);
    tvecs       : extrinsic translation vectors (one per view);
    flags       : calibration flags;
    criteria    : termination criteria for the calibration iteration.
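
Before diving into the implementation, here is a short end-to-end sketch of how the two functions above are usually combined (the board dimensions, square size and file pattern are assumed values):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    const cv::Size patternSize(9, 6);      // inner corners per row/column (assumed)
    const float squareSize = 25.f;         // chessboard square size in mm (assumed)

    // one set of 3D board coordinates in the board's own plane (z = 0), reused for every view
    std::vector<cv::Point3f> boardPts;
    for (int r = 0; r < patternSize.height; r++)
        for (int c = 0; c < patternSize.width; c++)
            boardPts.emplace_back(c * squareSize, r * squareSize, 0.f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    std::vector<cv::String> files;
    cv::glob("calib_*.jpg", files);        // hypothetical set of calibration images
    for (const auto& f : files)
    {
        cv::Mat gray = cv::imread(f, cv::IMREAD_GRAYSCALE);
        if (gray.empty()) continue;
        imageSize = gray.size();
        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(gray, patternSize, corners)) continue;
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
        imagePoints.push_back(corners);
        objectPoints.push_back(boardPts);
    }

    cv::Mat K, dist;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize, K, dist, rvecs, tvecs);
    std::cout << "RMS reprojection error: " << rms << "\nK = " << K << "\ndist = " << dist << std::endl;
    return 0;
}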
double cv::calibrateCamera( InputArrayOfArrays _objectPoints,
                           InputArrayOfArrays _imagePoints,
                           Size imageSize, InputOutputArray _cameraMatrix,
                           InputOutputArray _distCoeffs,
                           OutputArrayOfArrays _rvecs, 
                           OutputArrayOfArrays _tvecs, 
                           int flags, 
                           TermCriteria criteria )
{
    return calibrateCamera(_objectPoints, _imagePoints, imageSize, _cameraMatrix,
                           _distCoeffs, _rvecs, _tvecs, noArray(), noArray(), 
                           noArray(), flags, criteria);
}

double cv::calibrateCamera(InputArrayOfArrays _objectPoints,
                           InputArrayOfArrays _imagePoints,
                           Size imageSize, InputOutputArray _cameraMatrix,
                           InputOutputArray _distCoeffs,
                           OutputArrayOfArrays _rvecs, 
                           OutputArrayOfArrays _tvecs,
                           OutputArray stdDeviationsIntrinsics,
                           OutputArray stdDeviationsExtrinsics,
                           OutputArray _perViewErrors, int flags, 
                           TermCriteria criteria )
{
    CV_INSTRUMENT_REGION();

    return calibrateCameraRO(_objectPoints, _imagePoints, imageSize, -1, 
                             _cameraMatrix, _distCoeffs,_rvecs, _tvecs, 
                             noArray(), stdDeviationsIntrinsics, 
                             stdDeviationsExtrinsics,
                             noArray(), _perViewErrors, flags, criteria);
}

double cv::calibrateCameraRO(InputArrayOfArrays _objectPoints,
                             InputArrayOfArrays _imagePoints,
                             Size imageSize, int iFixedPoint, 
                             InputOutputArray _cameraMatrix,
                             InputOutputArray _distCoeffs,
                             OutputArrayOfArrays _rvecs, OutputArrayOfArrays _tvecs,
                             OutputArray newObjPoints,
                             int flags, TermCriteria criteria)
{
    CV_INSTRUMENT_REGION();

    return calibrateCameraRO(_objectPoints, _imagePoints, imageSize, 
                             iFixedPoint, _cameraMatrix,_distCoeffs, 
                             _rvecs, _tvecs, newObjPoints, noArray(), 
                             noArray(),noArray(), noArray(), flags, criteria);
}

The overloads above are thin wrappers that ultimately forward to calibrateCameraRO():

In the listing below, the code related to the rarely used outputs newObjPoints, stdDeviationsIntrinsics, stdDeviationsExtrinsics, stdDeviationsObjPoints and _perViewErrors has been removed to keep the code simpler and easier to follow.

double cv::calibrateCameraRO(InputArrayOfArrays _objectPoints,
                             InputArrayOfArrays _imagePoints,
                             Size imageSize, int iFixedPoint, InputOutputArray _cameraMatrix,
                             InputOutputArray _distCoeffs,
                             OutputArrayOfArrays _rvecs, OutputArrayOfArrays _tvecs,
                             OutputArray newObjPoints,
                             OutputArray stdDeviationsIntrinsics,
                             OutputArray stdDeviationsExtrinsics,
                             OutputArray stdDeviationsObjPoints,
                             OutputArray _perViewErrors, int flags, TermCriteria criteria )
{
    /* prepare a 3x3 CV_64F camera intrinsic matrix cameraMatrix */
    int rtype = CV_64F;
    Mat cameraMatrix = _cameraMatrix.getMat();
    cameraMatrix = prepareCameraMatrix(cameraMatrix, rtype, flags);

    /*
        Prepare the lens distortion coefficient matrix; its size is determined by the flags and by the
        size of the supplied result matrix _distCoeffs.
        Possible cases are 4, 5, 8, 12 or 14 output distortion coefficients: (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]])
    */
    Mat distCoeffs = _distCoeffs.getMat();
    distCoeffs = (flags & CALIB_THIN_PRISM_MODEL) && !(flags & CALIB_TILTED_MODEL) ?
                  prepareDistCoeffs(distCoeffs, rtype, 12) :
                  prepareDistCoeffs(distCoeffs, rtype);
    if (!(flags & CALIB_RATIONAL_MODEL) && (!(flags & CALIB_THIN_PRISM_MODEL)) && (!(flags & CALIB_TILTED_MODEL)))
    {
        distCoeffs = distCoeffs.rows == 1 ?
            distCoeffs.colRange(0, 5) : distCoeffs.rowRange(0, 5);
    }

    // number of images (i.e. the number of object-point sets)
    int nimages = int(_objectPoints.total());

    Mat objPt, imgPt, npoints, rvecM, tvecM, stdDeviationsM, errorsM;

    /*
        needed: whether an output is requested; true if the caller passed a real output array,
        false if it passed noArray()
    */
    bool rvecs_needed = _rvecs.needed(), tvecs_needed = _tvecs.needed();
    // check whether the rotation and translation outputs are vectors of Mat
    bool rvecs_mat_vec = _rvecs.isMatVector();
    bool tvecs_mat_vec = _tvecs.isMatVector();

    // extrinsics: create the rotation output rvecM
    if( rvecs_needed )
    {
        _rvecs.create(nimages, 1, CV_64FC3);

        if(rvecs_mat_vec)
            rvecM.create(nimages, 3, CV_64F);
        else
            rvecM = _rvecs.getMat();
    }
    // extrinsics: create the translation output tvecM
    if( tvecs_needed )
    {
        _tvecs.create(nimages, 1, CV_64FC3);

        if(tvecs_mat_vec)
            tvecM.create(nimages, 3, CV_64F);
        else
            tvecM = _tvecs.getMat();
    }

    /*
        1. Collect the calibration points
        Input : object points (_objectPoints), image points (_imagePoints), iFixedPoint = -1
        Output: object points (objPt), image points (imgPt), number of points per image (npoints)
    */
    collectCalibrationData( _objectPoints, _imagePoints, noArray(), iFixedPoint, 
                           objPt, imgPt, 0, npoints );

    // releaseObject = 0 here; whether the object-releasing method is used
    bool releaseObject = iFixedPoint > 0 && iFixedPoint < npoints.at<int>(0) - 1;


    /*
        C-API inputs : object points (c_objPt), image points (c_imgPt), points per image (c_npoints)
        C-API outputs: camera intrinsics (c_cameraMatrix), distortion coefficients (c_distCoeffs),
        plus the rotation and translation matrices
    */
    CvMat c_objPt = cvMat(objPt), c_imgPt = cvMat(imgPt), c_npoints = cvMat(npoints);
    CvMat c_cameraMatrix = cvMat(cameraMatrix), c_distCoeffs = cvMat(distCoeffs);
    CvMat c_rvecM = cvMat(rvecM), c_tvecM = cvMat(tvecM), c_stdDev = cvMat(stdDeviationsM), c_errors = cvMat(errorsM);


    /*
        2. The calibration computation
    */
    double reprojErr = cvCalibrateCamera2Internal(&c_objPt, &c_imgPt, &c_npoints,
                                                  cvSize(imageSize),
                                                  iFixedPoint,
                                                  &c_cameraMatrix, &c_distCoeffs,
                                                  NULL,NULL,NULL, NULL, NULL, 
                                                  flags, cvTermCriteria(criteria));

    // read back the rotation and translation of every image
    for(int i = 0; i < nimages; i++ )
    {
        if( rvecs_needed && rvecs_mat_vec)
        {
            _rvecs.create(3, 1, CV_64F, i, true);
            Mat rv = _rvecs.getMat(i);
            memcpy(rv.ptr(), rvecM.ptr(i), 3*sizeof(double));
        }
        if( tvecs_needed && tvecs_mat_vec)
        {
            _tvecs.create(3, 1, CV_64F, i, true);
            Mat tv = _tvecs.getMat(i);
            memcpy(tv.ptr(), tvecM.ptr(i), 3*sizeof(double));
        }
    }

    // read back the camera intrinsic matrix and the lens distortion coefficients
    cameraMatrix.copyTo(_cameraMatrix);
    distCoeffs.copyTo(_distCoeffs);

    return reprojErr;
}
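
As an aside, the object-releasing variant can also be called directly. A minimal sketch, reusing the objectPoints/imagePoints/imageSize/patternSize names from the pipeline example above (the fixed-point choice follows the OpenCV documentation's recommendation of the top-right board corner):

cv::Mat K, dist, newObjPoints;
std::vector<cv::Mat> rvecs, tvecs;
// iFixedPoint: index of the board corner that stays fixed during object refinement
int iFixedPoint = patternSize.width - 1;
double rms = cv::calibrateCameraRO(objectPoints, imagePoints, imageSize, iFixedPoint,
                                   K, dist, rvecs, tvecs, newObjPoints);
// newObjPoints now holds the refined 3D coordinates of the calibration pattern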

1 collectCalibrationData():

static void collectCalibrationData( InputArrayOfArrays objectPoints,
                                    InputArrayOfArrays imagePoints1,
                                    InputArrayOfArrays imagePoints2,
                                    int iFixedPoint,
                                    Mat& objPtMat, Mat& imgPtMat1, Mat* imgPtMat2,
                                    Mat& npoints )
{
    // nimages : 图像数量
    int nimages = (int)objectPoints.total();
    int total = 0;

    // 遍历每张标定图像
    for (int i = 0; i < nimages; i++)
    {
        // 获取每张图像的角点
        Mat objectPoint = objectPoints.getMat(i);
        /*
            checkVector(3, CV_32F): returns the number of points if the Mat is a 1xN or Nx1 vector of
            3-channel CV_32F elements (i.e. Point3f), and -1 otherwise; used to validate the point array
        */
        int numberOfObjectPoints = objectPoint.checkVector(3, CV_32F);
        if (numberOfObjectPoints <= 0)
            CV_Error(CV_StsUnsupportedFormat, "objectPoints should contain vector of vectors of points of type Point3f");

        /*
            检查图像角点数量及数组
        */
        Mat imagePoint1 = imagePoints1.getMat(i);
        if (imagePoint1.empty())
            CV_Error(CV_StsBadSize, "imagePoints1 should not contain empty vector of vectors of points");
        int numberOfImagePoints = imagePoint1.checkVector(2, CV_32F);
        if (numberOfImagePoints <= 0)
            CV_Error(CV_StsUnsupportedFormat, "imagePoints1 should contain vector of vectors of points of type Point2f");
        CV_CheckEQ(numberOfObjectPoints, numberOfImagePoints, "Number of object and image points must be equal");

        // 所有角点的数量
        total += numberOfObjectPoints;
    }

    /*
        npoints  : how many corners are taken from each image
        objPtMat : all object (world-coordinate) corner points
        imgPtMat1: all image-coordinate corner points
    */
    npoints.create(1, (int)nimages, CV_32S);
    objPtMat.create(1, (int)total, CV_32FC3);
    imgPtMat1.create(1, (int)total, CV_32FC2);
    Point2f* imgPtData2 = 0;

    // imgPtMat2 == 0 in the calibrateCamera path, so this branch is skipped
    if (imgPtMat2)
    {
        imgPtMat2->create(1, (int)total, CV_32FC2);
        imgPtData2 = imgPtMat2->ptr<Point2f>();
    }

    // 角点数组指针
    Point3f* objPtData = objPtMat.ptr<Point3f>();
    Point2f* imgPtData1 = imgPtMat1.ptr<Point2f>();

    for (int i = 0, j = 0; i < nimages; i++)
    {
        // 每张标定图像的角点
        Mat objpt = objectPoints.getMat(i);
        Mat imgpt1 = imagePoints1.getMat(i);
        // 记录每张图像的对象点数
        int numberOfObjectPoints = objpt.checkVector(3, CV_32F);
        npoints.at<int>(i) = numberOfObjectPoints;
        // 将objectPoints和imagePoints1的数据拷贝到objPtMat,imgPtMat1
        for (int n = 0; n < numberOfObjectPoints; ++n)
        {
            objPtData[j + n] = objpt.ptr<Point3f>()[n];
            imgPtData1[j + n] = imgpt1.ptr<Point2f>()[n];
        }

        // imgPtMat2 == 0 in the calibrateCamera path, so this branch is skipped
        if (imgPtData2)
        {
            Mat imgpt2 = imagePoints2.getMat(i);
            int numberOfImage2Points = imgpt2.checkVector(2, CV_32F);
            CV_CheckEQ(numberOfObjectPoints, numberOfImage2Points, "Number of object and image(2) points must be equal");
            for (int n = 0; n < numberOfImage2Points; ++n)
            {
                imgPtData2[j + n] = imgpt2.ptr<Point2f>()[n];
            }
        }
        j += numberOfObjectPoints;
    }

    // number of points in the first calibration image
    int ni = npoints.at<int>(0);
    // iFixedPoint is -1 in the plain calibrateCamera() path
    bool releaseObject = iFixedPoint > 0 && iFixedPoint < ni - 1;
    // validate the object points and report an error if they are inconsistent; releaseObject is 0 here, so the block below is skipped
    if( releaseObject )
    {
        for (int i = 1; i < nimages; i++)
        {
            // 每张图像的点数要相等
            if( npoints.at<int>(i) != ni )
            {
                CV_Error( CV_StsBadArg, "All objectPoints[i].size() should be equal when "
                                        "object-releasing method is requested." );
            }
            // 判断每张标定图像的角点坐标和第一张是否相等
            Mat ocmp = objPtMat.colRange(ni * i, ni * i + ni) != objPtMat.colRange(0, ni);
            // 变形为通道数为1
            ocmp = ocmp.reshape(1);
            // countNonZero returns the number of non-zero elements; it is non-zero if any corner differs from the first view, which triggers the error below
            if( countNonZero(ocmp) )
            {
                CV_Error( CV_StsBadArg, "All objectPoints[i] should be identical when object-releasing"
                                        " method is requested." );
            }
        }
    }
}
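
To make the flattening concrete, here is a small illustration (not library code) of the output shapes collectCalibrationData produces for an assumed three views of a 9x6 board, i.e. 54 corners per view:

// conceptual sketch of the output layout (assumed 3 views, 54 corners each)
int nimages = 3, perView = 54, total = nimages * perView;        // total = 162
cv::Mat npoints(1, nimages, CV_32S, cv::Scalar(perView));        // [54, 54, 54]
cv::Mat objPtMat(1, total, CV_32FC3);                            // all object points concatenated in one row
cv::Mat imgPtMat1(1, total, CV_32FC2);                           // all image points concatenated in one row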

2 The calibration computation: cvCalibrateCamera2Internal():

static double cvCalibrateCamera2Internal( const CvMat* objectPoints,
                    const CvMat* imagePoints, const CvMat* npoints,
                    CvSize imageSize, int iFixedPoint, CvMat* cameraMatrix, CvMat* distCoeffs,
                    CvMat* rvecs, CvMat* tvecs, CvMat* newObjPoints, CvMat* stdDevs,
                    CvMat* perViewErrors, int flags, CvTermCriteria termCrit )
{
    const int NINTRINSIC = CV_CALIB_NINTRINSIC;
    double reprojErr = 0;

    // the camera intrinsic matrix A and the distortion coefficient array k
    Matx33d A;
    double k[14] = {0};
    CvMat matA = cvMat(3, 3, CV_64F, A.val), _k;
    int i, nimages, maxPoints = 0, ni = 0, pos, total = 0, nparams, npstep, cn;
    double aspectRatio = 0.;  // 焦距fx,fy 的比率

    // 0. 分配内存
    if(flags & CALIB_TILTED_MODEL)
    {
        //使用倾斜传感器模型时,畸变系数矩阵必须具有 14 个参数
        if (distCoeffs->cols*distCoeffs->rows != 14)
            CV_Error( CV_StsBadArg, "The tilted sensor model must have 14 parameters in the distortion matrix" );
    }
    else
    {
        //使用薄棱镜模型时,畸变系数矩阵必须有12个参数
        if(flags & CALIB_THIN_PRISM_MODEL)
            if (distCoeffs->cols*distCoeffs->rows != 12)
                CV_Error( CV_StsBadArg, "Thin prism model must have 12 parameters in the distortion matrix" );
    }

    // nimages: 标定的图像数量 npoints:1 * nimages
    nimages = npoints->rows * npoints->cols;
    npstep = npoints->rows == 1 ? 1 : npoints->step / CV_ELEM_SIZE(npoints->type);
    if( rvecs )
    {
        cn = CV_MAT_CN(rvecs->type);
        if( !CV_IS_MAT(rvecs) ||
            (CV_MAT_DEPTH(rvecs->type) != CV_32F && CV_MAT_DEPTH(rvecs->type) != CV_64F) ||
            ((rvecs->rows != nimages || (rvecs->cols*cn != 3 && rvecs->cols*cn != 9)) &&
            (rvecs->rows != 1 || rvecs->cols != nimages || cn != 3)) )
            CV_Error( CV_StsBadArg, "the output array of rotation vectors must be 3-channel "
                "1xn or nx1 array or 1-channel nx3 or nx9 array, where n is the number of views" );
    }
    if( tvecs )
    {
        cn = CV_MAT_CN(tvecs->type);
        if( !CV_IS_MAT(tvecs) ||
            (CV_MAT_DEPTH(tvecs->type) != CV_32F && CV_MAT_DEPTH(tvecs->type) != CV_64F) ||
            ((tvecs->rows != nimages || tvecs->cols*cn != 3) &&
            (tvecs->rows != 1 || tvecs->cols != nimages || cn != 3)) )
            CV_Error( CV_StsBadArg, "the output array of translation vectors must be 3-channel "
                "1xn or nx1 array or 1-channel nx3 array, where n is the number of views" );
    }

    //releaseObject = 0
    bool releaseObject = iFixedPoint > 0 && iFixedPoint < npoints->data.i[0] - 1;

    // 检查每张图像点数 以及 统计总点数
    for( i = 0; i < nimages; i++ )
    {
        // ni: 每张图像的角点数量,应该要不小于4
        ni = npoints->data.i[i*npstep];
        if( ni < 4 )
        {
            CV_Error_( CV_StsOutOfRange, ("The number of points in the view #%d is < 4", i));
        }
        maxPoints = MAX( maxPoints, ni );
        total += ni;
    }


    // 对象点matM 和 图像点_m
    Mat matM( 1, total, CV_64FC3 );
    Mat _m( 1, total, CV_64FC2 );
    Mat allErrors(1, total, CV_64FC2);
    // check the dimensionality of the object points (3D) and image points (2D); convert from homogeneous coordinates when needed
    if(CV_MAT_CN(objectPoints->type) == 3)
    {
        cvarrToMat(objectPoints).convertTo(matM, CV_64F);
    }
    else
    {
        convertPointsHomogeneous(cvarrToMat(objectPoints), matM);
    }
    if(CV_MAT_CN(imagePoints->type) == 2)
    {
        cvarrToMat(imagePoints).convertTo(_m, CV_64F);
    }
    else
    {
        convertPointsHomogeneous(cvarrToMat(imagePoints), _m);
    }

    // total number of parameters: 4 intrinsics + 14 distortion coefficients, plus 6 rotation/translation parameters per image
    nparams = NINTRINSIC + nimages * 6;
    // releaseObject = 0 in the plain calibrateCamera() path
    if( releaseObject )  nparams += maxPoints * 3;

    // 确定失真系数数量
    _k = cvMat( distCoeffs->rows, distCoeffs->cols, CV_MAKETYPE(CV_64F,CV_MAT_CN(distCoeffs->type)), k);
    if( distCoeffs->rows * distCoeffs->cols * CV_MAT_CN(distCoeffs->type) < 8 )
    {
        if( distCoeffs->rows * distCoeffs->cols * CV_MAT_CN(distCoeffs->type) < 5 )
            flags |= CALIB_FIX_K3;
        flags |= CALIB_FIX_K4 | CALIB_FIX_K5 | CALIB_FIX_K6;
    }
    const double minValidAspectRatio = 0.01;
    const double maxValidAspectRatio = 100.0;

    /* Estimate the five intrinsic parameters and all the extrinsic parameters with a closed-form solution, or use the supplied initial values */
    /*
       If flags contains CALIB_USE_INTRINSIC_GUESS, the supplied initial values are used; otherwise the closed-form solution is computed.
       In the former case cameraMatrix must contain valid initial values of fx, fy, cx, cy, which are then optimized further.
       Otherwise, (cx, cy) is initially set to the image center (using imageSize) and the focal lengths are computed in a least-squares fashion.
       Note that if the intrinsic parameters are already known, there is no need to use this function just to estimate the extrinsics. Use solvePnP instead.
    */
    if( flags & CALIB_USE_INTRINSIC_GUESS )
    {
        cvConvert( cameraMatrix, &matA );
        // validate the initial camera intrinsic matrix A
        /*
            | fx  0  u0 |
        A = | 0  fy  v0 |
            | 0   0   1 |
        */
        if( A(0, 0) <= 0 || A(1, 1) <= 0 )
            CV_Error( CV_StsOutOfRange, "Focal length (fx and fy) must be positive" );
        if( A(0, 2) < 0 || A(0, 2) >= imageSize.width ||
            A(1, 2) < 0 || A(1, 2) >= imageSize.height )
            CV_Error( CV_StsOutOfRange, "Principal point must be within the image" );
        if( fabs(A(0, 1)) > 1e-5 )
            CV_Error( CV_StsOutOfRange, "Non-zero skew is not supported by the function" );
        if( fabs(A(1, 0)) > 1e-5 || fabs(A(2, 0)) > 1e-5 ||
            fabs(A(2, 1)) > 1e-5 || fabs(A(2,2)-1) > 1e-5 )
            CV_Error( CV_StsOutOfRange,
                "The intrinsic matrix must have [fx 0 cx; 0 fy cy; 0 0 1] shape" );

        A(0, 1) = A(1, 0) = A(2, 0) = A(2, 1) = 0.;
        A(2, 2) = 1.;

        /*
            If flags contains CALIB_FIX_ASPECT_RATIO:
            only fy is treated as a free parameter; the ratio fx/fy stays the same as in the input cameraMatrix.
            When CALIB_USE_INTRINSIC_GUESS is not set, the actual input values of fx and fy are ignored;
            only their ratio is computed and used further.
        */
        if( flags & CALIB_FIX_ASPECT_RATIO )
        {
            // x 和 y 方向的焦距比
            aspectRatio = A(0, 0)/A(1, 1);
            if( aspectRatio < minValidAspectRatio || aspectRatio > maxValidAspectRatio )
                CV_Error( CV_StsOutOfRange,
                    "The specified aspect ratio (= cameraMatrix[0][0] / cameraMatrix[1][1]) is incorrect" );
        }
        cvConvert( distCoeffs, &_k );
    }
    else
    {
        Scalar mean, sdv;
        meanStdDev(matM, mean, sdv);
        if( fabs(mean[2]) > 1e-5 || fabs(sdv[2]) > 1e-5 )
            CV_Error( CV_StsBadArg,
            "For non-planar calibration rigs the initial intrinsic matrix must be specified" );
        for( i = 0; i < total; i++ )
            matM.at<Point3d>(i).z = 0.; // 将棋盘格平面指定为世界坐标系平面,因此对象点的z坐标为0
        /*
            If flags contains CALIB_FIX_ASPECT_RATIO:
            only fy is treated as a free parameter; the ratio fx/fy stays the same as in the initial cameraMatrix.
            This is the branch where CALIB_USE_INTRINSIC_GUESS is not set, so the actual input values of fx and fy are ignored.
        */
        if( flags & CALIB_FIX_ASPECT_RATIO )
        {
            aspectRatio = cvmGet(cameraMatrix,0,0);
            aspectRatio /= cvmGet(cameraMatrix,1,1);
            if( aspectRatio < minValidAspectRatio || aspectRatio > maxValidAspectRatio )
                CV_Error( CV_StsOutOfRange,
                    "The specified aspect ratio (= cameraMatrix[0][0] / cameraMatrix[1][1]) is incorrect" );
        }
        CvMat _matM = cvMat(matM), m = cvMat(_m);

        /*
            2.1 Closed-form estimate of the camera intrinsics
            Input : object points (_matM), image points (m), points per image (npoints), image size (imageSize)
            Output: camera intrinsic matrix (matA), focal-length ratio (aspectRatio)
        */
        cvInitIntrinsicParams2D( &_matM, &m, npoints, imageSize, &matA, aspectRatio );
    }

    /* 2.2 The CvLevMarq solver used to refine all coefficients */
    CvLevMarq solver( nparams, 0, termCrit );

    // maxPoints: 每张图像的点数
    Mat _Ji( maxPoints*2, NINTRINSIC, CV_64FC1, Scalar(0));
    Mat _Je( maxPoints*2, 6, CV_64FC1 );
    Mat _err( maxPoints*2, 1, CV_64FC1 );

    // allocJo = 0
    const bool allocJo = (solver.state == CvLevMarq::CALC_J) || stdDevs || releaseObject;
    Mat _Jo = allocJo ? Mat( maxPoints*2, maxPoints*3, CV_64FC1, Scalar(0) ) : Mat();

    /*
        Select the linear solver used during calibration.
        CALIB_USE_QR or CALIB_USE_LU can be used for faster calibration; in some rare cases the result may be slightly less precise and less stable.
    */
    if(flags & CALIB_USE_LU) {
        solver.solveMethod = DECOMP_LU;
    }
    else if(flags & CALIB_USE_QR) {
        solver.solveMethod = DECOMP_QR;
    }

    // initialize all parameters and the parameter mask
    {
        // param : 所有参数
        double* param = solver.param->data.db;
        uchar* mask = solver.mask->data.ptr;

        // fx, fy, u0, v0
        param[0] = A(0, 0); param[1] = A(1, 1); param[2] = A(0, 2); param[3] = A(1, 2);
        // 失真系数14个
        std::copy(k, k + 14, param + 4);

        if (flags & CALIB_FIX_ASPECT_RATIO) // only fy is treated as a free parameter
            mask[0] = 0;
        if (flags & CALIB_FIX_FOCAL_LENGTH) // the focal length is not changed during the global optimization
            mask[0] = mask[1] = 0;
        if (flags & CALIB_FIX_PRINCIPAL_POINT) // the principal point is not changed during the global optimization
            mask[2] = mask[3] = 0;
        if (flags & CALIB_ZERO_TANGENT_DIST) // the tangential distortion coefficients (p1,p2) are set to zero and stay zero
        {
            param[6] = param[7] = 0;
            mask[6] = mask[7] = 0;
        }
        if (!(flags & CALIB_RATIONAL_MODEL)) // CALIB_RATIONAL_MODEL enables k4, k5, k6 and returns at least 8 coefficients
            flags |= CALIB_FIX_K4 + CALIB_FIX_K5 + CALIB_FIX_K6;
        if (!(flags & CALIB_THIN_PRISM_MODEL)) // CALIB_THIN_PRISM_MODEL enables s1, s2, s3, s4 and returns at least 12 coefficients
            flags |= CALIB_FIX_S1_S2_S3_S4;
        if (!(flags & CALIB_TILTED_MODEL)) // CALIB_TILTED_MODEL enables tauX and tauY, i.e. the tilted sensor model, and returns 14 coefficients
            flags |= CALIB_FIX_TAUX_TAUY;

        /*
        * CALIB_FIX_K1, CALIB_FIX_K2 ...
        * do not change the corresponding radial distortion coefficient during the optimization.
        * If CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used;
        * otherwise it is set to 0.
        */
        mask[4] = !(flags & CALIB_FIX_K1);
        mask[5] = !(flags & CALIB_FIX_K2);
        // 切向变形系数 (p1,p2)
        if (flags & CALIB_FIX_TANGENT_DIST)
        {
            mask[6] = mask[7] = 0;
        }
        mask[8] = !(flags & CALIB_FIX_K3);
        mask[9] = !(flags & CALIB_FIX_K4);
        mask[10] = !(flags & CALIB_FIX_K5);
        mask[11] = !(flags & CALIB_FIX_K6);

        if (flags & CALIB_FIX_S1_S2_S3_S4)
        {
            mask[12] = 0;
            mask[13] = 0;
            mask[14] = 0;
            mask[15] = 0;
        }
        /*
         * 倾斜传感器模型的系数在优化过程中不会改变。
        */
        if (flags & CALIB_FIX_TAUX_TAUY)
        {
            mask[16] = 0;
            mask[17] = 0;
        }

        //releaseObject = 0
        if (releaseObject)
        {
            // copy object points
            std::copy(matM.ptr<double>(), matM.ptr<double>(0, maxPoints - 1) + 3,
                param + NINTRINSIC + nimages * 6);
            // fix points
            mask[NINTRINSIC + nimages * 6] = 0;
            mask[NINTRINSIC + nimages * 6 + 1] = 0;
            mask[NINTRINSIC + nimages * 6 + 2] = 0;
            mask[NINTRINSIC + nimages * 6 + iFixedPoint * 3] = 0;
            mask[NINTRINSIC + nimages * 6 + iFixedPoint * 3 + 1] = 0;
            mask[NINTRINSIC + nimages * 6 + iFixedPoint * 3 + 2] = 0;
            mask[nparams - 1] = 0;
        }
    }

    // estimate the extrinsic parameters of every view
    for( i = 0, pos = 0; i < nimages; i++, pos += ni )
    {
        CvMat _ri, _ti;
        ni = npoints->data.i[i*npstep];

        cvGetRows( solver.param, &_ri, NINTRINSIC + i*6, NINTRINSIC + i*6 + 3 );
        cvGetRows( solver.param, &_ti, NINTRINSIC + i*6 + 3, NINTRINSIC + i*6 + 6 );

        CvMat _Mi = cvMat(matM.colRange(pos, pos + ni));
        CvMat _mi = cvMat(_m.colRange(pos, pos + ni));

        /* 2.3 Estimate the extrinsics of this view from the camera intrinsics and distortion coefficients */
        cvFindExtrinsicCameraParams2( &_Mi, &_mi, &matA, &_k, &_ri, &_ti );
    }

    /* Global optimization: refine all parameters by minimizing the reprojection error */
    for(;;)
    {
        const CvMat* _param = 0;
        CvMat *_JtJ = 0, *_JtErr = 0;
        double* _errNorm = 0;
        // 迭代更新参数,直到达到迭代条件
        bool proceed = solver.updateAlt( _param, _JtJ, _JtErr, _errNorm );
        double *param = solver.param->data.db, *pparam = solver.prevParam->data.db;
        bool calcJ = solver.state == CvLevMarq::CALC_J || (!proceed && stdDevs);

        if( flags & CALIB_FIX_ASPECT_RATIO )
        {
            param[0] = param[1]*aspectRatio;
            pparam[0] = pparam[1]*aspectRatio;
        }

        A(0, 0) = param[0]; A(1, 1) = param[1]; A(0, 2) = param[2]; A(1, 2) = param[3];
        std::copy(param + 4, param + 4 + 14, k);

        if ( !proceed && !stdDevs && !perViewErrors )
            break;
        else if ( !proceed && stdDevs )
            cvZero(_JtJ);

        reprojErr = 0;

        for( i = 0, pos = 0; i < nimages; i++, pos += ni )
        {
            CvMat _ri, _ti;
            ni = npoints->data.i[i*npstep];

            cvGetRows( solver.param, &_ri, NINTRINSIC + i*6, NINTRINSIC + i*6 + 3 );
            cvGetRows( solver.param, &_ti, NINTRINSIC + i*6 + 3, NINTRINSIC + i*6 + 6 );

            CvMat _Mi = cvMat(matM.colRange(pos, pos + ni));
            if( releaseObject )
            {
                cvGetRows( solver.param, &_Mi, NINTRINSIC + nimages * 6,
                           NINTRINSIC + nimages * 6 + ni * 3 );
                cvReshape( &_Mi, &_Mi, 3, 1 );
            }
            CvMat _mi = cvMat(_m.colRange(pos, pos + ni));
            CvMat _me = cvMat(allErrors.colRange(pos, pos + ni));

            _Je.resize(ni*2); _Ji.resize(ni*2); _err.resize(ni*2);
            _Jo.resize(ni*2);

            CvMat _mp = cvMat(_err.reshape(2, 1));

            if( calcJ )
            {
                CvMat _dpdr = cvMat(_Je.colRange(0, 3));
                CvMat _dpdt = cvMat(_Je.colRange(3, 6));
                CvMat _dpdf = cvMat(_Ji.colRange(0, 2));
                CvMat _dpdc = cvMat(_Ji.colRange(2, 4));
                CvMat _dpdk = cvMat(_Ji.colRange(4, NINTRINSIC));
                CvMat _dpdo = _Jo.empty() ? CvMat() : cvMat(_Jo.colRange(0, ni * 3));
                /* project the points with the current parameters, also computing the Jacobians */
                cvProjectPoints2Internal( &_Mi, &_ri, &_ti, &matA, &_k, &_mp, &_dpdr, &_dpdt,
                                  (flags & CALIB_FIX_FOCAL_LENGTH) ? nullptr : &_dpdf,
                                  (flags & CALIB_FIX_PRINCIPAL_POINT) ? nullptr : &_dpdc, &_dpdk,
                                  (_Jo.empty()) ? nullptr: &_dpdo,
                                  (flags & CALIB_FIX_ASPECT_RATIO) ? aspectRatio : 0);
            }
            else
                cvProjectPoints2( &_Mi, &_ri, &_ti, &matA, &_k, &_mp ); // project the points with the current parameters

            // the residual: the difference between the projected points and the measured image points
            cvSub( &_mp, &_mi, &_mp );
            if (perViewErrors || stdDevs)
                cvCopy(&_mp, &_me);

            if( calcJ )
            {
                Mat JtJ(cvarrToMat(_JtJ)), JtErr(cvarrToMat(_JtErr));

                // see HZ: (A6.14) for details on the structure of the Jacobian
                JtJ(Rect(0, 0, NINTRINSIC, NINTRINSIC)) += _Ji.t() * _Ji;
                JtJ(Rect(NINTRINSIC + i * 6, NINTRINSIC + i * 6, 6, 6)) = _Je.t() * _Je;
                JtJ(Rect(NINTRINSIC + i * 6, 0, 6, NINTRINSIC)) = _Ji.t() * _Je;
                if( releaseObject )
                {
                    JtJ(Rect(NINTRINSIC + nimages * 6, 0, maxPoints * 3, NINTRINSIC)) += _Ji.t() * _Jo;
                    JtJ(Rect(NINTRINSIC + nimages * 6, NINTRINSIC + i * 6, maxPoints * 3, 6))
                        += _Je.t() * _Jo;
                    JtJ(Rect(NINTRINSIC + nimages * 6, NINTRINSIC + nimages * 6, maxPoints * 3, maxPoints * 3))
                        += _Jo.t() * _Jo;
                }

                JtErr.rowRange(0, NINTRINSIC) += _Ji.t() * _err;
                JtErr.rowRange(NINTRINSIC + i * 6, NINTRINSIC + (i + 1) * 6) = _Je.t() * _err;
                if( releaseObject )
                {
                    JtErr.rowRange(NINTRINSIC + nimages * 6, nparams) += _Jo.t() * _err;
                }
            }

            double viewErr = norm(_err, NORM_L2SQR);

            if( perViewErrors )
                perViewErrors->data.db[i] = std::sqrt(viewErr / ni);

            reprojErr += viewErr;
        }
        if( _errNorm )
            *_errNorm = reprojErr;

        if( !proceed )
        {
            break;
        }
    }

    // 4. store the results
    cvConvert( &matA, cameraMatrix );
    cvConvert( &_k, distCoeffs );
    if( newObjPoints && releaseObject )
    {
        CvMat _Mi;
        cvGetRows( solver.param, &_Mi, NINTRINSIC + nimages * 6,
                   NINTRINSIC + nimages * 6 + maxPoints * 3 );
        cvReshape( &_Mi, &_Mi, 3, 1 );
        cvConvert( &_Mi, newObjPoints );
    }

    for( i = 0, pos = 0; i < nimages; i++ )
    {
        CvMat src, dst;

        if( rvecs )
        {
            src = cvMat( 3, 1, CV_64F, solver.param->data.db + NINTRINSIC + i*6 );
            if( rvecs->rows == nimages && rvecs->cols*CV_MAT_CN(rvecs->type) == 9 )
            {
                dst = cvMat( 3, 3, CV_MAT_DEPTH(rvecs->type),
                    rvecs->data.ptr + rvecs->step*i );
                cvRodrigues2( &src, &matA );
                cvConvert( &matA, &dst );
            }
            else
            {
                dst = cvMat( 3, 1, CV_MAT_DEPTH(rvecs->type), rvecs->rows == 1 ?
                    rvecs->data.ptr + i*CV_ELEM_SIZE(rvecs->type) :
                    rvecs->data.ptr + rvecs->step*i );
                cvConvert( &src, &dst );
            }
        }
        if( tvecs )
        {
            src = cvMat( 3, 1, CV_64F, solver.param->data.db + NINTRINSIC + i*6 + 3 );
            dst = cvMat( 3, 1, CV_MAT_DEPTH(tvecs->type), tvecs->rows == 1 ?
                    tvecs->data.ptr + i*CV_ELEM_SIZE(tvecs->type) :
                    tvecs->data.ptr + tvecs->step*i );
            cvConvert( &src, &dst );
         }
    }

    return std::sqrt(reprojErr/total);
}
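
The value returned at the end is the overall RMS reprojection error. It can be reproduced from the calibration outputs with cv::projectPoints; a short sketch (reusing the objectPoints/imagePoints/K/dist/rvecs/tvecs names from the pipeline example near the top, plus <cmath> for std::sqrt):

double totalSqErr = 0; size_t totalPts = 0;
for (size_t i = 0; i < objectPoints.size(); i++)
{
    std::vector<cv::Point2f> proj;
    cv::projectPoints(objectPoints[i], rvecs[i], tvecs[i], K, dist, proj);
    double err = cv::norm(imagePoints[i], proj, cv::NORM_L2);   // sqrt of the summed squared residuals of this view
    totalSqErr += err * err;                                    // corresponds to viewErr above
    totalPts   += proj.size();
}
double rms = std::sqrt(totalSqErr / totalPts);                  // matches sqrt(reprojErr / total) returned above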

Key steps, functions and computations

The CvLevMarq solver

For the theory behind the Levenberg-Marquardt algorithm see 【1】【2】.

Declaration of the CvLevMarq class in calib3d_c.h
class CV_EXPORTS CvLevMarq
{
public:
    CvLevMarq();

    /*
    * Constructor
    * nparams   : number of parameters
    * nerrs     : number of error terms (residuals)
    * criteria  : termination criteria; defaults to at most 30 iterations with epsilon DBL_EPSILON
    * completeSymmFlag  : whether JtJ is filled in one triangle only and must be completed symmetrically
    */
    CvLevMarq( int nparams, int nerrs, CvTermCriteria criteria=
              cvTermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER,30,DBL_EPSILON),
              bool completeSymmFlag=false );
    ~CvLevMarq();

    /*
    * Initialization
    * nparams   : number of parameters
    * nerrs     : number of error terms, i.e. the number of samples
    * criteria  : termination criteria; defaults to at most 30 iterations with epsilon DBL_EPSILON
    * completeSymmFlag  : see above
    */
    void init( int nparams, int nerrs, CvTermCriteria criteria=
              cvTermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER,30,DBL_EPSILON),
              bool completeSymmFlag=false );

    /*
    * Performs one update of the parameters and reports whether to continue
    * param   : current parameter values, nparams x 1
    * J       : current Jacobian matrix, nerrs x nparams
    * err     : current error (residual) vector, nerrs x 1
    */
    bool update( const CvMat*& param, CvMat*& J, CvMat*& err );

    /*
    * Performs one update of the parameters and reports whether to continue
    * param   : current parameter values, nparams x 1
    * JtJ     : the J^T J matrix, nparams x nparams
    * JtErr   : the J^T err vector, nparams x 1
    * errNorm : the measurement error after the update
    */
    bool updateAlt( const CvMat*& param, CvMat*& JtJ, CvMat*& JtErr, double*& errNorm );

    void clear();   /* release the internal buffers */
    void step();    /* perform one damped update step */

    /* optimization states: STARTED (optimization begins), CALC_J (compute the Jacobian and update the parameters), CHECK_ERR (check the updated error), DONE (optimization finished) */
    enum { DONE=0, STARTED=1, CALC_J=2, CHECK_ERR=3 };

    cv::Ptr<CvMat> mask;            /* marks which parameters are optimized: 0 = kept fixed, 1 = optimized */
    cv::Ptr<CvMat> prevParam;       /* parameter values of the previous step */
    cv::Ptr<CvMat> param;           /* current parameter values, nparams x 1 */
    cv::Ptr<CvMat> J;               /* current Jacobian matrix, nerrs x nparams */
    cv::Ptr<CvMat> err;             /* current error (residual) vector, nerrs x 1 */
    cv::Ptr<CvMat> JtJ;             /* the J^T J matrix, nparams x nparams */
    cv::Ptr<CvMat> JtJN;            /*  */
    cv::Ptr<CvMat> JtErr;           /* the J^T err vector, nparams x 1 */
    cv::Ptr<CvMat> JtJV;            /*  */
    cv::Ptr<CvMat> JtJW;            /*  */
    double prevErrNorm;             /* measurement error before the update */
    double errNorm;                 /* measurement error after the update */
    int lambdaLg10;                 /* log10 of the current Levenberg-Marquardt damping factor; +1 makes lambda 10x larger, -1 makes it 10x smaller */
    CvTermCriteria criteria;        /* termination criteria; defaults to at most 30 iterations with epsilon DBL_EPSILON */
    int state;                      /* current optimization state */
    int iters;                      /* number of iterations performed so far */
    bool completeSymmFlag;          /*  */
    int solveMethod;                /* linear solve method; DECOMP_SVD by default */
};
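
To make the update() state machine concrete, here is a small self-contained sketch (not OpenCV code) that drives the solver to fit a straight line y = a*x + b to a few made-up samples; it assumes an OpenCV build that still ships the legacy C headers where CvLevMarq is declared, as quoted above. The caller fills the error vector on every call and the Jacobian only when it is requested:

#include <opencv2/core/core_c.h>
#include <opencv2/calib3d/calib3d_c.h>   // legacy header that declares CvLevMarq (see above)
#include <cstdio>
#include <vector>

int main()
{
    // made-up noisy samples of y = 2x + 1
    std::vector<double> xs = {0, 1, 2, 3, 4}, ys = {1.1, 2.9, 5.2, 6.8, 9.1};
    const int nerrs = (int)xs.size();

    CvLevMarq solver(2, nerrs);                      // two parameters: a and b
    cvmSet(solver.param.get(), 0, 0, 0.0);           // initial guess a = 0
    cvmSet(solver.param.get(), 1, 0, 0.0);           // initial guess b = 0

    const CvMat* param = 0;
    CvMat *J = 0, *err = 0;
    while (solver.update(param, J, err) && err)      // DONE -> update() returns false
    {
        double a = cvmGet(param, 0, 0), b = cvmGet(param, 1, 0);
        for (int i = 0; i < nerrs; i++)
        {
            cvmSet(err, i, 0, a * xs[i] + b - ys[i]);    // residual r_i
            if (J)                                       // the Jacobian is requested only in the CALC_J state
            {
                cvmSet(J, i, 0, xs[i]);                  // dr_i / da
                cvmSet(J, i, 1, 1.0);                    // dr_i / db
            }
        }
    }
    std::printf("a = %f, b = %f\n", cvmGet(param, 0, 0), cvmGet(param, 1, 0));
    return 0;
}
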
Implementation of the main CvLevMarq member functions in compat_ptsetreg.cpp
/* Reset the internal buffers and initialize them */
void CvLevMarq::init( int nparams, int nerrs, CvTermCriteria criteria0, bool _completeSymmFlag )
{
    // clear the old buffers if the problem size changed, then allocate fresh ones
    if( !param || param->rows != nparams || nerrs != (err ? err->rows : 0) )
        clear();
    mask.reset(cvCreateMat( nparams, 1, CV_8U ));
    cvSet(mask, cvScalarAll(1));
    prevParam.reset(cvCreateMat( nparams, 1, CV_64F ));
    param.reset(cvCreateMat( nparams, 1, CV_64F ));
    JtJ.reset(cvCreateMat( nparams, nparams, CV_64F ));
    JtErr.reset(cvCreateMat( nparams, 1, CV_64F ));
    if( nerrs > 0 )
    {
        J.reset(cvCreateMat( nerrs, nparams, CV_64F ));
        err.reset(cvCreateMat( nerrs, 1, CV_64F ));
    }
    errNorm = prevErrNorm = DBL_MAX;
    lambdaLg10 = -3;
    criteria = criteria0;
    if( criteria.type & CV_TERMCRIT_ITER )
        criteria.max_iter = MIN(MAX(criteria.max_iter,1),1000);
    else
        criteria.max_iter = 30;
    if( criteria.type & CV_TERMCRIT_EPS )
        criteria.epsilon = MAX(criteria.epsilon, 0);
    else
        criteria.epsilon = DBL_EPSILON;
    state = STARTED;
    iters = 0;
    completeSymmFlag = _completeSymmFlag;
    solveMethod = cv::DECOMP_SVD;
}

/* Perform one iteration and report whether the optimization should continue */
bool CvLevMarq::update( const CvMat*& _param, CvMat*& matJ, CvMat*& _err )
{
    // 初始化当前雅可比矩阵matJ 和 误差向量_err
    matJ = _err = 0;

    // state = DONE 时不更新参数并返回false
    if( state == DONE )
    {
        _param = param;
        return false;
    }

    // state = STARTED 时初始化参数并将state置为CALC_J
    if( state == STARTED )
    {
        _param = param;
        cvZero( J );
        cvZero( err );
        matJ = J;
        _err = err;
        state = CALC_J;
        return true;
    }

    // state = CALC_J 时更新参数并将state置为CHECK_ERR
    if( state == CALC_J )
    {
        // 根据J计算JtJ
        cvMulTransposed( J, JtJ, 1 );
        // 根据J,err计算JtErr
        cvGEMM( J, err, 1, 0, 0, JtErr, CV_GEMM_A_T );
        // 备份更新前参数
        cvCopy( param, prevParam );
        // 执行一次更新迭代
        step();
        // 初始化prevErrNorm
        if( iters == 0 )
            prevErrNorm = cvNorm(err, 0, CV_L2);
        // 更新参数
        _param = param;
        cvZero( err );
        _err = err;
        state = CHECK_ERR;
        return true;
    }

    errNorm = cvNorm( err, 0, CV_L2 );
    // compare the error after the update with the error before it; if it grew, increase lambda tenfold and try the step again
    if( errNorm > prevErrNorm )
    {
        if( ++lambdaLg10 <= 16 )
        {
            step();
            _param = param;
            cvZero( err );
            _err = err;
            state = CHECK_ERR;
            return true;
        }
    }

    lambdaLg10 = MAX(lambdaLg10-1, -16);
    // check the iteration count and the relative parameter change to decide whether to keep iterating
    if( ++iters >= criteria.max_iter ||
        cvNorm(param, prevParam, CV_RELATIVE_L2) < criteria.epsilon )
    {
        _param = param;
        state = DONE;
        return true;
    }

    prevErrNorm = errNorm;
    _param = param;
    cvZero(J);
    matJ = J;
    _err = err;
    state = CALC_J;
    return true;
}

/* Perform one iteration and report whether the optimization should continue */
bool CvLevMarq::updateAlt( const CvMat*& _param, CvMat*& _JtJ, CvMat*& _JtErr, double*& _errNorm )
{
    // state = DONE 时不更新参数并返回false
    if( state == DONE )
    {
        _param = param;
        return false;
    }

    // state = STARTED 时初始化参数并将state置为CALC_J
    if( state == STARTED )
    {
        _param = param;
        cvZero( JtJ );
        cvZero( JtErr );
        errNorm = 0;
        _JtJ = JtJ;
        _JtErr = JtErr;
        _errNorm = &errNorm;
        state = CALC_J;
        return true;
    }

    // state = CALC_J 时更新参数并将state置为CHECK_ERR
    if( state == CALC_J )
    {
        cvCopy( param, prevParam );
        step();
        _param = param;
        prevErrNorm = errNorm;
        errNorm = 0;
        _errNorm = &errNorm;
        state = CHECK_ERR;
        return true;
    }

    // compare the error after the update with the error before it; if it grew, increase lambda tenfold and try the step again
    CV_Assert( state == CHECK_ERR );
    if( errNorm > prevErrNorm )
    {
        if( ++lambdaLg10 <= 16 )
        {
            step();
            _param = param;
            errNorm = 0;
            _errNorm = &errNorm;
            state = CHECK_ERR;
            return true;
        }
    }

    // check the iteration count and the relative parameter change to decide whether to keep iterating
    lambdaLg10 = MAX(lambdaLg10-1, -16);
    if( ++iters >= criteria.max_iter ||
        cvNorm(param, prevParam, CV_RELATIVE_L2) < criteria.epsilon )
    {
        _param = param;
        _JtJ = JtJ;
        _JtErr = JtErr;
        state = DONE;
        return false;
    }

    prevErrNorm = errNorm;
    cvZero( JtJ );
    cvZero( JtErr );
    _param = param;
    _JtJ = JtJ;
    _JtErr = JtErr;
    state = CALC_J;
    return true;
}
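
The step() member is not listed above; for reference, what it computes is essentially a damped Gauss-Newton step. A schematic sketch of that step (ignoring the handling of parameters masked out of the optimization):

#include <opencv2/core.hpp>
#include <cmath>

// Schematic of the damped normal-equation solve performed inside CvLevMarq::step()
// (the parameter-mask handling of the real implementation is omitted here).
static cv::Mat lmStep(const cv::Mat& JtJ, const cv::Mat& JtErr,
                      const cv::Mat& prevParam, int lambdaLg10, int solveMethod)
{
    cv::Mat A = JtJ.clone();
    cv::Mat d = A.diag();                       // view of the diagonal of the copy
    d *= 1.0 + std::pow(10.0, lambdaLg10);      // damping: (JtJ + lambda*diag(JtJ)) dp = JtErr
    cv::Mat dp;
    cv::solve(A, JtErr, dp, solveMethod);       // DECOMP_SVD by default; LU/QR if CALIB_USE_LU / CALIB_USE_QR
    return prevParam - dp;                      // new parameter vector
}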

Solving for the camera extrinsics: cvFindExtrinsicCameraParams2()

This function is implemented as an iterative optimization. It first computes initial values for the camera's rotation vector and translation vector in 3D space from the 3D point coordinates and the 2D pixel coordinates, and then refines both iteratively until it reaches the optimal result. During the iteration it repeatedly calls cvProjectPoints2 to compute the projected position of each point, and uses the error between those projections and the measured 2D pixel coordinates as the objective function.

void cvFindExtrinsicCameraParams2( const CvMat* objectPoints,
                                  const CvMat* imagePoints, 
                                  const CvMat* A,
                                  const CvMat* distCoeffs, 
                                  CvMat* rvec, 
                                  CvMat* tvec,
                                  int useExtrinsicGuess );

Inputs:
    objectPoints: object points
    imagePoints : image points
    A           : camera intrinsic matrix
    distCoeffs  : lens distortion coefficients
    useExtrinsicGuess: whether to use the supplied rvec/tvec as the initial estimate

Outputs:
    rvec        : rotation vector
    tvec        : translation vector
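
As the note in the source above says, when the intrinsics are already known the extrinsics of a single view are normally obtained with the C++ API instead; a brief sketch (K, dist, boardPts and corners are the names used in the earlier examples):

cv::Mat rvec, tvec;
bool ok = cv::solvePnP(boardPts, corners, K, dist, rvec, tvec,
                       false /* useExtrinsicGuess */, cv::SOLVEPNP_ITERATIVE);
cv::Mat R;
cv::Rodrigues(rvec, R);   // rotation vector -> 3x3 rotation matrix
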
How the extrinsic rotation is computed

Problem: find the best rotation matrix R approximating a given 3x3 matrix Q, where "best" is in the sense of the smallest Frobenius norm of the difference R − Q. (Q is the rotation-like matrix obtained in step 2.2.3.1; it is not necessarily a proper rotation matrix, so it has to be projected onto a standard rotation R.) That is, solve

$$\min_R \|R-Q\|_F^2 \quad \text{subject to } R^T R = I \tag{a}$$

Using the properties of the Frobenius norm and of the matrix trace:

$$\begin{aligned} \|R-Q\|_F^2 &= \operatorname{trace}\left((R-Q)^T(R-Q)\right) \\ &= \operatorname{trace}\left(R^T R - R^T Q - Q^T R + Q^T Q\right) \\ &= \operatorname{trace}(I) - \operatorname{trace}\left(R^T Q\right) - \operatorname{trace}\left(Q^T R\right) + \operatorname{trace}\left(Q^T Q\right) \\ &= 3 - 2\operatorname{trace}\left(R^T Q\right) + \operatorname{trace}\left(Q^T Q\right) \end{aligned}$$

Since Q is known, minimizing (a) is equivalent to maximizing

$$\operatorname{trace}\left(R^T Q\right)$$

Taking the singular value decomposition of Q,

$$\operatorname{SVD}(Q) = U S V^T$$

where $S = \operatorname{diag}(\sigma_1, \sigma_2, \sigma_3)$ holds the singular values of Q, the properties of the SVD give

$$\begin{aligned} \operatorname{trace}\left(R^T Q\right) &= \operatorname{trace}\left(R^T U S V^T\right) \\ &= \operatorname{trace}\left(V^T R^T U S\right) \\ &= \operatorname{trace}(Z S) \\ &= \sum_{i=1}^3 z_{ii} \sigma_i \leq \sum_{i=1}^3 \sigma_i \end{aligned}$$

where

$$Z = V^T R^T U, \qquad \text{the maximum is attained when } Z = I, \qquad \text{i.e. } R = U V^T$$

Therefore, once an approximate rotation matrix Q has been obtained, the standard (orthonormal) rotation matrix can be recovered from the formula above.
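
A small sketch of this final projection step using cv::SVD (Q stands for the approximate rotation obtained earlier; det(R) should also be checked to rule out a reflection):

cv::Matx33d Q;                      // approximate, not exactly orthonormal, rotation (assumed given)
cv::Mat w, u, vt;
cv::SVD::compute(cv::Mat(Q), w, u, vt);
cv::Mat R = u * vt;                 // nearest rotation in the Frobenius sense: R = U * V^T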


Reference link: c++opencv相机标定(calibrateCamera)详解
