Lecture 8: Visual Odometry 2 (Part 1)

This article covers the basics of optical-flow tracking: LK optical flow using OpenCV's built-in function, and an optical-flow tracker implemented with the Gauss-Newton method. C++ code shows how to detect and track corners across two images, explains how Gauss-Newton optimization is applied to optical flow, and compares single-layer and multi-layer (pyramidal) optical flow.

1 Using LK Optical Flow

  Given image 1 and image 2, extract GFTT corners from image 1 with OpenCV's built-in detector, then track their positions $(u, v)$ in image 2 with the calcOpticalFlowPyrLK() function.
  The cpp file is as follows:

#include <iostream>
#include <chrono>
#include <opencv2/opencv.hpp>
#include <Eigen/Core>  //Eigen core module

using namespace std;
using namespace cv;
using namespace Eigen;

//image paths
string path1 = "../LK1.png";
string path2 = "../LK2.png";

int main()
{
    chrono::steady_clock::time_point t1 = chrono::steady_clock::now();

    Mat img1 = imread(path1, 0);  //flag 0 loads the image as grayscale
    Mat img2 = imread(path2, 0);  //flag 0 loads the image as grayscale

    vector<KeyPoint> kp1;
    Ptr<GFTTDetector> detector = GFTTDetector::create(500, 0.01, 20);
    //maxCorners: maximum number of corners to detect; here 500.
    //qualityLevel: minimal accepted corner quality, as a fraction of the best corner's quality measure; typically 0.01 or 0.1, at most 1. Here 0.01.
    //minDistance: minimum allowed distance between corners; here 20.
    detector->detect(img1, kp1);

    //track the corners with OpenCV's calcOpticalFlowPyrLK()
    vector<Point2f> pt1, pt2;
    for(auto kp : kp1)
        pt1.push_back(kp.pt);
    vector<uchar> status;
    vector<float> error;
    calcOpticalFlowPyrLK(img1, img2, pt1, pt2, status, error);
    //status[i] indicates whether corner i was tracked correctly: 1 = tracked, 0 = lost
    //error[i] is the tracking error; by default OpenCV reports the L1 distance between the patches around the original and the moved point, divided by the number of pixels in the window

    //draw the optical flow
    Mat img2_CV;
    cvtColor(img2, img2_CV, COLOR_GRAY2BGR);  //convert the grayscale image to color; each BGR channel takes the original gray value
    for(int i = 0; i < pt2.size(); i++)
        if(status[i])
        {
            circle(img2_CV, pt2[i], 2, Scalar(0, 255, 0), 2);
            line(img2_CV, pt1[i], pt2[i], Scalar(0, 255, 0));
        }
    imshow("img2_CV", img2_CV);  //the filled circles are the end points of the flow
    waitKey(0);

    //draw the corner correspondences side by side
    Mat imgMatches(img1.rows, img1.cols * 2, CV_8UC1);  //a Mat with img1.rows rows and img1.cols * 2 columns
    Rect rect1 = Rect(0, 0, img1.cols, img1.rows);
    //Rect() takes four arguments: the first is the starting column, the second the starting row,
    //the third the width of the region (number of columns), the fourth its height (number of rows)
    Rect rect2 = Rect(img1.cols, 0, img2.cols, img2.rows);
    img1.copyTo(imgMatches(rect1));
    img2.copyTo(imgMatches(rect2));

    cvtColor(imgMatches, imgMatches, COLOR_GRAY2BGR);  //convert the image from CV_8UC1 to CV_8UC3
    RNG rng;  //OpenCV random number generator
    int successTrackCorner = 0;
    for(int i = 0; i < pt1.size(); i++)
        if(status[i])
        {
            Point2f point1, point2;
            point1 = pt1[i];
            point2 = pt2[i] + Point2f(img1.cols, 0);  //in a Point2f, x is the u axis of the image coordinate frame and y is the v axis
            circle(imgMatches, point1, 3, Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)), 2);
            circle(imgMatches, point2, 3, Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)), 2);
            line(imgMatches, point1, point2, Scalar(rng.uniform(0,255), rng.uniform(0, 255), rng.uniform(0, 255)), 1);
            //uniform() samples from a uniform distribution
            successTrackCorner++;
        }
    imshow("imgMatches", imgMatches);
    waitKey(0);

    cout << "Extracted " << pt1.size() << " corners from image 1, of which " << successTrackCorner << " were successfully tracked in image 2!" << endl;

    chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
    chrono::duration<double> timeUsed = chrono::duration_cast<chrono::duration<double>> (t2 - t1);
    cout << "Total program execution time: " << timeUsed.count() << " seconds!" << endl;
    return 0;
}

  The CMakeLists.txt file is as follows:

find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})

include_directories("/usr/include/eigen3")

add_executable(main main.cpp)
target_link_libraries(main ${OpenCV_LIBRARIES})

  The output is:

Extracted 229 corners from image 1, of which 229 were successfully tracked in image 2!
Total program execution time: 1.855 seconds!

(Figure: image 2 with the optical flow drawn on it. The filled circles mark the corners tracked in image 2; the other end of each line is the GFTT corner extracted from image 1.)
(Figure: side-by-side visualization of the corner correspondences between image 1 and image 2.)

2 Implementing Optical Flow with the Gauss-Newton Method

  For each corner $(u,v)$ in image 1, we solve the following optimization problem to find its correspondence $(u+\Delta u, v+\Delta v)$ in image 2:
$$\{\Delta u^*, \Delta v^*\} = \mathop{\arg\min}_{\Delta u,\, \Delta v} \sum_{i=-N}^{N-1}\sum_{j=-N}^{N-1} \left\lVert I_1(u+i,\, v+j) - I_2(u+i+\Delta u,\, v+j+\Delta v) \right\rVert_2^2$$
The two sums express a template-matching assumption: all pixels inside the $2N \times 2N$ patch are taken to move together. Denote the optimization variable by $x = (\Delta u, \Delta v)^T$.
  The Jacobian $J_{ij}$ is computed by central differences on image 2:
$$J_{ij} = -\begin{bmatrix} \dfrac{I_2(u+i+\Delta u+1,\, v+j+\Delta v) - I_2(u+i+\Delta u-1,\, v+j+\Delta v)}{2} \\[6pt] \dfrac{I_2(u+i+\Delta u,\, v+j+\Delta v+1) - I_2(u+i+\Delta u,\, v+j+\Delta v-1)}{2} \end{bmatrix}$$
  The error $e_{ij}$ is
$$e_{ij} = I_1(u+i,\, v+j) - I_2(u+i+\Delta u,\, v+j+\Delta v)$$
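The error and the Jacobian are tied together by the chain rule; a short derivation (added here, it is not spelled out in the original text) reads:

```latex
J_{ij} \;=\; \frac{\partial e_{ij}}{\partial x}
       \;=\; -\left.\begin{bmatrix}
                \partial I_2/\partial u \\[2pt]
                \partial I_2/\partial v
              \end{bmatrix}\right|_{(u+i+\Delta u,\; v+j+\Delta v)}
```

That is, $J_{ij}$ is the negative image gradient of $I_2$ at the shifted pixel; the central differences in the expression for $J_{ij}$ above approximate this gradient with a one-pixel step.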

  The (Gauss-Newton approximation of the) Hessian $H$, the vector $b$, and the cost $f$ are then
$$H = \sum_{i=-N}^{N-1}\sum_{j=-N}^{N-1} J_{ij} J_{ij}^T$$

$$b = \sum_{i=-N}^{N-1}\sum_{j=-N}^{N-1} -J_{ij}\, e_{ij}$$

$$f = \sum_{i=-N}^{N-1}\sum_{j=-N}^{N-1} \lVert e_{ij} \rVert_2^2$$
Each iteration solves the increment equation
$$H \cdot \Delta x = b$$
  With inverse optical flow, the Jacobian $J_{ij}$ is evaluated on image 1 instead, so it does not depend on $(\Delta u, \Delta v)$ and stays constant across iterations:
$$J_{ij} = -\begin{bmatrix} \dfrac{I_1(u+i+1,\, v+j) - I_1(u+i-1,\, v+j)}{2} \\[6pt] \dfrac{I_1(u+i,\, v+j+1) - I_1(u+i,\, v+j-1)}{2} \end{bmatrix}$$
The Hessian $H$, the vector $b$, and the cost $f$ are computed as before,
$$H = \sum_{i=-N}^{N-1}\sum_{j=-N}^{N-1} J_{ij} J_{ij}^T$$

$$b = \sum_{i=-N}^{N-1}\sum_{j=-N}^{N-1} -J_{ij}\, e_{ij}$$

$$f = \sum_{i=-N}^{N-1}\sum_{j=-N}^{N-1} \lVert e_{ij} \rVert_2^2$$
except that $H$ is accumulated only once, on the first iteration, and reused unchanged afterwards. Solve the same increment equation and update the optimization variable as before. In short, inverse optical flow replaces the gradient of image 2 with the gradient of image 1. This also explains why $H$ is accumulated only once and never updated during the iterations: $J_{ij}$ depends only on $I_1$ and the fixed patch location, so $H$ is a constant, and computing it a single time is precisely the computational saving that motivates the inverse formulation.

2.1 Single-Layer Optical Flow

  The cpp file is as follows:

#include <iostream>
#include <string>
#include <chrono>
#include <opencv2/opencv.hpp>
#include <Eigen/Core>  //Eigen core module
#include <Eigen/Dense>  //Eigen dense matrix module

using namespace std;
using namespace cv;
using namespace Eigen;

//image paths
string img1Path = "../LK1.png";
string img2Path = "../LK2.png";

//optical flow tracker
class OpticalFlowTracker
{
public:
    //constructor
    OpticalFlowTracker(
            const Mat& img1_,
            const Mat& img2_,
            const vector<KeyPoint>& kp1_,
            vector<KeyPoint>& kp2_,
            vector<bool>& success_,
            bool inverse_ = true,
            bool has_initial_ = false
            ) :
            img1(img1_), img2(img2_), kp1(kp1_), kp2(kp2_), success(success_), inverse(inverse_), has_initial(has_initial_) {}

    void calculateOpticalFlow(const Range& range);
    //Range has two key members, start and end. A Range can denote a set of consecutive rows or columns of a matrix; it spans from start (inclusive) to end (exclusive).

private:
    const Mat& img1;
    const Mat& img2;
    const vector<KeyPoint>& kp1;
    vector<KeyPoint>& kp2;
    vector<bool>& success;
    bool inverse = true;
    bool has_initial = false;
};

//declaration of the single-level optical flow function
/**
 * single level optical flow
 * @param img1: input, the first image
 * @param img2: input, the second image
 * @param kp1: input, keypoints in img1
 * @param kp2: output, keypoints in img2, if empty, use initial guess in kp1
 * @param success: output, true if a keypoint is tracked successfully
 * @param inverse: input, use inverse formulation?
 * @param has_initial_guess: input, whether kp2 already contains an initial guess
 */
void OpticalFlowSingleLevel(
        const Mat& img1,
        const Mat& img2,
        const vector<KeyPoint>& kp1,
        vector<KeyPoint>& kp2,
        vector<bool>& success,
        bool inverse = false,  //default arguments are specified here in the declaration; they are not repeated in the definition below
        bool has_initial_guess = false
        );

//bilinear interpolation of the gray value
inline float GetPixelValue(const Mat& img, float x, float y)  //inline suggests inlining, which avoids call overhead for this small, frequently called function
{
    //boundary checks
    if(x < 0) x = 0;
    if(y < 0) y = 0;
    if(x >= img.cols - 1) x = img.cols - 2;  //clamp to cols-2 so the (xInteger+1) access below stays inside the image
    if(y >= img.rows - 1) y = img.rows - 2;

    //integer and fractional parts of the floats x and y
    int xInteger = (int)x;
    int yInteger = (int)y;
    float xFraction = x - xInteger;
    float yFraction = y - yInteger;

    //...I1      I2...
    //      I
    //
    //...I3      I4...

    //the four neighboring gray values I1, I2, I3 and I4
    uchar I1 = img.at<uchar>(yInteger, xInteger);
    uchar I2 = img.at<uchar>(yInteger, xInteger+1);
    uchar I3 = img.at<uchar>(yInteger+1, xInteger);
    uchar I4 = img.at<uchar>(yInteger+1, xInteger+1);

    //their bilinear interpolation weights
    float w1 = (1 - xFraction) * (1 - yFraction);
    float w2 = xFraction * (1 - yFraction);
    float w3 = (1 - xFraction) * yFraction;
    float w4 = xFraction * yFraction;

    //the interpolated gray value
    float I = w1 * I1 + w2 * I2 + w3 * I3 + w4 * I4;

    return I;
}


void drawOpticalFlow(Mat img2, vector<KeyPoint> kp1, vector<KeyPoint> kp2, vector<bool> success);  //draw the optical flow
void myDrawMatches(Mat img1, Mat img2, vector<KeyPoint> kp1, vector<KeyPoint> kp2, vector<bool> success);  //draw the matches


int main()
{
    chrono::steady_clock::time_point t1 = chrono::steady_clock::now();

    Mat img1 = imread(img1Path, 0);  //flag 0 loads the image as grayscale
    Mat img2 = imread(img2Path, 0);  //flag 0 loads the image as grayscale

    vector<KeyPoint> kp1;
    Ptr<GFTTDetector> detector = GFTTDetector::create(500, 0.01, 20);
    detector->detect(img1, kp1);

    cout << "Extracted " << kp1.size() << " GFTT corners from image 1!" << endl;

    vector<KeyPoint> kp2_single;
    vector<bool> success_single;
    OpticalFlowSingleLevel(img1, img2, kp1, kp2_single, success_single);  //single-level optical flow tracking
    //inverse and has_initial_guess both default to false in the declaration


    //draw the optical flow
    drawOpticalFlow(img2, kp1, kp2_single, success_single);

    //draw the matches
    myDrawMatches(img1, img2, kp1, kp2_single, success_single);

    chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
    chrono::duration<double> timeUsed = chrono::duration_cast<chrono::duration<double>> (t2 - t1);
    cout << "Total program execution time: " << timeUsed.count() << " seconds!" << endl;
    return 0;
}


//draw the optical flow
void drawOpticalFlow(Mat img2, vector<KeyPoint> kp1, vector<KeyPoint> kp2, vector<bool> success)
{
    Mat img2Color;
    int successTrack = 0;
    cvtColor(img2, img2Color, COLOR_GRAY2BGR);  //img2 is CV_8UC1, img2Color is CV_8UC3: grayscale to color
    for(int i = 0; i < kp1.size(); i++)
        if(success[i])
        {
            successTrack++;

            Point2f pixel1 = kp1[i].pt;
            Point2f pixel2 = kp2[i].pt;
            circle(img2Color, pixel2, 2, Scalar(0, 255, 0), 2);
            line(img2Color, pixel1, pixel2, Scalar(0, 255, 0), 1);
        }
    cout << "Successfully tracked " << successTrack << " GFTT corners in image 2!" << endl;

    imshow("optical flow", img2Color);
    waitKey(0);  //wait for a key press
}


//draw the matches
void myDrawMatches(Mat img1, Mat img2, vector<KeyPoint> kp1, vector<KeyPoint> kp2, vector<bool> success)
{
    Mat imgMatches(img1.rows, img1.cols * 2, CV_8UC1);
    Rect rect1 = Rect(0, 0, img1.cols, img1.rows);
    Rect rect2 = Rect(img1.cols, 0, img2.cols, img2.rows);
    img1.copyTo(imgMatches(rect1));
    img2.copyTo(imgMatches(rect2));
    cvtColor(imgMatches, imgMatches, COLOR_GRAY2BGR);  //convert to a color image

    RNG rng;
    for(int i = 0; i < kp1.size(); i++)
        if(success[i])
        {
            Point2f pixel1 = kp1[i].pt;
            Point2f pixel2 = kp2[i].pt + Point2f(img1.cols, 0);
            circle(imgMatches, pixel1, 3, Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)), 2);
            circle(imgMatches, pixel2, 3, Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)), 2);
            line(imgMatches, pixel1, pixel2, Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)), 1);
        }
    imshow("GFTT corner matches", imgMatches);
    waitKey(0);  //wait for a key press
}


void OpticalFlowSingleLevel(
        const Mat& img1,
        const Mat& img2,
        const vector<KeyPoint>& kp1,
        vector<KeyPoint>& kp2,
        vector<bool>& success,
        bool inverse,
        bool has_initial
)
{
    kp2.resize(kp1.size());
    success.resize(kp1.size());
    //construct and initialize an OpticalFlowTracker
    OpticalFlowTracker tracker(img1, img2, kp1, kp2, success, inverse, has_initial);
    Range range = Range(0, kp1.size());
    //tracker.calculateOpticalFlow(range);  //serial execution
    parallel_for_(Range(0, kp1.size()), std::bind(&OpticalFlowTracker::calculateOpticalFlow, &tracker, placeholders::_1));
    //parallel_for_() invokes OpticalFlowTracker::calculateOpticalFlow() in parallel over sub-ranges
}


//solve for the corresponding corner coordinates in image 2 with Gauss-Newton
void OpticalFlowTracker::calculateOpticalFlow(const Range& range)
{
    int half_patch_size = 4;
    int maxIteration = 10;

    for(size_t i = range.start; i < range.end; i++)  //run a Gauss-Newton optimization for every GFTT corner of image 1
    {
        auto kp = kp1[i];
        double dx = 0, dy = 0;  //optimization variables

        if(has_initial)  //executed when kp2 carries an initial guess
        {
            dx = kp2[i].pt.x - kp.pt.x;
            dy = kp2[i].pt.y - kp.pt.y;
        }

        double currentCost = 0, lastCost = 0;
        bool succ = true;

        Eigen::Matrix2d H = Eigen::Matrix2d::Zero();
        Eigen::Vector2d b = Eigen::Vector2d::Zero();
        Eigen::Vector2d Jij;


        for(int iter = 0; iter < maxIteration; iter++)  //Gauss-Newton iterations
        {
            if(inverse == false)
            {
                H = Eigen::Matrix2d::Zero();
                b = Eigen::Vector2d::Zero();
            }
            else
                b = Eigen::Vector2d::Zero();  //only b is reset: in inverse mode the Hessian H stays fixed throughout the Gauss-Newton iterations

            currentCost = 0;
            for(int x = -half_patch_size; x < half_patch_size; x++)
                for(int y = -half_patch_size; y < half_patch_size; y++)
                {
                    double error = GetPixelValue(img1, kp.pt.x + x, kp.pt.y + y) - GetPixelValue(img2, kp.pt.x + x + dx, kp.pt.y + y + dy);
                    //x and y iterate over the patch; dx and dy are the optimization variables

                    //the Jacobian is computed differently for forward and inverse optical flow
                    if(inverse == false)
                        Jij = -1.0 * Vector2d(0.5 * (GetPixelValue(img2, kp.pt.x + x + dx + 1, kp.pt.y + y + dy) - GetPixelValue(img2, kp.pt.x + x + dx - 1, kp.pt.y + y + dy)),
                                              0.5 * (GetPixelValue(img2, kp.pt.x + x + dx, kp.pt.y + y + dy + 1) - GetPixelValue(img2, kp.pt.x + x + dx, kp.pt.y + y + dy - 1)));
                    else if (iter == 0)
                        Jij = -1.0 * Vector2d(0.5 * (GetPixelValue(img1, kp.pt.x + x + 1, kp.pt.y + y) - GetPixelValue(img1, kp.pt.x + x - 1, kp.pt.y + y)),
                                              0.5 * (GetPixelValue(img1, kp.pt.x + x, kp.pt.y + y + 1) - GetPixelValue(img1, kp.pt.x + x, kp.pt.y + y - 1)));
                        //inverse mode: the gradient is taken on image 1 at the fixed patch pixel (note the + x, + y offsets, missing in some versions of this code), so it is evaluated only on the first iteration

                    b += -error * Jij;
                    currentCost += error * error;
                    if(inverse == false || iter == 0)
                        H += Jij * Jij.transpose();
                }//finished iterating over all pixels in the patch

            //solve the increment equation for the update
            Vector2d update = H.ldlt().solve(b);

            if(isnan(update[0]))
            {
                //cout << "update is NaN; tracking failed, exiting the GN iterations!" << endl;
                succ = false;
                break;
            }

            if(iter > 0 && currentCost >= lastCost)
            {
                //cout << "cost no longer decreasing, exiting the GN iterations!" << endl;
                break;
            }

            //update the optimization variables and lastCost
            dx += update[0];
            dy += update[1];
            lastCost = currentCost;

            if(update.norm() < 1e-2)
            {
                //cout << "update norm below 1e-2, exiting the GN iterations!" << endl;
                break;
            }
        }//finished the GN iterations for this corner

        success[i]  = succ;
        kp2[i].pt = kp.pt + Point2f(dx, dy);

    }//optical flow tracking done for all corners of image 1


}

  The CMakeLists.txt file is as follows:

find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})

include_directories("/usr/include/eigen3")

add_executable(main main.cpp)
target_link_libraries(main ${OpenCV_LIBRARIES})

  The output is:

Extracted 229 GFTT corners from image 1!
Successfully tracked 229 GFTT corners in image 2!
Total program execution time: 2.15149 seconds!

(Figures: optical-flow and corner-match visualizations for the single-layer Gauss-Newton tracker.)

2.2 Multi-Layer Optical Flow

  Because we have cast optical flow as an optimization problem, the initial guess must be reasonably close to the optimum for the algorithm to converge. If the camera moves quickly and the two images differ significantly, single-layer optical flow easily falls into a local minimum. This can be mitigated with an image pyramid: the same image is rescaled to obtain versions at different resolutions. The original image forms the bottom layer, and each layer above is a downscaled copy of the layer below. When computing optical flow, we start at the top (coarsest) layer and use each layer's tracking result $(\Delta x, \Delta y)$, suitably scaled, as the initial value for the layer below. Since the upper layers are coarse, this procedure is called coarse-to-fine optical flow, and it is the standard pipeline of practical optical-flow methods.
  The cpp file is as follows:

#include <iostream>
#include <string>
#include <chrono>
#include <opencv2/opencv.hpp>
#include <Eigen/Core>  //Eigen core module
#include <Eigen/Dense>  //Eigen dense matrix module

using namespace std;
using namespace cv;
using namespace Eigen;

//image paths
string img1Path = "../LK1.png";
string img2Path = "../LK2.png";

//optical flow tracker
class OpticalFlowTracker
{
public:
    //constructor
    OpticalFlowTracker(
            const Mat& img1_,
            const Mat& img2_,
            const vector<KeyPoint>& kp1_,
            vector<KeyPoint>& kp2_,
            vector<bool>& success_,
            bool inverse_ = true,
            bool has_initial_ = false
            ) :
            img1(img1_), img2(img2_), kp1(kp1_), kp2(kp2_), success(success_), inverse(inverse_), has_initial(has_initial_) {}

    void calculateOpticalFlow(const Range& range);
    //Range has two key members, start and end. A Range can denote a set of consecutive rows or columns of a matrix; it spans from start (inclusive) to end (exclusive).

private:
    const Mat& img1;
    const Mat& img2;
    const vector<KeyPoint>& kp1;
    vector<KeyPoint>& kp2;
    vector<bool>& success;
    bool inverse = true;
    bool has_initial = false;
};


//declaration of the single-level optical flow function
/**
 * single level optical flow
 * @param img1: input, the first image
 * @param img2: input, the second image
 * @param kp1: input, keypoints in img1
 * @param kp2: output, keypoints in img2, if empty, use initial guess in kp1
 * @param success: output, true if a keypoint is tracked successfully
 * @param inverse: input, use inverse formulation?
 * @param has_initial_guess: input, whether kp2 already contains an initial guess
 */
void OpticalFlowSingleLevel(
        const Mat& img1,
        const Mat& img2,
        const vector<KeyPoint>& kp1,
        vector<KeyPoint>& kp2,
        vector<bool>& success,
        bool inverse = false,  //default arguments are specified here in the declaration; they are not repeated in the definition below
        bool has_initial_guess = false
);


//declaration of the multi-level optical flow function
/**
 * multi-level optical flow
 * @param img1: input, the first image
 * @param img2: input, the second image
 * @param kp1: input, GFTT corners in img1
 * @param kp2: output, corners tracked in img2
 * @param success: output, whether each corner was tracked successfully
 * @param inverse: input, use the inverse formulation? defaults to false
 */
void OpticalFlowMultiLevel(
        const Mat& img1,
        const Mat& img2,
        const vector<KeyPoint>& kp1,
        vector<KeyPoint>& kp2,
        vector<bool>& success,
        bool inverse = false
        );


//bilinear interpolation of the gray value
inline float GetPixelValue(const Mat& img, float x, float y)  //inline suggests inlining, which avoids call overhead for this small, frequently called function
{
    //boundary checks
    if(x < 0) x = 0;
    if(y < 0) y = 0;
    if(x >= img.cols - 1) x = img.cols - 2;  //clamp to cols-2 so the (xInteger+1) access below stays inside the image
    if(y >= img.rows - 1) y = img.rows - 2;

    //integer and fractional parts of the floats x and y
    int xInteger = (int)x;
    int yInteger = (int)y;
    float xFraction = x - xInteger;
    float yFraction = y - yInteger;

    //...I1      I2...
    //      I
    //
    //...I3      I4...

    //the four neighboring gray values I1, I2, I3 and I4
    uchar I1 = img.at<uchar>(yInteger, xInteger);
    uchar I2 = img.at<uchar>(yInteger, xInteger+1);
    uchar I3 = img.at<uchar>(yInteger+1, xInteger);
    uchar I4 = img.at<uchar>(yInteger+1, xInteger+1);

    //their bilinear interpolation weights
    float w1 = (1 - xFraction) * (1 - yFraction);
    float w2 = xFraction * (1 - yFraction);
    float w3 = (1 - xFraction) * yFraction;
    float w4 = xFraction * yFraction;

    //the interpolated gray value
    float I = w1 * I1 + w2 * I2 + w3 * I3 + w4 * I4;

    return I;
}


void drawOpticalFlow(Mat img2, vector<KeyPoint> kp1, vector<KeyPoint> kp2, vector<bool> success);  //draw the optical flow
void myDrawMatches(Mat img1, Mat img2, vector<KeyPoint> kp1, vector<KeyPoint> kp2, vector<bool> success);  //draw the matches


int main()
{
    chrono::steady_clock::time_point t1 = chrono::steady_clock::now();

    Mat img1 = imread(img1Path, 0);  //flag 0 loads the image as grayscale
    Mat img2 = imread(img2Path, 0);  //flag 0 loads the image as grayscale

    vector<KeyPoint> kp1;
    Ptr<GFTTDetector> detector = GFTTDetector::create(500, 0.01, 20);
    detector->detect(img1, kp1);

    cout << "Extracted " << kp1.size() << " GFTT corners from image 1!" << endl;

    vector<KeyPoint> kp2_multi;
    vector<bool> success_multi;
    OpticalFlowMultiLevel(img1, img2, kp1, kp2_multi, success_multi, true);  //4-level optical flow tracking
    //inverse is set to true: use the inverse formulation


    //draw the optical flow
    drawOpticalFlow(img2, kp1, kp2_multi, success_multi);

    //draw the matches
    myDrawMatches(img1, img2, kp1, kp2_multi, success_multi);

    chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
    chrono::duration<double> timeUsed = chrono::duration_cast<chrono::duration<double>> (t2 - t1);
    cout << "Total program execution time: " << timeUsed.count() << " seconds!" << endl;
    return 0;
}


//draw the optical flow
void drawOpticalFlow(Mat img2, vector<KeyPoint> kp1, vector<KeyPoint> kp2, vector<bool> success)
{
    Mat img2Color;
    int successTrack = 0;
    cvtColor(img2, img2Color, COLOR_GRAY2BGR);  //img2 is CV_8UC1, img2Color is CV_8UC3: grayscale to color
    for(int i = 0; i < kp1.size(); i++)
        if(success[i])
        {
            successTrack++;

            Point2f pixel1 = kp1[i].pt;
            Point2f pixel2 = kp2[i].pt;
            circle(img2Color, pixel2, 2, Scalar(0, 255, 0), 2);
            line(img2Color, pixel1, pixel2, Scalar(0, 255, 0), 1);
        }
    cout << "Successfully tracked " << successTrack << " GFTT corners in image 2!" << endl;

    imshow("optical flow", img2Color);
    waitKey(0);  //wait for a key press
}


//draw the matches
void myDrawMatches(Mat img1, Mat img2, vector<KeyPoint> kp1, vector<KeyPoint> kp2, vector<bool> success)
{
    Mat imgMatches(img1.rows, img1.cols * 2, CV_8UC1);
    Rect rect1 = Rect(0, 0, img1.cols, img1.rows);
    Rect rect2 = Rect(img1.cols, 0, img2.cols, img2.rows);
    img1.copyTo(imgMatches(rect1));
    img2.copyTo(imgMatches(rect2));
    cvtColor(imgMatches, imgMatches, COLOR_GRAY2BGR);  //convert to a color image

    RNG rng;
    for(int i = 0; i < kp1.size(); i++)
        if(success[i])
        {
            Point2f pixel1 = kp1[i].pt;
            Point2f pixel2 = kp2[i].pt + Point2f(img1.cols, 0);
            circle(imgMatches, pixel1, 3, Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)), 2);
            circle(imgMatches, pixel2, 3, Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)), 2);
            line(imgMatches, pixel1, pixel2, Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)), 1);
        }
    imshow("GFTT corner matches", imgMatches);
    waitKey(0);  //wait for a key press
}


void OpticalFlowSingleLevel(
        const Mat& img1,
        const Mat& img2,
        const vector<KeyPoint>& kp1,
        vector<KeyPoint>& kp2,
        vector<bool>& success,
        bool inverse,
        bool has_initial
)
{
    kp2.resize(kp1.size());
    success.resize(kp1.size());
    //construct and initialize an OpticalFlowTracker
    OpticalFlowTracker tracker(img1, img2, kp1, kp2, success, inverse, has_initial);
    Range range = Range(0, kp1.size());
    //tracker.calculateOpticalFlow(range);  //serial execution
    parallel_for_(Range(0, kp1.size()), std::bind(&OpticalFlowTracker::calculateOpticalFlow, &tracker, placeholders::_1));
    //parallel_for_() invokes OpticalFlowTracker::calculateOpticalFlow() in parallel over sub-ranges
}



void OpticalFlowMultiLevel(
        const Mat& img1,
        const Mat& img2,
        const vector<KeyPoint>& kp1,
        vector<KeyPoint>& kp2,
        vector<bool>& success,
        bool inverse
)
{
    int pyramids = 4;  //number of pyramid levels
    double pyramidScale = 0.5;  //scale factor between adjacent levels
    double scales[] = {1.0, 0.5, 0.25, 0.125};

    //build the image pyramids
    vector<Mat> pyr1;
    vector<Mat> pyr2;
    for(int i = 0; i < pyramids; i++)
    {
        if(i == 0)
        {
            pyr1.push_back(img1);
            pyr2.push_back(img2);
        }
        else
        {
            Mat img1_pyr;
            Mat img2_pyr;
            //downscale pyr1[i-1] by 0.5 in both width and height to obtain img1_pyr
            resize(pyr1[i-1], img1_pyr, Size(pyr1[i-1].cols * pyramidScale, pyr1[i-1].rows * pyramidScale));
            //downscale pyr2[i-1] by 0.5 in both width and height to obtain img2_pyr
            resize(pyr2[i-1], img2_pyr, Size(pyr2[i-1].cols * pyramidScale, pyr2[i-1].rows * pyramidScale));
            pyr1.push_back(img1_pyr);
            pyr2.push_back(img2_pyr);
        }
    }

    //coarse-to-fine optical flow tracking
    vector<KeyPoint> kp1_pyr;
    vector<KeyPoint> kp2_pyr;
    for(auto &kp : kp1)
    {
        auto kp_top = kp;
        kp_top.pt *= scales[pyramids - 1];
        kp1_pyr.push_back(kp_top);  //corner coordinates of image 1 at the top level
        kp2_pyr.push_back(kp_top);  //corner coordinates of image 2 at the top level, initialized from image 1
    }

    //track from the top level downward
    for(int level = pyramids - 1; level >= 0; level--)
    {
        success.clear();
        OpticalFlowSingleLevel(pyr1[level], pyr2[level], kp1_pyr, kp2_pyr, success, inverse, true);
        //has_initial is set to true: the image-2 corners kp2_pyr already carry an initial guess

        if(level > 0)
        {
            for(auto& kp : kp1_pyr)
                kp.pt /= pyramidScale;  //pyramidScale is 0.5, so this multiplies by 2
            for(auto& kp : kp2_pyr)
                kp.pt /= pyramidScale;  //pyramidScale is 0.5, so this multiplies by 2
        }
    }

    for(auto& kp : kp2_pyr)
        kp2.push_back(kp);  //kp1 is an input and stays untouched; only the output kp2 needs to be stored

}


//solve for the corresponding corner coordinates in image 2 with Gauss-Newton
void OpticalFlowTracker::calculateOpticalFlow(const Range& range)
{
    int half_patch_size = 4;
    int maxIteration = 10;

    for(size_t i = range.start; i < range.end; i++)  //run a Gauss-Newton optimization for every GFTT corner of image 1
    {
        auto kp = kp1[i];
        double dx = 0, dy = 0;  //optimization variables

        if(has_initial)  //executed when kp2 carries an initial guess
        {
            dx = kp2[i].pt.x - kp.pt.x;
            dy = kp2[i].pt.y - kp.pt.y;
        }

        double currentCost = 0, lastCost = 0;
        bool succ = true;

        Eigen::Matrix2d H = Eigen::Matrix2d::Zero();
        Eigen::Vector2d b = Eigen::Vector2d::Zero();
        Eigen::Vector2d Jij;


        for(int iter = 0; iter < maxIteration; iter++)  //Gauss-Newton iterations
        {
            if(inverse == false)
            {
                H = Eigen::Matrix2d::Zero();
                b = Eigen::Vector2d::Zero();
            }
            else
                b = Eigen::Vector2d::Zero();  //only b is reset: in inverse mode the Hessian H stays fixed throughout the Gauss-Newton iterations

            currentCost = 0;
            for(int x = -half_patch_size; x < half_patch_size; x++)
                for(int y = -half_patch_size; y < half_patch_size; y++)
                {
                    double error = GetPixelValue(img1, kp.pt.x + x, kp.pt.y + y) - GetPixelValue(img2, kp.pt.x + x + dx, kp.pt.y + y + dy);
                    //x and y iterate over the patch; dx and dy are the optimization variables

                    //the Jacobian is computed differently for forward and inverse optical flow
                    if(inverse == false)
                        Jij = -1.0 * Vector2d(0.5 * (GetPixelValue(img2, kp.pt.x + x + dx + 1, kp.pt.y + y + dy) - GetPixelValue(img2, kp.pt.x + x + dx - 1, kp.pt.y + y + dy)),
                                              0.5 * (GetPixelValue(img2, kp.pt.x + x + dx, kp.pt.y + y + dy + 1) - GetPixelValue(img2, kp.pt.x + x + dx, kp.pt.y + y + dy - 1)));
                    else if (iter == 0)
                        Jij = -1.0 * Vector2d(0.5 * (GetPixelValue(img1, kp.pt.x + x + 1, kp.pt.y + y) - GetPixelValue(img1, kp.pt.x + x - 1, kp.pt.y + y)),
                                              0.5 * (GetPixelValue(img1, kp.pt.x + x, kp.pt.y + y + 1) - GetPixelValue(img1, kp.pt.x + x, kp.pt.y + y - 1)));
                        //inverse mode: the Jacobian is evaluated on image 1 at the fixed patch pixel (offsets x and y only, no dx or dy), so it only needs to be computed on the first iteration
                    b += -error * Jij;
                    currentCost += error * error;
                    if(inverse == false || iter == 0)
                        H += Jij * Jij.transpose();  //in inverse mode Jij is constant, so H is accumulated once (iter == 0) and reused unchanged in later iterations
                }//finished iterating over all pixels in the patch

            //solve the increment equation for the update
            Vector2d update = H.ldlt().solve(b);

            if(isnan(update[0]))
            {
                //cout << "update is NaN; tracking failed, exiting the GN iterations!" << endl;
                succ = false;
                break;
            }

            if(iter > 0 && currentCost >= lastCost)
            {
                //cout << "cost no longer decreasing, exiting the GN iterations!" << endl;
                break;
            }

            //update the optimization variables and lastCost
            dx += update[0];
            dy += update[1];
            lastCost = currentCost;

            if(update.norm() < 1e-2)
            {
                //cout << "update norm below 1e-2, exiting the GN iterations!" << endl;
                break;
            }
        }//finished the GN iterations for this corner

        success[i]  = succ;
        kp2[i].pt = kp.pt + Point2f(dx, dy);

    }//optical flow tracking done for all corners of image 1


}

  The CMakeLists.txt file is as follows:

find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})

include_directories("/usr/include/eigen3")

add_executable(main main.cpp)
target_link_libraries(main ${OpenCV_LIBRARIES})

  The output is:

Extracted 229 GFTT corners from image 1!
Successfully tracked 229 GFTT corners in image 2!
Total program execution time: 1.80817 seconds!

(Figures: optical-flow and corner-match visualizations for the multi-layer tracker.)

Author: YMWM_