Binocular Stereo Imaging (3): Disparity Map Hole Filling

In Binocular Stereo Imaging (2), the point cloud generated after undistortion and stereo rectification was noticeably better, but it still contains many holes. Most of the unreliable disparity values in the disparity map are caused by occlusion or uneven illumination, and they can be filled in from nearby reliable disparity values. The steps are as follows:

① Compute the integral image of the disparity map, and alongside it store the number n of pixels accumulated into each integral value (hole pixels are excluded from n: a hole has value 0, so it contributes nothing to the sum, but counting it in n would drag down the window mean and over-smooth the image).

② Apply multi-level mean filtering. First run a mean filter with a large initial window to assign values to the large hole regions. On the next pass, halve the window size and filter again using the same integral image, filling the smaller holes (overwriting the previous values); repeat until the window size reaches 3x3, then stop. The result of the final pass is the output.

③ The rationale for multi-level filtering is that large hole regions need a larger neighborhood to draw values from: a small filter window cannot fill them completely, while using a large window everywhere would severely smooth the image. The window size is therefore adapted to the hole size: a large window first assigns values to all holes, and progressively smaller windows then overwrite those values. This guarantees that every hole gets filled without over-smoothing the image.

The hole-filling function is as follows:

void insertDepth32f(cv::Mat& depth)
{
    const int width = depth.cols;
    const int height = depth.rows;
    float* data = (float*)depth.data;
    cv::Mat integralMap = cv::Mat::zeros(height, width, CV_64F);
    cv::Mat ptsMap = cv::Mat::zeros(height, width, CV_32S);
    double* integral = (double*)integralMap.data;
    int* ptsIntegral = (int*)ptsMap.data;
    memset(integral, 0, sizeof(double) * width * height);
    memset(ptsIntegral, 0, sizeof(int) * width * height);
    // seed the integral buffers: holes (values near 0) contribute nothing
    for (int i = 0; i < height; ++i)
    {
        int id1 = i * width;
        for (int j = 0; j < width; ++j)
        {
            int id2 = id1 + j;
            if (data[id2] > 1e-3)
            {
                integral[id2] = data[id2];
                ptsIntegral[id2] = 1;
            }
        }
    }
    // prefix sums: first along rows, then down columns
    for (int i = 0; i < height; ++i)
    {
        int id1 = i * width;
        for (int j = 1; j < width; ++j)
        {
            int id2 = id1 + j;
            integral[id2] += integral[id2 - 1];
            ptsIntegral[id2] += ptsIntegral[id2 - 1];
        }
    }
    for (int i = 1; i < height; ++i)
    {
        int id1 = i * width;
        for (int j = 0; j < width; ++j)
        {
            int id2 = id1 + j;
            integral[id2] += integral[id2 - width];
            ptsIntegral[id2] += ptsIntegral[id2 - width];
        }
    }
    int wnd;
    // initial window half-width; each pass halves it until it reaches 1.
    // Increase this (e.g. to 64) to fill larger holes over several passes.
    double dWnd = 2;
    while (dWnd > 1)
    {
        wnd = int(dWnd);
        dWnd /= 2;
        for (int i = 0; i < height; ++i)
        {
            int id1 = i * width;
            for (int j = 0; j < width; ++j)
            {
                int id2 = id1 + j;
                // clamp the window to the image border
                int left = std::max(0, j - wnd - 1);
                int right = std::min(j + wnd, width - 1);
                int top = std::max(0, i - wnd - 1);
                int bot = std::min(i + wnd, height - 1);
                int dx = right - left;
                int dy = (bot - top) * width;
                int idLeftTop = top * width + left;
                int idRightTop = idLeftTop + dx;
                int idLeftBot = idLeftTop + dy;
                int idRightBot = idLeftBot + dx;
                // box sum and valid-pixel count from the two integral images
                int ptsCnt = ptsIntegral[idRightBot] + ptsIntegral[idLeftTop] - (ptsIntegral[idLeftBot] + ptsIntegral[idRightTop]);
                double sumGray = integral[idRightBot] + integral[idLeftTop] - (integral[idLeftBot] + integral[idRightTop]);
                if (ptsCnt <= 0)
                {
                    continue;   // no valid pixels in the window: leave as is
                }
                data[id2] = float(sumGray / ptsCnt);
            }
        }
        // light Gaussian smoothing after each pass (kernel size must be odd)
        int s = wnd / 2 * 2 + 1;
        if (s > 201)
        {
            s = 201;
        }
        cv::GaussianBlur(depth, depth, cv::Size(s, s), s, s);
    }
}

After undistorting and stereo-rectifying the left and right images, compute the disparity map and then fill its holes.

Original images:

After undistortion & stereo rectification:

Generated point cloud:

Point cloud after hole filling:

Although the filled result recovers more object detail than the unfilled one, textureless regions such as the white wall remain holes. By default the left disparity map is computed; to compute the right disparity map instead, take care when setting the minimum disparity and the disparity range. Since the resulting disparity values are negative and disp has type 16SC1, take the absolute value before saving. The code is as follows:

#include <opencv2/opencv.hpp>
#include <vector>
#include <string>
#include <Eigen/Core>
#include <pangolin/pangolin.h>
#include <unistd.h>
using namespace std; 
using namespace Eigen;

void showPointCloud(const vector<Vector4d, aligned_allocator<Vector4d>> &pointcloud);


void insertDepth32f(cv::Mat& depth)
{
    const int width = depth.cols;
    const int height = depth.rows;
    float* data = (float*)depth.data;
    cv::Mat integralMap = cv::Mat::zeros(height, width, CV_64F);
    cv::Mat ptsMap = cv::Mat::zeros(height, width, CV_32S);
    double* integral = (double*)integralMap.data;
    int* ptsIntegral = (int*)ptsMap.data;
    memset(integral, 0, sizeof(double) * width * height);
    memset(ptsIntegral, 0, sizeof(int) * width * height);
    for (int i = 0; i < height; ++i)
    {
        int id1 = i * width;
        for (int j = 0; j < width; ++j)
        {
            int id2 = id1 + j;
            if (data[id2] > 1e-3)
            {
                integral[id2] = data[id2];
                ptsIntegral[id2] = 1;
            }
        }
    }
    // prefix sums: first along rows, then down columns
    for (int i = 0; i < height; ++i)
    {
        int id1 = i * width;
        for (int j = 1; j < width; ++j)
        {
            int id2 = id1 + j;
            integral[id2] += integral[id2 - 1];
            ptsIntegral[id2] += ptsIntegral[id2 - 1];
        }
    }
    for (int i = 1; i < height; ++i)
    {
        int id1 = i * width;
        for (int j = 0; j < width; ++j)
        {
            int id2 = id1 + j;
            integral[id2] += integral[id2 - width];
            ptsIntegral[id2] += ptsIntegral[id2 - width];
        }
    }
    int wnd;
    // initial window half-width; each pass halves it until it reaches 1.
    // Increase this (e.g. to 64) to fill larger holes over several passes.
    double dWnd = 2;
    while (dWnd > 1)
    {
        wnd = int(dWnd);
        dWnd /= 2;
        for (int i = 0; i < height; ++i)
        {
            int id1 = i * width;
            for (int j = 0; j < width; ++j)
            {
                int id2 = id1 + j;
                int left = j - wnd - 1;
                int right = j + wnd;
                int top = i - wnd - 1;
                int bot = i + wnd;
                left = max(0, left);
                right = min(right, width - 1);
                top = max(0, top);
                bot = min(bot, height - 1);
                int dx = right - left;
                int dy = (bot - top) * width;
                int idLeftTop = top * width + left;
                int idRightTop = idLeftTop + dx;
                int idLeftBot = idLeftTop + dy;
                int idRightBot = idLeftBot + dx;
                int ptsCnt = ptsIntegral[idRightBot] + ptsIntegral[idLeftTop] - (ptsIntegral[idLeftBot] + ptsIntegral[idRightTop]);
                double sumGray = integral[idRightBot] + integral[idLeftTop] - (integral[idLeftBot] + integral[idRightTop]);
                if (ptsCnt <= 0)
                {
                    continue;
                }
                data[id2] = float(sumGray / ptsCnt);
            }
        }
        int s = wnd / 2 * 2 + 1;
        if (s > 201)
        {
            s = 201;
        }
        cv::GaussianBlur(depth, depth, cv::Size(s, s), s, s);
    }
}


int main(int argc, char **argv)
{
    if (argc != 3)
    {
        cout << "ERROR: Please input pictures!" << endl;
        return -1;
    }
    cv::Mat left_image = cv::imread(argv[1], 0);
    cv::Mat right_image = cv::imread(argv[2], 0);
    // intrinsics
    double fx = 392.7540541781677, fy = 395.9067564822596, cx = 320.5167125964447, cy = 214.0481721219656;
    // baseline in mm
    double b = 60;
    cv::Mat disparity_sgbm, disparity;
    // SGBM parameter setup
    enum { STEREO_BM = 0, STEREO_SGBM = 1, STEREO_HH = 2, STEREO_VAR = 3, STEREO_3WAY = 4 };
    int numberOfDisparities = ((left_image.size().width / 8) + 15) & -16;
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 16, 3);
    sgbm->setPreFilterCap(63);
    int SADWindowSize = 9;
    int sgbmWinSize = SADWindowSize > 0 ? SADWindowSize : 3;
    sgbm->setBlockSize(sgbmWinSize);
    int cn = left_image.channels();
    sgbm->setP1(8 * cn*sgbmWinSize*sgbmWinSize);
    sgbm->setP2(32 * cn*sgbmWinSize*sgbmWinSize);
    sgbm->setMinDisparity(0);
    sgbm->setNumDisparities(numberOfDisparities);
    sgbm->setUniquenessRatio(10);
    sgbm->setSpeckleWindowSize(100);
    sgbm->setSpeckleRange(32);
    sgbm->setDisp12MaxDiff(1);
    int alg = STEREO_SGBM;
    if (alg == STEREO_HH)
        sgbm->setMode(cv::StereoSGBM::MODE_HH);
    else if (alg == STEREO_SGBM)
        sgbm->setMode(cv::StereoSGBM::MODE_SGBM);
    else if (alg == STEREO_3WAY)
        sgbm->setMode(cv::StereoSGBM::MODE_SGBM_3WAY);
    // end of parameter setup
    sgbm->compute(left_image, right_image, disparity_sgbm);
    disparity_sgbm.convertTo(disparity, CV_32F, 1.0 / 16.0f);
    // disparity post-processing: fill holes
    insertDepth32f(disparity);
    // imwrite to .jpg needs an 8-bit image, so convert before saving
    cv::Mat disp8;
    disparity.convertTo(disp8, CV_8U, 255.0 / numberOfDisparities);
    cv::imwrite("./disparity.jpg", disp8);
    vector<Vector4d,Eigen::aligned_allocator<Vector4d>> pointcloud;
    for(int v=0;v<left_image.rows;v++)
    {
        for(int u=0;u<left_image.cols;u++)
        {
            if(disparity.at<float>(v,u)<=0.0||disparity.at<float>(v,u)>=96.0)
            {
                continue;
            }
            Vector4d point(0,0,0,left_image.at<uchar>(v,u)/255.0);
            double x = (u-cx)/fx;
            double y = (v-cy)/fy;
            double depth=fx*b/(disparity.at<float>(v,u));
            // double depth=depthImg_left.at<float>(v,u);
            point[0]=x*depth;
            point[1]=y*depth;
            point[2]=depth;

            pointcloud.push_back(point);
        }
    }

    // right disparity map: note minDisparity = -numberOfDisparities_r
    cv::Mat disparity_sgbm_r,disparity_r;
    int numberOfDisparities_r = ((right_image.size().width / 8) + 15) & -16;
    cv::Ptr<cv::StereoSGBM> sgbm_r = cv::StereoSGBM::create(0, 16, 3);
    sgbm_r->setPreFilterCap(63);
    sgbm_r->setBlockSize(sgbmWinSize);
    int cn_r = right_image.channels();
    sgbm_r->setP1(8 * cn_r*sgbmWinSize*sgbmWinSize);
    sgbm_r->setP2(32 * cn_r*sgbmWinSize*sgbmWinSize);
    sgbm_r->setMinDisparity(-numberOfDisparities_r);
    sgbm_r->setNumDisparities(numberOfDisparities_r);
    sgbm_r->setUniquenessRatio(10);
    sgbm_r->setSpeckleWindowSize(100);
    sgbm_r->setSpeckleRange(32);
    sgbm_r->setDisp12MaxDiff(1);
    if (alg == STEREO_HH)
        sgbm_r->setMode(cv::StereoSGBM::MODE_HH);
    else if (alg == STEREO_SGBM)
        sgbm_r->setMode(cv::StereoSGBM::MODE_SGBM);
    else if (alg == STEREO_3WAY)
        sgbm_r->setMode(cv::StereoSGBM::MODE_SGBM_3WAY);

    sgbm_r->compute(right_image, left_image, disparity_sgbm_r);
    disparity_sgbm_r.convertTo(disparity_r, CV_32F, 1.0 / 16.0f);
    disparity_r = cv::abs(disparity_r); // right-view disparities come out negative
    insertDepth32f(disparity_r);
    // convert to 8-bit before saving as .jpg
    cv::Mat disp8_r;
    disparity_r.convertTo(disp8_r, CV_8U, 255.0 / numberOfDisparities_r);
    cv::imwrite("./disparity_r.jpg", disp8_r);
    vector<Vector4d,Eigen::aligned_allocator<Vector4d>> pointcloud_r;
    for(int v=0;v<right_image.rows;v++)
    {
        for(int u=0;u<right_image.cols;u++)
        {
            if(disparity_r.at<float>(v,u)<=10.0||disparity_r.at<float>(v,u)>=96.0)
            {
                continue;
            }
            Vector4d point1(0,0,0,right_image.at<uchar>(v,u)/255.0);
            double x1 = (u-cx)/fx;
            double y1 = (v-cy)/fy;
            double depth1=fx*b/(disparity_r.at<float>(v,u));
            // double depth1=depthImg_right.at<float>(v,u);
            point1[0]=x1*depth1;
            point1[1]=y1*depth1;
            point1[2]=depth1;

            pointcloud_r.push_back(point1);
        }
    }
    // end of right disparity map

    cv::imshow("disparity", disparity / 96.0);
    cv::imshow("disparity_r", disparity_r / 96.0);

    // cv::imshow("depth_left", depthImg_left / 96.0);
    // cv::imshow("depth_right", depthImg_right / 96.0);

    cv::waitKey(0);
    //showPointCloud(pointcloud);
    showPointCloud(pointcloud_r);
    return 0;

}

void showPointCloud(const vector<Vector4d, Eigen::aligned_allocator<Vector4d>> &pointcloud) 
{

    if (pointcloud.empty()) {
        cerr << "Point cloud is empty!" << endl;
        return;
    }
    // create a pangolin window named "Point Cloud Viewer", 1024 x 768 pixels
    pangolin::CreateWindowAndBind("Point Cloud Viewer", 1024, 768);
    glEnable(GL_DEPTH_TEST);  // depth test, so nearer points occlude farther ones
    glEnable(GL_BLEND);       // colour blending for semi-transparent rendering
    // GL_SRC_ALPHA weights the source colour by its alpha value;
    // GL_ONE_MINUS_SRC_ALPHA weights the destination by (1 - source alpha)
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    /*
    ProjectionMatrix() parameters, in order:
    image width = 1024, image height = 768,
    fx = 500, fy = 500, cx = 512, cy = 389,
    near plane = 0.1, far plane = 1000.
    *****************************************************************
    ModelViewLookAt() takes the camera position, the point being looked at,
    and which camera axis points up:
    ModelViewLookAt(0, -0.1, -1.8, 0, 0, 0, 0.0, -1.0, 0.0)
    places the camera at (0, -0.1, -1.8) looking at the origin (0, 0, 0),
    with the up direction set to (0, -1, 0).
    */
    pangolin::OpenGlRenderState s_cam(
        pangolin::ProjectionMatrix(1024, 768, 500, 500, 512, 389, 0.1, 1000),
        pangolin::ModelViewLookAt(0, -0.1, -1.8, 0, 0, 0, 0.0, -1.0, 0.0)
    );
    // create an interactive view for the rendered content
    pangolin::View &d_cam = pangolin::CreateDisplay()
        .SetBounds(0.0, 1.0, pangolin::Attach::Pix(175), 1.0, -1024.0f / 768.0f)
        .SetHandler(new pangolin::Handler3D(s_cam));
    /*
    The first four SetBounds() parameters give the view extent within the window:
    1st: bottom edge (fraction in [0.0, 1.0]),
    2nd: top edge,
    3rd: left edge (here given in pixels via Attach::Pix),
    4th: right edge,
    5th: aspect ratio (negative to preserve it).
    */
    // loop until the pangolin window is closed
    while (pangolin::ShouldQuit() == false) 
    {
        // clear colour and depth buffers so successive frames do not interfere
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // activate the view with the camera state
        d_cam.Activate(s_cam);
        // set the background colour to white
        glClearColor(1.0f, 1.0f, 1.0f, 1.0f);

        glPointSize(2);
        // draw the point cloud
        glBegin(GL_POINTS);
        for (auto &p: pointcloud) 
        {
            // grey value stored in the 4th component
            glColor3f(p[3], p[3], p[3]);
            // 3D position
            glVertex3d(p[0], p[1], p[2]);
        }
        glEnd();
        // render the frame with the settings above
        pangolin::FinishFrame();
        usleep(5000);   // sleep 5 ms
    }
    return;
}

Right disparity map:

Generated point cloud:

Other scene tests (each row: original / rectified / disparity map / point cloud):
