vcpkg + opencv4 (sfm+vtk) + OpenMVS + SfM

I forgot to attach the project download links earlier:

https://github.com/PacktPublishing/Mastering-OpenCV-4-Third-Edition

https://github.com/PacktPublishing/Mastering-OpenCV3-Second-Edition

https://github.com/MasteringOpenCV/code


git clone https://github.com/Microsoft/vcpkg/

Then I hit a new problem:

Can't use SPARSE_SCHUR with Solver::Options::sparse_linear_algebra_library_type = EIGEN_SPARSE

Fix: rebuild Ceres with the needed features:

./vcpkg install ceres[eigensparse,suitesparse,lapack]:x64-windows --recurse 


I switched to VS2019 (English language pack), the default install path, and vcpkg-2020.11-1 (later replaced with the checkout vcpkg-e054fe2b20469c894b88be4ac466776f9f653954), and that solved every problem..... (or so I naively thought)

cmake . ..\src -G "Visual Studio 16 2019" -DCMAKE_TOOLCHAIN_FILE=D:\new_code\vcpkg-2020.11-1\scripts\buildsystems\vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows -DVCG_ROOT="..\VCG" -DCMAKE_INSTALL_PREFIX="..\install"

 

Then another problem appeared:

  Could NOT find OpenCV (missing: viz) (found suitable version "4.3.0",
  minimum required is "4"

And then...

https://github.com/microsoft/vcpkg/pull/12785

https://github.com/microsoft/vcpkg/tree/e054fe2b20469c894b88be4ac466776f9f653954

I switched to this checkout:

vcpkg-e054fe2b20469c894b88be4ac466776f9f653954

And started over from scratch.
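The newer checkout helps because its opencv4 port adds a vtk feature that supplies OpenCV's viz module (that is what the PR above does). To check which features a given checkout's port exposes, you can scan its CONTROL file; this sketch assumes the classic CONTROL-file port format vcpkg used in 2020:

```python
from pathlib import Path

def port_features(control_text):
    """List the 'Feature:' names declared in a vcpkg CONTROL file body."""
    return [line.split(":", 1)[1].strip()
            for line in control_text.splitlines()
            if line.startswith("Feature:")]

# Usage (the path is an assumption about your vcpkg checkout):
# text = Path("vcpkg/ports/opencv4/CONTROL").read_text()
# print("vtk" in port_features(text))  # True means opencv4[vtk] exists
```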

But this vcpkg version bumps Boost to 1.74.0. You're kidding me, more downloads...

Since I still had the 1.73.0 archives, I patched things over with a script (written in a hurry and fairly ugly; you may want to write your own):

import os
import shutil

# ./data holds the 1.73.0 archive names vcpkg used to expect;
# ./data2 holds the archives actually on disk.
lines = os.listdir("./data")

# (First pass, kept for reference: print the 1.74.0 download URL for each archive.)
# for index, value in enumerate(lines):
#     newname = value.replace("1.73.0", "1.74.0")
#     name = value.split("-")[1]
#     downname = "https://github.com/boostorg/{}/archive/boost-1.74.0.tar.gz".format(name)
#     print(downname)
#     print(newname)

# Index the local archives by library name.
lines2 = os.listdir("./data2")
lines2_kuname = [value.split("-")[0] for value in lines2]

# For each expected archive, find the matching local file and copy it out
# under the new 1.74.0 name.
for value in lines:
    newname = value.replace("1.73.0", "1.74.0")
    name = value.split("-")[1]
    ori_name = lines2[lines2_kuname.index(name)]
    shutil.copy(os.path.join("./data2", ori_name), os.path.join("./", newname))

After compiling, yet another problem:

'static_cast': cannot convert from 'const std::string' to 'const cv::utils::logging::LogTag *'

https://github.com/opencv/opencv/issues/17792

https://github.com/opencv/opencv/pull/17822/files

https://github.com/opencv/opencv/pull/17822 

This bug was only fixed in OpenCV 4.4, and mine happens to be 4.3, which has it. Great. What luck. But there is no way I am recompiling OpenCV...

I commented out every CV_LOG_INFO and also fixed the other errors caused by 'or' and 'and'... Embarrassing; I was sure VS2019 supported 'or'/'and'. Maybe the CMake scripts set C++11? Is 'and' C++14? (Neither, actually: they are alternative tokens that MSVC only accepts under /permissive- or after including <ciso646>.)

The fully modified version is at the end of this article.
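Stripping the logging macro by hand across the whole file is tedious; here is a rough helper in the same spirit as my manual edit (it assumes every call has the shape `CV_LOG_INFO(TAG, expr);` on a single line, as in this project's sources):

```python
import re

def comment_out_cv_log(line):
    """Turn `CV_LOG_INFO(TAG, expr);` into the commented-out macro call
    followed by a plain cout fallback, keeping the indentation."""
    m = re.match(r'(\s*)CV_LOG_INFO\(TAG,\s*(.*)\);\s*$', line)
    if not m:
        return line  # not a logging call, leave untouched
    indent, expr = m.groups()
    return '{0}//CV_LOG_INFO(TAG, {1});\n{0}cout << {1} << endl;'.format(indent, expr)
```

Run it over each line of the .cpp file and write the result back; the CV_LOG_ERROR and CV_LOG_WARNING calls need the same treatment.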

./ch2_sfm.exe --viz=true --cloud=save.ply --debug=true --mvs=out.mvs D:/new_code/OpenMVS/Chapter_02/crazyhorse/

 

D:\new_code\OpenMVS\build\bin\x64\Release\DensifyPointCloud.exe -i out.mvs

D:\new_code\OpenMVS\build\bin\x64\Release\Viewer.exe -i out_dense.mvs
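Those three steps can be collected into a small driver; a sketch (the .exe paths are my local layout, and the actual subprocess.run call is left commented out so nothing launches by accident):

```python
import subprocess
from pathlib import Path

# Local layout; adjust to your machine.
OPENMVS_BIN = Path(r"D:\new_code\OpenMVS\build\bin\x64\Release")

def pipeline_commands(image_dir, mvs="out.mvs", cloud="save.ply"):
    """Assemble the three command lines of the SfM -> densify -> view pipeline."""
    return [
        ["./ch2_sfm.exe", "--viz=true", "--cloud=" + cloud, "--debug=true",
         "--mvs=" + mvs, image_dir],
        [str(OPENMVS_BIN / "DensifyPointCloud.exe"), "-i", mvs],
        # DensifyPointCloud writes out_dense.mvs next to its input.
        [str(OPENMVS_BIN / "Viewer.exe"), "-i", "out_dense.mvs"],
    ]

if __name__ == "__main__":
    for cmd in pipeline_commands(r"D:/new_code/OpenMVS/Chapter_02/crazyhorse/"):
        print(" ".join(cmd))
        # subprocess.run(cmd, check=True)  # uncomment to run each step for real
```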

Commands used, all together:

 ./vcpkg install opencv4[sfm,vtk]:x64-windows zlib:x64-windows boost-iostreams:x64-windows boost-program-options:x64-windows boost-system:x64-windows boost-serialization:x64-windows eigen3:x64-windows cgal[core]:x64-windows glew:x64-windows glfw3:x64-windows boost-graph:x64-windows boost-filesystem:x64-windows


cmake . ..\src -G "Visual Studio 16 2019" -DCMAKE_TOOLCHAIN_FILE=D:\new_code\vcpkg-e054\scripts\buildsystems\vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows -DVCG_ROOT="..\VCG" -DCMAKE_INSTALL_PREFIX="..\install"


cmake .. -G "Visual Studio 16 2019" -DCMAKE_TOOLCHAIN_FILE=D:/new_code/vcpkg-e054/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows -DOpenMVS_DIR=D:/new_code/OpenMVS/build


Make sure to read the whole thing before you start (everything below is my first, failed attempt).

Explore Structure from Motion with the SfM Module

What you need: vcpkg, version vcpkg-2020.11 (do not use this one to install OpenMVS directly; it will fail)

1. First install VS2015 and switch its language to English (this fails for me later; use VS2017 or VS2019 instead)

2. PowerShell v7.0.3 or newer

3. Since I did not pick the default VS2015 install path, I got an error:

https://github.com/microsoft/vcpkg/issues/12488

Edit: D:\install\vcpkg\vcpkg-2020.11\toolsrc\src\vcpkg\visualstudio.cpp


        // VS2015 instance from Program Files
        //append_if_has_cl_vs140(program_files_32_bit / "Microsoft Visual Studio 14.0");
        append_if_has_cl_vs140("D:/Program Files (x86)/Microsoft Visual Studio 14.0");

4. An OpenGL error

https://blog.csdn.net/baidu_40840693/article/details/106279727

Edit: D:\install\vcpkg\vcpkg-2020.11\ports\opengl\portfile.cmake

#vcpkg_get_windows_sdk(WINDOWS_SDK)
set(WINDOWS_SDK "8.1")

5. One more note

If you have used the integrate option, remove that integration before deleting a vcpkg instance. To remove and clean it up, change directory to the vcpkg root. On Windows, run vcpkg integrate remove to make sure the integration is cleared. On Linux or macOS, run ./vcpkg integrate remove.

6. Install

./vcpkg install opencv4[sfm,vtk]:x64-windows

7. Package downloads are extremely painful; the only workaround is to download each archive yourself, rename it, and put it in the expected location

8. Removing packages

./vcpkg remove opencv4:x64-windows --recurse

9. The official OpenMVS instructions

#Make a toplevel directory for deps & build & src somewhere:
mkdir OpenMVS
cd OpenMVS

#Get and install dependencies using vcpkg;
#choose the desired triplet, like "x64-windows", by setting the VCPKG_DEFAULT_TRIPLET environment variable or by specifying it after each package:
vcpkg install zlib boost-iostreams boost-program-options boost-system boost-serialization eigen3 cgal[core] opencv glew glfw3

#Get VCGLib (Required):
git clone https://github.com/cdcseacave/VCG.git

#Get and unpack OpenMVS in OpenMVS/src:
git clone https://github.com/cdcseacave/openMVS.git src

#Make build directory:
mkdir build
cd build

#Run CMake, where VCPKG_ROOT environment variable points to the root of vcpkg installation:
cmake . ..\src -G "Visual Studio 15 2017 Win64" -DCMAKE_TOOLCHAIN_FILE=%VCPKG_ROOT%\scripts\buildsystems\vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows -DVCG_ROOT="..\VCG"

#Open the solution in MSVC and build it

10.

./vcpkg install zlib:x64-windows boost-iostreams:x64-windows boost-program-options:x64-windows boost-system:x64-windows boost-serialization:x64-windows eigen3:x64-windows cgal[core]:x64-windows glew:x64-windows glfw3:x64-windows

Plus these two:

./vcpkg install boost-graph:x64-windows boost-filesystem:x64-windows

11. Remember to set

CMAKE_INSTALL_PREFIX

12. Building OpenMVS

 

cmake . ..\src -G "Visual Studio 14 2015 Win64" -DCMAKE_TOOLCHAIN_FILE=%VCPKG_ROOT%\scripts\buildsystems\vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows -DVCG_ROOT="..\VCG" -DCMAKE_INSTALL_PREFIX="..\install"

If you have not set VCPKG_ROOT:

cmake . ..\src -G "Visual Studio 14 2015 Win64" -DCMAKE_TOOLCHAIN_FILE=D:\install\vcpkg\vcpkg\scripts\buildsystems\vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows -DVCG_ROOT="..\VCG" -DCMAKE_INSTALL_PREFIX="..\install"

The resulting output:

Selecting Windows SDK version  to target Windows 10.0.18363.
cotire 1.8.0 loaded.
Compiling with C++14
CUDA_TOOLKIT_ROOT_DIR not found or specified
Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_NVCC_EXECUTABLE CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY) 
-- Can't find CUDA. Continuing without it.
WARNING: BREAKPAD was not found: Please specify BREAKPAD directory using BREAKPAD_ROOT env. variable
-- Can't find BreakPad. Continuing without it.
Found Boost: D:/install/vcpkg/vcpkg/installed/x64-windows/include (found version "1.73.0") found components: iostreams program_options system serialization regex 
Eigen 3.3.7 found (include: D:/install/vcpkg/vcpkg/installed/x64-windows/include/eigen3)
OpenCV 4.3.0 found (include: D:/install/vcpkg/vcpkg/installed/x64-windows/include)
Visual Leak Detector (VLD) is not found.
Using header-only CGAL
Targetting Visual Studio 14 2015 Win64
Target build enviroment supports auto-linking
Using VC toolset 140.
Generator uses intermediate configuration directory: $(Platform)/$(Configuration)
Found Boost: D:/install/vcpkg/vcpkg/installed/x64-windows/include (found suitable version "1.73.0", minimum required is "1.48")  
Boost include dirs: D:/install/vcpkg/vcpkg/installed/x64-windows/include
Boost libraries:    
CGAL 5.1 found (include: D:/install/vcpkg/vcpkg/installed/x64-windows/share/cgal/../../include)
VCG  found (include: D:/install/opencv_sfm/OpenMVS/VCG)
GLEW  found (include: D:/install/vcpkg/vcpkg/installed/x64-windows/include)
GLFW3 3.3.2 found (include: )
Configuring done
Generating done

Then a problem:

'ret': declaration is not allowed in 'constexpr' function body

// VC work around for constexpr and noexcept: VC2013 and below do not support these 2 keywords

OpenMVS 1.1.1 requires VS2017 or VS2019; VS2019 is recommended.

 

But since everything so far was built with VS2015, I tried an older release instead:

openMVS-0.9.zip

#Make a toplevel directory for deps & build & src somewhere:
mkdir OpenMVS
cd OpenMVS

#Get and install dependencies using vcpkg;
#choose the desired triplet, like "x64-windows", by setting the VCPKG_DEFAULT_TRIPLET environment variable or by specifying it after each package:
vcpkg install zlib boost-iostreams boost-program-options boost-system boost-serialization eigen3 cgal[core] opencv glew glfw3

#Get VCGLib (Required):
git clone https://github.com/cdcseacave/VCG.git

#Get and unpack OpenMVS in OpenMVS/src:
git clone https://github.com/cdcseacave/openMVS.git src

#Make build directory:
mkdir build
cd build

#Run CMake, where VCPKG_ROOT environment variable points to the root of vcpkg installation:
cmake . ..\src -G "Visual Studio 15 2017 Win64" -DCMAKE_TOOLCHAIN_FILE=%VCPKG_ROOT%\scripts\buildsystems\vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows -DVCG_ROOT="..\VCG"

#Open the solution in MSVC and build it

If it complains that CMAKE_TOOLCHAIN_FILE or VCG_ROOT is missing, open cmake-gui and set them there

 

Still an error:

'boost::throw_exception': redefinition; '__declspec(noreturn)' or '[[noreturn]]' differs. Fix: turn off the "use static" option.

This is because in vcpkg the x64-windows triplet builds dynamic libraries; x64-windows-static is the static one.

Still no luck.

In the end I switched to VS2017 and went through everything again..........

The one lucky break: no need to download the packages again.

13. Building the project

cmake .. -G "Visual Studio 18 2019" -DCMAKE_TOOLCHAIN_FILE=D:/new_code/vcpkg-2020.11-1/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows -DOpenMVS_DIR=D:/new_code/OpenMVS/build

 

 

 

 

Project README details:

# Chapter 2: Explore Structure from Motion with the SfM Module
Structure from Motion (SfM) is the process of recovering both the positions of cameras looking at the scene, as well as the sparse geometry of the scene. 
The motion between the cameras imposes geometric constraints that can help us recover the structure of objects, hence the process is called Structure from Motion. 
Since OpenCV v3.0, a contributed ("contrib") module called sfm has been included that assists in performing end-to-end SfM processing from multiple images. 
In this chapter, we will learn how to use the SfM module to reconstruct a scene to a sparse point cloud including camera poses. 
Later we will also densify the point cloud, adding many more points to it, using an open Multi-View Stereo (MVS) package called OpenMVS.

## Requirements
* OpenCV 4 (compiled with the sfm contrib module)
* Eigen v3.3.5+ (required by the sfm module and Ceres)
* Ceres solver v2+ (required by the sfm module)
* CMake 3.12+
* Boost v1.66+
* OpenMVS
* CGAL v4.12+ (required by OpenMVS)
* VCG (required by OpenMVS)

## Building instructions

### Building or installing third-party libraries
Follow the instructions on https://github.com/opencv/opencv_contrib to build OpenCV with the contrib modules.
You may only include the following contrib modules: `sfm`, `viz`

Build OpenMVS following these instructions: https://github.com/cdcseacave/openMVS/wiki/Building.
These also include the installation instructions for Ceres, Boost, Eigen and CGAL.

Example for building OpenMVS: (OpenMVS already has a `build` dir, from which CMake should be called)
```
$ cd <openMVS dir>/build
$ cmake .. -DOpenCV_DIR=<opencv4 dir>/build -DCGAL_ROOT=<cgal4 dir>/lib/cmake/ -DVCG_DIR=<vcg dir> -DCMAKE_BUILD_TYPE=Release
$ make
```

### Building project
```
$ mkdir build && cd build
$ cmake .. -DOpenCV_DIR=<opencv dir>/build -DOpenMVS_DIR=<openMVS dir>/build
$ make
```

## Running instructions
```
Usage: ch2_sfm [params] dir 

	-?, -h, --help (value:true)
		help message
	--cloud
		Save reconstruction to a point cloud file (PLY, XYZ and OBJ). Provide filename
	--debug (value:false)
		Save debug visualizations to files?
	--mrate (value:0.5)
		Survival rate of matches to consider image pair success
	--mvs
		Save reconstruction to an .mvs file. Provide filename
	--viz (value:false)
		Visualize the sparse point cloud reconstruction?

	dir (value:.)
		directory with image files for reconstruction
```

Example:
```
$ ./ch2_sfm --viz=true --mvs=out.mvs ../crazyhorse/
$ <openMVS dir>/build/bin/DensifyPointCloud -i out.mvs
$ <openMVS dir>/build/bin/Viewer -i out_dense.mvs
```

## Author
Roy Shilkrot <br/>
roy.shil@gmail.com <br/>
http://www.morethantechnical.com

 

The code:

#include <iostream>
#include <algorithm>
#include <string>
#include <numeric>

#define CERES_FOUND true

#include <opencv2/opencv.hpp>
#include <opencv2/sfm.hpp>
#include <opencv2/viz.hpp>
#include <opencv2/core/utils/logger.hpp>
#include <opencv2/core/utils/filesystem.hpp>
#include <opencv2/xfeatures2d.hpp>

#include <boost/filesystem.hpp>
#include <boost/graph/graph_traits.hpp>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/connected_components.hpp>
#include <boost/graph/graphviz.hpp>

#define _USE_OPENCV true
#include <libs/MVS/Interface.h>

using namespace cv;
using namespace std;
namespace fs = boost::filesystem;

class StructureFromMotion {

public:
    StructureFromMotion(const string& dir,
            const float matchSurvivalRate = 0.5f,
            const bool viz = false,
            const string mvs = "",
            const string cloud = "",
            const bool saveDebug = false)
        :PAIR_MATCH_SURVIVAL_RATE(matchSurvivalRate),
         visualize(viz),
         saveMVS(mvs),
         saveCloud(cloud),
         saveDebugVisualizations(saveDebug)

    {
        findImagesInDiretcory(dir);
    }

    void runSfM() {
        extractFeatures();
        matchFeatures();
        buildTracks();
        reconstructFromTracks();
        if (visualize) {
            visualize3D();
        }
        if (saveMVS != "") {
            saveToMVSFile();
        }
        if (saveCloud != "") {
            //CV_LOG_INFO(TAG, "Save point cloud to: " + saveCloud);
			cout << "Save point cloud to: " + saveCloud << endl;
            viz::writeCloud(saveCloud, pointCloud, pointCloudColor);
        }
    }

private:
    void findImagesInDiretcory(const string& dir) {
        //CV_LOG_INFO(TAG, "Finding images in " + dir);
        cout << "Finding images in " + dir << endl;

        utils::fs::glob(dir, "*.jpg", imagesFilenames);
        utils::fs::glob(dir, "*.JPG", imagesFilenames);
        utils::fs::glob(dir, "*.png", imagesFilenames);
        utils::fs::glob(dir, "*.PNG", imagesFilenames);

        std::sort(imagesFilenames.begin(), imagesFilenames.end());

        //CV_LOG_INFO(TAG, "Found " + std::to_string(imagesFilenames.size()) + " images");
        cout << "Found " + std::to_string(imagesFilenames.size()) + " images" << endl;

        //CV_LOG_INFO(TAG, "Reading images...");
        cout << "Reading images..." << endl;
        for (const auto& i : imagesFilenames) {
            //CV_LOG_INFO(TAG, i);
            cout << i << endl;
            images[i] = imread(i);
            imageIDs[i] = images.size() - 1;
        }
    }

    void extractFeatures() {
        //CV_LOG_INFO(TAG, "Extract Features");
        cout << "Extract Features" << endl;

        auto detector = AKAZE::create();
        auto extractor = AKAZE::create();

        for (const auto& i : imagesFilenames) {
            Mat grayscale;
            cvtColor(images[i], grayscale, COLOR_BGR2GRAY);
            detector->detect(grayscale, keypoints[i]);
            extractor->compute(grayscale, keypoints[i], descriptors[i]);

            //CV_LOG_INFO(TAG, "Found " + to_string(keypoints[i].size()) + " keypoints in " + i);
            cout << "Found " + to_string(keypoints[i].size()) + " keypoints in " + i << endl;

            if (saveDebugVisualizations) {
                Mat out;
                drawKeypoints(images[i], keypoints[i], out, Scalar(0,0,255));
                imwrite(fs::basename(fs::path(i)) + "_features.jpg", out);
            }
        }
    }

    vector<DMatch> matchWithRatioTest(const DescriptorMatcher& matcher, const Mat& desc1, const Mat& desc2) {
        // Raw match
        vector< vector<DMatch> > nnMatch;
        matcher.knnMatch(desc1, desc2, nnMatch, 2);

        // Ratio test filter
        vector<DMatch> ratioMatched;
        for (size_t i = 0; i < nnMatch.size(); i++) {
            DMatch first = nnMatch[i][0];
            float dist1 = nnMatch[i][0].distance;
            float dist2 = nnMatch[i][1].distance;

            if (dist1 < MATCH_RATIO_THRESHOLD * dist2) {
                ratioMatched.push_back(first);
            }
        }

        return ratioMatched;
    }

    void matchFeatures() {
        //CV_LOG_INFO(TAG, "Match Features");
        cout << "Match Features" << endl;

        BFMatcher matcher(NORM_HAMMING);

        for (size_t i = 0; i < imagesFilenames.size() - 1; ++i) {
            for (size_t j = i + 1; j < imagesFilenames.size(); ++j) {
                const string imgi = imagesFilenames[i];
                const string imgj = imagesFilenames[j];

                // Match with ratio test filter
                vector<DMatch> match = matchWithRatioTest(matcher, descriptors[imgi], descriptors[imgj]);

                // Reciprocity test filter
                vector<DMatch> matchRcp = matchWithRatioTest(matcher, descriptors[imgj], descriptors[imgi]);
                vector<DMatch> merged;
                for (const DMatch& dmr : matchRcp) {
                    for (const DMatch& dm : match) {
                        // Only accept match if 1 matches 2 AND 2 matches 1.
                        if (dmr.queryIdx == dm.trainIdx && dmr.trainIdx == dm.queryIdx) {
                            merged.push_back(dm);
                            break;
                        }
                    }
                }

                // Fundamental matrix filter
                vector<uint8_t> inliersMask(merged.size());
                vector<Point2f> imgiPoints, imgjPoints;
                for (const auto& m : merged) {
                    imgiPoints.push_back(keypoints[imgi][m.queryIdx].pt);
                    imgjPoints.push_back(keypoints[imgj][m.trainIdx].pt);
                }
                findFundamentalMat(imgiPoints, imgjPoints, inliersMask);

                vector<DMatch> final;
                for (size_t m = 0; m < merged.size(); m++) {
                    if (inliersMask[m]) {
                        final.push_back(merged[m]);
                    }
                }

                if ((float)final.size() / (float)match.size() < PAIR_MATCH_SURVIVAL_RATE) {
                    //CV_LOG_INFO(TAG, "Final match '" + imgi + "'->'" + imgj + "' has less than "+to_string(PAIR_MATCH_SURVIVAL_RATE)+" inliers from orignal. Skip");
                    cout << "Final match '" + imgi + "'->'" + imgj + "' has less than " + to_string(PAIR_MATCH_SURVIVAL_RATE) + " inliers from orignal. Skip" << endl;
                    continue;
                }

                matches[make_pair(imgi, imgj)] = final;

                //CV_LOG_INFO(TAG, "Matching " + imgi + " and " + imgj + ": " + to_string(final.size()) + " / " + to_string(match.size()));
                cout << "Matching " + imgi + " and " + imgj + ": " + to_string(final.size()) + " / " + to_string(match.size()) << endl;

                if (saveDebugVisualizations) {
                    Mat out;
                    vector<DMatch> rawmatch;
                    matcher.match(descriptors[imgi], descriptors[imgj], rawmatch);
                    vector<pair<string, vector<DMatch>& > > showList{
                            {"Raw Match", rawmatch},
                            {"Ratio Test Filter", match},
                            {"Reciprocal Filter", merged},
                            {"Epipolar Filter", final}
                    };
                    for (size_t i = 0; i< showList.size(); i++) {
                        drawMatches(images[imgi], keypoints[imgi],
                                    images[imgj], keypoints[imgj],
                                    showList[i].second, out, CV_RGB(255,0,0));
                        putText(out, showList[i].first, Point(10,50), FONT_HERSHEY_COMPLEX, 2.0, CV_RGB(255,255,255), 2);
                        putText(out, "# Matches: " + to_string(showList[i].second.size()), Point(10,100), FONT_HERSHEY_COMPLEX, 1.0, CV_RGB(255,255,255));
                        imwrite(fs::basename(fs::path(imgi)) + "_" + fs::basename(fs::path(imgj)) + "_" + to_string(i) + ".jpg", out);
                    }
                }
            }
        }
    }

    void buildTracks() {
        //CV_LOG_INFO(TAG, "Build tracks");
		cout << "Build tracks" << endl;

        using namespace boost;

        struct ImageFeature {
            string image;
            size_t featureID;
        };
        typedef adjacency_list < listS, vecS, undirectedS, ImageFeature > Graph;
        typedef graph_traits < Graph >::vertex_descriptor Vertex;

        map<pair<string, int>, Vertex> vertexByImageFeature;

        Graph g;

        // Add vertices - image features
        for (const auto& imgi : keypoints) {
            for (size_t i = 0; i < imgi.second.size(); i++) {
                Vertex v = add_vertex(g);
                g[v].image = imgi.first;
                g[v].featureID = i;
                vertexByImageFeature[make_pair(imgi.first, i)] = v;
            }
        }

        // Add edges - feature matches
        for (const auto& match : matches) {
            for (const DMatch& dm : match.second) {
                Vertex& vI = vertexByImageFeature[make_pair(match.first.first, dm.queryIdx)];
                Vertex& vJ = vertexByImageFeature[make_pair(match.first.second, dm.trainIdx)];
                add_edge(vI, vJ, g);
            }
        }

        using Filtered  = filtered_graph<Graph, keep_all, std::function<bool(Vertex)>>;
        Filtered gFiltered(g, keep_all{}, [&g](Vertex vd) { return degree(vd, g) > 0; });

        // Get connected components
        std::vector<int> component(num_vertices(gFiltered), -1);
        int num = connected_components(gFiltered, &component[0]);
        map<int, vector<Vertex> > components;
        for (size_t i = 0; i != component.size(); ++i) {
            if (component[i] >= 0) {
                components[component[i]].push_back(i);
            }
        }
        // Filter bad components (with more than 1 feature from a single image)
        std::vector<int> vertexInGoodComponent(num_vertices(gFiltered), -1);
        map<int, vector<Vertex> > goodComponents;
        for (const auto& c : components) {
            set<string> imagesInComponent;
            bool isComponentGood = true;
            for (int j = 0; j < c.second.size(); ++j) {
                const string imgId = g[c.second[j]].image;
                if (imagesInComponent.count(imgId) > 0) {
                    // Image already represented in this component
                    isComponentGood = false;
                    break;
                } else {
                    imagesInComponent.insert(imgId);
                }
            }
            if (isComponentGood) {
                for (int j = 0; j < c.second.size(); ++j) {
                    vertexInGoodComponent[c.second[j]] = 1;
                }
                goodComponents[c.first] = c.second;
            }
        }

        Filtered gGoodComponents(g, keep_all{}, [&vertexInGoodComponent](Vertex vd) {
            return vertexInGoodComponent[vd] > 0;
        });

        //CV_LOG_INFO(TAG, "Total number of components found: " + to_string(components.size()));
        //CV_LOG_INFO(TAG, "Number of good components: " + to_string(goodComponents.size()));
		cout << "Total number of components found: " + to_string(components.size()) << endl;
		cout << "Number of good components: " + to_string(goodComponents.size()) << endl;
        const int accum = std::accumulate(goodComponents.begin(),
                                          goodComponents.end(), 0,
                                          [](int a, pair<const int, vector<Vertex> >& v){
                                                    return a+v.second.size();
                                                });
        //CV_LOG_INFO(TAG, "Average component size: " + to_string((float)accum / (float)(goodComponents.size())));
		cout << "Average component size: " + to_string((float)accum / (float)(goodComponents.size())) << endl;

        if (saveDebugVisualizations) {
            struct my_node_writer {
                my_node_writer(Graph& g_, const map<string,int>& iid_) : g (g_), iid(iid_) {};
                void operator()(std::ostream& out, Vertex v) {
                    const int imgId = iid[g[v].image];
                    out << " [label=\"" << imgId << "\" colorscheme=\"accent8\" fillcolor="<<(imgId+1)<<" style=filled]";
                };
                Graph g;
                map<string,int> iid;
            };
            std::ofstream ofs("match_graph_good_components.dot");
            write_graphviz(ofs, gGoodComponents, my_node_writer(g, imageIDs));
            std::ofstream ofsf("match_graph_filtered.dot");
            write_graphviz(ofsf, gFiltered, my_node_writer(g, imageIDs));
        }

        // Each component is a track
        const size_t nViews = imagesFilenames.size();
        tracks.resize(nViews);
        for (int i = 0; i < nViews; i++) {
            tracks[i].create(2, goodComponents.size(), CV_64FC1);
            tracks[i].setTo(-1.0);
        }
        int i = 0;
        for (auto c = goodComponents.begin(); c != goodComponents.end(); ++c, ++i) {
            for (const int v : c->second) {
                const int imageID = imageIDs[g[v].image];
                const size_t featureID = g[v].featureID;
                const Point2f p = keypoints[g[v].image][featureID].pt;
                tracks[imageID].at<double>(0, i) = p.x;
                tracks[imageID].at<double>(1, i) = p.y;
            }
        }

        if (saveDebugVisualizations) {
            vector<Scalar> colors = {CV_RGB(240, 248, 255),
                                     CV_RGB(250, 235, 215),
                                     CV_RGB(0, 255, 255),
                                     CV_RGB(127, 255, 212),
                                     CV_RGB(240, 255, 255),
                                     CV_RGB(245, 245, 220),
                                     CV_RGB(255, 228, 196),
                                     CV_RGB(255, 235, 205),
                                     CV_RGB(0, 0, 255),
                                     CV_RGB(138, 43, 226),
                                     CV_RGB(165, 42, 42),
                                     CV_RGB(222, 184, 135)};

            vector<Mat> imagesM;
            for (const auto m : images) imagesM.push_back(m.second);
            Mat out;
            hconcat(vector<Mat>(imagesM.begin(), imagesM.begin() + 4), out);
            RNG& rng = cv::theRNG();
            const Size imgS = imagesM[0].size();
            for (int tId = 0; tId < 20; tId++) {
                const int trackId = rng(tracks[0].cols); // Randomize a track ID

                // Show track over images
                for (int i = 0; i < 3; i++) {
                    Point2f a = Point2f(tracks[i].col(trackId));
                    Point2f b = Point2f(tracks[i + 1].col(trackId));

                    if (a.x < 0 || a.y < 0 || b.x < 0 || b.y < 0) {
                        continue;
                    }

                    const Scalar c = colors[tId % colors.size()];
                    a.x += imgS.width * i;
                    b.x += imgS.width * (i + 1);
                    circle(out, a, 7, c, FILLED);
                    circle(out, b, 7, c, FILLED);
                    line(out, a, b, c, 3);
                }
                imwrite("tracks.jpg", out);

                // Show track patches
                const int patchSize = 20;
                const Point2f patch(patchSize, patchSize);
                for (int i = 0; i < tracks.size(); i++) {
                    Point2f a = Point2f(tracks[i].col(trackId));
                    if (a.x < patchSize || a.y < patchSize ||
                        a.x > imgS.width-patchSize || a.y > imgS.height-patchSize) {
                        continue;
                    }

                    imwrite("track_" + to_string(trackId) + "_" + to_string(i) + ".png",
                            imagesM[i](Rect(a - patch, a + patch)));
                }
            }
        }
    }

    bool reconstructFromTracks() {
        //CV_LOG_INFO(TAG, "Reconstruct from " + to_string(tracks[0].cols) + " tracks");
		cout << "Reconstruct from " + to_string(tracks[0].cols) + " tracks" << endl;
        const Size imgS = images.begin()->second.size();
        const float f = std::max(imgS.width,imgS.height);
        Mat K = Mat(Matx33f{f, 0.0, imgS.width/2.0f,
                            0.0, f, imgS.height/2.0f,
                            0.0, 0.0, 1.0});
        cv::sfm::reconstruct(tracks, Rs, Ts, K, points3d, true);

        K.copyTo(K_);

        //CV_LOG_INFO(TAG, "Reconstruction: ");
        //CV_LOG_INFO(TAG, "Estimated 3D points: " + to_string(points3d.size()));
        //CV_LOG_INFO(TAG, "Estimated cameras: " + to_string(Rs.size()));
        //CV_LOG_INFO(TAG, "Refined intrinsics: ");
        //CV_LOG_INFO(TAG, K_);
		cout << "Reconstruction: " << endl;
		cout << "Estimated 3D points: " + to_string(points3d.size()) << endl;
		cout << "Estimated cameras: " + to_string(Rs.size()) << endl;
		cout << "Refined intrinsics: " << endl;
		cout << K_ << endl;
		

        if (Rs.size() != imagesFilenames.size()) {
            //CV_LOG_ERROR(TAG, "Unable to reconstruct all camera views (" + to_string(imagesFilenames.size()) + ")");
			cout << "Unable to reconstruct all camera views (" + to_string(imagesFilenames.size()) + ")" << endl;
            return false;
        }

        if (tracks[0].cols != points3d.size()) {
            //CV_LOG_WARNING(TAG, "Unable to reconstruct all tracks (" + to_string(tracks[0].cols) + ")");
			cout << "Unable to reconstruct all tracks (" + to_string(tracks[0].cols) + ")" << endl;
        }

        // Create the point cloud
        pointCloud.clear();
        for (const auto &p : points3d) pointCloud.emplace_back(Vec3f(p));

        // Get the point colors
        pointCloudColor.resize(pointCloud.size(), Vec3b(0,255,0));
        vector<Point2f> point2d(1);
        for (int i = 0; i < (int)pointCloud.size(); i++) {
            for (int j = 0; j < imagesFilenames.size(); ++j) {
                Mat point3d = Mat(pointCloud[i]).reshape(1, 1);
                cv::projectPoints(point3d, Rs[j], Ts[j], K_, Mat(), point2d);
                if (point2d[0].x < 0 || point2d[0].x >= imgS.width || point2d[0].y < 0 ||
                    point2d[0].y >= imgS.height) {
                    continue;
                }
                pointCloudColor[i] = images[imagesFilenames[j]].at<Vec3b>(point2d[0]);
                break;
            }
        }

        return true;
    }

    void visualize3D() {
        //CV_LOG_INFO(TAG, "Visualize reconstruction");
		cout << "Visualize reconstruction" << endl;

        if (saveDebugVisualizations) {
            // 3d point reprojections
            Mat points2d;
            Mat points3dM(points3d.size(), 1, CV_32FC3);
            for (int i = 0 ; i < points3d.size(); i++) {
                points3dM.at<Vec3f>(i) = Vec3f(points3d[i]);
            }
            for (int j = 0; j < imagesFilenames.size(); j++) {
                //ynh: error here
                //with 11 images, the 10th one triggers a bug
                //modules\core\include\opencv2\core\mat.inl.hpp
                /*
                inline
                Mat::Mat(const Mat& m)
                    : flags(m.flags), dims(m.dims), rows(m.rows), cols(m.cols), data(m.data),
                      datastart(m.datastart), dataend(m.dataend), datalimit(m.datalimit), allocator(m.allocator),
                      u(m.u), size(&rows), step(0)
                {
                    if( u )
                        CV_XADD(&u->refcount, 1);
                    // breakpoint
                    if( m.dims <= 2 )
                    {
                        step[0] = m.step[0]; step[1] = m.step[1];
                    }
                    else
                    {
                        dims = 0;
                        copySize(m);
                    }
                }
                */
                cv::projectPoints(points3dM, Rs[j], Ts[j], K_, noArray(), points2d);

                Mat out;
                images[imagesFilenames[j]].copyTo(out);
                for (int i = 0; i < points2d.rows; i++) {
                    circle(out, points2d.at<Point2f>(i), 3, CV_RGB(255, 0, 0), FILLED);
                }
                imwrite("reprojection_" + to_string(j) + ".jpg", out);
            }
        }

        // Create 3D windows
        viz::Viz3d window("Coordinate Frame");
        window.setWindowSize(Size(500, 500));
        window.setWindowPosition(Point(150, 150));
        window.setBackgroundColor(viz::Color::white());

        //cout << "rotations[0].rows:" << Rs[0].rows << endl;
        //cout << "rotations[0].cols:" << Rs[0].cols << endl;
        //cout << "translations[0].rows:" << Ts[0].rows << endl;
        //cout << "translations[0].cols:" << Ts[0].cols << endl;

        // Recovering cameras
        vector<Affine3d> path;
        for (size_t i = 0; i < Rs.size(); ++i)
            path.push_back(Affine3d(Rs[i],Ts[i]));

        // Add the pointcloud
        viz::WCloud cloud_widget(pointCloud, pointCloudColor);
        window.showWidget("point_cloud", cloud_widget);
        // Add cameras
        window.showWidget("cameras_frames_and_lines", viz::WTrajectory(path, viz::WTrajectory::BOTH, 0.1, viz::Color::black()));
        window.showWidget("cameras_frustums", viz::WTrajectoryFrustums(path, K_, 0.1, viz::Color::navy()));
        window.setViewerPose(path[0]);

        /// Wait for key 'q' to close the window
        //CV_LOG_INFO(TAG, "Press 'q' to close ... ")
        cout << "Press 'q' to close ... " << endl;

        window.spin();
    }

    void saveToMVSFile() {
        //CV_LOG_INFO(TAG, "Save reconstruction to MVS file: " + saveMVS)
        cout << "Save reconstruction to MVS file: " + saveMVS << endl;

        MVS::Interface interface;
        MVS::Interface::Platform p;

        // Add camera
        MVS::Interface::Platform::Camera c;
        const Size imgS = images[imagesFilenames[0]].size();
        c.K = Matx33d(K_);
        c.R = Matx33d::eye();
        c.C = Point3d(0,0,0);
        c.name = "Camera1";
        c.width = imgS.width;
        c.height = imgS.height;
        p.cameras.push_back(c);

        // Add views
        p.poses.resize(Rs.size());
        for (size_t i = 0; i < Rs.size(); ++i) {
            // OpenMVS stores the camera center in world coordinates: C = -R^T * t
            Mat t = -Rs[i].t() * Ts[i];
            p.poses[i].C.x = t.at<double>(0);
            p.poses[i].C.y = t.at<double>(1);
            p.poses[i].C.z = t.at<double>(2);
            Mat r; Rs[i].copyTo(r);
            Mat(r).convertTo(p.poses[i].R, CV_64FC1);

            // Add corresponding image
            MVS::Interface::Image image;
            image.cameraID = 0;
            image.poseID = i;
            image.name = imagesFilenames[i];
            image.platformID = 0;
            interface.images.push_back(image);
        }
        p.name = "Platform1";
        interface.platforms.push_back(p);

        // Add point cloud
        for (size_t k = 0; k < points3d.size(); ++k) {
            MVS::Interface::Color c;
            MVS::Interface::Vertex v;
            v.X = Vec3f(points3d[k]);

            // Reproject to see if in image bounds and get the RGB color
            Mat point3d;
            Mat(points3d[k].t()).convertTo(point3d, CV_32FC1);
            for (uint32_t j = 0; j < tracks.size(); ++j) {
                vector<Point2f> points2d(1);
                cv::projectPoints(point3d, Rs[j], Ts[j], K_, Mat(), points2d);
                if (points2d[0].x < 0 || points2d[0].x >= imgS.width ||
                    points2d[0].y < 0 || points2d[0].y >= imgS.height) {
                    continue;
                } else {
                    c.c = images[imagesFilenames[j]].at<Vec3b>(points2d[0]);
                    v.views.push_back({j, 1.0});
                }
            }

            interface.verticesColor.push_back(c);
            interface.vertices.push_back(v);
        }

        MVS::ARCHIVE::SerializeSave(interface, saveMVS);
    }

    vector<String> imagesFilenames;
    map<string, int> imageIDs;
    map<string, Mat> images;
    map<string, vector<KeyPoint> > keypoints;
    map<string, Mat> descriptors;
    map<pair<string, string>, vector<DMatch> > matches;
    vector<Mat> Rs, Ts;
    vector<Mat> points3d;
    vector<Mat> tracks;
    vector<Vec3f> pointCloud;
    vector<Vec3b> pointCloudColor;
    Matx33f K_;

    const float MATCH_RATIO_THRESHOLD = 0.8f; // Nearest neighbor matching ratio
    const float PAIR_MATCH_SURVIVAL_RATE;     // Ratio of surviving matches for a successful stereo match
    const bool visualize;                     // Show 3D visualization of the sparse cloud?
    const string saveMVS;                     // Save the reconstruction in MVS format for OpenMVS?
    const string saveCloud;                   // Save the reconstruction to a point cloud file?
    const bool saveDebugVisualizations;       // Save debug visualizations from the reconstruction process

    const string TAG = "StructureFromMotion";
};


// ./ch2_sfm.exe --viz=true --cloud=save.ply --debug=true --mvs=out.mvs D:/new_code/OpenMVS/Chapter_02/crazyhorse/
// D:\new_code\OpenMVS\build\bin\x64\Release\DensifyPointCloud.exe -i out.mvs
// D:\new_code\OpenMVS\build\bin\x64\Release\Viewer.exe -i out_dense.mvs

int main(int argc, char** argv) {
    utils::logging::setLogLevel(utils::logging::LOG_LEVEL_DEBUG);

    cv::CommandLineParser parser(argc, argv,
                                 "{help h ? |       | help message}"
                                 "{@dir     | .     | directory with image files for reconstruction }"
                                 "{mrate    | 0.5   | Survival rate of matches to consider image pair success }"
                                 "{viz      | false | Visualize the sparse point cloud reconstruction? }"
                                 "{debug    | false | Save debug visualizations to files? }"
                                 "{mvs      |       | Save reconstruction to an .mvs file. Provide filename }"
                                 "{cloud    |       | Save reconstruction to a point cloud file (PLY, XYZ and OBJ). Provide filename}"
    );

    if (parser.has("help"))
    {
        parser.printMessage();
        return 0;
    }

    StructureFromMotion sfm(parser.get<string>("@dir"),
                            parser.get<float>("mrate"),
                            parser.get<bool>("viz"),
                            parser.get<string>("mvs"),
                            parser.get<string>("cloud"),
                            parser.get<bool>("debug")
                            );
    sfm.runSfM();

    return 0;
}

 
