Learning the Key Code of the OpenSfM Open-Source Project
Preface
OpenSfM is an open-source 3D reconstruction package that implements the Structure-from-Motion pipeline in Python.
See my earlier blog post for how to use the software.
The three most important steps in Structure-from-Motion are feature detection, feature matching, and incremental reconstruction. The whole OpenSfM reconstruction can be driven by entering specific commands in a terminal. Some of the commands OpenSfM provides are:
extract_metadata # Extract metadata from images' EXIF tags
detect_features # Compute features for all images
match_features # Match features between image pairs
create_tracks # Link pair-wise matches into tracks
reconstruct # Compute the reconstruction
mesh # Add delaunay meshes to the reconstruction
undistort # Save radially undistorted images
compute_depthmaps # Compute depthmaps
export_ply # Export reconstruction to PLY format
export_openmvs # Export reconstruction to openMVS format
export_visualsfm # Export reconstruction to VisualSfM's NVM_V3 format
Feature detection, feature matching, and incremental reconstruction correspond to the detect_features, match_features, and reconstruct commands, respectively.
Below are my notes from reading the source code of these three steps.
1 Overview of the Methods Used in the Key Steps
The methods listed below are the ones OpenSfM uses by default. If you need to tune things yourself, for some steps you can also select other methods and parameters in the config file.
- Feature detection
Feature detection: covdet (VLFeat's covariant feature detector)
Descriptor construction: sift (scale-invariant feature transform)
Dependency: VLFeat
Dependency source language: C (exposed to Python through a C++ binding)
points, desc = pyfeatures.hahog(image.astype(np.float32) / 255,  # VLFeat expects pixel values between 0 and 1
                                peak_threshold=config['hahog_peak_threshold'],
                                edge_threshold=config['hahog_edge_threshold'],
                                target_num_features=config['feature_min_frames'],
                                use_adaptive_suppression=config['feature_use_adaptive_suppression'])
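The call above first converts the image to float32 and divides by 255, because VLFeat expects pixel values in [0, 1]. A minimal sketch of just that preprocessing step (the synthetic image below is a stand-in for a real photograph):

```python
import numpy as np

# Synthetic 8-bit grayscale image standing in for a real photo.
image = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

# VLFeat expects float pixel values in [0, 1], so OpenSfM converts the
# uint8 image to float32 and divides by 255 before calling hahog().
normalized = image.astype(np.float32) / 255
```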
- Feature matching: FLANN (Fast Library for Approximate Nearest Neighbors)
Dependency: OpenCV
Dependency source language: Python
- Bundle adjustment: Ceres
Dependency: Ceres
Dependency source language: C++ (requires a Python binding)
linear_solver_type_ = "SPARSE_NORMAL_CHOLESKY";
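SPARSE_NORMAL_CHOLESKY tells Ceres to solve each Gauss-Newton step by forming the normal equations J^T J δ = -J^T r and factorizing the (sparse) matrix J^T J with a Cholesky decomposition. A dense toy version of one such step, with a random Jacobian and residual standing in for a real bundle-adjustment problem:

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.random((20, 6))          # Jacobian of residuals w.r.t. parameters
r = rng.random(20)               # current residual vector

# Normal equations: (J^T J) delta = -J^T r
A = J.T @ J
b = -J.T @ r

# Cholesky factorization A = L L^T, then two triangular solves
# (Ceres does this with sparse matrices; here A is small and dense).
L = np.linalg.cholesky(A)
y = np.linalg.solve(L, b)        # forward substitution
delta = np.linalg.solve(L.T, y)  # back substitution
```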
2 Feature Detection
2.1 The detect_features command
The outermost command file, detect_features.py, which the program invokes directly, reads as follows:
# Full contents of opensfm/commands/detect_features.py
import logging
from timeit import default_timer as timer

import numpy as np

from opensfm import bow
from opensfm import dataset
from opensfm import features
from opensfm import io
from opensfm import log
from opensfm.context import parallel_map

logger = logging.getLogger(__name__)


class Command:
    name = 'detect_features'
    help = 'Compute features for all images'

    def add_arguments(self, parser):
        parser.add_argument('dataset', help='dataset to process')

    def run(self, args):
        data = dataset.DataSet(args.dataset)
        images = data.images()
        arguments = [(image, data) for image in images]

        start = timer()
        processes = data.config['processes']
        parallel_map(detect, arguments, processes, 1)  # Key call!
        end = timer()
        with open(data.profile_log(), 'a') as fout:
            fout.write('detect_features: {0}\n'.format(end - start))
        self.write_report(data, end - start)

    def write_report(self, data, wall_time):
        image_reports = []
        for image in data.images():
            try:
                txt = data.load_report('features/{}.json'.format(image))
                image_reports.append(io.json_loads(txt))
            except IOError:
                logger.warning('No feature report image {}'.format(image))
        report = {
            "wall_time": wall_time