[Paper Reading] 29 - Reconstructing Small 3D Objects in front of a Textured Background

Table of Contents

1. introduction

2. related work

3. Problem formulation

4. Method

4.1 Step1: Reconstruction of the takes

4.2 Step2: Track building

4.3 Step3: Registration of cameras from other takes

4.3.1 Pose sets

4.3.2 Grouping poses

4.4 Step4: Point segmentation

4.4.1 Grouping points (labeling)

4.4.2 Merging point-set segmentations

4.5 Step5: Reconstruction merging

5. Experiments

6. Appendix


1. introduction

  1. The scene is captured in several static configurations: semi-dynamic acquisition of the images of the scene.

  2. Points are segmented by finding multiple poses of the cameras that capture the scene's other configurations.

  3. The reconstructions are merged into the resulting model of the scene.

  4. Goal: complete models of small objects.

  5. Two independently moving parts: translational motion && planar motion.

  6. Often, this is done by running an SfM pipeline on images of small objects presented from all sides on a featureless background [39].

  7. Capturing a structured (textured) background together with the object enables background segmentation.

  8. Works with unordered and unstructured images.

  9. Avoids segmenting the 2D tracks, which is difficult.

  10. The presence of the background in the reconstruction step may significantly improve the result.

  11. Open question: how is the dense reconstruction done?


2. related work

  1. video input: works [15, 57, 41, 48, 38, 44] and [14, 45, 24, 23, 66, 16]

  • In [38], 3D reconstruction and motion segmentation are performed simultaneously

  • Work [24] performs SfM with additional feedback from motion segmentation and from a particle filter

  • Non-rigid motions are addressed in [45, 23, 66, 16].

2. factorization, e.g., [62, 72, 64, 42, 30, 12, 27, 20, 8, 28, 46, 22]

requires track completion (not suitable here)

3. evaluate the distances between each pair of points

  • not suitable for large scenes

  • limited by computer memory

  • [29, 26, 71]

4. Reconstructing each object individually: scale consistency is not guaranteed.

5. Motion segmentation: clusters points into different motions

(1) Two views

  • consensus analysis [70, 65, 75, 35]

  • preference analysis [74, 58, 7, 32, 40, 34]

  • energy minimization [19, 3]

  • branch and bound [56],

  • subspace segmentation [63],

  • information theory [60]

(2) Three views: [64].

(3) Multiview MS: subspace segmentation [62, 72, 64, 42, 30, 12, 27, 20]; requires complete tracks, but in practice points are observed by only a few cameras.

6. Work [25] assumes that the motion is known a priori, which is inconsistent with a moving object here.

7. Work [8] assumes affine projection.

8. Work [73] requires the motion segmentation as its input; it outputs a corrected segmentation together with the 3D model.

9. Most closely related work:

  • [53] Filip Srajer. Image matching for dynamic scenes. Master's thesis, Czech Technical University in Prague, 2016.

  • Unfortunately, due to the greedy nature of the grouping algorithm, [53] typically fails if there is no motion between some images


3. Problem formulation

  1. About the motion

2. About the cameras: perspective camera model (see the projection sketch after this list)

3. task

(1) segment

(2) reconstruct
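To make the assumed perspective camera model concrete, here is a minimal projection sketch; the symbols K, R, t, X are generic pinhole-camera notation for illustration, not taken from the paper.

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D point X with a perspective camera: x ~ K (R X + t)."""
    x = K @ (R @ X + t)      # homogeneous image coordinates
    return x[:2] / x[2]      # dehomogenize to pixel coordinates

# Toy usage with made-up intrinsics and a camera 5 units from the origin.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
print(project(K, R, t, np.array([0.1, -0.2, 1.0])))
```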

4. Method

  1. Pipeline: sparse points for each take → pose analysis across takes → global background/foreground (B/F) segmentation → point-cloud registration across takes (B and F registered separately, aligned to the first take).

4.1 Step1: Reconstruction of the takes

  1. Sparse reconstruction of each take.

  2. Every take has its own local coordinate system.

4.2 Step2: Track building

  1. Corresponding points are identified across takes based on the 2D images (a track-building sketch follows).
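A minimal sketch of how such tracks could be built by chaining pairwise 2D feature matches with union-find; the (image_id, keypoint_id) match format and the helper names are illustrative assumptions, not the paper's implementation.

```python
# Build tracks by chaining pairwise feature matches with union-find.
# A "feature" is identified by (image_id, keypoint_id); matches is a list of
# ((img_a, kp_a), (img_b, kp_b)) pairs. This data layout is an assumption.

parent = {}

def find(f):
    parent.setdefault(f, f)
    while parent[f] != f:
        parent[f] = parent[parent[f]]   # path halving
        f = parent[f]
    return f

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

def build_tracks(matches):
    for fa, fb in matches:
        union(fa, fb)
    tracks = {}
    for f in parent:
        tracks.setdefault(find(f), []).append(f)
    # Each track is the set of 2D observations of one tentative 3D point.
    return list(tracks.values())

matches = [((0, 5), (1, 7)), ((1, 7), (2, 3)), ((0, 9), (2, 11))]
print(build_tracks(matches))   # one track of length 3, one of length 2
```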


4.3 Step3: Registration of cameras from other takes

Thus, we aim at grouping the observations of the found poses such that one group will contain the observations of the background and the other group will contain the observations of the foreground.

4.3.1 Pose sets

Sequential RANSAC [65]: fit one pose, remove its inliers, and repeat on the remaining data (see the sketch below).
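A minimal sketch of sequential RANSAC for absolute camera pose, assuming 2D-3D correspondences and OpenCV's solvePnPRansac; the thresholds, stopping rule, and parameter values are illustrative, not the paper's settings.

```python
import numpy as np
import cv2

def sequential_ransac_poses(pts3d, pts2d, K, min_inliers=30, max_models=4):
    """Fit one camera pose at a time with RANSAC-PnP, remove its inliers,
    and repeat on the remaining 2D-3D correspondences."""
    remaining = np.arange(len(pts3d))
    poses = []
    while len(remaining) >= min_inliers and len(poses) < max_models:
        ok, rvec, tvec, inl = cv2.solvePnPRansac(
            pts3d[remaining].astype(np.float64),
            pts2d[remaining].astype(np.float64),
            K, None, reprojectionError=2.0)
        if not ok or inl is None or len(inl) < min_inliers:
            break
        inl = remaining[inl.ravel()]               # back to original indexing
        poses.append({"rvec": rvec, "tvec": tvec, "inliers": inl})
        remaining = np.setdiff1d(remaining, inl)   # next pose fits the rest
    return poses
```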

4.3.2 Grouping poses

See the [Local observation grouping] section below.

  1. Uses a linkage procedure similar to [33, 59]:

  • [33] L. Magri and A. Fusiello. T-linkage: a continuous relaxation of j-linkage for multi-model fitting. Computer Vision and Pattern Recognition, pages 3954–3961, 2014.

  • [59] R. Toldo and A. Fusiello. Robust multiple structures estimation with j-linkage. European Conference on Computer Vision, volume 5302, pages 537–547, 2008.

2. To group the clusters, we use the fact that if two cameras observe the same points, they observe the same object (see the sketch below).
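A minimal sketch of this idea: pose clusters are merged agglomeratively whenever they share enough observed 3D points. The set representation and overlap threshold are assumptions for illustration and only approximate the linkage criterion of [33, 59].

```python
def group_poses(observations, min_shared=10):
    """observations: list of sets of 3D point ids observed (as inliers) by each pose.
    Greedily merge groups of poses that share at least `min_shared` points."""
    groups = [{i} for i in range(len(observations))]
    points = [set(obs) for obs in observations]
    merged = True
    while merged:
        merged = False
        for a in range(len(groups)):
            for b in range(a + 1, len(groups)):
                if len(points[a] & points[b]) >= min_shared:
                    # Cameras sharing points observe the same object: merge groups.
                    groups[a] |= groups[b]
                    points[a] |= points[b]
                    del groups[b], points[b]
                    merged = True
                    break
            if merged:
                break
    return groups

obs = [set(range(0, 50)), set(range(40, 90)), set(range(200, 250))]
print(group_poses(obs))   # [{0, 1}, {2}]
```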


4.4 Step4: Point segmentation

4.4.1 Grouping points (labeling)

1. The two groups with the most poses define two point sets, one from the background (B) and one from the foreground (F).

2. Given: P_B (poses relative to the background) and P_F (poses relative to the foreground).

A point is labeled B or F according to which poses it is an inlier of (smallest reprojection error); same idea as my own approach. See the sketch after this list.

3. No explicit distinction between B and F is made at this point.

4. For each take, two point sets (G1, G2) are formed.
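A minimal sketch of the inlier-based labeling described above: each 3D point is compared against the background-registered poses P_B and the foreground-registered poses P_F via its reprojection errors. The projection helper and the 2 px threshold are assumptions for illustration.

```python
import numpy as np

def reproj_error(K, R, t, X, x_obs):
    """Reprojection error (pixels) of 3D point X observed at pixel x_obs."""
    p = K @ (R @ X + t)
    return np.linalg.norm(p[:2] / p[2] - x_obs)

def label_point(K, poses_B, poses_F, X, observations, thr=2.0):
    """observations: {camera_id: observed pixel (2,)}.
    poses_B / poses_F: {camera_id: (R, t)} registered to background / foreground.
    Returns 'B', 'F', or 'U' (unknown) for one 3D point."""
    err_B = [reproj_error(K, *poses_B[c], X, x) for c, x in observations.items() if c in poses_B]
    err_F = [reproj_error(K, *poses_F[c], X, x) for c, x in observations.items() if c in poses_F]
    in_B = sum(e < thr for e in err_B)
    in_F = sum(e < thr for e in err_F)
    if in_B > in_F:
        return 'B'
    if in_F > in_B:
        return 'F'
    return 'U'   # ambiguous points stay unlabeled for the merging step (4.4.2)
```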

4.4.2 Merging point-set segmentations

  1. First pass: linkage criterion (10) or (11); points get labels B/F/U, with U (unknown) forming the majority.

  2. Second pass: k-nearest-neighbor labeling of the remaining unknown points (see the sketch below).
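A minimal sketch of the second pass: unknown (U) points take the majority label of their k nearest labeled neighbors. The value of k and the use of scipy's cKDTree are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def propagate_labels(points, labels, k=5):
    """points: (N, 3) array; labels: list of 'B', 'F', or 'U'.
    Give each 'U' point the majority label of its k nearest labeled neighbors."""
    labels = np.array(labels)
    known = np.where(labels != 'U')[0]
    unknown = np.where(labels == 'U')[0]
    tree = cKDTree(points[known])
    _, idx = tree.query(points[unknown], k=min(k, len(known)))
    idx = idx.reshape(len(unknown), -1)      # handle k == 1 uniformly
    out = labels.copy()
    for u, nbrs in zip(unknown, idx):
        votes = labels[known[nbrs]]
        out[u] = 'B' if (votes == 'B').sum() >= (votes == 'F').sum() else 'F'
    return out

pts = np.array([[0, 0, 0], [0.1, 0, 0], [5, 5, 5], [5.1, 5, 5], [0.05, 0, 0]])
print(propagate_labels(pts, ['B', 'B', 'F', 'F', 'U']))
```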


4.5 Step5: Reconstruction merging

  1. Essentially point-cloud registration via least squares.

  2. Bundle adjustment (F and B either use different camera poses, or P_F = P_B + P_O). A least-squares alignment sketch follows.
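A minimal sketch of the least-squares registration step using the standard Procrustes/Umeyama closed form for a similarity transform between corresponding point sets; the inclusion of scale and the toy check are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def similarity_align(src, dst):
    """Least-squares similarity transform (s, R, t) mapping src -> dst.
    src, dst: (N, 3) arrays of corresponding 3D points."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)                     # cross-covariance (up to 1/N)
    D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])   # avoid reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.sum(A ** 2)
    t = mu_d - s * R @ mu_s
    return s, R, t

# Toy check: recover a known similarity transform from noiseless correspondences.
rng = np.random.default_rng(0)
src = rng.normal(size=(20, 3))
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1
dst = 2.0 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = similarity_align(src, dst)
print(round(s, 3), np.allclose(R, R_true), np.round(t, 3))
```

The SVD-based closed form avoids iterative optimization and is the usual first step before a joint bundle adjustment refines all poses and points.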

5. Experiments

  1. Public datasets

(1) ETH3D dataset

Stereo images captured during motion, equivalent to a semi-dynamic setup with multiple takes (each take contains only two images).

(2) https://www.eth3d.net/slam_datasets

2. Comparison with existing methods from several angles:

(1) Motion segmentation methods: validates the paper's segmentation approach (works on unordered, few images with sparse tracks).

(2) Single-body SfM methods: validates that including the background helps reconstruct small objects.

Comparison metrics:

  • the number of reconstructed points and

  • the median of the reprojection error

(3) Computation time

6. Appendix

  1. Paper code: https://github.com/petrhruby97/TBSfM
