Building mesh and texture in MeshLab using point cloud produced by stereo reconstruction

Sorry everyone, I really do not have time to write a Chinese version, so I am posting the English version I have already written. :)

 

 

The point cloud from a stereo reconstruction can be used in many ways. One of them is to generate a 3D mesh and, further, a 3D model of the object in the real world. In this post, I will share my experience with mesh generation and texturing.

 

These days, I was working on generating meshes from 3D point clouds obtained from stereo reconstruction. The tools I am using are as follows:

 

 

  • Stereo calibration and reconstruction by OpenCV.

  • Mesh generation and texturing by MeshLab.

 

 

This post is arranged as follows:

 

 

  • Exporting the point cloud as a PLY file.

  • Composing a MeshLab project (.mlp) file.

  • Manual mesh generation and texturing.

  • Batch mesh generation and texturing.

 

 

I went through a lot of trial-and-error loops as I walked through these processes. I would like to share those experiences here, because somebody else may be working on a similar project and feel frustrated that there are not enough tutorials to simply watch and learn from.

 

 

For this document, I was using Ubuntu 16.04. The MeshLab versions are V1.3.2_64bit (Feb 19 2016) and MeshLab_64bit_fp v2016.12. Sorry for mixing the versions; I was working on my laptop and desktop at the same time, that is, working at the lab and working from home.

 

1 Exporting the point cloud as a PLY file

In the sample code of OpenCV, the 3D point cloud is written out as a PLY file. There are a couple of things we should take care of when doing this.
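As a concrete reference, here is a minimal sketch of such a writer in Python with NumPy, in the spirit of the write_ply() helper in OpenCV's stereo_match.py sample (the exact function name and array layout here are my own, not the sample's):

```python
import numpy as np

# ASCII PLY header with per-vertex position and color; {n} is filled
# in with the vertex count when writing.
PLY_HEADER = """\
ply
format ascii 1.0
element vertex {n}
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header
"""

def write_ply(path, verts, colors):
    """Write an (N, 3) float vertex array and (N, 3) uint8 color array."""
    verts = np.asarray(verts).reshape(-1, 3)
    colors = np.asarray(colors).reshape(-1, 3)
    data = np.hstack([verts, colors])
    with open(path, "w") as f:
        f.write(PLY_HEADER.format(n=len(verts)))
        np.savetxt(f, data, fmt="%f %f %f %d %d %d")
```

MeshLab opens a file written this way directly as a colored point cloud (File > Import Mesh).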

 

1.1 Reprojecting to 3D space

 

If the OpenCV function reprojectImageTo3D() is used, we need the Q matrix produced by the stereoRectify() function. This may mean that we have to do the calibration ourselves with OpenCV. Another thing that matters is that, as learned from the sample code of OpenCV, certain elements of Q should be modified to make the 3D point cloud lie along the right direction. If zero-based indexing is used, the following elements should be modified:

 

Q[1, 1] *= -1
Q[1, 3] *= -1
Q[2, 3] *= -1

 

The modified Q matrix gives the point cloud a Y-axis parallel to the global Y-axis.

 

 

1.2 Dimension check

 

It is always a good idea to check dimensions inside MeshLab to see if the reprojected point cloud has the right spatial size. I use the “Measuring Tool” shown in figure 1 to measure the known distance between two points.

 

 

Figure 1 The “Measuring Tool” of MeshLab.

 

1.3 Per-vertex information and UV coordinate

 

Sometimes we would like the PLY file to contain additional information. I faced two issues when I was trying to figure out what I could put inside a PLY file. One is what kind of information is allowed in a PLY file. The other is what information MeshLab expects, or could make use of. Unfortunately, I have a clear answer to neither of them.

 

First of all, it seems to me that the PLY file format only constrains the data types we could use, not the data itself. You could put in anything you want, as long as the data type is declared and the downstream program can recognize the data. But still, we would always like a list that makes clear what “pre-defined” data people are actually using.

 

 

After some searching, I found some useful information. A MATLAB documentation page summarizes the data types often encountered in a PLY file. I think MeshLab can recognize most of them, but I did not test all of them. Particularly for our project, we want to bake a texture onto a mesh. The pipeline looks like this:

 

 

Table 1 Processing pipeline of this project.

 

Stereo images -> disparity map -> reprojected 3D point cloud -> mesh -> textured mesh

 

The mesh and texture will be generated in MeshLab. So it becomes important how we provide information about the texture along with the point cloud, before the mesh is generated. In the beginning, we were looking for methods to do some sort of manual UV mapping. The idea is simply that, since the point cloud is generated by stereo reconstruction, we could just use the image from the base camera as the texture. The UV coordina
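For reference, here is a sketch of a PLY writer that stores one UV pair per vertex plus the name of the texture image. The "comment TextureFile" line and the texture_u/texture_v property names follow what MeshLab itself emits for textured PLY files, but treat both as assumptions and verify them against your own MeshLab build:

```python
import numpy as np

def write_ply_with_uv(path, verts, uvs, texture_file):
    # verts: (N, 3) float positions; uvs: (N, 2) floats in [0, 1]
    # referring to texture_file (e.g. the base camera image).
    verts = np.asarray(verts).reshape(-1, 3)
    uvs = np.asarray(uvs).reshape(-1, 2)
    header = (
        "ply\n"
        "format ascii 1.0\n"
        f"comment TextureFile {texture_file}\n"
        f"element vertex {len(verts)}\n"
        "property float x\n"
        "property float y\n"
        "property float z\n"
        "property float texture_u\n"
        "property float texture_v\n"
        "end_header\n"
    )
    with open(path, "w") as f:
        f.write(header)
        np.savetxt(f, np.hstack([verts, uvs]), fmt="%f")
```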
