The power of photogrammetry: Simulating the real world in VR

Get a behind-the-scenes look at a project made with Unity from Varjo, who used photogrammetry and dynamic lighting to create a realistic and lifelike environment in virtual reality (VR).

The applications of photogrammetry – the process of using multiple photos of real-world objects or spaces to author digital assets – run the gamut. Photogrammetry has not only gained traction in the gaming world, but also in the industrial market.

For instance, point clouds generated by photogrammetry have become integral to architecture, engineering, and construction (AEC) workflows. And across automotive, transportation, and manufacturing, capturing a physical prototype via photogrammetry and comparing it to its digital CAD model ensures vision matches reality.

To better simulate real-world environments and showcase the potential of photogrammetry for professional use, the Varjo team recently completed a photogrammetric scan of the largest cemetery in Japan and showed it as a digital twin in VR. We invited them to share in their own words how they tackled this ambitious project. 

Making of the Koyasan Okunoin Cemetery scene

With the Varjo VR-1, exploring the finest details of buildings, construction sites, or other spaces is possible for the first time in human-eye resolution VR. 20/20 resolution expands the use cases of photogrammetry in VR for industrial use.

To illustrate the potential of dynamic, human-eye resolution VR photogrammetry, we at Varjo created a dynamic demo of one of Japan’s holiest places, the Okunoin Cemetery at Mount Koya. In this article, we explain how it was done.

Capturing the photogrammetry location

This section was written by Jani Ylinen, 3D Photogrammetry Specialist at Varjo.

Photogrammetry starts with choosing the proper capture location or target object. Not all places or objects are suitable for photogrammetry capture. We chose to capture an old cemetery at Mount Koya in Japan because we wanted to do something culturally significant in addition to having lots of details to explore in the demo. Since this was an outdoor capture, the conditions were very challenging to control. But here at Varjo, we like challenges.

The key challenges in this capture were:

  1. Movement. The Okunoin Cemetery at Koyasan is big and ancient. Surprisingly many tourists visit it every day, and a camera on a tripod was a real people magnet. But when doing photogrammetry, the scene you’re capturing should be completely still and static, with nothing moving around. This can be problematic when capturing anything large, because even if the object itself is not moving, the light source (the sun) is. If the shoot takes a few hours, the shadows may change a lot.

  2. Weather. When you do an outdoor capture, the weather should be overcast. It cannot, of course, rain during the capture or right before it. Wet surfaces look different from dry ones, and the scene should look the same throughout the shoot.

  3. Ground. The cemetery floor in the chosen location was very difficult to capture, as it was covered with short pine branches and twigs that moved as we walked around them.

When taking the photos of a photogrammetry scene, a general rule is that each picture should overlap the neighboring picture by at least 30%. The main goal is to photograph the target from as many angles as possible while keeping the images overlapping.
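
As a rough illustration of what that overlap rule implies, the back-of-the-envelope sketch below estimates how far the camera can move between shots. It assumes a simple pinhole camera with a known horizontal field of view and a roughly constant distance to the subject; the numbers are purely illustrative.

```python
import math

def max_step_between_shots(distance_m: float, hfov_deg: float, overlap: float = 0.3) -> float:
    """Rough estimate of how far the camera may move sideways between shots
    while keeping the requested overlap with the previous photo."""
    # Width of the ground footprint covered by a single shot at this distance.
    footprint = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    return footprint * (1.0 - overlap)

# Example: a full-frame camera with a 24 mm lens (~74 degree horizontal FOV),
# shooting gravestones from about 2 m away with 30% overlap.
step = max_step_between_shots(distance_m=2.0, hfov_deg=74.0, overlap=0.30)
print(f"Move at most ~{step:.2f} m between shots")  # roughly 2.1 m
```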

The area captured at Koyasan was scanned in much the same way as one would scan a room. For this scene, about 2,500 photographs were taken.

Building the dynamic 3D scene with Unity

This section was written by Juhani Karlsson, Senior 3D Artist at Varjo and a former Visual Effects Artist at Unity.  

Photogrammetry delivers realistic immersion, but its static lighting often narrows down the realistic use cases. We wanted to use dynamic lighting to simulate a realistic environment. Unity provides a great platform for constructing and rendering highly detailed scenes, which made it easy to automate the workflow.

We also used the excellent De-Lighting tool and the Unity Asset Store to help us fill the gaps where needed. Some trees and stones from Unity’s fantastic Book of the Dead assets were also used.

While shooting the site, we transferred files constantly so we could save time in the 3D reconstruction. First, we used software called Reality Capture to create a 3D scene from the photographs.

Mesh processing and UVs

The 3D scene was exported from Reality Capture as a single 10-million-polygon mesh with a set of 98 8K textures.
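
To get a feel for why that raw export needs further processing, here is a quick back-of-the-envelope estimate of the texture memory involved, assuming uncompressed 8-bit RGBA and ignoring mipmaps and GPU texture compression (the 128 4K textures of the final set are mentioned later in this article):

```python
BYTES_PER_PIXEL = 4  # uncompressed 8-bit RGBA; ignores mipmaps and GPU compression

def texture_set_gib(count: int, resolution: int) -> float:
    """Approximate memory footprint of `count` square textures, `resolution` pixels per side."""
    return count * resolution * resolution * BYTES_PER_PIXEL / 2**30

print(f"Raw export, 98 x 8K:  ~{texture_set_gib(98, 8192):.1f} GiB")   # ~24.5 GiB
print(f"Final set, 128 x 4K:  ~{texture_set_gib(128, 4096):.1f} GiB")  # ~8.0 GiB
```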

In Houdini, the mesh was run through a Voronoi Fracture, which splits the mesh into smaller, more manageable pieces. Different levels of detail (LOD) were then generated with shared UVs. This was done to avoid texture popping between LODs.
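
The fracture-and-reduce step can be wired up from Houdini's Python API roughly as follows. This is only a minimal sketch of the idea, not Varjo's actual setup: it skips the shared-UV handling, the node and parameter names follow a recent Houdini release and may differ between versions, and the file path is hypothetical.

```python
import hou

obj = hou.node("/obj")
geo = obj.createNode("geo", "scan_processing")

# Load the photogrammetry mesh (hypothetical path).
scan = geo.createNode("file", "scan_mesh")
scan.parm("file").set("$HIP/geo/okunoin_scan.bgeo.sc")

# Seed points drive the size of the Voronoi cells: more points, smaller pieces.
seeds = geo.createNode("scatter", "fracture_seeds")
seeds.setInput(0, scan)
seeds.parm("npts").set(200)

# Split the scan into manageable chunks.
fracture = geo.createNode("voronoifracture", "split_into_chunks")
fracture.setInput(0, scan)
fracture.setInput(1, seeds)

# LOD1: a reduced copy of the fractured mesh; lower LODs would reduce further.
lod1 = geo.createNode("polyreduce::2.0", "lod1")
lod1.setInput(0, fracture)
lod1.parm("percentage").set(25)  # keep roughly 25% of the polygons

geo.layoutChildren()
```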

That way, the textures were small enough for Unity to handle, and we could get the Umbra occlusion culling working. Generating UVs was also lighter when the pieces were smaller.

A shader was created to bake out the different textures. Unity’s De-Lighting tool requires at least albedo, ambient occlusion, normal, bent normal, and position maps. Most of these buffers are straightforward to bake out of the box, but bent normals are not so obvious. Luckily, bent normals are simply the average direction of the missed (unoccluded) occlusion rays, and there is a simple VEX function called occlusion() that basically outputs bent normals.
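
For readers unfamiliar with bent normals, the small sketch below illustrates the idea numerically, independent of Houdini or VEX: sample directions over the hemisphere around the surface normal, discard the occluded ones, and average what is left. The is_occluded function is only a stand-in for a real ray cast against the scene.

```python
import numpy as np

def sample_hemisphere(normal: np.ndarray, count: int, rng: np.random.Generator) -> np.ndarray:
    """Sample `count` unit directions in the hemisphere around `normal`."""
    dirs = rng.normal(size=(count, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    dirs[dirs @ normal < 0.0] *= -1.0  # flip samples into the upper hemisphere
    return dirs

def is_occluded(direction: np.ndarray) -> bool:
    """Placeholder for a real ray cast. Here: a wall blocks everything with x < 0."""
    return direction[0] < 0.0

def bent_normal(normal: np.ndarray, samples: int = 1024, seed: int = 0) -> np.ndarray:
    """The bent normal: average of the unoccluded sample directions."""
    rng = np.random.default_rng(seed)
    dirs = sample_hemisphere(normal, samples, rng)
    open_dirs = np.array([d for d in dirs if not is_occluded(d)])
    mean = open_dirs.mean(axis=0)
    return mean / np.linalg.norm(mean)

print(bent_normal(np.array([0.0, 1.0, 0.0])))  # leans toward +x, away from the occluding wall
```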

De-Lighting

We created a Python script to automatically run the textures through the batch script provided by the Unity De-Lighting tool.
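
Such a batching script could look roughly like the sketch below. It is not the actual Varjo script: the batch script location, its command-line arguments, and the folder layout are all assumptions, since the De-Lighting tool's exact batch interface is not described here.

```python
import subprocess
from pathlib import Path

# Hypothetical locations; adjust to wherever the De-Lighting batch script
# and the baked texture sets actually live.
DELIGHT_BATCH = Path("Tools/DeLighting/run_delighting.bat")
TEXTURE_ROOT = Path("Bakes")

def delight_all(root: Path) -> None:
    """Run every baked texture set through the (assumed) De-Lighting batch script."""
    for tile_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        print(f"De-lighting {tile_dir.name} ...")
        # Assumed interface: the batch script takes an input folder and an output folder.
        subprocess.run([str(DELIGHT_BATCH), str(tile_dir), str(tile_dir / "delit")], check=True)

if __name__ == "__main__":
    delight_all(TEXTURE_ROOT)
```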

If the scan has a lot of color variation, the De-Lighting tool has trouble estimating the environment probe. Therefore, we settled on a mixed approach, blending the automatic De-Lighting result with traditional image-based shadow removal.
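
Conceptually, that mix can be a simple per-pixel blend between the two de-lit albedos, driven by a mask. The sketch below only illustrates the idea; the file names and the way the mask is authored are assumptions.

```python
import numpy as np
from PIL import Image

def load_rgb(path: str) -> np.ndarray:
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0

# Hypothetical inputs: the automatic De-Lighting result, a manually
# shadow-removed version, and a grayscale mask (white = trust the automatic result).
auto_delit = load_rgb("albedo_auto_delit.png")
manual_delit = load_rgb("albedo_manual_delit.png")
mask = np.asarray(Image.open("blend_mask.png").convert("L"), dtype=np.float32)[..., None] / 255.0

blended = mask * auto_delit + (1.0 - mask) * manual_delit
Image.fromarray((blended * 255.0 + 0.5).astype(np.uint8)).save("albedo_blended.png")
```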

A Unity asset post-processing script was made to import the processed models. It handled the material creation and texture assignment. A total of 128 4K textures were processed, baked, and de-lighted.

Before and after De-Lighting

Varjo VR-1 and Unity – Easy integration

Once the scene was imported, it was just a matter of dragging the VarjoUser Prefab into the scene. Instantly, the scene was viewable with the VR-1, and we could start tweaking it to match our needs.

The Unity asset Enviro was used for the day-night cycle, and the real-time global illumination was baked into the scene. The generated mesh UVs were used for the global illumination to avoid long preprocessing times. The settings were chosen so that the lightmapper would do minimal work on the UVs; this can be done by enabling UV optimization on the meshes and adjusting the settings.

-

Our thanks to Varjo for sharing this guest post with our community; learn more about photogrammetry in Unity. Varjo will be exhibiting and presenting at Unite Copenhagen.

Register for Unite Copenhagen today 

Translated from: https://blogs.unity3d.com/2019/08/01/the-power-of-photogrammetry-simulating-the-real-world-in-vr/
