VR Series - Oculus Rift Developer Guide: 3. Rendering on the Oculus Rift (Part 4)

Frame Rendering

Frame rendering typically involves several steps: obtaining predicted eye poses based on the tracked headset pose, rendering the view for each eye, and finally submitting the eye textures to the compositor through ovr_SubmitFrame. After the frame is submitted, the Oculus compositor handles distortion and presents the result on the Rift.

Before rendering frames, it is helpful to initialize some data structures that can be shared across frames. As an example, we query the eye render descriptors and initialize the layer structure outside of the rendering loop:

// Initialize VR structures, filling out the descriptions.
ovrEyeRenderDesc eyeRenderDesc[2];
ovrVector3f hmdToEyeViewOffset[2];
ovrHmdDesc hmdDesc = ovr_GetHmdDesc(session);
eyeRenderDesc[0] = ovr_GetRenderDesc(session, ovrEye_Left, hmdDesc.DefaultEyeFov[0]);
eyeRenderDesc[1] = ovr_GetRenderDesc(session, ovrEye_Right, hmdDesc.DefaultEyeFov[1]);
hmdToEyeViewOffset[0] = eyeRenderDesc[0].HmdToEyeViewOffset;
hmdToEyeViewOffset[1] = eyeRenderDesc[1].HmdToEyeViewOffset;
// Initialize our single full-screen FOV layer.
ovrLayerEyeFov layer;
layer.Header.Type = ovrLayerType_EyeFov;
layer.Header.Flags = 0;
layer.ColorTexture[0] = pTextureSet;
layer.ColorTexture[1] = pTextureSet;
layer.Fov[0] = eyeRenderDesc[0].Fov;
layer.Fov[1] = eyeRenderDesc[1].Fov;
layer.Viewport[0] = Recti(0, 0, bufferSize.w / 2, bufferSize.h);
layer.Viewport[1] = Recti(bufferSize.w / 2, 0, bufferSize.w / 2, bufferSize.h);
// layer.RenderPose and layer.SensorSampleTime are updated later, once per frame.

This code first creates the render descriptors for each eye, given the chosen FOV. The returned ovrEyeRenderDesc structure contains values that are useful for rendering, including the HmdToEyeViewOffset for each eye. The eye view offsets are used later to adjust for eye separation.

The code also initializes the ovrLayerEyeFov structure for a full-screen layer. Starting with Oculus SDK 0.6, frame submission uses layers to composite multiple view images or texture quads on top of each other. This example uses a single layer to present the VR scene. For this purpose we use ovrLayerEyeFov, which describes a dual-eye layer covering the entire field of view of both eyes. Since the same texture set is used for both eyes, both eye color textures are initialized to pTextureSet, and the viewports are configured so that the left and right eyes draw to the left and right halves of this shared texture, respectively.

Note: Although it is often enough to initialize the viewports once at the beginning, specifying them as part of the layer structure that is submitted every frame allows applications to change the render target size dynamically, if desired. This is useful for optimizing rendering performance.
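
For example, here is a minimal sketch of shrinking the viewports for one frame when the GPU is under load. The pixelDensity factor is a hypothetical application-chosen value; Recti and bufferSize are the same helpers used in the setup snippet above:

float pixelDensity = 0.8f; // assumption: chosen by the app from its own performance metric
int scaledW = int(bufferSize.w / 2 * pixelDensity);
int scaledH = int(bufferSize.h * pixelDensity);
// Shrink the sub-rectangles submitted with the layer; both eyes still render
// into the same shared texture, and the compositor samples only these regions.
layer.Viewport[0] = Recti(0, 0, scaledW, scaledH);
layer.Viewport[1] = Recti(bufferSize.w / 2, 0, scaledW, scaledH);

Because the eye-rendering code below sets the GPU viewport from layer.Viewport[eye], the smaller rectangles are picked up automatically both during rendering and by the compositor when the layer is submitted.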

After setup completes, the application can run the rendering loop. First, we need the eye poses in order to render the left and right views.

// Get both eye poses simultaneously, with the IPD offset already included.
double displayMidpointSeconds = ovr_GetPredictedDisplayTime(session, 0);
ovrTrackingState hmdState = ovr_GetTrackingState(session, displayMidpointSeconds, ovrTrue);
ovr_CalcEyePoses(hmdState.HeadPose.ThePose, hmdToEyeViewOffset, layer.RenderPose);
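
The setup comment noted that layer.SensorSampleTime is also updated per frame. A minimal one-line sketch of recording it at this point, assuming the SDK version in use exposes a SensorSampleTime field on ovrLayerEyeFov and the ovr_GetTimeInSeconds helper:

// Timestamp of when the poses were sampled; the runtime can use this for latency reporting.
layer.SensorSampleTime = ovr_GetTimeInSeconds();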

In VR, the rendered eye views depend on the position and orientation of the headset in physical space, tracked with the help of the internal IMU and external trackers. Prediction is used to compensate for latency in the system, giving the best estimate of where the headset will be at the moment the frame is displayed on it. In the Oculus SDK, this tracked, predicted pose is reported by ovr_GetTrackingState.

To make an accurate prediction, ovr_GetTrackingState needs to know when the current frame will actually be displayed. The code above calls ovr_GetPredictedDisplayTime to obtain displayMidpointSeconds for the current frame and uses it to compute the best predicted tracking state. The head pose from the tracking state is then passed to ovr_CalcEyePoses to calculate the correct view pose for each eye. These poses are stored directly into the layer.RenderPose[2] array. With the eye poses ready, we can proceed to the actual frame rendering.

if (isVisible)
{
    // Increment to use the next texture, just before writing
    pTextureSet->CurrentIndex = (pTextureSet->CurrentIndex + 1) % pTextureSet->TextureCount;
    // Clear and set up the render target
    DIRECTX.SetAndClearRenderTarget(pTexRtv[pTextureSet->CurrentIndex], pEyeDepthBuffer);
    // Render the scene to the eye buffers
    for (int eye = 0; eye < 2; eye++) {
        // Get view and projection matrices for the Rift camera
        Vector3f pos = originPos + originRot.Transform(layer.RenderPose[eye].Position);
        Matrix4f rot = originRot * Matrix4f(layer.RenderPose[eye].Orientation);
        Vector3f finalUp = rot.Transform(Vector3f(0, 1, 0));
        Vector3f finalForward = rot.Transform(Vector3f(0, 0, -1));
        Matrix4f view = Matrix4f::LookAtRH(pos, pos + finalForward, finalUp);
        Matrix4f proj = ovrMatrix4f_Projection(layer.Fov[eye], 0.2f, 1000.0f, ovrProjection_RightHanded);
        // Render the scene for this eye
        DIRECTX.SetViewport(layer.Viewport[eye]);
        roomScene.Render(proj * view, 1, 1, 1, 1, true);
    }
}
// Submit the frame with the single layer we have.
ovrLayerHeader* layers = &layer.Header;
ovrResult result = ovr_SubmitFrame(session, 0, nullptr, &layers, 1);
isVisible = (result == ovrSuccess);

This code takes a number of steps to render the scene:

  • First, it increments CurrentIndex to point to the next texture within the output texture set. CurrentIndex must be advanced in round-robin fashion every time a new frame is drawn.
  • It then binds that texture as the render target and clears it for rendering. In this case, the same texture is used for both eyes.
  • The code then computes the view and projection matrices and sets the viewport for each eye before rendering the scene. In this example, the view calculation combines the original pose (the originPos and originRot values) with the new pose computed from the tracking state and stored in the layer. The original values can be modified by input to move the player within the 3D world.
  • After texture rendering is complete, ovr_SubmitFrame is called to pass the frame data to the compositor. From this point on, the compositor takes over, accessing the texture data through shared memory, distorting it, and presenting it on the Rift.

ovr_SubmitFrame returns once the frame present has been queued up and the next texture slot in the ovrSwapTextureSet is available for the next frame. When successful, its return value is either ovrSuccess or ovrSuccess_NotVisible.

ovrSuccess_NotVisible is returned when the frame was not actually displayed, which can happen when the VR application loses focus. The sample handles this case by updating the isVisible flag, which is checked by the rendering logic: while frames are not visible, eye rendering is paused to avoid unnecessary GPU load.
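
As a minimal sketch, the single isVisible assignment in the snippet above can be expanded to act on each return value separately; HandleDeviceLost is a hypothetical helper whose body is sketched after the next paragraph:

if (result == ovrSuccess)
    isVisible = true;                 // the frame was displayed on the Rift
else if (result == ovrSuccess_NotVisible)
    isVisible = false;                // VR focus lost: skip eye rendering but keep submitting
else if (result == ovrError_DisplayLost)
    HandleDeviceLost();               // headset removed: recreate session and resources (see below)
else
    isVisible = false;                // other failures: log the error and decide whether to retry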

If you receive ovrError_DisplayLost, the device was removed and the session is invalid. Release the shared resources (ovr_DestroySwapTextureSet), destroy the session (ovr_Destroy), recreate it (ovr_Create), and create new resources (ovr_CreateSwapTextureSetXXX). The application's existing private graphics resources do not need to be recreated unless the new ovr_Create call returns a different GraphicsLuid.
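
A minimal sketch of that recovery path, assuming the application keeps the GraphicsLuid returned by the initial ovr_Create in a graphicsLuid variable, and that RecreateEyeTextureSet and RecreateAppGraphicsResources are hypothetical helpers wrapping the appropriate ovr_CreateSwapTextureSet* call and the application's own resource creation:

void HandleDeviceLost()
{
    ovr_DestroySwapTextureSet(session, pTextureSet); // release the shared swap textures
    ovr_Destroy(session);                            // tear down the invalid session

    ovrGraphicsLuid newLuid;
    if (OVR_FAILURE(ovr_Create(&session, &newLuid))) // recreate the session
        return;                                      // headset still unavailable; try again later

    RecreateEyeTextureSet();                         // new shared resources for the new session

    // Private graphics resources only need rebuilding if the adapter (LUID) changed.
    if (memcmp(&newLuid, &graphicsLuid, sizeof(newLuid)) != 0)
        RecreateAppGraphicsResources();
}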


