Deferred shading on mobile: An API overview

The Vulkan and OpenGL ES APIs expose mechanisms for fetching framebuffer attachments on-tile. This is a special feature of tile-based GPUs, like Arm Mali, and it enables fast programmable blending and on-chip deferred-shading-style techniques.

In this post, we will look at how these features can be used to implement deferred shading specifically, a style of rendering that is still quite common. Deferred techniques have evolved over time, but the fundamentals remain the same: a G-buffer is rendered, and lighting is computed from that G-buffer data, decoupling geometry information from shading. Most of the innovation in the deferred space in recent years has revolved around reformulating the lighting pass.

For example, here we have a typical G-buffer layout of albedo, normals, material parameters (here: metallic-roughness) and depth.

These G-buffers are then used to shade the final pixel; here, just a trivial directional light.

The traditional implementation

The traditional way to implement this style of rendering is to use multiple render targets (MRT) in a G-buffer pass, followed by a lighting pass where the G-buffer is sampled as plain old textures. This is still the conventional approach on immediate-mode GPUs (primarily desktop).

G-buffer pass

layout(binding = 0) uniform sampler2D TexAlbedo; // bindings 0/1 assumed
layout(binding = 1) uniform sampler2D TexNormal;
layout(binding = 2) uniform sampler2D TexMetallicRoughness;

void main()
{
    Albedo = texture(TexAlbedo, vUV).rgb;
    MetallicRoughness = texture(TexMetallicRoughness, vUV).xy;

    // Many different ways to implement this.
    vec2 tangent_xy = 2.0 * texture(TexNormal, vUV).xy - 1.0;
    float tangent_z = sqrt(max(0.0, 1.0 - dot(tangent_xy, tangent_xy)));
    ...
}

Lighting pass

layout(binding = 0) uniform sampler2D GBufferAlbedo;
layout(binding = 1) uniform sampler2D GBufferNormal;
layout(binding = 2) uniform sampler2D GBufferMetallicRoughness;
layout(binding = 3) uniform sampler2D GBufferDepth;

vec3 compute_light(vec3 albedo, vec3 normal, vec2 metallic_roughness, highp vec3 world)
{
    // Arbitrary complexity.
    ...
}

Blending is enabled for the color attachment, and we can render as many lights as we want, each with different shaders. This kind of flexibility was critical for the early motivations for deferred shading, as shaders back in the mid 2000s had to be extremely simple and highly specialized. The downside of this technique is the fill-rate, memory, and bandwidth requirements. Storing large G-buffers is costly, and shading multiple lights with overdraw is a big drain on bandwidth on immediate mode GPUs.

What tile-based architectures can optimize

With some help from the engine developer, there is a lot to gain by exploiting tile memory. For example, a straightforward deferred shading implementation consumes a certain amount of bandwidth. For simplicity and clarity, let us use normalized bandwidth units (BU), where 1 BU represents width * height * sizeof(pixel) bytes of either reads from or writes to external memory.

Write G-buffer: 4 BU (albedo, normal, pbr, depth)
Read G-buffer in lighting pass: >= 4 BU (could be more if caches are thrashed or lots of overdraw)
Blend lighting buffer in lighting pass: >= 2 BU (1 BU read + 1 BU write, could be more with overdraw)

Cost: ~10 BU
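To make the accounting concrete, here is a small C sketch of the BU arithmetic above. The 1080p resolution and 4-byte pixel size are illustrative assumptions, and the function names are ours:

```c
#include <stddef.h>

/* 1 BU = width * height * sizeof(pixel) bytes of external memory traffic. */
static size_t bandwidth_units(size_t width, size_t height, size_t bytes_per_pixel)
{
    return width * height * bytes_per_pixel;
}

/* Traditional deferred, best case: 4 BU G-buffer write + 4 BU G-buffer read
 * in the lighting pass + 2 BU lighting blend (1 BU read + 1 BU write). */
static size_t traditional_cost_bu(void)
{
    return 4 + 4 + 2;
}

/* Fused on-tile pass: only the final light attachment is ever written out. */
static size_t tile_cost_bu(void)
{
    return 1;
}
```

For a 1080p RGBA8 target, 1 BU is roughly 8.3 MB of traffic per pass, so the 10 BU versus 1 BU difference adds up quickly at 60 FPS.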

Eliminating texture sampling

The first observation we make is that wherever a texelFetch(gbuffer, ivec2(gl_FragCoord.xy), 0) appears, it maps directly to the current pixel's data. Since we have framebuffer fetching mechanisms, we do not even need the texture, and the shader code could be replaced with a hypothetical readDataFromTile(). However, the assumption that pixels reside on-tile only holds if everything happens in the same render pass. Thus, to make tile-based GPUs shine, the G-buffer and lighting passes must happen back-to-back.

Eliminate bandwidth

Writing a G-buffer attachment, or reading from it, happens entirely on-chip and thus costs no external memory bandwidth.

Eliminate memory requirements

Tile-based GPUs hold their framebuffers on-chip and can choose whether the result actually needs to be flushed out to main memory. This flush is the only real bandwidth cost associated with framebuffer rendering, and we can skip it when the contents are not needed later. glInvalidateFramebuffer() in OpenGL ES or STORE_OP_DONT_CARE in Vulkan express this intent.

Theoretical optimal saving

In the best case scenario, we only spend 1 BU writing the final shaded light attachment, which yields a nice ~10x bandwidth reduction.

Do these assumptions actually hold?

This kind of optimization hinges on some key assumptions. There are various scenarios which break them, and we will have to settle for a less optimal result. The developer will need to weigh these losses against the achieved image quality.

Screen-space AO

In this technique, we need to sample depth randomly to create a rough approximation of true ambient occlusion. This mask now needs to modulate ambient lighting in the lighting pass.

The naïve workaround here is to split rendering into G-buffer → SSAO → Lighting, but now we are back to a 10 BU bandwidth cost, and any tile optimizations become meaningless.

Instead, we can try to split rendering into: Depth pre-pass (1 BU write) → SSAO → [G-buffer → Lighting] (1 BU depth read + 1 BU light write), where the G-buffer pass treats depth as read-only. This lessens the bandwidth hit, but requires a depth pre-pass, which has its own overhead.

Another idea would be to implement SSAO through a reprojection from the previous frame. The frame would look something like: SSAO (depth previous frame) → [G-buffer → Lighting] (1 BU write light + 1 BU write depth).

Using depth in post processing

If we need this (very likely in this day and age), we cannot avoid writing out depth. Fortunately, this just costs 1 BU, and is not a big deal.

Other attachments?

We can also imagine scenarios where we need other G-buffer attachments in other rendering passes, with screen-space reflection techniques coming to mind. Similar ideas apply, where we should try to avoid splitting G-buffer and lighting passes. Essentially, to gain anything from having tile memory, we need to be able to fuse G-buffer → Lighting into one pass.

Implementation

With the theory out of the way, let us implement this on Arm Mali. These three methods all have their pros and cons, but accomplish more or less the same thing. The methods we will go through here are:

  • Pixel local storage
  • Framebuffer fetch
  • Vulkan multipass

Commonalities

As explained earlier, it is critical that we keep pixel data on-tile, so under no circumstances may we rebind framebuffers with glBindFramebuffer() (in GLES) or end the render pass with vkCmdEndRenderPass() (in Vulkan). Both commands cause pixel data to be flushed to memory.

Pixel local storage

API availability: OpenGL ES 3.0+. GPUs: Arm Mali Midgard family and later

In this style, we have raw access to the tile buffer. We will have to limit ourselves to 128 bits, but this should not be a problem with the G-buffer layout that we have chosen in the previous examples.

This API was introduced at a time before floating-point render targets were the norm, so HDR lighting is handled with programmable blending. Finally, we add a “resolve” pass, which converts the raw PLS data to a real color attachment.

API side

  • Bind an FBO with one color attachment
  • Enable pixel local storage (glEnable(GL_SHADER_PIXEL_LOCAL_STORAGE_EXT)); it must remain enabled while accessing PLS data

G-buffer shader modifications

Rather than declare color attachment outputs, we declare a view of the raw tile memory and write there:

#extension GL_EXT_shader_pixel_local_storage : require

__pixel_local_outEXT OutPLS
{
    layout(r11f_g11f_b10f) vec3 Emissive;
    layout(rgba8) vec4 Albedo;
    layout(rgb10_a2) vec4 Normal;
    layout(rgba8) vec4 MetallicRoughness;
};

Writing to these variables works like any other out variable in fragment shaders. Another modification to consider is the albedo attachment. PLS does not use true hardware render targets, so to deal with sRGB, we have to add some ALU ourselves:

// Albedo is linear (sRGB texture), and to preserve accuracy, we need to encode.
// Gamma 2.0 is a decent choice since sqrt() is significantly faster than pow().
Albedo.rgb = sqrt(clamp(Albedo.rgb, vec3(0.0), vec3(1.0)));

Fortunately, ALU is rarely the bottleneck in G-buffer shaders, so this should not matter in practice, and decoding in the lighting pass later is just a squaring operation.

Lighting shader

Here, we need to declare the same G-buffer layout as an input and output. We also need to recover G-buffer depth, so we enable GL_ARM_shader_framebuffer_fetch_depth_stencil.

__pixel_localEXT InOutPLS
{
    layout(r11f_g11f_b10f) vec3 Light;
    layout(rgba8) vec4 Albedo;
    layout(rgb10_a2) vec4 Normal;
    layout(rgba8) vec4 MetallicRoughness;
};

highp float depth = gl_LastFragDepthARM;
vec2 metallic_roughness = MetallicRoughness.xy;

Resolve pass

Before we end rendering, we must write the final result over to a real render target. As mentioned, in the days of GLES 3.0, we did not necessarily have floating-point render target support, so merely copying the HDR light data into a render target was not possible. The natural thing to do here is to do some kind of tone map, so it can fit into a UNORM or sRGB render target.
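As a concrete example of such an operator, here is a simple Reinhard-style curve in C. The choice of operator is our assumption; the post does not prescribe one:

```c
/* Reinhard tone map: x / (1 + x) maps any non-negative HDR value
 * into [0, 1), so the result always fits a UNORM or sRGB target. */
static float reinhard(float hdr)
{
    return hdr / (1.0f + hdr);
}
```

Applied per channel to the accumulated Light value, this guarantees the resolve output is representable regardless of how bright the scene got.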

To optimize this some more, if we know that the last light we render is a “full-screen” quad, for example a directional light, we can combine tone map and lighting into one shader.

__pixel_local_inEXT InPLS
{
    layout(r11f_g11f_b10f) vec3 Light;
    layout(rgba8) vec4 Albedo;
    layout(rgb10_a2) vec4 Normal;
    layout(rgba8) vec4 MetallicRoughness;
};

layout(location = 0) out vec3 FragColor; // RGBA8, SRGB8, RGB10A2, etc.

FragColor = tonemap(Light); // tonemap() is application-defined

Framebuffer fetch

With GL_EXT_shader_framebuffer_fetch on r29 drivers, we move to a more traditional rendering setup with multiple render targets. In the shaders, we will interact with real hardware render targets, and we lose the ability to freely reinterpret tile memory. However, for deferred shading we will not need this functionality anyway, since the formats remain fixed. For effective HDR rendering, however, we will need floating-point render target support, which became a mandatory feature in GLES 3.2.

API side

  • Bind a multiple render target FBO holding: light, albedo, normal, metallic-roughness, depth
  • Before rendering, make sure G-buffer data is cleared or invalidated so we avoid loading the entire G-buffer from memory
  • After we are done rendering, we must remember to discard render targets we do not need to preserve with glInvalidateFramebuffer()

G-buffer pass

This remains the same as the traditional, immediate-mode style of rendering, which is convenient.

Lighting pass

Rather than binding the G-buffer attachments as textures at this point and changing the FBO, we keep going.

// inout rather than out
layout(location = 0) inout vec3 Light;
layout(location = 1) inout vec3 Albedo;
layout(location = 2) inout vec3 Normal;
layout(location = 3) inout vec2 MetallicRoughness;

// Still need the Arm extension to read G-buffer depth
highp float depth = gl_LastFragDepthARM;

// Accumulate light ('world' is reconstructed from depth)
Light += compute_light(Albedo, Normal, MetallicRoughness, world);

With framebuffer fetch, we can choose between programmable blending and fixed-function blending. If we opt in to API-side blending, we need to be very careful and limit blending to just the Light render target, or weird things can happen; glEnablei() can enable blending at a per-attachment level. In practice, we might as well just use programmable blending here.

Resolve pass?

Unlike pixel local storage, there is no need to “resolve” anything here, since the Light render target we accumulated against is bound to a real texture. If we want to implement on-tile tone-mapping however, we have to be a bit creative, since we lose the ability to reinterpret tile formats. Fortunately, the albedo attachment tends to be SRGB8, and is free real estate here.

layout(location = 0) inout vec3 Light;
layout(location = 1) out vec3 Albedo;

Albedo = tonemap_to_sdr(Light);

The albedo render target is now the attachment we do not discard in the end.

Vulkan multipass

In Vulkan as it stands, we cannot directly access tile storage like in the GLES extensions. Instead, we give the driver enough information up front so that it can optimize a render pass into framebuffer fetch. We also need to modify the shaders a little, which makes them look a bit like GL_EXT_shader_framebuffer_fetch. The key strength of Vulkan’s abstraction here is that this path can be written once and run optimally on any type of GPU.

Input attachments

The fundamental abstraction in Vulkan is the input attachment, which looks a lot like the input variable in framebuffer fetch. In Vulkan GLSL, we represent this with a subpassInput.

#version 320 es
precision mediump float;

layout(input_attachment_index = 0, set = 0, binding = 0)
uniform mediump subpassInput SomeAttachment;

layout(location = 0) out vec4 SomeOutput;

void main()
{
    SomeOutput = subpassLoad(SomeAttachment);
}

In the shader itself, we have no idea what this means yet; the compiler will give it meaning later when we provide a VkRenderPass. A peculiarity we need to consider is that subpass inputs are potentially backed by a real texture, so we have to bind them to a descriptor set. The input_attachment_index decoration is used so that the compiler knows how to fetch the data from tile if it wants to.

VkRenderPass and multipass

In Vulkan, we create render pass objects which express how a render pass is laid out. A render pass can consist of multiple subpasses. In our case, we will need 2 subpasses:

  • Subpass #0: G-buffer
  • Subpass #1: Lighting

We can set up dependencies between the subpasses. For deferred shading, the critical things to consider are:

  • COLOR / DEPTH writes → INPUT_ATTACHMENT_READ / DEPTH_READ
  • BY_REGION dependency flag

A subpass has attachments set up, so that we can do:

  • Subpass #0 color: Light/emissive, albedo, normal, metallic-roughness, ATTACHMENT_OPTIMAL
  • Subpass #0 depth: ATTACHMENT_OPTIMAL

For the second subpass:

  • Subpass #1 color: Light/emissive
  • Subpass #1 input attachments: albedo, normal, metallic-roughness, depth, READ_ONLY_OPTIMAL
  • Subpass #1 depth: READ_ONLY_OPTIMAL

For the attachments, we use STORE_OP_DONT_CARE for everything except for the final light attachment.
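The dependency between the two subpasses can be expressed as a VkSubpassDependency. This is a sketch under the subpass numbering above (G-buffer = 0, lighting = 1); only the dependency struct is shown, not the full VkRenderPassCreateInfo:

```c
#include <vulkan/vulkan.h>

/* COLOR / DEPTH writes in subpass #0 must be visible to input-attachment
 * and depth reads in subpass #1. BY_REGION tells the driver the dependency
 * is pixel-local, so everything may stay on-tile. */
static const VkSubpassDependency gbuffer_to_lighting = {
    .srcSubpass    = 0,
    .dstSubpass    = 1,
    .srcStageMask  = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT |
                     VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT,
    .dstStageMask  = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT |
                     VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT,
    .srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT |
                     VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT |
                     VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT,
    .dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT,
};
```

Without VK_DEPENDENCY_BY_REGION_BIT, the driver must assume any pixel in subpass #1 can read any pixel written by subpass #0, which forces a flush to memory and defeats the whole point.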

Relying on a sprinkle of magic

Unfortunately, we cannot be 100% sure that the driver will actually optimize this. This is the price we pay for compatibility with all devices. This pattern works well on Mali GPUs at least.

The best practices document for Arm Mali can be consulted for further details.

G-buffer shader

Same as framebuffer fetch; nothing changes.

Lighting shader

This is basically the same as framebuffer fetch, except we use subpassInput, as mentioned earlier. Note that we do not need a special extension to fetch depth either; depth is simply read through another input attachment. Finally, we have to use fixed-function blending to accumulate light, since programmable blending is not really a thing in Vulkan yet.

// Reconstruct world position.
highp vec4 clip = vec4(2.0 * gl_FragCoord.xy * inv_resolution - 1.0, depth, 1.0);
highp vec4 world4 = inv_view_projection * clip;
highp vec3 world = world4.xyz / world4.w;

// Use fixed-function blending in API to accumulate.
Light = compute_light(albedo, normal, metallic_roughness, world);
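The same unprojection math can be checked outside the shader. Here is a C sketch using a column-major 4x4 multiply to match GLSL conventions; the helper names are ours:

```c
/* out = m * v, with m stored column-major as in GLSL. */
static void mat4_mul_vec4(const float m[16], const float v[4], float out[4])
{
    for (int row = 0; row < 4; row++)
        out[row] = m[row + 0]  * v[0] + m[row + 4]  * v[1] +
                   m[row + 8]  * v[2] + m[row + 12] * v[3];
}

/* Unproject (gl_FragCoord.xy, depth) through inv_view_projection:
 * window coords -> NDC -> homogeneous world -> perspective divide. */
static void reconstruct_world(const float inv_view_projection[16],
                              float frag_x, float frag_y, float depth,
                              float inv_res_x, float inv_res_y, float world[3])
{
    float clip[4] = {
        2.0f * frag_x * inv_res_x - 1.0f,
        2.0f * frag_y * inv_res_y - 1.0f,
        depth, 1.0f,
    };
    float world4[4];
    mat4_mul_vec4(inv_view_projection, clip, world4);
    world[0] = world4[0] / world4[3];
    world[1] = world4[1] / world4[3];
    world[2] = world4[2] / world4[3];
}
```

With an identity view-projection, the reconstructed world position equals the NDC coordinates, which makes the math easy to sanity-check.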

Conclusion

On mobile, there is a lot to consider when implementing deferred shading, but the potential gains in bandwidth, power, and battery life are too great to ignore. We have outlined three ways to tap into tile storage, which should improve the overall development experience when implementing deferred shading.

From: Deferred shading on mobile, Arm Community Graphics, Gaming, and VR blog: https://community.arm.com/arm-community-blogs/b/graphics-gaming-and-vr-blog/posts/deferred-shading-on-mobile
