Post-processing for VR and mobile devices
To post-process or not to post-process in VR?
Is there anyone today who does not like graphics refined by post-processing? Breathtaking photo-realistic views, glowing neon lights, reflections in puddles and windows, beautifully saturated colors. Thanks to such effects, games increasingly resemble movies shot with a physical camera.
Actually, there is a group of people who hate post-processing: VR developers, and in particular those working on projects for mobile VR devices based on Android. The performance of these devices is similar to that of top smartphones, yet they must provide smooth gameplay at the highest possible resolution. As you can easily guess, having to use post-processing does not make their task easier.
Wait, but what exactly is this post-processing for VR?
Simply put, post-processing involves adding full-screen filters and effects that improve the visuals, applied just before the image appears on the screen. It can be compared to putting on sunglasses on a sunny day: we do it so that the sun does not bother us and so that we can see better (with better contrast).
Image comparison with and without post-processing:
But why do VR developers dislike post-processing so much?
Well, as I said, VR requires a very specific approach. We must remember that the image is rendered for each eye separately. This usually works in such a way that the engine renders the image for the left eye, then slightly shifts the perspective and renders it again for the right eye. So, as you can guess, post-processing in VR is also carried out twice: once for each eye. With the limited capabilities of mobile VR devices such as the Oculus Quest, Oculus Go or Google Daydream (which is no longer supported), all this extra rendering means that each frame can take a long time to produce.
Why can’t frames take too long?
Films that we watch in the cinema are usually played at 24 frames per second (FPS), meaning 24 consecutive images are displayed every second. In games and applications, the image should be displayed at at least 30 FPS for a smooth experience. In VR, this is the minimum that must always be reached for the user to feel comfortable wearing the headset, and 60 FPS is considered the ideal. This is very important because of how the equipment is used: a VR headset running at a low frame rate can cause nausea.
As for frame length, if the image is to be displayed at 60 FPS, each frame must be prepared in no more than about 16 ms (1000 ms / 60 frames ≈ 16.7 ms). As you can guess, that is very little time.
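The frame budget is simple arithmetic; this short Python sketch makes the numbers above concrete:

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to prepare one frame at a given frame rate."""
    return 1000.0 / fps

# The targets discussed above:
print(round(frame_budget_ms(30), 1))  # minimum for smooth gameplay: 33.3 ms
print(round(frame_budget_ms(60), 1))  # ideal for VR: 16.7 ms
```

Remember that in VR everything — both eye renders plus every post-processing pass — must fit inside that single budget.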
What advice do you have for those who want to use post-processing in their VR application?
Don’t do it.
I’m serious. If you check, for example, the Oculus VR or Google VR documentation, or the video dev diaries of developers who have released games or applications for VR devices, you will almost certainly hear that you should not use post-processing in VR. You can still use it to visualize the effect you want to achieve, and then recreate that effect using, for example, custom shaders or by editing object textures and lighting settings. Of course, this approach requires a lot of extra time.
And what if you don’t have much time, and your application’s key features depend on image effects?
In this situation, it is worth remembering a few things that can help you achieve your goal. I will base my advice on the Unity engine, which we use every day to create our applications.
Universal Render Pipeline (URP)
The first important thing is to update the project to the latest stable version of the editor and use URP. URP is a Scriptable Render Pipeline made by Unity that offers better performance than the legacy built-in pipeline: about 25% better on mobile hardware. Below is a comparison of both render pipelines; the same project with similar quality settings was used, and readings were taken on an iPhone 6s.
URP statistics:
Built-in render pipeline statistics:
With the best tool for the job in place, it’s time to consider which effects are actually needed in the project. There are a few things to remember that make choosing easier. Below is a list of the lightest effects, in order of cost:
- Vignette
- Color grading
- Bloom (with High-Quality Filtering disabled)
- Anti-aliasing (FXAA is recommended for mobile platforms)
It is also worth knowing which effects not to use, because they can cause nausea and disorientation. These effects are:
- Lens distortion (causes motion sickness)
- Chromatic aberration (it already occurs naturally in VR lenses)
- Motion blur (causes motion sickness)
- Depth of field (the user’s eyes need to focus by themselves)
Some effects, such as Screen Space Reflection and Screen Space Ambient Occlusion, are definitely too expensive for mobile VR.
I will now briefly discuss each of the effects that I recommend.
Vignette
This effect is derived from photography. It darkens and desaturates the image towards the edges compared to the center, and can be used to draw the viewer’s focus to the center of the image.
Vignette is a good first choice when you decide to implement post-processing, because it can also reduce motion sickness.
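The math behind a vignette is simple: darken each pixel based on its distance from the screen center. A minimal plain-Python sketch (the falloff formula here is illustrative, not Unity’s exact implementation):

```python
def vignette(brightness: float, u: float, v: float,
             intensity: float = 0.5, smoothness: float = 0.5) -> float:
    """Darken a pixel based on its distance from the screen center.

    u, v are screen coordinates in [0, 1]; (0.5, 0.5) is the center.
    """
    # Distance from the center, scaled so an edge midpoint is ~1.0
    dx, dy = (u - 0.5) * 2.0, (v - 0.5) * 2.0
    dist = (dx * dx + dy * dy) ** 0.5
    # Smooth falloff: full brightness at the center, darker towards the edges
    falloff = max(0.0, 1.0 - intensity * dist ** (1.0 / smoothness))
    return brightness * falloff

print(vignette(1.0, 0.5, 0.5))        # center pixel: unchanged, 1.0
print(vignette(1.0, 1.0, 0.5) < 1.0)  # edge pixel: darkened, True
```

In a real shader this runs per fragment, which is why vignette is one of the cheapest effects: it is a few arithmetic operations with no extra texture reads.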
Color grading
This effect is used to tweak the overall tone, brightness, contrast and white balance of the final rendered image. It is most often used to correct color and luminance.
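The core operations behind color grading (exposure, contrast, saturation) boil down to simple per-pixel math. This plain-Python sketch is illustrative only, not Unity’s implementation:

```python
def grade(rgb, exposure=1.0, contrast=1.0, saturation=1.0):
    """Apply simple exposure/contrast/saturation to an RGB pixel in [0, 1]."""
    r, g, b = (c * exposure for c in rgb)                       # exposure
    r, g, b = ((c - 0.5) * contrast + 0.5 for c in (r, g, b))   # contrast around mid-grey
    luma = 0.299 * r + 0.587 * g + 0.114 * b                    # perceived luminance
    out = (luma + (c - luma) * saturation for c in (r, g, b))   # saturation
    return tuple(min(1.0, max(0.0, c)) for c in out)            # clamp to displayable range

print(grade((0.4, 0.5, 0.6), saturation=0.0))  # fully desaturated: a grey pixel
```

Production-quality grading usually bakes such operations into a lookup texture (LUT), so the per-pixel cost at runtime stays low regardless of how many adjustments were combined.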
Bloom
This effect creates fringes of light extending from bright areas of an image, creating the illusion of very bright light overwhelming the camera. It also offers a lens-dirt feature that applies a layer of smudges or dust to the bloom.
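Conceptually, bloom is a bright-pass followed by a blur that is added back onto the image; this is also why it costs more than vignette or color grading. A toy 1-D sketch in plain Python (real implementations blur in 2-D over several downsampled passes):

```python
def bloom_1d(row, threshold=0.8, strength=0.5):
    """Toy 1-D bloom: keep only bright pixels, blur them, add them back."""
    # 1. Bright pass: keep only the energy above the threshold
    bright = [max(0.0, v - threshold) for v in row]
    # 2. Blur: simple 3-tap box filter with clamped edges
    n = len(row)
    blurred = [
        (bright[max(i - 1, 0)] + bright[i] + bright[min(i + 1, n - 1)]) / 3.0
        for i in range(n)
    ]
    # 3. Composite: add the blurred highlights back onto the original image
    return [min(1.0, v + strength * b) for v, b in zip(row, blurred)]

row = [0.1, 0.1, 1.0, 0.1, 0.1]
print(bloom_1d(row))  # light "leaks" into the neighbours of the bright pixel
```

The blur step is what makes High-Quality Filtering expensive: it adds more taps and passes, which is why disabling it is recommended on mobile VR.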
Anti-aliasing
This effect is very important in VR because it reduces the jagged, staircase-like lines visible on the edges of models. Aliasing may be less noticeable on devices with higher resolutions. Please note that objects rendered with anti-aliasing appear slightly less sharp than without it.
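FXAA works roughly like this: find pixels where neighbouring luminance differs strongly (an edge), then blend them with their neighbours to soften the staircase. A toy 1-D sketch in plain Python (real FXAA samples a 2-D neighbourhood and searches along the edge, but the principle is the same):

```python
def luma(rgb):
    """Perceived brightness of an RGB pixel."""
    return 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]

def fxaa_1d(pixels, contrast_threshold=0.2):
    """Toy 1-D FXAA-like pass: blend pixels that sit on a high-contrast edge."""
    out = []
    n = len(pixels)
    for i in range(n):
        left = pixels[max(i - 1, 0)]
        right = pixels[min(i + 1, n - 1)]
        # Local contrast: how different are the neighbours' brightnesses?
        if abs(luma(left) - luma(right)) > contrast_threshold:
            # Edge found: soften by averaging with the neighbours
            out.append(tuple((l + 2 * c + r) / 4.0
                             for l, c, r in zip(left, pixels[i], right)))
        else:
            out.append(pixels[i])  # flat area: leave untouched
    return out

# A hard black-to-white edge gets softened into intermediate greys:
row = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0)]
print(fxaa_1d(row))
```

The blending is also why anti-aliased images look slightly softer: edge pixels are literally mixed with their neighbours.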
After adding the effects, all that remains is to check how many FPS we get when running the application on the target device, and then to optimize the content; but that’s a topic for another article.
Are there any other solutions?
If you do not want to use the post-processing tools built into Unity, you can try one of the solutions available on the Asset Store. The assets prepared by Rufat’s ShaderLab deserve a special mention, as they work with URP and on mobile and VR devices.
Rufat’s ShaderLab assets on the Unity Asset Store:
The creator claims that his solutions are the fastest among similar assets available on the store, and much faster than the solution built into Unity. Judging by the many positive reviews, this is true. Qualitatively, these effects may differ from those prepared by Unity and may not be as universal or easy to use. However, given their speed, they are a very interesting alternative. Once you know which post-processing effects your VR project needs and you are still missing a few FPS for smooth operation, it is definitely worth giving them a chance.
I think we will also check them in action while working on subsequent VR applications for our clients from around the world.