AMD LiquidVR SDK Aims for Silky Smooth VR on all Headsets
As GDC progresses here in San Francisco, AMD took the wraps off a new SDK for game developers to use to improve experiences with virtual reality (VR) headsets. Called LiquidVR, its goal is to provide a smooth, stutter-free VR experience that is universal across all headset hardware and to keep the wearer, be it a gamer or professional user, immersed.
AMD's CTO of Graphics, Raja Koduri, spoke with us about the three primary tenets of the LiquidVR initiative, the 'three Cs': Comfort, Compatibility and Compelling Content. Ignoring the fact that there are four C's in that phrase, the premise is straightforward. Comfortable use of VR means little to no nausea, which can be addressed with ultra-low latency between motion (of your head) and photons (hitting your eyes). For compatibility, AMD wants to ensure that all VR headsets are treated equally and all provide the best experience; Oculus, HTC and others should operate in a simple, plug-and-play style. Finally, the content story is easy to grasp, with a focus on solid games and software that utilize VR, but AMD also wants to ensure that rendering is scalable across different hardware and multiple GPUs.
To address these tenets AMD has built four technologies into LiquidVR: late data latching, asynchronous shaders, affinity multi-GPU, and direct-to-display.
The idea behind late data latching is to get the most recent raw tracking data from the VR engine to the user's eyes. Rather than asking for the head position of a gamer at the beginning of a render job, LiquidVR allows the game to ask for it at the end of the rendering pipeline, which might seem counter-intuitive. Late latching means the user's head movement is tracked until the end of the frame render rather than just until the beginning, potentially saving 5-10 ms of delay.
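A rough way to see where that saving comes from is to compare how "old" the pose data is when photons reach the eye. This is a conceptual sketch with made-up timings, not the LiquidVR API:

```python
# Conceptual sketch (hypothetical timings, not AMD's actual numbers or API):
# motion-to-photon latency when the head pose is sampled at the start of the
# frame vs. late-latched just before scan-out.

RENDER_TIME_MS = 8.0   # assumed time the GPU spends drawing the frame
SCANOUT_TIME_MS = 3.0  # assumed time to warp/present the frame to the display

def motion_to_photon(pose_sample_point):
    """Latency from the moment the pose is read until photons hit the eye."""
    if pose_sample_point == "frame_start":
        # Pose read before rendering: it ages through the whole render job.
        return RENDER_TIME_MS + SCANOUT_TIME_MS
    if pose_sample_point == "frame_end":
        # Late latch: pose read at the end of the pipeline, so only the
        # final warp/scan-out step adds to its age.
        return SCANOUT_TIME_MS
    raise ValueError(pose_sample_point)

saved = motion_to_photon("frame_start") - motion_to_photon("frame_end")
print(f"late latching saves {saved:.1f} ms")  # 8.0 ms with these numbers
```

With an 8 ms render job the pose data arrives 8 ms fresher, which is in the same ballpark as the 5-10 ms AMD quotes.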
The next feature, asynchronous shaders, is what allows LiquidVR to handle that late latch properly. By using different ACEs (asynchronous compute engines) on the GCN GPU for different tasks, LiquidVR can execute VR-specific post-processing while other rendering is occurring. This means the time warp function, which maps head tracking movement onto the rendered image, can be done at the last possible moment. Time warping alters the rendered frame slightly to account for head movement that happened after the GPU finished drawing the frame: if you have moved your head further to the right after rendering, the warp function shifts the pixels to compensate. The implementation is complex, but the fundamental idea is straightforward.
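The core of that correction can be sketched as simple arithmetic: convert the head rotation that occurred since rendering began into a pixel offset. The panel width and field of view below are illustrative assumptions, and a real warp is a full reprojection, not a flat shift:

```python
# Conceptual time-warp sketch (illustrative numbers, not AMD's shader code):
# convert the extra head yaw that accumulated after rendering started into a
# horizontal pixel correction for the finished frame.

DISPLAY_WIDTH_PX = 1080    # assumed per-eye panel width
HORIZONTAL_FOV_DEG = 90.0  # assumed horizontal field of view

def warp_shift_px(yaw_at_render_deg, yaw_at_scanout_deg):
    """Pixels of horizontal correction needed to track the newest head yaw."""
    delta_deg = yaw_at_scanout_deg - yaw_at_render_deg
    px_per_degree = DISPLAY_WIDTH_PX / HORIZONTAL_FOV_DEG
    return delta_deg * px_per_degree

# Head turned 1 degree further between render start and scan-out:
print(warp_shift_px(10.0, 11.0))  # 12.0 px of correction at these numbers
```

Even a one-degree turn during an 8-10 ms render maps to a visible pixel offset, which is why skipping the warp produces perceptible judder.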
Affinity multi-GPU brings us back to the past: a return of SFR, split frame rendering. AMD realizes, as most of us have, that mapping a GPU to each eye makes the most sense and is surprisingly easy to integrate. The benefit again is lower latency, avoiding the inherent delay of a multi-GPU alternate frame rendering (AFR) system. Developers will also benefit from lower CPU overhead thanks to the removal of operations duplicated between the two eyes. This is not limited to just two GPUs, though: AMD said that three, four, or five GPUs could all be supported if the developer builds in support.
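The basic mapping is easy to picture. Here is a minimal sketch of the idea, with invented names rather than LiquidVR's actual affinity API; with more than two GPUs, additional devices simply share an eye's workload:

```python
# Conceptual sketch of per-eye GPU affinity (function and device names are
# hypothetical, not LiquidVR's API): pin each eye's render work to a GPU,
# round-robin across however many GPUs the system has.

def assign_eyes_to_gpus(gpu_count):
    """Map each GPU to the eye whose workload it renders."""
    eyes = ["left", "right"]
    return {f"gpu{i}": eyes[i % 2] for i in range(gpu_count)}

print(assign_eyes_to_gpus(2))  # {'gpu0': 'left', 'gpu1': 'right'}
print(assign_eyes_to_gpus(4))  # two GPUs splitting each eye's work
```

Because both eyes render the same frame at the same time, the latency is that of a single frame, unlike AFR, where queued-ahead frames add delay.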
Finally we have direct-to-display, a feature that mostly promotes compatibility between VR headsets. LiquidVR brings native HMD support with direct front-buffer rendering and gives the application direct control of the headset, even in operating systems and environments that didn't plan for it. This might be less useful for Windows gaming environments, where VR support is expected to arrive, but for professional applications it should ensure a better user experience.
AMD then provided several examples that follow the rendering process to a VR headset, tracing the progression of performance and latency from original implementations, through the changes made since Oculus launched, to what AMD expects to improve with LiquidVR.
There is a lot of data in this one slide, but focusing on a couple of the most important points will help us understand specifically what LiquidVR does. The green circle represents the warping function, in which a rendered frame is slightly modified to match the additional movement of the headset after the graphics engine started its work. The top example shows the result when a frame renders correctly inside the Vsync window, has time to late-latch the data from the CPU/head tracking, and also has time to properly time warp the frame before output to a frame buffer. Everything is great and works as expected.
The bottom example in that slide shows what happens when a frame doesn't render in time to meet the Vsync, and in this case doesn't even render fast enough to meet the Vsync-plus-time-warp requirement. Here, the previous frame (already warped and output to the user) is re-warped with the additional head tracking data and sent to the user again. Obviously, the more this occurs, the more likely you are to run into artifacting and edge issues in the output image, though that depends on the speed and distance of the motion.
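The decision in those two examples boils down to a deadline check: the frame must finish early enough to leave room for the warp pass before Vsync. A minimal sketch, assuming a 90 Hz refresh and a 1 ms warp cost (both illustrative, not figures from the slide):

```python
# Conceptual sketch of the frame-timing decision (timings are assumptions,
# not from AMD's slide): a new frame is only warped and shown if it finishes
# before the Vsync deadline minus the warp cost; otherwise the previous,
# already-displayed frame is re-warped with the newest head-tracking data.

VSYNC_INTERVAL_MS = 11.1  # ~90 Hz refresh, a common VR target
WARP_COST_MS = 1.0        # assumed cost of the time-warp pass

def frame_to_present(render_time_ms):
    """Decide which frame gets warped and scanned out this refresh."""
    deadline_ms = VSYNC_INTERVAL_MS - WARP_COST_MS
    if render_time_ms <= deadline_ms:
        return "warp the new frame"
    return "re-warp the previous frame"

print(frame_to_present(8.0))   # in time: warp the new frame
print(frame_to_present(10.5))  # missed it: re-warp the previous frame
```

Re-warping keeps head tracking responsive even on a missed frame, but since no new scene content exists, repeated misses expose the edge artifacts the article describes.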
There are a lot more specific examples of what LiquidVR and AMD can help fix in the virtual reality pipeline, and more than likely many of you are noting the similarity to the information NVIDIA provided as VR Direct with the launch of the GTX 980 and GTX 970 graphics cards. At that point NVIDIA talked about asynchronous warping as well as VR SLI, matching our discussion with AMD today in key ways. I do know that NVIDIA has demos and meetings at GDC this week focused on VR Direct, and I think we will learn more as the week progresses about how these two initiatives compare.
My initial impression is that LiquidVR is in an earlier state than VR Direct, as AMD was upfront about this being an "early alpha" stage, with engine and game developers getting access to it during GDC. Koduri stated that AMD wants LiquidVR to be in full production by the time consumer VR headsets hit the market.
This now makes two areas in which PC gaming continues to differentiate itself from the rest of the ecosystem that focus on displays: variable refresh and virtual reality. If only we could combine these two...