Imagination PowerVR Ray Tracing with UE4 & Vulkan Demo

Subject: Graphics Cards, Mobile | June 2, 2017 - 02:23 AM |
Tagged: Imagination Technologies, PowerVR, ray tracing, ue4, vulkan

Imagination Technologies has published another video demonstrating ray tracing on its PowerVR Wizard GPU. This time, the test system is a development card running Ubuntu and driving Unreal Engine 4; specifically, it uses UE4's Vulkan renderer.

The demo highlights two major advantages of ray-traced images. The first is that, rather than simulating metallic objects by applying a baked cubemap with screen-space reflections, this demo calculates reflections with secondary rays. From there, it is just a matter of feeding the gathered information into the parameters that the shader requires and doing the calculations.
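The core of that secondary-ray step is just reflecting the primary ray's direction about the surface normal at the hit point and tracing again. A minimal sketch (not Imagination's actual API; the vectors and function names here are hypothetical):

```python
def reflect(d, n):
    """Reflect direction d about unit surface normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# Hypothetical primary ray hitting an upward-facing floor plane:
incoming = (0.0, -0.7071, 0.7071)   # primary ray direction at the hit
normal   = (0.0, 1.0, 0.0)          # surface normal at the hit point
secondary = reflect(incoming, normal)
# A ray tracer would then trace 'secondary' from the hit point; whatever
# it intersects supplies the shader's reflection term.
```

In a real renderer this runs per hit in the shading stage, but the math is the same.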

The second advantage is that it can handle arbitrary lens effects, such as distortion and equirectangular 360° projections. Rasterization, which projects 3D world coordinates into 2D screen coordinates, assumes that straight edges remain straight, and that assumption breaks down as the field of view gets very large, especially at a full circle. Imagination Technologies acknowledges that workarounds exist, such as breaking the render into the six faces of a cube, but the most accurate approach is casting a ray per pixel and seeing what it hits.
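The per-pixel casting works because each pixel of an equirectangular image maps directly to a longitude/latitude pair, which in turn gives a ray direction on the unit sphere. A sketch of that mapping (illustrative only; the function name and pixel convention are assumptions, not from the demo):

```python
import math

def equirect_ray(px, py, width, height):
    """Map pixel (px, py) to a unit ray direction for a 360x180 panorama."""
    lon = (px + 0.5) / width * 2.0 * math.pi - math.pi   # -pi .. pi
    lat = math.pi / 2.0 - (py + 0.5) / height * math.pi  # pi/2 .. -pi/2
    # Spherical-to-Cartesian conversion; z is "forward" here.
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

A ray tracer simply fires one ray per pixel along these directions, so curved projections cost nothing extra, whereas a rasterizer has to fake them with multiple rectilinear renders.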

The demo was originally for GDC 2017, back in February, but the videos have just been released.


June 2, 2017 | 07:57 AM - Posted by psuedonymous

Doing lens compensation in the rendering step is also an advantage for VR, as it eliminates the post-render lens-compensation pre-warp pass (though a post-render warp will still be performed for latency compensation). This allows for more efficient rendering, rather than needing to use massive overrendering (the current state) and throw away a bunch of pixel data, or try to massage things into rectilinear-rendered chunks (as with Nvidia's Lens Matched Shading).

It may also have the advantage of performance consistency, as ray tracing scales much more strongly with rendered resolution than with scene complexity (dependent on how you cull reflected rays).
In the future, as video transport moves away from full-frame transport toward (initially) interlaced or chequerboarded transport, and later foveated transport and per-pixel updating, this can be done by rendering only the pixels needed for the next update, rather than filling a whole framebuffer and then selectively sampling that buffer (or varying the buffer dimensions each frame).
