Shedding a little light on Monday’s announcement

Most of our readers should have some familiarity with GameWorks, NVIDIA's branding for its suite of libraries and utilities that help game developers (and others) create software. Many hardware and platform vendors provide samples and frameworks that shoulder the brunt of the work required to solve complex problems; GameWorks is NVIDIA's version of that. Their hope is that it pushes the industry forward, which in turn drives GPU sales as users see the benefits of upgrading.

This release, GameWorks SDK 3.1, contains three complete features and two “beta” ones. We will start with the first three, each of which targets a portion of the lighting and shadowing problem. The last two, which we will discuss at the end, are the experimental ones and fall under the umbrella of physics and visual effects.

The first technology is Volumetric Lighting, which simulates the way light scatters off dust in the atmosphere. Game developers have been approximating this effect for a long time. In fact, I remember a particular section of Resident Evil 4 where you walk down a dim hallway with light rays spilling in from the windows. GameCube-era graphics could only do so much, though, and certain camera angles revealed that the effect was just a translucent, one-sided, decorative plane: a cheat hand-placed by a clever artist.

GameWorks' Volumetric Lighting goes after the same effect, but with a much different implementation. It looks at the generated shadow maps and, using hardware tessellation, extrudes geometry from the unshadowed portions toward the light. These bits of geometry accumulate according to how deep the lit volume is, which translates into the required highlight. Also, since it is hardware tessellated, it probably has a smaller impact on performance: the GPU only needs to store enough information to generate the geometry, rather than storing (and updating) the geometry data for every possible light shaft, and it needs to keep those shadow maps around anyway.
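To make the "depth of the lit volume" point concrete, here is a minimal CPU-side sketch of that accumulation idea. It is not NVIDIA's tessellation-based implementation; the shadow_map_visible() query and the scene are hypothetical stand-ins, and the sketch only shows why summing in-scattering over the lit portions of a view ray produces the shaft highlight.

```cpp
// Simplified, CPU-side sketch of light-shaft accumulation. Not NVIDIA's
// tessellation-based implementation; it only shows why "how deep the lit
// volume is" translates into the brightness of the shaft.

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Hypothetical visibility query: returns true if the point can see the light
// (i.e., it falls in an unshadowed texel of the light's shadow map).
bool shadow_map_visible(const Vec3& p) {
    // Toy scene: everything below y = 1.0 is in shadow.
    return p.y > 1.0f;
}

// March along the eye ray and sum in-scattering only where the ray is lit.
float accumulate_light_shaft(Vec3 origin, Vec3 dir, float max_dist,
                             int steps, float scattering_coeff) {
    float step_len = max_dist / steps;
    float accumulated = 0.0f;
    for (int i = 0; i < steps; ++i) {
        float t = (i + 0.5f) * step_len;
        Vec3 p { origin.x + dir.x * t,
                 origin.y + dir.y * t,
                 origin.z + dir.z * t };
        if (shadow_map_visible(p)) {
            // Each lit segment contributes; a deeper lit volume => a brighter shaft.
            accumulated += scattering_coeff * step_len;
        }
    }
    return accumulated;
}

int main() {
    Vec3 eye {0.0f, 2.0f, 0.0f};
    Vec3 dir {0.0f, -0.5f, 1.0f};
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    dir = { dir.x / len, dir.y / len, dir.z / len };

    float shaft = accumulate_light_shaft(eye, dir, 10.0f, 64, 0.05f);
    std::printf("in-scattered term along this ray: %f\n", shaft);
    return 0;
}
```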

Even though this effect seemed independent of the rendering method, since it basically just adds geometry to the scene, I asked whether it was locked to deferred rendering. NVIDIA said that it should be unrelated, as I suspected, which is good news for VR. Forward rendering is easier to anti-alias, which makes the uneven pixel distribution (after lens distortion) appear smoother.

Read on to see the other four technologies, and a little announcement about source access.

The second technology is Voxel Accelerated Ambient Occlusion (VXAO). Currently, ambient occlusion is typically a “screen-space” effect, which means that it is applied to the rendered image buffers. It can only use the information available within those buffers, which amounts to the camera's 2.5D projection of the world. Voxel Accelerated Ambient Occlusion instead calculates ambient occlusion on a grid of voxels in world space, so it is not limited to the camera's view of the world.

This is the current technology, which collects data from the camera's buffers.

The actual occlusion information is gathered by ray tracing from points within this voxel grid, outward in a hemisphere, to other points in the voxel grid. Axis-aligned voxels are highly efficient to ray trace, especially compared to triangles. Volume elements (hence “voxels”) that are very close to other objects tend to appear darker, basically because indirect light bounces have fewer potential directions to arrive from.
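As a rough illustration of that hemisphere tracing, here is a toy world-space voxel AO estimate. The grid layout, sample directions, and distance falloff are all invented for the example; this is not VXAO's actual implementation, just the general shape of the idea.

```cpp
// Toy sketch of world-space voxel AO over a small occupancy grid. Not
// NVIDIA's VXAO; it only illustrates tracing hemisphere rays through voxels
// and darkening points whose rays hit nearby geometry.

#include <array>
#include <cstdio>

constexpr int N = 32;                 // grid resolution (N x N x N voxels)
constexpr float VOXEL = 1.0f / N;     // voxel size in a unit cube

std::array<bool, N * N * N> grid{};   // true = voxel contains geometry

bool occupied(float x, float y, float z) {
    if (x < 0 || y < 0 || z < 0 || x >= 1 || y >= 1 || z >= 1) return false;
    int ix = int(x * N), iy = int(y * N), iz = int(z * N);
    return grid[(iz * N + iy) * N + ix];
}

// Estimate occlusion at a point with a roughly +Y normal: march a few fixed
// hemisphere directions and weight hits by how close they are.
float ambient_occlusion(float px, float py, float pz) {
    const float dirs[5][3] = {
        {0, 1, 0}, {0.7f, 0.7f, 0}, {-0.7f, 0.7f, 0},
        {0, 0.7f, 0.7f}, {0, 0.7f, -0.7f}
    };
    float visibility = 0.0f;
    for (auto& d : dirs) {
        float vis = 1.0f;                 // fully open until proven otherwise
        for (int step = 1; step <= 16; ++step) {
            float t = step * VOXEL;
            if (occupied(px + d[0] * t, py + d[1] * t, pz + d[2] * t)) {
                vis = t;                  // closer hit => darker (smaller) term
                break;
            }
        }
        visibility += vis;
    }
    return visibility / 5.0f;             // 1.0 = fully open, near 0 = enclosed
}

int main() {
    // Hypothetical scene: a solid wall of voxels filling the region x < 0.25.
    for (int z = 0; z < N; ++z)
        for (int y = 0; y < N; ++y)
            for (int x = 0; x < N / 4; ++x)
                grid[(z * N + y) * N + x] = true;

    std::printf("AO right beside the wall: %.2f\n", ambient_occlusion(0.30f, 0.5f, 0.5f));
    std::printf("AO out in the open:       %.2f\n", ambient_occlusion(0.90f, 0.5f, 0.5f));
    return 0;
}
```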

VXAO uses world-space data, which properly shades the ground under the tank.
SSAO, on the other hand, has no way of knowing how deep the tank is.

In a truly realistic simulation, global illumination would be computed directly, rather than dimming your added “indirect” light term by some AO value at various points in space. That is slow, though… like, “too slow for Pixar” levels of slow. SSAO does a pretty good job considering its limitations, but VXAO takes the approximation further by accounting for the actual environment (rather than the camera's slice of it, as we've mentioned). This should be a major improvement for moving cameras, although you can definitely see the difference even in screenshots.
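Put another way, the AO term only scales a cheap ambient estimate rather than computing the indirect light itself. A tiny shading function, with entirely made-up names, makes the shortcut explicit:

```cpp
// Minimal illustration of the shortcut the paragraph describes: rather than
// computing indirect bounces, the renderer dims a flat ambient estimate by
// the AO factor. Every name here is a placeholder.

#include <cstdio>

float shade(float direct, float ambient, float ao /* 0 = enclosed, 1 = open */) {
    return direct + ao * ambient;   // AO only modulates the indirect guess
}

int main() {
    std::printf("open ground:     %.2f\n", shade(0.6f, 0.3f, 1.0f));
    std::printf("under the tank:  %.2f\n", shade(0.0f, 0.3f, 0.2f));
    return 0;
}
```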

The buffer NVIDIA creates. Partially filled voxels are visualized as blue, full voxels as red, and empty voxels as clear.

The third technology is called Hybrid Frustum Traced Shadows (HFTS), which increases the quality of dynamic shadows. Rather than relying on shadow maps alone, shadows are also computed by rasterizing geometry in light space and determining whether each relevant screen pixel is occluded by it. The two results, the frustum-traced shadows and the soft shadows calculated by PCSS, are blended based on distance from the occluding object. This gives sharp, accurate, high-quality shadows up close that smoothly blur with distance.
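Assuming the hard (frustum-traced) term, the soft (PCSS) term, and a receiver-to-occluder distance are already available from earlier passes, the blend itself could look something like the sketch below. The function and parameter names are placeholders, not HFTS's actual interface.

```cpp
// Minimal sketch of the blend HFTS describes: sharp frustum-traced shadows
// near the contact point, fading into PCSS-style soft shadows with distance.
// The inputs are assumed to come from earlier passes; this is not NVIDIA's
// actual HFTS code.

#include <algorithm>
#include <cstdio>

float blend_shadow(float hard_shadow,       // 0..1 from the frustum-traced pass
                   float soft_shadow,       // 0..1 from the PCSS pass
                   float occluder_distance, // receiver-to-occluder distance
                   float blend_range)       // distance over which to fade
{
    // 0 at contact (use the sharp result), 1 far away (use the soft result).
    float t = std::clamp(occluder_distance / blend_range, 0.0f, 1.0f);
    return hard_shadow + (soft_shadow - hard_shadow) * t;   // lerp
}

int main() {
    // Near the occluder the crisp term dominates; farther away it softens.
    std::printf("near: %.2f\n", blend_shadow(0.0f, 0.4f, 0.1f, 2.0f));
    std::printf("far:  %.2f\n", blend_shadow(0.0f, 0.4f, 3.0f, 2.0f));
    return 0;
}
```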

Some sites reported, based on Monday's original press release, that NVIDIA is ray tracing these shadows. That is incorrect. NVIDIA states that the algorithm is two-stage. First, the geometry shader constructs four planes for each primitive in the coordinate system that the light sees when it projects onto the world; basically, imagine that the light is a camera. The pixel shader then tests every (applicable) screen pixel, converted into the light's coordinate system, to see where it lands. If it overlaps with a primitive, and that primitive is closer to the light than the screen pixel is, then that screen pixel is shadowed from that light. Unless I'm horribly mistaken, this looks like an application of the Irregular Z-Buffer algorithm that NVIDIA published in a white paper last year. They have not yet responded to my inquiry about whether this is the case.
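To illustrate the kind of test being described, here is a toy, CPU-side version: a triangle already expressed in the light's coordinate system, a pixel's light-space sample point, and a coverage-plus-depth check. The article mentions four planes per primitive; I am assuming three edge functions plus the triangle's own plane for the depth comparison, which is my interpretation rather than NVIDIA's documented construction.

```cpp
// Toy version of the per-pixel test: given a triangle in the light's
// coordinate system (x, y = light-space position, z = distance from the
// light), decide whether a pixel's light-space sample point is covered by
// the triangle and farther from the light than it is.

#include <cstdio>

struct P3 { float x, y, z; };

// 2D edge function: positive when (px, py) is on the left of edge a->b.
static float edge(const P3& a, const P3& b, float px, float py) {
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
}

// Returns true if the light-space sample (px, py, pz) is shadowed by the
// triangle (a, b, c), i.e. covered by it and behind it as seen by the light.
bool shadowed_by_triangle(const P3& a, const P3& b, const P3& c,
                          float px, float py, float pz) {
    float w0 = edge(b, c, px, py);
    float w1 = edge(c, a, px, py);
    float w2 = edge(a, b, px, py);

    bool inside = (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                  (w0 <= 0 && w1 <= 0 && w2 <= 0);   // accept either winding
    if (!inside) return false;

    // Interpolate the triangle's depth at (px, py) with barycentric weights.
    float area = w0 + w1 + w2;
    float tri_z = (w0 * a.z + w1 * b.z + w2 * c.z) / area;

    return tri_z < pz;   // triangle is closer to the light => pixel is shadowed
}

int main() {
    P3 a{0, 0, 1}, b{1, 0, 1}, c{0, 1, 1};   // occluder at depth 1 from the light
    std::printf("pixel behind it:   %d\n", shadowed_by_triangle(a, b, c, 0.25f, 0.25f, 2.0f));
    std::printf("pixel in front:    %d\n", shadowed_by_triangle(a, b, c, 0.25f, 0.25f, 0.5f));
    std::printf("pixel off to side: %d\n", shadowed_by_triangle(a, b, c, 0.90f, 0.90f, 2.0f));
    return 0;
}
```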

Those were the three released features. The last two are classified as experimental betas.

The first of these is NVIDIA Flow. This technology simulates combustible fluid, fire, and smoke using an adaptive voxel simulation. This version allows the simulation to spill outside of its bounding box, and it handles memory properly when that happens. It will be added to Unreal Engine 4 in Q2 of this year, although NVIDIA did not specify whether it would arrive in Epic's binary release in that timeframe, or just the GitHub source.

The second technology is PhysX-GRB. This is their popular rigid-body physics simulation, which has been given a major speed boost in this (experimental) version. NVIDIA claims that it is about two- to six-fold faster when measured under heavy load. They show a huge coliseum being reduced to rubble as balls from space crash upon it, managing ~40 FPS on whatever GPU they used. NVIDIA also claims that both CPU and GPU solvers should now produce identical results. “Flipping the switch” should just be a performance consideration.

NVIDIA closed their presentation with a few announcements of GameWorks source code being released to the public. PhysX, PhysX Clothing, and PhysX Destruction are already available, and have been for quite some time. Two new technologies are being opened up at GDC as well, though. The first is their Volumetric Lighting implementation that we discussed at the top of this article, and the second is their “FaceWorks” demo, which models skin and eye shading with sub-surface scattering and eye refraction.

NVIDIA has also announced plans, albeit not at GDC, to release the source for HairWorks, HBAO+, and WaveWorks. They are not ready to announce a timeline, but their intentions have been declared; in fact, they intend to open up “most or all technologies over time.” This is promising because, while registered developers can already access source code privately, the community at large benefits when the public gains access. They say that they do not want to open up projects until they have matured, and that makes sense. Both Mozilla and The Khronos Group do the same, holding some projects close to their chest until they believe they are ready for the public.

The part that counts is whether they actually get released once they are complete.