Earlier today Caustic Graphics passed us a new video showing its hardware and software combination running a real-time global illumination demo inside 3dsMax.
I’ll let the guys at Caustic give you the breakdown on what makes this demonstration unique:

This is our latest interactive demo of “true” global illumination using the CausticRT platform with Brazil in 3dsMax.  It’s a fully ray-traced interior living-room scene with 2,080,957 polygons and both outdoor and indoor lighting, and it includes classic 3D geometry such as the Stanford Dragon & Bunny.  The poly count for each of the objects is as follows:

Bunny: 70K polygons
Dragon: 800K polygons
Buddha: 1M polygons
Interior: 22K polygons

The 3dsMax ActiveShade window resolutions are 400×300 and 800×600.

Why is this significant?

No one has ever shown such a complex demonstration of true interactive GI using a GPU or CPU implementation.  Why? Because GPUs and CPUs are not architected to fully solve the ray tracing problem efficiently.  The CausticRT platform addresses these deficiencies and enables GPUs and CPUs to shade with an efficiency comparable to rasterization.

The ray tracing problem is very different from rasterization.  We have seen some impressive demos.  Researchers have achieved nearly 100 million rays per second [1].  But there is always a catch.  Sometimes the demos show effects that don’t really require ray tracing.  Often, every object is shaded with the same material.  Usually, the scenes are much too small to be representative of real-world use cases.

Why haven’t we seen a complete ray tracing renderer running on a GPU?

To answer that question, we need to take a closer look at the differences between the ray tracing and rasterization algorithms.

Rasterization is the perfect example of a streaming algorithm.  The scene geometry flows sequentially through a pipeline where each polygon in a mesh can be decomposed into fragments that all need to be shaded with the same shader.  This allows the GPU to shade many pixels together, running the shader instances in lock-step on what amounts to a wide SIMD machine.  Additionally, because of the locality of the fragments in screen space, it is likely that each instance of the shader will load the same assets and sample nearby texels, thereby getting excellent utilization of a modest-size cache.
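To make the streaming idea concrete, here is a minimal sketch (all names and data are hypothetical, not from Caustic or any real GPU) of what lock-step fragment shading looks like: every fragment in a batch runs the same shader, and adjacent fragments sample adjacent texels.

```python
# Hypothetical sketch of lock-step fragment shading: one batch of
# fragments from a single triangle, all running the SAME shader.

def shade_batch(fragments, shader, texture):
    # Every fragment executes the same shader instructions, so this loop
    # maps cleanly onto a wide SIMD machine with no divergence.
    return [shader(f, texture) for f in fragments]

def diffuse_shader(frag, texture):
    # Nearest-texel lookup: adjacent fragments hit nearby texels,
    # which is why a modest-size cache performs well here.
    u, v = frag["uv"]
    rows, cols = len(texture), len(texture[0])
    texel = texture[int(v * rows) % rows][int(u * cols) % cols]
    return texel * frag["n_dot_l"]

texture = [[1.0, 0.8],
           [0.8, 1.0]]
fragments = [{"uv": (0.10, 0.1), "n_dot_l": 0.9},
             {"uv": (0.15, 0.1), "n_dot_l": 0.9}]
print(shade_batch(fragments, diffuse_shader, texture))  # [0.9, 0.9]
```

The key property is uniformity: one shader, one batch, coherent memory access.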

Unlike rasterization, ray tracing allows objects to interact with each other.  Inter-object visibility is the key reason that ray tracing enables the stunning visual effects viewers have come to expect from production quality 3D content.  But the technical upshot of rays bouncing all over the scene is that a ray tracer needs random access to everything.  This means the entire scene (all geometry, shaders, and assets) must be accessible in RAM.
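A tiny sketch can illustrate the random-access point (everything here is illustrative, not Caustic's actual renderer): every bounce of a ray must be intersected against the whole scene, so which object's geometry and assets get touched next is effectively unpredictable.

```python
# Illustrative sketch: a secondary ray can hit ANY object in the scene,
# so all geometry, shaders, and assets must stay resident in RAM.

import random

scene = {name: {"reflective": name == "mirror"}
         for name in ["bunny", "dragon", "buddha", "mirror", "wall"]}

def intersect(ray, scene):
    # A real tracer walks an acceleration structure; from the memory
    # system's point of view the hit object is unpredictable.
    return random.choice(list(scene))

def trace(ray, scene, depth=0):
    hit = intersect(ray, scene)
    touched = [hit]
    if scene[hit]["reflective"] and depth < 2:
        # The bounce ray again tests against the entire scene.
        touched += trace(("bounce", hit), scene, depth + 1)
    return touched

random.seed(1)
print(trace("camera_ray", scene))  # which objects are touched varies per ray
```

Contrast this with the rasterization sketch above: there is no stream to prefetch, because the next memory access depends on where the previous ray happened to land.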

Indirect visibility creates another problem for a stream processor when a surface shader uses secondary rays to compute the incident lighting.  If two adjacent pixels being shaded together in lock-step emit secondary rays that encounter different materials, those shaders can no longer run together.  Additionally, assets needed by one shader will compete for cache space with assets needed by the other shader.
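The cost of that divergence can be sketched with simple arithmetic (this model is my own illustration, not Caustic's): when lanes in a SIMD batch need different shaders, the hardware must run each shader in a separate pass with the other lanes masked off, so lane utilization falls in proportion to the number of distinct materials.

```python
# Illustrative divergence model: one serial pass per distinct material
# in a SIMD batch, with non-matching lanes masked (idle) each pass.

def simd_passes(batch_materials):
    distinct = set(batch_materials)
    # Useful lane-cycles / total lane-cycles across all passes.
    utilization = len(batch_materials) / (len(batch_materials) * len(distinct))
    return len(distinct), utilization

# Primary rays from one triangle: every lane shares a material.
print(simd_passes(["wood"] * 8))                     # (1, 1.0)

# Secondary rays scatter into the scene and hit four materials.
print(simd_passes(["wood", "glass", "metal", "wood",
                   "glass", "cloth", "metal", "wood"]))  # (4, 0.25)
```

Eight lanes that hit four different materials run at a quarter of peak, which is why coherent primary rays are easy for GPUs and bouncing secondary rays are not.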

If you want more information on the Caustic hardware, check out our previous write-ups: