Unreal Engine Samaritan Demo Running On Single NVIDIA Kepler GPU

Subject: General Tech | March 7, 2012 - 09:38 PM |
Tagged: unreal, udk, samaritan, nvidia, fxaa

Last year we saw Epic Games unveil their Samaritan demo, which showed off next-generation gaming graphics running on three NVIDIA GTX 580 graphics cards in SLI.  The demo featured realistic hair and cloth physics along with improved lighting, shadows, anti-aliasing, and more bokeh effects than gamers could shake a controller at, and I have to say it was pretty impressive stuff a year ago, and it still is today. What makes this round special is that hardware has advanced enough that Samaritan-level graphics can be achieved in real time on a single graphics card, a big leap from last year's three SLI'd NVIDIA GTX 580s!

The Samaritan demo was shown at this year's GDC 2012 (Game Developers Conference) running on a single NVIDIA "Kepler" graphics card in real time, which is pretty exciting. Epic did not offer any further details on the upcoming NVIDIA graphics card; however, the knowledge that a single GPU was able to pull off what it took three Fermi cards to do certainly holds promise.

According to GeForce, however, it was not merely the NVIDIA Kepler GPU that made running the Samaritan demo on a single GPU possible. The article states that it was the inclusion of NVIDIA's anti-aliasing method known as FXAA, or Fast Approximate Anti-Aliasing, that enabled it. Unlike the popular MSAA option employed by many of today's games, FXAA uses much less memory, allowing a single graphics card to avoid being bogged down by memory thrashing. They further state that MSAA is not ideal for the Samaritan demo because the demo uses deferred shading to provide the "complex, realistic lighting effects that would be otherwise impossible using forward rendering," the method employed by many game engines. The downside to that arguably better lighting is that 4x MSAA requires four times as much memory: the GPU RAM needs to hold four samples per pixel, and the workload is magnified four times over in areas of the game where multiple pieces of geometry intersect.
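As a rough illustration of that multiplication, here is a back-of-the-envelope sketch of how storing four samples per pixel scales a deferred renderer's buffer memory. The resolution, render-target count, and bytes-per-pixel values are assumptions made up for the example, not Samaritan's actual buffer layout:

```python
# Rough G-buffer memory estimate for a deferred renderer (illustrative only).
# The resolution, number of render targets, and bytes per pixel are assumed
# values for the sake of the example, not Samaritan's actual configuration.

WIDTH, HEIGHT = 1920, 1080   # assumed render resolution
RENDER_TARGETS = 4           # e.g. albedo, normals, depth, material parameters
BYTES_PER_PIXEL = 8          # e.g. 16-bit float RGBA per render target

def gbuffer_mb(samples_per_pixel):
    """G-buffer size when every pixel stores `samples_per_pixel` samples."""
    total = WIDTH * HEIGHT * RENDER_TARGETS * BYTES_PER_PIXEL * samples_per_pixel
    return total / (1024 * 1024)

print(f"1 sample/pixel (no AA or FXAA): {gbuffer_mb(1):.0f} MB")
print(f"4 samples/pixel (4x MSAA):      {gbuffer_mb(4):.0f} MB")
```

The exact totals depend on the real buffer formats, but the four-to-one ratio between one sample and four samples per pixel is the point, and it lines up with the lighting-memory figures GeForce quotes below.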

[Image: FXAA vs MSAA]

They go on to state that without AA turned on, the lighting in the Samaritan demo uses approximately 120 MB of GPU RAM, and with 4x MSAA turned on it uses about 500 MB. That's 500 MB of memory dedicated just to lighting, memory that could otherwise hold more of the level and the physics data, for example, and it forces the GPU to swap more data than it should have to. They state that FXAA, on the other hand, is a shader-based AA method that requires no additional memory, making it "much more performance friendly for deferred renderers such as Samaritan."
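To see why a shader-based approach sidesteps that memory cost, here is a minimal conceptual sketch of post-process anti-aliasing: detect high-contrast edges in the already-resolved, one-sample-per-pixel color buffer and blend along them. The luma weights and threshold below are illustrative assumptions, and this is not NVIDIA's actual FXAA shader, just the general idea:

```python
# Conceptual sketch of shader-style post-process AA (NOT the real FXAA shader).
# It reads only the final 1-sample-per-pixel color buffer, so unlike MSAA it
# needs no additional per-sample storage.

def luma(rgb):
    """Approximate perceived brightness of an RGB pixel (channels in 0.0-1.0)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def smooth_pixel(image, x, y, threshold=0.1):
    """Blend a pixel with its neighbors when local luma contrast suggests an edge."""
    h, w = len(image), len(image[0])
    center = image[y][x]
    neighbors = [image[max(y - 1, 0)][x], image[min(y + 1, h - 1)][x],
                 image[y][max(x - 1, 0)], image[y][min(x + 1, w - 1)]]
    lumas = [luma(p) for p in neighbors + [center]]
    if max(lumas) - min(lumas) < threshold:
        return center                              # flat region: leave untouched
    # Edge detected: soften the jag by averaging with the neighborhood.
    samples = neighbors + [center]
    return tuple(sum(channel) / len(samples) for channel in zip(*samples))
```

Because an approach like this only reads and writes the final color buffer the card already has, it adds shader work but no extra render-target memory, which is the trade-off GeForce describes.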

Without anti-aliasing, the game world would look much more jagged and far less realistic. AA seeks to smooth out those jagged edges, and FXAA enabled Epic to run their Samaritan demo on a single next-generation NVIDIA graphics card. Pretty impressive if you ask me, and I'm excited to see game developers roll some of the Samaritan graphical effects into their games. Knowing that Epic Games' engine can run on a single graphics card means that future is that much closer. More information is available here, and if you have not already seen it, the Samaritan demo is shown in the video below.

Source: GeForce
Manufacturer: Epic Games

The Truth

There are a few people in the gaming industry you simply must pay attention to when they speak.  One of them is John Carmack, co-founder of id Software, creator of Doom, and a friend of the site.  Another is Epic Games' Tim Sweeney, a fellow pioneer in the field of computer graphics who brought us the magic of Unreal before bringing the rest of the gaming industry the Unreal Engine.

At DICE 2012, a trade show where game developers demo their wares and learn from each other, Sweeney gave a talk on the future of computing hardware.  (You can see the source of my information and slides here at Gamespot.) Many pundits, media, and even developers have brought up the idea that the next console generation we know is coming will be the last - that we will have reached the point in our computing capacity where gamers and designers will be comfortable with the quality and realism provided.  Forever.

[Image: Tim Sweeney]

Think about that for a moment; has anything ever sounded so obviously crazy?  Yet, in a world where gaming has seemed to regress into the handheld spaces of the iPhone and iPad, many would have you believe that it is indeed the case.  Companies like NVIDIA and AMD, which spend billions of dollars developing new high-powered graphics technologies, would simply NOT do so anymore and would instead focus only on low power.  Actually...that is kind of happening with NVIDIA's Tegra and AMD's move to APUs, but both claim that the development of leading graphics technology is what allows them to feed the low end - the sub-$100 graphics cards, SoCs for phones and tablets, and more.

Sweeney started the discussion by teaching everyone a little about human anatomy. 


The human eye has been studied quite extensively, and the amount of information we know about it would likely surprise you.  With 120 million monochrome receptors and 5 million color receptors, the eye and brain are able to do what even our most advanced cameras cannot.

Continue reading our story on the computing needs for visual computing!!