The Truth

In a talk at DICE this year, Epic Games’ Tim Sweeney discussed the notion that we were at the “good enough” point with gaming hardware.

There are a few people in the gaming industry you simply must pay attention to when they speak.  One of them is John Carmack, co-founder of id Software, creator of Doom and a friend of the site.  Another is Epic Games’ Tim Sweeney, a fellow pioneer of computer graphics who brought us the magic of Unreal before giving the rest of the gaming industry the Unreal Engine. 

At DICE 2012, an industry summit where game developers show off their work and learn from each other, Sweeney gave a talk on the future of computing hardware.  (You can see the source of my information and slides here at Gamespot.)  Many pundits, members of the media and even developers have floated the idea that the next console generation, the one we know is coming, will be the last: we will have reached a point in computing capacity where gamers and designers are comfortable with the quality and realism provided.  Forever. 

Think about that for a moment; has anything ever sounded so obviously crazy?  Yet, in a world where gaming has seemed to regress into the handheld spaces of the iPhone and iPad, many would have you believe that it is indeed the case.  Companies like NVIDIA and AMD, which spend billions of dollars developing new high-powered graphics technologies, would simply NOT do so anymore and would instead focus only on low power.  Actually…that is kind of happening with NVIDIA’s Tegra and AMD’s move to APUs, but both companies claim that developing leading graphics technology is what allows them to feed the low end: sub-$100 graphics cards, SoCs for phones and tablets, and more.

Sweeney started the discussion by teaching everyone a little about human anatomy. 

The human eye has been studied quite extensively, and how much we know about it would likely surprise you.  With roughly 120 million monochrome receptors (rods) and 5 million color receptors (cones), the eye and brain together can do what even our most advanced cameras cannot.

With a resolution of about 30 megapixels, the human eye gathers information at roughly 72 frames per second, which helps explain why many gamers debate whether frame rates higher than 70 matter at all.  One area Sweeney did not touch on that I feel is worth mentioning is the brain’s ability to recognize patterns, or more precisely, changes in them.  When you hear the terms "stuttering" or "microstutter" on forums, this is what gamers are perceiving.  A game could average 80 FPS, but if the frame rate suddenly swings from 90 FPS to 80 FPS, a gamer may "feel" that difference even though it doesn’t show up in traditional frame rate measurements. 
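
To make that "feel" argument concrete, here is a minimal Python sketch with made-up frame times (the numbers are hypothetical, not captured data) showing how two runs can report the same average FPS while one has much larger frame-to-frame swings.

```python
# Rough sketch: average FPS can hide frame-to-frame variance ("microstutter").
# The frame time lists below are hypothetical examples, not real measurements.

def summarize(frame_times_ms):
    """Return average FPS and the worst single frame-to-frame swing in ms."""
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    worst_swing = max(abs(a - b) for a, b in zip(frame_times_ms, frame_times_ms[1:]))
    return 1000.0 / avg_ms, worst_swing

smooth  = [12.5] * 8                                         # steady ~80 FPS
stutter = [11.1, 13.9, 11.1, 13.9, 11.1, 13.9, 11.1, 13.9]   # same average, bigger swings

for name, times in (("smooth", smooth), ("stutter", stutter)):
    fps, swing = summarize(times)
    print(f"{name:8s} avg {fps:5.1f} FPS, worst frame-to-frame swing {swing:.1f} ms")
```

Both runs print the same 80 FPS average, but only the frame-to-frame swing reveals the pattern a player might perceive as stutter.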

In terms of raw resolution, Sweeney then posits that the maximum resolution required for the human eye to reach its apex in visual fidelity is 2560×1600 at a 30 degree field of view, or 8000×4000 at a 90 degree FOV.  That 2560×1600 resolution is what we see today on modern 30-in LCD panels, but the 8000×4000 resolution is about 16x that of current HDTVs. 
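
As a quick sanity check on those two figures, here is a short Python sketch (my arithmetic, not Sweeney's slides) that converts each resolution and field of view into horizontal pixels per degree; both work out to a similar angular pixel density, which is why they can both be quoted as the eye's limit.

```python
# Back-of-the-envelope check: both quoted resolutions land at a similar
# horizontal pixel density per degree of field of view.

cases = [
    ("30 degree FOV", 2560, 30.0),
    ("90 degree FOV", 8000, 90.0),
]

for label, horizontal_pixels, fov_degrees in cases:
    px_per_degree = horizontal_pixels / fov_degrees
    px_per_arcminute = px_per_degree / 60.0
    print(f"{label}: {px_per_degree:5.1f} px/degree "
          f"({px_per_arcminute:.2f} px/arcminute)")
```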

According to the Nyquist theorem, which describes how densely a signal must be sampled to be faithfully reproduced, game engines would need about 40 billion triangles per second to reach "perfection" at that 8000×4000 resolution.  Currently, the fastest GPU for triangle processing can handle 2.8 billion per second, and Sweeney claims we are only a factor of 50x away from that goal.  That difference could likely be closed in another two generations of GPU architecture, which actually gives some credence to those who say the end is near. 
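
To put that gap in perspective, here is a rough Python sketch using the two throughput numbers above; note that a straight division of 40 billion by 2.8 billion comes out closer to 15x than the quoted 50x, and the 2x-per-generation scaling is my assumption, not Sweeney's.

```python
import math

# Figures quoted above (triangles per second).
target_rate  = 40e9   # needed for "perfection" at 8000x4000, per the Nyquist argument
current_rate = 2.8e9  # fastest GPU cited for triangle processing

ratio = target_rate / current_rate
print(f"raw gap from the quoted numbers: {ratio:.1f}x")   # ~14.3x

# Assumption (mine, not Sweeney's): each GPU generation roughly doubles triangle throughput.
for label, gap in (("computed ~14x gap", ratio), ("quoted 50x gap", 50.0)):
    doublings = math.log2(gap)
    print(f"{label}: ~{doublings:.1f} doublings needed")
```

How quickly that gap actually closes depends entirely on how much each new architecture improves triangle throughput in practice.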

But triangle processing hasn’t been the primary focus of game engines for some time, and it doesn’t tell the whole story. 
