Rendering Games with Raytracing Will Revolutionize Graphics
Other ray tracing benefits
While raytracing's ability to render standard PC games in a more impressive-looking fashion is the main reason for promoting the technology today, there are other benefits that could potentially change the way games are developed if raytracing is accepted by the community.
As it turns out, because of the way raytracing scales with screen resolution, future handheld devices that succeed the likes of the Sony PSP or Nintendo DS could use raytracing as well. The frame rate of a raytraced title is in fact nearly linearly dependent on the number of pixels being rendered. The poster in the photo above shows the relationship: if a certain configuration of hardware can render 1280x720 images at 30 frames per second, then that same hardware will be able to push 563 FPS at a resolution of 256x192 (which happens to be the DS's native resolution).
Of course, since 563 FPS is overkill, less powerful hardware could instead render 256x192 at 30 FPS. The images and effects would be the same as on the faster system, just at a lower resolution, making scalability to handheld systems nearly a non-issue.
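Since the scaling claim is just arithmetic, it's easy to check. A minimal Python sketch (the function name is mine, not from Intel's engine) of that nearly linear relationship:

```python
def scaled_fps(base_w, base_h, base_fps, target_w, target_h):
    """FPS at a new resolution, assuming render cost scales with pixel count."""
    return base_fps * (base_w * base_h) / (target_w * target_h)

# The poster's example: 1280x720 at 30 FPS scaled down to the DS's 256x192
print(scaled_fps(1280, 720, 30, 256, 192))  # 562.5, i.e. roughly the 563 FPS quoted
```

Run in reverse, the same function tells you what a weaker chip needs: hardware 1/18.75th as fast still hits 30 FPS at the DS resolution.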
Antialiasing and Texture Filtering
Having been a student of rasterized graphics for so long, I of course had a long list of questions for Daniel about how other interesting graphics issues would be addressed in a raytraced game engine. First up was antialiasing: could it be done using ray tracing? It turns out that antialiasing is not only possible but has the potential to be much more efficient than the current AA methods used by NVIDIA and AMD graphics cards. Aliasing occurs where adjacent pixels with very different colors meet and don't blend smoothly; one way raytracing could be used for AA is simply to shoot a ray from the "eye" or camera toward every pixel on the screen and compare each pixel's color value to those of the immediately surrounding pixels. If the difference exceeds a certain threshold, the engine knows that transition is going to cause aliasing, and the color values can be blended in whatever fashion the programmer wants.
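The edge-detection step described above can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not code from Daniel's engine; the threshold value and the per-channel difference metric are my own assumptions:

```python
def needs_aa(image, x, y, threshold=0.1):
    """True if pixel (x, y) differs enough from any neighbor to cause aliasing.

    `image` is a 2D grid of (r, g, b) tuples with channels in [0, 1].
    """
    c = image[y][x]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if (dx or dy) and 0 <= ny < len(image) and 0 <= nx < len(image[0]):
                # compare the largest per-channel difference against the threshold
                if max(abs(a - b) for a, b in zip(c, image[ny][nx])) > threshold:
                    return True
    return False
```

Only pixels flagged by a test like this would get extra rays or blending, which is where the claimed efficiency over full-screen AA comes from.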
There are several other AA algorithms that raytracing could adopt, but the general idea is that raytracing could spare the GPU/CPU a lot of unnecessary work; an issue that the rasterization engines from NVIDIA and AMD continue to wrestle with.
Another common feature of current games and GPUs is the ability to filter textures to make them appear sharper to the user. Raytracing can address this too, according to Daniel, by using a method called "ray differentials" (information on the theory here) and doing some simple math. By knowing the angle between two rays in the filtering pass, the engine can determine how far away a texture is and filter it more or less depending on its ability to affect the gamer's experience. Items farther away obviously don't need to be filtered to the degree that the floor tiles under the gamer's feet do.
For now, Daniel has implemented the AA algorithm mentioned above in the game engine that was on display; the anisotropic filtering method hasn't yet been implemented, though he is positive it's not only possible but will produce good results.
When can we have it?
All of this leads us to one thought: we want this now. Unfortunately, that's not quite possible, as the hardware isn't fast enough to deliver high resolutions and high frame rates at the same time. But it turns out we aren't far from that convergence. The team at Intel estimates that within two years or so, hardware will exist that allows "game quality" ray tracing on a desktop machine. That means in that timeframe we might see a fully raytraced game engine (though probably from a team like the one at Intel).
Today's Intel quad-core CPUs already perform quite well with this technology, and with Intel confirming that Nehalem processors will have up to eight cores on a single die (which falls within that two-year span), it's possible we'll be running raytraced games on hardware like what we were shown this week at IDF. If it's not Nehalem or the next iteration of standard processors, then how about the much-famed Terascale processor from another of Intel's research teams? These 80-core, 2-TeraFlop processors are being shown again at IDF, and we'll have more information on them very soon as well.
According to Daniel and the Intel team, the magic number of rays they'll need to process each second to achieve that "game quality" and frame rate is around one billion (though interesting designs can be done with considerably fewer). That would allow about 30 rays per pixel to be processed for each frame, with different rays necessary for colors, lighting and other special effects. Doing the math at a resolution of 1024x768: 786,432 pixels times 30 rays per pixel times 60 frames per second gives roughly 1.415 billion rays per second. That is an impressive amount of processing horsepower and a level we just aren't at yet. The dual Clovertown system running our live demo was pushing approximately 83 million rays per second, plus the work of standard trilinear filtering.
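Reproducing that back-of-the-envelope math makes the gap concrete:

```python
# The article's target figures for "game quality" raytracing.
width, height = 1024, 768
rays_per_pixel = 30
fps = 60

pixels = width * height                       # 786,432 pixels
rays_per_sec = pixels * rays_per_pixel * fps  # 1,415,577,600 ~= 1.415 billion

demo_rays_per_sec = 83_000_000                # the dual Clovertown demo system
print(rays_per_sec / demo_rays_per_sec)       # today's demo is roughly 17x short
```

In other words, the demo hardware needs about an order of magnitude more ray throughput, which is what makes the two-year estimate for eight-core Nehalem or Terascale-class parts plausible.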
The future of raytracing looks VERY bright and with minds like Daniel's and his teammates' pushing the technology, I don't see any reason why in a couple of years raytracing couldn't be at the leading edge of gaming development. Daniel and his teammates have promised us a lot of other great information on the raytracing development and I hope we'll get it sooner rather than later. Raytracing won't be revolutionizing the graphics world next year, but I don't think you'll be able to hold it back for long.
Also, be sure to check out Daniel Pohl's first article on raytracing in computer gaming here at PC Perspective.