Antialiasing is a computationally expensive task, and many efforts have been made over the years to minimize its performance impact while keeping as much of the visual benefit as possible. The problem with aliasing is that while a pixel is the smallest unit of display on a computer monitor, it is still large enough for our eye to pick out as a distinct unit. So what happens when two objects of two different colors partially occupy the same pixel? Who wins? In real life, our eye would see the light from both objects hit the same retinal nerve (that is not really how it works biologically, but close enough) and would perceive some blend between the two colors. Intel has released a whitepaper describing their attempt at this problem, and it resembles a method that Matrox used almost a decade ago.

Matrox’s antialiasing method.

(Image from Tom’s Hardware)

Looking at the problem of antialiasing, you want multiple pieces of information to dictate the color of a pixel whenever two objects of different colors both partially occupy it. The simplest way to do that is to divide the pixel into smaller subpixels and then crush them back together into an average, which is called supersampling. This means you are rendering an image at 2x, 4x, or even 16x the resolution you are actually running at. More methods were developed, including ones that only flag the edges for antialiasing, since that is where aliasing occurs. In the early 2000s, Matrox looked at the problem from an entirely different angle: since the edge is what really matters, you can find the shape of the various edges and calculate how much of a pixel's area gets divided up between each object, giving an effect they claimed was equivalent to 16x MSAA for very little cost. The problem with Matrox's method: it failed in many cases involving shadowing and pixel shaders… and it came out in the DirectX 9 era. Suffice it to say it did not save Matrox as an elite gaming GPU company.
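
For the curious, here is a minimal sketch of what the supersampling half of that looks like, assuming the scene has already been rendered into a higher-resolution buffer; the function name and the per-axis factor are purely illustrative, not anything from Intel's or Matrox's papers.

```python
import numpy as np

def downsample_ssaa(hi_res, factor=2):
    """Box-filter downsample for supersampling.

    hi_res: the scene rendered at `factor` times the target resolution on each
    axis, as a float array of shape (H*factor, W*factor) or (H*factor, W*factor, 3).
    Returns the (H, W[, 3]) image with each final pixel set to the average of
    its factor*factor subsamples.
    """
    h, w = hi_res.shape[0] // factor, hi_res.shape[1] // factor
    blocks = hi_res[:h * factor, :w * factor].reshape(
        h, factor, w, factor, *hi_res.shape[2:])
    # Crush the subsamples together: average over the two subsample axes.
    return blocks.mean(axis=(1, 3))
```

The cost is obvious from the shape of the data alone: with factor=4 you are shading sixteen times as many samples per final pixel, which is exactly why cheaper edge-only schemes like Matrox's and Intel's are attractive.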

Look familiar?

(Both images from Intel Blog)

Intel’s method of antialiasing likewise looks at the geometry of the image, but instead breaks the edges into L shapes to determine the area they enclose. To keep performance up, they pipeline work between the CPU and GPU, keeping both constantly busy with the target frame or its neighbors. In other words, while the CPU performs MLAA on one frame, the GPU is busy preparing and drawing the next. Of course, when I see technology like this I think two things: will this work on architectures with discrete GPUs, and will it introduce extra latency between the rendering code and the gameplay code? I would expect that it must, as one frame is not even finished, let alone drawn to the monitor, before the next set of game state is fetched for rendering. The question remains whether that effect will be drowned out by the rest of the latencies involved in synchronization.
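
To make that a bit more concrete, here is a heavily simplified toy sketch of the general flavor of the technique, handling only horizontal edges on a grayscale image and assuming one fixed silhouette shape; the real implementation classifies full L, Z, and U shaped patterns in both directions and computes exact per-pixel coverage areas, so treat this as an illustration of the idea rather than Intel's algorithm.

```python
import numpy as np

def find_runs(mask_row):
    """Return (start, length) for each contiguous run of True values in a 1-D mask."""
    runs, x, n = [], 0, len(mask_row)
    while x < n:
        if mask_row[x]:
            start = x
            while x < n and mask_row[x]:
                x += 1
            runs.append((start, x - start))
        else:
            x += 1
    return runs

def mlaa_like_pass(img, threshold=0.1):
    """Toy MLAA-flavored pass on a grayscale float image, horizontal edges only.

    Find discontinuities between vertically adjacent pixels, group them into
    runs, pretend the true silhouette is a straight line dropping from a
    half-pixel offset at the left end of the run to zero at the right end,
    and blend each pixel pair across the edge by the area that line cuts off.
    """
    out = img.astype(np.float32)
    h, w = out.shape
    # An edge exists between row y and row y+1 wherever the luminance jump is large.
    edges = np.abs(out[:-1, :] - out[1:, :]) > threshold
    for y in range(h - 1):
        for start, length in find_runs(edges[y]):
            for i in range(length):
                # Silhouette offset entering and leaving pixel i of the run.
                a = 0.5 * (1.0 - i / length)
                b = 0.5 * (1.0 - (i + 1) / length)
                weight = 0.5 * (a + b)  # trapezoid area covered, between 0 and 0.5
                top = out[y, start + i]
                bot = out[y + 1, start + i]
                out[y, start + i] = (1 - weight) * top + weight * bot
                out[y + 1, start + i] = (1 - weight) * bot + weight * top
    return out
```

Which end of a run the silhouette starts from (and whether it needs to be split in two) is exactly what the pattern classification decides in the real thing; I have simply hard-coded one case to keep the sketch readable.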

AMD and NVIDIA both have their own variants of MLAA, the latter of which NVIDIA’s marketing team has dubbed FXAA. Unlike AMD’s method, NVIDIA’s must be programmed into the game engine by the development team, requiring a little extra work on the developer’s part. That said, FXAA has found its way into Duke Nukem Forever as well as the upcoming Battlefield 3, among other games, so support is there, and older games should be light enough to simply brute-force with traditional antialiasing.

The flat line shows how much time is spent on MLAA itself: just a few milliseconds, and constant.

(Image from Intel Blog)

Performance-wise, Intel’s solution is dramatically faster than MSAA, is pretty much scene-independent, and should produce results near the 16x mark thanks to the precision possible when calculating areas. Speculation about latency between the render and game loops aside, the implementation looks quite sound and lets users with on-processor graphics avoid wasting precious GPU cycles (especially precious on the sort of GPU you would find on-processor) on antialiasing, spending them instead on raising other settings, including resolution itself, while still avoiding jaggies. Conversely, both AMD’s and NVIDIA’s methods run on the GPU, which makes a little more sense for them, as a discrete GPU should not require as much help as one packed into a CPU.

Could Matrox’s last gasp from the gaming market be Intel’s battle cry?
