NVIDIA GeForce 6200 with TurboCache Technology
NV44 - 6200 w/ TurboCache
NVIDIA recently had a good portion of the press out at its Santa Clara offices to show off its next entry in the GeForce 6-series graphics card line - the 6200 with TurboCache (shortened to 6200TC from this point forward). This chip is aimed at the entry-level graphics market and completes NVIDIA's lineup of DX9-capable cards (with Shader Model 3.0) across all price ranges. The only thing left for NVIDIA is an integrated video chipset, but that appears to be a venture for 2005.
The 6200TC does more than just bring the GeForce 6 series architecture to the sub-$100 price point; it also introduces new memory technology and, as a result, a modified core architecture.
The NV44 chip, the codename under which the 6200TC was developed, is a rather small chip measuring only 10mm x 11mm for a total die size of 110mm^2. It is built on the same 0.11 micron process as the 6600 GPU, which makes these latest chips from NVIDIA significantly smaller than the original NV40s (as our pictures below demonstrate).
NV44 die - 6200 w/ TurboCache
NV40 die - 6800 Ultra - to "almost" exact same scale
What a difference those changes have made in size: the NV40 measures about 16mm x 19mm, or 304mm^2 - almost three times the size of the NV44. From this it is obvious why NVIDIA is excited about the new technology: a smaller chip is also cheaper to manufacture, which can bring better profits.
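The die-size comparison above is simple arithmetic; a quick sketch using the dimensions quoted in this article confirms the roughly three-to-one ratio:

```python
# Die areas from the dimensions quoted above (mm x mm).
nv44_area = 10 * 11   # NV44: 110 mm^2
nv40_area = 16 * 19   # NV40: 304 mm^2

ratio = nv40_area / nv44_area
print(nv44_area, nv40_area, round(ratio, 2))  # 110 304 2.76
```

At about 2.76x, the NV40 is indeed "almost three times" the size of the NV44.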
The smaller die size doesn't happen by magic, and even the move to the 0.11 micron process from NV40's 0.13 micron can't account for the entire reduction. The 6200 has had several architectural modifications that make it a bit different from other 6-series GPUs. The 6200TC features 3 vertex shaders and 4 pixel pipelines, though only two pixels can actually be written each clock cycle. This is due to a "fragment crossbar" that takes the data coming from the 4 pipes and balances it across the write ports. It sounds a bit confusing, so here is a block diagram to help clarify:
6200 w/ TurboCache block diagram
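The practical consequence of the crossbar arrangement is that shading throughput and pixel write throughput differ. A minimal sketch of that arithmetic, assuming an illustrative core clock (the clock figure here is our assumption, not a spec from NVIDIA):

```python
# Hedged sketch of the fill-rate implication of the fragment crossbar
# described above - not NVIDIA's actual hardware logic. The core clock
# is assumed for illustration only.
CLOCK_MHZ = 350       # assumed core clock (illustrative)
PIXEL_PIPES = 4       # fragments shaded per clock
PIXELS_WRITTEN = 2    # pixels the crossbar can retire per clock

shading_rate = CLOCK_MHZ * PIXEL_PIPES     # Mfragments/s shaded
fill_rate = CLOCK_MHZ * PIXELS_WRITTEN     # Mpixels/s written
print(shading_rate, fill_rate)  # 1400 700
```

In other words, the chip can shade twice as many fragments per clock as it can write, which is a sensible trade-off when shader work, not raw fill rate, is the bottleneck.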
If you have followed the GeForce 6-series cores closely, you may recognize that hardware support for color and Z-compression is no longer listed. This is one of the changes that reduced the die size and allowed for better support of the TurboCache technology (more about TurboCache on the next page). The price we pay is that antialiasing takes a larger performance hit than you'd see on the other 6-series GPUs. NVIDIA says it made this decision consciously because the majority of entry-level graphics card users play games at lower resolutions with little to no filtering. While anti-aliasing is something enthusiasts may want, it's hard to argue with NVIDIA's reasoning for going without the compression hardware.