AMD Radeon HD 4870 and HD 4850 Review - Mid-range GPU Mix-up
RV770 Makes its Debut
You've got another thing coming...
If you were led to believe that AMD's next-generation architecture, the RV770 design, was supposed to be out MUCH earlier than today, you wouldn't be alone. We started posting about the RV770 as far back as late December, and the stream of news and rumors about shader processor counts, memory technologies and more has continued ever since. Last week we were "gifted" with the early release of the Radeon HD 4850 512MB - one of the products based on the RV770 design - but today we'll walk you through not only both the HD 4850 and HD 4870 graphics boards but also the RV770 architecture itself.
AMD's New GPU Design Strategy
If you follow our graphics coverage you have seen the recent release of NVIDIA's GT200 architecture in the form of the GeForce GTX 280 and 260 cards. That GPU consists of 1.4 billion transistors on a 576 mm^2 die built on TSMC's 65nm process technology; that makes for one BIG chip.
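To put those numbers in perspective, here's a quick back-of-the-envelope calculation using the two figures quoted above (1.4 billion transistors, 576 mm^2 die); it's just a sketch of the arithmetic, not anything from NVIDIA's spec sheets:

```python
# Rough transistor-density math for NVIDIA's GT200,
# using the figures quoted in the text above.
transistors = 1.4e9     # 1.4 billion transistors
die_area_mm2 = 576      # die area in square millimeters

# Density works out to roughly 2.4 million transistors per mm^2.
density = transistors / die_area_mm2
print(f"{density / 1e6:.2f} million transistors per mm^2")
```

For comparison's sake, the same two-line calculation can be run against any competing die once its transistor count and area are known.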
AMD, obviously trying to make its alternative approach more appealing to customers and investors, points out some potential drawbacks to the monolithic method, all of which revolve around the design process. When building a large monolithic chip, NVIDIA (and ATI, in its previous designs) had difficulty managing power requirements for the high-end parts - just look at the maximum TDP of the GeForce GTX 280 for proof of that. However, as you can see below, AMD's solution is to use multiple GPUs on a single board (or on two cards) to reach the same level of performance, which puts similar power demands on the customer's system anyway.
AMD also points to a 6-12 month lag time to develop the smaller, cheaper, less power-hungry versions of a large chip; this is debatable, though, and depends greatly on the design team's focus. The cost of having to use large chips for smaller, slower parts can be significant - remember the days when both NVIDIA and AMD shipped GPUs that end users could "unlock", because a chip that was "good" enough for a higher-end part had to be used to fill inventory for a lower-cost card.