AMD Radeon HD 7970 3GB GHz Edition Review - Taking on the GTX 680
A new SKU for a new battle
On launch day we hosted AMD's Evan Groenke for an in-studio live interview and discussion about the Radeon HD 7970 GHz Edition. For the on-demand version of that event, check it out right here. Enjoy!
AMD has had a good run in the discrete graphics market for quite some time. With the Radeon HD 5000 series, the company was able to take a commanding mindshare (if not marketshare) lead from NVIDIA. While that diminished some with the HD 6000 series going up against NVIDIA's GTX 500 family, the release of the HD 7970 and HD 7950 just before the end of 2011 stepped it up again. AMD was the first to market with a 28nm GPU, the first to support DX11.1, the first with a 3GB frame buffer and the new products were simply much faster than what NVIDIA had at the time.
AMD enjoyed that crown on the GPU front all the way until the NVIDIA GeForce GTX 680 launched in March. In a display of technology that most reviewers never thought possible, NVIDIA had a product that was faster and more power efficient, and that matched or exceeded just about every feature of the AMD Radeon HD 7000 cards. Availability problems plagued NVIDIA for several months (we are just now seeing the end of the shortage) and even caused us to do nearly-weekly "stock checks" to update readers. Prices on the HD 7900 cards have slowly crept down to a place where they are relevant in the market, but AMD appears unwilling to take a back seat to NVIDIA again.
While visiting with AMD in Seattle for the Fusion Developer Summit a couple of weeks ago, we were briefed on a new secret: Tahiti 2 (Tahiti XT2 internally), an updated Radeon HD 7970 GPU that would be shipping soon with higher clock speeds and a new "boost" technology in order to combat the GTX 680. Even better, this card was going to have a $499 price tag.
The Radeon HD 7970 3GB GHz Edition
To be honest, there is not very much that has changed for this new iteration of the Tahiti GPU. It is very fair to call it that – an iteration. In response to the NVIDIA GeForce GTX 680 graphics card and its improved performance, AMD is taking the same chip, binning it for better clock speeds, and releasing a factory overclocked model.
Based on the GCN (Graphics Core Next) architecture, the Radeon HD 7970 3GB GHz Edition will still include 32 compute units, a total of 2048 stream processors – along with a 384-bit memory bus – and 3GB of GDDR5 memory. The chip remains 4.31 billion transistors strong.
Excuse the mistake on this table - AMD was NOT able to crunch the GHz Edition from 4.31 to 2.8 billion transistors...
While the base clock speed of the original HD 7970 card was 925 MHz, the new GHz Edition hits – you guessed it – 1.0 GHz. You'll also notice a new "boost" clock listed here at 1.05 GHz, or 50 MHz higher than the base clock. In response to the success of the NVIDIA GPU Boost technology, it would appear that AMD has taken a similar approach to GPU clock speeds. We'll touch more on this below, but things are still quite different. These clock speed increases result in a total theoretical peak processing rate of 4.3 TFLOPS for single precision math and 1.08 TFLOPS for double precision math. To put that in perspective, NVIDIA's 7.1 billion transistor GK110 GPU is rumored to also have just over 1 TFLOPS of DP performance.
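Those peak-throughput figures fall straight out of the unit counts and clocks. A quick sanity check, assuming GCN's one fused multiply-add (2 FLOPs) per stream processor per clock and Tahiti's 1/4-rate double precision:

```python
# Peak throughput for the HD 7970 GHz Edition at its boost clock.
# Assumes: one FMA (2 FLOPs) per stream processor per clock (GCN),
# and Tahiti's 1/4 DP:SP rate.
STREAM_PROCESSORS = 2048
BOOST_CLOCK_GHZ = 1.05
FLOPS_PER_CLOCK = 2          # fused multiply-add counts as 2 ops
DP_RATE = 0.25               # Tahiti's double-precision ratio

sp_tflops = STREAM_PROCESSORS * FLOPS_PER_CLOCK * BOOST_CLOCK_GHZ / 1000
dp_tflops = sp_tflops * DP_RATE
print(f"SP: {sp_tflops:.2f} TFLOPS, DP: {dp_tflops:.2f} TFLOPS")
# SP: 4.30 TFLOPS, DP: 1.08 TFLOPS
```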
The 384-bit memory bus is also running at higher speeds – now at a clock rate equal to that of the GTX 680 – 1.5 GHz, or 6 Gbps. Compared to the original HD 7970 data rate of 1375 MHz, the new GHz Edition of the HD 7970 has a 9% increase in available memory bandwidth (up to 288 GB/s). Of course, AMD is still maintaining the 3GB frame buffer on this card, which has been shown to give Tahiti an advantage over NVIDIA's Kepler in triple-panel gaming and super-high resolution scenarios.
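The bandwidth figure follows directly from the bus width and the effective data rate:

```python
# Memory bandwidth = bus width (in bytes) x effective data rate.
# GDDR5 runs at 4x its memory clock: 1500 MHz -> 6 Gbps, 1375 MHz -> 5.5 Gbps.
BUS_WIDTH_BITS = 384

ghz_edition = BUS_WIDTH_BITS / 8 * 6.0    # GB/s for the GHz Edition
original    = BUS_WIDTH_BITS / 8 * 5.5    # GB/s for the original HD 7970
print(f"{ghz_edition:.0f} GB/s vs {original:.0f} GB/s "
      f"(+{(ghz_edition / original - 1) * 100:.0f}%)")
# 288 GB/s vs 264 GB/s (+9%)
```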
Interestingly, AMD has not increased the "typical board power" of the Radeon HD 7970 GHz Edition despite the higher core and memory clock speeds. While this is possible with proper binning, we'll have to see how the product performs in terms of heat, power and noise when cards start getting into the hands of a wide range of gamers.
PowerTune with Boost
When NVIDIA first unveiled GPU Boost technology with the GeForce GTX 680 graphics cards in March, I went through a couple of internal monologues, swinging back and forth between liking the addition and despising it. In the end, I decided that GPU Boost was a win for gamers and the consumer, and even though it made testing GPUs a bit more complicated, the increased clock speeds were going to help gaming enthusiasts.
At the time, AMD's stance was pretty firm on the idea that GPU Boost would create performance variability for customers buying the same graphics cards, even when running in the same or similar configurations. This variability was bad according to AMD, as it cheated some gamers out of performance and created the possibility for "golden samples" to be sent to press or to certain card vendors to swing performance expectations from reality. While I don't think that has occurred for the most part since the Kepler launch, today AMD is debuting the first card in its family with a similar boost clock technology.
AMD's PowerTune with Boost technology differs from NVIDIA's GPU Boost in a couple of important ways. First, true to its original premise, AMD can guarantee exactly how all Radeon HD 7970 3GB GHz Edition graphics cards will operate, and at what speeds, in any given environment – there should be no variability between the card that I get and the card that you can buy online. Using digital temperature estimation in conjunction with voltage control, the PowerTune implementation of boost is completely deterministic.
As the above diagram illustrates, the "new" part of PowerTune with the GHz Edition is the ability to vary the voltage of the GPU in real-time to address a wider range of qualified clock speeds. On the previous HD 7970s the voltage was locked to a single static value in its performance mode, meaning that it would not increase or decrease during load operations. As AMD stated to us in a conversation just prior to launch, "by having multiple voltages that can be invoked, we can be at a more optimal clock/voltage combination more of the time, and deliver higher average performance."
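AMD has not published the PowerTune algorithm itself, but the deterministic idea it describes can be sketched. In this illustrative model (the DPM states, the power formula, and the 250 W budget below are all assumptions, not AMD's actual values), the clock/voltage pair is chosen from *estimated* power – a pure function of workload activity – rather than measured temperature, so every card of the same SKU makes the same choice for the same workload:

```python
# Illustrative sketch only -- the states, power model, and budget are
# hypothetical, not AMD's real PowerTune parameters.

# Hypothetical DPM (dynamic power management) states: (clock MHz, voltage V)
DPM_STATES = [(850, 1.025), (1000, 1.125), (1050, 1.200)]
BOARD_POWER_LIMIT_W = 250    # assumed PowerTune board-power budget

def estimate_power(clock_mhz, voltage, activity):
    """Deterministic dynamic-power estimate: P ~ activity * f * V^2 (scaled).
    No sensor readings involved, so the result is identical on every card."""
    return activity * clock_mhz * voltage ** 2 * 0.17

def select_state(activity):
    """Pick the highest clock/voltage pair whose estimated power fits."""
    for clock, volt in reversed(DPM_STATES):
        if estimate_power(clock, volt, activity) <= BOARD_POWER_LIMIT_W:
            return clock, volt
    return DPM_STATES[0]  # fall back to the lowest state

print(select_state(0.6))   # light load: full 1050 MHz boost state fits
print(select_state(1.0))   # worst-case load: drops to the base state
```

The contrast with NVIDIA's GPU Boost is that Kepler feeds *measured* power and temperature into the equivalent decision, so chip-to-chip silicon variation produces different clocks on different cards.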
The problem I have with AMD's boost technology is that the company is obviously implementing it as a reaction to NVIDIA's technology. That isn't necessarily a bad thing, but the tech feels a little premature because of it. We were provided no tools prior to launch to actually monitor the exact clock speed of the GPU in real-time. The ability to monitor these very small changes in clock speed is paramount to our ability to verify the company's claims, and without it we will have questions about the validity of results. GPU-Z and other applications we usually use to monitor clock speeds (including AMD's driver) only report 1050 MHz as the clock speed – no real-time dynamic changes are being reported.
(As a side note, AMD has promised to showcase their internal tool to show real-time clock speed changes in our Live Review at http://pcper.com/live on Friday the 22nd, 11am PDT / 2pm EDT.)
So the new Radeon HD 7970 3GB GHz Edition graphics card offers an 8% clock speed increase (or 13.5% if you use the boost clock) along with a 9% memory speed increase. How will that, along with improvements in some key games via updated Catalyst drivers, affect how the HD 7970 competes with the GTX 680?
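Those headline percentages come straight from the spec-sheet clocks:

```python
# Clock deltas between the original HD 7970 and the GHz Edition.
base_old, base_new, boost_new = 925, 1000, 1050   # core MHz
mem_old, mem_new = 1375, 1500                     # memory MHz (5.5 vs 6 Gbps)

print(f"core:  +{(base_new / base_old - 1) * 100:.1f}%")   # +8.1%
print(f"boost: +{(boost_new / base_old - 1) * 100:.1f}%")  # +13.5%
print(f"mem:   +{(mem_new / mem_old - 1) * 100:.1f}%")     # +9.1%
```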