A very early look at the future of Catalyst
Today is a very interesting day for AMD. It marks both the release of the reference design of the Radeon HD 7990 graphics card, a dual-GPU Tahiti behemoth, and the first sample of a change to CrossFire technology that will improve animation performance across the board. Both stories are incredibly interesting, and as it turns out they feed off each other in a very important way: the HD 7990 depends on CrossFire, and CrossFire depends on this driver.
If you have already read our review of the Radeon HD 7990 (or any review that uses the FCAT / frame capture system), you likely came away somewhat unimpressed. The combination of two AMD Tahiti GPUs on a single PCB with 6GB of frame buffer SHOULD have made for an incredibly exciting release and would likely have produced the single fastest graphics card on the planet. That didn't happen though, and our results clearly show why: AMD CrossFire technology has some serious issues with animation smoothness, runt frames and giving users what they are promised.
Our first results using our Frame Rating performance analysis method were shown during the release of the NVIDIA GeForce GTX Titan card in February. Since then we have been in constant talks with the folks at AMD to figure out what was wrong, how they could fix it, and what it would mean to gamers to implement frame metering technology. We followed that story up with several more that showed the current state of performance on the GPU market using Frame Rating, painting CrossFire in a very negative light. Even though some outlets accused us of being biased, or claimed that AMD wasn't doing anything incorrectly, we stuck by our results, and as it turns out, so does AMD.
Today's preview of a very early prototype driver shows that the company is serious about fixing the problems we discovered.
If you are just catching up on the story, you really need some background information. The best place to start is our article published in late March that goes into detail about how game engines work, how our completely new testing methods work and the problems with AMD CrossFire technology very specifically. From that piece:
It will become painfully apparent as we dive through the benchmark results on the following pages, but I feel that addressing the issues that CrossFire and Eyefinity are creating up front will make the results easier to understand. As we showed you for the first time in Frame Rating Part 3, AMD CrossFire configurations have a tendency to produce a lot of runt frames, and in many cases in a nearly perfect alternating pattern. Not only does this mean that frame time variance will be high, but it also tells me that the performance gained by adding a second GPU is completely useless in this case. Obviously the question then becomes, “In Battlefield 3, does it even make sense to use a CrossFire configuration?” My answer based on the below graph would be no.
An example of a runt frame in a CrossFire configuration
NVIDIA's solution for getting around this potential problem with SLI was to integrate frame metering, a technology that balances frame presentation to the user and to the game engine in a way that enabled smoother, more consistent frame times and thus smoother animations on the screen. For GeForce cards, frame metering began as a software solution but was actually integrated as a hardware function on the Fermi design, taking some load off of the driver.
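To make the idea of frame metering concrete, here is a toy model of the concept: instead of presenting each frame the instant the GPU finishes it, the driver delays some presents so that the gap between frames tracks a moving average of recent gaps. This is a sketch of the general technique only; the function name, the averaging window, and the pacing heuristic are illustrative assumptions, not NVIDIA's actual algorithm.

```python
import statistics

def meter_frames(completion_times_ms, window=5):
    """Toy frame metering: delay presents so frame-to-frame spacing
    tracks a moving average of recent gaps (illustrative, not NVIDIA's
    real implementation)."""
    presented = []
    last_present = 0.0
    recent_gaps = []
    for t in completion_times_ms:
        # Target spacing: average of the last few presented gaps.
        target = statistics.mean(recent_gaps) if recent_gaps else 0.0
        # Never present a frame before the GPU has finished rendering it.
        present = max(t, last_present + target)
        presented.append(present)
        if last_present:
            recent_gaps.append(present - last_present)
            recent_gaps = recent_gaps[-window:]
        last_present = present
    return presented
```

Feeding this the classic AFR micro-stutter pattern (frames completing in tight pairs, e.g. gaps of 2 ms then 18 ms) yields presented gaps with far lower variance, which is exactly the smoothing effect metering aims for, at the cost of a small amount of added latency on some frames.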
Summary Thus Far
Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons. Here is the schedule:
- 3/27: Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing
- 3/27: Radeon HD 7970 GHz Edition vs GeForce GTX 680 (Single and Dual GPU)
- 3/30: AMD Radeon HD 7990 vs GeForce GTX 690 vs GeForce GTX Titan
- 4/2: Radeon HD 7950 vs GeForce GTX 660 Ti (Single and Dual GPU)
- 4/5: Radeon HD 7870 GHz Edition vs GeForce GTX 660 (Single and Dual GPU)
Welcome to the second in our initial series of articles focusing on Frame Rating, our new graphics and GPU performance methodology that drastically changes how the community looks at single and multi-GPU performance. In this article we are focusing on a different set of graphics cards, the highest performing single card options on the market: the GeForce GTX 690 4GB dual-GK104 card, the GeForce GTX Titan 6GB GK110-based monster, as well as the Radeon HD 7990, though in an emulated form. The HD 7990 was only recently officially announced by AMD at this year's Game Developers Conference, but the specifications of that hardware are going to closely match what we have here on the testbed today - a pair of retail Radeon HD 7970s in CrossFire.
Will the GTX Titan look as good in Frame Rating as it did upon its release?
If you are just joining this article series today, you have missed a lot! If nothing else you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with. In short, we are moving away from using FRAPS for average frame rates or even frame times, and instead are using a secondary hardware capture system to record all the frames of our game play as they would be displayed to the gamer, then doing post-process analysis on that recorded file to measure real world performance.
Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine), its results can vary dramatically from what is presented to the end user on their display. Frame Rating solves that problem by recording video through a dual-link DVI capture card that emulates a monitor to the testing system; by applying a unique overlay color to each frame the game produces, we can gather a new kind of information that tells a unique story.
The capture card that makes all of this work possible.
I don't want to spend too much time on this part of the story here as I already wrote a solid 16,000 words on the topic in our first article and I think you'll really find the results fascinating. So, please check out my first article on the topic if you have any questions before diving into these results today!
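As a rough illustration of what the post-capture analysis does, the sketch below takes the overlay color read from the left edge of each scanline in one captured display refresh, groups contiguous scanlines by color, and flags frames that occupy only a sliver of the screen as runts. The function name and the 20% cutoff are assumptions for illustration; they are not the exact thresholds the FCAT tools use.

```python
def classify_frames(scanline_colors, total_height, runt_fraction=0.2):
    """Group contiguous scanlines of one captured refresh by overlay
    color and flag runts. Illustrative only; the runt threshold here is
    an assumption, not FCAT's exact cutoff."""
    segments = []
    for color in scanline_colors:
        if segments and segments[-1][0] == color:
            # Same overlay color as the previous scanline: same game frame.
            segments[-1][1] += 1
        else:
            # Color changed: a new game frame began on this scanline.
            segments.append([color, 1])
    # A frame covering less than runt_fraction of the screen is a runt.
    return [(color, rows, rows / total_height < runt_fraction)
            for color, rows in segments]
```

Run against a 1080-line capture where one frame only managed 20 scanlines before the next frame took over, the middle frame is flagged as a runt: it was technically "delivered," which is why FRAPS counts it, but it contributes almost nothing to the animation the gamer actually sees.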
| Test System Setup | |
| --- | --- |
| CPU | Intel Core i7-3960X Sandy Bridge-E |
| Motherboard | ASUS P9X79 Deluxe |
| Memory | Corsair Dominator DDR3-1600 16GB |
| Hard Drive | OCZ Agility 4 256GB SSD |
| Graphics Cards | NVIDIA GeForce GTX TITAN 6GB<br>NVIDIA GeForce GTX 690 4GB<br>AMD Radeon HD 7970 CrossFire 3GB |
| Drivers | AMD: 13.2 beta 7<br>NVIDIA: 314.07 beta (GTX 690)<br>NVIDIA: 314.09 beta (GTX TITAN) |
| Power Supply | Corsair AX1200i |
| Operating System | Windows 8 Pro x64 |
On to the results!
Subject: Graphics Cards | November 23, 2012 - 06:43 PM | Jeremy Hellstrom
Tagged: amd, radeon, southern islands, hd 7990
PowerColor ignored the claims that there would be no dual-GPU HD 7990 and created the DEVIL13, with two Southern Islands GPUs on a single PCB. Both GPUs run at the standard HD 7970 speed of 925MHz, with a button that overclocks them to 1000MHz and ups the voltage provided to the cores as well; the 6GB of RAM runs at the stock 5.5GHz effective. Seeing three 8-pin PCIe power connectors is impressive, as is the three-slot card itself. [H]ard|OCP overclocked the card to a stable 1125MHz on the GPUs and 6.3GHz on the memory, which put its performance noticeably above that of the SLI'd GTX 680s they compared this card to. The question remains: if you can get the exact same performance from two overclocked PowerColor HD 7970s for $860, then why spend $1000 on the hard-to-find DEVIL13?
"PowerColor has beaten AMD to the punch with its own creation of a dual-GPU Radeon HD 7970 CrossFireX solution in a single video card package. We evaluate this awe inspiring video card and of course overclock it to its highest potential. We put it up against the best GTX 680 SLI solution also overclocked, all with the latest drivers."
Here are some more Graphics Card articles from around the web:
- HIS Radeon HD 7970 X Turbo 3 GB @ techPowerUp
- ASUS Matrix HD 7970 Platinum Video Card Review @ Legit Reviews
- AMD's New Catalyst Linux Driver Isn't Too Good @ Phoronix
- Prolimatech MK-26 @ XSReviews
- Desktop Graphics Card Comparison Guide @ TechARP
- Workstation Graphics Card Comparison Guide @ TechARP
- ASUS GeForce GTX 660 DirectCU II Top OC Edition Review @ Hi Tech Legion
Subject: Graphics Cards | August 23, 2012 - 03:17 PM | Tim Verry
Tagged: radeon hd 7990, hd 7990, graphics card, dual gpu, amd
Today, more rumors emerged on the ever elusive dual-GPU AMD graphics card. Reportedly, graphics card vendor PowerColor will be one of the Add In Board (AIB) partners producing the Radeon HD 7990. Previous rumors suggested that the HD 7990 would be comprised of two Radeon HD 7970 GPUs and it would be available in late August 2012. While there is no confirmation on the release date, the PowerColor 7990 "Devil 13" graphics card is using two 7970 GPUs in CrossFire on a single PCB.
Back in July, some details emerged on the 7990 that the PowerColor card rumors do not seem to disprove. Some highlights from the rumor mill so far include:
- The 7990 will use two 7970 Tahiti XT GPUs connected by a PLX chip.
- 6GB of GDDR5 memory (3GB per GPU)
- 4,096 stream processors, 64 ROP units
- (at least) a dual slot design with three fan cooler
- Four mini DisplayPorts and two Dual Link DVI video outputs
- Four 6-pin PCI-E power connectors
The earlier post also mentioned that the default clock speed would be 850 MHz, but that does not seem to be the case with the PowerColor model. There may still be Radeon HD 7990 cards that come clocked at that speed, however.
As for the PowerColor model specifically, the new rumors suggest that it will be part of a limited run with a total of 500 cards. Coming in a red and black design, the three slot graphics card will use two 7970 GPUs clocked at 925 MHz in CrossFire. While there is no shot of the other side of the board to see how many PCI-E connectors it has, it will reportedly draw as much as 400 Watts. Using a BIOS switch, you will be able to choose between default and factory overclocked clockspeeds for both the GPU and GDDR5 memory.
Videocardz managed to unearth a photo of the elusive dual GPU AMD card.
When in its default mode, the card will run the GPU at 925 MHz and the memory at 5500 MHz (effective), which is the same as the Radeon HD 7970 single GPU graphics card. After flipping the BIOS switch, the card will use overclocked speeds of 1000 MHz for the GPU and 5500 MHz for the memory (so the GPU is the only part getting overclocked, according to the rumors).
According to Videocardz, the PowerColor 7990 has been refined somewhat compared to a showing at Computex earlier this year. From the photo comparison, it looks as though the company has swapped the red PCI back plate for a standard silver version. Also, the three fans are slightly different models. It appears as though the card will provide two DVI outputs as well as a full-size HDMI and two mini DisplayPort outputs. The site claims that AMD will not be releasing any reference version and has given its partners free rein to engineer and design custom versions (perhaps we’ll see a massive 12GB version heh).
While there is no word on when this card will be released, according to sources speaking with Hardware Canucks, the PowerColor 7990 “Devil 13” will cost between $899 and $999 in the US. While not the card that many were likely hoping for (because of the price), it may well be the best that users hoping for a dual Graphics Core Next card will be able to get, assuming you can get your hands on one of the 500 available cards. NVIDIA has had its own dual-GPU GTX 690 on the market for some time now, and it is looking more and more likely that AMD is not going to have an answer any time soon in any big way (outside of limited edition runs from partners that design their own custom versions), and that’s unfortunate.
I speculated that users would be better off with two single Radeon 7970s in CrossFire, and I still believe that is likely the best option right now, especially if you opt for the 7950 with PowerTune boost (which we recently reviewed) or the 7970 GHz Edition cards with boost, as it is looking like the 7990 will not have that functionality.
What do you think though, are you still holding out for the ever-elusive 7990?
You can find more coverage of the AMD Radeon HD 7990 by following the 7990 tag!
Subject: Graphics Cards | April 29, 2012 - 03:55 AM | Ryan Shrout
Tagged: nvidia, kepler, jen-hsun huang, hd 7990, GTX 690, gtx 680, geforce, 7990
During a keynote presentation at GeForce LAN 2012, being held in Shanghai, NVIDIA's CEO Jen-Hsun Huang unveiled what many of us have been theorizing would be coming soon: the dual-GPU variant of the Kepler architecture, the GeForce GTX 690 graphics card.
Though reviews won't be published just yet, Huang revealed pretty much all of the information we need to figure it out. With the full specifications listed, as well as details about the stunning new design of the card and cooler, the GTX 690 is without a doubt going to be the fastest graphics card on the market when it goes on sale next month.
The GeForce GTX 690 4GB card is based on a pair of GK104 chips, each sporting 1536 CUDA cores, basically identical to the ones used in the GeForce GTX 680 2GB cards released in March. The base clock speed of these parts is slightly lower at 915 MHz but the "typical" Boost clock is set as high as 1019 MHz, pushing it pretty close to the performance of the single GPU solutions. With a total of 3072 processing cores, the GTX 690 will have insane amounts of compute horsepower.
Each GPU will have access to 2GB of independent frame buffer still running at 6 Gbps, for a grand total of 4GB on the card.
Sitting between the two GPUs will be a PCI Express 3.0 capable bridge chip from PLX supporting full x16 lanes to each GPU and a full x16 back to the host system.
In terms of power requirements, the GTX 690 will use a pair of 8-pin connectors and will have a TDP of 300 watts - actually not that high considering the TDP of the GTX 680 is 195 watts on its own. It is obvious that NVIDIA is going to be pulling the very best chips for this card, those that can run at clock speeds over 1 GHz with minimal leakage.