Subject: Graphics Cards | June 20, 2013 - 04:05 PM | Ryan Shrout
Tagged: radeon, nvidia, geforce, frame rating, fcat, crossfire, amd
Well, the date has been set. AMD publicly stated on its @AMDRadeon Twitter account that a new version of the prototype driver we originally previewed with the release of the Radeon HD 7990 in April will be released to the public on July 31st. And all this for a problem that many in the industry didn't think existed.
Big news for CrossFire! We plan to release our driver that delivers improved multi-GPU frame pacing on July 31. More info soon.
— AMD Radeon Graphics (@AMDRadeon) June 20, 2013
Since that April release AMD has been very quiet about its driver changes and has actually refused to send me updated prototypes over the spring. Either they have it figured out or they are worried they haven't - but it looks like we'll find out at the end of next month, and I feel pretty confident that the team will be able to address the issues we brought to light.
For those of you that might have missed the discussion, our series of Frame Rating stories will tell you all about the issues with frame pacing and stutter in regards to AMD's CrossFire multi-GPU technology.
- Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing
- Frame Rating: Visual Effects of Vsync on Gaming Animation
- Frame Rating: AMD Improves CrossFire with Prototype Driver
AMD gave the media a prototype driver in April to test with the Radeon HD 7990, a card that depends on CrossFire to work correctly, and the improvements were pretty drastic.
So what can we expect on July 31st? A driver that will give users the option to disable or enable the frame pacing technology they are developing - though I am still of the mindset that disabling is never advantageous. More to come in the next 30 days!
Subject: Graphics Cards | June 18, 2013 - 03:39 PM | Ryan Shrout
Tagged: radeon, nvidia, geforce, frostbite 3, ea, dice, amd
The original source article at IGN.com has been updated with some new information. Now they are saying the agreement between AMD and EA is "non-exclusive and gamers using other components will be supported."
The quote from an EA rep says as follows:
"DICE has a partnership with AMD specifically for Battlefield 4 on PC to showcase and optimize the game for AMD hardware," an EA spokesperson said. "This does not exclude DICE from working with other partners to ensure players have a great experience across a wide set of PCs for all their titles."
END UPDATE #3
This could be a huge deal for NVIDIA and AMD in the coming months - according to a story at IGN.com, AMD has entered into an agreement with EA that will allow them exclusive rights to optimization for all games based around the Frostbite 3 engine. That includes Battlefield 4, Mirror's Edge 2, Need for Speed Rivals and many more games due out this year and in 2014. Here is the quote that is getting my attention:
Starting with the release of Battlefield 4, all current and future titles using the Frostbite 3 engine — Need for Speed Rivals, Mirror's Edge 2, etc. — will ship optimized exclusively for AMD GPUs and CPUs. While Nvidia-based systems will be supported, the company won't be able to develop and distribute updated drivers until after each game is released.
Battlefield 4 will be exclusively optimized for AMD hardware.
This is huge news for AMD as the Frostbite 3 engine will be used for all EA published games going forward with the exception of sports titles. The three mentioned above are huge but this also includes Star Wars Battlefront, Dragon Age and even the next Mass Effect so I can't really emphasize enough how big of a win this could be for AMD's marketing and developer relations teams.
I am particularly interested in this line as well:
While Nvidia-based systems will be supported, the company won't be able to develop and distribute updated drivers until after each game is released.
The world of PC optimizations and partnerships has been around for a long time so this isn't a huge surprise for anyone that follows PC gaming. What is bothersome to me is that both EA and AMD are rumored to have agreed that NVIDIA won't get access to the game as it is being developed - something that is CRUCIAL for day-of driver releases and performance tweaks for GeForce card owners. In most cases, both AMD and NVIDIA developer relations teams get early access to game builds for PC titles in order to validate compatibility and to improve performance of these games for the public release. Without these builds, NVIDIA would be at a big disadvantage. This is exactly what happened with the recent Tomb Raider release.
AMD called me to reiterate their stance that competition does not automatically mean cutting out the other guy. In the Tomb Raider story linked above, Neil Robison, AMD's Senior Director of Consumer and Graphics Alliances, states quite plainly: "The thing that angers me the most is when I see a request to debilitate a game. I understand winning, I get that, and I understand aggressive companies, I get that. Why would you ever want to introduce a feature on purpose that would make a game not good for half the gaming audience?"
So what do we take away from that statement, made in a story published in March, and today's rumor? We have to take AMD at its word until we see solid evidence otherwise - or enough cases of this occurring that I feel like I am being duped - but AMD wants us all to know that it is playing the game the "right way." That stance just happens to run counter to this rumor.
NVIDIA had performance and compatibility issues with Tomb Raider upon release.
The irony in all of this is that AMD has been accusing NVIDIA of doing this exact thing for years - though without any public statements from developers, publishers or NVIDIA. When Batman: Arkham Asylum was launched, AMD basically said that NVIDIA had locked them out of supporting antialiasing. In 2008, Assassin's Creed dropped DX 10.1 support, supposedly at the request of NVIDIA, whose GeForce cards didn't support it at the time. There was also the claim that NVIDIA was limiting PhysX CPU support to fewer cores to help prop up GeForce sales. At the time, AMD PR spun all of this as the worst possible thing a company could do in the name of gamers, that it was bad for the industry, etc. But times change as opportunity changes.
The cold truth is that this is why AMD decided to take the chance that NVIDIA was allegedly unwilling to take and pursue the console design wins that are often noted as being "bad business." If settling for razor-thin margins on the consoles is the risk, the reward AMD is hoping for is exactly this: benefits in other markets thanks to better relationships with game developers.
Will the advantage be with AMD thanks to PS4 and Xbox One hardware?
At E3 I spoke in-depth with both NVIDIA and AMD executives about this debate and, as you might expect, both have very different opinions about what is going to transpire in the next 12-24 months. AMD views this advantage (being in the consoles) as the big bet that is going to pay off in the more profitable PC space. NVIDIA thinks that AMD still doesn't have what it takes to truly support developers in the long run and doesn't have the engineers to innovate on the technology side. In my view, having Radeon-based processors in the Xbox One and Playstation 4 (as well as the Wii U I guess) gives AMD a head start but won't win them the race for the hearts and minds of PC gamers. There is still a lot of work to be done for that.
Before this story broke I was planning on outlining another editorial on this subject and it looks like it just got promoted to a top priority. There appear to be a lot of proverbial shoes left to drop in this battle, but it definitely needs more research and discussion.
Remember the issues with Batman: Arkham Asylum? I do.
I asked both NVIDIA and AMD for feedback on this story but only AMD has replied thus far. Robert Hallock, PR manager for gaming and graphics in the Graphics Business Unit at AMD, sent me this:
It makes sense that game developers would focus on AMD hardware with AMD hardware being the backbone of the next console generation. At this time, though, our relationship with EA is exclusively focused on Battlefield 4 and its hardware optimizations for AMD CPUs, GPUs and APUs.
Not much there, but he is also not denying the original report from IGN. It might just be too early for a more official statement. I will update this story with information from NVIDIA if I hear anything else.
What do YOU think about this announcement though? Is this good news for AMD and bad news for NVIDIA? Is it good or bad for the gamer and, in particular, the PC gamer? Your input will help guide our upcoming continued talks with NVIDIA and AMD on the subject.
Just so we all have some clarification on this and on the potential for validity of the rumor, this is where I sourced the story from this afternoon:
END UPDATE #2
Subject: Graphics Cards | June 7, 2013 - 02:33 PM | Ryan Shrout
Tagged: amd, radeon, hd 7970 ghz edition, HD 7970, never settle
AMD just passed me a note that I found to be very interesting. In an obvious response to the release of the NVIDIA GeForce GTX 770 that offers the GK104 GPU (previously only in the GTX 680) for a lower price of $399, AMD wants you to know that at least ONE Radeon HD 7970 GHz Edition card is priced lower than the others.
The Sapphire Vapor-X HD 7970 GHz Edition is currently listed on Newegg.com for $419, a cool $30 less than the other HD 7970 GHz Edition cards. This is not a card-wide price drop to $419, though. AMD had this to say:
In late May I noted that we would be working with our partners to improve channel supply of the AMD Radeon™ HD 7970 GHz Edition to North American resellers like Newegg.com. Today I’m mailing to let you know that this process has begun to bear fruit, with the Sapphire Vapor-X HD 7970 GHz Edition now listing for the AMD SEP of $419 US. Of course, this GPU is also eligible for the Never Settle Reloaded AND Level Up programs!
Improving supply is an ongoing process, of course, but we’re pleased with the initial results of our efforts and hope you might pass word to your readers if you get a chance.
This "ongoing process" might mean that we'll see other partners' cards sell for this lower price, but it also might not. In AMD's defense, our testing shows that in single-GPU configurations the Radeon HD 7970 GHz Edition does very well against the GTX 770, especially at higher resolutions.
I did ask AMD what its other partners think about one of them getting unique treatment to offer this lower-priced unit, but I haven't received an answer yet. I'll update here when I do!
For today though, if you are looking for a Radeon HD 7970 GHz Edition that also comes with the AMD Never Settle game bundle (Crysis 3, Bioshock Infinite, Far Cry 3: Blood Dragon and Tomb Raider), it's hard to go wrong with that $419 option.
The Architectural Deep Dive
AMD officially unveiled their brand new Bobcat architecture to the world at CES 2011. This was a very important release for AMD in the low power market. Even though netbooks were a dying breed at that time, AMD experienced a healthy uptick in sales due to the combination of price, performance, and power consumption of the new Brazos platform. AMD was of the opinion that a single CPU design would not be able to span the power consumption spectrum of CPUs at the time, and so Bobcat was designed to fill the space from 1 watt to 25 watts. Bobcat was never able to get down to that 1 watt point, but the Z-60 was a 4.5 watt part with two cores and the full 80 Radeon cores.
The Bobcat architecture was produced on TSMC’s 40 nm process. AMD eschewed the upcoming 32 nm HKMG/SOI process that was being utilized for the upcoming Llano and Bulldozer parts. In hindsight, this was a good idea. Yields took a while to improve on GLOBALFOUNDRIES' new process, while the existing 40 nm product from TSMC was running at full speed. AMD was able to provide the market in fairly short order with good quantities of Bobcat based APUs. The product more than paid for itself, and while not exactly a runaway success that garnered many points of marketshare from Intel, it helped to provide AMD with some stability in the market. Furthermore, it provided a very good foundation for AMD when it comes to low power parts that are feature rich and offer competitive performance.
The original Brazos update did not happen; instead AMD introduced Brazos 2.0, a more process-improvement-oriented product that featured slightly higher speeds but remained in the same TDP range. The uptake of this product was limited, and it was obviously a minor refresh to buoy purchases of the aging product. Competition was coming from low power Ivy Bridge based chips, as well as AMD’s new Trinity products, which could reach TDPs of 17 watts. Brazos and Brazos 2.0 did find a home in low powered but full sized notebooks that were very inexpensive. Even heavily Intel-leaning manufacturers like Toshiba released Brazos based products in the sub-$500 market. The combination of good CPU performance and above average GPU performance made this a strong product in this particular market. It was so power efficient that only small batteries were typically needed, thereby further lowering the cost.
All things must pass, and Brazos is no exception. Intel has a slew of 22 nm parts that are encroaching on the sub-15 watt territory, ARM partners have quite a few products that are getting pretty decent in terms of overall performance, and the graphics on all of these parts are seeing some significant upgrades. The 40 nm based Bobcat products are no longer competitive with what the market has to offer. So at this time we are finally seeing the first Jaguar based products. Jaguar is not a revolutionary product, but it improves on nearly every aspect of performance and power usage as compared to Bobcat.
Subject: Editorial, General Tech, Graphics Cards, Processors | May 8, 2013 - 09:32 PM | Scott Michaud
Tagged: Volcanic Islands, radeon, ps4, amd
So the Southern Islands might not be entirely stable throughout 2013 as we originally reported; seismic activity being analyzed suggests the eruption of a new GPU micro-architecture as early as Q4. These Volcanic Islands, as they have been codenamed, should explode onto the scene opposing NVIDIA's GeForce GTX 700-series products.
It is times like these where GPGPU-based seismic computation becomes useful.
The rumor is based upon a source which leaked a fragment of a slide outlining the processor in block diagram form and specifications of its alleged flagship chip, "Hawaii". Of primary note, Volcanic Islands is rumored to be organized with both Serial Processing Modules (SPMs) and a Parallel Compute Module (PCM).
So apparently a discrete GPU can have serial processing units embedded on it now.
Heterogeneous Systems Architecture (HSA) is a set of initiatives to bridge the gap between massively parallel workloads and branching logic tasks. We usually make reference to this in terms of APUs and bringing parallel-optimized hardware to the CPU. In this case, we are discussing it in terms of bringing serial processing to the discrete GPU. According to the diagram, the chip would contain 8 processor modules, each with two processing cores and an FPU, for a total of 16 cores. There does not seem to be any definite indication of whether these cores would be based upon AMD's license to produce x86 processors or its other license to produce ARM processors. Unlike an APU, this is heavily skewed towards parallel computation rather than a relatively even balance between CPU, GPU, and chipset features.
Now of course, why would they do that? Graphics processors can do branching logic but it tends to sharply cut performance. With an architecture such as this, a programmer might be able to more efficiently switch between parallel and branching logic tasks without doing an expensive switch across the motherboard and PCIe bus between devices. Josh Walrath suggested a server containing these as essentially add-in card computers. For gamers, this might help out with workloads such as AI which is awkwardly split between branching logic and massively parallel visibility and path-finding tasks. Josh seems skeptical about this until HSA becomes further adopted, however.
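To put some rough numbers on that switching cost, here is a toy back-of-envelope model. Every latency figure below is an illustrative assumption of mine, not AMD data or a measurement; the point is only the order-of-magnitude gap between a hop across the PCIe bus and a hypothetical on-die handoff between the serial and parallel modules.

```python
# Toy cost model (illustrative assumptions, not measurements): why a
# workload that bounces between branchy logic and parallel compute is
# expensive when each bounce crosses the PCIe bus.

PCIE_ROUND_TRIP_US = 10.0  # assumed host<->device handoff latency
ON_DIE_HOP_US = 0.5        # assumed SPM<->PCM on-die handoff latency
TASK_US = 2.0              # assumed cost of one branchy sub-task

def total_time_us(n_switches: int, hop_cost_us: float) -> float:
    """Total time when the workload switches between parallel and
    branchy phases n_switches times, paying hop_cost_us per switch."""
    return n_switches * (TASK_US + hop_cost_us)

# An AI tick that alternates between branchy decisions and parallel
# path-finding 100 times per frame:
across_bus = total_time_us(100, PCIE_ROUND_TRIP_US)  # 1200.0 us
on_die = total_time_us(100, ON_DIE_HOP_US)           # 250.0 us
print(across_bus, on_die)
```

Even with these made-up numbers, the handoff cost dominates the actual work when switching is frequent, which is the scenario an on-die serial module would address.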
Still, there is a reason why they are implementing this now. I wonder, if the SPMs are based upon simple x86 cores, how the PS4 will influence PC gaming. Technically, a Volcanic Islands GPU would be an oversized PS4 on an add-in card. This could give AMD an edge, particularly in games ported to the PC from the Playstation.
This chip, Hawaii, is rumored to have the following specifications:
- 4096 stream processors
- 16 serial processor cores on 8 modules
- 4 geometry engines
- 256 TMUs
- 64 ROPs
- 512-bit GDDR5 memory interface, much like the PS4.
- 20 nm Gate-Last silicon fab process
- Unclear if TSMC or "Common Platform" (IBM/Samsung/GLOBALFOUNDRIES)
Softpedia is also reporting on this leak. Their addition claims that the GPU will be designed on a 20nm gate-last fabrication process. While gate-last is generally considered not worth the extra effort in production, Fully Depleted Silicon On Insulator (FD-SOI) is apparently "amazing" on gate-last at 28nm and smaller nodes. This could mean that AMD is eyeing that technology and making this design with the intent of switching to an FD-SOI process without the large redesign that an initially easier gate-first production would require.
Well that is a lot to process... so I will leave you with an open question for our viewers: what do you think AMD has planned with this architecture, and what do you like and/or dislike about what your speculation would mean?
Subject: Memory | May 8, 2013 - 12:01 AM | Josh Walrath
Tagged: radeon ramdisk, radeon, memory, amd, 4GB, 2133, 1.65v
AMD makes memory! Ok, they likely contract out memory. Then they brand it! Then they throw in some software to make RAMDisks out of all that memory that you are not using. Let us face it; AMD is not particularly doing anything new here with memory. It is very much a commodity market that is completely saturated with quality parts from multiple manufacturers.
So why is AMD doing it? Well, I guess part of it is simply brand recognition and potentially another source of income to help pad the bottom line. They will not sell these parts for a loss, and they will have buyers with the diehard AMD fans. Tim covered the previous release of AMD memory pretty well, and he looked at the performance results of the free RAMDisk software that AMD bundled with the DIMMs. It does exactly what it is supposed to, but of course it takes portions of memory away. When dealing with upwards of 16 GB of memory for a desktop computer, sacrificing half of that is really not that big a deal unless heavy duty image and video editing are required.
*Tomb Raider not included with Radeon Memory. Radeon RAMDisk instead!
Today AMD is announcing a new memory product and a new bundled version of the RAMDisk software. The top end SKU is now the AMD Radeon RG2133 DDR-3 modules. It comes in a package of up to 4 x 4GB DIMMS and carries a CAS latency of 10 with the voltage at a reasonable 1.65v. These modules are programmed with both the Intel based XMP and the AMD based AMP (MP stands for Memory Profiles… if that wasn’t entirely obvious). The modules themselves are reasonable in terms of size (they will fit in any board, even with larger heatsinks on the CPU). AMD claims that they are all high quality parts, which again is not entirely surprising since I do not know of anyone who advertises that their DIMMS feature only the most mediocre memory modules available.
Faster memory is faster, water is wet, and Ken still needs a girlfriend.
AMD goes on to claim that faster memory does improve overall system performance. Furthermore AMD has revealed that UV light is in fact a cancer causing agent, Cocoa Puffs will turn any milk brown, and passing gas in church will rarely be commented upon (unless it is truly rank or you start calling yourself “Legion”). Many graphs were presented that essentially showed an overclocked APU with this memory will outperform a non-overclocked APU with DDR-3 1600 units. Truly eye opening, to say the least.
How much RAMDisk can any one man take? AMD wants to know!
The one big piece of the pie that we have yet to talk about is the enhanced version of Radeon RAMDisk (is Farva naming these things?). This particular version can carve out up to 64 GB of memory for a RAMDisk! I can tell you this now, me and my 8 GB of installed memory will get a LOT of mileage out of this one! I can only imagine the product meeting. “Hey, I’ve got a great idea! We can give them up to 64 GB of RAMDisk!” While another person replies, “How do you propose getting people above 64 GB, much less 32 GB of memory on a consumer level product…?” After much hand wringing and mumbling someone comes up with, “I know! They can span it across two motherboards! That way they have to buy an extra motherboard AND a CPU! Think of our attach rate!” And there was much rejoicing.
So yes, more memory that goes faster is better. Radeon RAMDisk is not just a comic superhero, it can improve overall system performance. Combine the two and we have AMD Radeon Memory RG2133 with 64 GB of RAMDisk. Considering that the top SKU will feature 4 x 4GB DIMMS, a user only needs to buy four kits and four motherboards and processors to get a 64GB RAMDisk. Better throw in another CPU and motherboard so a user can at least have 16GB of memory available as, you know, memory.
Update and Clarification
Perhaps my tone was a bit too sarcastic, but I just am not seeing the value here. Apparently (and I was not given this info beforehand) the 4 x 4 GB kits with the 64 GB RAMDisk will retail at $155. Taking a quick look at Newegg, I see that a user can buy quite a few different 2 x 8 GB 2133 kits anywhere from $139 to $145 with similar or better latencies/voltages. Around $155, users will get better latencies and voltages down to 1.5v. For 4 x 4GB kits we again see prices start at the $139 mark, but there are a significant number of other kits with better voltages and latencies from $144 through $155.
Users can also get the free version of the Radeon RAMDisk that will utilize up to 4GB of space. There are multiple other software kits for not a whole lot of money (less than $10) that will provide up to 16 GB of RAMDisk. I just find the whole kit to be comparable to what is currently out there. Offering a 64 GB RAMDisk for use with 16 GB of total system memory just seems really silly. The only way that could possibly be interesting would be if you could allocate 8 GB of that onto RAM and the other 56 GB onto a fast SSD. I do not believe that to be the case with this software, but I would love to be proved wrong.
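For readers curious what a RAM disk actually is under the hood: the Radeon RAMDisk software itself is Windows-only, but the same concept exists on Linux as a tmpfs mount. This is a generic sketch of the idea, not AMD's tool; the mount point and size are arbitrary examples.

```shell
# Generic Linux RAM disk via tmpfs (illustrative only; this is NOT the
# Radeon RAMDisk software). Mount point and size are example values.
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=4G tmpfs /mnt/ramdisk

# Files placed here live entirely in RAM and vanish at unmount/reboot:
cp scratch.dat /mnt/ramdisk/

sudo umount /mnt/ramdisk
```

The trade-off is exactly the one discussed above: whatever you hand to the RAM disk is carved out of the memory pool your applications would otherwise use.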
Our 4K Testing Methods
You may have recently seen a story and video on PC Perspective about a new TV that made its way into the office. Of particular interest is the fact that the SEIKI SE50UY04 50-in TV is a 4K television; it has a native resolution of 3840x2160. For those that are unfamiliar with the new upcoming TV and display standards, 3840x2160 is exactly four times the resolution of current 1080p TVs and displays. Oh, and this TV only cost us $1300.
In that short preview we validated that both NVIDIA and AMD current generation graphics cards support output to this TV at 3840x2160 using an HDMI cable. You might be surprised to find that HDMI 1.4 can support 4K resolutions, but only at 30 Hz - half the 60 Hz refresh rate of most TVs and monitors (native 60 Hz 4K TVs most likely won't be available until 2014). That doesn't mean we are limited to 30 FPS of performance, though - far from it. As you'll see in our testing on the coming pages, we were able to push out much higher frame rates using some very high end graphics solutions.
I should point out that I am not a TV reviewer and I don't claim to be one, so I'll leave the technical merits of the display itself to others. Instead I will only report on my experiences with it while using Windows and playing games - it's pretty freaking awesome. The only downside I have found in my time with the TV as a gaming monitor thus far is the 30 Hz refresh rate in situations where Vsync is disabled. Because you are seeing fewer screen refreshes over the same amount of time than you would with a 60 Hz panel, all else being equal, you are getting twice as many "frames" of the game pushed to the monitor each refresh cycle. This means the horizontal tearing associated with disabling Vsync will likely be more apparent than it would otherwise be.
I would likely recommend enabling Vsync for a tear-free experience on this TV once you are happy with performance levels, but obviously for our testing we wanted to keep it off to gauge performance of these graphics cards.
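The numbers behind that observation are easy to sanity-check. The resolutions are from the article; the 120 FPS figure is just an illustrative assumption for a high-end multi-GPU setup, not a measured result.

```python
# Quick sanity math on 4K at 30 Hz. Resolutions are from the article;
# the 120 FPS render rate is an illustrative assumption.

pixels_4k = 3840 * 2160
pixels_1080p = 1920 * 1080
pixel_ratio = pixels_4k / pixels_1080p  # exactly four 1080p screens

fps = 120.0  # assumed render rate with Vsync off
frames_per_refresh_60hz = fps / 60.0  # 2.0 frames shown per refresh
frames_per_refresh_30hz = fps / 30.0  # 4.0 frames shown per refresh

# Each 30 Hz refresh contains slices of twice as many distinct frames,
# so tearing is more visible than on a 60 Hz panel at the same FPS.
print(pixel_ratio, frames_per_refresh_60hz, frames_per_refresh_30hz)
```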
Subject: General Tech | April 24, 2013 - 03:51 PM | Jeremy Hellstrom
Tagged: ARES II, amd, radeon, hd 7990, malta, tahiti
We've seen tests of dual 7970s in CrossFire simulating a 7990, and ASUS' ARES II was the closest thing we had until today's release of the reference HD 7990. There are many reviews to choose from when looking at this new flagship card, such as [H]ard|OCP's pure performance take, which did not come out well for AMD's new card. If you are more interested in our new Frame Rating process then there are two reviews to read: one that deals with the 7990 on the publicly available driver, and, perhaps more interesting, one covering a prototype driver provided to Ryan that is intended to fix CrossFire stuttering on single displays, but not for Eyefinity.
"Today marks the launch of AMD's Radeon HD 7990. The Radeon HD 7990 is a dual-GPU video card that has its two GPUs down on a single PCB that uses CrossFire to operate the two Radeon HD 7970 GPUs. We will test this video card in the latest games, comparing it to GeForce GTX 680 SLI and Radeon HD 7970 GHz Edition CrossFire. "
Here are some more Graphics Card articles from around the web:
- AMD Radeon HD 7990 Review: 7990 Gets Official @ AnandTech
- AMD Radeon HD 7990 6GB Dual GPU @ Tweaktown
- AMD Radeon HD 7990 @ Hardware.info
- AMD Radeon HD 7990 6 GB @ techPowerUp
- AMD Radeon HD 7990 6GB Malta Video Card Review @ Legit Reviews
- AMD HD 7990 Review; Malta Arrives @ Hardware Canucks
- AMD Radeon HD 7990 @ Legion Hardware
- AMD Radeon HD 7990 @ TechSpot
- AMD HD7990 Malta @ Kitguru
- XFX R7790 Black Edition OC 2 GB @ techPowerUp
- Diamond HD 7790 1GB @ LanOC Reviews
- Sapphire Radeon HD 7790 2GB OC Edition Video Card Review @ Legit Reviews
- Gigabyte HD 7790 1GB OC @ LanOC Reviews
- Sapphire Radeon HD 7790 2GB OC Review @ OCC
- XFX R7790 Black Edition Overclocked Review @ OCC
- Nouveau vs. NVIDIA Linux Comparison Shows Shortcomings @ Phoronix
- MSI GeForce GTX 650 Ti 2GB Boost Twin Frozr @ Tweaktown
- EVGA GeForce GTX TITAN 6GB SuperClocked @ Tweaktown
- GeForce GTX 650 Ti Boost & SLI Performance @ Techspot
- EVGA GeForce GTX TITAN 6GB SuperClocked Video Cards in SLI @ Tweaktown
- NVIDIA GeForce GTX TITAN 3-Way SLI @ [H]ard|OCP
The card we have been expecting
Despite all the issues brought up by our new graphics performance testing methodology, Frame Rating, there is little debate in the industry that AMD is making noise once again in the graphics field - from the elaborate marketing and game bundles attached to Radeon HD 7000 series cards over the last year to the hiring of Roy Taylor, a VP of sales who has become the company's most vocal supporter.
Along with the marketing goes plenty of technology and important design wins. With its silicon dominating the console side (Wii U, Playstation 4 and the next Xbox), AMD is making sure that developer familiarity with its GPU architecture there pays dividends on the PC side as well. Developers will be focusing on AMD's graphics hardware for 5-10 years over this console generation, and that could result in improved performance and feature support for Radeon graphics for PC gamers.
Today's release of the Radeon HD 7990 6GB Malta dual-GPU graphics card shows a renewed focus on high-end graphics markets since the release of the Radeon HD 7970 in January of 2012. And while you may have seen something for sale previously with the HD 7990 name attached, those were custom designs built by partners, not by AMD.
Both ASUS and PowerColor currently have high-end dual-Tahiti cards for sale. The PowerColor HD 7990 Devil 13 used the brand directly but ASUS' ARES II kept away from the name and focused on its own high-end card brands instead.
The "real" Radeon HD 7990 card was first teased at GDC in March and takes a much less dramatic approach to its design without being less impressive technically. The card includes a pair of Tahiti, HD 7970-class GPUs on a single PCB with 6GB of total memory. The raw specifications are listed here:
Considering there are two HD 7970 GPUs on the HD 7990, the doubling of the major specs shouldn't be surprising, though it is a little misleading. There are 8.6 billion transistors, yes, but 4.3 billion on each GPU. There are 4096 stream processors, but only 2048 on each GPU, requiring software GPU scaling to increase performance. The same goes for texture fill rate, compute performance, memory bandwidth, etc. The same could be said of any dual-GPU graphics card, though.
A very early look at the future of Catalyst
Today is a very interesting day for AMD. It marks both the release of the reference design of the Radeon HD 7990 graphics card, a dual-GPU Tahiti behemoth, and the first sample of a change to the CrossFire technology that will improve animation performance across the board. Both stories are incredibly interesting and as it turns out both feed off of each other in a very important way: the HD 7990 depends on CrossFire and CrossFire depends on this driver.
If you already read our review (or any review that is using the FCAT / frame capture system) of the Radeon HD 7990, you likely came away somewhat unimpressed. The combination of two AMD Tahiti GPUs on a single PCB with 6GB of frame buffer SHOULD have made for an incredibly exciting release and would likely have produced the single fastest graphics card on the planet. That didn't happen, though, and our results clearly state why: AMD CrossFire technology has some serious issues with animation smoothness, runt frames and giving users what they are promised.
Our first results using our Frame Rating performance analysis method were shown during the release of the NVIDIA GeForce GTX Titan card in February. Since then we have been in constant talks with the folks at AMD to figure out what was wrong, how they could fix it, and what it would mean to gamers to implement frame metering technology. We followed that story up with several more that showed the current state of performance on the GPU market using Frame Rating, and they painted CrossFire in a very negative light. Even though some outlets accused us of being biased, or claimed that AMD wasn't doing anything incorrectly, we stuck by our results and, as it turns out, so does AMD.
Today's preview of a very early prototype driver shows that the company is serious about fixing the problems we discovered.
If you are just catching up on the story, you really need some background information. The best place to start is our article published in late March that goes into detail about how game engines work, how our completely new testing methods work and the problems with AMD CrossFire technology very specifically. From that piece:
It will become painfully apparent as we dive through the benchmark results on the following pages, but I feel that addressing the issues that CrossFire and Eyefinity are creating up front will make the results easier to understand. As we showed you for the first time in Frame Rating Part 3, AMD CrossFire configurations have a tendency to produce a lot of runt frames, and in many cases in a nearly perfectly alternating pattern. Not only does this mean that frame time variance will be high, but it also tells me that the performance gained by adding a second GPU is completely useless in this case. Obviously the story would become then, “In Battlefield 3, does it even make sense to use a CrossFire configuration?” My answer based on the below graph would be no.
An example of a runt frame in a CrossFire configuration
NVIDIA's solution for getting around this potential problem with SLI was to integrate frame metering, a technology that balances frame presentation to the user and to the game engine in a way that enabled smoother, more consistent frame times and thus smoother animations on the screen. For GeForce cards, frame metering began as a software solution but was actually integrated as a hardware function on the Fermi design, taking some load off of the driver.
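To make the runt-frame idea concrete, here is a minimal sketch of the kind of classification a capture-based tool performs on recorded output. The 21-scanline cutoff and the sample capture data are illustrative assumptions of mine, not the actual FCAT criteria or real capture numbers.

```python
# Minimal sketch of runt-frame classification in the spirit of
# capture-based analysis. The threshold and sample data are assumed
# for illustration, not taken from FCAT or a real capture.

RUNT_THRESHOLD = 21  # assumed: frames under ~21 scanlines count as runts

def classify_frames(scanlines_per_frame):
    """Split captured frames into full frames and runts based on how
    many scanlines each frame occupied on screen."""
    full = [s for s in scanlines_per_frame if s >= RUNT_THRESHOLD]
    runts = [s for s in scanlines_per_frame if s < RUNT_THRESHOLD]
    return full, runts

# An alternating pattern like the CrossFire captures described above:
captured = [520, 8, 545, 6, 530, 10, 515, 7]
full, runts = classify_frames(captured)
print(len(full), len(runts))  # 4 4
# Half the "frames" contribute almost nothing visible, so a raw FPS
# counter would overstate the animation smoothness the user sees.
```

Frame metering attacks this from the other side: instead of discarding runts after the fact, it delays frame presentation slightly so frames arrive at more even intervals in the first place.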