Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing

Summary of Data and Early Thoughts


Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:


Without question we have inundated you with about as much information as any one person can take in with a single story. This is why we are breaking up our results over a few days of releases. There are a lot of answers as well as a lot of questions that resulted from this story, and I am going to try to address as many of them as possible in our summary, but I encourage our readers to use the comments below and this thread in our forums to keep the discussion going.  I’ll be following both over the next week (with GDC getting in the way initially) and look forward to getting some more input.


Single GPU Configurations – Performance as Expected

Today’s results focus on the Radeon HD 7970 GHz Edition and the NVIDIA GeForce GTX 680 as well as their SLI/CrossFire options, but let’s start with a quick talk about the results we see with the single-card, single-GPU configurations.  Frame Rating still tells an interesting and unique story compared to FRAPS, and thanks to some of our data analysis (Min FPS percentiles, International Stutter Units), the HD 7970 and GTX 680 compare differently than they might otherwise. 

We definitely can’t say the same for the multi-GPU results, but when using only a single GPU both AMD and NVIDIA platforms show consistent results on a run-to-run basis as well as when we compare Frame Rating to the traditional FRAPS average frame rates and frame times.  When we showed you the FRAPS graph followed by the Observed FPS graph, you should have seen that both the single GTX 680 and the single HD 7970 are basically the same on both. 

Frame time graphs will differ because FRAPS and our capture solution measure frame times at different points in the graphics pipeline, but generally both versions tell a similar story.  If there is hitching or stutter in the FRAPS time stamps, then our at-the-display data will show the same thing, though perhaps at different specific locations.  Patterns are the key thing to look for, as very few gamers really play a game for just 60 seconds at a time, let alone the same 60 seconds over and over.
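Whichever point in the pipeline the timestamps come from, the frame-time and average-FPS arithmetic is the same.  As a rough sketch (our own illustration, not the actual FCAT tooling):

```python
def frame_times_ms(timestamps_ms):
    """Frame-to-frame intervals (ms) from a list of per-frame timestamps.

    FRAPS records these timestamps at Present(); a capture card records
    them at the display -- the arithmetic is identical either way.
    """
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def average_fps(timestamps_ms):
    """Average FPS over the run, from the first and last timestamps."""
    elapsed_ms = timestamps_ms[-1] - timestamps_ms[0]
    return 1000.0 * (len(timestamps_ms) - 1) / elapsed_ms
```

The averages hide the interesting part: two runs with identical `average_fps` can have wildly different interval lists, which is exactly what the frame time graphs expose.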


The overall picture comparing the two cards indicates that the AMD Radeon HD 7970 GHz Edition is a faster card for gaming at 1920x1080, 2560x1440 and 5760x1080 triple-monitor resolutions.  In Battlefield 3 the performance gap between the HD 7970 and GTX 680 was small at 19x10 and 25x14 but expanded to a larger margin at 57x10 (19%).  AMD’s HD 7970 also shows less frame-to-frame variance in BF3 than the GTX 680.  This same pattern is seen in Crysis 3 as well, though at 5760x1080 we are only getting frame rates of 13 and 16 on average, giving the HD 7970 a 23% advantage. 

DiRT 3 performed very well on both cards even at the 5760x1080 resolution, though AMD’s HD 7970 maintained a small advantage.  Far Cry 3 was much more varied, with the GTX 680 taking the lead at 1920x1080 (20%), but at 2560x1440 and 5760x1080 the cards change places, giving the HD 7970 the lead.  Skyrim was another game that saw small performance leads for AMD at higher resolutions, though I did find less frame time variance on the GTX 680 system, which provided a better overall experience for a game that can run on most discrete GPUs on the market today.

Finally, in one of the newest games in our test suite, Sleeping Dogs, the AMD Radeon HD 7970 holds a sizeable advantage across all three tested resolutions.  The margins are 34% at 1920x1080, 37% at 2560x1440 and 23% when using triple displays. 

While some people might have assumed that this new testing methodology would paint a prettier picture of NVIDIA’s current GPU lineup across the board (due to its involvement in some tools), with single card configurations nothing much is changing in how we view these comparisons.  The Radeon HD 7970 GHz Edition and its 3GB frame buffer is still a faster graphics card than a stock GeForce GTX 680 2GB GPU.  In my testing there were only a couple of instances in which the experience on the GTX 680 was faster or smoother than the HD 7970 at 1920x1080, 2560x1440 or even 5760x1080.


AMD CrossFire Performance - A Bridge over Troubled Water?

Where AMD has definite issues is with HD 7970s in CrossFire, and our Frame Rating testing is bringing that to light in startling fashion.  In half of our tested games, the pair of Radeon HD 7970s in CrossFire showed no appreciable measured or observed increase in performance compared to a single HD 7970.  I cannot state that point more plainly: our results showed that in Battlefield 3, Crysis 3 and Sleeping Dogs, adding another $400+ Radeon HD 7970 did nothing to improve your gaming experience, and in some cases made it worse by introducing frame time variances that lead to stutter.  Take a look at some of our graphs on those game pages and compare the FRAPS FPS result to the Observed FPS result, which calculates an average frame rate per second after removing runts and drops.  Clearly the performance of the dual-card configuration is only barely faster than the single card, erasing the “scaling” of CrossFire.  This occurs at 1920x1080 and 2560x1440 in those three games and actually happens several times in DiRT 3, but only at 2560x1440 (which leads me to believe this is a GPU performance issue, not a CPU performance issue). 
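The runt-removal arithmetic behind the Observed FPS numbers can be sketched like so; note that the 21-scanline runt threshold here is an assumption for illustration, not necessarily the exact cutoff the FCAT scripts use:

```python
def observed_fps(frame_heights, runt_threshold=21):
    """Observed frames per second over one second of captured video.

    Each entry in frame_heights is how many scanlines a frame occupied
    on screen during that second.  Frames with height 0 were dropped;
    frames below the threshold are runts.  Neither contributes to the
    animation the viewer actually sees, so both are excluded.
    """
    return sum(1 for h in frame_heights if h >= runt_threshold)

# One second where a hypothetical CrossFire setup "renders" 80 frames
# but every other one is a sliver a few scanlines tall:
heights = [540 if i % 2 == 0 else 4 for i in range(80)]
fraps_style_fps = len(heights)      # 80: every Present() counts
observed = observed_fps(heights)    # 40: what the display actually shows
```

This is exactly the gap between the FRAPS FPS and Observed FPS graphs: the tool that counts every Present() call reports double the frame rate of the animation the viewer can actually perceive.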


It is worth pointing out that this does not necessarily mean you won’t have a fluid gaming experience on an AMD CrossFire configuration.  Sleeping Dogs at 2560x1440 is a perfect example of this: CrossFire shows nearly 50% of the frames as runts, cutting the average frame rate in half, but those non-runt frames are actually delivered in a consistent manner.  But a smooth gaming experience at 33 FPS on average on two HD 7970s in CrossFire doesn’t sound that good when you can get the same smooth experience at 33 FPS average with a single HD 7970.  Dual GeForce GTX 680s in SLI, on the other hand, produce fluid animation in Sleeping Dogs at 46 FPS. 

In Far Cry 3 and Skyrim we did not have this problem with our performance metrics since we didn’t see large numbers of runts or drops in our testing.  For Far Cry 3 in particular, the AMD cards had quite a bit more frame time variance (leading to stutter, non-fluid gameplay) with even the single HD 7970 getting higher marks on the International Stutter Units (ISU) graph than the GTX 680s in SLI. 

The second major concern for AMD CrossFire users occurs when you enable triple-monitor configurations with Eyefinity.  In every single game we tested, even Skyrim, DiRT 3 and Far Cry 3, which didn’t show major runt issues at single-monitor resolutions, just about every other frame of the game was being dropped.  Just like the runt frame issue we mentioned above, the Eyefinity drop problem basically means you are running your 5760x1080 configuration at the performance level of a single HD 7970 even though you have invested twice the money AND other performance software (in-game tests, FRAPS) is telling you differently.  The results from the recorded video are so bad, in fact, that the FCAT Perl scripts aren’t quite able to decipher them because they interpret the footage as a poor capture; we can assure you that is not the case.
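Dropped frames fall out of the capture analysis in a similar way to runts.  The overlay stamps each rendered frame with the next color in a known repeating sequence, so gaps in the captured sequence count the frames that never reached the display.  A simplified sketch (the four-color palette is illustrative; the real overlay cycles through a longer sequence):

```python
PALETTE = ["white", "lime", "blue", "red"]  # illustrative, not the real overlay palette

def count_drops(captured_colors, palette=PALETTE):
    """Count dropped frames by walking the captured overlay colors and
    tallying how many palette entries were skipped between neighbors."""
    drops = 0
    idx = palette.index(captured_colors[0])
    for color in captured_colors[1:]:
        expected = (idx + 1) % len(palette)
        actual = palette.index(color)
        drops += (actual - expected) % len(palette)  # skipped entries = drops
        idx = actual
    return drops
```

In the Eyefinity case described above, the capture shows roughly every other palette color, which this kind of walk tallies as one dropped frame per frame displayed.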

As much as we told you the single card results continued to favor AMD’s Radeon HD 7970 GHz Edition, the CrossFire results here counter that.  For a buyer of a high-end graphics card that costs over $400, the assurance of being able to run a multi-GPU solution to improve performance was not just insinuated, but verbally given.  At this point, it is fair to say that AMD is not living up to its promises.


NVIDIA SLI Performance – How we expected multi-GPU to work

The NVIDIA GeForce GTX 680 looks slower than the HD 7970 in our single GPU comparisons, but that all changes when we compare dual-GPU to dual-GPU in this category.  While AMD’s solution showed thousands of runt frames on BF3, Crysis 3 and Sleeping Dogs (two of which are AMD Gaming Evolved titles), NVIDIA’s SLI was able to handle scaling without a problem.  Battlefield 3 at 2560x1440 goes from an average of 57 FPS on one GTX 680 to 100 FPS on two of them; Crysis 3 at 1920x1080 scales from 31 FPS to 56 FPS; Sleeping Dogs goes from 24 to 46 FPS at 2560x1440.  And it is able to do so without massive frame time variance, which means the animations are not only improved by better frame rates but are still nearly as smooth as the single card options.

The secret to NVIDIA’s success lies in the hardware frame metering technology that it has built into the SLI infrastructure since the beginning but that is only just recently coming to light.  Apparently a combination of both hardware on the GPU and software in the driver, the frame metering technology’s sole purpose is to balance the output of frames from the GPU to the display in such a way as to provide the best animation possible while balancing performance and input latency. 
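Nobody outside NVIDIA knows exactly how the metering is implemented, but its effect on delivery can be modeled: hold back frames that arrive too early so display times track the recent average interval.  This toy model is purely our own guess at the behavior, not NVIDIA’s algorithm:

```python
def meter(ready_ms):
    """Given the times (ms) at which alternating GPUs finish frames,
    return evenly paced display times: each frame is shown no earlier
    than the previous display time plus the run's average interval."""
    avg = (ready_ms[-1] - ready_ms[0]) / (len(ready_ms) - 1)
    display = [ready_ms[0]]
    for t in ready_ms[1:]:
        # a frame ready "too early" waits; a late frame shows immediately
        display.append(max(t, display[-1] + avg))
    return display

# Classic alternate-frame-rendering micro-stutter: intervals alternate 5 ms / 28 ms
ready = [0, 5, 33, 38, 66]
paced = meter(ready)  # intervals become a steady 16.5 ms
```

The trade-off the paragraph above mentions is visible here: the early frames are delayed, which smooths the animation at the cost of a small amount of added input latency.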


In my talks with AMD before this article went live, they told us that they were simply doing what the game engine told them to do – displaying frames as soon as they were available.  Well, as we can clearly see with the runts in more than half of our tested games, displaying a frame too early can be just as detrimental as displaying it too late.  Without the ability to balance the output of two GPUs (or three, or four) you will run into these problems, and in fact we have seen the same thing happen with NVIDIA cards when metering is disabled.  We are hoping that NVIDIA will give us the option to disable it and run some more Frame Rating tests in the near future to see how the results compare.

In a couple of games, Far Cry 3 and on occasion DiRT 3, CrossFire is working as we would expect it to.  Skyrim does not exhibit the runt problem, but it also doesn’t seem to scale at all over a single GPU.  The inconsistency of this behavior might be just as troubling if my theory is correct.  In Skyrim, Far Cry 3 and DiRT 3 at low resolutions, it would appear that the CPU may be the primary bottleneck for performance, and for Far Cry 3, a game that has numerous other technical issues, this may be why CrossFire is actually working.  An artificial limiter in the game engine that helps meter out requests for frames to be rendered would essentially act like the hardware frame metering in NVIDIA’s SLI GPUs, allowing for a better overall experience.  In games like BF3, Crysis 3 and Sleeping Dogs where the GPU is in more demand, the AMD hardware/software combination is the limiting point in the pipeline, and this is where the AMD solution falters. 


Vsync – Only a Partial Answer

When I posted my preview of these results during the launch of the GeForce GTX Titan, many of you wanted to know what effect Vsync would have on the runts and frame time variance.  As it turns out, Vsync can in fact improve the situation for AMD’s CrossFire pretty dramatically, but it still leaves a lot of problems on the table.  Because Vsync effectively meters the frame output of all GPU combinations, including CrossFire, it removes the runts from our captures and from affecting performance.  Take a look at the results in Crysis 3 at 1920x1080 on the Radeon HD 7970s in CrossFire to see the other emerging issue though: drastically increased frame time variance.  The constant shifting between 16ms and 33ms frame times means that you will often see stuttering animation even when the GPU has the performance to sustain higher or more consistent frame rates.
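That 16ms/33ms alternation is a direct consequence of how standard Vsync quantizes frame times: a frame stays on screen for a whole number of refresh intervals, so render times hovering near the refresh period flip between one and two intervals.  A one-line sketch of that quantization:

```python
import math

def vsync_frame_time(render_ms, refresh_ms=16.7):
    """With vsync on, a frame is held until the next vblank, so its
    on-screen time rounds up to a whole number of refresh intervals."""
    return math.ceil(render_ms / refresh_ms) * refresh_ms

# Render times of 15 ms and 18 ms -- only 3 ms apart -- end up displayed
# for 16.7 ms and 33.4 ms respectively: a visible stutter.
```

A GPU averaging, say, 55 FPS never gets to show a 18ms frame; it bounces between the 60 FPS and 30 FPS buckets instead.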

To be fair, this same effect happens to NVIDIA’s GTX 680s in SLI.  The only difference is that NVIDIA has some options to try to fix it, called Adaptive Vsync and Smooth Vsync.  Both are activated through the NVIDIA Control Panel, but Smooth Vsync is only available to SLI users (we are hoping this will be added for single GPU users as well soon).  Adaptive Vsync locks the frame times to your display refresh rate (16ms at 60 Hz most of the time) any time your frame rate would be higher than 60 FPS, but then allows the engine to essentially “turn off” Vsync under 60 FPS so you don’t get the dramatic stuttering.  Smooth Vsync is a little-known feature that attempts to change the frame rate / frame times only when it knows it will have extended periods of available performance.
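The Adaptive Vsync behavior described above boils down to a simple per-frame rule; this is our reading of NVIDIA’s public description, not its actual driver logic:

```python
def adaptive_vsync(render_ms, refresh_ms=16.7):
    """Adaptive Vsync sketch: frames faster than the refresh interval are
    held to one interval (vsync on, no tearing); slower frames are shown
    as soon as they are ready (vsync off), avoiding the jump to ~33 ms."""
    return max(render_ms, refresh_ms)

# An 18 ms frame displays for 18 ms instead of being quantized to 33.4 ms;
# the cost is tearing whenever the frame rate dips below the refresh rate.
```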

In some select instances for AMD's CrossFire we can actually see a completely resolved frame variance result, as demonstrated by the Battlefield 3 2560x1440 graphs.  But Vsync still introduces other problems with the latency and interactivity of PC games, a topic we are going to dive into again soon.


Final Thoughts

Below is our full video that features the Frame Rating process, some example results and some discussion on what it all means going forward.  I encourage everyone to watch it but you will definitely need the written portion here to fully understand this transition in testing methods.  Subscribe to our YouTube channel if you haven't already!

Our Frame Rating graphics performance methods have definitely put a different light on the world of GPU performance.  By enabling at-the-display evaluation rather than depending solely on FRAPS-reported numbers that don’t take into account many other steps of the gaming pipeline, we are able to paint a picture of the smoothness and real-world experience as the user would see it directly.  The system and process is time consuming, costly and harder to interpret, but I think you will agree that the results we are seeing here today are well worth the effort.  Until recently, very few people would have believed that AMD’s CrossFire and Eyefinity technologies had so many problems, in large part because previous testing methods masked them.  As it stands now, half of our six tested gaming titles show significant problems with single-monitor CrossFire scaling and all six of them have serious problems with Eyefinity scaling.  And that includes AMD sponsored titles like Crysis 3, Far Cry 3 and Sleeping Dogs. 

NVIDIA’s frame metering technology, which still largely remains a mystery thanks to the company’s desire to keep its multi-GPU advantage as long as possible, is more important than we ever thought it would be and makes the GTX 680 stand out as a better solution for high end gaming than the HD 7970.  That would not have been the case when looking at only single GPU results or by simply relying on the average frame rates of FRAPS data. 

Another problem this causes for AMD and its partners is with dual-GPU hardware like the PowerColor Devil 13 and the ASUS ARES II.  While both are powerful graphics cards, they depend completely on CrossFire technology to meet the performance expectations set by marketing and by cost, and a quick glance at the HD 7970 CrossFire results I’ve shown you today (which basically mirror an ARES II) tells you that neither option is worth the price of admission. 

It should come as no surprise that AMD was only recently made aware of these performance issues and was somewhat stunned to find out how bad they were.  The truth is that AMD could “hack” together a fix to make our frame time graphs look better, but without the research required to get it right, such a fix would likely introduce more problems than it solved.  The driver team has told me several times over the past two weeks that they should have a testable driver to fix the CrossFire problems “in 2 to 3 months.”  Until then, buyers who consider a multi-GPU solution a goal or a requirement will want to think hard about dropping Radeon cards from consideration. 

We have quite a bit more test data to share with you as the week continues, as well as more testing to do to get a better and clearer picture of what is going on in each and every scenario we can.  We also plan on taking a look at more gaming titles with a smaller selection of hardware to find more patterns of success and failure with our current generation of GPUs.

March 27, 2013 | 09:16 AM - Posted by grommet

Hey Ryan, is the variable "t_ready" that the text refers to "t_present" in the diagram two paragraphs above?

March 27, 2013 | 09:22 AM - Posted by Ryan Shrout

Yes, fixed, thanks.

March 27, 2013 | 10:20 AM - Posted by Prodeous (not verified)

I was just wondering.. since the capture and analysis system relies on the left bar only, why doesn't it trunckate the rendered frame it self and keep only 1-3 pixels from the left side of the frame?

For some tests if you want to show off the specific "runts" "stutters" then you can keep it the entire frame captured.

But for most tests, you can record only the left colour bar and do analysis on that bar only, therefore you will not have to save the 600GB per second of uncompressed frames.

Just a thought.

March 27, 2013 | 10:40 AM - Posted by Ryan Shrout

We have that option (notice the crop capture option in our software) but we often like to reference recorded video down the road.

March 28, 2013 | 08:53 AM - Posted by Luciano (not verified)

If you write a piece of software with that "colored" portion only in mind you dont need any of the additional hardware and any user could use it just like Fraps.

March 28, 2013 | 07:02 PM - Posted by Anonymous (not verified)

LOL - AMD you are soooo freaking BUSTED !

Runt frames and zero time ghost frames is an AMD specialty.

AMD is 33% slower than nVidia, so amd pumped in ghosts and runts !!

Their crossfire BF3 is a HUGE CHEAT.

ROFL - man there are going to be some really pissed off amd owners whose $1,000 dual gpu frame rate purchases have here been exposed as LIES.

Shame on you amd, runts and ghosts, and all those fanboys screaming bang for the buck ... little did they know it was all a big fat lie.

March 29, 2013 | 02:42 AM - Posted by Anonymous (not verified)

true words of a fan boy.

May 26, 2013 | 08:23 PM - Posted by Johan (not verified)

Even i have Nvidia, but his comment, was really a fan-boy comment.

March 27, 2013 | 09:43 AM - Posted by Anon (not verified)

Run the benchmarks like any sane gamer would do so!

Running v-sync disabled benches with 60+ fps is dumb!

Running v-sync enabled benches with 60- fps is way more dumb!


March 27, 2013 | 10:00 AM - Posted by John Doe (not verified)

Don't forget that V-sync also sometimes fucks shit up.

The best solution is to limit the frames on the game end, or using a 3rd party frame limiter.

March 27, 2013 | 10:51 AM - Posted by Anon (not verified)

And sometimes disabling it fucks physics like in Skyrim.

Your point?

March 27, 2013 | 10:18 AM - Posted by grommet

Read the whole article before commenting- there is an entire page that focuses on your argument (surprise- you're not the first to suggest this!!)

March 27, 2013 | 10:32 AM - Posted by John Doe (not verified)

And if you have actually watched one of his streams, you'd have seen that he INDEED IS doing it "wrong".

When he was testing Tomb Raider, the card was artifacting and flickering and shimmering. It was complete and totally obvious that he didn't know what he's doing.

March 27, 2013 | 10:31 AM - Posted by Anonymous (not verified)

To the guy who says "you're doing it wrong".

Not true. He's doing it right. Many hardcore gamers run v-sync disabled over 60 fps, and only a few of the scenarios tested are consistently over 60fps.

And read the story to see the OBSERVED FPS is much less in many cases, not just the FRAPs FPS (which does not give proper info at the end of the pipeline (at the display) what users actually see).

March 27, 2013 | 10:38 AM - Posted by John Doe (not verified)

There's absolutely nothing "hardcore" about running an old ass game at 1000 FPS with a modern card.

All it does is to waste resources and make the card work at it's hardest for no reason. It's no different than swapping a gear at a say, maximum, 6500 RPM on a car that revs 7000 per gear. You should keep it lower so that the hardware doesn't get excessively get stressed.

If the card is obviously way beyond the generation of the game you're playing, then you're best off, if possible, LIMITING frames/putting a frame lock on your end.

March 27, 2013 | 10:42 AM - Posted by Ryan Shrout

Vsync does cause a ton of other issues including input latency.

March 27, 2013 | 10:44 AM - Posted by Anon (not verified)

You've said this already in both the article and comments but you only talked about input latency.

What are those other issues?

March 27, 2013 | 10:52 AM - Posted by John Doe (not verified)

It makes the entire game laggy an unplayable depending on condition.

You can read up over that TweakGuides guy's site on Vsync.

March 27, 2013 | 11:09 AM - Posted by Josh Walrath

Some pretty fascinating implications about the entire game/rendering pipeline.  There are so many problems throughout, and the tradeoffs tend to impact each other in varyingly negative ways.  Seems like frames are essentially our biggest issue, but how do you get around that?  I had previously theorized that per pixel change in an asynchronous manner would solve a lot of these issues, but the performance needed for such a procedure is insane.  Essentially it becomes a massive particle model with a robust physics engine being the base of changes (movement, creation, destruction, etc.).

March 27, 2013 | 02:18 PM - Posted by Jason (not verified)

research voxels... the technology exists but is pretty far off.

March 27, 2013 | 05:44 PM - Posted by Anonymous (not verified)

Regarding per-pixel-updates: Check this out:

It's obvious the result must be delivered in frames (no way around it with current monitor technology), but the engine in the vid clearly works differently from the usual by just putting the newly calculated pixels into the framebuffer that are ready by the time.

March 27, 2013 | 02:07 PM - Posted by bystander (not verified)

Due to many frames being forced to wait for the next vertical retrace mode, while others do not, it will result in some time metering issues. This can result in a form of stuttering.

March 29, 2013 | 02:49 AM - Posted by Anonymous (not verified)

hardcore gamer here play BF3 v-sync ON!


1. My monitor (as 90% of us) is 60hz!
don't need the extra stress/heat/electric juice on card/system.

2. Fed up with frame tearing every time i turn around.

March 27, 2013 | 10:27 AM - Posted by Prodeous (not verified)

With regards to the Nvidia settings of v-sync, it seems that Addaptive (half refresh rate) was selected capping it at 30fps vs Addaptive which would cap at 60fps.

Was that on purpose?

March 27, 2013 | 10:31 AM - Posted by Noah Taylor (not verified)

I'm still interested to see what type of results you come up with when using amd crossfire with a program like afterburner's riva tuner to limit the fps to 60, which would seems to be everyone's preference for this situation.

March 27, 2013 | 10:42 AM - Posted by Ryan Shrout

I think you'll find the same problems that crop up with Vsync enabled.

March 27, 2013 | 11:14 AM - Posted by Noah Taylor (not verified)

I have to admit observed FPS graphs are DAMNING to AMD, and I own 2 7970s and 7950s so I'm not remotely biased against them in any way. One thing i did notice is that dropped frames don't effect every game so hopefully this is something AMD may be able to potentially mitigate through driver tweaks.

I have to admit, crysis 3 can be a sloppy affair in AMD crossfire and now I can see exactly what I'm experiencing without trying to make guesswork out of it.

Regardless, the bottom line EVERYONE should take away from this, is that crossfire DOES NOT function as intended whatsoever, and we can now actually say AMD is deceptive in their marketing as well, this is taken directly from AMD's website advertising crossfire:

"Tired of dropping frames instead of opponents? Find a CrossFire™-certified graphics configuration that’s right for you."

They have built their business on a faulty product and every crossfire owner should speak up so that AMD makes the changes they have control over to FIX a BROKEN system.

March 27, 2013 | 11:17 AM - Posted by Josh Walrath

Just think how bad a X1900 CrossFire Edition setup would do here...  I think Ryan should dig up those cards and test them!

March 27, 2013 | 11:49 AM - Posted by SB48 (not verified)

I wonder if that one of the reasons why AMD never really released the 7990,
also if this was already a obvious problem with the 3870 X2...
or the previous gens card, like HD5000, 6000.

anyway, is there any input lag difference from NV to AMD (SLI-CF) without vsync?

also I would be curious to see some CPU comparison.

March 27, 2013 | 01:31 PM - Posted by John Doe (not verified)

I had both a dual 3870 and a dual 2900 setup, both were more or less the same thing.

Both has driven a CRT at 160HZ for Cube and both were silk smooth.

This is a recent issue. It has nothing to do with cards of that age. The major problem with those old cards was the lack of CrossFire application profiles. Before the beginning of 2010, you had absolutely no application profiles like you have with nVidia. So CF either worked or you had to change the game ".exe" names, which either worked or, made the game mess up or just kick you out of Steam servers due to VAC.
