
Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing

Battlefield 3 – HD 7970 versus GTX 680


Our first graph shows the average frame rates of a single HD 7970, HD 7970s in CrossFire, a single GTX 680, and GTX 680s in SLI.  You can see that the single GTX 680 runs just a bit slower than the HD 7970, but when we add in that second card, the CrossFire configuration is noticeably faster!  But what does the Frame Rating system actually see?


 

Here we can see the raw frame times as recorded by our capture system and extracted by the FCAT tools.  The frame time variance of the single card configurations, both HD 7970 and GTX 680, is fairly minimal, as indicated by the tighter, thinner lines across the time window.  When we add in the second GTX 680 for SLI, frame times become less consistent, shown by the wider, denser blue line.  The orange line is a much bigger story though, with frame times cycling between near 0 ms (the runts we see) and 15 ms.  That indicates not only potential animation stuttering but also that the small frames are so insignificant as to be useless, which is why they are removed from our observed frame rates.
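To make the runt classification concrete, here is a minimal Python sketch of how extracted frames might be labeled.  This is an illustration only: the input format and the 21-scanline cutoff are assumptions for this example, not the exact rules inside the FCAT tools.

```python
# Hypothetical classifier for extracted frames. Assumes the extraction step
# reports how many scanlines each frame's overlay color occupied on screen.
RUNT_SCANLINE_CUTOFF = 21  # assumed threshold, for illustration only

def classify_frames(scanline_counts):
    """Label each captured frame as 'drop', 'runt', or 'full'."""
    labels = []
    for lines in scanline_counts:
        if lines == 0:
            labels.append("drop")   # rendered but never reached the display
        elif lines < RUNT_SCANLINE_CUTOFF:
            labels.append("runt")   # a sliver too small to aid animation
        else:
            labels.append("full")
    return labels
```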


Remember that the Observed FPS takes out runts and drops and is the result, and the benefit, of measuring performance with our new Frame Rating system.  Clearly, the advantage that CrossFire appeared to have over SLI in the first result is nearly completely negated, and in many cases the observed performance of a two-card HD 7970 configuration is no better than that of a single card.
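Expressed in code, the Observed FPS idea might look like the sketch below.  It builds on the hypothetical classifier above and is only an illustration of the concept, not our actual analysis pipeline.

```python
def observed_fps(frame_times_ms, labels):
    """Observed FPS: frames per second after discarding runts and drops.

    frame_times_ms: time each frame spent on screen, from the capture data.
    labels: per-frame 'drop'/'runt'/'full' labels (see classify_frames).
    """
    total_seconds = sum(frame_times_ms) / 1000.0  # wall-clock span is unchanged
    shown = sum(1 for lab in labels if lab == "full")
    return shown / total_seconds
```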

The same cannot be said for NVIDIA’s SLI though – the frames are presented to the gamer in a consistent pattern that indicates good scaling and looks nearly identical to the first FRAPS-based graph.


Our minimum frame rate percentile graph tells a similar story to our observed frame rate: the HD 7970s in CrossFire perform very much like a single HD 7970; in fact, the two frame rates / frame times get CLOSER as we progress down the percentile curve.  The GTX 680s in SLI are the only configuration to show any appreciable difference.
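If you want to build this kind of percentile curve from your own frame time logs, a simple NumPy sketch follows.  The percentile steps are arbitrary choices for illustration; this is not our exact charting code.

```python
import numpy as np

def fps_percentile_curve(frame_times_ms, percentiles=(50, 75, 90, 95, 99)):
    """Frame rate at increasing frame-time percentiles.

    Higher percentiles capture the slowest frames, so a curve that falls
    steeply indicates inconsistent delivery even when the average is high.
    """
    times = np.asarray(frame_times_ms, dtype=float)
    return {p: 1000.0 / np.percentile(times, p) for p in percentiles}
```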


Finally, let’s look at the stutter graph produced by our custom algorithm.  Without a doubt, the CrossFire configuration is the worst offender here and has the most potential for animation stutter of any configuration compared.  Keep in mind that this graph has very little to do with absolute frame times (and thus absolute frame rates); both the HD 7970 and GTX 680 show nearly identical, low variance levels.
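We are not reproducing the exact stutter algorithm here, but one plausible way to measure pacing independently of absolute frame rate is each frame's deviation from a trailing average.  The sketch below is that interpretation only; the window size is an assumption.

```python
def stutter_series(frame_times_ms, window=20):
    """Illustrative stutter metric (not the exact ISU algorithm):
    each frame's absolute deviation, in ms, from the average of the
    previous `window` frame times. Low values mean smooth pacing no
    matter how high or low the absolute frame rate is.
    """
    out = []
    for i, t in enumerate(frame_times_ms):
        recent = frame_times_ms[max(0, i - window):i] or [t]
        baseline = sum(recent) / len(recent)
        out.append(abs(t - baseline))
    return out
```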

 

Let’s dive into the next resolution, 2560x1440. 


The single card comparison again shines a positive light on the Radeon HD 7970, as it maintains higher performance than the GTX 680 throughout our test runs of Battlefield 3.  This data also reports a similar scaling result for CrossFire, though…


A quick glance at this graph tells you all you need to know: the CrossFire runt issue remains a problem at 2560x1440, just as it was at 1920x1080.  Without a doubt, the average frame rates you see above are NOT indicative of real-world performance for AMD’s multi-GPU solutions.  Interestingly, the width of the blue line for the GTX 680 SLI setup is not much different from those of the single HD 7970 and GTX 680.  It seems that with less of a CPU bottleneck on the system, the SLI configuration can actually improve frame time consistency compared to lower resolutions.


Based on the frame times above, it should not surprise you to see that once again our Observed FPS for CrossFire is very different from the FRAPS-measured average frame rate.  Both of the single GPU configurations, as well as the GTX 680s in SLI, show nearly identical results.


This is a RUN plot for the HD 7970 CrossFire configuration at 2560x1440.  We aren't going to show these for most of the cards and games, only in a select few instances where they illustrate a point.  Here, the frame time plot above and this RUN graph show two sides of the same story.  Notice that the FRAPS frame rate is the black line at the top, with the observed frame rate represented by the blue line.  The separation between them consists of either runts (orange) or completely dropped frames (red).  Interestingly, there are two points in our 60 second run, at the 43 and 47 second marks, where it looks like the CrossFire setup is performing correctly - where the FRAPS and observed frame rate lines meet up - though they quickly diverge again.
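For anyone who wants to build a similar RUN-style chart from their own data, here is an illustrative matplotlib sketch.  It assumes per-second counts of full, runt, and dropped frames rather than our exact data format.

```python
import matplotlib.pyplot as plt

def run_plot(seconds, full, runts, drops):
    """RUN-style plot: stacked bands between Observed and FRAPS FPS.

    seconds: x-axis timestamps; full/runts/drops: per-second frame counts.
    The top of the red band is what FRAPS would report; the blue line is
    what actually contributes to on-screen animation.
    """
    observed = list(full)
    with_runts = [f + r for f, r in zip(full, runts)]
    fraps = [w + d for w, d in zip(with_runts, drops)]
    plt.plot(seconds, fraps, color="black", label="FRAPS FPS")
    plt.plot(seconds, observed, color="blue", label="Observed FPS")
    plt.fill_between(seconds, observed, with_runts, color="orange", label="Runts")
    plt.fill_between(seconds, with_runts, fraps, color="red", label="Drops")
    plt.xlabel("Seconds")
    plt.ylabel("FPS")
    plt.legend()
    plt.show()
```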


For comparison, here is the RUN plot from the GTX 680s in SLI.  The data here shows no runt frames and no dropped frames from the GTX 680s.  That is a stark contrast to what the HD 7970s show above.


Even though the GTX 680 sits at the bottom of the pack in our percentile chart of minimum frame rates, the HD 7970 and HD 7970 CrossFire results again show the lack of scaling from the CrossFire configuration.  And it should give GeForce owners some pride that their single GTX 680 is barely slower than a pair of Radeon HD 7970s in real-world performance.


Our ISU ratings show that the HD 7970s in CrossFire are again the most variant configuration, and we do see improvement with the GTX 680s in SLI – now matching the level of even a single Radeon HD 7970.

 

As I mentioned before, there are some cases where the 5760x1080 results for AMD CrossFire + Eyefinity were so bad, with so many dropped frames, that the Perl code in FCAT couldn’t produce a solid result.  Battlefield 3 is one of those cases, so you will not see the HD 7970s in CrossFire in our graphs here.  What I can tell you, though, is that our video footage of the 5760x1080 Eyefinity + CrossFire testing showed a dropped frame on every other color, telling me that the work of one of the GPUs was 100% thrown away and never shown to the gamer.
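That every-other-color observation can be checked mechanically: the overlay colors repeat in a fixed order, so any skipped color is a frame that was rendered but never displayed.  Below is a simplified Python sketch of the idea with a stand-in palette length; it is not the actual FCAT Perl code.

```python
CYCLE_LENGTH = 16  # stand-in for the real overlay palette length

def count_dropped_frames(seen_indices):
    """Count skipped steps in the observed overlay color sequence.

    seen_indices: palette index of each color seen on screen, in display
    order, with consecutive repeats (a frame held on screen) collapsed.
    Every missing step in the cycle is a dropped frame.
    """
    drops = 0
    for prev, cur in zip(seen_indices, seen_indices[1:]):
        expected = (prev + 1) % CYCLE_LENGTH
        while expected != cur:
            drops += 1
            expected = (expected + 1) % CYCLE_LENGTH
    return drops
```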


Without the HD 7970 CrossFire configuration in our plots, we see that while the GTX 680s in SLI scale quite well (going from somewhere in the 30s to somewhere in the 50s), the HD 7970 remains the better single GPU option.


Frame time variance looks pretty bad for the GTX 680s in SLI at 5760x1080, jumping from under 10 ms to over 40 ms in some cases.  Meanwhile, both single cards have fairly tight data plots, with some noticeable spikes on the single GTX 680.


The observed FPS is identical to the FRAPS FPS since we did not see any runts or drops in our data.


Our percentile graph of minimum frame rates shows us something interesting.  The 50th percentile is CLOSE to the average frame rate over the entire run of the test, and there the GTX 680s scale from 32 FPS to 52 FPS, with the HD 7970 coming in around 38 FPS.  But as we move down the line to higher percentiles, the minimum FPS of the GTX 680 SLI setup falls more steeply than that of the single cards; at the 95th percentile the results are much closer together, with the HD 7970 actually splitting the difference.


You can ignore the HD 7970 CF line here along the bottom; it is residual data, and the graph needs cleanup.  What is important, though, is that the frame time variance of SLI is clearly visible here, and the “spikes” we saw in the frame time graph hurt the overall variance for NVIDIA’s GTX 680s.  While these results are interesting, without the plots from the HD 7970s in CrossFire I would leave the Battlefield 3 5760x1080 results open-ended.

 

What is the total takeaway from our first set of Frame Rating results, on Battlefield 3?  First, if we look at only single GPU solutions, the GTX 680 falls behind the Radeon HD 7970 in average frame rate while exhibiting no more frame time variance or stutter.  For SLI and CrossFire though, there isn’t even a debate: the real-world value of adding a second HD 7970 to your system is near zero.  That’s a tough pill to swallow for any gamer with two cards already in their system, and I expect we will get some hate-fueled email and comments on the subject, but the data is accurate and matches our gameplay testing experiences.

March 27, 2013 | 09:16 AM - Posted by grommet

Hey Ryan, is the variable "t_ready" that the text refers to "t_present" in the diagram two paragraphs above?

March 27, 2013 | 09:22 AM - Posted by Ryan Shrout

Yes, fixed, thanks.

March 27, 2013 | 10:20 AM - Posted by Prodeous (not verified)

I was just wondering... since the capture and analysis system relies on the left bar only, why doesn't it truncate the rendered frame itself and keep only 1-3 pixels from the left side of the frame?

For some tests, if you want to show off specific "runts" or "stutters", you could keep the entire frame captured.

But for most tests, you could record only the left colour bar and do the analysis on that bar alone, so you would not have to save the 600GB per second of uncompressed frames.

Just a thought.

March 27, 2013 | 10:40 AM - Posted by Ryan Shrout

We have that option (notice the crop capture option in our software) but we often like to reference recorded video down the road.
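For illustration, the crop Prodeous suggests is a one-line operation per frame; this hypothetical NumPy sketch is not the capture software's actual code, and the bar width is an assumption.

```python
import numpy as np

def crop_to_overlay_bar(frame, bar_width=3):
    """Keep only the left overlay-bar columns of a captured frame.

    frame: H x W x 3 pixel array; bar_width: assumed overlay bar width.
    Storage shrinks by roughly a factor of W / bar_width, at the cost of
    losing the full video for later reference.
    """
    return frame[:, :bar_width, :].copy()
```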

March 28, 2013 | 08:53 AM - Posted by Luciano (not verified)

If you write a piece of software with only that "colored" portion in mind, you don't need any of the additional hardware, and any user could use it just like Fraps.

March 28, 2013 | 07:02 PM - Posted by Anonymous (not verified)

LOL - AMD you are soooo freaking BUSTED !

Runt frames and zero-time ghost frames are an AMD specialty.

AMD is 33% slower than nVidia, so amd pumped in ghosts and runts !!

Their crossfire BF3 is a HUGE CHEAT.

ROFL - man there are going to be some really pissed off amd owners whose $1,000 dual gpu frame rate purchases have here been exposed as LIES.

Shame on you amd, runts and ghosts, and all those fanboys screaming bang for the buck ... little did they know it was all a big fat lie.

March 29, 2013 | 02:42 AM - Posted by Anonymous (not verified)

true words of a fan boy.

May 26, 2013 | 08:23 PM - Posted by Johan (not verified)

Even though I have Nvidia, his comment really was a fan-boy comment.

March 27, 2013 | 09:43 AM - Posted by Anon (not verified)

Run the benchmarks like any sane gamer would!

Running v-sync disabled benches with 60+ fps is dumb!

Running v-sync enabled benches with 60- fps is way more dumb!

YOU'RE DOING IT WRONG!

March 27, 2013 | 10:00 AM - Posted by John Doe (not verified)

Don't forget that V-sync also sometimes fucks shit up.

The best solution is to limit the frames on the game's end, or to use a 3rd party frame limiter.

March 27, 2013 | 10:51 AM - Posted by Anon (not verified)

And sometimes disabling it fucks physics like in Skyrim.

Your point?

March 27, 2013 | 10:18 AM - Posted by grommet

Read the whole article before commenting - there is an entire page that focuses on your argument (surprise - you're not the first to suggest this!!)

March 27, 2013 | 10:32 AM - Posted by John Doe (not verified)

And if you have actually watched one of his streams, you'd have seen that he INDEED IS doing it "wrong".

When he was testing Tomb Raider, the card was artifacting and flickering and shimmering. It was completely and totally obvious that he didn't know what he was doing.

March 27, 2013 | 10:31 AM - Posted by Anonymous (not verified)

To the guy who says "you're doing it wrong".

Not true. He's doing it right. Many hardcore gamers run v-sync disabled over 60 fps, and only a few of the scenarios tested are consistently over 60fps.

And read the story to see that the OBSERVED FPS is much lower in many cases, not just the FRAPS FPS (which does not properly reflect what users actually see at the end of the pipeline, at the display).

March 27, 2013 | 10:38 AM - Posted by John Doe (not verified)

There's absolutely nothing "hardcore" about running an old ass game at 1000 FPS with a modern card.

All it does is waste resources and make the card work its hardest for no reason. It's no different than shifting gears at, say, a maximum of 6500 RPM on a car that revs to 7000 per gear. You should keep it lower so that the hardware doesn't get excessively stressed.

If the card is obviously way beyond the generation of the game you're playing, then you're best off, if possible, LIMITING frames/putting a frame lock on your end.

March 27, 2013 | 10:42 AM - Posted by Ryan Shrout

Vsync does cause a ton of other issues including input latency.

March 27, 2013 | 10:44 AM - Posted by Anon (not verified)

You've said this already in both the article and comments but you only talked about input latency.

What are those other issues?

March 27, 2013 | 10:52 AM - Posted by John Doe (not verified)

It makes the entire game laggy and unplayable depending on the conditions.

You can read up on Vsync over at that TweakGuides guy's site.

March 27, 2013 | 11:09 AM - Posted by Josh Walrath

Some pretty fascinating implications about the entire game/rendering pipeline.  There are so many problems throughout, and the tradeoffs tend to impact each other in varyingly negative ways.  Seems like frames are essentially our biggest issue, but how do you get around that?  I had previously theorized that per pixel change in an asynchronous manner would solve a lot of these issues, but the performance needed for such a procedure is insane.  Essentially it becomes a massive particle model with a robust physics engine being the base of changes (movement, creation, destruction, etc.).

March 27, 2013 | 02:18 PM - Posted by Jason (not verified)

research voxels... the technology exists but is pretty far off.

March 27, 2013 | 05:44 PM - Posted by Anonymous (not verified)

Regarding per-pixel-updates: Check this out: https://www.youtube.com/watch?feature=player_embedded&v=pXZ33YoKu9w

It's obvious the result must be delivered in frames (no way around it with current monitor technology), but the engine in the vid clearly works differently from the usual approach, by putting into the framebuffer just the newly calculated pixels that are ready at the time.

March 27, 2013 | 02:07 PM - Posted by bystander (not verified)

Because many frames are forced to wait for the next vertical retrace while others are not, you get time metering issues. This can manifest as a form of stuttering.
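As a rough model of the time metering issue described above (assuming a 60 Hz display and simple double buffering), display times snap to the next vertical retrace, so a small change in render time can double a frame's on-screen cost:

```python
import math

REFRESH_MS = 1000.0 / 60.0  # assumed 60 Hz display

def vsynced_display_times(render_times_ms):
    """Quantize frame completion times to the next vertical retrace.

    A frame that renders in 17 ms misses one retrace and waits for the
    next, costing two full intervals (~33.3 ms), while a 16 ms frame
    costs one (~16.7 ms) - the time metering stutter described above.
    """
    now = 0.0
    display_times = []
    for render in render_times_ms:
        now += render
        now = math.ceil(now / REFRESH_MS) * REFRESH_MS  # wait for retrace
        display_times.append(now)
    return display_times
```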

March 29, 2013 | 02:49 AM - Posted by Anonymous (not verified)

I'm a hardcore gamer and I play BF3 with v-sync ON!

because:

1. My monitor (like 90% of us) is 60 Hz!
I don't need the extra stress/heat/electric juice on my card/system.

2. I'm fed up with frame tearing every time I turn around.

March 27, 2013 | 10:27 AM - Posted by Prodeous (not verified)

Regarding the Nvidia v-sync settings, it seems that Adaptive (half refresh rate) was selected, capping it at 30 fps, versus Adaptive, which would cap at 60 fps.

Was that on purpose?

March 27, 2013 | 10:31 AM - Posted by Noah Taylor (not verified)

I'm still interested to see what type of results you come up with when using AMD CrossFire with a program like Afterburner's RivaTuner to limit the fps to 60, which seems to be everyone's preference for this situation.

March 27, 2013 | 10:42 AM - Posted by Ryan Shrout

I think you'll find the same problems that crop up with Vsync enabled.

March 27, 2013 | 11:14 AM - Posted by Noah Taylor (not verified)

I have to admit the observed FPS graphs are DAMNING to AMD, and I own two 7970s and 7950s, so I'm not remotely biased against them in any way. One thing I did notice is that dropped frames don't affect every game, so hopefully this is something AMD may be able to mitigate through driver tweaks.

I have to admit, Crysis 3 can be a sloppy affair in AMD CrossFire, and now I can see exactly what I'm experiencing without trying to make guesswork out of it.

Regardless, the bottom line EVERYONE should take away from this is that CrossFire DOES NOT function as intended whatsoever, and we can now actually say AMD is deceptive in their marketing as well. This is taken directly from AMD's website advertising CrossFire:

"Tired of dropping frames instead of opponents? Find a CrossFire™-certified graphics configuration that’s right for you."

They have built their business on a faulty product, and every CrossFire owner should speak up so that AMD makes the changes they have control over to FIX a BROKEN system.

March 27, 2013 | 11:17 AM - Posted by Josh Walrath

Just think how badly an X1900 CrossFire Edition setup would do here...  I think Ryan should dig up those cards and test them!

March 27, 2013 | 11:49 AM - Posted by SB48 (not verified)

I wonder if that's one of the reasons why AMD never really released the 7990,
and also if this was already an obvious problem with the 3870 X2...
or previous gen cards, like the HD 5000 and 6000 series.

Anyway, is there any input lag difference between NV and AMD (SLI vs. CF) without vsync?

Also, I would be curious to see some CPU comparisons.

March 27, 2013 | 01:31 PM - Posted by John Doe (not verified)

I had both a dual 3870 and a dual 2900 setup; both were more or less the same thing.

Both drove a CRT at 160 Hz for Cube and both were silky smooth.

This is a recent issue. It has nothing to do with cards of that age. The major problem with those old cards was the lack of CrossFire application profiles. Before the beginning of 2010, you had absolutely no application profiles like you have with nVidia. So CF either worked, or you had to change the game ".exe" names, which either worked, made the game mess up, or just kicked you out of Steam servers due to VAC.
