
Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing

Sleeping Dogs – HD 7970 versus GTX 680


Our Sleeping Dogs testing shows that the game is quite a bit more GPU intensive (at least at our quality settings) than I had originally thought.  The initial FRAPS results put the HD 7970 a solid margin ahead of the GTX 680, and give HD 7970s in CrossFire an even bigger advantage over GTX 680s in SLI.


But, just as we witnessed with BF3 and Crysis 3, there is definitely a problem brewing for AMD.  At 1920x1080 the frame times are very consistent for the single cards and NVIDIA’s SLI solution, but for the AMD cards in CrossFire the experience is a mess, filled with runt frames.


The result is an observed frame rate average well below what FRAPS would report, and essentially no faster than a single HD 7970 graphics card.  Interestingly, the spikes of higher observed frame rate match up perfectly with the few “tight” areas of our frame time plot.
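To make the runt problem concrete, here is a minimal sketch of how an observed frame rate can be derived from the per-frame scanline heights pulled out of a capture. The threshold, constants, and function names here are illustrative assumptions, not necessarily the exact values used in our extraction scripts:

```python
RUNT_THRESHOLD = 21          # assumed scanline cutoff for a "real" frame
SCANLINES_PER_FRAME = 1080   # visible lines in each captured output frame
CAPTURE_HZ = 60.0            # capture card refresh rate

def capture_duration_s(frame_scanlines):
    """Total capture time implied by the scanlines every frame consumed."""
    return sum(frame_scanlines) / (SCANLINES_PER_FRAME * CAPTURE_HZ)

def fraps_style_fps(frame_scanlines):
    """What FRAPS would report: every Present() counts, runts included."""
    return len(frame_scanlines) / capture_duration_s(frame_scanlines)

def observed_fps(frame_scanlines):
    """Observed FPS: only frames tall enough to contribute to the animation."""
    visible = sum(1 for h in frame_scanlines if h >= RUNT_THRESHOLD)
    return visible / capture_duration_s(frame_scanlines)

# Toy CrossFire-like pattern: a normal frame alternating with a runt sliver.
frames = [800, 10] * 300
print(f"FRAPS-style: {fraps_style_fps(frames):.0f} FPS")   # 160 FPS
print(f"Observed:    {observed_fps(frames):.0f} FPS")      # 80 FPS
```

With that alternating pattern, half of all presented frames are slivers of a few scanlines: FRAPS counts them all, while the observed metric throws them out, cutting the reported frame rate in half.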


The good news for AMD is that the HD 7970 is consistently faster than the GTX 680 in a single card scenario, with very consistent results.  The very bad news, of course, is that two Radeon HD 7970s in CrossFire are only as fast as a single card in perceived frame rate.  That hands the win to the GTX 680s in SLI almost by default.


Our ISU-based stutter graphic tells a very similar story, with the CrossFire frame times easily pulling away from the rest of the group by the 80th percentile.

 


At 2560x1440 the story starts once again with the HD 7970 outpacing the GTX 680 and CrossFire running much faster than the GTX 680s in SLI.


The plot of frame times tells the whole story though, revealing the inconsistent frame times and runts that continue to plague CrossFire.


Thus, the observed FPS we see here is much, much lower than what FRAPS reports, and in fact is only barely faster than a single card!


The minimum FPS percentile graph shows another angle on the same problem for AMD: the frame rates are identical for the single and dual GPU configurations.  Even though NVIDIA’s GTX 680 is slower than the HD 7970, with SLI working correctly and efficiently it scales Sleeping Dogs from a 24 FPS average across the entire run to about 46 FPS.


Our ISU graph shows that once the runts are removed, frame time variance from the CrossFire cards is fairly minimal up to the 90th percentile, after which it skyrockets.
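For readers following along with the metrics, below is a sketch of how the minimum FPS percentile curve and an ISU-style stutter number can be computed from a list of frame times. The formulations are plausible assumptions for illustration, not necessarily the exact math behind our graphs:

```python
import numpy as np

def min_fps_percentiles(frame_times_ms, percentiles=(50, 90, 95, 99)):
    """Min FPS percentile: slow frames live in the upper frame-time
    percentiles, so invert those frame times back into FPS."""
    ft = np.asarray(frame_times_ms, dtype=float)
    return {p: 1000.0 / np.percentile(ft, p) for p in percentiles}

def stutter_percentiles(frame_times_ms, percentiles=(50, 80, 90, 95, 99)):
    """ISU-style stutter: frame-to-frame change in frame time, read off
    at increasing percentiles (one plausible formulation of variance)."""
    ft = np.asarray(frame_times_ms, dtype=float)
    deviation = np.abs(np.diff(ft))
    return {p: float(np.percentile(deviation, p)) for p in percentiles}

# Example: a mostly steady 33 ms run with a 70 ms hitch every tenth frame.
times = ([33.0] * 9 + [70.0]) * 10
print(min_fps_percentiles(times))  # FPS near 30 at first, collapsing in the tail
print(stutter_percentiles(times))  # deviation near zero until ~80%, then spikes
```

The shape matches what the graphs show: a run can look fine at the 50th percentile and still hide serious stutter that only surfaces in the last 10 to 20 percent of frames.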

 


We couldn’t get usable results from the 5760x1080 testing on Sleeping Dogs with the HD 7970 CrossFire setup, so instead we are looking at another three-card set of graphs.  At this setting we turned the AA down from Extreme to High, which explains how these GPUs can run at this resolution at all.  The GTX 680 is once again slower than the HD 7970, though the 680s scale correctly and quite well, taking performance from about 30 FPS on average to 51 FPS or so.


There is a bit more frame time variance than I would like to see from the GTX 680s in SLI, with a few dozen hitches clearly visible in the image above.  Single card results remain level and reliable for both AMD and NVIDIA.


The observed frame rates match the FRAPS metrics.


The Min FPS percentile graph shows us the value of consistent frame times on the GTX 680: it is nearly a straight line across the graph, starting at 30 FPS at the 50% mark and dropping only to 27 FPS at the 99% level.


The frame time variance graph (our ISU) shows a very flat pattern for the GTX 680 and HD 7970 cards all the way through the test runs, though the SLI configuration does show more potential for stutter, with a rising line as we approach the 99th percentile.

March 27, 2013 | 07:34 PM - Posted by Ryan Shrout

I'll see if we can take a look at that.

March 29, 2013 | 03:31 PM - Posted by jgstew

The whole time I was reading this article, I was more and more curious how Virtu's technology would affect things. I'm curious about more than just their Virtual V-sync, but their other options as well, for both single and multiple GPUs. Virtu has not had the scaling that SLI & CrossFire have had, but perhaps their technology would show well in other areas under this analysis.

I do feel that Frame Rating & the input to display latency are much more interesting metrics.

Great work on the article.

March 27, 2013 | 05:32 PM - Posted by serpinati of the wussu (not verified)

I've read somewhere that it will not be the norm for pcper to do this type of testing in all video card reviews (too labor intensive). Is this true?

If it is true, will pcper at least record the frame data and give it a quick look to make sure video cards aren't doing something absolutely crazy? (For example, if you were reviewing those 7970s in CrossFire you might not plan to fully analyze the frame times, but you would at least look over the recorded frames, catch the crazy runt frames, and mention them in your review.)

March 27, 2013 | 07:34 PM - Posted by Ryan Shrout

No, my plan is to take this route going forward.

March 27, 2013 | 05:57 PM - Posted by Anonymous (not verified)

Greatly appreciate the work behind this! And opening up the tools / scripts to everyone, pushing it to other hardware magazines. Huge kudos! Oh, and you should slap AnandTech over the head for not even mentioning your involvement in the new metering method in their introductory article, which only mentions NVIDIA...

Is there any way to determine the latency between t_present and t_display? If not, there should be - maybe some kind of timestamp could be worked into the overlay? That would be interesting not only in the context of VSync, but also regarding NVIDIA's frame metering, which must take some time analyzing the frame times. Supposedly, AMD has a smoothing algorithm coming up for CrossFire as well, so there it would also be valuable information.

Regarding Adaptive / Smooth VSync: they clearly come off too well in this article. They're not the focus of the article, of course, but a little more thought should have been put into this, considering the amount of time taken to create this awe-inspiring effort of an article and the tools behind it. Adaptive VSync turns VSync off exactly when tearing is most annoyingly visible, i.e. at frame rates below the monitor refresh rate; true triple buffering (instead of the queue nowadays called triple buffering) would be the better solution here. Smooth VSync does not seem to do anything particularly positive judging from what little measurement is available in the article; only time (further tests, that is) will tell what it actually does...
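As a rough illustration of the mechanism the commenter describes, Adaptive VSync amounts to a per-frame decision along these lines (a sketch based on the behavior as described; NVIDIA's actual driver logic is not public):

```python
def sync_this_frame(recent_fps: float, refresh_hz: float) -> bool:
    """Adaptive VSync as described above: keep VSync on only while the GPU
    can match the display, and drop it below refresh, which is exactly
    where tearing is most visible (the commenter's objection)."""
    return recent_fps >= refresh_hz
```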

March 27, 2013 | 07:36 PM - Posted by Ryan Shrout

We have more research into the different NVIDIA Vsync options coming up; stay tuned.

As for the timestamp difference to check for the gaps between t_present and t_display... we are on that same page as well.  :)
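For the curious, here is one hypothetical way a timestamp could be worked into the colored overlay, as suggested above: pack the Present() time into a pixel's RGB values so the capture side can recover t_present and compare it against the capture time (t_display). This is purely a sketch, not something the current tools do:

```python
def encode_timestamp_rgb(t_present_us: int) -> tuple[int, int, int]:
    """Pack the low 24 bits of a microsecond timestamp into one RGB pixel."""
    t = t_present_us & 0xFFFFFF          # 24 bits wraps every ~16.8 seconds
    return ((t >> 16) & 0xFF, (t >> 8) & 0xFF, t & 0xFF)

def decode_timestamp_rgb(rgb: tuple[int, int, int]) -> int:
    """Recover the 24-bit timestamp from a captured overlay pixel."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

# t_display minus the decoded value (modulo the 24-bit wrap) would then
# give the present-to-display latency the commenter is asking about.
```

In practice the capture pipeline would need to preserve these pixel values exactly, or use a coarser, redundant encoding, since scaling or compression could corrupt them.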

March 27, 2013 | 05:57 PM - Posted by Mawrtin (not verified)

Have you tried CF using RadeonPro? Apparently it offers some kind of dynamic V-sync similar to NVIDIA's Adaptive, if I'm not mistaken.

March 27, 2013 | 08:03 PM - Posted by Marty (not verified)

No, it does not. It's a frame limiter, which eliminates some of the stutter, but causes more lag (increases latency).
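To illustrate the trade-off Marty describes: a frame limiter in its simplest form holds each finished frame until a fixed interval has passed. A minimal sketch (function names are hypothetical, not RadeonPro's implementation):

```python
import time

TARGET_FRAME_S = 1.0 / 45.0   # e.g. a 45 FPS cap

def limited_present(render_frame, present, last_present_t: float) -> float:
    """Even out frame pacing by delaying presentation; the wait is the lag cost."""
    frame = render_frame()                    # contents reflect input sampled now
    wait = (last_present_t + TARGET_FRAME_S) - time.perf_counter()
    if wait > 0:
        time.sleep(wait)                      # smooths pacing, but the frame ages
    present(frame)                            # shown later than it was rendered
    return time.perf_counter()
```

The sleep is exactly where the latency comes from: the frame's contents grow older while it waits for its presentation slot.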

March 27, 2013 | 06:16 PM - Posted by Mangix

Hmmm. I wonder if there is a difference between Double and Triple Buffered VSync. Newer Valve games support both. Would appreciate testing there.

Also, are there driver settings that help/harm frame rating? Nvidia's Maximum Pre-rendered frames setting sounds like something that can have an effect.

March 27, 2013 | 07:25 PM - Posted by xbeaTX (not verified)

AMD complained on AnandTech about this type of testing being done with FRAPS... now everyone knows the real situation, and it's shocking.
This article sets the new standard of excellence... congratulations and thanks for the enormous work done... keep it up! :)

March 27, 2013 | 07:37 PM - Posted by Ryan Shrout

Thank you!  Help spread the word!

March 28, 2013 | 01:55 AM - Posted by TinkerToyTech

Posted to my Facebook page.

March 27, 2013 | 07:40 PM - Posted by Marty (not verified)

Ryan, I have a suspicion that you wanted to select Adaptive VSync on NVIDIA, but selected Adaptive (half refresh rate) instead in the control panel. Would you please check it out?

March 28, 2013 | 05:20 AM - Posted by showb1z (not verified)

²

Other than that, great article. I would also be interested in results with frame limiters.

March 27, 2013 | 08:23 PM - Posted by Dan (not verified)

This site is so awesome that it is one of the only ones that I disable Adblock in Chrome for. You deserve the ad $'s.

Rock on, Ryan and crew!

March 29, 2013 | 10:37 AM - Posted by Ryan Shrout

Thank you, we appreciate it!

March 27, 2013 | 08:45 PM - Posted by Mike G (not verified)

Thanks for all of the time taken to accurately test and then explain your Frame Rating methods to us. I wonder if AMD would be willing to have a representative come on and speak about how they will be addressing this issue. I for one will be holding off on purchasing an additional 7970 at this time.

March 27, 2013 | 09:43 PM - Posted by bystander (not verified)

They have, with AnandTech. Though the person they spoke to was the single-GPU driver guy, they did mention they have a plan to offer fixes in July.
http://www.anandtech.com/tag/gpus

March 27, 2013 | 09:51 PM - Posted by Soulwager (not verified)

Is your capture hardware capable of capturing 1080p @ 120 Hz? The data rate should be less than 1440p @ 60 Hz.

Also, I would like to see some StarCraft 2 results. It's frequently CPU limited, and I'm wondering how that impacts the final experience compared to a GPU limited situation. I'd recommend the "unit preloader" map as a benchmark run, once to pre-load all the assets and again for the capture.

March 29, 2013 | 10:38 AM - Posted by Ryan Shrout

Actually, I think 1080p@120 is a higher pixel clock than 25x14@60; we tried to do basic 120 Hz testing right before publication without luck, but we'll be trying again soon.

As for SC2, we'll take a look.

March 29, 2013 | 03:00 PM - Posted by Soulwager (not verified)

You're right, the pixel clock is higher. I guess I was only thinking about the total number of pixels that need to be recorded, but the smaller resolution is more heavily impacted by overscan.
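A quick back-of-the-envelope check of the exchange above; the figures count only active pixels, and real pixel clocks run higher once horizontal and vertical blanking are added:

```python
def active_pixels_per_second(width: int, height: int, refresh_hz: int) -> float:
    """Active-area pixel throughput, ignoring blanking intervals."""
    return width * height * refresh_hz

for name, mode in {"1920x1080 @ 120 Hz": (1920, 1080, 120),
                   "2560x1440 @ 60 Hz": (2560, 1440, 60)}.items():
    print(f"{name}: {active_pixels_per_second(*mode) / 1e6:.1f} Mpx/s")
# 1920x1080 @ 120 Hz -> 248.8 Mpx/s; 2560x1440 @ 60 Hz -> 221.2 Mpx/s.
# 1080p120 already needs the higher rate before any blanking overhead.
```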

March 28, 2013 | 12:28 AM - Posted by SPBHM

great stuff.

I would be interested in seeing some results with a frame rate limit, not from vsync but from another limiter, something like a 45 FPS max for Crysis 3 (higher than a single card, around the average for the CF). You can easily do that with Dxtory.

March 28, 2013 | 12:49 AM - Posted by Trey Long (not verified)

It's nice to see the truth come out. Save the excuses and fix it, AMD.

March 28, 2013 | 06:41 AM - Posted by Carol Smith (not verified)

Why did you completely ignore AMD's RadeonPro freeware, which offers (and has offered for years) a Dynamic V-Sync superior to NVIDIA's Adaptive V-Sync?
Tom's Hardware seems to be the only site that actually used this utility to make a fair comparison between SLI and CrossFire.

RadeonPro's Dynamic V-Sync was shown to be clearly superior to NVIDIA's Adaptive V-Sync implementation.

Here is the Tom's link
http://www.tomshardware.com/reviews/radeon-hd-7990-devil13-7970-x2,3329-...

I hope that you learn from your fellow journalists and include this fantastic solution in your upcoming article.
Thanks for your hard work and good luck.

March 28, 2013 | 09:27 AM - Posted by Marty (not verified)

A frame limiter improves stuttering, but increases lag. You lose some FPS too. So with CrossFire you have to choose between the devil and Lucifer.

March 29, 2013 | 10:01 AM - Posted by Carol Smith (not verified)

Exact same thing with NVIDIA SLI and Adaptive V-Sync: if you don't want stuttering, you have to sacrifice frame rates.

March 29, 2013 | 10:39 AM - Posted by Ryan Shrout

I'm going to take a look, but this is NOT "AMD's" software.

March 28, 2013 | 06:50 AM - Posted by Andre3000 (not verified)

Thanks for the eye opener! Which AMD drivers were used for this review? I have read the article... but I might have missed it.

March 29, 2013 | 10:39 AM - Posted by Ryan Shrout

For AMD we used 13.2 beta 7 and for NVIDIA we used 314.07.

March 29, 2013 | 11:27 AM - Posted by Steve (not verified)

Ryan: Is it possible to test AMD's CrossFire with older drivers, to see if this problem has been around for a long time or if it is a more recent problem introduced by AMD's continuous driver updates to improve speeds?
