
Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing

AMD CrossFire and Eyefinity Concerns

It will become painfully apparent as we dive into the benchmark results on the following pages, but I feel that addressing the issues CrossFire and Eyefinity are creating up front will make the results easier to understand.  As we showed you for the first time in Frame Rating Part 3, AMD CrossFire configurations have a tendency to produce a lot of runt frames, in many cases in a nearly perfect alternating pattern.  Not only does this mean that frame time variance will be high, it also tells me that the performance gained by adding a second GPU is essentially useless in this case.  The obvious question then becomes, “In Battlefield 3, does it even make sense to use a CrossFire configuration?”  My answer, based on the graph below, would be no.
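To make the runt problem concrete, here is a minimal sketch of how frames could be classified as runts from the number of scanlines their overlay color occupies in the captured video. This is written in Python for readability rather than the Perl our actual analysis scripts use, and the threshold and function names are assumptions for illustration, not the exact values in our tooling.

```python
# Hypothetical sketch (not PC Perspective's actual Perl tooling): given the
# number of scanlines each overlay color occupies in the captured video, flag
# "runt" frames that are too small to contribute to perceived animation.
RUNT_SCANLINE_THRESHOLD = 21  # assumed cutoff; the real analysis may differ

def classify_frames(scanlines_per_frame):
    """scanlines_per_frame: list of scanline counts, one entry per game frame
    as it appeared on screen. Returns a parallel list of 'runt'/'full' labels."""
    return ["runt" if n < RUNT_SCANLINE_THRESHOLD else "full"
            for n in scanlines_per_frame]

if __name__ == "__main__":
    # An alternating CrossFire pattern like this shows half the frames as runts,
    # so the "extra" frames from the second GPU add almost nothing you can see.
    sample = [540, 3, 545, 2, 538, 4]
    print(classify_frames(sample))  # ['full', 'runt', 'full', 'runt', ...]
```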


Another issue with AMD configurations in CrossFire mode showed its face when we tried 5760x1080 testing.  In nearly every case, an AMD CrossFire dual-GPU configuration running Eyefinity at 5760x1080 showed alternating dropped frames.  Whereas with single monitor gaming we were seeing some full game frames turned into runts at the display, with Eyefinity those frames were missing completely (seen as overlay colors missing from the expected pattern). 

This actually caused two things to happen.  First, our Perl scripts and data generation would sometimes error out completely (after the capture and extraction steps) because the code was never able to find the initial sequence of 16 colors in the correct order to validate the video capture.  We are still working on a fix so we can present that information in a usable format, but for now we will point out specifically when that occurred.  Second, the animation was smoother!  Without the runts getting in the way of the animation sequences, the motion on the screen was actually more fluid than it would have been otherwise, but don’t take this as a vindication of what AMD’s Eyefinity is actually doing.  While the animation is smoother, you are still not seeing any benefit from the second card in your CrossFire configuration, and you could view it simply as wasted investment.
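As a rough illustration of how those drops show up in the data, the sketch below (again Python for readability, with assumed names and values, not our actual Perl code) walks the sequence of overlay color indices read from the capture and flags any skipped colors as dropped frames.

```python
# Hypothetical sketch of spotting dropped frames from the extracted overlay:
# each game frame is tagged with the next color in a repeating 16-color
# sequence, so a color skipped in the capture implies a frame that never
# reached the display. Names and values here are assumptions for illustration.
PALETTE_SIZE = 16  # the overlay cycles through 16 known colors

def find_dropped_frames(observed_indices):
    """observed_indices: overlay color index (0-15) read from each captured
    frame, in display order. Returns the number of colors skipped at each step."""
    dropped = []
    for prev, cur in zip(observed_indices, observed_indices[1:]):
        expected = (prev + 1) % PALETTE_SIZE
        dropped.append((cur - expected) % PALETTE_SIZE)  # 0 means no drop
    return dropped

if __name__ == "__main__":
    # Alternating drops, as seen with CrossFire + Eyefinity at 5760x1080:
    print(find_dropped_frames([0, 2, 4, 6, 8]))  # [1, 1, 1, 1]
```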

Finally, we saw some interesting visual problems in our Eyefinity captures that caused us to raise some eyebrows.  Because the overlay changes colors with every frame, I was able to notice some instances where the digital scan out of the graphics cards was not matching up; frames were being split by other frames.


This second screen uses an alternate overlay for specific multi-GPU scenarios

At first I was afraid something was going on with our capture hardware, that somehow the EDID of the Datapath VisionDVI-DL card was communicating incorrectly with the AMD GPUs.  But in fact, we saw several problems and inconsistencies with AMD’s graphics performance when more than one display was attached to the system, even if we were NOT in an Eyefinity setup!  As I later learned, enabling Vsync does not work at all with Eyefinity, and that, combined with the results of our testing (of which the screenshot above is an indicator), leads me to believe that something is fundamentally wrong with AMD’s Eyefinity implementation.  And if it’s not “wrong”, it is definitely counterintuitive.  We’ll be asking AMD for more information in the coming weeks and hope to learn more from them as our Frame Rating process evolves. 

 

For our first set of data we are going to be looking at the Radeon HD 7970 GHz Edition and the GeForce GTX 680 in both single and dual-GPU combinations.  By looking at the high-end graphics cards first, we can set a baseline for what to expect as we move on to the mid-range and even mainstream options.

March 27, 2013 | 04:34 PM - Posted by Ryan Shrout

I'll see if we can take a look at that.

March 29, 2013 | 12:31 PM - Posted by jgstew

The whole time I was reading this article, I was more and more curious how Virtu's technology would affect things. I'm curious about more than just their Virtual V-sync; their other options as well, for both single and multiple GPUs. Virtu has not had the scaling that SLI & CrossFire have had, but perhaps their technology would show well in other areas with this analysis.

I do feel that Frame Rating & input-to-display latency are much more interesting metrics.

Great work on the article.

March 27, 2013 | 02:32 PM - Posted by serpinati of the wussu (not verified)

I've read somewhere that it will not be the norm for pcper to do this type of testing with all video card reviews (too labor intensive). Is this true?

If it is true, will pcper at least record the frame data and simply give it a quick look to make sure video cards aren't doing something absolutely crazy? (For example, if you were reviewing those 7970s in CrossFire you might not plan to actually analyze the frame times, but you would at least look over the recorded frames anyway, catch the crazy runt frames, and mention it in your review.)

March 27, 2013 | 04:34 PM - Posted by Ryan Shrout

No, my plan is to take this route going forward.

March 27, 2013 | 02:57 PM - Posted by Anonymous (not verified)

Greatly appreciate the work behind this! And opening up the tools / scripts to everyone, pushing it to other hardware magazines. Huge kudos! Oh, and you should slap Anandtech over the head for not even mentioning your involvement in the new metering method in their introductory article, which only mentions nVidia...

Is there any way to determine the latency between t_present and t_display? If not, there should be - maybe some kind of timestamp could be worked into the overlay? That would be interesting not only in the context of VSync, but also regarding nVidia's frame metering, which must take some time analyzing the frametimes. Supposedly, AMD has a smoothing algorithm coming up for Crossfire as well, so there it would also be valuable information.

Regarding Adaptive / Smooth VSync: they clearly come off too well in this article. They're not the focus of the article, of course, but a little more thought should still have been put behind this, considering the amount of time taken to create this awe-inspiring effort of an article and the tools behind it. Adaptive VSync turns VSync off when tearing is most annoyingly visible, i.e. at framerates below the monitor refresh rate; true triple buffering (instead of the queue nowadays called triple buffering) would be the good solution here. Smooth Vsync does not seem to do anything particularly positive judging from what little measurement is available in the article - only time (further tests, that is) will tell what it actually does...

March 27, 2013 | 04:36 PM - Posted by Ryan Shrout

We have more research into the different NVIDIA Vsync options coming up, stay tuned.

As for the timestamp different to check for the gaps between t_present and t_display...we are on that same page as well.  :)

March 27, 2013 | 02:57 PM - Posted by Mawrtin (not verified)

Have you tried CF using RadeonPro? Apparently it offers some kind of Dynamic V-sync similar to nVidia's Adaptive, if I'm not mistaken.

March 27, 2013 | 05:03 PM - Posted by Marty (not verified)

No, it does not. It's a frame limiter, which eliminates some of the stutter, but causes more lag (increases latency).

March 27, 2013 | 03:16 PM - Posted by Mangix

Hmmm. I wonder if there is a difference between Double and Triple Buffered VSync. Newer Valve games support both. Would appreciate testing there.

Also, are there driver settings that help/harm frame rating? Nvidia's Maximum Pre-rendered frames setting sounds like something that can have an effect.

March 27, 2013 | 04:25 PM - Posted by xbeaTX (not verified)

AMD has complained on Anandtech about this type of testing using FRAPS ... now anyone can know the real situation, and it's shocking.
This article sets the new standard of excellence ... congratulations and thanks for the enormous work done... keep it up! :)

March 27, 2013 | 04:37 PM - Posted by Ryan Shrout

Thank you!  Help spread the word!

March 27, 2013 | 10:55 PM - Posted by TinkerToyTech

Posted to my facebook page

March 27, 2013 | 04:40 PM - Posted by Marty (not verified)

Ryan, I have a suspicion that you wanted to select Adaptive VSync on NVidia, but selected Adaptive (half refresh rate) instead in the control panel. Would you please check it out?

March 28, 2013 | 02:20 AM - Posted by showb1z (not verified)

²

Other than that great article. Would also be interested in results with frame limiters.

March 27, 2013 | 05:23 PM - Posted by Dan (not verified)

This site is so awesome that it is one of the only ones that I disable Adblock in Chrome for. You deserve the ad $'s.

Rock on, Ryan and crew!

March 29, 2013 | 07:37 AM - Posted by Ryan Shrout

Thank you, we appreciate it!

March 27, 2013 | 05:45 PM - Posted by Mike G (not verified)

Thanks for all of the time taken to accurately test and then explain your frame rating methods to us. I wonder if AMD would be willing to have a representative come on and speak on how they will be addressing this issue. I for one will be holding off on purchasing an additional 7970 at this time.

March 27, 2013 | 06:43 PM - Posted by bystander (not verified)

They have, with AnandTech. Though the person they spoke to was the single-GPU driver guy, they did mention they have a plan to offer fixes in July.
http://www.anandtech.com/tag/gpus

March 27, 2013 | 06:51 PM - Posted by Soulwager (not verified)

Is your capture hardware capable of capturing 1080p @ 120hz? The data rate should be less than 1440p@60hz.

Also, I would like to see some StarCraft 2 results. It's frequently CPU limited, and I'm wondering how that impacts the final experience compared to a GPU-limited situation. I'd recommend the "unit preloader" map as a benchmark run, once to pre-load all the assets and again for the capture.

March 29, 2013 | 07:38 AM - Posted by Ryan Shrout

Actually, I think 1080p@120 is a higher pixel clock than 25x14@60; we tried to do basic 120 Hz testing right before publication without luck but we'll be trying again soon.

As for SC2, we'll take a look.

March 29, 2013 | 12:00 PM - Posted by Soulwager (not verified)

You're right, the pixel clock is higher. I guess I was only thinking about total number of pixels that need to be recorded, but the smaller resolution is more heavily impacted by overscan.

March 27, 2013 | 09:28 PM - Posted by SPBHM

great stuff.

I would be interested in seeing some results with a framerate limit, not from vsync but from another limiter, something like a 45 FPS cap for Crysis 3 (higher than a single card, around the average for the CF). You can easily do that with Dxtory.

March 27, 2013 | 09:49 PM - Posted by Trey Long (not verified)

It's nice to see the truth come out. Save the excuses and fix it, AMD.

March 28, 2013 | 03:41 AM - Posted by Carol Smith (not verified)

Why did you completely ignore AMD's RadeonPro freeware?
It offers, and has offered, a Dynamic V-Sync that's superior to Nvidia's Adaptive V-Sync for years now.
Tom's Hardware seems to be the only site that actually used this utility to make a fair comparison between SLI and Crossfire.

RadeonPro's Dynamic V-Sync was shown to be clearly superior to Nvidia's Adaptive V-Sync implementation.

Here is the Tom's link
http://www.tomshardware.com/reviews/radeon-hd-7990-devil13-7970-x2,3329-...

I hope that you learn from your fellow journalists and include this fantastic solution in your upcoming article.
Thanks for your hard work and good luck.

March 28, 2013 | 06:27 AM - Posted by Marty (not verified)

A frame limiter improves stuttering, but increases lag. You lose some FPS too. So you have to decide between the devil and Lucifer in the case of Crossfire.

March 29, 2013 | 07:01 AM - Posted by Carol Smith (not verified)

It's the exact same thing with Nvidia SLI and Adaptive V-Sync: if you don't want stuttering, you have to sacrifice framerates.

March 29, 2013 | 07:39 AM - Posted by Ryan Shrout

I'm going to take a look, but this is NOT "AMD's" software.

March 28, 2013 | 03:50 AM - Posted by Andre3000 (not verified)

Thanks for the eye-opener! Which AMD drivers were used for this review? I have read the article... but I might have missed it?

March 29, 2013 | 07:39 AM - Posted by Ryan Shrout

For AMD we used 13.2 beta 7 and for NVIDIA we used 314.07.

March 29, 2013 | 08:27 AM - Posted by Steve (not verified)

Ryan: Is it possible to test AMD's CF with older drivers to see if this problem has been around for a long time or if it is a more recent problem with AMD's continuous driver upgrades to improve speeds?
