Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing

Data Analysis via FCAT

Finally, we get into the art of the Frame Rating process, analyzing the data that is presented to us via the overlay, capture and extractor.  NVIDIA wrote a set of Perl scripts they are calling FCAT (Frame Capture and Analysis Tool) that takes as input the XLS file mentioned above and generates a set of graphs that attempt to tell a performance story.  There are Perl scripts that generate batch files that call other Perl scripts – it’s not pretty but it works.
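
To make that script chain a little more concrete, here is a minimal sketch of the same idea in Python rather than NVIDIA's Perl; the file names and directory layout are assumptions for illustration only, not the actual FCAT structure.

```python
# Hypothetical sketch of the FCAT-style analysis flow in Python, not the real Perl scripts.
# Assumes each run was exported as a plain-text file of frame times in milliseconds.
from pathlib import Path

def analyze_run(data_file: Path) -> dict:
    frame_times = [float(x) for x in data_file.read_text().split()]
    total_seconds = sum(frame_times) / 1000.0
    return {
        "frame_times": frame_times,           # feeds a PLOT-style graph
        "sorted_times": sorted(frame_times),  # feeds a PERcentile-style graph
        "avg_fps": len(frame_times) / total_seconds,
    }

for run in sorted(Path("runs").glob("*.txt")):  # hypothetical directory layout
    results = analyze_run(run)
    print(f"{run.name}: {results['avg_fps']:.1f} FPS average")
```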

For those of you interested, I have uploaded the most recent copy of FCAT right here for you to download and look over.  I am curious to get some feedback from the community on these and look forward to any improvements the engaged users can come up with. 

While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we are generating with additional code of our own. 

 

The PLOT File

The primary file that is generated from the extracted data is a plot of calculated frame times, including runts.  The numbers here represent the amount of time that frames appear on the screen for the user; a “thinner” line across the time span represents frame times that are consistent and thus should produce the smoothest animation for the gamer.  A “wider” line, or one with a lot of peaks and valleys, indicates a lot more variance and is likely caused by a lot of runts being displayed.
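
As a rough illustration of what goes into the PLOT file, the snippet below (an assumed data layout, not FCAT's own code) plots per-frame display times against elapsed time; consistent values give the “thin” line described above.

```python
# Illustrative only: plot per-frame display times (ms) over the course of a run.
# frame_times is assumed to hold one display time in milliseconds per captured frame.
import matplotlib.pyplot as plt

frame_times = [16.7, 16.9, 16.5, 2.1, 31.2, 16.8, 16.6]  # made-up values, including one runt

elapsed, t = [], 0.0
for ft in frame_times:
    elapsed.append(t / 1000.0)   # seconds into the run when this frame appeared
    t += ft

plt.plot(elapsed, frame_times, linewidth=0.8)
plt.xlabel("Time (s)")
plt.ylabel("Frame time (ms)")
plt.title("PLOT-style view: frame times, runts included")
plt.show()
```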

 

The PERcentile File

Scott introduced the idea of frame time percentiles months ago, but now that we have different data from direct capture as opposed to FRAPS, the results might be even more telling.  In this case, FCAT is showing percentiles not by frame time but instead by instantaneous FPS.  This will tell you the minimum frame rate that will appear on the screen for any given percentage of time during our benchmark run.  The 50th percentile should be very close to the average frame rate of the benchmark, but as we creep closer to the 100th percentile we see how far the frame rate falls. 

The closer this line is to perfectly flat (parallel to the horizontal axis) the better, as that would mean we are running at a constant frame rate the entire time.  A steep decline on the right hand side tells us that frame times are varying more and more frequently and might indicate potential stutter in the animation.
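
A simple per-frame approximation of that percentile curve (not weighted by on-screen time, and not necessarily FCAT's exact method) could look like this:

```python
# Sketch of the PERcentile idea: convert frame times (ms) to instantaneous FPS,
# then ask what minimum frame rate is sustained for a given share of frames.
def fps_percentiles(frame_times):
    fps = sorted((1000.0 / t for t in frame_times if t > 0), reverse=True)
    n = len(fps)
    # At the p-th percentile, roughly p percent of frames ran at this FPS or better.
    return [(100.0 * (i + 1) / n, f) for i, f in enumerate(fps)]

for pct, f in fps_percentiles([16.7, 16.7, 20.0, 33.3, 50.0]):
    print(f"{pct:5.1f}th percentile: {f:5.1f} FPS")
```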

 

The RUN File

While the two graphs above show combined results for a set of cards being compared, the RUN file shows the results from a single card on a single test.  It is in this graph that you can see interesting data about runts, drops, average frame rate and the actual frame rate of your gaming experience. 

For tests that show no runts or drops, the data is pretty clean.  This is the familiar frame-rate-over-time graph that has become the standard for performance evaluation of graphics cards.

A test that does have runts and drops will look much different.  The black bar labeled FRAPS indicates the average frame rate over time that traditional testing would show if you counted the drops and runts in the equation – as FRAPS FPS measurement does.  Any area in red is a dropped frame – the wider the amount of red you see, the more colored bars from our overlay were missing in the captured video file, indicating the gamer never saw those frames in any form.

The wide yellow area represents runts – the thin bands of color in our captured video that we have determined do not add to the animation of the image on the screen.  The larger the area of yellow, the more often those runts are appearing.

Finally, the blue line is the measured FPS over each second after removing the runts and drops.  We are going to be calling this metric the “observed frame rate” as it measures the actual speed of the animation that the gamer experiences.
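
Here is a hedged sketch of the observed frame rate idea: per second of the run, count only the frames whose on-screen time exceeds a runt threshold. The threshold value and the data layout are illustrative assumptions, not FCAT's exact rules.

```python
# Assumed input: one display time in milliseconds per frame; 0.0 marks a dropped frame.
RUNT_THRESHOLD_MS = 2.0  # illustrative cutoff, not FCAT's actual runt definition

def observed_fps(frame_times, runt_threshold_ms=RUNT_THRESHOLD_MS):
    per_second = {}
    t = 0.0
    for ft in frame_times:
        second = int(t / 1000.0)
        if ft > runt_threshold_ms:   # drops (0 ms) and runts do not count as shown frames
            per_second[second] = per_second.get(second, 0) + 1
        t += ft                      # runts still occupy a sliver of screen time; drops occupy none
    return per_second                # {second of the run: frames that advanced the animation}

print(observed_fps([16.7] * 30 + [2.0, 31.0] * 15))  # the 2 ms runts are excluded -> {0: 45}
```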

 

The PCPER FRAPS File

While the graphs above are produced by the default version of the scripts from NVIDIA, I have modified and added to them in a few ways to produce additional data for our readers.  The first file shows a sub-set of the data from the RUN file above, the average frame rate over time as defined by FRAPS, though we are combining all of the GPUs we are comparing into a single graph.  This will basically emulate the data we have been showing you for the past several years.

 

The PCPER Observed FPS File

This graph takes a different subset of data points and plots them similarly to the FRAPS file above, but this time we are looking at the “observed” average frame rates, shown previously as the blue line in the RUN file above.  This takes out the dropped and runt frames, giving you the performance metric that actually matters – how many frames are shown to the gamer that actually advance the animation. 

As you’ll see in our full results on the coming pages, a big difference between the FRAPS FPS graph and the Observed FPS graph indicates cases where the gamer is likely not getting the full benefit of the hardware investment in their PC. 
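
As a quick worked example with made-up numbers, the gap can be dramatic:

```python
# Made-up numbers purely to illustrate the FRAPS vs. observed gap discussed above.
fraps_fps = 90          # every presented frame counted, runts and drops included
runts, drops = 30, 5    # frames that never meaningfully reached the screen
observed = fraps_fps - runts - drops
print(f"FRAPS: {fraps_fps} FPS, observed: {observed} FPS")  # FRAPS: 90 FPS, observed: 55 FPS
```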

 

The PCPER Frame Time Variance File

Of all the data we are presenting, this is probably the one that needs the most discussion.  In an attempt to create a new metric for gaming and graphics performance, I wanted to try to find a way to define stutter based on the data sets we had collected.  As I mentioned earlier, we can define a single stutter as a variance level between t_game and t_display. This variance can be introduced in t_game, t_display, or on both levels.  Since we can currently only reliably test the t_display rate, how can we create a definition of stutter that makes sense and that can be applied across multiple games and platforms?

We define a single frame variance as the difference between the current frame time and the previous frame time – how consistently two successive frames are presented to the gamer.  However, as I found in my testing, plotting the value of this frame variance is nearly a perfect match to the data presented by the minimum FPS (PER) file created by FCAT.  To be more specific, stutter is only perceived when there is a break from the previous animation frame rates. 

Our current running theory for a stutter evaluation is this: find the current frame time variance by comparing the current frame time to the running average of the frame times of the previous 20 frames.  Then, by sorting these frame time variances and plotting them in percentile form, we can get an interesting look at potential stutter.  Comparing the frame times to a running average rather than just to the previous frame should prevent potential problems from legitimate performance peaks or valleys found when moving from a highly compute-intensive scene to a less demanding one.
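
A minimal sketch of that calculation, under the assumptions just described (a 20-frame running average and a simple percentile sort; the production analysis may differ in its details):

```python
# Compare each frame time to the running average of the previous 20 frames,
# then express the resulting variances as a percentile curve (the ISU idea).
def frame_time_variance(frame_times, window=20):
    variances = []
    for i in range(window, len(frame_times)):
        running_avg = sum(frame_times[i - window:i]) / window
        variances.append(abs(frame_times[i] - running_avg))   # deviation in ms
    return variances

def percentile_curve(values):
    ordered = sorted(values)
    n = len(ordered)
    return [(100.0 * (i + 1) / n, v) for i, v in enumerate(ordered)]

times = [16.7] * 40 + [16.7, 35.0] * 10   # a smooth stretch, then alternating stutter
for pct, v in percentile_curve(frame_time_variance(times))[::8]:
    print(f"{pct:5.1f}th percentile: {v:4.1f} ms variance")
```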

While we are still trying to figure out if this is the best way to visualize stutter in a game, we have seen enough evidence in our game play testing, and by comparing the above graphic to other data generated through our Frame Rating system, to be reasonably confident in our assertions.  So much so, in fact, that I am going to call this data the PCPER ISU – beer fans will appreciate the acronym: International Stutter Units.

To compare these results you want to see a line that is as close to the 0ms mark as possible, indicating very little frame time variance when compared to a running average of previous frames.  There will be some inevitable incline as we reach the 90th percentile and beyond, but that is expected with any game play sequence that varies from scene to scene.  What we do not want to see is a sharper upturn, which would indicate higher frame variance (ISU) and could be an indication that the game suffers from microstuttering and hitching problems.

 

I think the best way to really understand all of this data is just to start looking at it and discussing it as we go along.  We are going to start with a battle between similarly priced graphics cards from AMD and NVIDIA, the GeForce GTX 680 and the Radeon HD 7970 GHz Edition, using six different PC games at different resolutions including 1920x1080, 2560x1440 and even 5760x1080 for Surround/Eyefinity comparisons.  The games we are going to test with are Battlefield 3, Crysis 3, DiRT 3, Far Cry 3, Skyrim and Sleeping Dogs.  No, we did not purposefully try to find as many games with ‘3’ in them as possible.

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: AMD Radeon HD 7970 GHz Edition 3GB / NVIDIA GeForce GTX 680 2GB
Graphics Drivers: AMD 13.2 beta 7 / NVIDIA 314.07 beta
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

Testing both the 1920x1080 and 2560x1440 resolutions was very straightforward, as we simply used our capture configuration as described on the previous pages.  However, testing AMD Eyefinity and NVIDIA Surround required a bit more imagination, since they require three different monitor connections and higher data rates than the Datapath VisionDVI-DL is capable of.  By capturing only the left-hand monitor, though, we are able to limit the resolution seen by the capture card to 1920x1080 while still presenting a full 5760x1080 to the gamer.  This allows us to capture the overlay and thus measure frame rates and frame times as expected.

March 27, 2013 | 09:16 AM - Posted by grommet

Hey Ryan, is the variable "t_ready" that the text refers to "t_present" in the diagram two paragraphs above?

March 27, 2013 | 09:22 AM - Posted by Ryan Shrout

Yes, fixed, thanks.

March 27, 2013 | 10:20 AM - Posted by Prodeous (not verified)

I was just wondering.. since the capture and analysis system relies on the left bar only, why doesn't it truncate the rendered frame itself and keep only 1-3 pixels from the left side of the frame?

For some tests if you want to show off the specific "runts" "stutters" then you can keep it the entire frame captured.

But for most tests, you can record only the left colour bar and do analysis on that bar only, therefore you will not have to save the 600GB per second of uncompressed frames.

Just a thought.

March 27, 2013 | 10:40 AM - Posted by Ryan Shrout

We have that option (notice the crop capture option in our software) but we often like to reference recorded video down the road.

March 28, 2013 | 08:53 AM - Posted by Luciano (not verified)

If you write a piece of software with that "colored" portion only in mind you dont need any of the additional hardware and any user could use it just like Fraps.

March 28, 2013 | 07:02 PM - Posted by Anonymous (not verified)

LOL - AMD you are soooo freaking BUSTED !

Runt frames and zero time ghost frames is an AMD specialty.

AMD is 33% slower than nVidia, so amd pumped in ghosts and runts !!

Their crossfire BF3 is a HUGE CHEAT.

ROFL - man there are going to be some really pissed off amd owners whose $1,000 dual gpu frame rate purchases have here been exposed as LIES.

Shame on you amd, runts and ghosts, and all those fanboys screaming bang for the buck ... little did they know it was all a big fat lie.

March 29, 2013 | 02:42 AM - Posted by Anonymous (not verified)

true words of a fan boy.

May 26, 2013 | 08:23 PM - Posted by Johan (not verified)

Even i have Nvidia, but his comment, was really a fan-boy comment.

March 27, 2013 | 09:43 AM - Posted by Anon (not verified)

Run the benchmarks like any sane gamer would do so!

Running v-sync disabled benches with 60+ fps is dumb!

Running v-sync enabled benches with 60- fps is way more dumb!

YOU'RE DOING IT WRONG!

March 27, 2013 | 10:00 AM - Posted by John Doe (not verified)

Don't forget that V-sync also sometimes fucks shit up.

The best solution is to limit the frames on the game end, or using a 3rd party frame limiter.

March 27, 2013 | 10:51 AM - Posted by Anon (not verified)

And sometimes disabling it fucks physics like in Skyrim.

Your point?

March 27, 2013 | 10:18 AM - Posted by grommet

Read the whole article before commenting- there is an entire page that focuses on your argument (surprise- you're not the first to suggest this!!)

March 27, 2013 | 10:32 AM - Posted by John Doe (not verified)

And if you have actually watched one of his streams, you'd have seen that he INDEED IS doing it "wrong".

When he was testing Tomb Raider, the card was artifacting and flickering and shimmering. It was complete and totally obvious that he didn't know what he's doing.

March 27, 2013 | 10:31 AM - Posted by Anonymous (not verified)

To the guy who says "you're doing it wrong".

Not true. He's doing it right. Many hardcore gamers run v-sync disabled over 60 fps, and only a few of the scenarios tested are consistently over 60fps.

And read the story to see the OBSERVED FPS is much less in many cases, not just the FRAPs FPS (which does not give proper info at the end of the pipeline (at the display) what users actually see).

March 27, 2013 | 10:38 AM - Posted by John Doe (not verified)

There's absolutely nothing "hardcore" about running an old ass game at 1000 FPS with a modern card.

All it does is waste resources and make the card work at its hardest for no reason. It's no different than shifting gears at, say, a maximum of 6500 RPM on a car that revs to 7000 per gear. You should keep it lower so that the hardware doesn't get excessively stressed.

If the card is obviously way beyond the generation of the game you're playing, then you're best off, if possible, LIMITING frames/putting a frame lock on your end.

March 27, 2013 | 10:42 AM - Posted by Ryan Shrout

Vsync does cause a ton of other issues including input latency.

March 27, 2013 | 10:44 AM - Posted by Anon (not verified)

You've said this already in both the article and comments but you only talked about input latency.

What are those other issues?

March 27, 2013 | 10:52 AM - Posted by John Doe (not verified)

It makes the entire game laggy and unplayable depending on conditions.

You can read up over that TweakGuides guy's site on Vsync.

March 27, 2013 | 11:09 AM - Posted by Josh Walrath

Some pretty fascinating implications about the entire game/rendering pipeline.  There are so many problems throughout, and the tradeoffs tend to impact each other in varyingly negative ways.  Seems like frames are essentially our biggest issue, but how do you get around that?  I had previously theorized that per pixel change in an asynchronous manner would solve a lot of these issues, but the performance needed for such a procedure is insane.  Essentially it becomes a massive particle model with a robust physics engine being the base of changes (movement, creation, destruction, etc.).

March 27, 2013 | 02:18 PM - Posted by Jason (not verified)

research voxels... the technology exists but is pretty far off.

March 27, 2013 | 05:44 PM - Posted by Anonymous (not verified)

Regarding per-pixel-updates: Check this out: https://www.youtube.com/watch?feature=player_embedded&v=pXZ33YoKu9w

It's obvious the result must be delivered in frames (no way around it with current monitor technology), but the engine in the vid clearly works differently from the usual by just putting the newly calculated pixels into the framebuffer that are ready by the time.

March 27, 2013 | 02:07 PM - Posted by bystander (not verified)

Due to many frames being forced to wait for the next vertical retrace mode, while others do not, it will result in some time metering issues. This can result in a form of stuttering.

March 29, 2013 | 02:49 AM - Posted by Anonymous (not verified)

hardcore gamer here play BF3 v-sync ON!

because:

1. My monitor (as 90% of us) is 60hz!
don't need the extra stress/heat/electric juice on card/system.

2. Fed up with frame tearing every time i turn around.

March 27, 2013 | 10:27 AM - Posted by Prodeous (not verified)

With regards to the Nvidia v-sync settings, it seems that Adaptive (half refresh rate) was selected, capping it at 30fps, rather than Adaptive, which would cap at 60fps.

Was that on purpose?

March 27, 2013 | 10:31 AM - Posted by Noah Taylor (not verified)

I'm still interested to see what type of results you come up with when using amd crossfire with a program like afterburner's riva tuner to limit the fps to 60, which would seems to be everyone's preference for this situation.

March 27, 2013 | 10:42 AM - Posted by Ryan Shrout

I think you'll find the same problems that crop up with Vsync enabled.

March 27, 2013 | 11:14 AM - Posted by Noah Taylor (not verified)

I have to admit the observed FPS graphs are DAMNING to AMD, and I own 2 7970s and 7950s so I'm not remotely biased against them in any way. One thing I did notice is that dropped frames don't affect every game, so hopefully this is something AMD may be able to mitigate through driver tweaks.

I have to admit, crysis 3 can be a sloppy affair in AMD crossfire and now I can see exactly what I'm experiencing without trying to make guesswork out of it.

Regardless, the bottom line EVERYONE should take away from this, is that crossfire DOES NOT function as intended whatsoever, and we can now actually say AMD is deceptive in their marketing as well, this is taken directly from AMD's website advertising crossfire:

"Tired of dropping frames instead of opponents? Find a CrossFire™-certified graphics configuration that’s right for you."

They have built their business on a faulty product and every crossfire owner should speak up so that AMD makes the changes they have control over to FIX a BROKEN system.

March 27, 2013 | 11:17 AM - Posted by Josh Walrath

Just think how bad a X1900 CrossFire Edition setup would do here...  I think Ryan should dig up those cards and test them!

March 27, 2013 | 11:49 AM - Posted by SB48 (not verified)

I wonder if that one of the reasons why AMD never really released the 7990,
also if this was already a obvious problem with the 3870 X2...
or the previous gens card, like HD5000, 6000.

anyway, is there any input lag difference from NV to AMD (SLI-CF) without vsync?

also I would be curious to see some CPU comparison.

March 27, 2013 | 01:31 PM - Posted by John Doe (not verified)

I had both a dual 3870 and a dual 2900 setup, both were more or less the same thing.

Both has driven a CRT at 160HZ for Cube and both were silk smooth.

This is a recent issue. It has nothing to do with cards of that age. The major problem with those old cards was the lack of CrossFire application profiles. Before the beginning of 2010, you had absolutely no application profiles like you have with nVidia. So CF either worked or you had to change the game ".exe" names, which either worked or, made the game mess up or just kick you out of Steam servers due to VAC.

March 28, 2013 | 07:20 AM - Posted by Beany (not verified)

I agree. I have 2x 6970 and the Crossfire results here are just embarrassing for AMD. It clearly don't work well and i've noticed myself with certain games, because you don't need graphs and stuff, as often you can see it with your own eyes. It's just BAD.

I used to build hundreds of PC's with both Nvidia and AMD, and from that experience i also noticed that 1) NV's drivers are generally better and less problematic and 2) Crossfire don't work as well as SLI.

The only reason i use AMD is for image quality. Not 3D/gaming stuff but general image output on the desktop. Colours and whites look better and more intense with AMD cards on my high-end monitors (but without being saturated), and no amount of tweaking on NV hardware will get this stuff to look as good. I've tried it with multiple different monitors and AMD just looks better. NV hardware is washed out in comparison. Most people wont be able to tell the difference and you need a good IPS display to properly notice but it's always bothered me because i'm a graphics designer and image quality freak. If NV sort this out i'll switch to them instantly.

March 28, 2013 | 05:24 PM - Posted by Anonymous (not verified)

Dont know if you read the patch notes for Radeon, But they stated that they have improved crysis 3 and more performance with their latest releases AND that they are working on GREATLY increasing CROSSFIRE performance. Keep an eye open PCPER!

March 28, 2013 | 08:35 PM - Posted by Anonymous (not verified)

lol - it's just amazing isn't it. We'll have the same excuse the politicians always give us...

" "Tired of dropping frames instead of opponents? Find a CrossFire™-certified graphics configuration that’s right for you."

" We had no idea!!!! "

I want to know how many YEARS this has been going on.

These tools need to be used on 3870 Cf on up - 4850 4870 5850 5860 6850 6870 6950 6970 and so on....

So for many years we have been fed a complete freaking lie across the board, and from every "scientific unbiased tester website"....
Just wow. If there are conspiracy theorists here, you missed this one, because right now full blown conspiracy (if amd seemed it knew what it was and is doing) would be the conclusion. (instead amd marketing lies and incompetence and pure luck at the massive cover up over many years looks correct)

I also have to say, PALMFACE !

This also SHAMES EVERY REVIEW SITE OUT THERE. lol How stupid they have been - they should have spent far less time attacking nVidia looking and hunting for every tiny chink in their armor they could dream up and squared off at AMD since these COMPLAINTS about Crossfire have been present since DAY ONE.

Just think, all those reviewers, all those tests, all those years, and complete obliviousness till now... and a CONSTANT HAMMERING on people like me who tried in vain to point out the CF issues.
"it's your imagination"
"you're crazy"
"nVidia shill"
"paid operative"
"I don't see anything wrong ! "

Just W O W.

March 29, 2013 | 05:59 AM - Posted by John Doe (not verified)

You're a moron and have some real issues.

I ran that HD 2900 CF setup up to 200 HZ and haven't had a SINGLE frametime issue with it.

I was playing Cube on it at 160 HZ on a Diamondtron SB2070 and it didn't skip a SINGLE beat.

March 29, 2013 | 12:01 PM - Posted by Anonymous (not verified)

LOL - a single example, untested, swears the amd fanboy, on an obsolete hd2900 said to be the most terrible hd flagship even by amd fanboys, and one game, from the historical mind of the fanboy....

R O F L - and you call me names.

No, we need the entire lineup of amd tested for the YEARS their scam has existed.

I feel a MASSIVE VINDICATION coming.

BTW - hold onto your undies feller, AS THIS SOFTWARE FCAT WITH THE COLOR BANDS IS BEING RELEASED TO THE PUBLIC FIRST ON MSI AFTERBURNER ON MONDAY AND THEN ON EVGA PRESICION X LATER IN THE WEEK !

ROFL - The truth is coming amd fan. NOTHING will stop it now.

March 29, 2013 | 03:31 PM - Posted by John Doe (not verified)

Indeed.

I bought those 2900's because they were fucking cheap as hell because I bought them SECOND HAND and had a couple of PULSE Digital PWM's and 512-bit bus with a ton of memory bandwidth and HDCP onboard.

How many words in that text did you had to Google?

April 2, 2013 | 09:15 PM - Posted by Anonymous (not verified)

Hey I, that's ME, I loved the HD2900, but all the amd fanboys were crying, it lost miserably to the nVidia flagship.
Have a good friend who got a Sapphire hd2900 "pro" version, but it wasn't the pro, as I recall it had an unlisted weird and in between 320 shaders with the 256 bus.
He had EXTENSIVE trouble with it - I spent countless hours hammering out the bugs in every board he ever ran it in - a Dell dimension 4700 for instance kept it locked at 500mhz core and 600mhz memory.
He hammered the crap out of it for years, he broke it three times, I resurrected it three times, then finally against advice he kept the pink rubbery foam strip off the VRMs upon reassembly after cleaning - he was told NOT to but insisted it was just a cushion and not needed - that freaking cooked it for good -
I didn't hate the HD2900 but every amd fanboy did, and said so, it was a crushing blow to their egos, as nVidia stomped it on performance.

March 29, 2013 | 03:50 PM - Posted by Anonymous (not verified)

yet another fan boy, this time on the nvidia side. (paid?)
i wish you all disappeared.

March 29, 2013 | 03:52 PM - Posted by John Doe (not verified)

I've never been paid by anybody in my entire life to speak garbage about a product.

You have absolutely no idea who I am, which just goes to show how much of an ignorant idiot you are.

Have a nice and a long life.

April 3, 2013 | 11:17 PM - Posted by Anonymous (not verified)

He was replying to me, please be able to follow the threading insets.

Now you were calling whom an idiot ? Uhh, nevermind.

April 10, 2013 | 06:21 PM - Posted by John Doe (not verified)

The more you speak to yourself, the more of a retard you'll realise you are.

March 29, 2013 | 03:51 PM - Posted by Anonymous (not verified)

Well it could just as well be "lazy" game developers not bothering to optimize for AMD and just going with nVidia and then just accepting the faults that comes when using AMD and just hoping no one notices??

March 29, 2013 | 04:05 PM - Posted by John Doe (not verified)

Yes.

April 3, 2013 | 11:15 PM - Posted by Anonymous (not verified)

lol

It happens on many AMD Gaming Evolved games that have direct amd/developer input fella.

Nice try but you obviously are clueless and a total amd fan.

March 27, 2013 | 10:50 AM - Posted by JonasMX (not verified)

My only concern is that in my country there are a bunch of people who don't care about this kind of stuff. They think it is too boring and the readers don't care.

We are years behind what you are trying to demonstrate and bring to light, but definitely I'll follow you as an example.

Thanks for your hard work

March 27, 2013 | 11:12 AM - Posted by Anonymous (not verified)

Hi Ryan
I was wondering if you could also post the RUN graphs for the 680 sli. I didn't get a clear picture from the article if the 680 sli also dropped any frames, had any runt frames or totally avoided those behaviors.

March 27, 2013 | 11:25 AM - Posted by Ryan Shrout

I'll do that now!

March 28, 2013 | 08:42 PM - Posted by Anonymous (not verified)

No drops no runts for nVidia 680 SLI - NONE

LOL

That's why they were left out... like no news is good news.

I feel so good about myself, so many years, so many attacks on me by so many raging amd lovers.
I sooooo want to see them cover their heads in shame and admit how treasonous they were.
Yes treasonous liars. (lol whatever couldn't think of the word)
I suspect heck would freeze over first.

Oh look at that the river styx has ice skaters on it.

March 27, 2013 | 11:32 AM - Posted by Eriol (not verified)

Great job on the article! This is how GPUs should be benched in the future.

What I would also like to see is how the input latencies are affected by things like vsync or nvidia frame metering. Also, digging deeper into how frame rate limits can help alleviate the stutter issues would be welcome. I found that back when I was testing 6950cf that a 3rd party fps limiter was the best solution combating micro stutter.

In the end CF is just not worth it atm, regardless of vsync or fps limiters. AMD has a lot of work to do to get CF sorted out. I doubt a new driver 2-3 months down the line will be the end of it, but hopefully they're good to go when the next gen arrives.

March 27, 2013 | 07:22 PM - Posted by Ryan Shrout

Input latency testing is my next target!

March 27, 2013 | 11:47 AM - Posted by JLM (not verified)

Great work and great review/article!

I do wish you guys had included the effects of a framerate limiter in addition to vsync, though... I've actually had a better experience using a framerate limiter through Afterburner on 7970 CF than I do with vsync; it seems to almost always remove stutter completely while also preserving the scaling.

March 27, 2013 | 07:30 PM - Posted by Ryan Shrout

I'll definitely be giving that shot!

March 27, 2013 | 12:20 PM - Posted by Anonymous (not verified)

What about older Nvidia cards?
What about Crossfire 6970s?

We played a bit of Natural Selection 2 the other day..meh..silly bunny jumpy game..imbalanced..go Marine..
Anyway..

Crossfire enabled 72 FPS
Crossfire disabled 64 FPS
What a ripoff!

Then we tried it on 2 GTX 580s same room and systems and almost the same results..

Multi GPU setups are just a ripoff unless you get Exactly DOUBLE for performance.

March 27, 2013 | 01:35 PM - Posted by John Doe (not verified)

Scaling depends on A TON of factors. GPU's, drivers, applications so on and so forth.

You need to understand the basics of these things before you throw out such pointless and silly statements.

If you want to compare scalability, first compare a pair of single cards in FurMark. Then SLi/CF them and compare again.

March 29, 2013 | 03:07 AM - Posted by Anonymous (not verified)

CF is failing in half the games across the board, and has been for some time.
TPU for instance is an example where he mentions it if you don't believe me.
That's the other big laugh, shouldn't have been recommended for a long time now.

Maybe AMD will actually do something now, I doubt it though, I suspect apologetics, misdirection, and lies.

March 27, 2013 | 02:26 PM - Posted by Noah Taylor (not verified)

What is or isn't a ripoff is completely subjective and will never be an agreed upon issue. However, you can get a great experience moving to multi gpu setups that isn't necessarily double performance. Hi resolution gaming especially, at 2560x most single cards can't quite get you to >40fps avg with everything turned up in most modern games and what we will see in the next year. If i can get 15 more fps at that resolution it really DOES make all the difference.

However, if I buy 2 graphics cards because AMD says that framerate is how i should rate their cards, and there are frames being counted that detract from the quality of the experience, and essentially bloat their test scores up to 300% over what is actually being displayed, then not only have we been deceived, but AMD has really scammed you, since the extra card is actually detracting from the experience in many ways.

March 27, 2013 | 02:34 PM - Posted by John Doe (not verified)

Moral of the story, get this before it goes out of stock.

http://www.newegg.com/Product/Product.aspx?Item=N82E16814187196

It's a card of limited availibity and is one hell of a damn good, limited custom design.

I'm eagerly waiting for mine to be shipped.

Heh.

March 29, 2013 | 03:58 PM - Posted by Anonymous (not verified)

2GB is not enough for 1080p this days.

March 29, 2013 | 04:16 PM - Posted by John Doe (not verified)

Horse shit:

http://hardforum.com/showthread.php?t=1703048

March 27, 2013 | 02:36 PM - Posted by John Doe (not verified)

Moral of the story, get this before it goes out of stock:

http://www.newegg.com/Product/Product.aspx?Item=N82E16814187196

It's a card of very limited availability and is one hell of a damn good card.

Best cooler, only Tantalum caps and no cylindrical caps over VRM area, backplated, best looking. Such a boss. Reminds me of those black 68 Mustang Bosses.

March 27, 2013 | 12:30 PM - Posted by Mountainlifter (not verified)

Great Job, Ryan and team. No, Amazing job. I've been following your frame rating articles from the first one and I've been waiting for this full reveal for a long time.

I do hope you show some frame rating benches for the GTX TITAN too (cause I own one now).

I am finding it tough not to conclude that AMD's two GPU system is completely broken although it baffles me that a company of their scale could not detect/think of such issues.

They have been known to take shortcuts on drivers before. http://forum.notebookreview.com/sager-clevo/567681-should-i-switch-out-m... (I'm only getting this from a reliable NBR forumer).

March 27, 2013 | 01:38 PM - Posted by John Doe (not verified)

AMD drivers are put out by a single guy called Catalyst Maker... he has a little team to help him out, but after all, he's just one guy.

That's why they suck.

It's been this way for AGES. The last cards that didn't in one way or another suck, were the X1950's which were made by ATi and not AMD.

Also, dual 7900 setups aren't "completely broken". That's just your pointless assumption.

If there's anything that's "completely broken", it's your wallet for shelling out $1000 for a dumbass card.

March 27, 2013 | 01:51 PM - Posted by Josh Walrath

AMD/ATI used to have a smaller driver team than NV, but when AMD bought ATI and started down the APU path... they hired a lot more folks to support these products.  Catalyst Maker was originally Terry Makedon, but he has moved on to marketing more than software development.  There are (iirc) several groups now working on drivers, but their workload is arguably greater than it was before.  Supporting several different architectures spanning from GPUs to APUs is certainly no easy task, and I think they continue to hire more people.  Reviews like this provide a lot more impetus inside companies to improve their products, and the software drivers certainly are the quick fix for AMD now.

I think that NV still has a larger driver team, and that massive server room that pretty much only works on nightly builds of drivers from NV is still much bigger than what AMD has.  However, AMD is not as far behind as many think.  But obviously from what we have seen here, and read about from other sources, their focus was not exactly where it needed to be.

March 29, 2013 | 03:12 AM - Posted by Anonymous (not verified)

That's nice and all to amd but it looks like half a decade of total screw job right now, and outright fraud.

March 27, 2013 | 10:27 PM - Posted by Mountainlifter (not verified)

just leave this website man.

March 27, 2013 | 10:29 PM - Posted by Mountainlifter (not verified)

Would you kindly... leave this website man. Thanks.

March 27, 2013 | 10:31 PM - Posted by Mountainlifter (not verified)

Was addressing john doe, not walrath.

March 28, 2013 | 12:16 AM - Posted by Josh Walrath

Heh, you can address me.  It won't offend.  As long as I don't offend Ryan, I could be good.

March 29, 2013 | 05:43 AM - Posted by John Doe (not verified)

Hey mountainlifter, would you lift my T-Bird if it gets stuck on a mountain?

Lift my cock while you're there as well.

GG.

March 29, 2013 | 12:14 PM - Posted by Justin Anderson (not verified)

I agree John Doe is retarded and needs to leave. I think he's is trying to pick up dudes on this site he keeps telling people to suck his dick. Why don't you just go to chat roulette if you want to get your rocks off man.

March 29, 2013 | 03:43 PM - Posted by John Doe (not verified)

LOL that's hilarious.

That's all.

Hahaha.

April 2, 2013 | 10:51 PM - Posted by Anonymous (not verified)

Don't tell the truth, the haterz then come forth...

I mean how many times can we give AMD the most gigantic pass ?

Not on this - this new tech with the single card to suck up the actual frames delivered to the end gamer screen with the quad SSD's to take all that in on the fly - this is called PROGRESS, and it has exposed a huge, huge amd flaw.

I feel sorry for all those foolish amd fanboys who screamed at me to buy two amd and crossfire it's a better deal and OC and blah blah blah blah - it's now a big joke.

All those "top crossfire amd gamerz" denying what was right on their screens before them - now the scientific facts show what was before them runts and ghost frames...

I haven't seen a bigger emerging scandal in the video card wars ever.

I am very interested in seeing how it plays out. Of course I will accept NO EXCUSES from amd or their fans. I never have before, but this is an even bigger mess up than even I considered AMD capable of pulling, and that is saying an aweful lot.

You just keep doing your thing and laughing at the haterz, they should be ashamed of themselves honestly.

I want to see if they went with runt and ghost frames after the 69xx series was outed as junk for crossfire by HardOcp with Terry Makedon there in the forums- AMD "became aware" according to that head Cat Maker that there "was a problem" and my guess is the solution was runts and ghosts.
That would keep the DAMAGE to the last 2 major series - but I want data all the way back to 3870/3870X2 to see how deep the rabbit hole goes.

March 27, 2013 | 01:52 PM - Posted by Anonymous (not verified)

so basically what i'm getting from this story and the one at anand is that amd has been reviewed over the years with inflated fraps #s due to more runt and drop frames that are no good to gamers?

March 27, 2013 | 02:30 PM - Posted by Noah Taylor (not verified)

You are absolutely right, AMD's setups have fraudulently inflated their benchmark numbers for years which have helped them sell products and grow their % of discrete market share. This really is the most groundbreaking reveal I've ever come across in the gaming industry.

March 27, 2013 | 02:42 PM - Posted by John Doe (not verified)

It's not like nVidia hasn't done it either:

http://tpucdn.com/reviews/NVIDIA/7950GX2/images/singlebenchlow.jpg

March 27, 2013 | 02:43 PM - Posted by Anonymous (not verified)

Yeah, I'm sure AMD's share of the discrete market has grown thanks to inflated Xfire numbers. I mean, so many people have multi-GPU setups in the first place, amirite?

Plus, jumping to the conclusion that there's deliberate "fraud" going on can only be the result of deluded fanboy thought processes. Considering FCAT is the first tool that can really show this behavior, it's rather more reasonable to believe that AMD was simply unaware of it.

March 27, 2013 | 04:04 PM - Posted by John Doe (not verified)

Alright, let me get it the easily way for you.

What the hell does the inflated CF performance figures have to do with the frame latencies? NOTHING.

The frame latency is a problem only because AMD has yet to be able to completely man up the GCN architecture.

April 3, 2013 | 11:27 PM - Posted by Anonymous (not verified)

No, fraps showed the problem, and the extended FCAT has only highlighted it even more.
The other thing that showed the problem - countless end user complaints, many times raging radeon fanboys screaming you cannot see beyond 30fps because "the human eye is incapable of it", something we call fanboy fantasy, apologetics, excuses, denial, brain washed psycho hardcore amd total verdetrol absolutist.
There are a lot of them, and that unfortunately has allowed amd to scam the entire industry for this long.

You see we still have a boatload of deniers here. Total amd fanboys, nothing will ever change their minds, no facts are facts they can accept if the facts prove amd sucks.

March 29, 2013 | 12:15 PM - Posted by Anonymous (not verified)

Yes it is, but the river in Egypt will start to grow to cover the entire earth for the amd fanboys, oh wait it already has.

BURNED in the extreme, the failure of crossfire, the scam of the decade.
Ghosts and runts, but boy oh boy our fps looks frappalicious !

LOL - so, so pathetic.

nVidia's SLI monsters ? ZERO runts, ZERO ghost frames.
PERFECTION incarnate.

Let's review:
nVidia - perfect execution, no cheating no runts, no ghost frames
amd - lying failure, HALF OF THE FRAMERATE IS A LIE. ghost runt every other frame !

ROFL amd you truly suck as bad as I have been saying for half a decade, wait no, you are worse.

Now we will get treated to endless excuses, denials, it doesn't matter, who cares, and all the rest of the radeon ragers standard operating apologetic procedures they have gang banged on the net for years.

Watch for this development: amd fanboys new spewing propaganda
" using two cards is a hassle, you should buy a single card, and just upgrade when the next best single card comes out like the 7970 with how great it is. Who wants to waste money on two cards, it's like <1% of gamers, so no one really does this at any end user level except fanatics, it's just not needed and is a budget buster. "

LOL - A thousand excuses already coming

March 27, 2013 | 03:21 PM - Posted by Wingless (not verified)

How does changing Render Frame Ahead/Pre-Rendering affect Crossfire and SLI? In Battlefield 3, people are saying it gets smoother if you change it to 0 or 1 vs the default value of 3.

Thanks for your great analysis and extremely informative video.

March 28, 2013 | 09:56 AM - Posted by Wingless (not verified)

Battlefield 3
Guide: How to Fix Low FPS - Stuttering - Lag
http://battlelog.battlefield.com/bf3/forum/threadview/2832654347723598222/

There is a well documented stuttering fix for both Nvidia and AMD users on multiple forums. I've tried this for my HD 4870 Crossfire setup and it works. This particular user from the above link has a NVIDIA GTX 470.

5.Open notepad and paste this into it and save it as "user.cfg" inside your "program files/origingames/battlefield3" folder:

RenderDevice.TripleBufferingEnable 0
RenderDevice.ForceRenderAheadLimit 1
WorldRender.SpotLightShadowmapResolution 256
WorldRender.SpotlightShadowmapEnable 0
Render.DrawFps 1

With this applied to the game, are there any differences? Render Ahead seems to really affect these results and it would be nice if it were tested with FCAT.

Thanks

March 29, 2013 | 04:40 PM - Posted by John Doe (not verified)

Do you like Winger?

They're really a crappy band but they have some decent stuff LOL.

http://www.youtube.com/watch?v=rWSVGcivuGs

March 27, 2013 | 03:43 PM - Posted by Randy (not verified)

Absolutely fantastic work Ryan. This article really blew me away and gave me a lot of tools to better understand what I've been dealing with at home. One question though: you point out that vsync can solve most of amds problems provided the frame rate doesn't have to drop from 60 to 30, in which case the drastic stutter caused by such a change adversely impacts smoothness. What if instead of using vsync you used a frame rate limit that corresponded with the apps average framerate? This way when the frame rate dips below that threshold it isn't cutting its' self in half as it would with vsync and in theory not introducing nearly as much stutter.

I advised my friend to do this with his dual 6950 setup in far cry 3 and he said it gave excellent subjective performance. I used the same method for my single 7970 and also noticed a nice bump to the smoothness of the game, but again this is all subjective. It would be awesome if you could test this as if proven correct it would give a lot of amd owners the tools to make the best of a bad situation.

Again thanks for the tremendous work you've done here, this is truly awesome work.

March 27, 2013 | 04:43 PM - Posted by grommet

Quick question- Could Lucidlogix's Virtual Vsync technology help out with the CrossFire issues?
http://www.lucidlogix.com/technology-virtual-v-sync.html
Not that this would let AMD off the hook as this needs to be taken care of in-house, but could it help?

March 27, 2013 | 05:02 PM - Posted by Anon (not verified)

That's a nice question.

March 27, 2013 | 07:34 PM - Posted by Ryan Shrout

I'll see if we can take a look at that.

March 29, 2013 | 03:31 PM - Posted by jgstew

The whole time I was reading this article, I was more and more curious how Virtu's technology would effect things. I'm curious about more than just their Virtual V-sync, but their other options as well for both single and multiple GPUs. Virtu has not had the scaling that SLI & Crossfire have had, but perhaps their technology would show well in other areas with this analysis.

I do feel that Frame Rating & the input to display latency are much more interesting metrics.

Great work on the article.

March 27, 2013 | 05:32 PM - Posted by serpinati of the wussu (not verified)

I've read somewhere that this will not be the norm for pcper to do this type of testing with all video card reviews (too labor intensive). Is this true?

If it is true, will pcper at least record the frame data and simply give it a quick look to make sure video cards aren't doing something absolutely crazy (for example, if you were reviewing those 7970's crossfire you might not plan to actually to analyze the frame times, but you would at least look over the recorded frames anyways and catch the crazy runt frames and mention it in your reviews.)

March 27, 2013 | 07:34 PM - Posted by Ryan Shrout

No, my plan is to take this route going forward.

March 27, 2013 | 05:57 PM - Posted by Anonymous (not verified)

Greatly appreciate the work behind this! And opening up the tools / scripts to everyone, pushing it to other hardware magazines. Huge kudos! Oh, and you should slap Anandtech over the head for not even mentioning your involvment into the new metering method in their introductory article, that only mentions nVidia...

Is there any way to determine latency between t_present and t_display? If not, it should be - maybe some kind of timestamp could be worked into the overlay? Because that would be interesting not only in the context of VSync, but also regarding nVidias frame metering, which must take some time analyzing the frametimes. Supposedly, AMD has some smoothing algorithm coming up for Crossfire as well, so there it would also be valuable information.

Regarding Adaptive / Smooth VSync: They clearly come off too good in this article. They're not the focus of the article of course, still a little more thought should have been put behind this, considering the amount of time taken to create this awe inspiring effort of an article and the tools behind it. Adaptive VSync turns VSync off when tearing is most annoyingly visible, i.e. at framerates below monitor refresh rate - true triple buffering (instead of the queue nowadays called triple buffering) would be the good solution here. Smooth Vsync does not seem to do anything particularly positive judging from what little measurement is available in the article - only time (further tests that is) will tell what it actually does...

March 27, 2013 | 07:36 PM - Posted by Ryan Shrout

We have more research into the different NVIDIA Vsync options coming up, stay tuned.

As for the timestamp different to check for the gaps between t_present and t_display...we are on that same page as well.  :)

March 27, 2013 | 05:57 PM - Posted by Mawrtin (not verified)

Have you tried CF using radeonpro? Apparently it offers some kinda of Dynamic V-sync similar to nVidias adaptive if I'm not mistaken.

March 27, 2013 | 08:03 PM - Posted by Marty (not verified)

No, it does not. It's a frame limiter, which eliminates some of the stutter, but causes more lag (increases latency).

March 27, 2013 | 06:16 PM - Posted by Mangix

Hmmm. I wonder if there is a difference between Double and Triple Buffered VSync. Newer Valve games support both. Would appreciate testing there.

Also, are there driver settings that help/harm frame rating? Nvidia's Maximum Pre-rendered frames setting sounds like something that can have an effect.

March 27, 2013 | 07:25 PM - Posted by xbeaTX (not verified)

AMD has complained on Anandtech for this type of test using fraps ... now anyone know the real situation and it's shocking
this article sets the new stantard of excellence ... congratulations and thanks for the enormous work done... keep it up! :)

March 27, 2013 | 07:37 PM - Posted by Ryan Shrout

Thank you!  Help spread the word!

March 28, 2013 | 01:55 AM - Posted by TinkerToyTech

Posted to my facebook page

March 27, 2013 | 07:40 PM - Posted by Marty (not verified)

Ryan, I've a suspicion that you wanted to select Adaptive VSync on NVidia, but have selected Adaptive (half refresh rate) instead in the control panel. Would you please check it out.

March 28, 2013 | 05:20 AM - Posted by showb1z (not verified)

²

Other than that great article. Would also be interested in results with frame limiters.

March 27, 2013 | 08:23 PM - Posted by Dan (not verified)

This site is so awesome that it is one of the only ones that I disable Adblock in Chrome for. You deserve the ad $'s.

Rock on, Ryan and crew!

March 29, 2013 | 10:37 AM - Posted by Ryan Shrout

Thank you we appreciate it!

March 27, 2013 | 08:45 PM - Posted by Mike G (not verified)

Thanks for the all of the time taken to accurately test and then explain your frame rating methods to us. I wonder if AMD would be willing to have a representative come on and speak on how they will be addressing this issue. I for one will be holding off purchasing an additional 7970 at this time.

March 27, 2013 | 09:43 PM - Posted by bystander (not verified)

They have, with AnandTech, though the person they spoke to was the single GPU driver guy, they did mention they have a plan to offer fixes in July.
http://www.anandtech.com/tag/gpus

March 27, 2013 | 09:51 PM - Posted by Soulwager (not verified)

Is your capture hardware capable of capturing 1080p @ 120hz? The data rate should be less than 1440p@60hz.

Also, I would like to see some starcraft 2 results. It's frequently CPU limited, and I'm wondering how that impacts the final experience when compared to a gpu limited situation. I'd recommend the "unit preloader" map as a benchmark run, once to pre-load all the assets and again for the capture.

March 29, 2013 | 10:38 AM - Posted by Ryan Shrout

Actually, I think 1080p@120 is a higher pixel clock than 25x14@60; we tried to do basic 120 Hz testing right before publication without luck but we'll be trying again soon.

As for SC2, we'll take a look.

March 29, 2013 | 03:00 PM - Posted by Soulwager (not verified)

You're right, the pixel clock is higher. I guess I was only thinking about total number of pixels that need to be recorded, but the smaller resolution is more heavily impacted by overscan.

March 28, 2013 | 12:28 AM - Posted by SPBHM

great stuff.

I would be interested in seeing some results with some framerate limit, not from vsync, but another limit, something like FPS max 45 for Crysis 3 (higher than a single card, around the average for the CF), you can easily do that with dxtory

March 28, 2013 | 12:49 AM - Posted by Trey Long (not verified)

Its nice to see the truth come out. Save the excuses and fix it AMD.

March 28, 2013 | 06:41 AM - Posted by Carol Smith (not verified)

Why did you completely ignore AMD's RadeonPro freeware ?
Which offers and has offered Dynamic V-Sync that's superior to Nvidia's Adaptive V-Sync for years now.
Tom's Hardware seems to be the only site that actually used this utility to make a fair comparison between SLI and Crossfire.

RadeonPro's Dynamic V-Sync was shown to be clearly superior to Nvidia's Adapative V-Sync Implementation .

Here is the Tom's link
http://www.tomshardware.com/reviews/radeon-hd-7990-devil13-7970-x2,3329-...

I hope that you learn from your fellow journalist's to include this fantastic solution in your upcoming article.
Thanks for your hard work and good luck.

March 28, 2013 | 09:27 AM - Posted by Marty (not verified)

A frame limiter improves stuttering, but increases lag. You lose some fps too. So you have to decide between the devil and Lucifer in the case of Crossfire.

March 29, 2013 | 10:01 AM - Posted by Carol Smith (not verified)

Exact same thing with Nvidia SLI and Adaptive V-Sync, if you don't want stuttering you have to sacrifice framrates.

March 29, 2013 | 10:39 AM - Posted by Ryan Shrout

I'm going to take a look, but this is NOT "AMD's" software.

March 28, 2013 | 06:50 AM - Posted by Andre3000 (not verified)

Thanks for the eye opener! Which AMD drivers have been used for this review? I have read the article.. but i might have missed it?

March 29, 2013 | 10:39 AM - Posted by Ryan Shrout

For AMD we used 13.2 beta 7 and for NVIDIA we used 314.07.

March 29, 2013 | 11:27 AM - Posted by Steve (not verified)

Ryan: Is it possible to test AMD's CF with older drivers to see if this problem has been around for a long time or if it is a more recent problem with AMD's continuous driver upgrades to improve speeds?

March 28, 2013 | 07:00 AM - Posted by Filip Svensson (not verified)

First, a very interesting article with lots of information but..

there are some pretty big holes in the conclusions in this article.

First, the conclusion is that you observe smoother graphics if you have lots of GPU frames inside each display frame. This is just not true. If you accept this, the following conclusion will also not be true: that runt and dropped frames always affect the perceived frame rate.
I will give you an example that proves this:

Say that the graphics card is able to produce two frames (letters) for each display frame (numbers). So the output to the display would look like this:
1A,2C,3E,4G etc.
then you would have optimal smoothness (as long as the output from the game engine is constant), even though in this case you drop 1 gpu frame each display frame. If instead it were like the following:
1A,2D,3E,4H
then you could perhaps notice some unevenness even though you still have the same number of drops. I doubt it but it could be possible. Someone should do a video and see if this behaviour could be detected by the human eye :)

If you instead have drop frames when the gpu frame is spread across a multiple of display frames, then you would potentially have a serious issue with stuttering. But that is not what you are measuring here. Ex. 1A,2A,3C,4C

One conclusion to this is that your Observed FPS is totally wrong. Both from what I write above and because you are not limiting this figure to the refresh rate of your monitor. Capping the graphs to 60 frames would make it somewhat better. Alternatively give it some other headline, for example (and here comes the flame bait):
NVIDIA-sponsored measuring technique to make their technology look good

March 29, 2013 | 10:41 AM - Posted by Ryan Shrout

I don't follow your letters/numbers analogy at all but I can assure we are confident in our results and experiences here.

March 28, 2013 | 07:10 AM - Posted by Martin Trautvetter (not verified)

Very insightful article full of interesting data points, thanks for all the work that went into this!

I wonder if you plan to expand this testing down to AMD's APU and Crossfired APUs, as well as Intel iGPUs in the future, I know it's not as flashy as the 'big guns', but that's where a huge chunk of the market is going and I'm curious if there are skeletons to find in that closet, too.

March 29, 2013 | 10:41 AM - Posted by Ryan Shrout

I do plan on using this method whenever possible going forward.  Laptops are a bit more of a pain since we'd have to use external displays, but we are going to experiment.

March 30, 2013 | 06:25 AM - Posted by Martin Trautvetter

Cool, can't wait for your findings!

_
btw: Twitch is SO much better when you guys are in the same room, really enjoyed this week's episode!

March 28, 2013 | 07:58 AM - Posted by steen (not verified)

Nice work, Ryan. The key is the capture card: input metering from e.g. FRAPS through to output at the monitor. What you're missing is that games seem to use single-frame timing to determine simulation engine steps. No smoothing to account for any overheads - at all.
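To illustrate the single-frame-timing point above, here is a hypothetical Python sketch (not taken from any specific game engine) of stepping a simulation on the raw last-frame delta versus a smoothed delta:

# Hypothetical sketch only: many engines advance the simulation by the raw duration
# of the previous frame, so one unusually long or short frame shows up directly as a
# jump in the animation. Smoothing the delta damps that, at the cost of the step
# tracking real frame times a little more slowly.

def step_raw(sim_time_ms, last_frame_ms):
    # simulation advances by exactly the previous frame's duration
    return sim_time_ms + last_frame_ms

def step_smoothed(sim_time_ms, last_frame_ms, smoothed_ms, alpha=0.2):
    # exponential moving average of recent frame times
    smoothed_ms = (1 - alpha) * smoothed_ms + alpha * last_frame_ms
    return sim_time_ms + smoothed_ms, smoothed_ms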

This whole AFR caper is just a sham, though. NUMA-esque multi-GPU designs are the only way to do it. Simple 3dfx SLI was better at distributing load, but in the days of DX11+, load balancing is tricky. V-Sync with triple buffering is also an option, but input lag is a problem any way you slice it.

I do have concerns over NVIDIA's overlay layer & software, though. They do kick an own goal with the GTX 680 being slower than a 7970, but that's been known for a while now. They're banking on SLI & Titan. Your comments also spruik NVIDIA, rather than just give facts.

March 29, 2013 | 10:42 AM - Posted by Ryan Shrout

Thanks, appreciate it!

I have another version of the overlay from a third party we are testing out as well.

March 28, 2013 | 08:08 AM - Posted by steen (not verified)

P.S. Did you get a visit from Tom Petersen, too? ;)

March 30, 2013 | 10:22 AM - Posted by John Doe (not verified)

He gets a visit from me every day.

March 28, 2013 | 08:37 AM - Posted by steen (not verified)

P.P.S. (Sorry) Haven't you fixed the sampling rate of the capture card at 60Hz?

March 28, 2013 | 08:59 AM - Posted by ThorAxe

Thank you very much for testing SLI and crossfire. It confirms my suspicion about my Crossfire and SLI configurations.

To give you some background I have run 8800GTX SLI, 4870x2 + 4870 Trifire, 6870 Crossfire and GTX 680 SLI.

The 4870X2 + 4870 appeared to my eyes to be okay; however, 6870 CrossFire never seemed quite right, while the GTX 680 has always appeared smooth to me. I don't recall any issues with 8800 GTX SLI, but that was a while ago.

March 28, 2013 | 09:28 AM - Posted by Luciano (not verified)

Error in the article:
"Smooth Vsync", "Adaptive VSync", etc., are not exclusive to NVIDIA.
They are available to everyone, and you can use them through console commands.
The names differ due to manufacturers' marketing.
But they have been available since at least 2005 (the game rFactor).

Various names: "double vsync", "vsync", "dynamic vsync", "vsync with double or triple buffering", "vsync with 1~5 frame queue", etc.

If the game lacks the option in a menu, you can use console commands or INI files.

RadeonPro is the most famous "INI profile" creator for that use.

March 29, 2013 | 10:43 AM - Posted by Ryan Shrout

These are definitely not the same things...

March 30, 2013 | 10:06 AM - Posted by Luciano (not verified)

They're not the same code, nor are they available in the same ways.
But they are the same methods and pursue the same results.
SLI and CrossFire are not the same thing...
But...

March 28, 2013 | 09:59 AM - Posted by Luciano (not verified)

You have created a minimum quality level where the basic requirement is "more than X scan lines displayed" because of "its contribution to the observed animation".
"Animation" is measured in full frames in sequence.
Any corruption in alternating frames is animation corruption.
Thus you would have to filter out half of the SLI performance too.

SLI is UNDOUBTEDLY superior, as the data shows.
But "animation" is corrupted by ANY tearing or stutter.
Sim racers always use a frame cap and v-sync with triple buffering for that reason.
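For readers following this exchange, here is a rough Python sketch of the kind of runt filter under discussion. It is NOT the site's actual FCAT pipeline; the 20-scan-line threshold, the function name and the input format are illustrative assumptions:

# Illustrative runt/drop filter (assumed threshold and data format, not the real FCAT code).
def observed_fps(frame_heights_in_scanlines, capture_seconds, runt_threshold=20):
    # a frame with zero scan lines never reached the screen (a drop);
    # a frame covering only a sliver of the screen is counted as a runt
    drops = sum(1 for h in frame_heights_in_scanlines if h == 0)
    runts = sum(1 for h in frame_heights_in_scanlines if 0 < h <= runt_threshold)
    full = len(frame_heights_in_scanlines) - drops - runts
    return {
        "fraps_style_fps": len(frame_heights_in_scanlines) / capture_seconds,
        "observed_fps": full / capture_seconds,
        "runts": runts,
        "drops": drops,
    }

The commenter's objection is essentially about where (and whether) that threshold should be drawn, and whether tearing between full frames should be penalized in the same way.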

March 28, 2013 | 12:00 PM - Posted by onehourleft (not verified)

How would the Frame Rating results change on Windows 8 vs. Windows 7? Linus appears to have found FPS improvements in some games on Windows 8: http://youtu.be/YHnsfIJtZ2o . I'm wondering if runt or dropped frames are increasing or there are actual improvements in user experience.

March 29, 2013 | 10:44 AM - Posted by Ryan Shrout

We started this process on Windows 7 before moving to Windows 8 and nothing changed.

March 28, 2013 | 01:30 PM - Posted by gamerk2 (not verified)

I've speculated since 2008 that SLI/CF introduced unacceptable latency into the system, based on all the threads titled "Why do I get 90 FPS and my game is laggy?" in various forums. I'm glad someone is FINALLY really looking into this aspect of the actual rendering chain.

March 28, 2013 | 06:25 PM - Posted by Anonymous (not verified)

Hi;

Can you please test other SLI rendering methods, such as split frame rendering (SFR)?

I know that SFR is not officially supported by NVIDIA anymore, but you can always force it using NVIDIA Inspector, as some of us sometimes do.

It would be great if you could try other rendering methods with AMD as well (such as the scissor or supertile methods; as far as I know they can be forced using the RadeonPro tool).

Best Regards

March 28, 2013 | 07:59 PM - Posted by Foosh (not verified)

I play with Radeon Pro's Dynamic Vsync Control, which eliminates stuttering without introducing any noticeable input lag. Vsync off will run your video cards at 100% full time, generating a lot of heat and decreasing their life with minimal benefit. If you're playing for twitch response, you're running at 120Hz double buffered, so your latency will be 16ms max, which isn't bad considering good human response is 226ms. If you're playing for maximum visual quality, then screen tearing is unacceptable. Statements like "CrossFire does nothing" just create unnecessary drama.
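As a quick back-of-the-envelope check of the latency figure above (a rough approximation that ignores engine, driver and OS queuing):

# Rough worst-case display latency for double-buffered v-sync at 120 Hz (approximation only).
refresh_hz = 120
refresh_interval_ms = 1000 / refresh_hz       # ~8.3 ms per refresh
worst_case_ms = 2 * refresh_interval_ms       # a finished frame can wait up to ~2 refreshes
print(round(refresh_interval_ms, 1), round(worst_case_ms, 1))   # 8.3 16.7

which is roughly where the "16ms max" figure comes from.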

March 29, 2013 | 10:45 AM - Posted by Ryan Shrout

I consider it entirely necessary to make sure people see what is going on.

Input latency is our next thing to try and address, though, and it's possible that CrossFire, even with its runt frames, is improving that more.

March 29, 2013 | 08:06 PM - Posted by steen (not verified)

I bet that's exactly what you'll find. AMD will have reduced input lag at the expense of these "runt" frames, whereas NVIDIA's metering will show huge input lag. AMD were just outmaneuvered by NVIDIA subverting your (& others') investigations on frame latency. I can see AMD introducing a latency/metering control for CrossFire in future drivers. Will NVIDIA do the same, I wonder? As I said, a pox on AFR. SFR is an alternative with NVIDIA via a hack, but has its own issues.

March 30, 2013 | 12:50 AM - Posted by bystander (not verified)

Given that AFR has every other frame rendered by a different card, the actual time between moving the mouse and it being displayed on the screen would not improve with CrossFire/SLI over a single card.

However, how often a move initiates a frame does improve; but if those extra updates land at almost the same exact time as the single card's updates, they won't give you any benefit, so spacing will likely help.

March 30, 2013 | 01:57 AM - Posted by bystander (not verified)

Hopefully when you guys test latency, you realize that there is a polling component to consider.

If you have evenly spaced-out times when you initiate a frame, your input is received more evenly. While simply taking an input when each GPU is ready may reduce latency, two in a row at almost the same exact time results in redundant frames and input.

However, if those frames are evenly distributed and received, more useful mouse inputs are gathered and utilized. The benefit of this may outweigh pure latency readings.

The difference may be between receiving input at a maximum of 33ms intervals, and having up to 66ms intervals with near-0ms intervals at other points.
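A small Python sketch of the spacing argument above, using hypothetical numbers (two GPUs in AFR, each rendering a frame roughly every 66 ms): it compares the intervals at which new frames, and therefore new input samples, are initiated with and without even metering.

# Hypothetical illustration of input-sampling cadence in AFR (numbers assumed for the example).
per_gpu_ms = 66.0
frames_per_gpu = 4

# Unmetered: the second GPU kicks off almost immediately after the first,
# so inputs are sampled in near-simultaneous pairs followed by a long gap.
unmetered = sorted([i * per_gpu_ms for i in range(frames_per_gpu)] +
                   [i * per_gpu_ms + 2.0 for i in range(frames_per_gpu)])

# Metered: the second GPU is offset by half a frame time,
# so inputs are sampled at a steady ~33 ms cadence.
metered = sorted([i * per_gpu_ms for i in range(frames_per_gpu)] +
                 [i * per_gpu_ms + per_gpu_ms / 2 for i in range(frames_per_gpu)])

def gaps(timestamps_ms):
    return [round(b - a, 1) for a, b in zip(timestamps_ms, timestamps_ms[1:])]

print("unmetered gaps:", gaps(unmetered))   # ~[2, 64, 2, 64, ...]
print("metered gaps:  ", gaps(metered))     # ~[33, 33, 33, ...]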

March 30, 2013 | 05:43 AM - Posted by Ryan Shrout

Interesting, I hadn't considered the pros/cons of smooth versus erratic input polling.
