
Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing

DiRT 3 – HD 7970 versus GTX 680


DiRT 3 is already running at very high frame rates at 1920x1080, so it comes as no surprise that the scaling from CrossFire and SLI is minimal.  In terms of single-GPU performance, though, the HD 7970 stays consistently ahead of the GTX 680.


This is an interesting plot: while the green and black lines are both thin and tight, indicating a solid gaming experience with little stutter or frame variance, the blue and orange lines of the multi-GPU configs show higher than expected variance.  More interesting still, this is the first time thus far that we haven’t found the HD 7970s in CrossFire to be completely abysmal.


Well, look at that!  Both the observed and FRAPS FPS graphs are nearly the same, telling us that while AMD has some definite CrossFire scaling issues with Battlefield 3 and Crysis 3, DiRT 3 doesn’t appear to suffer from the same ailment.


The minimum frame rate percentile graph shows us that the HD 7970 is indeed faster than the GTX 680 across the board, though both cards are overpowered for this game at 19x10.  CrossFire and SLI both stay in range of each other the entire time as well, with SLI falling a bit behind after the 84th percentile.
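For readers new to percentile charts, the idea can be sketched in a few lines. This is an illustrative reconstruction under an assumed input (a list of per-frame render times in milliseconds), not PC Perspective's actual Frame Rating tooling, and `fps_percentile_curve` is a hypothetical name:

```python
import numpy as np

def fps_percentile_curve(frame_times_ms, percentiles=(50, 90, 95, 99)):
    # Sketch: sort per-frame times and convert the slowest ones to an
    # instantaneous frame rate; the high percentiles expose the worst frames
    # that a plain average-FPS number hides.
    times = np.asarray(frame_times_ms, dtype=float)
    curve = {}
    for p in percentiles:
        t = np.percentile(times, p)   # p% of frames finished at least this fast
        curve[p] = 1000.0 / t         # ms per frame -> frames per second
    return curve

# 95 smooth 10 ms frames plus 5 slow 25 ms frames
curve = fps_percentile_curve([10.0] * 95 + [25.0] * 5)
print(curve[50], curve[99])
```

Even a handful of slow frames drags the 99th-percentile figure far below the median, which is exactly why the tail of these charts is where multi-GPU problems show up.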


Our ISU graph here shows an interesting result and a different view of performance than the minimum FPS percentile graph above it.  While the two cards were close in comparison before, the HD 7970s in CrossFire actually look much better than SLI in this case, indicating a “tighter” band of frame times.
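The notion of a "tighter" band can be made concrete with a toy stutter metric. This is an assumption-laden sketch (the article's actual ISU calculation is not reproduced here); it summarizes tightness as the average jump between consecutive frame times plus the overall spread:

```python
import statistics

def band_tightness(frame_times_ms):
    # Toy metric, not the article's ISU formula: a "tight" plot has small
    # frame-to-frame jumps and a small overall spread of frame times.
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    return {
        "mean_delta_ms": statistics.mean(deltas),
        "spread_ms": statistics.pstdev(frame_times_ms),
    }

smooth = band_tightness([16.7] * 60)         # flat line: zero variance
stutter = band_tightness([10.0, 30.0] * 30)  # alternating fast/slow frames
print(smooth, stutter)
```

Note that both inputs average roughly the same frame rate; only a variance-style metric like this separates the smooth run from the stuttery one.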

 


The results start out the same here at 2560x1440, though there is much more scaling on both the AMD and NVIDIA setups from adding a second card.


Hmm, that’s interesting.  This was an occurrence we saw in nearly every run of DiRT 3 with CrossFire at 25x14, so I wanted to be sure to include it here.  At some point between 25 and 30 seconds (the exact time differed in each run) we saw runts pop up again, almost as if the CrossFire cards were “getting out of phase” with each other.  SLI has a few bigger spikes, but nothing ever showed up that consistently.

As a preview of one of my concluding thoughts for this article, I think that this result, and others we’ll show before the end, is indicative of the major problem for CrossFire.  It appears that when the GPU is the primary bottleneck, CrossFire just can’t keep up in some way and produces runts.  That would explain why we would see it at 25x14 here but NOT at 19x10.  Just keep that in mind as you keep reading.


As a result of that series of runts, the observed FPS of HD 7970s in CrossFire drops during that time.


The minimum FPS percentiles graph shows the same result in a different way – it is obvious that the runts are responsible for the frame rates in the back 5% of this chart, and the “build up” to and away from that subsection is what causes the decline around the 85% mark.


The frame variance and potential stutter graph here shows the results of that spike of runts as well, with a much higher overall variance than SLI or either single card.

 


DiRT 3 was another game where the 5760x1080 results didn’t pan out for the HD 7970s in CrossFire, so we are only seeing single-GPU and SLI results.  SLI is scaling very well, but the single GTX 680 is bested by the HD 7970 the entire time.


All of our results are pretty good here; the GTX 680s in SLI produce a fairly tight frame time line and frame rates well above either single card.


Without any runts or drops, our observed frame rates are identical to our FRAPS frame rates.
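That claim follows directly from the arithmetic: observed FPS only diverges from the FRAPS-style rate when captured frames are discarded. A minimal sketch under assumed inputs (the real Frame Rating pipeline classifies frames from captured video, not from pre-made flags; both function names are hypothetical):

```python
def fraps_fps(frame_times_ms):
    # Software-side rate: every rendered frame counts.
    return len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)

def observed_fps(frame_times_ms, is_runt):
    # Capture-side rate: runt frames are discarded before averaging.
    kept = [t for t, runt in zip(frame_times_ms, is_runt) if not runt]
    return len(kept) / (sum(frame_times_ms) / 1000.0)

times = [10.0] * 100          # one second of 10 ms frames
no_runts = [False] * 100
print(fraps_fps(times), observed_fps(times, no_runts))  # identical: 100.0 100.0
```

Flag even a few frames as runts and the observed rate drops below the FRAPS rate, which is the gap the earlier CrossFire charts show.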


Tight plots of frame times and nearly straight lines here match our real-world gameplay experience of smooth frame rates.


While this might look bad for the cards in question, check the scale on the left!  The most frame time variance we see on any option here is 1.6ms, so we come away from our testing happy with the 5760x1080 results for everything but the HD 7970s in CrossFire.

March 30, 2013 | 10:07 AM - Posted by Luciano (not verified)

About various vsync methods:
They're not the same code nor are available through the same ways.
But they are the same methods and pursue the same results.
SLI and Crossfire are not the same thing...
But...

March 30, 2013 | 10:25 AM - Posted by John Doe (not verified)

This site and the people of it suck and choke on AMD's dick every single day.

They've recently gotten a Titan and have been shilling it so far but that's all sadly, compared to how much AMD shilling is going on on here.

Suckers.

March 30, 2013 | 11:48 AM - Posted by bystander (not verified)

You should read this:
http://www.tomshardware.com/reviews/graphics-card-benchmarking-frame-rat...

"You take the red pill - you stay in Wonderland, and I show you how deep the rabbit hole goes."
- Morpheus, The Matrix

"As a rule, human beings don't respond well when their beliefs are challenged. But how would you feel if I told you that the frames-per-second method for conveying performance, as it's often presented, is fundamentally flawed? It's tough to accept, right? And, to be honest, that was my first reaction the first time I heard that Scott Wasson at The Tech Report was checking into frame times using Fraps. His initial look and continued persistence was largely responsible for drawing attention to performance "inside the second," which is often discussed in terms of uneven or stuttery playback, even in the face of high average frame rates."

March 30, 2013 | 12:15 PM - Posted by John Doe (not verified)

My statement wasn't at Scott or whoever the fuck you are talking about that I don't give a rats ass about.

It was at the runners of this site.

THIS site, you're currently commenting on, fucking SCREAMS AMD ALL OVER. Just take off your AMD shirt already. Sigh.

March 30, 2013 | 01:33 PM - Posted by billeman

Please tell me there's a way do discard the (not verified) comments :)

April 3, 2013 | 11:47 PM - Posted by Jeremy Hellstrom

I could, but ...

March 30, 2013 | 06:38 PM - Posted by tidsoptimist (not verified)

This is a fascinating and quite informative bunch of articles.
Still i'm having some doubt, but as "bystander" stated above it's hard for a person to get challenged so hard in their beliefs.
This will make it hard for other sites to do as deep reviews as you do though and I hope you somehow start using open source on the code used so everyone who thinks you are payed by either "team" can check it out and even do some of the tests themselves with some modifications depending on the hardware used.

One thing I would like to know is if three cards would make any difference at all? I know its even more rare for people to have three cards and if you would use your conclusion then it wouldn't change much.
And this splitting you use, it is a passive one or is it somehow doing something to the stream?

Thank you for the informative articles.

March 31, 2013 | 02:34 PM - Posted by Luciano (not verified)

It would:

http://www.tomshardware.com/reviews/radeon-geforce-stutter-crossfire,299...

April 9, 2013 | 05:45 PM - Posted by tidsoptimist (not verified)

Thanks for the link!

trifire has had its benefits in the past as well thats why i started thinking about it..

March 30, 2013 | 06:45 PM - Posted by netsez (not verified)

OK, You have flooded us with a ton of charts and stats, but can you put in a paragraph or two explaining what the gaming experience FEELS like? Does a game play better with or without SLI/XFIRE, vsync on/off, etc. In the end the gameplay experience is what matters MOST.
That is something hardocp.com does well.

March 30, 2013 | 11:25 PM - Posted by Pendulously (not verified)

"Another stutter metric is going to be needed to catch and quantify them directly."

EXAMPLE: If Average FPS (over 60 seconds) is 100, then Total Frames observed over 60 seconds is 6000.

If ONE SINGLE FRAME is above 100ms, then for the y-axis value '100' (milliseconds), the x-axis value will be '99.9833' (percentile), i.e. one minus (1/6000).

If FOUR FRAMES are above 30ms, then for the y-axis value '30' (milliseconds), the x-axis value will be '99.9333' (percentile), i.e. one minus (4/6000).

If TEN FRAMES are above 20ms, then for the y-axis value '20' (milliseconds), the x-axis value will be '99.8333' (percentile), i.e. one minus (10/6000).

So instead of PERCENTILE on the X-AXIS, you can put NUMBER OF FRAMES on the X-AXIS. For the y-axis value of '100' (ms), the x-axis value will be '1' (frame), for y-axis '30' (ms), the x-axis will be '4' (frames), and so on.
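The mapping this commenter describes is easy to state in code. A sketch with hypothetical names, using the formula given above (100 × (1 − count/total)):

```python
def frames_above(frame_times_ms, threshold_ms):
    # The proposed x-axis: an absolute count of frames slower than a cutoff.
    return sum(1 for t in frame_times_ms if t > threshold_ms)

def percentile_at(frame_times_ms, threshold_ms):
    # The existing x-axis: 100 * (1 - count/total).
    n = len(frame_times_ms)
    return 100.0 * (1.0 - frames_above(frame_times_ms, threshold_ms) / n)

# 60 s at an average of 100 FPS -> 6000 frames, one of them a 101 ms spike
times = [101.0] + [9.0] * 5999
print(frames_above(times, 100.0))   # 1 frame
print(percentile_at(times, 100.0))  # ~99.9833
```

The two axes carry the same information; the count form just makes a single 100 ms hitch read as "1 bad frame" instead of a percentile with four nines.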

March 31, 2013 | 05:27 AM - Posted by Anonymous (not verified)

same story in every place I go. 2 lines to say 7970 is better than 680 1x1 than 3 pages to say about temperature/power and shutter when it's 2x2 ... =_=

March 31, 2013 | 11:26 AM - Posted by Anonymous (not verified)

Well, These articles will change the graphic card industry a lot

but I also have a suggestion, not just test the graphic card on pc, maybe it can be used to test the console(xbox360, ps3 or xbox720 ps4 in the future) games.

April 1, 2013 | 10:18 AM - Posted by FernanDK (not verified)

hey pcper's guys,

you have my deepest sympathies. awesome job you got there. thank you for sharing and introducing us to this new benchmark system, it is for sure facts more trustful than any other way the majority uses to measure FPS nowadays.

keep it up. you have my support and probably the whole community is also on your side.

April 2, 2013 | 04:13 AM - Posted by Anonymous (not verified)

I find your VSYNC tests to be invalid because the game needs to be able to run at minimum of 60 FPS for VSYNC to work properly, FPS can not be allowed to drop below 60. So you should actually fiddle with the game settings until you get a minimum of 60 FPS and only then enable VSYNC and test the results, because that's what a knowledgeable player would do. Nobody plays the game at full settings and VSYNC enabled if they can't get MIN 60 FPS, that's just stupid.

April 2, 2013 | 08:36 AM - Posted by JC (not verified)

I do. I can't stand the tearing and would rather have a lower frame rate that's free of tearing.

April 3, 2013 | 05:16 AM - Posted by Anonymous (not verified)

Reading is hard, isn't it ?

I said that in order for the VSYNC tests to be valid the game needs to run over 60 FPS constantly, because that's what a knowledgeable user would do in order to enjoy smooth gameplay without tearing.

Now if you're stupid enough to run VSYNC while under 60 FPS then that's your loss, still doesn't mean jack shit for the purposes of me challenging their VSYNC test.

April 2, 2013 | 01:46 PM - Posted by MadManniMan (not verified)

Hey there guys,
when will the missing parts of the article series follow? I am like a cat on a hot tin roof here ...

April 2, 2013 | 04:36 PM - Posted by HalloweenJack (not verified)

FRAPS can manipulate what it receives so if this software is on the same place , then it too can manipulate anything - also , having yet another layer WILL slow down everything , once the frame is finished its being intercepted before being sent to the RAMDAC.

So basically this NVidia software , which you`ve had for a year , has helped NVidia `silently` attempt fix there own SLI frame syncing issues , but now you`ve `come out` against AMD.

NVidia pay roll that good now? they do need help since tegra 5 will be old before its out , given that ARM have now sampled X64 V57 on 16nm

April 3, 2013 | 07:53 AM - Posted by ThorAxe

I'm not sure if you are being serious or are just posting to make us laugh. I hope it's the latter because no one can be that stupid, can they?

The frame is intercepted before being sent to RAMDAC??...LOL!!!

April 18, 2013 | 11:20 PM - Posted by CoderX71 (not verified)

NVIDIA was responsible for developing the color overlay that sits between the game and DirectX (in the same location of the pipeline as FRAPS essentially)

Yeah ... nVidia would do anything untoward here right?


April 4, 2013 | 07:46 AM - Posted by Anonymous (not verified)

Thank you very much for finally exposing this AMD Crossfire scam using runts and ghost frames to artificially inflate frame rate and sell its products based on false data. I applaud you for giving us a clear picture of AMD's dirty runt&ghost frames games, AMD plays many benchmark games that's why AMD fled BAPco in shame. Nothing AMD claims is to believed, and if it wasn't for honest independent websites like yours that expose AMD for what it has become, AMD would still be selling their inferior products based on false viral marketing and propaganda pumping message board bullying.

Being the under dog doesn't give AMD the right to blatantly pump false benchmarks as fact. If AMD has to cheat to sell its products then AMD needs to be exposed as a cheater and people need to be made aware of it, thank you for doing so.

April 5, 2013 | 04:35 AM - Posted by Joseph C. Carbone III (not verified)

Dear Mr. Stroud,

Thank you for a tremendous amount of work, diligence, and integrity...

I thoroughly enjoyed the video with you and Tom Petersem. Although, I have to mention; I have been very disappointed in Nvidia since my purchase of a group of GTX 480s, believing, from day one, I had thrown away more than $1500 for three unmanageable 1500 Watt hairdryers marketed as graphics cards, which were subsequently relabeled GTX 580s, once the bugs were worked out, kind of like Microsoft's Vista to XP, kind of like scamming on people--no, definitely scamming.

I have always been an enthusiast of the Nvidia since the days of 3dfx and had likewise always enjoyed anticipating and buying Nvidia's new products, and the GTX Titan is awesome.

With memories of ATI, Matrox, and Nvidia (I still have my RIVA 128), a home has been found within my memories, and that is why I am excited about what you and the rest of PC Perspective have done and are going to do. With collaboration you-all are moving a beloved industry onward toward a better future for us and for the companies we want to succeed.

Sincerely,
Joseph C. Carbone III; 05 April 2013

April 8, 2013 | 12:19 PM - Posted by Anonymous (not verified)

I find the testing methodology to be flawed.

Using a competitors product where they get to define the capture data and define the test results is not accurate scientific method.

Has anyone asked the question as to why Nvidia defined a runt as 21 scan lines?

Has an analysis been done to see how many scan lines an Nvidia product has produced that are not considered runts because they are above the arbitrary 21 scan lines, vs how many on an AMD product just happen to be at 21 or below?

I am skeptical, as you should be too as reviewers and analyzers, that the 21 scan lines was chosen because, for whatever reason, Nvidia products produce frames that are 22 scan lines or greater, so therefor 21 scan lines were chosen as runts.

This would seem to be even supported by your own data, where Nvidia products do not produce any runts on any test, which would seem a remarkable situation unless you consider the fact that they get to define the metric that determines what a runt is.

Is it possible to rerun these tests where the runt was defined as perhaps 28 scan lines instead? 35 scan lines? Greater? How about report only fully rendered frames as the framerate?

These above would be much more accurate tests, as they would determine if there was a noticeable jump in AMD related runts and what the number of Nvidia related runts would become.

Would it surprise anyone that Nvidia produced 22 scan line runts vs AMD 21 scan line runts and that is why Nvidia chose 21 scan lines as the break point?
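The sensitivity check this commenter asks for is simple to express. This sketch assumes per-frame scan-line heights extracted from the capture (hypothetical data and function name; 21 is the cutoff the article describes):

```python
def count_runts(scanline_heights, threshold=21):
    # A frame is classified as a "runt" if it occupies no more than
    # `threshold` scan lines of the captured output.
    return sum(1 for h in scanline_heights if h <= threshold)

# Hypothetical capture: mostly tall frames with a few thin slivers
heights = [540, 18, 520, 22, 15, 530, 30, 510]
for threshold in (21, 28, 35):
    print(threshold, count_runts(heights, threshold))  # 21 -> 2, 28 -> 3, 35 -> 4
```

Sweeping the threshold like this would show directly whether the runt counts for either vendor jump at a particular cutoff, which is the experiment being requested.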

April 9, 2013 | 05:50 AM - Posted by Danieldp

Hi guys,

I am new to the forum and have been following this discussion for a while now and thought I would register. I currently own nvidia (690) and I have also owned ATI (7970 CF).

In regards to your comment sir or mam, I think you make an extremely valid point. I have been looking through these articles and every time I see the graphs I almost feel like there is a slight bias, maybe not intentional, as you mentioned, but I do think it is there. looking at some of the CF results I just don't find them accurate in regards to user experience. I remember playing on one 7970 ghz in crysis 3 then upgrading to a CF setup and I could immediately tell the difference the game play was a lot better, there was the occasional stuttering but nothing as bad as these graphs make them out to be... This is a slight subjective view but I do think we need testing methods that are more thorough and as you say, more towards the scientific method.

I own nvidia and I think anyone who does will get excited at what these graphs are saying; as for amd owners, you will probably be a bit disheartened. From everything I have read I can only conclude that nvidia has had a big hand in this testing (mostly indirect) and therefore I can not take these results too seriously.

There is definitely truth in what these graphs say but I think it is blown a little out of proportion, especially with the parameters of testing set to be nvidia optimized.

I welcome critics but please be professional about it.

Thanks,
Dan

April 9, 2013 | 09:53 AM - Posted by Danieldp

Hi guys,

This confirms what I have been saying the CROSSFIRE SETUP WINS against the TITAN in FRAME TIME VARIANCE!

http://www.tomshardware.com/reviews/crysis-3-performance-benchmark-gamin...

Still love my 690 though =)

April 9, 2013 | 08:41 PM - Posted by Danieldp

And by win I mean JUST. But, it does, the results are different from PCP. Check the triple monitor frame time deviance as well.

http://www.tomshardware.com/reviews/crysis-3-performance-benchmark-gamin...

The important thing here is to note that ALL setups are BELOW 20 ms... what does that say? All are playable.

so in my opinion, from what I just saw, if I have to choose between a titan or two sapphire vapor-x 6gig in CF, I would choose the latter. Unless you want to SLI with the titan, then it is titan FTW! But, that is just what I would do based on the results.

Again, what is shocking to me is how different these results are to PCP... Toms hardware has been around longer than any site I know of and they have always been held in the highest regard (not an opinion).

It will be interesting to see what the new AMD runt fix in July will do to these results. I am thinking they might be more than just back on the board after that (speculation).

Interesting, is it not?

April 12, 2013 | 04:13 AM - Posted by JCCIII

Dear Daniel,

With Tom’s Hardware disclosing, from the link you provided, that they do not consider their dual-card and dual-GPU results as accurate and, with their tests showing the Radeon HD 7950’s time variance at 23.8 in the 95th percentile, how is it you see PC Perspective’s research as invalidated?

I first went to Maximum PC and Tom’s Hardware to try to understand this complicated subject, but they are both behind on the subject of why we are spending much money and getting unpredictable quality on our screens.

Ultimately, good research is fundamental, and it is obvious PC Perspective has committed a great deal of resources in order to be helpful and do a good job. They have worked to present a balanced perspective for us to consider, and it is still being translated into tangible empirical value. However, I am certain, with collaboration of Tom’s Hardware and others, PC Perspective’s results are accurate, significant, and will benefit us all.

Sincerely, Joseph C. Carbone III; 12 April 2013

April 12, 2013 | 04:15 AM - Posted by JCCIII

...
