Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing

Crysis 3 – HD 7970 versus GTX 680


Crysis 3 is currently the biggest GPU hog, and both the GTX 680 and HD 7970 handle it equally well at 1920x1080. Even using our FRAPS metrics, though, the GTX 680s in SLI are scaling better than the HD 7970s in CrossFire.


The frame time plot from our Frame Rating system shows another instance of CrossFire’s inability to keep animation consistent on the screen. The single card configurations track each other closely, though both exhibit tiny bumps in frame times in a repeating pattern, apparently a peculiarity of the CryEngine. SLI shows somewhat higher frame time variance across the board, with a few minor “hitches” as well. CrossFire, though, appears to be alternating between 2 ms and 50 ms frame times, resulting in…


…not only a lower observed frame rate, but a frame rate that is LOWER than the single card! I can tell you from first-hand experience that this was definitely the case in our playthroughs as well; it felt slower than the single card experience.
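To illustrate why that happens, here is a minimal sketch (our own illustration, not the Frame Rating toolchain) comparing a FRAPS-style average against an observed rate that discards runt-like frames. The frame times and the 5 ms cutoff are hypothetical; the real method classifies runts by scan lines in the captured output, not by time.

```python
# Hypothetical frame times mimicking the alternating 2 ms / 50 ms CrossFire pattern.
frame_times_ms = [2, 50] * 30  # 60 frames over ~1.56 seconds

# FRAPS counts every frame the game submits, short or not.
fraps_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)

# Treat very short frames as runts that add no visible animation (illustrative cutoff).
RUNT_CUTOFF_MS = 5
useful_frames = [t for t in frame_times_ms if t >= RUNT_CUTOFF_MS]
observed_fps = 1000 * len(useful_frames) / sum(frame_times_ms)

print(f"FRAPS-style average: {fraps_fps:.1f} FPS")          # ~38 FPS
print(f"Observed (runts removed): {observed_fps:.1f} FPS")  # ~19 FPS, below a single card
```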


SLI looks fantastic in this graph and is able to take the matching performance of the GTX 680 and HD 7970 up from 30 FPS average for the entire run to 57 FPS.


Our custom ISU rating tells me that the GTX 680 SLI configuration looks GREAT and only differs from the single card configurations at the 95th percentile and above. The HD 7970s in CrossFire, though, show huge amounts of variance from the outset and do in fact exhibit a lot of stutter in game as well.
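For readers who want to reproduce this kind of percentile view from their own frame time logs, here is a simplified sketch of the general idea; it is not PC Perspective's actual analysis script, and the sample data is made up.

```python
import numpy as np

def frame_time_percentiles(frame_times_ms, percentiles=range(50, 100)):
    """Frame time at each percentile: a flat curve means consistent delivery,
    a curve that climbs steeply near the top means stutter."""
    times = np.asarray(frame_times_ms, dtype=float)
    return {p: float(np.percentile(times, p)) for p in percentiles}

# Made-up example: smooth until the slowest few percent of frames.
sample = [16.7] * 95 + [40.0] * 5
curve = frame_time_percentiles(sample)
print(round(curve[90], 1), round(curve[99], 1))  # ~16.7 vs ~40.0
```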

 


At the higher resolution the single HD 7970 has a slight edge over the GTX 680, and this time the scaling of CrossFire appears to be better than that of SLI.


A quick glance at the observed frame rate results clearly shows that isn't the case, though: only in short bursts does the CrossFire experience actually match that of the dual GTX 680s in SLI.


Our frame time plot shows where the alternating frame times occur with CrossFire in Crysis 3 and how they compare to the performance of SLI. Other than the single large hitch seen at around the 12 second mark on SLI, the GTX 680s handle Crysis 3 much better.


The GTX 680s are able to scale from about 19 FPS on average to 36 FPS - a solid 89% scaling factor. The HD 7970 GHz Edition cards are not so impressive, only going from 21 FPS to 25 FPS, and that quickly falls to merely matching the performance of a single card.
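For clarity, the scaling factor quoted here is just the relative gain in average frame rate over the single card:

```latex
\text{scaling} = \frac{\mathrm{FPS}_{\mathrm{SLI}} - \mathrm{FPS}_{\mathrm{single}}}{\mathrm{FPS}_{\mathrm{single}}} = \frac{36 - 19}{19} \approx 0.89 \;(89\%)
```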


Ouch: another very poor result for the HD 7970s in CrossFire with Crysis 3 at 2560x1440, with as much as 25 ms of frame variance (nearly two full refresh cycles).

 


This is one of the few Eyefinity runs for the HD 7970 CrossFire configuration that was able to run to completion and generate the necessary graphs for us, so we’ll finally get to see some interesting results. Even at first glance, we can tell that something here isn't quite right: according to FRAPS, the HD 7970 CrossFire setup is pushing out more than 200 FPS in Crysis 3 at 5760x1080, which is obviously inaccurate.


Ouch, there are definitely some problems here, not the least of which is the graph's poorly chosen range maximums (we'll fix that soon!). Notice the spots on the orange line (HD 7970 CF) where there is no data – those indicate dropped frames and a lot of frame time variance.


Removing those dropped frames and any runts, we find that the observed FPS is actually right in line with that of the single HD 7970 graphics card. Also, with the misreported CrossFire results out of the picture, the updated scale makes it easier to see the scaling of the GTX 680s in SLI.
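As a rough sketch of how an observed frame rate falls out of the captured overlay data: each rendered frame occupies some number of scan lines in the recorded output, dropped frames occupy none, and frames below the runt cutoff are not counted as visible animation. The 21-line cutoff is the one discussed in this article; the function name, data layout, and sample values below are illustrative only.

```python
RUNT_THRESHOLD_LINES = 21  # runt cutoff discussed in the article

def observed_fps(scanlines_per_frame, lines_per_refresh=1080, refresh_hz=60,
                 runt_threshold=RUNT_THRESHOLD_LINES):
    """Estimate observed FPS from the scan-line height of each frame in the capture.

    A height of 0 means the frame was dropped (never shown); heights at or below
    the runt threshold are ignored because they contribute no useful animation.
    """
    capture_seconds = sum(scanlines_per_frame) / (lines_per_refresh * refresh_hz)
    visible = [h for h in scanlines_per_frame if h > runt_threshold]
    return len(visible) / capture_seconds

# Illustrative capture slice: full frames, a runt, and a dropped frame.
print(round(observed_fps([540, 540, 12, 0, 1080, 1068]), 1))
```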


Here again is another one of our RUN files to show you the effects of dropped frames on Eyefinity testing. The FRAPS-based frame rate skyrockets, though the observed frame rate is much lower, in line with a single HD 7970 GHz Edition card. There are some runts involved here, but the biggest factor is obviously the dropped frames (missing colors in our pattern).


Again, for comparison, here is the RUN graph for the GTX 680s running in SLI at 5760x1080.  Notice that the frame rate is consistent with no drops or runts.


Interesting results here, with the CrossFire setup sitting lower across the board in our percentile minimum frame rate graph. We also see the pair of GTX 680s in SLI start out much faster than anything else tested, but fall below the HD 7970 at around the 94th percentile.


Keeping in mind that we are looking at pretty low frame rates across the board, the HD 7970 has the best overall result here in our ISU graph, with the least amount of frame variance over the course of our 60 second run. Obviously CrossFire has a big issue once again, seeing significant variance starting at the 50th percentile, and it only gets worse from there.

March 30, 2013 | 08:47 PM - Posted by bystander (not verified)

I have a hard time trying to grasp exactly how erratic input would affect the results. I have a feeling, based on my constitution (I get simulator sickness with poor latency), that the best case is whichever has the lowest worst-case interval times.

March 31, 2013 | 01:55 AM - Posted by bystander (not verified)

..But then you have occasional latency increases. Of course those increases are to remove redundant frames, and once increased, they probably don't need much adjustment most of the time.

This whole topic always gets me going back and forth, but my instinct is that overall, even if latency is considered, even spacing matters more as it adds more useful points of input, assuming it adds only marginal/occasional increases in latency.

March 28, 2013 | 08:20 PM - Posted by Bob Jones (not verified)

Can you address the visual quality differences between the two cards? Specifically on Sleeping Dogs, the 660 Ti seems to be missing some lighting sources outside - most noticeable is the cafe/shop lights before you go down the stairs, and then the store across the street right at the end of the video.

March 29, 2013 | 10:46 AM - Posted by Ryan Shrout

Time of day in the game when those videos were recorded?  They should be basically identical, I'll check.

March 29, 2013 | 03:10 AM - Posted by I don't have a name. (not verified)

Fascinating article. I think it'll take a few reads to fully comprehend everything that is being said. Thank you indeed, I found it fascinating. Certainly, as a 7970 owner, I'll be holding off on a potential second 7970 purchase for the time being.

March 29, 2013 | 05:09 AM - Posted by rezes (not verified)

The latest drivers are the GeForce 314.22 beta and the Radeon 13.3 beta 3. Please use these drivers in your tests.

The AMD cards may be more stable on these drivers!

March 29, 2013 | 10:46 AM - Posted by Ryan Shrout

Nothing has changed on the issues in question.

March 29, 2013 | 05:15 AM - Posted by technogiant (not verified)

Thanks for the great article and all the work you've done guys.

I run 2x GTX 460s in SLI, and while I dislike screen tearing, I've noticed that options such as vsync, adaptive vsync, and frame rate limiters actually make the experience less smooth, as appears to have been highlighted in this article.

I've considered getting a 120Hz monitor just so I can run without any of those options at a decent frame rate, but with sufficiently high settings so as not to go above 120 FPS and incur screen tearing.

Thinking further, I'd like Nvidia to develop a variation of their GPU Boost technology that would actually downclock the GPU to prevent frame rates from exceeding the monitor's refresh rate... I think this would give the benefits of no screen tearing without the negatives of vsync and the like.

Thanks again for the article guys.

March 29, 2013 | 05:37 AM - Posted by technogiant (not verified)

Actually, using GPU Boost dynamically to both underclock and overclock the GPU to achieve a target frame rate could be a very nice way of producing a smoothed experience without any of the negatives of other methods, as it's occurring directly at the GPU instead of in the game-engine-to-display timeline.

March 29, 2013 | 08:01 AM - Posted by Pick23 (not verified)

So is this article saying that even with the new testing methodologies:
1.Single card 7970ghz is still slightly better than the 680
2.Crossfire absolutely sucks
?

March 29, 2013 | 08:18 AM - Posted by John Doe (not verified)

7970 Ghz is slightly better than a 680 ONLY at stock.

When you're comparing 7970 Ghz to 680, things ENTIRELY depend on clock speeds since the 7970 Ghz is nothing more than a pre-OC'ed 7970.

But yes, CF does indeed sorta suck. Still.

March 29, 2013 | 07:26 PM - Posted by Anonymous (not verified)

Sweet 7970 still the best card under.. well under $1000 lol

March 30, 2013 | 09:21 PM - Posted by steen (not verified)

What's stock for a 680? ;) 7970 GE is slightly slower than Titan...

CF sucks just like SLI. What's your poison, input lag or frame metering? Do people understand what "runts" are? CF is actually rendering the frames, you just don't benefit as they're too close together. One frame renders the top 1/4 of the screen when the next frame starts. Your top ~200 lines are the fastest on your screen. ;)

(Sorry for the previous multi-posts. Don't know what happened there.)

April 7, 2013 | 01:15 PM - Posted by John Doe (not verified)

I have run far more CF and SLI setups than you have FOR YEARS and understand far more about these things than you and your little silly mind does.

March 29, 2013 | 11:39 AM - Posted by fausto412 (not verified)

interesting piece, good job pcper.com

now I wonder, when AMD does a global fix, whether my 6990 performance will be dramatically improved in BF3?

and what is the effect of correcting this on latency and actual framerate? will we see FPS go down in exchange for better frame times?

It is obvious Nvidia was on top of this for some time... I just don't see a proper fix in 120 days.

March 29, 2013 | 12:24 PM - Posted by Chad Wilson (not verified)

Just out of scientific curiosity, did you do a run without the splitter in the mix to confirm that the splitter is not adding any latency to your tests?

March 29, 2013 | 12:33 PM - Posted by Anonymous (not verified)

Do people actually use vsync without some form of triple buffering?

I don't see how. Jumping between 30 and 60fps or whatever is not an enjoyable, nor smooth experience.

So, if you can enable vsync AND allow the game to sweep through a normal range of available framerates, does this negate the increased frame times of constantly switching back and forth between high fps and low fps?

March 30, 2013 | 12:56 AM - Posted by bystander (not verified)

V-sync, even with triple buffering, still jumps back and forth between 16ms and 33ms, but it does it between frames. A 120hz monitor helps here, as you can have 25ms frames too, so it is less of a variance.
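The jump sizes follow from vsync quantizing presentation to refresh boundaries. A small illustrative sketch (a simplification that ignores buffering details) showing the effective frame times at 60 Hz versus 120 Hz:

```python
import math

def vsync_frame_time_ms(render_time_ms, refresh_hz):
    """With vsync, a finished frame waits for the next refresh boundary, so the
    effective frame time rounds up to a multiple of the refresh interval."""
    interval = 1000.0 / refresh_hz
    return math.ceil(render_time_ms / interval) * interval

for hz in (60, 120):
    print(hz, [round(vsync_frame_time_ms(t, hz), 1) for t in (14, 20, 30)])
# 60  -> [16.7, 33.3, 33.3]  (only ~16.7 ms or ~33.3 ms steps)
# 120 -> [16.7, 25.0, 33.3]  (8.3 ms granularity allows 25 ms in between)
```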

March 29, 2013 | 12:36 PM - Posted by Anonymous (not verified)

Furthermore, is playing a game without vsync enabled REALLY an option?

Are you sure gamers all over the world disable it to be rid of the latency issues? I'm not so sure.

I'll happily take a little latency in a heated round of Counter-Strike rather than end up dead, or miss my shot because 50% of the screen shifted 8 feet to the right (screen tearing).

Pretty much all games are unplayable without the use of vsync, and I'm not convinced it's a personal preference either; if you claim you enjoy your experience while your frames are tearing, I'd just call you a mean name that insinuates you're not telling the truth.

March 29, 2013 | 02:00 PM - Posted by Marty (not verified)

If you are a competitive player, VSync is not an option, you are lagging an extra frame behind.

March 29, 2013 | 12:45 PM - Posted by rezes

Where are the new tests? And when?

March 29, 2013 | 02:20 PM - Posted by Anonymous (not verified)

So how much did nVidia pay you?

While I can see the potential in this kind of testing, and some of the issues you have mentioned are valid, you have drawn quite a bold and one sided conclusion using the competitor's software. I'll save my judgements for when this becomes open source.

March 29, 2013 | 08:12 PM - Posted by Fergdog (not verified)

It's not purely Nvidia-made software; if you read the article or paid attention to this site you'd know Ryan co-developed this benchmark with Nvidia and has been working on it for a long time.

April 2, 2013 | 11:10 PM - Posted by Anonymous (not verified)

nVidia has to do almost everything, the amd fans need to get used to it.

AMD's years long broken bottom line and years of massive layoffs and closings mean they claim they "weren't even aware of this issue !? !! "

- like every other total screw up AMD gets caught on with their dragon drawers on the floor next to their spider platform, buck naked in epic fail and presumably clueless...

Maybe we need some depositions and subpoenas of internal emails to see just how much they covered up their epic fail here.

March 29, 2013 | 08:00 PM - Posted by Fergdog (not verified)

Quick question: for adaptive vsync, you put it on half refresh rate, didn't you?

March 29, 2013 | 08:02 PM - Posted by Fergdog (not verified)

Half refresh adaptive only really makes sense on 120hz monitors, not sure why you used that setting on a 60hz monitor for benchmarking.

March 29, 2013 | 11:06 PM - Posted by Anonymous (not verified)

Hoping you post the titan and and info today as promised!!!!!!!!!

March 29, 2013 | 11:07 PM - Posted by Anonymous (not verified)

Titan and amd

March 30, 2013 | 12:56 AM - Posted by MaxBV (not verified)

Waiting on that GTX 690 and Titan article still, hope you guys haven't forgotten.

March 30, 2013 | 10:07 AM - Posted by Luciano (not verified)

About the various vsync methods:
They're not the same code, nor are they available in the same ways.
But they are the same methods and pursue the same results.
SLI and Crossfire are not the same thing...
But...

March 30, 2013 | 10:25 AM - Posted by John Doe (not verified)

This site and the people of it suck and choke on AMD's dick every single day.

They've recently gotten a Titan and have been shilling it so far but that's all sadly, compared to how much AMD shilling is going on on here.

Suckers.

March 30, 2013 | 11:48 AM - Posted by bystander (not verified)

You should read this:
http://www.tomshardware.com/reviews/graphics-card-benchmarking-frame-rat...

"You take the red pill - you stay in Wonderland, and I show you how deep the rabbit hole goes."
- Morpheus, The Matrix

"As a rule, human beings don't respond well when their beliefs are challenged. But how would you feel if I told you that the frames-per-second method for conveying performance, as it's often presented, is fundamentally flawed? It's tough to accept, right? And, to be honest, that was my first reaction the first time I heard that Scott Wasson at The Tech Report was checking into frame times using Fraps. His initial look and continued persistence was largely responsible for drawing attention to performance "inside the second," which is often discussed in terms of uneven or stuttery playback, even in the face of high average frame rates."

March 30, 2013 | 12:15 PM - Posted by John Doe (not verified)

My statement wasn't at Scott or whoever the fuck you are talking about that I don't give a rats ass about.

It was at the runners of this site.

THIS site, you're currently commenting on, fucking SCREAMS AMD ALL OVER. Just take off your AMD shirt already. Sigh.

March 30, 2013 | 01:33 PM - Posted by billeman

Please tell me there's a way to discard the (not verified) comments :)

April 3, 2013 | 11:47 PM - Posted by Jeremy Hellstrom

I could, but ...

March 30, 2013 | 06:38 PM - Posted by tidsoptimist (not verified)

This is a fascinating and quite informative bunch of articles.
Still, I'm having some doubts, but as "bystander" stated above, it's hard for a person to be challenged this hard in their beliefs.
This will make it hard for other sites to do reviews as deep as yours, though, and I hope you somehow open source the code used, so everyone who thinks you are paid by either "team" can check it out and even run some of the tests themselves, with some modifications depending on the hardware used.

One thing I would like to know is whether three cards would make any difference at all? I know it's even more rare for people to have three cards, and going by your conclusion it wouldn't change much.
And this splitter you use, is it a passive one or is it somehow doing something to the stream?

Thank you for the informative articles.

March 31, 2013 | 02:34 PM - Posted by Luciano (not verified)

It would:

http://www.tomshardware.com/reviews/radeon-geforce-stutter-crossfire,299...

April 9, 2013 | 05:45 PM - Posted by tidsoptimist (not verified)

Thanks for the link!

Trifire has had its benefits in the past as well; that's why I started thinking about it.

March 30, 2013 | 06:45 PM - Posted by netsez (not verified)

OK, You have flooded us with a ton of charts and stats, but can you put in a paragraph or two explaining what the gaming experience FEELS like? Does a game play better with or without SLI/XFIRE, vsync on/off, etc. In the end the gameplay experience is what matters MOST.
That is something hardocp.com does well.

March 30, 2013 | 11:25 PM - Posted by Pendulously (not verified)

"Another stutter metric is going to be needed to catch and quantify them directly."

EXAMPLE: If Average FPS (over 60 seconds) is 100, then Total Frames observed over 60 seconds is 6000.

If ONE SINGLE FRAME is above 100ms, then for the y-axis value '100' (milliseconds), the x-axis value will be '99.9833' (percentile), i.e. one minus (1/6000).

If FOUR FRAMES are above 30ms, then for the y-axis value '30' (milliseconds), the x-axis value will be '99.9333' (percentile), i.e. one minus (4/6000).

If TEN FRAMES are above 20ms, then for the y-axis value '20' (milliseconds), the x-axis value will be '99.8333' (percentile), i.e. one minus (10/6000).

So instead of PERCENTILE on the X-AXIS, you can put NUMBER OF FRAMES on the X-AXIS. For the y-axis value of '100' (ms), the x-axis value will be '1' (frame), for y-axis '30' (ms), the x-axis will be '4' (frames), and so on.
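A quick sketch of the suggested axis change, assuming you have the list of per-frame times for a run (the function name and threshold values here are just for illustration): for each frame-time threshold, report the count of frames slower than it, which maps directly to the percentile the current graphs show.

```python
def frames_over_threshold(frame_times_ms, thresholds_ms=(20, 30, 100)):
    """For each threshold, return (frames slower than it, equivalent percentile)."""
    n = len(frame_times_ms)
    results = {}
    for th in thresholds_ms:
        count = sum(1 for t in frame_times_ms if t > th)
        results[th] = (count, 100.0 * (1 - count / n))
    return results

# The comment's example: out of 6000 frames, 1 frame over 100 ms, 4 over 30 ms and
# 10 over 20 ms correspond to roughly the 99.98th, 99.93rd and 99.83rd percentiles.
```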

March 31, 2013 | 05:27 AM - Posted by Anonymous (not verified)

same story in every place I go. 2 lines to say the 7970 is better than the 680 1x1, then 3 pages about temperature/power and stutter when it's 2x2 ... =_=

March 31, 2013 | 11:26 AM - Posted by Anonymous (not verified)

Well, these articles will change the graphics card industry a lot,

but I also have a suggestion: don't just test graphics cards on the PC; maybe this can be used to test console games too (Xbox 360, PS3, or Xbox 720/PS4 in the future).

April 1, 2013 | 10:18 AM - Posted by FernanDK (not verified)

hey pcper's guys,

you have my deepest sympathies. awesome job you've got there. thank you for sharing and introducing us to this new benchmark system; it is surely more trustworthy than any other way the majority uses to measure FPS nowadays.

keep it up. you have my support, and probably the whole community is also on your side.

April 2, 2013 | 04:13 AM - Posted by Anonymous (not verified)

I find your VSYNC tests to be invalid because the game needs to be able to run at a minimum of 60 FPS for VSYNC to work properly; FPS cannot be allowed to drop below 60. So you should actually fiddle with the game settings until you get a minimum of 60 FPS and only then enable VSYNC and test the results, because that's what a knowledgeable player would do. Nobody plays a game at full settings with VSYNC enabled if they can't get a minimum of 60 FPS; that's just stupid.

April 2, 2013 | 08:36 AM - Posted by JC (not verified)

I do. I can't stand the tearing and would rather have a lower frame rate that's free of tearing.

April 3, 2013 | 05:16 AM - Posted by Anonymous (not verified)

Reading is hard, isn't it ?

I said that in order for the VSYNC tests to be valid the game needs to run over 60 FPS constantly, because that's what a knowledgeable user would do in order to enjoy smooth gameplay without tearing.

Now if you're stupid enough to run VSYNC while under 60 FPS then that's your loss, still doesn't mean jack shit for the purposes of me challenging their VSYNC test.

April 2, 2013 | 01:46 PM - Posted by MadManniMan (not verified)

Hey there guys,
when will the missing parts of the article series follow? I am like a cat on a hot tin roof here ...

April 2, 2013 | 04:36 PM - Posted by HalloweenJack (not verified)

FRAPS can manipulate what it receives, so if this software is in the same place, then it too can manipulate anything - also, having yet another layer WILL slow everything down, since once the frame is finished it's being intercepted before being sent to the RAMDAC.

So basically this NVidia software, which you've had for a year, has helped NVidia 'silently' attempt to fix their own SLI frame syncing issues, but now you've 'come out' against AMD.

NVidia payroll that good now? They do need help, since Tegra 5 will be old before it's out, given that ARM have now sampled X64 V57 on 16nm.

April 3, 2013 | 07:53 AM - Posted by ThorAxe

I'm not sure if you are being serious or are just posting to make us laugh. I hope it's the latter because no one can be that stupid, can they?

The frame is intercepted before being sent to RAMDAC??...LOL!!!

April 18, 2013 | 11:20 PM - Posted by CoderX71 (not verified)

NVIDIA was responsible for developing the color overlay that sits between the game and DirectX (in the same location of the pipeline as FRAPS essentially)

Yeah ... nVidia would do anything untoward here right?


April 4, 2013 | 07:46 AM - Posted by Anonymous (not verified)

Thank you very much for finally exposing this AMD CrossFire scam of using runts and ghost frames to artificially inflate frame rates and sell its products based on false data. I applaud you for giving us a clear picture of AMD's dirty runt-and-ghost-frame games; AMD plays many benchmark games, which is why AMD fled BAPCo in shame. Nothing AMD claims is to be believed, and if it wasn't for honest independent websites like yours that expose AMD for what it has become, AMD would still be selling their inferior products based on false viral marketing and propaganda-pumping message board bullying.

Being the under dog doesn't give AMD the right to blatantly pump false benchmarks as fact. If AMD has to cheat to sell its products then AMD needs to be exposed as a cheater and people need to be made aware of it, thank you for doing so.

April 5, 2013 | 04:35 AM - Posted by Joseph C. Carbone III (not verified)

Dear Mr. Shrout,

Thank you for a tremendous amount of work, diligence, and integrity...

I thoroughly enjoyed the video with you and Tom Petersen. Although, I have to mention, I have been very disappointed in Nvidia since my purchase of a group of GTX 480s, believing, from day one, that I had thrown away more than $1500 on three unmanageable 1500 Watt hairdryers marketed as graphics cards, which were subsequently relabeled GTX 580s once the bugs were worked out, kind of like Microsoft's Vista to XP, kind of like scamming people--no, definitely scamming.

I have always been an Nvidia enthusiast since the days of 3dfx and have likewise always enjoyed anticipating and buying Nvidia's new products, and the GTX Titan is awesome.

With memories of ATI, Matrox, and Nvidia (I still have my RIVA 128), a home has been found within my memories, and that is why I am excited about what you and the rest of PC Perspective have done and are going to do. With collaboration you-all are moving a beloved industry onward toward a better future for us and for the companies we want to succeed.

Sincerely,
Joseph C. Carbone III; 05 April 2013

April 8, 2013 | 12:19 PM - Posted by Anonymous (not verified)

I find the testing methodology to be flawed.

Using a competitors product where they get to define the capture data and define the test results is not accurate scientific method.

Has anyone asked the question as to why Nvidia defined a runt as 21 scan lines?

Has an analysis been done to see how many scan lines an Nvidia product has produced that are not considered runts because they are above the arbitrary 21 scan lines, vs how many on an AMD product just happen to be at 21 or below?

I am skeptical, as you should be too as reviewers and analyzers, that the 21 scan lines figure was chosen because, for whatever reason, Nvidia products produce frames that are 22 scan lines or greater, and therefore 21 scan lines was chosen as the runt cutoff.

This would seem to be even supported by your own data, where Nvidia products do not produce any runts on any test, which would seem a remarkable situation unless you consider the fact that they get to define the metric that determines what a runt is.

Is it possible to rerun these tests where the runt was defined as perhaps 28 scan lines instead? 35 scan lines? Greater? How about report only fully rendered frames as the framerate?

These above would be much more accurate tests, as they would determine if there was a noticeable jump in AMD related runts and what the number of Nvidia related runts would become.

Would it surprise anyone that Nvidia produced 22 scan line runts vs AMD 21 scan line runts and that is why Nvidia chose 21 scan lines as the break point?
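If the raw per-frame scan-line heights were published alongside the graphs, this sensitivity check would be easy for anyone to run. A sketch of the idea (the function name, thresholds, and data layout are illustrative, not part of the actual Frame Rating tools):

```python
def runt_counts_by_threshold(scanline_heights, thresholds=(21, 28, 35)):
    """Count how many frames would be classified as runts at each scan-line cutoff.

    A height of 0 is a dropped frame and is excluded; anything at or below the
    cutoff counts as a runt.
    """
    return {th: sum(1 for h in scanline_heights if 0 < h <= th) for th in thresholds}

# If one vendor's short frames cluster just above 21 lines, the 28- and 35-line
# counts would jump sharply for that vendor; if not, the counts would barely move.
```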

April 9, 2013 | 05:50 AM - Posted by Danieldp

Hi guys,

I am new to the forum and have been following this discussion for a while now and thought I would register. I currently own nvidia (690) and I have also owned ATI (7970 CF).

In regards to your comment, sir or ma'am, I think you make an extremely valid point. I have been looking through these articles and every time I see the graphs I almost feel like there is a slight bias, maybe not intentional, as you mentioned, but I do think it is there. Looking at some of the CF results, I just don't find them accurate in regards to user experience. I remember playing on one 7970 GHz in Crysis 3, then upgrading to a CF setup, and I could immediately tell the difference: the gameplay was a lot better. There was the occasional stuttering, but nothing as bad as these graphs make it out to be... This is a somewhat subjective view, but I do think we need testing methods that are more thorough and, as you say, closer to the scientific method.

I own Nvidia, and I think anyone who does will get excited at what these graphs are saying, whereas AMD owners will probably be a bit disheartened. From everything I have read I can only conclude that Nvidia has had a big hand in this testing (mostly indirect), and therefore I cannot take these results too seriously.

There is definitely truth in what these graphs say, but I think it is blown a little out of proportion, especially with the parameters of the testing set to be Nvidia-optimized.

I welcome criticism, but please be professional about it.

Thanks,
Dan

April 9, 2013 | 09:53 AM - Posted by Danieldp

Hi guys,

This confirms what I have been saying: the CROSSFIRE SETUP WINS against the TITAN in FRAME TIME VARIANCE!

http://www.tomshardware.com/reviews/crysis-3-performance-benchmark-gamin...

Still love my 690 though =)

April 9, 2013 | 08:41 PM - Posted by Danieldp

And by win I mean JUST barely. But it does, and the results are different from PCPer's. Check the triple monitor frame time variance as well.

http://www.tomshardware.com/reviews/crysis-3-performance-benchmark-gamin...

The important thing here is to note that ALL setups are BELOW 20 ms... what does that say? All are playable.

so in my opinion, from what I just saw, if I have to choose between a titan or two sapphire vapor-x 6gig in CF, I would choose the latter. Unless you want to SLI with the titan, then it is titan FTW! But, that is just what I would do based on the results.

Again, what is shocking to me is how different these results are from PCPer's... Tom's Hardware has been around longer than any site I know of and they have always been held in the highest regard (not an opinion).

It will be interesting to see what the new AMD runt fix in July will do to these results. I am thinking they might be more than just back on the board after that (speculation).

Interesting, is it not?

April 12, 2013 | 04:13 AM - Posted by JCCIII

Dear Daniel,

With Tom’s Hardware disclosing, from the link you provided, that they do not consider their dual-card and dual-GPU results as accurate and, with their tests showing the Radeon HD 7950’s time variance at 23.8 in the 95th percentile, how is it you see PC Perspective’s research as invalidated?

I first went to Maximum PC and Tom’s Hardware to try to understand this complicated subject, but they are both behind on the subject of why we are spending much money and getting unpredictable quality on our screens.

Ultimately, good research is fundamental, and it is obvious PC Perspective has committed a great deal of resources in order to be helpful and do a good job. They have worked to present a balanced perspective for us to consider, and it is still being translated into tangible empirical value. However, I am certain, with collaboration of Tom’s Hardware and others, PC Perspective’s results are accurate, significant, and will benefit us all.

Sincerely, Joseph C. Carbone III; 12 April 2013

April 12, 2013 | 04:15 AM - Posted by JCCIII

...

April 9, 2013 | 11:18 AM - Posted by Anonymous (not verified)

Thanks Dan,

As you mentioned, there is certainly something to this stuttering. It is worthwhile to continue to optimize the test criteria so that it completely removes any Nvidia bias.

Ryan, can you comment about the possibility of raising the size of the scan lines for runts?

Thanks!

April 9, 2013 | 05:52 AM - Posted by Danieldp

Sorry, double post. XD

April 11, 2013 | 06:46 AM - Posted by Cookie (not verified)

Dan,

Just because Tom's Hardware has been around longer does not mean that they do a better job. My vote goes to PCPer; I prefer to read something proper, no offence to Tom's Hardware.

April 11, 2013 | 11:53 PM - Posted by Danieldp

Hi,

Not really the point I was making, obviously both sites have sufficient expertise. The thing I was pointing at is the vastly different results...

Dan

April 11, 2013 | 08:22 PM - Posted by Dominic (not verified)

Can you do a test with an APU like the A10-5800K in crossfire with like an HD 6670 to see if this frame rate discrepancy occurs in this circumstance as well?

I would assume it would, based on your results, but nonetheless I'm curious to see the results.


April 18, 2013 | 04:57 AM - Posted by Anonymous (not verified)

All you have to do is use RadeonPro and it will fix all these issues...

April 18, 2013 | 11:29 PM - Posted by CoderX71 (not verified)

This

April 22, 2013 | 09:07 PM - Posted by Anonymous (not verified)

I don't see this huge problem; I guess I am blind, or it's because I only run 1 screen, but I have played all the titles listed and got better FPS in all of them using my GTX 680s and my 7970s.

I still prefer my AMD cards for now for these reasons:

1) In benchmarks my AMD cards in CrossFire, overclocked, kill my 680s.

2) Graphics just look a lot nicer on the AMD cards.

3) The biggest problem is Nvidia cards can't mine!!! (That's the big killer there for me), as I'd rather make $500.00 a week off my cards if I feel like it and be able to game as well.

I am not an Nvidia or AMD fanboy, as I have 680s in one of my builds and 7970s in a couple of others, plus I have bought many of both brands' cards in between.

I think maybe there is a problem for those running triple screens that hopefully AMD fixes; you have to admit they did a hell of a lot on drivers recently that gave huge performance boosts. But at 1920x1080 60 Hz I have no issues, and there is definitely a big difference when I add a 2nd 7970, as I have swapped a card to other builds and put it back in due to the loss of performance. The other thing I use my cards for is overclocking and benchmarking, and they definitely show huge performance there. I had my 680s over 1300 MHz and they couldn't come close to two 7970s at only 1225 MHz.

So I'd have to say for working computers I will run my 7970s until there is no more money to be made, and I will game on the 680s.

Pretty much not many games need more than 1 card anyway unless you're running multiple screens and high resolutions; hell, an APU can max most console port games on the market, other than the very few true PC games we actually have.

Yes, I agree AMD should fix the damn problem. But I also don't think Nvidia fanboys should be rooting for this, because if they do, the 7970 will be a nasty card all around that's capable of a lot more than just playing games.

Just my opinion and experience from the too many cards I have owned to count in my life, from both brands.

Take Care

May 5, 2013 | 12:13 AM - Posted by Evo (not verified)

If only we had a tool such as radeon pro to tweak the crossfire to make it operate properly. If only it existed...

The Crysis 3 results are a bit questionable, as in-game vsync was causing havoc for Nvidia (input lag) and AMD (stuttering). Also, CrossFire was not engaging properly unless you alt-tabbed (this still occurs half the time). Not to mention the weird fix of opening an instance of Google Chrome to solve some of the frame rate problems people were having with AMD setups.

My 7970 CF with RadeonPro and other fixes works perfectly for me with Crysis 3, but there are some people still having issues. It also depends on when the testing was done, as when patch 1.3 was first released it caused massive problems for AMD cards that were later fixed.

May 20, 2013 | 02:17 AM - Posted by tigerclaws12894

Might have a typo or grammar error in the second-to-last paragraph of the Vsync section. Anywho, have you ever considered triple buffering on AMD solutions, and that config at 60Hz vs 120Hz as well? Input latency sure will be an issue, but it'd be nice to know if it's better with 120Hz monitors.

May 20, 2013 | 07:11 PM - Posted by ServerStation668 (not verified)

This Dell T5600 has good video benchmarks: https://www.youtube.com/watch?v=3uK1J3o1fks

August 13, 2013 | 09:04 PM - Posted by Anonymous (not verified)

I have read this article four or five times and I find it intriguing. I must admit to not understanding most of it though. However, I must say being the owner of THREE 7970's, MSI Lightning BE, MSI Ghz OC Edition & Club3D RoyalAce at a cost of around £1300.00 GBP, just shy of $2000.00 USD I feel somewhat cheated. I hope AMD's forthcoming "Fixes" will redress these issues. Brilliant article and I am looking forward to all the follow ups.

September 1, 2013 | 02:35 AM - Posted by BryanFRitt (not verified)

What would a game look like if

it got a smooth 120+ fps,
was on a 60Hz display,
and in addition to the regular 'vsync' spot, it would 'vsync' at the '1/2' way spot?
aka update the display at the top and middle, updating at these same two spots every time, and only updating at these two spots.

Would the middle 'vsync' spot be annoying? helpful? noticed? informative? etc...? (This sounds like a good way to see how important fps is)

What's a good name for this?
1/2 x vsync, 2 x vsync, vsync 1/2 x, vsync 2 x, or something else?
What's the logic behind your pick(s)?

September 2, 2013 | 04:39 PM - Posted by lyo (not verified)

(Forward note: I have bad English.)
1) Is there a difference in observed FPS between cards with more RAM?
i.e. SLI of 2x GTX 770 2GB vs SLI of 2x GTX 770 4GB?

2) Can you publish the min/max/variance of partial frames per frame?
Instead of runts, I want to know how many different "colors" there are per frame, and whether they are evenly spread.

January 30, 2014 | 12:03 PM - Posted by BBMan (not verified)

Nice review. I'm interested in how this tech is evolving.
But now I'm curious - I've read some of your test methods, but I may have missed something. I've seen mostly games that are more single-player/first-person. Is that part of your methodology? I'm thinking of more intensive object-rendering titles like Total War: Rome II that have to render myriads of objects and stress memory more. Have you considered something like that?
