Frame Rating: GeForce GTX 660 Ti and Radeon HD 7950

Sleeping Dogs (DirectX 11)

Welcome to Hong Kong, a vibrant neon city teeming with life, whose exotic locations and busy streets hide one of the most powerful and dangerous criminal organizations in the world: the Triads. In this open-world game, you play the role of Wei Shen, an undercover cop trying to take down the Triads from the inside out. You'll have to prove yourself worthy as you fight your way up the organization, taking part in brutal criminal activities without blowing your cover. Torn between your loyalty to the badge and a criminal code of honor, you will risk everything as the lines between truth, loyalty and justice become permanently blurred.

Our settings for Sleeping Dogs

At these settings Sleeping Dogs is very hard on current graphics cards, even ones with the power of the GTX 660 Ti and Radeon HD 7950.  Just as we saw with many previous titles, though, the CrossFire combination of two HD 7950s produces much lower observed frame rate averages once the runt frames are removed.
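
To make the runt-removal step concrete, here is a minimal sketch in Python. The per-frame scanline heights, the run length, and the 21-line cutoff are all assumptions for illustration, not the review's actual extraction code or threshold.

```python
# Hypothetical illustration of runt filtering. Frame "heights" are how
# many scanlines of the captured output each rendered frame occupied.
RUNT_THRESHOLD = 21  # assumed scanline cutoff; the real tool's value may differ

def observed_fps(frame_heights, run_seconds):
    """Observed FPS counts only frames tall enough to matter on screen."""
    useful = [h for h in frame_heights if h >= RUNT_THRESHOLD]
    return len(useful) / run_seconds

# A CrossFire-like pattern where every other frame is a sliver of scanlines:
heights = [530, 10] * 300            # 600 frames over a 10 second run
print(observed_fps(heights, 10.0))   # 30.0 observed FPS
print(len(heights) / 10.0)           # 60.0 FPS, what FRAPS would report
```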

Both single cards and the GTX 660 Tis in SLI have very consistent frame times in our graph of data above; only the CrossFire results stand out, with wildly swinging frame times.

The GTX 660 Ti scales from about a 37 FPS average over the entire run to nearly 70 FPS by adding in the second card, but the same cannot be said for AMD's Radeon HD 7950.

Even though the frame variance levels of the HD 7950s in CrossFire are consistent through much of the performance testing, once we hit the 90th percentile mark the levels skyrocket away from either single GPU or even the SLI setup.
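
As a rough sketch of how a percentile-ranked variance curve like this could be built (assuming plain lists of frame times in milliseconds; this is not the review's actual analysis script):

```python
import numpy as np

def variance_at_percentiles(frame_times_ms, marks=(50, 75, 90, 95)):
    """Rank frame-to-frame time differences and sample them at key marks."""
    deltas = np.abs(np.diff(frame_times_ms))
    return {p: round(float(np.percentile(deltas, p)), 2) for p in marks}

steady = [16.7] * 1000                     # consistent single-GPU-like run
spiky = [16.7] * 900 + [16.7, 45.0] * 50   # trouble confined to the worst frames
print(variance_at_percentiles(steady))     # near zero at every mark
print(variance_at_percentiles(spiky))      # climbs sharply past the 90th
```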

More solid scaling results from NVIDIA's SLI technology and more runt problems for CrossFire...

Even though the NVIDIA SLI results do start to show some spikes and hitches in performance from inconsistent frame times, they pale in comparison to the mess that CrossFire produces.  Both single cards show very nice results, with the edge going to the AMD Radeon HD 7950.

By nearly doubling the frame rate, the GTX 660 Ti SLI configuration looks much better than the AMD single or dual-GPU options using the HD 7950.

Only the single-card setups here truly offer a 100% stutter-free gaming experience, with both SLI and CrossFire showing some higher than expected frame time variance.

The name of the game for 5760x1080 testing is more drops and runts from the AMD HD 7950 cards running in CrossFire.

Even though the SLI results aren't as pretty as we would have liked to see, the AMD CrossFire setup is dropping frames throughout our play testing.

These two images compare individual run data from CrossFire and SLI: by losing nearly every other frame, the CrossFire data looks much better under FRAPS than what the player actually sees!
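
A hypothetical illustration of that gap: FRAPS counts a frame when the game submits it, while a dropped frame occupies no scanlines in the captured output. The zero-height convention below is an assumption for the sketch, not the actual capture format.

```python
def fraps_vs_display(frame_heights, run_seconds):
    """Compare the submitted frame rate with what actually hits the screen."""
    submitted = len(frame_heights)                    # what FRAPS counts
    displayed = sum(1 for h in frame_heights if h > 0)
    return submitted / run_seconds, displayed / run_seconds

heights = [540, 0] * 300  # nearly every other frame lost before the display
print(fraps_vs_display(heights, 10.0))  # (60.0, 30.0): FRAPS 60, viewer sees 30
```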

SLI does scale well again (from 24 FPS to 45 FPS), though it does droop down toward single-card performance levels at the tail end of this minimum FPS percentile metric.

Interestingly, because we take AWAY the dropped frames for this calculation, the HD 7950 CrossFire results look pretty good, but don't forget to include the data above to get the whole story.  Clearly the SLI solution does introduce some potential for stutter with its frame variance, but it doesn't even exceed 5 ms until we get past more than 90% of the rendered frames.

April 3, 2013 | 10:25 AM - Posted by bystander (not verified)

The TechReport articles "were" accurate when they were released, and sparked change from AMD, who went through their drivers to fix the problem.

They didn't prove TechReport wrong. They proved AMD fixed things on single-card configurations.

April 3, 2013 | 05:59 AM - Posted by Cpt. Obvious (not verified)

Why do the graphs for "FRAPS FPS" and "Observed FPS" match exactly except for Crossfire?!

There should be at least slight differences between these graphs.

You don't have FRAPS running during the capture, do you?!

April 3, 2013 | 11:15 AM - Posted by Ryan Shrout

No, FRAPS was not running at the same time as the overlay and capture testing.

April 3, 2013 | 08:38 PM - Posted by Cpt. Obvious (not verified)

So what would be the explanation for it?

The graphs match exactly.

If you measure the same scene twice, the results won't be exactly the same.

So you let the bench run, measure with FRAPS.
You let the bench run and measure with FCAT.
The results are EXACTLY the same except for CrossFire.

There has to be something wrong.

April 3, 2013 | 07:48 AM - Posted by Anonymous (not verified)

Another article to confirm that with vsync off CFX is nerfed atm...

Which doesn't affect me in the slightest as I don't play with vsync off... I won't play with vsync off, full stop.

Twin Lightning 7970s @ 1400 MHz, 60 FPS solid in Crysis 3 with RP 60 DFC and vsync on, and it's butter smooth. Slightly different to the results I see in this article, I might add.

My advice to any CFX user is simply to run with vsync on and RadeonPro DFC (Dynamic Frame Control) at 60 FPS, and you won't get 90% of the issues in these articles.

It is a pity that pcper won't simply do a review with vsync on running RadeonPro... I guess they wouldn't have much to write about then.

April 3, 2013 | 09:09 AM - Posted by Lord Binky (not verified)

I don't think dropped frames are as significant to the viewing experience as the runt frames, which also cause small tears. Also, keeping the displayed image the most current representation of the game state would seem to support dropping a frame if it gets in the way of a newer frame.

That said, I really want to see some rules of thumb applied to the graphs as to what constitutes perceptible gameplay degradation. For example, in the frame variance chart, I would think there is a slope-based assessment as to the type of problem the person would perceive.

April 3, 2013 | 11:16 AM - Posted by Ryan Shrout

The idea of "rules of thumb applied to the graphs as to what constitutes perceptible gameplay degradation" is a good one, but it is much harder to actually do.  That is the eventual goal, but in all honesty it will likely be something that changes for each gaming title.

April 4, 2013 | 01:43 PM - Posted by bystander (not verified)

I was thinking about how your FPS compared to your refresh rate affects what the average frame size would be. This might mean that what is considered a runt frame should scale based on FPS. This may also help narrow down when something is not acceptable.
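
A sketch of what bystander describes might look like this; the refresh rate, scanline count, and 20% factor are made-up example values, not anything proposed in the article:

```python
def scaled_runt_cutoff(avg_fps, refresh_hz=60, lines_per_scan=1080,
                       fraction=0.2):
    """Cutoff as a fraction of the average scanlines a frame occupies."""
    avg_height = lines_per_scan * refresh_hz / avg_fps
    return avg_height * fraction

print(scaled_runt_cutoff(60))    # ~216 scanlines at 60 FPS
print(scaled_runt_cutoff(150))   # ~86 scanlines at 150 FPS
```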

April 3, 2013 | 12:03 PM - Posted by Verticals (not verified)

Yeah, the hidden-agenda accusations are rather bizarre considering the AMD ads, but any tech site is going to get its share of unstable posters. Don't let it get to ya.

Anyway, AMD's single GPUs came out looking nicer than I would have figured. My experience with previous gens had me expecting to see a disparity in observed frames for the single cards as well. I'd also like to see tests on the last gen from both manufacturers, but I know that may not be a practical use of copy.

April 3, 2013 | 12:52 PM - Posted by Anonymous (not verified)

Windows causes more of those issues than everything else combined. That is what I meant with the long post Ryan didn't understand.

April 3, 2013 | 01:00 PM - Posted by Anonymous (not verified)

You have to use RadeonPro and set up a profile for each game being used on Crossfire setups. It's the only way to get them to run smoothly. It's an extra program and a bit of work, but the results are promising regarding frame times. I hope future AMD driver versions have this feature built in.

April 3, 2013 | 01:23 PM - Posted by Anonymous (not verified)

You use nVidia Hardware to measure SOMETHING?
Evilol....

April 3, 2013 | 03:50 PM - Posted by endiZ (not verified)

Hey Ryan, great articles.

Have you tried RadeonPro with Dynamic V-sync Control? It looks like they are using a technique similar to what Nvidia uses with Adaptive Vsync.

April 3, 2013 | 05:06 PM - Posted by Cannyone

I'm really surprised by this article. And that is an ambivalent sort of surprise! On one hand I'm pleased to see that SLI 660 Ti's do so well at 2560x1440. On the other hand I'm dismayed that Crossfire is NOT competitive. So the crux of what I'm feeling is disappointment. Because "competition" is good for me as a consumer.

See, I have a pair of Nvidia (specifically EVGA) GTX 660 Ti's in my computer. And they were chosen because I wanted to run games on a 2560x1440 display at full resolution. But it troubles me to think that AMD isn't being competitive, because that points to a market where Nvidia's dominance is most likely to mean higher prices. Which contributes to the feeling that PC gaming may indeed be on its last legs.

The bottom line, economically speaking, is that while computers are a boon to our lives, our "life" itself does not depend on them. So when things get really tight we will find ways to get by without spending money on new hardware. And both AMD and Nvidia will be forced to change to maintain a revenue stream.

... so despite the fact I own Nvidia cards I sure hope AMD can get their act together!

April 3, 2013 | 07:05 PM - Posted by Zarich (not verified)

I enjoyed this article. However, after watching all the comparison videos I learned one thing: ATI and Nvidia do not have the same colors. So out of the box, which card is producing the most accurate colors?

April 3, 2013 | 08:45 PM - Posted by Cpt. Obvious (not verified)

Why do the graphs for "FRAPS FPS" and "Observed FPS" match exactly except for Crossfire?!

There should be at least slight differences between these graphs.

You already said that the tests with FCAT and FRAPS weren't made at the same time.

So what would be the explanation for it?

The graphs match exactly.

If you measure the same scene twice, the results won't be exactly the same.

So you let the bench run, measure with FRAPS.
You let the bench run and measure with FCAT.
The results are EXACTLY the same except for CrossFire.

There has to be something wrong.

April 3, 2013 | 10:09 PM - Posted by ThorAxe

The only thing wrong is the borked Crossfire. The single GPUs and SLI are doing what they should.

April 4, 2013 | 03:24 AM - Posted by Cpt. Obvious (not verified)

Read. -> Think. -> Answer.

The results can't be exactly the same.
2 Benches, 2 different tools that give different data,
but 1 result? I don't think so.

April 4, 2013 | 09:22 AM - Posted by ThorAxe

Maybe you should re-read what has been written, or at least look at the definition of observed frame rate.

The observed frame rate will, indeed must, be identical to the FRAPS frame rate if all frames are counted as useful. That means there were no dropped or runt frames in the benchmark.

The tools use the same data. Only discarding useless frames would result in a variance. If there are no useless frames then there should not be a variance.
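
Restating ThorAxe's argument as a sketch (reusing the assumed 21-scanline cutoff from the earlier example; again, not the actual analysis code):

```python
RUNT_THRESHOLD = 21  # assumed cutoff, as in the earlier sketch

# Single-GPU-like capture: every frame is full-sized, none are dropped.
heights = [270, 265, 280, 275] * 250

# Filtering removes nothing, so observed FPS must equal the FRAPS count,
# which is why the two curves lie exactly on top of each other.
useful = [h for h in heights if h >= RUNT_THRESHOLD]
assert len(useful) == len(heights)
```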

April 4, 2013 | 11:40 AM - Posted by Cpt. Obvious (not verified)

So if you bench the same scene two times, the results will be the same? Interesting.

April 4, 2013 | 01:31 PM - Posted by Tri Wahyudianto (not verified)

Completely no. This is called "blinding and randomization" in journals: health journals, social journals, and others.

We know the same scene never produces the same results across two test runs, on any graphics card.

Since that applies to every graphics card being tested, bad results and good results alike, every card has the same chance of getting either.

In the end, we call that a fair comparison.

April 4, 2013 | 11:23 PM - Posted by ThorAxe

And there is your problem. The benchmark is only run once.

In the test run, FRAPS counts all frames rendered regardless of size. The video output is then captured via the capture card and the raw AVI file is analyzed. If there are no runts or drops (as is the case with the single GPUs tested) then the observed frame rate must be identical to the FRAPS results.

April 5, 2013 | 03:46 AM - Posted by Cpt. Obvious (not verified)

Wrong. I already asked. Ryan's answer:

April 3, 2013 | 11:15 AM - Posted by Ryan Shrout
No, FRAPS was not running at the same time as the overlay and capture testing.

I am not trolling, I know that AMD has bigger issues with MGPU than Nvidia.
The only thing I am saying:
Something has to be wrong with the graphs, as there will always be slight differences between two bench runs.

April 5, 2013 | 10:47 AM - Posted by ThorAxe

If that is the case then there would normally be a slight variation.

I don't know why it would need to be run separately, though I think the podcast mentioned issues with FRAPS and the overlay being used at the same time.

April 5, 2013 | 03:58 PM - Posted by KittenMasher (not verified)

You're right, there should be at least slight differences between the two runs. However, that doesn't mean those differences will be noticeable when displayed on a graph made for human consumption. In order for the graphs to be completely accurate, the 90 seconds or so of benchmark would have to be sampled once per frame. In at least one of the cases that's about 150 FPS, which means the graph would have to be roughly 13,500 pixels wide. You may have noticed that the graphs on pcper are a bit smaller than that.
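
For reference, the arithmetic under that comment's own assumptions (one plotted sample per rendered frame):

```python
seconds, fps = 90, 150
print(seconds * fps)  # 13500 samples, one pixel each: wider than any chart here
```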

April 4, 2013 | 04:08 AM - Posted by Tri Wahyudianto (not verified)

Thanks for the objective and numerical article, despite the many critics and the subjectivity of some readers.

Well done, Ryan.

thumbs up pcper.com

April 4, 2013 | 04:54 AM - Posted by sLiCeR- (not verified)

I would very much like to read a quick analysis of a previous generation's CrossFire setup (i.e. 2x 5850/5870), to try to understand whether these issues are related only to the drivers / GCN architecture or to the CrossFire technology itself.

And of course, AMD should have the ability to discuss these results with pcper.com and eventually revisit these tests with the *future* drivers that will address these issues.

Regardless of the impact this kind of article has, it's known today that there are issues with AMD's drivers. Even if there are some errors in some tests, if it were not for them we wouldn't even be looking at this. So... well done pcper.com, but don't leave it at this bombshell ;)

cheers.

April 4, 2013 | 09:18 AM - Posted by AlienAndy (not verified)

Thank you for taking the time to do this, Ryan. I used to use Crossfire, but it never quite felt like what was being displayed. What I mean is, when I ran 5770 CF it always seemed jerky and stuttery even though the readings I was seeing were pretty high (as they should have been, just ahead of a 5870).

I swore off Crossfire after that and just concluded it was down to poor drivers. I'm now using GTX 670s in SLI and I have to say it's been fantastic. No stuttering and pretty smooth. I have had the odd issue waiting for drivers (Far Cry 3), but other than that it seems to work great.

Funny really. Crossfire always felt rather crude to me. Would also love to see some 5xxx results!

April 4, 2013 | 10:22 AM - Posted by AlienAndy (not verified)

BTW could you also test 3dmark11 please?
