
Frame Rating: Eyefinity vs Surround in Single and Multi-GPU Configurations

Author: Ryan Shrout
Manufacturer: Various

AMD Eyefinity and NVIDIA Surround plus 4K

As I mentioned on the previous page, Eyefinity and Surround are technologies that have been around for a long time but are picking up steam as the price of high quality 1080p monitors drops.  AMD was the progenitor of multi-display gaming with Eyefinity, and it has been a part of the company's promotion and marketing for Radeon graphics cards for years, on both high-end and mid-range products.


These configurations put a lot of pressure on GPUs.  A 2560x1440 screen results in a pixel count of 3.68 million, while a 5760x1080 (triple 1080p) setup results in 6.22 million pixels.  So, to maintain a steady frame rate of 60 FPS, the gamer's PC must be able to push 69% more pixels per second.  Single GPU configurations (with the possible exception of NVIDIA's GeForce GTX TITAN) are unable to meet this demand at higher quality settings, and thus many users that invest in multiple panels for Eyefinity/Surround also invest in multiple GPUs for CrossFire or SLI.

And thus our problem is stated.  AMD has said that the 13.8 Catalyst beta driver (and the frame pacing fix found in it) only applies to resolutions of 2560x1600 and below.  There was some confusion about whether this included Eyefinity, as some people speculated that because each screen is only 1920x1080, the fix would apply.  As it turns out, that isn't the case.

Another problem AMD may have coming up is the move to 4K.  Current 60 Hz 4K gaming panels like the ASUS PQ321Q use a dual-head configuration.  That means that although only a single DisplayPort cable connects to the monitor, multiple streams are carried over it, and the panel reports to the PC as a multi-monitor connection.  Essentially, gaming at 4K 60 Hz on current screens means setting up a two-screen Eyefinity or Surround configuration with each "head" running at 1920x2160.  The combined resolution is 3840x2160 for a total pixel count of 8.29 million pixels, or 2.25x the pixels of 2560x1440.

Obviously, multi-GPU setups are going to be in demand for 4K as well.
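To put those numbers in one place, here is a quick back-of-the-envelope sketch (Python, purely illustrative) that reproduces the pixel counts quoted above:

```python
# Back-of-the-envelope pixel math for the resolutions discussed above.
def mpix(width, height):
    """Total pixels in a display mode, in millions."""
    return width * height / 1e6

qhd    = mpix(2560, 1440)  # single 1440p panel: ~3.69 MPix
triple = mpix(5760, 1080)  # triple 1080p Eyefinity/Surround: ~6.22 MPix
uhd    = mpix(3840, 2160)  # 4K, driven as two 1920x2160 tiles: ~8.29 MPix

print(f"triple 1080p vs 1440p: {triple / qhd - 1:+.0%}")  # +69%
print(f"4K vs 1440p:           {uhd / qhd:.2f}x")         # 2.25x

# Pixels per second the GPU must sustain for a steady 60 FPS:
for name, mp in (("1440p", qhd), ("triple 1080p", triple), ("4K", uhd)):
    print(f"{name:>12}: {mp * 60:.0f} million pixels/s at 60 FPS")
```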


The ASUS PQ321Q

In order to test AMD Eyefinity and NVIDIA Surround with our Frame Rating configuration, we had to be a bit more creative.  Previous testing was done simply by using a dual-link DVI splitter sitting between the panel and the graphics card, with the secondary output going into our external system and capture card.  That card records the raw data and then scripts analyze the video (with overlay) to get performance data.

With Eyefinity and Surround there are three screens, so which one do we capture?  The answer turns out to be any of them.  Thanks to an updated overlay that can present as many sets of overlay bars on the screen as we request, we can capture the left, center, or right-hand screen to see performance data.  We are simply intercepting ONE of the screens rather than the ONLY screen.
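Our actual analysis scripts are more involved, but the core idea of this kind of overlay extraction is simple. Below is a minimal sketch (Python; the band-segmentation logic and the RUNT_THRESHOLD value are illustrative assumptions, not our production code):

```python
# Minimal sketch of overlay-based frame analysis (illustrative only).
# Assumes the overlay paints a vertical strip on the left edge of the screen
# whose color changes each time the game presents a frame, so one rendered
# frame occupies one colored band of scanlines in the captured video.
import numpy as np

RUNT_THRESHOLD = 20  # scanlines; an assumed cutoff below which a band is a "runt"

def color_bands(strip):
    """Split an (N, 3) RGB column into runs of identical color.
    Returns (color, height_in_scanlines) tuples, top to bottom."""
    bands, start = [], 0
    for y in range(1, len(strip)):
        if not np.array_equal(strip[y], strip[start]):
            bands.append((tuple(strip[start]), y - start))
            start = y
    bands.append((tuple(strip[start]), len(strip) - start))
    return bands

def analyze_capture(captured_frames, capture_hz=60):
    """captured_frames: list of (H, W, 3) arrays grabbed by the capture card.
    Each rendered frame's time on screen is proportional to the number of
    scanlines its overlay band occupies."""
    lines_per_frame = captured_frames[0].shape[0]
    seconds_per_scanline = 1.0 / (capture_hz * lines_per_frame)
    # Stitch the leftmost pixel column of every captured frame into one long
    # strip, so rendered frames spanning a capture boundary aren't double-counted.
    strip = np.concatenate([f[:, 0] for f in captured_frames])
    frame_times, runts = [], 0
    for _color, height in color_bands(strip):
        if height < RUNT_THRESHOLD:
            runts += 1
        frame_times.append(height * seconds_per_scanline)
    return frame_times, runts
```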


Our Datapath DVI-DL Capture Card

As it turns out, performance measurements are the same regardless of which screen you capture from, as you would expect.  For my testing, then, we decided to capture the center monitor, which also allows us to create some side-by-side animation comparison videos.

 

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: AMD Radeon HD 7970 3GB CrossFire; NVIDIA GeForce GTX 770 2GB SLI
Graphics Drivers: AMD 13.8 (beta); NVIDIA 326.41
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

 

For my testing I wanted to use the highest-end graphics offerings while maintaining price parity.  The Radeon HD 7970 GHz Edition has come down steeply in price and currently sells for as low as $370 with a 3GB frame buffer.  So while NVIDIA does have faster single-GPU offerings, the best comparison for our purposes is the GeForce GTX 770 2GB, which sells for about $390.  I realize that the GTX 780 and GTX TITAN would offer better single-GPU and SLI results for NVIDIA, but keeping prices within range is key for any PC component comparison.

September 18, 2013 | 09:18 AM - Posted by Ryan Shrout

The SEIKIs should be supported with just about any modern GPU (single head, 30 Hz), but it's the 60 Hz models that require some more testing.  I'll try my best to include them going forward!

September 18, 2013 | 11:09 AM - Posted by Batman (not verified)

Thanks Ryan & PcPer for doing this & the previous investigative work; it is much appreciated & good to see AMD taking the findings onboard to make fixes!

September 18, 2013 | 05:25 PM - Posted by Pete (not verified)

If you used a monitor that was capable of 600 FPS, would the problem persist?

September 18, 2013 | 06:31 PM - Posted by Ryan Shrout

Yes, the supported refresh rate is irrelevant here.

September 18, 2013 | 05:42 PM - Posted by Serpent of Darkness (not verified)

@ Ryan Shrout,

1. You need to state whether you're using AMD Beta Driver 13.8A (8-1-2013) or AMD Beta Driver 13.8B (8-19-2013). If you're using 13.8A on purpose during a discussion/benchmark of Surround and 1600p, multiple viewers could come to the conclusion that you did this on purpose to make AMD look bad. AMD Beta Driver 13.8A doesn't have 1600p support; it only addresses the issues for the DX11 API. 13.8B addresses 1600p and Surround, if I am not mistaken. A possible upcoming 13.8C may address the DX9 API, or that could have already been done in the new 13.9 WHQL update.

2. Personally, I can't take your discussions seriously. In your conclusions, you state things that give me the impression that you don't fully understand graphs, or that you have a poor view of AMD graphics cards. At the very least, it leads me to believe that you are biased toward NVIDIA. Having favoritism, or a biased point of view toward one company over the other, isn't a good way to approach a discussion or benchmark of any product. It doesn't help you seem serious, experienced, or reasonable to both bases (AMD and NVIDIA users). It only tells readers that you pander to one side and talk crap about the other brand's shortcomings. AnandTech doesn't do it, Guru3D doesn't do it, techpowerup.com doesn't do it either, and they all come out with really good benchmarks of computer-based products. Both bases read their benchmarks because they aren't biased. Mr. Shrout, you are biased either because you are letting people know of your hatred toward AMD, or because you want to cater discussions and benchmarks that make AMD look bad to the NVIDIA base. Those are reasonable conclusions. If I don't see a benchmark on here discussing why the GTX 600, 700 and TITAN series don't fully support DX11.1 (software support only, not hardware), you are only going to prove me right.

3. Looking at the frame time variance graphs that you posted, the AMD 7970 will have a lower minimum band because the cards push lower latency to produce batches of frames. The problem with that, and it's true, is that somewhere along the way they will produce "runt frames": frames that aren't one whole frame. It could be like 0.8 frames, or 0.9, or 0.7. On the other hand, it takes less time for AMD video cards to produce those batches of frames. NVIDIA takes longer to produce the batch because, hardware-wise, the system probably calculates whether it needs to spend more time producing an extra "whole" frame. That's why their minimum frame time band is higher than AMD's. The hardware is always trying to push 1.0 frames times x amount of frames to a batch.

September 18, 2013 | 06:30 PM - Posted by Ryan Shrout

1. You are incorrect in this assumption.  No beta or full driver release from AMD addresses Eyefinity.

2. I honestly don't understand the relevance of the DX11.1 reference here.  This story isn't about API support but rather multi-display + multi-GPU gaming.  As to the bias question, this is something that gets targeted at people all the time when their results clearly show an advantage to one side or another.  Throughout our Frame Rating series of stories I have continued to tell the truth: AMD cards are fantastic for single GPU configurations but need work on the multi-GPU side.  You can have your opinion, obviously, but we disagree.  As do many of the readers commenting here.

3. Sorry, I'm going to need more explanation of what you are saying here.  Frames are not produced in "batches" at all.  I think you are trying to describe the runt frame problem, maybe?

September 18, 2013 | 06:59 PM - Posted by Allyn Malventano

2. Personally, I can't take your comment seriously. In your post, you state things that give me the impression that you don't fully understand reviews, or that you have a poor view of NVIDIA graphics cards. At the very least, it leads me to believe that you are biased toward AMD. Having favoritism, or a biased point of view toward one company over the other, isn't a good way to approach a discussion or benchmark of any product.

Sucks how that works, doesn't it? Oh, for your point 3: it doesn't matter how fast a card can batch process *anything* if what's presented to the user is inferior to the competition. The result is all that matters. Rolling back to point 1, your statements are moot, as they are made without the far greater level of knowledge Ryan has; he speaks with AMD about these various beta versions on an almost daily basis.

September 19, 2013 | 12:16 AM - Posted by technogiant (not verified)

As AMD have just stated that their Hawaii GPUs are smaller and more efficient but not intended to compete with the "ultra extreme" GPUs from NVIDIA (aka the 780/TITAN), since that segment will be addressed by AMD's multi-GPU cards, it is all the more essential that AMD sorts these problems out properly and completely. Otherwise their product/business model is as badly flawed as their multi-GPU performance.

September 19, 2013 | 12:24 AM - Posted by technogiant (not verified)

On a slightly different note, but still regarding multi-GPU, I'd be interested in the views of you PcPer gurus on the present state of multi-GPU systems.

It's always seemed like such a waste to me, using alternate frame rendering on multi-GPU cards, where each GPU has to have access to its own complete frame-buffer-sized pool of memory.

Surely it would be better to use a tiled rendering approach, where each GPU works on individual tiles of the same frame and they share one frame-buffer-sized chunk of memory?

September 26, 2013 | 02:20 AM - Posted by kn00tcn

Someone always brings this up, & both ATI & NV have said for years that AFR brings the most performance in the least complex way (unless a game engine has inter-frame dependencies).

In the past, ATI had tiled CF as an option, and also scissor mode.

But think about this: let's say you're doing 2 tiles with a horizontal split. You may end up with one card rendering an empty sky and the second rendering a ton of detail, basically resulting in a useless solution that doesn't scale.

On top of that, you have to synchronize the tiles to display a final image at the same time, but the cards can't physically render at the exact same time, so you'll introduce lag or artifacts (which Eyefinity does see).

I would say AFR is good enough & the way to go for multiple cards, but I would want to see a new paradigm... do you remember the first Core 2 Quad? It was 2 duals stitched together. Imagine if 2 GPUs were stitched together (no more mirroring the VRAM, just adjusting the socket connections).
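The load imbalance described above is easy to see with a toy model. This sketch (Python, with invented per-tile costs, and ignoring the frame-pacing issues the article itself covers) compares ideal AFR scaling against a naive two-tile horizontal split:

```python
# Toy comparison of AFR vs. a naive two-tile horizontal split (illustrative
# only; the per-region costs are invented to mirror the sky-vs-ground example).
import random

def simulate(frames=1000):
    random.seed(0)
    afr_ms = split_ms = 0.0
    for _ in range(frames):
        sky = random.uniform(1, 3)      # cheap top tile (ms of GPU work)
        ground = random.uniform(8, 12)  # detail-heavy bottom tile (ms)
        whole = sky + ground
        # AFR: two GPUs alternate whole frames, so the sustained cost per
        # frame approaches whole / 2 (ignoring frame pacing problems).
        afr_ms += whole / 2
        # Split: both tiles must finish before the frame can be shown, so
        # the slow tile gates every frame.
        split_ms += max(sky, ground)
    print(f"single GPU:   {frames / (afr_ms * 2) * 1000:.0f} FPS")  # ~83
    print(f"AFR (2 GPUs): {frames / afr_ms * 1000:.0f} FPS")        # ~167
    print(f"2-tile split: {frames / split_ms * 1000:.0f} FPS")      # ~100

simulate()
```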

September 19, 2013 | 12:31 AM - Posted by Nvidia Shill (not verified)

http://www.brightsideofnews.com/news/2013/9/18/nvidia-launches-amd-has-i...

September 19, 2013 | 06:13 AM - Posted by Ryan Shrout

LOL.  Some stories are funny, you know?

I replied to this here: http://www.overclock.net/t/1427828/bsn-nvidia-launches-amd-has-issues-marketing-offensive-ahead-of-hawaii-launch/30#post_20827758

September 19, 2013 | 05:43 AM - Posted by Anonymous (not verified)

I don't even understand the point of this article.

http://www.anandtech.com/show/7195/amd-frame-pacing-explorer-cat138/3

Even before that article, it was known AMD was going to fix it in phases.

September 19, 2013 | 06:15 AM - Posted by Ryan Shrout

The point was to showcase the very specific Eyefinity problems compared to Surround, as they had not been discussed or shown in any form before today.

September 19, 2013 | 08:18 AM - Posted by JJ White

Will you be taking a look at the Phanteks Enthoo Primo case? According to Hardware Canucks it might be the "Case of the year", not bad for such a small company entering the case market. I would be interested in what you think about it.

Here's the link to the HwC video: http://www.youtube.com/watch?v=rg_DzdHGgN4

September 19, 2013 | 12:47 PM - Posted by BIGGRIMTIM

How are your displays connected? I was having this issue until I connected all of my displays via DisplayPort. I know this is not ideal but it has eliminated the issue for me. I have 2 HD 7970s in crossfire and 3 Dell U2410 displays.

September 19, 2013 | 01:48 PM - Posted by Anonymous (not verified)

If the Asus PQ321 supports DisplayPort 1.2 and the HD 7970 supports DP 1.2 as well, and DP 1.2 can do 4k at 60Hz, then why is 4K necessarily a "dual head" affair? Is that simply due to the way the Asus was designed?

September 19, 2013 | 02:01 PM - Posted by Anonymous (not verified)

Ok. Nevermind. The whole tiled display thing. Is there a particular reason why 4k displays have to be tiled (or multi-headed)?

September 19, 2013 | 02:05 PM - Posted by Anonymous (not verified)

Ok. Nevermind. The whole tiled display thing.

From another comment on this site by NLPsajeeth:

"Currently there are no timing controllers that support 4K@60p. In order to drive the asus/sharp at 4K@60p, two separate TCONs are used. This is why this monitor has the unique capability of supporting dual HDMI. Each HDMI port feeds into its own TCON.

There is no 4K display that can do 60Hz without tiling. 4K@60p TCONs are supposed to start shipping in small amounts this year and in mass quantities in 2014."

September 19, 2013 | 04:22 PM - Posted by coffeefoot

Keep the faith, Ryan and co. Just continue to call it like you see it and let the chips fall where they may.
Hopefully AMD will get its stuff together; otherwise they are going to lose a few folks.

September 19, 2013 | 06:42 PM - Posted by ArcRendition

I sincerely admire your journalistic integrity Ryan... as well everyone else at the PCper team!

-Stewart Graham

September 20, 2013 | 11:19 AM - Posted by ezjohny

What a difference in AMD graphics; they improved on their driver.
Is this also the case with an APU + graphics card? Good job Ryan.

September 20, 2013 | 06:42 PM - Posted by BigDaddyCF

Well, currently rolling with 2x 7970s on a 1920x1200 triple display setup. Can't say I've ever really been personally bothered by the various issues raised in the article, in regards to the frame interleaving and stepped tearing, enough to stop playing, though I trust the guys over at PCPer to give it to me straight. I noticed the stuttering with CrossFire more than anything else you guys brought up with your new testing methodology. I think most of us gamers at least gained a better understanding of the various issues involved. Sometimes my benchmarking application (be it FRAPS or Dxtory) would say I was getting a certain frame rate, but the game just felt too jittery, whereas if I disabled CrossFire the game felt smoother even at a lower framerate.

That is not to say I haven't thoroughly enjoyed my 7970s/Eyefinity setup. When I've been able to play at Eyefinity resolutions I've done so; when I haven't, I've just adjusted my quality or resolution settings until I could get a smooth enough playing experience.

Do I hope that AMD is able to smooth out those circumstances where I can't play at a given resolution/quality due to micro-stuttering with CrossFire? Yeah, that would be awesome. I think a lot of us out here still don't have a full appreciation for the phenomenon, due to not having been able to test multi-GPU solutions side by side, so it just comes down to "the game doesn't feel fluid enough at my current settings, so I'll dial them down until it does," which I'm sure people have different sensitivities to. Keep up the good work, PCPer crew.

September 21, 2013 | 06:11 PM - Posted by tbone8ty (not verified)

what about this ryan?

http://www.brightsideofnews.com/news/2013/9/18/nvidia-launches-amd-has-i...

September 22, 2013 | 03:07 AM - Posted by Davros (not verified)

Why use 2 HDMI cables when you can use a single DisplayPort cable, and the problem does not exist with DisplayPort?

September 22, 2013 | 04:04 AM - Posted by Russ (not verified)

At first I thought this article might have been over-egging the problem with Eyefinity + CrossFire. Having now disconnected my second HD 7970 and played a few games in Eyefinity, I have seen that it is not. RadeonPro may tell me that I'm getting half the FPS that I was, but my eyes see the same low-FPS experience.

Not impressed, AMD. I feel like a chump for spending £300 on a card whose only additional effect on my system has been extra heat and noise.

Still, at least I can go back and play Far Cry 3 now without the giant oversized HUD problem.

Thanks for the good article, and thanks for bending AMD's ear.

September 22, 2013 | 04:14 AM - Posted by Gregster

A damned good read, thanks Ryan. AMD owners should be pleased that these issues are highlighted, keeping AMD on their toes. Like the FCAT article, it was good to see AMD address the issue and get it fixed, and again it was PCPer who made AMD aware of the issues (like they didn't already know!) and forced them into sorting it out for their users.

September 22, 2013 | 05:23 AM - Posted by drbaltazar (not verified)

I think a lot of hardware makers define "CPU" differently than Microsoft does. You can ask; I wrote a bug report to AMD today about the 13.10 beta. If I recall, message signaled interrupts (MSI) and the extended variant (MSI-X) were implemented for consumers in Vista? We know how Vista was received, so this might be one overlooked good thing. My case: in regedit, MSI was enabled (sadly it was not for some devices, and I can't enable it), but no MSI count was set. If it isn't set, doesn't it default to one MSI per socket? I have 4 CPU cores in my i5 2500K (which Microsoft counts as only one physical CPU), so imagine an AMD 8-core FX stuck with 1 MSI/MSI-X. I think this is the cause. Sadly, on my system none were set. I normally tweak, but from what I saw on Microsoft's site it isn't a case of 0 or 1, and MS recommends a hex value; a bit too complex for my knowledge. But you guys know a lot of hardcore tweakers. If I'm right, am I the only one that used Vista?

September 22, 2013 | 05:33 AM - Posted by drbaltazar (not verified)

PS: What I wrote is for Windows 8 64-bit! But I suspect a lot of hardware makers default to 1 (probably easier to implement), since socket counts range from 2 to 12; detecting might be entertaining.
