
The AMD Radeon Pro Duo Review

Manufacturer: AMD

History and Specifications

The Radeon Pro Duo has had an interesting history. It was first shown as an unbranded, dual-GPU PCB at E3 2015 last June, where AMD touted it as the ultimate graphics card for both gamers and professionals. At the time, the company thought an October launch was feasible, but that clearly didn’t work out. When pressed for information in the October/November timeframe, AMD said it had delayed the product into Q2 2016 to better align with the launch of the VR systems from Oculus and HTC/Valve.

During a GDC press event in March, AMD finally unveiled the Radeon Pro Duo brand, but it was also walking back the idea that the dual-Fiji beast was aimed, even partially, at the gaming crowd. Instead, the company talked up its benefits for game developers and content creators: 8192 stream processors for offline rendering, for example, or a tool to help game devs implement and improve multi-GPU support in upcoming games.


Anyone who pays attention to the graphics card market can see why AMD would make this positional shift with the Radeon Pro Duo. The Fiji architecture is on the way out, with Polaris due in June by AMD’s own proclamation. At $1500, the Radeon Pro Duo will stand in stark contrast to the prices of the Polaris GPUs this summer, and it sits well above any NVIDIA part in the GeForce line. And though CrossFire has improved drastically over the last several years thanks to new testing techniques, the multi-GPU ecosystem is going through a major shift with both DX12 and VR bearing down on it.

So yes, the Radeon Pro Duo has both RADEON and PRO right there in the name. What’s a respectable PC Perspective graphics reviewer supposed to do with a card like that if it finds its way into the office? Test it, of course! I’ll take a look at a handful of recent games, as well as a new feature AMD has integrated with 3ds Max called FireRender, to showcase some of the professional chops of the new card.


Radeon Pro Duo Details

The information provided here is an overview of the specifications and design of the card itself. If you read over our preview story already, there isn’t much new here other than a couple of photos we took in-house. If you are ready to jump right to the test results, feel free to do so!

The card follows the same industrial design as the reference Radeon Fury X, and integrates a dual-pump cooler with an external fan and radiator to keep both GPUs running cool.

The 8GB of HBM (High Bandwidth Memory) on the card is split between the two Fiji XT GPUs, just like other multi-GPU options on the market. The 350 watt rated power draw is exceptionally high, exceeded only by AMD’s previous dual-GPU beast, the Radeon R9 295X2, which drew 500+ watts, and the NVIDIA GeForce GTX Titan Z, which draws 375 watts!

Here is the specification breakdown of the Radeon Pro Duo. The card has 8192 total stream processors and 128 Compute Units, split evenly between the two GPUs. You are getting two full Fiji XT GPUs in this card, an impressive feat made possible in part by the use of High Bandwidth Memory and its smaller physical footprint.

  Radeon Pro Duo R9 Nano R9 Fury R9 Fury X GTX 980 Ti TITAN X GTX 980 R9 290X
GPU Fiji XT x 2 Fiji XT Fiji Pro Fiji XT GM200 GM200 GM204 Hawaii XT
GPU Cores 8192 4096 3584 4096 2816 3072 2048 2816
Rated Clock up to 1000 MHz up to 1000 MHz 1000 MHz 1050 MHz 1000 MHz 1000 MHz 1126 MHz 1000 MHz
Texture Units 512 256 224 256 176 192 128 176
ROP Units 128 64 64 64 96 96 64 64
Memory 8GB (4GB x 2) 4GB 4GB 4GB 6GB 12GB 4GB 4GB
Memory Clock 500 MHz 500 MHz 500 MHz 500 MHz 7000 MHz 7000 MHz 7000 MHz 5000 MHz
Memory Interface 4096-bit (HBM) x 2 4096-bit (HBM) 4096-bit (HBM) 4096-bit (HBM) 384-bit 384-bit 256-bit 512-bit
Memory Bandwidth 1024 GB/s 512 GB/s 512 GB/s 512 GB/s 336 GB/s 336 GB/s 224 GB/s 320 GB/s
TDP 350 watts 175 watts 275 watts 275 watts 250 watts 250 watts 165 watts 290 watts
Peak Compute 16.38 TFLOPS 8.19 TFLOPS 7.20 TFLOPS 8.60 TFLOPS 5.63 TFLOPS 6.14 TFLOPS 4.61 TFLOPS 5.63 TFLOPS
Transistor Count 8.9B x 2 8.9B 8.9B 8.9B 8.0B 8.0B 5.2B 6.2B
Process Tech 28nm 28nm 28nm 28nm 28nm 28nm 28nm 28nm
MSRP (current) $1499 $499 $549 $649 $649 $999 $499 $329
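The peak compute figures in the table are straightforward arithmetic: each stream processor retires two FP32 operations per clock (one fused multiply-add), so peak TFLOPS is simply cores × 2 × clock. A quick sketch to check the table’s numbers:

```python
# Peak FP32 throughput: stream processors x 2 ops per clock (FMA) x clock in MHz
def peak_tflops(stream_processors: int, clock_mhz: int) -> float:
    return stream_processors * 2 * clock_mhz / 1e6

# Figures from the spec table above
print(peak_tflops(8192, 1000))  # Radeon Pro Duo -> 16.384 (table: 16.38 TFLOPS)
print(peak_tflops(4096, 1050))  # R9 Fury X      -> 8.6016 (table: 8.60 TFLOPS)
print(peak_tflops(2816, 1000))  # GTX 980 Ti     -> 5.632  (table: 5.63 TFLOPS)
```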

The Radeon Pro Duo has a rated clock speed of up to 1000 MHz. That’s the same clock speed as the R9 Fury and the rated “up to” frequency on the R9 Nano. It’s worth noting that we did see a handful of instances where the R9 Nano’s power limiting capability resulted in some extremely variable clock speeds in practice. AMD recently added a feature to its Crimson driver to disable power metering on the Nano, at the expense of more power draw, and I would assume the same option would work for the Pro Duo.

The rest of the specs are self-explanatory – they are simply double those of a single Fiji GPU. The card requires three 8-pin power connectors, so you’ll want a beefy PSU to power it. In theory, the card COULD pull as much as 525 watts (150 watts from each 8-pin connector, plus 75 watts over the PCI Express bus).
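That 525 watt ceiling comes straight from the PCI Express power limits; a minimal sketch of the arithmetic:

```python
# PCIe power limits: 75 W from the x16 slot, 150 W per 8-pin auxiliary connector
EIGHT_PIN_W = 150
SLOT_W = 75

def max_board_power(num_8pin: int) -> int:
    """Theoretical in-spec power ceiling for a card with the given connectors."""
    return num_8pin * EIGHT_PIN_W + SLOT_W

print(max_board_power(3))  # Radeon Pro Duo: 3 x 150 + 75 = 525 W
```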

AMD is definitely directing the Radeon Pro Duo towards professionals and creators, for several reasons. In terms of raw compute power, there isn’t a GPU on the market that can match what the Pro Duo can do. For developers looking for access to more GPU horsepower, the $1500 price will be more than bearable, and it gives them a pathway to start diving into multi-GPU scaling integration for VR and DX12. AMD even calls out its FireRender technology, meant to help software developers integrate a rendering path into third-party applications.


But calling yourself out as the “world’s fastest graphics card” also means you are putting yourself squarely in the sights of PC gamers. At the Capsaicin event, AMD said the card was built for "creators that game and gamers that create." AMD claims the Radeon Pro Duo is 1.5x the performance of the GeForce GTX Titan X from NVIDIA and 1.3x the performance of its own Radeon R9 295X2.

Obviously, the problem with the Radeon Pro Duo for gaming is that it depends on multi-GPU scaling to reach its potential. The Titan X is a single-GPU card, so NVIDIA has much less trouble extracting peak performance from it. AMD depends on CrossFire scaling to hit peak performance (and the rated 16 TFLOPS) in any single game. For both NVIDIA and AMD, that can be a difficult process, and it is a headache we have discussed whenever we look at multi-GPU setups, whether on a single card or across multiple cards.


The build of the Radeon Pro Duo is impressive. Much like the Fury X that was released last year, the RPD design is both sleek and classy, representing the gaming market better than any previous AMD reference products we have tested.


The Radeon Pro Duo is heavy though – so be careful if you are shipping a system with one installed. Mounting the water cooling radiator is a bit easier than on the Fury X thanks to the extended tubing, which is a nice change. The red Radeon branding along the top of the card remains part of the design as well, and it helps the card stand out if you are building a PC inside a windowed case.


Even better – there is NO noticeable pump noise or coil whine from the card! Unlike the Fury X sample we got on day one, which makes me cringe every time we start it up for testing, the Radeon Pro Duo appears to have been fitted with higher quality pumps and electrical components.


April 29, 2016 | 04:57 PM - Posted by RadioActiveLobster

The GTA V Frametime AMD graphs look like a seismograph during the largest earthquake in history.

April 29, 2016 | 05:49 PM - Posted by Klyde (not verified)

Any word on fan and pump noise and coil whine?

April 29, 2016 | 06:00 PM - Posted by biohazard918

You didn't read the article, did you? It's on the first page...

"Even better – there is NO noticeable pump noise or coil whine from the card! "

April 30, 2016 | 09:42 AM - Posted by Rosc0e (not verified)

I have two 2016 manufactured Asus Fury X cards and the pumps are completely silent. I believe AMD has that problem completely straightened out.

April 29, 2016 | 05:55 PM - Posted by iceman11 (not verified)

Hi, any reason that you're only testing aging DX11 games? The situation in DX12 games will be absolutely different, with the AMD solution the clear winner. As everybody knows, the SLI GTX solution will fail there!

April 29, 2016 | 06:15 PM - Posted by Anonymous (not verified)

Not only that, but where's Far Cry Primal or The Division, newer games in which AMD would, and has, outperformed Nvidia? But yeah, not to mention Hitman or Ashes... These games smell of handpicked titles.

April 29, 2016 | 06:17 PM - Posted by Anonymous (not verified)

None of those games have crossfire profiles.

April 29, 2016 | 06:32 PM - Posted by Anonymous (not verified)

Wrong.

April 29, 2016 | 07:22 PM - Posted by Ryan Shrout

*sigh*

You guys just keep finding your keyboards each morning, don't you?

April 29, 2016 | 07:24 PM - Posted by Anonymous (not verified)

Don't give them any excuses Ryan test them.

April 30, 2016 | 12:34 AM - Posted by Zoran (not verified)

DGLee tested Fury vs 980 Ti vs Titan X.
Wide range of games.
Up to quad cards.

Results contradict this review, to say the least.
http://iyd.kr/753

GPU Journalism comes down to "choosing right games" I guess.

May 1, 2016 | 06:45 PM - Posted by bptrav

That site didn't test the same cards as PCPER, and specifically didn't test the card this review is about. They didn't even test a Nano which is the most similar card to this one. They also didn't test frametimes at all which is probably the most important metric, especially when looking at SLI/CF stuff.

No one will argue that you can't get some good FPS numbers in benchmarks with this card, but the gaming experience/smoothness is lacking and it's too expensive. I know it's crazy but some people use video cards to PLAY GAMES.

May 23, 2016 | 08:47 AM - Posted by Ballonymous (not verified)

I actually have two Fury Xs in CF for gaming, and for GTA V, for example, all you need is to turn Vsync on and then it is butter smooth. It's the same story with other games.

May 23, 2016 | 08:49 AM - Posted by Ballonymous (not verified)

Also, if you want very good results with AMD GPUs, just force 8x tessellation through the driver and voila, you have much better results with no visible difference.

April 29, 2016 | 11:47 PM - Posted by Cyclops

It's crazy. It's as if the same person with a personality disorder keeps talking to you.

You just never know who he/she is.

April 30, 2016 | 03:00 PM - Posted by Anonymous (not verified)

that's messed up

May 4, 2016 | 11:18 PM - Posted by Jeremy Hellstrom

pretend anonymous is one person, arguing with themself.  It makes it a beautiful thing.

April 30, 2016 | 12:27 AM - Posted by Anonymous (not verified)

no DX12? that's a shame. what are you afraid of?

April 30, 2016 | 10:07 AM - Posted by fiddler (not verified)

they are afraid of losing the nvidia money for showing amd in a bad light

April 30, 2016 | 01:17 PM - Posted by bptrav

A DX12 game might be interesting but the thing is 99% of games are still DX11 so I'm not sure how representative it would be of gaming in general or even DX12 in general.

April 30, 2016 | 02:50 AM - Posted by Anonymous (not verified)

yeah something is up with these benches
amd is clearly the superior solution but they picked all nvidia biased gameworks games to show amd in a bad light

something fishy is going on

April 30, 2016 | 10:10 AM - Posted by fiddler (not verified)

nobody even cares about dx11 anymore
all the good games are dx12 now

the only reason they are testing nothing but dx11 GAMEWORKS games is because they make amd look bad

they can't say anything good about amd or they will lose the nvidia payoff money they get for making amd look bad

April 30, 2016 | 04:23 PM - Posted by bptrav

Can you name some of these games since "all the good games are DX12 now"? If you look at Steamcharts there is ONE game with DX12 support in the top100 most played games (Rise of the Tomb Raider at #94).

Let me guess Steamcharts is a giant Nvidia conspiracy too and they helped fake the moon landing.

May 1, 2016 | 12:41 AM - Posted by biohazard918

Are you high? There are a grand total of 10 games that support DX12, 2 of which are early access, and one is a "remaster" of a 10-year-old game. Did you forget what happened with DX11 when it came out? We are likely to see major titles come out on DX11 for years into the future, just like with DX9 titles. Also, idk about you, but my games library has an awful lot of DX11 titles in it. Not to mention there is currently no way to Frame Rate DX12 games, which is a pretty big fucking deal when you're testing crossfire/sli. You're also straight up deluded if you think nvidia or amd or intel is paying hardware reviewers off.

April 29, 2016 | 06:22 PM - Posted by icebug

Does AMD support using one GPU per eye for VR scenarios? Isn't this what the card is mainly being marketed for in the pro market?

April 29, 2016 | 06:58 PM - Posted by Stry8993 (not verified)

They will. Especially in DX12.

April 29, 2016 | 07:23 PM - Posted by Ryan Shrout

This literally makes no sense. VR does not mean DX12, and DX12 multi-GPU is still difficult to do from a developer point of view. 

Again....*sigh*

April 29, 2016 | 08:27 PM - Posted by Anonymous (not verified)

To add to this: VR multi-GPU is an even HARDER prospect than DX12 explicit multi-GPU. The optimisation pathway is different (latency-focused rather than throughput-focused), and because the relative mix of parallelisable vs. non-parallelisable jobs changes when you separate 'left eye' and 'right eye' rendering, all the optimising you did to minimise latency for a single GPU pretty much has to be thrown out for multi-GPU.

Thus far, NOBODY has released any VR program that supports multi-GPU outside of the proof-of-concept tech demos put out by Nvidia and AMD themselves. Valve are the only ones who have even promised a game (The Lab) would implement it, but thus far has not.

April 30, 2016 | 01:22 AM - Posted by tatakai

is that really true? Because I think the way people look at it is like running separate monitors on separate GPUs rendering separate things. That itself should not be that hard. In fact it should be easier to do than typical multiGPU because in concept its not actually the same thing as multiGPU (rendering the same thing on the same screen with different GPUs). Issue might be synchronizing and would probably require identical GPUs or slow the whole thing down to the slower GPU.

Doesn't SEEM more difficult.

May 2, 2016 | 09:45 AM - Posted by Anonymous (not verified)

You're VASTLY underestimating the difficulty of rendering for VR, and trying to do that across multiple GPUs just makes the problem harder.

You have an 11ms windows to complete all your rendering and start reading out the framebuffer. If you miss it, you're SOL. Within that 11ms, you need to perform the game world simulation, hand that data over to the GPU, perform all geometry operations, perform all buffer operations (e.g. shadowing), copy all that output across the PCIe bus to the other GPU, render the actual on-screen pixels (the only really parallelisable part), merge the two buffers into one by passing data back over the PCIe bus again, warp the final composited buffer (along with performing a just-in-time orientation correction depending on SDK path), then output it.

It's already somewhat of a hack job to do just-in-time warping as it is, due to GPUs not having a natively exposed function for "how long is it until scanout for the next VSYNC?" resulting in various tricks like late-latching to try and get data ready just in time without overshooting (missed frame = BAD) or undershooting (wasted GPU time, latency penalty). Trying to coordinate this on just one GPU is hard enough, trying to do it with two is even harder.
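The 11ms figure in the comment above comes from the 90 Hz refresh of the Rift and Vive panels: 1000/90 ≈ 11.1 ms per frame. A rough sketch of the budget, where the per-stage costs are purely illustrative assumptions, not measurements:

```python
REFRESH_HZ = 90  # Oculus Rift / HTC Vive panel refresh rate
frame_budget_ms = 1000 / REFRESH_HZ  # ~11.1 ms to produce each frame

# Assumed (illustrative) per-stage costs for a split-eye multi-GPU frame, in ms
stages = {
    "simulation + command submit": 3.0,
    "geometry/shadow passes": 3.5,
    "pixel shading (parallel across GPUs)": 2.5,
    "PCIe copy + composite": 1.5,
    "timewarp + scanout handoff": 0.5,
}
total = sum(stages.values())
print(f"budget {frame_budget_ms:.1f} ms, spent {total:.1f} ms, slack {frame_budget_ms - total:.1f} ms")
```

The point of the sketch is how little slack remains once the cross-GPU transfer and composite are added to an otherwise full frame.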

May 4, 2016 | 04:28 AM - Posted by VRerse (not verified)

Valve implemented CrossFire support for their Aperture Robot Repair demo, which was created using the Source 2 engine. I haven't tested The Lab using a multi-GPU setting yet, but I see no reason it would differ. And they stated they practically doubled the framerate using AMD's CrossFire tech in their Advanced VR Rendering talk at last year's GDC, and encouraged everybody to do the same.

April 29, 2016 | 09:00 PM - Posted by Anonymous (not verified)

This SKU, and others from both AMD and Nvidia, will be old long before the benchmarking tools and the games themselves have been optimized for DX12, Vulkan, and the VR gaming hardware/software/driver stack. There will be no definitive body of evidence for a few more years; maybe somebody will have the spare time to retest this new, and soon to be older, generation of GPU technology in a few years to settle the argument!

April 30, 2016 | 12:31 AM - Posted by Anonymous (not verified)

you shouldnt show contempt for your audience. sigh.

May 1, 2016 | 07:35 PM - Posted by Anonymous (not verified)

This!

April 29, 2016 | 06:55 PM - Posted by Steve_H

amd is no longer your friend Ryan. in fact I suspect they hate your guts today....but we loves ya ;)

April 29, 2016 | 07:06 PM - Posted by Anonymous (not verified)

Did you use the professional drivers for the professional testing?

April 29, 2016 | 07:24 PM - Posted by Ryan Shrout

No, AMD said it wasn't necessary.

The FireRender plug-in just sees them as OpenCL addressable devices. No qualification needed, nor would it help there.

May 2, 2016 | 08:45 PM - Posted by wcg

It will be interesting to see how RPD compares to a high end AMD and Nvidia pro card. They are much more expensive than $1500, which might mean the RPD is a great price for pro users. Many gamers don't realize there is a whole other market for GPUs with prices that would make your eyes water.

April 29, 2016 | 07:15 PM - Posted by Anonymous (not verified)

Needs some DX12 stuff man, though I guess those are currently difficult to benchmark or whatever?

April 29, 2016 | 07:25 PM - Posted by Ryan Shrout

Yeah, the problem is we can't use our Frame Rating methods, that are pivotal to finding things like the issues in GTA V or The Witcher 3. Soon! :)

April 29, 2016 | 09:08 PM - Posted by Anonymous (not verified)

Issues in GameWorks titles? NO!!!

Next thing you're going to tell me is that water is wet.

April 30, 2016 | 09:35 AM - Posted by Anonymous (not verified)

How DARE you suggest Gameworks might possibly create ANY issues in games just because of the overwhelming amount of games that run like garbage with it? Huh? REALLY?

Yeah... didn't think so...

April 30, 2016 | 12:32 AM - Posted by Anonymous (not verified)

oh, so you just assumed there would be issues and didn't bother to test? a good article would have mentioned this: "i didn't want to test modern titles because...". instead this comes off as a very biased review to anyone with the slightest clue, Ryan.

April 30, 2016 | 04:56 AM - Posted by Anonymous (not verified)

I am sorry, but the titles you picked (and proudly presented) are all Nvidia-biased titles from the start. It makes the review really, really smelly.

at least you could have shown some other titles:

Quantum Break (AMD biased)
AOTS (DX11, DX12, and good benchmarking)
Hitman (DX11, DX12, AMD biased)
The Division

May 2, 2016 | 04:51 PM - Posted by Anonymous (not verified)

Oh yea, that rise of the tomb raider is so nvidia biased...

It was ONLY the FLAGSHIP GAME of AMD's version of Hairworks...

LOL...

Quantum Break is a clusterfuck right now and doesn't work properly with dual gpus, same with the Division. They totally could have thrown in Hitman or the Division, but it wasn't as if this was even supposed to be presented as a gaming card....

IT LOST IN ALL PROFESSIONAL BENCHMARKS...

April 29, 2016 | 08:13 PM - Posted by BillDStrong

In professional work, the CPU is also used when doing rendering. I understand that you want to test the GPU here, but just as testing real-world scenarios in games is important to you, this is important to professionals. So a set of results with and without CPU rendering would have been great. It could also help pinpoint bandwidth issues arising from using one card vs. two.

Great article otherwise, and thanks for the hard work.

April 29, 2016 | 10:12 PM - Posted by Anonymous (not verified)

The CPU will factor less and less into rendering work as GPUs gain more ACE-type functionality; the professional drivers, graphics APIs, and software packages are doing ever more acceleration on the GPU for ray-traced rendering that used to require CPUs (lots of CPUs with lots of cores, at costs of thousands to millions of dollars).

Pro rendering is measured in frames per minute, or even minutes per frame if AO/other settings and heavy ray tracing sampling are done in the graphics software on the GPU (Pro GPUs). The CPU is a piss-poor tool for rendering, or there would never have been a need for GPUs in the first place. Future ACE-type units may even gain all of the CPU's branch/VM memory management functionality and do away with any need for the CPU for some workloads; GPUs may even get embedded CPU functionality!

Watch out for AMD's APUs on an interposer with future greater-than-HSA-1.0 functionality. Those APUs on an interposer will be so integrated that the CPU will be able to directly dispatch floating point/integer/other instructions to the GPU. The interposer-based APUs/SOCs will probably have the CPU and GPU sharing the same L1/L2/L3 instruction and data cache subsystems, wired up via the interposer such that instructions fetched into the CPU's instruction cache can automatically be forwarded to the GPU's cache and executed there if the CPU's FPUs are busy (if the CPU even has FP or SIMD units of its own on these systems). I'm looking for more dedicated ray tracing functionality, like the PowerVR (Wizard) has for mobile GPUs, on AMD and Nvidia SKUs in the future, once that catches on for the desktop gaming/pro graphics market.

April 29, 2016 | 10:15 PM - Posted by Anonymous (not verified)

Edit: so integrated with the CPUs
to: so integrated with the GPU functionality wise

April 29, 2016 | 08:57 PM - Posted by Sneaky (not verified)

Holy cow those frametimes

Just as interesting is that SLI 980tis deliver much more consistent frametimes than a single fury x

amd have a VERY long way to go with their drivers

April 30, 2016 | 06:58 AM - Posted by YuRaLL (not verified)

In these Nvidia centric games sure.

that's the problem with the review.

AMD drivers have actually come a long way and are in many ways now even better than Nvidia's. Certainly after the latest fiasco with the Nvidia drivers, when people updated and the PC refused to boot up, even after they downgraded back to the version they had before.

April 29, 2016 | 09:18 PM - Posted by Anonymous (not verified)

Is this the totality of your unbiased testing?

All these games were released with GameWorks enhancements. To be unbiased, you could include an equal number of games favoring AMD, or better yet, find games that aren't "sponsored" by either.

April 30, 2016 | 01:28 AM - Posted by SV108 (not verified)

Yeah, adding some games that are optimized for AMD or optimized for neither seems more balanced than just a slew of gameworks titles. That said, I also agree that the pro duo isn't really for gamers and will probably not sell that much.

May 6, 2016 | 12:54 PM - Posted by Anonymous (not verified)

Agreed. Need to add more games for the Red team, or go without the sponsored ones.

April 29, 2016 | 10:30 PM - Posted by SDBob (not verified)

Is the Pro Duo's limited 4GB per GPU to blame for its shortcomings compared to the 980 Ti's 6GB?

April 29, 2016 | 11:43 PM - Posted by Anonymous (not verified)

I normally do not post here, but leaving out DX12 games and putting only Nvidia GameWorks titles into your review seems very, very biased.

Do not get me wrong, I think the Pro Duo is a waste of money and not worth getting. But if you want people to stop saying you are heavily biased all the damn time... well, this review shows how biased you truly are.

Hopefully in the future you can change the view neutral people who read reviews have.

Maybe it's time to change your frametime methods, since DX12 will be the new standard.

April 29, 2016 | 11:45 PM - Posted by Bhappy (not verified)

Love how, whenever amd products lose in benchmarks, which happens more often than not, the fanboys all come out whining about the benchmark selection lol

April 30, 2016 | 12:28 AM - Posted by Anonymous (not verified)

I think AMD saw the limitations of this card for gaming and decided to offer it with a professional driver option, repurposing it for more professional/development uses. As for gaming, RTG has probably washed their hands of this SKU and moved on to Polaris and its positioning for the more affordable mainstream market. The real money is being made on the lower priced SKUs, and in the laptop/mobile market where AMD needs a better presence.

There will be time for more flagship fapping when Vega and Volta get here with HBM2, an improved driver/gaming engine software stack, more games/VR games support, and games support/optimizations for DX12/Vulkan!

Watch out for those M$ 3Es and attempts at corralling that gaming market into M$'s 30% cut of all the gaming action with its Windows 10 and UWP shenanigans! Keep plugging away, Gabe, we may very well need your efforts after all!

April 30, 2016 | 04:15 PM - Posted by AnonymousE (not verified)

Exactly , this card is for devs and so on, not gamers. But the fanboys here have missed that point.

May 1, 2016 | 12:02 PM - Posted by Anonymous (not verified)

Hell yes, for non-pro (non-full-pro hardware features) rendering with the pro drivers. I want one for Blender/other rendering as soon as Polaris hits that market and the deals begin on these and other SKUs. Even as much as this SKU costs, the availability of the professional drivers, even if the hardware may not have the full pro SKU's error correction, makes these SKUs great for educational training or other uses. Hell, a college or school could get these for students for learning only, get just one or two full FirePro versions for students' and grad students' final projects that have to be as error-free as possible, and save loads of money.

It's the pro certified drivers that make the FirePro/Quadro SKUs so costly: that, and the years of hardware/software/driver support you get with the full pro versions of the $2000-$5000 cards, with full hardware error correction and such.

The only people who would need the full costly (FirePro, Quadro) versions would be post-grad students and their professors doing actual research, where the pro versions would be required for actual products or research funded by grants with strict requirements for safety and peer review.

April 30, 2016 | 12:06 AM - Posted by Mandrake

Great write-up. Thanks Ryan!

April 30, 2016 | 12:46 AM - Posted by J Nev (not verified)

Yay you've finally updated your games list for benches. About time.

April 30, 2016 | 12:50 AM - Posted by J Nev (not verified)

Not sure why AMD bothered really, they may sell perhaps 10 at most

April 30, 2016 | 12:55 AM - Posted by Anonymous (not verified)

Shame there are no DX12 titles. That seems like the direction games are headed, and it would be nice to know how such an expensive card would handle in the long term.

Gears of War, Hitman, Quantum Break tend to do better for AMD w/ DX12.

April 30, 2016 | 01:09 AM - Posted by Anonymous (not verified)

Ryan isn't going to test any DX12 games until Nvidia gives him the tools to do so.

April 30, 2016 | 01:58 AM - Posted by Anonymous (not verified)

Correction:

Ryan isn't going to test any DX12 games until Nvidia wins in DX12 benchmarks.

May 2, 2016 | 12:12 PM - Posted by Anonymous (not verified)

"Nvidia gives him the tools"

And that's the problem right there, folks.

May 3, 2016 | 03:32 AM - Posted by Scott Coburn (not verified)

No, the actual problem was that before we had FCAT screen capture / frame rating, CrossFire was a mess and AMD / ATI wouldn't publicly admit anything was wrong. AMD fanboys were complaining that even though their benchmarks showed stellar performance, when they actually played a game with CrossFire, it didn't feel stellar. Then nVidia came along with FCAT to show what was happening, and lo and behold, AMD CrossFire drivers produced runt frames and dropped frames (that never get displayed) but artificially lower frame times, which inflated the overall reported framerate. With FCAT, these were filtered out, and the actual observed framerate looked a lot different from what was being reported before FCAT. Once this evidence saw the light of day, AMD had no choice but to improve their CrossFire drivers, and they have, for the most part.

So you AMD punters might want to get down and kiss nVidia's ass for helping "out" the problem, because without them, AMD would still be flaunting rubbish benchmark results and ignoring your pleas to fix a problem they were never going to admit to.

Let's be honest, if the shoe had been on the other foot and nVidia's SLI had been performing poorly, but the benchmarks said otherwise, AMD would have stepped in with a way to "out" nVidia. This has been going on for as long as these two companies have been competing. Be glad that it does, otherwise we'd be getting spoon fed crap and we'd have to eat it with a smile.

I for one don't give a rat's ass where the tools for performance testing come from, as long as they are made freely available and can be tested by impartial third parties to verify fairness. After they've been proven trustworthy, there shouldn't be any beef over who developed them; just be glad we have them. If there had been an nVidia-flavored spin on FCAT, I'm certain that either sites like PCPer or AMD themselves would have challenged it. Since that didn't happen, and it's become the standard for testing DX11 (and earlier) multi-GPU setups, there is no point in complaining where the tools came from now. Besides, who better than nVidia and AMD to test something that only they know so intimately? I'd argue they understand the graphics APIs, and how best to test performance in them (especially since they build the drivers), better than any other entity. So who better to develop these tools?

I'm sure PCPer and Guru3D and every other reputable review site out there want a way to fairly test DX12 titles. Not every game is going to have a built-in benchmark like AoS does. Beyond that, AoS is NOT going to be indicative of all DX12 performance across the board. It's a massive RTS that focuses on enormous numbers of draw calls and definitely benefits from async compute. Not all titles are going to lean as heavily (or at all) on these particular advantages of DX12.
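The runt/dropped-frame classification described in the comment above can be sketched in a few lines: FCAT's overlay lets the capture side count how many scanlines of each rendered frame actually reach the display, then bucket the frames. The 21-scanline runt cutoff here is an illustrative assumption, not necessarily FCAT's exact threshold:

```python
RUNT_THRESHOLD = 21  # scanlines; illustrative cutoff, not FCAT's official value

def classify_frames(scanlines_per_frame):
    """Bucket captured frames into displayed, runt, and dropped counts."""
    displayed, runts, dropped = 0, 0, 0
    for lines in scanlines_per_frame:
        if lines == 0:
            dropped += 1          # frame never reached the display
        elif lines < RUNT_THRESHOLD:
            runts += 1            # sliver too small to perceive
        else:
            displayed += 1
    return displayed, runts, dropped

# Hypothetical capture: scanlines shown per rendered frame
capture = [540, 12, 0, 530, 525, 8, 540]
print(classify_frames(capture))  # -> (4, 2, 1): 4 displayed, 2 runts, 1 dropped
```

Reported FPS counts all seven frames; an observed FPS would count only the four that meaningfully reached the screen, which is exactly the gap the comment describes.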

April 30, 2016 | 10:32 AM - Posted by Anonymous (not verified)

I really hope that AMD gets more of the professional GPU/APU(on an Interposer) workstation/HPC market for GPU/GPU accelerators, then both AMD and Nvidia can become less dependent on the gaming market for revenues. In the HPC/Workstation and supercomputer markets there will be more income to support R&D for the long term improvement of the GPU for both compute and gaming. This will free both AMD and Nvidia from the fickle fanboy gaming market as a major source of GPU revenues and allow both AMD and Nvidia to move towards giving their GPU SKUs the very same functionality that the CPU provides while at the same time providing the massively parallel vector processing/other graphics/compute functionality that no CPU can match.

I look forward to GPUs getting some dedicated ray tracing functional blocks, freeing graphics users from any dependency on Intel's high-priced CPU server/workstation SKUs for graphics/workstation and, yes, gaming uses also! AMD's ACE units are acquiring more of that CPU-type functionality with each new GCN generation, and Nvidia appears to be going that direction also with some ACE types of functionality of its own; Nvidia did hire AMD's Phil Rogers (AMD's HSA guru), so maybe by the time Volta is here async compute on the GPU will be a non-issue, except for the CPU fanatics of Intel's overpriced CPU SKUs. Let's get the CPU out of all graphics workloads, and relegate the CPU to running the OS and janitorial duties on the computing systems while the GPU does the graphics/gaming and serious number crunching!

May 4, 2016 | 07:50 AM - Posted by KillBoY_UK (not verified)

Imagination Technologies already has dedicated ray trace hardware available. Also, most people don't do their final render of scenes on the GPU; it's not due to a lack of ray trace performance/hardware, they do it on the CPU due to the lack of memory, as some scenes in film CGI are 40GB+, which no GPU cards can hold, but you can easily have that amount of system RAM. I will try and dig up a link, but it seems this is the case.

April 30, 2016 | 01:26 AM - Posted by SV108 (not verified)

Looks like a niche card that's more aimed at developers. I feel like gamers would be better served by getting any 2 high-end cards like anything in the Fury series (including the Nano) or the 980 or 980Ti.

Personally, I don't like SLI / Crossfire that much, and would rather go with just 1 card. That said, Polaris and Pascal are coming out soon, so I feel like it's better to just wait if you want 1 super-card.

Even if you don't mind the old stuff, it'll be on sale at clearance prices when the new stuff hits. Good time to grab a Nano or a 970 or whatnot, I think.

April 30, 2016 | 02:42 AM - Posted by Anonymous (not verified)

nvidia shill?
all gameworks games and no VR and no dx12 with async compute

April 30, 2016 | 03:11 AM - Posted by Anonymous (not verified)

Hey Ryan, when will Nvidia give you the green light to test DX12?

April 30, 2016 | 05:30 AM - Posted by YuRaLL (not verified)

when they've 'fixed' async in their drivers. ;)

April 30, 2016 | 08:26 AM - Posted by godrilla (not verified)

Even with async titles this card makes little sense, because AMD's own cards can beat it for less money. But now we know bias when it's clearly staring us in the face!

April 30, 2016 | 09:51 AM - Posted by Anonymous (not verified)

pcper doesn't even try to be objective anymore lol

no DX12 testing
only benching on GameWorks titles
only benching on titles where it's known AMD is behind
don't bench any of the recent games on DX11

good old pcper, I don't even want to think about what they'll come up with when Polaris comes out

April 30, 2016 | 10:04 AM - Posted by fiddler (not verified)

I'm getting sick of these reviews all over the web that are clearly paid for by Nvidia.

AMD has the superior products, yet Nvidia pays off all the websites to make AMD look bad.

You couldn't pick a more Nvidia-biased set of games if you tried; they are all GameWorks games, and no DX12 to be seen.
Nobody cares about DX11 anymore.

The few unbiased reviews around all show the Fury X is a much better card than the 980 Ti when tests are not biased against AMD, and I bet when the unbiased reviews for the Radeon Pro Duo come out, all these supposed issues with the Radeon Pro Duo in this review will just disappear.

WAKE UP, Nvidia owners!
You are being sold a lesser product for a higher price, and you are being sucked into what the paid-for websites and the BS marketing are brainwashing you to buy. If you use your brain, you will see AMD are the best in every situation and at every price point.

April 30, 2016 | 10:24 AM - Posted by Anonymous X Nvdia REP (not verified)

Yes, confirmed:

Nvidia paid PC Perspective.

I don't even have to tell you, as it's so damn obvious.

April 30, 2016 | 02:26 PM - Posted by Josh Walrath

I'm laughing pretty hard here.  There are some real technical limitations right now to benchmark VR and DX12 that simply cannot be overcome at this time.  Remember that magic VR stuff that Oculus does with frame present/shifting to allow a smoother experience for the user?  How exactly do you measure frame results when the Oculus software actively changes and adjusts presentation?

Throw in DX12 and how it again has changed how frames are displayed so that FRAPS and the frame rating tech are not accurate.  The benchmark for Ashes has also been shown to be inaccurate with what it reports vs. what is displayed and experienced.  Would you rather we present misinformation?

Too bad Ryan didn't attempt a title like DiRT Rally though.  I do know how much time he spent trying to get this all done before the weekend hit, so can't blame him for not testing everything under the sun.  I like AMD stuff, but that $1500 price tag on this card just takes it out of the reach of any sane gamer.  Get 2 x Nanos instead.

April 30, 2016 | 02:55 PM - Posted by Anonymous (not verified)

What about the Pascal and Polaris GPUs? Any ETA on the benchmarking samples, or any non-NDA-violating information about engineering/review samples? Hell, even some NDA-violating leaks from those usual sources.

The DX12/Vulkan and VR benchmarking software stack needs to catch up to the new graphics APIs/drivers and VR technology, and that is a story in itself. Looks like the testing tools are going to take time, with all the hardware/graphics API changes for GPUs and VR coming online over the next few months/years!

April 30, 2016 | 03:01 PM - Posted by Josh Walrath

Haven't heard nor seen any real leaks about the upcoming products from NV and AMD.  They are going to be very interesting from both sides, as they are taking some drastically different philosophies in design and production.  You are correct though that the tools in place to accurately measure performance under these new circumstances are just not there.  Not only that, but support for Vulkan and DX12 from the GPU guys is also very new and very unoptimized.  Things will certainly start to clear up this summer though!

May 2, 2016 | 08:35 PM - Posted by wcg

It might be time to clone what Digital Foundry does - measure everything externally. Their setup might be insanely expensive but we need to be looking at the video output rather than what software might or might not be telling us.

April 30, 2016 | 03:12 PM - Posted by Anonymous (not verified)

Ryan had time to test 4 games. He chose 4 that were under GameWorks programs, especially given how outspoken Nvidia is about helping game developers work, since day 1 of the program.

This kind of credibility and these excuses of limited time are laughable to any serious onlooker that follows GPU hardware. He could have picked 2 GameWorks games and 2 AMD games for balance, but that's still too small of a game pool.

Did Ryan not think, or did he just say screw it, which seems to be the ongoing sentiment for the past few years, not only here but around the community?

PCPerspective is becoming more of a bulletin board of news postings from other sites. More and more of the news here is linked to others doing the work. The time limit seems to be self-induced. Delay the article to have more time and include a more balanced, unbiased set of games, instead of rushing it out to be first with only GameWorks titles.

May 1, 2016 | 07:05 AM - Posted by Allyn Malventano

Some of us have lives outside of doing reviews. Ryan has family stuff to do and can't test every game under the sun. As it is, the few games tested took two days of straight testing and over 1TB of frame rating output video to process.

He could have just skipped this one / not bothered trying to get a card in to test, which would have meant zero frame pacing results as other sites are either incapable or don't bother. You're welcome. 

May 1, 2016 | 11:38 AM - Posted by Anonymous (not verified)

You could clone 10 Ryans and that still would not be enough manpower until the benchmarking software, the graphics API software, and the driver software are there and tweaked, with the proper amount of tasting/method calls built into the various software/testing/driver code/API code bases. Those benchmarking suites of software require specific API code hooks/method calls in the close-to-the-metal parts of the code base to be able to get the data to properly analyze the frame variance/frame rates/other statistics.

So until the thousands of programmers on all these different parts of the CPU and GPU software/driver/API/gaming engine/gaming engine SDK/OS device driver model/other software stacks get their code bases fully up and running with the proper amount of testing/debugging method calls built in, no amount of Ryans x 10**10 is going to be able to do any testing.

May 1, 2016 | 11:40 AM - Posted by Anonymous (not verified)

edit: tasting
to testing

damn agressive spell checker, and my blindness!

May 1, 2016 | 03:14 PM - Posted by Anonymous (not verified)

So your response is to be grateful and thankful for your biased review.

Stop making sorry excuses. He could have delayed it and picked better games, instead of being so biased in choosing games under the GameWorks program.

He got the card from somewhere other than AMD, so he had plenty of time. His restrictions and limitations were put on by himself.

Other sites have 3 to 4 times more games they test with, and just as many more card options as comparisons.

Thanks again, Allyn, for showing the typical response of PCPerspective when even the basics are questioned.

May 1, 2016 | 03:26 PM - Posted by Anonymous (not verified)

Allyn wants us to think Ryan is the only one in the tech review community doing testing and reviews who has a life.

April 30, 2016 | 06:16 PM - Posted by Anonymous (not verified)

DiRT Rally would have been a much better choice. So would several other more recent DX11 titles. It's not the lack of DX12 stuff that bothers me so much as the ridiculous decision to pick four of the worst offending Gameworks titles - one, GTA V, being nearly a 3-year-old game at this point (9/2013). The problem is, if you look at the theoretical inverse of this review, it's like benchmarking a new Nvidia card using only AotS, Hitman, Quantum Break, and the DX12 enabled 3Dmark.

Much like everyone else here, I follow this stuff obsessively. It's a hobby. If tasked with choosing four titles to guarantee an Nvidia win, at least 3 of the four Ryan picked would have certainly made my top five. That fact alone is rather disconcerting.

April 30, 2016 | 06:32 PM - Posted by iceman11 (not verified)

Forget this Nvidia paid rag!

April 30, 2016 | 08:50 PM - Posted by the elder smurf (not verified)

Well, I will never be using this website for reference when it comes to graphics card benchmarks again, for Nvidia or AMD, as this is such a biased review that these numbers look fake; let alone that these are 4 of the heaviest Nvidia titles on the market. I understand Nvidia pays you, but maybe it's time they see they need to actually do something with their products.

April 30, 2016 | 10:56 PM - Posted by Anonymous (not verified)

This article needs to either be:

-Completely taken down

OR

-Heavily revised with more titles.

Remember in 2006 when Intel had no way of competing with AMD but to threaten OEMs to stop using their chips? 10 years later, now Nvidia is pulling the same stunts. Or, PC Perspective themselves are biased and should no longer be seen as a credible source.

Ryan Shrout, you know exactly what you were doing when you made this "review" specifically with Nvidia-biased titles. You knew EXACTLY which titles to choose to push your agenda.

And to all my fellow gamers reading this: we deserve better.

May 1, 2016 | 12:18 AM - Posted by Anonymous (not verified)

All games were nvidia twimtbp

May 1, 2016 | 04:47 PM - Posted by DevilDawg (not verified)

Guys, I'm not defending PcPer here, but there are other factors than just the Nvidia titles. Yes, they were all Nvidia titles, and he could have thrown in at least one neutral or AMD title, but the main problem is the cards themselves. They are two R9 Nanos. Last time I checked, Nanos will not beat a 980 Ti in anything. Also, the card was throttling itself to stay within the power envelope. I think it reached 1GHz once, but most of the time it stayed at 950MHz or below. That will cause a lot of frame drops in itself. What Ryan could have done is raise the power limit by 50 percent so that the cards stayed near 1GHz.

This would have put it on par with Nvidia's boost technology a little, but as it stands, he just did out-of-the-box testing to see what it will do.

May 4, 2016 | 10:51 AM - Posted by Anonymous (not verified)

All great games that people have played or should play. Should they instead focus on games no one cares about just because they are DX12? Both Hitman and Ashes are easy to pass on, while the games in this review will go down as some of the best PC gaming has to offer.

May 1, 2016 | 05:27 PM - Posted by TREY LONG (not verified)

Who would buy this? CrossFire and watercooling? Much slower than 2 980 Tis, and stuttering? Is there a single DX12 game that is retail, not beta? This is not going to be a big seller.

May 1, 2016 | 05:43 PM - Posted by DevilDawg (not verified)

AMD knows this, and that is why they said they were targeting this product at devs and VR devs, hence the high price tag. But certain people are buying this product, and it is selling somewhat. I suspect the price will drop quite a bit in a few months.

May 2, 2016 | 12:55 PM - Posted by Anonymous (not verified)

They are buying it to get the pro drivers and develop for markets/applications that need to be tested with the pro drivers. So if you were to get a FirePro-branded version of this card, it would probably cost more than twice what this SKU costs!

Look at how AMD is marketing this SKU, and that's who will be using it; for any developers that do not have the money for the FirePro versions, this is a great deal. And gamers: wait for other gaming reviews of this SKU to come out that are also using the gaming-oriented graphics drivers.

For the developers (VR/others), the option to install the professional graphics drivers on this SKU is a great deal for some cash-strapped operations developing applications that target the FirePro-branded SKU market, which relies on the professional graphics drivers. Pro graphics drivers are designed for accuracy and not frames per second; the gaming drivers use different math/other libraries that are quick and dirty, designed for speed, not accuracy.

Focus on Polaris/Pascal, as that's where both AMD and Nvidia have shifted their resources; the benchmarks on the Radeon Pro Duo will have to be revisited, and go read other reviews on other websites to get more and different benchmarks!

BUT, you know that there are probably Polaris/Pascal testing/review meetings and learning that has to be done by the online press under NDA, so that the Polaris/Pascal reviews can be done and revealed at the proper time. The learning curve on the new technology is hard for everyone; go read the AMD and Nvidia white papers on their new GPU technologies, along with the white papers on the DX12/Vulkan APIs (both still being tweaked and fully adopted). It's not going to be this year before the dust settles and a clear winner can be crowned!

IT takes TIME!

May 2, 2016 | 03:29 AM - Posted by Anonymous (not verified)

If you decide that this is much slower than 2 980 Tis and with much more stuttering based on two Nvidia games, then you deserve them ripping you off selling their shit to you.

May 1, 2016 | 07:43 PM - Posted by Anonymous (not verified)

The amount of contempt shown by PCPER, save for Walrath, for the overwhelming backlash to this article is disconcerting. Guys, you should listen to your readers, because in this case we're right. Regardless of whether or not this card is a good buy, your review of it is shit. Listen to your readers and be willing to admit when you've done a disservice.

May 2, 2016 | 09:43 AM - Posted by Josh Walrath

I AM THE WALRATH!

Actually, it isn't contempt you are seeing.  I think it is exasperation.  Every time we say something critical about one company or another, we get the same response.  Shill, paid reviewer, bias... or as we say it, "SO BIOSED!"

Is this the perfect article?  No, but it is a good start.  I think it exposes some of the weaknesses of the card.  I think it also shows exactly why AMD released it when they did and at what price and their marketing push for "developer, content creator, etc." rather than gamers with a top end budget.

Win 10 and DX12 have been a rough patch for everyone involved.  I know I was pretty critical of NV in a Sept. through Jan. timeframe for their driver and SLI support.  They powered through it and now *most* titles work fine, but they have their corner cases as well where it just falls down.  AMD has improved their driver support by leaps and bounds over the past two years, but CrossFire is still a sticking point on more games than what SLI has problems with.  Keeping the pressure on these companies, no matter what side you are on, will lead to improvements.  Blind support and excuse making will only lead to stagnation because they have no reason to improve the current experience.

May 2, 2016 | 01:24 PM - Posted by Anonymous (not verified)

Please, for the love of all that is fair and just, do NOT leave the Vulkan graphics API out of any discussion going forward! The graphics APIs (DX12 and Vulkan) are brand spanking new, and the testing/benchmarking software stack has not been updated to be able to use them, or the other options (M$'s UWP nonsense and other Windows 10 DDM nonsense), as well as Steam OS based gaming options!

There are gamers and others looking to go Vulkan/other OS in the time up to and after 2020, so do not forget that before and after 2020 there will be gaming on other OSs using the Vulkan graphics API. Even under Windows 7, if M$ does not poison-pill Vulkan under Windows 7, there may be some gamers doing that for their gaming once 2020 arrives (Windows 7's EOL) and Windows 7 has to be safely locked down inside a VM to run some legacy games, with OpenGL calls wrapped in Vulkan wrapper code/API calls.

May 2, 2016 | 07:47 PM - Posted by Anonymous (not verified)

Thanks for the cool-headed response, Mr. Walrath. I re-read my post the morning after and winced at calling the review shit. It is not. I take that back, especially after you stated that you all were "keeping the pressure on these companies...." I've been considering negative press from only the consumer side, thinking that if we want a competitive marketplace, poor ole struggling AMD can't afford this kind of lopsided review. I never considered that these companies are also very interested in how their products are reviewed. Calling them out on their shortcomings, even if that means a review with games seemingly stacked in Nvidia's favor, should only solidify their resolve to improve. Having said that, this review was suspiciously devoid of the areas where this card would be superior to its Nvidia counterpart. A few of the things the card is doing right should be mentioned.

May 2, 2016 | 08:40 PM - Posted by wcg

The thing is, anyone could have anticipated the results we see here. This card is two Nanos in CrossFire, period. We know the warts with CF and shouldn't be surprised at the benchmarks. It might have been better to just concentrate on pro software results only at this stage.

May 1, 2016 | 09:32 PM - Posted by Anonymous (not verified)

You bunch of whinging wankers.

May 2, 2016 | 01:48 AM - Posted by Anonymous (not verified)

Look at all these butthurt (paid) AMD fanboys.

HAHA!

May 2, 2016 | 03:28 AM - Posted by Anonymous (not verified)

Yay, another nvidia infomercial.

And this time the "tester", who admitted recently that he is totally biased toward nVidia in his infomercial about drivers, even comes here to *sigh* about his audience.

Pcper, best "tech" site ever.

May 2, 2016 | 04:48 AM - Posted by Anonymous (not verified)

Ryan is usually better than this. But good people can also make mistakes. Let's move on.

PS: Why not add a few more games, so everybody will be happy and it will portray a better picture of the product overall?

May 2, 2016 | 03:49 PM - Posted by Jeremy Hellstrom

No way to make everyone happy, unfortunately.  We are investigating new games to benchmark, but it isn't as easy as just benchmarking commenters' favourite games.  There are qualifications which need to be assessed, such as longevity and the ability to frame rate it.  There are also those like Far Cry Primal which are not different enough from their predecessors to provide worthwhile data.

Like just about any other job, it is a hell of a lot harder than it looks to an observer.

Thanks for the rational comment! ;)

May 2, 2016 | 11:33 AM - Posted by Anonymous (not verified)

Wow! a $1500 turd. So, buy Nvidia if you want the best experience or buy AMD if you like shitting your pants over arbitrary nonsense written on a sheet of paper.

May 2, 2016 | 02:45 PM - Posted by Anonymous (not verified)

First setup with 2 GPUs working with an HMD, and zero VR tests?

May 2, 2016 | 02:48 PM - Posted by Subsailor

I think this article is a fair evaluation of the card.

May 2, 2016 | 08:21 PM - Posted by Anonymous (not verified)

I would argue that there is still a case to buy this card if you're a gamer, but only in a very specific scenario. If you're building an ITX gaming system and are looking for the absolute most performance you can wring out of the form factor, this is the card to do it. Having only one expansion slot means that the repeated suggestion of getting two single GPU cards just isn't an option. Considering how popular ITX is recently, I'm surprised this point wasn't given more attention.

May 2, 2016 | 08:27 PM - Posted by NKalabric

much waaawwww such typing

May 3, 2016 | 12:30 AM - Posted by i7Baby (not verified)

I'd like to see more on the 'create' side, e.g. a render benchmark comparison.

At this stage, I think 2 x r9 nanos is a better proposition. In Australia, you can find new Nanos for $800aud each. The Pro Duo is advertised at $2200aud.

Gaming wise, a Nano is about the same as a GTX970. 970 SLI is about the same as a 980Ti.

Rendering wise, a Nano is better than a 980Ti.

But who knows with the next generations from nVidia and Radeon?

May 3, 2016 | 03:59 AM - Posted by HOkay (not verified)

I agree some render benchmarks would be good, considering the target market for this GPU. However, I've gotta pick you up on one thing you said: the Nano is about the same as a GTX 980; it's much faster than a 970, check any review! On 970 SLI versus a 980 Ti, I think the vast majority of people would always recommend a single powerful card over 2 less powerful ones any day, as you don't run the risk of a bad SLI profile.

On topic, why did they wait to release this? If it is aimed at developers, well, they've been developing DX12/VR stuff for a few months now; ideally they needed the card when they started that process, not now that things are slowly releasing! Clearly a true dual-card setup is preferable to this, unless you absolutely need the pro drivers.

May 3, 2016 | 04:11 AM - Posted by poobear (not verified)

This would be great in my mini ITX rendering machine. I'm just allergic to ATX and micro ATX builds.

May 3, 2016 | 10:03 AM - Posted by Anonymous (not verified)

Check out AdoredTV's latest video on YouTube. He talks about how traditional sites approach benchmarking and the blunders involved.

May 6, 2016 | 05:40 AM - Posted by Anonymous (not verified)

The Pro Duo is AMD's Nano in CrossFire on one PCB. The testing revealed they performed identically. Those people complaining are not real gamers. Who doesn't play GTA 5? lol
It would have been nice to see The Division benched though.

May 6, 2016 | 06:50 AM - Posted by Paul EF (not verified)

You AMD guys are some seriously retarded individuals; it's not even funny. This card sucks. AMD blows. Crossfire is shit. And you are gay.

May 6, 2016 | 12:59 PM - Posted by Maonayze (not verified)

Well, to be honest, I would have liked the DX12 games tested too, just to see how they performed on the Pro Duo against the 980 Ti SLI setup. Even if it was just to even up the "Gameworks" titles; at least then, a lot of people wouldn't have screamed at the review. Balance is what was needed in the review, even if it made no difference to the outcome.

Let's face it, I wouldn't buy this card for this kind of money, and also because the newer cards are due soon, as well as the CF issues in a lot of newer games too. So three good reasons for this card not suiting my purposes.

All of this, however, does not excuse the nasty comments aimed at PC Per, in particular Ryan Shrout. Personal insults are the last resort of the desperate fanboy, regardless of which team they 'support'. Be nice and engage in reasoned debate over this review to get your point across; don't go on an all-out rant. Ranting does not encourage anyone to even consider what you say, let alone do anything about it.

They are just Graphics cards FFS!! Life is more important than that.

And before you call me an Nvidia Fanboy....I run a Sapphire Fury Tri-X and it has given me lots of happy gaming hours with my XL2730Z Freesync Monitor.

:-)

May 12, 2016 | 11:18 AM - Posted by Anonymous (not verified)

Done with you. I just listened to the whole podcast, and at every point where AMD was talked about (even when they had the advantage) you made them look bad. You are biased. There's no getting around that.

May 12, 2016 | 09:30 PM - Posted by AMD PRO DUO USER - Ex nvidia user (not verified)

I work at a place where we sell these incredible cards. I picked one up as I was getting problems with my dual GTX 980 Tis in Tom Clancy's Rainbow Six Siege.

Let me tell you this, guys... ALL MY GAMES run BUTTER smooth on my 1440p monitor.. I mean like BUTTER, with all the graphics details set to high.

I don't care what these reviewers say; maybe NVIDIA pays them not to test DX12 games, no clue.... BOTTOM LINE IS, this is A DREAM GPU that makes gaming on 1440p and 4K a reality on my X99 Deluxe mobo... and if you guys can afford it, get it! Also, Tom Clancy's The Division in DX12 IS LIKE GLASS! My dual 980 Ti setup was stuttering as if it was loading textures etc.. I put those GTXs on eBay; let someone else deal with this crap....

GET THE AMD PRO DUO... you won't regret it! It's future-proof in DX12 and VR!

May 24, 2016 | 10:07 PM - Posted by Anonymous (not verified)

Well, I hope the bung you took from nVidia bought you a new Porsche. If not, you have just trashed your reputation for nothing, Ryan. That selection of titles is very poor.

May 28, 2016 | 07:44 AM - Posted by i7Baby (not verified)

I think I'll stick to 2 x $499 Nanos.

May 31, 2016 | 10:47 AM - Posted by Petar Krastev (not verified)

I really love it when there are only Nvidia-branded titles in a review. Only Nvidia-branded games. Beautiful. Could have at least tried to mix it up with one independent game...
