DX12 Multi-GPU scaling up and running on Deus Ex: Mankind Divided

Subject: Graphics Cards | November 1, 2016 - 11:57 AM |
Tagged: video, rx 480, radeon, nvidia, multi-gpu, gtx 1060, geforce, dx12, deus ex: mankind divided, amd

Last week a new update was pushed out to Deus Ex: Mankind Divided that made DX12 part of the mainline build and also integrated early multi-GPU support under DX12. I wanted to quickly see what kind of scaling it provided, as we still have very few proof points on the benefit of running more than one graphics card in games utilizing the DX12 API.

As it turns out, the current build and driver combination only shows scaling on the AMD side of things. NVIDIA still doesn't have DX12 multi-GPU support enabled at this point for this title.

  • Test System
  • Core i7-5960X
  • X99 MB + 16GB DDR4
  • AMD Radeon RX 480 8GB
    • Driver: 16.10.2
  • NVIDIA GeForce GTX 1060 6GB
    • Driver: 375.63


Not only do we see great scaling in terms of average frame rates, but using PresentMon for frame time measurement we also see that the frame pacing is consistent and provides the user with a smooth gaming experience.
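For readers curious how "consistent frame pacing" is judged from PresentMon data: the tool logs a time for every presented frame, and it is the spread of those frame times, not just the average, that separates a smooth run from a stuttery one. A minimal sketch, with made-up numbers:

```python
# Sketch: quantifying frame pacing from per-frame times (ms), the kind
# of data PresentMon logs. All values below are invented for illustration.
from statistics import mean, stdev

def pacing_stats(frame_times_ms):
    """Return (average fps, frame-time standard deviation in ms)."""
    avg_fps = 1000.0 / mean(frame_times_ms)
    return avg_fps, stdev(frame_times_ms)

# A smooth run: every frame takes ~16.7 ms (about 60 fps).
smooth = [16.6, 16.7, 16.8, 16.7, 16.6]
# A stuttery run: alternating fast and slow frames.
stutter = [8.0, 25.4, 8.1, 25.3, 8.2]

fps_s, jitter_s = pacing_stats(smooth)
fps_t, jitter_t = pacing_stats(stutter)
```

Two runs can report similar average frame rates while the second feels far worse; the frame-time deviation is what exposes that.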



November 1, 2016 | 12:21 PM - Posted by Kibrakul (not verified)

Here come the AMD Fanboys in 3...2...1....

November 1, 2016 | 12:44 PM - Posted by Anonymous (not verified)

But this is DX12 multi-adaptor, so why is the GTX 1060 not showing any improvement? DX12 explicit multi-adaptor has nothing to do with SLI! What is Nvidia doing to gimp the DX12 API from being able to get at more than one GTX 1060? Does Nvidia not want to enable DX12's feature set, for obvious green reasons? What kind of hardware is not available when it is plugged into a computer, and how can this be allowed as acceptable!

The APIs and the OS need to be able to get at any and all hardware plugged into a computing system, and this is DX12 and the OS accessing any and all GPUs plugged into a computer, with the API managing the load balancing and making use of whatever GPUs are connected, not the GPUs' drivers! The drivers are supposed to do what they are told to do by the graphics API! WTF Nvidia! Nvidia's customers have the right to use the GPU hardware that they purchased without any impediments from the GPU's maker.

November 1, 2016 | 12:51 PM - Posted by Kibrakul (not verified)

I really don't care for any type of multi gpu config. Been there, done that, and it never was rock solid from either camp. One powerful GPU all the way, even for Vulkan & DX12.

November 1, 2016 | 01:05 PM - Posted by Anonymous (not verified)

You are talking about your bad experiences with CF/SLI, but DX12/Vulkan multi-adaptor is under the control of the games and gaming engines via the graphics API, with the gaming engine developers assisting the games developers in getting the games themselves able to manage multi-GPUs more effectively! The gaming industry and the OS/API developers have many more resources to make dual/multi GPU adaptors work with better scaling than just a few GPU hardware makers. The GPU makers need to make the drivers that allow for that close-to-the-metal access to their respective hardware, then get the hell out of the way and let the gaming industry and graphics API/OS industries do their jobs.

November 1, 2016 | 07:32 PM - Posted by renz (not verified)

actually it might be the other way around. no one knows the hardware better than the ones that built the hardware themselves, so in this case AMD and nvidia definitely know better how to make multi gpu work. and even for the gpu makers themselves multi gpu is no easy task. you say let the gaming industries do their job, but their job is not about managing various hardware. their job is to develop games. low level APIs are not new. they existed in the past, and yet game developers asked for something that could ease their game development on PCs that have various hardware and architectures. that's why we got DirectX and OpenGL in the first place. now we are trying to bring that complexity back.

November 2, 2016 | 03:46 AM - Posted by prtskg (not verified)

dx12 and vulkan bring a balance between max performance and ease of coding. These apis are new now and i understand why some people aren't happy with them, but they're already showing improvements, especially with cpu usage. A low level api is a necessity for the industry to utilize gpus better, as cpus won't improve like gpus.

November 2, 2016 | 07:08 AM - Posted by Kibrakul (not verified)

I will wait and see. As for now not bothered with it.

November 3, 2016 | 02:51 AM - Posted by renz (not verified)

DX12 feels like one step forward in one area and one step backward in another. and from what i can see even some DX12 games did not try to benefit from the cpu improvements being done in DX12 and still pretty much peg one cpu core like DX11 (this even happens in DX12-only games). and ultimately developers themselves need to be ready to use DX12, not forced by anyone for marketing purposes.

November 3, 2016 | 04:25 AM - Posted by JohnGR

There are USB 3.0 flash drives that are slower than some USB 2.0 flash drives. The manufacturer/developer is responsible for the final result of his product: whether that product will be a much better product thanks to a new standard, or just a bad implementation with one more "V" on the box.

November 4, 2016 | 02:11 AM - Posted by renz (not verified)

with low level APIs i don't think hardware will be a problem, because the software will be tuned for specific architectures to maximize performance potential. it will mostly depend on the developers, because the optimization is mostly in their hands, along with having the right mindset when doing low level optimization. some devs (like Oxide) mention that they refuse to do architecture-specific optimization because it is too time consuming, but that exactly defeats the purpose of going low level in the first place.

November 1, 2016 | 01:01 PM - Posted by Scott Michaud

I'm guessing this is DX12 implicit multi-adapter.

November 1, 2016 | 01:03 PM - Posted by Dusty

Ah, so it should be driver and card agnostic, as long as the GPU is visible to the game?

November 1, 2016 | 01:18 PM - Posted by Scott Michaud

No, the other way around. I'm guessing Eidos declined to implement their own load-balancing method, and just told the driver to do it. AMD did. NVIDIA didn't (at least yet?)

November 1, 2016 | 01:41 PM - Posted by Dusty

I wonder if this would work with the GTX 1070. Maybe nVidia decided if the card doesn't support SLI, it wouldn't support DX12 multi-GPU.

November 1, 2016 | 01:48 PM - Posted by Scott Michaud

It might work with the GTX 1070, but even that does not mean that NVIDIA "wouldn't support DX12 multi-GPU". It would only mean that NVIDIA does not support DX12 Implicit Multi-GPU.

I strongly doubt that NVIDIA would mess with explicit multi-GPU on any card. There's little justification for it, and it might even mess with GPU compute customers (especially university students and faculty).

November 1, 2016 | 02:20 PM - Posted by Anonymous (not verified)

It's best to ask Nvidia the hard questions and forget about that Nvidia "review guidelines" manual. Fear of lost review samples be damned.

November 1, 2016 | 03:44 PM - Posted by Anonymous (not verified)

MDA = app-controlled, doesn't care what GPU

LDA implicit = app hands control over to the driver (DX11 style)

LDA explicit = app-controlled with matching GPUs

Nvidia favors LDA explicit mode. You can find Tom Petersen saying so in Ryan's GTX 1080 interview.

November 1, 2016 | 08:23 PM - Posted by renz (not verified)

i tried looking around and 1080/1070 multi gpu did work in DXMD DX12


November 1, 2016 | 08:40 PM - Posted by Anonymous (not verified)

Nvidia might have disabled it driver side on GTX 1060

November 2, 2016 | 03:49 AM - Posted by renz (not verified)

with DX12 the support for multi gpu is baked natively into the API, unlike regular CF/SLI. so even if the 1060 does not support SLI, developers can make it work with DX12. that's why "2-way 1060" is able to work in AoS. if nvidia were blocking the 1060 from working in DX12 multi gpu, it should not work in AoS either.

November 2, 2016 | 12:57 PM - Posted by Anonymous (not verified)

They aren't using the same method.

Ashes of the Singularity uses the MDA method, where the application has full control, while Deus Ex: Mankind Divided uses LDA implicit, the old "DX11" style, handing control over to the driver for multi-adapter function.

November 1, 2016 | 07:24 PM - Posted by renz (not verified)

i see people reporting their SLI setups (970 SLI to be exact) are able to work with DX12 multi gpu in DXMD. maybe you should directly ask the game developer why those 1060s did not work.

November 1, 2016 | 07:21 PM - Posted by renz (not verified)

i doubt that nvidia is able to stop multi gpu from working, because now the support is in the API, not in drivers anymore like classic multi gpu. i see people with nvidia cards in SLI reporting their cards are able to work with DXMD DX12 multi gpu. and in the past some people did try pairing 1060s to see if they work in AoS DX12 multi gpu, and it did work. since the job of implementing multi gpu is in the hands of the game developer with DX12, it is better to ask them about it

November 2, 2016 | 01:12 PM - Posted by Anonymous (not verified)

You're confused. Multi-adapter functionality isn't mandatory, never has been, and DX12 is backwards compatible.

November 3, 2016 | 02:55 AM - Posted by renz (not verified)

i know. just because multi gpu is now supported by the API doesn't mean the game will automatically detect multiple gpus in the system and try to use them (without dev effort). just like tessellation in DX11: just because the game is being made for the DX11 API doesn't mean the game will use tessellation. what i mean is it is no longer restricted by the gpu maker. developers can make the 1060 work in DX12 multi gpu even if the card officially does not support SLI.

November 2, 2016 | 03:46 AM - Posted by Anonymous (not verified)

So why is DX12 explicit multi-adaptor working in Ashes of the Singularity with GTX 1060 SLI?

DEMD is a Gimping Evolved title, so AMD's dirty tricks are hurting Nvidia.

November 2, 2016 | 09:40 AM - Posted by Anonymous (not verified)

Because Nvidia refuses to support SLI on the 1060. As was said above by the author, DEMD chose to set up mGPU support by having the graphics driver do it. AMD's driver is set up to support mGPU. Nvidia's driver does NOT support SLI for the 1060 because they don't want you using two 1060's, they want you to buy a 1070 or 1080.

Nvidia's dirty tricks hurting Nvidia. You're just jealous.

November 4, 2016 | 02:21 AM - Posted by renz (not verified)

buying a 1070 is cheaper than buying two 1060s. multi gpu on mid range cards is a bit of a complicated story, and has been for years. people always think about adding a second card later, but most often that does not actually happen, because by the time they want to add their second card a better solution simply exists. how many people actually ended up SLI/CF-ing their 960 or 380? talk of SLI/CF on mid range cards is useless unless you do it from the get go or get the second card shortly after your first.

November 1, 2016 | 12:25 PM - Posted by PCPerFan (not verified)

I would be curious to see how a 1070 would fare against two 480s.

November 1, 2016 | 08:11 PM - Posted by Godrilla (not verified)

My 1070 AMP Extreme at 2GHz core / 9GHz memory is getting only 50 to 55 fps at 1080p, max settings, only 2x anti-aliasing, latest drivers, DX11 (DX12 is too glitchy)!

November 1, 2016 | 12:34 PM - Posted by Edkiefer (not verified)

This is on the Game dev to code low level to each type of HW.

If memory serves, multi-GPU did work for Ashes of the Singularity. If so, it's not a driver issue.

November 1, 2016 | 12:57 PM - Posted by Anonymous (not verified)

The drivers for any GPU (AMD/Nvidia/other) under DX12/Vulkan are supposed to be simplified and offer close-to-the-metal access, with the games able to manage any and all explicit multi-adaptor calls made to the graphics API. The GPU driver's job under DX12 and Vulkan is to make the GPU's metal available to the graphics APIs, and the API's code, via DX12/Vulkan, is under the direct management of the game/gaming engine so it can request work from any and all GPUs plugged into the computer. Dual, or more, GTX 1060s should be accessible through the graphics drivers via the graphics API and do the work that the game/gaming engine requested.

November 1, 2016 | 01:02 PM - Posted by Dusty

We are seeing the problem with the way things are currently. The API allows multi-GPU, the game allows multi-GPU, it's just that nVidia disabled SLI on the GTX 1060 to push more sales to the GTX 1070. I wish that games were not optimized to each and every different card, and that GPU vendors didn't put so much time and money into tweaking every driver for each game! Let the device stand as a computing device, have the drivers just communicate the API calls to the hardware, and focus on making the whole stack more agnostic to what is actually doing the computing!

November 1, 2016 | 01:28 PM - Posted by Scott Michaud

Be careful. SLI (along with CrossFire) is an implementation of Multi-GPU. Its advantage is that NVIDIA (or AMD for CrossFire) takes care of everything in the driver. Usually, this means taking the commands given by the game, sorting them by frame, and dividing them between nearly-identical cards. We might see per-eye for VR and 3D at some point too, though. At one point, SLI used to cut a single frame up, but basically no-one does that anymore (except for Firaxis for Civilization: Beyond Earth in Mantle).

It's also possible to ask the system "tell me all the GPUs you have" with the new graphics APIs (and the compute APIs as well). At that point, the developer can just manage multiple lists (or even multiple whole contexts if required) to whatever GPU they care about. This circumvents SLI, and GPU vendors should never interfere with this, but it still allows multi-GPU scaling.

It does, however, require the game developer to do (and support) everything themselves. Oxide took this burden (again, except for Firaxis for Civilization: Beyond Earth in Mantle) but they seem to be alone at the moment. That said, the major engine developers (Epic, Crytek, Unity, etc.) could jump on the scene at any moment.
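To make the explicit-mode idea above concrete, here is a rough sketch in Python (not real DX12 calls) of what "enumerate the GPUs and manage multiple lists" amounts to: the application asks for every adapter itself and decides which one renders each frame. The adapter names and helper functions are hypothetical; in real DX12 this would go through IDXGIFactory adapter enumeration, with one ID3D12Device and command queue set per adapter.

```python
# Conceptual sketch of explicit multi-adapter with an AFR policy.
# enumerate_adapters() is a stand-in for IDXGIFactory::EnumAdapters1;
# the names are made up for illustration.

def enumerate_adapters():
    # In real code: one device/queue per physical adapter found.
    return ["RX 480 #0", "RX 480 #1"]

def afr_schedule(num_frames, adapters):
    """Alternate Frame Rendering: frame i goes to adapter i % N."""
    return [adapters[i % len(adapters)] for i in range(num_frames)]

adapters = enumerate_adapters()
schedule = afr_schedule(6, adapters)
```

The point is that the load-balancing decision lives in application code, which is exactly the burden Oxide accepted and most developers have so far declined.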

November 1, 2016 | 01:48 PM - Posted by Dusty

Scott, thank you for your in depth response!

November 1, 2016 | 02:54 PM - Posted by Scott Michaud

No problem. Thanks!

November 1, 2016 | 01:54 PM - Posted by Anonymous (not verified)

CF and SLI have been poorly developed over a long period of time by the few GPU hardware makers with their limited resources and selfish market intentions. The entire OS and API industries should never have allowed any hardware makers to offer feature sets restricted at the GPU drivers' level, other than close-to-the-metal features that allow the graphics APIs, the OSs, and the games/gaming engines themselves full control over the GPUs' hardware.

The OS makers should never have white-listed ANY GPU hardware except GPUs with the simplest drivers that only allowed the closest-to-the-metal GPU features to be made available to the OSs/graphics APIs! Any GPU hardware plugged into a computer should have the simplest drivers that allow all the hardware's feature sets to be made available to the OS/APIs. What a decades-long ripoff of the consumer to not have full, always-on access to any processing hardware plugged into their computers!

It's rather shameful for anyone to find the restricting of hardware for any selfish reasons by the GPU/SoC makers acceptable. GPUs, integrated or discrete, should always be available, no matter the make, model, or brand, to work for their intended purposes under OS/API-managed abstraction layers via the simplest driver model that allows the closest-to-the-metal access! There is no excuse, and there will never be an excuse, for letting this happen in the first place!


December 24, 2016 | 08:27 PM - Posted by Anonymous (not verified)

It's not "shameful". What you describe is in reality very, very difficult to achieve.

The game API, game engine, game, and video drivers all have to be properly coded for this to work.

We're only just now starting to approach the point where Split Frame Rendering can be easily achieved, but we're not there yet.

The GPU manufacturer is mostly limited to Alternate Frame Rendering, alternating frames between identical GPUs, and when SFR finally works, that's mostly on the game developer to get working.

November 1, 2016 | 01:06 PM - Posted by Scott Michaud

AotS did explicit multi-adapter (albeit still the AFR algorithm, at least for now).

As I understand it, applying my background in OpenCL: in explicit mode, the driver just enumerates the GPUs in the system and lets the developer target commands at whatever list they want. If Eidos went for implicit instead, which I'm guessing they did for Deus Ex, then it's relying on the driver to do SLI / CrossFire-like management for the developer.

So yes, it would be a driver issue. The game developer seems to have chosen to leave the burden with the hardware developers instead of picking it up themselves.

November 1, 2016 | 01:17 PM - Posted by Edkiefer (not verified)

you guys should have tried a 480 + a 1060 then, to see if that scaled, or used other cards; not that anyone in their right mind would want two drivers installed on a single system.

Short story with DX12: just run the best single gpu you can; that way you get the best frametimes and compatibility.

November 1, 2016 | 05:29 PM - Posted by Anonymous (not verified)

AotS did MDA, since you could mix and match, and it was noted that AMD being the master provided a benefit.

November 1, 2016 | 12:43 PM - Posted by Geforcepat (not verified)

last time I checked the 1060 didn't support sli. not sure what that's doing in the list.

November 1, 2016 | 12:49 PM - Posted by Dusty

You are right! Nvidia's website confirms: http://www.geforce.com/hardware/10series/geforce-gtx-1060

November 1, 2016 | 12:52 PM - Posted by Buyers

Right, the 1060 doesn't support SLI. But this brief article isn't about SLI; it's about DX12's ability to directly utilize multiple GPUs. Explicit Multi-Adapter is what I believe it's called. DX12 should be able to use nearly any combination of GPUs, regardless of model or brand, without any brand's proprietary multi-GPU driver requirements (SLI or Crossfire), even the integrated GPUs on processors.

November 1, 2016 | 12:53 PM - Posted by Kibrakul (not verified)

Seeing how bad most game devs are at implementing things of this nature, it will be a very long time before we see this done properly.
I will stick with my powerful single GPU solution for the coming years, thnx.

November 1, 2016 | 01:17 PM - Posted by JohnGR

Microsoft was supposed to give away free code in August for multi-GPU implementation under DX12. It wasn't going to be, and wasn't meant to be, the most optimized code out there, but it was going to make the implementation much easier for the developer. A nice free advantage for DX12 over Vulkan.

November 1, 2016 | 04:39 PM - Posted by Anonymous (not verified)

Vulkan is supposed to have the same level of multi-graphics-adaptor support that DX12 has! So maybe there can be some testing with Vulkan as well once Khronos gets that support working. There are those that will never accept M$'s snooping/forcing with Windows 10! Linux gaming will use Vulkan's multi-graphics-adaptor support also!

November 9, 2016 | 01:54 AM - Posted by prabindh

The sample is here (commit dated July 2016), showing AFR.

I haven't tested it yet.

December 24, 2016 | 08:33 PM - Posted by Anonymous (not verified)

The problem with AFR is that games are starting to utilize frame-dependent code. The game takes advantage of similarities between frames, much like video compression does.

You can't use AFR and have frame dependency at the same time. (AFR is an implementation of SLI, but most people just say "SLI" without understanding how it works.)

So AFR is going to disappear and SFR will replace it. I'm not sure how many games that run DX12/Vulkan can also use AFR, but likely in another two years AFR will be history.
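To illustrate the distinction in the comment above: AFR alternates whole frames between GPUs (so a frame that reuses data from the previous frame would need it from the other GPU), while SFR divides each single frame among the GPUs. A toy sketch of the SFR split, assuming a naive even division by scanlines; real SFR implementations load-balance the split dynamically:

```python
# Toy Split Frame Rendering partition: divide a frame's rows among GPUs.
# A hypothetical simplification; real SFR adjusts the split per frame
# based on how long each GPU took on its region.

def sfr_split(height, num_gpus):
    """Return (start_row, end_row) for each GPU, covering the whole frame."""
    base, extra = divmod(height, num_gpus)
    slices, row = [], 0
    for g in range(num_gpus):
        h = base + (1 if g < extra else 0)  # spread the remainder rows
        slices.append((row, row + h))
        row += h
    return slices
```

Because every GPU works on the same frame, frame-to-frame dependencies stay local, which is why SFR sidesteps the problem described above.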

November 1, 2016 | 01:22 PM - Posted by Anonymous (not verified)

It's the job of the gaming engine developers (real systems software engineers) to assist the games developers! The gaming engine SDKs have all the necessary tools to assist the bog-standard games developers in making use of the new DX12/Vulkan feature sets. Most games developers are just working the scripts that give access to the gaming engines. All games developers make use of the real software engineers at the gaming engine companies, or use expert consultants, to get multi-adaptor support working. The games companies have the budgets to hire PhD-level folks and real software engineers to get games making use of the new graphics API feature sets. Sure it's hard, but that's why there are real software engineers; script kiddies need not apply!

November 1, 2016 | 01:09 PM - Posted by Scott Michaud

The GTX 1060 still supports multi-GPU. SLI is a driver-implemented simplification / automation of multi-GPU. You can still ask the system to enumerate all GPUs and load them individually in DX12, Vulkan, OpenCL, CUDA, etc. Just not DX<=11 and OpenGL.

I'd recommend watching my animation on the topic (and share it, too!)

November 1, 2016 | 01:15 PM - Posted by JohnGR

After that result with the GTX 1060's that show no scaling, there are two tests I would like to see someone running.

One with two GTX 1070s. If the second card there increases performance, then Nvidia is artificially crippling any card that doesn't support SLI. If not, then it is probably something that will be fixed in a future driver/patch.

One RX 480 and one GTX 1070, in case multi-GPU in Deus Ex: MD supports a configuration of GPUs from different manufacturers. And if it does, a second test replacing the GTX 1070 with a GTX 1060, to see if an Nvidia lock (in case there is one) prohibits even that kind of multi-GPU.

PS One last question. Is this the implementation Microsoft was going to give away for free through github in August?

November 1, 2016 | 01:44 PM - Posted by maonayze

Not sure why Nvidia users are poo-pooing this from the get go. It's not the developers' or AMD's fault if Nvidia chooses not to implement, or even worse blocks, 1060 MGPU access. Looking at the scaling from the two 480s, it is very good indeed, especially as it is the first iteration of MGPU in Deus Ex MD. It can only get better from here on in.

I hope that all DX12 titles get MGPU if the scaling is going to be at least this good. And I mean for both AMD and Nvidia. For years SLI/CF have been patchily implemented and as DX12 was supposed to go down another path and solve this issue then I think it can only be good for everyone in the long run if MGPU works and works well. It just takes one more headache away from owners of multiple cards regardless of which team you support.

Well Done to the DEMD developers and may it continue.

November 1, 2016 | 02:03 PM - Posted by Anonymous Nvidia User (not verified)

Sure if you consider 59% scaling very good then I guess it's very good. I personally wouldn't pay 100% extra for losing 41% of your potential when you can get a better single card that gets 100% performance at half the wattage as well. And cue the wattage doesn't matter AMD fanboys in 3, 2, 1.

November 1, 2016 | 02:50 PM - Posted by remc86007

Power wattage doesn't double unless both cards are fully stressed.

I'm running a single card now, but over the years that I was running SLI if I got any positive scaling that didn't introduce bad frame times I was happy; over a 50% improvement is pretty good considering the large number of games with no multigpu capabilities right now.

November 1, 2016 | 03:15 PM - Posted by maonayze

Firstly, why not go get a proper login and stop hiding behind the Anonymous Nvidia User.

Secondly, I do think that 59% is good for a first un-tweaked attempt at MGPU in this title by those devs... as I said, it is only going to get better (if the work is put in, of course).

It has already been proven by a few of the sites that two RX 480s bench/game faster than the 1070 (as I think this is the card you are alluding to) and just a whisker behind the 1080 in most games. If devs get MGPU right then it will benefit everyone with dual card setups.

The wattage does matter... if you want to save a few pence per week, or you have your card in a very small shoebox with no decent airflow. I personally wouldn't pay over £1000 for a TXP, but a lot of people do, and for just 20% more performance at an extra £400-£500.

Oh and I have a single card(not a 480), so not trying to justify it to myself either. Although, I have been considering that exact course of action in the future (2x480s that is...depending on MGPU progress, of course).

November 1, 2016 | 03:37 PM - Posted by Anonymous Nvidia User (not verified)

Why? So I can choose a name that is a cute metaplasmus of a word, like mayonnaise? Being registered doesn't do anything different, except when I click on your user name it tells me how long you've been a member. It also allows editing functions. I chose my name instead of being simply anonymous; there are a bunch of those running around.

Yeah, I wouldn't get the Titan X Pascal either, but I read it's the card for VR. Can't currently enjoy "Premium" VR with AMD Polaris cards. Maybe Vega will change that, or perhaps not.

The burden of multigpu performance in dx12 is on the programmers more than ever. Sometimes they only release a few patches for a game and consider it complete. We'll see what dx12 can do for us but so far it isn't looking too good. Most games are unoptimized console ports with negative scaling for both sides when compared with dx11 performance.

November 1, 2016 | 03:45 PM - Posted by Jeremy Hellstrom

Plus 10 pints for metaplasmus.

November 1, 2016 | 05:02 PM - Posted by Anonymous (not verified)

Even if you're paying 100% more, sometimes you're paying more than 100% for that card that is twice as fast. Two 480s are still cheaper than one 1080.

November 1, 2016 | 08:14 PM - Posted by renz (not verified)

it is shown that 1060s in multi gpu are able to work with AoS (DX12), so i don't think nvidia can block the 1060 from working with DXMD. multi gpu control is directly in the game maker's hands with DX12. unless the game developer did not want to make specific adjustments for 1060s, since 1060s in multi gpu are very unlikely to be used by people.

November 2, 2016 | 04:02 AM - Posted by prtskg (not verified)

That's true for the explicit multi gpu adapter, not implicit. I guess DXMD uses the implicit adapter, as that would explain the situation.

November 1, 2016 | 02:51 PM - Posted by Anonymous (not verified)

As an nvidia user, it's obvious that AMD has the upper hand in DX12. Probably mostly due to the PS4&XB1.

I'm not sure why anyone cares at this point.

Nvidia leads the high end
AMD is doing well on the lower-high end

who cares... this is all a good thing

The only thing i really wish right now is that Nvidia supported freesync as there are some cheap freesync monitors out there right now.

November 1, 2016 | 03:01 PM - Posted by Anonymous (not verified)

The future looks bright: DX12, Vulkan, and multi-GPU in games.

November 1, 2016 | 03:12 PM - Posted by maonayze

I agree with both the replies above. Good things will come out of this if ALL devs support MGPU via DX12. It gives people choice in what they want to do. Do I go for, say, another RX 480 and spend £250 to upgrade, or do I spend £400-£600 for 1080 performance? I also have a FreeSync monitor, so I wouldn't want to get an Nvidia card, because it would cost me a lot more to get the G-Sync equivalent monitor to go with it.

On that note, I think Nvidia are shooting themselves in the foot by not supporting Adaptive-Sync.

November 2, 2016 | 10:50 AM - Posted by Crunchy005 (not verified)

I agree. My next card will be AMD because I want variable refresh rate, and G-Sync is far too expensive to be worth it ($200+ extra for the same monitor versus FreeSync), on top of the fact that I get locked to Nvidia hardware, and that will NEVER change with a G-Sync monitor. Nvidia COULD support FreeSync in the future, which makes a FreeSync monitor a far better deal; we are only locked to AMD for FreeSync because Nvidia refuses to support a VESA standard feature.

November 1, 2016 | 03:28 PM - Posted by maonayze

"For the record a pair of RX480s are a tiny bit slower than a GTX 1080 (73.0fps v 75.7fps) when they all are overclocked and running DX12."

- Kaapstad, OCUK Forums 1st Nov 2016 (Deus-Ex Mankind Divided)

2.7 fps behind a 1080 (yes, a 1080, NOT a 1070), and the price/performance ratio will go up once the exchange rates in the UK settle down. I think the RX 480 will be looked upon a little differently than it is at the moment, especially as the rumored refresh is coming in 2017 with improvements in 14nm manufacturing and stricter binning processes.

Interesting indeed.

November 1, 2016 | 04:43 PM - Posted by Anonymous Nvidia User (not verified)

Yeah in a Gaming Evolved title that is highly optimized for AMD cards and not as much for Nvidia.

What's 480 performance in say Gears of War 4? Or any other neutral titles. Most titles are not going to use multiple cards in dx12.

Hitman finally added multigpu support for dx12 with their last episode 6. Only took 7-8 months.

I think both companies still support multiple cards for "crappy old" dx11 versions of games at release of the game or shortly afterward.

November 1, 2016 | 06:06 PM - Posted by Jeremy Hellstrom


Just curious what neutral means when you are using it?

November 2, 2016 | 09:51 AM - Posted by Anonymous (not verified)

To him, neutral means, "any game in which Nvidia has a performance advantage over AMD." Because in his world, the only way AMD could ever outperform Nvidia is if the game is deliberately built to advantage AMD and cheat Nvidia.

November 2, 2016 | 04:30 PM - Posted by Anonymous Nvidia User (not verified)

I know Gears of War 4 is Nvidia GameWorks. This was to counter his AMD game example. By neutral I meant any game that doesn't have Nvidia's or AMD's stamp on it. Sometimes a game has one or two GameWorks features that you can turn off, which I think would make the game neutral, but probably not in AMD fans' eyes.

AMD Gaming Evolved games usually take it to a different level. No async support for Pascal. Performance way out of performance tiers. Mid range cards beating the high end cards of Nvidia and low end beating mid range Nvidia by a big margin, even in dx11 at 1080p. Sometimes even the Fury X is eclipsed by a 390 series card.

November 2, 2016 | 07:47 PM - Posted by Jeremy Hellstrom

OK, can you give an example of a neutral game you feel shows what you are talking about?

November 2, 2016 | 08:55 PM - Posted by pdjblum

I feel for this guy. He never lets up. It's as if AMD killed someone in his family. Outside of that, I cannot imagine a rational reason to harbor and express such hate for an inanimate object, AMD. I know the hate I used to have for Apple was all about fear and jealousy and insecurity. Glad I got past that.

The perverse reader might argue as to how AMD, being an inanimate object, could kill somebody. Fuck, that is confusing.

November 3, 2016 | 12:31 PM - Posted by Anonymous Nvidia User (not verified)

It's you as an AMD fanboy drawing more incorrect conclusions. I don't hate AMD. I simply inform people and like Nvidia products/features. It's not very nice to imply something negative about someone. Disprove the post if you can and don't attack the person.

AMD is a company and yes some companies have been responsible for deaths. Engineering defects in cars for example. I'm pretty sure neither AMD or Nvidia products have directly caused the death of anyone.

November 3, 2016 | 12:19 PM - Posted by Anonymous Nvidia User (not verified)

Jeremy, it's hard to find a truly neutral game in DX12, as there aren't many yet, and those that exist usually need some tuning.

However, here are reviews of the RX 480's CrossFire prowess that take into account a bunch of different games.

The conclusion the first reaches is that you are better off with a single 1070 for the money rather than 2 480s.


The OP posted one AMD-biased benchmark (DEMD) that put it within a few fps of a single 1080. Anyone who buys based on one benchmark is asking for trouble.

Or consider this one that says crossfire or SLI is not recommended for most users.


And one site that seems to favor AMD, but reaches the same conclusion: a single card is better, and get a 1070 for 1440p and 4K. The 480 is good for 1080p, however.


Yet another site reaches the same conclusion: CrossFire 480s compared to a single 1080 are not very good value, drawing 60% more power with generally worse (putting it mildly) frame times.


There are plenty of games in there to compare CrossFire 480 performance against the 1070 and even the 1080, so readers, please draw your own conclusions about what it's worth to you.

In fact, one site actually found that two 480s outperformed a single 1080 in The Witcher 3 at 4K. I thought Nvidia GameWorks gimps AMD performance. At least that's what AMD fanboys say.

November 3, 2016 | 01:18 PM - Posted by Jeremy Hellstrom

That would be my point. Arguing that it is unfair that a game, or more correctly an engine, is "biased" toward one architecture or the other doesn't make any sense, because all of them are to one extent or another.

November 2, 2016 | 09:06 PM - Posted by Anonymous (not verified)

Okay, I've seen you use the argument, multiple times, that "It's not Nvidia's job to optimize GameWorks for AMD," whenever an AMD fan complains that GameWorks titles run like garbage on AMD cards.

Now you want to complain that Gaming Evolved titles have, "No async support for Pascal."


It's not AMD's job to optimize Gaming Evolved libraries for Nvidia.


November 3, 2016 | 11:22 AM - Posted by Anonymous Nvidia User (not verified)

Quote a post where I said that it's not Nvidia's job to optimize for AMD. It's not Nvidia's fault that AMD's cards don't utilize tessellation as well as Nvidia's cards do. The optimizing of software is on the programmers.

Async is an optional feature of DX12, and to support it for one side and not the other is biased. It is the game publisher's responsibility to incorporate these features into their game. If they are too biased, their sales will be flat as a result. Did Ashes of the Singularity or Hitman sell very well? Probably not. They definitely wouldn't get a sale from me due to poor Nvidia performance.

November 1, 2016 | 03:47 PM - Posted by YTech

Was the multi-GPU setup tested in these three scenarios?
- 2x RX480
- 2x GTX 1060
- 1x RX480 + 1x GTX 1060 (this is the setup I'm most interested in)

November 1, 2016 | 04:37 PM - Posted by BrightCandle (not verified)

Would also be useful to see 2x 1070 as the 1060 doesn't support SLI but the 1070 does.

November 1, 2016 | 04:56 PM - Posted by Anonymous (not verified)

Vulkan/DX12 GPU multi-adapter has nothing to do with CF or SLI; Vulkan/DX12 multi-adapter support comes through the graphics API and not the GPU's driver. The simplified GPU drivers that Vulkan and DX12 use only abstract the GPU's basic to-the-metal functionality, with the DX12/Vulkan APIs giving games/engines the ability to manage GPU load balancing themselves. Under explicit multi-adapter, each GPU is just another resource to the graphics API (DX12/Vulkan), and game/engine developers are free to address each GPU directly and make use of its processing power, no matter the GPU's make, model, or maker.

CF/SLI support does not matter for DX12's/Vulkan's multi-adapter, as they only call on the simplified drivers to get at the GPUs' close-to-the-metal features made available by the DX12/Vulkan GPU driver model. This functionality needs to be provided by the GPU makers in their driver packages, and if a GPU's drivers do not support it, then the GPU's maker has no right to call that hardware DX12/Vulkan compatible!

November 1, 2016 | 08:47 PM - Posted by Scott Michaud

If Deus Ex really is Implicit Multi-Adapter, the latter isn't possible. It's driver-implemented load-balancing. The game basically tells the driver: "Give me a list and I'll give you all my data. Sort it between your supported GPUs." That's not possible with two vendors.

You would need the game to manage multiple lists, some that point to AMD and some that point to NVIDIA, and choose which data to send where. That's called "Explicit Multi-Adapter".
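The distinction drawn above can be sketched as a toy model. This is plain Python, not real D3D12 API calls; the GPU names, dictionary shapes, and round-robin scheduling policy are all invented for illustration. The point it shows: in implicit mode the driver owns the split and can only link like GPUs, while in explicit mode the game keeps one queue per adapter and may mix vendors.

```python
# Toy model (NOT real D3D12 code) of implicit vs. explicit multi-adapter.
# All names and scheduling choices below are illustrative assumptions.

def implicit_multi_adapter(commands, driver_gpus):
    """Implicit: the game hands one command list to the driver, which
    load-balances across GPUs it can link (same vendor only)."""
    vendors = {gpu["vendor"] for gpu in driver_gpus}
    if len(vendors) != 1:
        raise ValueError("driver cannot link GPUs from different vendors")
    # Driver splits the work however it likes, e.g. alternate-frame rendering.
    return {gpu["name"]: commands[i::len(driver_gpus)]
            for i, gpu in enumerate(driver_gpus)}

def explicit_multi_adapter(commands, gpus):
    """Explicit: the game itself keeps one command list per adapter and
    decides which work goes where; mixed vendors are allowed."""
    schedule = {gpu["name"]: [] for gpu in gpus}
    for i, cmd in enumerate(commands):
        target = gpus[i % len(gpus)]  # the game's own balancing policy
        schedule[target["name"]].append(cmd)
    return schedule

frames = [f"frame{i}" for i in range(4)]
mixed = [{"name": "RX 480", "vendor": "AMD"},
         {"name": "GTX 1060", "vendor": "NVIDIA"}]
print(explicit_multi_adapter(frames, mixed))  # explicit handles mixed vendors
```

Running `implicit_multi_adapter(frames, mixed)` raises the `ValueError`, which is the toy-model version of "that's not possible with two vendors" from the comment above.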

November 1, 2016 | 04:46 PM - Posted by Anonymous (not verified)

Very nice to see an AAA developer pushing the limits of technology with DX12 in this case.

If you don't want to push the tech limits just go back to DX11.

November 1, 2016 | 05:18 PM - Posted by Anonymous Nvidia User (not verified)

DX12 wasn't designed to push the limits of tech. It was designed mainly to eliminate the CPU bottlenecking of DX11 (by allowing more threads to be utilized), mostly on AMD hardware. Coincidentally, the maker of DX12 also has a console with, you guessed it, AMD hardware. Nvidia does gain, but not as much as AMD does. However, Nvidia still puts up higher totals even under DirectX 12.

DX12 mainly helps older and weaker hardware close the gap with newer hardware: the weaker the CPU, the more you gain.
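A back-of-envelope model of that last claim (all numbers here are invented for illustration): if frame time is roughly max(CPU submit time, GPU render time), then cutting CPU-side driver overhead, which is what DX12/Vulkan target, only raises fps when the CPU is the bottleneck, so slower CPUs see the biggest gains.

```python
# Toy bottleneck model: frame time = max(CPU submit time, GPU render time).
# The 12 ms GPU cost and the "DX12 halves CPU cost" factor are assumptions
# made up for illustration, not measurements from the article.

def fps(cpu_ms, gpu_ms):
    """Frames per second when the slower of CPU and GPU paces the frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_ms = 12.0                        # GPU needs 12 ms per frame either way
for cpu_ms in (20.0, 14.0, 8.0):     # slow, medium, fast CPU under "DX11"
    dx11 = fps(cpu_ms, gpu_ms)
    dx12 = fps(cpu_ms * 0.5, gpu_ms)  # assume DX12 halves CPU submit cost
    print(f"CPU {cpu_ms:4.1f} ms: DX11 {dx11:5.1f} fps -> DX12 {dx12:5.1f} fps")
```

With these made-up numbers, the 20 ms CPU jumps from 50 fps to the 12 ms GPU ceiling, while the 8 ms CPU was already GPU-bound and gains nothing, matching the "weaker CPU, bigger gain" claim.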

November 1, 2016 | 07:02 PM - Posted by Anonymous (not verified)

So Nvidia has more threads, and yet is still relatively slower. Very interesting that...

November 2, 2016 | 04:59 PM - Posted by Anonymous Nvidia User (not verified)

Define slower. Does AMD have a single-card solution that is faster than the 1070, 1080, or Titan X Pascal ATM? No, because Nvidia upped their compute heavily with Pascal, so compute can't be leaned on excessively like in the past to hamper Nvidia performance. If you're talking Maxwell, back then I'd say you have a point.

And yes, it goes back to DX12's inherent design favoring AMD's architecture, adding support for new and innovative tech such as async compute, which had been mouldering unused in AMD cards for 3-4 years before being tapped.

November 2, 2016 | 10:03 AM - Posted by Anonymous (not verified)

You mean, after years of being on the benefit side of, "AMD does gain but not nearly as much as Nvidia does," and trash-talking anyone who doesn't spend their money on what you spend yours on, now you're on the other side of the field and seeing how much it sucks, and you're jealous and angry about it?

Gee, what a surprise.

November 2, 2016 | 04:46 PM - Posted by Anonymous Nvidia User (not verified)

DX11 also catered to AMD, who had the Xbox 360 at the time. Nvidia didn't even have a tessellation-capable card out at its debut. Nvidia users suffered lower frame rates for about a year until new cards were released. Nvidia overcame their disadvantages and shortcomings by redesigning their architecture. AMD, on the other hand, didn't try as hard, or lacked the funds to do as much as Nvidia did.

When did I ever trash someone for their purchase? You usually get what you pay for. I've only ever owned low-end to mid-range cards. I may be going high-end for the first time.

AMD makes good cards, but they have their shortcomings, like tessellation. Instead of designing better cards, they give you a hack to lower or eliminate it. Did that solve the problem? It made the game more playable at the sacrifice of visual fidelity. AMD users claiming they didn't lose anything by getting rid of it are only fooling themselves.

I'm not jealous or angry in the least. Whenever AMD pushes hard on Nvidia, it forces them to up their game. AMD shoved DX12 so hard in their face that I think the Pascal refresh or Volta will be true DX12 killers. When that happens, thank AMD for it. AMD will be begging Microsoft for DX13 at that point, and the cycle will go on.

December 24, 2016 | 08:41 PM - Posted by Anonymous (not verified)

The first sentence is entirely wrong. It's not just about the CPU. It's just that you don't have the background to understand all the features that go into the new APIs.

November 1, 2016 | 05:29 PM - Posted by AnonX (not verified)

Why hasn't it been tested with the latest version of the AMD drivers?

November 1, 2016 | 08:57 PM - Posted by Scott Michaud

That was a non-WHQL hotfix for Titanfall 2 crashes. It shouldn't affect performance results (unless the fix made a performance regression).

As a matter of philosophy, I guess it depends on whether you care more about "fully supported" or "absolute latest". There are merits either way.

Also, it's possible that Ryan and company did the benchmark early because he's moving offices. So, despite the previous two mini-paragraphs, I really have no idea why specifically they chose the older drivers.

November 1, 2016 | 07:22 PM - Posted by Anonymous (not verified)

One thing is for sure: both DX12's and Vulkan's API-controlled multi-adapter capabilities need plenty of continuous follow-up articles and benchmarks as the DX12 and Vulkan APIs progress and are tweaked, and ditto for the games/engines.

Having the entire gaming industry, as well as the OS/API developers, do the multi-GPU adapter work will result in much better multi-GPU scaling than any driver-only CF/SLI solution from the GPU hardware makers, who have their own selfish reasons to hold back advancements. Let's limit the GPU makers' control over our PCs/laptops to simple drivers with only the functionality needed to get at the GPU's metal, and leave the rest to the game/engine makers and the OS/graphics APIs. We only need the GPU makers' hardware, not any stranglehold over our ability to use any and all GPUs plugged in, all the time! That means all GPUs plugged in, integrated and discrete, from any maker: multi-adapter for any GPUs via the graphics APIs, and no monkey business from any GPU maker.

Stop the gimping in the drivers, in the middleware, and in the hardware, GPU makers, or find yourselves out of the loop!

November 1, 2016 | 09:01 PM - Posted by Scott Michaud

The problem is that, if you limit the hardware vendors as you suggest, these innovations wouldn't occur in the first place. It's a bit more Catch-22 than you realize, but yeah, you do have a point about issues with vendor-specific features.

Proprietary encourages new features quickly. Open and thin encourages robustness.

November 1, 2016 | 10:08 PM - Posted by Anonymous (not verified)

Consumers have been getting ripped off for over a decade by GPU companies and SOC/APU companies that have been intentionally pulling this graphics-driver nonsense! GPU makers should have been providing only the driver code necessary to make their hardware accessible to the OS/graphics APIs, and no other extra features. All this other GPU driver/proprietary middleware nonsense is only there for vendor lock-in and profit milking. People have the right to have their GPU hardware unencumbered by useless GPU driver manipulation from the GPU/SOC/APU makers!

All this switchable-graphics nonsense between integrated and discrete, and other methods of keeping users from using more than one GPU (a GTX 1060 with no ability to use two 1060s) or from using integrated and discrete GPUs at the same time, is just anti-consumer. If the user's motherboard supports more than one GPU, then GPU makers need to allow more than one of their GPUs to be plugged in and used, and the same goes for GPUs from two different manufacturers. GPUs have been interacting with computers for decades, and DX12/Vulkan types of multi-GPU capability have been achievable for decades! It comes down simply to the greed of the GPU and SOC/APU makers in not developing the minimal drivers to make their graphics work close to the metal and be available all along, with GPU multi-adapter managed by the OS/graphics API.

GPU hardware vendors do not provide any more innovation than fair competition and better industry standards force them to provide. GPU makers are only suppliers of parts, and ditto for the suppliers of OSs. GPU makers' greedy self-interest has held back the necessary innovations in properly handling more than one GPU of any make and model on computing devices (PCs/laptops) for over a decade. Acceptable vendor-specific features should never have included the purposely designed inability of any PC/laptop to work with more than one GPU of the same make and model, or with any other GPU of any make or model. There is no excuse, there never was one, and this is not defensible by any maker of graphics products.

GPU makers are like the cable companies, trying to lock users into their brand of extraneous services that are unnecessary for any real graphics-related or OS/API-related functionality. The proprietary OS makers want to be like the cable companies too, and end users are getting tired while PC sales continue to decline. Proprietary without healthy competition and PC standards only stifles innovative progress and encourages milking for profits over innovation. Who wants a processor plugged into their computing system that cannot be used, for no reason other than the processor maker's greed?

Hardware drivers should have only one job: making the hardware available to the OS/API, with any other non-essential software functionality the sole domain of the OS/API. Multi-GPU load balancing should have been in the OS/graphics API, never in the GPU driver!

November 2, 2016 | 01:39 AM - Posted by zme-ul

I'd really like someone to explain how 102% mGPU scaling is actually possible.

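For what it's worth, "scaling" in these write-ups is usually the second card's added throughput relative to one card, so "102%" means the dual-GPU result slightly more than doubled the single-GPU result. That usually comes down to run-to-run variance or a shifted bottleneck rather than magic. A quick sketch of the arithmetic with made-up fps numbers (not the article's measurements):

```python
# "Scaling" as commonly reported: the second GPU's added throughput,
# expressed as a percentage of a single card's performance.
# The fps figures below are invented purely to show the arithmetic.

def scaling_pct(single_fps, dual_fps):
    """Percentage of one card's performance that the second card adds."""
    return (dual_fps / single_fps - 1.0) * 100.0

print(round(scaling_pct(50.0, 100.0), 1))  # exact doubling -> 100.0 (% scaling)
print(round(scaling_pct(50.0, 101.0), 1))  # slightly better -> 102.0 (% scaling)
```

So a 50 fps card pair averaging 101 fps reports as "102% scaling"; a couple of fps of benchmark noise is enough to push the figure past 100%.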

November 3, 2016 | 02:22 AM - Posted by John Blanton (not verified)

Didn't Tom Petersen say, during the PCPer live stream, that the 1080/1070 will be the only cards that support SLI? I haven't seen any GTX 1060 with SLI fingers on the edge of the card. I'll go out on a limb and say that the only way to get DX12 multi-adapter support working is to use the cards that Nvidia supports for SLI, those being the GTX 1080/1070 (driver updates for DX12 multi-adapter support notwithstanding).

November 3, 2016 | 03:05 AM - Posted by renz (not verified)

Officially, only the 1080/1070 will be supported by Nvidia SLI. With DX12 the controls are more in developers' hands: they can make two 1060s (or even 3-4) work in multi-GPU under DX12. Case in point, two 1060s are able to work in multi-GPU in AoS (DX12). So it is up to the developer to do it. But then again, officially Nvidia does not support the 1060 in SLI, and how many people are actually going to have two 1060s in their system? Knowing that, developers might ditch 1060 multi-GPU support in DX12, because it would be like wasting time tweaking performance for a setup that most likely won't be used by anyone.
