
DX12 GPU and CPU Performance Tested: Ashes of the Singularity Benchmark

Author: Ryan Shrout
Manufacturer: Stardock

Results, Average

These results are the average frames per second, derived from the frame times, for the entire 180-second benchmark run, combining data for the normal, medium and heavy batch scenes.
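
For readers who want to check that math against their own captures, here is a minimal sketch of the calculation (our own illustration, not the exact analysis code behind these charts), assuming frame times recorded in milliseconds:

#include <numeric>
#include <vector>

// Average FPS over a run, derived from per-frame render times in milliseconds.
// Dividing total frames by total elapsed time avoids over-weighting fast frames,
// which a simple mean of per-frame "instantaneous FPS" values would do.
double average_fps(const std::vector<double>& frame_times_ms) {
    const double total_ms = std::accumulate(frame_times_ms.begin(),
                                            frame_times_ms.end(), 0.0);
    return total_ms > 0.0 ? frame_times_ms.size() * 1000.0 / total_ms : 0.0;
}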


There is a lot of information in each graph, so be sure you pay close attention to what is being showcased and which bars represent which data. This first graph shows all the GPUs, resolutions and APIs running on the Core i7-5960X, the highest end processor we benchmarked. The first thing that stands out to me is how little difference there is between the DX11 and DX12 scores on the NVIDIA GTX 980 configuration. In fact, only the 1080p / Low preset shows any performance advantage for DX12 over DX11 in this case; the other three results show better DX11 performance!

The AMD results are very different – the DX12 scores are as much as 80% faster than the DX11 scores, giving the R9 390X a significant FPS improvement. So does that mean AMD's results are automatically the better of the two? Not really. Note the DX11 scores for the GTX 980 and the R9 390X – at 1080p / Low the GTX 980 averages 71.4 FPS while the R9 390X averages only 43.1 FPS. That is a massive gap! Switching to DX12, that comparison changes to 78.3 FPS vs 78.0 FPS – a tie. The AMD DX12 implementation with Ashes of the Singularity in this case has made up the DX11 deficit and brought the R9 390X to a performance tie with the GTX 980.
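
To make those percentages concrete, here is a quick sanity check using the 1080p / Low numbers quoted above; the scaling formula (DX12 / DX11 - 1) is our assumption of how such figures are typically derived:

#include <cstdio>

int main() {
    const double r390x_dx11 = 43.1, r390x_dx12 = 78.0;   // R9 390X, 1080p / Low
    const double gtx980_dx11 = 71.4, gtx980_dx12 = 78.3; // GTX 980, 1080p / Low

    // DX11 -> DX12 scaling for each card
    std::printf("R9 390X gain: %.1f%%\n", (r390x_dx12 / r390x_dx11 - 1.0) * 100.0); // ~81%
    std::printf("GTX 980 gain: %.1f%%\n", (gtx980_dx12 / gtx980_dx11 - 1.0) * 100.0); // ~10%

    // Remaining gap between the two cards under DX12
    std::printf("DX12 gap: %.1f%%\n", (gtx980_dx12 / r390x_dx12 - 1.0) * 100.0); // ~0.4%, effectively a tie
    return 0;
}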


Our results with the latest Skylake CPU from Intel show nearly the same pattern as the Core i7-5960X above: in all but one of the four tests with the GTX 980, DX11 is actually faster than DX12 on GeForce. AMD's R9 390X sees as much as 68% improved performance at 1080p and 56% at 2560x1600, once again bringing it to performance parity with (or slightly ahead of) the GTX 980.


Things look a little different as we step down to the Core i3-4330 processor with only two full cores and four threads courtesy of Hyper-Threading. This time, the DX12 implementation on NVIDIA's hardware is faster than the DX11 code path across the board, but only by 11% or so. For the R9 390X, that DX12 advantage increases to 38-40%, again bringing the Hawaii-based GPU up to the same performance level as the Maxwell-based GTX 980. Again though, note where the two GPUs "started" with their DX11 scores – the GTX 980 was quite a bit higher than the Radeon card.


Telling of the overall performance the AMD FX-8370 provides compared to the 8-core Intel processor, the results for this round of testing look very similar to those found with the Core i3-4330. NVIDIA's GTX 980 sees consistent DX11-to-DX12 scaling of about 13-16% while AMD's R9 390X scales by 50%. This puts the R9 390X ahead of the GTX 980, a much more expensive GPU at today's sale prices.


Our final individual CPU test shows the same pattern – smaller scaling for NVIDIA’s configuration and larger scaling amounts for the Radeon hardware but with very similar “end” results in terms of DX12 average frame rates. And again, the DX11 performance is significantly better for NVIDIA’s hardware.


This graph shows all of the GTX 980 results across the five different processors without adding in the R9 390X data. It is easy to see now how the 5960X and the 6700K processors perform and behave very similarly to each other, with very little headroom for DX12 to improve things on already very fast CPUs. It seems obvious to me here that the DX11 driver implementation from NVIDIA is so robust and optimized that the high single / dual threaded performance of these higher end CPUs just doesn't offer much room for DX12 to improve things. Once you get to the Core i3-4330 and the two AMD CPUs, though, you can tell that DX12 is helping the engine perform a little better – but again the DX11 advantage that NVIDIA holds over AMD's hardware is making it difficult for DX12 to shine as brightly as you might have expected.


Things are very different for the R9 390X in our testing – the high end processors offer up DX12 scores that are as much as 80% faster than the DX11 scores for the same hardware! As the CPUs slow down this difference shrinks a bit, but it is still consistent and large enough to make a noticeable difference in gameplay and frame rate smoothness.


Finally, this graph attempts to show which card has an advantage on each processor with each setting and with each API. We are using the R9 390X as the baseline score, meaning that any positive percentage result means the GTX 980 is faster while a negative score means the R9 390X is faster.
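
In other words, the chart plots something along the lines of the following helper (the exact formula is our assumption, but the sign convention matches the description above):

// Relative advantage of the GTX 980 over the R9 390X, in percent.
// Positive => GTX 980 is faster, negative => R9 390X is faster.
double gtx980_advantage_pct(double gtx980_fps, double r390x_fps) {
    return (gtx980_fps / r390x_fps - 1.0) * 100.0;
}
// Example: the 5960X DX11 1080p / Low results (71.4 vs 43.1 FPS) give roughly +66%.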

The green bars are for DX11 performance and in every single instance the GTX 980 is faster, never by less than 15% and often stretching to over 50%! That is not a small advantage, and though this benchmark is built to showcase DX12 potential, AMD users should at least be concerned about lost DX11 performance potential in AMD's current hardware with current shipping games.

The blue bars of DX12 show a different story where AMD’s R9 390X is more often faster than the GTX 980. There are a few cases where NVIDIA’s performance is slightly ahead, but more often than not the gains that AMD sees moving from DX11 to DX12 not only completely make up for the DX11 performance deficit but also bump it ahead of the more expensive graphics card from NVIDIA.

Now, let’s see how the results change if we only look at the heavy batches of test scenes.


August 17, 2015 | 04:24 PM - Posted by AMDBumLover (not verified)

need a fury x vs 980ti bench-off!

August 17, 2015 | 05:23 PM - Posted by unacom

I second the proposal for a FuryX and 980Ti bench-off.

August 18, 2015 | 07:36 PM - Posted by Anonymous (not verified)

AMD has said for years that its hardware from the 7800 onwards has been DX12 compatible, just not with all the features. It has also been criticizing Nvidia for its lack of DX12 support in its GPUs, as only the 900 series is DX12 compatible at the hardware level.

AMD's DX11 game optimization failures have become common knowledge, as gamers have been seeing 15-30% improvements just by renaming executables.

i want to see amd rage's vs Nvidia titan 2 / 980 Ti (titan 2 with a very small cripple but a big price cut) at 1080p vs 4K

i would also like to see some older GPUs tested to address AMD's claims

August 18, 2015 | 09:44 PM - Posted by lf000a (not verified)

Did you mean:

*Fury
*Titan X

August 19, 2015 | 09:44 AM - Posted by Anonymous (not verified)

yes

September 4, 2015 | 02:22 AM - Posted by Anonymous (not verified)

FuryX not Fury. FuryX vs. 980Ti vs. TitanX, single, CrossFire and SLI

August 17, 2015 | 05:28 PM - Posted by Anonymous (not verified)

SAME!

August 17, 2015 | 07:07 PM - Posted by fkr

would really like to see AMD's newest architecture compared to Nvidia's newest. at least an air cooled Fury should have been used and not a rehashed 290. still a great review, but Hawaii came out in 2013 and GM204 is not even a year old.

since we are talking about DX12, which is an API that will not even come into mainstream use for another year or more, it would be nice to see tech that was released this year compared. Hawaii is going to be old tech by the time DX12 comes along

the 1300 more shader units and 80 more texture mapping units of the fury x could play an interesting role in performance.

i think people realize that AMD is brute forcing their way to performance while Nvidia brings the whole package of drivers and a great architecture.

August 17, 2015 | 07:07 PM - Posted by Anonymous (not verified)

ExtremeTech did it.

http://www.extremetech.com/gaming/212314-directx-12-arrives-at-last-with...

Spoiler alert - the Fury X ties or beats the 980Ti, and they found no evidence of the MSAA "bug" that Nvidia's complaining about.

August 17, 2015 | 08:26 PM - Posted by Anonymous (not verified)

Of course there is a bug in MSAA. They forgot to add Nvidia Blurr Tech to increase FPS.

August 18, 2015 | 11:11 AM - Posted by Anonymous (not verified)

One game not even out yet with DX12, probably before NVidia has optimized DX12 drivers and I assume not comparing the results of an OVERCLOCKED GTX980Ti (which overclocks a lot more compared to Fury X).

Watch the PCPer video, however, if the article didn't stick, and see how far behind AMD is with their DX11 drivers, which affects a lot of current title performance. A worst-case scenario, sure, but basically it boils down to this: if you have to buy an AMD card, at least get a good Intel CPU to help minimize CPU bottlenecks due to poor AMD drivers (again, for DX11).

Plus, if AMD still can't find the budget to pay their team to optimize DX11 now will they be able to support DX12 as well as NVidia going forward?

August 18, 2015 | 05:49 PM - Posted by evernessince (not verified)

So AMD is being criticized for having good DX12 and bad DX11 while Nvidia is getting a pass on bad DX12 drivers? Seems like one hell of a double standard. I seem to remember that AMD trades blows with Nvidia on DX11 games. The only titles it doesn't do well on are GameWorks titles, where nvidia applies tessellation to everything to cripple AMD cards.

"Plus, if AMD still can't find the budget to pay their team to optimize DX11 now will they be able to support DX12 as well as NVidia going forward?"

The reverse is also applicable. If Nvidia, the self proclaimed GPU leader, cannot implement DX12 drivers that work before a company a fraction of its size, what does that say going forward?

August 23, 2015 | 01:49 AM - Posted by RubyX (not verified)

Self proclaimed backed by 82% market share.

August 24, 2015 | 02:32 PM - Posted by nobodyspecial (not verified)

It says they don't care yet, because there are no games running DX12 in the wild?

LOL. The most important metric currently is DX11. Hence the 82% market share to Nvidia. Also note, the original benchmark which this game is based on (the engine anyway), NV handily beat AMD in everything with cards tested at multiple price ranges. I won't be surprised when NV optimizes for this (like STAR SWARM demo) and blows them away again. They are WISELY concentrating on exactly where users should want them to.

It is extremely sad AMD pretty much dumped DX11 driver development (finally put out a new driver after 8 months from Dec 2014), which basically handed NV a lot of market share. But that is what happens when R&D goes down for 4 yrs straight due to spending it all on xbox1/apu etc instead of CORE products (CPU/GPU), and no yearly profits for ages, while losing ~7B in the last 15yrs. Ouch. I don't expect a change in their quarterly reports until we see some ZEN benchmarks and hopefully they made a CPU just as big as Intel's FULL cpu+apu in broadwell/skylake. If they didn't do that and merely match Intel's cpu side die space, they won't win squat, and will have ZERO pricing power. Intel currently dedicates over half of their die to GPU, which should give AMD a TON of transistors to smack them around for a few years until Intel comes out with something sans gpu and much bigger again. If they just aimed at skylake's cpu die size (ONLY the cpu side I mean), they'll be beaten shortly again with cannonlake etc. Hopefully AMD went HUGE, causing Intel to make something NEW that could take a few years since they long ago stopped making tons of different designs at once when AMD pretty much gave up cpu race.

Perfect timing for this ZEN, IF AMD got it right and it's HUGE (meaning the size of Broadwell or Skylake's ENTIRE DIE SIZE). Intel's 10nm chip need to LOSE to this chip due to the brute size of the cpu. If you don't do this, Intel will just match and price you to death until they get back on top 3yrs later. You need to WIN, so you can price higher than Intel and bag some profits for 3yrs straight! This is AMD's last chance to make money to fund the future.

November 28, 2015 | 02:18 PM - Posted by Anonymous (not verified)

LOL

"Perfect timing for this ZEN, IF AMD got it right and it's HUGE (meaning the size of Broadwell or Skylake's ENTIRE DIE SIZE). Intel's 10nm chip need to LOSE to this chip due to the brute size of the cpu. If you don't do this, Intel will just match and price you to death until they get back on top 3yrs later. You need to WIN, so you can price higher than Intel and bag some profits for 3yrs straight! This is AMD's last chance to make money to fund the future."

Look it up, AMD makes more annually, per employee, than Intel does. AMD is not as bad off as you would believe. Intel is just so fricking huge, that they need to jack up prices to earn the same amount/employee ratio.

What needs to be done from AMD though, is they need a powerhouse, then the prices will drop. This is for consumers, not the companies.

September 17, 2015 | 03:30 AM - Posted by Anonymous (not verified)

Not quite the case. AMD's hardware or drivers are poorly optimised for DX11; Nvidia does not see much of an improvement because none can probably be made. The lack of improvement in DX12 might be due to hardware limitations rather than driver ones. We have known for a while that the sheer computational power of the R9 290/390 etc. just wasn't translated into games very well. Rather than everyone (not necessarily you) whining about AMD drivers vs NVidia drivers, I think we should be happy that a $460au AMD card is as fast as a $460au NVidia GTX 970 in DX11, but when fully utilised it gains more in future games and competes with the Titans. Changing the topic to CPUs: imagine if DX12 was available 4-6 years ago when Bulldozer/Piledriver came out. Games being able to utilise 6-8 cores properly, or just better. AMD would be in a very different position. Imagine, if there was solid competition, the CPU power we'd have now? 6 generations of the i series and only a 20% IPC improvement, pfft. Often it's not AMD's hardware that is the issue, it's the lack of support. Perhaps it makes things harder to code for, I'm not sure; I am not a programmer. Just an enthusiast. Anyhow, rant over.

August 23, 2015 | 01:22 PM - Posted by chrisl (not verified)

you do know that nVidia specifically released a driver for this game, right? They have all the access they need, they just can't work miracles to change their inferior architecture in current cards.

The point of DX12 is low-level access to the GPU, which by definition eliminates the work of drivers. There will be less and less work AMD/nVidia can do in the drivers.

nVidia's next gen card may fix some of the problems, but current cards are not gonna suddenly become better in DX12. They can't transform themselves.

September 1, 2015 | 10:18 PM - Posted by Anonymous (not verified)

AMD did better than Nvidia in the DX12 Benchmarks by a long shot. AMD's R9 290X which came out in January 2013 was toe to toe with Nvidia's 980ti and the R9 Fury X nearly doubled the performance of the 980ti. This was due to Nvidia only supporting tier 2 DX12 and their hardware not having Asynchronous Computation which is necessary for full DX12 optimization. Nvidia claimed their cards are asynch compute compatible but they are not.

August 18, 2015 | 12:12 PM - Posted by Anonymous (not verified)

You need to wait for a few more Dx12 games to come out rather than buying a card now before we really know whattsup ;)

August 17, 2015 | 04:33 PM - Posted by Anonymous (not verified)

240p... c'mon YouTube, hurry up!

August 17, 2015 | 04:43 PM - Posted by Anonymous (not verified)

At least in the context of this game at this stage, this just shows that AMD's hardware isn't the problem.

August 18, 2015 | 03:00 PM - Posted by jessterman21 (not verified)

It's true - that is an awesome improvement. Or a depressing one, depending on how you look at it.

August 17, 2015 | 04:51 PM - Posted by Anonymous (not verified)

Would you mind testing DX12 Intel/AMD IGP as well? I think all the GCN APUs and Broadwell IGP support base-level DX12. Skylake, I think, offers the greatest compatibility. Not sure about the new Carrizo...

August 17, 2015 | 05:09 PM - Posted by Anonymous (not verified)

Haven't Stardock traditionally favored AMD in the past, especially with Mantle? I'm inclined to believe NVidia on this one and take these results with a grain of salt.

August 17, 2015 | 05:46 PM - Posted by Anonymous (not verified)

They are legit numbers, but yes, they favor AMD in this case. DX12 gives the capability to greatly increase performance, but not without a lot of work put into it. If the devs had spent a bunch of time with Nvidia then I'm sure the numbers would have been better.

It will be interesting to see how much more work this makes for PC games. Do devs now have to make essentially two versions of their PC game to optimize them the best and take full advantage of what DX12 can offer?

August 17, 2015 | 06:19 PM - Posted by arbiter

The game is in alpha, so likely they have been in bed with AMD since day 1 and Nvidia hasn't had much time to really optimize the DX12 side of things yet. I don't see a DX12 option for the game yet, so I don't think it's released to the public, but the game does have a Mantle option, which says a lot.

The game is only in alpha, so the results are really not anything to take too seriously, as between now and the final release things could be very different.

August 17, 2015 | 07:44 PM - Posted by Anonymous (not verified)

http://oxidegames.com/2015/08/16/the-birth-of-a-new-api/

Baker wrote a specific section just for you.

"All IHVs have had access to our source code for over year, and we can confirm that both Nvidia and AMD compile our very latest changes on a daily basis and have been running our application in their labs for months."

Then, farther on:

"Often we get asked about fairness, that is, usually if in regards to treating Nvidia and AMD equally? Are we working closer with one vendor then another? The answer is that we have an open access policy. Our goal is to make our game run as fast as possible on everyone’s machine, regardless of what hardware our players have.

To this end, we have made our source code available to Microsoft, Nvidia, AMD and Intel for over a year. We have received a huge amount of feedback. For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware, they offered an optimized shader that made things faster which we integrated into our code.

We only have two requirements for implementing vendor optimizations: We require that it not be a loss for other hardware implementations, and we require that it doesn’t move the engine architecture backward (that is, we are not jeopardizing the future for the present)."

I believe that answers both your arguments.

August 17, 2015 | 11:27 PM - Posted by arbiter

Wonder if that is the same kind of claim AMD loved to make about Mantle being open source?

Being associated with AMD, you gotta ask that.

August 17, 2015 | 11:56 PM - Posted by Anonymous (not verified)

When Ryan doesn't believe Nvidia, like he pointed out, you know you're grasping at straws.

Poor arbiter.

August 18, 2015 | 08:14 PM - Posted by wiak (not verified)

well Vulkan = Mantle

August 17, 2015 | 10:54 PM - Posted by Ryan Shrout

^ this.

NVIDIA has had source code access for a year. There are no excuses then.

August 18, 2015 | 05:28 PM - Posted by Anonymous (not verified)

And what about this Ryan? Where's nVidia's logo?

http://i.imgur.com/yAkA3KJ.jpg

August 18, 2015 | 05:39 PM - Posted by Jeremy Hellstrom

Ah, sponsor branding is not what you should be looking for ... that's "Way it's meant to be Played" versus "Gaming Evolved". 

That has to do with money,  nothing to do with access to code.

August 18, 2015 | 05:47 PM - Posted by Anonymous (not verified)

Even with code access, a GPU vendor is at the mercy of how the developer handles the coding and optimization on their end as well. If you think Stardock is not favoring AMD in any way you are being naive. And it could also very well be that nVidia just doesn't think this game is that important to them and would rather focus their energies elsewhere.

August 19, 2015 | 04:30 AM - Posted by Anonymous (not verified)

Either AMD & Stardock are in bed or nV has bigger priorities.

It just cannot possibly be that nV's DX12 drivers are not up to snuff yet. That very thought is just so unbelievable for you, I know.

August 19, 2015 | 05:19 PM - Posted by Anonymous (not verified)

It cannot POSSIBLY be that AMD GPU's work better in DX12 than Nvidia does. Anything that suggests so must be because of sneaky backroom deals between the game dev and AMD. There is absolutely no way that AMD could compete with Nvidia, anytime, anywhere, for any reason. Never.

Clearly, AMD is cheating somehow.

Besides, who cares about AMD's performance in DX12 when everything out there is in DX11 right now, and Nvidia still rules in DX11.

Doesn't matter what the evidence says, I WILL FIND A WAY TO MAKE NVIDIA LOOK BETTER THAN AMD AND YOU ALL WILL ACCEPT AND ADOPT MY OPINION.

....ugh. Suck it up and deal with it, man.

August 22, 2015 | 06:24 AM - Posted by Durzog (not verified)

Great...another green team ass licker. Like there aren't enough already. Grow up and get real, man. If you like Nvidia cards that is fine with me. Just don't go apeshit and tell people what they should like or not.

August 24, 2015 | 02:15 PM - Posted by Anonymous (not verified)

If you couldn't detect the copious amounts of sarcasm simply dripping from my post, that's your problem, not mine.

August 18, 2015 | 08:20 PM - Posted by wiak (not verified)

People seem to blame AMD for bad drivers all the time; with DX12/Vulkan it's the opposite. In the Vulkan presentation AMD basically said that with Vulkan, if you're a game developer and you fck up the code, it's your fault 90% of the time, not the GPU vendor's driver (AMD).

watch the excellent vulkan presentation video
https://youtu.be/qKbtrVEhaw8

August 18, 2015 | 05:53 PM - Posted by evernessince (not verified)

Can't say I have any sympathy for Nvidia if that's true. They crippled performance on Project Cars, The Witcher 3, and the new Batman game.

August 18, 2015 | 05:53 PM - Posted by Anonymous (not verified)

No, they didn't. All those games had issues on the developers end.

August 22, 2015 | 12:16 AM - Posted by Anonymous (not verified)

All I can say is that I get a steady 59 FPS @ 1440p with my AMD system. The Witcher 3 gives me between 45 and 55 FPS. I cannot believe that some people are so blinded by the propaganda spouted around the Internet and on TV that it makes me sick. 5-7 FPS is the actual real world difference between a decent AMD build and a double-the-price Intel/Nvidia build. Comparing a 290X to a 980 would be like comparing an HD6850 to a GTS450.

August 19, 2015 | 03:20 AM - Posted by Anonymous (not verified)

AMD is a partner with Stardock on this game.

http://i.imgur.com/yAkA3KJ.jpg

I'm not saying that the AMD gains are anything but impressive, but let's wait until the game is done before we condemn Nvidia on DX12.

August 19, 2015 | 01:27 PM - Posted by Maonayze (not verified)

LOL

If these test results had been flipped on their head, the green team would have been whooping like banshees and proclaiming their godlike status.

Instead it looks like the rumours of Fury/Fury X DX12 performance could actually have a grain of truth about them, and the green team not only don't like it up 'em, but some of 'em are probably thinking....F**K what if this is true? What if AMD's new cards are faster than 980 Tis in DX12. Oh SHIT!

You can talk about DX11 all you like....technology only moves one way dudes.....an dat's forwards. NOT BACKWARDS!!

So we are getting DX12 and higher resolutions in the future...Oh hang on those AMD cards showed they improved at the higher resolutions and combined with DX12......Lions and Tigers and Bears.......OH MY!! ;-P

As for PCper - talk about hypocrisy and moving the goalposts whenever it suits them. Proof that PCper are Nvidia fanboys
(Shakes head and gives PCper the middle digit).

As I have always said in the past... I do not care which company's gfx cards edge each other out here and there, but it would be nice to see reviewers at least reviewing the tech on a level playing field and not skewing the data to their own ends.

If either of them fall (Obviously AMD could go bust in reality sooner than Nvidia) then it will be disastrous for us gamers....end of!!

Careful Nvidia, you're getting just like Apple.

August 17, 2015 | 05:09 PM - Posted by Andrew (not verified)

so nvidia dx11 drivers are already well optimized for api overhead.. ok
does this mean that game engines like assassin's creed unity and batman arkham knight are reaching the limits of game development, for now and the near future on pc?
or do you think those engines are just poorly optimized?

August 18, 2015 | 12:19 AM - Posted by Ty (not verified)

I think it means nvidia is only moved to move heaven and earth to optimize CPU-constrained performance in dx11 when they are forced to.

They already have a lead on amd, so normally there is less need to go crazy on the optimization route; now that dx12 removes that need, nvidia tasks its engineers to work overtime to micromanage every molecule of coding minutia in the dx11 driver to make it seem that much better than amd. That's what they did with the starswarm demo, and that's what they did here.

if AC:Unity had been released as a dx12 capable game, I'd expect far better dx12 performance from both vendors AND far better dx11 performance from nvidia, because their hand would be forced to not slack off due to amd not being a credible presence.

With dx12 kicking off, all that optimization effort will wither away, and they will have to get gains by having the better gpu now. They might even have to start offering better gpus in the sub 200 dollar range and not rely on amd's iffy cpu scaling in dx11 to win the day with weaker gpus.

August 17, 2015 | 05:15 PM - Posted by Heavy (not verified)

Can someone clear some stuff up for me? I was wondering if we are going to start seeing real CPU core minimums with DX12, instead of every game being able to run just fine with only 1 or 2 cores. I also saw in some of the benchmarks that AMD can do more draw calls than Nvidia - this was a while back so it might have changed. Will graphics cards that don't make as many draw calls take a real big hit or a small one, or will it make a difference at all?

August 17, 2015 | 05:40 PM - Posted by snook

yeow, nvidia caught sleeping.

August 17, 2015 | 06:06 PM - Posted by arbiter

yea, caught sleeping in an AMD de-evolved game in alpha stage.

August 17, 2015 | 06:23 PM - Posted by snook

sure, that's why nvidia is calling it unfair, lol. numbers don't lie bro.

August 19, 2015 | 03:21 AM - Posted by Anonymous (not verified)

One of the two graphic card companies is a Stardock partner and are co-developing.

Hint, its the company with the huge DX12 performance gains.

http://i.imgur.com/yAkA3KJ.jpg

August 19, 2015 | 05:22 PM - Posted by Anonymous (not verified)

As Ryan already mentioned above, Nvidia has had source code access for over a year. They have no excuse.

August 17, 2015 | 06:25 PM - Posted by funandjam

I find your hyp·o·critical style funny, keep up with the LOL's, it brightens my day!

August 17, 2015 | 06:28 PM - Posted by snook

lol, good?

August 17, 2015 | 11:17 PM - Posted by arbiter

Funny you say that when AMD is usually the one whining about it all and complaining about nvidia. Who are the ones that are really the hypocrites? Is AMD the only one that can cry foul?

August 19, 2015 | 04:31 AM - Posted by Anonymous (not verified)

good god your comments are so toxic.

August 20, 2015 | 03:36 PM - Posted by snook

thank you. glad someone else sees this for what it is.

August 17, 2015 | 08:26 PM - Posted by ppi (not verified)

But nVidia clearly believes this matters a lot, as for this alpha-version benchmark, they released special drivers ...

August 17, 2015 | 08:27 PM - Posted by Anonymous (not verified)

Please read this before continuing to sound like an Nvidia fanboy. http://oxidegames.com/2015/08/16/the-birth-of-a-new-api/

August 18, 2015 | 05:54 PM - Posted by evernessince (not verified)

Better than Nvidia CrapWorks (tm). At least the bench is playable on Nvidia hardware. Nvidia would have made sure that only the most expensive Nvidia cards could run it and of course, no AMD cards.

August 17, 2015 | 05:45 PM - Posted by killurconsole

great job Ryan

once Fable comes out, you really need to benchmark more GPUs + CrossFire/SLI, and don't forget about 4K

August 17, 2015 | 10:54 PM - Posted by Ryan Shrout

Of course!

August 17, 2015 | 05:50 PM - Posted by StephanS

So even at 1080p the 290x HW can beat a GTX980... ?

We already saw a glimpse of what AMD HW is capable of at 4K, but the picture seems to get clearer.
It seems that the GTX 980 HW itself is very much in line with the 290x, but the wins come from the driver/software side of things. So kudos to the nvidia driver team for this accomplishment.

It's sad for the AMD HW team to design what seems like a superior HW architecture, only to have it all taken away by their inadequate driver team... Who is running the division ?!?
Guys, stand up for yourselves, go knock on Lisa's door and demand answers. This is THE single reason that has destroyed the AMD GPU market in the past 3 years.

DX12 dominance is at least 2 years away... AMD, do something about your DX11 driver now, or continue to die and suffer another massive round of layoffs in the next 24 months.

Red team... wake up!

August 17, 2015 | 06:12 PM - Posted by arbiter

Well, a superior arch may seem that way on the surface, but when you dive into things a bit it's not the case. Look at how the Nvidia chip is able to match pace and even beat the AMD GPU with 25-30% fewer shaders and half the memory bandwidth. Really, Nvidia has the better arch.

DX12, at least the speed side, should take hold pretty fast given its benefit; I see most games having it in 1 year. The graphics side, yeah, is probably a bit further off yet.

August 17, 2015 | 06:25 PM - Posted by snook

better arch, two gens ahead and not winning. superior is one name for it.

August 17, 2015 | 07:14 PM - Posted by Heavy (not verified)

wait, doesn't anyone see what I'm seeing? I think AMD made their Southern Islands architecture to last this long, just waiting for new die sizes and HBM so they won't have to waste money. Same with their CPUs - what's the point of having a great architecture when you don't have the right hardware? Just look at all the features AMD unlocked with this old ass chip. They're either 500 steps ahead of everyone and we don't know it, or they just suck.

August 17, 2015 | 08:24 PM - Posted by ppi (not verified)

Exactly. They are not that far behind, and considering how old their architecture is, they sure had some savings on that front.

Tbh, AMD certainly wanted HBM with Northern Islands @ 20nm, but were screwed by TSMC. Current Fury seems like a quick band-aid.

August 17, 2015 | 11:19 PM - Posted by arbiter

Are you sure? Really, memory bandwidth has kept them in it. When Pascal comes out it will have the same bandwidth, so?

August 18, 2015 | 02:44 AM - Posted by StephanS

Well, the 290x was released in Oct 2013 at 440 mm².
The GTX 980 came out a year later, Sep 2014, at 400 mm².

A 10% smaller die, ok.. but a full year AFTER. And as we see, the GTX 980 struggles to make a clear win at 4K, and now with DX12 the GTX 980 might also have a problem showing a win at 1080p.

I'm not saying the GTX 980 is crap, or anything remotely like this.
What I'm pointing out is how BAD, horrible, nasty the AMD driver team is.

AMD HW is actually state of the art, way ahead of nvidia.. But what's killing AMD is their software team.

Even their driver installer looks like some 1999 recycled junk...

I just can't believe AMD, smack in the middle of Silicon Valley, can't find some hardcore software guru to run their driver division.

August 19, 2015 | 04:34 AM - Posted by Anonymous (not verified)

Their driver UI is indeed absolutely god awful. Lags like hell, slow to load and looks terribly dated.

To be fair, nV's driver UI looks like it's straight out of Windows XP. Intel's massive space wasting no-meaningful-options control panel isn't much better.

August 17, 2015 | 05:58 PM - Posted by Anonymous (not verified)

AMD, Oxide & Stardock are sabotaging Nvidia performance just like Crystal Dynamics & Nixxes did with Tomb Raider. Avoid all Gaming Evolved games; those developers are in bed with AMD and are sabotaging other users' performance.

http://techreport.com/news/24463/nvidia-acknowledges-tomb-raider-perform...

August 17, 2015 | 06:28 PM - Posted by snook

wow, really??? gameworks ring a bell? christ you and what's his face should form a club.

August 17, 2015 | 06:41 PM - Posted by killurconsole

well, Tom from Nvidia once said regarding AMD's bad performance in some games, "don't blame us because the game runs slower on your hardware and then release a driver that fixes everything" - well, that's pretty much what happened regarding Tomb Raider, although I have never heard about Nixxes doing shady business

August 17, 2015 | 11:20 PM - Posted by arbiter

The whole issue around it was that Nvidia never got an advance copy of the final game code until after it was released.

August 18, 2015 | 06:05 AM - Posted by killurconsole

You are just completing the story; that is what AMD claims about the source code being locked and Nvidia's black magic box.

Stop being a fanboy; just because NV said something does not mean it's true. That applies to AMD as well.

August 17, 2015 | 06:49 PM - Posted by Anonymous (not verified)

I'd bet a dollar that when a game made by a studio that's in bed with Nvidia works really badly on AMD cards, and AMD fans say exactly what you just said, you point and laugh at them.

How's it feel to be on the other side of the fence now?

Besides that, I must admit I enjoyed how you cherry-picked a story about Tomb Raider giving Nvidia cards problems, while ignoring the fact that a week later the game was patched and Nvidia updated their driver, and suddenly the game worked BETTER on Nvidia cards - because that would argue against your hypothesis, therefore that data must be thrown out.

http://techreport.com/news/24479/tomb-raider-update-addresses-nvidia-issues
http://techreport.com/news/24517/new-geforce-drivers-address-tomb-raider...

The difference lies in the fact that when an AMD-supported game works poorly on Nvidia cards, Nvidia has access to the source material they need to fix things up. When an Nvidia-supported game works poorly on AMD cards? Tough nuggets, AMD. Figure it out yourself or suffer.

August 17, 2015 | 07:27 PM - Posted by ChangWang

Couldn't have said it better myself.

August 18, 2015 | 06:00 PM - Posted by evernessince (not verified)

Well said. People quickly forget that just in the last few months, Project Cars, The Witcher 3, and Batman: Arkham Knight all released with varying issues on AMD hardware, all stemming from Nvidia GameWorks.

August 18, 2015 | 12:20 AM - Posted by Ty (not verified)

.... did chizow change his name to anonymous?

August 18, 2015 | 01:40 PM - Posted by JohnGR

Probably it's not the best idea to post as chizow in this article.

August 22, 2015 | 12:18 AM - Posted by Anonymous (not verified)

LMAO

August 17, 2015 | 06:28 PM - Posted by Raghar (not verified)

Could you post results with HT disabled on 6700K? (And if you have time you can try 5960X as well, but 6700K without HT might be more interesting.)
I'm curious about something.

(Boot into BIOS, find Hyperthreading under CPU options, and disable it.)

BTW AMD had really bad drivers under DX11 according to this test. That 1920x1080 increase was massive.

August 17, 2015 | 06:32 PM - Posted by zaniix

I care less about the nVidia vs AMD thing and more about the lack of multicore CPU benefit.

I keep hearing how DX12 is going to make 6 + core CPUs outperform quads, but I don't see any proof.

August 17, 2015 | 06:43 PM - Posted by Raghar (not verified)

A 6-core CPU definitely shouldn't outperform quad CPUs in games that need 2 cores. A 4-core with high per-core speed would outperform a 6-core CPU any day.

And in games that need 4 real cores, spreading the DX12 CPU workload to multiple cores can cause interesting types of hiccups and strange stuttering. At most you could see AMD 6-cores not being outperformed by Intel 4-cores so badly.

HEDT 6-cores were more about "a browser with 500 pages opened in the background and listening to music will not slow down a CPU intensive game" than "it needs 6 cores because computer game developers committed economic suicide by locking the game to run only on 6-core and higher CPUs".

August 17, 2015 | 06:47 PM - Posted by zaniix

but I thought DX12 was supposed to be able to spread the load to make use of more cores.

I guess I am glad I decided to stick with the fastest quad instead of jumping on the X99 train.

August 17, 2015 | 10:46 PM - Posted by BillDStrong

Yes and no. DX12 unblocked the render thread from being pinned to one core, which was effectively a requirement under DX11 unless the stars aligned under very specific circumstances and the engine could finagle a second temporary thread.

Which means that the render thread could use all of the CPU core, and be stuck, not able to go any faster.

DX12 does two things to alleviate this issue.

1. The DX12 API is lighter weight, meaning you can get more done with the same single core threading. This alleviates much of the overhead of DX11, at the cost of increasing complexity for the engine developer.

2. It made talking to the GPU usable from multiple threads. This means you can push that same load onto other cores, allowing the amount of CPU usage to go down significantly.

There are more changes that help with all of this with DX12, but these are the big ones. It also means that you can easily max out the GPU quickly, without maxing out the CPU. It gives you much more headroom for multi GPU setups as well. One of the biggest issues with Crossfire and SLI are the CPU threading constraints. With DX12, you should see a much more linear scaling with 4 GPUs, once Drivers are better, and game engines start targeting it.
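
A minimal, API-agnostic sketch of the multi-threaded recording pattern described in point 2 above, using plain C++ threads and a mock command list type rather than the real D3D12 interfaces:

#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

// Stand-in for a per-thread command list; in real D3D12 this would be an
// ID3D12GraphicsCommandList recorded against its own command allocator.
struct MockCommandList {
    std::vector<std::string> commands;
    void record(const std::string& draw_call) { commands.push_back(draw_call); }
};

int main() {
    const unsigned num_threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<MockCommandList> lists(num_threads);
    std::vector<std::thread> workers;

    // Each thread records its share of the frame's draw calls independently;
    // there is no single render thread serializing all submission work as in DX11.
    for (unsigned t = 0; t < num_threads; ++t) {
        workers.emplace_back([t, &lists] {
            for (int i = 0; i < 4; ++i)
                lists[t].record("draw unit " + std::to_string(t * 4 + i));
        });
    }
    for (auto& w : workers) w.join();

    // Single submission point, analogous to executing the recorded lists on the queue.
    std::size_t total = 0;
    for (const auto& l : lists) total += l.commands.size();
    std::printf("recorded %zu commands across %u threads\n", total, num_threads);
    return 0;
}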

This particular demo is an RTS, which from a developer standpoint will be a GPU bound game, thanks to its reliance on GPU effects, such as each laser being a lightsource. (This by the way is an amazing feat, considering that DX11 is limited to 8 at a time. Looking at the video shows hundreds at a time.) RTSes are not traditionally very CPU intensive, as they aren't doing advanced physics, and a bunch of other things. This one has each unit have its own AI, which is very cool tech, but generally these games aren't CPU bound. Think Civ V or Star Craft II.

It is however a great look at what can be done when you are maximizing the GPU, with CPU cycles to spare. I would be curious to see what the SLI and Crossfire results are, if the DX12 codepath supports it.

August 18, 2015 | 03:51 PM - Posted by Albert89 (not verified)

I would like to see Civ conquest optimised for DX12 !

August 18, 2015 | 04:09 PM - Posted by snook

thanks for the breakdown bill.

August 18, 2015 | 03:02 PM - Posted by jessterman21 (not verified)

Really sad that the i3 is killing the FX 8-core...

August 17, 2015 | 07:13 PM - Posted by FireZergUS (not verified)

I wonder if this is AMD's strong compute performance coming through

DX11 = Tessellation, DX12 = Compute?

August 17, 2015 | 07:41 PM - Posted by Anonymous (not verified)

No. Stardock and AMD have just worked more on this game/engine since Mantle.

August 17, 2015 | 07:18 PM - Posted by JohnGR

Nvidia all over the place with damage control. I have to give them credit. AMD is usually pretty slow in it and when doing it, it does it in the worst possible way.
Anyway Nvidia fanboys and Nvidia internet accounts are on fire all over the internet. They are really worried.

August 17, 2015 | 07:40 PM - Posted by Anonymous (not verified)

We'll just ignore the fact this proves nVidia is more efficient at DX11, which, by the way, more games use than DX12.

We'll also ignore that Stardock is in bed with AMD.

And while we're at it we'll just ignore nVidia usually does everything better than AMD.

August 17, 2015 | 08:06 PM - Posted by JohnGR

No no. I agree totally with you in the first comment. You may believe whatever you like in your second comment. As for the third comment in your post, you just describe yourself as a biased Nvidia fanboy. Thanks for making it clear.

August 17, 2015 | 09:21 PM - Posted by annoyingmoose (not verified)

he's just stating the truth. it's you, the biased AMD fanboy.

August 17, 2015 | 09:49 PM - Posted by JohnGR

I see all Nvidia fanboys are doing overtime.

August 18, 2015 | 06:03 PM - Posted by evernessince (not verified)

You don't have to be biased to go against something that is clearly asinine.

August 18, 2015 | 06:16 AM - Posted by DerekR (not verified)

What's clear, regardless of which fanboy one may or may not be, is what Ryan has shown through the real-world example. NVidia does still hold an advantage in DX11 performance over AMD. In DX12, nVidia is still very strong (which should make nVidia fans happy), but AMD shows the greatest improvement with DX12 (which should make AMD fans happy).

Nvidia is coming to DX12 with an already strong and well optimized foundation in DX11; because of this, the gains they are getting are naturally smaller. AMD on the other hand had a poor implementation of DX11 to work from, so they naturally had the most to gain. The only thing that should be a surprise is the amount AMD gained in DX12 performance versus DX11.

This exact situation is mirrored in the Mercedes and Ferrari Formula 1 Engines from 2014 to 2015. Mercedes got things right with their 2014 Turbo V6 engine and Ferrari didn't, so there was a noticeable difference in performance on race day. Come 2015, the Mercedes engine is still strong, but Ferrari is now stealing race wins from Mercedes. Ferrari made the most gains with their engine from 2014 to 2015.

Mercedes is still better and made gains, but Ferrari made more gains and narrowed the gap.

Nvidia is still stronger and made gains, but AMD made more gains and narrowed the gap.

August 18, 2015 | 07:52 AM - Posted by JohnGR

It seems that Nvidia's hardware has a lower ceiling than AMD's hardware in DX12. At least that's true for today's Maxwell based cards with today's driver, in this specific test and also in 3DMark's API test and Star Swarm. This could easily change with Pascal.

I wouldn't say that Nvidia gains less, or that gains in percentage over DX11 are important. It performs close to what AMD's cards can offer; it's just that current Nvidia cards are not as strong - in brute force - as AMD cards. So the ceiling is a little lower. You should only look at the DX12 lines to see the potential of every card. Ignore the DX11 scores if you only want to see the hardware's potential.

This is normal because AMD's hardware was usually stronger. The problem was always with the driver's performance on the AMD side. Nvidia's hardware on the other hand is not superior as Nvidia fanboys say. But their drivers are much superior. Nothing strange here. AMD is a hardware company. Nvidia is primarily a Software company that creates also good hardware.

August 19, 2015 | 03:00 AM - Posted by DerekR (not verified)

I think you're right about the biggest differentiator being the driver. Thinking back, AMD did tend to have superior hardware over nVidia throughout the years. When I was growing up it always felt like when I bought a new nVidia GPU, AMD had something hardware-wise faster coming out. It seems that within the last couple of generations nVidia has really leaped forward with their driver quality over AMD.

I'd like to ask Apple why they chose AMD's FirePro series for their Mac Pros. I want to say that I heard somewhere that AMD was best because of its raw throughput.

August 18, 2015 | 05:28 PM - Posted by Anonymous (not verified)

Just sayin:

http://i.imgur.com/yAkA3KJ.jpg

August 20, 2015 | 04:42 PM - Posted by Anonymous (not verified)

Nvidia's had access to up-to-date source code for over a year. They have no excuses.

Besides, if you're going to cling to the idea that the AMD logo is the only reason AMD beats Nvidia in DX12, then you must also accept the idea that the Nvidia logos on all those DX11 games are the only reason Nvidia beats AMD in DX11.

You don't get it both ways.

Just sayin.

August 17, 2015 | 07:50 PM - Posted by kenny1007

I don't care how much FPS a game can display as long as I can have a smooth, playable game.

August 17, 2015 | 08:06 PM - Posted by Leo DS (not verified)

On the NVIDIA side some instances of DX12 are performing WORSE than DX11? Can someone give me a reason why DX12 would perform worse than DX11 except for poor implementation?

To me it seems pretty clear that the game was implemented to run great on the AMD chips and didn't pay much attention to the Nvidia side.

It's also interesting to see that this game's performance is strongly CPU bound, which is curious, considering DX12 should reduce CPU overhead. More than that, it seems to be very reliant on single threaded performance.

Last but definitely not least, seriously, look at those graphics. Are these guys even trying to make a game that looks good and runs well? Because the graphics I have seen in the video are nothing special at all, and seeing this running on pretty good hardware with this poor performance does make it seem like they're going out of their way to load the hardware instead of just making the game lighter to run.

This looks like it could run on DX11 (if not lower) and mediocre hardware if implemented correctly.

Am I crazy here?

August 17, 2015 | 09:11 PM - Posted by Anonymous (not verified)

Nvidia obviously spent a lot of resources optimizing for DX11. Given all of the numbers here, I suspect Nvidia will be able to get their performance to the point that DX12 is better than DX11, but I doubt we will see any massive improvements. They seem to be getting very good utilization with DX11, so there is limited room for improvement. It is also possible that their hardware scheduling system has bottlenecks with DX12, and software optimization will not fix it. This pattern should extend to the Fury vs. 980 Ti also. AMD has more compute resources, so once the bottlenecks are removed, AMD will probably get the bigger performance boost. This is probably close to a best case scenario for DX12 GPU scaling though.

As far as the CPU is concerned, the CPU has not been that much of a bottleneck for a while. There are some cases where DX12 should help. It should help favor more CPU cores, but it was probably not much of a bottleneck unless you are comparing a slower dual core chip to a quad core or larger. Also note that RTS type games are usually CPU intensive; they are sometimes CPU bound when other games are not. This game tries to have ridiculously large numbers of units onscreen. This will take a lot of CPU power to simulate, in addition to a lot of small draw calls. We will need to see more samples with different types of games. It is unclear what will actually be the bottleneck. DX12 increases performance of AMDs GPUs significantly, so this could actually shift the performance bottleneck back onto the CPU in some cases.

August 21, 2015 | 01:16 PM - Posted by Anonymous (not verified)

It performs worse because it was not designed for DX12, whereas the AMD GCN architecture is perfect for DX12. That is the key difference.

August 17, 2015 | 08:07 PM - Posted by JohnGR

Ryan, do you have any Phenom IIs sitting around? It would have been interesting to have a comparison between a quad core Phenom II and a quad core FX, or a Thuban and a six core FX. The FX scores are disappointing, to say the least.

August 17, 2015 | 08:09 PM - Posted by ANON17 (not verified)

Can you please include the AMD A10-7870K in there please? Let's look at APU + GPU DX12 results.

August 17, 2015 | 08:14 PM - Posted by Anonymous (not verified)

Can you please also release CPU utilization graphs with these results? As of now it seems that Ashes of the Singularity can only use up to 4 threads (that's why the FX-8370 is only on par with the i3).

August 17, 2015 | 10:57 PM - Posted by VincentCarter (not verified)

I think it means the DX12 driver is still running on only 1 core, like the DX11 driver.

August 17, 2015 | 11:47 PM - Posted by Anonymous (not verified)

but there are pictures like this
which shows that ashes of singularity can use 8 threads
http://images.techhive.com/images/article/2015/08/screenshot-100607878-l...

maybe different version?

August 17, 2015 | 11:54 PM - Posted by VincentCarter (not verified)

It doesn't matter; as long as the DirectX driver is running on only 1 core, it will hold back a slow CPU (like an FX) really badly.

August 17, 2015 | 08:45 PM - Posted by Anonymous (not verified)

This looks like it is probably a best case scenario for DX12 vs. DX11. I think this does show that Nvidia probably put a lot of R&D into optimizing DX11 that AMD did not. AMD put resources into developing mantle though, which is obviously quite similar to DX12. This obviously hurt AMD a lot in the short term. AMD seems to have been focusing on the long term for a while now with taking a significant amount of time developing a new set of hardware design libraries and a new API. It has left AMD behind for a while, but perhaps their work will be paying off now.

It always annoys me when fan boys and enthusiast seem to think that they know better than the engineers who design these things. A lot of decisions have to be made years in advance of when we actually see the product. A lot of things may be more clear in hindsight. Many trade-offs have to be made. AMD probably didn't have the resources to do both. Given their contracts for console hardware (which requires low level, efficient APIs), the choice may have been obvious.

It would be interesting to know where the split is between software and hardware, although this could be difficult to determine. A lot of optimization can be done in the drivers, but the differences could also be due to differences in the hardware. The driver will have a lot of interaction with the hardware scheduling system. I am wondering if it is just drivers, or whether AMD's GPUs have simpler scheduling hardware also. Keeping thousands of units busy is not a simple problem to solve.

I would like to see some IGP/APU test also. Intel has been behind on the GPU side for a long time with a large part of that probably being drivers. DX12 may be a lot simpler to schedule, so Intel may get a boost.

August 17, 2015 | 10:05 PM - Posted by siriq

Well, I am not sure if it was mentioned before or not, but AMD did not make a DX11 profile for this bench/game ATM. So the performance benefit is not what you would see in real life.

August 17, 2015 | 10:44 PM - Posted by Anonymous (not verified)

Nvidia be catching the vaporz

August 18, 2015 | 12:55 AM - Posted by Donny Stanley (not verified)

Great article.. my only critique: needs more Allyn. lol

August 18, 2015 | 12:59 AM - Posted by Anonymous (not verified)

So... what we've established is that the test is actually CPU bottlenecked in these benchmarks?

The results are within 5% most of the time yet vary enormously with CPU performance.

Granted, it's a good thing AMD gets around their API bottle-necking, but it all just dead-ends when the CPU tops out anyway and doesn't actually show the maximal potential of each card.

August 18, 2015 | 03:33 AM - Posted by Anonymous (not verified)

It is an RTS and RTS type games are often much more CPU bound than other games. This may not be representative of general performance because of this.

August 18, 2015 | 01:40 AM - Posted by Anonymous (not verified)

Ok, ok, so Oxide Games now comes with "Ashes of the Singularity", a "new" game that shows us what good programmers they are.

Where is the Star Swarm game, huh? Yes, it was intended to be a game, not only a test for the glory of AMD. Ah, I know, I know... never settle: one engine, two fake games, and with each one an AMD show for the public about how good its dead API Mantle is, or now how good the DX12 performance of its GPUs is.

Do you not see anything strange in this graphics configuration? About temporal antialiasing (which you, Ryan, keep activated when this is a trojan option, a trojan? yes, because this option at least duplicates ALL the instances in the game; with Star Swarm all the points of the stars (a pixel each) were an instance (ridiculous), and with temporal AA there are twice or more instances), about the sampling... it's a test made for AMD.

Go to:

http://www.ashesofthesingularity.com/

And see the banners at the end of the page. Clearly, a "good neutral DX11 test", like Star Swarm with DX11/Mantle, that game that you can play now.... ;-)

August 18, 2015 | 01:42 AM - Posted by Anonymous (not verified)

Sorry, Ashes of the Singularity is a "good neutral DX12 test", not DX11.

August 20, 2015 | 04:48 PM - Posted by Anonymous (not verified)

Oh I know. It's like all those DX11 games with Nvidia logos on them that apply 64x tessellation to every single thing on the screen, especially objects that don't need it, like the interior of a concrete barrier. That's CERTAINLY NOT there to make Nvidia perform better. Heavens no.

August 18, 2015 | 02:08 AM - Posted by Anonymous (not verified)

It isn't surprising that Nvidia is backing away from supporting real-world game benchmarks. It used to be that reviewers were encouraged to use AAA title games which had been tweaked by Nvidia middleware software and Nvidia consultants. They have then been able to get games to run and look better on Nvidia hardware. However, lately, it seems like Nvidia has just been making games suck, resulting in crippled gameplay for Watch_Dogs, Witcher 3 and Batman: Arkham Knight.

At some point, Nvidia will update their drivers and middleware (like HairWorks) to take advantage of DX12. Once they have Nvidia-tweaked AAA DX12 titles, they will go back to pushing "real world" benchmarks. Until that happens, gamers are supposed to care about Nvidia demos of buffalo with "realistic" hair, as if that is what any gamer really cares about.

I have never seen a company so out of touch with its customer base.

August 18, 2015 | 03:38 AM - Posted by El_Tech_Gato (not verified)

Uh, what about Windows 8? Microsoft seemed pretty out of touch to me........

August 18, 2015 | 03:02 AM - Posted by Anonymous (not verified)

But will it run Nvidia GameWorks?

August 18, 2015 | 03:07 AM - Posted by JL (not verified)

The bench just proved that AMD's DX11 driver has serious overhead.

August 18, 2015 | 04:56 AM - Posted by DAMIT (not verified)

Were the processors being used in these benchmarks running at stock speeds? I didn't see it mentioned anywhere in the review. I'd be interested to see what kind of a difference this would make to the results, especially for the 5960X with its relatively low base clock speed compared to the other CPUs in the test.

August 18, 2015 | 05:18 AM - Posted by Hakuren

So to sum everything up: unless a game is code-optimized for either nVidia or AMD, the difference otherwise is negligible, which is so 'shocking' that it's not even worth mentioning... On my end I couldn't care less about over-hyped DX12. All my games are CPU heavy and the VGA is just a display adapter.

August 18, 2015 | 07:10 AM - Posted by Irishgamer01

Is this benchmark Press only?
Been looking around and haven't seen it posted up anywhere?

August 18, 2015 | 07:13 AM - Posted by sawe (not verified)

What I see is that AMD has a real problem on the DX11 side; could you get a comment on that from AMD?

From PCPer I wish that you had run these tests with Fury and 980 Ti level hardware.

August 24, 2015 | 09:04 PM - Posted by sawe (not verified)

Found answer to my own question: http://www.dsogaming.com/news/amds-directx-12-advantage-explained-gcn-ar...

August 18, 2015 | 07:40 AM - Posted by ET3D (not verified)

At the end of the first page: "Results we be presented first comparing the GTX 980 and R9 390X..."

Anyway, thanks for the interesting benchmark. I'd love seeing some benchies on lower level GPU's.

August 18, 2015 | 07:40 AM - Posted by Anonymous (not verified)

Is there any way to tell how high the utilization is? I am wondering if the Nvidia GPUs have much more headroom with driver improvements, or is performance close to the max already. The 980 Ti and the 390x are actually quite close in terms of raw hardware (numbers from Wikipedia):

Shader Processors : Texture mapping units : Render output units
980 Ti 2816:176:96
390x 2816:176:64

The 390x also has a higher theoretical GFLOPS rating, so I have wondered what is really holding it back (980 Ti 5632/176, 390x 5914/739, single/double precision). It seems that it may be the driver implementation; most systems never get anywhere close to their max theoretical performance. Nvidia seems to have made significant improvements to their performance in this game with the last driver update, according to ExtremeTech:

"Nvidia has done a great deal of work on Ashes of the Singularity over the past few weeks. DirectX 11 performance with the 355.60 driver, released on Friday, is significantly better than what we saw with 353.30."

This was with DX11 though. It is unclear whether there is much more performance to be gained under DX12; since it is a lower-level API, there is going to be much less opportunity for driver optimization.

August 18, 2015 | 08:01 AM - Posted by Anonymous (not verified)

I had the wrong GPU there. This review is the 390x vs. the 980, not the 980 Ti. I think the point still stands though. The 390x actually has significantly more hardware than a 980 on paper. I don't know if it is that surprising that it matches or exceeds a 980 in performance once the driver bottleneck is reduced.

Shader Processors : Texture mapping units : Render output units
980 Ti 2816:176:96
390x 2816:176:64
980 2048:128:64

GFLOPS single/double precision
980Ti 5632/176
390x 5914/739
980 4612/144
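
For what it's worth, those single-precision numbers fall straight out of shader count and clock speed (FP32 GFLOPS ≈ shaders × 2 FLOPs per clock × clock in GHz); a rough sanity check, assuming reference clocks of about 1.00 GHz (980 Ti), 1.05 GHz (390x) and 1.126 GHz (980):

980 Ti: 2816 × 2 × 1.00 ≈ 5632 GFLOPS
390x: 2816 × 2 × 1.05 ≈ 5914 GFLOPS
980: 2048 × 2 × 1.126 ≈ 4612 GFLOPS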

We will have to wait and see if this performance is representative of other game engines and if Nvidia can push higher performance under DX12 with driver optimizations.

August 18, 2015 | 08:30 AM - Posted by RDG (not verified)

This just proves what Digital Foundry has been saying for years: AMD's DirectX 11 driver is quite problematic. AMD hardware has probably been faster than the NVIDIA counterparts for years but held back by the atrocious driver. DirectX 12 largely solves AMD's driver issues, so we can see the full potential and performance of AMD hardware.

This is good news for both sides: AMD will get more sales, and NVIDIA will have to bring something better to the table come Pascal. Competition is always good for the consumer.

August 18, 2015 | 12:39 PM - Posted by Anonymous (not verified)

Rarely will you ever see me commenting on the articles of any web site, but:

I am still trying to make up my mind about why this article was put together. So far, I am leaning towards the side of trying to get views at all costs.

I own multiple models from each vendor across current and previous generations of graphics cards, and I do not have a favourite… I just happen to need that variety because some tools are better with AMD and others with NVidia.

One thing is for sure: writing a piece about ONE game that was used by one of the two brands to showcase their side of things feels like a click-bait piece. All this accomplishes is adding fuel to the flame wars between fanboys.

I expect better than this from this site.

Have a nice day.

P.S.: Yes, NVidia has had access to the code for over a year. But stating that, because of that, they have no excuse is quite a strange thing to do. NVidia is a business and, as such, they have access to many developers' games. And honestly, as a business, where do you think their priorities are going to go? Ashes of the Singularity, or things like a Batman, Shadow of Mordor, …?

August 18, 2015 | 03:41 PM - Posted by snook

So they were paid to say this? They are biased towards AMD? AMD shills?
Or, perhaps, too stupid to realize they got played? Which one are you hinting at?

August 18, 2015 | 12:41 PM - Posted by Anonymous (not verified)

Nvidia is crying over a true DX12 game engine - they've had access to the source code from day 1, so no excuses whatsoever.

August 18, 2015 | 12:51 PM - Posted by gamerk2 (not verified)

Am I the only one seeing the somewhat obvious CPU bottleneck compressing the results? It's worth noting that the NVIDIA DX11 path is the fastest and, coincidentally, that's as far as AMD can catch up.

What I believe is happening:

There's a CPU bottleneck right around the level of NVIDIA's current DX11 performance. As a result, when using DX12, the faster API allows the slower 390x to catch up until it hits that same bottleneck, while NVIDIA is stuck because the CPU is preventing the card from going any faster. So AMD shows a larger effective gain simply because NVIDIA is already at the CPU limit.
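
A rough way to put the same idea, treating the frame rate as capped by whichever side finishes a frame last (the GPU, or the CPU/driver submitting work):

FPS ≈ min(GPU-limited FPS, CPU-limited FPS)

DX12 mostly raises the CPU-limited term. A card that was held at a lower CPU/driver ceiling under DX11 (the 390x) gains a lot once that ceiling lifts, while a card already bumping against it (the 980) gains little, and both end up pinned near the same new limit.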

August 18, 2015 | 03:37 PM - Posted by snook

You are contending that a 5960X is the cause of a bottleneck for the GTX 980? You know the saying "grasping at straws"? What you are saying is "grasping at quarks"...

August 18, 2015 | 03:47 PM - Posted by Albert89 (not verified)

Love the inclusion of the i3 4330.

August 18, 2015 | 08:41 PM - Posted by wiak (not verified)

AMD/Valve literally talking about us in the "Vulkan Presentation" :=)
https://youtu.be/qKbtrVEhaw8?t=4656

August 18, 2015 | 11:03 PM - Posted by Anonymous (not verified)

What about this... Microsoft duped us into thinking DirectX 12 was better than DirectX 11? I fell for it; I've spent 2 days trying to get Win10 to act like Win7 and to disable all the privacy concerns.

What if this is all just a ploy to get us on win10?

We all know that Nvidia spends more resources than AMD optimizing each game with the developers, and it's not really that AMD's hardware sucks.

So now this benchmark shows AMD is on par with Nvidia... AMD has always been on par with Nvidia. What it really shows is that DirectX 12 is the same as DirectX 11.

We are the suckers....

(Let's hope this isn't the case.)

August 18, 2015 | 11:34 PM - Posted by J Nev (not verified)

I am shocked. Why? Because this article tells us Ryan has a 390X, yet there STILL isn't a review of said 390X.

August 19, 2015 | 04:48 PM - Posted by Anonymous (not verified)

And what about having both NVidia and AMD cards at the same time?
DirectX 12 is supposed to support this setup.

It will be interesting to see how this works :)

August 19, 2015 | 08:48 PM - Posted by Doeboy (not verified)

I hope open-world games like GTA V and the upcoming Tomb Raider start using DX12 in order to improve performance in complex scenes.

August 20, 2015 | 09:38 AM - Posted by Anonymous (not verified)

Ditto

August 20, 2015 | 09:37 AM - Posted by Anonymous (not verified)

FWIW, my lower mid-range GPU combined with a C2Q 9550 and 8GB of DDR2 runs my games faster/smoother with W10.
In fact, everything on the web, etc. scales better and looks better on my lowly 25-inch HP 1080p monitor.
BTW... I use the Edge browser, and it should be full-featured by November's Threshold 2 update to W10, according to Thurrott.

August 21, 2015 | 05:47 AM - Posted by zMeul (not verified)

What I don't see in any of the benchmarks posted here, or anywhere else for that matter, is VRAM usage comparing DX11 vs DX12 at the same quality settings.

August 21, 2015 | 09:22 PM - Posted by Anonymous (not verified)

The i3 killing the FX-8370 doesn't make sense.

The i3 can't benchmark higher than the FX-8370 on the CPU side: it has a significantly lower PassMark score and only better single-core performance.

I consider the FX-8370 benchmarks to be invalid.

Proper use of the asynchronous compute engines in the Radeon 290X can't be happening here, as DX12 scales to 6 cores and the i3 isn't going to reach that.

http://www.pcworld.com/article/2900814/tested-directx-12s-potential-perf...

Synthetic benchmarks show that an R9 290X with its asynchronous compute engines takes advantage of a CPU's extra cores and boosts performance greatly. The whole point of this is to make more powerful CPUs unnecessary.

The OP's benchmarks don't make sense.

In the 3DMark API overhead test it's obvious that more cores help dramatically.
