DOOM on Vulkan Benchmarks

Subject: Graphics Cards | July 13, 2016 - 09:20 PM |
Tagged: vulkan, R9 Fury X, nvidia, Mantle, gtx 1070, fury x, doom, amd

We haven't yet benchmarked DOOM on Vulkan. Update (immediately after posting): Ryan has just informed me that we did, in fact, benchmark Vulkan on our YouTube page (embed below). I knew we were working on it; I just didn't realize we had published the content yet. Original post continues below.

As far as I know, we're trying to get our frame-time-analysis software running on the new API, but other sites have posted framerate-based results. The results show that AMD's cards benefit greatly from the new, Mantle-derived interface (versus the OpenGL one). NVIDIA, on the other hand, never sees a decrease of more than about 1%, but it doesn't get much of a boost, either.

I tweeted at id's lead renderer programmer, Tiago Sousa, to ask whether they take advantage of NVIDIA-specific extensions on the OpenGL path (like command submission queues). I haven't gotten a response yet, so it's difficult to tell whether this speaks more to NVIDIA's OpenGL performance or to AMD's Vulkan performance. In the end, it doesn't really matter, though. AMD's Fury X (which can be found for as low as $399 with a mail-in rebate) is beating the GTX 1070 (which is in stock for the low $400s) by a fair margin. The Fury X also beats its own OpenGL performance by up to 66% (at 1080p) with the new API.

The API should also make it easier for games to pace their frames, which should allow smoother animation at these higher framerates. That said, we don't know for sure, because we can't test that from FPS numbers alone. The gains from AMD are impressive, though.
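A toy illustration of why FPS alone can't show pacing: two frame-time traces can average the exact same framerate while one is far less evenly paced. This hypothetical Python sketch (my own example, not our testing software) compares average FPS with a simple pacing metric, the 99th-percentile frame time:

```python
# Two hypothetical frame-time traces (milliseconds), same total time.
smooth = [10.0] * 100       # perfectly even pacing: every frame takes 10 ms
jittery = [5.0, 15.0] * 50  # alternating fast/slow frames, same average

def avg_fps(frame_times_ms):
    # Average FPS = frames rendered / total seconds elapsed.
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def p99(frame_times_ms):
    # 99th-percentile frame time: what the worst frames feel like.
    ordered = sorted(frame_times_ms)
    return ordered[int(0.99 * (len(ordered) - 1))]

print(avg_fps(smooth), p99(smooth))    # 100.0 fps, 10.0 ms
print(avg_fps(jittery), p99(jittery))  # 100.0 fps, 15.0 ms
```

Both traces report 100 FPS, but the jittery one spends half its frames at 15 ms, which is exactly the kind of difference frame-time analysis catches and a plain FPS counter hides.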

July 13, 2016 | 09:27 PM - Posted by Anonymous (not verified)

Another reason Tom Petersen didn't want to talk about asynchronous compute on Pascal is that there is none. Just like Maxwell was said to have async compute, but after all these years that mythical driver enabler is nowhere to be found.

I'm sure Nvidia will keep stringing along the deception that they have async, just like they did with Maxwell, hoping it will go away. And just like the Maxwell async mystery, Ryan and PC Perspective will ignore it, hoping it will all go away.

July 13, 2016 | 09:32 PM - Posted by Scott Michaud

I still don't really understand why people think we're against performance-improving features.

July 13, 2016 | 09:36 PM - Posted by RadioActiveLobster

Because this is the Internet and people are morons.

July 13, 2016 | 09:37 PM - Posted by Anonymous (not verified)

Your "PCPerspective" isn't against performance improvement. You just don't investigate common issues on Nvidia's side of things, and the comments everywhere should give you a PERSPECTIVE of what the community here at large thinks of your practices, which Ryan & crew always try to jokingly shrug off as bias.

July 13, 2016 | 09:55 PM - Posted by Anonymous (not verified)

PCPer has called out Nvidia on a number of things. Such as the G-Sync problem they had where the power draw would ramp up for no reason. Or when the cards wouldn't drop their core clock when idle.

You stupid fuck.

July 14, 2016 | 02:59 AM - Posted by JohnGR

It's not a G-Sync-only problem, and some people, even with Pascal cards, still have it. Did you know that? No? Maybe because there was no update about it. Just speculation.
It seems that Nvidia can't fix it completely, as they can't fix other little things, but never mind. Let's all concentrate on AMD's problems.

July 14, 2016 | 09:11 PM - Posted by Anonymous (not verified)

It would have to be a low number, dude. 'Cause it fixed it for me when I was on my 980ti - even spoke to the main mod on the GeForce forums about it directly. And now with my 1080 I still don't have it (XB270HU monitor). So, I don't really know. There are still some people complaining about old 580 issues, maybe PCPer should cover those as well?

But I was responding to the person saying that nothing gets reported about Nvidia's issues, which is false. Hell, the 970 3.5GB fiasco was uncovered by PCPer, wasn't it? I don't care much about AMD vs. NVIDIA. But I do care about PCPer's reputation, and it's usually AMD customers who bash them, which is not OK with me.

And also, look at it this way. Tom from NVIDA, Huddy from AMD and even Raja come to PCPer to talk about GPUs with Ryan. PCPer is a great site, with tech savvy reviewers. All this NVIDIA shill this, and NVIDIA shill that is utter nonsense.

July 13, 2016 | 09:51 PM - Posted by pdjblum

Hey Scott, from my reading of it he is saying Ryan is biased. He is not the only one who thinks that. Anyway, the numbers are just fucking awesome for AMD.

July 14, 2016 | 09:23 AM - Posted by Anonymous (not verified)

"Another reason Tom Petersen didn't want to talk about asynchronous compute on Pascal because there is none. Just like Maxwell was said to have async compute but after all these years that mythical driver enabler is nowhere to be found."

A common refrain from people who don't actually know what Asynchronous Compute is.

tl;dr version: Maxwell & Pascal have been doing Async Compute all along, even under DX11. AMD implemented the hardware for Async Compute (Async Shaders) in GCN, but never updated their drivers to actually utilise them under DX11 (same as DX11 multithreading). Maxwell and Pascal don't see any performance benefits from forcing Async Compute under DX12 because they're already doing it. GCN sees a speedup because large areas of silicon were previously doing bugger-all.

July 14, 2016 | 10:32 AM - Posted by Kaotik (not verified)

Async compute has absolutely nothing to do with DX11; neither Maxwell nor Pascal is doing it in DX11, and neither is AMD, since it isn't something you can do there.

You're mixing up multithreading with the ability to run graphics and compute tasks concurrently and/or independently of each other.
Maxwell can do this, but it can never gain performance from it, because whenever one task finishes before the other, the portion of resources dedicated to the task that finished first remains idle until the next task; the allocation of the GPU can be changed only via expensive context switches.
Pascal can benefit from async compute, thanks to being able to re-allocate resources on the fly from compute to graphics or vice versa. It's not as flexible as AMD's solution, and it still relies on preemption, but it can still bring some benefits.
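The idle-resource argument above can be sketched numerically. In this toy model (my own illustration, not vendor code), a GPU's execution units are split between a graphics task and a compute task; with a static partition the shorter task's share sits idle until the longer one finishes, while on-the-fly rebalancing hands the freed units to the remaining work:

```python
def static_partition_time(gfx_work, cmp_work, gfx_units, cmp_units):
    # Each task runs only on its fixed share of units;
    # total time is whichever task takes longer.
    return max(gfx_work / gfx_units, cmp_work / cmp_units)

def dynamic_time(gfx_work, cmp_work, total_units, gfx_units, cmp_units):
    # Run partitioned until the shorter task finishes...
    first = min(gfx_work / gfx_units, cmp_work / cmp_units)
    # ...then give every unit to whatever work remains.
    remaining = (gfx_work - first * gfx_units) + (cmp_work - first * cmp_units)
    return first + remaining / total_units

# 80 units of graphics work + 20 of compute, on a 10-unit GPU split 5/5.
print(static_partition_time(80, 20, 5, 5))  # 16.0: compute's half idles after t=4
print(dynamic_time(80, 20, 10, 5, 5))       # 10.0: all units stay busy
```

In the static case the run takes 16 time units because half the GPU goes idle once the compute task ends, versus 10 with rebalancing; real hardware is far messier, but this is the shape of the gap the comment is describing.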

July 14, 2016 | 11:00 AM - Posted by Anonymous (not verified)

Maxwell and Pascal do asynchronous compute in software (emulation), while AMD has a full hardware implementation in its GPUs with its asynchronous shaders/ACE units and hardware schedulers. So AMD's asynchronous shaders are the more responsive, hardware-based solution: AMD's ACE units and hardware schedulers can manage the fast-changing GPU processor-thread scheduling and dispatch in hardware.

So it is not simply a matter of doing "async compute"; it is a matter of having the asynchronous shaders, ACE units, and hardware schedulers/dispatchers implemented in hardware, as AMD has done for 4 generations of GCN-based SKUs. I suppose the Nvidia folks are now going to say that Intel's version of SMT, Hyper-Threading(TM), should be done in software and not in the hardware of Intel's CPU cores. SMT (Simultaneous Multi-Threading) is a good example of async compute done/managed in a CPU's hardware, and SMT gets much higher IPC, much better utilization of a core's execution resources, and lower-latency responsiveness than any CPU without SMT. Zen is getting SMT as well.

This significant performance improvement for AMD's GCN-based SKUs in Doom on Vulkan shows that the asynchronous shaders, ACE units, and hardware schedulers on AMD's GCN GPUs are very helpful, letting those GPUs manage their processor threads with hardware that can respond with very low latency to fast-changing asynchronous events. That low-latency response from the GCN hardware is what gives the frame-rate improvements, with the Vulkan API able to get at the low-level hardware of AMD's GCN GPUs. There are a whole lot of older-generation GCN customers out there who are very happy to get that Vulkan boost. The gaming software/API ecosystem has now begun to catch up to AMD's GCN hardware, and AMD was not dishonest about the improvements its GCN hardware would get from both Vulkan and DX12.

That reddit post is damage control that attempts to divert the discussion from the fact that Nvidia does NOT have async compute fully implemented in its GPUs' hardware! And it's not an argument about async compute and what async compute means; it's an argument about having async compute implemented FULLY in the GPU's hardware, like AMD has and Nvidia does NOT!

July 14, 2016 | 03:51 PM - Posted by johnny rook (not verified)

Explain it to me as if I were a 4-year-old:

How can a post that argues "about having the Async-Compute implemented FULLY in the GPU's hardware like AMD has, and Nvidia does NOT have!" be "damage control"?

July 14, 2016 | 03:47 PM - Posted by johnny rook (not verified)

Great link! I learned a new thing today! Thank you. :)

July 13, 2016 | 11:14 PM - Posted by Anonymous (not verified)

I think the reference RX 480 out of the box leaves a lot of performance untapped... well, for most.

I wouldn't be surprised if, just because of the power and TDP limits, the reference card is ~10% slower than a custom RX 480 (not overclocked).

So the Doom benchmark numbers could jump another 20 to 25% with a custom/overclocked RX 480 versus the reference card.

And that means those RX 480s would cleanly beat a Founders Edition GTX 1070 in Doom.

If AMD can get Rockstar, Blizzard, and Activision on board with Vulkan/DX12, the trend of an RX 480 beating the GTX 1070 might solidify, and people will just switch to the red team.

I just got an RX 480 (very pleased with it... the first thing I did was undo the factory overvolting and put the silicon near its sweet spot), but I'm still interested in the GTX 1060 if a 6GB blower model is offered at $250.

July 14, 2016 | 12:52 AM - Posted by arbiter

I wouldn't declare it over just yet. An NV product manager replied on Twitter to a question about a Vulkan update: "Supported? Yes. Not quite at 'Game Ready' support status yet."

So maybe the next driver or two will improve performance.

July 14, 2016 | 01:04 AM - Posted by Anonymous (not verified)

Is that the same Nvidia product manager who has always said async isn't enabled in consumer drivers?


He's been saying that since Maxwell.

July 14, 2016 | 03:01 AM - Posted by JohnGR

This must be like that mysterious, magical, driver that was going to give Async support to Maxwell cards. Months ago.

July 15, 2016 | 04:16 PM - Posted by odizzido (not verified)

I am still waiting for variable refresh from Nvidia. What a joke.

July 14, 2016 | 12:00 AM - Posted by Aberkae (not verified)

So many low-level API titles coming out: BF1, Mass Effect, all id titles, most likely any Valve titles, Deus Ex in just one month, and Civilization VI.
The only ones still holding on to DX11 are Nvidia partners.
I have a feeling that once big Vega comes out, during a flood of low-level API titles, Pascal prices will deflate relative to their teraflop performance compared to the competition.

Nvidia will finally have a DX12 card two years after its road map showed it, in Volta.

July 14, 2016 | 12:01 AM - Posted by Aberkae (not verified)

Lol, 2 years, silly of me. Correction: 2 generations after the road map showed it!

July 14, 2016 | 12:47 AM - Posted by arbiter

Typical red fan: hypes everything while knowing nothing about, well, everything and anything. The reason AMD has a lead with Vulkan is that it was ONCE the closed AMD API known as Mantle, so they had 2-3 years of extra work with it. Wait a few months; new drivers for Nvidia cards will probably shut you up.

July 14, 2016 | 01:11 AM - Posted by Anonymous (not verified)

Exactly what I was thinking!!

July 14, 2016 | 02:22 AM - Posted by David A Ayala (not verified)

Typical nvidia fan... blames the lack of proper async compute on drivers. Go check the Rise of the Tomb Raider DX12 benchmarks with async. Same story: a huge boost for AMD and little to no boost for Nvidia. BTW, Vulkan is not Mantle; it's derived from Mantle, not the same thing.

July 14, 2016 | 03:23 AM - Posted by JohnGR

Well, Nvidia fanboys are full of promises. Wait for async in drivers, wait for better DirectX12 support, wait for better Vulkan performance, wait wait wait....

July 14, 2016 | 06:07 AM - Posted by Batismul (not verified)

Hypocrite much, AGAIN? That is exactly the major thing the butthurt AMD fanboys have been saying for years: wait for better drivers, the card will age well, blah blah blah.

July 14, 2016 | 06:11 AM - Posted by JohnGR

AMD cards do age better. That's something that even some Nvidia fans admit.

July 14, 2016 | 07:03 AM - Posted by Nintex (not verified)

Simple: because their drivers suck and take a long time to take fuller advantage of the GPU's resources. Any PC gaming enthusiast knows this, apart from the fanboys, obviously.

July 14, 2016 | 07:23 AM - Posted by JohnGR

Whatever makes you happy. :)

July 14, 2016 | 01:44 PM - Posted by Anonymous Nvidia User (not verified)

Same as AMD: waiting. You had to wait four years for asynchronous compute to be somewhat viable, and almost as long to get maximum DirectX 11 performance out of drivers for Radeon cards. If we wait that long for something, we usually get a new video card designed to support these features. I usually upgrade every 3-4 years anyway to take advantage of new graphical features. Besides, 3DMark Time Spy properly coded asynchronous compute for Pascal.

Lo and behold, non-existent asynchronous compute now exists on Nvidia. The companies were only coding for AMD's version of async.

Can't spread any more lies about Pascal not having async support.

Also, on a side note, Bethesda is working with Nvidia to get async working in the Vulkan version of Doom. All the responsibility for getting features to work properly is on the programmer. This is why Vulkan and DirectX 12 really suck: hard to code for and too time consuming. Programmers are usually hard pressed to release a semi-workable game by the deadline with a high-level API doing most of the work for them. Now that they have to do more, it's a recipe for disaster. If a game is coded for AMD, don't buy it if you have Nvidia, and vice versa.

I wish someone would do a power-consumption test of Time Spy with and without async to determine whether the performance is worth it, or whether you should just max-overclock your card instead for the performance increase.

July 14, 2016 | 03:14 PM - Posted by taisserrootseasyaccount (not verified)

No one really waited for async to be used (Mantle used it, but it still wasn't a selling point).
The point is that Nvidia has been promising this for a long time now and it hasn't happened.

July 16, 2016 | 12:16 PM - Posted by leszy (not verified)

It will be an expensive time for NV, because all XBO games are now coded for DX12 and don't need porting for full AMD optimization. On the other side, for NV optimization, NVidia will need to pay full price. But do not worry. They know how to make money; after all, they invented Founders Edition cards ;)

July 14, 2016 | 06:58 AM - Posted by Aberkae (not verified)

Fan? Lol, I wish. I admit I've only bought the green stuff: dual 480 SC in SLI, upgraded to a Classified 580, upgraded to a Classified 690, upgraded to a G1 980, upgraded to an AMP Extreme 980 Ti, sold that for $500 last month minus Amazon fees. And now I can't wait for the 1080 AMP Extreme to show up in stock, but I hear it will only show up again in August. I have signed up for auto-notification with more places than I can remember.

I'm just for healthy competition. I don't like the monopoly Nvidia has become. AMD is bringing more innovation than Nvidia, at least the kind that matters and makes a difference. I'm really tempted to go AMD when big Vega shows up, though.

July 14, 2016 | 07:09 AM - Posted by Aberkae (not verified)

Oh, I forgot: I'm currently rocking a 750 Ti FTW as an in-between card, because my old 8800 Ultra crapped out, I believe from lack of support. More like an atypical fan!

July 14, 2016 | 11:25 AM - Posted by Anonymous (not verified)

No, Nvidia was busy stripping compute and other hardware out of its consumer GPU SKUs and marketing the power savings, all while not looking toward the future of the consumer market and VR gaming. Nvidia was doing what Intel does: product segmentation, milking for profits by charging consumers extra to put that compute back in. It's an example of classic monopolistic business practice, product segmentation that Nvidia continues to this very moment. Just look at the GTX 1060 SKUs and the lack of SLI support, which will force customers up the cost ladder to get any performance scaling! I'll bet that Nvidia is none too happy about explicit multi-adapter in the newer graphics APIs.

All that Nvidia SKU gimping and product segmentation to milk for profits is coming back to bite Nvidia in the A$$!

July 14, 2016 | 01:50 PM - Posted by Anonymous Nvidia User (not verified)

I don't think many people will be happy about multi-adapter support when they used to get 80-90% scaling under DX11. It's down to 45% for SLI in DirectX 12, at least in the Time Spy demo. No wonder CrossFire and SLI are being dropped.

Another reason to stay directx 12 if you have multiple cards.

July 14, 2016 | 01:52 PM - Posted by Anonymous Nvidia User (not verified)

Edit directx 11 not 12. LOL hard to type on small Google keyboard with big fingers.

July 16, 2016 | 12:26 PM - Posted by leszy (not verified)

Crossfire has nearly 90% scaling.

July 16, 2016 | 01:52 PM - Posted by Anonymous Nvidia User (not verified)

Your math is off. If it were 90%, the score would be 8086, not 7678: 0.9 × 4256 = 3830, + 4256 = 8086; 0.8 × 4256 = 3405, + 4256 = 7661. So it's a little more than 80%.
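The arithmetic in that reply is easy to check: scaling is the extra performance the second card adds, expressed as a fraction of the single-card score. Using the scores quoted above (4256 single, 7678 dual):

```python
def scaling_pct(single, dual):
    # Extra performance the second card adds, as a % of one card.
    return 100.0 * (dual - single) / single

single, dual = 4256, 7678
print(round(scaling_pct(single, dual), 1))  # 80.4 -> a little over 80%, not 90%

# A true 90% scaling from the same single-card score would need:
print(round(single * 1.9))                  # 8086
```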

July 16, 2016 | 12:03 PM - Posted by leszy (not verified)

NV stated they were the first to work on DX12 with Microsoft. There were also some articles over a year ago about how hard they were working on Vulkan.

July 14, 2016 | 12:49 AM - Posted by Anonymous (not verified)

Anedia is finished. Fanboys will start crying soon. Now Anedia will have to pay money to game developers to slow down AMD GPUs; Anedia programmers are known to have supplied code to game developers.

If GPU-ID is AMD, then sleep 1000 milliseconds...

July 14, 2016 | 04:59 AM - Posted by rock1m1

This comment doesn't make much sense. If they are financially so well off, they can easily invest in better GPUs. These are very early days for Mantle... I mean, Vulkan and DX12; give it some time and you will see performance improvements from both sides.

July 14, 2016 | 03:20 AM - Posted by JohnGR

Guru3D also did some testing showing the same results.

Also, a good idea - from any reader's perspective, but probably not Petersen's - would have been to retest Tomb Raider with the latest patch.

As for async, well, I think it is obvious, or at least it's my opinion, that Pascal cards do NOT support async. Nvidia used its marketing again to gimp its cards, but this time with the help of tech sites and fanboys.

Dynamic load balancing was quickly taken by everyone as async. Well, I guess the name is fine for not getting sued by just about everybody: no one can come out and say that Nvidia talked about async compute. That was just a misconception.

July 14, 2016 | 04:17 AM - Posted by arbiter

How do you know Pascal doesn't support it? What is the basis for that claim? That was a rhetorical question to which everyone knows the answer. Using TSSAA does provide a boost in performance, at least on my card, and that is one of the AA modes that uses async. It's just that AMD had many more years of access to that formerly locked API, because they wrote it; that is the problem. Async compute was also another AMD tech, so yeah.

July 14, 2016 | 04:59 AM - Posted by Anonymous (not verified)

A dead giveaway that Pascal doesn't support asynchronous compute is the P100 whitepaper. No mention of it, and every cut-down derivative is based on it.

July 14, 2016 | 05:13 AM - Posted by JohnGR

Nvidia fanboys like you, about a year ago, were saying that DirectX 12 was something Microsoft had started working on years before AMD rushed out Mantle to take all the credit. More particularly, Nvidia fanboys like you were saying that Nvidia had been working with Microsoft on DirectX 12 long before AMD, like since 2009. The proof for that was the first Microsoft DirectX 12 demos: you see, Microsoft was running the first DirectX 12 demos on Nvidia Titan cards (and a couple more on Intel integrated GPUs).

Now we blame the API and insist that it was locked for too long and that Nvidia didn't have access to it. The next thing I am going to read is that PhysX and GameWorks are AMD's proprietary techs.

In any case, you are free to link me to an official Nvidia page where it clearly says that Pascal offers async support, or that dynamic load balancing IS Nvidia's name for async.

The truth is that Nvidia promised async in drivers for ALL 900-series Maxwell cards. Instead, they thought it would be a better idea to promote that feature as dynamic load balancing, an exclusive Pascal feature.

Those are, of course, just personal opinions. And it is also a personal opinion that many were convinced to throw over $700 or even $800 at a Founders Edition 1080 believing that "at least Pascal supports async", so it is a true upgrade compared to a GTX 980 Ti, for example.

July 14, 2016 | 11:00 AM - Posted by RushLimbaughisyourdaddy (not verified)

"... , async compute was also another AMD tech so yea."

Your statement is absolutely false.

Nvidia is free to implement AC at anytime, they just chose not to until very recently.

July 14, 2016 | 12:03 PM - Posted by Anonymous (not verified)

AMD innovated while Nvidia segmented! AMD pimped its GPUs for the future while Nvidia gimped its GPUs of compute!

AMD Pimped while Nvidia Gimped! AMD not only innovated with its Mantle project that resulted in the DX12/Vulkan APIs, but AMD also never gimped its GPU of compute. AMD took a lot of heat for its power usage but now AMD has got that power usage metric down, and down without having to remove compute resources from its GPUs.

Nvidia was all about marketing the power savings and DX11, and NOT at all about any future improvements like Vulkan/DX12. Nvidia was happy to milk, milk, and Milk! Well, the cows have come home with their new DX12/Vulkan APIs and they are not so happy with DX11/OpenGL and Nvidia's consumer hardware Gimping. The games are now able to tap all that compute that is inside of AMD's GPU hardware via the Vulkan/DX12 APIs close to the metal performance, and AMD has a lot more metal on its GPUs because AMD was never busily stripping out the metal to milk for excessive profits like Nvidia was and is continuing to do!

July 14, 2016 | 12:30 PM - Posted by Anonymous (not verified)

AMD innovated while Nvidia segmented!

Awesome line dude! Straight & to the point! :)

Made me think of the Turtle(AMD) and the Hare(nvidia).

July 14, 2016 | 03:45 AM - Posted by Michael Rand (not verified)

I don't understand how people can call Ryan biased; the guy used to run AMD fansites, for fuck's sake. I've been reading this site for a long time and listening to the podcasts, and I've never heard much in the way of bias toward either GPU company.

What I do see on here, though, is rampant fanboyism reminiscent of the console wars, which is pretty sad.

July 14, 2016 | 04:18 AM - Posted by arbiter

If anything, he is AMD-biased, since he owns a share of AMD stock.

July 14, 2016 | 04:25 AM - Posted by Anonymous (not verified)

Nvidia has all the resources to slow down or even halt the adoption/advancement of DX12 or Vulkan, especially regarding async. Remember, they have the bigger market share, and financially they can pay software devs, review sites, I mean everyone, to do the damage control.

July 14, 2016 | 05:28 AM - Posted by JohnGR

Nvidia is starting to regret that they didn't do everything they could to get the consoles. Even giving free integrated graphics to Intel, so they could build an APU better than AMD's and take the whole console market. A few months ago there were even rumors that Nvidia was giving away Tegra SOCs to Nintendo for their NX console.

If Nvidia had the consoles, PhysX and GameWorks would have been everywhere, and AMD really would be dead in the GPU market. They left the console market to AMD, and now everyone is optimizing for GCN.

With Vulkan also targeting mobile, Google supporting Vulkan in Android, and mobile GPU manufacturers like Imagination having had Vulkan drivers for over a year now, it would be stupid, at the least, NOT to use everything Vulkan gives you, even async, because games using Vulkan will be targeting ALL markets, not just PCs. Nvidia does have 75% market share in discrete graphics for PCs, but if you add in integrated graphics (mostly Intel), the consoles, and the smartphone/tablet market, that 75% goes way down.

If we see a game from Ubisoft using async, it will mean "game, set and match" for AMD.

July 14, 2016 | 06:16 AM - Posted by Nintex (not verified)

Go on YouTube and you will find MORE than enough videos of OpenGL vs. Vulkan comparisons between a bunch of AMD GPUs and Nvidia GPUs.

I saw a hefty increase on my overclocked GTX 980 Ti using Vulkan after 2 days of heavy testing: nightmare settings, 16x anisotropic filtering, and TSSAA (8TX) at 1440p, staying above 120fps and topping 160fps. Before, I was not reaching those framerates and was always below 100fps.

You can verify that in various YouTube videos from people with Intel i7 CPUs and GTX 980 Tis.

July 14, 2016 | 06:38 AM - Posted by Nintex (not verified)

PS: This is what you should do if you're an Nvidia user to get the performance.

Seems there is a serious driver-install issue on Nvidia's side that causes Vulkan to not work properly:

Originally Posted by nhornby51743

Sorry about the caps, that was to draw attention.

To get better performance Vulkan has to be installed on your pc first. To do this go into C:\Program Files\NVIDIA Corporation\Installer2\Display.Driver, then install it from there.

After I installed this my GTX 1070 was putting out an average extra 30fps frames, maxed out at 1080p.
And the end result:

Huge difference. Explains why some people have gains and some are getting nothing.

July 14, 2016 | 06:42 AM - Posted by Nintex (not verified)

Scratch that, he admitted mistake, and reported no gains.

July 14, 2016 | 06:35 AM - Posted by mAxius

Question: how many lashes with a wet noodle did Scott get? :P

July 14, 2016 | 06:43 AM - Posted by zgradt

Doesn't surprise me. Just looking at the hardware specs tells you that AMD chips should be faster: they have way more FLOPs than Nvidia. The new APIs just take better advantage of the available resources; Nvidia has already been operating at peak efficiency.
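The FLOPs claim can be sanity-checked with the standard formula for theoretical single-precision throughput: shader count × clock × 2 (a fused multiply-add counts as two operations per clock). Using commonly cited specs for the two cards in this article (treat the exact clocks as approximate; real boost clocks vary):

```python
def tflops(shaders, clock_mhz, ops_per_clock=2):
    # Peak FP32 throughput = shader ALUs * clock * 2 ops (FMA) per clock.
    return shaders * clock_mhz * 1e6 * ops_per_clock / 1e12

print(round(tflops(4096, 1050), 1))  # R9 Fury X (4096 SPs @ 1050 MHz) -> ~8.6
print(round(tflops(1920, 1683), 1))  # GTX 1070 (1920 cores @ 1683 MHz boost) -> ~6.5
```

On paper the Fury X has roughly a third more raw compute than the GTX 1070, which is why an API that keeps more of those units busy closes, and here reverses, the real-world gap.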

July 14, 2016 | 07:11 AM - Posted by Anonymous (not verified)

Absolutely unsurprising. When you have a driver team that is overstretched and likely inexperienced, good game/engine devs can beat them. (But it means they have to waste a boatload of time and energy on it, and nobody outside benefits from it.) And they have to update the code constantly for new hardware, since the previous version will at best have inferior performance.

However, when you have a skilled, experienced driver team, game/engine devs cannot beat them at all. In fact, I would argue there is simply no exception: the driver team has access to internal information about the hardware and controls most of the stack.

And that's what we are seeing here, again. AMD is once again leaving loads of performance on the table because their drivers are not well optimized and cannot use the hardware well. And Nvidia simply cannot see a benefit, since their drivers are already optimized and know how to push the hardware to maximum performance.

TL;DR: It is not a case of not supporting something well, but of how well the drivers are done.

July 14, 2016 | 08:07 AM - Posted by Anonymous (not verified)

So Vulkan AND DX12 are Mantle-based? AMD really started an API fire.

July 14, 2016 | 12:31 PM - Posted by Anonymous (not verified)

Technically, AMD & SONY started an API wildfire.

M$ only came on board later.

July 14, 2016 | 12:32 PM - Posted by Anonymous (not verified)

All based on Johan Andersson's work over at DICE.

July 14, 2016 | 08:09 AM - Posted by Anonymous (not verified)

"The API should also make it easier for games to pace their frames, too, which should allow smoother animation at these higher rates."

Except there's no FCAT support for Doom running Vulkan yet, so it's not easy to objectively measure.

July 14, 2016 | 08:38 AM - Posted by Anonymous (not verified)

Just id implementing Vulkan sees increased sales for Doom now, lol.
Good for them!

July 14, 2016 | 08:50 AM - Posted by Prodeous13 (not verified)

I do wonder how the R9 Nano would perform. My randomly generated guesstimate would be just above the 1070. That would make the card almost worth it, as it is currently priced just above the 1070 here in Poland.

I do miss Canadian prices, well, from yesteryear.

Still, it is slightly confusing why there was so much hidden performance in all GCN cards. Were AMD's drivers that bad? Or was something in DX11 holding them back?

It almost makes me not want to upgrade my R9 290X, seeing the hidden performance.

July 14, 2016 | 09:17 AM - Posted by Anonymous (not verified)

Very high driver overhead, poorly optimized for their GPUs; this goes for DX11 and OpenGL, and for years now. Finally, with Vulkan and DX12, we can see their GPUs' full potential, thanks to game devs utilizing those low-level APIs. No thanks to AMD's driver team!

July 14, 2016 | 01:23 PM - Posted by Butthurt Beluga

AMD's driver team as of 2016 has been as good as, or possibly even better than, Nvidia's.
Whether they can continue this trend, however, remains to be seen.

I will say wholeheartedly, as someone who has recently owned both AMD and Nvidia GPUs, that the AMD drivers of today are better than the best Nvidia drivers I received on my Kepler GPU.

July 14, 2016 | 04:11 PM - Posted by johnny rook (not verified)

What alarms me more is the fact that even with DX12 and Vulkan, AMD has yet to show us a GPU that "actually" performs better and can beat Pascal GPUs. I know it's always good to have "free performance", but even with that 10% more performance, AMD can't beat nVIDIA. (Please don't bring up "Vega"; nVIDIA fans will remind us of "big" Pascal.)

The truth is that AMD's performance is still behind nVIDIA's, and that concerns me a bunch.
People look at the game benchmarking charts, and what do they pay attention to? The GPU on top. Average Joe doesn't give a damn about async compute and async shaders; what he cares about is which GPU is the best. And at the moment, that GPU is Pascal.

July 14, 2016 | 09:19 AM - Posted by donut (not verified)

Where's the custom 480's AMD? Hurry up me want.

July 14, 2016 | 04:44 PM - Posted by Goofus Maximus (not verified)

I'm sorry, but I just can't stop myself!

"This is your Doom.

This is your Doom on Vulkan!"

July 14, 2016 | 06:48 PM - Posted by Anonymous (not verified)

Funny how investigative PCPer is when there is an AMD issue like the "power problem": they knew the math and the details to a mind-numbing level. But when the Doom Vulkan patch massively improves performance on AMD cards, they brush over it, almost like they don't believe it. Just a shrug, never even attempting to understand the hows or whys. In the podcast they were just like, "Hmm... I don't know." If you want a good analysis of why AMD saw such a huge performance boost, check out AdoredTV's video on the topic.

July 14, 2016 | 07:46 PM - Posted by Anonymous (not verified)

At the end of the day it doesn't really matter. AMD's fastest card is slower than Nvidia's 1070, which isn't even their fastest card.

Also, for everyone out there: the new DX12 3DMark bench has async compute options, and you can see the performance improvement when you turn it on with Nvidia cards. So quit trying to downplay it in some childish way like "Nvidia doesn't do real async compute"; the results are what matter, and there are some improvements when you turn it on. The thing is, you don't even understand async compute if you say that. AMD uses async shaders while Nvidia uses preemption, but both do concurrent work, which is the point. The difference is that without async compute specifically enabled, AMD does almost nothing concurrently, because they don't write it into the driver.

July 15, 2016 | 05:39 AM - Posted by Mitch 74 (not verified)

For years now, when it comes to new hardware, AMD starts with the mid-range and then scales it, and it usually happens that the chips they sell the most are "sweet spot" deals. I'll name but a few:
HD 4850 - stayed a recommendation for almost 3 years. HD 7770 - play games at 1080p for cheap. R9 285 - the first GCN 1.2 card, the basis for the whole R9 380 segment the next year, fast enough to run most games at high settings with high framerates. Never the best, but never bad, and always a good bargain.
Graphics Core Next cards were reviled by some for not being as powerful as they should have been, considering how much wattage they required and the size of their silicon; at the same time, AMD pushed for APIs with better resource management.

Well, here you go: DX12 and Vulkan are now here, and the cards you bought from AMD these last 4 years all get a boost. Enjoy.

July 15, 2016 | 02:50 PM - Posted by Anonymous Nvidia User (not verified)

Cheers. No need to buy a new card, and AMD sinks further into debt in the meantime. This strategy may be one more nail in AMD's coffin.

July 16, 2016 | 11:44 AM - Posted by leszy (not verified)

There was so much noise last year about the incoming Vulkan. Now that we have the first big game using Vulkan, most of the tech sites are silent about it. Why?
