
NVIDIA Talks DX12, DX11 Efficiency Improvements

Manufacturer: NVIDIA

DX11 could rival Mantle

The big story at GDC last week was Microsoft’s reveal of DirectX 12 and the future of the dominant API for PC gaming.  There was plenty of build-up to the announcement, with Microsoft’s DirectX team posting teasers and starting up a Twitter account for the occasion. I hosted a live blog from the event which included pictures of the slides. It was our most successful event of this type, with thousands of people joining in the conversation. Along with the debates over the similarities of AMD’s Mantle API and the timeline for the DX12 release, there are plenty of stories to be told.

After the initial session, I wanted to set up meetings with both AMD and NVIDIA to discuss what had been shown and get some feedback on the planned direction for the GPU giants’ implementations.  NVIDIA presented us with a very interesting set of data focused both on the future with DX12 and on the present of DirectX 11.


The reason for the topic is easy to decipher – AMD has built up the image of Mantle as the future of PC gaming and, with a full 18 months before Microsoft’s DirectX 12 is released, how developers and gamers respond will have an important impact on the market. NVIDIA doesn’t like to talk about Mantle directly, but it obviously feels the need to address the questions in a roundabout fashion. During our time with NVIDIA’s Tony Tamasi at GDC, the discussion centered as much on OpenGL and DirectX 11 as anything else.

What are APIs and why do you care?

For those that might not really understand what DirectX and OpenGL are, a bit of background first. APIs (application programming interfaces) are responsible for providing an abstraction layer between hardware and software applications.  An API can deliver consistent programming models (though the language can vary) and do so across various hardware vendors’ products and even between hardware generations.  APIs can expose hardware feature sets with a wide range of complexity, while allowing users access to the hardware without necessarily knowing it in great detail.
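As a rough sketch of that abstraction idea (all class and method names below are hypothetical, not real Direct3D or OpenGL calls): the application programs against one stable interface, while each vendor's driver supplies its own translation behind it.

```python
# Toy illustration of API abstraction: the application codes against one
# interface; each hardware vendor implements it behind the scenes.
# All names here are hypothetical stand-ins, not real graphics API calls.

class GraphicsAPI:
    """The stable interface the application sees (think Direct3D/OpenGL)."""
    def draw(self, vertex_count):
        raise NotImplementedError

class VendorADriver(GraphicsAPI):
    def draw(self, vertex_count):
        # Vendor A translates the call into its own GPU command format.
        return f"vendorA-cmd:draw({vertex_count})"

class VendorBDriver(GraphicsAPI):
    def draw(self, vertex_count):
        # Vendor B performs the same translation in its own way.
        return f"vendorB-cmd:draw({vertex_count})"

def render_frame(api: GraphicsAPI):
    # Application code: identical regardless of which GPU is installed.
    return [api.draw(n) for n in (300, 1200)]

# The same application code runs unchanged on either vendor's driver,
# which is why driver-side improvements need no game developer intervention.
commands_a = render_frame(VendorADriver())
commands_b = render_frame(VendorBDriver())
```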

Over the years, APIs have developed and evolved but still retain backwards compatibility.  Companies like NVIDIA and AMD can improve DirectX implementations to increase performance or efficiency without adversely (usually at least) affecting other games or applications.  And because the games use that same API for programming, changes to how NVIDIA/AMD handle the API integration don’t require game developer intervention.

With the release of AMD Mantle, the idea of a “low level” API has been placed in the minds of gamers and developers.  The term “low level” can mean many things, but in general it is associated with an API that is more direct, has a thinner set of abstraction layers, and uses less translation from code to hardware.  The goal is to reduce the amount of overhead (performance hit) that APIs naturally impose for these translations.  With additional performance available, the CPU cycles can be used by the program (game) or the CPU can be idled to improve battery life. In certain cases, GPU throughput can increase where API overhead is impeding the video card's progress.
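A back-of-the-envelope sketch of why that overhead matters (the per-call costs below are made-up illustrative numbers, not measurements): a fixed CPU cost per API call, multiplied by thousands of draw calls per frame, quickly eats into a 60 FPS frame budget.

```python
# Illustrative only: if each draw call costs some fixed CPU time in the
# API/driver, total overhead scales with draw calls per frame.

def api_overhead_ms(draw_calls, overhead_us_per_call):
    """Total per-frame CPU overhead in milliseconds."""
    return draw_calls * overhead_us_per_call / 1000.0

frame_budget_ms = 1000.0 / 60.0          # ~16.7 ms per frame at 60 FPS

# Hypothetical: a "thick" API path at 3 us/call vs a "thin" one at 0.5 us/call.
thick = api_overhead_ms(10_000, 3.0)     # 30.0 ms -- blows the frame budget
thin  = api_overhead_ms(10_000, 0.5)     # 5.0 ms -- leaves CPU time for the game
```

Under these assumed numbers, the thick path alone exceeds the 16.7 ms budget while the thin path leaves most of it free, which is the core pitch behind "low level" APIs.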

Passing additional control to the game developers, away from the API or GPU driver developers, gives those coders additional power and improves the ability for some vendors to differentiate. Interestingly, not all developers want this kind of control as it requires more time, more development work, and small teams that depend on that abstraction to make coding easier will only see limited performance advantages.

This transition to a lower level API is being driven by the widening gap in performance between CPUs and GPUs.  NVIDIA provided the images below.


On the left we see performance scaling in terms of GFLOPS and on the right the metric is memory bandwidth. Clearly the performance of NVIDIA's graphics chips (as with AMD’s) has far outpaced what the best Intel desktop processors have been able to deliver, and that gap means that the industry needs to innovate to find ways to close it.



Even with that huge disparity, there aren't really that many cases which are ripe for performance improvement with CPU efficiency increases.  NVIDIA showed us the graphic above with performance changes when scaling a modern Intel Core i7 processor from 2.5 GHz to 3.3 GHz. The 3DMark and AvP benchmarks don't scale at all, Battlefield 3 scales up to 3% with the GTX Titan, Bioshock Infinite scales across the board up to 5%, and Metro: Last Light is the standout with an odd 10%+ change on the HD 7970 GHz Edition.  
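That reasoning can be made concrete. Going from 2.5 GHz to 3.3 GHz is a 32% clock increase, so a fully CPU-bound game would be expected to scale by roughly that much; the small observed gains suggest these titles are mostly GPU-bound already. A quick sketch, assuming the CPU-bound portion of frame time scales linearly with clock speed:

```python
# If performance were entirely CPU-bound and scaled linearly with clock speed,
# the 2.5 -> 3.3 GHz jump would yield roughly 32% more performance.
clock_gain = 3.3 / 2.5 - 1.0              # ~0.32

def cpu_bound_fraction(observed_gain, clock_gain):
    """Crude estimate of the CPU-bound share of frame time, assuming the
    CPU-bound part scales linearly with clock and the rest not at all."""
    return observed_gain / clock_gain

# Using the observed scaling numbers quoted above:
bf3 = cpu_bound_fraction(0.03, clock_gain)    # Battlefield 3: ~9% CPU-bound
metro = cpu_bound_fraction(0.10, clock_gain)  # Metro: Last Light: ~31% CPU-bound
```

Even the standout case comes out only about a third CPU-bound by this rough model, which is consistent with the claim that few titles are ripe for CPU-side gains.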

NVIDIA doesn’t deny that a lower level API is beneficial or needed for PC gaming. It does, however, think that the methodology of AMD’s Mantle is the wrong way to go.  Fragmenting the market into additional segments with a proprietary API does not maintain the benefits of hardware abstractions or “cross vendor support”. I realize that many readers will see some irony in this statement considering many in the industry would point to CUDA, PhysX, 3D Vision and others as NVIDIA’s own proprietary feature sets. 

NVIDIA’s API Strategy

Obviously NVIDIA is going to support DirectX 12 and continues to support the latest updates to OpenGL.  You’ll find DX12 support on Fermi, Kepler and Maxwell parts (in addition to whatever is coming next) and NVIDIA says they have been working with Microsoft since the beginning on the new API.  The exact timeline of that and what constitutes “working with” is also up for debate, but that is mostly irrelevant for our discussion. 

What NVIDIA did want to focus on with us was the significant improvements that have been made on the efficiency and performance of DirectX 11.  When NVIDIA is questioned as to why it didn’t create its own Mantle-like API if Microsoft was dragging its feet, it points to the vast improvements that are possible, and have already been made, with existing APIs like DX11 and OpenGL. The idea is that rather than spend resources on creating a completely new API that needs to be integrated in a totally unique engine port (see Frostbite, CryEngine, etc.), NVIDIA has instead improved the performance, scaling, and predictability of DirectX 11.


This graphic, provided by NVIDIA of course, shows 9 specific Direct3D 11 functions.  The metric of efficiency in this case is rated by the speed increase between the AMD R9 290X in red and the three different progressive driver versions in green on a GTX 780 Ti.  The Draw, SetIndexBuffer and SetVertexBuffers functions have gone through several hundred percent performance improvements since just the R331 driver stack to an as-yet-unreleased driver due out in the next couple of weeks.

These were obviously hand-selected by NVIDIA, so there may be others that show dramatically worse results, but it is clear that NVIDIA has been working to improve the efficiency of DX11. NVIDIA claims that these fixes are not game specific and will improve performance and efficiency for a lot of GeForce users. Even if that is the case, we will only really see these improvements surface in titles that have addressable CPU limits or very low end hardware, similar to how Mantle works today.
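NVIDIA didn't detail what these driver changes actually do, but one plausible, purely illustrative example of a game-agnostic driver-side optimization is filtering redundant state-setting calls before they reach the hardware:

```python
# Hypothetical sketch of one class of driver-side optimization: dropping
# redundant state-setting calls. If the game binds the same vertex buffer
# repeatedly, the driver can make the repeat calls effectively free.

class StateCache:
    def __init__(self):
        self.bound_vertex_buffer = None
        self.work_done = 0            # counts calls that reached "hardware"

    def set_vertex_buffer(self, buffer_id):
        if buffer_id == self.bound_vertex_buffer:
            return                    # redundant call: filtered, near-zero cost
        self.bound_vertex_buffer = buffer_id
        self.work_done += 1           # only real state changes cost anything

cache = StateCache()
calls = ["vb0", "vb0", "vb1", "vb1", "vb1", "vb0"]
for c in calls:
    cache.set_vertex_buffer(c)
# 6 API calls arrived, but only 3 actually changed state.
```

Because the filtering lives entirely inside the driver, behind the unchanged DX11 API, games pick up the benefit without being modified, matching the "not game specific" claim.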


NVIDIA shows this another way by also including AMD Mantle.  Using the StarSwarm demo, built specifically for Mantle evaluation, NVIDIA’s GTX 780 Ti with progressive driver releases sees a significant shift in relation to AMD.  Let’s focus just on D3D11 results – the first AMD R9 290X score and then the successive NVIDIA results.  Out the gate, the GTX 780 Ti is faster than the 290X even using the R331 driver. If you move forward to the R334 and the unreleased driver you see improvements of 57% pushing NVIDIA’s card much higher than the R9 290X using DX11.

If you include Mantle in the picture, it improves performance on the R9 290X by 87% - a HUGE amount! That result was able to push the StarSwarm performance past that of the GTX 780 Ti with the R331 and R334 drivers but isn’t enough to stay in front of the upcoming release.
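The relative standings can be reconstructed with simple arithmetic on the quoted percentages (scores are normalized, and the 780 Ti's DX11 baseline below is an assumed value, since the text only says it starts out ahead of the 290X):

```python
# Reconstructing the StarSwarm standings from the quoted percentages.
# Units are arbitrary normalized scores, not frame rates.
r9_290x_d3d11 = 1.00                     # baseline: R9 290X under DX11
r9_290x_mantle = r9_290x_d3d11 * 1.87    # +87% from Mantle

gtx_780ti_r331 = 1.20                    # ASSUMED: 780 Ti ahead out of the gate
gtx_780ti_new = gtx_780ti_r331 * 1.57    # +57% from the upcoming driver

# Mantle lifts the 290X past the 780 Ti on the older driver...
ahead_of_r331 = r9_290x_mantle > gtx_780ti_r331
# ...but not past the upcoming driver release, matching the description above.
behind_new = r9_290x_mantle < gtx_780ti_new
```

With any assumed baseline above about 1.19, the arithmetic reproduces the ordering described: Mantle leapfrogs the old drivers but falls short of the unreleased one.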

Thief, the latest Mantle-based game release, shows a similar story; an advantage for AMD (using driver version 14.2) over the GTX 780 Ti with R331 and R334, but NVIDIA’s card taking the lead (albeit by a small percentage) with the upcoming driver.

If you followed the panels at GDC at all, you might have seen one about OpenGL speed improvements as well.  This talk was hosted by NVIDIA, AMD and Intel, and all involved openly touted the extension-based changes to the API that have increased efficiency in a similar way to what NVIDIA has done with DX11. Even though OpenGL often gets a bad reputation for being outdated and bulky, the changes have added support for bindless textures, texture arrays, shader storage buffer objects, and commonly discussed DirectX features like tessellation, compute shaders, etc. 


Add to that the extreme portability of OpenGL across mobile devices, Windows, Linux, Mac, and even SteamOS, and NVIDIA says its commitment to the open-standard API is stronger than ever.

The Effect of DirectX 12

As we discussed in our live blog, the benefits of the upcoming DX12 implementation will come in two distinct parts: performance improvements for existing hardware and feature additions for upcoming hardware.  Microsoft isn’t talking much about the new features that it will offer and is instead focused on the efficiency improvements. These include reductions in submission overhead, improved scalability on multi-core systems, and the ability to mimic a console-style execution environment. All of this gives more power to the developer to handle and manipulate the hardware directly.


NVIDIA claims that work on DX12 with Microsoft “began more than four years ago with discussions about reducing resource overhead. For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC.” This would indicate that while general ideas about what would be in the next version of DX existed for some time, the specific work to build and prepare it started last spring.

NVIDIA is currently the only GPU vendor to have a DX12 capable driver in the hands of developers and the demo that Microsoft showed at GDC was running on a GeForce GTX TITAN BLACK card. (UPDATE: I was told that actually Intel has a DX12 capable driver available as well leaving AMD as the only major vendor without.)

Will NVIDIA feel heat from Mantle?

Though it doesn’t want to admit it, NVIDIA is clearly feeling some pressure from gamers and media due to AMD’s homemade Mantle API.  The company’s stance is to wait for DirectX 12 to be released and level the playing field with an industry standard rather than proceed down the pathway of another custom option.  In the next 18 months, though, there will be quite a few games released with Mantle support, using the Frostbite engine or CryEngine. How well those games are built, and how much of an advantage the Mantle code path offers, will determine if gamers respond positively to Radeon cards.  NVIDIA, on the other hand, will be focusing its efforts on improving the efficiency and performance of DirectX 11 in its own driver stack in an attempt to maximize CPU efficiency (and thus overall performance) to rival Mantle. 

During a handful of conversations with NVIDIA on DirectX and Mantle, there was a tone from some that stopped short of anger but hinted at annoyance. It’s possible, judging by NVIDIA’s performance improvements in DX11 efficiency shown here, that AMD could have accomplished the same thing without pushing a new API ahead of DirectX 12.  Questions about the division of internal resources on the AMD software team between Mantle and DirectX development are often murmured, as are the motives of the developers continuing to adopt Mantle today. Finding the answers to such questions is a fruitless endeavor, though, and to speculate seems useless – for now.

AMD has done well with Mantle.  Whether the company intended for the new API to become a new standard, or merely to force Microsoft's hand with DirectX 12, it is thanks to AMD that we are even talking about efficiency with such passion. Obviously AMD hopes it can get some financial benefit from the time and money spent on the project, with improved market share and better mindshare with gamers on the PC. The number and quality of games that are released in 2014 (and some of 2015) will be the determining factor for that.

Over the next year and a half, NVIDIA will need to prove its case that DirectX 11 can be just as efficient as what AMD has done with Mantle – or, at the very least, that the performance deltas between the two options are small enough not to base purchasing decisions on.  I do believe that upon the release of DX12, the playing field will level once again and development on Mantle will come to a close; that is, at least if Microsoft keeps its promises.


March 22, 2014 | 05:27 PM - Posted by Anonymous (not verified)

Appropriate title: NV talks, while AMD does.

March 22, 2014 | 06:05 PM - Posted by Anonymous (not verified)

AMD "does" while it's delayed, available only on BF4 and Thief with negligible performance increase.

There's a difference between "doing" something and whether or not it should be done. They were late, pushed it out prematurely, and it's not worthwhile.

Read: Physx was also a flop.

March 22, 2014 | 08:05 PM - Posted by elajt_1 (not verified)

Negligible performance? I wouldn't say that, it all comes down to what kind of hardware you've got. If you are on the high end side then in some cases yes. But not if you're running CF or higher (e.g. more cards).

March 22, 2014 | 09:44 PM - Posted by Anonymous (not verified)

Talking about a 2 month delay of an entire API reeks of nothing but desperation from either a fanboy or short seller. And not only are AMD shipping Mantle in 3 games and a next generation demo/engine, with more games around the corner, developers are lining up by the dozens for access. Crytek also endorsed the product 2 days ago, knowing DX12 was coming (i haven't noticed anyone try to explain that).
It's funny how so many posts on message boards and websites look and read exactly like the shit spewed on various AMD finance boards by those eager to see AMD's bankruptcy. Most couldn't care less about competition, gamers, enthusiasts, or anything but themselves profiting. A good way to aid that agenda is to organize and spread lies and FUD across the internet, boosting negative sentiment and affecting sales.

Mantle is a huge performance boost, and best of all, increases minimum frames dramatically. All while reducing frame latency dramatically. This last point has curiously, but expectedly, vanished from the narratives of the usual shill 'tech press'. The same ones being paid to promote nvidia's propaganda over the last year, affecting sales of their competitors and making 'frame pacing' prominent. It's corruption at its finest.

March 23, 2014 | 08:17 AM - Posted by Nilbog

A delay is a delay. Period. If they managed themselves better, and didn't rush the announcements with a hard date, nobody would have cared.

Nobody has to talk about DX12 support because nearly the entire PC gaming market is already running Windows. Very few people will upgrade to Win 8 just for that. It either comes to Win 7 and a large chunk of the market can use it, or not.

Nobody wants AMD's bankruptcy, they just expect more from them. Meaning people hold them to some kind of quality standard. Which isn't a crazy concept. Or a bad thing.

Mantle is only a "huge" performance boost if you have an APU. People with a CPU that isn't a joke will see less than a 5% improvement.

Frame Latency is exactly what Frame Pacing is about. You just contradicted yourself.

That same "propaganda" also clearly shows that Nvidia has serious Frame Pacing issues with 3 and 4 cards.

Just because you don't have a Crossfire setup to find out, or just don't notice the stutter, does not mean it does not exist. Or that it is propaganda.

I'm a Crossfire user and can assure it is very real. I have resorted to limiting my FPS to keep steady, smooth feeling frametimes.

Put down the Kool-Aid

March 23, 2014 | 01:40 PM - Posted by Anonymous (not verified)

Yeah, BS to absolutely everything you said.

First, nobody wants AMD bankruptcy? That's the biggest pile of BS i've seen in a long time. There are thousands upon thousands of people that want an AMD bankruptcy, there are over 100,000,000 shares shorted on AMD and forums are flooded with those people spreading FUD, viral marketing, and misinformation about AMD. A bankrupt AMD makes them rich, imagine if an organization was involved with an orchestrated effort to spread negativity and affect sales. It's clear, i've seen several posts on many forums that match finance board shit, verbatim. One of the biggest intel/nv shills, Ashraf on Seeking Alpha posts under the name Intel17 on Anandtech and is an administrator on Xtremesystems, for example. I've seen hundreds more posts that articulate the exact sentiment and lies spread on finance boards, and it affects the narrative and sentiment of consumers reading those posts. It's absolute FUD and slander. Don't tell me it ain't happening, there is no doubt about it. I'm starting to think you probably know that though...

A delay of 2 months is trivial for a brand new, pioneering, closer to the metal API. Such a minor thing only serves fanboys, viral marketers and people with an agenda the opportunity to go forth and drive a narrative about how inept AMD must be because, well, there was a 2 month delay! Bottom line is it's on the market now, and end users are benefiting and enjoying yet another pioneering innovation from AMD.

So because Mantle benefits 96% of the CPU market the most, you somehow attempt to turn that into a negative. Meanwhile, that means that more people can enjoy high quality gaming and gaming reaches a much wider audience. What twisted logic and a blatant fallacy that is. Sounds like another attempt to affect mindset and drive a narrative..

BTW, who bought you those cards? Since they have a history of doing so, are you another one that NVidia bought cards for to find any perceivable negative possible? I see lots of posts on many forums where known AMD haters claim to have AMD cards, only to write deriding 'reviews' and comments about some obscure problem that might crop up. Affecting the narrative and sentiment.

The same propaganda about frame latency clearly does NOT affect reviewers scorn of nv in any way close to the way they attacked AMD. Where's the headlines? Mantle is far smoother and a much better experience for end users. Where are those eye catching headlines?

Kool aid..... LOL. Yeah, that seems to be another strategy; to trivialize and marginalize anyone that might indicate what is really happening in the industry. That's all just the tip of the iceberg. Haven't even gotten into the politics, back room deals, anti competition, bribes, and REAL money exchanging hands, etc.... ;)

If you are not one of those people, you'd be in the minority on todays message boards. They are overrun with viral marketers and investors driving negative sentiment, spreading FUD and outright lies to promote their agenda. There are a huge number of 'people' that would benefit from an AMD bankruptcy. NV and intel have a strong history of unfair competition, bribery and anti consumer behavior. Why any real enthusiast/open market/competition supporting person would support those kinds of action is beyond me.

I'll share more soon. :)

March 23, 2014 | 06:46 PM - Posted by Nilbog

AMD does not cover 96% of the CPU market. Do you have a source for that number?

If you must know. My Dad bought me the second card for my birthday. You are more than welcome to think NVIDIA bought it for me though.

The rest of your post is just filled with delusions and paranoia.

You are ranting about fanboys when you need to look into the mirror.

March 23, 2014 | 10:56 PM - Posted by Anonymous (not verified)

You must not be aware that Mantle also increases performance on Intel CPUs. Again, Mantle is good for >90% of the CPU market. And for the rest of the minuscule CPU market, what if those users have multi GPU setups? That's right, CPU limited. Of course, you won't find much talk of that in reviews. They like to focus on setting up a system to purposely hinder any effect Mantle has. That's a complete disservice to consumers.

Paranoia and delusions. lol That's what they said about people raising questions about the Vietnam War and Gulf of Tonkin incident and many, many other events throughout history...

March 24, 2014 | 08:27 AM - Posted by Nilbog

Now its 90%? Is that number going up or down next time we chat?

The increase in performance is less than 5%. Mantle is doing absolutely nothing for Intel CPUs in real world performance. Perhaps it would be more of a difference with an i5 or i3, but i don't recall seeing any Mantle benchmarks with those CPUs.

You are also conveniently leaving out that you need an AMD GPU to even use Mantle right now. I know they want it to be open in the future, but as of today its only supported by one vendor. So according to the Steam Hardware Survey Feb 2014 (http://store.steampowered.com/hwsurvey/), 31.03% of the market can even use Mantle.

Reviewers are setting up a standard high end gaming PC. Just because Intels CPUs are fast enough for good performance in both APIs does not mean people are purposefully hindering anything. It means Intel sells some really fast CPUs.

Yes, if you already have an AMD CPU and happen to have GCN based cards. This is great for you, i agree there. But this is not going to help sell AMD CPUs because they are still slower than Intel in nearly everything else.

If you are running an i7 with two cards you will not be CPU limited. If you are running 3 or 4 cards you should be pushing three or more monitors, in which case you are GPU limited. So i fail to see your point.

You are now comparing wars where actual people died. To the PC hardware market?? Really???? Perhaps you should compare Intel to the Spanish Inquisition next.

You are an idiot.

Sort it out mate.

March 24, 2014 | 11:22 AM - Posted by Anonymous (not verified)

lol @ the shill, typical response from the goon squad. Try to get your propaganda organized, you're all over the place.

First, yep Mantle benefits >90% (let's say 96%, shall we? ;)) since 96% of CPUs sold are under $200, which includes FX/I5 and under. If you can't understand that i'm not going to resort to drawing you a picture, so you're on your own. Maybe ask your Mom for some help on that. The other few processors sold also get a boost, as many of the high end CPU's sold are used in multi GPU systems where the CPU again becomes the bottleneck, thus benefiting from Mantle. You can refer to your Mom again on this if needed. But alas, of course benchmarks on these lower systems are limited. That would show the real potential of Mantle, and reviewers have incentives not to. Gutter journalism, supported by gutter viral marketing. So reviewers are doing a huge disservice to almost the entire market. Pathetic really.

Awww how noble of you, standing up for the fallen War heroes. And yes they are heroes, but when you try to manipulate a simple comparison of how history has shown time and time again that these so called conspiracies are indeed >90% (96%?) of the time proven true, in a way that dishonors fallen Vets, then you cross the line. You are one sick A-hole.

March 24, 2014 | 12:38 PM - Posted by Nilbog

I'm starting to wonder if AMD is paying you to post these. You are clearly just a shill for AMD.

Such blatant, blind fanboyism can only be explained by you being paid. Right?

You are still conveniently leaving out that you need an AMD GCN based card to even use Mantle. Which is definitely WAY less than the generous 31.03% i quoted from Steam. I'm not going to resort to drawing you a picture.

Yeah, Intel and NVIDIA are paying me to post this here. They bought me my AMD GPUs. They even built my computer for me. Intel sent an engineer to my house to hand deliver the CPU and help build the PC. We had pizza after. Then NVIDIA gave me a 780Ti to apologize for the Crossfire stutter i endured for them.

Not sure how my Mother came into the conversation. Though it's still cute to see you resorting that low (You are assuming i live with my parents?). That really helped solidify your point. AMD is sure getting their moneys worth with you on the payroll.

Everything is >90% (96%?) with you. Must be nice.

Let's not forget who compared a marketplace to the Vietnam War. It sure wasn't this asshole. You are not doing AMD any favors comparing those. Just keep digging your hole deeper. You are truly one disturbed individual. You should seek help.

March 24, 2014 | 12:55 PM - Posted by Anonymous (not verified)

How can i be clearly paid if you are only wondering about it??? You seem confused.

And so now you are conveniently moving the goal posts from your claim that Mantle only benefits lower end AMD APU's to something else, after you were proven to have no clue what you're talking about. So does >90% (96%) of the CPU market benefit from Mantle or not? I'll throw you a bone and leave out the highest end CPU's, even though they also get a huge benefit in multi GPU setups. I'll also help you ignore the huge benefit in smoothness over DX, by not mentioning it.

Sarcasm won't hide the fact that NV is going around providing hardware to shills, to dramatise and enhance any possible issue they can come up with.

Let's not twist my original reference to the Vietnam War and how the Gulf of Tonkin incident was portrayed as a conspiracy theory, when in fact it was 100% valid, as are many other so called conspiracy theories. Your attempt to manipulate that into something derogatory on my part is comical.

March 24, 2014 | 04:33 PM - Posted by Nilbog

I am now positive you are being paid by AMD. You are bitching about shills when you are one yourself.

Here is the exact quote you are referring to.

"Mantle is only a "huge" performance boost if you have an APU."

I never said it "only benefits lower end APUs" BTW there is no such thing as a high end APU either. They don't even compete with an i5.

I will reiterate, if you have a CPU that doesn't suck ass, AND you have a GCN based card. You will see a very small improvement over DX11. I'm sure it will keep getting better, but last time i checked it's not much of a difference.

No, Mantle does not benefit even 50% of the CPU market because you need a 7000 series or higher GPU to run Mantle.

You are assuming everyone in the market is running GCN GPUs.

Please, lets go back to smoothness. Which is referring to Frametimes. I thought that was FUD created by NVIDIA shills? Now its important, but only within AMDs API. I see.

Even if that is true. Honestly who gives a shit if NVIDIA is giving people AMD GPUs. They had to buy it. People are still buying AMD GPUs. AMD is still shipping inside every modern console. AMD is still making powerful GPUs that actually compete in the market.

Sarcasm comes from you passing judgement. It's not like you would believe me if i tried to defend that i am not paid by them anyway. (Was it the pizza that gave it away? ..It was the pizza...) Just because i think AMD CPUs are slow, and i notice the Crossfire stutter, does not mean i am a shill, being paid by someone, or that i'm trying to put them out of business.

Believe it or not, i do like AMD. I just expect more from them, honestly. The CPUs are disappointing, but the GPUs are fantastic. I also think Mantle is great. I just feel that you are trying to over sell it, because it's just not done yet.

I understand the point you wanted to make with the reference, but that is just not the kind of thing you compare petty crap like this to. You could have easily chosen something else... perhaps from this year... that used to be a very famous conspiracy theory.. That maybe doesn't involve people being killed. Just putting that out there.

March 24, 2014 | 06:04 PM - Posted by Anonymous (not verified)

And your quote is still 100% wrong no matter how many times you post it. By your own admission, Mantle is NOT only a huge performance boost if you have an APU. And my point still stands that >90% (96%?) of the CPUs available have a huge benefit because that's how large the under $200 processor market share is. And it covers everything from APUs to FXs, including i3 and i5 and lower. I'm now wondering if you actually believe that no matter what Intel processor we're talking about, the performance is so stratospheric that it renders babies obsolete (i hope that reference to babies doesn't offend your sensibilities).

Yes, let's go back to smoothness. So am i to understand that since Mantle provides a much smoother lower latency experience than those provided by NV and DX that it is no longer an issue? Let's get it off the radar so to speak? How about we get some articles attacking NV's much rougher and higher latent experience. Seems that would be fair, no? If we can get about a years worth of those articles, then i'd drop it.

You can blame NV's and intel's viral marketing for the backlash against them. If their products can't stand for themselves without resorting to illegal conduct and gutter style marketing, then maybe they oughta spend a little more time in their own labs instead of getting their goon squad to peddle their propaganda.

BTW, if you have a problem with the realities of war, take it up with the scum sucking cretins, politicians, and money makers that manufacture these wars with pathetic excuses and propaganda for their own personal gains. It's them that don't give a rat's ass about the heroes lost in war, not me. I hope you aren't naive enough to believe ANY of the crap spewed by the elite sponsored main stream media. This 'how about something more recent' attitude is exactly what the powers that be rely on the public to accept. Out of sight, out of mind. It's already happened so it's old news, doesn't matter any more. You want something more recent? How about Iraq, or Building 7, or this manufactured plane disappearance conveniently and abruptly supplanting and extinguishing any mention of Crimea on CNN and Fox. It's like a switch went off and this mysterious plane disappearing has blanketed the news coverage for 2 solid weeks. You'd have to be a buffoon to swallow any of that BS. History speaks for itself, and if it walks like a duck, and continues to waddle like a duck, well it's a duck.

Same thing goes for intel and their corruption, bribing Dell, forcing motherboard manufacturers to 'white box' any AMD motherboard they sold, getting on the BoD and sabotaging One Laptop Per Child because it would hurt their business, benchmark manipulation, benchmark corruption through ICC, BAPCO, Cinebench, the list goes on and on and on. Otellini was on the pres' technical advisory board, which also has all kinds of implications with the shenanigans of the NSA coming to light. I'd throw my computers, phones and internet out the window if all that was available was NV or intel. They don't deserve and will never get a cent of my hard earned money regardless of any higher numbers in certain benchmarks, but if you feel it's OK to support that kind of culture fill your boots. Don't go around proclaiming AMD's hardware is crap, or junk. They're doing incredibly well with the resources available and politically sponsored sabotage from their competitors to knock them out of business. Phuck intel, phuck nvidia. And phuck reviewers for propagating their agenda.

March 24, 2014 | 11:52 PM - Posted by Anonymous (not verified)

Poe's Law.

March 25, 2014 | 10:46 AM - Posted by Anonymous (not verified)

AMD are as much to blame as all the websites (PCPer and TechReport) that were promoting CrossFire/SLI and especially frame latency. AMD needs to bring out the graphs showing Mantle's strong points, e.g. CrossFire and frame latency vs. Nvidia, but they have not done that. And neither have these so-called "champion websites that invented frame latency benchmarks" with help from Nvidia. We don't see those standards being upheld any more. Why?

March 24, 2014 | 11:36 AM - Posted by Anonymous (not verified)

1- DX12 will not come to Windows 7, and probably not Windows 8, for the simple reason that DX is the only selling point Windows has; they have to link it to an OS, otherwise Windows wouldn't sell. If it weren't for DX11, people would still be using XP.
Also, DX12 reaching the majority of the player base is false. For that to happen, people need to migrate to the new OS, and that, like every time, takes over 2-3 years. So despite what they said at GDC, they are not talking about the player base at the launch of DX12 but a projection a few years later. And you need to understand that if AMD has 30% of the player base, DX12 in 2 years wouldn't even have 5%, and devs will still not develop for it even though it's available; they will wait, just like they did with DX11. People fail to see that if they stick with DX12 they get nothing for 2 years, then at best about 10 games in the following 2 years after its release. That's reality.

2- You live in denial if you still need people to show you the merits of Mantle after all the havoc it created these last couple of months. When someone using an FX-8350 + 290X gets a 49% boost, matching a config with a $1000 Intel CPU, that is just amazing.
You need to understand how Mantle works: it doesn't just help with low/mid-range CPU bottlenecks, but the GPU mustn't be the bottleneck or the perf boost won't show. So even with the highest-end CPU, if you give it more GPU resources, say CrossFire, or lower the settings that make the GPU the bottleneck, the perf boost will show.
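The bottleneck point in this comment can be sketched with a toy model. Assuming (purely for illustration) that CPU submission work and GPU rendering overlap each frame, frame time is set by whichever side is slower, so cutting API/driver cost on the CPU only raises FPS when the CPU is the limiting side. All numbers below are made up:

```python
# Toy bottleneck model (illustrative numbers only): frame time is the
# slower of the CPU's submission work and the GPU's rendering work.

def fps(cpu_ms, gpu_ms):
    """Frames per second when CPU and GPU work overlap each frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# CPU-bound case: heavy API/driver overhead, GPU has headroom.
# Halving CPU cost (a Mantle-style win) doubles the frame rate.
print(fps(cpu_ms=25.0, gpu_ms=12.0))  # 40.0
print(fps(cpu_ms=12.5, gpu_ms=12.0))  # 80.0

# GPU-bound case: the same CPU saving changes nothing, which is why
# adding GPU power (e.g. CrossFire) or lowering settings exposes the gain.
print(fps(cpu_ms=10.0, gpu_ms=20.0))  # 50.0
print(fps(cpu_ms=5.0,  gpu_ms=20.0))  # 50.0
```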

3- CrossFire doesn't have stutter anymore; they added frame pacing a couple of months ago. I also have 280X CrossFire and I haven't had any problems since Catalyst 13.12.

January 22, 2015 | 08:00 PM - Posted by Anonymous (not verified)

What were you saying again about DX12 not coming to Win7 or Win8? Just for clarification.

March 23, 2014 | 01:35 PM - Posted by Anonymous (not verified)

"Mantle is a huge performance boost, and best of all, increases minimum frames dramatically. All while reducing frame latency dramatically. This last point has curiously, but expectedly, vanished from the narratives of the usual shill 'tech press'. The same ones being paid to promote nvidia's propaganda over the last year, affecting sales of their competitors and making 'frame pacing' prominent. It's corruption at its finest"

I couldn't have said it better myself. I would love to get PCPer's and TechReport's thoughts on that. I mean, what happened to all the frame pacing articles on AMD? Hypocrisy at its best.

March 23, 2014 | 04:23 PM - Posted by arbiter

Huge performance boost? Are you still reading AMD's promo from 8 months ago? I wouldn't call 10% a huge boost. My GTX 780 is only a few fps behind a 290X using Mantle. The 290X went from around 58 fps to 73 fps; my 780 averaged 69 fps in the game using DX11. The 290X shouldn't have had any issue beating my 780 without Mantle. (Yes, same settings.)

July 10, 2014 | 06:30 AM - Posted by TheDudeMan (not verified)

I can't believe how crazy this is getting. I was checking out some GPU reviews etc. and found this fight between AMD and Nvidia.

First off, I just want to say that there are very few games that use Mantle right now. But I'm sure there will be more to come, as it does help without a doubt. You can check out any review and see that it does, regardless of what CPU you have.

Then you have to remember that in a lot of reviews, before Mantle was even released or after, the reviewer would not use Mantle, just to keep things more on a solid-power-vs-solid-power level. And wouldn't you know it, the 290 was beating the 780 in many games, or they would be within +/- 2.5 fps of each other, close enough that it would not even matter in the real world.

I know a lot of people get top-end GPUs to play games, but it seems that benchmarking has become a game of its own. Yes, a benchmark can give you an idea of how well a GPU would run in the real world, but I would have to say more so in single-player games than multiplayer games.

I have been looking into video cards for my new build for months, checking out benchmarks, looking at reviews and even asking some buddies. But it seems the worst part was asking friends, as it would start a war in Ventrilo. The 290 users would say go for the 290 and the 780 users would say go for the 780. It got pretty crazy for a few days.

Anyway, after more reviews and AMD vs. NV, I ended up going for the Gigabyte 290 OC Edition, though a month before that I was dead set on the 780. The thing that changed my mind was the fact that I have used both AMD and Nvidia in the past. Though it was never so dramatic, since I would just pick the better GPU that was released at the time. So I did the same thing this time.

I must say that it is a big waste of time trying to go with a trend or buy a GPU based on how it will work 4 years from now. I did think about that, but I didn't go crazy on it. I bought what worked best now, and the fact that it is still being optimized is even better. Mantle is nice, but I didn't buy the 290 for it, though I'm sure it will help more and more as more games are optimized for Mantle.

So stop the fighting and try playing a game with your friends. It is much more fun than playing the my-video-card-is-better-than-your-video-card game.

Anyway, I'm going to go play some Sniper Elite 3 for a while.

March 24, 2014 | 09:53 PM - Posted by Anonymous (not verified)

Moron, frame pacing tests with FCAT or FRAPS don't run with Mantle (because, "you know", the FCAT overlay and FRAPS's hooks into the stages of DirectX don't work with apps that use a DIFFERENT API), soooOOO it's IMPOSSIBLE to test frame pacing with an independent system on any of Mantle's games.

Hypocrisy? "Idiocy" is the word. And it doesn't apply to PCPer or TR. ¬¬

March 25, 2014 | 10:52 AM - Posted by Anonymous (not verified)

How about Mantle CrossFire vs. DX11 SLI?

Maybe it's just ignorance in your case.

March 25, 2014 | 02:47 PM - Posted by Anonymous (not verified)

It's that Ryan Shrout, AnandTech, and xtremehardware take relatively large payoffs from both Intel and Nvidia, because both companies have around 30,000,000 shorted shares of AMD. To say anything else is a lie, because it's quite obvious they do article editing or ignore a bit of tech news completely if it could look bad for either company in any way.

Come on dude, it's the same thing with other tech stuff: viral marketing, propaganda, and people that take large bribes while bending over to create a new industry standard. It is amazing that AMD has lasted this long.

March 25, 2014 | 03:06 PM - Posted by Jeremy Hellstrom

Man, I wish I was as rich as some of our readers seem to think.

March 22, 2014 | 06:33 PM - Posted by Anonymous (not verified)

>Appropriate title: NV talks, while AMD does.
Very strange conclusion. I see NV iteratively making its drivers better, while AMD tries to push the task onto developers with Mantle, though there is no profit for a developer in constantly maintaining and improving the performance of AMD's GPUs, so prepare for possible bugs, perf drops, and other surprises with future hardware.

Also I don't see any parallels between proprietary Mantle and CUDA.

CUDA was the first C-like GPGPU API; there weren't any industry-standard APIs when it was released. NVIDIA founded the GPGPU accelerator market from the ground up with CUDA and wrote tons of libraries for it, and CUDA is the only C++-like API on the market right now. (I don't see any reason for them to switch to OpenCL, since CUDA was first on this market and is dominant right now, not to mention that it's more mature, more feature-rich, simpler, and has much better software infrastructure.)

Mantle, on the other hand, was introduced when there already were industry-standard APIs like DX and GL. We have had bindless extensions for GL for years, which can bring similar or better draw-call numbers compared to Mantle, and we have DX, which can do deferred contexts. But AMD always had a shitty driver team, so the natural way for them was to capitalise on their console wins by moving software complexity from the driver team to big developers (low level is an advantage and a differentiator only for big developers and their engines). DICE wrote Mantle for them, and now developers are busy doing optimisations for a minority of the PC market while ~85% of the market loses out. Thanks for that, AMD.
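The draw-call claim above is really about per-call CPU overhead. A back-of-the-envelope sketch (with hypothetical per-call costs, not measured driver numbers) shows why cutting that overhead, whether via Mantle, DX deferred contexts, or GL bindless/multi-draw paths, raises the draw-call budget:

```python
# How many draw calls fit in one frame if the CPU did nothing but submit
# them? Per-call costs below are hypothetical, for illustration only.

FRAME_BUDGET_MS = 1000.0 / 60.0  # ~16.7 ms per frame at 60 fps

def max_draw_calls(per_call_us, budget_ms=FRAME_BUDGET_MS):
    """Upper bound on draw calls per frame at a given per-call CPU cost."""
    return int(budget_ms * 1000.0 / per_call_us)

# A validation-heavy classic submission path vs. a thin batched/bindless-style path:
print(max_draw_calls(per_call_us=40.0))  # 416
print(max_draw_calls(per_call_us=2.0))   # 8333
```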

March 23, 2014 | 04:19 PM - Posted by arbiter

What about what AMD did for years while people complained of micro-stutter when using CF? It took Nvidia's release of FCAT to show AMD where they went wrong.

March 24, 2014 | 11:06 PM - Posted by Anonymous (not verified)

LOL, like the 2 yrs to fix frame pacing?

March 22, 2014 | 05:51 PM - Posted by amdBumLover (not verified)

Any specifics on DX12? Also, for Nvidia's super DX11 optimizations, are they application-specific or global?

March 22, 2014 | 06:19 PM - Posted by Ryan Shrout

The claim is that these are global.  But we'll have to see.

March 22, 2014 | 06:05 PM - Posted by Moozoo (not verified)

How do DirectX 12 and OpenGL scale with the number of cores?
Mantle was designed from scratch to do this.

March 22, 2014 | 06:20 PM - Posted by Ryan Shrout

DirectX 12 is built very much like Mantle, thus it will do that.  OpenGL is a very open, very extensible API. Check out this link: http://www.pcper.com/news/General-Tech/GDC-14-NVIDIA-AMD-and-Intel-Discuss-OpenGL-Speed-ups

March 24, 2014 | 08:16 AM - Posted by Anonymous (not verified)

It might make more sense to say that Mantle is built very much like DX12. It's looking more and more like AMD was in the know about the DX12 plans and simply jumped the gun to capture its own gains while Microsoft was finishing up its solution for bringing DirectX "closer to the metal". It's a moot point, true, but I think it's a point nonetheless.

March 22, 2014 | 06:22 PM - Posted by Anonymous (not verified)

After seeing various benchmarks based on preliminary Mantle API ports, I cannot dismiss the fact that developers can get more FPS by tweaking lower-level API access. But if I am making a purchase decision for a GPU, am I going to go with a product that can only provide the advertised performance if the developer has ported to the API and done a good job of utilizing it? It seems to me that the biggest performance jumps are simply making games playable on systems that were not even considered capable prior to the API release. The mid- to upper-end cards show improvement, but who really cares once you get over 60 fps? Enthusiasts with discrete GPUs are not the target here as I see it.

I have used both AMD and NVIDIA, and quite frankly NVIDIA has them beaten in overall efficiency to such a degree that I will go NVIDIA on a discrete card regardless of Mantle. Intel, with its integrated mainstream graphics in its Core lineup, is more than likely the real target here, as AMD has not been able to really compete in the CPU market on much more than price. Intel integrated graphics are adequate, but AMD may be able to make a compelling argument if they can continue to re-provision old designs with new performance boosts for marketing purposes. This is key to the integrated graphics in their mainstream chip lineup if they can market a chip that previously would never be considered a "gaming" class chip as such, with nothing more than a part-number change and Mantle API support. In my opinion, I would rather have a GPU that doesn't need a special API to perform but performs well under all APIs.

March 22, 2014 | 09:51 PM - Posted by Anonymous (not verified)

Pardon my French, but that's bullshit. Saying that people don't buy according to API is a delusion. How many needed to upgrade their GPU for DirectX 11 a few years back, and how many games were actually DirectX 11 compatible the following year, or even a couple of years after the DirectX 11 release? About a dozen games. Even now the majority of games released are DirectX 9, not 11. Why? Because a lot of people are still on Windows XP.
Here is how I see things: if I buy an AMD graphics card now, I have a couple of pros. I get fairly equal performance for about $100-150 less; I have about 20 Mantle-compatible games coming out in the next year, where I would get an average 30% performance boost; and the card will be DirectX 12 compatible, even though DX12-compatible games wouldn't start until 2017 or later.
What are the pros of not getting an AMD card? Paying $100-150 for a brand?
To me, through logic and reason, the value of an AMD card is a lot higher than Nvidia's at the moment.

March 22, 2014 | 06:27 PM - Posted by Anonymous (not verified)

My AMD card's screen goes tan with vertical lines... the audio keeps playing.
Any suggestions?

March 22, 2014 | 06:28 PM - Posted by spigzone (not verified)

"many readers will see some irony in this statement"

Or rank hypocrisy.

March 22, 2014 | 06:53 PM - Posted by spigzone (not verified)

With Nvidia definitely not getting on board, AMD will probably release Mantle to the public sooner rather than later, probably with Mantle 1.0... June or so? I expect there will be quite a number of developers wanting on board with whatever body is set up to govern it, developers that are eager to promote an OPEN, highly efficient, highly extensible API free from the clutches of Microsoft, a sentiment I wholeheartedly agree with.

I raise my glass to smooth sailing for Mantle and rough seas and rocky shoals for DX12.

March 23, 2014 | 10:47 AM - Posted by Ryan Shrout

Sorry, that just won't be how it turns out.  DirectX is cemented in place with Windows and Xbox - Mantle might supplement but will not replace.

March 23, 2014 | 11:11 AM - Posted by But PS4? (not verified)

I thought that Mantle was supposed to provide developers with an API close to the code already used on consoles. Since both new-gen consoles are using AMD hardware, how does DX12 stop Mantle from being the easiest path? DX12 is not going to appear on the PS4, so I don't get why everyone is saying AMD/Mantle is not in a strong position.

March 23, 2014 | 01:47 PM - Posted by Anonymous (not verified)

Exactly. And devs will also choose Mantle over DX12 because they can port to Windows, Linux, iOS, Android, etc. Steam will back Mantle all the way, or they will die, and Steam knows that.

THE DEVS DO NOT LIKE MICROSOFT, or its DX11/12 cr@p.

MANTLE will win simply because it's portable to all* OSes and is not controlled by MS.

March 23, 2014 | 04:28 PM - Posted by arbiter

Um, are you really that dumb? Mantle only supports AMD GPUs, aka 30-ish % of the market. Why would they go with Mantle over DX12 and piss off Nvidia users, who are 50% of the market? On top of that, they've got to do all the extra dev work for Mantle; DX12, since it works for both, is the cheaper and smarter option and is going to be supported on Xbox One. Even if it were reversed, with 50% for AMD and 30% for Nvidia, DX12 would still win since, as I said, it works for both; no game dev wants to piss off that much of the market.

March 22, 2014 | 06:57 PM - Posted by DiceAir (not verified)

I'm telling you, they must do SLI vs. CrossFire in Mantle and DirectX; then we will see something different.

March 22, 2014 | 07:04 PM - Posted by Anonymous (not verified)

Ryan, this is a well-made presentation by Nvidia. However, the use of application-specific optimizations has been a mainstay of DX10 and DX11 from both GPU companies. How can we know that these are global DX11 fixes from NV and not app patches?

Mantle was supposed to remove the need for application-specific tweaks and patches: AMD gets out of the way and lets the engine do the work of memory management and the like. To me that seems like a better deal (in my layman's understanding).

PS: Are we getting AMD's take on this scenario too? After all, AMD does love PowerPoint slides :-D

March 23, 2014 | 10:49 AM - Posted by Ryan Shrout

We can't really know for sure without some extensive testing.  I am currently out of town but am working on a plan of attack for that.

Mantle does kind of remove the need for app-specific tweaks, but mostly because it becomes much more difficult for a driver to change anything with this implementation style. So it's not that they won't WANT to, but that they may not be ABLE to.

I have asked AMD for input on this several times, but no responses yet.

March 23, 2014 | 01:54 PM - Posted by Anonymous (not verified)

"I have asked AMD for input on this several times, but no responses yet"

Please keep bugging them (AMD), lol. I'm glad we got Nvidia's take on Mantle, because Nvidia has been pretty silent on Mantle since its inception, although everyone knew Nvidia, Microsoft and Intel were not happy whatsoever.

March 22, 2014 | 07:10 PM - Posted by Anonymous (not verified)

"The company’s stance is to wait for DirectX 12 to be released and level the playing field with an industry standard rather than proceed down the pathway of another custom option."

I do not see DirectX as an industry standard; it is an M$ standard, on all occasions limited to one OS, Windows. If it were a true industry standard it would run under OS X and Linux! OpenGL is available on OS X (BSD), Linux, and Windows, and is truly an industry-standard graphics API/HAL.

Nvidia cannot afford to let itself become tied to one company's API and ignore the changes in the gaming market. The full desktop version of OpenGL works on Nvidia's K1, and be sure that Nvidia will have an interest in getting the most out of OpenGL, as Nvidia's CEO stated that Android is very important. Survival in the mobile market, where Nvidia has a CPU presence (ARM reference and ARM-ISA based), will give Nvidia more than just GPU income, and the mobile market uses OpenGL for its graphics API needs. Yes, Nvidia will wait for a newer version of DirectX just like everybody else, but with the market shifting to phones/tablets/Chromebooks, do not expect Nvidia to wait around doing nothing. Even Intel is trying to get into the Linux/Android/Chrome OS device market via its newer x86-based chips, and that means more support for OpenGL on x86 as well as on ARM/ARM-ISA-based SOCs. With the K1 SOC, expect Nvidia to have even more interest in seeing that its new Denver-based product line has the most efficient OpenGL possible; so expect OpenGL to get just as much attention from Nvidia as DirectX, probably more.

Steam OS will be a big factor, along with mobile, in putting more attention on a true industry-standard graphics API, and Mantle will force Nvidia to focus on other graphics APIs that are not tied to the Windows OS.

March 23, 2014 | 10:51 AM - Posted by Ryan Shrout

DX is the standard on Windows (the largest and dominant PC gaming platform) and Xbox.  It is a standard, and likely the most important one of the last 10 years.

NVIDIA isn't betting it all on DX though; between Android, Linux, and SteamOS, NV has the best OpenGL implementation today.

March 23, 2014 | 03:11 PM - Posted by Anonymous (not verified)

And Windows is not the whole industry for graphics. Expect more high-end gaming on the Linux platform, as well as gaming on mobile. There are other players in the mobile GPU market that are going to shake things up on the hardware end, some just introducing hardware-based ray tracing! Nvidia expects to enter the 64-bit mobile CPU market with its Denver ARM-ISA-based CPU/GPU SOC SKUs, and mobile uses OpenGL, not DirectX. Any standard limited to one company's OS product/platform cannot have the word "industry" as quoted.

M$'s domination of the PC market just exemplifies the ineffective state of the US Justice Department over the last 30+ years in allowing the M$ monopoly to continue, and it has resulted in billions in lost productivity for enterprises of all sizes. Just look at Vista and 8.* and see why monopolies stifle and ruin productivity for a whole industry.

M$ only shows interest in graphics as a reason to keep its OS ecosystem and its cash cows, Office etc., in business; gaming was never M$'s top interest, never even on the radar, as M$ is only about the market share and profits of its core applications/businesses. DX12 is just vaporware until it is released, in use by game engines, and released for Win 7 as well; otherwise, as M$ has proven in the past, DX is just about getting its new OS adopted, and then graphics goes back on the back burner once that new OS release's market share is assured. How our legal system has let M$ get away with forcing its OS on new third-party hardware all these years just confuses the hell out of me. And please do not bring Apple into it, as Apple does not force its OS/OS ecosystem on any third-party hardware OEMs. M$ should not be able to dictate what OS, or OS version, comes with new THIRD-PARTY hardware; that should be left up to the consumer, and any M$ interference or undue influence on the third-party hardware market needs to be removed, and is monopolistic (illegal; hear that, Justice Departments of the past 30+ years?).

It looks like mobile is doing what our government and Justice Departments have not, totally locking M$ out of all but an insignificant share of the mobile market. Steam OS and Linux are just beginning to take share away from the M$ trust in gaming as well as in the desktop market. Expect more innovation to filter up from the mobile GPU market to the desktop, not down from the top; innovation has always been driven by healthy competition. Nvidia may be kowtowing to M$ in public, but in mobile it's OpenGL that has the market, and Nvidia wants an SOC place at the table, and those plates are full of OpenGL competitors. Do not expect Windows to disappear overnight, but there will be much dual-booting with Win 7 as gaming slowly becomes a reality on other OSes, and this will be best for gaming and gamers. What the spinners are saying about Mantle is just FUD, and all the new interest in graphics APIs means that healthy competition is beginning its return to the desktop market, filtering up from mobile, as people have learned that PCs are not just about M$ and Intel.

March 23, 2014 | 06:57 PM - Posted by arbiter

Um, last I checked you could run DX games on Linux using Wine, so saying it's limited to one OS is an incorrect statement. MS shouldn't have to support another OS when it makes them no money to do so. It is kind of idiotic thinking to make software for a competitor's OS that you won't see dime one from.

March 22, 2014 | 07:15 PM - Posted by spigzone (not verified)

"I do believe that upon the release of DX12, the playing field will level once again and development on Mantle will come to a close; that is, at least if Microsoft keeps its promises."

I doubt that. Mark Cerny stated there was a lot of exploitable performance headroom built into the PS4 hardware, headroom that DICE (who said the PS4 and Mantle would be the focus of its Frostbite R&D) and Crytek will be very busy researching and implementing, performance improvements that will make it into Mantle and Mantle games and be available on GCN hardware, particularly the upcoming Carrizo and Radeon 3xx hardware.

Microsoft's XB1-centric DX12 will be focused on supporting the implementation of their long-term closed ecosystem/walled garden strategy.

Essentially, AMD is going 'off the reservation' to champion an open, clean-slate, highly efficient, highly extensible API, and Nvidia is only too happy to snuggle even closer to a fellow proprietary-API promoter and get as much of its hardware-centric proprietary API included as possible.

It's all starting to fricking look like a snake mating ball.

March 23, 2014 | 10:52 AM - Posted by Ryan Shrout

But the PS4 doesn't use Mantle...it uses OpenGL?

March 23, 2014 | 02:26 PM - Posted by spigzone (not verified)

You're funning me, right?

Dice APU13 slide "Mantle and PlayStation 4 will drive our future Frostbite designs & optimizations"

John Carmack said Mantle was only possible due to AMD's console GPU hat trick.

Mark Cerny said there was several years worth of future optimizing built into the PS4 hardware.

Plus a clean-slate APU... no way in hell will DX12 be able to match the evolving performance gains Mantle will bring to the table.

I know you adore balance and parity, but saying DX12 can achieve parity with Mantle is unrealistic.

March 23, 2014 | 06:54 PM - Posted by arbiter

How do you know what performance gains it will have? Are you a dev with inside access to both APIs? Don't be like AMD and make claims that in the end turn out to be overstatements.

As for the console hat trick: if the PS4 uses OpenGL and the XB1 uses DX12 (going by what MS has said), then Mantle isn't on either of those. Mantle being a closed system right now (yes, it is closed, since no SDK is out) makes it one-sided. Even if they did open it, Nvidia would never use it.

March 24, 2014 | 03:05 AM - Posted by Anonymous (not verified)

>Mantle being closed system right now

It's not a closed system; it's still in development. When a non-development release is completed, there will be an SDK release. As for right now, there's no point muddying the water by allowing indie developers to use an SDK for an API that is still being adjusted and edited.

http://vr-zone.com/articles/mantling-alliances-ritchie-corpus-amd-interv...

March 22, 2014 | 07:19 PM - Posted by spigzone (not verified)

"Microsoft's XB1 centric DX12 will be focused on supporting the implementation of their long term closed ecosystem/walled garden strategy."

Far as I'm concerned that's enough reason to support AMD because ... screw Microsoft.

March 22, 2014 | 07:48 PM - Posted by Edkiefer (not verified)

Wow , I want what you guys are smoking !

AMD is the one that jumped ship. How is Mantle open?
There's no SDK yet and SC is closed.

Now, if they're going to support OpenGL that would be great, but how many games support it, other than on Linux/Steam box?

AMD's team needs to focus more on performance and stability in their drivers; that's their downside, drivers.
Now they have to support Mantle and DX, and that means more work.

March 23, 2014 | 12:21 AM - Posted by weed (not verified)

Wow, I want what you are smoking too!

If you bothered to read around, you would realise Mantle is still in its infancy and OpenGL has been around since before you were born.

March 23, 2014 | 07:24 AM - Posted by Edkiefer (not verified)

Of course Mantle is in its infancy; it was rushed out, probably knowing there was limited time before DX12. It's basically still in alpha/beta.

As for OpenGL being around before I was born, hehe, I was 35 when it came out (1992).

My point being that going way back, ATI/AMD's drivers were their weak point.

March 23, 2014 | 04:33 PM - Posted by arbiter

If the SDK isn't even out for Mantle, then it's a closed system. PhysX is more open, since AMD could have licensed it and didn't.

March 24, 2014 | 05:36 AM - Posted by Anonymous (not verified)

PhysX is crap anyway, and AMD announced Bullet physics support, which of course will be free for the public to use.
The Mantle SDK is in beta, not public yet, but whoever wants to use it now can work with AMD...

March 24, 2014 | 09:11 AM - Posted by renz (not verified)

And yet no game that uses the Bullet physics engine has ever implemented its GPU-accelerated features.

March 24, 2014 | 03:10 AM - Posted by Tim (not verified)

http://cdn.memegenerator.net/instances/400x/23873635.jpg

March 22, 2014 | 08:04 PM - Posted by JoheJohe (not verified)

Can someone please clarify something for me, because I think this is news but I haven't seen it written anywhere...

So, a GeForce owner (Fermi onwards, and probably the equivalent for AMD) does NOT HAVE TO BUY new hardware to get DX12? For example, will my current, humble 560 Ti get DX12 capability with a simple driver update?

If yes, I think this is great but also weird at the same time from the industry's perspective, as "being able to run the game on the latest DX" was also a reason to dump the last-gen card and get a new one.

March 22, 2014 | 08:12 PM - Posted by johnc (not verified)

>>So, a GeForce owner (Fermi Onwards, and probably the equivalent for AMD) does NOT HAVE TO BUY new hardware to get DX12? For example, will my current, humble 560ti get DX12 capability with a simple driver update?<<

It depends on the DX12 feature set. It appears that at least some DX12 features will be supported on Fermi because NVIDIA has publicly stated so. But we don't know yet if all of them will be. E.g., Fermi doesn't have hardware support for bindless textures.

March 22, 2014 | 08:21 PM - Posted by spigzone (not verified)

If ye be wanting all that DX 12 has to offer, Ye'll be getting AMD 3xx or Nvidia 8xx GPUs when they become available.

March 22, 2014 | 10:12 PM - Posted by Anonymous (not verified)

What's great? You will still have your 560 Ti 2 years from now? Knowing that the games will start coming slowly a year or 2 after that? Good luck playing new games in 2017 with a 560 Ti.
People need to understand that most of what was said at GDC is just marketing; a lot of it is just talk. They wanted to point out that DX, contrary to Mantle, has a wider player base, so they said it will be compatible with most GPUs to make the player base look huge. But they didn't say that DX12 will be Windows 9 only (because the only selling point of Windows is DirectX; if they didn't link a DirectX version to a Windows version, no one would buy it). So the player base they mentioned is in reality much smaller, because those players all need to migrate to the new Windows, which doesn't happen easily and takes years. Because of that, devs will keep developing for DirectX 11 because it has the bigger player base, which means that by the time people migrate to Windows 9 in numbers that make it worth the devs' time to invest in it, 2-3 years will have passed. So good luck enjoying gaming with your 560 Ti in 2018 on Windows 9, that's if any of us are still alive...

March 25, 2014 | 11:13 AM - Posted by Anonymous (not verified)

Don't believe Nvidia's PR marketing stunt. For DX12 you will need a new next-gen card (20nm), unless you only want a few features supported, as in the case of Nvidia's DX11.1/DX11.2 cards that do not have full hardware support, only partial DX11 software support.

March 22, 2014 | 08:04 PM - Posted by johnc (not verified)

I wish devs would just move to OpenGL 4.4 already. The performance advantages are available today, and have been for a year+ now. Waiting for DX12 or flubbing around with Mantle is just unnecessary.

March 22, 2014 | 08:08 PM - Posted by JoheJohe (not verified)

(For example, I remember DX11 coming out, pushing both Fermi and AMD sales... people running Aliens vs. Predator to check out DX11, among other titles...)

March 22, 2014 | 09:46 PM - Posted by Les (not verified)

Ryan, I have one question for you. The Star Swarm slide Nvidia provided uses Intel's most expensive 12-thread CPU.

With a 280X I'm able to get 55 FPS with an AMD Phenom II X6 CPU in Mantle. I notice that the GTX 780 Ti gets about the same FPS in DX with that 12-thread CPU, which must be going on three times as powerful as my Phenom II X6. What are the chances of that 780 Ti getting anywhere near my 55 FPS if it were using my CPU?

I would suggest that it wouldn't get over 40 FPS, if it gets that high at all. Ergo, Nvidia's "more efficient" DX11 solution is not very efficient at all.

Would you disagree with that?

March 23, 2014 | 12:12 AM - Posted by wujj123456

I think you nailed it. I am still an Nvidia fan, but I am not foolish enough to trust any Nvidia marketing material/talk.

Benchmarking is a (black) art. Without deep analysis and a repeatable setup, it can always be set up to demonstrate anything you want.

Assessing Mantle on a very powerful CPU (including my 4770K) is meaningless. A game developer can't release a game that entirely saturates a top mainstream CPU, otherwise no one could buy and enjoy it. Therefore, by definition, no game released now will be severely bottlenecked by today's top CPUs, which makes such a comparison pointless. It basically first removes the problem AMD is trying to solve and then says, "Hey, your solution is no better." Sure, if the problem isn't there to begin with, what can AMD solve?

What is interesting is this: if Mantle-like capability had been everywhere, what would the landscape of today's games look like? If, with Mantle, an i3 could do as well as a 4770K in today's games, and developers still targeted average i5-class processing ability, what would they use those freed CPU cycles for? How many interesting features could have been implemented? How much cleverer could the AI be? That's what makes Mantle/DX12 really exciting.

That's also why I think AMD is the pioneer in this case. DX12 only proves how AMD is benefiting the industry by taking up risk and responsibility.

I don't want to go down the route of saying pcper is bought by Nvidia or some crap like that. However, I do feel Ryan "thinks less critically" when publishing Nvidia news. That needs some fixing...

March 23, 2014 | 10:56 AM - Posted by Ryan Shrout

I don't disagree that the configuration of these tests needs to be evaluated.  I am working on a test plan to do just that next week.  

I do not agree though that assessing Mantle with a higher end GPU is meaningless - a LOT of enthusiasts and PC gamers utilize those types of processors.  Is better performance for lower end processors a good thing though?  Absolutely.  But that is just a portion of the total audience as well.

March 23, 2014 | 11:32 AM - Posted by Les (not verified)

Looking forward to that. It is true that Mantle has little if any effect when a very powerful CPU is paired with the GPU. This is also very true in BF4: there are parts of that game where I'm able to beat a heavily overclocked i7 in Mantle, where I haven't a hope in hell's chance in DX, simply because DX bottlenecks so much and Mantle does not.

Not everyone can afford an i7. With Mantle they can spend more of what they have on a better GPU and get the same, if not better, performance, like me. It's people like me that no reviewer seems to have in mind, as they all use 12-thread Intel CPUs and then turn around and say "oh, Mantle doesn't really do anything." I can show them in seconds where in BF4 I can get 120+ FPS with a Phenom II in Mantle, where a 4.8 GHz i7 would struggle to get over 80 FPS, and I have done exactly that.

I also feel that Nvidia are empowering MS to continue their graphics API monopoly, which is the reason PC gaming has been stuck in a rut for the past 10 years.
MS don't want PC gaming to get any better; what they want is for you to dump the PC, buy an XB1, and pay MS for the privilege of playing games online.

DX12, IMO, will end up a damp squib! It's just a distraction tactic from Mantle, as MS clearly feel threatened by it.

It would have been a much better thing if Nvidia had worked with AMD in taking up Mantle to quash MS's API monopoly.
AMD made it clear Nvidia were welcome to use Mantle once it was more refined.

March 23, 2014 | 02:09 PM - Posted by Anonymous (not verified)

"""I do not agree though that assessing Mantle with a higher end GPU is meaningless - a LOT of enthusiasts and PC gamers utilize those types of processors"""

WHAT? I hope you realize that the i3 and i5 are about 90% or more of the market.

March 23, 2014 | 04:48 PM - Posted by wujj123456

"I do not agree though that assessing Mantle with a higher end GPU is meaningless"

I guess GPU is a typo? You mean CPU, right?

OK, I need to reword. It's useful for enthusiasts to assess whether Mantle is useful for them. Just like now I know there is not much I need from Mantle, since I use a 4770K.

However, it's not very useful for understanding what Mantle, as a technology, could have brought us. The reason is given in my previous post: it's really about the potential uses of the resources freed by Mantle/DX12. With games that usually don't even show the difference between an i5-4670K and an i7-4770K, we of course won't see much benefit from Mantle.

Now, coming back to this Nvidia marketing play: do you agree the 3930K is really kind of an extreme choice? All the additional CPU power just goes into Nvidia's hands to downplay Mantle, which is misleading for 95-99% of gamers.

I am looking forward to the new Nvidia driver and benchmarks showing the truth. :-) Given how DX12 is evolving, there is reason to believe some real, unsolvable bottlenecks exist in DX11. Many articles have pointed out the fundamental API differences, but I am not a game developer and probably shouldn't guess too much.

I do hope AMD balances their effort between Mantle and DX11, though. I don't usually buy AMD because I think they need to put more effort into driver optimization, and now they are even diverting that limited resource.

March 23, 2014 | 09:49 PM - Posted by Les (not verified)

I just ran Star Swarm in Mantle on a low setting that leaves me completely CPU-bound. My GPU is much, much weaker than a 780 Ti, and yet with a four-year-old AMD CPU I was able to get 102 FPS.

If the much more powerful 780 Ti is only getting 70 FPS, it must be because it's bottlenecked by that 3930K in DX; despite that CPU being so much more powerful than mine, the performance is roughly 30% below what I get in Mantle.

That tells one all they need to know.

March 23, 2014 | 12:12 AM - Posted by Anonymous (not verified)

Nvidia has to downplay its relevance and prop up its D3D11 efficiency. We are all to believe that Nvidia has been sandbagging its D3D11 efficiency since 2009 for this particular moment.

Nvidia is the only one on the PC platform that doesn't have an x86 CPU where these advantages will be significant, so it's on the outside looking in on AMD and Intel.

March 23, 2014 | 02:20 AM - Posted by bbbl67 (not verified)

Efficiency improvements on the existing DirectX 11 are all well and good, but they don't do anybody any good if they can only be realized on a super-high-end Intel Core i7-4960X or similar, and we don't know what hardware Nvidia tested these numbers on. The goal of Mantle is not to get higher frame rates from extreme processors, but to get higher frame rates on low- and mid-range processors. DirectX 11's biggest problem seems to be that it doesn't do a good job of distributing the GPU feed operations across all of the available CPU cores, so DX11 depends on processors with extremely fast single cores to get the job done. We don't need yet another showcase single-threaded benchmark; we need to see how this performs on multi-core processors, which are the majority of modern processors these days. The fact that DX11 has been so ignorant of multi-core processors is yet another shame on Microsoft, in its long list of shames.
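The submission bottleneck described above can be made concrete with a toy model. A minimal sketch (all numbers below are hypothetical illustrations, not measurements of any real driver): per-draw-call CPU overhead on a single submission thread caps the frame rate, and spreading that work across cores raises the cap.

```python
# Toy model of CPU-side draw-call submission cost (all numbers hypothetical).
# DX11-style: one thread submits every draw call; Mantle/DX12-style: submission
# work is spread across several cores.

def max_fps(draw_calls, cost_per_call_us, submit_threads=1):
    """Upper bound on FPS if submission cost is the only bottleneck."""
    frame_cpu_time_us = draw_calls * cost_per_call_us / submit_threads
    return 1_000_000 / frame_cpu_time_us

# 10,000 draw calls per frame at 5 microseconds of driver overhead each:
single = max_fps(10_000, 5, submit_threads=1)   # one core feeding the GPU
multi  = max_fps(10_000, 5, submit_threads=4)   # work split over four cores

print(f"single-threaded cap: {single:.0f} FPS")  # 20 FPS
print(f"four-thread cap:     {multi:.0f} FPS")   # 80 FPS
```

Real drivers don't scale linearly like this, but the model shows why a very fast single core matters so much under DX11, and why multi-core submission (deferred contexts done well, Mantle, or DX12) changes the picture for mid-range CPUs.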

March 23, 2014 | 03:33 AM - Posted by Anonymous (not verified)

Star Swarm uses DX11's deferred contexts (multiple threads), but they are disabled by default for some weird reason. I bet Nvidia enabled them to get good scores. Right now I get 50-56 FPS (yes, the benchmark has huge run-to-run variation since it does not have a scripted camera or units) with a GTX 780 (980 MHz core), the 335.23 driver, and an i7-940 overclocked to 3.6 GHz, on the extreme preset with deferred contexts enabled.

March 23, 2014 | 04:51 AM - Posted by Klimax (not verified)

It improves things, but the implementation in Star Swarm is so poor that the overhead of deferred command lists (DCLs) is too big. (The number of batches is way too high.)

March 23, 2014 | 07:27 PM - Posted by bbbl67 (not verified)

Whether they used DX11's deferred contexts or not, the real issue is that they have to use these API calls to do anything at all! In Mantle they just create their own low-level threads, and that's faster than having to make a call to an API to create a thread.

March 23, 2014 | 07:36 PM - Posted by bbbl67

I think the point is that these games don't want to use DX11's deferred contexts or any other API-based threading call; they'd rather do this manually themselves at a low level. Going through an API call to create a thread is inherently less efficient than doing the same work yourself.

March 23, 2014 | 04:50 AM - Posted by Klimax (not verified)

Star Swarm is a horrible engine. Not only is its multithreading done so badly that the overhead is killing it (even in non-graphics computation), but its coding is so bad that it makes several thousand identical calls to the functions that govern basic vertex processing. Those functions are cheap on their own (from traces they appear to be around several dozen instructions, so most of their cost is entry/exit processing, i.e. stack manipulation).
That's in contrast to engines like Civilization V, where it is at worst several hundred calls.

Furthermore, it issues too many batches per frame, meaning the overhead is bigger than the cost of the main computation. (Almost as if the authors don't know anything.)

To test the effect, one can tweak TargetJobSize in Civilization V and see how it works. (The standard setting is 100, but you can get better performance on, e.g., a Titan by setting it to 1000.)
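The batch-count argument can be put in numbers with a rough sketch (all costs below are made-up illustrative values, not measurements of Civilization V or any driver): total frame cost is a fixed overhead per batch plus per-item work, so raising something like TargetJobSize cuts the number of batches and thus the overhead.

```python
# Toy model: total CPU cost of processing N items split into batches
# (all costs are hypothetical illustrative numbers, not measurements).

def frame_cost_us(items, target_job_size, overhead_per_batch_us=50,
                  cost_per_item_us=0.2):
    # ceil division: number of batches needed to cover all items
    batches = -(-items // target_job_size)
    return batches * overhead_per_batch_us + items * cost_per_item_us

items = 100_000
small = frame_cost_us(items, 100)    # many tiny batches: overhead dominates
large = frame_cost_us(items, 1000)   # fewer, bigger batches

print(f"job size 100:  {small:,.0f} us/frame")   # 70,000 us
print(f"job size 1000: {large:,.0f} us/frame")   # 25,000 us
```

The per-item work is identical in both cases; only the fixed cost per batch changes, which is exactly the "overhead bigger than the main computation" situation described above.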

==

Frankly, programmers still don't know how to use DX11 well. As for Mantle, it is a proprietary fix for AMD's drivers that offloads the work onto game engines. That's the only reason Mantle gets support: it is the only way to get the most out of AMD's cards, because AMD can't, or refuses to, fully support DX11's features.

And considering AMD lied about DX12, I posit AMD intentionally cripples their DX11 drivers to force adoption of Mantle.

March 23, 2014 | 05:57 AM - Posted by Anonymous (not verified)

AMD supports all DX11.2 features.
GCN 1.0 = DX11.1 w/ DX11.2 features
GCN 1.1 = DX11.2

Nvidia supports DX11.0 only, plus two game-specific features from DX11.2. No Nvidia chip, not even the Maxwell-based 750/Ti, supports all DX11.2 features.
Fermi = DX11.0
Kepler = DX11.0
Maxwell = DX11.0

March 23, 2014 | 02:17 PM - Posted by Anonymous (not verified)

That's so funny. Nvidia and fanboys are now coming out and saying DX12 matters, when all these years they (Nvidia and fanboys) couldn't care less about DX11.2. That's hilarious; why does DX12 suddenly matter so much now?

March 23, 2014 | 04:40 PM - Posted by arbiter

Nvidia cards supported the 3D side of 11.2, which is what's used in gaming, if I remember right; they just didn't support the 2D side for desktop use, hence why they can't be called 11.2 cards. The additions in 11.2 really meant nothing to 99.9% of people.

March 23, 2014 | 01:55 PM - Posted by UnlamedAnonymousk (not verified)

This is one big pile of tech PR. Of course, you can't build that without some truth in it. Beyond that, it is a black art.

So Nvidia is simply saying that adapting the driver stack to a specific game with some kind of home-cooked magic is better than developing the game for an API (where the API itself manages things, while Nvidia redoes a game-specific driver).

Nicely done, then. As an indie dev, I should now negotiate with Nvidia to include my code path in their new driver. Otherwise I probably won't get this enhanced / calibrated / curve-fitted driver magic, lol.

If there is something worse than a proprietary API, it is a game-specific driver over which you have almost no control.

March 23, 2014 | 02:18 PM - Posted by Anonymous (not verified)

Bingo

March 23, 2014 | 02:03 PM - Posted by drbaltazar (not verified)

Clearly game devs don't speak to the OpenGL or DX makers! Anti-cheat solutions cause most of the issues, with anti-cheat disabling DX features.

March 23, 2014 | 02:16 PM - Posted by drbaltazar (not verified)

You want closer to the metal? Go ask Steve Gibson (GRC); he codes in assembler. And now they're making these changes while the fast Fourier transform went sparse in 2014. This alone is humongous; it affects everything. Android is probably already changing from the FFT to the latest 2014 sparse fast Fourier transform. So I'm sorry, but the timetable for DX is so slow I don't see the benefit. They've supposedly been at this four years; if that's true, MS never took the sparse FFT into account. So users would still be stuck with the old FFT?? Hell no!

March 23, 2014 | 02:41 PM - Posted by Anonymous (not verified)

Bottom line: if I get an AMD graphics card I have the best of both worlds, over 20 games with considerably enhanced performance, plus DX12 compatibility for games in 2-3 years (if Microsoft delivers). What do I have to lose? Nothing.
AMD has Nvidia by the balls.
If what Nvidia is saying here is true, that they can drastically increase performance in games, then why did they wait until now to do it? Options:
1 - They are in bed with Microsoft and Intel to hold performance back and sell more.
2 - They are lying.
3 - To increase performance this way, they need a lot of money and resources, which means they have to pick games during development to work with the devs, which means you will get a couple of games a year, for a couple of years, before they stop.

In the end, thanks to AMD, PC gaming is finally evolving.

March 23, 2014 | 04:46 PM - Posted by arbiter

Might want to rethink that. Mantle was the biggest edge AMD HAD over Nvidia; now, with DX12 doing the same thing, that edge just disappeared. The edge AMD had with Mantle was still kind of small anyway. Some games gave AMD a 10% advantage, but in other games, like Thief for example, the 290X only had about a 5% advantage over my GTX 780 (no, not a 780 Ti): 73 FPS (AMD) vs 69 FPS (GTX 780), and the 73 number came from AMD's own blog.

http://community.amd.com/community/amd-blogs/amd-gaming/blog/2014/03/17/...

March 23, 2014 | 05:18 PM - Posted by Anonymous (not verified)

Agreed. Nvidia is floundering with empty PR propaganda. There's evidence emerging that Nvidia only started any real commitment to DX12 around the time they saw the documents stolen from AMD regarding AMD's long-term internal plans.
The "4 years" PR spin amounts to a couple of people in a conversation telling MS about something they should think about doing. AMD also had those talks, and when MS had other priorities, AMD went ahead with their own solution, backed by developers, while keeping MS in the loop (after all, AMD has been working very closely with Microsoft on console development ;) ): Mantle. This is why AMD said at the time there was no DX12; at that time, no development had started.
Once NV got ahold of those stolen documents, they went into panic mode, and we are now seeing the PR associated with it. An API doesn't take 6 years to develop for a cash-rich company like MS, which is determined to retain its gaming ecosystem (Steam OS also plays a role). AMD did it with a very limited budget in 2 years, with very real and promising results, and it's on the market today for end users to enjoy. Make no mistake: what we are hearing from NV is nothing but propaganda and half-truths. The 4-year development PR is flat-out spin.

March 23, 2014 | 07:52 PM - Posted by Anonymous (not verified)

Never AMD again... poor drivers and/or hardware design.
The screen goes blank to just a tan color with vertical lines, but audio continues.

March 23, 2014 | 08:51 PM - Posted by Anonymous (not verified)

Looks like Nvidia was lying and knew about it.

http://techreport.com/news/26210/directx-12-will-also-add-new-features-f...

DirectX 12 will indeed make lower-level abstraction available (but not mandatory—there will be backward-compatibility with DX11) on existing hardware. However, Tamasi explained that DirectX 12 will introduce a set of new features in addition to the lower-level abstraction, and those features will require new hardware. In his words, Microsoft "only teased" at some of those additions this week, and a "whole bunch more" are coming.

Nvidia knew Fermi, Kepler, and Maxwell would not be fully DX12 compatible, but got all their fans thinking they would be with that slide at GDC.

Nvidia sure made a lot of people look like Nvidiots.

March 23, 2014 | 11:03 PM - Posted by Anonymous (not verified)

Yep. And AMD has FULL compatibility with DX12. Not a surprise, given that AMD and Microsoft worked closely on the Xbox One. Even Tahiti is more feature-advanced than Nvidia's new-fangled Maxwell. Reviewers gave glowing reviews of that piece of crap, though: a new architecture that is basically the same as AMD's 2+ year older architecture. Downclock an R7 265 and you'll have very similar power consumption and perf/watt to NV's newest architecture. There is nothing groundbreaking or even impressive about that pile of shit. Quite a pathetic showing on NV's behalf. But as long as they can 'convince' reviewers to give glowing reviews, it's all that matters, it seems.

March 24, 2014 | 06:52 AM - Posted by arbiter

You would have to downclock that 265 quite a bit to get power usage down to Maxwell levels, so it would probably be at least 20-30% slower by the time you get there.

Maxwell, on top of being very low power, is also almost 2x faster per watt at mining. Three 750 Tis that only draw around 180 watts beat an R9 290X that sucks 300 watts. Throw in that you can get a 25-40% overclock on the 750 Ti, and it gets even worse, and that was done on a card without a 6-pin PCIe connector. The R9's biggest issues have been not only power draw and questionable performance advertising, but also the heat it makes.

March 24, 2014 | 11:34 AM - Posted by Anonymous (not verified)

Well yes, I would hope Maxwell would have at least some advantage over AMD's nearly three-year-old architecture. But when I can underclock and undervolt my R7 265 to all but match NV's next-gen architecture, that's a complete failure in my book. There is no technical merit in considering a Maxwell over Tahiti, especially when Tahiti is still more feature-rich than NV's latest.

Your mining comparison is of course flawed, misleading, and, well, plain ol' FUD. Miners know it; enthusiasts know it. Aftermarket R9 290Xs have as good, if not better, thermals than 780s, which come with more expensive stock coolers. Slap the same cooler on both and thermals are equal. Everything else is just PR on NV's and the reviewers' behalf.

March 24, 2014 | 02:45 AM - Posted by DiceAir (not verified)

I still think Mantle will be better than DirectX. They should test a normal CPU, like a 2500K or 3570K at stock speeds, with Mantle on AMD vs. DirectX on a 780 Ti or so. You will see the 780 Ti being bottlenecked, while with Mantle there is none.

March 24, 2014 | 06:32 AM - Posted by arbiter

If you look at a high-end R9 card with a high-end Intel CPU, or even a mid-range i5, the performance difference with and without Mantle enabled is only around 10-15%, so the DX11 bottleneck isn't that massive; it grows with a slower CPU. With AMD CPUs especially, the bottleneck rears up more and more.

March 24, 2014 | 08:16 AM - Posted by Anonymous (not verified)

Nvidia's graphs might look nice and fancy, but their DX11 improvements are pretty much senseless/nonsense.
I think the Star Swarm benchmark graph is the perfect example. Using an i7-3930K processor (~$550) at an unspecified clock speed (possibly overclocked) and unspecified RAM (I'd guess some high-end stuff like 2133-2666 MHz low latency) in combination with a GTX 780 Ti might beat a 290X with Mantle, but the latter is much cheaper and (thanks to Mantle) does not require such a high-end CPU.

March 24, 2014 | 08:28 AM - Posted by Anonymous (not verified)

I can't believe how many times I've sat through these sorts of topics. Microsoft will not be beaten by AMD. Intel and NVIDIA are not some back-alley, second-rate "yes please let us lose" corporations that will idly sit by and watch a company at the edge of bankruptcy (saved by the bell... 50% of the bell being a Microsoft product, the "bell" being the current consoles) dent their shining armour. Yes, Mantle is here to stay for a bit; interest will decline once DX12 is out and developers see the numbers, and shareholders realize that the expenses associated with Mantle are not worth the benefits, especially when DX12 is universally accessible. Nobody is questioning Mantle's technological prowess (though it's now debatable whether it's an authentic project by AMD, seeing that OpenGL is basically the same and AMD was in the know about DX12 for a long time), but still, we all owe AMD lots of thanks for bringing this out of the shadows, because it gave the other boys the nudge they needed. For now, it's all marketing though: stats and future numbers. Historical evidence is clear on the outcome, so let's not shy away from it. Mantle simply doesn't have staying power.

March 24, 2014 | 10:35 AM - Posted by Anonymous (not verified)

People seemingly forget that the PS4 is able to run Mantle and not DX12; it's currently whooping the XB1 in sales and will most likely become the dominant console (if it hasn't already).
When DX12 arrives, you will need new hardware to get the most out of it (especially if you have a GeForce); for Mantle you won't (unless you have a GeForce). When game engines support Mantle, all future games can take advantage of it and all gamers will benefit from it (unless you have a GeForce).
So M$ has 'some' Windows PCs, plus a console currently getting whooped by the PS4 (which can use Mantle), to fly its DX12 flag; AMD potentially has everything else, including the aforementioned Windows PCs and console.

I highly doubt Mantle will be so short-lived when it has so much disruptive potential (especially if you have a GeForce).

March 24, 2014 | 02:42 PM - Posted by Grimmy

"...it is thanks to AMD that we are even talking about efficiency with such passion"

And it's strange that just months after Mantle's release, MS talks about really optimizing DirectX, and has rebuilt it from the ground up. Talk about timing... or is it?

And likewise for OpenGL.

No, it's panic. But that is good.
It's just sad to see that much-needed improvements have to be forced.
But it's great that there is competition, or else we'd just get slight improvements.

I hope the same thing happens for better surround audio: TrueAudio.

March 24, 2014 | 04:59 PM - Posted by MarkT (not verified)

Sony forced DX12; as far as Microsoft is concerned, they will use DX12 to get an advantage over the PS4. Of course we have the AMD vs. Nvidia side, but the Sony/Microsoft side of DX12 is interesting. Do you think Microsoft would be so willing if DX12 helped the PS4 too? PS4 hardware > XBONE hardware. I agree Mantle is the bulk of the reason, but I just want to expose the nuance.

March 24, 2014 | 07:02 PM - Posted by MrAnderson (lazy to log in again) (not verified)

I have not had a chance to read through this entire thread (hot topic!), so sorry for repeating already-stated comments. I'm responding because of the remark about fragmentation, and about Nvidia speaking out being an irony...

You have to build something and establish it before you worry about fragmentation. Nvidia's CUDA, PhysX, etc. are proprietary, yes, but Nvidia practically laid all the groundwork that allows general alternatives to thrive (OpenCL, open-source Bullet, DirectCompute, etc.).

I'm not sure they could afford, at the time, to give their competitors a free lunch and go all in. So they trail-blazed and championed these technologies under their own flag because they needed to open doors. The competition and their partners rode on their coattails, which is totally fair as the way of business. This time, AMD seems to be leading the charge. The only difference is that the partners and the competition are only footfalls behind.

March 25, 2014 | 05:04 AM - Posted by Gregster

DX11 nVidia-optimised, DX12 AMD/nVidia-optimised, Mantle, OpenGL... One thing is for sure: the state of PC gaming is good, and no matter which way you look at it, all this competition is only a win for us gamers.

With 4K looking to be a standard sooner rather than later, we will need all the GPU grunt we can muster, and with the APIs giving boosts to FPS, I will be a happy gamer :D

March 25, 2014 | 06:15 AM - Posted by Anonymous (not verified)

True that, I agree. Happy days to be a PC gamer :)

March 29, 2014 | 09:55 PM - Posted by drbaltazar (not verified)

If Intel, AMD, and Nvidia all say the same thing, odds are high it will happen regardless of whether MS approves or not! Why? Come on, guys! OpenGL is, for all intents and purposes, an Android thing, and Android is pretty much Linux nowadays! Do you know how many corporations do hard-core coding on Linux? Even IBM is there. And then there is the 2014 sparse fast Fourier transform that will replace the fast Fourier transform; this alone brings huge gains, and a majority of software uses that transform in one form or another. So in the end, yes, Intel, Nvidia, and AMD decided to push harder for OpenGL. Who wouldn't? Android is the number one OS on the planet! Devs don't have a choice; they have to develop for Android and Windows. DX12 would have been nice, but I suspect Android isn't going to wait for MS to release DX12 before implementing next-gen OpenGL and the 2014 sparse fast Fourier transform. I bet the next Android Nexus will have all of these inside! The gains are just too important not to make this happen!
