BUILD 2015: The Final DirectX 12 Reveal

Manufacturer: Microsoft

DirectX 12 Has No More Secrets

The DirectX 12 API is finalized and the last of its features are known. Before the BUILD conference, the list consisted of Conservative Rasterization, Rasterizer Ordered Views, Typed UAV Load, Volume Tiled Resources, and a new Tiled Resources revision for non-volumetric content. When the GeForce GTX 980 launched, NVIDIA claimed it would be compatible with DirectX 12 features. Enthusiasts were skeptical, because Microsoft had not officially finalized the spec at the time.

Last week, Microsoft announced the last feature of the graphics API: Multiadapter.

We already knew that Multiadapter existed, at least to some extent. It is the part of the specification that allows developers to address multiple graphics adapters and split tasks between them. In DirectX 11 and earlier, secondary GPUs would remain idle unless the graphics driver sprinkled some magic fairy dust on them with SLI, CrossFire, or Hybrid CrossFire. The only other way to access this dormant hardware was by spinning up an OpenCL (or similar compute API) context on the side.

Read on to see what DirectX 12 does differently...

Apart from RAGE, which transcoded textures with CUDA, I do not know of a high-performance game that did that. I am not even sure that the task ran on a non-primary GPU (assuming the secondary graphics card you installed was even from NVIDIA).

Introducing Multiadapter for DirectX 12


In OpenCL, a developer needs to explicitly separate their tasks between all compute devices. In DirectX 12, Multiadapter comes in both “Implicit” and “Explicit” varieties. Implicit Multiadapter tells the graphics driver that you do not want to deal with load balancing. Like SLI and CrossFire, this means Alternate Frame Rendering (AFR). I also expect that Implicit Multiadapter mirrors all memory between devices, and that graphics cards of different models will not qualify, but neither of these points was mentioned in the keynote. Of course, Microsoft still recommends that developers collaborate with hardware vendors to create a profile, like SLI and CrossFire do today with various driver updates and the GeForce Experience application.
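For illustration, AFR's round-robin scheduling can be sketched in a few lines of Python. This is a conceptual model of what the driver does, not actual driver code:

```python
# Conceptual model of Alternate Frame Rendering (AFR): the driver
# hands out whole frames to GPUs in round-robin order. This is why
# AFR works best when every GPU is identical and holds a mirrored
# copy of all resources -- any frame must be renderable by any GPU.
def afr_assign(frame_index: int, gpu_count: int) -> int:
    """Return the index of the GPU that renders this frame."""
    return frame_index % gpu_count

# With two identical GPUs, even frames go to GPU 0, odd frames to GPU 1.
schedule = [afr_assign(f, 2) for f in range(6)]
print(schedule)  # [0, 1, 0, 1, 0, 1]
```

The round-robin assignment also hints at AFR's classic weakness: throughput doubles only if the GPUs finish in lockstep, which is where frame pacing problems come from.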

It is unknown if Vulkan, the competing graphics API from the Khronos Group, will have a feature similar to Implicit Multiadapter. We will probably learn more about that later this year.


DirectX 12 also provides an alternative, called Explicit Multiadapter. This is a new concept for DirectX. Like OpenCL, individual GPUs can be separately addressed, sent unique commands, and given unique data to store in memory. They do not even need to be similar in performance. One possible application is for integrated GPUs to draw a layer of objects, such as a cockpit or a 3D HUD, over what the main graphics card draws. Max McMullen, Principal Development Lead for Direct3D and DXGI at Microsoft, specifically mentioned calculating VR/AR perspective warp on integrated graphics. He also showed the Unreal Engine 4 Elemental Demo with an integrated GPU drawing some of the post-processing effects while the primary GPU worked on the next frame.
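The Elemental demo's split, where the integrated GPU post-processes one frame while the discrete GPU starts rendering the next, is essentially a two-stage pipeline. A rough model of the potential steady-state speedup, with made-up timings that are purely illustrative:

```python
def frame_time_serial(render_ms: float, postfx_ms: float) -> float:
    # One GPU does everything back-to-back.
    return render_ms + postfx_ms

def frame_time_pipelined(render_ms: float, postfx_ms: float) -> float:
    # The discrete GPU renders frame N+1 while the integrated GPU
    # post-processes frame N; steady-state throughput is limited
    # by the slower of the two stages.
    return max(render_ms, postfx_ms)

# Hypothetical: 12 ms of main rendering, 4 ms of post-processing.
print(frame_time_serial(12.0, 4.0))     # 16.0 ms per frame
print(frame_time_pipelined(12.0, 4.0))  # 12.0 ms per frame, steady state
```

Note the trade-off built into the model: throughput improves, but each displayed frame now passes through two stages, so latency does not shrink along with frame time.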

DirectX 12 then breaks Explicit Multiadapter down further into two groups: Linked and Unlinked.


Linked GPUs allow special pairings of graphics hardware to collaborate more closely. They can share resources in each other's rendering pipelines, and they are presented to the engine as a single GPU with multiple command processors. We don't know how similar GPUs need to be for this classification, though. “Look[s] like a single GPU” sounds like it excludes pairing cards from different vendors, because that sounds painful to implement across multiple, independent GPU drivers. It might be less strict than SLI and (non-Hybrid) CrossFire, but even that seems doubtful. Again, “look[s] like a single GPU” implies similar compute capabilities, along with several of the other assumptions that make SLI and CrossFire possible to do automatically.

The other group, Unlinked Explicit Multiadapter, is interesting because it is agnostic to vendor, performance, and capabilities -- beyond supporting DirectX 12 at all. This is where you will get benefits even when installing an AMD GPU alongside one from NVIDIA.

On the other hand, Unlinked Explicit Multiadapter is also the bottom of three tiers of developer hand-holding. You will not see any benefits at all unless the game developer puts a lot of care into creating a load-balancing algorithm, and even more care into QA to make sure it works efficiently across arbitrary configurations. We do not yet know how many developers will care that much. After all, as stated earlier, developers could have launched an OpenCL kernel on secondary graphics cards for years, except on Windows Vista because of its multiple-graphics-driver limitation. They didn't. Will that change? Maybe.
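A minimal sketch of the kind of load balancing a developer would have to write for Unlinked Explicit Multiadapter: split the workload in proportion to each adapter's measured throughput. The device throughputs and numbers here are hypothetical:

```python
def split_workload(total_items: int, throughputs: list[float]) -> list[int]:
    """Divide work items proportionally to each device's throughput."""
    total = sum(throughputs)
    shares = [int(total_items * t / total) for t in throughputs]
    # Hand any rounding remainder to the first device so nothing is lost.
    shares[0] += total_items - sum(shares)
    return shares

# Hypothetical: a discrete GPU roughly 4x faster than the integrated one.
print(split_workload(1000, [4.0, 1.0]))  # [800, 200]
```

A real engine would also re-measure throughput at runtime, since thermal limits, driver overhead, and scene content all shift the ratio -- exactly the QA burden described above.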


Unlinked Explicit Multiadapter could be important for newer systems with integrated graphics, though, which makes me wonder about HSA and similar technologies. Since an integrated graphics processor is co-resident with the CPU, the two can collaborate with less costly setup work. Some on-processor GPUs can operate on system memory in place. This saves the time required to copy buffers between two segments of the same memory, which benefits workloads that alternate between GPU- and CPU-friendly tasks. Otherwise, a developer is left wondering whether the performance they gain by offloading will be nullified, or even made negative, by the overhead. Hopefully DirectX 12 allows graphics vendors to skip operations that are irrelevant to their specific architecture, but it might not, and a representative from AMD was unable to clarify (granted, this was over Twitter on a weekend).
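The offload question above boils down to a break-even check: moving work to a second device only pays if the transfer overhead plus device time beats doing it in place. A toy model, with all timings invented for illustration:

```python
def offload_worthwhile(cpu_ms: float, gpu_ms: float,
                       copy_in_ms: float, copy_out_ms: float) -> bool:
    """True if offloading beats running on the CPU, overhead included."""
    return copy_in_ms + gpu_ms + copy_out_ms < cpu_ms

# Discrete GPU: fast compute, but pays bus copy costs both ways.
print(offload_worthwhile(cpu_ms=10.0, gpu_ms=2.0,
                         copy_in_ms=5.0, copy_out_ms=5.0))  # False

# Integrated GPU operating on system memory in place: no copies.
print(offload_worthwhile(cpu_ms=10.0, gpu_ms=6.0,
                         copy_in_ms=0.0, copy_out_ms=0.0))  # True
```

This is why zero-copy integrated GPUs can win even when they are slower at the task itself: the copy terms drop to zero, shifting the break-even point.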

The Final Result on Gaming

DirectX 12 will probably lead to several beautiful games, especially as developers optimize their asset creation process for it. We should be able to justify more objects with unique materials. This will make for more lively scenes and hopefully fewer person-hours of development for equivalent results. It could take a little while before we see production houses making big changes, except maybe parts of Ubisoft, so the quality might slowly ramp up over time.

As mentioned on last week's TWiCH, Microsoft presented a high-end technology and art demo from Square Enix. It was rendered at 4K, downsampled to 1080p, on four Titan X graphics cards in Implicit Multiadapter, which is the driver-controlled version, more similar to CrossFire and SLI than to something like OpenCL. I found the hair and feather cape to be exceptionally well done, although I didn't find anything else in the demo to be surprising. The hair was really the only thing that felt like a leap, the way the Unreal Engine 4 Elemental Demo did when it was shown at E3 2012 alongside the Unreal Editor walkthrough. Maybe I'm being too critical.

At the very least, we can put the doubt over what DirectX 12 will be to rest. We know every feature, and just a few small details elude us. While the specification is finalized, we are still waiting on content, samples, more than a few tools, and documentation; those are still in early access only. Microsoft is putting the finishing touches on them, or so they say.


May 4, 2015 | 09:01 PM - Posted by hosko

Sure Microsoft would have let Nvidia know the spec ahead of time, that's how the Titan X is also DX12 compatible. MS would want devices ready to go at the launch of W10.

May 5, 2015 | 12:08 AM - Posted by arbiter

AMD left themselves out of the matter. When AMD announced Mantle, they claimed "MS had NOTHING in the way of a next DirectX in the works." That was either a blatant lie by AMD, which is likely since there's no way AMD wouldn't know about DX12 being developed, or they genuinely didn't know, which is very doubtful unless they didn't want to work with MS on it.

May 5, 2015 | 08:03 AM - Posted by Coupe

Actually, given the time between Mantle being announced and MS announcing DX12, I would say that DX12 was a direct response to Mantle.

May 5, 2015 | 04:24 PM - Posted by chizow (not verified)

Actually, given the timelines we know, and the fact DX12 was demo'd just a few months after Mantle launched, I would say DX12 was announced earlier than Microsoft wanted to as a direct response to Mantle, but it certainly existed prior to Mantle.

-DICE's Andersson says it took them 2 years to co-develop Mantle, inspired directly by their work on the next-gen console APUs and their desire to achieve similar low-level performance optimizations.
http://battlelog.battlefield.com/bf4/news/view/bf4-mantle-live/

-Yet Microsoft was showing working DX12 console ports in just a few short months after Mantle announced, on Nvidia hardware at Build 2014.
http://blogs.nvidia.com/blog/2014/03/20/directx-12/

-Microsoft has over a decade of low level API experience, that began with their work in the original DirectXBOX that used a cut down version of Longhorn (Vista) kernel.

-Microsoft was actually caught demo'ing their next-gen XBox One prior to release in the summer of 2013 on GeForce hardware.
http://www.slashgear.com/xbox-one-e3-demos-were-played-on-windows-gaming...

-Demo after demo showing MS showcasing DX12 on Nvidia hardware.

All of this kind of points to the logical conclusion that Mantle probably just pissed Microsoft off and put AMD a bit behind the 8-ball for DX12 development stuck on the outside looking in.

One really has to wonder if all that time and money wasted on Mantle was worth it for AMD, and not spent better elsewhere. I guess we will see.

May 5, 2015 | 08:50 PM - Posted by BillDStrong (not verified)

Considering Mantle has become Vulkan, and AMD's current hardware sees the most advantage with these new APIs, I would say AMD considers those dollars well spent. They got a lot of good press, and they showed their ability to innovate in a highly constrained situation.

Now, Microsoft's bet only pays off if they get a large number of users to switch to Windows 10, whereas AMD's bet pays off either way. They win if Microsoft doesn't thanks to Vulkan, which will be available across most target platforms devs are interested in, and they win if Microsoft wins, thanks to the Xbox One hardware being powered by AMD, so DX12 has to be tailored to their cards to some extent.

May 6, 2015 | 07:08 AM - Posted by lantian (not verified)

Where do you people come up with this BS? It did not become Vulkan, it evaporated in front of your eyes, and now it looks like all the fanboys are dead set on making it look like Vulkan is Mantle. Are they really that butthurt that Mantle died? I mean, everyone with knowledge knew it was never gonna last; we already had Glide... If you feel so strongly, please give an example: in what way is Vulkan Mantle? Where did you even get that BS from?

May 6, 2015 | 09:05 AM - Posted by chizow (not verified)

Mantle is dead, they took some parts of it for Vulkan which will just be the next irrelevant API for desktop PCs like OpenGL and Mantle before it. And AMD's hardware sees the most advantage of these new APIs? Maybe for the dead-end Mantle, but its also obvious AMD is behind in their DX12 development as a result. Another unintended consequence of their gamble on Mantle, I guess.

http://www.anandtech.com/show/8962/the-directx-12-performance-preview-am...

Nvidia is 50% faster in DX12 Star Swarm, which is higher than normal, pointing to an inefficient AMD DX12 driver.

Microsoft's bet has already paid off, and sealed Mantle's fate simultaneously, when they announced all Win7 and Win8/.1 users would get a free upgrade to Windows 10. Did you miss that announcement? Shortly afterwards, AMD threw in their hand, probably figured they wasted enough time and money on Mantle, no need to keep burning cash on a dead-end API.

As we have seen though it looks like AMD has already lost on their bet. Mantle is dead, DX12 is reborn and AMD are behind on their DX drivers, again.

May 5, 2015 | 09:03 AM - Posted by Anonymous (not verified)

Not sure AMD was left out of anything. AMD's main push is APUs and HSA, and DX12 seems to be designed to make those more viable options for the average consumer. It gets all cores involved, allows better integration of the embedded GPU and a discrete card, and seems to set AMD up for better days moving forward with their custom parts business. I think Windows 10 and DX12 will be a major boost for AMD.

May 5, 2015 | 10:00 AM - Posted by funandjam

...or it could be that you left out an entirely different option on purpose in order to make AMD look bad, which is what you do very often. I'd say you do that 99.99% of the time. It isn't so much that you outright print fabrications based upon your ridiculous opinions, it's more that you spread misinformation by leaving out very valid points.

The option you left out on purpose is that AMD knew DX12 was being worked on, but M$ was dragging its feet yet again for one reason or another. What better way to get them moving than to introduce a seriously competing API?

May 5, 2015 | 04:32 PM - Posted by chizow (not verified)

Even if your supposition was true, do you think this was a worthwhile gambit by AMD? I mean they burned a LOT of valuable resources in terms of both engineering man hours and the flat sum paid to EA/DICE for Mantle support package. For what? What do they have to show for it other than an API that will just as likely be ignored on the PC as its predecessors (OpenGL + Mantle)?

I mean sure the idea of a low level API was sound, but to go the proprietary route (very anti-AMD to begin with) against a juggernaut like Microsoft in the PC gaming market. Certainly money that could've been better spent elsewhere.

May 5, 2015 | 10:01 PM - Posted by Anonymous (not verified)

What did AMD get? Well, they did affect the direction of the new APIs in a way that seems advantageous to them. Add that they may push that advantage even further once they launch their new architecture next year, given their head start. The new direction also seems to add tremendous value to their APUs if you consider their benchmarks against Intel when parallel processing is used, since, at least with DX12, it sees GPU assets instead of a discrete and an embedded GPU. AMD should own the budget and mid-range, and probably the top, considering the R300 series leaks.

So they have only struck major blows against their two main competitors. Money well spent.

May 6, 2015 | 09:12 AM - Posted by chizow (not verified)

Well, given these new low level APIs actually put even more emphasis on driver development to extract performance out of target hardware, I'd say that's not really in AMD's favor since producing high quality, and high performance drivers hasn't really been their strong suit.

Indeed, if they had just produced better DX11 drivers to begin with, Mantle might never have been necessary. But now it's too late; they pissed MS off, so now there's no veil of hardware abstraction for them to hide behind. They wanted closer to the metal, and now they've got it, but unfortunately for AMD, Nvidia has it too, and as we've seen time and again, Nvidia tends to write better DX drivers to extract more from their hardware.

Here's a good example, look at the awful Single-threaded vs. Multi-threaded DX performance for AMD. Confirms what many have suspected since DX11 launched, AMD's DX11 MT renderer was awful (shows basically no scaling over single-threaded).

http://www.anandtech.com/show/9112/exploring-dx12-3dmark-api-overhead-fe...

Sure DX12 will help them on the low end CPU/APU a little bit, but it helps EVERYONE just as much, if not moreso. Intel will still be faster in just about every CPU metric.

AMD hasn't struck a blow against two major competitors, they've empowered them, and now that they are on a level playing field, I have a feeling it will be no competition.

May 6, 2015 | 09:15 PM - Posted by Anonymous (not verified)

I wouldn't dwell on their previous driver work. I think Mantle shows they have a better grasp of what's needed. They built an architecture and API that seem to work well together. The drivers will come.

May 7, 2015 | 08:42 AM - Posted by Mark_GB

AMD, under Rory, was doing a lot of things wrong. Rory was a money guy, brought in to control costs, and that was what he focused on. He sent some incredibly talented engineers packing during his time to reduce costs. Then he hired Lisa Su. To this day, I believe that the AMD board hand-picked her, told Rory to hire her, and told him to listen to her once he had.

With Lisa Su onboard, there was a slow but steady list of very talented engineers and software designers hired. Some had been at AMD previously, many had not. But things very quietly began to change inside AMD. New projects were quietly started, and a plan for where they wanted to be in 2014-16 was developing. Some of those projects did not generate the interest that AMD anticipated and have now been killed. Skylake is one. Mantle as well, but probably not for the reasons most people thought. Zen has been pushed forward, and is now coming out ahead of the new AMD server chips. Zen is said to have industry-transforming technology. Let's hope it does. Intel has been lazy and sat on these technologies for years, and still has not even tried them out in a single chip. So if AMD can use those new technologies, and bring a fight back to Intel by doing so, everyone wins. Competition is an amazingly good thing for consumers.

I believe Lisa Su to be a brilliant lady. Extremely knowledgeable about the IT industry. Dr. Su has bachelor’s, master’s and doctorate degrees in electrical engineering.

Prior to joining AMD, Dr. Su served as senior vice president and general manager, Networking and Multimedia at Freescale Semiconductor, Inc., and was responsible for global strategy, marketing and engineering for the company’s embedded communications and applications processor business. Dr. Su joined Freescale in 2007 as chief technology officer, where she led the company’s technology roadmap and research and development efforts.

Dr. Su spent the previous 13 years at IBM in various engineering and business leadership positions, including vice president of the Semiconductor Research and Development Center responsible for the strategic direction of IBM’s silicon technologies, joint development alliances and semiconductor R&D operations. Prior to IBM, she was a member of the technical staff at Texas Instruments in the Semiconductor Process and Device Center (SPDC).

Sounds exactly like the kind of person AMD needed to survive. This lady has street cred!

If anybody can turn AMD into a growing, profitable company, I think it is her. But we shall see. Rather quickly too. Most of these projects she has been working on are due to come out next year. Some this year.

May 4, 2015 | 09:15 PM - Posted by Martin Trautvetter

Not a fan of all the blur in the Square Enix demo. Makes it seem like you're looking at this world through tired eyes.

May 5, 2015 | 12:16 AM - Posted by Scott Michaud

Obviously keep in mind that the stream is compressed, and the YouTube video twice-compressed.

May 4, 2015 | 09:26 PM - Posted by Anonymous (not verified)

It's a shame Nvidia cards are not fully compliant with the feature set of DX12. Nvidia advertised the 980 as fully DirectX 12, but it turns out the hardware only supports level 1. All the while, AMD GCN cards support level 3.

I expect my 980 to not have much longevity. Really mad at Nvidia for yet another blunder.

May 4, 2015 | 11:55 PM - Posted by Anonymous (not verified)

This is incorrect. Maxwell v2 (gtx 980 and titan x) are tier 2 resource binding level. GCN is tier 3. For feature level though Maxwell is 12.1 while GCN is 12.0 because GCN doesn't support conservative rasterization tier 1 and ROVs.

May 5, 2015 | 12:09 AM - Posted by arbiter

Go spread your AMD fan boy Lies somewhere else.

May 6, 2015 | 07:01 AM - Posted by nub (not verified)

lol that's good coming from you. The biggest nvidia fanboy ever made.

I'll send you a mirror. Make sure you look at it.

May 6, 2015 | 09:13 AM - Posted by chizow (not verified)

Except he's right, as usual the only people interested in spreading lies/BS are AMD fanboys. That's usually what happens when you don't have the product to back up what you're saying.

May 5, 2015 | 12:17 AM - Posted by Scott Michaud

They are fully compliant with DirectX 12.

May 5, 2015 | 09:09 PM - Posted by BillDStrong (not verified)

You are actually only talking about feature sets, not compliance. A large amount of DX11 hardware is compliant, they just don't contain every feature. This is fine, as they will all benefit to some extent from the main benefits of DX12.

The only issue is how soon will games and software take advantage of those new features.

We really need to start purchasing hardware for five years down the line, since that is when software will really catch up.

May 4, 2015 | 10:13 PM - Posted by Anonymous (not verified)

People are going to want to know if 4 or 6 cores is a better option...

May 5, 2015 | 12:41 AM - Posted by Anonymous (not verified)

So still behind AMD tier 2 vs AMD tier 3 for a new card? WOW I'm about to throw this GTX 980 out the window.

May 5, 2015 | 01:20 AM - Posted by Anonymous (not verified)

Sheer AMD fanboi ignorance.

May 7, 2015 | 08:50 AM - Posted by Mark_GB

Both companies will have new video chips that are fully DX12 compatible long before the first game that is DX12 compatible comes out.

So this crap you are arguing about is just that.

May 5, 2015 | 01:33 AM - Posted by His name is Rob...

I can hear the clattering sounds of 980's hitting the pavement because of no "dedicated atomic counter" as we speak.

BTW that Squenix DX12 demo is on a Nvidia tier 2 level, not exactly hideous is it?

May 5, 2015 | 02:18 AM - Posted by gloomfrost

really excited to try multi gpu rendering.

May 5, 2015 | 03:28 AM - Posted by Shadowarez (not verified)

It'd be interesting to see if DX12 can be a holy grail. With all these promises it's getting a little far-fetched; I mean, they're fixing issues nearly two decades old with software? Like VRAM pooling, better performance in SLI, and better scaling of multiple GPUs: it's taken this long for a software fix, really? Take all these promises with a factory-sized grain of salt, seriously. Wait and see if they can deliver to people who don't spend $25-35 grand on computers to run tech demos.

May 5, 2015 | 05:12 AM - Posted by AMDfanboi (not verified)

Meanwhile AMD are still selling their 4-year-old cards. If they don't announce the R9 390 series at E3, it's game over.

May 5, 2015 | 06:45 AM - Posted by R-TardAboveMe (not verified)

Why and when has there ever been a GPU unveiling at E3? Last I checked the R9 300 series was being unveiled at Computex in June...http://www.guru3d.com/news-story/amd-unofficially-confirms-radeon-flagship-%E2%80%93-r9-390x-launches-at-computex.html

Patience is a virtue grasshopper.

May 5, 2015 | 09:26 AM - Posted by Anonymous (not verified)

If the leaked specs are to be believed, it's game over for Titan: $2-300 less with more power and full DX12 compatibility. And that is before they move to a new architecture next year. People seem intent on writing AMD off. I think they are ignoring the big picture. AMD is about to explode in the next few years, contrary to all implosion theories.

May 5, 2015 | 09:53 AM - Posted by Anonymous (not verified)

If AMD's 4-year-old GPUs are still able to compete well enough with Nvidia's new GPUs that AMD can keep selling them, how is that AMD's failure and not Nvidia's?

May 5, 2015 | 01:22 PM - Posted by Anonymous (not verified)

True...true...

May 5, 2015 | 04:41 PM - Posted by chizow (not verified)

Curious as to what your definition of "compete well enough" is, given AMD is getting slaughtered in the marketplace.

May 5, 2015 | 09:17 PM - Posted by BillDStrong (not verified)

Performance and performance/dollar?

May 6, 2015 | 09:14 AM - Posted by chizow (not verified)

Which means absolutely nothing to their sales numbers and bottom line.

May 7, 2015 | 08:12 AM - Posted by renz (not verified)

being the king of performance/dollar doesn't mean they are doing well in business.

May 5, 2015 | 10:14 AM - Posted by Anonymous (not verified)

M$ should have had multi-adaptor as part of its OS a very long time ago, instead of spending all its resources ruining its UI. Why could they ever call Windows an OS if it could not handle all the hardware "certified" to work with Windows? You would never find any mainframe OS that could not run all of its hardware on an always-on and usable basis. This not being able to use a discrete GPU alongside an integrated GPU does not have any basis in reality, and M$ does not have any excuse other than pure laziness and incompetence for not producing a competent OS product with the ability to utilize all of the hardware processing resources. If anyone should have the responsibility for making multi-adaptor work with any graphics API, it's the maker of the computer's OS. For sure, multi-adaptor is definitely an HSA-aware feature for any HSA-aware OS. M$, Linux, or BSD-based OSes should be able to always utilize any hardware processing resources for whatever tasks can be accomplished with the processing resources available: GPU, CPU, or other.

It does not matter what limitations are present in any graphics API; it is the responsibility of the OS to load balance and abstract away any differences in GPU, CPU, or other processing hardware on any platform, and the OS is not a real OS until all of the processing resources can be utilized on an always-on, always-available-for-processing basis. M$ could have made multi-adaptor part of its Windows driver model, and any GPU, CPU, or other hardware that did not comply with multi-adaptor should never have been certified to work with the OS in the first place! There is no excuse for any OS not being able to utilize all of its hardware all of the time, and ever since the introduction of GPUs (discrete or integrated), owners of the hardware should have had multi-adaptor support in the OS, any OS.

May 5, 2015 | 11:11 AM - Posted by Searching4Sasquatch (not verified)

Hey Scott, does AMD support DirectX 12 Feature Level 12_1? No? Only the DirectX 12 API? Well Maxwell supports both.

May 5, 2015 | 03:16 PM - Posted by Scott Michaud

That's true.

May 5, 2015 | 01:05 PM - Posted by BillDStrong (not verified)

It looks like the requirements for Implicit and Explicit Linked would be the same. They are two sides of the same idea; the only difference is who implements the algorithm. Essentially, Implicit is akin to Crossfire/SLI, where the vendor's driver controlled the functionality, and Explicit Linked is akin to Mantle's control over the algorithmic split, but DX12 has the same requirements as SLI or Crossfire in terms of hardware interop, without the AMD special case of APU Crossfire.

Explicit Unlinked is the most interesting, since this is essentially that AMD special case, but with more options. After all, how many of us already have Intel graphics built in? Using that with AMD and Nvidia hardware can be quite useful. This can give HSA benefits, since the CPU and GPU can work on time-sensitive small tasks, and the bulky heavy stuff can get thrown to the heavy hitters.

May 5, 2015 | 01:25 PM - Posted by Mountainlifter

I have a question that cannot be answered yet. Will DX12 Implicit have the same consistency/performance trade-off as SLI/CrossFire? If so, will DX12 Linked Explicit alleviate that? I hope all those who implement these crazy tiers of multi-adapter options will keep in mind that more frames is not necessarily a great thing if it ends up with dropped frames and stuttering, like how, two years ago, AMD implemented CrossFire without frame metering until many outlets got onto the frame-time bandwagon and pointed it out.

May 5, 2015 | 03:13 PM - Posted by Scott Michaud

Well that's the interesting thing. Explicit Multiadapter (including Unlinked) gives the developer control. You are relying upon them to do the right thing, but they also have deep access to their own engine to do the right thing.

For good developers, or those who use good engines, this will probably be a lot better than trying to make assumptions and implement it at the driver level. For others, it might be best to let NVIDIA, AMD, and maybe others in the future do it for you with Implicit.

And you will get some tradeoffs too. A developer might believe that their game is slow-paced enough to pipe multiple frames together. This will increase the frame rate and even lower the chance of stutter... but it will also increase input lag. Your input is no longer affecting the next frame, it's modifying the one after that, or even the one after that one. They could make other decisions, with other downsides, too.

May 7, 2015 | 01:56 AM - Posted by Relayer (not verified)

Considering the similarities in the DX12 programming guide, I'd say they took Mantle, adapted it for multi vendor and called it DX12.
http://i169.photobucket.com/albums/u233/Rocketrod6a/Mantle%20vs%20DX12%2...

May 7, 2015 | 08:51 PM - Posted by Scott Michaud

The Programming Guide moved addresses btw: it's now here.

May 13, 2015 | 12:07 AM - Posted by Zomgrolf (not verified)

Yeah, that does look almost like a carbon copy of the Mantle spec.

Also if you scroll the video to around 9:18 you can hear McMullen say: "It's exactly what we were hoping for when we started Direct3D 12 two years ago" -- which means they started sometime in 2013.

Microsoft most likely knew about Mantle prior to the public announcement in September 2013, so I wouldn't be the least bit surprised if the whole D3D12 thing was really started in response to the work AMD had done on Mantle.

May 20, 2015 | 01:43 PM - Posted by drbaltazar (not verified)

Could anyone develop a tool to test the max just-in-time capability of interrupts? This is getting ridiculous. They upgrade everything but ignore the base. MSI/X, or whatever it's called, in Linux, Apple, Android, and Chrome is all controlled by the interrupt's just-in-time capability, gees, not by its max.
