Intel Shows Interest in Mantle?

Manufacturer: Intel

When Magma Freezes Over...

Intel has confirmed that it approached AMD about access to the Mantle API. The discussion, despite being clearly labeled as "an experiment" by an Intel spokesperson, was initiated by Intel -- not AMD. According to AMD's Gaming Scientist, Richard Huddy, via PCWorld, AMD's response was, "Give us a month or two" and "we'll go into the 1.0 phase sometime this year" -- a year which only has about five months left in it. When the API reaches 1.0, anyone who wants to participate (including hardware vendors) will be granted access.


AMD inside Intel Inside???

I do wonder why Intel would care, though. Intel has the fastest per-thread processors, and its GPUs are not known to be held back by API call bottlenecks, either. That is not to say, however, that I cannot see any reason at all...

Read on to see why I think Intel might be interested, and what this means for the industry.

The First Driving Driver

The first reason I can think of is that Mantle is expected to be significantly easier to write drivers for than DirectX 11 or OpenGL. The API is described as not holding the game developer's hand, even if that means dragging them, kicking and screaming, somewhere they don't want to go. I may have overextended my analogy a little, but basically Mantle pushes responsibility onto the developers to manage things like multi-GPU configurations and the specifics of memory access.


If Intel can push all of this complexity over to the game developer, it will have less tweaking to do in order to reach equivalent performance. That is good for Intel, because driver tuning is not an area where I would consider it an industry leader. There are two key areas where Mantle does not give a free lunch: hardware design and writing efficient compilers (to convert shader code into efficient machine language for a specific GPU).

I consider these (hardware design and compiler engineering) to be strengths of Intel. Of course.

And yes, I know that all of the major GPU vendors have acquired great compiler engineers. Some even go to the extent of manually replacing whole, compiled shaders with their own, equivalent ones to get that extra bit of speed. The point is that, I believe, Intel would much prefer to rely on those strengths. I do not see them wanting to make a driver that is robust and stable, no matter what a developer might throw at it, while still being efficient.

Beyond that, almost every man-hour spent on Intel's strengths (again, hardware and compilers) will carry over to DirectX. Both APIs use the same shading language and, according to my interview with Guennadi Riguer, the chief architect of Mantle at AMD, both drivers (Mantle and DirectX) use one shared compiler. Of course, a fast, efficient, feature-complete GPU core will obviously help, regardless of the API.


Intel Atom-powered tablet, from Ryan's review.

A Second Reason and How It Affects the Industry

But then we get to the second, less obvious reason: the mobile market. If Intel gets in early, Mantle could help performance and battery life on cellphones and tablets by balancing work between CPU cores, letting the processor spend more time asleep (saving power). Mantle's current leading competitor is DirectX 12 (ignoring Metal, because Apple makes its own processors and will do what it wants). OpenGL is not, at least yet, a suitable competitor, except for highly skilled developers who know its ins and outs, including vendor-specific extensions. Clearly, putting your hope in DirectX 12 is betting on Microsoft's mobile market share.

Mantle doesn't seem too crazy now, at least in comparison.

On the same topic, it might also be easier for developers to create applications, especially games, because the API is much closer to DirectX than OpenGL is (again, including the shading language, which defines every material in a given scene). While it seems a little odd to port between desktop and mobile, at least the same shader library could be used, potentially unmodified if the mobile GPU is fast enough.


If Intel straight-up adopts Mantle, especially if it offers some compelling iGPUs alongside it, that puts a few more eyes on NVIDIA. Currently, it would be safe to assume that NVIDIA has little to no plans to support any API that is controlled and guided by a direct competitor. The company exerts as much control as it possibly can over how it delivers an experience to its users. Basically, they (habitually) want to be the ones to make their customers happy. On the other hand, if NVIDIA does adopt Mantle, that would leave us with a seemingly well-designed API, with significantly less driver tuning, with compatibility for DirectX-style materials, and with support for Windows, Linux, and other platforms... that is owned by AMD.

I should make another bowl of popcorn...

July 3, 2014 | 07:49 PM - Posted by jackalopeater (not verified)

I've already got my popcorn going. This is getting interesting.

July 3, 2014 | 08:59 PM - Posted by johnc (not verified)

This is probably the most remarkably over-marketed thing I've ever seen in the tech industry. But people just lap it up. It actually serves as a kind of fascinating social experiment.

July 3, 2014 | 09:07 PM - Posted by AMDBumLover (not verified)

It is exactly what the industry needs! unless you can tell me of another technology that will move the industry further?

July 4, 2014 | 02:11 AM - Posted by collie

It is or it is not a secret that Intel would like to end their old as fuck friendship with M$. 80286 processors worked perfectly with 80286 DOS (3.30) and the relationship has grown from there, but Intel is starting to get sick of the M$ Windows/DirectX ecosystem. They want the Android market, they want the Apple OS market, they want the Steam OS market. If Mantle is a viable alternative to DirectX, then Intel would obviously be all for it. Intel wants to sell chips, they aren't in this to make friends. Games make customers, customers need systems, systems sell processors, 80X86x64 is still (for now) the most powerful out there. M$ only wants to use whatever means necessary to be the top OS out there, Intel just wants the money. Go with the money, the money wants what you want.

July 4, 2014 | 03:23 AM - Posted by JohnGR

"AMD inside Intel Inside???"

Yeap. For years. It's called X86-64.

July 4, 2014 | 04:14 PM - Posted by Scott Michaud

And FMA and a few others. Also, I appreciate that you called it "x86-64" rather than "x64". I was picky about that.

July 4, 2014 | 05:56 AM - Posted by Anonymous (not verified)

Yeah but how about letting games utilize the Intel iGPU that almost everybody with a recent Intel chip already has, in an asymmetric dual-GPU way?

Mantle allows for this; the developers only need to take this kind of configuration into account. Since iGPUs usually have much faster access to system RAM than discrete GPUs, this could speed up some game effects considerably.

July 4, 2014 | 02:47 PM - Posted by Scott Michaud

You can do that now, with OpenCL. But yes, Mantle might make it a bit easier than dragging in a second API, especially for commercial products (vs. tech demos).

July 5, 2014 | 04:46 AM - Posted by Anonymous (not verified)

OpenCL is not for graphics, unless you implement that yourself that is... And secondly, as far as I know, Intel's OpenCL implementation runs (mostly) on the CPU side but is quite fast.

What I had in mind is something completely different. Such as advanced post-processing effects or a whole bunch of other useful things you could do with a second accelerator.

But yes, if a game would use DirectCompute / OpenCL / C++ AMP / GPGPU, some of the graphics work could be moved to the iGPU. Why not compute on the dGPU? Because of the reason above: compute on Intel doesn't utilize much of the iGPU, and we want to offload as much work as possible to run as efficiently as possible.

July 5, 2014 | 03:09 PM - Posted by Scott Michaud

Intel lets you choose whether to run on the x86 cores, or on the iGPU. Moreover, with the iGPU selected, CPU usage is minimal (similar to when my GeForce 670 was selected) but GPU-Z notes heavy GPU utilization on the Intel integrated graphics. So, while I am not an Intel driver engineer, I am pretty certain that targeting the Intel iGPU does not load the CPU (apart from driver overhead).

I think the confusion comes from Sandy Bridge not having an iGPU OpenCL driver, only a CPU OpenCL driver. Ivy Bridge processors (and later) had both, at least on Windows.

July 4, 2014 | 06:21 AM - Posted by Anonymous (not verified)

Intel uses an IP cross-license from Nvidia to make their IGP. This might be a way to close the gap with AMD's APUs.

If Intel wants to continue competing down the road against the ARM offerings that are getting stronger, it would do well for them to improve their iGPU any way they can or be left out.

July 4, 2014 | 02:43 PM - Posted by Scott Michaud

It's only a patent cross-license such that Intel can develop their own GPUs without fear of an NVIDIA lawsuit. The actual technology is developed by Intel, based on technology that they acquired, not licensed, from ZiiLabs (which is owned by Creative).

July 4, 2014 | 09:28 AM - Posted by Anonymous (not verified)

Wouldn't it be interesting if certain functions of Mantle found their way into Intel's IGP, ala Quick Sync, to all but eliminate those overhead processes?

July 4, 2014 | 10:00 AM - Posted by Anonymous (not verified)

Share nothing with those Chip Pimps at Intel, they are a monopoly and a technology milker! The makers of gaming PCs have Power8/Power CPUs to look forward to, as the Power IP will be licensed and used by many, and hopefully both Nvidia and AMD will license the Power ISA/IP and make some really high performance gaming CPUs. Power8 on the High end, and ARMv8 on the low to middle! AMD, of course, still has an x86 license, but x86 is not the only game in town anymore! Do not let the Intel clean room suit dancers and marketing monkeys hypnotize you, and don't drink the Intel Kool Aid!

NOTE: Power/Power8 is not PowerPC, Power8 eats Xeon for lunch!

July 4, 2014 | 10:31 AM - Posted by ZoA (not verified)

PCper, you should check Mantle performance on Battlefield Hardline. From what I have seen, from some sources, it improves performance quite consistently by over 50%, even on computers with reasonably powerful Intel chips. It seems Mantle has significantly improved since its original premiere in Battlefield 4.

July 4, 2014 | 11:41 AM - Posted by Searching4Sasquatch (not verified)

Such an AMD fanboy.

Geoff understands what Intel did here. You clearly do not.

July 4, 2014 | 11:54 AM - Posted by Anonymous (not verified)

To the guy above, I read the same thing there as I did here. Derp.

July 4, 2014 | 01:01 PM - Posted by Anonymous (not verified)

I think the guys at techreport are actually biased towards Nvidia. They love their GSYNC and have not complained about or argued even a bit against Nvidia's GameWorks nonsense, which serves no purpose other than hurting performance on AMD cards.

I think AMD has been doing a much better job with their open-source initiatives such as FreeSync, OpenCL and soon-to-be Mantle, and it would be a big step forward in the gaming industry if all hardware and software players adopt an open, standard, efficient, OS-agnostic and dynamic API.

July 5, 2014 | 05:01 AM - Posted by renz (not verified)

people keep talking about GameWorks hurting AMD performance but so far there is no actual proof of that. AMD says Nvidia HairWorks hurts performance on their cards but they never show the actual proof either. they just say that their internal test showed that. why did they not dare to show the proof to the public? and ultimately it depends on the game developer to use the Nvidia GameWorks library. in the case of Watch Dogs, the HBAO+ feature from GameWorks works on both AMD and Nvidia cards, and as shown by [H]'s test there is no performance hit running HBAO+ on AMD cards. even if HBAO+ did not work on AMD cards, the game still offers its own AO implementation. to me what really hurts the industry (as much as it helps it) are programs like TWIMTBP and Gaming Evolved. they are more likely the reason why developers refuse to accept suggestions from another hardware vendor when they are already doing a partnership with their direct competitor. it happens with both TWIMTBP and Gaming Evolved titles.

July 4, 2014 | 07:32 PM - Posted by Anonymous (not verified)

["On the other hand, if NVIDIA does adopt Mantle, that would leave us with a seemingly well-designed API, with significantly less driver tuning, with compatibility for DirectX-style materials, and with support for Windows, Linux, and other platforms... that is owned by AMD."]

What is it, a Crap fanboy comment? huahuhuahuhua

July 4, 2014 | 10:43 PM - Posted by Scott Michaud

The last five words were not meant to be positive (not very negative, either, but a counter-point none-the-less).

July 5, 2014 | 05:10 AM - Posted by renz (not verified)

AMD keeps touting Mantle being open bla bla bla, but when other vendors ask for the spec they outright deny the request. AFAIK Intel has asked several times, yet AMD turned down each request with the excuse that Mantle is still in beta. what kind of beta is that when there are already 3 games out there using Mantle? anyway, is there any plan to open Mantle up so the future spec of Mantle will be determined the same way as OpenGL? what open body will maintain it?

July 28, 2014 | 01:15 AM - Posted by Anonymous (not verified)

AMD has already stated several times that they wanted to work with a limited number of people until the tools and API reached a certain level of usability.

Regardless of any games being out, Mantle is very much in its infancy.

So giving Intel an unfinished product which won't truly represent its potential isn't doing Intel any great benefit. Intel already knew AMD had a timetable, so they shouldn't have been surprised anyway.