NVIDIA and AMD Fight over NVIDIA GameWorks Program: Devil in the Details

Author: Ryan Shrout

The AMD Argument

Earlier this week, a story was posted on a Forbes.com blog that dove into NVIDIA GameWorks and how it was doing a disservice not just to the latest Ubisoft title Watch_Dogs but to PC gamers in general. Using quotes from AMD directly, the author claims that NVIDIA is actively engaging in methods to prevent game developers from optimizing games for AMD graphics hardware. This is an incredibly bold statement and one that I hope AMD is not making lightly. Here is a quote from the story:

Gameworks represents a clear and present threat to gamers by deliberately crippling performance on AMD products (40% of the market) to widen the margin in favor of NVIDIA products. . . . Participation in the Gameworks program often precludes the developer from accepting AMD suggestions that would improve performance directly in the game code—the most desirable form of optimization.

The example cited in the Forbes story is the recently released Watch_Dogs title, which appears to show favoritism towards NVIDIA GPUs, with the performance of the GTX 770 ($369) coming close to that of a Radeon R9 290X ($549).

It's evident that Watch Dogs is optimized for Nvidia hardware but it's staggering just how un-optimized it is on AMD hardware.


Watch_Dogs is the latest GameWorks title released this week.

I decided to get in touch with AMD directly to see exactly what stance the company was attempting to take with these kinds of claims. No surprise, AMD was just as forward with me as they appeared to be in the Forbes story originally.

The AMD Stance

Central to AMD’s latest annoyance with the competition is the NVIDIA GameWorks program. First unveiled last October during a press event in Montreal, GameWorks combines several NVIDIA-built engine functions into libraries that game developers can access to build advanced features into their games. NVIDIA’s website claims that GameWorks is “easy to integrate into games” and includes tutorials and tools to help quickly generate content with the software set. The GameWorks suite includes VisualFX, which offers rendering solutions like HBAO+, TXAA, Depth of Field, FaceWorks, HairWorks and more. Physics tools include the obvious, PhysX, while also adding clothing, destruction, particles and more.


However, there are some major differences between GameWorks and previous vendor-supplied code and software examples. First, AMD and several game developers claim that GameWorks is a “black box,” with only API calls available to access its functionality. The term “black box” indicates that little is known about what is going on inside the GameWorks libraries themselves, because GameWorks is provided as a set of prebuilt libraries rather than as a collection of example code (though NVIDIA disputes this later in our story). Because of that black box status, game developers are unable to diagnose buggy or slow code paths when using GameWorks, and that can lead to issues on different hardware.
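To make the “black box” complaint concrete, here is a minimal, purely hypothetical sketch of what integrating an opaque middleware effect looks like from the game developer’s side. The names (AOContext, AO_RenderAO, and so on) are invented for illustration and are not the actual GameWorks API; the point is only the shape of the integration: the engine sees an opaque handle and a handful of entry points, while the shaders and algorithms behind them ship in a prebuilt library the developer cannot inspect or tune.

```cpp
// Hypothetical sketch only: these names are invented and are NOT the real
// GameWorks API. It shows the *shape* of a "black box" integration.
#include <cstdio>

// --- What a vendor-supplied middleware header exposes to the developer ----
struct AOContext;                                    // opaque type, no members visible
AOContext* AO_CreateContext();
void       AO_SetRadius(AOContext* ctx, float meters);
void       AO_RenderAO(AOContext* ctx);              // real SDKs would also take GPU resources
void       AO_DestroyContext(AOContext* ctx);

// --- What ships inside the prebuilt .lib (stubbed here so the sketch builds)
// In a real black-box library this is the part the developer never sees,
// which is exactly what makes per-vendor tuning or debugging impossible.
struct AOContext { float radius = 1.0f; };
AOContext* AO_CreateContext()                  { return new AOContext(); }
void       AO_SetRadius(AOContext* c, float m) { c->radius = m; }
void       AO_RenderAO(AOContext* c)           { std::printf("AO pass, radius %.2f\n", c->radius); }
void       AO_DestroyContext(AOContext* c)     { delete c; }

// --- Engine-side integration: everything happens through those API calls ---
int main()
{
    AOContext* ao = AO_CreateContext();
    AO_SetRadius(ao, 1.5f);
    AO_RenderAO(ao);   // if this is slow on a given GPU, the engine can measure
                       // *that* it is slow, but not see *why* or rewrite it
    AO_DestroyContext(ao);
    return 0;
}
```

Contrast this with a published code sample, where the shader and C++ source sit in the developer’s own tree and can be profiled, modified, or re-tuned for a specific GPU — the older model described below.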


A section of AMD's developer website with example code for download.

You might already be wondering how this is different from something like PhysX. Looking at GPU-accelerated PhysX alone, that particular plugin runs ONLY on NVIDIA hardware, and adding it or changing its implementation does not negatively affect the performance of the AMD or non-PhysX code path. Many of the GameWorks toolsets (basically everything except PhysX and TXAA), though, do in fact run on both AMD and NVIDIA hardware if they are implemented by the developer. That means visual effects and code built directly by NVIDIA are being run on AMD GPUs in GameWorks-enabled titles like Watch_Dogs. You can see immediately why this could raise some eyebrows inside AMD and amongst the most suspicious gamers.
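Here is a simplified, hypothetical sketch (invented function names, not any real SDK) of why the two cases differ: GPU-accelerated physics is explicitly gated on the vendor and falls back to a CPU path everywhere else, so enabling it leaves the non-NVIDIA rendering path untouched, while a VisualFX-style effect is ordinary graphics work submitted to whichever GPU happens to be installed.

```cpp
// Hypothetical illustration only -- invented names, not a real SDK.
#include <cstdio>

enum class GpuVendor { Nvidia, Amd, Other };

// Assume the engine already knows the adapter vendor (a real engine would
// read it from the graphics API's adapter description); hard-coded here.
GpuVendor DetectVendor() { return GpuVendor::Amd; }

void RunGpuPhysics() { std::puts("particles simulated on the GPU"); }
void RunCpuPhysics() { std::puts("particles simulated on the CPU"); }

// Stand-in for an effect whose shaders ship inside a prebuilt library but
// execute as normal draw calls on any vendor's hardware.
void RenderLibraryEffect() { std::puts("vendor-written shaders run on whatever GPU is installed"); }

int main()
{
    GpuVendor vendor = DetectVendor();

    // GPU physics: NVIDIA-only path; everyone else gets the CPU fallback,
    // so the non-NVIDIA code path is left alone.
    if (vendor == GpuVendor::Nvidia) RunGpuPhysics();
    else                             RunCpuPhysics();

    // Library-provided visual effect: no vendor branch. The same opaque
    // code runs on NVIDIA and AMD GPUs alike, which is the part AMD says
    // it cannot inspect or tune.
    RenderLibraryEffect();
    return 0;
}
```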

This is different from what has been the norm for many years. In the past, both AMD and NVIDIA have posted code examples on their websites to demonstrate new ways of coding shadows, ambient occlusion and other rendering techniques. These could be viewed, edited and lifted by any and all game developers and implemented into their game engines or into middleware applications. GameWorks takes this quite a bit further by essentially building out a middleware package of its own and licensing it to developers.

The obvious concern is that by integrating GameWorks with this “black box” development style, NVIDIA could take the opportunity to artificially deflate the performance of AMD graphics cards in favor of GeForce options. That would be bad for AMD, bad for AMD users and bad for the community as a whole; I think we can all agree on that. AMD points to Watch_Dogs, and previous GameWorks titles Batman: Arkham Origins, Call of Duty: Ghosts and Assassin’s Creed IV: Black Flag, as evidence. More interestingly, though, AMD was able to cite a specific comparison between its own TressFX hair library and HairWorks, part of the GameWorks library set. When using TressFX, both AMD and NVIDIA hardware perform nearly identically, even though the code was built and developed by AMD. Using HairWorks, though, according to numbers that AMD provided, AMD hardware performs 6-7x slower than comparable NVIDIA hardware. AMD says that because TressFX was publicly posted and could be optimized by developers and by NVIDIA, its solution provides a better result for the gaming community as a whole. Keep in mind that these numbers come from internal testing, not from a real-world game.


NVIDIA's updated Developer site.

AMD has made other claims to support its theory about the negative impact of GameWorks. One is that NVIDIA’s previously available DirectX 11 code samples were removed from the NVIDIA developer site, which would indicate a desire to move away from open samples and toward GameWorks exclusively. But, as we show on the following page, these code samples were not removed; they were simply relocated to a different section of NVIDIA's developer site.

With all of this information it would be easy to see why stories like the one at Forbes found their way across the Internet. However, as I found out after talking with NVIDIA as well, there is quite a bit more to the story.

May 28, 2014 | 06:01 PM - Posted by NVIDIA FTW As always! (not verified)

AMD,
Grabbing at straws and sounding like the ill-fated stepchild.

Dear AMD,

Sack up and do something? Perhaps release "new" and "powerful" tech at the same time as Nvidia instead of being two steps behind them.

"freesync"

Again, another AMD ploy to sound like heroes and claim they actually did something... wrong again.

OK, now all you poor AMD kids, start up your haterade engines and let it fly.

May 28, 2014 | 06:11 PM - Posted by amdBumLover (not verified)

go buy a titanz meanwhile i'll just get 2 295x2s...

May 28, 2014 | 06:12 PM - Posted by arbiter

Toss on another $500+ to get a PSU that can power those cards.

May 28, 2014 | 06:57 PM - Posted by Anonymous (not verified)

Why would you need $500 for a PSU?
You know a single one runs fine on an 850 watt PSU?

May 28, 2014 | 07:15 PM - Posted by arbiter

"i'll just get 2 295x2s..." <-- read what he said

May 30, 2014 | 12:08 PM - Posted by ArcRendition

Ummm... you mean... $159? Yep, that'll buy you a brand new Gold rated 1300 watt EVGA SuperNova power supply on Newegg RIGHT NOW after a small $35 rebate! Even a 1500 watt brand name Silver rated PSU can be had for just $299... I suppose you COULD spend $449 if you wanted to, but why?

SOURCE: http://www.newegg.com/Product/Product.aspx?gclid=CjgKEAjw2KCcBRCQ_6mztcu...

June 3, 2014 | 12:24 AM - Posted by nobodyspecial (not verified)

You are NOT the target market for the Titan Z. People who use pro apps but can forgo ECC and full driver support from NV ARE the target.

Check the price of a Tesla K20 and you'll find it costs MORE for half the cores. Check the price of a Quadro 6000 and you'll find a pair of those will set you back $10,000 for the same core count.

You are complaining from an ignorant gamer's view of the product. It also runs at 375W instead of 500W for the competition. These will fly off the shelves to people looking at the two BILLS I listed above.

http://www.newegg.com/Product/Product.aspx?Item=N82E16814132010&cm_re=te...
$3200 for less than half the cores, with 3.52 TFLOPS SP / 1.17 TFLOPS DP of compute and 5GB of memory, while the Titan Z has 2.66 TFLOPS DP and 12GB.

What do they say this is USED for?:
CFD, CAE, financial computing, computational chemistry and physics, data analytics, satellite imaging, weather modeling

http://www.newegg.com/Product/Product.aspx?Item=N82E16814133494&cm_re=qu...
$5000 for ONE. You'll need two to match the TitanZ. Get it? $10000.

http://www.fool.com/investing/general/2014/06/02/why-is-nvidia-charging-...

Also note it's nearly twice the FirePro S10000, which offers 1.48 TFLOPS DP with 6GB and also costs $3000. Again, it takes two to catch the Titan Z, which also draws far fewer watts. The Titan Z is for DP people who happen to game on the side, not for gamers who have no idea what DP is for.

May 28, 2014 | 06:29 PM - Posted by Anonymous (not verified)

The fanboyism is off the charts.
If you understood how adaptive sync (what FreeSync has become) and G-Sync work, you'd realise how wrong you are. The very fact that VESA has adopted adaptive sync as part of the DisplayPort spec shows how important that little FreeSync demo was. AMD came along with the tongue-in-cheek name FreeSync because it is technology that already existed (as a power-saving feature in notebooks) and can be implemented independent of the GPU, while NVidia was touting it as completely revolutionary and requiring proprietary hardware on particular graphics cards.

Not a fanboy, just someone who knows how to objectively analyse what's happening, unlike some people.

May 28, 2014 | 06:31 PM - Posted by arbiter

We have seen how well G-Sync works; we have yet to see how well FreeSync works or will work. With all open standards like this, it won't work as well as the closed one, since it has to allow for a lot.

May 28, 2014 | 06:35 PM - Posted by Anonymous (not verified)

AMD had a demo of FreeSync at Computex last year. The fact that VESA is adopting it is big too; it means they're convinced enough to spend time implementing it.

All the technology does is allow the GPU to control the VBLANK interval; it's nothing that will be particularly better on any particular card or that will benefit from a proprietary solution.

May 29, 2014 | 11:30 AM - Posted by renz (not verified)

I believe what people want to see most is how FreeSync runs real games. I'm not taking sides here. When Nvidia came up with G-Sync they showed a real demo running real games; that's how they convinced people that the tech is real. For AMD's part, they only showed that windmill video. Yes, no external monitor exists yet that runs FreeSync, but why can't AMD show real games under FreeSync using the same demo unit they showed us? We want to know how AMD's implementation will deal with screen tearing and input latency at the same time.

May 29, 2014 | 02:30 PM - Posted by Gunbuster

That demo did not show "freesync". It showed a static refresh rate matched to framerate.

AMD has not shown any demo of working freesync.

May 28, 2014 | 07:00 PM - Posted by Anonymous (not verified)

So VESA are incompetent and you are the pro? If they added it, it's because it works well. And why jump the gun anyway? Can't you wait a couple more months and judge, or are you too hasty to hate?

May 29, 2014 | 12:23 AM - Posted by wujj123456

> With all open standards like this, it won't work as well as the closed one

Except you should realize VESA is not just another open standards body. It's the body that writes "ALL" display standards. Exclude all VESA standards and none of your displays, from PC to phone, will work. That's why FreeSync getting in is a huge thing. It's optional in DP 1.2a, which probably means it will be mandatory in DP 1.3.

Good luck getting G-Sync into any displays afterwards.

May 29, 2014 | 02:25 PM - Posted by Wendigo (not verified)

Ok, FreeSync is a good thing, but you must remember that the true equivalent of G-Sync isn't FreeSync alone.

A FreeSync monitor must be combined with triple-buffer middleware or third-party software to do the equivalent work of a G-Sync module/monitor.

It's quite probable that the FreeSync + triple buffer combination won't work as well as a G-Sync solution.

June 1, 2014 | 09:16 PM - Posted by StewartGraham (not verified)

That's pure speculation.

Also, I've heard nothing about any kind of software or buffer interface. Can you please cite a reliable source for this information?

June 3, 2014 | 12:43 AM - Posted by nobodyspecial (not verified)

Let me know when AMD gets around the SCALER problem that NV ran into, which caused them to make this nasty, terribly expensive FIX called G-Sync. They stated they made it because scalers couldn't do the job. That's so terrible of them, isn't it? I mean, what kind of company goes around the jerks holding up the train? Oh, the good kind. Did you complain when AMD came up with AMD64 when Intel wouldn't until forced? We'd probably be on Itanic right now if AMD hadn't.

FreeSync will cost scaler makers R&D, and then it becomes, ummm, ChargeSync? NOT-so-FREEsync? Whatever, and that's if they can get them to do it at all. Not to mention we have to see it work, as G-Sync is pretty proven tech and everyone liked it. They won't do the R&D for free and will pass whatever costs they incur directly to end users, whether AMD likes that or not. Only fools work for free while trying to run a business. I'll be shocked if they don't put a premium on it while they can, too. No different than blowing up Radeon pricing over mining. Nobody can be forced to use the new VESA sticker on their monitors. They can just ignore it and use G-Sync or nothing.

The fact that AOC, Philips, Viewsonic, BenQ, Asus & Acer are all coming out with models means something too. They realize FreeSync is likely a year away or more, and only if scaler makers up their game, and it isn't a known commodity versus what they can sell RIGHT NOW with G-Sync for the next year. Only Samsung/LG are really left of the big names. I call that WIDE support from everyone. Did you read the PCPer article? I'm pretty sure he and everyone else pointed out the scaler issue NV ran into and that AMD will have to convince them, while NV apparently couldn't. Maybe AMD will have more luck now that they see G-Sync can steal their scaler sales ;) But again, that should mean we thank NV, not chastise them.

May 29, 2014 | 09:59 AM - Posted by Jesse (not verified)

AMD's driver team is an order of magnitude smaller and less talented. It's unfortunate, but true.

May 29, 2014 | 12:50 PM - Posted by Anonymous (not verified)

That was certainly true in the past, but since the end of 2013 their drivers have been really great and frequent: multi-GPU scaling, optimizations, frame pacing fixes, and now mixed-resolution support for Eyefinity. That was the one reason I didn't use Eyefinity; I had the monitors but couldn't use them, and getting two others would have been money spent for no reason. Now that I've given away the old monitors, the driver solves the issue. xD
So I know that many people are in the same position and would benefit a lot from this Eyefinity update.

May 29, 2014 | 04:14 PM - Posted by Anonymous (not verified)

What does any of your comment mean in regard to the previous poster's point that AMD's driver team (specifically for its graphics) is, in fact, much smaller and much less funded than Nvidia's?

Just imagine how much better the drivers would be if they had equal funding.

May 30, 2014 | 01:38 AM - Posted by Anonymous (not verified)

The fact is AMD drivers were crap up to Q3 of 2014; the other fact is AMD drivers have been much better for the last 8 months or so.
As I said above: more frequent, better features, better scaling, faster fixes and optimizations.
So I don't think they are still small and underfunded; something probably happened, and it's probably linked to the success of Mantle.

May 30, 2014 | 01:39 AM - Posted by Anonymous (not verified)

Q3 2013, sorry for the typo, can't edit >.<

May 30, 2014 | 12:18 PM - Posted by ArcRendition

I don't think it has much to do with Mantle; in fact, Mantle can't yet be called a "success." It's not used by enough developers and it hasn't had enough time to really mature. With that said, Mantle has certainly changed the API paradigm for the better, and we're getting rumblings out of Redmond, WA: AMD managed to wake the lumbering giant for a (more than likely) expedited release of DX12. I think it has more to do with the fact that AMD is really at the top of their game right now, realizing that if they don't push the limits and really challenge their competitors, they won't survive. I'm really happy with how far AMD has come in the past year; it's been very exciting to watch AMD deliver drivers that are arguably better than Nvidia's in many areas. Personally, I find "GeForce Experience" to be a terrible application for overclocking. I much prefer AMD's overclocking implementation.

May 30, 2014 | 11:37 PM - Posted by Anonymous (not verified)

You know nothing about Nvidia's drivers obviously. GeForce Experience does no overclocking whatsoever. It is a really great application that automatically updates the graphics driver when a new one comes out, which is at least once a month. And it also recommends the game's graphics settings based off of your computer hardware so that you get the best experience possible. It also controls shadowplay and the lighting effects of the card itself. You had no idea what GeForce Experience was did you?

June 1, 2014 | 09:19 PM - Posted by StewartGraham (not verified)

I don't really think that's true, actually... AMD simply needed a fire lit under their asses to get into gear (re: frame rating), and they've shown just how talented they truly are by going from inoperable frame pacing to excellent frame pacing in an incredibly short amount of time, just a few months - quite an accomplishment, really.

May 29, 2014 | 02:50 PM - Posted by Joshthornton (not verified)

You are kidding, right? FreeSync was good enough that it was accepted into the VESA DisplayPort standard. Yeah, that means an AMD technology will be in every DisplayPort starting with the new revision. Basically, G-Sync is pretty much dead unless you like paying $150 more for a proprietary chip built into a monitor and think that is a good thing. I love Nvidia, in fact I own two 780s, but their proprietary code and technologies are crippling the PC gaming industry. Hell, their deal with Microsoft to keep pushing DirectX is one of the reasons our GPUs can only do 25% of what they are capable of.

You are just a blind follower.

May 30, 2014 | 12:10 PM - Posted by ArcRendition

First, let it be known that I have no preference for either Nvidia or AMD; I simply buy what suits my needs best at the time.

New and powerful tech? Like what? You clearly have no idea how much AMD has contributed to the x86, GPU, and API industries, forcing their competitors to advance the industry rather than let it stagnate. For example, the 295X2 made such an impact that Nvidia was FORCED to POSTPONE the release of the Titan Z BECAUSE of AMD, right? lol What are you, like 13 years old?

Without AMD, Nvidia would have no competition and therefore almost no incentive or initiative to produce all of the innovations you probably think are so wonderful. You obviously have no clue or understanding of economics or business let alone the dynamics involved in the GPU and graphics industry. When making comments like the one above you just sound like an ignorant fanboy, which is really unflattering and embarrassing for yourself and everyone that has to read it in this forum. Maybe you should take your misplaced enthusiasm for Nvidia and disdain for AMD somewhere else where informed and educated PCper readers don't have to read your ridiculous rhetoric that in no way contributes to a constructive dialogue.

June 3, 2014 | 06:04 AM - Posted by nobodyspecial (not verified)

You are apparently unaware that DX12 had already been in the works for a few years and that Mantle didn't cause this to happen; it was the natural course of action. Also, OpenGL already has the ability to drop draw call overhead, etc. So Mantle wasn't needed, as the other two APIs were already well on the way before Mantle.

Since the Titan Z isn't aimed at gamers, I fail to see your point; we have no idea what caused the delay. Do you work for NV? It seems clear to me that if you're not aiming at gamers, the 295X2 wouldn't even be in your thoughts. They didn't even change the clocks after the delay; everything stayed the same. If they had raised them 100MHz or 200MHz, maybe we could assume it was due to AMD, but it seems they did nothing but push it off.

What did they change that shows they were FORCED to delay? ROFL @ you blasting the other guy while doing the same thing you accuse him of (glorious AMD and their contributions to mankind... LOL). HardOCP just tested BF4 again (in the 770 article posted a day or two ago), and it was basically break-even. Is that all Mantle gets you? NV made the game's use of Mantle moot with a simple DX11 update. I can see why they say they don't fear it. IF they don't have time to read his/her rhetoric, surely they have none to read your personal attacks on the guy either. He/she is 13 years old, you're basically calling him/her stupid (not one of the "informed and educated" people), clueless, an ignorant fanboy... ROFL.

You appear to be describing yourself I'm afraid.

May 28, 2014 | 06:01 PM - Posted by Daniel Nielsen (not verified)

Really good and interesting read, Ryan. Good job.

May 28, 2014 | 06:18 PM - Posted by Ryan Shrout

Thanks Dan. Even though it wasn't my longest editorial, it took the most back-and-forth calls and emails in a LONG LONG time. :)
