NVIDIA Under Attack Again for GameWorks in The Witcher 3: Wild Hunt

Subject: Graphics Cards | May 17, 2015 - 12:04 PM
Tagged: The Witcher 3, nvidia, hairworks, gameworks, amd

I feel like every few months I get to write more stories focusing on the exact same subject. It's almost as if nothing in the enthusiast market is happening and thus the cycle continues, taking all of us with it on a wild ride of arguments and valuable debates. Late last week I started hearing from some of my Twitter followers that there were concerns surrounding the upcoming release of The Witcher 3: Wild Hunt. Then I found a link to this news post over at Overclock3d.net that put some of the information in perspective.

Essentially, The Witcher 3 uses parts of NVIDIA's GameWorks development tools and APIs, software written by NVIDIA to help game developers take advantage of new technologies and quickly and easily implement them into games. The problem, of course, is that GameWorks is written and developed by NVIDIA. That means that optimizations for AMD Radeon hardware are difficult or impossible, depending on who you want to believe. Clearly it doesn't benefit NVIDIA financially to optimize its software for AMD GPUs, though many in the community would like NVIDIA to make a better effort - for the good of said community.

Specifically in regards to The Witcher 3, the game implements NVIDIA HairWorks technology to add realism to many of the creatures of the game world. (Actually, the game includes HairWorks, HBAO+, PhysX, Destruction and Clothing, but our current discussion focuses on HairWorks.) All of the marketing and video surrounding The Witcher 3 has been awesome, and the realistic animal fur simulation has definitely been a part of it. However, it appears that AMD Radeon GPU users are concerned that performance with HairWorks enabled will suffer.

An example of The Witcher 3: Wild Hunt with HairWorks

One of the game's developers has been quoted as saying:

Many of you have asked us if AMD Radeon GPUs would be able to run NVIDIA’s HairWorks technology – the answer is yes! However, unsatisfactory performance may be experienced as the code of this feature cannot be optimized for AMD products. Radeon users are encouraged to disable NVIDIA HairWorks if the performance is below expectations.

There are at least several interpretations of this statement floating around the web. The first, and most inflammatory, is that NVIDIA is not allowing CD Projekt RED to optimize HairWorks by withholding the source code. Another is that CD Projekt is choosing not to optimize for AMD hardware due to time considerations. The last is that optimization simply isn't possible because of the hardware demands HairWorks places on the GPU.

I went to NVIDIA with these complaints about HairWorks and Brian Burke gave me this response:

We are not asking game developers do anything unethical.
 
GameWorks improves the visual quality of games running on GeForce for our customers.  It does not impair performance on competing hardware.
 
Demanding source code access to all our cool technology is an attempt to deflect their performance issues. Giving away your IP, your source code, is uncommon for anyone in the industry, including middleware providers and game developers. Most of the time we optimize games based on binary builds, not source code.
 
GameWorks licenses follow standard industry practice.  GameWorks source code is provided to developers that request it under license, but they can’t redistribute our source code to anyone who does not have a license. 
 
The bottom line is AMD’s tessellation performance is not very good and there is not a lot NVIDIA can/should do about it. Using DX11 tessellation has sound technical reasoning behind it, it helps to keep the GPU memory footprint small so multiple characters can use hair and fur at the same time.
 
I believe it is a resource issue. NVIDIA spent a lot of artist and engineering resources to help make Witcher 3 better. I would assume that AMD could have done the same thing because our agreements with developers don’t prevent them from working with other IHVs. (See also, Project Cars)
 
I think gamers want better hair, better fur, better lighting, better shadows and better effects in their games. GameWorks gives them that.  

Interesting comments for sure. The essential takeaway is that HairWorks depends heavily on tessellation performance, and we have known since the GTX 680 was released that NVIDIA's architecture performs better than AMD's GCN for tessellation - often by a significant amount. NVIDIA developed its middleware to utilize the strengths of its own GPU technology, not - though clearly some disagree - to negatively impact AMD. Did NVIDIA know that would be the case when it was developing the software? Of course it did. Should it have done something to help AMD GPUs more gracefully fall back? Maybe.

Next, I asked Burke directly whether claims that NVIDIA was preventing AMD or the game developer from optimizing HairWorks for other GPUs and platforms were true. I was told that both AMD and CD Projekt had the ability to tune the game, but in different ways. The developer could change the tessellation density based on the specific GPU detected (lower for a Radeon GPU with less tessellation capability, for example), but that would require dedicated engineering from either CD Projekt or AMD. AMD, without access to the source code, should still be able to make changes in the driver at the binary level, similar to how most other driver optimizations are built. Burke states that in these instances NVIDIA often sends engineers to work with game developers and that AMD "could have done the same had it chosen to."  And again, NVIDIA reiterated that in no way do its agreements with game developers prohibit optimization for AMD GPUs.
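
To make that first option more concrete, here is a minimal sketch - not CD Projekt's or NVIDIA's actual code - of how an engine could detect the installed GPU vendor through DXGI and cap the maximum tessellation factor it feeds to a hair effect on hardware that is weaker at tessellation. The PCI vendor IDs are the standard ones, but the chosen factor values and the function name are purely illustrative assumptions on my part.

// Sketch of a vendor-aware tessellation fallback (Windows only, link against dxgi.lib).
// The factor values below are illustrative, not anything a shipping game actually uses.
#include <windows.h>
#include <dxgi.h>
#include <cstdio>

constexpr UINT VENDOR_NVIDIA = 0x10DE; // standard PCI vendor IDs
constexpr UINT VENDOR_AMD    = 0x1002;

// Pick the maximum tessellation factor the engine would upload to its hull shader
// constant buffer (D3D11 allows factors up to 64).
float PickMaxTessFactor()
{
    float maxFactor = 16.0f; // conservative default if the adapter can't be queried

    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return maxFactor;

    IDXGIAdapter* adapter = nullptr;
    if (SUCCEEDED(factory->EnumAdapters(0, &adapter)))
    {
        DXGI_ADAPTER_DESC desc = {};
        adapter->GetDesc(&desc);
        switch (desc.VendorId)
        {
        case VENDOR_NVIDIA: maxFactor = 64.0f; break; // strong tessellation throughput
        case VENDOR_AMD:    maxFactor = 16.0f; break; // scale hair density down on GCN
        default:            maxFactor = 8.0f;  break;
        }
        adapter->Release();
    }
    factory->Release();
    return maxFactor;
}

int main()
{
    // A real engine would feed this value into its hair shaders; here we just print it.
    printf("Hair tessellation factor capped at %.0f\n", PickMaxTessFactor());
    return 0;
}

Radeon owners can also clamp the tessellation level themselves in the Catalyst Control Center, which gets much of the same effect without touching the game's code.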

It would also be possible for AMD to have pushed for the implementation of TressFX in addition to HairWorks; a similar scenario played out in Grand Theft Auto V where several vendor-specific technologies were included from both NVIDIA and AMD, customized through in-game settings. 

NVIDIA has never been accused of being altruistic; it doesn't often create things and then share them with open arms with the rest of the hardware community. But it has to be understood that game developers know this as well - they are not oblivious. CD Projekt knew that HairWorks performance on AMD would be poor but decided to implement the technology into The Witcher 3 anyway. The studio was willing to accept performance penalties for some users to improve the experience of others. You can argue that is not the best choice, but at the very least The Witcher 3 will let you disable the HairWorks feature completely, removing it from the performance debate altogether.

In a perfect world for consumers, NVIDIA and AMD would walk hand-in-hand through the fields and develop hardware and software in tandem, making sure all users get the best possible experience with all games. But that style of work is only helpful (from a business perspective) for the organization attempting to gain market share, not the one with the lead. NVIDIA doesn't have to do it and chooses not to. If you don't want to support that style, vote with your wallet.

Another similar controversy surrounded the recent release of Project Cars. AMD GPU performance was significantly lower than that of comparable NVIDIA GPUs, even though the game does not implement any GameWorks technologies. In that case, the game's developer directly blamed AMD's drivers, saying it was a lack of outreach from AMD that caused the issues. AMD has since recanted its stance that the performance delta was "deliberate" and says a pending driver update will address gamers' performance issues.

All arguing aside, this game looks amazing. Can we all agree on that?

The only conclusion I can come to from all of this is that if you don't like what NVIDIA is doing, that's your right - and you aren't necessarily wrong. There will be plenty of readers that see the comments made by NVIDIA above and continue to believe that the company is being at best disingenuous and at worst straight up lying. As I mentioned above in my own comments NVIDIA is still a for-profit company that is responsible to shareholders for profit and growth. And in today's world that sometimes means working against other companies rather than with them, resulting in impressive new technologies for its customers and pushback from competitors' customers. It's not fun, but that's how it works today.

Fans of AMD will point to G-Sync, GameWorks, CUDA, PhysX, FCAT and even SLI as indications of NVIDIA's negative impact on open PC gaming. I would argue that more users would look at that list and see improvements to PC gaming, progress that helps make gaming on a computer so much better than gaming on a console. The truth likely rests somewhere in the middle; there will always be those individuals that immediately side with one company or the other. But it's the much larger group in the middle, that shows no corporate allegiance and instead just wants to have as much fun as possible with gaming, that will impact NVIDIA and AMD the most.

So, since I know it will happen anyway, use the comments page below to vent your opinion. But, for the benefit of us all, try to keep it civil!


May 17, 2015 | 12:18 PM - Posted by Anonymous (not verified)

cd project red would never side with nvidia right guys.... shit https://i.imgur.com/LjyWezE.jpg

May 18, 2015 | 10:04 AM - Posted by Neutral(ish) gebril (not verified)

Yeah agreed totally.

"Using DX11 tessellation has sound technical reasoning behind it, it helps to keep the GPU memory footprint small so multiple characters can use hair and fur at the same time."

I mean, Nvidia would never make use of tessellation in a bad way just to nerf performance on AMD cards, even when it has no obvious benefit and actually damages their own performance.
NEVER; NO; *cough* Crysis *cough*; NOT IN A MILLION YEARS....

May 18, 2015 | 10:19 AM - Posted by chizow (not verified)

Or the more obvious answer: AMD has never done tessellation well despite trumpeting it as one of their killer features going back to DX10.1.

Now that tessellation is important, I suppose AMD and its fanboys are busy sweeping it under the rug and deflecting again as usual.

PS: Tonga greatly improves AMD tessellation performance, so obviously they knew they were deficient there.

http://techreport.com/r.x/radeon-r9-285/tm-x32.gif

May 31, 2015 | 04:19 PM - Posted by Anonymous (not verified)

No Nvidia wouldn't.
AMD likes to put out fake screen shots and tell lies about water being rendered under the ground.

Fool you once shame on AMD. Fool you twice shame on you.
Yet they fool you 10 times, so please go shoot yourself.

May 18, 2015 | 10:05 AM - Posted by Neutral(ish) gebril (not verified)

Very true, but you know that it's capitalism and that means it's all good, right? I mean, greed is good, after all.

"Using DX11 tessellation has sound technical reasoning behind it, it helps to keep the GPU memory footprint small so multiple characters can use hair and fur at the same time."

I mean, Nvidia would never make use of tessellation in a bad way just to nerf performance on AMD cards, even when it has no obvious benefit and actually damages their own performance.
NEVER; NO; *cough* Crysis *cough*; NOT IN A MILLION YEARS....

May 17, 2015 | 12:27 PM - Posted by Blown2Buggery (not verified)

I don't think GameWorks is the issue here but the downgrade to a more cartoonish visual aesthetic to match the PS4's capabilities.

http://whatifgaming.com/developer-insider-the-witcher-3-was-downgraded-f...

Here's a comparison video of the PC at Ultra with GameWorks vs PS4
https://www.youtube.com/watch?v=1fUGFmmikn4

Better but hardly earth shattering

May 18, 2015 | 05:04 PM - Posted by Goldmember (not verified)

Well, looks aside, what about frame rate? The PS4 version is apparently not able to hold 30fps very well. The test I saw was probably a demo version and they might have been able to smooth things out a bit, but still.

May 17, 2015 | 12:40 PM - Posted by MAXLD (not verified)

It's quite simple. nVidia has the money not only to pay devs (especially the ones with tight budgets), but also to send squads of experts to make sure the games are well suited to nVidia's interests. Especially with the current market share... they do whatever they can to close off their tech and grab the opportunity to maximize their marketing. They are also old foxes at doing PR to win sympathy. But it's all part of the business.
Without the same resources, AMD has to spread its efforts and hang on for better days. We'll see if Vulkan/DX12 can unleash them from the chains a bit more (on the driver side too) and bring some more balance to the force.

May 18, 2015 | 08:55 AM - Posted by Penteract (not verified)

AMD's graphics division is quite profitable; they are as capable of working with game devs as Nvidia is. They just don't.

Frankly, I'm tired of the AMD complaints here. They sound like kids on game forums complaining about how some other class than their favorite is better. Most of the time it has nothing to do with the class, it's the fact that the complainers are whiners that would rather whine than "get good".

In my opinion, AMD needs to shut up and get good. All they are doing at this point is losing my respect. I mean, I try to stay video card agnostic, I believe in buying the best solution available at the time I make my purchase regardless of brand, but AMD is making it hard for me.

May 18, 2015 | 01:39 PM - Posted by Azix (not verified)

So many people keep claiming AMD does not work with devs. But even in the recent Project Cars case it was shown that they did. The problem is not AMD working with devs, it's the artificial limit on how well they can work with devs because the code is closed to them.

May 18, 2015 | 02:30 PM - Posted by chizow (not verified)

No, the problem is as usual, AMD not having enough resources (I highly doubt they "forgot" about these huge titles) to release Day 1 drivers to support these big titles properly.

AMD still working on Pcars drivers nearly 2 weeks after release:
https://twitter.com/amd_roy/status/597406573586026496

And new Witcher 3 Driver? Next week...
https://twitter.com/amd_roy/status/599565321167474689

It has nothing to do with code being closed to them because in reality they shouldn't need the source code anyways if it is using standard API libraries and going through their driver.

May 31, 2015 | 04:28 PM - Posted by Anonymous (not verified)

"It's quite simple. nVidia has the money not only to pay Devs (specially the ones that have tight amounts),"

Sort of like Adobie has to pay people to use photo shop. Otherwise they'd obviously choose MSpaint right? That is what you are saying.

Nvidia charges. Sometimes they will trade for marketing, but gameworks costs money.

May 17, 2015 | 12:45 PM - Posted by H1tman_Actua1

Another reason why not to be cheap and buy AMD.

May 17, 2015 | 10:46 PM - Posted by Heavy (not verified)

some of us can't help being cheap, some of us are poor. It's either pay an extra 50 bucks for a graphics card or save that 50 bucks and pay the electricity bill.

May 18, 2015 | 01:41 PM - Posted by Azix (not verified)

in witcher 3 a 280x or 285 puts out about the same performance as a titan or 780 card. So... works for me. others can keep paying more for weaker performance all they want

May 19, 2015 | 08:34 AM - Posted by Anonymous (not verified)

Agreed my friend

May 21, 2015 | 12:29 AM - Posted by Anonymous (not verified)

If you are choosing between those two, then maybe you should just buy a console.

May 18, 2015 | 01:10 AM - Posted by JohnGR

So much ego, with daddy's wallet...

May 18, 2015 | 04:31 AM - Posted by Joakim L (not verified)

Well, it's not really about being cheap, is it? I do hope Nvidia did a much better job implementing HairWorks in The Witcher 3 than they did with Far Cry 4. Firstly, HairWorks was bugged (basically didn't work at all). Secondly, it's still only applied to some animals in the game - not even all animals of the same type - which is kinda ridiculous. I'm guessing this has to do with the performance hit, which is massive, and I'm on a pretty decent system (5GHz CPU, GTX 780 Ti).

May 18, 2015 | 09:26 AM - Posted by WesTechNerd

Yeah, and if you don't have AMD then what happens with Nvidia? The prices go up and the innovation goes down. What Nvidia is doing right now should violate anti-competition laws. Though I am more disappointed in the game developers for using this software than I am with Nvidia for refusing to share with AMD.

May 18, 2015 | 11:18 AM - Posted by Anonymous (not verified)

Someone developing a better product and then not giving away their hard work for free is violating anti-competition laws? Doing business well is the product of a healthy capitalist market, not the catalyst for a monopoly. AMD has cornered the console market. Are you up in arms over that as well?

May 18, 2015 | 01:42 PM - Posted by Azix (not verified)

1. GameWorks is not better; clearly, from the number of broken GameWorks games, it's next to rubbish.

2. It's the nature of the industry. Nvidia thinks it's special tech but it really is not. There are already open alternatives to GameWorks; it's just Nvidia's money that pushes GameWorks into games, and if they are going to do that they should allow users the effects they would otherwise have without GameWorks.

May 18, 2015 | 02:32 PM - Posted by chizow (not verified)

1. Sure it is better, if you don't agree you can simply turn the features off. Crying about them certainly doesn't make any sense if you think they are rubbish features.

2. What alternatives are you referring to? Some open garbage AMD pushed on a slidedeck 5 years ago that has amounted to nothing? I am still waiting for the awesome Physics effects that will explode into a thousand pieces that Chris Hook promised 6 years ago on the HD 4890 lol.

July 28, 2015 | 07:13 PM - Posted by Anonymous (not verified)

About point number two:

It isn't true that AMD gives devs truly OPEN access to the source code of its techniques. When you go to download source code, examples, etc., you must accept a license agreement that makes clear AMD doesn't give up the IP behind these techniques - and you do. All the modifications you make to the sources become AMD's IP, at least if you communicate those changes (i.e. you develop the enhancements with AMD's knowledge).

So it's closed source code in essence. You can access and use it (not very different from nvidia's techniques), but you must accept an agreement that protects AMD's IP (the same as with nvidia, only with multiple levels of licensing).

About point number one:

The market equivalent of HairWorks is TressFX, an example of a "memory leak" on every card that runs it.

With ONLY one character using the simulated hair, you lose around 300 MB of VRAM, and what is worse, TressFX's VRAM consumption grows with resolution and/or close-up shots of the hair (it is proportional to the number of rendered pixels).

So it's very plausible that TressFX's enormous memory consumption with just one character makes it unviable for a game with multiple "hairy" characters on screen and close-up shots of the action.

Don't forget the faulty simulation of the hair physics in TressFX, where the technique uses an artificial boundary around the character model to avoid intersection problems between the hair and the rest of the model.

You can see Lara's hair floating over her body, about 4-7 cm away, skipping all human/clothing contact.

It's very funny. On the other side, in The Witcher 3 the hair interacts with Geralt's body, to the point that you can see hairs reacting to his ears perfectly.

May 18, 2015 | 09:56 AM - Posted by Martin Trautvetter

I for one don't appreciate part of my money going towards screwing over AMD users, of which I am one, too.

May 18, 2015 | 01:10 PM - Posted by chizow (not verified)

Agreed, AMD fans love to say these features aren't important, gimmicks etc and they buy AMD because its the best bang for the buck etc. but every time a new game with GameWorks comes out, its a huge problem lol.

May 22, 2015 | 05:49 AM - Posted by Anonymous (not verified)

"every time a new game with GameWorks comes out, its a huge problem"

You're pretty much proving the point here

May 18, 2015 | 03:54 PM - Posted by funandjam

Hitless non-actual has got jokes!

People, this guy is comedy gold!

May 17, 2015 | 01:08 PM - Posted by arbiter

"That means that optimizations for AMD Radeon hardware are difficult or impossible, depending on who you want to believe."

Just look at how many new drivers AMD has released recently, they are lazy as heck.

"Did NVIDIA know that would be the case when it was developing the software? Of course it did. Should it have done something to help AMD GPUs more gracefully fall back? Maybe"

Not sure in what ass-backwards world people expect one company to just help a competitor. What's next, expecting Intel to provide its fabs and its newest process node to AMD?

"It would also be possible for AMD to have pushed for the implementation of TressFX in addition to HairWorks;"

Seriously, anyone that played Tomb Raider - did TressFX even look remotely as good as HairWorks? In that game I turned it off because it added almost nothing graphically noticeable most of the time.

"As I mentioned above in my own comments NVIDIA is still a for-profit company that is responsible to shareholders for profit and growth."

It takes Money for R&D to make better GPU arch and CPU. AMD is massively behind in both at this point, only thing they can do now is Whine. If hairworks uses DX11 tessellation, All AMD needs to go is improve performance of that on their cards and it would likely even things out but I guess asking AMD to spend $ is not gonna happen. Reason AMD used OpenCL for TressFX, was they were much faster in then Nvidia in that and expect it to be an advantage to their cards. Nvidia uses DX11 tessellation gives them an advantage but its still part of the DX11 standard. It would be safe to theorize if AMD improved performance of DX11 tessellation it would improve performance of hairworks.

"Fans of AMD will point to G-Sync, GameWorks, CUDA, PhysX, FCAT and even SLI as indications of NVIDIA's negative impact on open PC gaming."

G-Sync was the first tech to remove fixed refresh rates for gamers, and it is implemented in the right way to look the best. On FCAT, how quickly AMD fans forget that it was PCPer using FCAT, with help from Nvidia, that found the massive stuttering problem AMD CF setups had - a problem AMD IGNORED and pretty much refused to fix for years and years.

Edit: Random thought, so hair works uses a DX11 standard but Nvidia gets attacked over it. Freesync uses adaptive sync standard in a proprietary way yet its fine? Double standards by AMD fans a little?

May 17, 2015 | 02:04 PM - Posted by Robenger (not verified)

"It takes Money for R&D to make better GPU arch and CPU. AMD is massively behind in both at this point, only thing they can do now is Whine. If hairworks uses DX11 tessellation, All AMD needs to go is improve performance of that on their cards and it would likely even things out but I guess asking AMD to spend $ is not gonna happen. Reason AMD used OpenCL for TressFX, was they were much faster in then Nvidia in that and expect it to be an advantage to their cards. Nvidia uses DX11 tessellation gives them an advantage but its still part of the DX11 standard. It would be safe to theorize if AMD improved performance of DX11 tessellation it would improve performance of hairworks. "

Massively? Your fanboyism is showing.
All they do is whine? Seriously?

"Reason AMD used OpenCL for TressFX, was they were much faster in then Nvidia in that and expect it to be an advantage to their cards."

This a real jewel. What you've shown to everyone is that you literally have no idea what you're talking about. OpenCL was used because its open, meaning that anyone can access the code unlike everything Nvidia does and it runs on all hardware including Nivida GPU's.
See https://developer.nvidia.com/opencl

May 17, 2015 | 02:28 PM - Posted by arbiter

Yea huh, HairWorks USES DX11 TESSELLATION, which any DX11-certified card HAS TO SUPPORT. Guess what, AMD HAS IT TOO. It's just that tessellation is slow as crap on their cards. Whose fault is that? You say I don't know what I am talking about, then you open your mouth like an idiot. I know both sides have OpenCL, smartass. AMD cards are much faster in OpenCL, just like Nvidia is much faster in DX11 tessellation.

May 18, 2015 | 03:36 AM - Posted by Joakim L (not verified)

Without the source code (which I hope you just read Nvidia isn't sharing with AMD), it's not very easy to optimize the game for their cards. I think AMD's only option would have been to pay the developer to implement TressFX, but they wouldn't, which of course is their own fault. So in the end, money seems to be the real issue here.

July 28, 2015 | 07:22 PM - Posted by Anonymous (not verified)

"This a real jewel. What you've shown to everyone is that you literally have no idea what you're talking about. OpenCL was used because its open, meaning that anyone can access the code unlike everything Nvidia does and it runs on all hardware including Nivida GPU's.
See https://developer.nvidia.com/opencl"

No, you are the one who has no idea about this.

If you develop a DX11 game, the logical GPGPU platform isn't OpenCL, it's DirectCompute - it's the best way to connect your graphics pipeline with the compute part of your GPGPU code. As for development with OpenCL: yes, OpenCL is open, but NOT your source code. You have no obligation to release your application's source code to the public just because you use OpenCL, so don't be a moron and stop saying nonsense about this.

Nvidia gives the whole public access to the source code of many of its techniques, and for the ones where it doesn't, don't worry - you just need to accept a devs-only agreement and maybe pay for a license.

AMD does something very similar when it forces you to accept an agreement to download any of its technique samples and source code. That agreement protects its IP, so stop the lies, please.

May 17, 2015 | 02:27 PM - Posted by DraftyDodger (not verified)

Why are you comparing HairWorks to FreeSync? Two totally different things; it would be like comparing a steak to a broccoli sprout. Yes, HairWorks uses the DirectX 11 standard and so does TressFX. The difference between the two is that AMD lets everyone use the source code for TressFX and Nvidia does not and will not. FreeSync is AMD's solution to screen tearing/refresh rates, while Nvidia's is G-Sync, which costs a hell of a lot more than FreeSync. If you check out the benchmarks out there you will see FreeSync even performs better than G-Sync.

May 17, 2015 | 02:32 PM - Posted by arbiter

The point I meant was that HairWorks is based on the DX11 tessellation standard. So both cards support it; the problem is the AMD card is much slower at doing it, period, kinda the same way the Nvidia card is slower at OpenCL compared to AMD. Now Nvidia has gotten attacked for using the standard yet AMD gets a free pass for doing the same.

May 17, 2015 | 02:47 PM - Posted by Someguy (not verified)

"Just look at how many new drivers AMD has released recently, they are lazy as heck."

How is that relevant in any way to what you quoted? AMD is a company, not a person; they may be under-resourced but not lazy. Either way, that's still irrelevant to whether optimization of GameWorks code is actually possible or not for AMD. "...cannot be optimized for AMD products..." seems fairly unambiguous to me. It's well known that GameWorks is a black box and the DLLs are encrypted.

"Random thought, so hair works uses a DX11 standard but Nvidia gets attacked over it. Freesync uses adaptive sync standard in a proprietary way yet its fine? Double standards by AMD fans a little?"

Random is right. HairWorks may use tessellation but that doesn't make it in any way open or comparable to AMD's implementation of a VESA standard. What do you even mean by using it in a proprietary way? They have to make Adaptive-Sync work with their specific cards, so how could FreeSync not be proprietary in that sense? It's a terrible analogy and hence not a double standard. Black-box, encrypted, proprietary middleware does not equal AMD's own implementation of an open standard, i.e. Adaptive-Sync. It's kind of desperate that you attempted to draw parallels.

May 18, 2015 | 01:01 AM - Posted by ROD (not verified)

Tomb Raider is using TressFX v1 and the current version is v3; v2 was used in the console version.
TressFX uses DirectCompute, not OpenCL!
AMD has supported Frame Pacing for some time now, and with LiquidVR they are actually focusing on the stuttering issue more than ever - and certainly more than nvidia ever did - as LiquidVR covers both single-GPU and CrossFire configs.
AMD has their Radeon SDK, which is pretty much what GameWorks is, but free of any terms; it's distributed as source code with samples and docs.
AMD doesn't need to improve anything. The fact that nvidia uses over-tessellation is not their problem; nvidia customers suffer as much as AMD ones, and all that performance could be used on better visuals. They use line tessellation in HairWorks, which AMD users can simply scale down with the CCC tessellation factor, as the hair in that game does not need such a high level anyway.

HairWorks uses DirectCompute with line tessellation and locked source code which no one can review. TressFX also uses DirectCompute, with open source code which anyone can review. Hope you see the difference!
Adaptive sync was designed by AMD and, like many other technologies AMD made, it became a standard! Any company can use it and nvidia actually does on notebooks!

May 31, 2015 | 04:41 PM - Posted by Anonymous (not verified)

paying customers can see gameworks code.
Non-paying customers can see PhysX code.

It's called DirectX11 and it has 0-64 levels of tessellation. That's the standard. ATI/AMD didn't have a problem with it when it was vendor locked to their cards.
It solves many problems and is an advancement. AMD simply wants to make FX CPU's and GCN until the end of time. They want to lock down gaming and force devs to stop improving gaming.

May 21, 2015 | 07:48 PM - Posted by JesusBaltan (not verified)

TressFX does not use OpenCL; it uses DirectCompute.

http://physxinfo.com/forum/index.php/topic,260.0.html

http://www.geeks3d.com/20130226/amd-tressfx-new-hair-rendering-technology/

May 17, 2015 | 12:58 PM - Posted by CayderX (not verified)

People are crying that their crappy AMD systems can't run it. Boohoo - it's the community's fault for posting vids that say "AMD budget build that can run every game no problem, ultra settings!" People are getting false information; it's just the simple fact that the hardware is not powerful enough or optimized enough to run it. I think this shouldn't be an attack on Nvidia, it should be a kick in AMD's butt to really focus on performance.

May 17, 2015 | 01:32 PM - Posted by DraftyDodger (not verified)

It's not an AMD issue, it is an Nvidia issue. GameWorks source code is not released to the public, unlike TressFX where the code is available to all. So how can AMD improve performance when they have no access to GameWorks? Both Nvidia and AMD cards run well with TressFX, but only Nvidia benefits from GameWorks.

Also, Nvidia released G-Sync before AMD released FreeSync, so what's wrong with that? I find it somewhat funny that people are always saying bad things about AMD drivers when I've used AMD for 6 years and haven't had any issues.

May 17, 2015 | 01:53 PM - Posted by arbiter

How many games used TressFX? 1? It didn't look really much better then when it was off and game ran 100x better. Hair works uses DX11 Tessellation, AMD needs to improve their performance in that STANDARD. Funny Nvidia will still get attacked over using a standard cause they didn't use one that favors AMD.

May 17, 2015 | 02:05 PM - Posted by Robenger (not verified)

That's subjective and you know it. I personally thought it looked stellar once it was patched.

May 17, 2015 | 02:39 PM - Posted by DraftyDodger (not verified)

"How many games used TressFX? 1? It didn't look really much better then when it was off and game ran 100x better."

You have no clue what you are talking about just from that comment alone. All games will perform better if you turn off different graphics options. Also, both HairWorks and TressFX have only been in one game each to my knowledge, and there was little difference between the two. (CoD: Ghosts HairWorks dogs)

May 17, 2015 | 09:05 PM - Posted by renz (not verified)

FC4 also uses HairWorks. Also, before it became HairWorks specifically, the feature was just another part of GPU PhysX. Alice: Madness Returns also used hair physics simulation, but it does it in a different way than HairWorks does. As for TressFX, there are actually two games currently using it: one is Tomb Raider and the other is Lichdom: Battlemage, though it probably isn't seen much in that game since it is first-person, unlike Tomb Raider.

May 17, 2015 | 01:57 PM - Posted by arbiter

.... site double post ;/

May 18, 2015 | 01:57 PM - Posted by Azix (not verified)

If recent benchmarks are to be trusted, a titan and gtx 780 perform about the same as a 280x or 285 graphics card. As far as performance.. well... you see the problem. What nvidia does hurts older gpus in their lineup as well

May 17, 2015 | 01:27 PM - Posted by Aelussa (not verified)

(User of both Nvidia and AMD GPUs with no corporate allegiance here)

The same thing happened in the opposite direction with Tomb Raider, where they implemented AMD's TressFX. The fancy hair setting worked on Nvidia GPUs, but it ran a lot faster on AMD GPUs.

I'm not a big fan of games implementing code provided by either Nvidia or AMD that makes the game run better on that company's cards at the detriment to the other, but most likely if they hadn't provided that code, the games just wouldn't implement those effects and no one would benefit from them. I guess the question then becomes, is it better to be fair to everyone by providing an equal (but lesser) experience for users of both GPUs, or to provide a better experience for one set of users without necessarily making the game worse than it otherwise would have been for the others?

I think there are arguments to be made for both stances, and to be honest I'm not sure which side of that argument I fall on.

May 17, 2015 | 01:38 PM - Posted by DraftyDodger (not verified)

You are right, TressFX did run better on AMD GPUs than Nvidia when TR released, but the gap between the two is nowhere near as bad as with HairWorks, since AMD released the source code for TressFX. HairWorks heavily favors Nvidia, and the source code is not available for AMD or most developers to improve performance.

May 18, 2015 | 01:07 AM - Posted by JayzTwoCents (not verified)

The problem is AMD still hasn't really come out with any more powerful GPUs since TressFX was implemented. Their newest tech is aging and almost 2 years old, while NVIDIA has released 5 new DX12 cards, all of which provide a better DX12 experience. It's not NVIDIA's job to pull AMD up the rungs... AMD needs to either kick it into gear and catch up, or stop whining. Those are the only two options.

May 18, 2015 | 08:24 AM - Posted by Joakim L (not verified)

What does AMD's hardware have to do with this? We are talking about open vs. closed graphics extensions here. The fact is that AMD could have paid the developer to use TressFX, but they probably didn't want to (too expensive?). Or the developer wasn't willing to (for reasons we can only speculate about). Would it have been better for Nvidia to release the source code for HairWorks? For the consumer, YES, but not for Nvidia. Nvidia might be afraid that their stuff would work too well on AMD's hardware if they did. I also "think" Nvidia paid the developer to use it, or at least provided a lot of help to make it work (why would they otherwise use it when there is an open alternative available). I personally want AMD to pass Nvidia in terms of graphics performance, but I doubt it will happen since they are so often second on the ball. Not regarding Mantle, but in general: sitting back waiting for Nvidia to develop some new technology, then complaining about it not being open. [/End of speculations]

May 23, 2015 | 06:04 PM - Posted by DCPanther (not verified)

Honestly, I think it's not nVidia's job to play nice with AMD; it's AMD's job to compete with nVidia. The R9 290X was launched in October 2013... 2013! Seriously, it's been over a YEAR since AMD has released new hardware. I think nVidia has a right... hell, a DUTY to do the best they can to improve the experience for their customers and gaming as a whole. If HairWorks is so broken and runs badly, then AMD needs to go to the developer and show them the benefits of TressFX v3, 'cause v2 looked pretty amazing in the remastered TR for PS4.

AMD is sitting in second place in both CPUs and GPUs. We sit and complain when a dev dumbs down graphics for consoles... but you want nVidia to dumb down their R&D so that poor AMD can keep up, since they can't afford to ship a new GPU architecture once a year. AMD's performance CPU line-up is 2 years stagnant. I'll continue to vote with my wallet. nVidia can keep progressing and improving. AMD... compete and earn my money.

It's nice to sit here and make a big brouhaha about "Open" and so on... but then I remember the days when all IHVs had their own specific APIs that only worked with their cards (3dfx Glide, anyone). Open is great... and so on. But there are benefits to closed platforms, the main one being optimizations.

July 28, 2015 | 07:42 PM - Posted by Anonymous (not verified)

Nvidia needed a hell of a lot of optimization to reach a good level of performance with TressFX; it's a trojan horse:

The technique is designed to exploit the rival's weaknesses more than AMD's strong points.

TressFX has an enormous VRAM consumption; in the first months the game was on the market it was very easy to force "out of VRAM" problems on nvidia's cards. Only after many driver releases did nvidia reach memory management capable of avoiding the issue with TressFX plus supersampling (at least 2x supersampling) on its cards with 2 GB of VRAM.

The technique uses GPGPU massively with a variable load depending on the point of view (when the hair is near the POV the technique needs more power), and a path that is very difficult to optimize on Kepler, which has problems in those moments of the game (when you have a close point of view behind Lara's head, with the bow, or walking through certain parts of the maps).

So it must be difficult code to parallelize on Kepler cards, with a costly context switch between GPGPU and the 3D pipeline (which is cheaper on AMD's GPUs for architectural reasons). Maxwell improves the situation a great deal, because of the GPU's better ability to parallelize work across its SMMs and its better context switching compared to previous nvidia cards.

Kepler has fixed almost all the problems with TressFX by now, via many driver releases with optimizations, many months after the launch of the game. But it's very obvious that AMD used the game to implement techniques that are exploits against the rival's architecture more than showcases of their own chips' strong points, because the cost of all these techniques was very high for AMD chips too.

Why did AMD press devs to implement supersampling as the only true AA technique two or three years ago?

Because their massive bandwidth and more than enough VRAM sidestep the problem that AMD created with these techniques. Why the massive use of DirectCompute or OpenCL for things that you can do with HLSL or GLSL shaders, like post-process AA?

Because AMD knew that its hardware, at the time, could run mixed-context shaders or had a much lower cost for context switches between the GPGPU and 3D pipelines than the rival's.

They worked with devs to implement inefficient techniques while knowing how to do better, but they didn't. Because when AMD sabotages its rivals, everyone looks the other way.

May 18, 2015 | 06:13 AM - Posted by Luthair

Except that AMD made TressFX source code available allowing nvidia to optimize for it. (ofc they released the code after Tomb Raider launch)

May 17, 2015 | 01:28 PM - Posted by Anonymous (not verified)

There was nothing stopping AMD from working with cd projekt red in implementing tressfx.

May 17, 2015 | 02:00 PM - Posted by Mark Luther (not verified)

Well, that's not strictly true, is it? Nvidia has put in place contractual obligations that would prevent the developer from doing this.

This is not just "rumor" or hearsay; Richard Huddy leaked emails with game developers to prove that Nvidia has in fact established this practice.

May 18, 2015 | 03:07 AM - Posted by Klimax (not verified)

Nowhere. And unproven.

May 18, 2015 | 11:27 AM - Posted by chizow (not verified)

LMAO, if you believe unsubstantiated rumor reports from Richard Huddy you've got a lot more problems in assessing credibility and character. That guy is a crook and a liar, plain and simple. You can't even find half the stuff he says a week after he says it because it magically disappears off the internet.

But yes, it is a lie, there are no stipulations in these contracts preventing a dev from implementing whatever they want. GTA5 is a perfect example as it has tech from both vendors, even for the same simulation/feature. AMD has their Contact Hardened Shadows and Nvidia has their Percentage Closer Shadows. Both run similarly well on either vendor's hardware because they all use MS-standard DirectCompute as the API, but of course, which one does a better job is up to the end user to determine.

Same thing when it comes to GameWorks. You can decide if you like it, or not, but either way, the option to enable it makes the game better than the version that doesn't enable it (junk console port, remember?).

May 18, 2015 | 01:03 AM - Posted by ROD (not verified)

Nvidia's contract does!

May 18, 2015 | 03:05 AM - Posted by Klimax (not verified)

There is not. This is simply false. So far nobody has proven it at all, so these assertions are wrong and bordering on lies. (Contradicting known facts knowingly.)

May 18, 2015 | 06:57 AM - Posted by Luthair

Which "facts" are being contradicted?

Generally speaking contracts have an NDA which would prohibit the parties from discussing the specifics.

May 17, 2015 | 01:50 PM - Posted by Rustknuckle (not verified)

I am saddened by AMD's tendency to over-promise, under-deliver (in tech, R&D and support) and complain in the last few years. It has gotten to the point that even if their new card benchmarks higher than the Titan X at a lower price I won't buy it, because I can't trust them to support their products with drivers, updates and game features.

I want AMD to do good soooo bad I can taste it. Thing is I am not sure they can ever regain my trust to the point of buying a product unless Nvidia does a self knock out and launches a GPU that turns out to be both carcinogenic and is built by child slaves. At that point I might try them again.

Sure, some will argue the lower price on the AMD cards is worth the trouble, but I do not care as I am not the type of hardware user that looks at price first. Smooth operation is to me worth a few hundred bucks even if two different cards have the same raw power.

May 17, 2015 | 01:57 PM - Posted by arbiter

Yea, AMD tends to claim a ton and then end up backpedaling the last few years. So much so that I take any slides they release with a grain of salt. Most of those benchmarks are compared to a 290X, not a Titan X. So in all likelihood it will be almost the same or a little slower. The real sticking point is gonna be price; if they have to charge near Titan X prices it's gonna be a problem for a lot of people.

May 18, 2015 | 10:21 AM - Posted by chizow (not verified)

Fully agree and well said.

May 17, 2015 | 02:30 PM - Posted by RedShirt (not verified)

None of this is surprising. AMD have generally* been more open about their stuff than NV is with theirs.

As NV are the biggest when it comes to GPUs it is in their interest to shall we say... make proprietary things or things that will almost always run worse on competitors as it's another hoop your competitor has to jump through.

It's just an inherent advantage of being the big guy.

*Yes, I know AMD made Mantle... However if it wasn't for Mantle I very much doubt we'd be seeing DX12 as soon as we are.

May 17, 2015 | 02:34 PM - Posted by arbiter

MS was working on DX12 for 3-4 years before mantle was announced. Closer to metal was apart of it before that cause no way MS would have time to do it in such a short period since they gotta build in more legacy support in to it then AMD has for their own cards.

May 18, 2015 | 12:27 AM - Posted by Anonymous (not verified)

Source your stuff?
Let's also be clear that AMD was working on mantle for several years before they announced it. See pcper's interview w/ Richard Huddy.

May 18, 2015 | 12:49 AM - Posted by arbiter

First off, that is an AMD PR rep that claimed that, and AMD claims of late tend to be a load of BS. MS said they were working on DX12 for like 4 years. If AMD didn't know, well, there is no way they couldn't have known, being a major GPU maker. The reason MS has credibility and AMD doesn't: well, look at the last 2 or so years of AMD claims that they end up not living up to. Bulldozer going to put AMD CPUs back on par with Intel, Mantle being open source, and FreeSync not requiring new/extra hardware. Just to name a few.

May 18, 2015 | 01:10 AM - Posted by ROD (not verified)

That is a plain lie. AMD claimed Bulldozer would have the benefit of more cores with almost the same IPC! The reason Bulldozer was slow, though, was not enough money, and therefore an automated layout and, because of that, a slow cache. They didn't have money because Intel had been using unfair business practices for years, so AMD could not really make much profit even when Intel was struggling with the P4!

May 18, 2015 | 10:17 AM - Posted by chizow (not verified)

Source is easy:

1) AMD/DICE said they had been working on Mantle 2+ years before it launched in late 2013.
2) AMD said they were inspired to work on a low level API due to their close involvement with the console wins. Press even guessed it was basically rebadged XB1 API, but all parties distanced themselves shortly after, but it was clear there were similarities.
3) XBox was originally called the DirectXBox and was always lauded for its ability to extract more performance from modest PC specs by leveraging a low level API version of DX.
4) Microsoft has been working on low level hardware for over a decade since XB1. They were also demo'ing the "XBox One" before it even launched or was in production in 2013 on PCs using Nvidia hardware using their precursor DX12 API.
5) They demonstrated full DX12 running Forza on Nvidia GPUs shortly after Mantle launched in Mar 2014, roughly 3 months later. Remember, it took AMD 2+ years to bring Mantle in Beta form.
6) Microsoft was going to launch a new OS in 2015, packaging a new DX with it makes too much sense.

Obviously, based on the timelines and facts we know, it should be obvious Microsoft had been using a precursor version of DX12 already on their XB1, Mantle simply forced them to announce it sooner than they would have liked.

May 18, 2015 | 05:24 PM - Posted by arbiter

"AMD said This" "AMD said that" Yea yea AMD has said a lot of BS over the years that end up being a Lie's.

5) problem is 2+ years for AMD to get beta, give same project to MS or nvidia they would do same work in 6months.

MS being they have to make DX with legacy support 3-4 years sounds like it would be plausible as they gotta make sure its almost 100% perfect in code and not buggy. MS can't do "beta" really for it as AMD could with mantle. but mantle is dead now so no reason to talk about it.

May 18, 2015 | 04:11 AM - Posted by pm (not verified)

"MS was working on DX12 for 3-4 years before mantle was announced."
well, yes and no.

MS was working on something called DX12 for a longer time, like any company. however that DX12 was more of a successor and incremental step from DX11 or more specifically D3D11. so it would still not have been an API that was resembling how a modern (well 2-4 year old) card is working.

at some point in the development of D3D12, Mantle showed up and was shown to developers behind closed doors. a lot of those developers knew immediately that this was the way to go for the future of high performance 3D graphic APIs.

at one of those showings folks from Khronos as well as Microsoft saw this API and started to think of if this might really be the future.

as we all know by now, they both decided on going the mantle or better low level way of representing how a modern GPU works.

Khronos has already come forward and said that AMD gave them most of what is now called Vulkan. there are bits and pieces that are new and that were added for compatibility (not necessarily between different hardware but also for different software systems), but looking at both APIs most of what was once mantle has been directly transferred to Vulkan.

Microsoft also adopted a lot of what was done for mantle. there isn’t really that much of a difference. things are named differently and they are used slightly different but that’s mostly it. (at least on the API level)
if you look through the mantle specifications (1) and the specs for D3D12 (2) you’ll see a lot of similarities between the two.

what is definitely different for D3D12 is the fact that it is a C++ object oriented interface whereas Vulkan and mantle are C procedural interfaces.

which of the two will win in the end is still open and will probably be decided later this or next year, when both APIs are available to the public and there is actual software that supports and uses it / them.

generally speaking though, all three (well if you count Apples Metal API, which does mostly the same) of these APIs just follow along the lines of the graphic APIs that have been available on consoles for the last decade or even longer.
it just has become much more important in the last couple of years because the current GPUs are much more capable and flexible than older ones. it’s also possible to drive multiple completely independent execution contexts on many of them that allow you to use compute, graphics and other stuff at the same time in parallel.

that latter stuff btw is something that the GCN cards are a little bit better at than the chips from Nvidia.

"Closer to metal was apart of it before that cause no way MS would have time to do it in such a short period since they gotta build in more legacy support in to it then AMD has for their own cards."
the first iterations of DX12 were not closer to the metal than what we already have with DX11.3.

there is not more legacy support in the API than there is in mantle or vulkan.
the lowest card spec you’ll need is a DX11 capable card. specifically anything starting at GCN on AMDs, Fermi on Nvidias and Haswell on Intels side.

there will however be performance differences between all of those architectures and as usual newer architectures will very likely perform a lot better than older ones.

i don’t know what Nvidias or AMDs driver folks do but i can imagine that AMD isn’t having a hard time adopting the lower level APIs that are Vulkan and D3D12, because their hardware will benefit a lot more than the current nvidia one. that is partly because they are more tailored towards compute and because they, at least on the hardware level, are more efficient in raw performance (not necessarily in performance per watt) than their competitors.
they are however still severely driver limited when it comes to their D3D 10 / 11 performance. if you have a well optimised OpenGL 4.4 AZDO application / game the performance advantage of Nvidia cards fades pretty rapidly.

all i can say is that the future with the new graphics APIs is certainly going to be very interesting for the developers and the consumers.

in case anyone wonders, i use GPUs from all three vendors and have had enough trouble with all of them to hate them each individually. especially from a developer perspective.

---
1 http://www.amd.com/Documents/Mantle-Programming-Guide-and-API-Reference.pdf
2 https://msdn.microsoft.com/en-us/library/windows/desktop/dn899121%28v=vs...

May 21, 2015 | 05:58 AM - Posted by akaronin (not verified)

WRONG ! DX12 IS Mantle.

May 17, 2015 | 03:14 PM - Posted by me (not verified)

I would agree with most of your statement. However, when you try to justify the business practices of one company it completely derails your argument.

I think the statement could just as easily be: "Yes, I know Nvidia made G-Sync... However if it wasn't for G-Sync I very much doubt we'd be seeing adaptive refresh rate tech as soon as we are." The same could be said for all technologies both companies have introduced that were never completely open.

Part of Ryan's argument is that for the majority of people this stuff pushes graphics technology. I'd claim that most of it is from a complex profit maximization formula. Therefore I would believe that AMD trying to get consumers riled up about these proprietary technologies is as much business strategy as helping gaming. I would claim the same for Nvidia's proprietary tech.

May 18, 2015 | 03:08 AM - Posted by Klimax (not verified)

Tessellation is part of the DirectX standard. NVidia is using only standard features. Too bad AMD sucks at them... (including driver performance)

May 18, 2015 | 11:19 AM - Posted by chizow (not verified)

And AMD has also been much worse in supporting these same Open initiatives:

OpenCL
Open Physics/Bullet Physics
Open3D/HD3D
FreeSync
TressFX (not even truly open)

You get the idea. AMD loves to throw out some loose standards and call it open, but then fails in supporting their initiatives.

May 18, 2015 | 12:27 PM - Posted by remon (not verified)

You're just showing how clueless you are.

May 18, 2015 | 12:40 PM - Posted by chizow (not verified)

LMAO yes Remon, please enlighten us all and demonstrate how well AMD has supported those features over the years. Oh right, Vapor/Abandonware across the board.

Btw, we are still waiting for AMD to fix FreeSync ghosting/overdrive issues and CF FreeSync support that was promised 2 months ago on release day.

May 18, 2015 | 03:22 PM - Posted by Anonymous (not verified)

ChiZoW
#Nvidiot4life

May 18, 2015 | 03:32 PM - Posted by chizow (not verified)

Anonymous AMD fanboy #2534
#AMD4lifetilbankruptcy/buyout(soon)hopefully

May 18, 2015 | 05:27 PM - Posted by arbiter

If the ghosting issue comes down to the voltage used on the panel, then more than likely the only fix will be at least a firmware update for the monitors (yea, good luck getting that for your monitor) or a hardware revision. Hence why I say people thinking about getting a FreeSync monitor should wait for 2nd/3rd gen ones so they can fix the issues in them.

May 23, 2015 | 06:11 PM - Posted by DCPanther (not verified)

It's like.. maybe before AMD rolled it out, they should have worked more closely with their hardware providers and heck.. even made their own hardware-based implementation so the launch was more successful and showed off the technology..

Damn.. if only some other company had thought of doing that..

May 17, 2015 | 02:38 PM - Posted by Aurhinius (not verified)

To be honest I agree that most of the technologies that NVidia have are improving games in general and we shouldn't be upset about that whomever's hardware you prefer.

Bottom line for me is the fact that AMD had the opportunity to implement Tressfx in games and they do. It runs badly on NVidia hardware and we all except it.

This kind of practice will continue for as long as both companies exist. The developer could have implemented both but then they probably didn't have the resources for it. Some times the pendulum swings in AMD favour some times in Nvidia. It sucks when it's a game you really want to play in all it's glory but when the shoe goes on the other foot we rarely hear people complain.

To be honest, Ryan, I'm not sure why this is even news. Do people ultimately care? The practice has been happening for years. Everyone knows it happens and that developers optimise for different vendors' cards.

May 17, 2015 | 03:04 PM - Posted by Mark Luther (not verified)

Here's the issue: it seems you're not aware of all the facts.

TressFX runs equally well on both AMD and Nvidia hardware. This is evident in Tomb Raider benchmarks. Initially AMD enjoyed an advantage but because the code is 100% open Nvidia was able to optimize it quickly and get performance on par with AMD.

HairWorks is different in that the developers are NOT allowed to optimize it for AMD, at all. The Witcher 3 developers even publicly talked about this.

May 18, 2015 | 03:10 AM - Posted by Klimax (not verified)

Patently wrong. It cannot be optimized because AMD simply sucks at tessellation. Insufficient hardware. Too bad AMD ignored all the warnings they got since Kepler. There is no way to shift the blame onto Nvidia...

May 18, 2015 | 06:16 AM - Posted by Luthair

Uh, no. If the code were open, AMD would have the opportunity to optimize it exactly as Nvidia did with TressFX.

May 18, 2015 | 10:45 AM - Posted by Anonymous (not verified)

Sounds like Nvidia is really worried about AMD's upcoming HBM-based GPUs. Nvidia has gone out of its way to produce a closed-source lock-in, using closed-source software and artificially engineered software incompatibilities to keep market share until Nvidia can field hardware competition that utilizes HBM. Nvidia's latest financials are not up to previous performance, so look for more of the same from Nvidia.
Nvidia has a large share of the GPU market and will fight by hook or by crook to keep its market standing. AMD will have to continue to develop Mantle internally and use Mantle to provide Khronos, M$, and others with the necessary improvements to these graphics APIs. Nvidia is all about the lock-in, just as any monopolistic interest is once the market share is tilted so far in the monopoly's favor.

Let's hope that AMD can also take its future Zen product line and create a great HPC/workstation APU, as the HPC/workstation, server, and supercomputer market is what gives Nvidia a lot of its revenues for R&D. AMD needs to get as much of the enterprise/server accelerator business as it can, so it can make the revenues necessary to fund the R&D to compete with Nvidia. The more sales AMD makes to the HPC/server market, the more that market will fund the technologies that benefit gaming. Nvidia is in the process of segmenting its GPU product line even further by reducing its products' double-precision performance and charging to add that performance back. Nvidia is all about making as much profit as possible at the expense of overall performance per dollar across its product line. Nvidia is to GPUs what Intel is to CPUs in overall market tactics, so expect more of this type of closed software used to gain an advantage and keep the competition from gaining any market share. With Nvidia this is just business as usual, so expect to pay $$$$$ to play.

May 18, 2015 | 11:20 AM - Posted by chizow (not verified)

^ Much pie in sky in this post.

May 18, 2015 | 05:28 PM - Posted by arbiter

Um, AMD doesn't need source; it's called getting better performance with the standard DX11 tessellation. That would likely fix most of the HairWorks complaining.

May 18, 2015 | 10:10 AM - Posted by chizow (not verified)

I think it's an important discussion to be had, since certain folks are trying to frame this as another AMD Good vs. Nvidia Evil story. In reality, it is just another case of Nvidia investing in making its products better for its end-users; if some AMD fans also enjoy it, then so be it.

Ryan and PCPer, like most press, have privileged access to all involved parties, Nvidia, AMD, and the game developers, so of course getting the discussion out there and on record is important imo.

More often than not, I think you will find it is AMD that is constantly on the offensive against Nvidia initiatives - not with superior products or support, but in the PRESS. It is harder and harder to relate to or support a company like AMD that engages in this kind of "social media" marketing rather than a company like Nvidia that just unapologetically introduces new and awesome features that any gamer should enjoy!

May 18, 2015 | 03:16 PM - Posted by Azix (not verified)

Let's get one thing straight - AMD INVESTS. Nvidia just invests in an anticompetitive way. If you look at the major movements that really help gamers on PC as a whole, it's ALMOST ALL AMD.

Nvidia is just in their own little world pretending they rock, when nothing they do has major relevance outside of their current GPU architecture. Meanwhile, AMD is helping them by investing in forward-thinking tech.

May 18, 2015 | 03:39 PM - Posted by chizow (not verified)

Sure, AMD invests - POORLY. Some examples: Mantle with the BF4/EA/DICE deal, and now FreeSync. AMD chose to invest in their own way and they failed.

And how is it anticompetitive? LMAO, again I don't think you understand what competition means. It is not investing resources and then giving the result to your competitors; it is investing resources to make your products better for your users and drive a profit for your shareholders. They have no obligation to make things better for AMD.

If anything, I would say Nvidia driving AMD through competition is what is forcing AMD to try and keep up. There are numerous examples, but it is clear that the harder Nvidia competes, the harder it is for AMD to keep up and remain relevant. Still, you as AMD fans get better products as a result (at least on paper).

So let's see, Nvidia introduces one thing, AMD (poorly) copies it with their own initiative months later, and usually still never gets it right:

1) PhysX > OpenPhysics (Bullet?)
2) G-Sync > FreeSync
3) CUDA > OpenCL
4) DSR > VSR
5) GeForce Experience > Raptr? lol
6) Optimus > Enduro
7) GameWorks > TressFX
8) SLI > CF
9) 3D Vision > HD3D/Open 3D

I mean sure there's a few things AMD brought to market first, like EyeFinity, but as you can see they are few and far between compared to Nvidia driving the market and bringing awesome innovative new features that they continue to support, unlike AMD.

May 17, 2015 | 03:11 PM - Posted by Anonymous (not verified)

The essential takeaway from this is that HairWorks depends heavily on tessellation performance, and we have known since the GTX 680 was released that NVIDIA's architecture performs better than AMD's GCN at tessellation - often by a significant amount. NVIDIA developed its middleware to utilize the strength of its own GPU technology and, while it's clear that some disagree, not to negatively impact AMD.

Simple way to test.

A Radeon R9 285 should outperform a R9 290X when HairWorks is turned on.

http://techreport.com/r.x/radeon-r9-285/tm-x32.gif

The R9 285 should see minimal impact when HairWorks is turned on, since it has almost 2x the tessellation performance of a 290X.

May 17, 2015 | 03:19 PM - Posted by Someguy (not verified)

Uncanny, I was thinking the same thing!

May 17, 2015 | 05:53 PM - Posted by DaKrawnik

Tessellation is only one part of it.

May 17, 2015 | 10:06 PM - Posted by arbiter

That is pretty much what I said earlier: if AMD improves tessellation performance on their cards, they would mostly run almost as well. Hopefully a site does a 285 vs. 290X test to see if the 285 performs better; if it does, that would be the end of the argument.

May 18, 2015 | 09:51 AM - Posted by chizow (not verified)

Yep, mentioned this in my comments as well. AMD obviously knows their tessellation perf is deficient (it has been since Cypress, actually) and they obviously took steps to address it with Tonga.

I guess we must fault Nvidia for making good use of a feature that was one of the headliners for DX11, some 4-5 years after DX11 went mainstream? Hell, we all know AMD was screaming from every rooftop about the benefits of tessellation even going back to DX10.1. Funny how things change so quickly!

May 18, 2015 | 12:26 PM - Posted by Someguy (not verified)

Just thought I'd follow this up with some benchmark numbers.

Witcher 3, review version 1.02, 1080p, Ultra details, in-game AA, Catalyst 15.4.1 Beta, GeForce 352.86 WHQL.

R9 290X, HairWorks enabled, drop in minimum fps = 37.5%
R9 285, HairWorks enabled, drop in minimum fps = 30.2%
GTX 970, HairWorks enabled, drop in minimum fps = 21.9%
GTX 770, HairWorks enabled, drop in minimum fps = 17.7%
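
For reference, each of those figures is just the relative loss in minimum frame rate when HairWorks is switched on. A minimal Python sketch of that calculation, using made-up placeholder fps values rather than the benchmark data above:

def hairworks_drop(min_fps_off, min_fps_on):
    # Relative loss in minimum frame rate when HairWorks is enabled, in percent.
    return (min_fps_off - min_fps_on) / min_fps_off * 100.0

# Hypothetical minimum-fps readings (placeholders, not measured numbers):
samples = {
    "Card A": (50.0, 35.0),  # (HairWorks off, HairWorks on)
    "Card B": (60.0, 48.0),
}

for gpu, (off_fps, on_fps) in samples.items():
    print(f"{gpu}: {hairworks_drop(off_fps, on_fps):.1f}% drop in minimums")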

May 18, 2015 | 05:34 PM - Posted by arbiter

Yeah, I would say those numbers make it very plausible that if AMD improves tessellation performance, the complaints about performance issues would be gone. The 285's extra hit relative to the 970 was about half of the 290X's.

May 17, 2015 | 03:18 PM - Posted by Someguy (not verified)

If this is just related to tessellation performance, as Nvidia implies (I'm very skeptical it's that simple), we should see Tonga (GCN 1.2) based GPUs and the upcoming Fiji GPUs take a much smaller performance hit than Hawaii with HairWorks enabled. Although pretty poor generally, the R9 285 has almost double the tessellation performance of the R9 280X. Should be easy to test.

May 17, 2015 | 03:20 PM - Posted by JohnGR

If Radeon users boycott the game, the developers will probably find time to use TressFX or, even better, to create a physics engine that runs everywhere - on AMD, Nvidia and Intel GPUs.

I really cannot understand someone who asks for my money and at the same time apologizes to me for giving a better product to someone else who paid the same amount of money as I did, because he didn't have enough time for me.

There is NO excuse here. If the developers don't have time for their customers using AMD or Intel GPUs, they should offer another version without the Nvidia features at a big discount.

May 17, 2015 | 09:15 PM - Posted by renz (not verified)

lol

May 18, 2015 | 09:49 AM - Posted by chizow (not verified)

So do you cry every time you get on a plane and walk by the 1st class passenger cabin?

May 18, 2015 | 11:53 AM - Posted by JohnGR

Completely pointless example with a huge amount of ego. But I wouldn't expect something more from you, anyway.

The correct question here is, would you accept to pay for first class and sit in the economy class?

May 18, 2015 | 12:27 PM - Posted by chizow (not verified)

But you didn't pay for first class, you paid for economy class and expect 1st class treatment, sorry, you get what you paid for.

May 21, 2015 | 08:01 AM - Posted by JohnGR

Economy class? You mean a 780Ti and a 6GB Titan, or a Titan Z? Got it!

May 21, 2015 | 09:09 PM - Posted by chizow (not verified)

Nah that was 1st class when the ticket was bought, and now that plane is headed toward retirement. Unlike AMD which was only business class back then, and now economy where it belongs.

May 19, 2015 | 04:54 AM - Posted by Anonymous (not verified)

Yeah, I'm sure they care about all 5 of you.

May 19, 2015 | 05:50 PM - Posted by JohnGR

11

May 23, 2015 | 06:24 PM - Posted by DCPanther (not verified)

They do.. it's called the Xbox One and PS4 versions...

May 17, 2015 | 03:49 PM - Posted by BoMbY (not verified)

The problem is, Nvidia already has a significantly higher market share and is abusing it for their benefit (trying to create a monopoly). I would say the FTC (or whoever is responsible) should take a closer look at all this before it is too late.

May 17, 2015 | 09:43 PM - Posted by arbiter

It's up to the game dev in the end to make the call to use it. Nvidia doesn't force them to.

May 18, 2015 | 01:15 AM - Posted by ROD (not verified)

You are wrong! They pay for that, and money is a significant force!

May 18, 2015 | 12:58 PM - Posted by chizow (not verified)

Oh man, this nonsense again. Looks like AMD and their fanboys are throwing in the towel already if they are hoping for help from regulatory agencies.

I always find it an interesting dichotomy when AMD and their fanboys claim competition is always good and necessary, but when Intel and now Nvidia compete AMD into the ground, suddenly they are competing too hard and should stop, lol.

Unfortunately for AMD and their fans, that's not how it works. These aren't technologies that are going to be restricted under typical monopoly guidance, and even if Nvidia reached a monopoly in the PC market, there are numerous other options and alternatives that would deflect any concerns about monopoly.

May 17, 2015 | 04:03 PM - Posted by Daniel Nielsen (not verified)

As a person who alternates between both brands when getting new cards, I seriously don't see the big issue here. I would if the features were in no way possible to use, or exclusive to one side and non-functional on the other. But that is not the case here; if you are afraid of performance loss, don't use the feature. It's really nothing that will detract from the game.

Also, I find it gross that some of the people here are playing the "that's what you get for buying a cheap card" elitist-dirtbag angle.

May 17, 2015 | 04:34 PM - Posted by Anonymous (not verified)

http://techreport.com/r.x/titan-x/tessmark.gif

http://techreport.com/r.x/geforce-gtx-960/tessmark.gif

AMD can't master tessellation at all. AMD is holding back gaming and graphics advancement, as usual.

May 17, 2015 | 05:35 PM - Posted by Someguy (not verified)

GCN 1.2 (R9 285) is doing relatively well in that second chart. That aside, how is this holding back gaming? Doesn't this article show that Nvidia is implementing it regardless?

May 18, 2015 | 01:19 AM - Posted by ROD (not verified)

TessMark is an irrelevant tessellation benchmark, since there is no game on the market at the moment that uses OpenGL tessellation!
Secondly, GCN is very versatile, and using higher tessellation factors makes no sense as you can't see any difference. Also, at high tessellation factors, efficiency is around 6.25% for AMD and 15% for Nvidia. Yes, Nvidia is better, but you still lose 85% of the throughput there!
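
To put those claimed efficiency figures in concrete terms, here is a rough back-of-the-envelope Python sketch; the peak primitive rate is a made-up placeholder, and the efficiency values are simply the percentages claimed above:

# Effective triangle throughput at a very high tessellation factor,
# assuming effective rate = peak rate * efficiency.
peak_tris_per_sec = 4.0e9  # hypothetical peak primitive rate, not a measured spec

claimed_efficiency = {
    "AMD (claimed)": 0.0625,
    "Nvidia (claimed)": 0.15,
}

for vendor, eff in claimed_efficiency.items():
    effective = peak_tris_per_sec * eff
    lost = (1.0 - eff) * 100.0
    print(f"{vendor}: {effective:.2e} triangles/s effective, {lost:.1f}% of peak lost")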

May 18, 2015 | 07:32 AM - Posted by Jabbadap (not verified)

Well, they have improved it with Tonga; check HardOCP's Far Cry 4 god ray performance test:
http://www.hardocp.com/article/2015/01/12/far_cry_4_graphics_features_pe...

(Enhanced god rays use tessellation -> the R9 290X shows a massive performance drop while the R9 285 shows only a very minor one.)

May 17, 2015 | 04:38 PM - Posted by Anonymous (not verified)

http://blogs.nvidia.com/blog/2014/07/15/how-gameworks-makes-building-bet...

"As someone who used to be on the other side of things, I can provide some perspective here. When I was working on my own game projects, here is what would happen: NVIDIA would create a mind-blowing demo. Then my boss would see that demo and say, “Hey, we should put that in our game, like now.”

This would cause me to panic, because as cool as that demo NVIDIA just published is, it was just that — a demo. The process of going from a tech demo to something that could be incorporated into a functioning game was a huge missing piece of the puzzle.

Even though it was helpful to have full source code to these tech demos, it was still a major task to write from scratch your own implementation of the technique. Also, often a tech demo is just that a demo. It doesn’t have to address any of the myriad of issues raised by a full game engine integration. For example, how a particular technique has to work with every other part of the pipeline was often an open and unsolved problem.

No longer. Since I joined NVIDIA five years ago, I’ve been able to act as an in-house advocate for game developers, and I’m not the only one. NVIDIA has a long record of hiring the best software engineers. Many of my colleagues come from the game industry.

That’s why NVIDIA launched GameWorks. It’s now a mandate within our group that any new technology dreamed up by our R&D team must be packaged in a way that can be integrated and adopted by the game industry with ease. Rather than releasing a white paper, demo and some sample code with the implied message “you figure out the rest,” we’ll develop a formal SDK, documentation and tools, and build the product into one or more major game engines before we will call it “done.”"

AMD should get out of the GPU business if all they do is slander hardworking and innovative companies like Nvidia that have a world-class software team.

May 17, 2015 | 04:48 PM - Posted by Anonymous (not verified)

Holy smokes...

An employee talking up the company that pays him.

THIS MUST BE NEWS!!!!

May 17, 2015 | 05:37 PM - Posted by Someguy (not verified)

Nvidia blogs are my go-to choice for impartial Graphics industry analysis.

May 18, 2015 | 09:47 AM - Posted by chizow (not verified)

Well you could always read the developer interviews and blogs that say the same thing. ;)

The simple fact of the matter is that Nvidia dedicates hundreds of millions of dollars in software engineering resources that they give away to devs. You will see Nvidia's name in the credits of many AAA games; that work helps make games better for Nvidia users and, sometimes, AMD users too.

If you don't like it, turn the features off or buy a console. Or maybe even use them on your AMD card - you might actually be surprised to find yourself enjoying them!

May 18, 2015 | 12:31 PM - Posted by Someguy (not verified)

Yes, because AMD have done nothing over the years to advance PC gaming.

Whose AMD card? Are you so partisan that you couldn't imagine someone with a long history of buying Nvidia thinking this is a bad road to go down?

From your post history you obviously have an intense emotional attachment to Nvidia, which, although a bit sad, is totally fine. Thanks for the condescending and irrelevant advice though; really appreciated.

May 18, 2015 | 12:55 PM - Posted by chizow (not verified)

Where did I say AMD did nothing to advance PC gaming? Sure, they have done a few things here and there, but more often than not they are crying about Big Mean Intel and Nvidia pooping on their parade.

I prefer Nvidia because they've EARNED it by delivering great solutions for my favorite pastime, PC gaming. There's attachment, just as there is with anything you enjoy in your daily routine. Do you enjoy your nice car? Your favorite sports team? Your favorite brand of clothing? Your favorite restaurant? I wouldn't reward AMD for things contrary to those goals, sorry; to me that would be deep-rooted emotional behavior, which is obviously the only thing driving AMD fanboys nowadays.

May 18, 2015 | 11:13 PM - Posted by Anonymous (not verified)

You are the one that fudged the math - 3.5 = 4 - for your wonderful Leader ("we love our leader," you say), and you are a marketing monkey. It's called Nvidia ScamWorks, contrived to build in that Nvidia vendor lock-in and then bill to the max for the parts. Get ready for the bank loans needed for Nvidia products - expensive products that have all of the DP performance taken out, with more cost added to get it back in. The Witcher 3, now with Nvidia's green hairy trolls baked into the middleware.

May 17, 2015 | 04:54 PM - Posted by Anonymous (not verified)

I don't really understand why this is even being debated. NVIDIA did something good, AMD / AMD Fanboys are complaining about it. Just another day on the Internet.

May 17, 2015 | 05:24 PM - Posted by Someguy (not verified)

Thanks for your insightful analysis. Nvidia "did something good," and anyone who has any issues with the closed-source, black-box approach Nvidia is taking here with GameWorks, and the possible fragmentation it may cause, is a fanboy (for a product I don't even own). Bravo, nothing to debate here, Anon has it all figured out.

May 19, 2015 | 03:49 PM - Posted by Anonymous (not verified)

You're suggesting minor visual flair can cause fragmentation. I don't see how that's the case. Can you elaborate?

May 17, 2015 | 05:16 PM - Posted by Anonymous (not verified)

I'd prefer it if both AMD and Nvidia simply kept out of the middleware space altogether. Nvidia is already trouncing AMD in power efficiency and IPC, so this sort of thing just feels like they're laying it on thick. Just imagine the kind of outcry/arms race that would ensue if Nvidia acquired the Unity engine and future versions were mysteriously 'optimized' for their cards! AMD would have to develop or acquire their own proprietary engine to compete. None of this would help push the industry forward.

May 17, 2015 | 09:33 PM - Posted by renz (not verified)

And then those PC elitists will start whining about graphics parity with consoles, or that there is nothing graphically unique about the PC version of the game. We get stuff like this from both AMD and Nvidia not just because of the competition between the two, but also because of PC vs. console. Nvidia and AMD are both selling GPUs, so they need a reason for people to play the game on PC, and offering stuff like PhysX and TressFX is one way to differentiate the PC version from the console version. For a developer, it probably makes more sense to aim for the same level of graphics on every platform.

May 18, 2015 | 01:26 AM - Posted by ROD (not verified)

AMD has the most efficient compute architecture at the moment, as they are #1 in the Green500.
Efficiency is always a matter of parameters! AMD is better in certain metrics: compute perf./TDP, transistors/die size, etc.
IPC is irrelevant on a GPU - these are highly parallel chips!

May 18, 2015 | 09:32 PM - Posted by renz (not verified)

Looks nice sitting at the top, right? Not so much once you see this:

http://www.top500.org/system/177996
http://www.top500.org/system/178083

May 17, 2015 | 05:29 PM - Posted by pdjblum

AMD has become irrelevant in the x86 market, it seems. Competing against Intel is a monumental task, but AMD has had every opportunity to stay on level ground in the GPU market. Nvidia is not Intel; there is no reason why AMD cannot compete with Nvidia and thrive. Poor management, beginning with Hector Ruiz - not Nvidia - is to blame. Unlike AMD, Nvidia, notwithstanding all of its hyperbole about its other ventures, focused on the GPU and its importance in computing going forward. Any success Nvidia has had in non-gaming segments is because of the GPU tech it uses in those endeavors.

May 17, 2015 | 05:30 PM - Posted by Anonymous (not verified)

No longer matters.

When you own 75% of the discrete GPU industry you can do whatever you want. Developers are going to go where the market goes. AMD needs to get their shit together.

May 17, 2015 | 06:16 PM - Posted by DaKrawnik

I'm so tired of all this crying. NO ONE IS FORCING ANYONE TO USE GAMEWORKS!!! IF IT RUNS LIKE CRAP, DON'T FREAKING ENABLE IT!

Or ask AMD to create their own version and get off Nvidia's coattails. I'm so freaking tired of these AMD gimps with their gimme-gimme-gimme complex. They demand to pay the least amount of money for hardware, and demand that the competition spend its time and money giving them features to make their gaming experience better - for FREE!

Last year Huddy said Mantle could be open to all, but then said not now, because it's a closed beta that only runs on the AMD GCN hardware he continually says is better. Then what happened? AMD handed Mantle to Khronos.

Huddy says FreeSync can go as low as 9Hz, but current monitors have a minimum of 40Hz.

Huddy says there are no license fees or added cost to the BOM of FreeSync displays, yet their first 1440p monitor (BenQ) has an MSRP of $799 US.

But, but, but people say if AMD goes away Nvidia will raise the shit out of their prices. Really? Is that what Intel did with their CPUs? Is that what Microsoft did with Windows? NO! Because if it were to happen, consumers would speak with their wallets and guess what would happen? Prices would DROP!

#amdgamersarestupid

May 17, 2015 | 07:06 PM - Posted by Anonymous (not verified)

Well said +1

May 18, 2015 | 06:24 AM - Posted by Luthair

So in your world, will people turning off GameWorks be able to pay less for the game?

Looking beyond the Nvidia-AMD spat, the people who are really to blame are the developers; they're choosing to deliver a sub-par experience to part of their customer base.

May 19, 2015 | 04:01 PM - Posted by Anonymous (not verified)

Or they're delivering a premium experience to those who have hardware that supports it. Using this logic, you could say the game should cost less because the card you bought in 2004 can't run the game very well, or at all.

This hair feature is not required to enjoy the game on any level. It's a visual plus. Stop lying around being entitled and butthurt.

May 18, 2015 | 09:45 AM - Posted by chizow (not verified)

+1. AMD fans need to stop making excuses for AMD, stop accepting their explanations, and simply demand better support.

What needs to happen is that AMD stops trying to poorly feature-match Nvidia, reins it in, and at least comes through with what they've already promised to date. Then go from there.

May 17, 2015 | 06:36 PM - Posted by Keven Harvey (not verified)

I don't like what Nvidia is doing. It can only end with titles that are crippled on one brand if AMD chooses to do the same thing, or with a monopoly for Nvidia if AMD ends up unable to compete. That's why I will be voting with my wallet, and my next GPU will be an AMD one.

May 17, 2015 | 09:53 PM - Posted by arbiter

If tessellation performance on AMD cards didn't suck as much as it does, we probably wouldn't even be talking about this. AMD fans would just nitpick another issue to be pissy about.

May 18, 2015 | 05:41 PM - Posted by funandjam

True, tessellation is not great on most AMD cards, but on the latest one it was actually improved a fair bit. Your comments make it sound like it's bad on all cards and never going to improve, which is not true at all. Being first to release a technology doesn't automatically make the competitors bad or worse.

That being said, your second sentence describes you exactly: here you are again, being nitpicky and pissy about AMD cards as usual.

May 17, 2015 | 07:12 PM - Posted by ThorAxe

So GCN sucks at tessellation. There are also a few games where GCN performs better than Maxwell, so what?

May 17, 2015 | 07:30 PM - Posted by Anonymous (not verified)

TressFX (DirectCompute) is faster than HairWorks' CUDA tessellation:

http://wccftech.com/tressfx-hair-20-detailed-improved-visuals-performanc...

http://cdn3.wccftech.com/wp-content/uploads/2014/09/tfx_tr_perf.png

May 18, 2015 | 10:32 AM - Posted by chizow (not verified)

It also doesn't look as good, but if AMD is willing to dedicate the man-hours to implementing it into games, I will be happy to use it over the competing alternative of vanilla console mode.

I'm certainly not going to cry about it, like AMD fans seem to love to do.
