A recent post on Top Class Actions suggests that buyers of NVIDIA GTX 970 graphics cards may soon see a payout from a settlement agreement in the series of class action lawsuits facing NVIDIA over claims of false advertising. NVIDIA has reportedly offered a preliminary settlement of $30 to "all consumers who purchased the GTX 970 graphics card," with no cap on the total payout amount, along with a whopping $1.3 million in attorneys' fees.
This settlement offer is in response to several class action lawsuits that consumers filed against the graphics giant following the controversy over mis-advertised specifications (particularly the number of ROP units and the amount of L2 cache) and the way in which NVIDIA's GM204 GPU addresses its four total gigabytes of graphics memory.
Specifically, the graphics card's specifications initially indicated that it had 64 ROPs and 2048 KB of L2 cache, but it was later revealed to have only 56 ROPs and 1792 KB of L2. On the memory front, the "3.5 GB memory controversy" spawned many memes as well as investigations into how the 3.5 GB and 0.5 GB pools of memory worked and how performance, both real-world and theoretical, was affected by the memory setup.
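For a sense of why the split pools mattered, here is a rough back-of-envelope sketch (my own, using the widely reported figures: 7 Gbps GDDR5 on a 256-bit bus, with the 3.5 GB pool served by 7 of the 8 32-bit memory channels and the 0.5 GB pool by the remaining one):

```python
# Back-of-envelope peak bandwidth for the GTX 970's segmented memory.
# Figures are the publicly reported ones, not measurements.

GDDR5_GBPS_PER_PIN = 7.0   # 7 Gbps effective data rate per pin
BUS_WIDTH_BITS = 256       # full advertised bus width

def bandwidth_gbs(width_bits, gbps_per_pin=GDDR5_GBPS_PER_PIN):
    """Peak bandwidth in GB/s for a memory bus of the given width."""
    return width_bits * gbps_per_pin / 8  # bits -> bytes

full = bandwidth_gbs(BUS_WIDTH_BITS)   # advertised: 224 GB/s
fast_pool = bandwidth_gbs(7 * 32)      # 3.5 GB pool: 7 of 8 channels -> 196 GB/s
slow_pool = bandwidth_gbs(1 * 32)      # 0.5 GB pool: 1 channel -> 28 GB/s

print(f"advertised: {full:.0f} GB/s")
print(f"3.5 GB pool: {fast_pool:.0f} GB/s, 0.5 GB pool: {slow_pool:.0f} GB/s")
```

The 224 GB/s headline number is only reachable on paper; once an allocation spills into the upper 0.5 GB, those accesses run at roughly an eighth of the fast pool's rate, which is why the spill-over behavior drew so much scrutiny.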
(My opinions follow)
It was quite the PR disaster, and had NVIDIA been upfront with all the correct details on the specifications and the new memory implementation, the controversy could have been avoided. As it is, though, buyers were not able to make informed decisions about the card, and at the end of the day that is what is important and why the lawsuits have merit.
As such, I do expect both sides to reach a settlement rather than see this come to a full trial, but it may not be exactly the $30-per-buyer payout, as that amount still needs to be approved by the courts to ensure that it is "fair and reasonable."
For more background on the GTX 970 memory issue (it has been a while since this all came about, after all, so you may need a refresher):
Seems like 970 owners have cheques coming; however, some long-term buying choices will have been made over this that won't be fixed by money.
Yep. Prior to the 970 I bought only Nvidia…I just got an RX480, and from here on I will not give Nvidia a dime.
Looks like you wasted at least $250. Your 480 will get at best 10-20% better frame rates in DX12, and maybe Vulkan, than the 970 you ditched.
In DX11, which is the vast majority of games, you'll be better off with a 970. Not to mention that the RX 480 uses more wattage, among other issues, but hey, let's bash on Nvidia.
If AMD dissatisfies you once, are you going to buy Intel only from now on? That RX 480: what exactly is its TDP? Is it the 110 W AMD first stated, the 150 W AMD reported the second time, or actually closer to 165 watts? If you said 165 watts, you would be right. Are you going to sue AMD for misstating its TDP, or for the PCI Express slot overdraw?
A few thoughts to consider:
– The double precision performance of the 480 is ~3X that of the GTX 970 (323 GF vs 109), so for professional applications the 480 is a significant step forward. BTW, the 480 is also 25% faster than the GTX 1080 for double precision floating point.
– Single precision is 50% faster for applications that can use it (transcoding).
– 8GB VRAM (vs 3.5GB) = good future proofing, especially given modern console ports typically float around 5-6GB of VRAM.
– The GTX 970 drivers are mature; the 480 drivers are newer, likely adding another 10% to the RX 480 over time, as has historically been seen with AMD GPUs.
– I sold my last used GTX 970 on eBay for more than I could have bought a new one for. So he may have made out quite OK :).
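The double-precision figures above can be sanity-checked with the usual throughput formula (a rough sketch of my own; the shader counts, clocks, and FP64 ratios are the commonly published specs, and real boost clocks vary):

```python
# Sanity-check the FP64 GFLOPS claims using the standard throughput formula:
# FP32 GFLOPS = shaders * 2 FMA ops * clock (GHz), with FP64 at a fixed
# fraction of FP32 (1/16 on Polaris, 1/32 on Maxwell).

def fp64_gflops(shaders, clock_ghz, fp64_ratio):
    fp32 = shaders * 2 * clock_ghz
    return fp32 * fp64_ratio

rx480 = fp64_gflops(2304, 1.120, 1 / 16)   # RX 480 at its 1120 MHz base clock
gtx970 = fp64_gflops(1664, 1.050, 1 / 32)  # GTX 970 at its 1050 MHz base clock

print(f"RX 480 FP64: {rx480:.0f} GFLOPS")
print(f"GTX 970 FP64: {gtx970:.0f} GFLOPS")
```

Run at base clocks, this lands on roughly 323 and 109 GFLOPS respectively, matching the numbers quoted in the comment.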
Still, if gaming is the only concern, your first and second points don't matter, and going from a 970 to an RX 480 is a small step up.
And for professional applications, I heard AMD only optimized their drivers for their professional cards. Back in 2012 many got a 7970 hoping to get decent compute performance from it (its DP, for example, did not get cut down like Hawaii's, so 7970/280X DP performance is actually the same as their FirePro counterparts). But the optimized drivers for certain applications never came to Radeon.
My 970 couldn't overclock at all, and my RX 480 runs at 1350 MHz all day long, with nearly 100 GB/s higher memory bandwidth when overclocked.
I can now play DX12 games like Quantum Break above 60fps.
I haven't found any games where the overclocked RX 480 is slower than my 970, but I have many games where I see over a 25% improvement.
See comment below.
They fixed the PCIE draw issue (if you want to call it that)
I sold my 970 on ebay for more than I paid for my RX480.
This was not a PR “disaster” for nvidia. The 970 sold record amounts, and the $30 slap on the wrist still leaves nvidia with a nice tidy profit on the card. AND nvidia avoided a court decision.
Most probably still more profit than they would have made had they been upfront on the 3.5GB of memory.
In this case nvidia lying paid them well.
Yea, the fanboys were standing strong against the onslaught.
Nvidia is pretty anti consumer so anything that hurts them makes me happy.
Is AMD's selling of CrossFire for years, even though it didn't work, also anti-consumer?
Maybe it was, but I think what really happened is that, until FCAT came along, even AMD didn't realise there was a frame pacing issue, as there was nothing that measured this metric prior to that.
I think it was about six months to a year after FCAT arrived and the first reviewers highlighted the issue that fixes started to arrive.
RadeonPro actually had the issue fixed, as it had built-in frame metering; I used it with 7970 CFX and it was fantastic. Tom's actually reviewed this software and highlighted it as a fix.
We know by Nvidia's admission that they knew all about this 3.5GB issue all along and chose to do nothing about it. So I think it's a bit different.
Because it is not the first time they've used a weird memory config on their cards. As it is, the memory config on the 970 is a lot better than the one on the GTX 660 2GB, for example. With the 970 the issue easily came to light due to the new consoles, where games started using far more VRAM. During the GTX 660 generation you would probably run out of grunt before reaching 1.5 GB of VRAM usage, because back then games were not that VRAM-hungry. I owned 660 SLI in the past (now using a 970), so I know what happens when VRAM usage exceeds 1.5 GB. By comparison, I was impressed with how the 970 handles games when VRAM usage exceeds 3.5 GB.
Microstutter and frame pacing were discussed in 2011. AMD didn't even partially fix it until 2013. They still haven't fixed it for DX9, so if you use multiple AMD GPUs and play a DX9 game you'll still get stutter and less performance than a single card.
XDMA CrossFire didn't fix it with the 290X series, for which they sent review samples with a special BIOS to hardware review sites to make themselves look good.
AMD has done plenty that's anti-consumer, but since they make shit products, people make all kinds of excuses for them.
So you're saying that because there's an ancient issue that no one suffered from in any meaningful way, AMD products are "shit"? And this is somehow an excuse for Nvidia to lie and cheat?
Dude, you need to get your fanboy in check, the salt you’re spilling is starting to be pathetic.
No one suffered? There were plenty of people who bought multiple AMD GPUs thinking they'd be getting more performance, instead of LESS. They wasted twice the money on something that gave them less performance, but that's OK?
The official statement by NVIDIA was that there was a miscommunication between the technical and PR teams, and I wouldn't have believed them if my personal experience hadn't proved that this is actually possible in a big company, despite it looking crazy.
The miscommunication is believable. What is not believable is that the following did not occur:
Somebody who knew the actual specs of the card read a review of the card they designed and reported the falsity to corporate.
It is clear that Nvidia tried to cover it up. I hate them for many more reasons than just this issue, but this was the straw that broke the proverbial camel's back.
Kind of like how SLI didn't work for years and crashed your PC? :p I'm certainly never going SLI again, I tell you.
But sure, try to take away from the issue at hand. Fanboys to the rescue!
So…where do I sign up for the settlement?
You won't be able to yet, as this was just a preliminary offer by NVIDIA. The courts still need to approve it. Once the courts approve the settlement (if they don't, it will go to trial, but I think it will be settled), there will likely be a website where you can apply for the settlement offer, and/or the lawyers should contact all class members via mailers at last-known addresses.
Whenever it’s ready, I’m sure you’ll be able to sign up for it somewhere, and you’ll find that on all the usual tech sites.
Just remember you’ll probably have to show some proof of purchase to actually get anything out of this.
I wish lawyer’s fees were also subject to “fair and reasonable” limits.
They are…and if you only knew how much work goes into battling a massive legal team like Nvidia, you would understand just how modest this fee is.
GTFO
Consumer: Hey, Nvidia, you falsely advertised.
Nvidia: Oops, my bad.
Consumer: I'm suing.
Nvidia: Let's raise prices on the next gen and call them FU Editions.
Consumers: Nvidia, you're the best.
This was not some honest mistake. Nvidia obviously knew about it before the release. They just thought that no one would detect it.
The press did not detect it, did they? So Nvidia came close to getting away with it.
PCPer and all the other media gave the 970 a full score.
This is almost as embarrassing for the press as it is for Nvidia. That is why the press has gone easy on them. If AMD had done the same, it would have been game over for AMD.
AMD fucks up too, like the power issue with the RX 480. But that was addressable and fixable. It was also discovered early. A serious fuckup, sure, but not sneaky like the 3.5GB thing with the 970.
It was only after tech sites like this one irrefutably proved that Crossfire didn't work, and had not worked for YEARS, that AMD decided to implement a software fix that STILL hasn't fixed it for DX9.
Customers bought two GPUs, or dual-GPU cards, that were advertised to have better performance than a single GPU, yet they had WORSE performance!
Where are the false advertising claims and lawsuits for that shit?
What exactly do you mean by "not work"? Because I've seen plenty of xfire reviews that make it look like it works just fine.
Without knowing anything, I'd also figure that the number of people who did in fact buy multiple GPUs is an extreme minority for either manufacturer, so that this shit doesn't work in most games for either is a nonissue.
https://pcper.com/news/Editorial/PCPer-Live-Frame-Rating-and-FCAT-Your-Questions-Answered-0
Crossfire, including XDMA Crossfire, did not have frametime metering until AMD released a driver that implemented software (as opposed to Nvidia's more effective and efficient hardware-based) frametime metering.
PCPer has many articles on the subject, and it goes all the way back to 2011, when Scott Wasson from Tech Report started writing about it.
The benchmarks you refer to are NOT accurate, since the vast majority of them used software on the PC being benchmarked to determine FPS.
FCAT was developed to use a separate PC to capture the ACTUAL frames being output by the GPU (this is impossible without an FCAT-type setup).
What the tests revealed is that in virtually ALL GAMES, Crossfire's performance was significantly WORSE than a single card's.
After months of articles detailing how Crossfire and Eyefinity did not work as advertised, AMD began working on a Band-Aid software fix. Crossfire now has positive scaling in DX10 and 11 games, but to my knowledge they never patched it for DX9 games.
Just search for Crossfire frame pacing FCAT on PCPer if you need more history.
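For anyone new to the frame-pacing saga, here is a toy sketch (my own, not FCAT; the frame times and the runt threshold are made-up illustrative values) of why FPS counters that count every frame overstate multi-GPU performance when runts are present:

```python
# Illustrates why FRAPS-style FPS counters overstate multi-GPU performance:
# frames displayed for only a sliver of screen time ("runts") still count
# toward FPS, even though the user perceives far fewer distinct frames.

def effective_fps(frame_times_ms, runt_threshold_ms=5.0):
    """Return (raw FPS counting every frame, FPS counting only non-runts)."""
    observed = len(frame_times_ms)
    non_runts = sum(1 for t in frame_times_ms if t >= runt_threshold_ms)
    span_s = sum(frame_times_ms) / 1000
    return observed / span_s, non_runts / span_s

# Alternating long/runt frame times: the classic micro-stutter pattern
# seen in early Crossfire captures.
times = [31.0, 2.0] * 30  # 60 frames in ~0.99 s
raw, paced = effective_fps(times)
print(f"FRAPS-style FPS: {raw:.0f}, runt-adjusted FPS: {paced:.0f}")
```

In this synthetic trace the naive counter reports roughly double the frame rate of the runt-adjusted one, which is the gap FCAT's capture-card approach exposed.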
And to your last "point": the 3.5GB memory segmentation causes virtually no issues in most scenarios, yet people are happy Nvidia is getting sued. Why is this? Why not the same schadenfreude towards AMD?
To reiterate, AMD knowingly continued to sell Crossfire and multi-GPU cards as a working product when they had proof that it was NOT. Where are the lawsuits and claims of false advertising for them?
Yes, I think Nvidia should get in trouble if they falsely advertised the ROP count intentionally. The 4GB thing is borderline sleazy, but since it actually worked as advertised, it's less sleazy than AMD's whole Crossfire nonsense.
Here is your answer.
>> Where are the false advertising claims and lawsuits for that shit? << You answered yourself already. Unless you don't believe in justice and lawsuits!
Customers who bought two GPUs should be blaming the makers of the OS for never making an OS capable of handling any and all processor adapters plugged into the platform. PCs using dedicated graphics adapters, from the very start of there being PCI-card-based graphics adapters years ago, should have had an OS capable of managing multiple GPUs and other adapters, and the makers of the GPUs should have been required to make their products' drivers allow the closest-to-the-metal access. The maker of the OS and the device OEMs should be the ones held accountable for their end products' inability to do what any PC or laptop should have been capable of doing from the start.
How could the entire personal computing industry be let off the hook for providing such crappy OS/API support for all these years, with OSs unable to properly load balance multiple GPUs or other processor adapters and make FULL use of all the hardware plugged into the platform, from day one of there being PCI-based GPUs? The OS makers and the OEMs should have told the makers of the GPUs to provide a standardized API, under the full control of the OS and applications, to properly manage the GPU makers' products used in single- or multi-adapter fashion, or their GPU products would not be certified to work under the OS.
They could have developed APIs like Vulkan/DX12 from the very beginning of the GPU industry, but they were too greedy and the GPU vendors wanted more lock-in. GPU makers' hardware drivers should only provide the code necessary to allow the OS, graphics API, and applications (via the API) complete access to the GPU's "metal," with the OS, graphics APIs, and applications responsible for managing and load balancing all attached CPUs, GPUs, and other processors. What a decades-long ripoff from the OS makers and the GPU makers, who only wanted to sell hardware, and hardware that was not fully enabled for lack of proper OS and graphics API investment in the first place. The entire GPU and OS industries should have been required to engineer a solution to multi-GPU load balancing and management before the first GPU products were available for inclusion in any PC of any type.
The issue was that Crossfire did not have any form of frametime metering, making your entire post completely irrelevant.
This is something handled by GPU hardware or drivers, NOT the OS. It's like saying defective CPUs or a defective BIOS are the OS maker's problem. It's total bullshit.
No, it says that some gamers are clueless about how they have been getting screwed over for the last decade-plus on being able to use their GPU hardware on PCs! And that some gamers do not know what an OS is for, and that it is the OS's responsibility to manage the hardware on a PC or laptop. To hell with the GPU makers' SLI/CF implementations in their drivers, which have always been ineffective at managing multi-adapter load balancing. AMD and Nvidia should have been told by the OS makers/maintainers to get their hardware under the direct control of the OS, and of the graphics APIs that are under the OS's control, or the GPU makers could take to the highway.
But the gaming market, which started out as a mostly console market, is replete with computing ignoramuses. With their hands held by the gaming industry, some gamers have not the slightest idea what an OS is and does and what the OS is responsible for managing on a computing platform. You see OSs that have had no problem managing dual-socketed CPU systems for decades and that are very good at load balancing multi-socketed CPU systems, yet you cannot make the mental extension that multiple GPUs in two PCIe slots are no different, and the OS makers have far more time and resources invested in working with multi-processing systems in the first place.
The GPU makers have failed in their mission in this regard all along, and now that the Vulkan/DX12 APIs are getting attention for some very good multi-GPU load balancing, that is the direction gaming is heading, and it should have been that way all along, but gaming end users are pretty ill-educated on computing and computing science as a whole. The entire PC gaming industry will be converting over to OS/graphics-API and game/gaming-engine control of the GPU hardware, and it should have been that way all along. CrossFire having whatever ability does not matter, because letting the GPU makers manage multi-GPU load balancing and other functionality was and is a big mistake; that is better done by the OS and the graphics API, under the control of the OS/API and the games and gaming engines themselves.
The GPU makers need to shut the hell up and build their hardware with drivers that allow closest-to-the-metal control to be managed by the OS and graphics APIs, with the gaming software and gaming-engine software calling on graphics APIs that can get at all the GPU's features, and allow the entire gaming and OS industries to develop standard APIs to manage any and all GPU/CPU/other hardware plugged into a PC. If the GPU makers do not like that, then let them make their own OSs and graphics APIs; not very likely, as the GPU makers are terrible at software development.
BOTH Crossfire and SLI suck; they suck now and they have always sucked. The GPU makers need to stick to making the hardware, and the minimal drivers that allow the OS/APIs to get at the lowest level of GPU hardware control, and get the hell out of the way of everything else that rightfully belongs under the OS's and graphics API's control.
I agree with most of what you're saying about the shittiness of consumer OSs and GPU drivers, and the cluelessness of most consumers.
However, the specific issue of frametime metering should be handled at the lowest possible level, which is not the OS; it's the GPU's hardware. If it isn't implemented in hardware, then the GPU drivers should handle it. AMD chose to dismiss the issue until enough of the hardware review sites showed consumers irrefutable proof that Crossfire was knowingly falsely advertised shit that didn't work.
Again with your need to slam only AMD, when you should be asking the OS makers and the PC/laptop OEMs why they ever whitelisted any of AMD's/Nvidia's GPU hardware and drivers in the first place, a very long time ago! The OS makers, the OEMs, the gaming-engine developers, and the entire PC industry should have moved to lock out of the loop any GPU maker that did not comply with the need for OS/API close-to-the-metal access to GPU hardware features.
The managing of any processor's (CPU, GPU, other) load balancing and workload dispatching belongs at the OS level of control, and that includes the graphics APIs, compute APIs, and games software that are under the OS's control. The GPU's hardware drivers should be the absolute minimum size needed for the OS and graphics/compute APIs to get at the GPU's metal, with no driver bloat to get in the way of the OS's and APIs' management of all processors on a PC.
All graphics and GPGPU compute should have been made accessible for any make of GPU, from any maker, on an always-on basis, with none of this switchable nonsense that robbed PC users for more than a decade of the ability to use, all of the time, any of the GPU processing power plugged into their PCs and laptops, for lack of proper required support from the OS makers, the OEMs, and the GPU makers themselves for as many years as GPUs have been available!
There is more brain power to solve these GPU load balancing and workload distribution issues in the OS, API, PC, and gaming industries combined than any of the GPU makers could ever singularly bring to the table for the development of proper GPU load balancing and management on any computing platform. So this whole SLI/CF and improper OS/API multi-GPU support nonsense has been a decade-plus-long ripoff of every PC owner with more than one GPU (integrated or discrete) in their device.
I am not only slamming AMD, or excusing the poor design of operating systems.
However, whitelisting HARDWARE is absurd. x86 CPUs are a standard, but even they come with a variety of extensions, and you can't expect an industry as spread out and diverse as the consumer computer hardware industry to standardize and whitelist hardware, or all the drivers to support it.
Supercomputer vendors and manufacturers custom-make and tune software for customers who pay massive fees for them to do so. Your proposal would drive up the cost of even basic PCs and simultaneously reduce the number of hardware options. You'd be paying Oracle levels of ripoff prices for PCs.
You're right about incentivizing hardware and software makers to make things easier to develop drivers for, but the fact is, they don't. None of the players in the industry ACTUALLY do it.
What Nvidia did do is make their multi-GPU work by implementing frametime metering PROACTIVELY.
AMD, on the other hand, chose to ignore the evidence until they no longer could, and finally patched it partially.
It does not excuse any of the stuff Nvidia supposedly did. I am simply pointing out the HYPOCRISY that people display when talking about the two biggest GPU makers.
AMD can knowingly sell defective products and falsely advertise, and AMD fanboys say "Crossfire works, FRAPS says I'm getting more FPS" even though they're all drops and runts.
Nvidia has a weirdly designed segmented memory architecture that they had basically used with GPUs for years, and people cheer the lawsuits. If they intentionally lied about the ROP count, then they deserve it, though.
All that being said, I warned people to avoid the 900 series (if they could wait) because I knew it was a compromised architecture from the moment they revealed it was 28nm instead of 20.
And yes, the switchable graphics in laptops should never have existed and cause nothing but problems. It is a total gimmick, and I have never seen it work correctly as advertised.
OS makers, OEMs, and the entire gaming industry have plenty of resources to develop proper GPU load balancing APIs/software under the OS's control. x86 users have had software to manage the processors on their multi-CPU/dual-CPU motherboards for decades, and GPUs are processors too, with a few more cores, but still processors. So the GPU makers and the entire PC industry can fund multi-GPU management in the major graphics APIs (Vulkan/DX12), and Apple funds development of its own APIs/drivers for its hardware.
Your need to express your anger at AMD borders on the pathological, and you need to stop with your illogical affinity towards Nvidia or any GPU maker. Nvidia has done its customers a dishonest disservice in not properly reporting the GTX 970's correct specifications.
The entire PC/OS industry, as well as the OEMs, have done a great disservice by not designing PCs with a properly functioning OS/API/application ecosystem that can properly handle more than one GPU, of any make or model, at the same time, with all GPUs on and usable all of the time. This problem has been solved for CPUs for decades, and GPUs have been around for over a decade. Both AMD's AND Nvidia's attempts at CF/SLI should have been stopped, and the proper development of OS/API (graphics and compute) support should have been undertaken at the very time that GPUs were introduced to the home computing/PC market.
Any PC/commercial OS without the ability to properly manage any number of processors plugged into its PCI/other slots or sockets should be declared unfit for its intended purpose and promptly recalled. There should be no switchable-only GPU adapter nonsense allowed, and the computer's end user should be able to use the SoC's built-in graphics/GPU and any number of discrete GPU adapters, as the computer's hardware allows, with every processor available all of the time. It should be up to the game/graphics application to make use of any and all GPU processing power in the same manner the game/graphics application can make use of the CPU(s) on any computer.
Weaker integrated graphics should be fine for some computation tasks while the more powerful GPU(s) are used for gaming and compute, but the end user deserves to have all of the purchased processing power made available for use all of the time by the OS/graphics API/compute API, for whatever computing needs to be done. The only one in charge of turning a graphics/other processor adapter off is the end user, and that control needs to be given only to the end user, not forced by the OS/OEM/GPU/other processor maker. That switchable graphics nonsense is an even bigger ripoff than the GTX 970 memory ripoff.
The GPU makers need to be brought into line, with the OS/API in control of the GPU and the GPU makers' drivers as minimal as possible to allow the OS/API and games (via the APIs) to get at the GPU's "metal." Vulkan/DX12 is the start, but more needs to be done, because the GPU makers are not doing a very good job of making their GPUs available no matter what maker's GPU(s) are plugged into the system.
If you think I'm angry at AMD, you've completely missed my point. If you think I have excused Nvidia, you've missed my point.
I get that OS makers (well, Microsoft, since they are basically the biggest for desktops and laptops) SHOULD do a lot of things.
They SHOULD have made Windows 10 something other than data-collection malware, but they didn't. My anger is at CONSUMERS for buying garbage and driving the market with their ignorant purchases.
The fact that idiot consumers buy stuff that doesn't work, or fall for a ploy to downgrade from Windows 7 or 8 to a spyware reskin of 8 called Windows 10, isn't going to lead to the kind of changes you describe. I totally agree that current operating systems suck, but that doesn't change my point on this issue. People display hypocrisy when it comes to AMD's non-functional multi-GPU and Nvidia's functional but mislabeled GPU.
Multi-GPU is a GPU-maker problem, not an OS-maker problem. Why multi-GPU? Because GPU vendors want to sell more GPUs. Multi-GPU is not a necessity to run a computer. If AMD, for example, wants to sell multi-GPU, they have to be the ones providing the software for it. This is also one reason why game developers are not really interested in multi-GPU in general. GPU makers want to sell more GPUs via multi-GPU setups, but just making it work brings a whole lot more complexity and issues that did not exist with a single-GPU setup. Hence we see most DX12 games do not support multi-GPU, despite the new API already supporting multi-GPU natively.
But… but… but…….
“It’s a good design”
Nvidia designs are good by default. It goes without saying. No testing necessary.
And AMD can do no wrong, even when they knowingly sell multi-GPU technology that gives worse performance than a single GPU and send specially prepared samples to review sites.
Your strawman arguments are kind of absurd.
It’s peak fanboyism when you start believing in conspiracies to defend your fandom.
It's not a straw man if it actually happened. Runt frames were counted as frames with no actual benefit to the user. Check your facts, bro. Also, tone it down a bit, and change your (profane) icon or I will.
*edit* took too long, so I took care of it for ya.
Typical Allyn-style jumping in with the Nvidia defenders' deflecting army.
Are you guys really surprised you're viewed as biased when you yourselves respond as badly as the trolls?
Allyn,
I promise I’m not here accusing you of bias in any direction.
But you’re asking malurt to “tone it down” when BlackDove is being significantly more aggressive and profane – not just in this thread, but in lots of the front page articles’ threads – and he gets no warning.
Granted, malurt’s icon may have been the catalyst (it was gone before I got here so I didn’t see it) but as far as behavior, there are quite a few posters who are being much worse than malurt’s comment was, and most of them are leaning green.
The perception that they’re allowed to be aggressive and profane and occasionally abusive without consequence, but malurt (who appears to lean red) gets a warning for calling someone a ‘fanboy’ is, I suspect, one of the reasons that some people get the impression of bias that they’ve been complaining about.
I said the word "shit," which I distinctly remember Ryan saying ("fix their shit") in reference to AMD's Crossfire issue in one of their YT videos. I have not name-called or been "aggressive." I have also not been "abusive" AT ALL.
I am actually very careful never to directly insult anyone, because I know the rules.
Lots of people who post in these threads are nothing more than clueless trolls who know nothing about computers, engineering, or anything but being fanboys.
And I find it interesting that you say the worst trolls lean green. Most of the loud trolls I've seen are AMD fanboys who defended Crossfire for YEARS, even when sites like PCPer proved that it didn't work at all. The same ones that drone on and on about Intel sucking when an i3 beats an AMD 8-core in a benchmark.
You know, like the dude posting in the thread about Intel 10nm, saying they're milking 14nm too long, while AMD is still making 32nm CPUs from 2010?
Perhaps I was more sensitive
Perhaps I was more sensitive to that commenter because his icon was a dude flipping people off. Either way, I'm really beginning to not give a crap about folks claiming we are biased. I pointed out factual info that we not only verified ourselves, we wrote the articles that led to AMD fixing those issues. Some of these folks are so far out there that Ryan has been accused of bias from *both sides* for the same content in some instances. It's pretty obvious we remain neutral when we get in-studio interviews with both sides. We're not close to any other events, so this means these folks are flying out just to be on our streams. That's the only evidence I personally need to confirm our journalistic integrity, because listening to folks who are just way too offended when their favorite product line doesn't turn out to be the best is, well, a waste of our time. We have better things to do, like write reviews.
Yes, and async compute is coming to Maxwell.
Oh… wait. Maxwell just went legacy.
Pity. So close to getting that async driver.
—————
Also, I loved your little Time Spy act with Ryan, where you were shouting all over the place in your webcast that what Futuremark was saying about async support in Time Spy was like reading from the Bible. You can’t question what is written/said by Futuremark.
“Oh, they create benchmarks for decades. What they say is the absolute unquestionable truth. The other guys are crazy”.
http://www.overclock-and-game.com/news/pc-gaming/50-analyzing-futuremark-time-spy-fiasco
PS: Other sites are starting to add Vulkan numbers with the footnote that they still can’t measure with FCAT. But they DO add Doom. You are going to become a minority pretty soon.
I am often accused of being an Nvidia fanboy, but I warned everyone that GM204 and GM200 were wastes of time if you had a decent 28nm GPU already or could wait for GP104 (just bought a 1070).
All this “async compute” stuff is hugely overrated with current GPUs. GDDR5X and HBM2 are the things that will actually give significantly better performance, since memory bandwidth is typically the bottleneck.
With AMD not offering a high-end card, it probably is overrated. But in my opinion anyone STILL holding a GTX 980 Ti should wait for Volta. Ignore even GP102.
But for people buying at $200-$300 and thinking of keeping that card for 2-3 years, async shouldn’t be ignored. People who change cards every year can choose either Nvidia or AMD; it wouldn’t matter.
It really depends on the actual performance that the asynchronous compute feature adds to the system. If it is less cost effective to implement and has a smaller real world performance impact than more memory bandwidth or more L2 cache, why do it?
It sounds a bit like TSX being disabled on most Haswell CPUs. Does it make a huge difference? In most cases, no. In most cases, having more memory channels, higher clocks, and more memory capacity makes a real difference. Why? Because not all applications are really limited by what TSX, HSA, or asynchronous compute actually solve.
Read the AMD fluff piece by an AMD fanboy with his Fury X. Why do they all say they used to have an Nvidia card?
Time Spy is a generic benchmark to compare DirectX 12 games, not async compute only. And certainly not AMD’s definition of async. Async is optional and not required under DX12.
Time Spy has no support for conservative rasterization (Nvidia has hardware support for this, but AMD largely doesn’t), so it isn’t a true indicator of DX12 anyway. Should they not have put this DirectX 12.1 feature in Time Spy?
You can’t compare Nvidia to AMD brand cards. It’s only useful when comparing like to like, as any benchmark usually has bias. Ashes, anyone?
Your link is an article evaluating the effectiveness of a benchmark that is brand new. The same article also recommends separate code paths for different GPUs. Yeah, that's a great idea. Hey, this one SSD really favors this one workload, so let's make a benchmark that hits it with a workload that just favors that one. If I made such a benchmark, I'd be laughed out of storage reviews. It pretty much undermines the whole idea of benchmarking in the first place.
GPU benchmark effectiveness can only be truly evaluated once it's been out for a few years, when we can see how close the Futuremark guys came to what actual game devs do. Looking at previous versions of 3DMark, where people previously bitched about feature x or y favoring green or red, if you take all the games that came out in aggregate, it looks like they were reasonably on point.
So yeah, like I was waving my arms around about in the video, they make a thing just like the game devs do, and they are arguably *way* better connected than any individual game dev (since they talk to all GPU makers and some game engine makers). I'm more inclined to trust their benchmark design choices than to stick with only one or two games (Doom) as the benchmark for what *every* game is going to do for company x or y.
Nvidia shill at his best.
It’s not a strawman. AMD really did sell defective, non-functional multi-GPU and Eyefinity setups for months after the issue was IRREFUTABLY PROVEN TO THEM to cause problems and deliver less performance than a single card.
I’m not saying that Nvidia is RIGHT to use sleazy marketing tactics either. I’m just wondering why people WANT to see Nvidia get sued so badly, while you didn’t really see the same schadenfreude with the whole Crossfire-not-working thing.
I’m pretty sure that DX9 games, of which there are plenty, STILL get negative scaling with Crossfire, so they’re STILL doing it almost 5 years later.
Compared to past cards like the 550 Ti and GTX 660? Yes, the 970 is a good design.
Yeap. Marketing loved it. Marketing loves lies.
Oh, how about a couple of recent ones by AMD marketing: “it’s an overclocker’s dream” and “2.8x performance per watt” for Polaris compared to Hawaii or Tonga, IDK which.
These are two “lies”. Well, the one about the Fury X is definite, and the jury is still out on Polaris. If the RX 480 is any indicator of Polaris efficiency, well, I’ll just give AMD a chance to pull the last one out of somewhere.
Not even close. In the case of the GTX 970 we are talking about WRONG SPECS.
What you are comparing is WRONG SPECS versus marketing BS. Marketing BS is everywhere, in every product. Every customer knows NOT to believe all the promises from a salesperson. But WRONG SPECS? That’s a totally different thing.
Have a look at this.
“Now, you can enjoy up to 10x better performance than integrated graphics in all your favorite PC applications. Gaming is even up to 80% faster, while delivering rock-solid reliability and stability with GeForce Experience™.”
Can you imagine the model that offers the above? Up to 10X faster than integrated graphics. 10 times. TEN. In ALL your favorite PC applications. I guess that also includes Excel, Word, Minesweeper, Chrome, Access. But wait, it says UP TO. So? It also says ALL.
That’s marketing, but NOT WRONG SPECS.
Wrong specs? How about the TDP of the RX 480? Listed at 110 W at first by AMD. When cornered, they said that was GPU-only. A few sites that sold the card had it listed for sale at a 110 W TDP. Has it ever been listed that way only? Then it was 150 W. Closer to the truth. It really shakes out at about 165 watts.
How about Bulldozer CPUs being listed as octo-core when they are really quad-core with cores sharing lots of resources? They are currently in a class action about this.
The same marketing is used for the RX 470: 120 watts “typical board power”. Since when has TDP been listed this way? It’s at least a gross misrepresentation of TDP, taking a few games in which it uses the least wattage and averaging them.
Sketchy, as AMD wants Polaris to be seen as energy efficient as Pascal. But we know it is BS.
AMD uses the “up to” moniker well. “Up to one gigahertz” was popular with its video cards of yore. “Up to 5 GHz” on the 9590 CPU.
Wait, is this an actual straight up cash payout to the consumer?
So often when I read about these class actions, it’s the lawyers getting millions in cash, and the consumers getting next-to-useless deals like “$12 off on these 4 specific video cards if you buy in the next 3 months and complete a raft of paperwork that will take you far more than $12 worth of your time even in the unlikely case you kept the highly specific documentation called for.”
I don’t have strong feelings about this case. On the one hand, being wrong about facts is wrong; on the other, I find it hard to believe any consumers made a purchase decision based on the specific number of ROP units (which was wrong) vs. overall performance as reported by various review sites (which was unaffected). But I love that at least the consumer isn’t getting screwed worse by the lawyers than by the initial incident.
The attorneys are looking forward to a nice payout for their firm.
Yes, the attorneys will get paid, and any customers of Nvidia will pay more so Nvidia can get off with a settlement and no admission of guilt! I would love to have seen those internal documents among Nvidia’s management and marketing teams as they directed their nefarious plot to foist that “3.5, not really 4, usable gigabytes” of readily accessible GPU memory on the GTX 970’s customers. And now with this settlement the case files and court papers are sealed, and no one will get a look-see at just what transpired deep in the rotting bowels of Nvidia’s management structure.
Those “Founders Edition” GPUs will help cushion the legal costs, and Nvidia will just pass this settlement off as another business cost! With such a loyal following of fanatics, who will brag about how much their GPU costs as if it were a piece of jewelry, Nvidia will not see too much in the way of lost profits. Nvidia should be getting some other fines for this deceptive practice. Maybe the EU will step up with some fines also, if other consumer laws were broken, and the fines should be used to hire monitors to keep a close eye on Nvidia and its business practices.
I’m sure most of the people unhappy with the memory issue returned the card and got a refund. This is just a bonus for someone who kept the card and got to enjoy its great performance.
How about Fermi DirectX 12 and Vulkan support? You guys at PCPer are way too silent about this matter. This case feels similar in certain senses. I really appreciate the site, but I just don’t know why you guys are so silent about it. Something fishy here? Something going on behind the walls?
I’m still waiting on Windows 10 drivers for my Hercules Riva TNT 2 Ultra card.. maybe this year.
Your AMD DirectX 12 support only goes back to the 7000 series as well. Maybe it just isn’t possible, as Windows 10 is incompatible with lots of older hardware.
As for Vulkan support, there is supposed to be a patch coming for Doom that will add async support for Nvidia.
I don’t get why AMD users are always in a rush whenever Nvidia doesn’t have something to show in benchmarks. Then they brag about having better performance on their cards after like 4 years. So be patient, AMD is eventually going to be good; but don’t give Nvidia a few months to patch things. LOL
Nvidia already stated there will be no Vulkan support for Fermi because of the low user base. Most people have upgraded to way better cards by now. If you don’t even have a Fermi card, then quit complaining. This product is end of life. AMD does the same thing: anything before GCN is on legacy support as well. Get a life, man.
Even if Nvidia supported DX12 and Vulkan on that hardware, would game developers bother to make specific optimizations for Fermi? The AoS devs mentioned that they will never do architecture-specific optimization and will only do general optimization that benefits most modern GPUs. And that’s not taking into account devs supporting DX12 features only on recent hardware, despite some older hardware still having the hardware needed to support said feature.
Well, just for the recap: Nvidia is advertising Fermi cards as DirectX 12-supported hardware. That practice could fall under federal false-advertising law. Not to mention the response from Nvidia representatives to questions on the matter, which is none. All you get is being ignored, with no correct, informative information.
Yeah, you’re right. Not. Check your facts, man.
https://forums.geforce.com/default/topic/861873/windows-10-and-compatibility-issues-with-nvidia-geforce-6100-nforce-405/
The 400 series cards are working and compatible. They do not support all DirectX 12 features, which is stated by Nvidia.
Having support often means it will work, just not optimally. It’s like asking your 6-year-old printer to work properly under an OS it wasn’t designed for. It usually won’t, and only the basic features are supported; sometimes even with an official driver there are still issues.
There is no driver to use DirectX 12. The web page says DX12, but Nvidia doesn’t want to release the driver. Hope it is much clearer now.
I am not asking any 6-year-old printer to work under a new OS. Simply put, Nvidia announced DirectX 12 support/drivers worldwide, which has not happened at all. Google it.
https://twitter.com/nvidia/status/558011018367238146
http://wccftech.com/microsofts-directx-12-api-supports-amd-gcn-nvidia-fermi-kepler-maxwell-intels-iris-graphics-cores/
The newest of those is the Twitter one from January 2015, and the wccftech piece is two years old. The feed referenced in mine is from August 2015, newer than both. It is from the GeForce forums. Issue fixed.
Here is link on how to install driver manually for older geforce cards.
http://ivanrf.com/en/nvidia-compatibility-issue-with-windows-10/
Notice the date on this: August 2015.
Issue solved as well.
Here is another link. If you’re looking for it to support optional hardware features of DirectX 12 such as async or conservative rasterization, it’s impossible. It can’t support them because it wasn’t designed with the hardware needed.
http://forums.guru3d.com/showthread.php?t=407551
Or from your own favorite site:
http://wccftech.com/fermi-dx12-support/
Don’t worry that the article goes on about not supporting async; the writer bleeds red. Read the posts instead. Even they say Fermi has support, and one reader laments the non-support of his HD 6970 because it’s newer than Fermi.
They reach the same basic premise, which is: if you have one of the older cards, it’s time to upgrade. While it may work, it will still be slow.
Well, you don’t understand the whole thing 😀
It is fine. 😀
Maybe I don’t understand. Win 10 works, but DX12 games don’t run. The AMD 7950 doesn’t have full DX12 support either. My Kepler doesn’t have full DX12 support either. DX12 will run, I think, but I will be better off with DX11, which my card, as well as Fermi, was designed for.
I did not upgrade to Win 10, spyware-ridden as it is. DX12 was built for AMD hardware. Win 10 wasn’t compelling enough for me. I have a 4 GB Kepler, and now I’m even looking at upgrading as soon as the 1080 Ti drops. I will make my choice then.
I know you probably paid a lot for your Fermi. It wasn’t built with DX12 in mind, predating DX12 by around 5 years.
I hope they add Fermi support for everyone that has one, but I wouldn’t count on it.
Do you have a 570? You would get negative scaling and worse performance in DX12, much like Kepler and Maxwell do. Fermi has only one queue, just like Kepler, and can only switch between compute and graphics. You are loads better off just using DX11.
The DX11 performance on that card isn’t the best either: only 5000 in the DX11 3DMark, only 1.5 teraflops of compute, only 1.25 GB of VRAM. Game performance is going to get much worse for you as time goes on.
Look at picking up a more modern card on the second-hand market, like eBay and such, if money is tight. Or maybe one of your fellow posters on the forums will have a card they don’t need after upgrading; maybe they’ll send it to you for not much more than the shipping price.
Maybe go to the EVGA or GeForce forums and they can help you get it working. Maybe some other component in your system isn’t Win 10 compatible. A lot of older systems won’t work properly with some older hardware. Good luck to you.
Just leave it, bro. I see you don’t understand at all.
OK. But what does Nvidia have to gain from advertising Fermi as DX12 compatible? I’d say nothing. These cards haven’t been on sale for a long time; they’re three generations old now.
I’m sorry it doesn’t work for you. If Nvidia were advertising their latest cards as DX12 compatible and they weren’t, it would be an issue. On 5-6 year old hardware, not so much. They say they will add it but do not give a date. Well, technically it could come out tomorrow for all we know.
I see the DX12 Dolphin demo/test works fine on Fermi. Maybe things run on a case-by-case basis due to the way programs are coded.
The 970 sold record amounts, and the $30 slap on the wrist still leaves Nvidia with a nice tidy profit on the card. AND Nvidia avoided a court decision.