NVIDIA Responds to GTX 970 3.5GB Memory Issue

Subject: Graphics Cards | January 24, 2015 - 11:51 AM |
Tagged: nvidia, maxwell, GTX 970, GM204, 3.5gb memory

UPDATE 1/28/15 @ 10:25am ET: NVIDIA has posted in its official GeForce.com forums that they are working on a driver update to help alleviate memory performance issues in the GTX 970 and that they will "help out" those users looking to get a refund or exchange.

UPDATE 1/26/15 @ 1:00pm ET: We have posted a much more detailed analysis of the GTX 970 memory system and what is causing the unusual memory divisions. Check it out right here!

UPDATE 1/26/15 @ 12:10am ET: I now have a lot more information on the technical details of the architecture that cause this issue and more information from NVIDIA to explain it. I spoke with SVP of GPU Engineering Jonah Alben on Sunday night to really dive into the questions everyone had. Expect an update here on this page at 10am PT / 1pm ET or so. Bookmark and check back!

UPDATE 1/24/15 @ 11:25pm ET: Apparently there is some concern online that the statement below is not legitimate. I can assure you that the information did come from NVIDIA, though it is not attributable to any specific person - the message was sent through a couple of different PR people and is the result of meetings and multiple NVIDIA employees' input. It is really a message from the company, not any one individual. I have had several 10-20 minute phone calls with NVIDIA about this issue and this statement on Saturday alone, so I know that the information wasn't from a spoofed email, etc. Also, this statement was posted by an employee moderator on the GeForce.com forums about 6 hours ago, further proving that the statement is directly from NVIDIA. I hope this clears up any concerns around the validity of the below information!

Over the past couple of weeks, users of GeForce GTX 970 cards have noticed and started researching a problem with memory allocation in memory-heavy gaming. Essentially, gamers noticed that the GTX 970, with its 4GB of video memory, was only ever accessing 3.5GB of that memory. When it did attempt to access the final 500MB, performance seemed to drop dramatically. What started as a simple forum discussion blew up into news that was being reported at tech and gaming sites across the web.


Image source: Lazygamer.net

NVIDIA has finally responded to the widespread online complaints about GeForce GTX 970 cards only utilizing 3.5GB of their 4GB frame buffer. From the horse's mouth:

The GeForce GTX 970 is equipped with 4GB of dedicated graphics memory.  However the 970 has a different configuration of SMs than the 980, and fewer crossbar resources to the memory system. To optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section.  The GPU has higher priority access to the 3.5GB section.  When a game needs less than 3.5GB of video memory per draw command then it will only access the first partition, and 3rd party applications that measure memory usage will report 3.5GB of memory in use on GTX 970, but may report more for GTX 980 if there is more memory used by other commands.  When a game requires more than 3.5GB of memory then we use both segments.
We understand there have been some questions about how the GTX 970 will perform when it accesses the 0.5GB memory segment.  The best way to test that is to look at game performance.  Compare a GTX 980 to a 970 on a game that uses less than 3.5GB.  Then turn up the settings so the game needs more than 3.5GB and compare 980 and 970 performance again.
Here’s an example of some performance data:

                                                            GTX 980         GTX 970
Shadow of Mordor
  <3.5GB setting = 2688x1512 Very High                      72 FPS          60 FPS
  >3.5GB setting = 3456x1944                                55 FPS (-24%)   45 FPS (-25%)
Battlefield 4
  <3.5GB setting = 3840x2160 2xMSAA                         36 FPS          30 FPS
  >3.5GB setting = 3840x2160 135% res                       19 FPS (-47%)   15 FPS (-50%)
Call of Duty: Advanced Warfare
  <3.5GB setting = 3840x2160 FSMAA T2x, Supersampling off   82 FPS          71 FPS
  >3.5GB setting = 3840x2160 FSMAA T2x, Supersampling on    48 FPS (-41%)   40 FPS (-44%)

On Shadow of Mordor, performance drops about 24% on GTX 980 and 25% on GTX 970, a 1% difference. On Battlefield 4, the drop is 47% on GTX 980 and 50% on GTX 970, a 3% difference. On CoD: AW, the drop is 41% on GTX 980 and 44% on GTX 970, a 3% difference. As you can see, there is very little change in the performance of the GTX 970 relative to GTX 980 on these games when it is using the 0.5GB segment.
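NVIDIA's quoted percentages can be verified directly from the FPS figures in the table above:

```python
# FPS figures from NVIDIA's table: (below 3.5GB, above 3.5GB) per card.
data = {
    "Shadow of Mordor": {"980": (72, 55), "970": (60, 45)},
    "Battlefield 4":    {"980": (36, 19), "970": (30, 15)},
    "CoD: AW":          {"980": (82, 48), "970": (71, 40)},
}

def pct_drop(below, above):
    """Percent of FPS lost when the game pushes past 3.5GB."""
    return (below - above) / below * 100

deltas = {}
for game, cards in data.items():
    d980 = pct_drop(*cards["980"])
    d970 = pct_drop(*cards["970"])
    deltas[game] = d970 - d980
    print(f"{game}: 980 -{d980:.0f}%, 970 -{d970:.0f}%, gap {deltas[game]:.1f} points")
```

The computed drops match NVIDIA's rounded figures, with the 970's extra penalty landing between roughly 1.4 and 2.8 percentage points per game.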

So it would appear that the severing of a trio of SMMs to make the GTX 970 different from the GTX 980 was the root cause of the issue. I'm not sure if this is something we have seen before with NVIDIA GPUs that are cut down in the same way, but I have asked NVIDIA for clarification on that. At first glance the ratios seem to fit: 500MB is 1/8th of the 4GB total memory capacity, and 2 SMMs would be 1/8th of the total SMM count - but the GTX 970 has three SMMs disabled, not two. (Edit: The ratios in fact do NOT match up...odd.)
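The arithmetic behind that edit is quick to check; assuming the commonly reported 13-of-16 SMM configuration for the GTX 970, the disabled fraction does not equal the segmented memory fraction:

```python
from fractions import Fraction

mem_fraction = Fraction(512, 4096)   # 0.5GB slow segment of the 4GB total -> 1/8
smm_disabled = Fraction(3, 16)       # 3 of GM204's 16 SMMs disabled -> 3/16
print(mem_fraction, smm_disabled, mem_fraction == smm_disabled)
```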


The full GM204 GPU; the GTX 970's cut-down version of this chip is at the root of the memory issue.

Another theory presented itself as well: is this possibly the reason we do not have a GTX 960 Ti yet? If the patterns of previous generations were followed, a GTX 960 Ti would be a GM204 GPU with fewer cores enabled and additional SMMs disconnected to enable a lower price point. If this memory issue were even more substantial there, creating larger differentiated "pools" of memory, it could be a problem for performance or driver development. To be clear, we are just guessing on this one, and it may not be a factor at all. Again, I've asked NVIDIA for some technical clarification.

Requests for information aside, we may never know for sure if this is a bug with the GM204 ASIC or a predetermined characteristic of its design.

The question remains: does NVIDIA's response appease GTX 970 owners? After all, this memory behavior is just one part of a GPU's overall story, and performance testing and analysis already incorporate its effects. Some users will still likely claim a "bait and switch," but do the benchmarks above, as well as our own results at 4K, make it a less significant issue?

Our own Josh Walrath offers this analysis:

A few days ago, when we were presented with evidence of the 970 not fully utilizing all 4 GB of memory, I theorized that it had to do with the reduction of SMM units. It makes sense from an efficiency standpoint to perhaps "hard code" memory addresses for each SMM. The thought behind that would be that 4 GB of memory is a huge amount for a video card, and the potential performance gains of a more flexible system would be pretty minimal.

I believe that the memory controller is working as intended and that this is not a bug. When designing a large GPU, there will invariably be compromises made. From all indications, NVIDIA decided to save time, die size, and power by simplifying the memory controller and crossbar setup. These things have a direct impact on time to market and power efficiency. NVIDIA probably figured that the added complexity, power consumption, and engineering resources it would have taken to win those few percentage points back outweighed the couple of points of performance lost.
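A minimal sketch of how a split heap like the one NVIDIA describes might behave - purely illustrative, with made-up names and a simple spill policy, not NVIDIA's actual driver logic:

```python
class SplitVram:
    """Toy model of a two-segment frame buffer: allocations fill the
    high-priority fast segment first and spill into the slow segment."""

    def __init__(self, fast_mb=3584, slow_mb=512):
        self.fast_free = fast_mb
        self.slow_free = slow_mb

    def alloc(self, mb):
        # Prefer the fast segment; spill any remainder into the slow one.
        from_fast = min(mb, self.fast_free)
        from_slow = mb - from_fast
        if from_slow > self.slow_free:
            raise MemoryError("out of VRAM")
        self.fast_free -= from_fast
        self.slow_free -= from_slow
        return from_fast, from_slow


vram = SplitVram()
print(vram.alloc(3000))   # fits entirely in the fast segment: (3000, 0)
print(vram.alloc(700))    # 584MB from the fast segment, 116MB spills to the slow one
```

Under a policy like this, a game staying below 3.5GB never touches the slow segment at all, which matches both NVIDIA's description and what memory-usage tools were reporting.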


January 24, 2015 | 12:01 PM - Posted by godrilla (not verified)

The math seems off. Comparing each GPU before and after gives a difference of 3%, but when you compare the cards to each other I believe the gap should be about 6%, which is pretty significant. Unless my math is off, the fall-off should be 2%, 6% and 8% respectively.

January 24, 2015 | 12:36 PM - Posted by godrilla (not verified)

*2%, 6% and 6%.
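Both ways of slicing the numbers are easy to reproduce; godrilla is comparing the 980's lead over the 970 at each setting, rather than each card's own drop:

```python
# (GTX 980, GTX 970) FPS pairs from NVIDIA's table, below and above 3.5GB.
fps = {
    "Shadow of Mordor": ((72, 60), (55, 45)),
    "Battlefield 4":    ((36, 30), (19, 15)),
    "CoD: AW":          ((82, 71), (48, 40)),
}

growth = {}
for game, ((b980, b970), (a980, a970)) in fps.items():
    lead_below = (b980 / b970 - 1) * 100   # 980's lead before spilling past 3.5GB
    lead_above = (a980 / a970 - 1) * 100   # 980's lead after
    growth[game] = lead_above - lead_below
    print(f"{game}: the 980's lead grows by {growth[game]:.1f} points")
```

Computed this way, the 980's lead grows by about 2.2, 6.7 and 4.5 points per game - close to the corrected 2%/6% figures, with the CoD gap nearer 4-5 points than 6.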

January 24, 2015 | 01:01 PM - Posted by Anonymous (not verified)

Yes, sounds like nvidia is trying to obfuscate the hardware flaw in the 970.

Let us not forget bumpgate and how nvidia tried to deny and hide from that issue. It took years and a class action lawsuit for them to be held accountable.

Worth remembering that with nvidia the customer is always last and shareholders are always first.

Hopefully more sites start to investigate this issue and not take nvidia's spin on it as gospel.

January 24, 2015 | 12:07 PM - Posted by HeX (not verified)

How about linking to the statement... Or do we just believe a possibly bullshit quote.

January 24, 2015 | 12:13 PM - Posted by Dictator93

Their testing methodology is not good at showing what occurs. Better would be:
1. to put the game with low textures at something like 4K. Thereby not filling the VRAM to even 3.5GB, then measure performance.

2. Then putting the textures to ULTRA, thereby filling to beyond 3.5GB at 4K, and measuring the performance. This would show the actual cost of filling that last 512 MB.

January 24, 2015 | 12:30 PM - Posted by Whirlwind (not verified)

They are already doing that
The 1st value is from using just less than 3.5GB, the 2nd from using more than 3.5GB, so you can see the performance impact.
Bringing the test down to low textures wouldn't really show us the performance impact of going over 3.5GB, but rather the impact of running the game on really low textures vs loading it up.
That would not demonstrate the performance dip from 3.49GB to 3.51GB.

January 24, 2015 | 02:18 PM - Posted by Anon (not verified)

No, they're changing two (or more) variables at once. They turn up the resolution, or enable SuperSampling, or enable over-rendering - all things which have a large performance impact.

Changing only the textures doesn't normally have much impact at all. That's why keeping the resolutions the same and only changing the textures used is the best way to measure performance differences due to memory limitations.

January 24, 2015 | 07:55 PM - Posted by Anonymous (not verified)

Good point, but remember that pointing out that they just left three quarters of a million paying customers hurting is not exactly desirable.

January 24, 2015 | 12:18 PM - Posted by Anonymous (not verified)

Release frametime logs as well, as using slower memory also creates stuttering, which is hidden in average framerates.

January 24, 2015 | 12:25 PM - Posted by godrilla (not verified)

My thoughts exactly you can't see that from this chart

January 24, 2015 | 12:32 PM - Posted by derFunkenstein

They should release the frame times, but I'm not as convinced that it'll be a discernible micro stuttering pattern. Depends on what's in that last half-GB, but I think the only way it'd micro stutter is if for some reason it chose to draw part or all of the frame buffer in that partition. That'd be a terrible idea. There probably are occasional mini spikes as it tries to get in there to load texture data or whatever else might be there, though.
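The point about averages hiding stutter is easy to demonstrate with synthetic frame times (the numbers below are invented for illustration, not measurements from any card):

```python
# Two synthetic runs with the same average FPS; one has periodic spikes.
smooth = [16.7] * 100                  # steady ~60 FPS
spiky = [14.0] * 95 + [68.0] * 5       # same total frame time, five big hitches

def avg_fps(frametimes_ms):
    return 1000 * len(frametimes_ms) / sum(frametimes_ms)

def worst_1pct(frametimes_ms):
    # The slowest 1% of frames - where microstutter actually shows up.
    n = max(1, len(frametimes_ms) // 100)
    return sorted(frametimes_ms)[-n:]

print(round(avg_fps(smooth), 1), round(avg_fps(spiky), 1))   # averages look identical
print(max(worst_1pct(smooth)), max(worst_1pct(spiky)))       # 16.7ms vs 68.0ms spikes
```

Both runs report the same average FPS, but only the worst-percentile frame times expose the 68ms hitches - which is exactly why commenters are asking for frametime logs rather than averages.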

January 24, 2015 | 12:30 PM - Posted by derFunkenstein

Ryan, as you just posted on Twitter, "Is this why we don't have a GTX 960 Ti yet?" - I think that sort of thing will come (and it's sorely needed in the $250-275 range), and what will likely happen is that you'll get an even number of SMs again, and you'll see a chunk of the memory controller lopped off to go with it. The result would be something like 75% of a GTX 980 with 1536 shaders, 3GB of memory, 48 ROPs, 3 tris/clock, and so on.

Why they haven't done that, I don't know - but it seems like it would fit the middle ground nicely and not be affected by this sort of weirdness.

January 24, 2015 | 12:34 PM - Posted by JohnLai (not verified)

The questions remains: does NVIDIA's response appease GTX 970 owners?

Hmmm.........I bought the GTX 970 under the assumption that the extra memory would be useful for upcoming games.

Now, nvidia is telling me the gpu is fast in accessing the vram for its first 3.5gb and extremely slow for the remaining 0.5gb?

This doesn't bode well.

January 24, 2015 | 12:40 PM - Posted by derFunkenstein

They're not saying extremely slow. They're saying it's not optimal, and they're using average frame rates to show that the percentage drop isn't much higher than it is on the full fat GPU in the 980.

It's also not the first time they've done something bone-headed like this. The GTX 550 Ti and GTX 660/650 Ti with 192-bit memory controllers are affected by this weird asymmetrical memory configuration for a much larger percentage of their VRAM. A full half of their RAM is jammed into a single 64-bit channel, and the other half is split in two, with each piece given its own channel. And the performance of the 660 is what I would classify as "fine". It might have been slightly better with 3GB instead of 2, but I wouldn't feel cheated if I had that card.

As was pointed out elsewhere, they could really stand to release frame time numbers, but intrepid hardware websites like PCPer have nVidia's test configs available to them so they can do their own.
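The bandwidth penalty in those asymmetric setups is simple bus-width arithmetic: a segment served by the full bus is three times faster than one stuck on a single 64-bit channel. The 6 Gbps per-pin figure below is the GTX 660's published memory speed:

```python
def gddr5_gbs(bus_bits, gbps_per_pin):
    # Peak bandwidth in GB/s: bus width in bytes times per-pin data rate.
    return bus_bits / 8 * gbps_per_pin

full_bus = gddr5_gbs(192, 6.0)   # segment served by the whole 192-bit bus
one_chan = gddr5_gbs(64, 6.0)    # segment stuck on a single 64-bit channel
print(full_bus, one_chan)        # 144.0 vs 48.0 GB/s
```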

January 24, 2015 | 01:01 PM - Posted by JohnLai (not verified)

Well, a few questions still remain... Why aren't the GTX 980M and GTX 970M, with either 4GB or 8GB, affected?

January 24, 2015 | 02:32 PM - Posted by Vesku (not verified)

But they disclosed the memory configurations of those other cards. They were quiet about the 970's configuration. Shouldn't GPU reviewers be a bit more concerned that this information wasn't provided up front?

"there will invariably by compromises made." - Josh Walrath

Yes but shouldn't we be told about the compromises?

January 24, 2015 | 03:02 PM - Posted by Josh Walrath

Every compromise?  I'm not sure, but this is admittedly a pretty significant oversight on NV's part, not disclosing this right up front.

January 24, 2015 | 07:57 PM - Posted by Anonymous (not verified)

You mean 'oversight', like they once did with bumpgate?

January 24, 2015 | 07:05 PM - Posted by derFunkenstein

It still performs like it always did. If this were something they did in new drivers that hurt performance, then I'd see a reason to get upset.

January 25, 2015 | 07:51 PM - Posted by Eric (not verified)

Yep, I KNEW I'd seen something like this before on previous GPU models. I remember trying to figure out why performance was dropping so much for me (it was a lot more than 3%) when everything claimed I still had memory available on my 550 Ti, and I read something about its full memory size not being 'available'.

January 24, 2015 | 12:42 PM - Posted by Anonymous (not verified)

It doesn't say that. It says that if the game needs LESS than 3.5 gig, it will access that main chunk. If it needs more, it will call on the other .5 to use.

What this means is that 3rd party programs looking at the memory will only see 3.5 gig and NOT the other .5. There's no mention the extra .5 will be slower.

January 24, 2015 | 12:38 PM - Posted by Vertek (not verified)

The results in the Nvidia title Shadow of Mordor, where the AMD 290/290X outperformed the 970/980, show there is a design "flaw" in the new Maxwells. The combination of lower bandwidth and the 3.5GB VRAM design limitation will give the edge in gaming performance to AMD.

The upcoming 370X/380X/390X, with color compression and very high bandwidth, will leave current Maxwell cards even further behind.

January 24, 2015 | 12:41 PM - Posted by Anonymous (not verified)

Shadow of Mordor is not the only game where the R9 290X outperforms the GTX 980;
the next ones, for example, are Dragon Age: Inquisition (Mantle) and Civilization: Beyond Earth (Mantle).

Just look for the 1% worst frametimes in reviews, as that's much more important than average FPS (as long as the average is big enough).

January 24, 2015 | 02:47 PM - Posted by arbiter

The 290X isn't faster in games than the GTX 980 except when they use their cheater API in the few games that support it, which will be even fewer with the DX12 release date set. I bet those AMD cards will be 300-400 watts at the top end, which will make them nothing more than space heaters.

January 24, 2015 | 03:26 PM - Posted by Anonymous (not verified)

You love nvidia that much? Why not kneel down for them and beg for the D?

January 24, 2015 | 04:16 PM - Posted by JohnGR

Why do you think he hasn't done that? And many times.

January 25, 2015 | 10:17 PM - Posted by Anonymous (not verified)

He really needs to change his avatar to say "Are you Nvidiot?"

January 24, 2015 | 06:05 PM - Posted by Anubis44 (not verified)

So Mantle is a "cheater" API? You mean like how PhysX is a 'cheater' physics accelerator?

January 24, 2015 | 06:06 PM - Posted by Anonymous (not verified)

Their cheater API is what will allow many to continue using windows 7, and not be pressured to take the 10 bait, let's hope that the cheater API gets into Linux(even as a blob), then there will be less reasons to have any M$ product after 2020. At least AMD is offering the power at an affordable price, without offering any GBs, or fraction advertised, that have gimped down access times. That cheater API, has been available for a good while, and by the time DX 12 comes, that cheater API will have a new iteration.

January 25, 2015 | 01:55 PM - Posted by Anonymous (not verified)

FWIW win10 will be a free upgrade to win7 users.

What bait?

January 25, 2015 | 05:13 PM - Posted by Anonymous (not verified)

The snooping, the TIFKAM/tiles are still baked into the OS, searches of local files run through Bing via the Bing integration into the OS. I just want 7's UI without the App store/closed ecosystem, and metrics gathering, No to the Cloud. The "Free upgrade" is for the undetermined "Supported life of the PC/Device", Who decides what the supported live of the PC/device is. Free as in take the bait, get hooked into the closed ecosystem, and metrics gathering, snooping, monetized existence of 30% off of the top, for games that run on the OS, you know why Gabe is creating a Steam OS distro, but can not see the dark M$ forest for all the talk of the free trees. Don't let M$ fool you, they will be closing the OS API, with each new iteration, Gabe saw which way the wind was blowing, and M$'s OSs will gradually close, little by little, the slow insidious process began with the hated 8. Just more of the three Es, nothing new, same old triple E M$, closing the gate slowly it will be unnoticed, but it's "FREE", but the bait always comes with a switch.

Go ahead take the "Free" and see. Mantle will allow many to remain on 7, and get just as close, if not closer to the metal, and Mantle is in the here and now, unlike DX12, and do not forget, the Khronos group is also getting the looksee at Mantle, and OpenGL will get closer to the metal, along with OpenCL. Linux will get Mantle also, along with a much better OpenGL, OpenCL. Win 7, and 2020, when the switch can be made to Steam OS, once and for good for gaming. Win 7 can be made safe after 2020, safely confined in a VM, running legacy software, there is no reason for 10, and a DX 12 that will not see wide adoption for at least a few years, but Mantle will be constantly improved, no more WINTEL locking AMD out of the GAME, and AMD's K12 custom ARMv8 based tablets, will be running the mobile graphics APIs like everybody else in mobile, not too much WINTEL there also, and never will be. Looking forward to AMDs new Discrete GPUs, as well as Zen, K12, and even better APUs, all at low costs, and running many OSs. It's 7, the last of M$, then on from there, flightless bird gaming, x86, and ARM based.

January 26, 2015 | 01:03 AM - Posted by Anonymous (not verified)

I'm amazed that someone can post in 2015 actually say "M$" like they're stranded in 1995.

January 24, 2015 | 08:01 PM - Posted by Anonymous (not verified)

'Cheater' would imply that AMD is willfully slowing things down on Nvidia cards. Well, that isn't the case. Nvidia (as well as Intel) can optimise for Mantle when it becomes open source, but as of now it is under beta testing, and AMD themselves have suggested as much. On the other hand, remember Assassin's Creed and DX 10.1? Removed by a patch by Ubi at Nvidia's request. Remember Crysis 2 and the concrete blocks? Remember the Batman cloak? So on and on, and Nvidia still screws around with customers, not only theirs but also AMD's. Gameworks is just another iteration of the same. Now that is cheating, and proof of said cheating.

January 25, 2015 | 09:56 AM - Posted by BlackDove (not verified)

Who cares? That's going to be AMD's big chip vs Nvidia's small chip. GM200 is about to be released.

January 24, 2015 | 12:41 PM - Posted by hACmAn (not verified)

So how do you explain how the memory bus addresses the 512MB separately from the rest?

January 24, 2015 | 12:44 PM - Posted by Dictator93

This needs to show frametimes and not an average. An average can hide horrible microstutter and jank.

January 24, 2015 | 07:22 PM - Posted by Ryan Shrout

You are 100% correct.

January 25, 2015 | 09:07 AM - Posted by Rick0502 (not verified)

I take it you're working on this?

January 25, 2015 | 09:18 AM - Posted by Ryan Shrout

That's my plan for Monday morning.

January 25, 2015 | 04:46 PM - Posted by Anonymous (not verified)


Test games that can exceed the 3.5GB in 1080p & 1440p on Max or Ultra settings.

780 TI 3GB
Titan 6GB
980 4GB
970 4GB

January 26, 2015 | 08:04 AM - Posted by Anonymous (not verified)

Good thing you told him, otherwise he probably wouldn't be able to figure it out.

January 24, 2015 | 12:50 PM - Posted by hansmoleman (not verified)

Yeah, don't advertise it as a 4gig card then if it does funny shit past 3 gigs or whatever it is.

January 24, 2015 | 12:58 PM - Posted by limitedaccess (not verified)

Will you be following up with Nvidia and inquiring about other cards in the Maxwell family (eg. GTX 980m, 970m, 965m, GTX 750)? Possibly Kepler as well?

January 24, 2015 | 01:01 PM - Posted by Anonymous (not verified)

We need a proper examination of NVIDIA's claim, with frame times and FPS drops compared to the 980 at the same settings.

January 24, 2015 | 01:03 PM - Posted by Anonymous (not verified)

Pretty disappointing that one of the hardware sites that pioneered properly measuring frametimes is whitewashing Nvidia's PR "average" performance numbers that are an obvious attempt to hide instantaneous frametime drops due to this memory configuration. Posts on forums around the web and even on Nvidia's own official forum bear this out.

January 24, 2015 | 01:31 PM - Posted by drallim


Pretty damning.

January 24, 2015 | 02:07 PM - Posted by Ken Addison

It's a Saturday, we are working on it. We just wanted to put this statement out there as soon as possible to continue the conversation.

January 24, 2015 | 03:54 PM - Posted by Humanitarian

I'd be really interested to see what the driver is doing to memory that's in the .5gb portion.

January 24, 2015 | 04:21 PM - Posted by JohnGR

No, it's not disappointing, it's just the usual from PCPer. You can't have exclusives from Nvidia and also be 100% honest to your readers at the same time. Haven't you seen what happens when there is some positive news concerning Nvidia? Usually the WHOLE article is on the first page, not just a small part of it.

January 24, 2015 | 06:17 PM - Posted by Anonymous (not verified)

You have just described every enthusiast website out there that gets advertising and promotional offerings from any company. Ever wonder why Intel had its server SKUs "reviewed" on enthusiast websites, with no comparisons to Sparc or Power8, and no server benchmarks run and discussed? I get a good chuckle when Intel introduces its new server SKUs, but then I go to the proper professional industry websites to get a proper review of any server SKU.

January 25, 2015 | 02:04 AM - Posted by JohnGR

Exactly. You have the knowledge to find info that compares Xeon to every other competitive option in the market. Me? I witness the Xeon awesome power and then go and buy a... Celeron to put a little of that awesome power in me PC.

January 24, 2015 | 01:04 PM - Posted by PissedOff (not verified)

"From all indications NVIDIA decided to save time, die size, and power by simplifying the memory controller and crossbar setup. These things have a direct impact on time to market and power efficiency"

Can it be the reason why AMD is so much behind Nvidia in performance/watt with their newest 285?

From now on, I personally will never buy an Nvidia GPU without clear proof that it can use all of its onboard memory at the specified speed. But most likely I'll go with AMD.

Also, this is not the first time NV pulls off something like this - derFunkenstein has already written above that 660/650Ti had a similar issue.

January 26, 2015 | 08:29 AM - Posted by Anonymous (not verified)

As far as I can tell the 285 is a lower-priced card and the 280 will beat it. Newegg has 285s $30-40 cheaper than 280s.

January 24, 2015 | 01:12 PM - Posted by Dom (not verified)

So they're telling you to deal with it?

January 24, 2015 | 01:19 PM - Posted by ellessess (not verified)

"Go fuck yourselves :D" - NVIDIA

January 24, 2015 | 01:23 PM - Posted by Anonymous (not verified)

This essentially means we would actually be better off with real 3.5GB than these "4GB" since there would not be frame rate drops etc.

January 25, 2015 | 05:48 AM - Posted by razor512

May not be so. Usually when you run out of video memory, textures and other data that would normally end up in video memory may get pushed to system memory. In that case, you are stuck with memory doing 20-30GB/s, which also has to share its resources with the rest of the game and the videocard. This caused massive performance drops in games in the past. For example, a long time ago ATI had driver issues relating to the game Second Life. Second Life would read a certain data field specified in the drivers in order to determine how much video memory was present, and if there was not enough, it would make use of system memory (the game is relatively memory intensive since users can upload their own textures, and thus there is not much optimization).

For many of the older drivers the available video memory was specified separately for direct 3d and openGL, and ATI did not specify anything for opengl, thus certain games would just assume 0, and use system memory for all of their texture needs.

The group that was hit the hardest by it were users still on AGP (for me, moving from an AGP Radeon HD 3850 512MB to an Nvidia 6800 256MB meant a frame rate boost from the 3-5FPS on the 3850 to around 25FPS on the 6800).

PCI Express 3850s were not hit as hard due to the faster interface, but they were still hit hard enough to perform much slower than older/slower cards.

Overall, you never want system memory to be used for tasks that should be handled by the video memory.
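Rough numbers put that point in perspective. The VRAM figure below is the GTX 970's published 224 GB/s; the PCIe and AGP figures are theoretical link peaks, not measured throughput:

```python
# Back-of-envelope peak-bandwidth comparison (theoretical figures, not measurements).
vram = 256 / 8 * 7.0      # GTX 970 GDDR5: 256-bit bus at 7 Gbps -> 224 GB/s
pcie3_x16 = 15.75         # PCIe 3.0 x16 peak, GB/s
agp_8x = 2.1              # AGP 8x peak, GB/s

print(f"VRAM is ~{vram / pcie3_x16:.0f}x PCIe 3.0 x16 and ~{vram / agp_8x:.0f}x AGP 8x")
```

Even over a modern PCIe 3.0 x16 link, spilling to system memory costs over an order of magnitude in bandwidth, which is why the AGP-era cases razor512 describes were so catastrophic.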

January 25, 2015 | 04:06 PM - Posted by Anonymous (not verified)

A generic answer that applies to any graphics card ever, whether it has 10 MB or 7 TB of VRAM.

The problem here is that the cards are advertised as 4GB VRAM cards but, while they are just that in the most basic of senses, the moment the 3.5 to 4GB space is hit, they start to behave as if they were utilizing system RAM. Bottom line: if you want normal, decent performance, treat your GTX 970 as a 3.5GB card. This obstacle may be negligible now, with current games, but wait a few years and, with newer games, the problem will surely be more present. And that's certainly not what I paid for. Period.

January 24, 2015 | 01:31 PM - Posted by Anonymous (not verified)

Got the same problem with my GTX 760 2GB!
The chunks from 1664 -> 1792 and 1792 -> 1920 only read at 16 and 8GB/s!

January 25, 2015 | 09:00 AM - Posted by The Fucking GOD of the GTX970s (not verified)

No you do not. Learn to run the benchmark noob.

You absolutely have to run it headless if you want an accurate result.

People running it with Aero forcibly on and saying 'ohh look, this is a real world scenario' have pitiable intelligence and are the root cause of why programmers are paid so little today.

January 24, 2015 | 01:34 PM - Posted by Luciano (not verified)

Can you ask Nvidia how their asymmetric memory buses work?

January 24, 2015 | 01:34 PM - Posted by Anonymous (not verified)

Hopefully Ryan will do some frametime testing; including more than three games would also be nice. PCPer was one of the tech sites that called AMD out on their frametime issues, pretty much forcing them to own up to it. Hopefully we'll see them investigate this issue just as thoroughly.

January 24, 2015 | 02:46 PM - Posted by hansmoleman (not verified)

Have to agree. If this was an AMD issue, Ryan Shrout would be on that frametime testing rig of his in a heartbeat.
Hopefully he will do a full test with the Nvidia cards.

January 24, 2015 | 07:21 PM - Posted by Ryan Shrout

It's a weekend guys. I was trying to spend it with my family. Sorry to let you down.

Monday will be here soon!

January 24, 2015 | 01:39 PM - Posted by MahoganySoapbox

It appears that in the examples above setting changes were made in the Geforce Experience client.
What would happen if changes were made in the game menus?
Does this imply that game profiles in the Experience app can be optimized instead?

Also, questions I tweeted...
Doesn't the GM204-200 use 13 SMMs? That would segment 3.25GB of the 4GB. Nai's benchmark demonstrates a drop after 3200 MB.

This affects the DRAM addressable; does this also affect the shared address space of 96KB per SMM for compute tasks & the 48KB L1 per SMM?
(The example in my head was that one SMM was disabled on 3 of the 4 GPCs.)
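For context, benchmarks like Nai's work by allocating VRAM in 128MB chunks and timing reads from each one. Here is a structural sketch of that idea with simulated (made-up) bandwidth figures standing in for real CUDA timings:

```python
CHUNK_MB = 128
SEGMENT_MB = 3584   # the reported 3.5GB fast segment

def read_bandwidth_gbs(chunk_index):
    # Stand-in for a timed CUDA kernel reading one 128MB chunk of VRAM.
    # Simulated: chunks past the fast segment fall into the slow pool.
    fast, slow = 150.0, 22.0              # illustrative numbers, not measurements
    end_mb = (chunk_index + 1) * CHUNK_MB
    return fast if end_mb <= SEGMENT_MB else slow

results = [read_bandwidth_gbs(i) for i in range(4096 // CHUNK_MB)]
cliff = results.index(min(results)) * CHUNK_MB
print(f"bandwidth cliff at {cliff} MB")   # first slow chunk in this model: 3584 MB
```

In this toy model the cliff lands at 3584 MB; the 3200 MB figure the commenter cites would instead follow from a 13-of-16 SMM segmentation theory.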

January 24, 2015 | 01:46 PM - Posted by Anonymous (not verified)

Misleading title? How do we know it's Nvidia that said this?

Until they say something in an official Nvidia place like the forums, I'm still holding my pitchfork.

January 24, 2015 | 02:14 PM - Posted by RickNapril d facebook (not verified)

I own a GTX 970 too (MSI G1), and after 3.2GB your memory is gone. I'd rather they fixed this by not allowing the last 700MB to be used, or made sure that space is only used for the desktop, leaving the full 3.2GB just for gaming. Drivers might fix this in time, but I doubt Nvidia will say "my bad." People are moving to 1440p this year for sure and will want to upgrade GPUs, and this would be a show stopper for the GTX 970. The price should drop to $299 from $349. At least stop my GTX 970 from using that bad spot of memory, please. 3.2GB is good for now. Next time I might try AMD. Funny thing is, Nvidia always had better cores and tech and AMD always had better bandwidth for their cards. With 4K coming soon, AMD might be on top rather quickly.

January 24, 2015 | 02:21 PM - Posted by Cubanoman (not verified)

According to reports in the Nvidia forum, Gigabyte has released a new BIOS update for all their 970 cards, and it looks like it fixes the problem. The tester said he tried Shadow of Mordor maxed out at 1080p and the card used 3700MB without problems; he used the non-G1 version.

January 24, 2015 | 02:33 PM - Posted by Anonymous (not verified)


You had Tom Petersen there just 2 days ago. Why didn't you ask him about it or get a statement from him at the time?

The mobile parts that have cut-down versions of the die aren't experiencing this issue. How is that squared with the theory of it being an SMM disable issue? If it is a disablement issue, Nvidia does the disabling before the chips get to their partners, so this is squarely on them.

January 24, 2015 | 02:41 PM - Posted by Anonymous (not verified)

He would kinda lose his precious position as an nVidia marionette.

January 24, 2015 | 02:49 PM - Posted by arbiter

This wasn't a story until like yesterday or today. So it's kinda hard to ask him about something when he was here 2 days ago.

January 24, 2015 | 03:34 PM - Posted by Ryan Shrout

We definitely asked about it, just not on air. We asked before the stream actually and his response was only that they were "looking into it." Hence they had meetings on Friday and put out a public response on Saturday.

January 24, 2015 | 05:19 PM - Posted by RS84 (not verified)

My big question: did you or other reviewers find this problem in your earlier reviews?
Especially at 4K?

January 24, 2015 | 07:20 PM - Posted by Ryan Shrout

No, we definitely didn't see it in our own 4K testing. As far as I know, no reviewer on the normal circuit I know about has found it before forums and others began to post about it.

January 24, 2015 | 07:39 PM - Posted by RS84 (not verified)

Oke... :)

I've watched this https://www.youtube.com/watch?v=ZQE6p5r1tYE
Really horrible. NVIDIA needs to fix this, A.S.A.P.!

January 25, 2015 | 03:09 AM - Posted by Trickstik (not verified)

Hey Ryan, thank you guys at PC Perspective for looking into this. You guys do great reviews and are one of the best sources for PC news on the web. If possible, please add a benchmark or extra memory testing to GPU reviews from here on out, to keep both companies honest. The only reason I bring this up is that this issue was never brought to light by any major PC outlet or reviewer until users sought it out themselves. If that's an oversight on my part and some outlets did, then I stand corrected.

January 24, 2015 | 02:43 PM - Posted by Guy Moulton (not verified)

So NVIDIA's response to the 970's crappy performance when more than 3.5GB of memory is used is... well, the 980 is just as bad? Do I read this correctly?

January 24, 2015 | 03:25 PM - Posted by Anonymous (not verified)

Um no the GTX 980 is not affected by this...

January 24, 2015 | 10:47 PM - Posted by Anonymous (not verified)

I think he is asking whether the extra 0.5GB on the 980 is even worth having, since this issue doesn't affect that card. Even if the 970 had a full-speed 4GB, would the 1-3% drop-off still be there, like the 980 shows?

Is it an architectural limit?

January 24, 2015 | 02:50 PM - Posted by Vesku (not verified)

"there will invariably by compromises made. "

Sure, but a lot of enthusiast GPU buyers like to be informed about the compromises they're choosing.

January 24, 2015 | 03:09 PM - Posted by Anonymous (not verified)


I don't care much about 500MB of slow memory allocation, since the price difference between the 980 and 970 is actually more significant than the difference in performance.

But I would like to have known this beforehand, not after months and different revisions of the card, and only because power users discovered it.

NVIDIA should have told us: we price the 970 at almost half the price of the 980 because it has problematic RAM allocation above 3.5GB.

January 24, 2015 | 03:22 PM - Posted by Thedarklord

I think for most people, video card purchases come down to two things:

1) How much does it cost?
2) What will my gaming experience be like?

This is really about the second. Even with the 3.5GB memory segmentation, by all accounts the GTX 970 is a great gaming card. The question then becomes: with that extra 500MB, does performance really increase that much?

I agree with Josh on this: it is by design and not a bug. NVIDIA probably did a cost/time/performance analysis and found that this was the best route to take. In other words, put a fast card on the market now, even though it may not be able to access the last 0.5GB of memory optimally.

If the GTX 970 had been equipped with 3.5GB from the start, this might have been a non-issue. Does the GTX 970 really benefit from the extra 0.5GB of memory? Or is this like card makers putting too much memory on a card for marketing, even though the performance increase is ~3%?

Edit: @PCPer I think this warrants a good look into, kinda like you guys did with the Frame Capturing.

January 24, 2015 | 04:17 PM - Posted by Anonymous (not verified)

That's not the point. When it goes over 3.5GB the game stutters. Limiting the card to 3.5GB of VRAM would be better than nothing, but I would hope for a fix and some official word on this.

January 24, 2015 | 05:43 PM - Posted by Anonymous (not verified)

That extra .5gb very well may cause severe performance problems when it is being accessed. Some people have found that after accessing more than 3.5gb the remaining .5gb is accessed substantially slower, therefore it could absolutely be causing stutter even though the total performance loss is similar to the gtx 980. Frametimes need to be tested, and we need a real world experience test between the 970 using that .5gb of ram and the 980 doing the same. Far cry 4 would be an excellent game to test, memory usage varies and it's quite easy to push the memory usage over 3.5gb at 1440p.

January 27, 2015 | 01:38 AM - Posted by Anonymous (not verified)

It can't read from both memory segments at the same time; it has to be one or the other. So it is, in effect, a 3.5GB card IMO...

January 24, 2015 | 05:59 PM - Posted by Thedarklord

Ok, so the issue is less about overall performance impact, and rather about game stutter when the GTX 970 accesses over 3.5GB of video memory?

Hmm, I would think NV would have optimized that with drivers as far as memory management goes.

So far I am uncertain about this, I am not saying that anyone is making anything up, but I think this deserves to be looked at further.

I would really like to see PCPer put a stock/reference GTX 970 through its paces and see if this pops up.

January 24, 2015 | 03:29 PM - Posted by Anonymous (not verified)

This is a non-issue for nvidia fans and also for the vast majority of people who don't read up on this stuff. All of you guys in the comments thread are a little upset, but no one else is. nV cards will continue to sell like hotcakes regardless of whether they make these mistakes.

January 24, 2015 | 05:46 PM - Posted by Anonymous (not verified)

It's a non-issue because up until a few days ago this wasn't known. I'm an NVIDIA user at the moment, and this is something I would want to know before purchasing their product.

January 24, 2015 | 07:33 PM - Posted by Anonymous (not verified)

Would you ask everyone to remain calm even if it were AMD? Something tells me you're full of it, especially for picking on some of the customers who bought this product. What are you, an NVIDIA freebie troll? We all know there are a fair few of those everywhere.

It is shameful that it wasn't reported by NVIDIA to begin with. This is tantamount to cheating. The simple question is: is the performance the same as it would have been with a full 4GB? The answer is something we all know, and that is no. It is a simple matter of NVIDIA coming clean with its customers and, at the least, apologising. What they have done here can be summed up as:
"hey, we could have increased our costs, or reduced your bottom-line price and hurt our profits, but then F*** you! Oh, don't forget to come again for the $1350 Titan 2."

January 24, 2015 | 03:43 PM - Posted by Anonymous (not verified)

This is not limited to 970s... look at my 980 test


January 24, 2015 | 04:09 PM - Posted by Humanitarian

Have you disabled Aero and run the tool while the card is not otherwise in use (i.e., tested while on integrated graphics)?

If the card is in use, there's memory that cannot be swapped out when the tool allocates the 4GB. The 980 isn't affected, and the tool has been shown not to be a problem on 980 cards.

January 24, 2015 | 03:45 PM - Posted by Anonymous (not verified)

Sorry.. lets try again..


January 24, 2015 | 04:04 PM - Posted by Anonymous (not verified)

That's more an example of the Nai CUDA test being run on a 980 and people not knowing the difference between a headless and a non-headless run.

January 24, 2015 | 04:45 PM - Posted by Anonymous (not verified)

You are correct.
After disabling DWM and rebooting my system, i get full 4GB usage and bandwidth.

My bad ....

January 24, 2015 | 04:06 PM - Posted by JohnGR

I bet GeForce Experience will always set in-game settings so that the GTX 970 doesn't pass 3.5GB of usage. NVIDIA is just buying time. By the time things get really ugly with newer, more demanding titles, NVIDIA and AMD will already have new cards on the market.

January 24, 2015 | 04:07 PM - Posted by Haunted Abyss (not verified)

Thanks Ryan for being there and asking.
Yes, the frametime should not stutter or even slow down at 3.5-4GB if the system is fully optimized for it. If it does, they are duping us, simple as that. I still insist they're avoiding the honest hard work at the moment. I'm angry because I literally bought a G1 3 days ago and got it today (staring at it on my desk). I know 0.5GB of memory that is optimized would give a good extra 4 FPS, or at least let the card handle games better.

From what I've seen, though, only certain cards or memory chips are affected? Is this right?

January 24, 2015 | 07:39 PM - Posted by Anonymous (not verified)

No. Everything I've read indicates this to be an issue on all 970 cards.

January 24, 2015 | 04:55 PM - Posted by Johnny Rook (not verified)

// open full disclosure
yes, I have an nVIDIA 970 SLI setup, but I also had an ATI 4870X2, an ATI 5970 and an AMD 7990 in the recent past, so no, no "fanboyism" here
// close full disclosure

I will definitely wait for the frametime testing results to make up my mind about this.

However, from the FPS average results posted, the loss in performance due to the 0.5GB partition is 3% in the worst-case scenario (30 FPS). I really think 1 FPS in 30 is negligible; who can perceive it? I know I can't...

I have not had the time to test Nai's bench headless (nor headed, for that matter), but I will do it ASAP.

But I have to say, with SLI disabled, I experience no more lag or stutter playing Far Cry 4, Unity, Shadow of Mordor, Metro Last Light, Hitman Absolution, Crysis 3, etc. using 3.5GB+ than I experience using less.

I remember the times when NVIDIA fanboys bashed AMD for releasing an otherwise awesome card for the money, for being hot and cheaply built (at launch, on reference cards, no less): the AMD R9 290.
God forbid AMD release a more powerful card than nVIDIA! There must be a failure somewhere...

Now I see AMD fanboys bashing nVIDIA for releasing an otherwise awesome card for the money, for having 3% less performance than it should (at 4K, a resolution no one really plays at, no less!): the GTX 970.
God forbid nVIDIA release a better price/performance card than AMD! It can't happen! There must be a failure somewhere...

And that's how the world goes...

Anyway, I will wait for the frametimes but, from what I have seen from personal experience, I will be very surprised if they show a noticeable loss of smoothness while gaming.

January 24, 2015 | 05:34 PM - Posted by JohnB (not verified)

Maxwell is overrated.

The only real improvement was power efficiency.

Next gen. will blow it away. First AMD and then next gen Nvidia.

It would have been better to stay on a GTX 770 as an NVIDIA fanboi and skip Maxwell. As a non-fanboi, the best move is to get a good deal on a 290 or, if very patient, a 380X in May.

January 24, 2015 | 05:47 PM - Posted by Johnny Rook (not verified)

If pure performance is your only metric, then maybe Maxwell is overrated; some may argue the 970 has near-780 Ti performance, though.
But for me, it made a huge difference in both the heat and the noise compared to the Keplers I had before (780 SLI).

As for the waiting, sure. If you are patient enough, you can wait for 2016 cards ;)

January 24, 2015 | 05:06 PM - Posted by Anonymous (not verified)

Is it sold as a 4GB or a 3.5GB card?
Stealing prospective buyers from your competition with deceptive marketing, that's what I see, cuz the engineers and folks in charge knew what the REAL deal was.
Are most folks EVER gonna notice this? Probably not, but does that make it any less deceiving?

I say it changes nothing. Deceit is deceit. They knew exactly what they were doing, which was winning buyers with a technically false claim.

Great card- Sleazy organization.

January 24, 2015 | 05:31 PM - Posted by Johnny Rook (not verified)

No, the card uses the full 4GB of VRAM. There must be no misunderstanding on this point. The "issue" in question is the speed at which it accesses the last 0.5GB.

I clearly see your point, but people have been deceived for as long as I can remember. HDD (and SSD) capacities and the total memory of dual-GPU cards are the most common cases I've personally dealt with.
That doesn't make it right. Granted.
But it doesn't make it new, nor exclusive to nVIDIA.

January 24, 2015 | 05:55 PM - Posted by Anonymous (not verified)

Hang on a second...

Let's be clear on this... HDD/SSD capacities and dual-GPU card capacities are a different kettle of fish. There was total transparency with those, and legit, logical explanations which people accepted...

January 24, 2015 | 07:17 PM - Posted by Johnny Rook (not verified)

Transparency? Are you talking about disclosure on sites like OCN and Guru3D?
Sure, I'm positive 100% of hardware owners visit those sites! Get real! You are talking about enthusiasts being informed, not the average Joe being informed.

I work at a hardware store, an actual physical store, and I get people all the time asking me why Windows shows their HDD as 110GB instead of 120GB, or why a certain game (commonly GTA IV or Max Payne 3) doesn't allow settings above 2GB of VRAM. When I explain why, they don't accept it as open-mindedly as you may think. So I don't see how that's a different fish.

January 24, 2015 | 07:45 PM - Posted by Anonymous (not verified)

For what it's worth, given you're running a shop, you should have noticed that HDDs state right on the label that they count 1000MB as a GB, hence the discrepancy. It's still uncool, but it's much more palatable. If NVIDIA had marketed this as a 3.5GB + 0.5GB card, I'm sure most would have laughed about it, but not called them out for apparently dubious intent.
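The drive-capacity discrepancy mentioned above comes down to decimal versus binary units. A quick sketch (generic arithmetic, not tied to any particular drive; the 120GB figure is just the example from the comment above) shows where the "missing" gigabytes go:

```python
# Drive makers count 1 GB = 10**9 bytes; Windows divides by 2**30 (1 GiB)
# but still labels the result "GB" -- hence the apparent missing capacity.

def advertised_to_reported_gib(advertised_gb):
    """Convert a decimal-GB drive label to the binary figure the OS shows."""
    return advertised_gb * 10**9 / 2**30

# The "120GB" drive from the comment above:
print(round(advertised_to_reported_gib(120), 1))  # about 111.8 "GB"
```

Nothing is missing or broken; the same byte count is simply being divided by two different definitions of a gigabyte.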

January 27, 2015 | 01:45 AM - Posted by Anonymous (not verified)

Yeah, it is essentially a 3.5GB card with a "500MB cache". It can't read from both segments at once; it has to be one or the other.


January 24, 2015 | 06:09 PM - Posted by Anonymous (not verified)

I knew someone was gonna say "NO, it uses all 4GB."
Get real: that last 500MB is unusable because it's so slow in comparison. If it's NOT FULL SPEED, then it's false advertising.

Let's not make excuses. It is SOLD as a 4GB card, when in reality it looks to me like after 3.2-3.5GB it becomes a stutterfest...
Folks spending $ on a card look basically at 3 things:
Power used

If it can't "technically" use that 500MB because it's slow as molasses, what good is it?
Does the fact that it says "4GB" get people to buy it over a "3.5GB" card? You bet it does.

January 24, 2015 | 06:14 PM - Posted by Anonymous (not verified)

I agree.. people can sugar-coat it however they see fit.. At the end of the day it only has 3.5GB of USEABLE VRAM, not 4GB. USEABLE is the key word here...

January 25, 2015 | 01:45 AM - Posted by ThorAxe

Untrue. The 970 has 4GB of useable RAM. The 500MB isn't clocked slower than the 3.5GB; it is, however, slower to access. The memory itself still runs at full speed.

January 26, 2015 | 08:16 AM - Posted by Anonymous (not verified)

That's kind of like saying that this card has 4GB of usable RAM:

What's the f***ing point if it isn't fully optimized?

January 24, 2015 | 07:29 PM - Posted by Johnny Rook (not verified)

And do you know how slow that 500MB partition is? Do you have a real number? Do you know how much it affects (if it affects at all) the perceived gaming experience at 1080p or 1440p? Or are you just pulling an argument out of thin air?
I have a feeling it's the last one.

NVIDIA states the difference is 3% on average at 4K. What I really want to know is whether that number is accurate and, if it is, whether it affects frametimes in a noticeable manner.

However, I am more interested in 1440p results because I think the 970 was meant for 4K about as much as the 960 was meant for 1440p: they weren't!

I don't give a damn about the rumours, particularly from people who don't have access to the card and are only regurgitating what they read elsewhere.
I have the cards, I am testing them as thoroughly as I can, and I have yet to experience an actual, objective difference between gaming above 3.5GB or below it. Maybe when I get to 4K testing I'll find some but, so far... nope, and I am almost done with the 1440p gaming tests. I will let my own experience talk for itself...

January 24, 2015 | 06:12 PM - Posted by Anonymous (not verified)

What a rip-off. I was about to pull the trigger on purchasing a 970... not anymore.

January 24, 2015 | 06:18 PM - Posted by Anonymous (not verified)

There is zero sourcing for anything in your NVIDIA statement. I see 4 other websites citing you as the source for this statement, and you just made it up.

Fuck off

January 24, 2015 | 06:21 PM - Posted by Anonymous (not verified)

Post #696.. https://forums.geforce.com/default/topic/803518/geforce-900-series/gtx-9...

Good enough for you ?

January 24, 2015 | 07:04 PM - Posted by Sebastian Peak

I see no cited sources in your source citation accusation.

But here's one from me to make up for it:

The Tech Report independently reported the NVIDIA response today as well. Check the comments. Site founder Scott 'Damage' Wasson replies to comment #12 with comment #15: "Nvidia PR emailed it to me."

January 24, 2015 | 07:18 PM - Posted by Ryan Shrout

I guess I don't understand how "NVIDIA has finally responded" isn't a source. I know people at NVIDIA. They emailed me.


January 24, 2015 | 06:25 PM - Posted by Anonymous (not verified)

Is this the reason for the stuttering that many users were complaining about??? On the Tom's Hardware forums and overclock.net, users were reporting above 60 FPS and stutters at the same time. If this is the case then I'll pass on this one, though I wanted one badly.

January 24, 2015 | 07:48 PM - Posted by Anonymous (not verified)

Could very well be. One user on another forum was wondering why FCAT testing was dropped by all and sundry; no mention of it whatsoever. My recommendation: wait on newer prices, as I fully expect the ASP of NVIDIA cards to adjust in accordance with the perception in the market.

January 24, 2015 | 07:49 PM - Posted by Anonymous (not verified)

Forgot to mention the (possibly water-cooled) new AMD cards coming out in Q2. The AMD cards are reported to be doing quite well compared to GM200 and the cut-down GM200.

January 25, 2015 | 12:08 PM - Posted by Anonymous (not verified)

They could sell the card as a 224-bit-bus card with no glitches and no one would complain. Also, not all cards are affected; some are perfectly fine. I'll definitely wait for the new AMD cards. I've praised the 970 despite playing games on AMD for years now, but I'm definitely not going to buy a card that could have big stuttering issues... Like many people commenting on this article said, it is odd that there were no FCAT measurements...

January 24, 2015 | 07:00 PM - Posted by GV (not verified)

I've seen reports and benchmark posts that this DOES indeed affect 980s, as well as some 780s and Titans, too. I have a 970 on order, but I will be returning/refusing it after this fiasco and waiting on the 8GB cards (in whatever form they may come, but still NOT AMD; I will never, ever join the red team, even despite this situation). If the 8GB cards only have 7.5GB usable, I'm much more OK with that than a 4GB card only having 3.5GB usable. I'll make do until then.

January 24, 2015 | 07:49 PM - Posted by Anonymous (not verified)

And you're right.
Fast cards with working drivers and full memory usage, and good frame pacing at high rez would be a shock for a nVidia stoopid.

January 24, 2015 | 07:51 PM - Posted by Anonymous (not verified)

Did they "wrong" your sheep/goat/cat/dog or whatever else? At least with them, you know what you're buying is what you're buying. Mind, I do think the 970 is a good buy if it were priced closer to the 290 (not the 290X, as it is now).

January 24, 2015 | 07:50 PM - Posted by Kusanagi (not verified)

If Nvidia knew there was a problem, why didn't they just cut the 970 to 3.5GB instead?

January 24, 2015 | 07:53 PM - Posted by Anonymous (not verified)

Marketing. There possibly would have been chatter on the net, and it may have affected selling prices, and sales altogether. This way they pulled another Bumpgate and showed a middle finger to all paying customers (mind, NVIDIA does have a bunch of shills who get free stuff and troll the internet for it) while laughing all the way to the bank.

January 24, 2015 | 08:54 PM - Posted by DeDT (not verified)

I demand an FCAT bench; that number doesn't say anything.

January 24, 2015 | 09:14 PM - Posted by Anonymous (not verified)

That explanation by NVIDIA doesn't explain why a similar performance drop-off beyond ~1.6GB occurs with that Nai's Benchmark on the GTX 770 2GB Kepler, when it is a full GK104 GPU.

January 24, 2015 | 09:46 PM - Posted by Anonymous (not verified)

Actually it seems like this perf issue with the Nai's benchmark only occurs on the GTX 770 2GB when Win7 Desktop Composition (Aero) is enabled, which I guess is somewhat expected if part of the memory is already allocated/reserved by Windows.

The GTX 970 issue must indeed be something different, if it's unaffected by Desktop Composition.

January 24, 2015 | 10:17 PM - Posted by ThorAxe

I don't really see the problem here. In Australia the 970 is anywhere from $250 to $350 cheaper than the 980. Given that the 970 is only up to 3% below its optimum performance, if NVIDIA had redesigned it, would users be willing to pay the additional development cost for a minimal performance gain? Is another $50 to $100 worth it?

Many slower cards are sold with 4GB of RAM; do people complain that all that RAM is slow, so they are really getting no RAM at all??

January 24, 2015 | 10:35 PM - Posted by Anonymous (not verified)

It's marketing on a false premise.

Say you buy a 4GB dual-channel kit: two 2GB sticks, with one stick having addressing issues that slow your computer down when you access it. Are you supposed to be pleased by that because the alternative would have cost you more?

January 24, 2015 | 10:53 PM - Posted by ThorAxe

Did anyone ever buy a GPU based on memory usage? I bought mine based on in-game performance.

I didn't see NVIDIA advertising anywhere that all the RAM would be accessed at the same speed. Note that it still runs at the same clock speed, so no false advertising there.

January 24, 2015 | 11:26 PM - Posted by Anonymous (not verified)

Yes. The 780 and 780 Ti had only 3GB, and the 970 would have provided a 1GB frame-buffer advantage without the extra expense of a 980. NVIDIA GameWorks themselves recommend 4GB of VRAM.

January 25, 2015 | 06:18 PM - Posted by Anonymous (not verified)

Well, back when I got my computer, I bought a 590 advertised as a "3GB" card, only it turned out that just half of that is actually usable: 1.5GB, since it is SLI. Which, let's face it, is too low for games like Skyrim if you want to max it out with high-res textures and make it look awesome. So I bought the 970 4GB for just that reason.

January 24, 2015 | 11:22 PM - Posted by Anonymous (not verified)

This is why people are upset with NVIDIA's testing results. It is misleading to just show the frame rate difference from <3.5GB to >3.5GB without clarifying whether there is noticeable stuttering on the GTX 970 once it accesses its last 0.5GB section.

Stuttering can absolutely be happening if the bandwidth of that last 0.5GB is low, and just showing the average frame rate does nothing to tell us whether that is happening. NVIDIA would have to tell us if stuttering was present in their tests. This would explain why many GTX 970 users report stuttering when going above 3.5GB of VRAM usage, even in SLI, while GTX 980 users do not face this problem.

Unfortunately, due to the nature of this problem, I fear many people will quickly dismiss it as a non-issue by pointing to FPS tests, and due to the common misconception that the GTX 970 does not have access to the last 0.5GB of VRAM. It does have access, but that access is slower than to the first 3.5GB.

This difference in bandwidth between the first 3.5GB and the last 0.5GB will affect different games differently; some may even run fine at over 3.5GB of VRAM if the game is not stressing the card that much.

We need testing to determine how usable that last 0.5GB of VRAM is across a variety of game titles to truly determine whether this is a widespread issue.
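The point about averages hiding stutter can be made concrete with a small sketch. The frame-time numbers below are invented for illustration, not GTX 970 measurements:

```python
# Two synthetic frame-time traces (milliseconds) with the SAME average FPS
# but very different smoothness. Average FPS alone cannot tell them apart;
# a high-percentile frame time (what frametime/FCAT testing reports) can.

def avg_fps(frame_times_ms):
    """Average frames per second over the whole trace."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def p99_frame_time(frame_times_ms):
    """Approximate 99th-percentile frame time in milliseconds."""
    ordered = sorted(frame_times_ms)
    return ordered[min(int(len(ordered) * 0.99), len(ordered) - 1)]

smooth = [20.0] * 100                  # a steady 50 FPS
stutter = [17.0] * 90 + [47.0] * 10    # mostly fast, with periodic spikes

print(avg_fps(smooth), p99_frame_time(smooth))    # 50.0 FPS, 20 ms
print(avg_fps(stutter), p99_frame_time(stutter))  # also 50.0 FPS, but ~47 ms spikes
```

In the second trace, one frame in ten takes nearly three times as long, yet the average FPS is identical. That is exactly the kind of behavior frametime testing catches and an average-FPS chart misses.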

January 24, 2015 | 10:33 PM - Posted by Anonymous (not verified)

How can a site that championed and exploited frame pacing with FCAT completely ignore it all of a sudden? Do you realize how incriminating that looks? Especially when that is one of the main complaints about Maxwell, and it looks even more suspicious given this defect that has come to light. And to completely sidestep and ignore any comments about it, too?? Get on it, or your credibility is done.

January 24, 2015 | 10:56 PM - Posted by ThorAxe

I so love comments posted by Anonymous - it makes them so credible.

Every site used similar tests and came to the same conclusion. Those numbers didn't change overnight.

January 24, 2015 | 10:59 PM - Posted by pdjblum

Hope all this hoopla has the desired effect, whatever that might be. I still live in the dark ages, when flagship GPUs cost around $300. I am amazed how many folks are willing to part with a lot more for a flagship that will become midrange in a year or two, and how many people are willing to part with $350 for a midrange card. It sucks being a cheap motherfucker, but I'd rather be cheap than broke or sitting on a bunch of credit card debt. These days, to have the flagship, you need to hand over around $300 to NVIDIA annually (the yearly cost of the flagship, not an actual yearly payment, of course). That seems to be a much bigger deal than all this petty shit.

January 24, 2015 | 11:10 PM - Posted by Anonymous (not verified)

They have been getting FLOGGED about this, and review sites like this one have asked the question... I've seen the term "design flaw/decision" used, but I don't know if it's real.

They were so hot after AMD that they supplied FCAT to have review sites DEMONSTRATE it.....
Why don't THEY just show the results of FCAT and be done with it?
Because they know the design is faulty is my guess!

"Come get your 3.5GB card that we champion and sell as a 4GB card....OOOOO it IS 4GB, but it's only 3.5GB of usable ram....sorry we forgot to tell ya...wake up suckas! cha-ching!"

January 24, 2015 | 11:36 PM - Posted by Anonymous (not verified)

Nvidia should just release a statement indicating how their memory actually is arranged, otherwise there will just be a bunch of baseless speculation.

Now for the baseless speculation:

They obviously do not want to come out and say how their memory is actually arranged, although they have already essentially admitted that some areas are slower to access. This seems to make a negligible performance difference, so it is being blown out of proportion.

I am curious as to what the arrangement actually is, though. The Lazygamer image seems to indicate that this extends to cache accesses, not just DRAM. When they disable a processing unit, do they lose some cache associated with that unit as well? That would make sense; a defect in the cache area is not unlikely, although most caches have redundant blocks.

Is it possible for this to be a 32-bit virtual address space issue? Do they need to map things other than DRAM into memory on the GPU, causing part of the memory to only be accessible via some PAE-type mechanism? I was under the impression that virtual memory support was added in some version of DX9 or DX10. Are their caches virtually or physically addressed? We generally get very little information on GPU caches versus how much we have on CPU caches. Is part of the memory on a 970 essentially not cached, leading to high latencies and low bandwidth?

It is only recently that we started getting cards with more than 4GB attached to a single GPU, and games set up to use 4GB of memory or more, so it wouldn't be surprising to bump up against some limitations. I don't know if physical addressing mechanisms are exposed in the APIs, but I would guess that we are still dealing with 32-bit virtual addresses on current GPUs. This by itself doesn't necessarily cause much of a performance penalty; it would just make the driver more complicated than a larger address space would. Caching issues would definitely cause performance problems, though.

Anyway, this may not be simple to explain to the average consumer. Advertising it as a 3.5GB card would just cause more confusion; it does have 4GB of RAM. Also, while the last half gigabyte does seem to be slower to access, it is probably still faster than access over the PCI-E connection (15.75 GB/s, I think). If you go over 4GB, things would slow down significantly anyway.
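To put some rough shape on the speculation above, here is a toy model of what a two-segment memory pool does to effective bandwidth, assuming (as some commenters claim) that the two segments cannot be read at the same time. The 196 GB/s and 28 GB/s segment figures are illustrative assumptions, not numbers confirmed by NVIDIA's statement:

```python
# Toy model of effective bandwidth for a split memory pool, assuming the two
# segments are read one after the other (a time-weighted harmonic mean).
# The 196/28 GB/s split is an ASSUMPTION for illustration, not a measurement.

def effective_bandwidth_gbps(alloc_gb, fast_gb=3.5, fast_bw=196.0, slow_bw=28.0):
    """Effective GB/s for one full pass over alloc_gb of data."""
    if alloc_gb <= fast_gb:
        return fast_bw                 # everything fits in the fast segment
    slow_gb = alloc_gb - fast_gb
    # Total bytes divided by total transfer time across both segments.
    transfer_time = fast_gb / fast_bw + slow_gb / slow_bw
    return alloc_gb / transfer_time

for gb in (3.0, 3.5, 3.8, 4.0):
    print(f"{gb} GB allocated -> {effective_bandwidth_gbps(gb):.0f} GB/s effective")
```

Under these assumptions, a workload streaming through the full 4GB sees effective bandwidth drop from 196 GB/s to 112 GB/s, even though the slow segment is only an eighth of the card's memory. That asymmetry is one way average FPS could hold up while the individual frames that touch the slow segment stutter.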

January 24, 2015 | 11:44 PM - Posted by Anonymous (not verified)

A slower 500MB is OK with you? SERIOUSLY?


January 25, 2015 | 02:17 AM - Posted by Anonymous (not verified)

I read a little bit of that thread. It mostly seems to be people deliberately tweaking settings to try to increase the memory usage over 3.5 GB and people complaining that it doesn't perform as well as a 980. They didn't pay for a 980.

January 25, 2015 | 03:38 AM - Posted by johndiggie (not verified)

you know, this is the kind of post that drives people up the wall...

playing the game on ultra settings,
playing the game with downsampling,
playing the game at higher native resolutions:
these are the three main areas people will go to in the future...
and in all three situations you hit the problem, because the cards pass the 3.5GB barrier quite easily, and that means quite a bit of stuttering, as countless videos already on the net show

no matter how nvidia tries to cover this up by using average fps (like we don't know what lies behind that number.. everything from stuttering to tearing to faulty pacing, and the list goes on)

but the worst part is the reviewers... in order to keep their mouths shut and get some extra money from nvidia, they actually contribute to nvidia's misleading and faulty advertising (because no one is naive enough to believe that they didn't at least try to downsample a game, and the fact that they do talk about 4K proves me right..)
people went nuts over gamergate because she used her body to sell sex for good reviews; i wonder what they should call what those current reviewers are doing in order to please nvidia..

January 25, 2015 | 04:22 AM - Posted by Anonymous (not verified)

This is ridiculous. While the 970 is obviously a good card, it still has limitations. People found a limitation. It mostly seems to be a non-issue. If you turned the settings up high enough to hit it, then turn your settings back down if the performance is unacceptable; same thing as with any other limitation.

January 25, 2015 | 04:45 AM - Posted by johndiggie (not verified)

let me give you an example..

say you want to buy a house in Japan, where you know they have 9.0+ quakes and you know their building code is one of the toughest in the world.
you buy the house, and in the next quake, a moderate magnitude 5, your house collapses. who will you blame?
yourself, because you wanted to live in a place full of quakes and the house failed the test? (to be more accurate, 90% of it is fine and 10% has collapsed...)
or the building company that ignored the building code when they built the house you just bought?

or say you buy a car with 300 hp and suddenly, when you pass 100 km/h, the ECU cuts the engine down to 20 hp.. i guess by your logic we shouldn't drive over 100 km/h, because that's a special circumstance you can't hit in real life.....

lol just lol

and please, saying you should lower your details on a card that works fine on ultra (it's expected to, since it's designed to do that......) is STUPID and irresponsible. people spend MONEY on it and they EXPECT IT TO WORK AS INTENDED (when they say 4GB, they should give 4GB, not 3.5GB that's perfectly fine plus a driver reset when you access the rest), just like amd with the horrid black screen issue telling their customers to reduce the memory clock 100-200mhz... because, like it or not, this card is mid-high range and most new titles actually demand 3-4GB at this level of detail....

January 25, 2015 | 05:54 AM - Posted by Anonymous (not verified)

Let's stop for a moment and think about this bold statement.

Is a bold statement because of the numbers you brought into this debate.

According to your own math, a GTX 970 running at 30 FPS while using 3.5GB of VRAM should drop to 2 FPS above 3.5GB.
What is this assumption based on?
A single tool known as Nai's "benchmark"?
Well, I don't play Nai's benchmark. It means nothing to me.

Based on some other results? Citation needed.

I'm more interested in games and gaming performance. So when I open a game like Far Cry 4 and start playing on ULTRA + NVIDIA GameWorks settings + SMAA @ 4K, I see two things:
1) FPS floating around 30
2) VRAM hovering around 3400-3490MB

Then I crank up the AA and I see the VRAM going up to 3700MB.

So, according to you, I should be seeing a dramatic performance drop to 2 FPS. That would be noticeable with or without stutter. But you know what? It doesn't drop to 2 FPS but to ~20, and the stutter is no more noticeable than what I was getting under 3.5GB. And it drops to 20 not because of VRAM speed but because the GPU itself just doesn't have the power to push more frames!

Your logic fails miserably.
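
The graceful drop this commenter describes is roughly what a split-memory-pool model predicts. A back-of-envelope sketch, with assumed numbers: if the fast 3.5GB segment peaks near 196 GB/s and the slow 0.5GB segment near 28 GB/s (figures drawn from later reporting on GM204's partitioning), and the driver steers rarely touched data into the slow segment, then only a small fraction of memory traffic pays the penalty. The 5% traffic split below is hypothetical:

```python
# Rough model: effective memory bandwidth when a fraction of traffic lands
# in the GTX 970's slow 0.5GB segment. Bandwidth figures and the traffic
# split are assumptions for illustration, not measured values.

FAST_BW_GBPS = 196.0  # assumed peak bandwidth of the 3.5GB segment
SLOW_BW_GBPS = 28.0   # assumed peak bandwidth of the 0.5GB segment

def effective_bandwidth(slow_fraction: float) -> float:
    """Harmonic mean weighted by the share of traffic each segment serves."""
    return 1.0 / ((1.0 - slow_fraction) / FAST_BW_GBPS
                  + slow_fraction / SLOW_BW_GBPS)

# If ~5% of traffic hits the slow segment, bandwidth drops ~23%, not 7x:
print(round(effective_bandwidth(0.05), 1))  # ~150.8 GB/s vs 196 GB/s
```

Under those assumptions, frame rate falls proportionally to something like the observed ~20 FPS rather than collapsing to 2 FPS, which would only happen if nearly all traffic hit the slow segment.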

January 25, 2015 | 12:34 AM - Posted by Anonymous (not verified)

I have a large display. That means lots of VRAM. I just spent $800 on a pair of 970s. This is the first time I have ever regretted an NVIDIA purchase, and I have easily spent six thousand on NVIDIA in the last twelve years. They need to address this with more than excuses.

January 25, 2015 | 05:12 AM - Posted by Anonymous (not verified)

"However the 970 has a different configuration of SMs than the 980, and fewer crossbar resources to the memory system."

This sounds like it was done this way as a power optimization. I don't think this is a bug at all; I think it is a deliberate design choice. All complex designs require a complex set of trade-offs. While the 970 is obviously a good card, it still seems to have been over-hyped. You may not be able to turn absolutely every setting in every game up to max without running into performance issues, as some posters seem to believe you should. That is the case with just about any card; you may have to turn some settings down to achieve the desired performance level.
