Frame Rating: Looking at GTX 970 Memory Performance

Manufacturer: NVIDIA

A Summary Thus Far

UPDATE 2/2/15: We have another story up that compares the GTX 980 and GTX 970 in SLI as well.

It has certainly been an interesting week for NVIDIA. It started with the release of the new GeForce GTX 960, a $199 graphics card that brought the latest iteration of the Maxwell architecture to a lower price point, competing with the Radeon R9 280 and R9 285. But then the proverbial stuff hit the fan with a memory issue on the GeForce GTX 970, the best-selling graphics card of the second half of 2014. NVIDIA responded to the online community on Saturday morning, but that was quickly followed by a more detailed exposé on the GTX 970 memory hierarchy, which included a couple of important revisions to the GTX 970's specifications as well.

At the heart of all this technical debate is a performance question: does the GTX 970 suffer from lower performance because of the 3.5GB/0.5GB memory partitioning configuration? Many forum members and PC enthusiasts have been debating this for weeks, with many coming away with an emphatic yes.


The newly discovered memory system of the GeForce GTX 970

Yesterday I spent the majority of my day trying to figure out a way to validate or invalidate these performance claims. As it turns out, finding specific game scenarios that consistently hit targeted memory usage levels isn't as easy as it might first sound; simple things like startup order and the order in which settings are changed can vary memory usage as well. Using Battlefield 4 and Call of Duty: Advanced Warfare, though, I think I have put together a couple of examples that demonstrate the issue at hand.

Performance testing is a complicated story. Lots of users have attempted to measure performance on their own setup, looking for combinations of game settings that sit below the 3.5GB threshold and those that cross above it, into the slower 500MB portion. The issue for many of these tests is that they lack access to both a GTX 970 and a GTX 980 to really compare performance degradation between cards. That's the real comparison to make - the GTX 980 does not separate its 4GB into different memory pools. If it has performance drops in the same way as the GTX 970 then we can wager the memory architecture of the GTX 970 is not to blame. If the two cards perform differently enough, beyond the expected performance delta between two cards running at different clock speeds and with different CUDA core counts, then we have to question the decisions that NVIDIA made.
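The comparison described above can be made concrete: normalize each card's frame rate to its own baseline so that clock speed and CUDA core count differences drop out, then look for the point where the curves diverge. A minimal sketch, using made-up FPS numbers purely for illustration:

```python
# Hypothetical FPS at increasing DSR scaling factors (NOT real measurements).
dsr_factors = [1.0, 1.2, 1.3, 1.5]
fps_970 = [60.0, 48.0, 36.0, 27.0]
fps_980 = [70.0, 57.0, 49.0, 41.0]

def normalized_scaling(fps):
    """Each setting's FPS relative to the card's own baseline setting."""
    return [f / fps[0] for f in fps]

# If the GTX 970's curve falls off faster than the GTX 980's only past the
# setting that crosses ~3.5GB of VRAM use, the memory partitioning is the
# likely suspect rather than the smaller shader count.
for name, fps in [("GTX 970", fps_970), ("GTX 980", fps_980)]:
    print(name, [round(x, 2) for x in normalized_scaling(fps)])
```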


There has also been concern over the frame rate consistency of the GTX 970. Our readers are already aware of how deceptive an average frame rate alone can be, and why looking at frame times and frame time consistency is so much more important to guaranteeing a good user experience. Our Frame Rating method of GPU testing has been in place since early 2013 and it tests exactly that - looking for consistent frame times that result in a smooth animation and improved gaming experience.
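The point about averages hiding stutter is easy to demonstrate with numbers: two runs can post identical average frame rates while feeling completely different to play. A small sketch (hypothetical frame-time traces, not Frame Rating output):

```python
import math

# Two 100-frame traces with the SAME total render time (2000 ms -> 50 FPS avg):
steady = [20.0] * 100          # every frame takes 20 ms
stuttery = [10.0, 30.0] * 50   # frames alternate between 10 ms and 30 ms

def avg_fps(frame_times_ms):
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def percentile(frame_times_ms, pct):
    """Nearest-rank percentile of the frame times."""
    s = sorted(frame_times_ms)
    return s[max(0, math.ceil(pct / 100.0 * len(s)) - 1)]

for name, trace in [("steady", steady), ("stuttery", stuttery)]:
    print(name, avg_fps(trace), "FPS avg,",
          percentile(trace, 99), "ms 99th-percentile frame time")
# Both report 50 FPS, but the stuttery trace spends half its frames at 30 ms -
# exactly the difference frame-time analysis is designed to expose.
```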


Users have been doing a lot of subjective testing

We will be applying Frame Rating to our testing today of the GTX 970 and its memory issues - does the division of memory pools introduce additional stutter into game play? Let's take a look at a couple of examples.

Continue reading our look at GTX 970 Performance Testing using Frame Rating!

January 28, 2015 | 07:52 PM - Posted by koma

Thanks for all the analysis. This is exactly what everyone was waiting for. Now all the cards are on the table and people can decide how they feel about the memory division.

January 28, 2015 | 08:02 PM - Posted by willmore (not verified)

I second that. I hope the vocal people on both sides can calm down now, but I'm probably hoping for too much.

Then again, their bickering doesn't mean as much as facts.

January 28, 2015 | 08:14 PM - Posted by Nvidialies (not verified)

Yes like the fact nvidia lied about specs.

This article is more of pcper shilling for nvidia. Pcper is an advertisement for nvidia.

Shill site up in here.

January 28, 2015 | 11:00 PM - Posted by Anonymous (not verified)

seemed like a pretty fair article to me, 970 was stuttering more in some circumstances and he reported that. The 970 sli testing will be the really interesting article though.

January 29, 2015 | 01:30 AM - Posted by Anonymous (not verified)

do you recall the 1st article where it was "a philosophical debate" ??

January 29, 2015 | 01:56 AM - Posted by Johnny Rook (not verified)

Because it is?

Even after this article, some will say the (small) difference between GTX 980 and GTX 970 performance degradation at ridiculous settings and resolutions for a single GPU is due to VRAM, but many will say that difference has more to do with CUDA cores / SMMs than anything else.

I'm not convinced it's the VRAM either... My personal experience with the card tells me otherwise.

I'm only disappointed by the lack of SLI results. I get that it will introduce more variables but, somehow, it would dilute the CUDA core impact, revealing the VRAM influence more, imo.

January 30, 2015 | 10:35 AM - Posted by Interested Party (not verified)

SLI testing is needed to conclude whether it is the missing SMMs or the RAM issue... as someone with multiple 970s... I am eager to see those results.

I think the article was beyond fair. Yes they see something is amiss but need more data to pin it down. Reproduced but not root caused.

February 2, 2015 | 11:56 AM - Posted by derFunkenstein

if you have multiple 970s, why not just crank up DSR yourself and see what happens?

January 30, 2015 | 10:49 PM - Posted by Anonymous (not verified)

Where are the SLI benchmarks? That's where we will see the limitation.

January 29, 2015 | 06:47 AM - Posted by Trickstick (not verified)


January 29, 2015 | 07:34 AM - Posted by Parn (not verified)

Ryan's article seems pretty fair to me. This frame buffer split does cause issues for quite a few games, and maybe more games in the future. But there is no solid evidence to prove this is the culprit for all the performance degradation compared to the 980.

If you absolutely must see an article saying "YES, the 3.5/0.5 VRAM config is the root of all the stutters on a 970" so it makes you feel better, by all means start a new hardware review site and write one up yourself.

January 30, 2015 | 10:50 PM - Posted by Anonymous (not verified)

How is it fair when we still aren't seeing the SLI results? That's where the limitation will show up.

January 30, 2015 | 04:10 PM - Posted by Penteract (not verified)

If you think PCper is shilling for Nvidia, then you must believe every hardware site in existence is because they are all saying pretty much the same thing. Which begs the question - why are you reading any of them?

February 4, 2015 | 11:50 PM - Posted by Earnest Bunbury

Penteract: That is obvious to me, they are looking for vindication for their preconceived notions!

January 28, 2015 | 08:09 PM - Posted by Heavy (not verified)

I'm pretty sure this isn't about the last 0.5GB; it's more about the principle. It feels like NVIDIA was just trying to keep it a secret. 99% sure there wouldn't be a problem if the card had 3.5GB, or if they had told us about the slower last 0.5GB and said its performance decrease was negligible.

January 28, 2015 | 07:58 PM - Posted by Anonymous (not verified)

Thx for the post.

Even though I know microstutter is a thing introduced with SLI at times, we have a good catalogue of games that very rarely induce microstutter, as evidenced by PCPer's FCAT tests.
I still think an SLI test with the 970 at similar settings as above would be interesting (and with 980s for the true comparison). I still get the nagging feeling this inadvertently affects SLI users more, since they go for higher res and higher effects and can usually afford it as well.

January 28, 2015 | 08:36 PM - Posted by GoldenTiger (not verified)

Did a little testing of my own using afterburner's frametime readings and other monitoring tools... it's not FCAT but it's very accurate regardless. Here's what I got...

So yeah, using SLI GTX 970s to drive high-res, high-settings gameplay will result in massive, massive frametime issues, even if the framerate over a given second remains reasonable. It is basically an unplayable mess at that point when using 3.7-4.0GB of VRAM. If you can stay around/below 3.5GB of actual usage, which it does its best to do, frametimes are consistent and tight as you would expect. The framerate averaged around 38, meaning in a perfect world the frametimes would be right around 26.3ms for each frame.

As an interesting aside, when finding my settings to test with I noticed it would literally, over the course of several seconds, try to work its way back down to below 3.5gb of usage if it went over, until I set things high enough that it couldn't and would just stick at 3.7-3.8gb+ the whole time. Otherwise it would fight and keep pingponging from ~3.4gb directly to ~3.7gb and back repeatedly before finally settling at ~3.4gb. That's probably the drivers at work, there.

I have a distinct feeling that this affects SLI much more severely than single card setups even, after seeing this article (which even on single-card is showing a negative effect).
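The arithmetic here (38 FPS averaging out to about 26.3 ms per frame) generalizes into a quick consistency check; the 2x hitch threshold below is an arbitrary assumption for illustration, not an FCAT convention:

```python
def ideal_frame_time_ms(avg_fps):
    # With perfectly even pacing, every frame takes 1000/fps milliseconds.
    return 1000.0 / avg_fps

def count_hitches(frame_times_ms, avg_fps, factor=2.0):
    """Count frames taking more than `factor` times the ideal frame time."""
    limit = factor * ideal_frame_time_ms(avg_fps)
    return sum(1 for t in frame_times_ms if t > limit)

print(round(ideal_frame_time_ms(38), 1))   # → 26.3, matching the figure above
```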

January 28, 2015 | 11:08 PM - Posted by GoldenTiger (not verified)

Did some more testing using Shadow of Mordor, SLI enabled and disabled, sub-3500MB and over 3500MB VRAM consumption. Frametimes stay within normal variation/acceptable consistency when in single-card mode regardless of VRAM, but going over 3500MB in SLI causes wild and rampant stutters/hitches with vastly fluctuating frametimes to match.

January 29, 2015 | 03:50 AM - Posted by anon (not verified)

Very interesting thanks for taking the time to run some tests. The SoM results look horrific.

I'd like to see a site test this too, as it's clearly a topic the public has great interest in. NVIDIA has admitted that they misinformed journalists and the public - quite possibly with the intent of hiding such results - so it deserves a proper and thorough investigation.

January 30, 2015 | 10:53 PM - Posted by Anonymous (not verified)

Yup, that's what I thought: SLI is where the issue is seen, and SLI is a feature of the GPU.

January 28, 2015 | 07:58 PM - Posted by Anonymous (not verified)

AMD responds to NVidia's BS:

January 28, 2015 | 09:34 PM - Posted by Mandrake

How about AMD's "up to 1GHz" 290X cards with the reference cooler that would never maintain that speed for more than a few minutes (conveniently just enough time for a benchmark to run...)?

Or the completely broken Crossfire implementation that AMD sold for many years. Literally anyone that purchased anything pre HD 5000 series never saw any benefit at all from that multi-GPU setup. 5000, 6000 cards eventually saw the fix - but only several years after their introduction.

I doubt that AMD is really in much of a position to be claiming the high moral ground here. Hah.

January 29, 2015 | 07:59 AM - Posted by JohnGR

The media instantly attacked AMD for that, not giving time to AMD to answer.
At the same time they acted as a marketing department and press office for Nvidia in the case of 970, posting everything Nvidia was saying and trying to downplay the problem.

In the case of AMD performance was secondary compared to the clock speed. You are doing it here also.
In the case of the 970, performance was mentioned all the time to cover up the fact that half the specs of the 970 were false.

And last, in the case of AMD with good cooling you get that 1GHz frequency. In the case of 970 you don't get back the 8 ROPs, you don't get back the 256K cache, you don't see memory bandwidth going up to 224GB/sec you don't unify the memory.

Do you understand the difference?

January 29, 2015 | 02:24 PM - Posted by serpico (not verified)

On the other hand, AMD's R9 launch caused many reviewers to report performance numbers that were actually false. Reviewers would fire up the card, run their evaluations, and then report the numbers. Many failed to let the card warm up, at which point the card would clock down due to overheating and perform worse.

I can definitely see how both of these situations caused consumer dissatisfaction. Personally, I prefer NVIDIA's blunder because the performance metrics haven't changed; every performance evaluation of the 970 before this technical discovery is still valid. With the AMD R9, however, every review had to be taken with a grain of salt.

Like you say, though, with aftermarket cooling the problem with the R9 disappeared.

January 28, 2015 | 08:08 PM - Posted by Alejandro (not verified)

You guys should take a look at older generations of cards with segmented memory, like the GTX 660 - which covers a good portion of games.
Have a look at the VRAM usage at the beginning - 1950MB - and then from 40 seconds onward, and the stutters introduced after the change.

Nai's benchmark results on those cards are also very similar to the GTX 970's.

January 28, 2015 | 08:16 PM - Posted by ibbs (not verified)

hey pcper two things.


What about a situation where, in an open-world game at 4K, assets constantly get unloaded from and uploaded into VRAM? Is that a problem scenario if the engine depends on it, i.e. the content isn't static?

was just thinking.

the second thing:

why on earth do we get in germany this:

"Live Streaming is not available in your country due to rights issues.
Sorry about that."

January 28, 2015 | 08:18 PM - Posted by Anonymous (not verified)

This is a good point; games which have more unique assets and more objects in general will be swapping VRAM more often. Something like the latest Dying Light, which also seems to have pretty high VRAM usage.

Or even Ryse, with how ridiculously high-fidelity everything in the game is. I definitely think it needs more per-game and per-configuration testing (SLI 980 vs SLI 970).

January 28, 2015 | 09:16 PM - Posted by Ryan Shrout

I'm not sure about the live streaming issue...that's very odd.

Can you go to ?

January 28, 2015 | 10:24 PM - Posted by JL (not verified)

Dear Ryan,

Could you run the test by renaming the game executable?

Eg : from CODAW.exe to ABCD2.exe

I wonder if there will be high variance if there isn't any driver optimization being used for the game.

January 28, 2015 | 11:08 PM - Posted by Anonymous (not verified)

Don't a lot of games have game specific optimizations?

January 29, 2015 | 04:40 AM - Posted by Anonymous (not verified)

Live streaming on YouTube is disabled in Germany, because YouTube doesn't care about purchasing music licences from GEMA. As live streams cannot be analyzed by their software filter for GEMA licenced music, YouTube chose this solution.

January 28, 2015 | 08:19 PM - Posted by Alejandro (not verified)

And testing memory segmentation on a 2gb card with the current games would be way easier.

January 28, 2015 | 08:24 PM - Posted by ThorAxe

No surprises here. Running the card in unrealistic gaming scenarios can cause stuttering, but your frame-rate will already be so low that it is not going to matter whether you have a GTX 970 or GTX 980.

January 28, 2015 | 08:27 PM - Posted by Anonymous (not verified)

If you have SLI, your framerate wouldn't be so unplayably low, yet you would still be pretty badly affected (ceteris paribus of course).

it is not unrealistic. Especially with the idea that games will include higher res assets and textures over the time of this generation...

January 28, 2015 | 08:36 PM - Posted by ThorAxe

Perhaps. But do you really want to game at between 20-40FPS even without any stuttering?

January 28, 2015 | 08:47 PM - Posted by Anonymous (not verified)

G-Sync, 30fps locked, 40fps locked, RTS games, slow-paced 3rd person games.

Yes. There is much to be argued for testing that config I think.

January 28, 2015 | 09:43 PM - Posted by ThorAxe

True, if you have GSync and it's not a fast paced game and you want to game at 4K.

But seriously, the 970 was never designed for such astronomical settings. If you are really looking to game at that level then the AMD cards are a better fit for the price or if you have the cash, GTX 980s.

January 29, 2015 | 06:51 AM - Posted by Anonymous (not verified)

Same as what I've been thinking this whole time. That's probably why this issue has been in the dark so long: no one is running settings high enough to push over 3.5GB. Someone has had a play and found a problem... playing on settings that are too high for the card makes it unplayable! Well, it would be unplayable anyway! lol

If you're thinking of doing 4k or triple display, you should be thinking of 980s that will do the job properly.

January 28, 2015 | 08:38 PM - Posted by Anonymous (not verified)

Instead of turning up resolution, AA, or presets to try to hit the 512MB, vary the texture settings.

You'll see this gets you into the 512MB.

Start with settings turned up where you want them to get near 3.3-3.4GB usage, but with a lower level of textures than max.

Then turn up textures to max. You'll see you start using that space.

January 28, 2015 | 08:47 PM - Posted by Anonymous (not verified)

I disagree, scaling resolution seems like the most normalized variable to increase VRAM usage, with all other things being equal. It makes sense to scale with resolution and keep everything else the same.

My question: How does DSR replicate real 4k results? Is there any other error introduced via the DSR technique that is not totally recapitulated in the FCAT analysis? I don't know enough about the technology to give an opinion.

January 29, 2015 | 07:03 AM - Posted by Anonymous (not verified)

DSR will render to 4K in exactly the same way as 4K, but will add an extra pass on that output to perform the gaussian filtering stage to get the final output image. The main difference between this and rendering to 4K for 4K output is that DSR will add an extra small fixed amount of latency (fixed because you always filter the entire frame, every frame), and will add a slight extra fixed load to the GPU to perform the scaling (Maxwell 2 has some extra dedicated hardware to accelerate this).
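The render-then-filter pipeline described above can be sketched in miniature. The toy below uses a simple box filter in place of NVIDIA's gaussian pass (an assumption made for brevity); the point is that the downscale is a fixed extra step applied to an already complete higher-resolution frame:

```python
def downsample_2x(frame):
    """Average each 2x2 block of a higher-res frame (rows of pixel values)
    into one output pixel - a box filter standing in for DSR's gaussian."""
    out = []
    for y in range(0, len(frame), 2):
        row = []
        for x in range(0, len(frame[0]), 2):
            block_sum = (frame[y][x] + frame[y][x + 1] +
                         frame[y + 1][x] + frame[y + 1][x + 1])
            row.append(block_sum / 4.0)
        out.append(row)
    return out

hi_res = [[0, 0, 4, 4],
          [0, 0, 4, 4],
          [8, 8, 2, 2],
          [8, 8, 2, 2]]            # stand-in for the internal 4K render
print(downsample_2x(hi_res))       # → [[0.0, 4.0], [8.0, 2.0]] at display size
```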

January 28, 2015 | 08:48 PM - Posted by Anonymous (not verified)

Very nice tests, thanks for taking the time to do that! For the most part I thought it was quite impartial.

What I find most interesting:

In the BF4 test, the jump happened at 1.3x DSR for the GTX 970 while things remained mostly linear for the GTX 980. According to the first graph, the 1.3x DSR setting was the first to use >3.5GB of VRAM, which coincides perfectly with the partition. Very interesting indeed.

January 28, 2015 | 09:11 PM - Posted by johnc (not verified)

It's not very obvious though.

The article says: "At 130% that number climbs to 3.58GB (already reaching into both pools of memory)". The first memory segment should end at 3,584 MB, so did anything spill over into the second segment?

January 28, 2015 | 09:31 PM - Posted by Anonymous (not verified)

If there is a huge speed cut, you may see an artificial plateau at 3.584GB, after which the card has to really struggle to fill any more.

Does that make logical sense? I don't understand it very well myself, but it seems like this would be a plausible explanation of why it stops so closely at that number.

January 28, 2015 | 09:22 PM - Posted by guybo

It's over-blown from a technical point of view- very few people in very few situations will hit that 3.5+ GB. But from an ethical point of view, it is completely valid to get pissed about this if you have a 970. It's sleazy.

A 4 GB card would not have sold as well as a 3.5 GB card which is why that 0.5 GB was included. I am not convinced that the marketing team didn't know about the ROPs and L2 cache (sorry, Jeremy- CAYCHE).

Nvidia should be held LEGALLY responsible for false claims (whatever the reason) on the side of the box. There should be an investigation into Nvidia business practices and false claims. If I bought a car that said it's an 8 cylinder with 400 HP and when I took it home found that I actually bought a 6 cylinder with 300 HP and the car company said that the marketing team didn't know what a cylinder really was.... no one would care, heads will roll whether I ever actually use those extra 100 ponies or not. I might drive like a granny but I damn sure better get what I paid for.

January 28, 2015 | 09:38 PM - Posted by johnc (not verified)

I'm not sure that nvidia had specified ROPs and L2 cache in any of their ad copy or marketing materials, so I don't know that there's a case for false advertising there. These kinds of specs aren't listed on their website and I'm not sure if they ever made it to the box either. I understand that they went out in technical docs to reviewers.

Regarding your car analogy, the manufacturers do all kinds of tricks to hit certain numbers all the time. They'll go ahead and put 5W20 oil in there or one-off special tires to get that MPG rating that they post, but you're not going to see that kind of thing in normal use. My own experience (which was common across the fleet of late 90s Fords) required replacing a defective intake design which knocked 15 hp off the car. This kind of stuff happens all the time in the auto industry.

January 30, 2015 | 10:56 PM - Posted by Anonymous (not verified)

I believe their web site had the specs there as well.

January 28, 2015 | 11:23 PM - Posted by Anonymous (not verified)

It would have been difficult to not include the "last" 512MB. You don't know which memory controller will have a defect, so you would need 8 different boards, each with a different memory chip missing. If you are going to have to put it on, you should make use of it. The problem is that they didn't communicate the actual specifications.

January 28, 2015 | 09:33 PM - Posted by Anonymous (not verified)

Have to love the closing thoughts

That may not be the answer anyone wants - consumers, gamers, NVIDIA, etc. - but it actually melds with where I thought this whole process would fall. Others in the media that I know and trust, including HardwareCanucks and Guru3D, have shown similar benchmarks and come to similar conclusions. Is it possible that the 3.5GB/0.5GB memory pools are causing issues with games today at very specific settings and resolutions? Yes. Is it possible that it might do so for more games in the future? Yes. Do I think it is likely that most gamers will come across those cases? I honestly do not.

If you are an owner of a GTX 970, I totally understand the feelings of betrayal, but realistically I don't see many people with access to a wide array of different GPU hardware changing their opinion on the product itself. NVIDIA definitely screwed up with the release of the GeForce GTX 970 - good communication is important for any relationship, including producer to consumer. But they just might have built a product that can withstand a PR disaster like this.

I highlighted NVIDIA's apology - sorry, your opinion.

After just 2 games, and being inconclusive in those 2 - not even testing SLI or different genres - you came to that assessment.

I don't see how you can foresee no game in the future having an issue, even with those GameWorks titles recommending 4GB.

Games with these options or
FOV options
PhysX enabled
DLC Texture Packs

This isn't an examination at all.

January 28, 2015 | 09:49 PM - Posted by ThorAxe

I suppose that you have made similar comments at Guru3D and Hardware Canucks, or are you just trolling PCPer?

January 28, 2015 | 10:00 PM - Posted by Anonymous (not verified)

No, apparently you have.

January 28, 2015 | 09:48 PM - Posted by Rick (not verified)

I think the difference in memory use has something to do with the driver/game. CoD is an NVIDIA game and I bet they have tweaks in the driver for the 970 with that game. BF, being an AMD game, maybe doesn't have the same level of driver optimization for the 970.

I do agree with the overall point that if you're getting 25 or fewer fps, you're not going to be playing at those settings. But that brings SLI into the picture. If you SLI two 970s you might be able to get playable fps at those settings. With SLI stutter plus this stutter, things could get bad quick.

This does make me change my opinion of this card. It still is a really good card for the money, but if you plan to SLI two, maybe not.

Good review

January 28, 2015 | 10:38 PM - Posted by Anonymous (not verified)

Great article! Thanks.

January 28, 2015 | 10:38 PM - Posted by Nvidia_Shill (not verified)

Just don't question Nvidia please, everyone knows they would never lie.

The GTX 970 essentially has 4.5GB, the 4GB can be used and additionally the 0.5GB can be used as system memory, essentially you are buying more.

We should be thankful that we had the GTX Titan selling at $1000 for over 6 months, then the GTX 780TI selling at $800 for additional 6 months, until Nvidia gave us the same performance for $550, isn't that great?

January 29, 2015 | 05:01 AM - Posted by Ophelos

No, the card really only has 3.5GB, plus 500MB on the side, which is a total of 4GB... not 4.5GB.

January 28, 2015 | 10:58 PM - Posted by Ty (not verified)

I still don't understand why everyone is only testing single 970 cards at higher resolutions and then claiming no one would play at those levels anyway so no harm no foul.

No sh*t, the point is that many people found the 970s so cheap that they wanted to get TWO of them to use in sli with a 4k display SPECIFICALLY for the purpose of higher resolution gaming.

If sli introduces higher background frame time numbers so be it.

We can still compare 980 sli frame times before 3.5 GB and up to 4, and then 970 from before 3.5 GB and after and see what the differences are. THAT is the prime use case where people actually chose 970s to sli in, not the single gpu. So why is no one testing that?

Because the "it's not an issue to worry about" argument goes away?

January 28, 2015 | 11:03 PM - Posted by Anonymous (not verified)

The real story here is actually the reduced memory bus width, from 256-bit to 224-bit, and the fact that the 970 can only access one pool at a time and not both. Thus the effective bus width at any instant can only ever be 224-bit or 32-bit.

Please incorporate this information into your 970 articles going forward, because this is the biggest deception to come from this fiasco by far.
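The figures that come up throughout this thread all follow from the same arithmetic: peak bandwidth is bus width (in bytes) times the effective per-pin data rate, which is 7 Gbps for the GTX 970's GDDR5. A quick check:

```python
DATA_RATE_GBPS = 7.0  # GTX 970 GDDR5 effective data rate per pin

def peak_bandwidth_gb_s(bus_width_bits, data_rate_gbps=DATA_RATE_GBPS):
    # bits / 8 = bytes transferred per clock edge per pin-group
    return bus_width_bits / 8.0 * data_rate_gbps

print(peak_bandwidth_gb_s(256))  # 224.0 GB/s - the advertised figure
print(peak_bandwidth_gb_s(224))  # 196.0 GB/s - the fast 3.5GB segment
print(peak_bandwidth_gb_s(32))   # 28.0 GB/s - the slow 0.5GB segment
```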

January 29, 2015 | 12:08 AM - Posted by johnc (not verified)

Perhaps but that's all reflected in the benchmark performance numbers anyway.

January 29, 2015 | 01:54 AM - Posted by sschaem (not verified)

Very few games ATM truly leverage >3GB;
it's mostly cached textures.

When future games come out that have higher-res texture sets, this problem might become more visible.

So there is no point in keeping up the lie. Specs should be accurate.

The GTX 970 is a card with a split 224-bit / 32-bit bus,
and only one bus can be used at a time, making the total truly usable memory 3.5GB, not 4GB.

Again: 224-bit, 3.5GB... not 256-bit, 4GB.

This will impact game optimizations...

January 29, 2015 | 04:53 AM - Posted by Lithium

The GTX 970 is a card with a split 224-bit / 32-bit bus,
and only one bus can be used at a time, making the total truly usable memory 3.5GB, not 4GB.

Again: 224-bit, 3.5GB... not 256-bit, 4GB.


The lines above are a LIE.

It's 256-bit for the 3.5 GB
and 32-bit for the last 0.5 GB,
but you have more ROPs, which makes the 970 as good as it is.

January 29, 2015 | 05:01 AM - Posted by Anonymous (not verified)

No, the full 256 bit are not available for the 3.5 GB partition. Just look at the location of the memory controllers and count the number:

January 29, 2015 | 06:28 AM - Posted by Topinio

This is the point.

On a GTX 970 with under 3.5GB of VRAM in use, you have a 224-bit, 196GB/s memory subsystem, not the 256-bit, 224GB/s one that was marketed - bad enough as marketing goes, but it seems OK in actual usage.

On a GTX 970 using over 3.5GB of VRAM, bandwidth is worse, because the card can only use the 224-bit or the 32-bit segment at any instant; the effective memory bandwidth is *less than* 196GB/s, with the drop-off dependent on the fraction of VRAM accesses that hit the slower chunk.

If assets were spread evenly across the whole 4GB and every bit had the same access pattern, the effective memory bandwidth would go down to just under 172GB/s.

Obviously it's in NVIDIA's interest to try and optimize around this in their driver, so IRL it may not be a big deal for gamers.

OTOH, is it possible to code an application to put all the assets in the slow VRAM segment? If so, that segment has only 28GB/s of bandwidth, so the performance could be diabolical ;)
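One way to put numbers on the "one segment at a time" penalty is to assume accesses serialize strictly and weight them by time. This is a deliberately pessimistic model - stricter than the capacity-weighted estimate above, so it produces lower figures - and real drivers will reorder and batch accesses to do better:

```python
FAST_GB_S = 196.0  # 224-bit segment
SLOW_GB_S = 28.0   # 32-bit segment

def effective_bandwidth(slow_fraction):
    """Time-weighted (harmonic) mix: while the slow segment is being
    accessed, the fast one sits idle, so per-byte access times add up."""
    t_per_byte = slow_fraction / SLOW_GB_S + (1.0 - slow_fraction) / FAST_GB_S
    return 1.0 / t_per_byte

print(round(effective_bandwidth(0.0), 1))    # all traffic in the fast pool
print(round(effective_bandwidth(0.125), 1))  # traffic spread by capacity
print(round(effective_bandwidth(1.0), 1))    # all traffic in the slow pool
```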

January 28, 2015 | 11:49 PM - Posted by Valerant (not verified)

If the 970 can't get past 3.5GB in CoD but the 980 can, could it be because of the software (the game and the driver)? The game could detect a specific card and hold back memory usage to avoid a slowdown in performance, but only if the developers knew about it. The driver is also able to do this. Either way, if that's the case, it means NVIDIA already knew about this - that the disabled ROPs hit performance.

January 29, 2015 | 06:11 AM - Posted by Anonymous (not verified)

They WILL address it with a new driver. So, yeah...

January 29, 2015 | 02:35 PM - Posted by Anonymous (not verified)

Apparently PeterS@nvidia has retracted his statement; they are no longer working on a specific driver optimization to address the 970 memory issues:

January 29, 2015 | 03:40 AM - Posted by Klimax (not verified)

Interesting. Although I suspect the best game for this type of testing would be Watch_Dogs, because IIRC that game is never really shader-bound but VRAM/bandwidth-bound.

Wonder how it would look on a G-Sync monitor...

January 29, 2015 | 04:56 AM - Posted by Anonymous (not verified)

I don't think the conclusion "it doesn't matter, because these framerates are unplayable and nobody would choose these settings" is a good one.

It is possible to find a variety of scenarios in which the framerate is OK and stuttering could still occur due to more than 3.5 GiB of VRAM being in use. In the end, the GTX 970 is a high-end card - something that non-PC-enthusiasts will rarely buy.

January 29, 2015 | 05:34 AM - Posted by n2k (not verified)

While I do appreciate the article and the amount of work that went into researching this, I feel it's only half done. What you're trying to find out is whether or not the two memory pools of the GTX 970 have a noticeable frame variance impact in games.

So, as others have stated, why aren't you looking into specific edge cases that address this specific problem? I do know that finding a good testing scenario (which game, with which settings, in what part of the game) is really time consuming, and I also acknowledge that these scenarios are not in any way representative of how the average gamer will utilize their GTX 970.

So my ideal scenario would be the following:
Try to increase the VRAM above 3.5GB while keeping the performance at 60fps, and keep the resolution at 1440p or even 1080p if possible. This way any sort of frame variance is more likely to show up in the FCAT graphs.

When such a test scenario is found, I'd like to see it tested with two GTX 970s in SLI as well, since that would be the best bang-for-your-buck setup for high-end gaming.

Unfortunately I cannot contribute to this in any other way (I'm still using GTX 580 3GB in SLI). I hope to see a followup of this article, and going (even) more in depth of this issue.

January 29, 2015 | 05:56 AM - Posted by n2k (not verified)

I would like to add the following: I think comparing against a GTX 980 is also not worth it, and is only making things more complicated. What you know is that when using < 3.5GB the GTX 970 should be able to use the maximum possible memory bandwidth. So it would be good to use that to create a test scenario to get an actual apples to apples comparison.

January 29, 2015 | 07:44 AM - Posted by ENBSeries (not verified)

I repeat: the video card works like any other, but only 3.5 GB is actually available, and the last 0.5 GB is non-local video memory (which is system RAM); there is no slow video memory like NVIDIA said. It's a lie, and it can easily be proved (I proved it by writing my own tests). Just allocate blocks in VRAM and dump system RAM, then search the dump for the contents of those "VRAM" blocks, and you will see that the last 0.5 GB is stored in RAM. Is that so hard? I feel like a genius seeing that no one notices the obvious.
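[Editor's note] The detection method described above (tag each allocation with a known pattern, then scan a dump of system RAM for that pattern) can be modeled in a few lines. This is a simplified CPU-only sketch: plain byte buffers stand in for the VRAM blocks and the RAM dump, and no actual GPU allocation or memory dumping is performed:

```python
import os

BLOCK_SIZE = 64  # stand-in for a VRAM allocation, in bytes

def make_tagged_block(block_id: int) -> bytes:
    """Build an allocation filled with a unique, searchable signature."""
    signature = b"VRAMBLK%04d" % block_id
    payload = os.urandom(BLOCK_SIZE - len(signature))
    return signature + payload

def blocks_found_in_dump(dump: bytes, blocks: list) -> list:
    """Return the ids of tagged blocks whose signature appears in the dump."""
    found = []
    for i, block in enumerate(blocks):
        signature = block[:11]  # the "VRAMBLK%04d" prefix (11 bytes)
        if signature in dump:
            found.append(i)
    return found

# Simulate: blocks 0-6 stay in "VRAM" (absent from the RAM dump),
# while block 7 is silently backed by system RAM (present in the dump).
blocks = [make_tagged_block(i) for i in range(8)]
ram_dump = os.urandom(4096) + blocks[7] + os.urandom(4096)

print(blocks_found_in_dump(ram_dump, blocks))  # → [7]
```

On real hardware the same idea would require genuine GPU allocations (e.g. via Direct3D or CUDA) and a physical-memory dump; this model only illustrates the search step of the claimed test.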

January 29, 2015 | 12:22 PM - Posted by Aaron Barnes (not verified)

Is this Boris, the one, the only?

January 29, 2015 | 12:26 PM - Posted by Aaron Barnes (not verified)

One more:

Boris has been offering a different analysis:

Has anyone else seen this?

January 29, 2015 | 01:49 PM - Posted by Anonymous (not verified)

Holy crap, so the last 512MB isn't being utilized at all? WTF NVIDIA????

From the link:

Update regarding the "GTX 970 memory bug". I wrote another test to check how that slow 0.5 GB of memory works, and again it's the same thing the driver has done for a long time: that memory is stored in system RAM instead of VRAM, which is why it's slow. Basically, this is standard behavior for most video cards on the market (reported VRAM is physical VRAM plus a bit of system RAM). What does it mean in practice compared to other video cards? The GTX 970 has 3.5 GB of VRAM. What I see in articles with explanations from NVIDIA is a half-lie, and of course casual people are incompetent and better not listened to. I don't think it's horrible to lose 0.5 GB, but it is bad that NVIDIA hid this information (my own video card with 2 GB of VRAM has access to 2.5 GB, and nobody announced it as 2.0 fast and 0.5 slow).

January 29, 2015 | 07:52 AM - Posted by JohnGR

Nice article. Thanks.

January 29, 2015 | 09:17 AM - Posted by Parn (not verified)

While I do believe this article holds true for single-GPU scenarios, the question of how badly the 0.5GB memory pool affects SLI performance still needs to be answered.

With the current crop of games, a single 970 pushing 3.5GB+ of VRAM will most likely yield unplayable FPS anyway. For SLI users, however, 3.5GB+ could be a daily occurrence.

A comparison between 980 SLI and 970 SLI could easily show the impact of the 0.5GB memory pool. Stutters caused by SLI glitches can be removed from the picture by looking at the 980 SLI results.

January 29, 2015 | 04:36 PM - Posted by aparsh335i (not verified)

I have GTX 970s in SLI and have had zero problems.
I play on an Asus ROG Swift with G-Sync @ 2560x1440.
Want me to try to do some testing?

January 29, 2015 | 10:24 AM - Posted by Jesse (not verified)

The same thing happens with the GTX 660/Ti. VRAM usage will run up to 1536MB and then either stutter and go over (after which it's mostly fine, with a very slight framerate hit and possibly more stutters), or it will just bounce back down to about 1530MB and stay there.

It seems like the exact same thing is happening with the GTX 970: usage climbs to 3584MB and then there's a stutter, after which it either goes over or stays right at the 3.5GB limit.

January 29, 2015 | 10:41 AM - Posted by Anonymous (not verified)

Since nVidia isn't doing the right thing, AMD is offering a discount if you want to return your 970 for an AMD card:

January 29, 2015 | 01:14 PM - Posted by Anonymous (not verified)

More lies from nVidia; I wonder how PCPer will defend them on this:


Update regarding the "GTX 970 memory bug". I wrote another test to check how that slow 0.5 GB of memory works, and again it's the same thing the driver has done for a long time: that memory is stored in system RAM instead of VRAM, which is why it's slow. Basically, this is standard behavior for most video cards on the market (reported VRAM is physical VRAM plus a bit of system RAM). What does it mean in practice compared to other video cards? The GTX 970 has 3.5 GB of VRAM. What I see in articles with explanations from NVIDIA is a half-lie, and of course casual people are incompetent and better not listened to. I don't think it's horrible to lose 0.5 GB, but it is bad that NVIDIA hid this information (my own video card with 2 GB of VRAM has access to 2.5 GB, and nobody announced it as 2.0 fast and 0.5 slow). So sad that all my posts on the forums were trolled; fools are always the most active and aggressive. Hopefully it's their own butthurt, since they won't listen to professionals.

January 29, 2015 | 01:21 PM - Posted by Dave4321 (not verified)

You could also think about it this way: on the 980, VRAM is also taken up by things that do not need fast memory (Windows, drivers, etc.). If Nvidia's new driver can use the 500MB pool for things that do not need fast VRAM, and the 3.5GB pool for things that do, then the gap will narrow.

January 29, 2015 | 09:22 PM - Posted by Anonymous (not verified)

The Windows OS is in charge of that.

Nvidia would have to "hack" its own driver for each game to do that.

People who bought the 970 and don't play commercially well-known games are left out to dry, because Nvidia would have to apply that driver hack or optimization to every game that's ever released.

We aren't there yet; software isn't self-aware.

January 29, 2015 | 01:19 PM - Posted by Dave4321 (not verified)

It would be great if you could compare frame times to a 290 and 290x.

January 29, 2015 | 05:21 PM - Posted by Anonymous (not verified)

Frametimes aren't looking too good there.

January 29, 2015 | 06:15 PM - Posted by Emile (not verified)

I'm pretty sure this isn't about the last 0.5GB; it's more about the principle. It feels like Nvidia was just trying to keep it a secret. I'm 99% sure there wouldn't be a problem if the card had 3.5GB, or if they had told us about the slower last 0.5GB and said its performance decrease was negligible.

January 29, 2015 | 11:50 PM - Posted by Anonymous (not verified)

Bye bye magic driver :P

January 31, 2015 | 05:45 PM - Posted by Anonymous (not verified)

They may still do that, but in silence. Peter from Nvidia clearly stated they are working on a fix; why would he lie? But then he could get orders from management to deny it, because admitting this publicly would mean confessing their fault, and that could be a problem for them. So now they may pretend everything is fine, no problems whatsoever, while in the background they optimize their drivers for the 970... And once that's released, we will have no issues... A miracle :) Maybe BS, but everything is possible :)

January 30, 2015 | 09:16 AM - Posted by hippiehacker

Playing Dying Light last night, I found it's a game that stops at 3.5GB of VRAM and refuses to use more. Aside from the CPU issue the devs are working on patching, though, I didn't have any hiccups on the GTX 970. I seem to have found an interesting point: OpenGL apps can use all 4GB of memory. There's a DirectX VRAM test on their site, and no matter what I tried I couldn't get it to use over 3.5GB.

Another interesting note: I opened up GPU-Z. Sitting at my desktop with dual 1080p screens I'm using about 300MB of VRAM on average, and with Chrome open about 430MB.

It seems possible that UI elements are just sitting in this slower part? (Huge speculation.)

Win 7 x64 SP1, using the latest Nvidia driver. I wonder if Windows 8 can use all the memory?

January 31, 2015 | 05:05 AM - Posted by jawad kazmi (not verified)

Thanks for the continued updates on the 970 memory issue, guys. Much appreciated. This will surely help people get a fair picture and decide on a purchase. Cheers.

February 2, 2015 | 03:21 PM - Posted by RagNoRock

Try using Star Citizen for the tests; it's a super VRAM hog.

February 2, 2015 | 04:58 PM - Posted by Anonymous (not verified)

If you are upset with NVidia over this, sign the petition and help people get their money back:

February 5, 2015 | 05:04 PM - Posted by Mr.Schmucko (not verified)

I believe that people with multi-monitor and 4K monitor setups will run into these problems today, and it will only get worse in the future. Sure, the average gamer plays at 1920x1200, but that doesn't mean we aren't upgrading to 4K or ultrawide.

February 11, 2015 | 09:56 AM - Posted by drbaltazar (not verified)

What about row hammer? I assume the 500MB part is a DDR3 variant? Hopefully you guys will make an article about row hammer; so far maybe only Intel has fixed this (they're the ones with a 2014 patent mentioning row hammer).
