
The NVIDIA GeForce GTX 980 Ti 6GB Review - Matching TITAN X at $650

Manufacturer: NVIDIA


When NVIDIA launched the GeForce GTX Titan X back in March of this year, I knew immediately that the GTX 980 Ti would be close behind. The Titan X was so different from the GTX 980 when it came to pricing and memory capacity (12GB, really??) that NVIDIA had set up the perfect gap in which to place the newly minted GTX 980 Ti. Today we get to take the wraps off of that new graphics card, and I think you'll be impressed with what you find, especially when you compare its value to the Titan X.

Based on the same Maxwell architecture and GM200 GPU, with some minor changes to GPU core count, memory size and boost speeds, the GTX 980 Ti finds itself in a unique spot in the GeForce lineup. Performance-wise it's basically identical to the GTX Titan X in real-world game testing, yet it is priced $350 less than that 12GB behemoth. Couple that with a modest $50 price drop on the GTX 980 and you have all the markers of an enthusiast graphics card that will sell as well as any we have seen in recent generations.


The devil is in all the other details, of course. AMD has its own plans for this summer, but the Radeon R9 290X is still sitting there at a measly $320, roughly half the price of the GTX 980 Ti. NVIDIA seems to be pricing its own GPUs as if it isn't even concerned with what AMD and the Radeon brand are doing. That could be dangerous if it goes on too long, but for today, can the R9 290X and its aging Hawaii XT GPU put up enough of a fight to make the value case to gamers on the fence?

Will the GeForce GTX 980 Ti be the next high-end GPU to make a splash in the market, or will it make a thud at the bottom of the GPU gene pool? Let's dive into it, shall we?


GTX 980 Ti Specifications

If my introduction didn't give the whole story away, this table likely will. The GeForce GTX 980 Ti is almost indistinguishable from the GTX Titan X anywhere that matters.

                  | GTX 980 Ti  | TITAN X     | GTX 980     | TITAN Black | R9 290X
GPU               | GM200       | GM200       | GM204       | GK110       | Hawaii XT
GPU Cores         | 2816        | 3072        | 2048        | 2880        | 2816
Rated Clock       | 1000 MHz    | 1000 MHz    | 1126 MHz    | 889 MHz     | 1000 MHz
Texture Units     | 176         | 192         | 128         | 240         | 176
ROP Units         | 96          | 96          | 64          | 48          | 64
Memory            | 6GB         | 12GB        | 4GB         | 6GB         | 4GB
Memory Clock      | 7000 MHz    | 7000 MHz    | 7000 MHz    | 7000 MHz    | 5000 MHz
Memory Interface  | 384-bit     | 384-bit     | 256-bit     | 384-bit     | 512-bit
Memory Bandwidth  | 336 GB/s    | 336 GB/s    | 224 GB/s    | 336 GB/s    | 320 GB/s
TDP               | 250 watts   | 250 watts   | 165 watts   | 250 watts   | 290 watts
Peak Compute      | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 5.1 TFLOPS  | 5.63 TFLOPS
Transistor Count  | 8.0B        | 8.0B        | 5.2B        | 7.1B        | 6.2B
Process Tech      | 28nm        | 28nm        | 28nm        | 28nm        | 28nm
MSRP (current)    | $649        | $999        | $499        | $999        | $329

Let's start with the CUDA core count, where the GTX 980 Ti hits 2,816 cores, 256 fewer than the GTX Titan X, a drop of just over 8%. Along with that comes a drop in texture units from 192 to 176. The ROP count and 384-bit memory bus remain identical, though, providing the same memory bandwidth as the $999 flagship card offering from NVIDIA.
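If you want to sanity-check those figures yourself, the table's throughput numbers fall straight out of the core counts and clocks. A minimal sketch, using only the reference specs listed above (not measured clocks):

```python
# Derive the table's peak compute and bandwidth figures from the raw specs.
# Peak FP32 = CUDA cores x 2 FLOPs/clock x base clock.
# Bandwidth = effective memory clock x bus width / 8.
cards = {
    "GTX 980 Ti": {"cores": 2816, "clock_mhz": 1000, "mem_mhz": 7000, "bus_bits": 384},
    "TITAN X":    {"cores": 3072, "clock_mhz": 1000, "mem_mhz": 7000, "bus_bits": 384},
}

for name, c in cards.items():
    tflops = c["cores"] * 2 * c["clock_mhz"] * 1e6 / 1e12
    bw_gbs = c["mem_mhz"] * 1e6 * c["bus_bits"] / 8 / 1e9
    print(f"{name}: {tflops:.2f} TFLOPS, {bw_gbs:.0f} GB/s")

deficit = 1 - cards["GTX 980 Ti"]["cores"] / cards["TITAN X"]["cores"]
print(f"980 Ti core deficit vs TITAN X: {deficit:.1%}")  # ~8.3%, with identical bandwidth
```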


Block Diagram of the GM200 inside the GTX 980 Ti - Two SMMs disabled

You will notice a drop from 12GB of memory to 6GB for this card, which is not surprising. In all honesty, for gamers, the 12GB of memory was always fool's gold, never affecting gameplay even in the most extreme configurations; even 4K Surround (triple monitors) would likely be unable to utilize it. At 6GB, the GTX 980 Ti has the highest default memory capacity of any reference card other than the Titan X. For even the most demanding games, at the most demanding resolutions, 6GB of memory will be just fine. Our test settings at 4K for Grand Theft Auto V, for example, are estimated to use about 4.9GB, and that's the highest of any current title we test. And with all the rumors circulating about AMD's Fiji being stuck at 4GB because of its HBM implementation, expect NVIDIA to tout and advertise that advantage in any way it can.
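To put some rough numbers behind that, here is a back-of-the-envelope look at just the color and depth render targets at 4K with 4x MSAA. The byte-per-pixel figures are simplified assumptions (no compression, no HDR targets, no driver overhead); the point is simply that render targets are a small slice of that 4.9GB, with streamed texture and geometry assets making up the bulk:

```python
# Rough render-target footprint at 4K with 4x MSAA (illustrative only).
# Assumes 4 bytes/pixel color (RGBA8) + 4 bytes/pixel depth/stencil per sample.
width, height, msaa_samples = 3840, 2160, 4
bytes_per_sample = 4 + 4
framebuffer_mb = width * height * msaa_samples * bytes_per_sample / 1024**2
print(f"4K + 4x MSAA color/depth: ~{framebuffer_mb:.0f} MB")  # ~253 MB
```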


Even though the rated base clock of the GTX 980 Ti is the same as the Titan X at 1000 MHz, in my experience with the two graphics cards the 980 Ti does seem to hit 40-70 MHz higher Boost clocks over extended gaming periods. As you'll soon see in our benchmarking, this is likely the biggest reason that despite the supposed disadvantage in CUDA core count, the GTX 980 Ti and the Titan X are basically interchangeable in gaming capability. I also suspect that some changes to the GPU power delivery help that a bit, as indicated by a 92C (versus 91C) rated thermal limit for the GTX 980 Ti.
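On paper, that boost behavior goes a long way toward closing the gap. A quick sketch using the base clocks from the table plus the 40-70 MHz delta we observed (actual sustained clocks vary from card to card and with workload):

```python
# Relative shader throughput (cores x clock), 980 Ti vs TITAN X,
# crediting the 980 Ti with the 40-70 MHz higher sustained boost we observed.
titan_x = 3072 * 1000
for extra_mhz in (40, 70):
    ti_980 = 2816 * (1000 + extra_mhz)
    print(f"+{extra_mhz} MHz: 980 Ti at ~{ti_980 / titan_x:.1%} of TITAN X throughput")
# +40 MHz -> ~95.3%, +70 MHz -> ~98.1%; with memory bandwidth identical on both
# cards, that's close enough to trade blows in real games.
```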

In terms of TDP, the GTX 980 Ti is rated at 250 watts, the same as the GTX Titan X and 40 watts lower than the R9 290X. In reality, through our direct graphics card power consumption testing, the GeForce cards err on the side of caution while the AMD Radeon R9 series swings the other direction, often drawing more power than the TDPs would indicate. The GTX 980 Ti uses the exact same cooler design and even the same fan curve implementation (according to NVIDIA), so sound levels and cooling capability should be identical, yet again, to the Titan X.

May 31, 2015 | 06:03 PM - Posted by Anonymous (not verified)

Might want to fix the typo:

"..the GTX 780 Ti finds itself in a unique spot in the GeForce lineup".

I think it's supposed to be the 980 :)

May 31, 2015 | 06:08 PM - Posted by Ryan Shrout


May 31, 2015 | 07:30 PM - Posted by Anonymous (not verified)


You do realize The Witcher 3 and Project Cars had GameWorks. Saying their driver development is miles ahead because of that is rather silly and irresponsible, especially given how the developer statements and patches for both have panned out.

May 31, 2015 | 08:10 PM - Posted by Ryan Shrout

Those aren't really the only two examples though. GTA V, for example.

I think we are going to give more into this with some vendor interviews in the near future.

May 31, 2015 | 08:21 PM - Posted by Anymouse (not verified)

The explanation you gave referred to having optimized drivers on release for GameWorks-backed titles. Silly to make such a statement given the obvious nature of the business.

Same kind of silliness would be to imply that AMD having a driver out on release for an AMD Gaming Evolved backed title translates into being miles ahead.

I hope you ask how come the console optimization on GCN isn't translating to the PC given how much time is spent on Xbox and PS4 development.

I also hope you don't get a PR spin like usual from either side.

May 31, 2015 | 10:14 PM - Posted by Anonymous (not verified)

The big difference is that nvidia had day-0 drivers for Gaming Evolved titles, while AMD didn't have them at the launch of TWIMTBP titles.

Hell, AMD often didn't have proper drivers even at the launch of its own Gaming Evolved games and related titles.

So don't be silly and stop seeing "silliness" in Ryan's review because he stated a fact:

When it comes to drivers that support games at launch, nvidia is the best.


Battlefield 3 and 4 (they aren't literally GE titles, but they are more than that: Mantle version, 8 million dollars, Johanson doing PR for AMD, etc.), Bioshock Infinite, Civ: Beyond Earth. And many more.

June 1, 2015 | 01:20 AM - Posted by chizow (not verified)

Exactly, Nvidia is going to release Day 0 or Day 1 drivers for ANY big game launches regardless of branding because in the end, they know their users demand this level of support.

It is almost as if AMD is spiting their own customers in an attempt to make Nvidia/GameWorks look bad, when in the end, it just hurts their own customers.

June 1, 2015 | 01:29 AM - Posted by Anonymous (not verified)

Now that's what you call nvidiot shilling...

June 1, 2015 | 02:28 AM - Posted by hosko

I think you need to look in the mirror, mate. It's not Nvidia's fault that AMD didn't have a driver out in time for Project Cars' release.

This is from Slightly Mad Studios

"Project CARS is not a GameWorks product. We have a good working relationship with nVidia, as we do with AMD, but we have our own render technology which covers everything we need."

"NVidia are not "sponsors" of the project. The company has not received, and would not expect, financial assistance from third party hardware companies."

“We’ve provided AMD with 20 keys for game testing as they work on the driver side. But you only have to look at the lesser hardware in the consoles to see how optimised we are on AMD based chips.

We’re reaching out to AMD with all of our efforts. We’ve provided them 20 keys as I say. They were invited to work with us for years, looking through company mails the last I can see they (AMD) talked to us was October of last year.

Categorically, Nvidia have not paid us a penny. They have though been very forthcoming with support and co-marketing work at their instigation. We’ve had emails back and forth with them yesterday also. I reiterate that this is mainly a driver issue but we’ll obviously do anything we can from our side.”

So AMD can't get a driver out for the release of the game. Nvidia spends more money on driver development for their game ready drivers.

This is a reason to buy an Nvidia card. Reviewers should mention this. Fanboys shouldn't go crazy at reviewers for mentioning it.

So if AMD didn't get advance notice, how come they already have a new driver release available? The issue is it should have been ready the day before the release date.

June 1, 2015 | 03:02 AM - Posted by Anonymous (not verified)

Nvidia doesn't need to "pay" directly... dev support & free marketing/kit are enough. Having code and drivers tuned together is what nvidia banks on. Indeed, they flagged their fear of AMD doing exactly this in an article by Anand a number of years ago. AMD didn't take this step, but nvidia is paranoid (like Intel), so they did. Can't blame them with AMD locking up the gaming consoles. AMD only had access to the Cars RTM a month before launch. Nvidia code lockout deals & AMD not spending on dev outreach drive the current situation.

June 1, 2015 | 03:43 AM - Posted by hosko

So why doesn't AMD offer similar support? Why was their last correspondence with a game developer last October? AMD has been behind on driver support for a long time. That's why they pushed Mantle so hard: it would move the burden away from AMD and onto the developers. If you give developers access to the bare metal you don't need specific drivers.

The only issue is we are not there yet, and as a result AMD's poor driver support is being shown up. nVidia spends a boatload on their drivers and GeForce Experience. In the free market you get rewarded for effort and punished if you don't keep up with your competition. Sadly the lack of competition in graphics means this is only going to happen more and more often.

Pcars was delayed, so of course they only got the RTM just before release, but if you read the quotes, Slightly Mad Studios gave access to the game as they were developing it; AMD just didn't communicate back to the studio.

June 1, 2015 | 08:54 AM - Posted by Luthair

Lack of money one assumes.

The developers (of the game) still hold responsibility for the performance of their game; blaming others is a cop-out. To me, as a non-game developer, it seems more than a little ridiculous that every game (written to a supposed standard) requires driver customization.

June 1, 2015 | 11:49 AM - Posted by Anonymous (not verified)

"NVidia are not "sponsors" of the project. The company has not received, and would not expect, financial assistance from third party hardware companies."

With all the Nvidia logos plastered over every flat surface they could find, I have a really, really hard time believing this claim.

June 1, 2015 | 03:34 PM - Posted by Anonymous (not verified)

ThIs!! ^^

June 2, 2015 | 04:33 AM - Posted by Anonymous (not verified)

This is PC Perspective; we take Nvidia as God's word.

June 7, 2015 | 02:20 AM - Posted by DaKrawnik

lol Plastering your logo on something because you're a sponsor and doing it as advertising because you did some work for them are two different things. It's no different than the guys doing your roof putting a sign with their company logo and phone number on your lawn.

June 1, 2015 | 04:27 AM - Posted by Aysell

The recent day 1 driver updates from nvidia have been awful; the Witcher and Project Cars updates are among them. While my friends on AMD cards took a little performance hit due to idiotic nvidia features, they could at least play the game and not have it crash every 10 min. A significant number of people needed to roll back just to be able to play. A problem I did not have on my former 7870; using a 970 now.

Having a day 1 driver ready counts for jack shit if it works like crap, something AMD seems to have learned, and dissing them for it while giving nvidia a free pass on shit drivers does not seem like an objective review... ijs...

June 1, 2015 | 11:50 AM - Posted by Anonymous (not verified)

You're mental. EVERYBODY knows that AMD drivers always always always suck, and Nvidia drivers come gold-plated.


June 1, 2015 | 02:32 AM - Posted by hosko

Project Cars isn't GameWorks. You aren't going to develop for nVidia if you are releasing on consoles with AMD hardware. Why does Project Cars play perfectly well on the consoles but not on AMD-based PCs? Because of the drivers?

June 1, 2015 | 03:30 AM - Posted by Anonymous (not verified)

Perhaps because the console port is tuned appropriately for the fixed hardware, and the PC port had nvidia input to redress the performance balance of a straight-from-console GCN port? PhysX (CPU) is used for some physics calcs in the PC port. The latest AMD drivers only improve performance by 10%, and Win10 by 25%, apparently. This is what happens when architectures diverge. Nvidia basically made Maxwell a great gaming card for now; all compute capabilities are reduced and DP is bye-bye. What will be the outrage if AMD does the same in corresponding games?

June 4, 2015 | 07:10 AM - Posted by Ry (not verified)

"the fact that you can get identical performacne for $350 less is a great thing"

I don't want acne!

May 31, 2015 | 06:28 PM - Posted by arbiter

Well, the 980 Ti's performance is about where it was expected. The question now is whether AMD's Radeon Fury is going to be fast enough to justify its price if the rumored $850 is true.

May 31, 2015 | 07:15 PM - Posted by Anonymous (not verified)

The compute performance alone on the AMD Fury will justify the price to the prosumer at 8.6 TFLOPs. The Titan Z is the closest in single precision at $3000 for a dual GPU with 8 TFLOPs and lousy double precision. If Fury keeps its double precision intact, unlike the Maxwell based cards, it will destroy anything they can offer until next year.

May 31, 2015 | 07:29 PM - Posted by arbiter

For gamers, compute performance means absolutely dick. If it meant anything to gamers then AMD would have been massively ahead since the 7000 series cards.

May 31, 2015 | 08:17 PM - Posted by remon (not verified)

Oh the irony.

Anyway, what's keeping compute back in games is Nvidia and their crappy compute cards. There's a reason TressFX is faster than HairWorks even on Nvidia cards, and that is because it's based on DirectCompute.

May 31, 2015 | 09:00 PM - Posted by arbiter

Maybe AMD needs to get better at DX11 tessellation, which is A STANDARD. Funny how AMD fans want nvidia to build things on a standard, which is exactly what HairWorks uses. Must hurt pretty bad knowing AMD's top dog 290x card gets BEAT by a 750 Ti in tessellation performance.

June 1, 2015 | 01:13 AM - Posted by remon (not verified)

Well, they got better tessellation, with Tonga.

June 15, 2015 | 03:21 AM - Posted by SiliconDoc (not verified)

Kinda sad they got better tessellation with Tonga when I had to listen to and read the bloviating amd fan gasbags blabbering that AMD had tessellation hardware in the HD2900 and it shows how awesome they are and how far ahead they are in tech, and blah blah blah blah blah!

That turned into, for the past 3 or 4 YEARS: "wahhhh wahhh nvidia is forcing tessellation on blank walls and along sidewalks and no one needs 16x tess or more ever!"

Thus, the stupendous hardware amd junkie wacko went from insanely incoherent and inconsequential hardware braggadocio to blubbering and whining in victimhood, again, when their holier-than-thou amd epic failed.

June 1, 2015 | 01:45 AM - Posted by Anonymous (not verified)

Hairworks is tuned to nvidia's front-end geometry strength on higher-end GPUs, but is a dumb way of implementing hair. It basically occupies all pipeline stages: GS-HS-DS-VS. >64x amplification is pointless when at the sub-pixel level. 8x MSAA is the icing on top... So no, even single/dual primitive (tris) pipeline GPUs from nvidia also suck. The cost to Maxwell is static scheduling - read dumb...

It will be interesting to re-visit this stuff with DX12, when nvidia's better DX11 multi-threaded performance (2 threads/cycle) won't matter as their command processors are limited compared with AMD's supposedly superior command rate of >4 threads/cycle. Maxwell's current static model may indeed hurt moving forward, but GCN ain't no panacea either and makes a different set of compromises. Time will tell whether DX12 and compute will hurt Nvidia buyers.

May 31, 2015 | 08:18 PM - Posted by remon (not verified)

* The irony of posting this in a Titan review.

May 31, 2015 | 09:16 PM - Posted by pdjblum

Always have a hard time with "irony." Can you please explain the irony here? Thanks much in advance.

June 1, 2015 | 01:22 AM - Posted by chizow (not verified)

Most likely not, and I think this is why Nvidia shot first at $650. They've basically pre-emptively undercut Fury's pricing. It will have to beat 980Ti/Titan X by 15-20% to keep that $850 price tag, and if it is +/-5% it is going to be $500-$600 max. Certainly a far cry from $850 aspirations.

June 1, 2015 | 06:34 AM - Posted by donut (not verified)

Howz that Titan X going for you? Ouch that's gotta hurt.

May 31, 2015 | 06:29 PM - Posted by Dan (not verified)

Just a suggestion, but it would be helpful if, when you hover your mouse over the charts, a popup described what each one is or what it measures (FPS by Percentile, Frame Variance, Frame Times, etc). I can't be the only one who really has no idea what most of the charts are depicting.

Love the site and content, fantastic article. Keep up the great work!

May 31, 2015 | 08:08 PM - Posted by Ryan Shrout

Do you mean a more detailed description? Because the graph does indicate Frame Variance, etc. in the title of it.

May 31, 2015 | 10:50 PM - Posted by Dan (not verified)

Yes, exactly: what frame variance is and what the measurement is taken of (and the same for the other graphs). I know you have an entire article on this but a quick sentence recap would be nice.

May 31, 2015 | 06:30 PM - Posted by Crazycanukk


This card is going to find a new home in my gaming PC as soon as a decently overclocked version hits retailers. My stock 780 at 1440p is showing its limits with Witcher 3 and I'm sacrificing a lot of eye candy just to keep a reasonable fps rate. The G-Sync monitor helps, but I must admit I am an eye candy addict and the 780 can't deliver at 1440p. I will move the 780 into another PC at the cottage for 1080p gaming and the 760 there to my family room PC at home.

Debated getting the Titan X but just couldn't justify the cost for single monitor 1440p gaming... this I can justify, and I think I can get 2 years like the 780 out of it before I upgrade and shuffle cards again :)

May 31, 2015 | 06:54 PM - Posted by Joe (not verified)

Waiting for Gigabyte Windforce G1 Gaming edition.

May 31, 2015 | 07:02 PM - Posted by siriq111 (not verified)

Well, almost bang on the buck. This price also tells me about the other side as well.

May 31, 2015 | 07:02 PM - Posted by Anonymous (not verified)

How long until someone finds out it's missing something, like the 970?

May 31, 2015 | 07:22 PM - Posted by Crazycanukk

The 970 wasn't missing anything; it was a different way of utilizing the memory and a different memory architecture. It was marketing not being clear on the packaging and advertising about that change, but the consumer didn't miss out on anything and got the full 4 gigs when they needed it.

June 1, 2015 | 01:55 AM - Posted by nvidiot (not verified)

Ummm, it was missing TMUs & L2 cache, which resulted in the 3.5GB+0.5GB memory partitions...

[b]Ryan do we have definitive confirmation that all TMUs & cache are retained with the removal of 2 SMMs?[/b] The block diagram is only marketing as TMUs & L2 (especially) are tied to SMM clusters.

June 1, 2015 | 02:45 AM - Posted by nvidiot (not verified)

Let me answer my own question...

Excising 2x SMMs @ 4 pixels/cycle = 88 pixels/cycle for single cycle operation even though 96 ROPs are specced. TMUs are down to 176 from 192, but the L2 partition & memory controllers are apparently unaffected, so performance/clock will be worst case ~8% lower than the Titan X. With a bit more power headroom, the 980 Ti will trade the performance lead with the Titan X depending on the workload bottleneck. Not too bad for 6GB & $350 less.
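For anyone checking the arithmetic above, a minimal sketch, assuming the commonly cited Maxwell figures of 4 pixels/clock output and 8 TMUs per SMM (neither figure is confirmed in the review itself):

```python
# Sanity check of the comment above: pixel and texture throughput of GM200
# with 2 of its 24 SMMs disabled, assuming 4 pixels/clock and 8 TMUs per SMM.
SMM_TOTAL, SMM_DISABLED = 24, 2
active_smms = SMM_TOTAL - SMM_DISABLED          # 22
pixels_per_clock = active_smms * 4              # 88 -- below the 96 ROPs on the back end
tmus = active_smms * 8                          # 176 (vs 192 on the full chip)
print(pixels_per_clock, tmus)
```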

June 1, 2015 | 02:58 AM - Posted by Anonymous (not verified)

I am waiting to see tests that determine specifically what the situation is. It would be stupid for them to misrepresent the actual memory bandwidth again, so I am leaning toward it being correct and equivalent to the Titan X. There is also the fact that the 980 Ti performs almost exactly the same as a Titan X, even though it has less hardware. This implies that they have the same bandwidth and that they are both memory bound. They may get a good boost out of memory overclocking, although I assume that GDDR5 has been pushed about as far as it will go.

May 31, 2015 | 07:07 PM - Posted by RadioActiveLobster

I currently have 2 Gigabyte Windforce G1 Gaming 980's in SLI.

That should last me a while (gaming at 1440p on a 144Hz G-Sync panel).

I'm waiting to see what Pascal brings to the table.

May 31, 2015 | 07:10 PM - Posted by billb79

Yes, indeed..........I need one of these!

May 31, 2015 | 08:48 PM - Posted by Anonymous (not verified)

New GPU releases are about as exciting as CPU releases have been. Gone are the days when we saw dramatic performance increases with each new generation.

June 2, 2015 | 02:25 PM - Posted by amadsilentthirst

I haven't been moved to buy a GPU since the 8800GTX in 2007, for £340

Since then it seems everything is just a rebrand of a rebrand with no new advancements, well unless you want to spend a thousand pounds that is.

And all this amd/nvidia bickering is getting real old.

May 31, 2015 | 09:05 PM - Posted by funandjam


May 31, 2015 | 09:23 PM - Posted by pdjblum

If this was a preemptive move by Nvidia anticipating the Fiji release, I wonder what they know. Impressive card. It just sucks that flagships these days are so expensive.

May 31, 2015 | 09:36 PM - Posted by Anonymous (not verified)

Maybe they know something.

I certainly don't recall a GPU review on a Sunday.

May 31, 2015 | 11:58 PM - Posted by Anonymous (not verified)

Hmm, the only thing I remember ever coming out on a Sunday was nvidia's response to the 970 debacle.

Now with this card reveal behind us... On to Fiji!

May 31, 2015 | 11:59 PM - Posted by Anonymous (not verified)

Computex starts Sunday at 6pm PST. They want to demo this thing on the show floor.

May 31, 2015 | 09:45 PM - Posted by Anonymous (not verified)

Ordering 1 asap when available in the shops here in the UK!
My acer predator xb270hu 2560x1440 ips 144hz gsync monitor needs more GPU power!

May 31, 2015 | 10:13 PM - Posted by J Nevins (not verified)

Guys, it's not 1999; please update your game testing suite. Why not Witcher 3, or GTA5, or other modern games?

May 31, 2015 | 10:36 PM - Posted by Anonymous (not verified)


Why did the R9 295X2 appear to have larger frame time variance in games at 1440p than at 4K?

June 1, 2015 | 12:14 AM - Posted by Ryan Shrout

Likely because there is less of a GPU bottleneck at the lower resolution, and thus there is more push back on the CPU, where the multi-GPU management takes place.

May 31, 2015 | 11:18 PM - Posted by StephanS

It's interesting to see the 290x being so close to the GTX 980.
What I don't get is how both cards can have very similar gaming performance and the same amount of memory, yet one retails for around $280 and the other for $500.

Side note: the gap gets small to non-existent at 4K between those two cards, suggesting that the perf difference at lower res might be driver related.

DX12 will help equalize the driver situation, and it would be hilarious if a $280 card ends up outperforming a $500 card, even if it's just by a few frames.

June 1, 2015 | 01:19 AM - Posted by remon (not verified)

And one is 1.5 years old.

June 1, 2015 | 11:59 AM - Posted by Anonymous (not verified)

Still stuck at 28 nm, so we haven't been getting the same generational gap we did with previous generations.

June 1, 2015 | 12:41 AM - Posted by Shambles (not verified)

It's hard to get excited about a $650 GPU when the price/performance falls off a cliff after you get past $350. With how well SLI works these days I'd be buying 2x970's long before a 980 Ti.

June 3, 2015 | 08:15 PM - Posted by Anonymous (not verified)

The NVIDIA card uses way less energy, no?

June 1, 2015 | 01:03 AM - Posted by Mandrake

Very nice review. I considered the acquisition of an awesome 980 Ti card. Turns out Nvidia's new Kepler performance driver killed off that idea. My single 780 @ 1200 - 1250MHz in game in The Witcher 3 is running beautifully. At 2560x1440 in the high 40s or 50s fps with most details maxed out, other than hairworks.

Win. :)

June 1, 2015 | 01:29 AM - Posted by Ng Khan Mein (not verified)

ryan, why not test with the 6GB of VRAM really fully utilized? thanks.

June 1, 2015 | 01:32 AM - Posted by Eric Lin (not verified)

As the other guy mentioned, I'm curious as to why the 290x is so close to the 980 in terms of performance.

I would also much rather see a modded Skyrim testing than vanilla Skyrim.

June 1, 2015 | 01:52 AM - Posted by Anonymous (not verified)

What about memory segmentation? Does the 980 Ti have memory split into faster and slower pools just like the 970 with its 3.5+0.5 gigs?

June 1, 2015 | 05:28 PM - Posted by arbiter

There is no problem; all 6GB are accessible at the same speed. Kinda stupid people would think that would happen two times in a row without being told.

June 1, 2015 | 02:56 AM - Posted by Anonymous (not verified)

Saying that the need for more than 6GB of VRAM is a ways off, when GTA V already uses 5GB, seems kind of optimistic.
Also, I didn't think the 295x2 performed that well. Sure, it's a power hog and multi GPU setups have... spotty performance, but look at those averages!

June 1, 2015 | 05:13 AM - Posted by Anonymous (not verified)

I suspect that, going forward, memory consumption will actually go down, not up. There are a lot of new features that could allow much more efficient management of resources. The 295x2 does quite well in my opinion. Right at the moment, I would say it is still the best bang for your buck in that price range, unless you play an older game or one that has not been well optimized for multiple GPUs yet. Anyway, who would buy either a 295x2 or a 980 Ti right now with AMD's new part coming soon?


June 1, 2015 | 05:27 PM - Posted by Anonymous (not verified)

'going forward, memory consumption will go down'
As I said. It seems optimistic.

June 1, 2015 | 11:22 PM - Posted by Anonymous (not verified)

DX12 is set up to use a larger number of smaller draw calls, which should reduce the amount of memory needed at any single instant. Also, a lot is being done with tiling and compression to avoid loading unnecessary resources. This also ties into GPUs making use of virtual memory and unified memory which allows the gpu to load more precisely what is needed (the actual working set).

As AMD has stated, current implementations are very wasteful of memory. There may be some situations where the 4 GB on AMD's upcoming parts is a bottleneck, but it is a trade-off they had to make. The 980 Ti's performance is almost identical to the Titan X, even with less hardware but the same bandwidth. This implies that it is bandwidth limited. Fiji has even more powerful hardware, so it would be held back severely by GDDR5. AMD has probably done a lot of work to decrease memory usage, but there may be some current games which will push the limits at extreme settings. I don't think this will be much of a problem though.

June 1, 2015 | 03:40 AM - Posted by DaveSimonH

Do you think the 980 Ti would have had as good performance as it does, if the 390X from AMD wasn't just around the corner?

June 1, 2015 | 05:43 AM - Posted by collie

I think the max specs of this architecture (this card) were worked out a long time before the chips were even fabricated, BUT I am super surprised they didn't wait till AFTER the new AMD flagship. Makes me worry that they (N) already know it (A) can't compete.

June 2, 2015 | 12:54 AM - Posted by Anonymous (not verified)

I am wondering if AMD will manage to launch more than one HBM part. I would expect a full part and at least one salvaged part with cut down units and/or memory interface. It is possible that the cut down part would compete with the 980 Ti and the full part will be higher priced.

June 1, 2015 | 05:01 AM - Posted by Anonymous (not verified)

I have to wonder if they had originally planned on a narrower memory interface or something. The fact that they are essentially selling a Titan X for a much cheaper price seems to imply that AMD may have an incredible product. This release sounds like Nvidia trying to move as many as possible before Fiji comes out. Contrary to the "I'm going to buy this now" trolls, it would be pretty stupid to buy now with a major AMD launch so soon.

June 1, 2015 | 05:22 AM - Posted by Anonymous (not verified)

I feel like Titan X has been used very effectively as premium decoy pricing. Don't get me wrong I'm not anti-Nvidia and I will probably pick-up this card myself, but comparing this card value wise to Titan X seems to be falling into the trap a little. To me Titan X is just an insane reference point.

June 1, 2015 | 05:33 AM - Posted by Esa (not verified)

I still don't understand how NVIDIA can sell the flawed 970 at that price. They should've replaced it with the 980 and put the Ti above that. They owe us that for mucking up the 970.

June 1, 2015 | 05:59 AM - Posted by arbiter

Um, it's a card that MATCHES AMD's 290x and sells for the same price. And if it wasn't for the GTX 970's price, the 290x would sell for $450-500 right now. There is nothing wrong with the 970; just because the specs were revised doesn't mean it's magically a slower card.

June 1, 2015 | 06:11 AM - Posted by Anonymous (not verified)

Didn't you read the PCPer review? It had issues in 1 out of the 2 games they tested. At 1440p even SLI had issues.

June 3, 2015 | 03:13 PM - Posted by Anonymous (not verified)

You mean the one test where they used out of the ordinary resolution scaling to push the card to the absolute limit (with acceptable results)? Or the test of another game that had numerous issues with VRAM memory scaling on various Nvidia and AMD cards that required a .cfg edit to fix (that they likely never knew about)?

June 1, 2015 | 06:29 AM - Posted by Tomasz Golda (not verified)

Nice review. I have a question.

I have a 4790k @ 4.6GHz, 2x970 SLI @ 1530/7800, 16GB RAM and a 1440p monitor.

Do you think it is worthwhile to swap the 2x970 for a 980 Ti?

Maybe you could extend the review with 2x970 results; it would be a good comparison, especially since the price ceiling is the same.

June 1, 2015 | 06:44 AM - Posted by donut (not verified)

This is nvidia way of saying thanks for the cash Titan X owners. Ouch

June 1, 2015 | 09:37 AM - Posted by BBMan (not verified)

ROFL! Couldn't have said it better myself!

June 1, 2015 | 09:42 AM - Posted by BBMan (not verified)

AMD once again demonstrates that it is best at sucking down the power grid and contributing to Global Warming.

June 1, 2015 | 10:41 AM - Posted by Ramon Z (not verified)

Is it REALLY 6GB this time, or is there some funny business going on like with the 970???

June 1, 2015 | 05:31 PM - Posted by arbiter

Sad how amd fans keep going back to that any chance they get. Like they expect it to happen again.

June 1, 2015 | 07:50 PM - Posted by Anonymous (not verified)

Sadder is that you do the exact same thing but somehow don't find it sad.

June 1, 2015 | 05:31 PM - Posted by arbiter

double ;/

June 1, 2015 | 11:24 AM - Posted by ChangWang

Hey Ryan,
I didn't see any mention of Mantle on the Battlefield 4 page of the review. Did you get a chance to test it? I'm curious if it would have given the Radeons a performance boost. Especially ye olde 290X.

June 1, 2015 | 05:32 PM - Posted by arbiter

Mantle is dead. AMD killed off developing it. No reason to show results in it anymore.

June 1, 2015 | 11:57 AM - Posted by Doeboy (not verified)

I hope in your next test setup you replace Grid 2, Skyrim, and Bioshock Infinite with The Witcher 3, the upcoming Arkham Knight, and Project Cars. Those are the games that will make a high end GPU really sweat. Also, please add another graph that shows the average frame rate and minimum frame rate. It makes it much easier to read than a bunch of squiggly lines.

June 1, 2015 | 05:34 PM - Posted by arbiter

The reason you use older games is that the likelihood of a game getting an update that changes performance is almost nothing. That keeps a level playing field when testing new cards.

June 1, 2015 | 03:57 PM - Posted by IvanDDDD (not verified)

I have the intention to change my GPU. After carefully reading your article, I find it illogical to buy an nvidia 980 or 980 Ti.
The R9 290X is $300 less and has practically the same performance, and even more than one teraflop more compute than the nvidia 980.


June 1, 2015 | 08:05 PM - Posted by Anonymous (not verified)

Why would someone use 4x MSAA at 4K res in GTA 5? It's an fps killer and a vram hog, and at that res it is NOT needed.

You can get a high framerate at 4K with SLI 980s rather than just the Ti and Titan X. I know this because people do it; they just don't waste frames on MSAA.

The advanced graphics stuff is also excessive and just a waste of fps.

June 1, 2015 | 11:28 PM - Posted by Alvaro (not verified)

Hi Guys,

I have a Sli system with GTX 670+FTW 4GB I will big difference in performance if I replace the SLI with the 980ti?

Also, I read the specs of the GTX 670+FTW and the max digital resolution listed is 2560x1600, yet I actually run games at 4K resolution. Is that because of the SLI configuration I have?

Thank you in Advance!

June 1, 2015 | 11:30 PM - Posted by Alvaro (not verified)

Sorry for the first line I want to say:
I have a Sli system with GTX 670+FTW 4GB I will notice a big difference in performance if I replace the SLI configuration with a single 980ti?

June 2, 2015 | 12:12 AM - Posted by lightsworn (not verified)

The GTX 690 is a great card for those who bought it in its day. Given the frame buffer and the aging tech, how does this card stack up against these benchmarks? What are the trade-offs ("it's better in this fashion here but not there")?

June 2, 2015 | 07:17 AM - Posted by FrankJ00 (not verified)

Since the 980 Ti has two compute units disabled, does it have the same memory nuance as the 970?

June 2, 2015 | 09:55 AM - Posted by T7 (not verified)

Nice review. It is really funny that a few months ago everyone was yelling at me for buying a GPU with 4GB of memory; everyone was saying "who needs 4GB of memory".

8GB of graphics memory should already be the norm; 2160p really needs much more than 4GB, so 8GB should be the norm. Also, hopefully games will start loading all textures for each frame at once, instead of using the ugly texture streaming method that most games are using because of the 3GB limit on most cards (thanks Nvidia).

June 2, 2015 | 04:35 PM - Posted by Areus

Good review, as a 680 owner I quite appreciate the comparison to my particular card. Please include a poor shirty ancient 680 in 18 months when the gtx 1080 or whatever they name it is released.

June 2, 2015 | 05:40 PM - Posted by PhoneyVirus

Too bad TSMC didn't have better fabs than 28nm, which has been around well over two years. The EVGA GeForce GTX 670 seems to be holding up for the games I play, and it's 28nm. Nvidia has amazing graphics cards, and this can be said for the GeForce GTX 980 Ti; honestly, if the chance was there, I would have two of these in SLI.

Nonetheless I'll be keeping an eye open for Volta. Yes, I'll wait that long, especially with it taking on full 3D HBM ICs all running on 14nm silicon.


June 2, 2015 | 10:15 PM - Posted by Anonymous (not verified)

Still noticing that no review site seems to have the balls to put the 980 Ti up against Arma 3 maxed out, or maxed out at 4K.

September 12, 2015 | 05:29 PM - Posted by vuther316

My i7-4790k / GTX 980 Ti build gets about 38 FPS on the Altis benchmark (everything maxed, view distance set to 6500 and objects set to 1600). If you mostly play on small maps like Altis or Arma 2 maps like Takistan it should run a lot better, and even that benchmark is kind of a worst case scenario with lots of explosions and smoke effects.

June 3, 2015 | 05:00 PM - Posted by Marco Romero (not verified)

It is a pity that the new GTX 980 Ti doesn't have a backplate like the reference GTX 980 does.

June 4, 2015 | 01:24 PM - Posted by Anonymous (not verified)

FCAT??? Why not?

June 5, 2015 | 01:58 PM - Posted by Anonymous (not verified)

Is there any reason to wait until the non-reference versions come out? Or is it safe to pull the trigger now?
