The NVIDIA GeForce GTX 980 Ti 6GB Review - Matching TITAN X at $650

Author: Ryan Shrout
Manufacturer: NVIDIA

DX12 Support, GameWorks VR, G-Sync Updates

Even though the GTX 980 Ti doesn't offer any new features compared to the GTX Titan X, which is built around the same GM200 GPU, NVIDIA did touch base with us on a few topics of interest: the current news and hype surrounding DX12's new features, a potential improvement to VR rendering called multi-res shading, and updates to the G-Sync monitor ecosystem.

Some News on DirectX 12

First up, let's talk about DirectX 12. As we near the release of Windows 10 this summer you'll hear more about DX12 than you could ever imagine, with Intel, AMD and NVIDIA all banging the drum loud and clear. At this point, not everything can be divulged, but NVIDIA wanted to be sure we understood that there are two very different aspects of the DX12 story: better CPU utilization and efficiency, and new features that require new hardware. We have all heard the stories (ours included) about the backwards compatibility of DX12 for currently shipping GPUs, but that only accounts for the improved CPU utilization and efficiency portion of DX12. While that is critically important, there are indeed new features that require new GPU hardware to take advantage of, just like in all previous DirectX releases.

In terms of new features, there are currently two different feature levels: Feature Level 12.0 and Feature Level 12.1. Feature Level 12.0 supports new rendering technologies like tiled resources, bindless textures and typed UAV access. Feature Level 12.1 is more advanced: it includes the 12.0 features and adds conservative rasterization and raster ordered views.
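For readers who want to see what that looks like from the developer's side, here is a minimal sketch (ours, not NVIDIA's or from the article) of how a Direct3D 12 application can query the highest feature level an adapter exposes, assuming the Windows 10 SDK headers and d3d12.lib are available:

```cpp
// Minimal sketch: query the highest D3D12 feature level of the default adapter.
// Error handling is trimmed for clarity.
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main()
{
    ID3D12Device* device = nullptr;
    // Create a device at the minimum level we accept, then ask what it can really do.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 __uuidof(ID3D12Device),
                                 reinterpret_cast<void**>(&device))))
        return 1;

    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = 4;
    levels.pFeatureLevelsRequested = requested;

    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                              &levels, sizeof(levels))))
    {
        std::printf("Max supported feature level: 0x%X\n",
                    static_cast<unsigned>(levels.MaxSupportedFeatureLevel));
    }
    device->Release();
    return 0;
}
```

On a GM200 board this should report D3D_FEATURE_LEVEL_12_1, while older DX11-class hardware running DX12 will report a lower level even though it still benefits from the CPU-side improvements.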

NVIDIA says that GM200 supports another feature as well: volume tiled resources. This brings 3D textures into the tiled resources capability, using less memory by only storing the specific tiles of the texture required for rendering at that time. The tiled resources feature listed as a requirement for Feature Level 12.0 only needs to support 2D textures. With a 3D texture, though, a developer can store an additional dimension of data; NVIDIA gave the example of smoke, where the third dimension of the texture might indicate the pressure of the fluid, changing the color and the physics response based on that third dimension of data.
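In API terms, volume tiled resources correspond to the highest tiled resources tier Direct3D 12 exposes; Feature Level 12.0 itself only mandates 2D support. A short sketch of the capability check an engine might perform (the function name is ours, not an NVIDIA API):

```cpp
#include <d3d12.h>

// Sketch: does this device support volume (3D) tiled resources?
// D3D12_TILED_RESOURCES_TIER_3 is the tier that adds Texture3D support;
// the lower tiers cover 2D textures only.
bool SupportsVolumeTiledResources(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return false;
    // A GM200 board would be expected to report tier 3 here, letting an engine
    // stream only the bricks of a 3D smoke/pressure texture that actually get sampled.
    return options.TiledResourcesTier >= D3D12_TILED_RESOURCES_TIER_3;
}
```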

Conservative rasterization improves pixel coverage determination by moving away from specific sample points: a pixel registers as covered if any portion of it is covered by the geometry in question. This does come with some performance penalty, of course, but it improves coverage recognition for better image quality with new rendering techniques. NVIDIA gave the example of ray traced shadows that are free of aliasing artifacts.
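As a rough illustration of how a renderer opts into the feature (generic D3D12 plumbing, not NVIDIA's shadow technique), a pipeline state can enable conservative rasterization when the capability is reported:

```cpp
#include <d3d12.h>

// Sketch: turn on conservative rasterization for a pipeline state when the
// hardware exposes it (a Feature Level 12.1 capability). The surrounding PSO
// setup is omitted; the helper name is ours.
void EnableConservativeRasterIfAvailable(ID3D12Device* device,
                                         D3D12_GRAPHICS_PIPELINE_STATE_DESC& psoDesc)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options))) &&
        options.ConservativeRasterizationTier !=
            D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED)
    {
        // Any partially covered pixel now counts as covered, which is what makes
        // techniques like alias-free ray traced shadows practical.
        psoDesc.RasterizerState.ConservativeRaster =
            D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON;
    }
}
```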

There is still a lot yet to be shown or discussed about DX12, but we can confirm now that Maxwell will support DirectX 12 Feature Level 12.1 as well as the volume tiled resources capability. I'm sure we'll hear AMD's side of this story very soon as well, and hopefully some news from Microsoft this summer will help us better understand the overall direction of the API.

GameWorks VR

First introduced around the launch of the GeForce GTX 980 and GTX 970, NVIDIA's collection of software and hardware technologies focused on VR is now being branded GameWorks VR. Many of the technologies have been discussed before, including VR SLI, Direct Mode and Front Buffer Rendering, but NVIDIA is introducing a new option to improve VR performance called Multi-res Shading.

To understand it, a quick-and-dirty back story: a VR headset includes a screen and a pair of lenses. The lenses are used to make the screen appear further in the distance to help users properly focus on the image. Those lenses warp the image slightly, magnifying the center and compressing it around the edges to produce a fish-eye style effect and a wide viewing angle. To account for this optical warping, the GPU renders a fish-eye looking image of its own - one you have probably seen if you've looked at any footage recorded from an Oculus Rift. The rendered image is "warped" by the optical lens so that it appears correct to the end user.
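A purely conceptual sketch of that pre-warp (invented coefficients, not the Oculus SDK's or NVIDIA's actual shader): each displayed pixel samples the rendered eye buffer at a radially scaled coordinate, and because the scale grows with distance from the lens center, rendered pixels near the edge get squeezed together on the display.

```cpp
// Conceptual illustration only: radial "barrel" pre-warp with invented
// coefficients. Real headsets ship per-lens calibration data.
struct Vec2 { float x, y; };

// 'p' is a display-space position, normalized and centered on the lens axis.
// The warp pass samples the rendered eye buffer at the returned coordinate.
Vec2 WarpSampleCoord(Vec2 p)
{
    const float k1 = 0.22f, k2 = 0.24f;      // hypothetical lens coefficients
    const float r2 = p.x * p.x + p.y * p.y;  // squared distance from center
    const float s  = 1.0f + k1 * r2 + k2 * r2 * r2;
    // s rises toward the edges, so neighboring display pixels step farther
    // apart in the source image: rendered edge detail is effectively thrown away.
    return { p.x * s, p.y * s };
}
```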

The problem is that this warping in the rendering pipeline means resolution is effectively lost around the edges of the screen. Though the middle retains a near 1:1 pixel ratio, the edges do not, and you can see in NVIDIA's diagrams that much of the pixel data is lost when the image is "warped" and "compressed" to fit the required shape for the VR displays. This makes rendering VR games less efficient than rendering for traditional 2D monitors, a disadvantage with real consequences on a platform that demands the fastest possible performance and the lowest possible latency.

NVIDIA's solution is Multi-res Shading, which divides the image the game engine wants to display into nine viewports. The center viewport, the one the user will be focused on 99% of the time and the one that loses no pixels to the warping, remains at full resolution and maintains the detail required for a great gaming experience. The surrounding viewports, however, can be sized to more closely match the final warped resolution they will be displayed at in the VR headset. The image still has to go through a final warp to the correct shape for the VR lens, but the amount of data "lost" along the edges is minimized and thus performance can be improved for the gamer.
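To make the idea concrete, here is a hedged sketch (our own construction, not the GameWorks VR SDK) of how the 3x3 viewport grid might be laid out, with a made-up outerScale parameter standing in for the developer-tunable reduction of the outer ring:

```cpp
#include <array>
#include <d3d12.h>

// Conceptual sketch of the 3x3 viewport split behind Multi-res Shading, not
// NVIDIA's implementation. The center cell keeps full pixel density; the outer
// ring is rendered at a reduced scale (outerScale is a hypothetical knob).
std::array<D3D12_VIEWPORT, 9> BuildMultiResViewports(float width, float height,
                                                     float centerFrac = 0.6f,
                                                     float outerScale = 0.5f)
{
    // Column/row sizes of the 3x3 grid in the output render target. The outer
    // cells are shrunk by outerScale, which is where the pixel savings come from.
    const float cw[3] = { width  * (1 - centerFrac) * 0.5f * outerScale,
                          width  * centerFrac,
                          width  * (1 - centerFrac) * 0.5f * outerScale };
    const float ch[3] = { height * (1 - centerFrac) * 0.5f * outerScale,
                          height * centerFrac,
                          height * (1 - centerFrac) * 0.5f * outerScale };

    std::array<D3D12_VIEWPORT, 9> vps{};
    float y = 0.0f;
    for (int r = 0; r < 3; ++r)
    {
        float x = 0.0f;
        for (int c = 0; c < 3; ++c)
        {
            D3D12_VIEWPORT& vp = vps[r * 3 + c];
            vp.TopLeftX = x;     vp.TopLeftY = y;
            vp.Width    = cw[c]; vp.Height   = ch[r];
            vp.MinDepth = 0.0f;  vp.MaxDepth = 1.0f;
            x += cw[c];
        }
        y += ch[r];
    }
    return vps;   // e.g. commandList->RSSetViewports(9, vps.data());
}
```

The geometry still has to be projected into each of the nine cells, which is the step Maxwell's multi-projection hardware accelerates.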

Maxwell GPUs have a multi-projection capability built into them that accelerates the distribution of geometry to the different viewports in Multi-res Shading, and NVIDIA claims it can offer a 1.3x - 2.0x improvement in pixel shader performance (not necessarily frame rates). The projection can be adjusted with different levels of detail, so developers can decide how much the rendered resolution is reduced based on their performance goals and image quality requirements. I saw a demo of the technology at work during our GTX 980 Ti briefing, and even when specifically looking for the lower resolution and detail level along the edges of the render, it was nearly impossible to spot the difference.

This is a technology that requires game engine implementation, so don't go thinking you'll be able to just flip a checkbox in the control panel for this. It will be packaged up in the GameWorks SDK, and we'll keep an eye out for game engines and developers that attempt to integrate it so we can measure the performance gains and image quality for ourselves.

G-Sync Updates: New Monitors and Overdrive Details

I think anyone reading this review, or anyone who frequents PC Perspective, will already know the basics of G-Sync and what it adds to the smoothness of PC gaming. It's definitive, it's dramatic and it's available now in several different variations.

One area that has seen a lot of debate recently between G-Sync and AMD's competing FreeSync is overdrive. When Allyn and I first tested the initial wave of FreeSync monitors to make their way to the office, we immediately noticed some distinctive ghosting on the screens when operating in the monitors' variable refresh modes. At the time, very little was known about overdrive with variable refresh displays, and though I knew that NVIDIA was doing something with G-Sync to help with ghosting, I didn't know exactly what or to what extent. During our briefing with Tom Petersen on the GTX 980 Ti, he was able to share some more details.

At its most basic, monitor overdrive is the process of driving a pixel to a higher or lower voltage than the target value actually requires in order to get that pixel to your desired state (color and brightness) faster. This moves pixels from their previous color/brightness to the new color/brightness more quickly, thus reducing ghosting. With traditional monitors and fixed refresh rates, panel vendors had perfected the ability to time the twisting and untwisting of the LCD crystals to account for overdrive.

But with variable refresh rates that all gets turned on its head; the rate at which you apply power to a pixel at a 40 Hz refresh is very different than at 80 Hz, for example, if you are trying to produce a picture without ghosting and without inverse ghosting. NVIDIA claims that because the G-Sync module in the desktop monitors is tuned specifically to each panel, it is able to dynamically apply overdrive based on the variable frame rate itself. Because the module knows and can estimate the time the next frame will appear (at the most basic level, guess whatever the previous frame time was) it can more intelligently apply voltage to the panel to reduce ghosting and give users the best possible picture. Petersen said that it appeared none of the FreeSync monitors were doing this and that is why we see more ghosting at variable refresh rates on those displays.
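To illustrate the idea, and only the idea (this is an invented sketch, not the G-Sync module's firmware or any NVIDIA code), the logic amounts to predicting the next frame time from the previous one and scaling the overdrive push for each pixel transition accordingly:

```cpp
#include <algorithm>
#include <cstdint>

// Purely conceptual: all names and numbers below are made up for illustration.
struct OverdriveLUT
{
    // Hypothetical lookup: how hard to drive a pixel going from 'from' to 'to'
    // when the frame is expected to be held for 'expectedHoldMs'.
    uint8_t Lookup(uint8_t from, uint8_t to, float expectedHoldMs) const
    {
        const float push = (to - from) * BaseGain(expectedHoldMs);
        return static_cast<uint8_t>(std::clamp(to + push, 0.0f, 255.0f));
    }

private:
    // Longer expected hold times (lower refresh rates) need a gentler push to
    // avoid overshooting into inverse ghosting; shorter holds can take more.
    static float BaseGain(float expectedHoldMs)
    {
        const float gainAt144Hz = 0.35f, gainAt40Hz = 0.10f;   // made-up values
        const float t = std::clamp((expectedHoldMs - 6.9f) / (25.0f - 6.9f), 0.0f, 1.0f);
        return gainAt144Hz + t * (gainAt40Hz - gainAt144Hz);
    }
};

// The "estimator" at its most basic, per the article: assume the next frame
// time equals the previous one.
inline float PredictNextFrameMs(float previousFrameMs) { return previousFrameMs; }
```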

With the NVIDIA 352.90 driver the company will also be adding a couple of requested features to G-Sync: windowed mode and V-Sync options above the display's maximum refresh rate. For those gamers that like to use windowed mode and borderless windowed mode to play with other things going on on their display, or even on other monitors, NVIDIA has found a way to work with the DWM (Desktop Window Manager) to allow non-full-screen games to operate in a variable refresh mode. NVIDIA is likely doing some behind-the-scenes trickery to get this to work properly inside the Windows compositing engine, but we saw it in action and VRR operation is controlled by the application in focus. If you move from the game to a browser, for example, you'll essentially return to the static refresh rate provided by the Windows display model.

Two new options found their way into the control panel as well: G-Sync will now let you set V-Sync on or off above the maximum refresh rate of the panel (hurrah for peer pressure!) and you can now enable ULMB directly. This change to the V-Sync capability only applies at the high end of the monitor's refresh range and will let a user disable V-Sync (and thus introduce horizontal tearing) in order to gain the biggest advantage possible with the lowest latency the system can muster. This basically matches what AMD has done with FreeSync, though NVIDIA's G-Sync still has a superior implementation of low frame rate handling, as we demonstrated here.

Oh, and NVIDIA G-Sync Mobile is now a thing! Just as we showed you back in January with a leaked driver and an ASUS notebook, module-less G-Sync is a reality and will be shipping this summer. Check out this news story for more details on the mobile variant of G-Sync.

With those educational tidbits out of the way, perhaps more interesting is that new G-Sync monitors are incoming with new aspect ratios and different specifications than we have seen before.

Acer has four new displays on the horizon with Asus adding three more to the mix. In that group are a pair of 4K IPS 60 Hz G-Sync monitors, a 34-in 3440x1440 IPS screen with a 75 Hz maximum refresh as well as an updated ROG Swift with a 2560x1440 resolution, 144 Hz IPS screen. I am really eager to get my hands on the Acer X34 with the curved 21:9 screen and 75 Hz refresh - that could be the pinnacle of gaming displays for the rest of 2015.

May 31, 2015 | 06:03 PM - Posted by Anonymous (not verified)

Might want to fix the typo:

"..the GTX 780 Ti finds itself in a unique spot in the GeForce lineup".

I think it's supposed to be the 980 :)

May 31, 2015 | 06:08 PM - Posted by Ryan Shrout

Fixed!

May 31, 2015 | 07:30 PM - Posted by Anonymous (not verified)

Ryan,

You do realize The Witcher 3 and Project Cars had GameWorks. Saying their driver development is miles ahead because of that is rather silly and irresponsible, especially given how the developer statements and patches for both have panned out.

May 31, 2015 | 08:10 PM - Posted by Ryan Shrout

Those aren't really the only two examples though. GTA V, for example.

I think we are going to dig into this more with some vendor interviews in the near future.

May 31, 2015 | 08:21 PM - Posted by Anymouse (not verified)

The explanation you gave referred to having optimized drivers on release for GameWorks-backed titles. Silly to make such a statement given the obvious nature of the business.

Same kind of silliness would be to imply that AMD having a driver out on release for an AMD Gaming Evolved-backed title translates into being miles ahead.

I hope you ask how come console optimization on GCN doesn't translate over, given how much time is spent on Xbox and PS4 development.

I also hope you don't get a PR spin like usual from either side.

May 31, 2015 | 10:14 PM - Posted by Anonymous (not verified)

The big difference is that nvidia had day-0 drivers for Gaming Evolved titles, while AMD didn't have them at the launch of TWIMTBP titles.

Hell, AMD often didn't even have proper drivers at the launch of many Gaming Evolved games and the like.

So don't be silly and stop seeing "silliness" in Ryan's review because he stated a fact:

When it comes to drivers that support games at launch, nvidia is the best.

Ex:

Battlefield 3 and 4 (they aren't literally GE titles, but they're more than that: a Mantle version, 8 million dollars, Johanson doing PR for AMD, etc.), BioShock Infinite, Civ: Beyond Earth. And many more.

June 1, 2015 | 01:20 AM - Posted by chizow (not verified)

Exactly, Nvidia is going to release Day 0 or Day 1 drivers for ANY big game launches regardless of branding because in the end, they know their users demand this level of support.

It is almost as if AMD is spiting their own customers in an attempt to make Nvidia/GameWorks look bad, when in the end, it just hurts their own customers.

June 1, 2015 | 01:29 AM - Posted by Anonymous (not verified)

Now that's what you call nvidiot shilling...

June 1, 2015 | 02:28 AM - Posted by hosko

I think you need to look in the mirror, mate. It's not Nvidia's fault that AMD didn't have a driver out in time for Project Cars' release.

This is from Slightly Mad Studios

"Project CARS is not a GameWorks product. We have a good working relationship with nVidia, as we do with AMD, but we have our own render technology which covers everything we need."

"NVidia are not "sponsors" of the project. The company has not received, and would not expect, financial assistance from third party hardware companies."

“We’ve provided AMD with 20 keys for game testing as they work on the driver side. But you only have to look at the lesser hardware in the consoles to see how optimised we are on AMD based chips.

We’re reaching out to AMD with all of our efforts. We’ve provided them 20 keys as I say. They were invited to work with us for years, looking through company mails the last I can see they (AMD) talked to us was October of last year.

Categorically, Nvidia have not paid us a penny. They have though been very forthcoming with support and co-marketing work at their instigation. We’ve had emails back and forth with them yesterday also. I reiterate that this is mainly a driver issue but we’ll obviously do anything we can from our side.”

So AMD can't get a driver out for the release of the game. Nvidia spends more money on driver development for their game ready drivers.

This is a reason to buy an Nvidia card. Reviewers should mention this. Fanboys shouldn't go crazy at reviewers for mentioning it.

So if AMD didn't get advance notice how come they already have a new driver release available? The issue is it should have been ready the day before release date.

June 1, 2015 | 03:02 AM - Posted by Anonymous (not verified)

Nvidia don't need to "pay" directly... Dev support & free marketing/kit are enough. Having code and drivers tuned together is what nvidia banks on. Indeed they flagged their fear of AMD doing exactly this in an article by Anand a number of years ago. AMD didn't take this step, but nvidia is paranoid (like Intel), so they did. Can't blame them with AMD locking up gaming consoles. AMD only had access to the Cars RTM a month before drop. Nvidia code lockout deals & AMD not spending on dev outreach drive the current situation.

June 1, 2015 | 03:43 AM - Posted by hosko

So why doesn't AMD offer similar support? Why was their last correspondence to a game developer last October? AMD for a long time has been behind on driver support. That's why they pushed mantle so hard. It would move the burden away from AMD and on to the developers. If you give developers access to the bare metal you don't need specific drivers.

The only issue is we are not there yet, and as a result AMD's poor driver support is being shown up. nVidia spends a boat load on their drivers and GeForce Experience. In the free market you get rewarded for effort and punished if you don't keep up with your competition. Sadly the lack of competition in graphics means this is only going to happen more and more often.

Pcars was delayed, so of course they only got the RTM just before release, but if you read the quotes, Slightly Mad Studios gave access to the game as they were developing it; however, AMD didn't communicate back to the studio.

June 1, 2015 | 08:54 AM - Posted by Luthair

Lack of money one assumes.

The developers (of the game) still hold responsibility for the performance of their game; blaming others is a cop-out. To me as a non-game developer it seems more than a little ridiculous that every game (written to a supposed standard) requires driver customization.

June 1, 2015 | 11:49 AM - Posted by Anonymous (not verified)

"NVidia are not "sponsors" of the project. The company has not received, and would not expect, financial assistance from third party hardware companies."

With all the Nvidia logos plastered over every flat surface they could find, I have a really, really hard time believing this claim.

June 1, 2015 | 03:34 PM - Posted by Anonymous (not verified)

ThIs!! ^^

June 2, 2015 | 04:33 AM - Posted by Anonymous (not verified)

This is PCPerspective we take Nvidia as Gods word.

June 7, 2015 | 02:20 AM - Posted by DaKrawnik

lol Plastering your logo on something because you're a sponsor and doing it as advertising because you did some work for them are two different things. It's no different than the guys doing your roof putting a sign with their company logo and phone number on your lawn.

June 1, 2015 | 04:27 AM - Posted by Aysell

The recent day 1 driver updates from nvidia have been awful; the Witcher and Project Cars updates are among them. While my friends on AMD cards took a small performance hit due to idiotic nvidia features, they could at least play the game and not have it crash every 10 min. A significant number of people needed to roll back just to be able to play. A problem I did not have on my former 7870; I'm using a 970 now.

Having a day 1 driver ready counts for jack shit if it works like crap, something AMD seems to have learned, and dissing them for it while giving nvidia a free pass on shit drivers does not seem like an objective review... ijs...

June 1, 2015 | 11:50 AM - Posted by Anonymous (not verified)

You're mental. EVERYBODY knows that AMD drivers always always always suck, and Nvidia drivers come gold-plated.

(/s)

June 1, 2015 | 02:32 AM - Posted by hosko

ProjectCars isn't Gameworks. You aren't going to develop for nVidia if you are releasing on consoles with AMD hardware. Why does Project Cars play perfectly well on the consoles but not on AMD based PC? Because of the drivers?

June 1, 2015 | 03:30 AM - Posted by Anonymous (not verified)

Perhaps because the console port is tuned appropriately for the fixed hardware & the PC port had nvidia input to redress the performance balance of a straight from console GCN port? PhysX (cpu) is used for some physics calcs in the PC port. Latest AMD drivers only improve performance by 10% & Win10 by 25% apparently. This is what happens when architectures diverge. Nvidia basically made Maxwell a great gaming card for now, all compute capabilities are reduced & DP bye-bye. What will be the outrage if AMD does the same in corresponding games?

June 4, 2015 | 07:10 AM - Posted by Ry (not verified)

"the fact that you can get identical performacne for $350 less is a great thing"

I don't want acne!

May 31, 2015 | 06:28 PM - Posted by arbiter

Well, the 980 Ti's performance is about where it was expected. The question now is whether AMD's Radeon Fury is going to be fast enough to justify its price if the rumored $850 is true.

May 31, 2015 | 07:15 PM - Posted by Anonymous (not verified)

The compute performance alone on the AMD Fury will justify the price to the prosumer at 8.6 TFLOPs. Titan Z is the closest single precision at $3000 a dual gpu with 8 TFLOPs & lousy double precision. If it keeps its double precision intact unlike Maxwell based cards it will destroy anything they can offer until next year.

May 31, 2015 | 07:29 PM - Posted by arbiter

For gamers compute performance means absolutely dick. If it meant anything to gamers then AMD would have been massively ahead since the 7000 series cards.

May 31, 2015 | 08:17 PM - Posted by remon (not verified)

Oh the irony.

Anyway, what's keeping back compute on games is Nvidia and their crappy compute cards. There's a reason TressFX is faster than Hairworks even on Nvidia cards, and that is because it's based on directcompute.

May 31, 2015 | 09:00 PM - Posted by arbiter

Maybe AMD needs to get better at DX11 tessellation, which is A STANDARD. Funny how AMD fans want nvidia to build things on standards, and tessellation is exactly the standard HairWorks uses. Must hurt pretty bad knowing AMD's top dog 290x card gets BEAT by a 750ti in tessellation performance.

June 1, 2015 | 01:13 AM - Posted by remon (not verified)

Well, they got better tessellation, with Tonga.

June 15, 2015 | 03:21 AM - Posted by SiliconDoc (not verified)

Kinda sad they got better tessellation with Tonga when I had to listen to and read the bloviating AMD fan gasbags blabbering that AMD had tessellation hardware in the HD2900 and it shows how awesome they are and how far ahead they are in tech, and blah blah blah blah blah!

That turned into - for the past 3 or 4 YEARS... "wahhhh wahhh nvidia is forcing tessellation on blank walls and along sidewalks and no one needs 16x tess or more ever!"

Thus, the stupendous hardware AMD junkie wacko went from insanely incoherent and inconsequential hardware braggadocio to blubbering and whining in victimhood, again, when their holier than thou AMD epic failed.

June 1, 2015 | 01:45 AM - Posted by Anonymous (not verified)

Hairworks is tuned to nvidia's front end geometry strength on higher-end GPUs, but is a dumb way of implementing hair. It basically occupies all pipeline stages: GS-HS-DS-VS. >64x amplification is pointless when at the sub-pixel level. 8xMSAA is the icing on top... So no, even single/dual primitive (tris) pipeline GPUs from nvidia also suck. The cost to Maxwell is static scheduling - read dumb...

It will be interesting to re-visit this stuff with DX12, when nvidia's better DX11 multi-threaded performance (2 threads/cycle) won't matter as their command processors are limited compared with AMD's supposedly superior command rate of >4 threads/cycle. Maxwell's current static model may indeed hurt moving forward, but GCN ain't no panacea either and makes a different set of compromises. Time will tell whether DX12 & compute will hurt Nvidia buyers.

May 31, 2015 | 08:18 PM - Posted by remon (not verified)

* The irony of posting this in a Titan review.

May 31, 2015 | 09:16 PM - Posted by pdjblum

Always have a hard time with "irony." Can you please explain the irony here? Thanks much in advance.

June 1, 2015 | 01:22 AM - Posted by chizow (not verified)

Most likely not, and I think this is why Nvidia shot first at $650. They've basically pre-emptively undercut Fury's pricing. It will have to beat 980Ti/Titan X by 15-20% to keep that $850 price tag, and if it is +/-5% it is going to be $500-$600 max. Certainly a far cry from $850 aspirations.

June 1, 2015 | 06:34 AM - Posted by donut (not verified)

Howz that Titan X going for you? Ouch that's gotta hurt.

May 31, 2015 | 06:29 PM - Posted by Dan (not verified)

Just a suggestion, but it would be helpful if, when you hover your mouse over a chart, a description popped up of what it is or what is being measured (FPS by Percentile, Frame Variance, Frame Times, etc.). I can't be the only one who really has no idea what most of the charts are depicting.

Love the site and content, fantastic article. Keep up the great work!

May 31, 2015 | 08:08 PM - Posted by Ryan Shrout

Do you mean a more detailed description? Because the graph does indicate Frame Variance, etc. in the title of it.

May 31, 2015 | 10:50 PM - Posted by Dan (not verified)

Yes exactly: what frame variance is and what measurement is being taken (and the same for the other graphs). I know you have an entire article on this but a quick sentence recap would be nice.

May 31, 2015 | 06:30 PM - Posted by Crazycanukk

SOLD !!

This card is going to find a new home in my gaming PC as soon as a decently overclocked version hits retailers. My stock 780 at 1440p is showing its limits with Witcher 3 and I'm sacrificing a lot of eye candy just to keep a reasonable fps rate. The G-Sync monitor helps but I must admit I am an eye candy addict and the 780 can't deliver at 1440p. I will move the 780 into another PC at the cottage for 1080p gaming and the 760 there to my family room PC at home.

Debated getting the Titan X but just couldn't justify the cost for single monitor 1440p gaming... this I can justify, and I think I can get 2 years out of it like the 780 before I upgrade and shuffle cards again :)

May 31, 2015 | 06:54 PM - Posted by Joe (not verified)

Waiting for Gigabyte Windforce G1 Gaming edition.

May 31, 2015 | 07:02 PM - Posted by siriq111 (not verified)

Well, almost bang on the buck. This price also tells me about the other side as well.

May 31, 2015 | 07:02 PM - Posted by Anonymous (not verified)

How long until someone finds out it's missing something, like with the 970?

May 31, 2015 | 07:22 PM - Posted by Crazycanukk

The 970 wasn't missing anything at all... it was a different way of utilizing the memory and a different memory architecture design. It was marketing not being clear on the packaging and advertising about that change, but the consumer didn't miss out on anything and got the full 4 gigs when they needed it.

June 1, 2015 | 01:55 AM - Posted by nvidiot (not verified)

Ummm, it was missing TMUs & L2 cache, which resulted in the 3.5GB+0.5GB memory partitions...

[b]Ryan do we have definitive confirmation that all TMUs & cache are retained with the removal of 2 SMMs?[/b] The block diagram is only marketing as TMUs & L2 (especially) are tied to SMM clusters.

June 1, 2015 | 02:45 AM - Posted by nvidiot (not verified)

Let me answer my own question...

Excising 2xSMMs @ 4 pixels/cycle = 88 ROPs for single cycle operation even though 96 ROPs are specced. TMUs are down to 176 from 192, but the L2 partition & memory controllers are apparently unaffected, so performance/clock will be worst case ~8% lower than Titan. With a bit more power headroom, the 980ti will trade the performance lead with Titan depending on workload bottleneck. Not too bad for 6GB & $350 less.

June 1, 2015 | 02:58 AM - Posted by Anonymous (not verified)

I am waiting to see tests to determine specifically what the situation is. It would be stupid for them to misrepresent the actual memory bandwidth again, so I am leaning toward it being correct and equivalent to the Titan X. There is also the fact that the 980 Ti performs almost exactly the same as a Titan X, even though it has less hardware. This implies that they have the same bandwidth and they are also memory bound. They may get a good boost out of memory overclocking, although I assume that GDDR5 has been pushed about as far as it will go.

May 31, 2015 | 07:07 PM - Posted by RadioActiveLobster

I currently have 2 Gigabyte Windforce G1 Gaming 980's in SLI.

That should last me a while (gaming at 1440p on a 144Hz G-Sync panel).

I'm waiting to see what Pascal brings to the table.

May 31, 2015 | 07:10 PM - Posted by billb79

Yes, indeed..........I need one of these!

May 31, 2015 | 08:48 PM - Posted by Anonymous (not verified)

New GPU releases are about as exciting as CPU releases have been. Gone are the days when we saw dramatic performance increases with each new generation.

June 2, 2015 | 02:25 PM - Posted by amadsilentthirst

I haven't been moved to buy a GPU since the 8800GTX in 2007, for £340

Since then it seems everything is just a rebrand of a rebrand with no new advancements, well unless you want to spend a thousand pounds that is.

And all this amd/nvidia is getting real old.

May 31, 2015 | 09:05 PM - Posted by funandjam

zzzzzzzzzZZZZZZZZZzzzzzzzzzzzZZZZZZZZZZZzzzzzzzzzzzzzz

May 31, 2015 | 09:23 PM - Posted by pdjblum

If this was a preemptive move by Nvidia anticipating the Fiji release, I wonder what they know. Impressive card. Just sucks that flagships these days are so expensive.

May 31, 2015 | 09:36 PM - Posted by Anonymous (not verified)

Maybe they know something.

I certainly don't recall a GPU review on a Sunday.

May 31, 2015 | 11:58 PM - Posted by Anonymous (not verified)

Hmm, the only thing I remember to ever come out on a Sunday was nvidia's response to the 970 debacle.

Now with this card reveal behind us... On to Fiji!

May 31, 2015 | 11:59 PM - Posted by Anonymous (not verified)

Computex starts at Sunday 6pm PST. They want to demo this thing on the show floor.

May 31, 2015 | 09:45 PM - Posted by Anonymous (not verified)

Ordering 1 asap when available in the shops here in the UK!
My acer predator xb270hu 2560x1440 ips 144hz gsync monitor needs more GPU power!

May 31, 2015 | 10:13 PM - Posted by J Nevins (not verified)

Guys, it's not 1999, please update your game testing suite. Why not Witcher 3? Or GTA5, or other modern games?

May 31, 2015 | 10:36 PM - Posted by Anonymous (not verified)

Ryan,

Why on the R9 295X2 did it appear to have larger frame time variance in games at 1440p than at 4k?

June 1, 2015 | 12:14 AM - Posted by Ryan Shrout

Likely because there is less of a GPU bottleneck at the lower resolution and thus there is more push back on the CPU, where the multi-GPU management takes place.

May 31, 2015 | 11:18 PM - Posted by StephanS

It's interesting to see the 290x being so close to the GTX 980.
What I don't get is how both cards have very similar gaming performance and the same amount of memory, but one retails for around $280 and the other for $500?

Side note: the gap gets small to non-existent at 4K between those two cards, suggesting that the performance difference at lower resolutions might be driver related.

DX12 will help equalize the driver situation, and it would be hilarious if a $280 card ends up outperforming a $500 card, even if it's just by a few frames.

June 1, 2015 | 01:19 AM - Posted by remon (not verified)

And one is 1.5 years old.

June 1, 2015 | 11:59 AM - Posted by Anonymous (not verified)

Still stuck at 28 nm, so we haven't been getting the same generational gap we did with previous generations.

June 1, 2015 | 12:41 AM - Posted by Shambles (not verified)

It's hard to get excited about a $650 GPU when the price/performance falls off a cliff after you get past $350. With how well SLI works these days I'd be buying 2x970's long before a 980 Ti.

June 3, 2015 | 08:15 PM - Posted by Anonymous (not verified)

The NVIDIA card takes way less energy? No?

June 1, 2015 | 01:03 AM - Posted by Mandrake

Very nice review. I considered the acquisition of an awesome 980 Ti card. Turns out Nvidia's new Kepler performance driver killed off that idea. My single 780 @ 1200 - 1250MHz in game in The Witcher 3 is running beautifully. At 2560x1440 in the high 40s or 50s fps with most details maxed out, other than hairworks.

Win. :)

June 1, 2015 | 01:29 AM - Posted by Ng Khan Mein (not verified)

Ryan, why not test a scenario where the 6GB of VRAM really gets fully utilized? Thanks.

June 1, 2015 | 01:32 AM - Posted by Eric Lin (not verified)

As the other guy mentioned, I'm curious as to why the 290x is so close to the 980 in terms of performance.

I would also much rather see a modded Skyrim testing than vanilla Skyrim.

June 1, 2015 | 01:52 AM - Posted by Anonymous (not verified)

What about memory segmentation? Does 980Ti have memory split into faster and slower pool just like 970 with its 3.5+0.5 gigs?

June 1, 2015 | 05:28 PM - Posted by arbiter

There is no problem, all 6GB are accessible at the same speed. Kinda stupid that people would think that would happen 2 times in a row without being told.

June 1, 2015 | 02:56 AM - Posted by Anonymous (not verified)

Saying that the need for more than 6GB of VRAM is a ways off, when GTA V uses 5 seems kind of optimistic.
Also, I didn't think the 295x2 performed that well. Sure, it's a power hog and multi GPU setups have... spotty performance, but look at those averages!

June 1, 2015 | 05:13 AM - Posted by Anonymous (not verified)

I suspect that, going forward, memory consumption will actually go down, not up. There are a lot of new features that could allow much more efficient management of resources. The 295x2 does quite well in my opinion. Right at the moment, I would say it is still the best bang for your buck in that price range, unless you play an older game or one that has not been well optimized for multiple GPUs yet. Anyway, who would buy either a 295x2 or a 980 Ti right now with AMD's new part coming soon?

June 1, 2015 | 05:27 PM - Posted by Anonymous (not verified)

'going forward, memory consumption will go down'
As I said. It seems optimistic.

June 1, 2015 | 11:22 PM - Posted by Anonymous (not verified)

DX12 is set up to use a larger number of smaller draw calls, which should reduce the amount of memory needed at any single instant. Also, a lot is being done with tiling and compression to avoid loading unnecessary resources. This also ties into GPUs making use of virtual memory and unified memory which allows the gpu to load more precisely what is needed (the actual working set).

As AMD has stated, current implementations are very wasteful with memory. There may be some situations where the 4 GB on AMD's upcoming parts is a bottleneck, but it is a trade-off they had to make. The 980 Ti's performance is almost identical to the Titan X, even with less hardware, but the same bandwidth. This implies that it is bandwidth limited. Fiji has even more powerful hardware, so it would be held back severely by GDDR5. AMD has probably done a lot of work to decrease memory usage, but there may be some current games which will push the limits at extreme settings. I don't think this will be much of a problem though.

June 1, 2015 | 03:40 AM - Posted by DaveSimonH

Do you think the 980 Ti would have had as good performance as it does, if the 390X from AMD wasn't just around the corner?

June 1, 2015 | 05:43 AM - Posted by collie

I think the max specs of this architecture (this card) were worked out a long time before the chips were even fabricated, BUT I am super surprised they didn't wait till AFTER the new AMD flagship. Makes me worry that they (N) already know it (A) can't compete.

June 2, 2015 | 12:54 AM - Posted by Anonymous (not verified)

I am wondering if AMD will manage to launch more than one HBM part. I would expect a full part and at least one salvaged part with cut down units and/or memory interface. It is possible that the cut down part would compete with the 980 Ti and the full part will be higher priced.

June 1, 2015 | 05:01 AM - Posted by Anonymous (not verified)

I have to wonder if they had originally planned on a narrower memory interface or something. The fact that they are essentially selling a Titan X for a much cheaper price seems to imply that AMD may have an incredible product. This release sounds like Nvidia trying to move as many as possible before Fiji comes out. Contrary to the "I'm going to buy this now" trolls, it would be pretty stupid to buy now with a major AMD launch so soon.

June 1, 2015 | 05:22 AM - Posted by Anonymous (not verified)

I feel like Titan X has been used very effectively as premium decoy pricing. Don't get me wrong I'm not anti-Nvidia and I will probably pick-up this card myself, but comparing this card value wise to Titan X seems to be falling into the trap a little. To me Titan X is just an insane reference point.

June 1, 2015 | 05:33 AM - Posted by Esa (not verified)

I still don't understand how NVIDIA can sell the flawed 970 at that price. They should've replaced it with the 980 and put the Ti above that. They owe us that for mucking up the 970.

June 1, 2015 | 05:59 AM - Posted by arbiter

Um, it's a card that MATCHES AMD's 290x and sells for the same price. And if it wasn't for the gtx970's price the 290x would sell for $450-500 right now. There is nothing wrong with the 970. Just because the specs were revised doesn't mean it's magically a slower card.

June 1, 2015 | 06:11 AM - Posted by Anonymous (not verified)

Didn't you read the PCPer review? It has issues in 1 out of the 2 games it was tested in. At 1440p even SLI had issues.

June 3, 2015 | 03:13 PM - Posted by Anonymous (not verified)

You mean the one test where they used out of the ordinary resolution scaling to push the card to the absolute limit (with acceptable results)? Or the test of another game that had numerous issues with VRAM memory scaling on various Nvidia and AMD cards that required a .cfg edit to fix (that they likely never knew about)?

June 1, 2015 | 06:29 AM - Posted by Tomasz Golda (not verified)

Nice review. I have a question.

I have a 4790k @ 4.6GHz, 2x970 SLI @ 1530/7800, 16GB of RAM and a 1440p monitor.

Do you think it is worthwhile to swap the 2x970 for a 980 Ti?

Maybe you could extend the review with 2x970 SLI results. It would be a good comparison, especially since the total price is about the same.

June 1, 2015 | 06:44 AM - Posted by donut (not verified)

This is nvidia's way of saying thanks for the cash, Titan X owners. Ouch.

June 1, 2015 | 09:37 AM - Posted by BBMan (not verified)

ROFL! Couldn't have said it better myself!

June 1, 2015 | 09:42 AM - Posted by BBMan (not verified)

AMD once again demonstrates that it is best at sucking down the power grid and contributing to Global Warming.

June 1, 2015 | 10:41 AM - Posted by Ramon Z (not verified)

Is it REALLY 6GB this time, or is there some funny business going on like with the 970???

June 1, 2015 | 05:31 PM - Posted by arbiter

Sad how amd fans keep going back to that any chance they get. Like they expect it to happen again.

June 1, 2015 | 07:50 PM - Posted by Anonymous (not verified)

Sadder is that you do the exact same thing but somehow don't find it sad.

June 1, 2015 | 05:31 PM - Posted by arbiter

double ;/

June 1, 2015 | 11:24 AM - Posted by ChangWang

Hey Ryan,
I didn't see any mention of Mantle on the Battlefield 4 page of the review. Did you get a chance to test it? I'm curious if it would have given the Radeons a performance boost. Especially ye olde 290X.

June 1, 2015 | 05:32 PM - Posted by arbiter

Mantle is dead. AMD killed off developing it. No reason to show results in it anymore.

June 1, 2015 | 11:57 AM - Posted by Doeboy (not verified)

I hope in your next test setup you replace Grid 2, Skyrim and Bioshock Infinite with The Witcher 3, the upcoming Arkham Knight and Project Cars. Those are the games that will make a high end GPU really sweat. Also, please add another graph that shows the average frame rate and minimum frame rate. It makes it much easier to read than a bunch of squiggly lines.

June 1, 2015 | 05:34 PM - Posted by arbiter

The reason you use older games is that the likelihood a game will get an update that changes performance is almost nothing. So it keeps a level playing field when testing new cards.

June 1, 2015 | 03:57 PM - Posted by IvanDDDD (not verified)

I have the intention to change my GPU. After carefully reading your article, I find it illogical to buy the nvidia 980 or 980ti.
The R9 290x is $300 less and has practically the same performance, even more than one extra teraflop compared to the nvidia 980.

June 1, 2015 | 08:05 PM - Posted by Anonymous (not verified)

Why would someone use MSAA x4 at 4k res in GTA 5? It's a fps killer, vram hog and at that res it is NOT needed.

You can get a high framerate at 4k with SLI 980s rather than just the Ti and Titan X. I know this since people do it because they don't waste frames on msaa.

The advanced graphics stuff is also excessive and just a waste of fps.

June 1, 2015 | 11:28 PM - Posted by Alvaro (not verified)

Hi Guys,

I have a Sli system with GTX 670+FTW 4GB I will big difference in performance if I replace the SLI with the 980ti?

Also, I read the specs of the GTX 670+ FTW and the maximum digital resolution listed is 2560x1600, yet I actually ran games at 4K resolution. Is that because of the SLI configuration I have?

Thank you in Advance!

June 1, 2015 | 11:30 PM - Posted by Alvaro (not verified)

Sorry for the first line I want to say:
I have a Sli system with GTX 670+FTW 4GB I will notice a big difference in performance if I replace the SLI configuration with a single 980ti?

June 2, 2015 | 12:12 AM - Posted by lightsworn (not verified)

The GTX 690 is a great card for those who bought it in its day. Regarding the frame buffer and the aging tech, how does this card stack up in these benchmarks? What are the trade-offs ("it's better in this fashion here but not there")?

June 2, 2015 | 07:17 AM - Posted by FrankJ00 (not verified)

Since the 980ti has two compute units disabled does it have the same memory nuance as the 970 ?

June 2, 2015 | 09:55 AM - Posted by T7 (not verified)

Nice review. It is really funny that a few months ago everyone was yelling at me for buying a GPU with 4GB of memory; everyone was saying "who needs 4GB of memory".

8GB of graphics memory should already be the norm; 2160p really needs much more than 4GB, so 8GB should be the norm. Also, hopefully games will start loading all textures for each frame at once instead of using the ugly texture streaming method that most games are using because of the 3GB limit on most cards (thanks Nvidia).

June 2, 2015 | 04:35 PM - Posted by Areus

Good review, as a 680 owner I quite appreciate the comparison to my particular card. Please include a poor shirty ancient 680 in 18 months when the gtx 1080 or whatever they name it is released.

June 2, 2015 | 05:40 PM - Posted by PhoneyVirus

Too bad TSMC didn't have better fabs than 28nm, which has been around well over two years. The EVGA GeForce GTX 670 seems to be holding up for the games I play, and it's 28nm. Nvidia has amazing graphics cards and this can be said for the GeForce GTX 980 Ti; honestly, if the chance was there, I would have two of these in SLI.

Nonetheless I'll be keeping an eye open for Volta. Yes, I'll wait that long, especially with it taking on full 3D HBM chips all running on 14nm silicon.

June 2, 2015 | 10:15 PM - Posted by Anonymous (not verified)

Still noticing that no review site seems to have the balls to put the 980TI up against Arma 3 Maxed out. or Maxed out at 4K.

September 12, 2015 | 05:29 PM - Posted by vuther316

My i7-4790k / GTX 980 Ti build gets about 38 FPS on the Altis benchmark (everything maxed, view distance set to 6500 and objects set to 1600). If you mostly play on small maps like Altis or Arma 2 maps like Takistan it should run a lot better, and even that benchmark is kind of a worst case scenario with lots of explosions and smoke effects.

June 3, 2015 | 05:00 PM - Posted by Marco Romero (not verified)

It is a pity that the new GTX 980 Ti doesn't have a backplate like the reference GTX 980 does.

June 4, 2015 | 01:24 PM - Posted by Anonymous (not verified)

FCAT??? Why not?

June 5, 2015 | 01:58 PM - Posted by Anonymous (not verified)

Is there any reason to wait until the non-reference versions come out? Or is it safe to pull the trigger now?
