NVIDIA GeForce GTX 980 and GTX 970 GM204 Review: Power and Efficiency

Manufacturer: NVIDIA

The GM204 Architecture

James Clerk Maxwell's equations are the foundation of our understanding of optics and electromagnetism. It is a fitting tribute for NVIDIA to use Maxwell as the code name for a GPU architecture, and the company hopes that the features, performance, and efficiency built into the GM204 GPU would impress Maxwell himself. Without giving away the conclusion here in the lead, I can tell you that I have never seen a GPU perform as well as this one does while changing the power efficiency discussion in such dramatic fashion.

To be fair though, this isn't our first experience with the Maxwell architecture. With the release of the GeForce GTX 750 Ti and its GM107 GPU, NVIDIA put the industry on watch and left us all wondering whether such a design could be brought to the high-end, enthusiast-class market. The GTX 750 Ti brought a significantly lower-power design to a market that desperately needed it, and we were even able to showcase it in some off-the-shelf PC upgrades that required no external power connector at all.

That was GM107 though; today's release is GM204, which means we are seeing not only the larger cousin of the GTX 750 Ti but also at least some moderate GPU architecture and feature changes from the first run of Maxwell. The GeForce GTX 980 and GTX 970 are going to take on the best of the GeForce lineup as well as the AMD Radeon family of cards, with aggressive pricing and performance levels to match. And, for those that understand the technology at a fundamental level, you will likely be surprised by how little power is required to achieve these goals. Toss in support for a new AA method, Dynamic Super Resolution, and even improved SLI performance, and you can see why doing it all on the same process technology is impressive.

The NVIDIA Maxwell GM204 Architecture

The NVIDIA Maxwell GM204 graphics processor was built from the ground up with an emphasis on power efficiency. As was stated many times during the technical sessions we attended last week, the architecture team learned quite a bit while developing the Kepler-based Tegra K1 SoC, and much of that filtered its way into the larger, much more powerful product you see today. The result is fast and efficient, and it was all done on the same TSMC 28nm process technology used for the Kepler GTX 680 and even AMD's Radeon R9 series of products.

The fundamental structure of GM204 is set up like the GM107 product shipped as the GTX 750 Ti: an array of GPCs (Graphics Processing Clusters), each comprising multiple SMs (Streaming Multiprocessors, called SMMs for this Maxwell derivative), paired with external memory controllers. The GM204 chip, the full implementation of which is found on the GTX 980, consists of 4 GPCs, 16 SMMs, and four 64-bit memory controllers.

Each SMM features 128 CUDA cores, or stream processors, bringing the total for this product to 2048. That is a significant drop from the 2880 CUDA cores found in the GTX 780 Ti (a full GK110 chip) but, as you'll soon find out, thanks to higher clock speeds and efficiency improvements the GTX 980 matches or beats the GTX 780 Ti in every test we have run.

Each SMM also features an improved PolyMorph engine for geometry processing and 8 texture units. Across the full chip, that adds up to 128 texture units, alongside a 256-bit memory bus, 64 ROPs (raster operation units), and 2MB of L2 cache.
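
For readers who want to check the unit math, here is a minimal sketch in Python, using only the counts quoted above, that derives the chip-wide totals from the per-SMM figures:

```python
# Unit counts quoted in the text for the full GM204 (GTX 980).
GPCS = 4
SMMS_PER_GPC = 4              # 16 SMMs spread across 4 GPCs
CUDA_CORES_PER_SMM = 128
TEX_UNITS_PER_SMM = 8
MEM_CONTROLLERS = 4
MEM_CONTROLLER_WIDTH_BITS = 64

smms = GPCS * SMMS_PER_GPC
print("SMMs:         ", smms)                                   # 16
print("CUDA cores:   ", smms * CUDA_CORES_PER_SMM)              # 2048
print("Texture units:", smms * TEX_UNITS_PER_SMM)               # 128
print("Memory bus:   ", MEM_CONTROLLERS * MEM_CONTROLLER_WIDTH_BITS, "bits")  # 256
```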

The GeForce GTX 680, the GK104-based graphics card launched in March of 2012, provides some interesting comparisons. First, the obvious: the GTX 980 has 33% more processor cores, higher clock speeds, and thus a much higher peak compute rate. The texture unit count remains the same, but the doubling of ROP units gives the Maxwell GPU better performance in high resolution anti-aliasing. Memory bandwidth is increased by a modest amount, and NVIDIA has made further enhancements in that area with GM204 as well.

Look at those bottom four statistics though: GM204 has 1.66 billion more transistors and a 35% larger die, yet quotes a TDP 30 watts lower than GK104 on the same 28nm process. If you take performance into consideration, the GTX 980 should really be measured against the GK110-based GTX 780 Ti, a GPU with a 250 watt TDP, 7.1 billion transistors, and a 551 mm^2 die. As we dive into the benchmarks on the following pages, you will gain an understanding of why this is so interesting.

The SMMs of Maxwell are a fundamental change compared to Kepler. Rather than a single block of 192 shaders, the SMM is divided into four distinct blocks, each with its own instruction buffer, scheduler, and 32 dedicated, non-shared CUDA cores. NVIDIA states that this simplifies the design and scheduling logic required for Maxwell, saving on both area and power. Pairs of these blocks are grouped together and share four texture filtering units and a texture cache. Shared memory is a separate pool of data accessible to all four processing blocks of the SMM.

With these changes, the SMM can offer 90% of the compute performance of the Kepler SM in a smaller die area, which allows NVIDIA to integrate more of them per die: GK104 had 8 SMs (1536 CUDA cores) while GM204 has 16 SMMs (2048 CUDA cores), double the SM count.

NVIDIA's Jonah Alben indicated to us that the 192-core SMs used in Kepler seemed great at the time, but the non-power-of-2 count introduced quite a few inefficiencies: it was more difficult for the scheduling hardware to keep the cores fed with data, and the move to 128-core SMMs helps address this. Speaking of scheduling, there were efficiency changes there as well. Instruction scheduling now occurs much earlier in the pipeline, preventing re-scheduling in many instances and helping to keep power down and performance high.

Other than the dramatic changes to the SMM, the 2 MB of L2 cache that NVIDIA has implemented on Maxwell is another substantial change. Considering that Kepler's GK104 had a 512 KB L2 cache, we are seeing a 4x increase in capacity, which should dramatically reduce the demand on the integrated memory controllers of GM204.

Texture fill rate between GK104 and GM204 increases by 12% (thanks to clock speeds, not texture unit counts), while pixel fill rate more than doubles, going from 32.2 Gpixels/s to 72 Gpixels/s on the GTX 980.
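
Those fill rates fall directly out of the unit counts and clocks. The sketch below reproduces the figures above, assuming the commonly quoted base clocks of 1006 MHz for the GTX 680 and 1126 MHz for the GTX 980:

```python
def tex_fill(tex_units, clock_mhz):
    """Peak texture fill rate in GTexels/s."""
    return tex_units * clock_mhz / 1000.0

def pixel_fill(rops, clock_mhz):
    """Peak pixel fill rate in GPixels/s."""
    return rops * clock_mhz / 1000.0

# GTX 680 (GK104): 128 texture units, 32 ROPs, ~1006 MHz
# GTX 980 (GM204): 128 texture units, 64 ROPs, ~1126 MHz base
print(tex_fill(128, 1006), tex_fill(128, 1126))    # ~128.8 vs ~144.1 -> +12%
print(pixel_fill(32, 1006), pixel_fill(64, 1126))  # ~32.2 vs ~72.1 -> 2.2x
```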

A 256-bit memory bus might seem like a downgrade for a flagship card, as the GTX 780 Ti featured a 384-bit bus (though the GTX 680 used a 256-bit controller as well). But there are several changes NVIDIA has made to improve memory performance on that narrower bus. First, the memory now runs at 7.0 GHz (effective), and the GM204 cache is larger and more efficient, reducing the number of memory requests that have to go out to DRAM.
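
The raw bandwidth math works out as follows, a quick sketch assuming the quoted 7.0 GHz effective data rate and the GTX 680's 6.0 GHz for comparison:

```python
def bandwidth_gb_s(bus_width_bits, data_rate_gt_s):
    # bus width in bits x transfers per second gives Gbit/s; divide by 8 for GB/s
    return bus_width_bits * data_rate_gt_s / 8.0

print(bandwidth_gb_s(256, 6.0))  # 192 GB/s -> GTX 680
print(bandwidth_gb_s(256, 7.0))  # 224 GB/s -> GTX 980
print(bandwidth_gb_s(384, 7.0))  # 336 GB/s -> GTX 780 Ti, for reference
```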

Another change is the implementation of a third-generation delta color compression algorithm that attempts to lower the bandwidth required for any single operation. The compression is applied both when data is written out to memory and when it is read back by the application, achieving as much as 8:1 compression on blocks of matching color values (4x2 pixel regions). Delta color compression compares neighboring color values and minimizes the number of bits stored by encoding only the differences. If the data is too random to compress at all, it is simply written to memory uncompressed, at 1:1.
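
NVIDIA has not published the actual hardware algorithm, but the principle is easy to demonstrate. Here is a hypothetical toy sketch of delta encoding over a 4x2 block of 8-bit color values: a smooth gradient compresses to a fraction of its raw size, while random data exceeds the raw size and would be stored 1:1 instead:

```python
def delta_encode_bits(block):
    """Toy delta coder for a flat list of 8-bit values (e.g. a 4x2 block).

    Stores the first value at full precision, then one fixed-width
    signed delta per remaining value; returns the encoded size in bits.
    """
    deltas = [b - a for a, b in zip(block, block[1:])]
    widest = max((abs(d) for d in deltas), default=0)
    bits_per_delta = 1 + max(widest.bit_length(), 1)  # sign bit + magnitude
    return 8 + bits_per_delta * len(deltas)

smooth = [100, 101, 102, 103, 104, 105, 106, 107]  # gradient: 22 bits vs 64 raw
noisy = [12, 200, 7, 90, 255, 3, 180, 66]          # random: 71 bits, store raw
print(delta_encode_bits(smooth), delta_encode_bits(noisy))
```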

NVIDIA claims that, combining the improved caching and compression techniques, Maxwell requires 25% less memory bandwidth on a per-frame basis. As a result, even though the raw GB/s figure of GM204 is only marginally higher than that of GK104, the effective memory bandwidth of the new GTX 980/970 cards will appear much higher to developers and in games.
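
Taken at face value, that 25% saving means the GTX 980's 224 GB/s of raw bandwidth should behave roughly like 224 / 0.75 ≈ 299 GB/s of Kepler-equivalent bandwidth, approaching the 336 GB/s of the 384-bit GTX 780 Ti.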

There are other changes in GM204 that do not exist in GM107 to help improve performance of certain features that NVIDIA is bringing to the market. Those help build the foundation for VXGI global illumination and more.


September 18, 2014 | 10:46 PM - Posted by Anonymous (not verified)

GTX 970 for me! Been waiting for this card. Great review

November 2, 2014 | 01:58 AM - Posted by Anonymousyan (not verified)

Video graphics for me, bro.

September 18, 2014 | 10:48 PM - Posted by Robogeoff (not verified)

Very nice Ryan. It's good to see that someone took the time to do SLI benches and framerate pacing.

September 18, 2014 | 11:23 PM - Posted by Ryan Shrout

Thanks, it's been a pain in the ass and we were still writing stuff up as the article went live. :)

September 25, 2014 | 03:29 AM - Posted by Anonymous (not verified)

I know, the process you gotta go through to get those FCAT results is a pain, but I must say this puts your reviews and this website well above all others. True enthusiasts will understand, and I am now sold.

September 18, 2014 | 10:50 PM - Posted by Anonymous (not verified)

Sooo. when is 980/1080/1800 or whatever TI coming out? I need performance more than power efficiency.

September 19, 2014 | 09:53 AM - Posted by funandjam

I'm guessing that a Ti version will be pulled out sometime after AMD's new lineup, that is, if their new lineup has a flagship that outperforms the 980.

September 19, 2014 | 11:06 AM - Posted by Anonymous (not verified)

Probably 6 months from now. AMD's next-gen is supposed to drop in Q1 2015 so we'll see how much they push things forward.

Nvidia didn't exert much energy (no pun intended) to beat the 18 month old performance of the 700 Series cards so AMD is in a good spot.

September 19, 2014 | 08:33 PM - Posted by Anonymous (not verified)

Where AMD will have problems is not so much in pricing but in the thermals required for mini/micro sized systems for HTPC etc., which may not be able to take the AMD SKUs even if the prices are lower. Getting as much GPU power into as small a form factor as possible is going to be a much more important market segment as more of these products are introduced.

Small form factor portable mini desktop systems, linked wirelessly to tablets and relying on the mini desktop for most of the processing needs, are going to appear: systems that can be easily carried around in a laptop bag along with a tablet, the tablet acting as the desktop host for direct(Via ad hoc WiFi) remote into the mini desktop PC. These systems will be more powerful than a laptop (the mini PC part of the pair) but just as portable, plugged in at the coffee house etc. and wirelessly serving games and compute to one or more tablets. Fitting more powerful discrete GPUs into these systems without overburdening the limited cooling available in the mini/micro form factor will be a big business, especially for gaming/game streaming on the go, and for taking these devices along while traveling: a device that can be configured to be more like a laptop when on battery power, but ramp up the power beyond what a laptop is capable of while plugged in.

September 19, 2014 | 08:38 PM - Posted by Anonymous (not verified)

Edit: the tablet acting as the desktop host for direct(Via ad hoc WiFi) remote into the mini desktop PC

to: Edit: the mini PC acting as the desktop/gaming/compute host for direct(Via ad hoc WiFi) remote from the tablet

Got it backwards!

September 18, 2014 | 10:53 PM - Posted by Holyneo (not verified)

I'm so glad I resisted the urge to not spend 3k on a Titan Z.

September 18, 2014 | 11:46 PM - Posted by Anonymous (not verified)

So you bought a Titan Z?

September 19, 2014 | 07:12 AM - Posted by Anonymous (not verified)

you would be an idiot to spend 3k on a Titan Z when you could get 2 Titan Blacks for 2k

September 19, 2014 | 07:41 AM - Posted by Anonymous (not verified)

Pretty sure he was being sarcastic.

September 18, 2014 | 11:00 PM - Posted by anon (not verified)

Can't help but notice the 770 has no numbers up. Are those still coming?

September 18, 2014 | 11:22 PM - Posted by Ryan Shrout

Didn't really have room for that in my graphs this time around, but I'll consider it for the retail card releases for sure. You can estimate how much slower the GTX 770/GTX 680 would be by comparing it to the GTX 780.

September 18, 2014 | 11:07 PM - Posted by Chaitanya Shukla

If nvidia has managed to drop the power requirement so much on the current 28nm process, I am eager to see what they will manage to do next year when the 20nm process starts. A die shrink is going to make AMD engineers rethink their gpu pricing and architecture.

September 18, 2014 | 11:23 PM - Posted by Ryan Shrout

It's an interesting thought - though all indications right now are that the 20nm process isn't set up for higher performance.

September 18, 2014 | 11:57 PM - Posted by Anonymous (not verified)

It's nice to see them get these performance gains from the 28nm node with some architectural tweaks; that means they will get even more from a process node shrink, whenever the fabs get the 20nm process set up for higher performance. This year and next should see some solid gains, and hopefully less rebranding, as a new round of the AMD versus Nvidia contest begins.

September 19, 2014 | 04:38 PM - Posted by aparsh335i (not verified)

Here is what I think is wild -
The 980 seems to be incredibly "dumbed down" from where it could be if they wanted to max it out. They've done this before though, right? It basically gives them headroom for when AMD makes their move. Also notice these prices - $560 for the 980 that beats the 780 Ti. In reality the parts in the 980 are actually cheaper than the 780 Ti's (fewer cores, narrower bus, etc.) so the margins on the 980 right now are simply going to be amazing. Nvidia is in a good spot - 70% market share and holding the crown for best single GPU with huge margins. If/when they beef up the 980 it will be an absolute beast! Imagine if it was 384- or 512-bit with 2880+ cores.

September 18, 2014 | 11:11 PM - Posted by Mandrake

That 970 is a killer card. Wow. Brilliant stuff!

September 18, 2014 | 11:23 PM - Posted by StewartGraham (not verified)

Fantastic article and thanks for making a longer video - I'd like to see more video content from PCper! This certainly wasn't too long, if anything I'd be very interested to see longer videos from PCper in general!

September 18, 2014 | 11:26 PM - Posted by Ryan Shrout

Make sure you subscribe to our YouTube channel and keep watching the Pcper.com front page!

September 19, 2014 | 08:58 AM - Posted by Anonymous (not verified)

Speaking of YouTube, Nvidia put up a great video explaining VXGI. At first I thought it was something that would make games like Minecraft perform better (which it very well may), but the truth is much cooler:

http://youtu.be/GrFtxKHeniI

September 18, 2014 | 11:24 PM - Posted by StewartGraham (not verified)

Could you imagine if Nvidia cranked up the TDP? These GPUs would be insanely fast!

September 19, 2014 | 12:12 AM - Posted by arbiter

Nvidia tends to err on the side of being conservative when it comes to their gpus, while AMD tends to go balls to the wall. Two different ways of doing things; we'll see in the long run who hurts most from it.

September 18, 2014 | 11:25 PM - Posted by Anonymous (not verified)

I'm surprised with the decent pricing on the 970. Are you sure this card is actually from Nvidia?

September 18, 2014 | 11:43 PM - Posted by Mandrake

The 970 seems like a sucker punch aimed directly at AMD. Nvidia is happy to aggressively price when they feel it works in their favour.

September 19, 2014 | 09:55 AM - Posted by Anonymous (not verified)

Economics 101.

September 18, 2014 | 11:39 PM - Posted by Robogeoff (not verified)

I'm impressed with the overclockability of Maxwell. Although, I wonder if watercooling will make much difference.

September 18, 2014 | 11:45 PM - Posted by Anonymous (not verified)

that is a very, VERY good point! this might mean that it will be much easier to get into x2 and x3 SLI!...... Ryan, how in the WORLD did you get your hands on PrecisionX 16!?! EVGA still won't let people download 15..... please send me a download link :). Best article and video out on this great topic! thanks

September 19, 2014 | 12:05 AM - Posted by Ryan Shrout

Thanks for the compliment but sorry, can't share PrecisionX! :)

September 20, 2014 | 02:15 PM - Posted by Polycrastinator (not verified)

It became available for download a couple of days ago. I installed it on my PC last night.

September 19, 2014 | 12:01 AM - Posted by RGSPro (not verified)

It doesn't really look like it has much of a margin above the 780 Ti from what I can tell - just less power usage? A lot of the comparisons are just to the standard 780. Am I missing something here, or is this not really an upgrade from a 780 Ti (well, other than the plethora of DisplayPort connections)? The biggest advantage is the extra 1GB of memory I guess, so it should do a bit better at 3840x2160.

September 19, 2014 | 12:05 AM - Posted by tbone8ty (not verified)

Double check the BF4 SLI 4K frame time graphs. Is orange AMD or NVIDIA?

September 19, 2014 | 12:08 AM - Posted by Ryan Shrout

Yup, fixed!

September 19, 2014 | 07:09 AM - Posted by PapaDragon

All I can say is that Nvidia hit a home run to center field. Very impressive results!! The Maxwell architecture really flexed its muscle. The GTX 980 is a BEAST! Thanks Ryan for the great review, and thanks for adding SLI benchmarks and the video!!!

September 19, 2014 | 12:19 AM - Posted by Mark Harry (not verified)

I thought you were going to make donuts?

September 19, 2014 | 12:32 AM - Posted by General Lee (not verified)

Gotta hand it to Nvidia for making Maxwell so much more efficient on the same node. This gives some more hope that we'll finally get some real nice performance improvements in the high end too once the big chips come in, and when they move to 20nm.

The 970 seems like a fantastic deal right now, and certainly AMD has to start scrambling with their lineup. Tonga suddenly doesn't look that good and probably needs to go down in price very soon, especially if they're planning on a 285X that's unlikely to match a 970 in any area. Actually the same is true all the way up to 290X. Nice to see Nvidia put some actual pressure on the competition for once in terms of pricing. Kinda wish they had done the same with the 980, but I guess you take what you can get. I didn't dare to hope Nvidia would actually want to compete on price.

September 19, 2014 | 12:42 AM - Posted by Keith Jones (not verified)

Looking at reviews elsewhere, such as TechPowerUp.com, the benchmarks for the GTX 980 are a bit disappointing in certain games - for example Wolfenstein: The New Order, Crysis, and Crysis 3 saw only very small increases in frame rate.

Bioshock Infinite actually saw a decrease in frames at all resolutions. It was just a few frames, but all in all I am disappointed.

September 19, 2014 | 07:14 AM - Posted by Anonymous (not verified)

Wolfenstein is unoptimized crap, and the drivers are still immature.

September 19, 2014 | 09:17 AM - Posted by donut (not verified)

Isn't Wolfenstein capped at 60fps?

September 19, 2014 | 09:30 AM - Posted by Merlynn (not verified)

It also lacks multi-GPU support, not to mention all the performance issues.

September 19, 2014 | 06:17 PM - Posted by aparsh335i (not verified)

"very small increases" compared to what?

September 19, 2014 | 12:55 AM - Posted by Anonymous (not verified)

Any chance we can see a color compression comparison?

Kepler, Maxwell, Tonga, Hawaii.

September 19, 2014 | 01:55 AM - Posted by Ryan Shrout

How do you mean? Color compression on the memory bus is always lossless, so there should be no variance in image quality.

September 19, 2014 | 02:33 AM - Posted by Anonymous (not verified)

So many graphs!
My eye holes were completely unprepared. You should have warned us about that or something.
But overall, fantastic review.

September 19, 2014 | 03:50 AM - Posted by Anonymous (not verified)

Why is the framerating comparing 7970 and gtx 680? Am I missing something?

September 19, 2014 | 04:29 AM - Posted by Anonymous (not verified)

That's the generic explanation of framerating and how to interpret the data. It's been recycled throughout all the reviews since then, as it works the same for all GPUs.

September 19, 2014 | 09:15 AM - Posted by Ryan Shrout

That's correct - thanks for commenting for me!

September 19, 2014 | 04:31 AM - Posted by Prodeous

Any plans to update the benchmarks with some CUDA or OpenCL testing?

CUDA wise - Blender?
OpenCL - Luxmark?

Or any other non-gaming benchmarks.

September 19, 2014 | 04:36 AM - Posted by Anonymous (not verified)

It is great to see a high performing card not pushing the limits of power consumption and cooling. I would caution against assuming that all of these gpus will be able to reach such high clock speeds though. If you significantly reduce the thermal limitations, you will bump up against another limitation quite quickly, and these gpus are already clocked high and the voltage is already high. We may see overclocks vary considerably, as we do with some cpus.

I don't care much about power consumption in a desktop system, and most high-end pc gamers don't seem to care either. I only really care about the load power consumption from a noise perspective, as long as it isn't too ridiculous; I don't need a space heater. The increased performance and new features are more of a selling point as far as I am concerned. I think my next system should support hardware HEVC decode/encode. I may be getting a new laptop before I build a new desktop, so will the 980 gpu make a good laptop part? It seems that a lot of performance comes from the high clock, which may not be doable in notebook thermal limitations. The wider chips may actually do better if you push them down to much lower power.

September 19, 2014 | 04:38 AM - Posted by raghu78 (not verified)

The GTX 970 is the real game changer. It obsoletes AMD's R9 290 series and embarrasses them in perf/watt. AMD will have to price the R9 290 at $279-$299 and the R9 290X at $379-$399.

September 19, 2014 | 05:07 AM - Posted by Anonymous (not verified)

Sorry, but that is a stupid statement to make; how can this card render another card obsolete?
It merely makes the argument to purchase the R9 series a little harder.

September 23, 2014 | 06:33 PM - Posted by Casecutter (not verified)

This review has really tempered my impression of the 970's performance against a 290; they seem to spar evenly in this data versus other reviews. I've seen other reviews that often make the 970 look as strong as a 290X. While the power reduction is noteworthy, isn't the 290 by Ryan's numbers using something close to 25% more? It's good, don't get me wrong, but with a nice OC custom 290 at $300 or less, the 290 is still in the fray.

September 19, 2014 | 05:14 AM - Posted by gloomfrost

Thanks for the review Ryan. Seems like the 970 will be a lot more popular; myself, I just bought a fresh new 980 from Newegg. I am upgrading from a 560 Ti, no joking - think I'll see any difference? ;)

September 19, 2014 | 05:30 AM - Posted by Osjur (not verified)

Thank you for testing CF vs SLI. It seems the XDMA engine shows its strengths, and 290X CF is practically tied with 980 SLI. So "upgrading" would be pointless if you own a 290X CF setup like I do, other than getting lower power consumption.

September 19, 2014 | 05:37 AM - Posted by Ramos (not verified)

Ryan, if you guys don't mind of course, would you kindly test the HDMI 2.0 port with an actual TV, e.g. a UE55HD6900, for 4K (UHD really) desktop use?

I would appreciate that so much. Maybe keep one 970/980 fixed and include four different UHD TVs, all over 40 inches of course, as normal monitor distance is where 4K/UHD really shines in desktop usage, i.e. using a 40-65 inch display from a couple of feet is where you can really use the UHD and not feel like you're looking for your magnifying glass.

Great job as usual!

September 19, 2014 | 09:17 AM - Posted by Ryan Shrout

We are working on getting in an HDMI 2.0 display currently!

September 19, 2014 | 02:30 PM - Posted by Anonymous (not verified)

Seems the aftermarket ones that don't use the reference PCB have HDMI 1.4a instead of HDMI 2.0. Even if they have the same or a similar connector layout as the reference, they use HDMI 1.4a.

So all of them.
MSI
Zotac
Gigabyte
EVGA
etc..

If you want HDMI 2.0 for 4K@60Hz you need to get the reference model, or else you'll be limited to 4K@30Hz.

With all the 4K monitors coming out, this should be pointed out.

September 19, 2014 | 06:21 PM - Posted by aparsh335i (not verified)

Umm....or you can just use the displayport, right?

September 19, 2014 | 07:14 PM - Posted by Anonymous (not verified)

He's talking about Samsung UHD TVs. They don't use DisplayPort; they all have 4 HDMI 2.0 ports.

September 19, 2014 | 05:49 AM - Posted by Just Me (not verified)

Sorry, but I think the memory interface is just not enough for a high-end product. There seems to be effectively no progress over the last generation. I guess the next higher NVIDIA GPU will have more bandwidth.

Otherwise it looks great.

September 19, 2014 | 09:18 AM - Posted by Ryan Shrout

I too was worried about the 256-bit memory bus, but clearly the performance is able to keep up. Dismissing a part because of an arbitrary spec rather than its performance doesn't make sense.

September 20, 2014 | 07:32 PM - Posted by arbiter

TechPowerUp tested the 970 in SLI, and it kept up pretty well with the 295X2. Yeah, at the highest resolutions like 4K and 5760x1080 the 295X2 was generally faster, but the 970 wasn't that far off - most results were around a 10% difference. So even with a 256-bit bus, and half the memory path, it uses it well.

September 19, 2014 | 05:52 AM - Posted by DenzilMonster

Just bought a 4790K with an H75 cooler to go with my MSI 970 Frozr, or whatever its name is. Yet I bought an IPS 27 inch screen, so when G-Sync comes out on IPS screens this one can find a new home. I too have been waiting on these new cards. Your show is the reason I bought the 970. A big hug over Frame Rating too, for forcing a new path where vendors can't put out junk drivers etc. Great work PC crew.

September 19, 2014 | 07:08 AM - Posted by Anonymous (not verified)

Still rocking the awful green logo that throws off my theme. Guess I'll get ACX again, which sucks because they overheat very quickly on air when sandwiched.

September 19, 2014 | 07:09 AM - Posted by Anonymous (not verified)

Edit: on tri-SLI

September 19, 2014 | 07:41 AM - Posted by Anonymous (not verified)

I'll wait for the real Maxwell, the GM110, and even then you have to wait an additional year because the first batch are not the flagships - the Ti's are.

The GPU market is becoming a joke, not to mention the quality of games makes it not even worth spending the money. Hell, even my 780s are not being utilized to their full extent because all these shit games are running on old/unoptimized engines.

September 19, 2014 | 07:42 AM - Posted by GutsNotDead (not verified)

A clean victory for Nvidia, definitely! It's been a long time since Nvidia was the clear winner on all fronts with a new GPU product release.

September 19, 2014 | 09:58 AM - Posted by Anonymous (not verified)

I see only one player on the field and you already call victory? Do you have information we don't have?
Or are you comparing nvidia's 2014 arch against AMD's 2011?

September 19, 2014 | 09:20 AM - Posted by Anonymous (not verified)

Any news on when they expect these cards to go on sale?

September 19, 2014 | 09:33 AM - Posted by Anonymous (not verified)

Now.

EVGA had them in stock a couple hours ago, but they're out of stock now.

September 19, 2014 | 10:08 AM - Posted by Topjet (John) (not verified)

I see how you are, Ryan - not saying anything Wednesday night, lol. The 970 looks to be the sweet spot for most; anything about a 960?

September 19, 2014 | 10:47 AM - Posted by Ryan Shrout

Sorry, I couldn't say anything! :)

September 20, 2014 | 04:26 PM - Posted by arbiter

Ryan, we knew what was up on Wednesday when the 900 series story popped up and you excused yourself from talking about it, along with Allyn. So most of us in the chat knew you had the cards for sure; it's not like those boxes sitting there didn't confirm it either.

September 19, 2014 | 10:12 AM - Posted by BBMan (not verified)

The thing I'm impressed with is the reduced power demand. I'm actually a little surprised at the higher 970 idle numbers, but... I've heard the 390X is going to be about 300W. It also may not hit the market until 12/1. If it doesn't totally own this ....

September 19, 2014 | 10:39 AM - Posted by Shambles (not verified)

What? No DP 1.3? They've had a whole 4 days since it was officially released!

September 19, 2014 | 12:59 PM - Posted by Polycrastinator (not verified)

So, I have a single 760 right now. Get a second 760 and SLI with the price drop, or save for another month or two and get a 970?

September 19, 2014 | 03:19 PM - Posted by Hübie

Ryan? What's the exact setting in your Catalyst? Was frame pacing active? I'm curious about the frame times with the 290X. I mean: AMD has XDMA, frame pacing on/off capability, and Mantle (BF4/Thief/Sniper Elite 3). So why are they so bad? That's curious, since they tweeted a lot about it, and as far as I can see nothing has dramatically changed until now.

In addition: a very nice review (as always) and thanks for the hard work. Can you tell us something about the voltage range? Is it ~1.2 volts again?

September 19, 2014 | 03:20 PM - Posted by ZoA (not verified)

There is much talk about Maxwell's new features, but most of them are not really all that new or remarkable. Consequently IPC did not significantly increase compared to Kepler, so performance per transistor per clock is only slightly improved. What makes Maxwell really notable is its ability to reach previously unseen high frequencies at remarkably low voltage.

I'm completely lost as to how it does that, and can't find any explanation in any of the reviews. This kind of effect I would expect from a node shrink, or from the implementation of advanced transistor types such as FinFET or FD-SOI, but Maxwell is made on standard bulk planar 28nm FETs, so I don't understand how nvidia is able to make them stable at such high frequencies at such low voltage.

If someone has an explanation please tell me.

September 19, 2014 | 06:57 PM - Posted by aparsh335i (not verified)

[Troll]
Alien Technology Bro!
[/Troll]

September 19, 2014 | 05:01 PM - Posted by MarkT (not verified)

The stability at lower voltage and higher clocks is easy to explain: Ryan already explained in the article that SoC experience - low voltage experience from mobile products - has helped nvidia innovate in areas where AMD has no experience yet (if ever). Nvidia spent years trying to squeeze water out of the rock that was Tegra, and it drips into Maxwell.

September 19, 2014 | 05:33 PM - Posted by ZoA (not verified)

I don't think that “mobile SoC experience” is an adequate explanation. Mobile chips sacrifice frequency to get minimal voltage, and being optimized for such low voltages and frequencies, they don't handle high frequencies well.

Maxwell behaves differently, and handles high frequencies excellently.

Besides, even if “mobile SoC experience” is the answer, it tells us nothing about what concrete technical feature imported from the SoC is providing the effect.

September 19, 2014 | 06:58 PM - Posted by aparsh335i (not verified)

In all seriousness, I think it has something to do with the 192 cores per SM versus the now 128?

September 20, 2014 | 04:33 AM - Posted by Anonymous (not verified)

Fantastic review. The best i have seen. WOW. much time.

September 20, 2014 | 09:46 AM - Posted by Dark_wizzie (not verified)

From the benchmarks it seems obvious to me that the game is CPU bottlenecked. The highest FPS is outside of combat. During combat the FPS all drop to 100fps whether at 1440p or 4K. This is my experience as well, from looking at CPU/GPU usage while playing Skyrim. Since it's limited to 2 cores, basically a highly overclocked 4 core is the way to go. However, with something like ENB, I wonder if that will shift enough of the burden towards the GPU to the point where a better GPU solution actually does something.

September 20, 2014 | 09:47 AM - Posted by Dark_wizzie (not verified)

I am talking about Skyrim, BTW, in my above comment.

September 20, 2014 | 12:13 PM - Posted by Stangflyer (not verified)

Ryan are you planning on testing multimonitor in the near future?
Below is my question.

For example: I have my Sapphire 7950 Flex cards in CrossFire. I play Eyefinity games at 5920x1440. My 3 screens are two 1680x1050 (16:10) for the outside, and my U2711 at 2560x1440 (16:9) in the middle. All are DP monitors.

I do this because, for the games that do not support multi monitor, I like the bigger 2560x1440 screen.

It would be great to know if Nvidia has updated surround capabilities to match AMD's!

September 21, 2014 | 06:59 AM - Posted by Cannonaire

I'm happy to see nVidia endorse downsampling in the form of a supported feature. I'm curious about the downsampling filter they use though - a 13-tap Gaussian filter should produce a decently sharp image without ringing, but is there any word on whether or not it is gamma-aware? That last detail is important when downsampling and particularly for high-contrast details.

September 21, 2014 | 09:09 PM - Posted by Shaun (not verified)

Hi,

I have a request: benchmark Skyrim with ENB at full quality.
The RealVision option A at full quality is a good video card destroyer!

My system is an i7 4820K @ 4.5GHz and a 290X.
Skyrim at 4K without ENB is 45-50fps;
with ENB on full quality it's 17-19fps.

Can you set up a Skyrim ENB benchmark for reference from now on?
I'm very interested in your benchmarks for Skyrim ENB with the 290X, 780 Ti, GTX 970 and GTX 980.

I know it's a lot of work but please please please! hehehe

September 21, 2014 | 09:12 PM - Posted by Shaun (not verified)

Ohhh, if you do, please add the unofficial HD textures and Flora Overhaul; that hurts performance even more! Makes the game so beautiful to play...

September 23, 2014 | 10:19 PM - Posted by Gisondinator (not verified)

I have been watching your channel for a long time now. I would like to say that I enjoy the thorough way in which you benchmark every card. That being said, I would guess that 98% of PC gamers play at 1080p. I'm wondering why you test such high resolutions? I'm sure you have 1080p benchmarks on another page. I just feel raked over the coals with G-Sync and 4K. I'm tired of forking over thousands for small increases in performance. This bleeding edge is making my wallet bleed!

September 26, 2014 | 10:53 PM - Posted by matt (not verified)

I agree with Shaun, the RealVision ENB would make a great benchmark test. With the RealVision ENB on Skyrim I had an average of 15fps in the open area outside of Whiterun and an average of 20-35fps elsewhere, on a Gigabyte R9 290X OC 4GB. I'm not sure if that did it, but my card eventually broke - overheating issues, or it wouldn't load into Windows, just a black screen with fans spinning full speed after POST. So I RMA'd that card, got a credit refund, and bought the MSI GTX 970 4GB, which I'm waiting to arrive along with a new motherboard.

So I think it would make a great benchmark, as it really pushes the GPU and not so much the CPU, and Skyrim with mods normally uses up to 4GB of VRAM.

Anyway Thanks

September 28, 2014 | 09:00 AM - Posted by Br00ksy (not verified)

Awesome review. SLI power consumption numbers for dual 980s are hard to come by, and you sir have slain my doubts about overdrawing an 850W power supply. THANK YOU!!! :D

October 2, 2014 | 12:39 AM - Posted by bazzman (not verified)

I'm going to build this system:
i7 4790K
SLI GTX 970s
16GB RAM 1600MHz

Is a 630 watt PSU sufficient to run them in SLI? If yes, can I also overclock?

October 3, 2014 | 03:22 AM - Posted by Anonymous (not verified)

630W? Are you using a brand name PSU?

Don't trust PSUs that come preinstalled with a case.

I would think 630 would be iffy for SLI;
you want at least 25% to spare. I'd say at least a 750W.

Though my Cooler Master ran 2x R9 280s fine, but that was with me slightly underclocking my CPU, so allow for that.

Just make sure the PSU is a quality one.

December 2, 2014 | 04:20 AM - Posted by Riaz (not verified)

Can I run a GTX 970 on my Intel DH61HO motherboard?

December 5, 2014 | 03:02 PM - Posted by Garry (not verified)

It's obvious, Ryan, that you have put heaps of time into this (well done mate), but as someone wanting to build a rig to use on a big TV, I'm holding back until I can get my head around the 4K TV vs PC gaming output thing.

HDMI and 4k is my worry. I'll be buying a big (thinking 65") TV, only 4k for the gaming. It'll do service as a normal TV too, but in Australia it'll be obsolete before we see 4k content on the air! So that leaves gaming.

Is a big 4K TV a good option for high resolution gaming? Or are there land mines hidden in the HDMI 1.x/2.x specs that'll catch out the unwary? It'd certainly look better than 3 monitors.

It looks like the 980 will push BF4 to 4K @ ~30fps, but is that enough, or is SLI to get 45-60fps needed for it to be a pleasure to play?

A pair of watercooled 290Xs would have to be an option - quiet yet in the running on fps. OK, they use more power, but the purchase price difference buys a lot of electricity, unless water cooling costs a bomb!

January 26, 2015 | 04:32 AM - Posted by Anonymous (not verified)

Why are the specific settings not disclosed?
That's pretty much benchmarking 101, and things like AA & AO can make a massive difference.

December 2, 2015 | 09:24 AM - Posted by Grover3606 (not verified)

This is the only place I have seen that has benchmarked Skyrim in 4K with a 970, and thank you very much for that! But what settings did you use? Did the 970 really pull ~50 fps at 4K with 8x AA and Ultra? I find that hard to believe. I have a new 4K Samsung and really just want to play Skyrim in vanilla 4K, no need for AA, and I'm trying to decide if a 970 is enough.
