NVIDIA GeForce GTX 980 and GTX 970 GM204 Review: Power and Efficiency

Author: Ryan Shrout
Manufacturer: NVIDIA

Overclocking, Power Consumption and Sound

As is usually the case with these launch articles, I don't get nearly the time that I would like to really go in-depth with overclocking. Still, even with limited time to do so, the results are kind of awesome. Kind of really awesome.

For my testing, I am looking only at the GeForce GTX 980 4GB reference cards NVIDIA sent over. We'll spend more time with the retail cards in the next week or two. I used EVGA's new PrecisionX software to handle all the knobs and dials of overclocking with GPU Boost.

Right away you can see that the power target is able to go up to 125%! Considering we were limited to 109% in most cases with the GTX 780 Ti, this is a big change, and it is the result of NVIDIA's efficient power design.

My GPU was 100% stable at a clock speed offset of 225 MHz, bringing our base clock speed to 1352 MHz and the Boost clock to 1441 MHz. Holy hell, let me wipe this sweat off my forehead. (Note: this also includes upping the voltage to the +87 mV maximum provided here.)
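
For anyone who wants to sanity-check that math, here's a minimal sketch. The stock clocks are inferred from the numbers above (1352 - 225 = 1127 MHz base, 1441 - 225 = 1216 MHz Boost), and treating the power target as a linear multiplier on the GTX 980's 165W TDP is my assumption, not something NVIDIA documents:

```python
# Minimal sanity check of the overclocking figures quoted above. Stock
# clocks are inferred from the article's numbers; the power target is
# assumed (not documented) to scale the 165W TDP linearly.
TDP_W = 165
POWER_TARGET = 1.25        # the 125% slider maximum
OFFSET_MHZ = 225           # the stable offset found in testing

stock_base_mhz, stock_boost_mhz = 1127, 1216

print(f"Power ceiling: {TDP_W * POWER_TARGET:.0f} W")          # ~206 W
print(f"OC base clock: {stock_base_mhz + OFFSET_MHZ} MHz")     # 1352
print(f"OC Boost clock: {stock_boost_mhz + OFFSET_MHZ} MHz")   # 1441
```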

So what does that get us in performance? A quick rundown in Metro: Last Light shows us.

The black line represents our highest overclocked settings that remained stable. The orange line, labeled 125 PT (power target), is here to show any gains from simply increasing the power target without applying a clock offset of any kind.

Our average frame rate in Metro at 2560x1440 jumps from about 57 FPS to 66 FPS, an increase of roughly 16% from our overclocking.
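
As a quick check on that figure, using the rounded frame rates quoted above:

```python
# Relative gain from overclocking in Metro: Last Light at 2560x1440.
stock_fps, oc_fps = 57, 66
print(f"{(oc_fps - stock_fps) / stock_fps:.1%}")   # ~15.8%, i.e. roughly 16%
```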

I know we are going to have questions about clock speed variance, so I went ahead and addressed it with these few graphs. Here you are seeing the clock variance during our Metro: LL runs. The green line (all stock) shows that some clock speed changes are taking place, in line with what we expect GPU Boost to do and how it has acted in the past. Nothing I have seen in my testing shows significant drops in clocks over time like we had with the initial wave of R9 290X/290 parts.

The blue line (with the power target set to 125%) shows that we are hitting a TDP limit on the GPU, and that by simply increasing the available power to the card (through that slider) we get a very flat, static clock rate. Of course, that means there is room to go up...

And that is what the yellow line shows us. Even overclocked to a 225 MHz offset, the GTX 980 is able to maintain a very consistent clock pattern.

To see those numbers in a different way, here are the average clock speeds over those runs. The stock GTX 980 hits an average clock speed of 1210 MHz, right at the advertised Boost clock. The GTX 980 with only the power target adjustment gets a lift to 1252 MHz. But the really impressive mark here is the yellow bar, showing an average clock speed of 1483 MHz at our overclocked settings; that is nearly 23% higher than the stock average.
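
Restating those chart values as relative gains (same numbers, just the arithmetic made explicit):

```python
# Average clock speeds over the Metro: LL runs, from the chart above.
stock, pt_only, overclocked = 1210, 1252, 1483   # MHz

for label, clk in (("125% power target", pt_only), ("overclocked", overclocked)):
    print(f"{label}: {clk} MHz (+{(clk - stock) / stock:.1%} vs. stock)")
# 125% power target: +3.5%; overclocked: +22.6%
```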

All of this does come at a small price of course - noise from the fan on the GTX 980. This graph shows the ramp up in fan speed, and I can tell you from testing that the overclocked card was noticeably louder, hitting nearly 44 dBA on my sound level meter. That's about on par with the sound level of the original R9 290X cooler, so not great. But you are SIGNIFICANTLY overclocked here, and stock noise levels of the GTX 980 hit only 36.7 dBA.
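
For a sense of what a 7.3 dBA jump means, here's a sketch using the standard acoustics rule of thumb (a general formula, not part of our test methodology): +10 dB is 10x the sound power and roughly a doubling of perceived loudness.

```python
# General acoustics rule of thumb applied to the two measured levels.
stock_dba, oc_dba = 36.7, 44.0
delta = oc_dba - stock_dba                                 # +7.3 dBA

print(f"Sound power ratio:   {10 ** (delta / 10):.1f}x")   # ~5.4x
print(f"Perceived loudness: ~{2 ** (delta / 10):.1f}x")    # ~1.7x
```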

Power Consumption Testing

Let's take a look at power consumption, one of the key tenets of the Maxwell architecture. Right away you can tell that something odd is going on - I just showed you a dozen or more pages of the GTX 980 beating the GTX 780 Ti as well as the R9 290X by slim to moderate margins. Yet here I am showing you power consumption that is not just a little lower, but significantly lower than the competition. The GTX 980 draws 131 watts less than the R9 290X and 86 watts less than the GTX 780 Ti. 

That is not a small number - with the 165W TDP of the GTX 980, the R9 290X uses nearly an entire extra GTX 980's worth of power.

The GTX 970 shows similar results. It uses 80 watts less power than the Radeon R9 290 and 54 watts less power than the GTX 780. Heck, it even uses 34 watts less power than the Radeon R9 280X, which doesn't even come close to it in performance.
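
Pulling the measured deltas from both charts together, scaled against the GTX 980's 165W TDP (just re-stating the figures above):

```python
# Measured power-draw deltas from the two charts above, expressed
# against the GTX 980's 165W TDP for scale.
GTX980_TDP_W = 165
deltas_w = {
    "GTX 980 vs. R9 290X":    131,
    "GTX 980 vs. GTX 780 Ti":  86,
    "GTX 970 vs. R9 290":      80,
    "GTX 970 vs. GTX 780":     54,
    "GTX 970 vs. R9 280X":     34,
}
for pair, watts in deltas_w.items():
    print(f"{pair}: {watts} W saved ({watts / GTX980_TDP_W:.0%} of a GTX 980 TDP)")
# The 131 W delta is ~79% of a whole GTX 980, hence "nearly an entire
# extra GTX 980's worth of power."
```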

And, as expected, these results look even better for NVIDIA when we move to multiple cards. In fact, the GTX 980 SLI configuration uses about 30 watts LESS power than a single R9 290X going full bore.

Sound Level Testing

When I first tested the GTX 780 Ti, I knew it was a louder card than NVIDIA would have liked to make. Here you can see that the new GTX 980 with a reference cooler is able to stay at a much lower noise level than the GTX 780 Ti. The Radeon R9 290X number we are showing here is NOT from an original reference card but instead from an ASUS DirectCU II aftermarket design, which is why it shines incredibly well in our noise testing comparison.

The same is true for the GTX 970 - though we will dive into that more with our retail card reviews soon!


September 18, 2014 | 10:46 PM - Posted by Anonymous (not verified)

GTX 970 for me! Been waiting for this card. Great review

September 18, 2014 | 10:48 PM - Posted by Robogeoff (not verified)

Very nice Ryan. It's good to see that someone took the time to do SLI benches and framerate pacing.

September 18, 2014 | 11:23 PM - Posted by Ryan Shrout

Thanks! It's been a pain in the ass, and we were still writing stuff up as the article went live. :)

September 25, 2014 | 03:29 AM - Posted by Anonymous (not verified)

I know, the process you've got to go through to get those FCAT results is a pain, but I must say it puts your reviews and this website well above all others. True enthusiasts will understand, and I am now sold.

September 18, 2014 | 10:50 PM - Posted by Anonymous (not verified)

Sooo, when is the 980/1080/1800 or whatever Ti coming out? I need performance more than power efficiency.

September 19, 2014 | 09:53 AM - Posted by funandjam

I'm guessing that a Ti version will be rolled out sometime after AMD's new lineup, that is, if their new lineup has a flagship that outperforms the 980.

September 19, 2014 | 11:06 AM - Posted by Anonymous (not verified)

Probably 6 months from now. AMD's next-gen is supposed to drop in Q1 2015, so we'll see how much they push things forward.

Nvidia didn't exert much energy (no pun intended) to beat the 18-month-old performance of the 700 series cards, so AMD is in a good spot.

September 19, 2014 | 08:33 PM - Posted by Anonymous (not verified)

Where AMD will have problems is not so much pricing but the thermals required by mini/micro-sized systems for HTPC use etc., which may not be able to take the AMD SKUs even if the prices are lower. Getting as much GPU power as possible into as small a form factor as possible is going to be a much more important market segment as more of these products are introduced.

Small-form-factor portable mini desktop systems, linked wirelessly to tablets and relied on for most of the processing needs, are going to appear: systems that can be easily carried around in a laptop bag along with a tablet, the tablet acting as the desktop host for direct (via ad hoc WiFi) remote into the mini desktop PC. These systems (the mini PC half of the pair) will be more powerful than a laptop but just as portable, plugged in at the coffee house etc. and wirelessly serving games and compute to one or more tablets. Fitting more powerful discrete GPUs into these systems without overburdening the limited cooling available in the mini/micro form factor will be big business, especially for gaming/game streaming on the go, taking these devices along while traveling, and having a device that can be configured to act like a laptop on battery power but ramp up beyond what a laptop is capable of while plugged in.

September 19, 2014 | 08:38 PM - Posted by Anonymous (not verified)

Edit: the tablet acting as the desktop host for direct(Via ad hoc WiFi) remote into the mini desktop PC

to: Edit: the mini PC acting as the desktop/gaming/compute host for direct(Via ad hoc WiFi) remote from the tablet

Got it backwards!

September 18, 2014 | 10:53 PM - Posted by Holyneo (not verified)

I'm so glad I resisted the urge to not spend 3k on a Titan Z.

September 18, 2014 | 11:46 PM - Posted by Anonymous (not verified)

So you bought a Titan Z?

September 19, 2014 | 07:12 AM - Posted by Anonymous (not verified)

You would be an idiot to spend 3k on a Titan Z when you could get 2 Titan Blacks for 2k.

September 19, 2014 | 07:41 AM - Posted by Anonymous (not verified)

Pretty sure he was being sarcastic.

September 18, 2014 | 11:00 PM - Posted by anon (not verified)

Can't help but notice the 770 has no numbers up. Are those still coming..?

September 18, 2014 | 11:22 PM - Posted by Ryan Shrout

Didn't really have room for that in my graphs this time around, but I'll consider it for the retail card releases for sure. You can estimate how much slower the GTX 770/GTX 680 would be by comparing it to the GTX 780.

September 18, 2014 | 11:07 PM - Posted by Chaitanya Shukla

If NVIDIA has managed to drop power requirements so much on the current 28nm process, I am eager to see what they will manage to do next year when the 20nm process starts. A die shrink is going to make AMD engineers rethink their GPU pricing and architecture.

September 18, 2014 | 11:23 PM - Posted by Ryan Shrout

It's an interesting thought - though all indications right now are that the 20nm process isn't set up for higher performance.

September 18, 2014 | 11:57 PM - Posted by Anonymous (not verified)

It's nice to see them get these performance gains from the 28nm node with some architectural tweaks; that means they will get even more from a process node shrink, whenever the fabs get the 20nm process set up for higher performance. This year and next should see some solid gains, and hopefully less rebranding, as a new round of the AMD versus Nvidia contest begins.

September 19, 2014 | 04:38 PM - Posted by aparsh335i (not verified)

Here is what I think is wild:
The 980 seems to be incredibly "dumbed down" from where it could be if they wanted to max it out. They've done this before though, right? It basically gives them headroom for when AMD makes their move. Also notice these prices: $560 for the 980 that beats the 780 Ti. In reality the parts that go into the 980 are actually cheaper than the 780 Ti's (fewer cores, narrower bus, etc.), so the margins on the 980 right now are simply going to be amazing. Nvidia is in a good spot: 70% market share and holding the crown for best single GPU, with huge margins. If/when they beef up the 980 it will be an absolute beast! Imagine if it was 384- or 512-bit with 2880+ cores.

September 18, 2014 | 11:11 PM - Posted by Mandrake

That 970 is a killer card. Wow. Brilliant stuff!

September 18, 2014 | 11:23 PM - Posted by StewartGraham (not verified)

Fantastic article and thanks for making a longer video - I'd like to see more video content from PCper! This certainly wasn't too long, if anything I'd be very interested to see longer videos from PCper in general!

September 18, 2014 | 11:26 PM - Posted by Ryan Shrout

Make sure you subscribe to our YouTube channel and keep watching the Pcper.com front page!

September 19, 2014 | 08:58 AM - Posted by Anonymous (not verified)

Speaking of YouTube, Nvidia put up a great video explaining VXGI. At first I thought it was something that would make games like Minecraft perform better (which it very well may), but the truth is much cooler:

http://youtu.be/GrFtxKHeniI

September 18, 2014 | 11:24 PM - Posted by StewartGraham (not verified)

Could you imagine if Nvidia cranked up the TDP? These GPUs would be insanely fast!

September 19, 2014 | 12:12 AM - Posted by arbiter

Nvidia seems to err on the side of being conservative when it comes to their GPUs, while AMD tends to go balls to the wall. Two different ways of doing things; we'll see in the long run who gets hurt most by it.

September 18, 2014 | 11:25 PM - Posted by Anonymous (not verified)

I'm surprised with the decent pricing on the 970. Are you sure this card is actually from Nvidia?

September 18, 2014 | 11:43 PM - Posted by Mandrake

The 970 seems like a sucker punch aimed directly at AMD. Nvidia is happy to aggressively price when they feel it works in their favour.

September 19, 2014 | 09:55 AM - Posted by Anonymous (not verified)

Economy 101.

September 18, 2014 | 11:39 PM - Posted by Robogeoff (not verified)

I'm impressed with the overclockability of Maxwell. Although, I wonder if watercooling will make much difference.

September 18, 2014 | 11:45 PM - Posted by Anonymous (not verified)

that is a very, VERY good point! this might mean that it will be much easier to get into x2 and x3 SLI!...... Ryan, how in the WORLD did you get your hands on PrecisionX 16!?! EVGA still won't let people download 15..... please send me a download link :). Best article and video out about this great topic! thanks

September 19, 2014 | 12:05 AM - Posted by Ryan Shrout

Thanks for the compliment but sorry, can't share PrecisionX! :)

September 20, 2014 | 02:15 PM - Posted by Polycrastinator (not verified)

It became available for download a couple of days ago. I installed it on my PC last night.

September 19, 2014 | 12:01 AM - Posted by RGSPro (not verified)

It doesn't really look like it has much of a margin over the 780 Ti from what I can tell. Just less power usage? A lot of the comparisons are just to the standard 780. Am I missing something here, or is this not really an upgrade from a 780 Ti (well, other than the plethora of DisplayPort connections)? The biggest advantage is the extra 1GB of memory I guess, so it should do a bit better at 3840x2160.

September 19, 2014 | 12:05 AM - Posted by tbone8ty (not verified)

Double check the BF4 SLI 4K frame time graphs. Is orange AMD or Nvidia?

September 19, 2014 | 12:08 AM - Posted by Ryan Shrout

Yup, fixed!

September 19, 2014 | 07:09 AM - Posted by PapaDragon

All I can say is that Nvidia hit a home run to center field. Very impressive results!! The Maxwell architecture really flexed its muscle. The GTX 980 is a BEAST! Thanks Ryan for the great review, and thanks for adding SLI benchmarks and the video!!!

September 19, 2014 | 12:19 AM - Posted by Mark Harry (not verified)

I thought you were going to make donuts?

September 19, 2014 | 12:32 AM - Posted by General Lee (not verified)

Gotta hand it to Nvidia for making Maxwell so much more efficient on the same node. This gives some more hope that we'll finally get some real nice performance improvements in the high end too once the big chips come in, and when they move to 20nm.

The 970 seems like a fantastic deal right now, and certainly AMD has to start scrambling with their lineup. Tonga suddenly doesn't look that good and probably needs to go down in price very soon, especially if they're planning on a 285X that's unlikely to match a 970 in any area. Actually the same is true all the way up to 290X. Nice to see Nvidia put some actual pressure on the competition for once in terms of pricing. Kinda wish they had done the same with the 980, but I guess you take what you can get. I didn't dare to hope Nvidia would actually want to compete on price.

September 19, 2014 | 12:42 AM - Posted by Keith Jones (not verified)

Looking at reviews elsewhere, such as TechPowerUp, the benchmarks for the GTX 980 are a bit disappointing in certain games. For example, in Wolfenstein: The New Order, Crysis, and Crysis 3 there were only very small increases in frame rate.

Bioshock Infinite actually saw a decrease in frames at all resolutions. It was just a few frames, but all in all I am disappointed.

September 19, 2014 | 07:14 AM - Posted by Anonymous (not verified)

Wolfenstein is unoptimized crap, and the drivers are also immature.

September 19, 2014 | 09:17 AM - Posted by donut (not verified)

Isn't Wolfenstein capped at 60fps?

September 19, 2014 | 09:30 AM - Posted by Merlynn (not verified)

It also lacks multi-GPU support, not to mention all the performance issues.

September 19, 2014 | 06:17 PM - Posted by aparsh335i (not verified)

"very small increases" compared to what?

September 19, 2014 | 12:55 AM - Posted by Anonymous (not verified)

Any chance we can see a color compression comparison?

Kepler, Maxwell, Tonga, Hawaii.

September 19, 2014 | 01:55 AM - Posted by Ryan Shrout

How do you mean? Color compression on the memory bus is always lossless, so there should be no variance in image quality.
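
To illustrate the point: a toy sketch of delta encoding (emphatically not NVIDIA's actual scheme) showing how a compressor can shrink framebuffer traffic while decoding bit-exactly, which is why image quality cannot vary.

```python
# Toy sketch of lossless delta encoding. NOT NVIDIA's real compression
# scheme; it only demonstrates a bit-exact round trip.
def delta_encode(pixels):
    # Keep the first value, then store only neighbor-to-neighbor differences.
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

row = [118, 118, 119, 119, 120, 118]       # neighboring pixels are similar
encoded = delta_encode(row)                # mostly tiny values -> fewer bits
assert delta_decode(encoded) == row        # round trip is bit-exact: lossless
print(encoded)                             # [118, 0, 1, 0, 1, -2]
```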

September 19, 2014 | 02:33 AM - Posted by Anonymous (not verified)

So many graphs!
My eye holes were completely unprepared. You should have warned us about that or something.
But overall, fantastic review.

September 19, 2014 | 03:50 AM - Posted by Anonymous (not verified)

Why is the Frame Rating explanation comparing the 7970 and GTX 680? Am I missing something?

September 19, 2014 | 04:29 AM - Posted by Anonymous (not verified)

That's the generic explanation of framerating and how to interpret the data. It's been recycled throughout all the reviews since then, as it works the same for all GPUs.

September 19, 2014 | 09:15 AM - Posted by Ryan Shrout

That's correct - thanks for commenting for me!

September 19, 2014 | 04:31 AM - Posted by Prodeous

Any plans to update the benchmarks with some CUDA or OpenCL testing?

CUDA-wise: Blender?
OpenCL: LuxMark?

Or any other non-gaming benchmarks.

September 19, 2014 | 04:36 AM - Posted by Anonymous (not verified)

It is great to see a high-performing card not pushing the limits of power consumption and cooling. I would caution against assuming that all of these GPUs will be able to reach such high clock speeds, though. If you significantly reduce the thermal limitations, you will bump up against another limitation quite quickly, and these GPUs are already clocked high and the voltage is already high. We may see overclocks vary considerably, as we do with some CPUs.

I don't care much about power consumption in a desktop system, and most high-end PC gamers don't seem to care either. I only really care about load power consumption from a noise perspective, as long as it isn't too ridiculous; I don't need a space heater. The increased performance and new features are more of a selling point as far as I am concerned. I think my next system should support hardware HEVC decode/encode. I may be getting a new laptop before I build a new desktop, so will the 980 GPU make a good laptop part? It seems that a lot of the performance comes from the high clock, which may not be doable within notebook thermal limits. The wider chips may actually do better if you push them down to much lower power.

September 19, 2014 | 04:38 AM - Posted by raghu78 (not verified)

The GTX 970 is the real game changer. It obsoletes AMD's R9 290 series and embarrasses them in perf/watt. AMD will have to price the R9 290 at $279-$299 and the R9 290X at $379-$399.

September 19, 2014 | 05:07 AM - Posted by Anonymous (not verified)

Sorry, but that is a stupid statement to make; how can this card render another card obsolete?
It merely makes the argument to purchase the R9 series a little harder.

September 23, 2014 | 06:33 PM - Posted by Casecutter (not verified)

This review has tempered my impression of the 970's performance against a 290; they seem to spar more evenly in this data than in other reviews. I've seen other reviews that more often make the 970 look as strong as a 290X. While the power reduction is noteworthy, by Ryan's numbers isn't the 290 using something close to 25% more? It's good, don't get me wrong, but I'd say a nicely overclocked custom 290 for $300 or less is still in the fray.

September 19, 2014 | 05:14 AM - Posted by gloomfrost

Thanks for the review Ryan. Seems like the 970 will be a lot more popular; myself, I just bought a fresh new 980 from Newegg. I am upgrading from a 560 Ti, no joking. Think I'll see any difference? ;)

September 19, 2014 | 05:30 AM - Posted by Osjur (not verified)

Thank you for testing CF vs SLI. So it seems the XDMA engine shows its strengths, and 290X CF is practically tied with 980 SLI. So "upgrading" would be pointless if you own a 290X CF setup like I do, other than getting lower power consumption.

September 19, 2014 | 05:37 AM - Posted by Ramos (not verified)

Ryan, if you guys don't mind ofc, would you kindly test the HDMI 2.0 port with an actual TV, e.g. a UE55HD6900, for 4K (really UHD) desktop use?

I would appreciate that so much. Maybe keep one 970/980 fixed and then include like 4 different UHD TVs, all over 40 inches ofc, as normal monitor distance is where 4K/UHD really shines in desktop usage. I.e., using a 40-65 inch display from 2 feet max is where you can really use the UHD and not feel like reaching for your magnifying glass.

Great job as usual!

September 19, 2014 | 09:17 AM - Posted by Ryan Shrout

We are working on getting in an HDMI 2.0 display currently!

September 19, 2014 | 02:30 PM - Posted by Anonymous (not verified)

Seems the aftermarket cards that don't use the reference PCB have HDMI 1.4a instead of HDMI 2.0. Even if they have the same or a similar connector layout as the reference card, they use HDMI 1.4a.

So all of them:
MSI
Zotac
Gigabyte
EVGA
etc.

If you want HDMI 2.0 for 4K@60Hz you need to get the reference model, or else you'll be limited to 4K@30Hz.

With all the 4K monitors coming out, this should be pointed out.
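
The bandwidth arithmetic behind that 4K@60Hz vs. 4K@30Hz limit, as a back-of-envelope sketch (raw pixel data only; real links also carry blanking intervals, so true requirements run somewhat higher):

```python
# Back-of-envelope: why 4K@60Hz needs HDMI 2.0.
# Effective data rates after 8b/10b encoding:
HDMI_1_4_GBPS = 8.16    # of 10.2 Gbps total TMDS
HDMI_2_0_GBPS = 14.4    # of 18.0 Gbps total TMDS

def pixel_gbps(w, h, hz, bpp=24):
    return w * h * hz * bpp / 1e9

for hz in (30, 60):
    need = pixel_gbps(3840, 2160, hz)
    verdict = "fits HDMI 1.4" if need < HDMI_1_4_GBPS else "needs HDMI 2.0"
    print(f"4K@{hz}Hz: ~{need:.1f} Gbps -> {verdict}")
# 4K@30Hz: ~6.0 Gbps; 4K@60Hz: ~11.9 Gbps
```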

September 19, 2014 | 06:21 PM - Posted by aparsh335i (not verified)

Umm.... or you can just use DisplayPort, right?

September 19, 2014 | 07:14 PM - Posted by Anonymous (not verified)

He's talking about Samsung UHD TVs. They don't use DisplayPort; they all have 4x HDMI 2.0.

September 19, 2014 | 05:49 AM - Posted by Just Me (not verified)

Sorry, but I think the memory interface is just not enough for a high-end product. There seems to be effectively no progress over the last generation. I guess the next higher NVIDIA GPU will have more bandwidth.

Otherwise it looks great.

September 19, 2014 | 09:18 AM - Posted by Ryan Shrout

I too was worried about the 256-bit memory bus, but clearly the performance is able to keep up. Dismissing a part because of an arbitrary spec rather than its performance doesn't make sense.

September 20, 2014 | 07:32 PM - Posted by arbiter

TechPowerUp tested the 970 in SLI, and it kept up pretty well with the 295X2. Yeah, at the highest resolutions like 4K and 5760x1080 the 295X2 was generally faster, but the 970s weren't that far off; most results were around a 10% difference. So even with a 256-bit bus, and half the memory path, it uses it well.

September 19, 2014 | 05:52 AM - Posted by DenzilMonster

Just bought a 4790K with an H75 cooler to go with my MSI 970 Frozr, or whatever its name is. I also bought a 27-inch IPS screen; when G-Sync comes out on IPS screens, this one can find a new home. I too have been waiting on these new cards. Your show is the reason I bought the 970. A big hug over Frame Rating too, for forcing all vendors onto the new path of not putting out junk drivers etc. Great work, PC crew.

September 19, 2014 | 07:08 AM - Posted by Anonymous (not verified)

Still rocking the awful green logo that throws off my theme. Guess I'll get ACX again, which sucks because they overheat very quickly on air when sandwiched.

September 19, 2014 | 07:09 AM - Posted by Anonymous (not verified)

Edit: on tri-SLI.

September 19, 2014 | 07:41 AM - Posted by Anonymous (not verified)

I'll wait for the real Maxwell, the GM110, and even then you have to wait an additional year because the first batch isn't the flagship; the Tis are.

The GPU market is becoming a joke, not to mention the quality of games makes it not even worth spending the money. Hell, even my 780s are not being utilized to their full extent because all these shit games are running on old/unoptimized engines.

September 19, 2014 | 07:42 AM - Posted by GutsNotDead (not verified)

A clean victory for Nvidia, definitely! It's been a long time since Nvidia was the clear winner on all fronts with a new GPU product release.

September 19, 2014 | 09:58 AM - Posted by Anonymous (not verified)

I see only one player on the field and you already call victory? Do you have information we don't have?
Or are you comparing Nvidia's 2014 arch against AMD's 2011?

September 19, 2014 | 09:20 AM - Posted by Anonymous (not verified)

Any news on when they expect these cards to go on sale?

September 19, 2014 | 09:33 AM - Posted by Anonymous (not verified)

Now.

EVGA had them in stock a couple of hours ago, but they're now out of stock.

September 19, 2014 | 10:08 AM - Posted by Topjet (John) (not verified)

I see how you are, Ryan, not saying anything Wednesday night, lol. The 970 looks to be the sweet spot for most. Anything about a 960?

September 19, 2014 | 10:47 AM - Posted by Ryan Shrout

Sorry, I couldn't say anything! :)

September 20, 2014 | 04:26 PM - Posted by arbiter

Ryan, we knew what was up on Wednesday when the 900 series story popped up and you excused yourself from talking about it, along with Allyn. So most of us in the chat knew for sure you had the cards there; it's not like those boxes sitting there didn't confirm it either.

September 19, 2014 | 10:12 AM - Posted by BBMan (not verified)

The thing I'm impressed with is the reduced power demand. I'm actually a little surprised at the higher 970 idles, but.... I've heard the 390X is going to be about 300W. It also may not hit the market until 12/1. If it doesn't totally own this ....

September 19, 2014 | 10:39 AM - Posted by Shambles (not verified)

What? No DP 1.3? They've had a whole 4 days since it was officially released!

September 19, 2014 | 12:59 PM - Posted by Polycrastinator (not verified)

So, I have a single 760 right now. Get a second 760 and SLI with the price drop, or save for another month or two and get a 970?

September 19, 2014 | 03:19 PM - Posted by Hübie

Ryan? What's the exact setting in your Catalyst? Was frame pacing active? I'm curious about the frame times with the 290X. I mean: AMD has XDMA, frame pacing on/off capability, and Mantle (BF4/Thief/Sniper Elite 3). So why are they so bad? That's hilarious, since they tweeted a lot about it, and as I see it nothing has dramatically changed until now.

In addition: a very nice review (as always), and thanks for the hard work. Can you tell us something about the voltage range? Is it ~1.2 volts again?

September 19, 2014 | 03:20 PM - Posted by ZoA (not verified)

There is much talk about Maxwell's new features, but most of them are not really all that new or remarkable. Consequently, IPC did not significantly increase compared to Kepler, so performance per transistor per clock is only slightly improved. What makes Maxwell really notable is its ability to reach previously unseen high frequencies at remarkably low voltage.

I'm completely lost as to how it does that, and can't find any explanation in any of the reviews. This kind of effect I would expect from a node shrink, or from implementation of advanced transistor types such as FinFET or FD-SOI, but Maxwell is made on standard bulk planar 28nm FETs, so I don't understand how Nvidia is able to make them stable at such high frequencies at such low voltage.

If someone has an explanation, please tell me.
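
For reference, the first-order relation behind this whole discussion is the CMOS dynamic power equation, P ≈ C·V²·f: frequency costs power linearly, but voltage costs it quadratically, which is why high clocks at low voltage are such an efficiency win. A sketch with illustrative numbers (not actual GM204 figures):

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f (illustrative only).
def rel_power(v_ratio, f_ratio):
    return v_ratio ** 2 * f_ratio

print(f"{rel_power(1.0, 1.2):.2f}x")   # +20% clock, same voltage: 1.20x power
print(f"{rel_power(1.1, 1.2):.2f}x")   # +20% clock, +10% voltage: 1.45x power
```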

September 19, 2014 | 06:57 PM - Posted by aparsh335i (not verified)

[Troll]
Alien Technology Bro!
[/Troll]

September 19, 2014 | 05:01 PM - Posted by MarkT (not verified)

The stability at lower voltage and higher clocks is easy to explain, because Ryan already explained it in the article: low-voltage experience from mobile SoC products has helped Nvidia innovate in areas where AMD has no experience yet (if ever). Nvidia trying to squeeze water out of the rock that was Tegra, and drip it into Maxwell.

September 19, 2014 | 05:33 PM - Posted by ZoA (not verified)

I don't think that "mobile SoC experience" is an adequate explanation. Mobile chips sacrifice frequency to get minimal voltage, and being optimized for such low voltages and frequencies, they don't handle high frequencies well.

Maxwell behaves differently, and handles high frequencies excellently.

Besides, even if "mobile SoC experience" is the answer, it tells us nothing about what concrete technical feature imported from SoCs is providing the effect.

September 19, 2014 | 06:58 PM - Posted by aparsh335i (not verified)

In all seriousness, I think it had something to do with the 192 cores per SM versus the now 128?

September 20, 2014 | 04:33 AM - Posted by Anonymous (not verified)

Fantastic review. The best I have seen. WOW. Much time.

September 20, 2014 | 09:46 AM - Posted by Dark_wizzie (not verified)

From the benchmarks it seems obvious to me that the game is CPU-bottlenecked. The highest FPS is outside of combat. During combat the FPS all drops to around 100fps, no matter whether it's 1440p or 4K. This is my experience as well, from looking at CPU/GPU usage while playing Skyrim. Since it's limited to 2 cores, a highly overclocked 4-core is basically the way to go. However, with something like ENB, I wonder if that'll shift enough of the burden toward the GPU to the point where a better GPU solution actually does something.

September 20, 2014 | 09:47 AM - Posted by Dark_wizzie (not verified)

I am talking about Skyrim, BTW, in my above comment.

September 20, 2014 | 12:13 PM - Posted by Stangflyer (not verified)

Ryan, are you planning on testing multi-monitor in the near future?
Below is my question.

For example: I have my Sapphire 7950 Flex cards in CrossFire. I play Eyefinity games at 5920x1440. My 3 screens are 2x 1680x1050 (16:10) for the outside screens, and my U2711 at 2560x1440 (16:9) in the middle. All are DP monitors.

I do this because, for the games that do not support multi-monitor, I like the bigger 2560x1440 screen.

It would be great to know if Nvidia has updated Surround capabilities to match AMD's!

September 21, 2014 | 06:59 AM - Posted by Cannonaire

I'm happy to see Nvidia endorse downsampling in the form of a supported feature. I'm curious about the downsampling filter they use, though - a 13-tap Gaussian filter should produce a decently sharp image without ringing, but is there any word on whether or not it is gamma-aware? That last detail is important when downsampling, particularly for high-contrast details.
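
For anyone wondering what gamma-aware means in practice, here's a minimal numpy sketch (my own illustration, not NVIDIA's DSR filter): the averaging is done in linear light rather than on gamma-encoded values, which is exactly what preserves high-contrast detail. A 2x2 box filter stands in for the 13-tap Gaussian, and a 2.2 power curve approximates the sRGB transfer function.

```python
import numpy as np

# Sketch of gamma-aware 2x downsampling; assumptions noted in the lead-in.
GAMMA = 2.2

def box_2x(img):
    # Average each 2x2 block of the image.
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4

def downsample_naive(img):
    # Averaging gamma-encoded values directly: darkens high-contrast detail.
    return box_2x(img)

def downsample_gamma_aware(img):
    # Decode to linear light, average there, then re-encode.
    return box_2x(img ** GAMMA) ** (1 / GAMMA)

# Worst case: a black/white checkerboard (encoded values in 0..1).
checker = (np.indices((4, 4)).sum(axis=0) % 2).astype(float)
print(downsample_naive(checker)[0, 0])        # 0.5  -> displays too dark
print(downsample_gamma_aware(checker)[0, 0])  # ~0.73 -> correct mid-gray
```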

September 21, 2014 | 09:09 PM - Posted by Shaun (not verified)

Hi,

I have a request: benchmark Skyrim with ENB at full quality.
RealVision option A at full quality is a good video card destroyer!

My system is an i7 4820K @ 4.5GHz and a 290X.
Skyrim at 4K without ENB is 45-50fps;
ENB at full quality is 17-19fps.

Can you set up a Skyrim ENB benchmark for reference from now on?
I'm very interested in your benchmarks for Skyrim ENB with the 290X, 780 Ti, GTX 970 and GTX 980.

I know it's a lot of work, but please please please! hehehe

September 21, 2014 | 09:12 PM - Posted by Shaun (not verified)

Ohhh, if you do, please add the unofficial HD textures and Flora Overhaul; that hurts performance even more! Makes the game so beautiful to play...

September 23, 2014 | 10:19 PM - Posted by Gisondinator (not verified)

I have been watching your channel for a long time now. I would like to say that I enjoy the thorough way in which you benchmark every card. That being said, I would guess that 98% of PC gamers play at 1080p. I'm wondering why you test such high resolutions? I'm sure you have 1080p benchmarks on another page. I just feel raked over the coals with G-Sync and 4K. I'm tired of forking over thousands for small increases in performance. This bleeding edge is making my wallet bleed!

September 26, 2014 | 10:53 PM - Posted by matt (not verified)

I agree with Shaun, the RealVision ENB would make a great benchmark test. With the RealVision ENB on Skyrim I had an average of 15fps in the open area outside Whiterun, and an average of 20-35fps elsewhere, on a Gigabyte R9 290X OC 4GB. I'm not sure if that caused it, but my card eventually broke with overheating issues and wouldn't load into Windows, just a black screen with fans spinning full speed after the POST. So I RMA'd that card, got a credit refund, and bought the MSI GTX 970 4GB, which I'm waiting on to arrive along with a new motherboard.

So I think it would make a great benchmark, as it really pushes the GPU, not so much the CPU, and Skyrim with mods normally uses up to 4GB of VRAM.

Anyway, thanks

September 28, 2014 | 09:00 AM - Posted by Br00ksy (not verified)

Awesome review. SLI power consumption numbers for dual 980s are hard to come by, and you, sir, have slain my doubts about overdrawing an 850W power supply. THANK YOU!!! :D

October 2, 2014 | 12:39 AM - Posted by bazzman (not verified)

I'm going to build this system:
i7 4790K
SLI GTX 970s
16GB RAM, 1600MHz

Is a 630 watt PSU sufficient to run in SLI? If yes, can I also overclock?

October 3, 2014 | 03:22 AM - Posted by Anonymous (not verified)

630W? Are you using a brand-name PSU?

Don't trust PSUs that come preinstalled with a case.

I would think 630W would be buggy for SLI;
you want at least 25% to spare. I'd say at least a 750W.

Though my Cooler Master ran 2x R9 280s fine, but that was with me slightly underclocking my CPU, so allow for that.

Just make sure the PSU is a quality one.
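
To put rough numbers on that headroom rule, a sizing sketch using typical rated figures (my estimates, not measurements from this review):

```python
# Rough PSU sizing sketch for the proposed build. Wattages are typical
# rated figures (assumptions), not measurements from this review.
gpus_w = 2 * 145    # two GTX 970s at ~145 W TDP each
cpu_w = 88          # i7-4790K stock TDP
rest_w = 75         # motherboard, RAM, drives, fans (rough allowance)

load_w = gpus_w + cpu_w + rest_w
psu_w = 630
print(f"Estimated load: {load_w} W on a {psu_w} W unit "
      f"({1 - load_w / psu_w:.0%} headroom)")
# ~453 W -> ~28% headroom at stock; overclocking the GPUs and CPU eats
# into that quickly, which is why a 750 W unit is the safer call.
```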

December 2, 2014 | 04:20 AM - Posted by Riaz (not verified)

Can I run a GTX 970 on my Intel DH61HO motherboard??

December 5, 2014 | 03:02 PM - Posted by Garry (not verified)

It's obvious, Ryan, that you have put heaps of time into this (well done mate), but as someone wanting to build a rig to use on a big TV, I'm holding back until I can get my head around the 4K TV vs PC gaming output thing.

HDMI and 4K is my worry. I'll be buying a big (thinking 65") TV, with 4K only for the gaming. It'll do service as a normal TV too, but in Australia it'll be obsolete before we see 4K content on the air! So that leaves gaming.

Is a big 4K TV a good option for high resolution gaming? Or are there land mines hidden in the HDMI 1.x/2.x specs that'll catch out the unaware? It'd certainly look better than 3 monitors.

It looks like the 980 will push BF4 to 4K @ ~30fps, but is that enough, or is SLI to get 45-60fps needed for it to be a pleasure to play?

A pair of watercooled 290Xs in CrossFire would have to be an option: quiet, yet in the running on fps. OK, it uses more power, but the purchase price difference buys a lot of electricity, unless water cooling costs a bomb!

January 26, 2015 | 04:32 AM - Posted by Anonymous (not verified)

Why are the specific settings not disclosed?
That's pretty much benchmarking 101, and things like AA & AO can make a massive difference.

December 2, 2015 | 09:24 AM - Posted by Grover3606 (not verified)

While this is the only place I have seen that has benchmarked Skyrim in 4K with a 970 (and thank you very much for that!), what settings did you use? You post the settings you used at the top, but did the 970 really pull ~50 fps at 4K with 8x AA and Ultra? I find that hard to believe. I have a new 4K Samsung and really just want to play Skyrim in vanilla 4K, no need for AA, and I'm trying to decide if a 970 is enough.
