NVIDIA GeForce GTX 680 2GB Graphics Card Review - Kepler in Motion

Author: Ryan Shrout
Manufacturer: NVIDIA

GPU Boost

One of the most interesting, and likely the most controversial, new features of the GeForce GTX 680 and the Kepler GPU is called GPU Boost.  Think of GPU Boost as the GPU equivalent of Intel's Turbo Boost technology: it allows the clock speed of the GTX 680 to increase when there is thermal and power headroom for it.

Before going over what GPU Boost does, it helps to have some background on how GPU clock speeds are selected today.  In general, GPU clocks have been set for the worst-case scenario so that a vendor or manufacturer could guarantee stability of the part in all games.  If one game or application (like the 3DMark 11 pictured here) uses 160 watts of power at 1000 MHz, and that is generally considered the worst case by the GPU designer, then the clock is set there.

Last year both AMD and NVIDIA started to change this by taking applications like Furmark and OCCT out of the equation, limiting the amount of power the GPU could draw in them.  AMD did this in hardware and NVIDIA in the driver, but both vendors saw the benefit of removing the Furmark outlier from the decision process when setting clock speeds.

Games use less power than Furmark, OCCT, and even 3DMark 11, and some games use much less power than others.  The above image shows a theoretical clock speed of 1.0 GHz, a voltage of 1.0v, and power consumption numbers for three applications - 3DM11, Crysis 2, and Battlefield 3.  Using 160 watts as the target power consumption for this specific card, this is the clock speed NVIDIA would set without GPU Boost technology.

With a clock speed of 1.05 GHz and a voltage of 1.1v, you can see that power consumption changes (as would performance).  3DMark 11 is now outside the 160 watt limit NVIDIA wanted to set, though Crysis 2 and BF3 are still within safe ranges.

At 1.1 GHz and 1.2v, this imaginary card is now outside the 160 watt limit in both Crysis 2 and 3DMark 11 but is still within spec on BF3. 
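
To put rough numbers behind these slides, a common first-order approximation is that dynamic power scales linearly with clock speed and with the square of voltage.  The short Python sketch below applies that rule of thumb to the hypothetical 1.0 GHz / 1.0 V scenario above; the baseline wattages we assign to Crysis 2 and Battlefield 3 are our own illustrative guesses (only the 160 watt 3DMark 11 figure comes from the example), so treat the output as a demonstration of the scaling, not as measured data.

```python
# First-order dynamic power model: power scales roughly linearly with
# clock and with the square of voltage. The per-game baseline wattages
# below are illustrative assumptions, not NVIDIA's measured figures.
def scaled_power(p_base_watts, f_ghz, v, f_base_ghz=1.0, v_base=1.0):
    return p_base_watts * (f_ghz / f_base_ghz) * (v / v_base) ** 2

POWER_LIMIT_W = 160  # the target board power used in the article's example

# Hypothetical power draw of each application at the 1.0 GHz / 1.0 V baseline.
baseline_watts = {"3DMark 11": 160, "Crysis 2": 120, "Battlefield 3": 95}

for f, v in [(1.00, 1.0), (1.05, 1.1), (1.10, 1.2)]:
    for app, p0 in baseline_watts.items():
        p = scaled_power(p0, f, v)
        status = "within limit" if p <= POWER_LIMIT_W else "OVER limit"
        print(f"{app:14s} @ {f:.2f} GHz / {v:.1f} V -> {p:5.1f} W ({status})")
```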

The goal of GPU Boost is to allow the clock speed of the GPU to scale based on the application being run, stay within the power consumption tolerance specified by the manufacturer, and run stably without user intervention.

To accomplish this, the Kepler GPU and graphics cards need to be able to monitor power consumption, temperature, and GPU utilization, and then use that data to alter clock speeds and voltages on the fly.  NVIDIA is keeping very quiet about the specific algorithm at work here, but the idea is simple enough for users to understand: if there is thermal and power headroom and the GPU is nearly fully utilized, increase the clock and voltage until the card approaches the specified limits.

This all happens fairly quickly: power and temperature are polled essentially continuously, and the clocks and voltages respond within about 100 ms.
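
NVIDIA won't share the algorithm itself, but the behavior it describes can be sketched as a simple control loop: sample power, temperature, and utilization, then step the clock and voltage up or down so the card stays just under its limits.  The Python below is purely our own illustration of that idea; the sensor callbacks and the clock/voltage steps are hypothetical stand-ins, not Kepler's actual boost bins.

```python
import time

# Hypothetical (clock MHz, voltage V) states from the base clock upward.
# These steps are illustrative; the actual Kepler boost bins are not public.
STATES = [(1006, 1.00), (1032, 1.05), (1058, 1.10), (1084, 1.15), (1110, 1.20)]
POWER_LIMIT_W = 160
TEMP_LIMIT_C = 95
UTIL_THRESHOLD = 0.95      # only boost when the GPU is nearly fully utilized

def boost_controller(read_power_w, read_temp_c, read_utilization, set_state):
    """Toy GPU Boost loop: the four callbacks are hypothetical stand-ins
    for the card's on-board monitoring and clock/voltage control."""
    level = 0
    set_state(*STATES[level])
    while True:
        power, temp, util = read_power_w(), read_temp_c(), read_utilization()
        over_limit = power > POWER_LIMIT_W or temp > TEMP_LIMIT_C
        if over_limit and level > 0:
            level -= 1                 # back off toward the base clock
        elif not over_limit and util > UTIL_THRESHOLD and level < len(STATES) - 1:
            level += 1                 # headroom available: step up one bin
        set_state(*STATES[level])
        time.sleep(0.1)                # ~100 ms response time, per NVIDIA
```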

Here is a different view of the GPU Boost feature.  With power on the Y axis and a series of unnamed games on the X axis, you can see that the single game that utilizes the GPU most heavily and consumes the most power typically sets the base clock speed.  All the way on the left, however, is another game that falls well under the power limit and could be running at higher speeds.

Each of these titles has a different amount of power headroom to expand into, but the goal of GPU Boost is to get as close to the card's thermal and power limits as possible to enable the best GPU performance.

With the Y axis now showing clock speed rather than power, you can see that for most titles the GPU clock rises some amount above the base clock set by the game on the far right, a capability made possible by GPU Boost's ability to raise clocks on the fly.

There are plenty of questions that go along with this new technology, including whether users will have the ability to disable the feature.  NVIDIA has decided not to allow that, simply stating that "this is the way the card works now."  The obvious fear is that reviewers and gamers would turn the technology off to get an "apples to apples" comparison of the GTX 680 to the HD 7970; NVIDIA's stance is that GPU Boost is part of the GPU and should be included regardless.  We are going to keep looking for a way to measure the actual benefits of GPU Boost on our own, although that will take some time.

Another question you surely have is "how much will the clock boost?"  The GTX 680 will be marketed with two clock speeds: the "base clock" and the "boost clock," with the base clock being exactly what you are used to.  The base clock is the speed at which the GPU will run in the worst case scenario, a game or application that utilizes the GPU to its fullest.  The boost clock is the GPU clock while running a "typical game," though what a typical game is isn't really defined.  Because there is such a wide array of PC games on the market, and because that clock can change even from scene to scene in the same game, it will be hard to say with any certainty what speed you will be running at without checking for yourself.
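
For a concrete reference point, NVIDIA's rated figures for the reference GTX 680 are a 1006 MHz base clock and a 1058 MHz boost clock, which works out to an advertised "typical game" uplift of roughly 5 percent; the trivial snippet below just does that arithmetic.  Keep in mind that the clock a given scene actually settles at can land above or below that rated boost figure.

```python
base_clock_mhz = 1006    # GTX 680 rated base clock
boost_clock_mhz = 1058   # GTX 680 rated "typical game" boost clock

uplift = (boost_clock_mhz - base_clock_mhz) / base_clock_mhz
print(f"Advertised boost over base clock: {uplift:.1%}")   # about 5.2%
```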

This also means that performance will vary in several ways from user to user.  If your GPU is in an open test bed versus a poorly cooled chassis, for example, your GPU clock could swing one way or the other.  Also, due to manufacturing tolerances, two different GTX 680s could perform differently in the exact same environment.  NVIDIA claims there will only be a 1-2% variation from user to user, though having tested only a single GPU ourselves we can't really judge the accuracy of that statement.  Will consumers complain if their specific card runs slower than their friends'?  We aren't sure yet, but I know we will be finding out very soon.

More examples of NVIDIA's GPU Boost at work are available on our overclocking page.

March 22, 2012 | 09:30 AM - Posted by D1RTYD1Z619 (not verified)

After all these years it looks like I'm switching to Nvidia. BUY AMERICAN DAMN IT!!! YEEEEHAAAAWWWWWW jk

April 15, 2012 | 03:25 PM - Posted by Anonymous (not verified)

I'm a little disappointed in Batman AC at 1080p rez.

Nvidia dips way down to a laggy 25fps while AMD is in the 40s.

Average frame rates are a little higher but not enough to notice during gameplay. You will notice a lag spike to 25fps though.

They say win GTX 680 but I'm thinking it's really a win for AMD.

Would rather have high min fps and average 55fps vs averaging 60 fps and having lag spikes like that.

March 22, 2012 | 09:39 AM - Posted by Tom E (not verified)

Hey, thanks for the very detailed review. I'm a gamer and try to upgrade my rig from time to time. Still using a pair of AMD 5870's and thinking/hoping to do an upgrade sometime. I've been thinking AMD 7970, but open to Nvidia too.

So I'll keep watching for more. Looking forward to Josh's review of the MXI 7970R.

Thanks for the good show.

March 22, 2012 | 10:22 AM - Posted by Josh Walrath

That is... a lot of green squares in those NVIDIA provided diagrams. They kinda burn my eyes. But in a good way.

March 22, 2012 | 10:32 AM - Posted by Tim Verry

o wow, you're right! Didn't look at that image when editing, but now that I have I think I'll be seeing green NVIDIA logo-shaped sun spots today! :P

March 22, 2012 | 12:02 PM - Posted by Tullan (not verified)

Awesome! Triple monitor benches and a performance/dollar increase. Any idea why the 7970 and 7950 are dead even at 5760x1080? Do they have some common limiting factor at that res? This is going to make for a tough choice if AMD drops their pricing to be competitive with this!

March 22, 2012 | 12:09 PM - Posted by AParsh335i (not verified)

It's about time they paid attention to what we want :)

March 22, 2012 | 12:40 PM - Posted by william snell (not verified)

I'm just wondering what the temp is and what the fan is running at.

March 23, 2012 | 10:40 AM - Posted by Ryan Shrout

Well the temp was listed as 81C, is that what you needed?

March 22, 2012 | 12:43 PM - Posted by Oskars (not verified)

I checked some benchmarks around the web. This card has an almost nonexistent GPU compute portion; it is even worse than the GTX 570 in GPU rendering, and worse still, it is more than 2 times slower than the Radeon HD 7870. And that doesn't surprise me, with only 3.5 billion transistors compared to the previous 3.2 billion.
This is a gaming card only; don't make yourself a fool and try to use this as a card for doing some work that pays you back with some green bucks. :P
I may be a bit harsh, but mostly it is true.

August 13, 2012 | 04:16 PM - Posted by Anonymous (not verified)

That's why there are gaming cards and cards for rendering. This is a gaming card review, your comment is unnecessary.

March 22, 2012 | 02:12 PM - Posted by pdjblum

Great review, as usual. A little disappointed by the mins, but generally terrific absolute performance despite the low power consumption and reasonable temps. And nvidia is being reasonable with their pricing as well. Have been waiting for this for my son's rig, which is presently running two 460 1gb hawks in sli. At that price, with this performance, we are probably going to pull the trigger if they come back into stock at newegg at the same price.

Update: EVGA came back in stock and we pulled the trigger. Glad I had a $250 gift card from way back that I was saving for just such an occasion.

March 22, 2012 | 01:48 PM - Posted by Hacksaw777 (not verified)

Good review, been waiting on a good upgrade to my GTX 560's in SLI. This should be a great card with the low power draw. I love my 560's, but in Battlefield the random drops to 30 fps because of SLI have been really annoying. I wonder if anyone has figured that out yet. (Ryan posted about this back in December I think) Doesn't matter, as I just picked up the MSI card from Newegg. As I was reading the article I kept refreshing Newegg, and it jumped in stock and I was able to get one. I can't wait; I ordered it next day for a total of $523.22, not bad. (Glad I got in before the price gouging begins) I am sure glad Nvidia did better on their pricing this time around. I was holding off on getting a new card till these came out to see which was the better option. I have always liked Nvidia for their drivers and control panel over AMD/ATI. I am glad I waited. Now I just need to save for another one later on and get a triple 27" monitor setup and I will be golden.

March 22, 2012 | 02:15 PM - Posted by pdjblum

Hope the EVGA card (never bought EVGA before) is as good as your MSI. The EVGA became available for me. Congrats.

March 22, 2012 | 02:02 PM - Posted by Hacksaw777 (not verified)

One question, Ryan! Can you tell me what power supply will be required to run an SLI configuration on the new GTX 680?

March 23, 2012 | 10:41 AM - Posted by Ryan Shrout

The only requirement for a single GPU is a 550 watt power supply, so I am going to recommend a 750w for SLI.

March 22, 2012 | 02:15 PM - Posted by Hacksaw777 (not verified)

Damn, wish I could have gotten the EVGA version. I just saw it come back in stock, but I already got a tracking number for my shipment. I was going to change to the EVGA version.

March 22, 2012 | 02:20 PM - Posted by Tim (not verified)

Quick question, Ryan. Will the new card and drivers allow you to configure just two monitors for Surround? Currently Eyefinity lets you group two monitors together as a single display/resolution. Can you do this with the GTX 680?

Thanks!

March 23, 2012 | 10:42 AM - Posted by Ryan Shrout

No, they still require three displays.

Can I ask what use you have for a two panel configuration?

March 23, 2012 | 11:19 PM - Posted by Tim (not verified)

Thanks for the info. I have a very specific application in mind, which probably wouldn't apply to most people.

I've recently built a dual projector system for passive 3D. Having a 3840 x 1080p perfectly synced screen allows me to play full frame 1080p side by side 3d video. Doing this allows me to give each eye the full 1080p image.

The Eyefinity setup keeps the monitors perfectly in sync as long as I use two of the same outputs, Mini DisplayPort to HDMI for both. Mixing outputs introduced a very slight lag to one of the displays, which is barely noticeable.

There is some other software (Stereoscopic Player) which will play 3D video to two monitors for this purpose, but I had trouble getting everything to sync up.

I was hoping Nvidia would add this feature, as I greatly prefer their drivers for most games. There is one option: I could buy one of the Matrox TripleHead2Go adapters and use it with the Nvidia card, but obviously there is a little extra cost going that route. Thanks for the info!

March 22, 2012 | 02:36 PM - Posted by dagamer34 (not verified)

Seems AMD got thoroughly beat. Does this mean lower prices on the 7970?

March 22, 2012 | 03:40 PM - Posted by Oskars (not verified)

Nowadays it isn't that hard to make a GPU just for gaming. In raytracing rendering tasks it is surpassed even by the GTX 570, not to mention the Radeon HD 7870, which is more than two times faster.
It seems that Nvidia will be adequate for productivity work only with the GK110 die. Transistors will go from 3.5 to 6 billion, but CUDA cores only from ~1500 to ~2000, so a higher percentage of the transistors could be devoted to compute.

March 22, 2012 | 02:47 PM - Posted by Matt Smith

I must admit, I want this. But it's too much and my Radeon 5850 is still fine.

March 22, 2012 | 08:53 PM - Posted by LordHog (not verified)

Regarding the following item in the article:

"Because the software is already handling so much of the decoding process from DirectX, CUDA, OpenCL, and more NVIDA found it to be more power efficient to continue to increase the workload in the software rather than on the chip itself. Some items remain on die though because of latency concerns, such as texture operations."

How much of a burden does this place on the CPU? 1%, 5%, 10%, or more? Will the driver sense which CPU core is being utilized the most and offload this processing to a core that is not being utilized for the game (knowing that most games probably don't use more than two cores)?

March 23, 2012 | 10:43 AM - Posted by Ryan Shrout

I asked this question and was told that the move to the CPU was "very minimal" and that overall it was "the most efficient way" to get things done.

Though, like you, I wonder how much of this makes the GPU more efficient while making the CPU LESS efficient. If it were a big deal, though, it would have shown up in our power consumption testing (which monitors total system power consumption) or as drops in performance results.

March 23, 2012 | 11:08 AM - Posted by Josh Walrath

Minimal is probably a good term here. With the ability to lay hands on inexpensive multi-core CPUs that often are not stressed in any game at standard resolutions, this software overhead looks to be a non-factor. There might be some corner cases, but you would expect the bottleneck to be somewhere else.

March 22, 2012 | 09:30 PM - Posted by nabokovfan87

Ryan, I know you were extremely busy or rushed in your review; I hope you read this and consider it deeply. I read the review at work and am finally getting to comment. I really think the review is lacking in a few key areas, and not to say it is biased, but it leaves out some critical information.

1. The OC of the 7970 is VASTLY superior to the Nvidia card. The MSI Lightning stock OC goes to 1070, and adds 5-7 AVG FPS in gaming. When you OC that to around 1150 as just about every reference card has been able to do, it is another 5-7 AVG FPS. That isn't including the "good" cards that can OC to ~1300 and have 20-25% more frames over the stock ref designs.

2. The Skyrim bench especially, as well as Batman, appear to simply be games where Nvidia support during development clearly gives them the edge. It would be nice to see some other games added to the benchmarking roster.

Specifically, R.U.S.E., Alan Wake, Wings of Prey and even Crysis (not 2) would be some good games to add. The first 3 have some very nice "effects" settings which really push the high end cards. I have a 4850, the only one of those I can run well is crysis, but I have to turn everything down and middle of the road.

3. We ALL KNOW that the ATI 79xx cards are underclocked. To the point where the 7950 beats the 7970 with a slight OC. It is extremely interesting how Metro 2033 was the single benchmark where the ATI cards pulled ahead, but contrasting that with the skyrim results, it is clear as a reader that something isn't right here. BF3 has a slight nvidia edge, Batman and Skyrim appear to have a clear Nvidia edge, but it would seem that every other game might be a flip-flop type scenario, where one card does better on one thing and the others do things better on a different game. It appears to be a driver issue, and we all know that ATI has had dramatic driver issues with the 7 series, to the point where they barely came out weeks ago.

4. The speedboost, in my opinion, is a very poor "feature". I get the idea behind it, but it seems to be clearly for power savings only and seems very much like a marketing feature or something that would make sense on a laptop rather than a desktop. I would have loved to see a feature or setting where that can be disabled, and traditional OC is possible. I see it as a negative that this isn't available.

5. The OC being limited to around 2% is a very poor limit and shows just how much the turbo affects everything. I have seen the MSI Lightning 7970, as mentioned above, OC well beyond the 10-15% normal range, upwards of adding 20-25 AVG FPS to some games.

6. Even though we all know ATI is dropping prices, it is clear that the review was tilted in a way which made it seem like Nvidia was doing "a favor" by undercutting the price. If you add up all of the above, it isn't simply as clear as that. Whoever comes out first gets the high price; it always, always drops. I think if you revisit the price in a week, let alone a month, it will make a lot more sense, and pricing shouldn't be handled as a major factor when we have news that the competitor will be dropping prices in response. It comes off as irresponsible to the reader when I see that, or when I see things like that not mentioned in reviews.

It's a lot to digest, but I hope it makes sense.

I would love to see an "OC" comparison when you do SLI testing, as well as some new games added to the roster for testing.

Thanks for the review.
