NVIDIA GeForce GTX 680 2GB Graphics Card Review - Kepler in Motion

Author: Ryan Shrout
Manufacturer: NVIDIA

GPU Boost

One of the most interesting, and likely the most controversial, new features of the GeForce GTX 680 and the Kepler GPU is called GPU Boost.  Think of GPU Boost as the GPU equivalent of Intel's Turbo Boost technology: it allows the clock speed of the GTX 680 to increase when there is thermal headroom for it.


Before going over what GPU Boost does, it is crucial to have some background on how clock speeds are selected on GPUs today.  In general, GPU clocks have been set for the worst-case scenario so that a vendor or manufacturer could guarantee stability of the part in all games.  If one game or application (like 3DMark 11, pictured here) uses 160 watts of power at 1000 MHz, and that was generally considered the worst case by the GPU designer, then the clock would be set there.

Last year both AMD and NVIDIA started changing this by removing stress-test applications like Furmark and OCCT from the equation, limiting the amount of power the GPU could draw.  AMD did this in hardware and NVIDIA in a driver hack, but both vendors saw the benefit of excluding the Furmark outlier from their decision process when setting clock speeds.

Games use less power than Furmark, OCCT, and even 3DMark 11, and some games use much less power than others.  The above image shows a theoretical clock speed of 1.0 GHz, a voltage of 1.0v, and power consumption numbers for three applications: 3DMark 11, Crysis 2, and Battlefield 3.  Using 160 watts as the target power consumption for a specific card, these would be the clock speeds set by NVIDIA without GPU Boost technology.
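The numbers in these slides follow from how dynamic power scales in CMOS logic, roughly with frequency times voltage squared. Here is a minimal sketch of that relationship, using the hypothetical 1.0 GHz / 1.0v / 160 W worst case from the example above; the specific wattages are illustrative assumptions, not NVIDIA's actual model.

```python
def dynamic_power(base_power_w, base_clock_ghz, base_volts, clock_ghz, volts):
    """First-order CMOS approximation: power scales with (f/f0) * (V/V0)^2."""
    return base_power_w * (clock_ghz / base_clock_ghz) * (volts / base_volts) ** 2

# Worst-case app (3DMark 11) pinned at the 160 W limit at stock settings.
# Raising to 1.05 GHz / 1.1v pushes it past the limit...
print(dynamic_power(160, 1.0, 1.0, 1.05, 1.1))  # about 203 W: over the 160 W cap

# ...while a lighter game (assumed 120 W at stock) still has headroom.
print(dynamic_power(120, 1.0, 1.0, 1.05, 1.1))  # about 152 W: still under the cap
```

This is why a clock safe for Battlefield 3 can blow past the power limit in 3DMark 11: the per-application baseline differs even though the scaling law is the same.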


With a clock speed of 1.05 GHz and a voltage of 1.1v, you can see that power consumption changes (as would performance).  3DMark 11 is now outside the 160 watt limit NVIDIA wanted to set, though Crysis 2 and BF3 are still within safe ranges.


At 1.1 GHz and 1.2v, this imaginary card is now outside the 160 watt limit in both Crysis 2 and 3DMark 11 but is still within spec in BF3.

The goal of GPU Boost is to let the clock speed of the GPU scale with the application being used, stay within the power consumption tolerance specified by the manufacturer, and run stably without user intervention.


To accomplish this, the Kepler GPU and graphics cards need to monitor power consumption, temperatures, and performance utilization, then use that data to alter clock speeds and voltages on the fly.  NVIDIA is keeping very quiet about the specific algorithm at work here, but the idea is simple for users to understand: if you have thermal headroom and the GPU is nearly fully utilized, increase the clock and voltage until you are nearly at the specified limits.

This all occurs fairly quickly, with an effectively instantaneous polling interval and a 100 ms response time on clock and voltage changes.
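Since NVIDIA has not published the actual algorithm, the feedback loop described above can only be sketched in outline. The following is an illustrative simplification; the limits, step size, and top boost bin here are assumptions (only the 1006 MHz base clock is the GTX 680's real spec), and the real hardware doubtless does something more sophisticated.

```python
# Hypothetical constants for illustration; not NVIDIA's published values.
POWER_LIMIT_W = 160.0   # assumed board power target
TEMP_LIMIT_C  = 95.0    # assumed thermal limit
CLOCK_STEP    = 13      # assumed MHz granularity per boost bin
BASE_CLOCK    = 1006    # GTX 680 base clock, MHz
MAX_BOOST     = 1110    # assumed highest boost bin, MHz

def boost_step(clock_mhz, power_w, temp_c, utilization):
    """One iteration of a simplified GPU Boost-style feedback loop.

    Raise clocks while headroom exists and the GPU is nearly fully loaded;
    back off toward the base clock as soon as a power or thermal limit is hit.
    """
    if power_w > POWER_LIMIT_W or temp_c > TEMP_LIMIT_C:
        return max(BASE_CLOCK, clock_mhz - CLOCK_STEP)   # over a limit: step down
    if utilization > 0.98 and clock_mhz < MAX_BOOST:
        return clock_mhz + CLOCK_STEP                    # headroom + full load: step up
    return clock_mhz                                     # steady state
```

In this sketch, each call to `boost_step` would correspond to one of the roughly 100 ms response windows mentioned above.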


Here is a different view of the GPU Boost feature.  With the Y axis being power and the X axis a series of unnamed games, you can see that the single game that utilizes the GPU most completely and consumes the most power typically sets the base clock speed for GPUs.  All the way on the left, however, you'll see another game that falls well under the power limit and could be running at higher speeds.


Each of these titles has a different amount of power headroom to expand into, but the goal of GPU Boost is to get as close to the thermal limits of the card as possible to enable the best GPU performance.


With the Y axis now showing clock speed rather than power, you can see that the GPU clock increases some amount over the base clock set by the game on the far right; a capability made possible by GPU Boost's ability to raise clocks on the fly.


There are lots of questions that go along with this new technology, including whether users can disable the feature.  NVIDIA has decided not to allow that, simply stating that "this is the way the card works now."  The obvious fear is that reviewers and gamers would turn the technology off to get an "apples to apples" comparison of the GTX 680 to the HD 7970.  NVIDIA's stance is that GPU Boost is a part of the GPU and should be included regardless.  We are going to keep looking for a way to measure the actual benefits of GPU Boost on our own, though that will take some time.

Another question you surely have is "how much will the clock boost?"  The GTX 680 will be marketed with two clock speeds: the "base clock" and the "boost clock," with the base clock being exactly what you are used to.  The base clock is the speed at which the GPU will run in the worst-case scenario: a game or application that utilizes the GPU to its fullest.  The boost clock is the GPU clock while running a "typical game," though what a typical game is isn't really defined.  Because there is such a wide array of PC games on the market, and because the clock can change even from scene to scene in the same game, it will be hard to say with any certainty what speed you will be running at without checking for yourself.

This also means that performance will vary in several ways from user to user.  If your GPU is in an open test bed versus a poorly cooled chassis, for example, your GPU clock could swing one way or the other.  Also, due to manufacturing tolerances between GPUs, two different GTX 680s could perform differently in the exact same environment.  While we have only been able to test a single GPU ourselves, and thus can't really judge the accuracy of the statement, NVIDIA is claiming there will only be a 1-2% variation from user to user.  Will consumers complain if their specific card runs slower than their friends'?  We aren't sure yet, but I know we will be finding out very soon.

More examples of NVIDIA's GPU Boost at work are available on our overclocking page.

March 24, 2012 | 05:37 PM - Posted by Steven (not verified)

Here's a review of overclocked cards. GTX 680 comes out top.

March 23, 2012 | 10:50 AM - Posted by Ryan Shrout

1. That's good to know, but we aren't testing overclocked cards; we are testing reference cards in this review. We do review OC model cards (like the XFX 7970 we tested) and will have a test of the Lightning next week. I think your estimate of a 25% improvement is an overestimate.

2. That's crap. Metro 2033 was an "NVIDIA game" and Dirt 3 was an "AMD game" yet those titles don't show any bias. Both vendors can write driver fixes to ANY game.

3. Fair enough - but we can only test based on the real user experience that you get when you install the cards and play the games. Saying that the HD 7970 is underclocked to me simply means that AMD didn't have the balls to release a card at the CORRECT speed then.

4. I think you misunderstand the feature then - I think it is really a compelling addition to get users the full capability of their GPU regardless of the game/application being played. When our video interview with Tom Petersen is online, you should take a listen.

5. I was able to hit a 150 MHz offset on the GTX 680 which is basically a 15% overclock.

6. Pricing is THE MAJOR factor other than performance. Yes, it is always changing, but to pretend it isn't a crucial factor is simply insane.

Thanks for your comments man, I love to hear what others are thinking, even if we disagree! :D

March 23, 2012 | 10:18 PM - Posted by nabokovfan87

1. It was just one of those things that jumped out at me, and the 25% is probably 20% based on the HardOCP review, but that is a bunch more than 2%, which would make things extremely close in the end.

2. I don't know if AMD helped with Metro, or if all AMD cards run better on it, but the point is, NVIDIA clearly helped with PhysX and some filtering stuff in BF and others. It would be a good idea to add some variety to the benches; I would add R.U.S.E. and WoP for sure (they are different types of games than the ones you have listed, and as I said, they have some very nice high-end DX10/11 features).

3. I completely agree, but it would have been nice as a reader to see some "by the way" type text.

4. I didn't say there wasn't a point to it, but I just don't get it, or better put, agree with it. Why, with 1500+ cores, would you want to OC them on the fly? I would want to be able to test things and make them stable. I see benchmarks varying vastly between games; it could be related to this "feature". Maybe heat of the case/card forces some bad benches for some sites and better scores for others? IDK, just something to look into. I know for power specifically people have "proof" of shenanigans, especially with 3DMark, so who really knows, but I know there is more than meets the eye with this.

5. But the performance was only 2% better, not much higher. The high-binned 7970s go to 1250-1300+, which is around 35-40% (even at 1150 it is 25%).

6. It has more to do with the fact that we already know that AMD is going to reduce their pricing, and it is going to be within a week more than likely. If they don't, the review stands, but if they do, then it really comes off as overly harsh for no reason in the long run. Let me put it this way: have you ever gone back and revised a review for the people who look at it AFTER the prices drop? Nope, because it ALWAYS happens, and it always has to do with either a competitor releasing something or a new card being out. In my opinion, after a week the pricing in reviews always seems completely off.

Like I said, I didn't want to come off as a jerk or whatever, but it was stuff I wanted you to discuss or look into. The point with the OC stuff was to bring it up before you do SLI/OC testing, and I know Josh is writing the Lightning review, but it would be a good idea to toss those numbers in the mix and get an idea of which one, OC'd or at its best, actually performs the best. It seems very straightforward to me that the 7970 OCs a lot better.

March 22, 2012 | 11:45 PM - Posted by Michael (not verified)

Looks like we have that annoying "coil whine" here. Ryan, did you experience any in this review? Peeps may want to watch this video review at LinusTechTips here @ 7:31:

Note: Video also includes a cool cat

March 23, 2012 | 10:51 AM - Posted by Ryan Shrout

Actually, I know of this phenomenon, but I didn't experience it with the GTX 680. I have with some HD 7000 cards, though. In my opinion it is some odd mix of PSU/GPU that causes it; I have never been able to pinpoint one PSU architecture as the culprit.

March 23, 2012 | 09:05 AM - Posted by Dave (not verified)

Ryan, can you put that fancy SPL meter on this beast and provide some noise measurements?

I run a Define R3 chassis and use "reference" blower-style fans on my GPUs in order to keep the noise/heat down during idle periods and provide more effective heat dissipation under load without adding more than the 2 fans I currently have. It's loud under load, but I can deal with that since I game with a headset.

Would be interesting to see how this compares to, say, my current 570 blower. Especially considering every new card release comes along with a marketing line regarding lower heat and noise.

March 23, 2012 | 10:57 AM - Posted by Ryan Shrout

Sorry, we had these numbers the whole time but I forgot to post them!

Again, the GTX 680 impressed us.

March 23, 2012 | 10:50 PM - Posted by Hacksaw777 (not verified)

Wow, just got the GTX 680 installed and all I can say is Wow!!! I have bought many top-of-the-line video cards over the years, and this is the first time I have ever been completely satisfied with one. This one is amazing. So powerful, quiet, and only running 65C under load in my Cooler Master Stacker case. Battlefield 3 runs so smooth; even on a 32-inch 1080p display at ultra with vsync enabled it never drops below 60 fps. Crysis 2 is smooth as butter on extreme with hi-res textures. Now I think it's time to look into getting a surround setup, if I could only get some good 27-inch monitors with small bezels for a good price.

March 31, 2012 | 07:12 AM - Posted by cyow

All I can say is WOW. Sweet! I want one; too bad it's out of my price range at this time.

Great review, many thanks Ryan

April 2, 2012 | 07:05 AM - Posted by jewie27 (not verified)

The reason for frame rate capping is to lower the temps of the card, which also automatically reduces the fan speed and lowers overall power consumption.

It works. I tried it in Battlefield 3 with my two GTX 580s in SLI. Normally I get about 100 FPS on Ultra with the GPUs at 82C max. With an FPS cap of 80, temps drop into the 70s: a 10 degree saving and quieter operation.
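The frame-cap idea the commenter describes is simple to sketch: after each rendered frame, sleep off the rest of the frame budget so the GPU idles instead of racing ahead, which is where the temperature and fan-speed savings come from. This is a generic illustration, not how any particular game or driver implements its limiter.

```python
import time

def run_with_fps_cap(render_frame, cap_fps=80, frames=100):
    """Minimal software frame-rate cap: render, then sleep off the
    remainder of the per-frame budget.  Idle time means lower GPU load,
    which translates to lower temperatures and quieter fans."""
    budget = 1.0 / cap_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        sleep_for = budget - (time.perf_counter() - start)
        if sleep_for > 0:
            time.sleep(sleep_for)  # the GPU sits idle here instead of drawing extra frames
```

A real limiter lives inside the render loop (or the driver) rather than wrapping it, but the budget-and-sleep logic is the same.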
