
NVIDIA GeForce GTX 660 2GB Review - Kepler GK106 at $229

Author: Ryan Shrout
Manufacturer: NVIDIA

Two Retail Cards - EVGA SC and MSI OC

While we did receive a reference card from NVIDIA for this launch, we also received a pair of retail cards later in the review process and wanted to include them in our launch coverage.

First up is the EVGA GeForce GTX 660 SC 2GB, which runs at higher-than-reference clock speeds while retaining the cooler configuration of the card we showed you on the previous page.

EVGA's card runs at a base clock of 1046 MHz and a Boost clock of 1111 MHz, a sizable jump over the reference base clock of 980 MHz.  Memory speeds remain unchanged at 1500 MHz (6.0 GHz effective).
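
Incidentally, those two memory figures are the same spec quoted two ways: GDDR5 moves four bits per pin per clock, so the 1500 MHz memory clock works out to the quoted 6.0 GHz effective data rate. A quick sketch of the arithmetic, assuming the GTX 660's stock 192-bit bus (which neither retail card changes):

```python
# Relating the quoted GDDR5 memory figures on the GTX 660.
# Assumes the stock 192-bit memory bus; clocks are from the review.
MEMORY_CLOCK_MHZ = 1500    # quoted GDDR5 memory clock
BUS_WIDTH_BITS = 192       # GTX 660 reference memory bus width

# GDDR5 transfers 4 bits per pin per clock, so:
effective_rate_gbps = MEMORY_CLOCK_MHZ * 4 / 1000             # Gbps per pin
peak_bandwidth_gbs = effective_rate_gbps * BUS_WIDTH_BITS / 8  # bits -> bytes

print(f"Effective data rate:   {effective_rate_gbps:.1f} Gbps")  # 6.0
print(f"Peak memory bandwidth: {peak_bandwidth_gbs:.1f} GB/s")   # 144.0
```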

The connection configuration remains unchanged.

The EVGA card still requires a single 6-pin power connection and uses the smaller PCB design as well, just like our reference card and the EVGA GTX 660 Ti card we saw a few weeks ago.

Next up is the MSI GeForce GTX 660 OC 2GB, which you can quickly see uses one of the company's Twin Frozr cooler designs.

The clock speeds on MSI's card are set at 1033 MHz base and 1098 MHz Boost with the standard 6.0 GHz memory speed. 

The display configuration again includes a pair of DVI ports, an HDMI port, and a DisplayPort.

Even at a glance you can see the heat pipes on the MSI cooler that give it an edge over the reference and EVGA cards in our cooling tests.

Again, the GTX 660 from MSI uses a single 6-pin power connection, though the PCB is closer in design to the GTX 660 Ti card we reviewed previously than to NVIDIA's reference spec.

September 13, 2012 | 01:55 PM - Posted by Thedarklord

This is the true sweet spot gamers have been looking for; even NVIDIA in its own documents compares it to other greats (9800 GT, etc.).

I also really like that the review includes legacy cards; it gives a really nice comparison instead of just pitting the card against what is currently on the market.

September 13, 2012 | 02:54 PM - Posted by Tim Verry

Yup, I noticed that comparison as well. I don't remember exactly how the 9800 GT compares to the 8800 GT (were they exactly the same, just a rebrand?), but the latter was a card that sold like hotcakes :).

September 13, 2012 | 05:45 PM - Posted by Thedarklord

Yeah, the 9800 GT was a rebranded 8800 GT; most of the time all that changed was a slight clock increase, and later versions included a die shrink (65nm -> 55nm).

The 8800 GT and its bigger brother, the 8800 GTS 512 MB (which was rebranded as the 9800 GTX), really should never have carried the GeForce 8 branding. I think NVIDIA launched them just in time for the holiday season that year; they were only on the market for about 3 months before they were rebranded. >.<

But nevertheless, the 8800 GT/9800 GT was an insanely popular gaming card, and rightfully so, I think.

September 13, 2012 | 07:05 PM - Posted by aparsh335i (not verified)

I loved my 8800 GTS 512! NVIDIA did some really confusing stuff back then! Remember this?
8800 GTS 640 MB, 8800 GTS 320 MB, 8800 GTS 512 MB (G92).
The 512 MB card was way faster than the other two.
Then the 8800 GTS 512 MB was rebranded as the 9800 GTX (as mentioned above).
Then rebranded as the 9800 GTX+ (was it 45nm then? Can't remember).
Then rebranded as the GTS 250.
What a trip!

September 13, 2012 | 08:27 PM - Posted by Thedarklord

Haha, yes I do remember the fun with GeForce 8 branding/marketing back then.

Yup, there is a lot I could go into about those cards (I've been a computer hardware geek since the GeForce 3 days, and video cards are my fav parts, lol).

Yes, the 9800 GTX was rebranded as the 9800 GTX+, which was the die shrink (65nm -> 55nm).

45nm was skipped for video cards/GPUs, mainly due to area density and manufacturing timing issues.

So that same G92 core went from 8800 GTS 512 MB -> 9800 GTX -> 9800 GTX+ -> GTS 250! How time flies ^.^

September 13, 2012 | 04:27 PM - Posted by Anonymous (not verified)

Thanks for the trip down memory lane with the 9800GT. Identical to the earlier 8800GT except for a BIOS flash for that fancy 9 Series number.

September 13, 2012 | 05:25 PM - Posted by PapaDragon

Best GTX 660 review, period. Thank you for your hard work.

September 13, 2012 | 08:36 PM - Posted by Ryan Shrout

Thanks for your feedback, everyone. Let me know if you have any questions or suggestions!

September 13, 2012 | 11:29 PM - Posted by Thedarklord

At the risk of asking you guys to do more work ^.^, could you include a few more video cards in these reviews for comparison's sake, rather than only each card's immediate competition or siblings?

Say, in this case, adding a GTX 670, GTX 680, HD 7970, HD 7950, GTX 580, GTX 570, etc. Seeing how cards compete across and within generations is really nice/helpful. And, unless the testbed has changed, you could almost import the benchmarks from previous reviews (though a "quick" re-run of the cards with updated drivers would be great).

I'd be happy to help ;), lol.

September 13, 2012 | 10:03 PM - Posted by your reviews are fucked (not verified)

No way in hell would Skyrim bring a GPU to a 27 fps minimum at 1680.

Fix your fucking test rigs, dammit.

September 14, 2012 | 07:21 AM - Posted by Ryan Shrout

It happens during a scene transition - going from a town to the overworld. 

September 15, 2012 | 02:42 AM - Posted by your reviews are fucked (not verified)

then your "Minimum" result is fundamentally WRONG.

September 15, 2012 | 05:33 PM - Posted by Ryan Shrout

They aren't wrong; they represent the system as a whole anyway.

September 15, 2012 | 08:25 AM - Posted by Anonymous (not verified)

Are you guys using an older version of Skyrim for the tests?

Skyrim until patch 1.4 (I think) was much worse in terms of CPU performance; since all the cards are hitting a 27 fps minimum, this can only be caused by something else.

September 15, 2012 | 05:32 PM - Posted by Ryan Shrout

Latest Steam updates...

September 15, 2012 | 09:29 AM - Posted by KNN Foddr (not verified)

The transition scenes appear to be capped at 30 fps and aren't part of gameplay. Including them in a benchmark for a video card doesn't seem very illuminating.

September 20, 2012 | 09:33 PM - Posted by Buck-O (not verified)

It will be refreshing when people learn how FRAPS works, and why there are dips in the charting as a result of cut scenes. It will do it on any system. And while, no, it isn't particularly illuminating as benchmark data, it is a necessity that all cards are shown on the same playthrough, with the same cut scenes and the same corresponding dips. Please get off your high horse.
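
To put some (entirely made-up) numbers on that: a single 30 fps-capped transition in an otherwise smooth run sets the raw minimum all by itself, while a percentile-based floor barely notices it. A minimal sketch, with invented frametimes rather than anything from this review:

```python
# Invented FRAPS-style per-frame render times in milliseconds:
# a brief 30 fps-capped scene transition (33.3 ms frames) inside
# an otherwise ~70-80 fps run.
frametimes_ms = [12.5] * 500 + [33.3] * 5 + [14.0] * 500

fps = sorted(1000.0 / ft for ft in frametimes_ms)

raw_minimum = fps[0]                  # the cut-scene sets the floor: ~30.0
percentile_1 = fps[len(fps) // 100]   # 1st percentile skips the blip: ~71.4

print(f"Raw minimum FPS:    {raw_minimum:.1f}")
print(f"1st-percentile FPS: {percentile_1:.1f}")
```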

One thing I will take issue with, however, is the anti-aliasing settings. I am discouraged that NVIDIA reviews consistently limit the AA to a paltry 4x, and usually it's MSAA, not FSAA. It is pretty common knowledge that high levels of AA, particularly at higher resolutions, are a weak point of NVIDIA's designs, which typically show a steeper performance drop-off as AA and resolution increase than ATI cards would under similar conditions.

For the sake of fair and balanced testing, games should be run at their true maximum settings, as a typical user would select from the start, rather than cherry-picking AA levels that favor one vendor's hardware and greatly skew expected real-world performance.

If you are going to bother with the new WQHD resolutions, the least you could do is run 8x FSAA.
