NVIDIA GeForce GTX 1660 (non-Ti) Review - Featuring EVGA and MSI

Manufacturer: NVIDIA

Power, Temps, and Conclusion

I must say that I find it odd that this non-Ti variant of the GTX 1660 is still using an 8-pin PCIe connector, though I can't complain about the overclocking potential this provides.

8-pin connections aside, the power draw with our test platform was the lowest of the group from both the stock and factory-OC cards - lower than the GTX 1060 6GB and its single 6-pin connector.

Load temperatures are going to vary based on board partner designs of course, and of our two samples we saw very good temps with an edge going to the stock-clocked EVGA card under load. Idle temps also vary quite a bit as an increasing number of cards are using a zero-RPM idle fan profile, and for this reason the MSI GAMING X shows a higher idle which will be subject to room temp and enclosure airflow.

As to noise, the EVGA XC Black was a very quiet 32 dBA at idle and only 35.1 dBA under load, with the MSI GAMING X totally silent at idle (0 RPM fans) and 34.2 dBA under normal loads... though overclocking with a more aggressive fan profile did result in numbers as high as 52 dBA when the fans were really pushed.


The GTX 1660 brings Turing down to the $219 price point, where it currently competes with the outgoing GTX 1060 6GB cards. Once Pascal is out of the market the GTX 1660 will fill in for that card nicely, but is a roughly 15% overall increase over the GTX 1060 6GB all that exciting? Until AMD releases Navi GPUs or drops the RX 590 price even further (it currently starts at $239) there won't be the competition needed to lower prices on these cards. Once factory-overclocked GTX 1660s from various manufacturers are released (this has of course already happened as you read this) at or just above the $219 launch price, the GTX 1660 will become a much more interesting alternative to the GTX 1060 6GB.

At this point I have to return to the overclocking topic from the previous page. Quite simply, the GTX 1660 seems to be made to overclock: even the totally stock EVGA XC Black was stable at a +135 MHz GPU clock offset (based on the +143 result after running the OC Scanner; given Turing's 15 MHz clock increments, +135 proved more stable than +150). The TU116 GPU is very conducive to overclocking, and I don't expect to see many partner cards without at least a small Boost clock increase.
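That snapping of a scanner result to Turing's fixed clock steps is simple arithmetic, sketched below. This is a hypothetical helper for illustration only, not NVIDIA's actual OC Scanner tooling:

```python
# Turing adjusts its boost clock in fixed 15 MHz increments, so an
# OC Scanner result like +143 must be snapped to a multiple of 15.
# Hypothetical helper illustrating the math; not NVIDIA's actual API.
CLOCK_STEP_MHZ = 15

def snap_offset(scanner_result_mhz: int, conservative: bool = True) -> int:
    """Snap a scanner-reported offset to Turing's 15 MHz clock steps.

    With conservative=True, round down in favor of stability (as the
    review did, settling on +135 from a +143 scanner result).
    """
    if conservative:
        return (scanner_result_mhz // CLOCK_STEP_MHZ) * CLOCK_STEP_MHZ
    # Otherwise round to the nearest step.
    return round(scanner_result_mhz / CLOCK_STEP_MHZ) * CLOCK_STEP_MHZ

print(snap_offset(143))                      # 135 (the stable offset used in testing)
print(snap_offset(143, conservative=False))  # 150 (nearest step, which proved less stable)
```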

However, due to the stock card's 100% power limit, sustaining this OC under load is not really possible, as the card drops below 2 GHz in game benchmarks, while the MSI card was able to sustain 2 - 2.1 GHz. Memory on both cards is easily overclockable, though GDDR5 is not the screamer we have found GDDR6 to be. Any memory bandwidth increase helps bring this card closer to the performance of a GTX 1660 Ti and its 12 Gbps GDDR6, though a 400 MHz increase (800 MHz effective) was the upper limit in my testing.
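To put that memory overclock in perspective, bandwidth scales linearly with the effective data rate. A rough sketch, assuming the GTX 1660's reference spec of 8 Gbps GDDR5 on a 192-bit bus; the +400 MHz offset adds 800 Mbps effective, since GDDR5 transfers twice per clock:

```python
# Rough bandwidth math for the GTX 1660's GDDR5 overclock.
# Assumes the reference spec: 8 Gbps effective data rate, 192-bit bus.
BUS_WIDTH_BITS = 192
STOCK_RATE_GBPS = 8.0

def bandwidth_gbs(effective_rate_gbps: float,
                  bus_width_bits: int = BUS_WIDTH_BITS) -> float:
    """GB/s = data rate (Gb/s per pin) * bus width (pins) / 8 bits per byte."""
    return effective_rate_gbps * bus_width_bits / 8

# Stock: 8 Gbps over 192 bits.
print(round(bandwidth_gbs(STOCK_RATE_GBPS), 1))        # 192.0 GB/s

# A +400 MHz memory OC is +0.8 Gbps effective (double data rate).
print(round(bandwidth_gbs(STOCK_RATE_GBPS + 0.8), 1))  # 211.2 GB/s, about 10% more
```

Still well short of the GTX 1660 Ti's 12 Gbps GDDR6 (288 GB/s on the same bus width), which is consistent with the overclocked card closing only part of the gap.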

I think what makes the overclocking component of this review so compelling (to me, anyway) is just how much performance seems to have been left on the table in the interest of product segmentation with the GTX 1660. Naturally, board partners can do as they like with Boost clocks and power limits, mitigating the difference between this GPU and the more expensive GTX 1660 Ti, and there is a lot of room to play with added performance and features within the $60 window between the two GPUs.

The GTX 1660 is, for all practical purposes, the final, direct replacement for the GTX 1060 6GB, and offers improved performance (an increase of up to 23% in our testing) over that card while launching at a price lower than the 3GB variant of the GTX 1060, which started at $249. We have, therefore, received pretty much what one might have expected of a midrange successor: an increase in performance and a reduction in price. The extent of both of these varies by game and card design, and as this - like the GTX 1660 Ti before it - is an AIB (add-in board)-only launch, the actual performance characteristics of the GTX 1660 will vary quite a bit from partner card to partner card.

As to our two sample cards, provided, as with the 1660 Ti launch, by EVGA and MSI, I found both to offer a very good mix of features at their respective price and performance levels. I'll provide a mini-review for each based on our experiences leading up to this review.

EVGA GTX 1660 XC Black - $219

  • 1785MHz Boost Clock
  • 3-year warranty

The stock EVGA XC Black card is offered at the base $219 price point, and provides excellent cooling from its very thick 2.75-slot cooler design. I am still somewhat on the fence about a GPU design that requires three expansion slots when the card is also limited in the overclocking department, but I won't belabor the OC subject yet again.

For what it is designed to do - stock performance, excellent temperatures, low noise - the EVGA GTX 1660 XC Black is a great effort, and a solid choice at the launch price.

MSI GTX 1660 GAMING X - $249.99

  • 1860 MHz Boost Clock
  • 3-year warranty

For those who can spend more, MSI's highest-end GTX 1660 is a blend of custom PCB, efficient Twin Frozr 7 cooler, and higher factory Boost clock (1860 MHz vs. stock 1785 MHz) that makes a real difference in gaming and reduces the performance gap with the GTX 1660 Ti.

This card also overclocks quite well in testing, and has a price premium of $30 over the stock GTX 1660 that is more than offset by its further OC potential in my opinion. It was quiet under load out of the box, but you will definitely hear it if you really push things with a manual overclock and custom fan profile. If the price holds at $249 this will be hard to beat if you make use of its overclocking potential.

Final Thoughts

If you're shopping for a 1080p gaming card and previously had your sights set on the GTX 1060, there is just no reason to consider that card anymore. Apologies to those trying to sell through remaining Pascal inventory, but unless the price of the 6GB GTX 1060 drops significantly (and it probably will) I'd have a very hard time choosing one over a GTX 1660. Now that the dust has settled we do indeed, finally, have our successor to the GTX 1060 after nearly three years, and it's a good one. It would be a more exciting story if the 1660 Ti had been the $219 card, but we need some more competition to drive prices down. 2019 is just getting started, but NVIDIA is already making a pretty good case for its midrange GPUs with the 1660 series.

March 14, 2019 | 10:19 AM - Posted by ThatMadNvidiaBinningOperationProducingLoadsOfPriceSegements (not verified)

"So what is a GTX 1660 minus the “Ti”? A hybrid product of sorts, it turns out. The card is based on the same TU116 GPU as the GTX 1660 Ti, and while the Ti features the full version of TU116, this non-Ti version has two of the SMs disabled, bringing the count from 24 to 22."

No, it's just an example of Nvidia's binning operations, and AMD does the same thing, albeit with fewer Base Die Tapeouts compared to Nvidia. Nvidia will probably be filling out some even lower binned variant of that TU116 Base Die Tapeout for even more die harvesting, replacing more Pascal SKUs with Turing/GTX offerings, sans the RTX, on the low end of the GPU pricing totem pole. The Turing GPU microarch's SMs, with their concurrent Int/FP execution and other Turing SM tweaks, are better performers than Pascal's non-concurrent Int/FP execution.

So while the games/gaming engine ecosystem will take its time adopting RTX (if first-generation RTX is even usable on the lowest-end Nvidia GPU SKUs for gaming anyway), Nvidia can still compete better at a lower price with AMD's SKUs in raster-only oriented gaming titles.

Nvidia's product stack is rather segmented based on performance, what with Nvidia using even lower clocked GDDR5 along with fewer GTX/Turing SMs on the 1660. Nvidia has more Base Die Tapeouts with the Turing Generation than it had for its Pascal Generation of GPUs, so look for even more sub-segments for Turing, divided between the RTX and GTX (sans tensor and RT cores) major Turing segments. And there will be the usual OEM Only Turing Variants also.

March 14, 2019 | 06:28 PM - Posted by Anonymous911 (not verified)

"The GTX 1660 brings Turing down to the $219 price point, where it currently competes with the the outgoing GTX 1060 6GB cards. Once Pascal is out of the market the GTX 1660 will fill in for that card nicely, but is around a 15% overall increase over the GTX 1060 6GB all that exciting?"
Gold Award!!!

March 15, 2019 | 10:30 AM - Posted by chipman (not verified)


Who would buy ATI now? L O L

March 15, 2019 | 12:19 PM - Posted by BraindeadOnArrivalThatChipmanIs (not verified)

AMD has got its Epyc server CPUs to earn AMD plenty of Revenues above what any GPU only sales will bring in.
And AMD could just continue selling its Radeon Pro WX and Radeon Instinct SKUs to the HPC/AI accelerator market and not have to worry about being so dependent on consumer Gaming revenues.

So that's Epyc and Radeon Pro WX and Radeon Instinct SKUs with those Mad Revenue generating markups, and AMD can still sell any Non Performant Vega 20 Dies as Gaming Bins and still earn some small revenues off of any defective Vega 20 Die samples that just do not make the grade for Professional usage.

AMD does have to bring its Navi to Market ASAP, but really AMD's Professional Epyc CPUs and Compute/AI GPU Accelerators will still be earning growing revenues for AMD as they take each percentage point more of that server/HPC market share.

ATI no Longer exists and AMD's GPU IP can earn more revenues on the Professional Market with markups that the eternal scrub chipman could never afford.

It's strange that the Overall Technology Press is not trying to estimate AMD's integrated Graphics market share as those first and second generation Desktop/Mobile Raven Ridge APUs are selling nicely. Vega Graphics will be more used in the consumer market in its integrated graphics versions than in the Desktop GPU/Gaming market.

AMD's biggest mistake is in not focusing on some Discrete Mobile Vega/4GB-HBM2 for the non Apple Laptop market, as that Vega HBCC, when there is some HBM2 available, will make laptops a great combination for Non Gaming Graphics Usage. Having a discrete mobile Vega GPU able, via the HBCC, to turn the HBM2 into HBC (High Bandwidth Cache) is great for Blender 3D, where workloads can easily eat up 4GB of VRAM. So having a laptop with a Discrete Mobile Vega GPU/4GB-HBM2 and 16-32 GB of system RAM will allow for some nice Virtual VRAM paging via Vega's HBCC IP, with no worries about running out of VRAM or any need to slow things down, as the HBCC will manage the swaps between HBM2 and System DRAM in the background to keep the GPU working from the HBM2 (HBC).

AMD really needs to target the Laptop Market even more so than the PC market, as laptop unit sales are larger than PC unit sales. Graphics folks mostly could not care less about FPS, and Vega's HBCC/HBM2 is a much more attractive IP for non gaming Graphics usage than it is for Gaming Graphics usage.

I hope that AMD will do better with Navi and get some Navi Discrete Mobile GPUs on a higher priority, because that HBCC/HBC IP has more potential beyond only the gaming market. AMD needs to start to Brand some Epyc/Radeon Pro WX Discrete Mobile APUs for the portable workstation market, and that includes Chiplet Based APUs that have Zen2 Chiplets and a GPU/HBM2 interposer pairing all on the MCM. And that 7nm TSMC process is looking not so bad if one looks at some of the Radeon VII samples that have some very good undervolting characteristics and the ability to maintain higher clocks.

Now don't you go off on one of your tangents, chipman, and blame TSMC for Radeon VII's power usage, because anyone with a brain is going to look at Radeon VII's DP FP metrics and see where the Power Usage is coming from: AMD's Compute/AI Vega 20 Tapeout, which is making some home compute folks very happy because they get plenty of DP FP.

Radeon VII

  • FP32 (float) performance: 13,440 GFLOPS
  • FP64 (double) performance: 3,360 GFLOPS (1:4) [Not bad for only $699]

RTX 2080 Ti

  • FP32 (float) performance: 13,448 GFLOPS
  • FP64 (double) performance: 420.2 GFLOPS (1:32)

There you are, chipman: that's a bit more DP FP on Radeon VII than Nvidia's offering, and that's a lot more DP FP units even on Radeon VII's 60-of-64-CUs-enabled Bin.
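The commenter's FP64 figures follow directly from each card's FP32 rate and its FP64:FP32 ratio; a quick sketch of that arithmetic:

```python
# FP64 throughput is the FP32 rate divided by the FP64:FP32 ratio divisor
# (1:4 on Radeon VII, 1:32 on the RTX 2080 Ti, per the figures above).
def fp64_gflops(fp32_gflops: float, ratio_divisor: int) -> float:
    return fp32_gflops / ratio_divisor

print(fp64_gflops(13_440, 4))   # Radeon VII: 3360.0 GFLOPS
print(fp64_gflops(13_448, 32))  # RTX 2080 Ti: 420.25 GFLOPS (quoted as 420.2)
```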

Hey, the Radeon Pro V340 is a dual GPU (Vega 56 complement of Shaders:TMUs:ROPs) SKU with hardware based virtualization market potential.

"The AMD Radeon Pro V340 is intended for enterprise VDI, desktop as a service (DaaS) and cloud gaming workloads.
The card itself is comprised of two Vega 56 style GPUs, each with 16GB of HMB2 memory giving each card a total of 32GB of HBM2 memory, with ECC, onboard. This is higher-end GPU memory than NVIDIA’s GDDR5(X) memory. Using SR-IOV based virtualization, these cards support up to 32x 1GB virtual desktops. " (1)


"Using AMD’s MxGPU technology and SR-IOV based hardware virtualization, AMD is able to virtualize graphics without having to utilize extra virtualization drivers in the hypervisor like NVIDIA uses." (1)

Look at that, chipman: AMD is selling Vega 56-like (Higher Binned Die Sample) GPUs on a Dual GPU/Single PCIe card for Headless Cloud Visualization and Cloud Graphics/Gaming workloads. That's $8,183.99 (usually $10,000+) online, and some handsome revenues generated for AMD!

Wait until the Vega 20 based variant becomes available and AMD will be doubling up on that also, with 2 Vega 20 dies and loads more DP FP (the Pro Variants have the full 1:2 ratio enabled compared to Radeon VII's 1:4 DP:SP ratio).
Really, does consumer Gaming bring in any real markups compared to the professional markets? And AMD can Package deal sell these GPU accelerators with their Epyc/Naples and soon Epyc/Rome CPU SKUs! And the Vega 20 based pro SKUs will speak PCIe 4.0 and xGMI (Infinity Fabric), so any Vega 20 based update/replacement to the Radeon Pro V340 will be some damn good Cloud Gaming Headless Horseman for Google/others that need virtualization workloads performed.

It's more like RIP chipman's brain as that Brain was DOA!


"AMD Radeon Pro V340 Dual Vega 32GB VDI Solution Launched
By Patrick Kennedy - August 26, 2018"
