NVIDIA GeForce GTX 1660 (non-Ti) Review - Featuring EVGA and MSI

Manufacturer: NVIDIA

Turing at $219

NVIDIA has introduced another midrange GPU with today’s launch of the GTX 1660. It joins the GTX 1660 Ti as the company’s answer to high frame rate 1080p gaming, and hits a more aggressive $219 price point, with the GTX 1660 Ti starting at $279. What has changed, and how close is this 1660 to the “Ti” version launched just last month? We find out here.

RTX and Back Again

We are witnessing a shift in branding from NVIDIA, as GTX was supplanted by RTX with the introduction of the 20 series, only to see “RTX” give way to GTX as we moved down the product stack beginning with the GTX 1660 Ti. This has been a potentially confusing change for consumers used to the annual uptick in series number. Most recently we saw the 900 series move logically to 1000 series (aka 10 series) cards, so when the first 2000 series cards were released it seemed as if the 20 series would be a direct successor to the GTX cards of the previous generation.

But RTX ended up being more of a feature-level designation than the new GeForce branding we had anticipated. No, GTX is here to stay, it appears. What, then, of the RTX cards and their real-time ray tracing capabilities? Here the conversation shifts to higher price tags and the viability of early adoption of ray tracing tech, and enter the internet's outspoken critics of ray tracing, and even more so of DLSS: NVIDIA's proprietary deep learning secret sauce, which has seemingly become as controversial as the Genesis planet in Star Trek III.

                  GTX 1660     GTX 1660 Ti  RTX 2060     RTX 2070     GTX 1080     GTX 1070     GTX 1060 6GB
GPU               TU116        TU116        TU106        TU106        GP104        GP104        GP106
Architecture      Turing       Turing       Turing       Turing       Pascal       Pascal       Pascal
SMs               22           24           30           36           20           15           10
CUDA Cores        1408         1536         1920         2304         2560         1920         1280
Tensor Cores      N/A          N/A          240          288          N/A          N/A          N/A
RT Cores          N/A          N/A          30           36           N/A          N/A          N/A
Base Clock        1530 MHz     1500 MHz     1365 MHz     1410 MHz     1607 MHz     1506 MHz     1506 MHz
Boost Clock       1785 MHz     1770 MHz     1680 MHz     1620 MHz     1733 MHz     1683 MHz     1708 MHz
Texture Units     88           96           120          144          160          120          80
ROPs              48           48           48           64           64           64           48
Memory Data Rate  8 Gbps       12 Gbps      14 Gbps      14 Gbps      10 Gbps      8 Gbps       8 Gbps
Memory Interface  192-bit      192-bit      192-bit      256-bit      256-bit      256-bit      192-bit
Memory Bandwidth  192.1 GB/s   288.1 GB/s   336.1 GB/s   448.0 GB/s   320.3 GB/s   256.3 GB/s   192.2 GB/s
Transistor Count  6.6B         6.6B         10.8B        10.8B        7.2B         7.2B         4.4B
Die Size          284 mm2      284 mm2      445 mm2      445 mm2      314 mm2      314 mm2      200 mm2
Process Tech      12 nm        12 nm        12 nm        12 nm        16 nm        16 nm        16 nm
TDP               120W         120W         160W         175W         180W         150W         120W
Launch Price      $219         $279         $349         $499         $599         $379         $299
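
The Memory Bandwidth row follows directly from the Data Rate and Memory Interface rows: per-pin data rate times bus width, converted to bytes. A quick sketch of the arithmetic (the function name is ours, not an official formula):

```python
def mem_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin rate x bus width, bits -> bytes."""
    return data_rate_gbps * bus_width_bits / 8

# Values from the table above
print(mem_bandwidth_gb_s(8, 192))    # GTX 1660 (GDDR5)    -> 192.0 GB/s
print(mem_bandwidth_gb_s(12, 192))   # GTX 1660 Ti (GDDR6) -> 288.0 GB/s
print(mem_bandwidth_gb_s(14, 256))   # RTX 2070            -> 448.0 GB/s
```

The table's quoted figures land within a fraction of a GB/s of these nominal numbers.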

So what is a GTX 1660 minus the “Ti”? A hybrid product of sorts, it turns out. The card is based on the same TU116 GPU as the GTX 1660 Ti, but while the Ti features the full version of TU116, this non-Ti version has two of its SMs disabled, bringing the count from 24 to 22. This results in a total of 1408 CUDA cores, down from 1536 with the GTX 1660 Ti. This 128-core drop is smaller than I was expecting from the vanilla 1660, and with matching memory specs the card would not have fallen far behind the Ti - but this card uses the older GDDR5 standard, matching the 8 Gbps speed and 192 GB/s bandwidth of the outgoing GTX 1060 rather than the 12 Gbps GDDR6 and 288.1 GB/s bandwidth of the GTX 1660 Ti.
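
The core counts fall straight out of the SM counts: each Turing SM carries 64 FP32 CUDA cores, so disabling two SMs costs exactly 128 cores. In sketch form:

```python
CORES_PER_TURING_SM = 64  # FP32 CUDA cores in each Turing SM

def cuda_cores(sms: int) -> int:
    """Total FP32 CUDA cores for a Turing GPU with the given SM count."""
    return sms * CORES_PER_TURING_SM

print(cuda_cores(24))                   # full TU116 (GTX 1660 Ti) -> 1536
print(cuda_cores(22))                   # GTX 1660                 -> 1408
print(cuda_cores(24) - cuda_cores(22))  # the 128-core drop
```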


The 6GB version of the GTX 1060 is the obvious parallel to the GTX 1660 launching today, and yes, we said something similar with the GTX 1660 Ti launch. This time we mean it, damn it! Performance will be higher with the 1660 than with the 1060, thanks in part to Turing's architectural improvements as well as CUDA core count and clock speed advantages, though moving back to 8 Gbps GDDR5 means the gains over the GTX 1060 will not be as impressive as what we saw with the GTX 1660 Ti. NVIDIA seems to be making a concession to hit a price target with the move to GDDR5, as the GTX 1660 is positioned $60 below the Ti version with its $219 launch price.

And now for a look at the cards! Yes, it's déjà vu all over again with the GTX 1660 review, as once again we have a stock EVGA and a factory-overclocked MSI card for your inspection.

The EVGA GTX 1660 XC Black

Just like our GTX 1660 Ti card from EVGA, the 1660 XC Black is a compact single-fan design with a surprisingly thick 2.75-slot cooler which, in contrast to the card's short length, requires three expansion slots for installation.

The XC Black is a completely stock card, coming in at the NVIDIA launch price of $219 and offering default clock speeds.

And while some overclocking is certainly possible thanks to the beefy cooler, the power limit is locked at 100%, so those seeking more aggressive clocks will need to look further up EVGA's product lineup.

The GTX 1660 retains the 8-pin PCIe power requirement of the GTX 1660 Ti.

What happens when a board partner adds a factory overclock and provides additional headroom for a manual OC with a higher power limit? Look no further than the other card in our review today, which comes from MSI.

The GAMING X card from MSI, just as with the 1660 Ti we looked at previously, is factory overclocked with an 1860 MHz boost clock - a 75 MHz OC (memory is stock).

The GAMING X offers a 107% power limit, and along with the dual-fan Twin Frozr 7 cooler this should allow for more adventurous overclocking than with a totally stock card.

In fact, further overclocking is just what we did with this card, and the MSI GAMING X is featured on our OC results page, where we push our particular sample to its practical limit.

On the next page we will begin our look at gaming performance with the new GTX 1660, starting with some 1080p benchmarks.

March 14, 2019 | 10:19 AM - Posted by ThatMadNvidiaBinningOperationProducingLoadsOfPriceSegements (not verified)

"So what is a GTX 1660 minus the “Ti”? A hybrid product of sorts, it turns out. The card is based on the same TU116 GPU as the GTX 1660 Ti, and while the Ti features the full version of TU116, this non-Ti version has two of the SMs disabled, bringing the count from 24 to 22."

No, it's just an example of Nvidia's binning operations, and AMD does the same thing, albeit with lower numbers of Base Die Tapeouts compared to Nvidia. Nvidia will probably be rolling out some even lower binned variant of that TU116 Base Die Tapeout for even more die harvesting, replacing more Pascal SKUs with Turing/GTX offerings, sans the RTX, on the low end of the GPU pricing totem pole. The Turing GPU microarch's SMs, with their concurrent Int/FP execution and other Turing SM tweaks, are better performers than Pascal's non-concurrent Int/FP execution.

So the games/gaming engine ecosystem will take its time adopting RTX - if first-generation RTX is even usable for gaming on the lowest-end Nvidia GPU SKUs anyway! Nvidia can thus still compete better at a lower price with AMD's SKUs in raster-only-oriented gaming titles.

Nvidia's product stack is rather segmented based on performance, what with Nvidia using even lower clocked GDDR5 along with fewer GTX/Turing SMs on the 1660. Nvidia has more Base Die Tapeouts with the Turing generation than it had for its Pascal generation of GPUs, so look for even more sub-segments for Turing, divided between the RTX and GTX (sans tensor and RT cores) major Turing segments. And there will be the usual OEM Only Turing variants also.

March 14, 2019 | 06:28 PM - Posted by Anonymous911 (not verified)

"The GTX 1660 brings Turing down to the $219 price point, where it currently competes with the outgoing GTX 1060 6GB cards. Once Pascal is out of the market the GTX 1660 will fill in for that card nicely, but is around a 15% overall increase over the GTX 1060 6GB all that exciting?"
Gold Award!!!

March 15, 2019 | 10:30 AM - Posted by chipman (not verified)

Who would buy ATI now? L O L

March 15, 2019 | 12:19 PM - Posted by BraindeadOnArrivalThatChipmanIs (not verified)

AMD has got its Epyc server CPUs to earn it plenty of revenue above what any GPU-only sales will bring in.
And AMD could just continue selling its Radeon Pro WX and Radeon Instinct SKUs to the HPC/AI accelerator market and not have to worry about being so dependent on consumer gaming revenues.

So that's Epyc plus Radeon Pro WX and Radeon Instinct SKUs with those mad revenue-generating markups, and AMD can still sell any non-performant Vega 20 dies as gaming bins, earning some small revenues off of any defective Vega 20 die samples that just do not make the grade for professional usage.

AMD does have to bring Navi to market ASAP, but really, AMD's professional Epyc CPUs and Compute/AI GPU accelerators will still be earning growing revenues for AMD as they take each percentage point more of that server/HPC market share.

ATI no longer exists, and AMD's GPU IP can earn more revenues on the professional market with markups that the eternal scrub chipman could never afford.

It's strange that the overall technology press is not trying to estimate AMD's integrated graphics market share, as those first and second generation desktop/mobile Raven Ridge APUs are selling nicely. Vega graphics will see more use in the consumer market in its integrated form than in the desktop GPU/gaming market.

AMD's biggest mistake is in not focusing on some Discrete Mobile Vega/4GB-HBM2 for the non-Apple laptop market, as Vega's HBCC, when there is some HBM2 available, makes for a great combination in laptops for non-gaming graphics usage. Having a discrete mobile Vega GPU that can, via the HBCC, turn the HBM2 into HBC (High Bandwidth Cache) is great for Blender 3D, where workloads can easily eat up 4GB of VRAM. So a laptop with a Discrete Mobile Vega GPU/4GB-HBM2 and 16-32 GB of system RAM will allow for some nice virtual VRAM paging via Vega's HBCC IP, with no worries about running out of VRAM and no need to slow things down, as the HBCC will manage the swaps between HBM2 and system DRAM in the background to keep the GPU working from the HBM2 (HBC).

AMD really needs to target the laptop market even more so than the desktop PC market, as laptop unit sales are larger than desktop unit sales. Graphics folks mostly couldn't care less about FPS, and Vega's HBCC/HBM2 is a much more attractive IP for non-gaming graphics usage than it is for gaming graphics usage.

I hope that AMD will do better with Navi and will get some Navi discrete mobile GPUs out on a higher priority, because that HBCC/HBC IP has more potential beyond only the gaming market. AMD needs to start to brand some Epyc/Radeon Pro WX discrete mobile APUs for the portable workstation market, and that includes chiplet-based APUs that have Zen2 chiplets and a GPU/HBM2 interposer pairing all on the MCM. And that 7nm TSMC process is looking not so bad if one looks at some of the Radeon VII samples that have some very good undervolting characteristics and the ability to maintain higher clocks.

Now don't you go off on one of your tangents, chipman, and blame TSMC for Radeon VII's power usage, because anyone with a brain is going to look at Radeon VII's DP FP metrics and see where the power usage is coming from: AMD's Compute/AI Vega 20 tapeout, which is making some home compute folks very happy because they get plenty of DP FP.

Radeon VII

FP32 (float) performance:  13,440 GFLOPS
FP64 (double) performance:  3,360 GFLOPS (1:4) [Not bad for only $699]

RTX 2080Ti

FP32 (float) performance:  13,448 GFLOPS
FP64 (double) performance:  420.2 GFLOPS (1:32)

There you are, chipman: that's a good bit more DP FP on the Radeon VII than Nvidia's offering, and a lot more DP FP units, even on Radeon VII's bin with only 60 of 64 CUs enabled.
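
For reference, those GFLOPS figures follow from the usual peak-throughput formula - two FLOPs per shader per clock (one fused multiply-add) - with FP64 running at a fixed fraction of FP32. A rough sketch using each card's boost clock (function name is illustrative):

```python
def fp32_gflops(shaders: int, boost_mhz: float) -> float:
    # 2 FLOPs per shader per clock: a fused multiply-add counts as two
    return 2 * shaders * boost_mhz / 1000

# Radeon VII: 3840 shaders (60 of 64 CUs), ~1750 MHz boost, 1:4 FP64 ratio
r7_fp32 = fp32_gflops(3840, 1750)   # ~13,440 GFLOPS
r7_fp64 = r7_fp32 / 4               # ~3,360 GFLOPS

# RTX 2080 Ti: 4352 CUDA cores, ~1545 MHz reference boost, 1:32 FP64 ratio
ti_fp32 = fp32_gflops(4352, 1545)   # ~13,448 GFLOPS
ti_fp64 = ti_fp32 / 32              # ~420 GFLOPS
```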

Hey, the Radeon Pro V340 is a dual-GPU SKU (with a Vega 56 complement of shaders:TMUs:ROPs) with hardware-based virtualization market potential.

"The AMD Radeon Pro V340 is intended for enterprise VDI, desktop as a service (DaaS) and cloud gaming workloads.
The card itself is comprised of two Vega 56 style GPUs, each with 16GB of HBM2 memory giving each card a total of 32GB of HBM2 memory, with ECC, onboard. This is higher-end GPU memory than NVIDIA’s GDDR5(X) memory. Using SR-IOV based virtualization, these cards support up to 32x 1GB virtual desktops." (1)

"Using AMD’s MxGPU technology and SR-IOV based hardware virtualization, AMD is able to virtualize graphics without having to utilize extra virtualization drivers in the hypervisor like NVIDIA uses." (1)

Look at that, chipman: AMD is selling Vega 56-like (higher binned die sample) GPUs on a dual-GPU/single-PCIe card for headless cloud visualization and cloud graphics/gaming workloads. That's $8,183.99 (usually $10,000+) online, and some handsome revenues generated for AMD!

Wait until the Vega 20 based variant becomes available and AMD will be doubling up on that also, with 2 Vega 20 dies and loads more DP FP (the Pro variants have the full 1:2 ratio enabled, compared to Radeon VII's 1:4 DP:SP ratio).
Really, does consumer gaming bring in any real markups compared to the professional markets? And AMD can package-deal sell these GPU accelerators with their Epyc/Naples and soon Epyc/Rome CPU SKUs! And the Vega 20 based pro SKUs will speak PCIe 4.0 and xGMI (Infinity Fabric), so any Vega 20 based update/replacement to the Radeon Pro V340 will be one damn good cloud gaming headless horseman for Google and others that need virtualization workloads performed.

It's more like RIP chipman's brain as that Brain was DOA!


(1) "AMD Radeon Pro V340 Dual Vega 32GB VDI Solution Launched" by Patrick Kennedy, August 26, 2018
