NVIDIA GeForce GTX 1660 (non-Ti) Review - Featuring EVGA and MSI

Manufacturer: NVIDIA

1920x1080 Game Benchmarks

PC Perspective GPU Test Platform
Processor: Intel Core i7-8700K
Motherboard: ASUS ROG STRIX Z370-H Gaming
Memory: Corsair Vengeance LED 16GB (8GBx2) DDR4-3000
Storage: Samsung 850 EVO 1TB
Power Supply: CORSAIR RM1000x 1000W
Operating System: Windows 10 64-bit (Version 1803)
Drivers: AMD 18.50, NVIDIA 419.35

We begin the benchmark results with DX12 games, and I will point out that these three tests are not neutral; they are examples of the very real effects of GPU optimization. Ashes of the Singularity heavily favors AMD, Far Cry 5 favors AMD to a lesser degree, and Shadow of the Tomb Raider is firmly in the NVIDIA camp, with AMD cards falling far behind in that test specifically. It might not be fair to run just one of these, but the hope is that by running both AMD- and NVIDIA-optimized benchmarks we get a clearer picture of what a given GPU will do with real games, which often favor one side or the other.

Ashes of the Singularity: Escalation is first up, run at the "high" preset using DirectX 12.

With AotS Escalation we find the stock GTX 1660 barely edging out the GTX 1060 6GB, with a mere ~2.5% increase over the older card. This is the least impressive showing for the 1660 in this group of tests, with the game representing a "worst-case" scenario for NVIDIA cards in general.

Far Cry 5 is next, and this was also run using the default "high" preset settings.

Here the GTX 1660 jumps up the chart to finish ahead of the GTX 980 Ti, with gains improving to ~17% over the GTX 1060 6GB.

And now for the NVIDIA-friendly Shadow of the Tomb Raider benchmark result, run at the "high" preset as well.

The GTX 1660's position relative to the other NVIDIA cards on the lower half of the chart remains the same here - though its lead over the GTX 1060 6GB rises to more than 23% - but of course the AMD cards drop significantly in this test, and here the GTX 1660 outperforms a Vega 64. The disadvantage to AMD cards is part and parcel of this game, just as the reverse is true with AotS - though to less of an extreme for NVIDIA in that game.

Now we move on to some DirectX 11 tests, beginning with F1 2018, run using (you guessed it) the default "high" settings.

The GTX 1660's increase over the GTX 1060 6GB is up to nearly 27% here, and the card lands just behind the older GTX 980 Ti in this game (though with a slightly more consistent framerate).

Middle Earth: Shadow of War is our next DX11 title, run at the default "high" settings as well:

The GTX 1660 enjoys an increase of ~23% over the GTX 1060 6GB in this benchmark, and this marks the third result in a row to exceed 20% in performance gains vs. the older card.

Next up are a pair of "canned" benchmarks, beginning with the standalone Final Fantasy XV test, run here using the "standard" preset to help level the playing field with the AMD cards.

The GTX 1660 offers an ~11% increase over the GTX 1060 6GB here, the second-lowest gain we've seen at 1080p so far.

World of Tanks enCore is next, a standalone DX11 test run using the "ultra" preset since it is less demanding than the others.

Here the GTX 1660 holds a 9% advantage over the GTX 1060 6GB.

To sum up, the GTX 1660 is capable of gains exceeding 20% compared to the GTX 1060 6GB, though its advantage varies quite a bit by title and in some instances drops to the low single digits.
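As a quick sketch (my own back-of-the-envelope math, not part of the review), the per-game gains above can be condensed into a single overall figure with a geometric mean of the speedup multipliers, which lands close to the ~15% overall increase the review cites:

```python
from math import prod

# Approximate per-game GTX 1660 gains over the GTX 1060 6GB at 1080p,
# taken from the results above (percent).
gains = {
    "AotS: Escalation": 2.5,
    "Far Cry 5": 17,
    "Shadow of the Tomb Raider": 23,
    "F1 2018": 27,
    "Shadow of War": 23,
    "Final Fantasy XV": 11,
    "WoT enCore": 9,
}

# Geometric mean of the speedup multipliers, expressed back as a percentage.
geo = prod(1 + g / 100 for g in gains.values()) ** (1 / len(gains)) - 1
print(f"overall gain: {geo * 100:.1f}%")  # roughly 15-16%
```

A geometric mean is the usual choice for averaging speedup ratios, since it doesn't let one outlier title (in either direction) dominate the summary the way an arithmetic mean of percentages can.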

But how will this Turing card fare vs. its Pascal predecessor when the resolution is bumped up to 2560x1440? Is the GTX 1660 a viable 1440p gaming option at $219? We will find out on the next page.

March 14, 2019 | 10:19 AM - Posted by ThatMadNvidiaBinningOperationProducingLoadsOfPriceSegements (not verified)

"So what is a GTX 1660 minus the “Ti”? A hybrid product of sorts, it turns out. The card is based on the same TU116 GPU as the GTX 1660 Ti, and while the Ti features the full version of TU116, this non-Ti version has two of the SMs disabled, bringing the count from 24 to 22."

No, it's just an example of Nvidia's binning operations, and AMD does the same thing, albeit with fewer base die tapeouts than Nvidia. Nvidia will probably roll out some even lower-binned variant of that TU116 base die for even more die harvesting, replacing more Pascal SKUs with Turing/GTX offerings, sans the RTX, on the low end of the GPU pricing totem pole. The Turing microarchitecture's SMs, with their concurrent INT/FP execution and other SM tweaks, are better performers than Pascal's non-concurrent INT/FP execution.

So while the games/gaming-engine ecosystem will take its time adopting RTX - if first-generation RTX is even usable for gaming on the lowest-end Nvidia GPU SKUs anyway - Nvidia can still compete better at a lower price with AMD's SKUs in raster-only oriented gaming titles.

Nvidia's product stack is rather segmented by performance, what with Nvidia using even lower-clocked GDDR5 along with fewer Turing SMs on the 1660. Nvidia has more base die tapeouts in the Turing generation than it had for its Pascal generation of GPUs, so look for even more sub-segments of Turing divided between the RTX and GTX (sans Tensor and RT cores) major Turing segments. And there will be the usual OEM-only Turing variants also.

March 14, 2019 | 06:28 PM - Posted by Anonymous911 (not verified)

"The GTX 1660 brings Turing down to the $219 price point, where it currently competes with the outgoing GTX 1060 6GB cards. Once Pascal is out of the market the GTX 1660 will fill in for that card nicely, but is around a 15% overall increase over the GTX 1060 6GB all that exciting?"
Gold Award!!!

March 15, 2019 | 10:30 AM - Posted by chipman (not verified)


Who would buy ATI now? L O L

March 15, 2019 | 12:19 PM - Posted by BraindeadOnArrivalThatChipmanIs (not verified)

AMD has got its Epyc server CPUs to earn it plenty of revenue above what any GPU-only sales will bring in. And AMD could just continue selling its Radeon Pro WX and Radeon Instinct SKUs to the HPC/AI accelerator market and not have to worry about being so dependent on consumer gaming revenues.

So that's Epyc plus Radeon Pro WX and Radeon Instinct SKUs with their mad revenue-generating markups, and AMD can still sell any non-performant Vega 20 dies as gaming bins, earning some small revenue off of any defective Vega 20 die samples that just do not make the grade for professional usage.

AMD does have to bring Navi to market ASAP, but really, AMD's professional Epyc CPUs and Compute/AI GPU Accelerators will still be earning growing revenues for AMD as they take each percentage point more of that server/HPC market share.

ATI no longer exists, and AMD's GPU IP can earn more revenues in the professional market, with markups that the eternal scrub chipman could never afford.

It's strange that the overall technology press is not trying to estimate AMD's integrated graphics market share, as those first- and second-generation desktop/mobile Raven Ridge APUs are selling nicely. Vega graphics will see more use in the consumer market in its integrated form than in the desktop GPU/gaming market.

AMD's biggest mistake is in not focusing on a discrete mobile Vega/4GB-HBM2 part for the non-Apple laptop market, as Vega's HBCC - when there is some HBM2 available - would make laptops for non-gaming graphics usage a great combination. Having a discrete mobile Vega GPU able, via the HBCC, to use the HBM2 as HBC (High Bandwidth Cache) is great for Blender 3D, where workloads can easily eat up 4GB of VRAM. So a laptop with a discrete mobile Vega GPU/4GB-HBM2 and 16-32GB of system RAM would allow for some nice virtual VRAM paging via Vega's HBCC IP to and from the HBM2 (HBC), with no worries about running out of VRAM and no need to slow things down, as the HBCC manages the swaps between HBM2 and system DRAM in the background to keep the GPU working from the HBM2 (HBC).

AMD really needs to target the laptop market even more so than the PC market, as laptop unit sales are larger than desktop unit sales. Graphics folks mostly couldn't care less about FPS, and Vega's HBCC/HBM2 is a much more attractive IP for non-gaming graphics usage than it is for gaming.

I hope that AMD will do better with Navi and put some Navi discrete mobile GPUs on a higher priority, because that HBCC/HBC IP has more potential beyond only the gaming market. AMD needs to start branding some Epyc/Radeon Pro WX discrete mobile APUs for the portable workstation market, and that includes chiplet-based APUs that have Zen 2 chiplets and a GPU/HBM2 interposer pairing all on the MCM. And that 7nm TSMC process is looking not so bad if one looks at some of the Radeon VII samples that have some very good undervolting characteristics and the ability to maintain higher clocks.

Now don't you go off on one of your tangents, chipman, and blame TSMC for Radeon VII's power usage, because anyone with a brain is going to look at Radeon VII's DP FP metrics and see where the power usage is coming from: AMD's Compute/AI Vega 20 tapeout, which is making some home compute folks very happy because they get plenty of DP FP.

Radeon VII
FP32 (float) performance: 13,440 GFLOPS
FP64 (double) performance: 3,360 GFLOPS (1:4) [Not bad for only $699]

RTX 2080 Ti
FP32 (float) performance: 13,448 GFLOPS
FP64 (double) performance: 420.2 GFLOPS (1:32)

There you are, chipman: that's a bit more DP FP on Radeon VII than Nvidia's offering, and that's a lot more DP FP units even on Radeon VII's bin with 60 of 64 CUs enabled.
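As a sanity check (my own quick sketch, not from the comment above), the FP64 figures quoted follow directly from each card's FP32 rating and its advertised FP64:FP32 ratio:

```python
# FP32 throughput in GFLOPS, as quoted above.
fp32 = {"Radeon VII": 13440.0, "RTX 2080 Ti": 13448.0}
# Advertised FP64:FP32 ratios (1:4 for Radeon VII, 1:32 for RTX 2080 Ti).
ratio = {"Radeon VII": 1 / 4, "RTX 2080 Ti": 1 / 32}

# Implied FP64 throughput = FP32 rating * ratio.
fp64 = {card: fp32[card] * ratio[card] for card in fp32}
for card, val in fp64.items():
    print(f"{card}: {val:.2f} GFLOPS FP64")
# Radeon VII works out to 3,360 GFLOPS and the RTX 2080 Ti to ~420.25,
# matching the 3,360 and 420.2 figures quoted above (rounded).
```

The takeaway is that the 8x gap in FP64 throughput between the two cards comes entirely from the ratio, not from the FP32 ratings, which are nearly identical.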

Hey, the Radeon Pro V340 is a dual-GPU (Vega 56 complement of shaders:TMUs:ROPs) SKU with hardware-based virtualization market potential.

"The AMD Radeon Pro V340 is intended for enterprise VDI, desktop as a service (DaaS) and cloud gaming workloads.
The card itself is comprised of two Vega 56 style GPUs, each with 16GB of HBM2 memory giving each card a total of 32GB of HBM2 memory, with ECC, onboard. This is higher-end GPU memory than NVIDIA’s GDDR5(X) memory. Using SR-IOV based virtualization, these cards support up to 32x 1GB virtual desktops. " (1)


"Using AMD’s MxGPU technology and SR-IOV based hardware virtualization, AMD is able to virtualize graphics without having to utilize extra virtualization drivers in the hypervisor like NVIDIA uses." (1)

Look at that, chipman: AMD is selling Vega 56-like (higher-binned die sample) GPUs on a dual-GPU/single PCIe card for headless cloud visualization and cloud graphics/gaming workloads. That's $8,183.99 (usually $10,000+) online, and some handsome revenues generated for AMD!

Wait until the Vega 20 based variant becomes available and AMD will be doubling up on that also, with two Vega 20 dies and loads more DP FP (the pro variants have the full 1:2 ratio enabled, compared to Radeon VII's 1:4 DP:SP ratio).
Really, does consumer gaming bring in any real markups compared to the professional markets? And AMD can package-deal these GPU accelerators with their Epyc/Naples and soon Epyc/Rome CPU SKUs! And the Vega 20 based pro SKUs will speak PCIe 4.0 and xGMI (Infinity Fabric), so any Vega 20 based update/replacement to the Radeon Pro V340 will be some damn good cloud gaming headless horseman for Google and others that need virtualization workloads performed.

It's more like RIP chipman's brain as that Brain was DOA!


"AMD Radeon Pro V340 Dual Vega 32GB VDI Solution Launched
By Patrick Kennedy - August 26, 2018"
