Manufacturer: NVIDIA

Turing at $219

NVIDIA has introduced another midrange GPU with today’s launch of the GTX 1660. It joins the GTX 1660 Ti as the company’s answer to high frame rate 1080p gaming, and hits a more aggressive $219 price point, with the GTX 1660 Ti starting at $279. What has changed, and how close is this 1660 to the “Ti” version launched just last month? We find out here.

GTX_1660_cards.jpg

RTX and Back Again

We are witnessing a shift in branding from NVIDIA, as GTX was supplanted by RTX with the introduction of the 20 series, only to see “RTX” give way to GTX as we moved down the product stack beginning with the GTX 1660 Ti. This has been a potentially confusing change for consumers used to the annual uptick in series number. Most recently we saw the 900 series move logically to 1000 series (aka 10 series) cards, so when the first 2000 series cards were released it seemed as if the 20 series would be a direct successor to the GTX cards of the previous generation.

But RTX ended up being more of a feature-level designation, and not so much a new branding for GeForce cards as we had anticipated. No, GTX is here to stay, it appears, and what then of the RTX cards and their real-time ray tracing capabilities? Here the conversation shifts to higher price tags and the viability of early adoption of ray tracing tech, and enter the internet’s outspoken individuals, who decry ray tracing and, even more so, DLSS: NVIDIA’s proprietary deep learning secret sauce that has seemingly become as controversial as the Genesis planet in Star Trek III.

|                  | GTX 1660   | GTX 1660 Ti | RTX 2060   | RTX 2070   | GTX 1080   | GTX 1070   | GTX 1060 6GB |
|------------------|------------|-------------|------------|------------|------------|------------|--------------|
| GPU              | TU116      | TU116       | TU106      | TU106      | GP104      | GP104      | GP106        |
| Architecture     | Turing     | Turing      | Turing     | Turing     | Pascal     | Pascal     | Pascal       |
| SMs              | 22         | 24          | 30         | 36         | 20         | 15         | 10           |
| CUDA Cores       | 1408       | 1536        | 1920       | 2304       | 2560       | 1920       | 1280         |
| Tensor Cores     | N/A        | N/A         | 240        | 288        | N/A        | N/A        | N/A          |
| RT Cores         | N/A        | N/A         | 30         | 36         | N/A        | N/A        | N/A          |
| Base Clock       | 1530 MHz   | 1500 MHz    | 1365 MHz   | 1410 MHz   | 1607 MHz   | 1506 MHz   | 1506 MHz     |
| Boost Clock      | 1785 MHz   | 1770 MHz    | 1680 MHz   | 1620 MHz   | 1733 MHz   | 1683 MHz   | 1708 MHz     |
| Texture Units    | 88         | 96          | 120        | 144        | 160        | 120        | 80           |
| ROPs             | 48         | 48          | 48         | 64         | 64         | 64         | 48           |
| Memory           | 6GB GDDR5  | 6GB GDDR6   | 6GB GDDR6  | 8GB GDDR6  | 8GB GDDR5X | 8GB GDDR5  | 6GB GDDR5    |
| Memory Data Rate | 8 Gbps     | 12 Gbps     | 14 Gbps    | 14 Gbps    | 10 Gbps    | 8 Gbps     | 8 Gbps       |
| Memory Interface | 192-bit    | 192-bit     | 192-bit    | 256-bit    | 256-bit    | 256-bit    | 192-bit      |
| Memory Bandwidth | 192.1 GB/s | 288.1 GB/s  | 336.1 GB/s | 448.0 GB/s | 320.3 GB/s | 256.3 GB/s | 192.2 GB/s   |
| Transistor Count | 6.6B       | 6.6B        | 10.8B      | 10.8B      | 7.2B       | 7.2B       | 4.4B         |
| Die Size         | 284 mm²    | 284 mm²     | 445 mm²    | 445 mm²    | 314 mm²    | 314 mm²    | 200 mm²      |
| Process Tech     | 12 nm      | 12 nm       | 12 nm      | 12 nm      | 16 nm      | 16 nm      | 16 nm        |
| TDP              | 120W       | 120W        | 160W       | 175W       | 180W       | 150W       | 120W         |
| Launch Price     | $219       | $279        | $349       | $499       | $599       | $379       | $299         |

So what is a GTX 1660 minus the “Ti”? A hybrid product of sorts, it turns out. The card is based on the same TU116 GPU as the GTX 1660 Ti, and while the Ti features the full version of TU116, this non-Ti version has two of the SMs disabled, bringing the count from 24 to 22. This results in a total of 1408 CUDA cores - down from 1536 with the GTX 1660 Ti. This 128-core drop is not as large as I was expecting from the vanilla 1660, and with the same memory specs the capabilities of this card would not fall far behind - but this card uses the older GDDR5 standard, matching the 8 Gbps speed and 192 GB/s bandwidth of the outgoing GTX 1060, and not the 12 Gbps GDDR6 and 288.1 GB/s bandwidth of the GTX 1660 Ti.
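For those who like to check the math, the bandwidth gap between the two cards falls directly out of data rate times bus width. Here is a quick sketch of my own (not anything from NVIDIA's materials), using the figures from the table above:

```cpp
#include <cstdio>

// Peak memory bandwidth (GB/s) = effective data rate per pin (Gbps) * bus width (bits) / 8 bits per byte.
// The figures below are the table values for the GTX 1660 (GDDR5) and GTX 1660 Ti (GDDR6).
double peak_bandwidth_gbs(double data_rate_gbps, int bus_width_bits) {
    return data_rate_gbps * bus_width_bits / 8.0;
}

int main() {
    std::printf("GTX 1660    (8 Gbps GDDR5,  192-bit): %.0f GB/s\n", peak_bandwidth_gbs(8.0, 192));   // 192 GB/s
    std::printf("GTX 1660 Ti (12 Gbps GDDR6, 192-bit): %.0f GB/s\n", peak_bandwidth_gbs(12.0, 192));  // 288 GB/s
    return 0;
}
```

Same 192-bit bus on both cards, so the 50% jump in per-pin data rate from GDDR5 to GDDR6 accounts for the entire bandwidth difference.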

Continue reading our review of the NVIDIA GeForce GTX 1660 graphics card

MSI's RTX 2080 Ti Gaming X Trio is a great card for those who tweak

Subject: Graphics Cards | March 13, 2019 - 02:50 PM |
Tagged: msi, RTX 2080 Ti, gaming x trio

MSI's RTX 2080 Ti Gaming X Trio is a beefy card; at 32.5 cm (12.8") in length, you should check the clearance in your case before considering a purchase. Right out of the box the boost clock is 1755 MHz, which doesn't really represent what this card is capable of: [H]ard|OCP found it sits around 1900 MHz before they started tweaking it. With a bit of TLC they saw the clock spike all the way to 2070 MHz, though for the most part it ran just above 2000 MHz, which had a noticeable impact on performance. It still wasn't enough to provide a decent experience playing Metro Exodus at 4K with Ultra Ray Tracing enabled, but with that disabled the card happily managed 70 FPS with all the other bells and whistles turned on.

Check the numbers yourself in the full review.

155198019353empmcz7i_1_1.png

"The MSI GeForce RTX 2080 Ti GAMING X TRIO takes the GeForce RTX 2080 Ti to new heights in performance and overclocking. We’ve got Metro Exodus NVIDIA Ray Tracing performance, 1440p and 4K gameplay, and we compare this video card overclocked as well as in silent mode for more efficient gameplay."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Report: AMD's Upcoming Navi GPUs Launching in August

Subject: Graphics Cards | March 13, 2019 - 02:41 PM |
Tagged: report, rumor, wccftech, amd, navi, gpu, graphics, video card, 7nm, radeon

Could Navi be coming a bit sooner than we expected? I'll quote directly from the sourced report by Usman Pirzada over at WCCFtech:

"I have been told that AMD’s Navi GPU is at least one whole month behind AMD’s 7nm Ryzen launch, so if the company launches the 3000 series desktop processors at Computex like they are planning to, you should not expect the Navi GPU to land before early August. The most likely candidates for launch during this window are Gamescom and Siggraph. I would personally lean towards Gamescom simply because it is a gaming product and is the more likely candidate but anything can happen with AMD!

Some rumors previously had suggested an October launch, but as of now, AMD is telling its partners to expect the launch exactly a month after the Ryzen 7nm launch."

AMD GPU Roadmap.png

Paying particular attention to the second paragraph of the quote above: if this report is coming from board partners, we will probably start seeing leaked box art and all the fixings from VideoCardz as August nears - if indeed July is the release month for the Ryzen 3000 series CPUs (and come on, how could they pass on a 7/7 launch for the 7nm CPUs?).

Source: Wccftech

Uh... WoW... World of Warcraft DirectX 12 on Windows 7

Subject: General Tech, Graphics Cards | March 12, 2019 - 04:53 PM |
Tagged: pc gaming, wow, blizzard, microsoft, DirectX 12, dx12

Microsoft has just announced that they ported the DirectX 12 runtime to Windows 7 for World of Warcraft and other, unannounced games. This allows those games to run the new graphics API with its more-efficient framework of queuing work on GPUs, with support from Microsoft. I should note that the benchmarks for DirectX 12 in WoW are hit or miss, so I’m not sure whether it’s better to select DX11 or DX12 for any given PC, but you are free to try.
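As a side note, games that ship with this port still have to handle systems where no DirectX 12 runtime is present at all. A minimal sketch of what such a check might look like (my own C++ illustration, not Blizzard's or Microsoft's actual code) is to probe for d3d12.dll and test device creation before falling back to DX11:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

// Minimal probe: is a DirectX 12 runtime (d3d12.dll) present, and can a
// feature level 11_0 device be created on the default adapter?
bool IsDX12Available() {
    HMODULE d3d12 = LoadLibraryA("d3d12.dll");
    if (!d3d12) return false;  // No DX12 runtime on this system (e.g. stock Windows 7)

    auto createDevice = reinterpret_cast<PFN_D3D12_CREATE_DEVICE>(
        GetProcAddress(d3d12, "D3D12CreateDevice"));
    if (!createDevice) return false;

    // Passing nullptr for the output device only tests whether creation would succeed.
    HRESULT hr = createDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                              __uuidof(ID3D12Device), nullptr);
    return SUCCEEDED(hr);
}

int main() {
    std::printf("DirectX 12: %s\n",
                IsDX12Available() ? "available" : "not available, falling back to DX11");
    return 0;
}
```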

microsoft-2015-directx12-logo.jpg

This does not port other graphics features, like the updated driver model, which leads to this excerpt from the DirectX blog post:

How are DirectX 12 games different between Windows 10 and Windows 7?
Windows 10 has critical OS improvements which make modern low-level graphics APIs (including DirectX 12) run more efficiently. If you enjoy your favorite games running with DirectX 12 on Windows 7, you should check how those games run even better on Windows 10!

Just make sure you don’t install KB4482887? Trollolololol. Such unfortunate timing.

Of course, Vulkan also exists, and has supported Windows 7 since its creation. Further, both DirectX 12 and Vulkan have forked away from Mantle, which, of course, supported Windows 7. (AMD’s Mantle API pre-dates Windows 10.) The biggest surprise is that Microsoft released such a big API onto Windows 7 even though it is in extended support. I am curious what led to this exception, such as cyber cafés or other international trends, because I really have no idea.

As for graphics drivers? I am guessing that we will see it pop up in new releases. The latest GeForce release notes claim that DirectX 12 is only available on Windows 10, although undocumented features are not exactly uncommon in the software and hardware industry. Speaking of undocumented features, World of Warcraft 8.1.5 is required for DirectX 12 on Windows 7, although this is not listed anywhere in the release notes on their blog.

Source: Microsoft

NVIDIA Explains How Higher Frame Rates Can Give You an Edge in Battle Royale Games

Subject: Graphics Cards | March 7, 2019 - 11:07 AM |
Tagged: nvidia, gaming, fps, battle royale

NVIDIA has published an article about GPU performance and its impact on gaming, specifically the ultra-popular battle royale variety. The emphasis is on latency, and on reducing it through a combination of high FPS numbers and a 144 Hz (or higher) refresh rate display. Many of these concepts may seem obvious (competitive gaming on CRTs and/or lower resolutions for max performance comes to mind), but there are plenty of slides to look over - with many more over at NVIDIA's post.

"For many years, esports pros have tuned their hardware for ultra-high frame rates -- 144 or even 240 FPS -- and they pair their hardware with high refresh rate monitors. In fact, ProSettings.net and ProSettings.com report that 99% of Battle Royale Pros (Fortnite,PUBG and Apex Legends) are using 144 Hz monitors or above, and 30% are using 240 Hz monitors. This is because when you run a game, an intricate process occurs in your PC from the time you press the keyboard or move the mouse, to the time you see an updated frame on the screen. They refer to this time period as ‘latency’, and the lower your latency the better your response times will be."

battle-royale-latency-comparison.png

While a GTX 750 Ti to RTX 2080 comparison defies explanation, latency obviously drops with performance in this example

"Working with pros through NVIDIA’s research team and Esports Studio, we have seen the benefits of high FPS and refresh rates play out in directed aiming and perception tests. In blind A/B tests, pros in our labs have been able to consistently discern and see benefits from even a handful of milliseconds of reduced latency.

But what does higher frame rates and lower latency mean for your competitiveness in Battle Royale? A few things:

  • Higher FPS means that you see the next frame more quickly and can respond to it
  • Higher FPS on a high Hz monitor makes the image appear smoother, and moving targets easier to aim at. You are also less likely to see microstutters or “skip pixels” from one frame to the next as you pan your crosshair across the screen
  • Higher FPS combined with G-SYNC technologies like Ultra Low Motion Blur (ULMB) makes objects and text sharper and easier to comprehend in fast moving scenes

This is why for Battle Royale games, which rely heavily on reaction times and your ability to spot an enemy quickly, you want to play at 144 FPS or more."
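To put rough numbers on the render side of that equation, here is a back-of-the-envelope sketch; real input-to-photon latency also includes input sampling, game simulation, driver queuing, and display scanout, which this ignores. The point is simply that the per-frame contribution shrinks directly with frame rate:

```cpp
#include <cstdio>

// Frame time is the render pipeline's contribution to input-to-photon latency.
// End-to-end latency also includes input, simulation, driver, and display time,
// which this sketch leaves out.
double frame_time_ms(double fps) {
    return 1000.0 / fps;
}

int main() {
    const double rates[] = {60.0, 144.0, 240.0};
    for (double fps : rates) {
        std::printf("%6.0f FPS -> %.1f ms per frame\n", fps, frame_time_ms(fps));
    }
    // 60 FPS -> 16.7 ms, 144 FPS -> 6.9 ms, 240 FPS -> 4.2 ms:
    // roughly a 12 ms reduction in the render portion going from 60 to 240 FPS.
    return 0;
}
```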

One of the more interesting aspects of this article relates to K/D ratios, with NVIDIA claiming an edge in this area based on GPU performance and monitor refresh rate:

battle-royale-fortnite-pubg-increase-in-kd-gpu.png

"We were curious to understand how hardware and frame rates affect overall competitiveness in Battle Royale games for everyday gamers - while better hardware can’t replace practice and training, it should assist gamers in getting closer to their maximum potential."

battle-royale-fortnite-pubg-increase-in-kd-monitor.png

"One of the common metrics of player performance in Battle Royales is kill-to-death (K/D) ratio -- how many times you killed another player divided by how many times another player killed you. Using anonymized GeForce Experience Highlights data on K/D events for PUBG and Fortnite, we found some interesting insights on player performance and wanted to share this information with the community."

battle-royale-fortnite-pubg-increase-in-kd-hours.png

For more on this topic, and many more charts, check out the article over at NVIDIA.com.

Source: NVIDIA

Chilling out with the Radeon VII

Subject: Graphics Cards | March 6, 2019 - 01:47 PM |
Tagged: radeon vii, amd, undervolting, Radeon Wattman, Radeon Chill

A question that has been asked about the new Radeon VII is how undervolting will change the performance of the card, and now [H]ard|OCP has the answer. Making use of AMD's two tools for this, Wattman and Chill, along with the 19.2.2 driver, they tested clock speed and temperature while running Far Cry 5. As it turns out, undervolting the Radeon VII has a noticeable impact on performance, increasing the average FPS to 105.7 from 101.5, while enabling Chill drops that number to 80 FPS.

Check out the full review to see what happened to the performance in other games as well as the effect on temperatures.

1551043702oc0tueeqyz_1_6_l.png

"Is Radeon Chill or GPU undervolting the answer? We run the Radeon VII through some real world gaming and show you exactly what Chill and Undervolting will do to, or for your gameplay."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP
Manufacturer: EVGA

The EVGA RTX 2060 XC Ultra

While NVIDIA’s new GTX 1660 Ti has stolen much of the spotlight from the RTX 2060 launched at CES, this more powerful Turing card is still an important part of the current video card landscape, though with its $349 starting price it does not fit into the “midrange” designation we have been used to.

EVGA_RTX2060_XC_Ultra_7.jpg

Beyond the price argument, as we saw with our initial review of the RTX 2060 Founders Edition and subsequent look at 1440p gaming and overclocking results, the RTX 2060 far exceeds midrange performance, which made sense given the price tag but created some confusion based on the "2060" naming, as this suggested a 20-series replacement for the GTX 1060.

The subsequent GTX 1660 Ti launch gave those outspoken about the price and performance of the RTX 2060 relative to the venerable GTX 1060 a more suitable replacement. That left the RTX 2060 as an interesting mid-premium option that could match late-2017’s GTX 1070 Ti for $100 less, but one that still wasn't a serious option for RTX features without DLSS to boost performance - image quality concerns in the early days of this tech notwithstanding.

EVGA_RTX2060_XC_Ultra_1.jpg

One area certainly worth exploring further with the RTX 2060 is overclocking, as it seemed possible that a healthy OC could meet RTX 2070 performance. However, our early efforts were conducted using NVIDIA’s Founders Edition version, which, just one month in, now seems about as common as a pre-cyclone cover version of the original Sim City for IBM compatibles (you know, the pre-Godzilla litigation original?). LGR-inspired references aside, let's look at the card EVGA sent us for review.

Continue reading our review of the EVGA GeForce RTX 2060 XC Ultra graphics card

NVIDIA Announces GeForce 419.35 WHQL Driver, RTX Triple Threat Bundle

Subject: Graphics Cards | March 5, 2019 - 09:48 AM |
Tagged: Tom Clancy’s The Division II, RTX Triple Threat Bundle, rtx, ray tracing, nvidia, geforce, gaming, devil may cry 5, bundle, Apex Legends, 419.35 WHQL

GeForce Game Ready Driver 419.35 WHQL

NVIDIA has released their latest Game Ready driver today, 419.35 WHQL, for "the optimal gaming experience for Apex Legends, Devil May Cry 5, and Tom Clancy’s The Division II". The update also adds three monitors to the G-SYNC compatible list, with the BenQ XL2540-B/ZOWIE XL LCD, Acer XF250Q, and Acer ED273 A joining the ranks.

apex-screenshot-world-overview.jpg

Apex Legends World Overview (image credit: EA)

"Our newest Game Ready Driver introduces optimizations and updates for Apex Legends, Devil May Cry 5, and Tom Clancy’s The Division II, giving you the best possible experience from the second you start playing.

In addition, we continue to optimize and improve already-released games, such as Metro Exodus, Anthem, and Battlefield V, which are included in our new GeForce RTX Triple Threat Bundle."

RTX Triple Threat Bundle

NVIDIA's latest game bundle offers desktop and laptop RTX 2060 and 2070 buyers a choice of Anthem, Battlefield V, or Metro Exodus. Buyers of the high-end RTX 2080 and 2080 Ti graphics cards (including laptops) get all three of these games.

geforce-rtx-triple-bundle.jpg

"For a limited time, purchase a qualifying GeForce RTX 2080 Ti or 2080 graphics card, gaming desktop, or gaming laptop and get Battlefield V, Anthem, and Metro Exodus (an incredible $180 value!). Pick up a qualifying GeForce RTX 2070 or 2060 graphics card, gaming desktop, or gaming laptop and get your choice of these incredible titles."

The free games offer begins today, with codes redeemable "beginning March 5, 2019 until May 2, 2019 or while supplies last." You can download the latest NVIDIA driver here.

Source: NVIDIA

GeForce Driver Updates Contain Security Fixes

Subject: Graphics Cards | February 28, 2019 - 11:25 PM |
Tagged: nvidia, graphics drivers, security

Normally, when we discuss graphics drivers, there is a subset of users who like to stay on old versions. Some have older hardware and believe that they will get limited benefits going forward. Others encounter a bug with a certain version and will refuse to update until it is patched.

In this case – you probably want to update regardless.

nvidia-2015-bandaid.png

NVIDIA has found eight security vulnerabilities in their drivers, which have been corrected in their latest versions. One of them also affects Linux... more on that later.

On Windows, there are five supported branches:

  • Users of R418 for GeForce, Quadro, and NVS should install 419.17.
  • Users of R418 for Tesla should install 418.96.
  • Users of R400 for Quadro and NVS should install 412.29.
  • Users of R400 for Tesla should install 412.29.
  • Users of R390 for Quadro and NVS should install 392.37.

Basically, you should install 419.17 unless you are using professional hardware.

One issue is being likened to Meltdown and Spectre, although it is not quite the same. In those cases, the exploit took advantage of hardware optimizations to leak system memory. In the case of CVE-2018-6260, however, the attack uses NVIDIA’s performance counters to potentially leak graphics memory. The difference is that GPU performance counters are a developer tool, used by applications like NVIDIA Nsight, to provide diagnostics. Further, beyond targeting a developer tool that can be disabled, this attack also requires local access to the device.

Linux users are also vulnerable to this attack (but not the other seven):

  • Users of R418 for GeForce, Quadro, and NVS should install 418.43.
  • Users of R418 for Tesla should install 418.39.
  • Users of R400 for GeForce, Quadro, NVS, and Tesla should install 410.104.
  • Users of R390 for GeForce, Quadro, NVS, and Tesla should install 390.116.
  • Users of R384 for Tesla should install 384.183.

Whether on Windows or Linux, after installing the update, a hidden option will allow you to restrict GPU performance counters to users with admin credentials. I don’t know why it’s set to the insecure variant by default… but the setting can be toggled in the NVIDIA Control Panel. On Windows the path is Desktop > Enable Developer Settings, then Developer > Manage GPU Performance Counters > Restrict access to the GPU counters to admin users only. See the driver release notes (especially the "Driver Security" section) for more info.

nvidia-2019-disableperfcounters.png

The main thing to fix is the other seven, however. That just requires the driver update. You should have received a notification from GeForce Experience if you use it; otherwise, check out NVIDIA’s website.

Source: NVIDIA

EWC 2019: Vulkan Safety Critical (SC) Working Group Created

Subject: Graphics Cards | February 26, 2019 - 12:41 AM |
Tagged: Khronos, Khronos Group, vulkan, vulkan sc, opengl, opengl sc

The Khronos Group, the industry body that maintains OpenGL, OpenCL, EGL, glTF, Vulkan, OpenXR, and several other standards, has announced the Vulkan Safety Critical (SC) Working Group at Embedded World Conference 2019. The goal is to create an API that leverages Vulkan’s graphics and compute capabilities in a way that lets implementations be safe and secure enough for the strictest of industries, such as automotive, air, medical, and energy.

khronos-2017-vulkan-alt-logo.png

It's a safety hammer, I promise. (No I don't.)

The primary goal is graphics and compute, although the working group will also consider exposing other hardware capabilities, such as video encode and decode. These industries currently have access to graphics through OpenGL SC, although the latest release is still significantly behind what a GPU can do. To put it into perspective – the latest OpenGL SC 2.0 (which was released in 2016) has less functionality than the original release of WebGL back in 2011.

While OpenGL SC 2.0 allows programmable vertex and fragment (pixel) shaders, it falls short in many areas. Most importantly, OpenGL SC 2.0 does not allow compute shaders; Vulkan SC is aiming to promote the GPU into a coprocessor for each of these important industries.
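To illustrate what "GPU as a coprocessor" means in practice, here is a minimal sketch using today's desktop Vulkan API (not the future Vulkan SC API, whose details have not been published) that enumerates the GPUs in a system and checks each one for a compute-capable queue family:

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    // Create a bare Vulkan instance with no extensions or layers.
    VkApplicationInfo appInfo{};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo createInfo{};
    createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;

    VkInstance instance;
    if (vkCreateInstance(&createInfo, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "No Vulkan loader/driver available\n");
        return 1;
    }

    // Enumerate physical devices (GPUs) visible to the loader.
    uint32_t deviceCount = 0;
    vkEnumeratePhysicalDevices(instance, &deviceCount, nullptr);
    std::vector<VkPhysicalDevice> devices(deviceCount);
    vkEnumeratePhysicalDevices(instance, &deviceCount, devices.data());

    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);

        // Look for a queue family that supports compute work.
        uint32_t familyCount = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(dev, &familyCount, nullptr);
        std::vector<VkQueueFamilyProperties> families(familyCount);
        vkGetPhysicalDeviceQueueFamilyProperties(dev, &familyCount, families.data());

        bool hasCompute = false;
        for (const auto& f : families) {
            if (f.queueFlags & VK_QUEUE_COMPUTE_BIT) { hasCompute = true; break; }
        }
        std::printf("%s: compute queue %s\n", props.deviceName,
                    hasCompute ? "available" : "not available");
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

In full Vulkan, a compute queue like this is all that is needed to dispatch compute shaders without ever touching the graphics pipeline - exactly the capability OpenGL SC 2.0 lacks.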

There is not much else to report on at this point – the working group has been formed. A bunch of industry members, such as Codeplay, Arm, and NVIDIA, have voiced their excitement about the new API’s potential. The obvious example application would be self-driving cars, although I’m personally interested in the medical industry. Is there any sort of instrument that could do significantly more if it had access to a parallel compute device?

If you are in a safety-critical enterprise, then look into joining the Khronos Group.