The TU116 GPU and First Look at Cards from MSI and EVGA
NVIDIA is introducing the GTX 1660 Ti today, a card built from the ground up to take advantage of the new Turing architecture, but without real-time ray tracing capabilities. It seems like the logical next step for NVIDIA: gamers eager for a current-generation replacement for the popular GTX 1060, who may have been disappointed that the RTX 2060 launched priced $100 above the 1060 6GB, now have something much closer to a true replacement in the GTX 1660 Ti.
There is more to the story, of course, and we are still talking about a “Ti” part and not a vanilla GTX 1660, which presumably will be coming at some point down the road; but this new card should make an immediate impact. Is it fair to say that the GTX 1660 Ti is the true successor to the GTX 1060 that we might have assumed the RTX 2060 to be? Perhaps. And is the $279 price tag a good value? We will endeavor to find out here.
It has been a rocky start for RTX, and while some might say that releasing GTX cards after the fact represents back-pedaling from NVIDIA, consider the possibility that the 2019 roadmap always had space for new GTX cards. Real-time ray tracing does not make sense below a certain performance threshold, and it was pretty clear with the launch of the RTX 2060 that DLSS was the only legitimate option for ray tracing at acceptable frame rates. DLSS itself has been maligned of late over questions about visual quality, which NVIDIA has now addressed in a recent blog post. There is clearly a lot invested in DLSS, and regardless of your stance on the technology, NVIDIA is going to continue working on it and releasing updates to improve performance and visual quality in games.
As its “GTX” designation denotes, the GeForce GTX 1660 Ti does not include the RT and Tensor Cores that are found in GeForce RTX graphics cards. As NVIDIA puts it: “In order to deliver the Turing architecture to the sub-$300 graphics segment, we must be very thoughtful about the types and numbers of cores we use in the GPU: adding dedicated cores to accelerate Ray Tracing and AI doesn’t make sense unless you can first achieve a certain level of rendering performance. As a result, we chose to focus the GTX 1660 Ti’s cores exclusively on graphics rendering in order to achieve the best balance of performance, power, and cost.”
If the RTX 2060 is the real-time ray tracing threshold, then it's pretty obvious that any card that NVIDIA released this year below that performance (and price) level would not carry RTX branding. And here we are with the next card, still based on the latest Turing architecture but with an all-new GPU that has no ray tracing support in hardware. There is nothing fused off here or disabled in software with TU116, and the considerable reduction in die size from the TU106 reflects this.
Subject: Graphics Cards, Systems | February 21, 2019 - 03:04 PM | Scott Michaud
Tagged: pascal, nvidia, mx250, mx230, mx, gp108, geforce mx
NVIDIA has launched two new laptop GPUs in its low-end MX line. This class of products is designed to slot above the integrated GPUs found in typical laptop CPUs by a wide enough margin to justify an extra chip, but not by enough to be endorsed as part of the company’s gaming line.
As such, pretty much the only performance number that NVIDIA provides is an “up-to” factor relative to Intel’s UHD 620 iGPU as seen on the Core i5-8265U. For reference, the iGPU on this specific CPU has 192 shader units running at up to 1.1 GHz. Technically there exist some variants with boost clocks up to 1.15 GHz, but that extra 4.5% shouldn’t matter too much for this comparison.
Versus this part, the MX250 is rated as up to 3.5x faster; the MX230 is rated at up to 2.6x faster.
One thing that I should note is that the last generation’s MX150 is listed as up to 4x the Intel UHD 620, although they don’t state which specific CPU’s UHD 620.
This leads to a few possibilities:
- The MX250 has a minor performance regression versus the MX150 in the “up to” test(s)
- The UHD 620 had significant driver optimizations in at least the “up to” test(s)
- The UHD 620 that they tested back then is significantly slower than the i5-8265U
- They rounded differently then vs now
- They couldn’t include the previous “up to” test for some reason
Unfortunately, because NVIDIA is not releasing any specifics, we can only list possibilities and maybe speculate if one seems exceedingly likely. (To me, none of the first four stands out head-and-shoulders above the other three.)
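To illustrate the first possibility, the implied comparison is simple ratio arithmetic. This is only a back-of-the-envelope sketch, and it assumes both “up to” figures were measured against an identical UHD 620 baseline, which NVIDIA has not confirmed:

```python
# Hypothetical comparison: if (and only if) the MX150's "up to 4x" and the
# MX250's "up to 3.5x" were measured against the same UHD 620 baseline,
# dividing the two factors implies a regression in that test.
mx250_vs_uhd620 = 3.5  # NVIDIA's "up to" factor for the MX250
mx150_vs_uhd620 = 4.0  # NVIDIA's "up to" factor for the MX150

implied_mx250_vs_mx150 = mx250_vs_uhd620 / mx150_vs_uhd620
print(implied_mx250_vs_mx150)  # 0.875, i.e. ~12.5% slower in that test
```

Of course, any of the other possibilities above (driver gains for the UHD 620, a different baseline CPU, rounding, or a changed test suite) would invalidate this naive division.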
Like the MX150 that came before it, both the MX230 and MX250 will use GDDR5 memory. The MX130 could be paired with either GDDR5 or DDR3.
Anandtech speculates that it is based on the GP108, which is a safe assumption. NVIDIA confirmed that the new parts are using the Pascal architecture, and the GP108 is the Pascal chip in that performance range. Anandtech also claims that the MX230 and MX250 are fabricated under Samsung 14nm, while the “typical” MX150 is TSMC 16nm. The Wikipedia list of NVIDIA graphics, however, claims that the MX150 is fabricated at 14nm. While both could be right, a die shrink would make a bit of sense to squeeze out a few more chips from a wafer (if yields are relatively equal). If that’s the case, and they changed manufacturers, then there might be a slight revision change to the GP108; these changes happen frequently, and their effects should be invisible to the end user… but sometimes they make a difference.
It’ll be interesting to see benchmarks when they hit the market.
Subject: General Tech | February 20, 2019 - 09:07 PM | Tim Verry
Tagged: quarterly earnings, nvidia, financial results
On Valentine's Day NVIDIA released its quarterly and full-year financial results for fiscal year 2019. While yearly revenue was up 21% from last year at $11.72 billion, quarterly revenue of $2.2 billion fell 31% versus the previous quarter and 24% versus the same quarter last year. On the yearly revenue front, Nvidia credits its gaming, data center, professional visualization, and automotive divisions for its record revenue in FY2019.
Nvidia launched its RTX 2060 graphics card in Q4.
Q4 of FY2019 ended January 27th and saw operating expenses increase 6% versus last quarter and 25% YoY, while operating income fell 72% QoQ and 73% YoY. Net income of $567 million fell 54% versus the third quarter and 49% versus Q4 FY18. Earnings per diluted share also fell, to 92 cents. In Q4 Nvidia completed $700 million in share repurchases.
| | Q4 FY19 | Q3 FY19 | Q4 FY18 | Q/Q | Y/Y |
| --- | --- | --- | --- | --- | --- |
| Gross Margin | 54.7% | 60.4% | 61.9% | (570 bps) | (720 bps) |
| Operating Expenses | $913M | $863M | $728M | +6% | +25% |
| Diluted Earnings Per Share | $0.92 | $1.97 | $1.78 | (53%) | (48%) |
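For anyone unfamiliar with the units in the margin row, the deltas are expressed in basis points, where one basis point is 0.01 percentage points; the conversion is a one-liner:

```python
def bps_delta(current_pct, prior_pct):
    """Change in basis points between two percentage figures (1 bps = 0.01 pp)."""
    return round((current_pct - prior_pct) * 100)

print(bps_delta(54.7, 60.4))  # -570, the quarter-over-quarter gross margin change
print(bps_delta(54.7, 61.9))  # -720, the year-over-year gross margin change
```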
In FY2019 Nvidia reportedly returned $1.95 billion to shareholders through $371 million in cash dividend payments and $1.58B in share repurchases. Looking at FY2020 the graphics giant plans to return $2.3 billion to shareholders through a combination of dividends and share buybacks.
Nvidia CEO Jensen Huang was quoted in the press release as stating:
“This was a turbulent close to what had been a great year. The combination of post-crypto excess channel inventory and recent deteriorating end-market conditions drove a disappointing quarter.
“Despite this setback, NVIDIA’s fundamental position and the markets we serve are strong. The accelerated computing platform we pioneered is central to some of the world’s most important and fastest growing industries – from artificial intelligence to autonomous vehicles to robotics. We fully expect to return to sustained growth.”
Looking into next year, Nvidia expects Q1 FY2020 revenue to hit $2.2 billion (+/- 2%) and for yearly revenue to stay flat or decrease slightly. First quarter gross margins and operating expenses are expected to increase to 58.8% and $930 million respectively (those are GAAP numbers).
Nvidia has had a rough quarter. Both major graphics chip makers, AMD and Nvidia, rode yet another cryptocurrency mining boom and bust in 2018, except this time around the companies had jumped further in than before, with mining-specific graphics card lines and all. Nvidia's stock price (currently at $158.55) has fallen quite a bit since October but is still above where it was just a few years ago.

Nvidia has a wide enough range of products and diversified interests that I am not worried about its future, but I don't know enough to say with confidence which way things will go in FY2020 or whether its outlook predictions will hold true. The company launched its RTX 2060 last quarter and is expected to bring budget and mid-range cards sans ray tracing support (e.g. the rumored GTX 1660 Ti) this quarter, alongside a ramp-up in professional products with data center and workstation graphics cards and projects like NVIDIA DRIVE and the Mercedes-Benz partnership – and that's only a couple of slices of what the company is involved in – so it will be interesting to see how FY2020 shakes out, both in general and for enthusiasts.
You can dig into the nitty-gritty numbers over at investor.nvidia.com if you are curious.
Subject: Cases and Cooling | February 20, 2019 - 03:57 PM | Jeremy Hellstrom
Tagged: Alphacool, GPX Eisblock, rtx, nvidia, watercooling
When testing the new watercooler from Alphacool, designed for RTX cards, [H]ard|OCP made an interesting discovery: VRAM height is somewhat variable. As part of their review process they always check how well the waterblock mates with what it is cooling, and as you can see below there is something a bit off with their test sample.
As it turns out, VRAM height can vary by up to 0.3mm, which may not sound like a lot but is enough to cause mating problems unless you spread thermal paste on like peanut butter ... which is not that good an idea. The good news is that thermal pads come in varying thicknesses, which you can use to ensure a proper mate. You can check out their initial look now, or wait until said pads arrive and the full review is published.
"Alphacool has always made a great showing when it comes to water cooling our hot video cards. The company has recently updated its Eisblock series of GPU water blocks to include models made for NVIDIA's RTX series of cards featuring the Turing GPUs. We show you these new blocks and tell you about our first experiences with those."
Here are some more Cases & Cooling reviews from around the web:
- Deepcool Gammaxx L240 AIO @ Guru of 3D
- SilentiumPC Spartan 3 Pro RGB @ TechPowerUp
- Deepcool Gamerstorm FRYZEN CPU Cooler @ Kitguru
- Noctua NF-P12 redux-1700 PWM Fan @ TechPowerUp
- Thermaltake View 71 TG RGB Full Tower Review @ NikKTech
- Fractal Design Meshify S2 @ Kitguru
Subject: Graphics Cards | February 19, 2019 - 06:21 PM | Jeremy Hellstrom
Tagged: danger, rtx, bios, flash, nvidia, risky business
So you like living dangerously and are willing to bet $1,000 or more on something that might make your new NVIDIA GPU a bit faster, or transform it into a brick? Then does Overclockers Club have a scoop for you! There exists a tool called NVFlash, with or without its ID mismatch checks modified, which will allow you to flash your card's BIOS to another manufacturer's design, which can increase your card's power envelope and offer better performance ...
or kill it dead ...
or introduce artifacting, random crashes, or all sorts of other mischief.
On the other hand, if all goes well you can turn your plain old RTX card into an overclocked model of the same type and see higher performance overall. Take a look at OCC's article and read it fully before deciding if this is a risk you might be willing to take.
"WARNING! Flash the BIOS at your own risk. Flashing a video card BIOS to a different model and/or series WILL void your warranty. This process can also cause other permanent issues like video artifacts and premature hardware failure!"
Here are some more Graphics Card articles from around the web:
- MSI RTX 2080 Ti Lightning Z @ Kitguru
- GeForce 418.91 Driver Performance Analysis @ BabelTechReviews
- Battlefield V DLSS Tested: Overpromised, Underdelivered @ Techspot
- Far Cry New Dawn @ Guru of 3D
- Metro Exodus PC Graphics Benchmark @ Techspot
- AMD Radeon VII @ [H]ard|OCP
Subject: General Tech | February 19, 2019 - 03:31 PM | Jeremy Hellstrom
Tagged: tu116, ryzen 3, rumours, nvidia, navi, msi, GTX 1660 TI Gaming X, gtx 1660 ti, amd
If you blinked you would have missed a certain unboxing video, as it was posted before the NDA on the GTX 1660 Ti expired. However, a few sites managed to get some screengrabs before the video was taken down, so we now know a bit more about the card once thought to be mythical.
Image from PC World Bulgaria via [H]ard|OCP
Specifically, it was an MSI GeForce GTX 1660 Ti Gaming X that was revealed to the world, and while there were no benchmarks, there now seems to be physical proof that this card exists. It sports a single 8-pin PCIe power connector, three DisplayPort 1.4 outputs and a single HDMI 2.0b output, and not a bit of RTX branding. Instead it contains 1,536 Turing shaders on a 12 nm "TU116" chip hidden under the Twin Frozr 7 cooler. The outputs also tell us this particular card is not compatible with VirtualLink.
For AMD fans, The Inquirer is reporting that 7nm Ryzen 3 desktop CPUs and Navi GPUs should be announced on 7 July at Computex. We should also see the new X570 chipset, though the rumour is that the current generation of motherboards will support the new Ryzen series with a BIOS update. Sadly, Navi is likely to only be announced as it is likely the release will be delayed until October, though like everything else in this post that is purely speculation based on a variety of sources and may not be accurate.
The same rumours hold that the new flagship Ryzen 9 3800X will have two eight-core Zen 2 dies, offering a total of 16 cores and 32 threads. The base clock should be 3.9 GHz with a top speed of 4.7 GHz, at a TDP of 125W.
Subject: Graphics Cards | February 18, 2019 - 12:07 PM | Scott Michaud
Tagged: nvidia, graphics drivers, geforce
Apparently the latest WHQL driver, 418.81, can cause random application crashes and TDRs (“Timeout Detection and Recovery”) issues on Windows 7 and 8.1. NVIDIA has followed up with a hotfix driver, 418.99, that addresses the issue.
Hotfix drivers do not undergo full testing, so they should not be installed unless you are concerned about the specific issues they fix. In this case, because the bug does not affect Windows 10, a Windows 10 driver is not even provided.
In case you’re wondering what “Timeout Detection and Recovery” is, Windows monitors the graphics driver to make sure that work is being completed promptly (unless the GPU is not driving a monitor – Windows doesn’t care how long a GPU crunches on compute tasks if it is not being used for graphics). If the driver hangs for a significant time, Windows restarts it just in case it was stuck in, for example, an infinite loop caused by a bad shader or compute task. Without TDR, the only way to get out of this situation would be to cut power to the system.
Subject: General Tech | February 17, 2019 - 10:34 AM | Tim Verry
Tagged: turing, tu116, nvidia, gtx 1660 ti, 12nm
The rumor mill is churning out additional information on the alleged NVIDIA GTX 1660 Ti graphics card as it gets closer to its purported release date later this month. Based on the same Turing architecture as the already-launched RTX series (RTX 2080, RTX 2070, RTX 2060), the GTX 1660 Ti will reportedly use a smaller TU116 GPU (specifically TU116-400-A1) and 6GB of GDDR6 memory on a 192-bit memory bus. Spotted by VideoCardz, TU116 appears to be pin compatible with TU106 (the GPU used in the RTX 2060), but the die itself is noticeably smaller, suggesting that TU116 is a new GPU rather than a cut-down TU106 with hardware purposefully disabled or binned down due to manufacturing defects.
A bare MSI GTX 1660 Ti Ventus XS graphics card courtesy VideoCardz.
Rumor has it that the GTX 1660 Ti will feature 1536 CUDA cores, 96 texture units, and an unknown number of ROPs (possibly 48, as the memory bus matches the RTX 2060's 192-bit bus). Clock speeds will start at 1500 MHz and boost to 1770 MHz, and the 6GB of GDDR6 will be clocked at 6000 MHz. VideoCardz showed off an alleged MSI GTX 1660 Ti graphics card with the cooler removed, revealing the PCB and components. Interestingly, the PCB has six memory chips on board for the 6GB of GDDR6, with spots and traces for two more chips. Don't get your hopes up for an 8GB card, however; it appears that NVIDIA is simply making things easier on AIB partners, as pin-compatible GPUs allow them to reuse board designs from higher-end graphics card models for the GTX 1660 Ti. The board number for the GTX 1660 Ti is PG161, which is similar to the board used for the RTX 2060 (PG160).
Enthusiasts' favorite Twitter leaker TUM_APISAK further stirs the rumor pot with a leaked screenshot showing benchmark results for a GTX 1660 Ti graphics card in Final Fantasy XV with the 1440p High Quality preset. The GTX 1660 Ti allegedly scored 5,000 points, putting it just above the GTX 1070 at 4,955 points and just under the 980 Ti's 5,052. On the AMD side, the GTX 1660 Ti appears to sit between a presumably overclocked RX Vega (4,876) and a Radeon VII (5,283).
@TUM_APISAK shows off a FF:XV benchmark run including results from an unspecified GTX 1660 Ti graphics card.
Other performance rumors suggest that the GTX 1660 Ti will offer up 5.44 TFLOPS. RT cores are apparently cut (or disabled) in this GPU, but it is not clear whether the Tensor cores are intact (rumors seem to say yes, though).
Nvidia GTX 1660 Ti graphics cards based on the TU116 GPU will reportedly start at $279 [update: VideoCardz claims the pricing has been confirmed from information given to reviewers] and may well launch as soon as February 22nd (though they've already missed one rumored launch date on the 15th...).

Assuming for a minute the performance rumors are true, it is interesting to see the smaller TU116 GPU with fewer CUDA cores at least getting close to GTX 1070 performance. The GTX 1070 uses the 16nm GP104 GPU (7.2B transistors) with 1920 CUDA cores (1506 MHz), 120 texture units, 64 ROPs, and 8GB of memory on a 256-bit bus clocked at 8000 MHz, and it offers up to 5.7 TFLOPS. Looking at the progress over the past few generations, it is neat to see that as architectures improve, they are able to do more work with fewer (but better/faster) CUDA cores. I would guess that the GTX 1660 Ti will not best the GTX 1070 in all games and situations, though, as the GTX 1070 does have more ROPs and more total memory (although the GDDR6 on the GTX 1660 Ti offers more bandwidth than the 1070's GDDR5 despite the smaller bus).

Pricing will be interesting in this regard, with the rumored price starting at $279 for the GTX 1660 Ti. The cheapest GTX 1070 I found online at time of publication was $300, with most cards going for closer to $330+, so we may see price drops on older GTX 1070 cards as a result. GTX 1060 cards are going for $200+, RX 580 cards are sitting at $190+, RX 590 at $260+, and Vega 56 prices start at $330 (and go crazy high, heh), so the GTX 1660 Ti may also push down the prices of the higher-end and higher-priced models of those cards as well.
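As a sanity check on those rumored figures, peak FP32 throughput and peak memory bandwidth both fall out of simple arithmetic. This is only a sketch using the leaked numbers: the 2 FLOPs per core per clock assumes one fused multiply-add per CUDA core per cycle, and the GDDR6 is taken at 12 Gb/s effective per pin (the quad-pumped reading of that "6000 MHz" figure):

```python
def peak_tflops(cuda_cores, boost_ghz, flops_per_core_clock=2):
    """Peak FP32 TFLOPS: cores * FLOPs per core per clock (FMA = 2) * clock in GHz."""
    return cuda_cores * flops_per_core_clock * boost_ghz / 1000.0

def peak_bandwidth_gbs(effective_gbps_per_pin, bus_width_bits):
    """Peak memory bandwidth in GB/s: per-pin data rate * bus width / 8 bits per byte."""
    return effective_gbps_per_pin * bus_width_bits / 8

print(round(peak_tflops(1536, 1.770), 2))  # 5.44 -- matches the rumored GTX 1660 Ti figure
print(peak_bandwidth_gbs(12, 192))         # 288.0 GB/s for the 1660 Ti's rumored GDDR6
print(peak_bandwidth_gbs(8, 256))          # 256.0 GB/s for the GTX 1070's GDDR5
```

If those assumptions hold, the bandwidth math is why the 1660 Ti can out-bandwidth the GTX 1070 despite the narrower 192-bit bus.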
What are your thoughts on the latest rumors?
- The GeForce GTX 1070 8GB Founders Edition Review
- NVIDIA GeForce RTX 2060 Review Part One: Initial Testing
- The Architecture of NVIDIA's RTX GPUs - Turing Explored
Subject: Graphics Cards | February 16, 2019 - 09:02 AM | Tim Verry
Tagged: turing, tuf, RTX 2060, nvidia, graphics card, factory overclocked, asus
Asus recently announced two new Turing-based graphics cards that are part of the TUF (The Ultimate Force) series. Clad in urban camo with shades of grey, the Asus TUF RTX 2060 6GB Gaming and TUF RTX 2060 OC 6GB Gaming pair Nvidia’s 12nm TU106 GPU and 6GB of GDDR6 memory with a dual fan cooler and backplate. As part of the TUF series, the new graphics cards use Asus’ Auto Extreme manufacturing technology and are put through its 144-hour validation program.
The RTX 2060 GPU features 1920 CUDA cores, 120 TMUs, 48 ROPs, 240 Tensor cores, and 30 RT cores. The standard TUF RTX 2060 6GB Gaming graphics card comes clocked at 1365 MHz base and 1689 MHz boost out of the box with the boost clock jumping to 1710 MHz in OC Mode. The OC model graphics card, however, comes clocked by default at 1365 MHz base and 1710 MHz boost in Gaming Mode and 1740 MHz boost in OC Mode (when using Asus’ software).
The TUF graphics cards feature one dual-link DVI, two HDMI 2.0b, and one DisplayPort 1.4 video outputs. The dual fan cooler is IP5X dust resistant and uses dual ball bearing fans. A black metal backplate is secured to the card to help PCB rigidity. The cards measure 20.4 x 12.5 x 4.6 centimeters, so they should be compatible with most cases, and they are powered by a single 8-pin PCIe power connector.
The TUF cards use a no-frills design sans any RGB or extra features, so they should be priced competitively and may go well with a silent or sleeper PC build. Unfortunately, Asus is not talking specific pricing or availability yet.
- NVIDIA GeForce RTX 2060 Review Part One: Initial Testing
- NVIDIA GeForce RTX 2060 Review Part Two: 1440p and OC
- The Architecture of NVIDIA's RTX GPUs - Turing Explored
Subject: General Tech | February 13, 2019 - 02:53 PM | Jeremy Hellstrom
Tagged: Metro Exodus, gaming, nvidia, amd, DLSS, ray tracing
The Guru of 3D took over two dozen cards on the Metro, with a focus on the DX12 render path with DXR support, which makes the NVIDIA results a bit more interesting for now. If you are looking to play at 1080p with every bell and whistle on, you can scrape by on a GTX 1080 or Vega 56, but you should really consider bumping that to an RTX 2070 or Vega 64. For 1440p gamers the new Radeon VII is capable of providing a good experience, but you are far better off with an RTX 2080 or better.
At 4k, well, even the RTX 2080 Ti can barely make 50fps, with the rest of the pack reaching 40fps at best. As to the effects of DLSS and ray tracing on the visual quality and overall performance? Read on to see for yourself.
"A game title of discussion and debate, yes Metro Exodus for the PC is here, and we're going to put it to the test with close to 30 graphics cards in relation to framerates, frame times and CPU scaling."
Here is some more Tech News from around the web:
- Metro Exodus @ The Inquirer
- Metro Exodus Benchmark Performance, RTX & DLSS @ TechPowerUp
- Metro Exodus @ Rock, Paper, SHOTGUN
- Metro Exodus PC Game & Performance @ BabelTechReviews
- Metro Exodus: A beautiful, brutal single-player game—with insane RTX perks @ Ars Technica
- Great GameMaker Games @ Humble
- System Shock 3 returns to OtherSide after Starbreeze sell publishing rights @ Rock, Paper, SHOTGUN
- NVIDIA DLSS Test in Battlefield V @ TechPowerUp
- Doom II mod Eviternity teaches everything to know about demon slaying @ Rock, Paper, SHOTGUN
- Our favorite two-player board games, 2019 edition @ Ars Technica
- Phoenix Point delayed to September @ Rock, Paper, SHOTGUN
- Skyrim total conversion Enderal expands onto Steam next week @ Rock, Paper, SHOTGUN