This wee GTX 1660 from Zotac has at least two fans

Subject: Graphics Cards | April 11, 2019 - 01:21 PM |
Tagged: zotac, nvidia, GTX 1660 Twin Fan 6GB, gtx 1660

Zotac's GeForce GTX 1660 Twin Fan 6GB is designed for SFF systems, measuring 173.4x111.15x35.3mm (6.83x4.38x1.39") but still requiring two slots, which in this day and age of the triple-wide card is considered svelte.  The GPU Boost clock remains at the reference 1,785 MHz, and while the memory runs at an effective 8 GHz it is worth noting that it is GDDR5, not the GDDR6 you will find in the GTX 1660 Ti.  With a bit of overclocking, either manually or with OC Scanner, The Guru of 3D was able to hit ~2,050 MHz Boost and 9.7 GHz memory for roughly a 10% increase in performance.
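As a quick sanity check on those memory numbers: GDDR5 speeds are quoted as effective data rates, and with the GTX 1660's 192-bit bus the overclock buys a sizeable chunk of bandwidth. A back-of-the-envelope sketch (the bus width is the card's published spec; the rest is arithmetic):

```cpp
#include <cstdio>

// Peak bandwidth (GB/s) = effective rate (Gbps per pin) x bus width (bits) / 8
double bandwidth_gbs(double gbps_per_pin, int bus_width_bits) {
    return gbps_per_pin * bus_width_bits / 8.0;
}

int main() {
    const int bus = 192;  // GTX 1660: 192-bit GDDR5 bus
    std::printf("stock 8.0 Gbps: %.1f GB/s\n", bandwidth_gbs(8.0, bus)); // 192.0
    std::printf("OC    9.7 Gbps: %.1f GB/s\n", bandwidth_gbs(9.7, bus)); // 232.8
    return 0;
}
```

That works out to roughly 21% more raw memory bandwidth from the memory overclock alone.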

While not a huge step up from a GTX 1060, the ~$220 price tag makes it worth considering if you are running an older-generation mid-range card.

img_8086.jpg

"In this article we test and review the new Zotac GeForce GTX 1660 Twin Fan, it is priced spot on 219 USD, the MSRP of the new non-Ti model, meet the small and compact GeForce GTX 1660 Twin Fan edition. Based on a dual fan and a small form factor it performs as expected."

Here are some more Graphics Card articles from around the web:


Source: Guru of 3D

NVIDIA Releases Ray Tracing Driver for GeForce GTX Cards, New Ray-Tracing Demos

Subject: Graphics Cards | April 11, 2019 - 09:02 AM |
Tagged: turing, rtx, ray tracing, pascal, nvidia, gtx, graphics, gpu, geforce, dxr, demo

NVIDIA has released the Game Ready Driver 425.31 WHQL which enables ray tracing for GeForce GTX graphics cards - a capability previously reserved for the company's RTX series of graphics cards. This change "enables millions more gamers with GeForce GTX GPUs to experience ray tracing for the first time ever", as the list of DXR-capable graphics cards from NVIDIA has grown considerably as of today.

DXR_GPUs.png

The list of NVIDIA GPUs that are DXR-capable now includes (in addition to the RTX series):

  • GeForce GTX 1660 Ti
  • GeForce GTX 1660
  • NVIDIA TITAN Xp (2017)
  • NVIDIA TITAN X (2016)
  • GeForce GTX 1080 Ti
  • GeForce GTX 1080
  • GeForce GTX 1070 Ti
  • GeForce GTX 1070
  • GeForce GTX 1060 6GB
  • Laptops with equivalent Pascal and Turing-architecture GPUs
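Since DXR support is exposed through the standard DirectX 12 feature query rather than a hardware whitelist, a game can detect it at runtime the same way on GTX and RTX cards; with the 425.31 driver installed, the cards above should report ray tracing tier 1.0 through this check. A minimal sketch:

```cpp
#include <windows.h>
#include <d3d12.h>

// Ask the driver whether this device exposes DirectX Raytracing, and at what tier.
bool SupportsDXR(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5)))) {
        return false; // query unsupported: no DXR here
    }
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```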

NVIDIA previously warned of a performance deficit when this driver update was discussed during GTC, even for high-end Pascal GPUs such as the GTX 1080 Ti compared to the Turing-based RTX 20-series. Their position is that dedicated ray tracing cores are required for the best experience and make a measurable impact - with or without DLSS (a feature that requires the Tensor cores of the RTX series of GPUs).

"With dedicated RT cores, GeForce RTX GPUs provide up to 2-3x faster performance in ray-traced games, enabling more effects, higher ray counts, and higher resolutions for the best experience. With this new driver however, GeForce GTX 1060 6GB and higher GPUs can execute ray-tracing instructions on traditional shader cores, giving gamers a taste, albeit at lower RT quality settings and resolutions, of how ray tracing will dramatically change the way games are experienced."

In addition to the driver release which enables the visual goodies associated with real-time ray tracing, NVIDIA has also released a trio of tech demos on GeForce.com which you can freely download to check out ray tracing firsthand on GTX and RTX graphics cards. Not only will these demos give you a taste of what to expect from games that incorporate DXR features, but like any good demo they will help you get a sense of how your system might handle these effects.

The demos released include, via NVIDIA:

atomic-heart-rtx-screenshot.jpg

Atomic Heart RTX tech demo - Atomic Heart tech demo is a beautifully detailed tech demo from Mundfish that features ray traced reflections and shadows, as well as NVIDIA DLSS technology.

JusticeRTXDemo.jpg

Justice tech demo - Justice tech demo hails from China, and features ray traced reflections, shadows, and NVIDIA DLSS technology.  It is the first time that real time ray tracing has been used for caustics.

GEFORCE-RTX-Star-Wars.jpg

Reflections tech demo - The Reflections tech demo was created by Epic Games in collaboration with ILMxLAB and NVIDIA. Reflections offers a sneak peek at gaming’s cinematic future with a stunning, witty demo that showcases ray-traced reflections, ray-traced area light shadows, ray-traced ambient occlusion for characters and NVIDIA DLSS technology.

The download page for the tech demos can be found here.

And now to editorialize briefly: one aspect of the RTX launch that did not exactly work to NVIDIA's advantage was (obviously) the lack of software taking advantage of their hardware ray tracing capabilities and DLSS, with just a few high-profile titles offering support to date. By adding the previous generation of GPUs to the mix users now have a choice, and the new demos are a big part of the story, too. Looking back to the early days of dedicated 3D accelerators, the tech demo has always been an integral part of the GPU experience, showcasing new features and providing enthusiasts with a taste of what a hardware upgrade can provide. The more demos there are showcasing the effects possible with NVIDIA's ray tracing hardware, the more Pascal GPU owners will be able to check out these features on their own systems without making a purchase of any kind - and if they find the effects compelling, it just might drive sales of the RTX 20-series in the endless quest for better performance. It really should have been this way from the start, but at least it has been corrected now - to the benefit of the consumer.

Source: NVIDIA

Math with Maxwell and Turing

Subject: Graphics Cards | April 4, 2019 - 02:23 PM |
Tagged: nvidia, maxwell, rtx, gtx 960, gtx 1660, RTX 2060, turing

TechSpot decided to test NVIDIA's claim that the GTX 1660 is 113% faster than the old GTX 960, which is still sitting in the top five most-used GPUs on the Steam charts.  They tested Apex Legends, The Division 2, Shadow of the Tomb Raider, and many other games at 1080p, as that is the resolution these cards were designed for, and then compiled the results.  Wolfenstein II: The New Colossus showed the biggest improvement at 177%, while Middle-earth: Shadow of War sat at the bottom with a mere 54% jump; the overall average hit 117%. 
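One note on reading those figures: "113% faster" means more than double the frame rate, not 1.13x. A trivial sketch with a hypothetical 30 FPS GTX 960 baseline:

```cpp
#include <cstdio>

// "X% faster" multiplies the baseline by (1 + X/100).
int main() {
    const double gtx960_fps = 30.0;  // hypothetical baseline, for illustration
    std::printf("NVIDIA's claim (+113%%):   %.1f FPS\n", gtx960_fps * 2.13); // 63.9
    std::printf("TechSpot average (+117%%): %.1f FPS\n", gtx960_fps * 2.17); // 65.1
    return 0;
}
```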

Not a bad upgrade choice, though as they point out the RX 580 8GB is a strong contender for GTX 960 users as well.

Image_02S.jpg

"When we recently tested the new GeForce GTX 1660 we noted that Nvidia was making a bold claim in the review guide saying that the 1660 was a whopping 113% faster than the GTX 960, making it a perfect upgrade option for owners of the old mid-range Maxwell GPU."

Here are some more Graphics Card articles from around the web:


Source: TechSpot

Anthem Patch 1.0.4 Adds NVIDIA DLSS and Highlights Support

Subject: Graphics Cards | March 26, 2019 - 10:48 AM |
Tagged: update, rtx, ray tracing, patch 1.0.4, patch, nvidia, geforce, ea, DLSS, BioWare, anthem

Patch 1.0.4 for Anthem was released by BioWare/EA today which addresses a number of issues and adds features including two specific to GeForce graphics card users, namely DLSS and NVIDIA Highlights support.

anthem-geforce-rtx-screenshot.jpg

The DLSS (deep learning super sampling) feature is exclusive to RTX 20-series GPUs, providing performance gains of up to 40% with real-time ray tracing enabled, according to NVIDIA, who has published a video comparing DLSS off vs. on in the game.

NVIDIA offers this chart showing performance gains with the range of RTX cards with DLSS enabled:

anthem-nvidia-dlss-4k-performance.png

NVIDIA Highlights support is available to users of GeForce Experience, and this feature allows the automatic capture of gameplay clips and screenshots in certain situations. BioWare/EA lists the supported scenarios for the feature:

  • Visiting and viewing overlooks
  • Defeating certain large creatures
  • Performing multi-kills
  • Defeating legendary creatures
  • Discovering the Tombs of the Legionnaires
  • Performing combos
  • When the player is downed by enemies
  • Defeating bosses

anthem-nvidia-highlights.png

The full Patch 1.0.4 release notes can be viewed here, and more information from NVIDIA about the new GeForce features for Anthem is available here.

Source: NVIDIA

Mobilizing the RTX 2080, meet the two Max-Q versions

Subject: Graphics Cards | March 25, 2019 - 03:03 PM |
Tagged: RTX 2080 Max-Q, RTX 2080, nvidia

TechSpot posted an investigation into the differences between the 215W desktop RTX 2080 and the 80W mobile Max-Q version.  As you might expect, quite a bit had to be done to drop the power so precipitously, including lowering the boost clock from 1,515 MHz to just 1,095 MHz.  To make things even more confusing, there is a 90W variant of the Max-Q with a base clock of 990 MHz and a boost of 1,230 MHz, and for the most part you will not know which one is installed until you buy the laptop and open it up.

TechSpot offers a look at the performance of the two RTX 2080 Max-Q models here.

F-9.jpg

"Following our coverage into Nvidia's laptop RTX GPUs, today we're reviewing the top-end RTX 2080 Max-Q. As an "RTX 2080" Turing part, this GPU comes with 2944 CUDA cores, 368 Tensor cores and 46 ray tracing cores. But that's where the similarities between the RTX 2080 Max-Q and the desktop RTX 2080 end."

Here are some more Graphics Card articles from around the web:


Source: TechSpot

NVIDIA Releases GeForce Game Ready 419.67 Drivers

Subject: Graphics Cards | March 25, 2019 - 02:14 PM |
Tagged: nvidia, graphics drivers, geforce

Five days after they released the Creator Ready 419.67 drivers, NVIDIA has published their Game Ready equivalent, which is also version 419.67. It can be downloaded from GeForce Experience, or you can get it from their website (alongside the release notes).

nvidia-geforce.png

In terms of features: this is another entry in the R418 branch. Often, but not always, major new features are merged in at the start, and subsequent releases add game-specific optimizations, bug fixes, and so forth. This is one of those drivers. It adds two new G-SYNC compatible monitors, the ASUS VG278QR and the ASUS VG258, as well as “Game Ready” tweaks for Battlefield V: Firestorm, Anthem, Shadow of the Tomb Raider, and Sekiro: Shadows Die Twice.

Also, if you have multiple G-SYNC compatible displays, NVIDIA now supports using them in Surround on Turing GPUs.

There are also several bug fixes, although most of them are relatively specific. They do have fixes for DaVinci Resolve and multiple Adobe applications, which I believe were first introduced in the Creator Ready driver. If you’re concerned about “missing out” on fixes, it looks like they’re intending to keep Game Ready up-to-date with Creator Ready, just with less QA on professional applications and timed to game releases (versus professional software releases).

Source: NVIDIA

Intel Publishes Details of Upcoming Gen11 Graphics Architecture

Subject: Graphics Cards | March 22, 2019 - 04:32 PM |
Tagged: processor, Intel, integrated graphics, iGPU, hardware, graphics, gpu, Gen11

Intel has published a whitepaper on their new Gen11 processor graphics, providing details about the underlying architecture. The upcoming Sunny Cove processors and Gen11 graphics were unveiled back in December at Intel's Architecture Day, where Intel stated that Gen11 was "expected to double the computing performance-per-clock compared to Intel Gen9 graphics", which would obviously be a massive improvement over their current offerings. Intel promises up to 1 TFLOP of performance from Gen11, with its 64 EUs (execution units) and other improvements providing up to a 2.67x increase over Gen9 - though Intel does clarify that "there may be different configurations", so we will very likely see the usual segmentation.
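The 1 TFLOP figure falls straight out of the EU count, assuming Gen11 keeps the Gen9-style EU layout of two 4-wide, FMA-capable FP32 ALUs per EU (an assumption on our part, but consistent with the numbers Intel quotes):

```cpp
#include <cstdio>

// Per Gen9-style EU: 2 ALUs x 4 FP32 lanes x 2 ops per FMA = 16 FLOPS/clock.
int main() {
    const int eus = 64;
    const int flops_per_clock_per_eu = 2 * 4 * 2;
    const double clock_ghz = 1.0;  // illustrative; actual clocks vary by SKU
    double tflops = eus * flops_per_clock_per_eu * clock_ghz / 1000.0;
    std::printf("%.2f TFLOPS\n", tflops); // ~1.02 TFLOPS at 1 GHz
    return 0;
}
```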

Intel_Gen11.png

"The architecture implements multiple unique clock domains, which have been partitioned as a per-CPU core clock domain, a processor graphics clock domain, and a ring interconnect clock domain. The SoC architecture is designed to be extensible for a range of products and enable efficient wire routing between components within the SoC.

Gen11 graphics will be based on Intel’s 10nm process, with architectural refinements that promise significant performance-per-watt improvements, according to Intel. Intel also states that memory bandwidth has been addressed to meet the demands of the increased potency of the GPU, with improvements to compression, larger L3 cache size, and increased peak memory bandwidth. All major graphics APIs are supported including DirectX, OpenGL, Vulkan, OpenCL, and Metal - the last of which makes sense as these will very likely be powering the next generation of Apple's MacBook line.

Intel states that beyond the increases in compute and memory bandwidth, Gen11 will introduce "key new features that enable higher performance by reducing the amount of redundant work", and lists Coarse Pixel Shading (CPS) and Position Only Shading Tile Based Rendering (PTBR) among them. Many more details are provided in the document, available at the source link (warning: PDF).

Source: Intel (PDF)

NVIDIA Introduces Creator Ready Driver Program (419.67)

Subject: Graphics Cards | March 20, 2019 - 04:03 PM |
Tagged: nvidia, graphics drivers

At the ongoing Game Developers Conference, GDC 2019, NVIDIA has announced a “Creator Ready Driver” program. This branch targets the users that sit between the GeForce and Quadro line, who use GeForce cards for professional applications – such as game developers.

nvidia-2019-creatorreadylogo.png

Both Game Ready Drivers and the Creator Ready Drivers will support all games and creative applications. The difference is that Creator Ready drivers will be released according to the release schedule of popular creative tools, and they will receive more strict testing with creative applications.

The first release, Creator Ready Drivers 419.67, lists a 13% performance increase with the Blender Cycles renderer, and an 8% to 9% increase for CINEMA 4D, Premiere Pro CC, and Photoshop CC, all relative to the 415-branch GeForce drivers.

You can choose between either branch using the GeForce Experience interface.

nvidia-2019-creatorreadygfe.png

Image Credit: NVIDIA

As for supported products? The Creator Ready Drivers are available for Pascal-, Volta-, and Turing-based GeForce and TITAN GPUs. “All modern Quadro” GPUs are also supported on the Creator Ready branch.

Personally, my brain immediately thinks “a more-steady driver schedule and some creative application performance enhancements at the expense of a little performance near the launch of major video games”. I could be wrong, but that’s what I think when I read this news.

Source: NVIDIA

Microsoft Announces Variable Rate Shading for DirectX 12

Subject: Graphics Cards | March 19, 2019 - 08:29 PM |
Tagged: microsoft, DirectX 12, turing, nvidia

To align with the Game Developers Conference, GDC 2019, Microsoft has announced Variable Rate Shading for DirectX 12. This feature increases performance by allowing the GPU to lower its shading resolution for specific parts of the scene (without the developer doing explicit, render texture-based tricks).

An NVIDIA speech from SIGGRAPH 2018 (last August)

The feature is divided into three parts:

  • Lowering the resolution of specific draw calls (tier 1)
  • Lowering the resolution within a draw call by using an image mask (tier 2)
  • Lowering the resolution within a draw call per primitive (ex: triangle) (tier 2)

The last two points are tagged as “tier 2” because they can reduce the workload within a single draw call, which is an item of work that is sent to the GPU. A typical draw call for a 3D engine is a set of triangles (vertices and indices) paired with a material (a shader program, textures, and properties). While it is sometimes useful to lower the resolution for particularly complex draw calls that take up a lot of screen space but whose output is also relatively low detail, such as water, there are real benefits to being more granular.

The second part, an image mask, allows detail to be reduced for certain areas of the screen. This can be useful in several situations:

  • The edges of a VR field of view
  • Anywhere that will be brutalized by a blur or distortion effect
  • Objects behind some translucent overlays
  • Even negating a tier 1-optimized section to re-add quality where needed

The latter example was the one that Microsoft focused on with their blog. Unfortunately, I am struggling to figure out what specifically is going on, because the changes that I see (ex: the coral reef, fish, and dirt) don’t line up with their red/blue visualizer. The claim is that they use an edge detection algorithm to force high-frequency shading where there would be high-frequency detail.
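For the curious, this all maps onto a small API surface. A sketch of how the two tiers might look in DirectX 12 code (assuming a device that reports tier 2 support and a shading-rate image already built at the tile size the driver reports):

```cpp
#include <d3d12.h>

// Tier 1: drop the shading rate for an entire draw, e.g. a low-detail water pass.
void DrawWaterAtQuarterRate(ID3D12GraphicsCommandList5* cl) {
    cl->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr); // 1 shade per 2x2 pixels
    // ... issue the water draw call(s) here ...
    cl->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr); // restore full rate
}

// Tier 2: apply a screen-space mask (one rate per tile), e.g. the output of an
// edge-detection pass, and let combiners decide how it merges with per-draw rates.
void ApplyShadingRateMask(ID3D12GraphicsCommandList5* cl, ID3D12Resource* mask) {
    D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH, // keep the per-draw base rate
        D3D12_SHADING_RATE_COMBINER_OVERRIDE     // ...but let the mask take priority
    };
    cl->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    cl->RSSetShadingRateImage(mask);
}
```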

microsoft-2019-dx12-vrs-1.png

Right side reduces shading by 75% for terrain and water

microsoft-2019-dx12-vrs-2.png

Right side reclaims some lost fidelity based on edge detection algorithm

microsoft-2019-dx12-vrs-3.png

Visualization of where shading complexity is spent.

(Red is per-pixel. Blue is 1 shade per 2x2 block.)

Images Source: Firaxis via Microsoft

Microsoft claims that this feature will only be available for DirectX 12. That said, when Turing launched, NVIDIA claimed that Variable Rate Shading would be available for DirectX 11, DirectX 12, Vulkan, and OpenGL. I’m not sure what’s different about Microsoft’s implementation that lets them separate it from NVIDIA’s extension.

Microsoft will have good tools support, however. They claim that their PIX for Windows performance analysis tool will support this feature on Day 1.

Source: Microsoft

NVIDIA is ready to storm the server room

Subject: General Tech, Graphics Cards, Networking, Shows and Expos | March 19, 2019 - 06:16 PM |
Tagged: nvidia, t4, amazon, microsoft, NGC, Mellanox, CUDA-X, GTC, jen-hsun huang, DRIVE Constellation, ai

As part of their long list of announcements yesterday, NVIDIA revealed they are partnering with Cisco, Dell EMC, Fujitsu, Hewlett Packard Enterprise, Inspur, Lenovo and Sugon to provide servers powered by T4 Tensor Core GPUs, optimized to run their CUDA-X AI accelerators. 

t4.PNG

Those T4 GPUs have been on the market for a while, but this marks the first major success for NVIDIA in the server room, with models available for purchase from the aforementioned companies.  Those who prefer other people's servers can also benefit from these new products, with Amazon and Microsoft offering cloud-based solutions.  Setting yourself up to run NVIDIA's NGC software may save a lot of money down the road; the cards sip a mere 70W of power, which is rather more attractive than the consumption of a gaggle of Tesla V100s.  One might be guilty of suspecting this offers an explanation for their recent acquisition of Mellanox.

purty.PNG

NGC software offers more than just a platform to run optimizations on; it also offers a range of templates to start from, covering classification and object detection through sentiment analysis and most other common starting points for training a machine.  Customers will also be able to upload their own models to share internally or, if in the mood, externally with other users and companies.  It supports existing products such as TensorFlow and PyTorch but also offers access to CUDA-X AI, which, as the name suggests, takes advantage of the base design of the T4 GPU to reduce the amount of time spent waiting for results, letting users advance designs quickly. 

memorex.PNG

If you are curious what a particular implementation of everyone's favourite buzzword might look like, NVIDIA's DRIVE Constellation is an example after JoshTekk's own heart: a way to create open, scalable simulations for large fleets of self-driving cars to train them ... for good, one hopes.  Currently the Toyota Research Institute-Advanced Development utilizes these products in the development of their next self-driving fleet, and NVIDIA obviously hopes others will follow suit. 

replaced.PNG

There is not much to see from the perspective of a gamer in the short term, but considering NVIDIA's work on shifting horsepower from the silicon you own to their own cloud, this will certainly impact the future of gaming from both a hardware and a gameplay perspective.  GPUs as a Service may not be the future many of us want, but this suggests it could be possible, not to mention the dirty tricks enemy AIs will be able to pull with this processing power behind them.

One might even dream that escort missions could become less of a traumatic experience!

Source: NVIDIA

NVIDIA to Add Real-Time Ray Tracing Support to Pascal GPUs via April Driver Update

Subject: Graphics Cards | March 18, 2019 - 09:41 PM |
Tagged: unreal engine, Unity, turing, rtx, ray tracing, pascal, nvidia, geforce, GTC 19, GTC, gaming, developers

Today at GTC NVIDIA announced a few things of particular interest to gamers, including GameWorks RTX and the implementation of real-time ray tracing in upcoming versions of both Unreal Engine and Unity (we already posted the news that CRYENGINE will be supporting real-time ray tracing as well). But there is something else... NVIDIA is bringing ray tracing support to GeForce GTX graphics cards.

DXR_GPUs.png

This surprising turn means that DXR support won’t be limited to RTX cards after all, as the install base of NVIDIA ray-tracing GPUs “grows to tens of millions” with a simple driver update next month, adding the feature to both previous-gen Pascal and the new Turing GTX GPUs.

How is this possible? It’s all about the programmable shaders:

“NVIDIA GeForce GTX GPUs powered by Pascal and Turing architectures will be able to take advantage of ray tracing-supported games via a driver expected in April. The new driver will enable tens of millions of GPUs for games that support real-time ray tracing, accelerating the growth of the technology and giving game developers a massive installed base.

With this driver, GeForce GTX GPUs will execute ray traced effects on shader cores. Game performance will vary based on the ray-traced effects and on the number of rays cast in the game, along with GPU model and game resolution. Games that support the Microsoft DXR and Vulkan APIs are all supported.

However, GeForce RTX GPUs, which have dedicated ray tracing cores built directly into the GPU, deliver the ultimate ray tracing experience. They provide up to 2-3x faster ray tracing performance with a more visually immersive gaming environment than GPUs without dedicated ray tracing cores.”

A very important caveat is the “2-3x faster ray tracing performance” for GeForce RTX graphics cards mentioned in the last paragraph; expectations will need to be tempered, as RT effects will be less efficient running on shader cores (Pascal and Turing GTX) than they are with dedicated cores, as demonstrated by these charts:

BFV_CHART.png

METRO_EXODUS_CHART.png

SOTTR_CHART.png

It's going to be a busy April.

Source: NVIDIA

Remember Conservative Morphological Anti-Aliasing?

Subject: Graphics Cards | March 18, 2019 - 03:13 PM |
Tagged: fxaa, SMAA, Anti-aliasing, MLAA, taa, amd, nvidia

Apart from the new DLSS available on NVIDIA's RTX cards, it has been a very long time since we looked at anti-aliasing implementations and the effects your choice has on performance and visual quality.  You are likely familiar with the four most common implementations - AMD's MLAA and NVIDIA's FXAA, which are rarely used in current-generation games, plus TAA/TXAA and SMAA - but when was the last time you refreshed your memory on what they actually do and how they compare?

Not only did Overclockers Club look into those, they also discuss some of the other attempted implementations, as well as the sampling types that lie behind these technologies.  Check out their deep dive here.
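As a taste of what the article covers: post-process methods like MLAA, FXAA, and SMAA all start the same way, hunting for high-contrast edges in the finished image (using luminance as a cheap proxy) and then blending along them. A rough sketch of that detection step, with illustrative thresholds:

```cpp
#include <algorithm>

// Perceptual luminance of an RGB pixel (values in [0,1]).
float Luma(float r, float g, float b) {
    return 0.299f * r + 0.587f * g + 0.114f * b;
}

// FXAA-style contrast test: compare the local luma range against a threshold
// relative to the brightest neighbor, so dark areas aren't over-processed.
bool IsEdge(float c, float n, float s, float e, float w,
            float relThreshold = 0.125f) {
    float lumaMax = std::max({c, n, s, e, w});
    float lumaMin = std::min({c, n, s, e, w});
    return (lumaMax - lumaMin) > lumaMax * relThreshold;
}
```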

anti.PNG

"One setting present in many if not all modern PC games that can dramatically impact performance and quality is anti-aliasing and, to be honest, I never really understood how it works. Sure we have the general idea that super-sampling is in effect running at a higher resolution and then downscaling, but then what is multi-sampling? How do post-processing methods work, like the very common FXAA and often favored SMAA?"

Here are some more Graphics Card articles from around the web:


AMD Releases Radeon Software Adrenalin 2019 Edition 19.3.2 with Windows 7 DX12 Support

Subject: Graphics Cards | March 14, 2019 - 08:38 PM |
Tagged: Windows 7, The Division 2, radeon, graphics, gpu, gaming, dx12, driver, DirectX 12, amd, Adrenalin, 19.3.2

AMD has released Radeon 19.3.2 drivers, adding support for Tom Clancy’s The Division 2 and offering a performance boost with Civilization VI: Gathering Storm. This update also adds a number of new Vulkan extensions. But wait, there's more: "DirectX 12 on Windows 7 for supported game titles." The DX12-ening is upon us.

adrenalin.PNG

Here are AMD's release notes for 19.3.2:

Radeon Software Adrenalin 2019 Edition 19.3.2 Highlights

Support For

  • Tom Clancy’s The Division® 2
  • Sid Meier’s Civilization® VI: Gathering Storm
    • Up to 4% average performance gains on AMD Radeon VII with Radeon™ Software Adrenalin 2019 Edition 19.3.2 vs 19.2.3. RS-288
  • DirectX® 12 on Windows®7 for supported game titles
    • AMD is thrilled to help expand DirectX® 12 adoption across a broader range of Windows operating systems with Radeon Software Adrenalin 2019 Edition 18.12.2 and onward, which enables consumers to experience exceptional levels of detail and performance in their games.

Fixed Issues

  • Radeon ReLive for VR may sometimes fail to install during Radeon Software installation.
  • Fan curve may fail to switch to manual mode after the manual toggle is switched when fan curve is still set to default behavior.
  • Changes made in Radeon WattMan settings via Radeon Overlay may sometimes not save or take effect once Radeon Overlay is closed.

Known Issues

  • Rainbow Six Siege™ may experience intermittent corruption or flickering on some game textures during gameplay.
  • DOTA™2 VR may experience stutter on some HMD devices when using the Vulkan® API.
  • Mouse cursors may disappear or move out of the boundary of the top of a display on AMD Ryzen Mobile Processors with Radeon Vega Graphics.
  • Performance metrics overlay and Radeon WattMan gauges may experience inaccurate fluctuating readings on AMD Radeon VII.

More release notes after the break.

Source: AMD

Need a new NVIDIA GPU but don't want to get Ti'd down in debt?

Subject: Graphics Cards | March 14, 2019 - 01:33 PM |
Tagged: video card, turing, rtx, nvidia, gtx 1660 ti, gtx 1660, gtx 1060, graphics card, geforce, GDDR5, gaming, 6Gb

Sebastian has given you a look at the triple-slot EVGA GTX 1660 XC Black as well as the dual-fan, dual-slot MSI GTX 1660 GAMING X, both doing well in benchmarks, especially when overclocked.  The new GTX 1660 does come in other shapes and sizes, like the dual-slot, single-fan GTX 1660 StormX OC 6G from Palit, which The Guru of 3D reviewed.  Do not underestimate it because of its diminutive size: the Boost clock is 1,830 MHz out of the box, with some tweaking it will sit around 2,070 MHz, and the GDDR5 can be pushed up to 9,800 MHz.

Check out even more models below.

img_7957.jpg

"We review a GeForce GTX 1660 that is priced spot on that 219 USD marker, the MSRP of the new non-Ti model, meet the petite Palit GeForce GTX 1660 StormX OC edition. Based on a big single fan and a small form factor you should not be fooled by its looks. It performs well on all fronts, including cooling acoustic levels."

Here are some more Graphics Card articles from around the web:


Source: Guru of 3D

MSI's RTX 2080 Ti Gaming X Trio is a great card for those who tweak

Subject: Graphics Cards | March 13, 2019 - 02:50 PM |
Tagged: msi, RTX 2080 Ti, gaming x trio

MSI's RTX 2080 Ti Gaming X Trio is a beefy card; at 32.5cm (12.8") in length, you should check the clearance of your case before considering a purchase.  Right out of the box the boost clock is 1,755 MHz, which doesn't really represent what this card is capable of: [H]ard|OCP found it sits around 1,900 MHz before they even started tweaking.  With a bit of TLC they saw the clock spike all the way to 2,070 MHz, though for the most part it ran just above 2,000 MHz, which had a noticeable impact on performance.  It still wasn't enough to provide a decent experience playing Metro Exodus at 4K with Ultra ray tracing enabled; with that disabled, the card happily managed 70 FPS with all the other bells and whistles turned on. 

Check the numbers yourself in the full review.

155198019353empmcz7i_1_1.png

"The MSI GeForce RTX 2080 Ti GAMING X TRIO takes the GeForce RTX 2080 Ti to new heights in performance and overclocking. We’ve got Metro Exodus NVIDIA Ray Tracing performance, 1440p and 4K gameplay, and we compare this video card overclocked as well as in silent mode for more efficient gameplay."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

Report: AMD's Upcoming Navi GPUs Launching in August

Subject: Graphics Cards | March 13, 2019 - 02:41 PM |
Tagged: report, rumor, wccftech, amd, navi, gpu, graphics, video card, 7nm, radeon

Could Navi be coming a bit sooner than we expected? I'll quote directly from the sourced report by Usman Pirzada over at WCCFtech:

"I have been told that AMD’s Navi GPU is at least one whole month behind AMD’s 7nm Ryzen launch, so if the company launches the 3000 series desktop processors at Computex like they are planning to, you should not expect the Navi GPU to land before early August. The most likely candidates for launch during this window are Gamescom and Siggraph. I would personally lean towards Gamescom simply because it is a gaming product and is the more likely candidate but anything can happen with AMD!

Some rumors previously had suggested an October launch, but as of now, AMD is telling its partners to expect the launch exactly a month after the Ryzen 7nm launch."

AMD GPU Roadmap.png

Paying particular attention to the second paragraph from the quote above, if this report is coming from board partners we will probably start seeing leaked box art and all the fixings from VideoCardz as August nears - if indeed July is the release month for the Ryzen 3000 series CPUs (and come on, how could they pass on a 7/7 launch for the 7nm CPUs?).

Source: Wccftech

Uh... WoW... World of Warcraft DirectX 12 on Windows 7

Subject: General Tech, Graphics Cards | March 12, 2019 - 04:53 PM |
Tagged: pc gaming, wow, blizzard, microsoft, DirectX 12, dx12

Microsoft has just announced that they ported the DirectX 12 runtime to Windows 7 for World of Warcraft and other, unannounced games. This allows those games to run the new graphics API with its more-efficient framework of queuing work on GPUs, with support from Microsoft. I should note that the benchmarks for DirectX 12 in WoW are hit or miss, so I’m not sure whether it’s better to select DX11 or DX12 for any given PC, but you are free to try.

microsoft-2015-directx12-logo.jpg

This does not port other graphics features, like the updated driver model, which leads to this excerpt from the DirectX blog post:

How are DirectX 12 games different between Windows 10 and Windows 7?
Windows 10 has critical OS improvements which make modern low-level graphics APIs (including DirectX 12) run more efficiently. If you enjoy your favorite games running with DirectX 12 on Windows 7, you should check how those games run even better on Windows 10!

Just make sure you don’t install KB4482887? Trollolololol. Such unfortunate timing.

Of course, Vulkan also exists, and has supported Windows 7 since its creation. Further, both DirectX 12 and Vulkan have forked away from Mantle, which, of course, supported Windows 7. (AMD’s Mantle API pre-dates Windows 10.) The biggest surprise is that Microsoft released such a big API onto Windows 7 even though it is in extended support. I am curious what led to this exception, such as cyber cafés or other international trends, because I really have no idea.

As for graphics drivers? I am guessing that we will see it pop up in new releases. The latest GeForce release notes claim that DirectX 12 is only available on Windows 10, although undocumented features are not exactly uncommon in the software and hardware industry. Speaking of undocumented features, World of Warcraft 8.1.5 is required for DirectX 12 on Windows 7, although this is not listed anywhere in the release notes on their blog.

Source: Microsoft

NVIDIA Explains How Higher Frame Rates Can Give You an Edge in Battle Royale Games

Subject: Graphics Cards | March 7, 2019 - 11:07 AM |
Tagged: nvidia, gaming, fps, battle royale

NVIDIA has published an article about GPU performance and its impact on gaming, specifically the ultra-popular battle royale variety. The emphasis is on latency, and on reducing it with a combination of high frame rates and a 144 Hz (or higher) refresh display. Many of these concepts may seem obvious (competitive gaming on CRTs and/or at lower resolutions for maximum performance comes to mind), but there are plenty of slides to look over - with many more over at NVIDIA's post.

"For many years, esports pros have tuned their hardware for ultra-high frame rates -- 144 or even 240 FPS -- and they pair their hardware with high refresh rate monitors. In fact, ProSettings.net and ProSettings.com report that 99% of Battle Royale Pros (Fortnite,PUBG and Apex Legends) are using 144 Hz monitors or above, and 30% are using 240 Hz monitors. This is because when you run a game, an intricate process occurs in your PC from the time you press the keyboard or move the mouse, to the time you see an updated frame on the screen. They refer to this time period as ‘latency’, and the lower your latency the better your response times will be."

battle-royale-latency-comparison.png

While a GTX 750 Ti to RTX 2080 comparison defies explanation, latency obviously drops with performance in this example

"Working with pros through NVIDIA’s research team and Esports Studio, we have seen the benefits of high FPS and refresh rates play out in directed aiming and perception tests. In blind A/B tests, pros in our labs have been able to consistently discern and see benefits from even a handful of milliseconds of reduced latency.

But what does higher frame rates and lower latency mean for your competitiveness in Battle Royale? A few things:

  • Higher FPS means that you see the next frame more quickly and can respond to it
  • Higher FPS on a high Hz monitor makes the image appear smoother, and moving targets easier to aim at. You are also less likely to see microstutters or “skip pixels” from one frame to the next as you pan your crosshair across the screen
  • Higher FPS combined with G-SYNC technologies like Ultra Low Motion Blur (ULMB) makes objects and text sharper and easier to comprehend in fast moving scenes

This is why for Battle Royale games, which rely heavily on reaction times and your ability to spot an enemy quickly, you want to play at 144 FPS or more."
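Frame time is only one link in the input-to-photon chain NVIDIA is describing, but the basic arithmetic explains the fixation on 144+ FPS. A quick sketch:

```cpp
#include <cstdio>

// Each rendered frame takes 1000 / FPS milliseconds; that delay is a hard
// floor on how fresh the image you are reacting to can possibly be.
int main() {
    for (int fps : {60, 144, 240}) {
        std::printf("%3d FPS -> %4.1f ms per frame\n", fps, 1000.0 / fps);
    }
    // 60 FPS -> 16.7 ms, 144 FPS -> 6.9 ms, 240 FPS -> 4.2 ms
    return 0;
}
```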

One of the more interesting aspects of this article relates to K/D ratios, with NVIDIA claiming an edge in this area based on GPU performance and monitor refresh rate:

battle-royale-fortnite-pubg-increase-in-kd-gpu.png

"We were curious to understand how hardware and frame rates affect overall competitiveness in Battle Royale games for everyday gamers - while better hardware can’t replace practice and training, it should assist gamers in getting closer to their maximum potential."

battle-royale-fortnite-pubg-increase-in-kd-monitor.png

"One of the common metrics of player performance in Battle Royales is kill-to-death (K/D) ratio -- how many times you killed another player divided by how many times another player killed you. Using anonymized GeForce Experience Highlights data on K/D events for PUBG and Fortnite, we found some interesting insights on player performance and wanted to share this information with the community."

battle-royale-fortnite-pubg-increase-in-kd-hours.png

For more on this topic, and many more charts, check out the article over at NVIDIA.com.

Source: NVIDIA

Chilling out with the Radeon VII

Subject: Graphics Cards | March 6, 2019 - 01:47 PM |
Tagged: radeon vii, amd, undervolting, Radeon Wattman, Radeon Chill

A question that has been asked about the new Radeon VII is how undervolting will change the performance of the card, and now [H]ard|OCP has the answer.  Making use of AMD's two tools for this, WattMan and Chill, along with the 19.2.2 driver, they tested clock speeds and temperatures while running Far Cry 5.  As it turns out, undervolting the Radeon VII has a noticeable impact on performance, increasing the average FPS from 101.5 to 105.7, while enabling Chill drops that number to 80 FPS. 

Check out the full review to see what happened to the performance in other games as well as the effect on temperatures.

1551043702oc0tueeqyz_1_6_l.png

"Is Radeon Chill or GPU undervolting the answer? We run the Radeon VII through some real world gaming and show you exactly what Chill and Undervolting will do to, or for your gameplay."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

NVIDIA Announces GeForce 419.35 WHQL Driver, RTX Triple Threat Bundle

Subject: Graphics Cards | March 5, 2019 - 09:48 AM |
Tagged: Tom Clancy’s The Division II, RTX Triple Threat Bundle, rtx, ray tracing, nvidia, geforce, gaming, devil may cry 5, bundle, Apex Legends, 419.35 WHQL

GeForce Game Ready Driver 419.35 WHQL

NVIDIA has released their latest Game Ready driver today, 419.35 WHQL, for "the optimal gaming experience for Apex Legends, Devil May Cry 5, and Tom Clancy’s The Division II". The update also adds three monitors to the G-SYNC compatible list, with the BenQ XL2540-B/ZOWIE XL LCD, Acer XF250Q, and Acer ED273 A joining the ranks.

apex-screenshot-world-overview.jpg

Apex Legends World Overview (image credit: EA)

"Our newest Game Ready Driver introduces optimizations and updates for Apex Legends, Devil May Cry 5, and Tom Clancy’s The Division II, giving you the best possible experience from the second you start playing.

In addition, we continue to optimize and improve already-released games, such as Metro Exodus, Anthem, and Battlefield V, which are included in our new GeForce RTX Triple Threat Bundle."

RTX Triple Threat Bundle

NVIDIA's latest game bundle offers desktop and laptop RTX 2060 and 2070 buyers a choice of Anthem, Battlefield V, or Metro Exodus. Buyers of the high-end RTX 2080 and 2080 Ti graphics cards (including laptops) get all three of these games.

geforce-rtx-triple-bundle.jpg

"For a limited time, purchase a qualifying GeForce RTX 2080 Ti or 2080 graphics card, gaming desktop, or gaming laptop and get Battlefield V, Anthem, and Metro Exodus (an incredible $180 value!). Pick up a qualifying GeForce RTX 2070 or 2060 graphics card, gaming desktop, or gaming laptop and get your choice of these incredible titles."

The free games offer begins today, with codes redeemable "beginning March 5, 2019 until May 2, 2019 or while supplies last." You can download the latest NVIDIA driver here.

Source: NVIDIA