NVIDIA to Add Real-Time Ray Tracing Support to Pascal GPUs via April Driver Update

Subject: Graphics Cards | March 18, 2019 - 09:41 PM |
Tagged: unreal engine, Unity, turing, rtx, ray tracing, pascal, nvidia, geforce, GTC 19, GTC, gaming, developers

Today at GTC NVIDIA announced a few things of particular interest to gamers, including GameWorks RTX and the implementation of real-time ray tracing in upcoming versions of both Unreal Engine and Unity (we already posted the news that CRYENGINE will be supporting real-time ray tracing as well). But there is something else... NVIDIA is bringing ray tracing support to GeForce GTX graphics cards.

DXR_GPUs.png

This surprising turn means that real-time ray tracing support won’t be limited to RTX cards after all, as the install base of NVIDIA ray tracing-capable GPUs “grows to tens of millions” with a simple driver update next month, adding the feature to both previous-gen Pascal and the new Turing-based GTX GPUs.

How is this possible? It’s all about the programmable shaders:

“NVIDIA GeForce GTX GPUs powered by Pascal and Turing architectures will be able to take advantage of ray tracing-supported games via a driver expected in April. The new driver will enable tens of millions of GPUs for games that support real-time ray tracing, accelerating the growth of the technology and giving game developers a massive installed base.

With this driver, GeForce GTX GPUs will execute ray traced effects on shader cores. Game performance will vary based on the ray-traced effects and on the number of rays cast in the game, along with GPU model and game resolution. Games that support the Microsoft DXR and Vulkan APIs are all supported.

However, GeForce RTX GPUs, which have dedicated ray tracing cores built directly into the GPU, deliver the ultimate ray tracing experience. They provide up to 2-3x faster ray tracing performance with a more visually immersive gaming environment than GPUs without dedicated ray tracing cores.”

A very important caveat is the “2-3x faster ray tracing performance” claim for GeForce RTX graphics cards in that last paragraph; expectations will need to be tempered, as RT effects will be much less efficient running on shader cores (Pascal and Turing GTX) than they are on dedicated RT cores, as these charts demonstrate:

BFV_CHART.png

METRO_EXODUS_CHART.png

SOTTR_CHART.png
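For developers wondering how a game detects whether a given GPU exposes DXR at all - whether via dedicated RT cores or the shader-core path this driver enables - Direct3D 12 reports it through its feature-support query. Below is a minimal sketch of that check (error handling trimmed, default adapter assumed); it is an illustration of the API, not NVIDIA's driver code:

```cpp
// Minimal sketch: query whether the installed driver exposes DXR (DirectX Raytracing).
// Tier 1.0+ is reported by RTX cards (RT cores) and, once the April driver lands,
// should also be reported by supported GTX Pascal/Turing cards running DXR on shaders.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5)))
        && options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        std::printf("DXR supported (tier %d) - ray traced effects can be enabled.\n",
                    static_cast<int>(options5.RaytracingTier));
    } else {
        std::printf("DXR not supported on this GPU/driver.\n");
    }
    return 0;
}
```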

It's going to be a busy April.

Source: NVIDIA
Manufacturer: PC Perspective

AMD and NVIDIA GPUs Tested

Tom Clancy’s The Division 2 launched over the weekend and we've been testing it out over the past couple of days with a collection of currently-available graphics cards. Of interest to AMD fans, this game joins the ranks of those well optimized for Radeon graphics, and with a new driver (Radeon Software Adrenalin 2019 Edition 19.3.2) released over the weekend as well, it was a good time to run some benchmarks and see how a selection of AMD and NVIDIA hardware stacks up.

d2-key-art-1920x600.jpg

The Division 2 offers DirectX 11 and 12 support, and uses Ubisoft's Snowdrop engine to provide some impressive visuals, particularly at the highest detail settings. We found the "ultra" preset to be quite attainable with very playable frame rates from most midrange-and-above hardware even at 2560x1440, though bear in mind that this game uses quite a bit of video memory. We hit a performance ceiling at 4GB with the "ultra" preset even at 1080p, so we opted for 6GB+ graphics cards for our final testing. And while most of our testing was done at 1440p we did test a selection of cards at 1080p and 4K, just to provide a look at how the GPUs on test scaled when facing different workloads.

Tom Clancy's The Division 2

d2-screen1-1260x709.jpg

Washington D.C. is on the brink of collapse. Lawlessness and instability threaten our society, and rumors of a coup in the capitol are only amplifying the chaos. All active Division agents are desperately needed to save the city before it's too late.

d2-screen4-1260x709.jpg

Developed by Ubisoft Massive and the same teams that brought you Tom Clancy’s The Division, Tom Clancy’s The Division 2 is an online open world, action shooter RPG experience set in a collapsing and fractured Washington, D.C. This rich new setting combines a wide variety of beautiful, iconic, and realistic environments where the player will experience the series’ trademark for authenticity in world building, rich RPG systems, and fast-paced action like never before.

d2-screen3-1260x709.jpg

Play solo or co-op with a team of up to four players to complete a wide range of activities, from the main campaign and adversarial PvP matches to the Dark Zone – where anything can happen.

Continue reading our preview of GPU performance with The Division 2

Remember Conservative Morphological Anti-Aliasing?

Subject: Graphics Cards | March 18, 2019 - 03:13 PM |
Tagged: fxaa, SMAA, Anti-aliasing, MLAA, taa, amd, nvidia

Apart from the new DLSS available on NVIDIA's RTX cards, it has been a very long time since we looked at anti-aliasing implementations and the effect your choice has on performance and visual quality. You are likely familiar with the most common implementations, from AMD's MLAA and NVIDIA's FXAA (rarely used in current-generation games) through to TAA/TXAA and SMAA, but when was the last time you refreshed your memory on what they actually do and how they compare?

Not only does Overclockers Club look into those, they also discuss some of the other attempted implementations, as well as the sampling types that lie behind these technologies. Check out their deep dive here.

anti.PNG

"One setting present in many if not all modern PC games that can dramatically impact performance and quality is anti-aliasing and, to be honest, I never really understood how it works. Sure we have the general idea that super-sampling is in effect running at a higher resolution and then downscaling, but then what is multi-sampling? How do post-processing methods work, like the very common FXAA and often favored SMAA?"


AMD Releases Radeon Software Adrenalin 2019 Edition 19.3.2 with Windows 7 DX12 Support

Subject: Graphics Cards | March 14, 2019 - 08:38 PM |
Tagged: Windows 7, The Division 2, radeon, graphics, gpu, gaming, dx12, driver, DirectX 12, amd, Adrenalin, 19.3.2

AMD has released Radeon 19.3.2 drivers, adding support for Tom Clancy’s The Division 2 and offering a performance boost with Civilization VI: Gathering Storm. This update also adds a number of new Vulkan extensions. But wait, there's more: "DirectX 12 on Windows 7 for supported game titles." The DX12-ening is upon us.

adrenalin.PNG

Here are AMD's release notes for 19.3.2:

Radeon Software Adrenalin 2019 Edition 19.3.2 Highlights

Support For

  • Tom Clancy’s The Division® 2
  • Sid Meier’s Civilization® VI: Gathering Storm
    • Up to 4% average performance gains on AMD Radeon VII with Radeon™ Software Adrenalin 2019 Edition 19.3.2 vs 19.2.3. RS-288
  • DirectX® 12 on Windows® 7 for supported game titles
    • AMD is thrilled to help expand DirectX® 12 adoption across a broader range of Windows operating systems with Radeon Software Adrenalin 2019 Edition 18.12.2 and onward, which enables consumers to experience exceptional levels of detail and performance in their games.

Fixed Issues

  • Radeon ReLive for VR may sometimes fail to install during Radeon Software installation.
  • Fan curve may fail to switch to manual mode after the manual toggle is switched when fan curve is still set to default behavior.
  • Changes made in Radeon WattMan settings via Radeon Overlay may sometimes not save or take effect once Radeon Overlay is closed.

Known Issues

  • Rainbow Six Siege™ may experience intermittent corruption or flickering on some game textures during gameplay.
  • DOTA™2 VR may experience stutter on some HMD devices when using the Vulkan® API.
  • Mouse cursors may disappear or move out of the boundary of the top of a display on AMD Ryzen Mobile Processors with Radeon Vega Graphics.
  • Performance metrics overlay and Radeon WattMan gauges may experience inaccurate fluctuating readings on AMD Radeon VII.

More release notes after the break.

Source: AMD

Need a new NVIDIA GPU but don't want to get Ti'd down in debt?

Subject: Graphics Cards | March 14, 2019 - 01:33 PM |
Tagged: video card, turing, rtx, nvidia, gtx 1660 ti, gtx 1660, gtx 1060, graphics card, geforce, GDDR5, gaming, 6Gb

Sebastian has given you a look at the triple-slot EVGA GTX 1660 XC Black as well as the dual-fan, dual-slot MSI GTX 1660 GAMING X, both doing well in benchmarks, especially when overclocked.  The new GTX 1660 does come in other shapes and sizes, like the dual-slot, single-fan GTX 1660 StormX OC 6G from Palit, which The Guru of 3D reviewed.  Do not underestimate it because of its diminutive size: the boost clock is 1830MHz out of the box, with some tweaking it will sit around 2070MHz, and the GDDR5 can be pushed up to an effective 9800MHz.

Check out even more models below.

img_7957.jpg

"We review a GeForce GTX 1660 that is priced spot on that 219 USD marker, the MSRP of the new non-Ti model, meet the petite Palit GeForce GTX 1660 StormX OC edition. Based on a big single fan and a small form factor you should not be fooled by its looks. It performs well on all fronts, including cooling acoustic levels."


Source: Guru of 3D
Manufacturer: NVIDIA

Turing at $219

NVIDIA has introduced another midrange GPU with today’s launch of the GTX 1660. It joins the GTX 1660 Ti as the company’s answer to high frame rate 1080p gaming, and hits a more aggressive $219 price point, with the GTX 1660 Ti starting at $279. What has changed, and how close is this 1660 to the “Ti” version launched just last month? We find out here.

GTX_1660_cards.jpg

RTX and Back Again

We are witnessing a shift in branding from NVIDIA, as GTX was supplanted by RTX with the introduction of the 20 series, only to see “RTX” give way to GTX as we moved down the product stack beginning with the GTX 1660 Ti. This has been a potentially confusing change for consumers used to the annual uptick in series number. Most recently we saw the 900 series move logically to 1000 series (aka 10 series) cards, so when the first 2000 series cards were released it seemed as if the 20 series would be a direct successor to the GTX cards of the previous generation.

But RTX ended up being more of a feature-level designation, and not so much a new branding for GeForce cards as we had anticipated. No, GTX is here to stay, it appears, and what then of the RTX cards and their real-time ray tracing capabilities? Here the conversation shifts to higher price tags and the viability of early adoption of ray tracing tech, and enter the internet of outspoken individuals who decry ray tracing, and more so DLSS: NVIDIA’s proprietary deep learning secret sauce that has seemingly become as controversial as the Genesis planet in Star Trek III.

|                  | GTX 1660  | GTX 1660 Ti | RTX 2060  | RTX 2070  | GTX 1080   | GTX 1070  | GTX 1060 6GB |
|------------------|-----------|-------------|-----------|-----------|------------|-----------|--------------|
| GPU              | TU116     | TU116       | TU106     | TU106     | GP104      | GP104     | GP106        |
| Architecture     | Turing    | Turing      | Turing    | Turing    | Pascal     | Pascal    | Pascal       |
| SMs              | 22        | 24          | 30        | 36        | 20         | 15        | 10           |
| CUDA Cores       | 1408      | 1536        | 1920      | 2304      | 2560       | 1920      | 1280         |
| Tensor Cores     | N/A       | N/A         | 240       | 288       | N/A        | N/A       | N/A          |
| RT Cores         | N/A       | N/A         | 30        | 36        | N/A        | N/A       | N/A          |
| Base Clock       | 1530 MHz  | 1500 MHz    | 1365 MHz  | 1410 MHz  | 1607 MHz   | 1506 MHz  | 1506 MHz     |
| Boost Clock      | 1785 MHz  | 1770 MHz    | 1680 MHz  | 1620 MHz  | 1733 MHz   | 1683 MHz  | 1708 MHz     |
| Texture Units    | 88        | 96          | 120       | 144       | 160        | 120       | 80           |
| ROPs             | 48        | 48          | 48        | 64        | 64         | 64        | 48           |
| Memory           | 6GB GDDR5 | 6GB GDDR6   | 6GB GDDR6 | 8GB GDDR6 | 8GB GDDR5X | 8GB GDDR5 | 6GB GDDR5    |
| Memory Data Rate | 8 Gbps    | 12 Gbps     | 14 Gbps   | 14 Gbps   | 10 Gbps    | 8 Gbps    | 8 Gbps       |
| Memory Interface | 192-bit   | 192-bit     | 192-bit   | 256-bit   | 256-bit    | 256-bit   | 192-bit      |
| Memory Bandwidth | 192.1 GB/s | 288.1 GB/s | 336.1 GB/s | 448.0 GB/s | 320.3 GB/s | 256.3 GB/s | 192.2 GB/s |
| Transistor Count | 6.6B      | 6.6B        | 10.8B     | 10.8B     | 7.2B       | 7.2B      | 4.4B         |
| Die Size         | 284 mm²   | 284 mm²     | 445 mm²   | 445 mm²   | 314 mm²    | 314 mm²   | 200 mm²      |
| Process Tech     | 12 nm     | 12 nm       | 12 nm     | 12 nm     | 16 nm      | 16 nm     | 16 nm        |
| TDP              | 120W      | 120W        | 160W      | 175W      | 180W       | 150W      | 120W         |
| Launch Price     | $219      | $279        | $349      | $499      | $599       | $379      | $299         |

So what is a GTX 1660 minus the “Ti”? A hybrid product of sorts, it turns out. The card is based on the same TU116 GPU as the GTX 1660 Ti, and while the Ti features the full version of TU116, this non-Ti version has two of the SMs disabled, bringing the count from 24 to 22. This results in a total of 1408 CUDA cores - down from 1536 with the GTX 1660 Ti. This 128-core drop is not as large as I was expecting from the vanilla 1660, and with the same memory specs the capabilities of this card would not fall far behind - but this card uses the older GDDR5 standard, matching the 8 Gbps speed and 192 GB/s bandwidth of the outgoing GTX 1060, and not the 12 Gbps GDDR6 and 288.1 GB/s bandwidth of the GTX 1660 Ti.
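Those bandwidth figures fall straight out of the memory interface width and effective data rate, if you want to sanity-check the GDDR5-versus-GDDR6 gap yourself. A quick back-of-the-envelope calculation, nothing more:

```cpp
// Peak memory bandwidth = (bus width in bits / 8 bytes) * effective data rate in Gbps.
#include <cstdio>

double bandwidth_gb_s(int bus_width_bits, double data_rate_gbps)
{
    return bus_width_bits / 8.0 * data_rate_gbps;   // result in GB/s
}

int main()
{
    std::printf("GTX 1660    (192-bit,  8 Gbps GDDR5): %.0f GB/s\n", bandwidth_gb_s(192, 8.0));
    std::printf("GTX 1660 Ti (192-bit, 12 Gbps GDDR6): %.0f GB/s\n", bandwidth_gb_s(192, 12.0));
    return 0;   // prints 192 GB/s and 288 GB/s, matching the table above
}
```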

Continue reading our review of the NVIDIA GeForce GTX 1660 graphics card

MSI's RTX 2080 Ti Gaming X Trio is a great card for those who tweak

Subject: Graphics Cards | March 13, 2019 - 02:50 PM |
Tagged: msi, RTX 2080 Ti, gaming x trio

MSI's RTX 2080 Ti Gaming X Trio is a beefy card; at 32.5cm (12.8") in length, you should check the clearance in your case before considering a purchase.  Right out of the box the boost clock is 1755MHz, which doesn't really represent what this card is capable of: [H]ard|OCP found it sits around 1900MHz before they even started tweaking.  With a bit of TLC they saw the clock spike all the way to 2070MHz, though for the most part it ran just above 2000MHz, which had a noticeable impact on performance.  That still wasn't enough to provide a decent experience playing Metro Exodus at 4K with Ultra Ray Tracing enabled; with ray tracing disabled the card happily managed 70 FPS with all the other bells and whistles turned on.

Check the numbers yourself in the full review.

155198019353empmcz7i_1_1.png

"The MSI GeForce RTX 2080 Ti GAMING X TRIO takes the GeForce RTX 2080 Ti to new heights in performance and overclocking. We’ve got Metro Exodus NVIDIA Ray Tracing performance, 1440p and 4K gameplay, and we compare this video card overclocked as well as in silent mode for more efficient gameplay."


Source: [H]ard|OCP

Report: AMD's Upcoming Navi GPUs Launching in August

Subject: Graphics Cards | March 13, 2019 - 02:41 PM |
Tagged: report, rumor, wccftech, amd, navi, gpu, graphics, video card, 7nm, radeon

Could Navi be coming a bit sooner than we expected? I'll quote directly from the sourced report by Usman Pirzada over at WCCFtech:

"I have been told that AMD’s Navi GPU is at least one whole month behind AMD’s 7nm Ryzen launch, so if the company launches the 3000 series desktop processors at Computex like they are planning to, you should not expect the Navi GPU to land before early August. The most likely candidates for launch during this window are Gamescom and Siggraph. I would personally lean towards Gamescom simply because it is a gaming product and is the more likely candidate but anything can happen with AMD!

Some rumors previously had suggested an October launch, but as of now, AMD is telling its partners to expect the launch exactly a month after the Ryzen 7nm launch."

AMD GPU Roadmap.png

Paying particular attention to the second paragraph of the quote above: if this report is coming from board partners, we will probably start seeing leaked box art and all the fixings from VideoCardz as August nears - if indeed July is the release month for the Ryzen 3000 series CPUs (and come on, how could they pass up a 7/7 launch for the 7nm CPUs?).

Source: Wccftech

Uh... WoW... World of Warcraft DirectX 12 on Windows 7

Subject: General Tech, Graphics Cards | March 12, 2019 - 04:53 PM |
Tagged: pc gaming, wow, blizzard, microsoft, DirectX 12, dx12

Microsoft has just announced that they ported the DirectX 12 runtime to Windows 7 for World of Warcraft and other, unannounced games. This allows those games to run the new graphics API with its more-efficient framework of queuing work on GPUs, with support from Microsoft. I should note that the benchmarks for DirectX 12 in WoW are hit or miss, so I’m not sure whether it’s better to select DX11 or DX12 for any given PC, but you are free to try.

microsoft-2015-directx12-logo.jpg

This does not port other graphics features, like the updated driver model, which leads to this excerpt from the DirectX blog post:

How are DirectX 12 games different between Windows 10 and Windows 7?
Windows 10 has critical OS improvements which make modern low-level graphics APIs (including DirectX 12) run more efficiently. If you enjoy your favorite games running with DirectX 12 on Windows 7, you should check how those games run even better on Windows 10!

Just make sure you don’t install KB4482887? Trollolololol. Such unfortunate timing.

Of course, Vulkan also exists, and has supported Windows 7 since its creation. Further, both DirectX 12 and Vulkan have forked away from Mantle, which, of course, supported Windows 7. (AMD’s Mantle API pre-dates Windows 10.) The biggest surprise is that Microsoft released such a big API onto Windows 7 even though it is in extended support. I am curious what led to this exception, such as cyber cafés or other international trends, because I really have no idea.

As for graphics drivers? I am guessing that we will see support pop up in new releases. The latest GeForce release notes claim that DirectX 12 is only available on Windows 10, although undocumented features are not exactly uncommon in the software and hardware industry. Speaking of undocumented features, World of Warcraft 8.1.5 is required for DirectX 12 on Windows 7, although this is not listed anywhere in the release notes on their blog.
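How a game decides whether the DX12 path is even usable on a given OS and driver combination is, at its simplest, a runtime probe: try to load the D3D12 runtime and see whether a device could be created, and fall back to D3D11 otherwise. Here is a rough, hypothetical sketch of that pattern - not Blizzard's or Microsoft's actual code:

```cpp
// Rough sketch: probe for a usable D3D12 runtime at startup, fall back to D3D11.
// On stock Windows 7 d3d12.dll normally isn't present, so LoadLibrary failing is
// the expected "use DX11" signal unless the game ships the ported runtime alongside it.
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

bool d3d12_available()
{
    HMODULE d3d12 = LoadLibraryW(L"d3d12.dll");
    if (!d3d12)
        return false;   // runtime not installed next to the game / not part of the OS

    auto createDevice = reinterpret_cast<PFN_D3D12_CREATE_DEVICE>(
        GetProcAddress(d3d12, "D3D12CreateDevice"));
    if (!createDevice)
        return false;

    // Passing nullptr for the output device just asks "could a device be created?"
    return SUCCEEDED(createDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                  __uuidof(ID3D12Device), nullptr));
}

int main()
{
    std::printf(d3d12_available() ? "Using DirectX 12 renderer\n"
                                  : "Falling back to DirectX 11 renderer\n");
    return 0;
}
```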

Source: Microsoft

NVIDIA Explains How Higher Frame Rates Can Give You an Edge in Battle Royale Games

Subject: Graphics Cards | March 7, 2019 - 11:07 AM |
Tagged: nvidia, gaming, fps, battle royale

NVIDIA has published an article about GPU performance and its impact on gaming, specifically the ultra-popular battle royale variety. The emphasis is on latency, and on reducing it by gaming with a combination of high frame rates and a 144 Hz (or higher) refresh display. Many of these concepts may seem obvious (competitive gaming on CRTs and/or at lower resolutions for maximum performance comes to mind), but there are plenty of slides to look over - with many more over at NVIDIA's post.

"For many years, esports pros have tuned their hardware for ultra-high frame rates -- 144 or even 240 FPS -- and they pair their hardware with high refresh rate monitors. In fact, ProSettings.net and ProSettings.com report that 99% of Battle Royale Pros (Fortnite,PUBG and Apex Legends) are using 144 Hz monitors or above, and 30% are using 240 Hz monitors. This is because when you run a game, an intricate process occurs in your PC from the time you press the keyboard or move the mouse, to the time you see an updated frame on the screen. They refer to this time period as ‘latency’, and the lower your latency the better your response times will be."

battle-royale-latency-comparison.png

While a GTX 750 Ti to RTX 2080 comparison defies explanation, latency obviously drops with performance in this example

"Working with pros through NVIDIA’s research team and Esports Studio, we have seen the benefits of high FPS and refresh rates play out in directed aiming and perception tests. In blind A/B tests, pros in our labs have been able to consistently discern and see benefits from even a handful of milliseconds of reduced latency.

But what does higher frame rates and lower latency mean for your competitiveness in Battle Royale? A few things:

  • Higher FPS means that you see the next frame more quickly and can respond to it
  • Higher FPS on a high Hz monitor makes the image appear smoother, and moving targets easier to aim at. You are also less likely to see microstutters or “skip pixels” from one frame to the next as you pan your crosshair across the screen
  • Higher FPS combined with G-SYNC technologies like Ultra Low Motion Blur (ULMB) makes objects and text sharper and easier to comprehend in fast moving scenes

This is why for Battle Royale games, which rely heavily on reaction times and your ability to spot an enemy quickly, you want to play at 144 FPS or more."
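The frame-rate side of that argument is simple arithmetic: the rendering portion of the latency chain shrinks as FPS rises, since each frame takes 1000/FPS milliseconds. A quick worked example, covering only the per-frame interval rather than the full end-to-end chain NVIDIA measures:

```cpp
// Per-frame interval at various frame rates: frame_time_ms = 1000 / fps.
#include <cstdio>

int main()
{
    const double rates[] = { 60.0, 144.0, 240.0 };
    for (double fps : rates)
        std::printf("%3.0f FPS -> %.2f ms per frame\n", fps, 1000.0 / fps);
    // 60 FPS -> 16.67 ms, 144 FPS -> 6.94 ms, 240 FPS -> 4.17 ms
    return 0;
}
```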

One of the more interesting aspects of this article relates to K/D ratios, with NVIDIA claiming an edge in this area based on GPU performance and monitor refresh rate:

battle-royale-fortnite-pubg-increase-in-kd-gpu.png

"We were curious to understand how hardware and frame rates affect overall competitiveness in Battle Royale games for everyday gamers - while better hardware can’t replace practice and training, it should assist gamers in getting closer to their maximum potential."

battle-royale-fortnite-pubg-increase-in-kd-monitor.png

"One of the common metrics of player performance in Battle Royales is kill-to-death (K/D) ratio -- how many times you killed another player divided by how many times another player killed you. Using anonymized GeForce Experience Highlights data on K/D events for PUBG and Fortnite, we found some interesting insights on player performance and wanted to share this information with the community."

battle-royale-fortnite-pubg-increase-in-kd-hours.png

For more on this topic, and many more charts, check out the article over at NVIDIA.com.

Source: NVIDIA