NVIDIA Releases GeForce 381.65 Drivers

Subject: Graphics Cards | April 6, 2017 - 01:57 PM |
Tagged: nvidia, graphics drivers

Lining up with yesterday’s Windows 10 Creators Update opt-in, NVIDIA has released Game Ready drivers for the new OS. GeForce 381.65 also includes game-specific optimizations for the Quake Champions closed beta that you have probably seen people tweeting about over the last day or so. Also, as you would expect from a graphics card and a graphics driver launching on the same day, this version adds support for the new TITAN Xp.

nvidia-geforce.png

This driver also adds Ansel support to a pair of titles: Snake Pass and Kona. Snake Pass is a puzzle platformer with a bit of a Rare art style. Kona is a mystery title with, as NVIDIA describes it, adventure, puzzle, and survival elements, set in the fictional northern Canadian village of Atamipek Lake.

You can get the new drivers from GeForce Experience or their website.

Source: NVIDIA

NVIDIA Releases TITAN Xp with Fully-Unlocked GP102

Subject: Graphics Cards | April 6, 2017 - 01:13 PM |
Tagged: titan xp, pascal, nvidia

While I realize that it’s the other way around if anything, part of me wants to believe that NVIDIA released this new graphics card, the TITAN Xp, solely to prevent people from calling last year’s Titan X “Titan XP”. Alternatively, they could be trolling everyone, but doing so with a legit product launch.

nvidia-2017-actualtitanxp.png

The NVIDIA TITAN Xp is, finally, a fully-unlocked GP102 for the consumer market; that chip was previously exclusive to the Tesla P40 and Quadro P6000 graphics cards. The extra 256 CUDA cores and slight bump in boost clocks equate to an expected 10.7% increase in boost shader capacity (12.15 TFLOPS vs 10.97 TFLOPS). Memory bandwidth, for its 12GB of GDDR5X, has also increased from 480 GB/s to 547.7 GB/s, a 14.1% jump.
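For those who like to check the math, both figures fall out of simple formulas. Below is a quick sanity-check sketch; the core counts, boost clocks, and memory data rates are our assumptions from public spec listings (3840 cores at 1582 MHz and ~11.4 Gbps GDDR5X for the TITAN Xp, versus 3584 cores at 1531 MHz and 10 Gbps for the Titan X), not numbers from NVIDIA's announcement itself.

```c
#include <stdio.h>

int main(void) {
    /* FP32 throughput = shaders x 2 FLOPs (one fused multiply-add) x boost clock */
    double xp_tflops = 3840 * 2 * 1.582 / 1000.0;  /* ~12.15 TFLOPS */
    double x_tflops  = 3584 * 2 * 1.531 / 1000.0;  /* ~10.97 TFLOPS */

    /* Memory bandwidth = data rate (Gbps) x bus width (bits) / 8 */
    double xp_bw = 11.4 * 384 / 8;  /* ~547 GB/s; NVIDIA lists 547.7 */
    double x_bw  = 10.0 * 384 / 8;  /*  480 GB/s */

    printf("TITAN Xp: %.2f TFLOPS, %.0f GB/s\n", xp_tflops, xp_bw);
    printf("TITAN X:  %.2f TFLOPS, %.0f GB/s\n", x_tflops, x_bw);
    printf("Shader uplift: %.1f%%  Bandwidth uplift: %.1f%%\n",
           (xp_tflops / x_tflops - 1) * 100.0,
           (547.7 / 480.0 - 1) * 100.0);
    return 0;
}
```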

NVIDIA's blog post also mentions that macOS drivers are coming this month.

The NVIDIA TITAN Xp is available now from NVIDIA’s website for $1200 USD. 2016’s NVIDIA Titan X is also listed at $1200, but is out of stock for some weird reason… hmm. It’s almost like they released an all-around better product at the same price point.

Source: NVIDIA
Author:
Manufacturer: NVIDIA

Overview

Since the launch of NVIDIA's Pascal architecture with the GTX 1070 and 1080 last May, we've taken a look at a lot of Pascal-based products, including the recently launched GTX 1080 Ti. By now, it is clear that Pascal has proven itself in a gaming context.

One frequent request we get about GPU coverage is to look at professional use cases for these sorts of devices. While gaming is still far and away the most common use for GPUs, fields like high-quality rendering for architecture, as well as newer ones like deep learning, can see vast benefits from GPU acceleration.

Today, we are taking a look at some of the latest NVIDIA Quadro GPUs on the market, the Quadro P2000, P4000, and P5000. 

DSC02752.JPG

Diving deep into the technical specs of these Pascal-based Quadro products and the AMD competitor we will be testing, we find a wide range of compute capability, power consumption, and price.

                     Quadro P2000       Quadro P4000       Quadro P5000       Radeon Pro Duo
Process              16nm               16nm               16nm               28nm
Code Name            GP106              GP104              GP104              Fiji XT x 2
Shaders              1024               1792               2560               8192
Rated Clock Speed    1470 MHz (Boost)   1480 MHz (Boost)   1730 MHz (Boost)   up to 1000 MHz
Memory Width         160-bit            256-bit            256-bit            4096-bit (HBM) x 2
Compute Perf (FP32)  3.0 TFLOPS         5.3 TFLOPS         8.9 TFLOPS         16.38 TFLOPS
Compute Perf (FP64)  1/32 FP32          1/32 FP32          1/32 FP32          1/16 FP32
Frame Buffer         5GB                8GB                16GB               8GB (4GB x 2)
TDP                  75W                105W               180W               350W
Street Price         $599               $900               $2000              $800

Astute readers will notice similarities to the NVIDIA GeForce line of products as they look over these specifications.
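As a quick sanity check on the table above, the FP32 column is just shaders times 2 FLOPs per clock (one fused multiply-add) times boost clock; the sketch below uses only the figures already listed, and its output matches the table within rounding.

```c
#include <stdio.h>

struct card { const char *name; int shaders; double boost_ghz; };

int main(void) {
    /* Shader counts and boost clocks straight from the table above. */
    struct card cards[] = {
        { "Quadro P2000",   1024, 1.470 },
        { "Quadro P4000",   1792, 1.480 },
        { "Quadro P5000",   2560, 1.730 },
        { "Radeon Pro Duo", 8192, 1.000 },  /* "up to" clock, both GPUs combined */
    };

    for (int i = 0; i < 4; i++) {
        /* FP32 TFLOPS = shaders x 2 ops (FMA) x clock in GHz / 1000 */
        double fp32 = cards[i].shaders * 2 * cards[i].boost_ghz / 1000.0;
        printf("%-15s %5.2f TFLOPS FP32\n", cards[i].name, fp32);
    }
    /* FP64 follows from the ratios in the table: FP32/32 for the Pascal
       Quadros, FP32/16 for the Fiji-based Radeon Pro Duo. */
    return 0;
}
```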

Continue reading our roundup of 3 Pascal Quadro Graphics Cards!

AMD Releases Radeon Software Crimson ReLive 17.4.1

Subject: Graphics Cards | April 4, 2017 - 09:21 PM |
Tagged: graphics drivers, amd

The first AMD Radeon driver of April isn’t aligned with a major game launch. Instead, this release seems to focus on gaming technologies in general. For VR, Radeon Software Crimson ReLive 17.4.1 adds Oculus’ Asynchronous Spacewarp (ASW) to R9 Fury, R9 390, and R9 290 graphics cards. It also adds, for Windows 10, SteamVR Asynchronous Reprojection to RX 480 and RX 470 graphics cards.

amd-2016-crimson-relive-logo.png

The driver also adds a couple of extra display options based on the (also just added) DP 1.4 HBR3 cable standard. For now, it seems like it’s just (read: “just”) 8K 60 Hz dual-cable and 8K 30 Hz single-cable. The increased bandwidth also allows for several other formats, but those have nothing to do with today’s driver.
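If you're wondering where the one-versus-two-cable split comes from, the arithmetic is straightforward. This is our own back-of-envelope math, ignoring blanking intervals and assuming uncompressed 8-bit RGB; HBR3 runs 8.1 Gbps per lane across four lanes, with 8b/10b encoding leaving 80% of that for pixel data.

```c
#include <stdio.h>

int main(void) {
    /* DP 1.4 HBR3: 8.1 Gbps/lane x 4 lanes, 8b/10b encoding = 80% payload */
    double usable_gbps = 8.1 * 4 * 0.8;              /* ~25.9 Gbps per cable */

    double pixels = 7680.0 * 4320.0;                 /* one 8K frame */
    double bits_per_pixel = 24.0;                    /* 8-bit RGB, no HDR */
    double need_8k30 = pixels * bits_per_pixel * 30 / 1e9;  /* ~23.9 Gbps */
    double need_8k60 = pixels * bits_per_pixel * 60 / 1e9;  /* ~47.8 Gbps */

    printf("Usable per cable: %.1f Gbps\n", usable_gbps);
    printf("8K 30 Hz needs %.1f Gbps -> fits on one cable\n", need_8k30);
    printf("8K 60 Hz needs %.1f Gbps -> needs two cables\n", need_8k60);
    return 0;
}
```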

Update: AMD released a video on the same day to advertise 8K / HDR / FreeSync 2. Embedded below.

A few bugs were also fixed, most of them general issues not associated with specific games. Tom Clancy’s Ghost Recon Wildlands is the one exception; it should now scale better with multiple GPUs.

AMD Radeon Software Crimson ReLive 17.4.1 is now available from AMD’s website.

Source: AMD

Imagination Technologies Releases Apple GPU Loss Statement

Subject: Graphics Cards, Mobile | April 3, 2017 - 06:18 PM |
Tagged: apple, Imagination Technologies, PowerVR

This morning, Imagination Technologies Group released a press statement announcing that Apple Inc. intends to phase out their technology in 15 to 24 months. Imagination has doubts that Apple could have circumvented every piece of intellectual property, and they have requested proof from Apple that their new solution avoids all patents, trade secrets, and so forth. According to Imagination’s statement, Apple has, thus far, not provided that proof, and they don’t believe Apple’s claims.

imaginationtech-logo.png

On the one hand, it makes sense that Apple would not divulge their own trade secrets to their current partner and soon-to-be competitor until it’s necessary to do so. On the other hand, judging by previous stories, like the Intel / NVIDIA cross-license six years ago, GPUs are still a legal minefield for new players in the industry.

So, in short, Apple says they don’t need Imagination anymore, but Imagination calls bull.

From the financial side of things, Apple is a gigantic chunk of Imagination’s revenue. For the year ending on April 30th, 2016, Apple contributed about £60.7 million GBP (~$75 million USD in today’s currency) to Imagination Technologies’ revenue. Over that same period, Imagination Technologies’ entire revenue was £120.0 million GBP ($149.8 million USD in today’s currency).

imaginationtech-2017-stockfall.png

To see how losing essentially half of your revenue can damage a company, I’ve included a screenshot of their current stock price (via Google Finance... and I apologize for the tall shot). It must be a bit scary to do business with Apple, given how much revenue they can add and subtract on a moment’s notice. I’m reminded of the iPhone 6 sapphire glass issue, where GT Advanced Technologies took on a half-billion dollars of debt to produce sapphire for Apple, only to be rejected in the end. In that case, though, Apple agreed to absolve the company of its remaining debt after GT liquidated its equipment.

As for Apple’s new GPU? It will be interesting to see how it turns out. Apple already has their own low-level graphics API, Metal, so they might have a lot to gain, although some macOS and iOS applications use OpenGL and OpenGL ES.

We’ll find out in less than two years.

Vulkans on the Fury Road

Subject: Graphics Cards | April 3, 2017 - 03:15 PM |
Tagged: mad max, linux, kepler, maxwell, pascal, nvidia, vulkan, opengl

With Vulkan support being added to Mad Max, at least in beta form, Phoronix decided to take advantage of the release to test the performance of a wide variety of NVIDIA cards on the API. They grabbed over a dozen cards encompassing three different architectures, from the GTX 680 through to the GTX 1080 Ti, so you get a very good look at how NVIDIA's Vulkan performance has changed. The results are clear: in every case Vulkan was superior to OpenGL, and in many cases the framerate more than doubled. Drop by for a look at what some predicted would be a DOA API.

image.php_.jpg

"Yesterday game porter firm Feral Interactive released a public beta of Mad Max that features a Vulkan renderer in place of its OpenGL API for graphics rendering on Linux. In addition to Radeon Vulkan numbers, I posted some NVIDIA Mad Max Linux benchmarks with both renderers. Those results were exciting on the few Pascal cards tested so I have now extended that comparison to feature a line-up of 14 NVIDIA GeForce graphics cards from Kepler, Maxwell, and Pascal families while looking at this game's OpenGL vs. Vulkan performance."

Here are some more Graphics Card articles from around the web:


Source: Phoronix

Futuremark Adds Vulkan, Removes Mantle from 3DMark

Subject: Graphics Cards | March 28, 2017 - 04:32 PM |
Tagged: vulkan, DirectX 12, Futuremark, 3dmark

The latest update to 3DMark adds Vulkan support to its API Overhead test, which attempts to render as many simple objects as possible while staying above 30 FPS. This aspect of game performance allows developers to add more objects to a scene, and to design those art assets in a simpler, more straightforward way. This is now one of the first tests that can directly compare DirectX 12 and Vulkan, which we expected to be roughly equivalent but couldn’t tell for sure.

While I wasn’t able to run the tests myself, Luca Rocchi of Ocaholic gave it a shot on their Core i7-5820K and GTX 980. Apparently, Vulkan was just under 10% faster than DirectX 12 in their results, reaching 22.6 million draw calls in Vulkan versus 20.6 million in DirectX 12. Again, this is one test, done by a third party, on a single system, with a single GPU driver, on a single 3D engine, and one that is designed to stress a specific portion of the API at that; take it with a grain of salt. Still, this suggests that Vulkan can keep pace with the slightly older DirectX 12 API, and maybe even beat it.
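For intuition, the overhead test essentially boils down to the loop below: keep adding draw calls per frame until the frame rate slips under 30 FPS. The 44 ns per-call cost is a made-up constant, chosen only because it lands near Ocaholic's Vulkan figure; the real test obviously submits actual draws through each API rather than simulating them.

```c
#include <stdio.h>

/* Stand-in for the CPU-side cost of submitting one simple draw call.
   The 44 ns constant is hypothetical, for illustration only. */
static double frame_time_ms(long draws, double ns_per_draw) {
    return draws * ns_per_draw / 1e6;
}

int main(void) {
    const double floor_ms = 1000.0 / 30.0;    /* the 30 FPS floor */
    const double ns_per_draw = 44.0;
    long draws = 10000;

    /* Ramp up draws per frame until we can no longer hold 30 FPS,
       which is the essence of the API Overhead test. */
    while (frame_time_ms(draws + 10000, ns_per_draw) < floor_ms)
        draws += 10000;

    printf("~%ld draws per frame at 30 FPS\n", draws);
    printf("~%.1f million draw calls per second\n", draws * 30.0 / 1e6);
    return 0;
}
```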

This update also removed Mantle support. I just thought I’d mention that.

Source: Futuremark

OS Limitations of Vulkan Multi-GPU Support

Subject: Graphics Cards | March 21, 2017 - 07:47 PM |
Tagged: windows 10, vulkan, sli, multi-gpu, crossfire

Update (March 22nd @ 3:50pm EDT): And the Khronos Group has just responded to my follow-up questions. LDA has existed since Windows Vista, at the time for assisting with SLI and Crossfire support. Its implementation has changed in Windows 10, but that's not really relevant for Vulkan's multi-GPU support. To prove this, they showed LDA referenced in a Windows 8.1 MSDN post.

In short:

Vulkan's multi-GPU extensions can be used on Windows 7 and Windows 8.x. The exact process will vary from OS to OS, but the GPU vendor can implement these extensions if they choose, and LDA mode isn't exclusive to Windows 10.

 

Update (March 21st @ 11:55pm EDT): I came across a Microsoft Support page that discusses issues with LDA in Windows 7, so it seems like that functionality isn't limited to WDDM 2.0 and Windows 10. (Why have a support page otherwise?) Previously, I looked up an MSDN article that had it listed as a WDDM 2.0 feature, so I figured DSOGaming's assertion that it was introduced with WDDM 2.0 was correct.

As such, LDA might not require a GPU vendor's implementation at all. It'll probably be more clear when the Khronos Group responds to my earlier request, though.

That said, we're arguing over how much a GPU vendor needs to implement; either way, it will be possible to use the multi-GPU extensions in Windows 7 and Windows 8.x if the driver supports it.

Update (March 21st @ 7:30pm EDT): The Khronos Group has just released their statement. It's still a bit unclear, and I've submitted another request for clarification.

Specifically, the third statement:

If an implementation on Windows does decide to use LDA mode, it is NOT tied to Windows 10. LDA mode has been available on many versions of Windows, including Windows 7 and 8.X.

... doesn't elaborate on what is required for LDA mode on versions of Windows other than 10. (It could be Microsoft-supported, vendor-supported, or something else entirely.) I'll update again when that information is available. For now, it seems like the table below should actually look something like this:

                Implicit Multi-GPU          Explicit Multi-GPU          Unlinked Multi-GPU
                (LDA Implicit)              (LDA Explicit)              (MDA)
Windows 7       Requires GPU Vendor         Requires GPU Vendor         Supported
                LDA Implementation?         LDA Implementation?
                (Or Equivalent)             (Or Equivalent)
Windows 8.1     Requires GPU Vendor         Requires GPU Vendor         Supported
                LDA Implementation?         LDA Implementation?
                (Or Equivalent)             (Or Equivalent)
Windows 10      Supported                   Supported                   Supported
macOS           Apple doesn't allow the Vulkan API to ship in graphics drivers. At all.
Linux / etc.    Supported                   Supported                   Supported

... but we will update, again, should this be inaccurate.

Update (March 20th @ 3:50pm EDT): The Khronos Group has just responded that the other posts are incorrect. They haven't yet confirmed whether this post (which separates "device groups" from the more general "multi-GPU in Vulkan") is correct, though, because they're preparing an official statement. We'll update when we have more info.

Original Post Below (March 19th @ 9:36pm EDT)

A couple of days ago, some sites noticed a bullet point claiming that Windows-based GPU drivers will need WDDM in “linked display adapter” mode for “Native multi-GPU support for NVIDIA SLI and AMD Crossfire platforms” on Vulkan. This note came from an official slide deck by the Khronos Group, published during the recent Game Developers Conference, GDC 2017. The concern is that “linked display adapter” mode is part of WDDM 2.0, which is exclusive to Windows 10.

This is being interpreted as “Vulkan does not support multi-GPU under Windows 7 or 8.x”.

khronos-2017-vulkan-alt-logo.png

I reached out to the Khronos Group for clarification, but I’m fairly sure I know what this does (and doesn’t) mean. Rather than starting with a written-out explanation in prose, I will summarize it in a table outlining what is possible on each platform, and then elaborate below.

                Implicit Multi-GPU          Explicit Multi-GPU          Unlinked Multi-GPU
                (LDA Implicit)              (LDA Explicit)              (MDA)
Windows 7       Not available               Not available               Supported
Windows 8.1     Not available               Not available               Supported
Windows 10      Supported                   Supported                   Supported
macOS           Apple doesn't allow the Vulkan API to ship in graphics drivers. At all.
Linux / etc.    Supported                   Supported                   Supported

So the good news is that it’s possible for a game developer to support multi-GPU (through what DirectX 12 would call MDA) on Windows 7 and Windows 8.x; the bad news is that no one might bother with the heavy lifting. Linked display adapters allow the developer to assume that all GPUs have roughly the same performance, have the same amount of usable memory, and can be accessed through a single driver interface. On top of these assumptions, device groups also hide some annoying and tedious work inside the graphics driver, like producing a texture on one graphics card and quickly handing it to another GPU for rendering.
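To put the "device group" terminology in concrete terms, here is a rough sketch of how an engine would discover linked adapters under the experimental VK_KHX_device_group_creation extension that shipped with the Vulkan 1.0.42 update at GDC. The KHX names were provisional and subject to change, so treat this as an illustration rather than a recipe; verify the symbols against your actual headers.

```c
#include <stdio.h>
#include <vulkan/vulkan.h>

/* Assumes `instance` was created with VK_KHX_device_group_creation enabled. */
static void list_device_groups(VkInstance instance) {
    /* Extension entry points are fetched through the loader. */
    PFN_vkEnumeratePhysicalDeviceGroupsKHX enumerate_groups =
        (PFN_vkEnumeratePhysicalDeviceGroupsKHX)
            vkGetInstanceProcAddr(instance, "vkEnumeratePhysicalDeviceGroupsKHX");
    if (!enumerate_groups)
        return;  /* driver doesn't expose the extension */

    uint32_t count = 0;
    enumerate_groups(instance, &count, NULL);
    if (count > 8) count = 8;

    VkPhysicalDeviceGroupPropertiesKHX groups[8];
    for (uint32_t i = 0; i < count; ++i) {
        groups[i].sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES_KHX;
        groups[i].pNext = NULL;
    }
    enumerate_groups(instance, &count, groups);

    for (uint32_t i = 0; i < count; ++i) {
        /* A group of two or more devices is an LDA link (SLI / Crossfire);
           unlinked GPUs each appear as their own group of one (MDA-style). */
        printf("Device group %u: %u GPU(s)\n", i, groups[i].physicalDeviceCount);
    }
}
```

A logical VkDevice created from one of these groups can then fan work out to every physical GPU in the link, which is exactly the convenience that unlinked (MDA-style) rendering forgoes.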

Basically, if the developer is willing to go through the trouble of supporting AMD + NVIDIA or discrete GPU + integrated GPU systems, then they can support multi-GPU on Windows 7 / 8.x as well. Otherwise? Your extra GPUs will be sitting out unless you switch to DirectX 11 or OpenGL (or use them for video encoding or something else outside the game).

On the other hand, this limitation might pressure some developers to support unlinked multi-GPU configurations. There are some interesting possibilities, including post-processing and GPGPU tasks like AI visibility and physics, which might otherwise be ignored in titles whose developers were seduced by the simplicity of device groups. On the whole, device groups were apparently a high-priority request from game developers, and their inclusion will lead to more multi-GPU content. Developers who can justify doing it themselves, though, now have another reason to bother re-inventing a few wheels.

Or... you could just use Linux. That works, too.

Again, we are still waiting on the Khronos Group to confirm this story. See the latest update, above.

NVIDIA's GeForce 378.78; is this the DX12 driver you have been waiting for?

Subject: Graphics Cards | March 15, 2017 - 03:59 PM |
Tagged: nvidia, GeForce 378.78

AMD has been offering DX12 support more effectively than NVIDIA in many titles; not enough to consistently surpass the higher-end GTX cards, but certainly showing improvements. NVIDIA announced that their new driver, along with optimized support for the new Tom Clancy game, will also offer performance increases in DX12 games. [H]ard|OCP put the numbers referenced in the PR to the test in their recent review. The news is good for the games which were mentioned, but you should not expect any gains in DX11 titles with the new driver, as you can see from the benchmark results.

1489379901fkk6C8hkMC_13_1.png

"We will take the new NVIDIA GeForce 378.78 performance driver and add it to our NVIDIA Video Card Driver Performance Review graphs to see if this driver has improved performance. NVIDIA has made some very bold claims lately, so let's see if those come through as true gaming advantages."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP
Author:
Manufacturer: Various

Background and setup

A couple of weeks back, during the excitement surrounding the announcement of the GeForce GTX 1080 Ti graphics card, NVIDIA announced an update to its performance reporting project, known as FCAT, to support VR gaming. The updated iteration, FCAT VR as it is now called, gives us not only the first true ability to capture the performance of VR games and experiences, but also the tools with which to measure and compare.

Watch this video walkthrough of FCAT VR with me and NVIDIA's Tom Petersen

I already wrote an extensive preview of the tool and how it works during the announcement. I think it’s likely that many of you overlooked it with the noise from a new GPU, so I’m going to reproduce some of it here, with additions and updates. Everyone that attempts to understand the data we will be presenting in this story and all VR-based tests going forward should have a baseline understanding of the complexity of measuring VR games. Previous tools don’t tell the whole story, and even the part they do tell is often incomplete.

If you already know how FCAT VR works from reading the previous article, you can jump right to the beginning of our results here.

Measuring and validating those claims has proven to be a difficult task. Tools that we used in the era of standard PC gaming just don’t apply. Fraps is a well-known and well-understood tool for measuring frame rates and frame times, utilized by countless reviewers and enthusiasts, but Fraps lacked the ability to tell the complete story of gaming performance and experience. NVIDIA introduced FCAT, and we introduced Frame Rating, back in 2013 to expand the capabilities that reviewers and consumers had access to. Using a more sophisticated technique that includes direct capture of the graphics card output in uncompressed form, a software-based overlay applied to each frame being rendered, and post-process analysis of that data, we could communicate the smoothness of a gaming experience, better articulating it to help gamers make purchasing decisions.
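Conceptually, the analysis half of that pipeline is simple, even if the capture hardware is not: each rendered frame carries a bar of a known, repeating color sequence, and the captured video is scanned to see how many scanlines each color occupies. Here is a toy sketch with invented scanline data (FCAT's real overlay cycles through a 16-color palette, and the real analyzer handles far messier input).

```c
#include <stdio.h>

int main(void) {
    /* Overlay color index seen on each captured scanline (invented data). */
    int scanline_color[] = { 0,0,0,0,0,0, 1,1, 2,2,2,2,2,2,2, 4,4,4,4,4 };
    int n = sizeof(scanline_color) / sizeof(scanline_color[0]);
    const int runt_threshold = 3;   /* fewer scanlines than this = runt */

    int start = 0;
    for (int i = 1; i <= n; i++) {
        if (i == n || scanline_color[i] != scanline_color[start]) {
            int lines = i - start;
            int color = scanline_color[start];
            printf("color %2d: %2d scanlines%s\n", color, lines,
                   lines < runt_threshold ? "  <- runt frame" : "");
            /* A gap in the palette sequence means a frame never reached
               the display at all: a dropped frame. */
            if (i < n && scanline_color[i] != (color + 1) % 16)
                printf("color %2d: missing -> dropped frame\n", (color + 1) % 16);
            start = i;
        }
    }
    return 0;
}
```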

vrpipe1.png

For VR, though, those same tools just don’t cut it. Fraps is a non-starter, as it measures frame rendering from the GPU point of view and completely misses the interaction between the graphics system and the VR runtime environment (OpenVR for Steam/Vive and OVR for Oculus). Because the rendering pipeline is drastically changed in the current VR integrations, what Fraps measures is completely different from the experience the user actually gets in the headset. Previous FCAT and Frame Rating methods were still viable, but the tools and capture technology needed to be updated. The hardware capture products we have used since 2013 were limited in their maximum bandwidth, and the overlay software did not have the ability to “latch in” to VR-based games. Not only that, but measuring frame drops, time warps, space warps, and reprojections would be a significant hurdle without further development.

vrpipe2.png

vrpipe3.png

NVIDIA decided to undertake the task of rebuilding FCAT to work with VR. And while the company is obviously hoping that it will prove its claims of performance benefits for VR gaming, the investment of time and money in a project that is to be open sourced and freely available to the media and the public should not be overlooked.

vlcsnap-2017-02-27-11h31m17s057.png

NVIDIA FCAT VR is composed of two different applications. The FCAT VR Capture tool runs on the PC being evaluated and has a similar appearance to other performance and timing capture utilities. It uses data from Oculus event tracing (part of Windows ETW) and SteamVR’s performance API, along with NVIDIA driver stats when used on NVIDIA hardware, to generate performance data. It does work perfectly well on any GPU vendor’s hardware, though, with access to the VR vendors’ specific timing results.
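To make that concrete, here is a toy version of the kind of classification the analysis side performs: given timestamps for when the application finished each frame, walk the ~11.1 ms refresh intervals of a 90 Hz headset and decide whether the compositor had a fresh frame to show or had to re-present (reproject) the previous one. The timestamps below are invented; FCAT VR derives the real ones from the ETW and SteamVR event data described above.

```c
#include <stdio.h>

int main(void) {
    const double vsync_ms = 1000.0 / 90.0;      /* ~11.1 ms per refresh */
    /* Hypothetical times (ms) at which the app finished rendering frames. */
    double frame_done[] = { 5.0, 15.0, 26.0, 50.0, 60.0 };
    int n = sizeof(frame_done) / sizeof(frame_done[0]);
    int delivered = 0, reshown = 0, next = 0;

    for (int interval = 1; interval <= 6; interval++) {
        double deadline = interval * vsync_ms;
        if (next < n && frame_done[next] <= deadline) {
            delivered++; next++;    /* a fresh frame made this vsync */
        } else {
            reshown++;              /* runtime re-presents the last frame */
        }
    }
    printf("%d new frames, %d reprojected intervals\n", delivered, reshown);
    return 0;
}
```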

fcatvrcapture.jpg

Continue reading our first look at VR performance testing with FCAT VR!!