Teaching an old star new tricks, the Radeon RX 580

Subject: Graphics Cards | April 18, 2017 - 04:04 PM |
Tagged: RX 580, radeon, Polaris, amd, powercolor, red devil

Ryan covered the improvements the RX 580 offers over the previous Polaris-based cards: a higher rated clock and a standardized 8 GHz memory frequency across all RX 580 models.  That leads to the expected increase in performance compared to the RX 480, in a marketplace somewhat different from the one the first Polaris chips arrived in.  Consumers now know what NVIDIA's current generation cards provide in performance, and prices have settled as much as can be expected in the volatile GPU market.  Those using cards several generations old may be more receptive to an upgrade than they were with the previous generation, especially as the next large launches are some time off; we shall see if this is true in the coming months.

One particular reason to consider upgrading is VR support, something [H]ard|OCP covers in their review.  The improved speeds do not work miracles in their VR Leaderboard; however, they do show improvements in some games such as Serious Sam, with reprojection rates dropping markedly.

1492479839NWuo0n8qGF_2_4_l.jpg

"AMD is launching the AMD Radeon RX 500 series today, and we lead with a custom retail Radeon RX 580 GPU based video card from PowerColor. We’ll take the Red Devil RX 580 Golden Sample video card through the paces and see how it compares to the competition at the same price point."


Source: [H]ard|OCP
Author:
Manufacturer: AMD

What is old is new again

Trust me on this one – AMD is aware that launching the RX 500-series of graphics cards, including the RX 580 we are reviewing today, is an uphill battle. Besides battling the sounds on the hills that whisper “reeebbrraannndd”, AMD needs to work with its own board partners to offer up total solutions that compete well with NVIDIA’s stronghold on the majority of the market. Just putting out the Radeon RX 580 and RX 570 cards with the same coolers and specs as the RX 400-series would be a recipe for ridicule. AMD is aware of this and is being surprisingly proactive in the story it is telling consumers and the media.

  • If you already own a Radeon RX 400-series card, the RX 500-series is not expected to be an upgrade path for you.
     
  • The Radeon RX 500-series is NOT based on Vega. It's Polaris here, everyone.
     
  • Target users are those with Radeon R9 380 class cards and older – Polaris is still meant as an upgrade for that very large user base.

The story being told is compelling, more so than you might expect. With more than 500 million gamers using graphics cards that are two years old or older, based on Steam survey data, there is a HUGE audience that would benefit from an RX 580 graphics card upgrade. Older cards may lack support for FreeSync, HDR, higher refresh rate HDMI output, and hardware encode/decode support for 4K resolution content. And while the GeForce GTX 1060 family would also meet those criteria, AMD wants to make the case that the Radeon family is the way to go.

slides-10.jpg

The Radeon RX 500-series is based on the same Polaris architecture as the RX 400-series, though AMD would tell us that the technology has been refined since the initial launch. More time with the 14nm FinFET process technology has given the fab facility, and AMD, some opportunities to refine. This gives the new GPUs the ability to scale to higher clocks than they could before (though not without the cost of additional power draw). AMD has also tweaked multi-monitor efficiency modes, allowing idle power consumption to drop a handful of watts thanks to an adjusted pixel clock.

Maybe the most substantial change with this RX 580 release is the removal of any kind of power consumption constraints on the board partners. The Radeon RX 480 launch was marred by issues surrounding the amount of power AMD claimed the boards would use compared to how much they DID use. This time around, all RX 580 graphics cards will ship with AT LEAST an 8-pin power connector, allowing overclocked models to draw as much as 225 watts (150 watts from the 8-pin connector plus 75 watts from the PCI Express slot). Some cards will have an 8+6-pin configuration to go even higher. Considering the RX 480 launched with a supposed 150 watt TDP (that it never lived up to), that’s quite an increase.

slides-13.jpg

AMD is hoping to convince gamers that Radeon Chill is a good solution for some specific instances of excessive power draw. Recent drivers have added support for games like League of Legends and DOTA 2, joining The Witcher 3, Deus Ex: Mankind Divided, and more. I will freely admit that while the technology behind Chill sounds impressive, I don’t yet have enough experience with it to confirm or dispute its supposed ability to cut power draw without sacrificing the user experience.

Continue reading our review of the Radeon RX 580 graphics card!

NVIDIA Releases GeForce 381.65 Drivers

Subject: Graphics Cards | April 6, 2017 - 01:57 PM |
Tagged: nvidia, graphics drivers

Lining up with yesterday’s Windows 10 Creators Update opt-in, NVIDIA has released Game Ready drivers for the new OS. GeForce 381.65 also includes game-specific optimizations for the Quake Champions closed beta that you have probably seen people tweeting about over the last day or so. Also, as you would expect from a graphics card and graphics driver launching on the same day, this version adds support for the new TITAN Xp.

nvidia-geforce.png

This driver also adds Ansel support to a pair of titles: Snake Pass and Kona. Snake Pass is a puzzle platformer with a bit of a Rare art style. Kona is a mystery title with, as NVIDIA describes it, adventure, puzzle, and survival elements, set in the fictional northern Canadian village of Atamipek Lake.

You can get the new drivers from GeForce Experience or their website.

Source: NVIDIA

NVIDIA Releases TITAN Xp with Fully-Unlocked GP102

Subject: Graphics Cards | April 6, 2017 - 01:13 PM |
Tagged: titan xp, pascal, nvidia

While I realize that it’s the other way around if anything, part of me wants to believe that NVIDIA released this new graphics card, the TITAN Xp, solely to prevent people from calling last year’s Titan X “Titan XP”. Alternatively, they could be trolling everyone, but doing so with a legit product launch.

nvidia-2017-actualtitanxp.png

The NVIDIA TITAN Xp is, finally, a fully-unlocked GP102 for the consumer market, a chip that was previously exclusive to the Tesla P40 and Quadro P6000 graphics cards. The extra 256 CUDA cores and slight bump in boost clocks equate to an expected 10.7% increase in boost shader capacity (12.15 TFLOPs vs 10.97 TFLOPs). Memory bandwidth, for its 12GB of GDDR5X, has also increased from 480 GB/s to 547.7 GB/s, a 14.1% increase.
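For the curious, those throughput figures fall out of the usual back-of-the-envelope math. The quick sketch below assumes the commonly cited TITAN Xp specs – 3840 CUDA cores, a boost clock of roughly 1582 MHz, and 11.4 Gbps GDDR5X on a 384-bit bus – none of which are spelled out in NVIDIA's announcement, so treat it as a sanity check rather than official numbers.

    /* Back-of-the-envelope check of the TITAN Xp figures quoted above.
     * Assumed specs (not from NVIDIA's post): 3840 CUDA cores, ~1582 MHz
     * boost, 11.4 Gbps GDDR5X on a 384-bit memory bus. */
    #include <stdio.h>

    int main(void) {
        double cores = 3840.0, boost_ghz = 1.582;    /* fully-unlocked GP102 */
        double mem_gbps = 11.4, bus_bits = 384.0;    /* GDDR5X data rate, bus width */

        /* FP32 throughput: 2 FLOPs (one fused multiply-add) per core per clock */
        double tflops = cores * 2.0 * boost_ghz / 1000.0;

        /* Peak memory bandwidth: per-pin data rate times bus width, in bytes */
        double bandwidth_gbs = mem_gbps * bus_bits / 8.0;

        printf("FP32: %.2f TFLOPS, bandwidth: %.1f GB/s\n", tflops, bandwidth_gbs);
        /* Prints roughly 12.15 TFLOPS and 547.2 GB/s, close to the
         * 12.15 TFLOPs and ~547.7 GB/s figures quoted above. */
        return 0;
    }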

NVIDIA's blog post also mentions that macOS drivers are coming this month.

The NVIDIA TITAN Xp is available now from NVIDIA’s website for $1200 USD. 2016’s NVIDIA Titan X is also listed at $1200, but is out of stock for some weird reason… hmm. It’s almost like they released an all-around better product at the same price point.

Source: NVIDIA
Author:
Manufacturer: NVIDIA

Overview

Since the launch of NVIDIA's Pascal architecture with the GTX 1070 and 1080 last May, we've taken a look at a lot of Pascal-based products, including the recent launch of the GTX 1080 Ti. By now, it is clear that Pascal has proven itself in a gaming context.

One frequent request we get about GPU coverage is to look at professional use cases for this sort of device. While gaming is still far and away the most common use for GPUs, workloads like high-quality rendering in industries such as architecture, as well as newer fields like deep learning, can see vast benefits from GPU acceleration.

Today, we are taking a look at some of the latest NVIDIA Quadro GPUs on the market, the Quadro P2000, P4000, and P5000. 

DSC02752.JPG

Diving deep into the technical specs of these Pascal-based Quadro products and the AMD competitor we will be testing,  we find a wide range of compute capability, power consumption, and price.

                      Quadro P2000       Quadro P4000       Quadro P5000       Radeon Pro Duo
Process               16nm               16nm               16nm               28nm
Code Name             GP106              GP104              GP104              Fiji XT x 2
Shaders               1024               1792               2560               8192
Rated Clock Speed     1470 MHz (Boost)   1480 MHz (Boost)   1730 MHz (Boost)   up to 1000 MHz
Memory Width          160-bit            256-bit            256-bit            4096-bit (HBM) x 2
Compute Perf (FP32)   3.0 TFLOPS         5.3 TFLOPS         8.9 TFLOPS         16.38 TFLOPS
Compute Perf (FP64)   1/32 FP32          1/32 FP32          1/32 FP32          1/16 FP32
Frame Buffer          5GB                8GB                16GB               8GB (4GB x 2)
TDP                   75W                105W               180W               350W
Street Price          $599               $900               $2000              $800

Astute readers will notice similarities to the NVIDIA GeForce line of products as they look over these specifications.
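Those FP32 numbers are not magic, either; they follow directly from shader count and boost clock, with each shader retiring two FP32 operations per clock via fused multiply-add. A quick sketch using the table's own figures (the Radeon Pro Duo entry counts both Fiji GPUs together):

    /* Reproduces the "Compute Perf (FP32)" row of the table above from
     * shader count and boost clock (2 FLOPs per shader per clock via FMA). */
    #include <stdio.h>

    struct card { const char *name; double shaders; double boost_mhz; };

    int main(void) {
        struct card cards[] = {
            { "Quadro P2000",   1024, 1470 },
            { "Quadro P4000",   1792, 1480 },
            { "Quadro P5000",   2560, 1730 },
            { "Radeon Pro Duo", 8192, 1000 },  /* both Fiji GPUs combined */
        };
        for (int i = 0; i < 4; i++) {
            double tflops = cards[i].shaders * 2.0 * cards[i].boost_mhz / 1e6;
            printf("%-15s %.1f TFLOPS\n", cards[i].name, tflops);
        }
        /* Expected output: ~3.0, ~5.3, ~8.9, and ~16.4 TFLOPS, matching
         * the table within rounding. */
        return 0;
    }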

Continue reading our roundup of three Pascal Quadro graphics cards!

AMD Releases Radeon Software Crimson ReLive 17.4.1

Subject: Graphics Cards | April 4, 2017 - 09:21 PM |
Tagged: graphics drivers, amd

The first AMD Radeon driver of April isn’t aligned with a major game launch. Instead, this release seems to focus on gaming technologies in general. For VR, Radeon Software Crimson ReLive 17.4.1 adds Oculus’ Asynchronous Spacewarp (ASW) to R9 Fury, R9 390, and R9 290 graphics cards. It also adds, for Windows 10, SteamVR Asynchronous Reprojection to RX 480 and RX 470 graphics cards.

amd-2016-crimson-relive-logo.png

The driver also adds a couple of extra display options based on the (also just added) DP 1.4 HBR3 cable standard. For now, it seems like it’s just (read: “just”) 8K 60 Hz dual-cable and 8K 30 Hz single-cable. The increased bandwidth also allows for several other formats, but those have nothing to do with today’s driver.
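The dual-cable requirement for 8K60 checks out with some rough math. The sketch below assumes uncompressed 24-bit RGB and ignores blanking overhead; the roughly 25.92 Gb/s of usable payload per HBR3 cable (32.4 Gb/s raw, less 8b/10b encoding) comes from the DisplayPort specification rather than AMD's release notes.

    /* Rough check of why 8K 60 Hz needs two DP 1.4 HBR3 cables while
     * 8K 30 Hz fits on one. Assumes 24-bit RGB and ignores blanking
     * overhead; HBR3 payload is ~25.92 Gb/s per cable (32.4 Gb/s raw
     * minus 8b/10b encoding overhead), per the DisplayPort spec. */
    #include <stdio.h>

    int main(void) {
        const double hbr3_payload_gbps = 25.92;             /* per cable */
        const double width = 7680, height = 4320, bpp = 24;
        const double rates[] = { 60.0, 30.0 };

        for (int i = 0; i < 2; i++) {
            double gbps = width * height * rates[i] * bpp / 1e9;
            printf("8K @ %2.0f Hz needs ~%.1f Gb/s -> %s\n", rates[i], gbps,
                   gbps > hbr3_payload_gbps ? "two cables" : "one cable");
        }
        /* ~47.8 Gb/s at 60 Hz (dual-cable) vs. ~23.9 Gb/s at 30 Hz (single). */
        return 0;
    }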

Update: AMD released a video on the same day advertising 8K / HDR / FreeSync 2.

A few bugs were also fixed, most of which were general bug-fixes not associated with games. Tom Clancy’s Ghost Recon Wildlands is the one exception, which should now scale better with multiple GPUs.

AMD Radeon Software Crimson ReLive 17.4.1 is now available from AMD’s website.

Source: AMD

Imagination Technologies Releases Apple GPU Loss Statement

Subject: Graphics Cards, Mobile | April 3, 2017 - 06:18 PM |
Tagged: apple, Imagination Technologies, PowerVR

This morning, Imagination Technologies Group released a press statement announcing that Apple Inc. intends to phase out their technology in 15 to 24 months. Imagination has doubts that Apple could have circumvented every piece of intellectual property, and they have requested proof from Apple that their new solution avoids all patents, trade secrets, and so forth. According to Imagination’s statement, Apple has, thus far, not provided that proof, and they don’t believe Apple’s claims.

imaginationtech-logo.png

On the one hand, it makes sense that Apple would not divulge their own trade secrets to their current partner and soon-to-be competitor until it’s necessary for them to do so. On the other hand, judging by previous stories like the Intel / NVIDIA cross-license six years ago, GPUs are still a legal minefield for new players in the industry.

So, in short, Apple says they don’t need Imagination anymore, but Imagination calls bull.

From the financial side of things, Apple is a gigantic chunk of Imagination’s revenue. For the year ending on April 30th, 2016, Apple contributed about £60.7 million GBP (~$75 million USD in today’s currency) to Imagination Technologies’ revenue. Over that same period, Imagination Technologies’ entire revenue was £120.0 million GBP ($149.8 million USD in today’s currency).

imaginationtech-2017-stockfall.png

To see how losing essentially half of your revenue can damage a company, I’ve included a screenshot of their current stock price (via Google Finance... and I apologize for the tall shot). It must be a bit scary to do business with Apple, given how much revenue they can add and subtract on a moment’s notice. I’m reminded of the iPhone 6 sapphire glass issue, where GT Advanced Technologies took on a half-billion dollars of debt to create sapphire for Apple, only to end up rejected in the end. In that case, though, Apple agreed to absolve the company of its remaining debt after GT liquidated its equipment.

As for Apple’s new GPU? It will be interesting to see how it turns out. Apple already has their own low-level graphics API, Metal, so they might have a lot to gain, although some macOS and iOS applications use OpenGL and OpenGL ES.

We’ll find out in less than two years.

Vulkans on the Fury Road

Subject: Graphics Cards | April 3, 2017 - 03:15 PM |
Tagged: mad max, linux, kepler, maxwell, pascal, nvidia, vulkan, opengl

With Vulkan support being added to Mad Max, at least in beta form, Phoronix decided to take advantage of the release to test the performance of a wide variety of NVIDIA cards on the API.  They grabbed over a dozen cards encompassing three different architectures, from the GTX 680 through to the GTX 1080 Ti, so you get a very good look at how NVIDIA's Vulkan performance has changed.  The results are clear: in every case Vulkan was superior to OpenGL, and in many cases the framerate more than doubled.  Drop by for a look at what some predicted would be a DOA API.

image.php_.jpg

"Yesterday game porter firm Feral Interactive released a public beta of Mad Max that features a Vulkan renderer in place of its OpenGL API for graphics rendering on Linux. In addition to Radeon Vulkan numbers, I posted some NVIDIA Mad Max Linux benchmarks with both renderers. Those results were exciting on the few Pascal cards tested so I have now extended that comparison to feature a line-up of 14 NVIDIA GeForce graphics cards from Kepler, Maxwell, and Pascal families while looking at this game's OpenGL vs. Vulkan performance."


Source: Phoronix

Futuremark Adds Vulkan, Removes Mantle from 3DMark

Subject: Graphics Cards | March 28, 2017 - 04:32 PM |
Tagged: vulkan, DirectX 12, Futuremark, 3dmark

The latest update to 3DMark adds Vulkan support to its API Overhead test, which attempts to render as many simple objects as possible while keeping above 30 FPS. Improving this aspect of performance allows developers to add more objects to a scene and design those art assets in a simpler, more straightforward way. This is now one of the first tests that can directly compare DirectX 12 and Vulkan, which we expected to be roughly equivalent but couldn’t tell for sure.

While I wasn’t able to run the tests myself, Luca Rocchi of Ocaholic gave it a shot on their Core i7-5820K and GTX 980. Apparently, Vulkan was just under 10% faster than DirectX 12 in their results, reaching 22.6 million draw calls in Vulkan versus 20.6 million in DirectX 12. Again, this is one test, done by a third party, on a single system, with a single GPU driver, on a single 3D engine, and one that is designed to stress a specific portion of the API at that; take it with a grain of salt. Still, this suggests that Vulkan can keep pace with the slightly older DirectX 12 API, and maybe even beat it.

This update also removed Mantle support. I just thought I’d mention that.

Source: Futuremark

OS Limitations of Vulkan Multi-GPU Support

Subject: Graphics Cards | March 21, 2017 - 07:47 PM |
Tagged: windows 10, vulkan, sli, multi-gpu, crossfire

Update (March 22nd @ 3:50pm EDT): And the Khronos Group has just responded to my follow-up questions. LDA has existed since Windows Vista, at the time for assisting with SLI and Crossfire support. Its implementation has changed in Windows 10, but that's not really relevant for Vulkan's multi-GPU support. To prove this, they showed LDA referenced in a Windows 8.1 MSDN post.

In short:

Vulkan's multi-GPU extensions can be used on Windows 7 and Windows 8.x. The exact process will vary from OS to OS, but the GPU vendor can implement these extensions if they choose, and LDA mode isn't exclusive to Windows 10.

 

Update (March 21st @ 11:55pm EDT): I came across a Microsoft Support page that discusses issues with LDA in Windows 7, so it seems like that functionality isn't limited to WDDM 2.0 and Windows 10. (Why have a support page otherwise?) Previously, I looked up an MSDN article that had it listed as a WDDM 2.0 feature, so I figured DSOGaming's assertion that it was introduced with WDDM 2.0 was correct.

As such, LDA might not require a GPU vendor's implementation at all. It'll probably become clearer when the Khronos Group responds to my earlier request, though.

That said, we're arguing over how much a GPU vendor needs to implement; either way, it will be possible to use the multi-GPU extensions in Windows 7 and Windows 8.x if the driver supports it.

Update (March 21st @ 7:30pm EDT): The Khronos Group has just released their statement. It's still a bit unclear, and I've submitted another request for clarification.

Specifically, the third statement:

If an implementation on Windows does decide to use LDA mode, it is NOT tied to Windows 10. LDA mode has been available on many versions of Windows, including Windows 7 and 8.X.

... doesn't elaborate on what is required for LDA mode on versions of Windows other than 10. (It could be Microsoft-supported, vendor-supported, or something else entirely.) I'll update again when that information is available. For now, it seems like the table, below, should actually look something like this:

                Implicit Multi-GPU        Explicit Multi-GPU        Unlinked Multi-GPU
                (LDA Implicit)            (LDA Explicit)            (MDA)

Windows 7       Requires GPU Vendor       Requires GPU Vendor       Yes
                LDA Implementation?       LDA Implementation?
                (Or Equivalent)           (Or Equivalent)

Windows 8.1     Requires GPU Vendor       Requires GPU Vendor       Yes
                LDA Implementation?       LDA Implementation?
                (Or Equivalent)           (Or Equivalent)

Windows 10      Yes                       Yes                       Yes

macOS           Apple doesn't allow the Vulkan API to ship in graphics drivers. At all.

Linux / etc.    Yes                       Yes                       Yes

... but we will update, again, should this be inaccurate.

Update (March 20th @ 3:50pm EDT): The Khronos Group has just responded that the other posts are incorrect. They haven't yet confirmed whether this post (which separates "device groups" from the more general "multi-GPU in Vulkan") is correct, though, because they're preparing an official statement. We'll update when we have more info.

Original Post Below (March 19th @ 9:36pm EDT)

A couple of days ago, some sites noticed a bullet point claiming that Windows-based GPU drivers will need WDDM in “linked display adapter” mode for “Native multi-GPU support for NVIDIA SLI and AMD Crossfire platforms” on Vulkan. This note came from an official slide deck by the Khronos Group, published during the recent Game Developers Conference, GDC 2017. The concern is that “linked display adapter” mode is a part of WDDM 2.0, which is exclusive to Windows 10.

This is being interpreted as “Vulkan does not support multi-GPU under Windows 7 or 8.x”.

khronos-2017-vulkan-alt-logo.png

I reached out to the Khronos Group for clarification, but I’m fairly sure I know what this does (and doesn’t) mean. Rather than starting with a written out explanation in prose, I will summarize it into a table, below, outlining what is possible on each platform. I will then elaborate below that.

                Implicit Multi-GPU        Explicit Multi-GPU        Unlinked Multi-GPU
                (LDA Implicit)            (LDA Explicit)            (MDA)

Windows 7       No                        No                        Yes

Windows 8.1     No                        No                        Yes

Windows 10      Yes                       Yes                       Yes

macOS           Apple doesn't allow the Vulkan API to ship in graphics drivers. At all.

Linux / etc.    Yes                       Yes                       Yes

So the good news is that it’s possible for a game developer to support multi-GPU (through what DirectX 12 would call MDA) on Windows 7 and Windows 8.x; the bad news is that no one might bother with the heavy lifting. Linked display adapters allow the developer to assume that all GPUs have roughly the same performance, have the same amount of usable memory, and can be accessed through a single driver interface. On top of these assumptions, device groups also hide some annoying and tedious work inside the graphics driver, like producing a texture on one graphics card and quickly handing it to another GPU for rendering.
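To make the device-group idea a bit more concrete, here is a minimal sketch of how an engine would discover linked adapters. It uses the non-suffixed vkEnumeratePhysicalDeviceGroups entry point as it appears in newer Vulkan headers; at the time of writing the same functionality ships under the experimental VK_KHX_device_group_creation extension, so treat this as an illustrative sketch rather than shipping code. GPUs the driver cannot link (the MDA-style case) simply show up as separate groups of one.

    /* Minimal sketch: enumerating Vulkan device groups (linked adapters).
     * Uses the core-style vkEnumeratePhysicalDeviceGroups entry point;
     * the same functionality is also exposed via the experimental
     * VK_KHX_device_group_creation instance extension. */
    #include <stdio.h>
    #include <vulkan/vulkan.h>

    int main(void) {
        VkApplicationInfo app = { 0 };
        app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        app.apiVersion = VK_API_VERSION_1_1;

        VkInstanceCreateInfo ci = { 0 };
        ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        ci.pApplicationInfo = &app;

        VkInstance instance;
        if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS)
            return 1;

        /* First call: ask the driver how many device groups it reports. */
        uint32_t groupCount = 0;
        vkEnumeratePhysicalDeviceGroups(instance, &groupCount, NULL);
        if (groupCount > 16) groupCount = 16;

        /* Second call: fill in the properties for each group. */
        VkPhysicalDeviceGroupProperties groups[16];
        for (uint32_t i = 0; i < groupCount; i++) {
            groups[i].sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
            groups[i].pNext = NULL;
        }
        vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups);

        for (uint32_t i = 0; i < groupCount; i++)
            printf("Group %u: %u linked physical device(s)\n",
                   i, groups[i].physicalDeviceCount);
        /* A working SLI/CrossFire link shows up as one group with a count
         * greater than 1; unlinked GPUs each appear as their own group. */

        vkDestroyInstance(instance, NULL);
        return 0;
    }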

Basically, if the developer will go through the trouble of supporting AMD + NVIDIA or discrete GPU + integrated GPU systems, then they can support Windows 7 / 8.x multi-GPU as well. Otherwise? Your extra GPUs will be sitting out unless you switch to DirectX 11 or OpenGL (or use them for video encoding or something else outside the game).

On the other hand, this limitation might pressure some developers to support unlinked multi-GPU configurations. There are some interesting possibilities, including post-processing and GPGPU tasks like AI visibility and physics, which might otherwise be ignored in titles whose developers were seduced by the simplicity of device groups. On the whole, device groups were apparently a high-priority request from game developers, and their inclusion will lead to more multi-GPU content. Developers who can justify doing it themselves, though, now have another reason to bother re-inventing a few wheels.

Or... you could just use Linux. That works, too.

Again, we are still waiting on the Khronos Group to confirm this story. See the latest update, above.