AMD Announces Radeon Vega Frontier Edition Graphics Cards

Subject: Graphics Cards | May 16, 2017 - 07:39 PM |
Tagged: Vega, reference, radeon, graphics card, gpu, Frontier Edition, amd

AMD has revealed its concept of a premium reference GPU for the upcoming Radeon Vega launch: the "Frontier Edition" of the new graphics cards.

Vega FE Slide.png

"Today, AMD announced its brand-new Radeon Vega Frontier Edition, the world’s most powerful solution for machine learning and advanced visualization aimed to empower the next generation of data scientists and visualization professionals -- the digital pioneers forging new paths in their fields. Designed to handle the most demanding design, rendering, and machine intelligence workloads, this powerful new graphics card excels in:

  • Machine learning. Together with AMD’s ROCm open software platform, Radeon Vega Frontier Edition enables developers to tap into the power of Vega for machine learning algorithm development. Frontier Edition delivers more than 50 percent more performance than today’s most powerful machine learning GPUs.
  • Advanced visualization. Radeon Vega Frontier Edition provides the performance required to drive increasingly large and complex models for real-time visualization, physically-based rendering and virtual reality through the design phase as well as rendering phase of product development.
  • VR workloads. Radeon Vega Frontier Edition is ideal for VR content creation supporting AMD’s LiquidVR technology to deliver the gripping content, advanced visual comfort and compatibility needed for next-generation VR experiences.
  • Revolutionized game design workflows. Radeon Vega Frontier Edition simplifies and accelerates game creation by providing a single GPU optimized for every stage of a game developer’s workflow, from asset production to playtesting and performance optimization."

Vega FE.jpg

From the image provided on the official product page, it appears that there will be both liquid-cooled (the gold card in the background) and air-cooled variants of these "Frontier Edition" cards. AMD states they will arrive with 16GB of HBM2 and offer 1.5x the FP32 performance and 3x the FP16 performance of the Fury X.

From AMD:

Radeon Vega Frontier Edition

  • Compute units: 64
  • Single precision compute performance (FP32): ~13 TFLOPS
  • Half precision compute performance (FP16): ~25 TFLOPS
  • Pixel Fillrate: ~90 Gpixels/sec
  • Memory capacity: 16 GB of High Bandwidth Cache
  • Memory bandwidth: ~480 GB/sec
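
As a quick plausibility check on those figures (my own arithmetic, not AMD's), the FP32 number implies a clock near 1.6 GHz if you assume GCN's usual 64 stream processors per compute unit, and the ratios versus the Fury X line up with the 1.5x/3x claims above, given Vega's double-rate packed FP16 math:

```python
# A quick plausibility check (my own arithmetic, not AMD's) on the Frontier
# Edition numbers above, assuming GCN's 64 stream processors per compute unit
# and Vega's double-rate packed FP16 math.

sps = 64 * 64                       # 64 CUs x 64 stream processors = 4096
fp32_tflops = 13.0                  # AMD's quoted figure
implied_clock_ghz = fp32_tflops * 1e12 / (sps * 2) / 1e9
print(f"implied clock: ~{implied_clock_ghz:.2f} GHz")   # ~1.59 GHz

fp16_tflops = fp32_tflops * 2       # packed math doubles FP16 throughput
fury_x_tflops = 8.6                 # Fury X peak; Fiji runs FP16 at its FP32 rate
print(f"vs Fury X: {fp32_tflops / fury_x_tflops:.1f}x FP32, "
      f"{fp16_tflops / fury_x_tflops:.1f}x FP16")       # ~1.5x and ~3.0x
```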

The availability of the Radeon Vega Frontier Edition was announced as "late June", so we should not have too long to wait for further details, including pricing.

Source: AMD

Inno3D Introduces a Single Slot GTX 1050 Ti Graphics Card

Subject: Graphics Cards | May 13, 2017 - 11:46 PM |
Tagged: SFF, pascal, nvidia, Inno3D, GP107

Hong Kong-based Inno3D recently introduced a single slot graphics card using NVIDIA’s mid-range GTX 1050 Ti GPU. The aptly named Inno3D GeForce GTX 1050 Ti (1-Slot Edition) combines the reference-clocked Pascal GPU, 4GB of GDDR5 memory, and a shrouded single-fan cooler clad in red and black.

Inno3D GeForce GTX 1050 Ti 1 Slot Edition.png

Around back, the card offers three display outputs: HDMI 2.0, DisplayPort 1.4, and DVI-D. The single slot cooler is a bit of an odd design: a thin axial fan, rather than a centrifugal type, sits over a fake plastic fin array. Note that these fins do not actually cool anything; in fact, the PCB of the card does not even extend out to where the fan is. Presumably the fins are there primarily for aesthetics and secondarily to channel a bit of the air the fan pulls down. Air is pulled in, pushed over the actual GPU heatsink (under the shroud), and out the vent holes next to the display connectors. That air is recirculated inside the case rather than exhausted out the back as with traditional dual slot (and some single slot) designs. I am curious how the choice of fan and vents will affect cooling performance.

Overclocking is going to be limited on this card, which comes clocked out of the box at NVIDIA reference speeds: 1290 MHz base and 1392 MHz boost for the GPU’s 768 cores, and 7 GT/s for the 4GB of GDDR5 memory. The card measures 211 mm (~8.3”) long and should fit in just about any case. Since it pulls all of its power from the slot, it might be a good option for getting a bit of gaming out of those slim retail towers OEMs like to use these days.
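
For context, here is what those reference clocks work out to on paper (my arithmetic; the 128-bit memory bus is GP107's spec rather than something Inno3D quotes):

```python
# Back-of-the-envelope throughput for the reference-clocked GTX 1050 Ti
# (my arithmetic; the 128-bit GDDR5 bus is GP107's spec, not stated above).

cores = 768
boost_hz = 1392e6
fp32_tflops = cores * boost_hz * 2 / 1e12        # 2 FLOPS/core/clock (FMA)
print(f"FP32: ~{fp32_tflops:.1f} TFLOPS")        # ~2.1

bus_bits = 128
transfer_rate = 7e9                              # 7 GT/s GDDR5
bandwidth_gbs = bus_bits / 8 * transfer_rate / 1e9
print(f"Bandwidth: {bandwidth_gbs:.0f} GB/s")    # 112
```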

Inno3D is not yet talking availability or pricing, but looking at their existing lineup I would expect an MSRP around $150.

Source: Tech Report

NVIDIA Announces Tesla V100 with Volta GPU at GTC 2017

Subject: Graphics Cards | May 10, 2017 - 01:32 PM |
Tagged: v100, tesla, nvidia, gv100, gtc 2017

During the opening keynote to NVIDIA’s GPU Technology Conference, CEO Jen-Hsun Huang formally unveiled the latest GPU architecture and the first product based on it. The Tesla V100 accelerator is based on the Volta GPU architecture and features some amazingly impressive specifications. Let’s take a look.

|  | Tesla V100 | GTX 1080 Ti | Titan X (Pascal) | GTX 1080 | GTX 980 Ti | TITAN X | GTX 980 | R9 Fury X | R9 Fury |
|---|---|---|---|---|---|---|---|---|---|
| GPU | GV100 | GP102 | GP102 | GP104 | GM200 | GM200 | GM204 | Fiji XT | Fiji Pro |
| GPU Cores | 5120 | 3584 | 3584 | 2560 | 2816 | 3072 | 2048 | 4096 | 3584 |
| Base Clock | - | 1480 MHz | 1417 MHz | 1607 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1000 MHz |
| Boost Clock | 1455 MHz | 1582 MHz | 1480 MHz | 1733 MHz | 1076 MHz | 1089 MHz | 1216 MHz | - | - |
| Texture Units | 320 | 224 | 224 | 160 | 176 | 192 | 128 | 256 | 224 |
| ROP Units | 128 (?) | 88 | 96 | 64 | 96 | 96 | 64 | 64 | 64 |
| Memory | 16GB | 11GB | 12GB | 8GB | 6GB | 12GB | 4GB | 4GB | 4GB |
| Memory Clock | 878 MHz (?) | 11000 MHz | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz |
| Memory Interface | 4096-bit (HBM2) | 352-bit | 384-bit G5X | 256-bit G5X | 384-bit | 384-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) |
| Memory Bandwidth | 900 GB/s | 484 GB/s | 480 GB/s | 320 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 512 GB/s | 512 GB/s |
| TDP | 300 watts | 250 watts | 250 watts | 180 watts | 250 watts | 250 watts | 165 watts | 275 watts | 275 watts |
| Peak Compute | 15 TFLOPS | 10.6 TFLOPS | 10.1 TFLOPS | 8.2 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS |
| Transistor Count | 21.1B | 12.0B | 12.0B | 7.2B | 8.0B | 8.0B | 5.2B | 8.9B | 8.9B |
| Process Tech | 12nm | 16nm | 16nm | 16nm | 28nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | lol | $699 | $1,200 | $599 | $649 | $999 | $499 | $649 | $549 |

While we are low on details today, it appears that the fundamental compute units of Volta are similar to those of Pascal. The GV100 has 80 SMs (organized in 40 TPCs) and 5120 total CUDA cores, a 42% increase over both the GP100 GPU used on the Tesla P100 and the GP102 GPU used on the GeForce GTX 1080 Ti. The structure of the GPU remains the same as GP100, with the CUDA cores organized as 64 single precision (FP32) and 32 double precision (FP64) per SM.

image7.png


Interestingly, NVIDIA has already told us the clock speed of this new product as well, coming in at 1455 MHz Boost, more than 100 MHz lower than the GeForce GTX 1080 Ti and 25 MHz lower than the Tesla P100.

SXM2-VoltaChipDetails.png


Volta adds support for a brand new compute unit though, known as the Tensor Core. With 640 of these on the GPU die, NVIDIA directly targets the neural network and deep learning fields. If this is your first time hearing the term, it is worth reading up on tensors' influence on the hardware market, starting with TensorFlow, Google's open-source software library for machine learning. Google has already invested in a tensor-specific processor, its TPU, and now NVIDIA throws its hat in the ring.

Adding Tensor Cores to Volta allows the GPU to churn through the dense matrix math at the heart of deep learning, on the order of a 12x improvement over Pascal’s capabilities using CUDA cores only.
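
NVIDIA describes each Tensor Core as performing a 4x4 matrix fused multiply-add per clock, with FP16 inputs and FP32 accumulation. Below is a minimal NumPy sketch of those semantics (an illustration of the math only, not NVIDIA's API):

```python
# A minimal sketch (NumPy, not NVIDIA's actual API) of the mixed-precision
# operation a Volta tensor core performs: D = A x B + C, where A and B are
# 4x4 FP16 matrices and the accumulation happens at FP32 precision.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)).astype(np.float16)  # FP16 input
B = rng.standard_normal((4, 4)).astype(np.float16)  # FP16 input
C = rng.standard_normal((4, 4)).astype(np.float32)  # FP32 accumulator

# Inputs are FP16, but products are summed in FP32 to limit rounding error.
D = A.astype(np.float32) @ B.astype(np.float32) + C

# One such 4x4 operation is 64 fused multiply-adds = 128 FLOPS; 640 tensor
# cores at ~1.455 GHz is ~119 TFLOPS, roughly 12x the ~10 TFLOPS a Pascal
# GP102 manages with CUDA cores alone.
print(D)
```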

07.jpg

For users interested in standard usage models, including gaming, the GV100 GPU offers a 1.5x improvement in FP32 computing, up to 15 TFLOPS of theoretical performance, and 7.5 TFLOPS of FP64. Other relevant specifications include 320 texture units, a 4096-bit HBM2 memory interface, and 16GB of memory on-module. NVIDIA claims a memory bandwidth of 900 GB/s, which works out to 878 MHz per stack.
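
Both headline numbers check out with simple math (mine, not NVIDIA's), assuming two FLOPS per CUDA core per clock and a double-data-rate HBM2 interface:

```python
# A quick sanity check (my own arithmetic, not from NVIDIA) on the V100
# figures quoted above, assuming 2 FLOPS per CUDA core per clock (fused
# multiply-add) and a double-data-rate 4096-bit HBM2 interface.

cores = 80 * 64                                  # 80 SMs x 64 FP32 cores = 5120
boost_clock_hz = 1455e6
fp32_tflops = cores * boost_clock_hz * 2 / 1e12
print(f"FP32: {fp32_tflops:.1f} TFLOPS")         # ~14.9, matching the ~15 claim

bus_bits = 4096
mem_clock_hz = 878e6
bandwidth_gbs = bus_bits / 8 * mem_clock_hz * 2 / 1e9
print(f"Bandwidth: {bandwidth_gbs:.0f} GB/s")    # ~899, matching the 900 GB/s claim
```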

Maybe more impressive is the transistor count: 21.1 BILLION! NVIDIA claims that this is the largest chip you can make physically with today’s technology. Considering it is being built on TSMC's 12nm FinFET technology and has an 815 mm² die size, I see no reason to doubt them.

03.jpg

Shipping is scheduled for Q3 for the Tesla V100 – at least, that is when NVIDIA promises the DGX-1 system using the chip will reach developers.

I know many of you are interested in the gaming implications and timelines – sorry, I don’t have an answer for you yet. I will say that the bump from 10.6 TFLOPS to 15 TFLOPS is an impressive boost! But if the server variant of Volta isn’t due until Q3 of this year, I find it hard to think NVIDIA would bring the consumer version out faster than that. And whether or not NVIDIA offers gamers the chip with non-HBM2 memory is still a question mark for me and could directly impact performance and timing.

More soon!!

Source: NVIDIA

NVIDIA Releases VRWorks Audio 1.0

Subject: Graphics Cards | May 10, 2017 - 07:02 AM |
Tagged: vrworks, nvidia, audio

GPUs are good at large bundles of related tasks, saving die area by tying several chunks of data together. This is commonly used for graphics, where screens have two to eight million pixels (1080p to 4K), 3D models have thousands to millions of vertices, and so forth. Each instruction typically executes hundreds, thousands, or millions of times, so a parallel architecture makes far more efficient use of the silicon that stores and transforms this data.
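
As a CPU-side analogy for that kind of data parallelism (NumPy standing in for GPU hardware), one logical operation can be written once and applied to every pixel of a 1080p frame:

```python
# A CPU-side analogy (NumPy, not GPU code) for the data parallelism described
# above: one logical instruction applied across ~2 million 1080p pixel values
# at once, rather than once per pixel in a scalar loop.
import numpy as np

frame = np.random.rand(1080, 1920).astype(np.float32)  # one grayscale 1080p frame

# A single vectorized expression; the same operation hits every pixel.
brightened = np.clip(frame * 1.2, 0.0, 1.0)

print(brightened.shape, brightened.dtype)  # (1080, 1920) float32
```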

Audio is another area with a lot of parallelism. A second of audio has tens of thousands of sound pressure samples, and, as a further advantage, higher-frequency sounds model pretty decently as rays, which can be traced. NVIDIA decided to repurpose their OptiX ray-tracing technology to calculate these rays. Beyond the sort of architectural walkthrough you often see in global illumination demos, they also integrated it into an Unreal Tournament test map.
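
To illustrate why rays work for audio, here is a minimal sketch of the classic image-source method for a single wall reflection. This is my own toy example, not OptiX or the VRWorks SDK:

```python
# An illustrative sketch (my own, not the VRWorks API) of why audio maps well
# to ray tracing: with the image-source method, a first-order wall reflection
# is found by mirroring the source across the wall; the reflected path length
# then gives the echo's delay and distance attenuation.
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C

def first_reflection(source, listener, wall_x):
    """Delay (s) and 1/r amplitude for a reflection off the plane x = wall_x."""
    mirrored = (2 * wall_x - source[0], source[1])   # image source
    path = math.dist(mirrored, listener)             # reflected path length
    return path / SPEED_OF_SOUND, 1.0 / path

delay, gain = first_reflection(source=(1.0, 2.0), listener=(4.0, 2.0), wall_x=0.0)
print(f"echo after {delay*1000:.1f} ms at relative gain {gain:.3f}")
```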

And now it’s been released, both as a standalone SDK and as an Unreal Engine 4.15 plug-in. I don’t know what its license specifically entails, because the source code requires logging into NVIDIA’s developer portal, but it looks like the plug-ins will be available to all users of supported engines.

Source: NVIDIA

GeForce Experience 3.6 Has Vulkan and OpenGL

Subject: Graphics Cards | May 10, 2017 - 03:53 AM |
Tagged: vulkan, ShadowPlay, opengl, nvidia, geforce experience

The latest version of GeForce Experience, 3.6, adds video capture (including screenshots and live streaming) support for OpenGL and Vulkan games. The catalog of titles supported by ShadowPlay (which I’m pretty sure NVIDIA wants to call Share now, despite referring to it by its old name in the blog post) now includes No Man’s Sky, DOOM, and Microsoft’s beloved OpenGL title: Minecraft.

The rest of the update focuses on tweaking a few interface elements, including its streaming panel, its video and screenshot upload panel, and its gallery. Access to the alternative graphics APIs was the clear headline-maker, however, opening the door to several large gaming groups, and potentially even more going forward.

GeForce Experience 3.6 is available now.

Source: NVIDIA

The Gigabyte Aorus GTX 1080 Ti Xtreme Edition 11G; worth the extra $50?

Subject: Graphics Cards | May 9, 2017 - 01:16 PM |
Tagged: gigabyte, aorus, 1080 ti, Xtreme Edition 11G, factory overclocked

Gigabyte's Aorus-branded GTX 1080 Ti Xtreme Edition 11G just arrived on our Hardware Leaderboard, in no small part due to this review at The Tech Report.  The card utilizes the same cooler as the non-Ti Xtreme card we have seen previously: three fans moving air over five copper heat pipes, combined with a mess of fins and a large copper plate.  That cooler allows the card to be clocked at 1632MHz base and 1746MHz boost, with memory hitting over 2.8GHz right out of the box, and with Afterburner you can reach even higher.  TR's testing shows that this does have a noticeable effect on performance compared to the Founders Edition cards.

backside-logo.jpg

"Aorus' GeForce GTX 1080 Ti Xtreme Edition 11G promises to unshackle the GP102 GPU from the constraints of a reference board. We run this card through our gauntlet of performance and noise testing to see whether it's worth the premium over Nvidia's GeForce GTX 1080 Ti Founders Edition."


GTC 17: NVIDIA Demos (Professional) Multi-User VR

Subject: Graphics Cards | May 9, 2017 - 07:01 AM |
Tagged: VR, quadro, nvidia, gp102

Four Quadro P6000s installed in a single server, which looks like a 4U rack-mounted box, are shown running four HTC Vive Business Edition VR systems through virtual machines. It isn’t designed to be a shipping product, just a demo for NVIDIA’s GPU Technology Conference that was developed by their engineers, but that should get the attention of this trade show’s attendees, who are mostly enterprise-focused.

nvidia-2017-fouruservrserver.png

For context, this system has roughly equivalent GPU horsepower to four Titan Xps, albeit with twice the RAM and slightly different clocks; there’s plenty of power per headset to harness. Still, running this level of high-performance application on a virtual machine could be useful in a variety of business applications, from architectural visualization to, as NVIDIA notes, amusement parks.

Given that it’s just a proof-of-concept demo, you’ll need to build it yourself to get one. They didn’t mention using any special software, though.

Source: NVIDIA

Contest: Win an ASUS GTX 1070 8GB or GTX 1050 Ti Dual-fan OC Card!

Subject: General Tech, Graphics Cards | May 5, 2017 - 10:14 AM |
Tagged: gtx 1070, GTX 1050 Ti, giveaway, geforce, contest, asus

With spring filling us with happy thoughts, our friends at ASUS are gracing us with some hardware to give away - that's right, it's time for a contest!

Here's what's on the docket for our readers and fans:

asuscontest.jpg

The ASUS Dual-fan line is a great option for gamers who want to balance performance and value; the cards are quieter, cooler, and faster than reference designs.

ASUS-2017-Giveaway-Dual_Series-Banner-v1-640x200.jpg

How do you enter? Use the form below!

Win an ASUS GeForce® GTX 1070 8GB Dual-fan OC Edition!!

I do have to apologize - this contest is open to US and Canada (except Quebec) residents only. Sorry!

A HUGE THANKS goes to our partners at ASUS for supporting PC Perspective and our readers with this great contest! Good luck to everyone!

AMD Releases Radeon Software Crimson ReLive 17.5.1

Subject: Graphics Cards | May 5, 2017 - 12:21 AM |
Tagged: amd, graphics drivers

Aligning with the release of Prey, AMD released the first Radeon Crimson ReLive graphics driver of the month: 17.5.1. This version optimizes the aforementioned title with up to a 4.7% boost in performance, versus 17.4.4 as measured on an RX 580, according to AMD. It also adds multi-GPU support to the title, for those who have multiple AMD graphics cards.

amd-2016-crimson-relive-logo.png

A bunch of bugs were also fixed in this release, as is almost always the case. Probably the most important one, though, is the patch to their auto-updater that prevents it from failing. They also fixed a couple issues with hybrid graphics, including a crash in Civilization VI with those types of systems.

You can pick up 17.5.1 from AMD’s website.

Source: AMD

Prey The Fourth Be With You??? GeForce 382.05 Drivers

Subject: Graphics Cards | May 4, 2017 - 10:24 PM |
Tagged: nvidia, graphics drivers

If you’re cringing while reading that headline, then rest assured I felt just as dirty writing it.

nvidia-geforce.png

NVIDIA has released another graphics driver, 382.05, to align with a few new game releases: Prey, Battlezone, and Gears of War 4’s latest update. The first title is a first-person action-adventure title by Arkane Studios, which releases tomorrow. Interestingly, the game runs on CryEngine... versus their internally-developed Void engine, as seen in Dishonored 2; Unreal Engine, which they used for the original Dishonored; or id Tech, which Bethesda’s parent company, ZeniMax, owns through id Software and which has a bit of a name association with the franchise that this Prey rebooted.

Cross-eyed yet? Good. Let’s move on.

Fans of Gears of War 4, specifically those who have multiple NVIDIA graphics cards, might be interested in SLI support for this DirectX 12-based title. As we’ve mentioned in the past, the process of load-balancing multiple GPUs has changed going from DirectX 11 to DirectX 12. According to The Coalition, SLI support was technically available on 381.89 (and 17.4.4 for AMD CrossFire), but NVIDIA is advertising it with 382.05. I’m not sure whether it’s a timing-based push, or if they optimized the experience since 381.89, but you should probably update regardless.

The driver also adds / updates the SLI profiles for Sniper: Ghost Warrior 3 and Warhammer 40,000: Dawn of War III. A bunch of bugs have been fixed, too, such as “In a multi-display configuration, the extended displays are unable to enter sleep mode,” along with a couple of black screen and blue screen issues.

You can get them from GeForce Experience or NVIDIA’s website.

Source: NVIDIA