Podcast #439 - GTX 1080 Ti, Radeon RX Vega, and Ryzen

Subject: Editorial | March 2, 2017 - 11:37 AM |
Tagged: Vega, ryzen, podcast, fcat, 1080Ti, 1080

PC Perspective Podcast #439 - 03/02/17

Join us for GTX 1080 Ti, Radeon RX Vega, Ryzen and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Allyn Malventano, Josh Walrath, Jeremy Hellstrom

Program length: 1:41:49

Podcast topics of discussion:
  1. Week in Review:
  2. Casper Ad
  3. News items of interest:
    1. 0:36:35 Ryzen News
  4. Closing/outro


AMD Unveils Next-Generation GPU Branding, Details - Radeon RX Vega

Subject: General Tech | February 28, 2017 - 05:46 PM |
Tagged: amd, Vega, radeon rx vega, radeon, gdc 2017, capsaicin, rtg, HBCC, FP16

Today at the AMD Capsaicin & Cream event at GDC 2017, Raja Koduri, Senior VP of the Radeon Technologies Group, officially revealed the branding that AMD will use for their next-generation GPU products.

While final product branding usually deviates from the architectural code name (e.g. Polaris becoming the Radeon RX 460, 470, and 480), AMD has this time decided to carry the code name over into the retail naming scheme for upcoming graphics cards featuring the new GPU: Radeon RX Vega.

RadeonRXVega.jpg

However, we didn't just get a name for Vega-based GPUs. Raja also went into some further detail and showed some examples of technologies found in Vega.

First off is the High-Bandwidth Cache Controller found in Vega products. We covered this technology during our Vega architecture preview last month at CES, but today we finally saw a demo of this technology in action.

Vega-HBCCslide.jpg

Essentially, the High-Bandwidth Cache Controller (HBCC) allows Vega GPUs to address all available memory in the system (including things like NVMe SSDs, system DRAM, and network storage). AMD claims that by using the fast memory already available in your PC to augment onboard GPU memory (such as HBM2), they will be able to offer less expensive graphics cards that ultimately provide access to much more memory than current graphics cards.
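To make the general idea concrete, here is a minimal sketch in Python of a small, fast memory pool fronting a much larger, slower one, with pages pulled in on demand and evicted least-recently-used. This is purely an illustration of the caching concept; the class name, page granularity, and eviction policy are assumptions for the example, not details of AMD's design.

```python
from collections import OrderedDict

class HighBandwidthCache:
    """Toy model: a small, fast on-package pool (think HBM2) fronting a
    much larger, slower pool (system DRAM, NVMe, network storage).
    Pages are fetched on demand and evicted least-recently-used.
    Illustrative only; names and policy are assumptions, not AMD's design."""

    def __init__(self, capacity_pages, backing_store):
        self.capacity = capacity_pages
        self.resident = OrderedDict()       # page id -> page data (fast memory)
        self.backing = backing_store        # the big, slow pool

    def read(self, page_id):
        if page_id in self.resident:        # hit: data already in fast memory
            self.resident.move_to_end(page_id)
            return self.resident[page_id]
        data = self.backing[page_id]        # miss: pull the page in on demand
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)   # evict least-recently-used page
        self.resident[page_id] = data
        return data

# A small "VRAM-sized" cache in front of a much larger working set.
backing = {i: f"page-{i}" for i in range(1024)}   # pretend this lives off-package
hbc = HighBandwidthCache(capacity_pages=64, backing_store=backing)
print(hbc.read(3), hbc.read(900), hbc.read(3))
```

The point of the sketch is simply that the application sees one large address space while only the hottest pages occupy the fast on-package memory at any given moment.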

Vega-HBCC.jpg

The demo that they showed on stage featured Deus Ex: Mankind Divided running on a system with a Ryzen CPU and a Vega GPU limited to 2GB of VRAM. By turning HBCC on, they were able to show a 50% increase in average FPS and a 100% increase in minimum FPS.

While we probably won't see a shipping Vega product with such a small VRAM implementation, it was impressive to see how dramatically HBCC improved the playability of a 2GB GPU in a game that has no special optimizations to take advantage of the High-Bandwidth Cache.

The other impressive demo running on Vega at the Capsaicin & Cream event centered on what AMD is calling Rapid Packed Math.

Rapid Packed Math is an implementation of something we have been hearing and theorizing a lot about lately: the use of FP16 shaders for some graphics effects in games. By using half-precision FP16 shaders instead of the current standard FP32 shaders, developers can get more performance out of the same GPU cores. Specifically, Rapid Packed Math allows developers to run half-precision FP16 shaders at twice the rate of traditional single-precision FP32 shaders.
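As a rough illustration of why packed math doubles throughput, the sketch below shows how two FP16 values occupy the same 32 bits as one FP32 value, and works the peak-throughput arithmetic for a hypothetical GPU. The shader count and clock speed are made-up example numbers, not Vega specifications.

```python
import numpy as np

# Two half-precision (FP16) values occupy the same 32 bits as a single
# FP32 value, so an ALU lane with packed-math support can retire two
# FP16 operations per cycle instead of one FP32 operation.
pair = np.array([1.5, -0.25], dtype=np.float16)   # two FP16 values...
packed = int(pair.view(np.uint32)[0])             # ...viewed as one 32-bit word
print(f"packed 32-bit word: 0x{packed:08x}")

# Back-of-the-envelope peak throughput for a hypothetical GPU
# (shader count and clock are example numbers only):
shaders = 4096
clock_ghz = 1.5
fp32_tflops = 2 * shaders * clock_ghz / 1000   # FMA counts as 2 ops per clock
fp16_tflops = 2 * fp32_tflops                  # packed FP16 doubles the rate
print(f"FP32: {fp32_tflops:.1f} TFLOPS, FP16: {fp16_tflops:.1f} TFLOPS")
```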

TressFX-FP16.jpg

While the lower precision of FP16 shaders won't be appropriate for all GPU effects, AMD showed a comparison of their TressFX hair rendering technology running on both standard- and half-precision shaders. As you might expect, AMD was able to render twice the number of hair strands per second, making for a much more fluid experience.

Vega-shirt.jpg

Just like we saw in the lead-up to the Polaris GPU launch, AMD seems to be releasing a steady stream of information on Vega. Now that we have the official branding, we eagerly await getting our hands on these new high-end GPUs from AMD.

 

Learn about AMD's Vega Memory Architecture

Subject: General Tech | February 3, 2017 - 01:40 PM |
Tagged: amd, Vega, Jeffrey Cheng

Tech ARP had a chance to talk with AMD's Jeffrey Cheng about the new Vega GPU memory architecture. He provided some interesting details, such as the fact that the new architecture can address up to 512 TB of memory. With such a large pool it would be possible to store data sets in HBM2 memory to be passed to the GPU, rather than leaving them in general system memory. Utilizing the memory present on the GPU could also reduce costs and energy consumption, not to mention that it performs far more quickly. Pop by to watch the video to see how he feels this could change the way games and software are programmed.
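As a quick sanity check, and assuming the 512 TB figure is meant as binary terabytes (TiB), that capacity corresponds to a 49-bit virtual address space:

```python
import math

addressable_bytes = 512 * 2**40        # 512 TB read as binary terabytes (assumption)
print(math.log2(addressable_bytes))    # 49.0 -> a 49-bit virtual address space
```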

AMD-Vega-Tech-Report-15.jpg

"Want to learn more about the AMD Vega memory architecture? Join our Q&A session with AMD Senior Fellow Jeffrey Cheng at the AMD Tech Summit!"

Here is some more Tech News from around the web:

Tech Talk

Source: TechARP

AMD Announces Q4 2016 and FY 2016 Results

Subject: Editorial | January 31, 2017 - 11:14 PM |
Tagged: Vega, ryzen, quarterly results, Q4 2016, Q4, FY 2016, amd, AM4

Today AMD announced their latest quarterly earnings.  There was much speculation as to how well or how poorly the company did, especially in light of Intel’s outstanding quarter and their record year.  Intel has shown that the market continues to be strong, even with the popular opinion that we are in a post-PC world.  Would AMD see a strong quarter, or would Intel take further bites out of the company?

icon.jpg

The results for AMD are somewhere in between.  It was not an overly strong quarter, but it was not weak either.  AMD saw strength in the GPU market with their latest RX series of GPUs for both desktop and mobile applications.  Their CPU sales were seemingly flat, with limited new products in their CPU/APU stack.  AMD is still primarily shipping 32nm and 28nm products and will not introduce 14nm CPU products until Ryzen launches in late Q1 of this year.  While AMD has improved their APU offerings at both mobile and desktop TDPs, they still rely on Carrizo and the Bristol Ridge derivative to provide new growth.  The company's aging Piledriver-based Vishera CPUs still comprise a significant portion of sales in the budget and midrange enthusiast markets.

The company had revenues of $1.11B US for Q4 with a $51M net loss.  Q3 featured revenues of $1.31B, but had a much larger loss of $293M.  The primary factor in that loss was the $340M charge for the amended wafer supply agreement that AMD has with GLOBALFOUNDRIES.  AMD's revenue was lower this past quarter, but they were able to winnow their loss down to the $51M figure.

While AMD stayed steady in the CPU/APU and GPU markets, their biggest decline came in semi-custom products.  This is understandable due to the longer lead times on these products as compared to AMD's CPUs/APUs and GPUs.  The console manufacturers purchase these designs and then pay out royalties as the chips are produced.  Sony and Microsoft each had new console revisions for the holiday season featuring new SoC designs from AMD.  To hit the holiday rush, these companies placed significant orders in Q2 and Q3 of 2016 to allow delivery in Q4.  Once those deliveries are made, Sony and Microsoft dramatically cut orders to allow good sell-through in Q4 and avoid massive unsold quantities in Q1 2017.  With fewer chips being delivered and royalties down, AMD feels the effects of seasonality typically one quarter sooner than Intel or NVIDIA does.

am4_01.jpg

For the year AMD had nearly $300M more in revenue as compared to 2015.  2016 ended at $4.27B, against 2015's $3.99B.  This is generally where AMD has been for the past decade, but it is lower than what they saw in years past with successful parts like the Athlon and Athlon 64.  In 2005 AMD had $5.8B in revenue, so the company still has a way to go before matching some of its best years.

One of the more interesting aspects is that even through these quarterly losses AMD has been able to increase their cash on hand.  AMD's cash was approaching some $700M a few years back, and with the losses they were taking it would not have been many years before liquidity was non-existent.  AMD has built that back up to $1.26B at the end of this quarter, giving them more of a cushion to rely upon in tight times.

AMD's year-on-year improvement is tangible, and it is made more impressive when considering how big an impact the $340M WSA charge had.  This shows that AMD has been very serious about cutting expenses and monetizing their products to the best of their ability.

This coming year should show further improvement for AMD due to a more competitive product stack in CPUs, APUs, and GPUs.  AMD announced that Ryzen will be launching sometime this March, hitting the Q1 expectations that the company set in the second half of 2016.  Prior to that, AMD thought they could push out limited quantities of Ryzen chips in late Q4 2016, but that did not turn out to be the case.  AMD has shown off multiple Ryzen samples, ranging from 3.2 GHz base clocks to an engineering sample boosting up to 4 GHz.  Ryzen looks far more competitive against Intel's current and upcoming products than anything AMD has offered in years.

am4_03.png

The GPU side will also be getting a boost in the first half of 2017.  It looks like the high-end Vega GPU will be launching in Q2 2017.  AMD has addressed the midrange and budget markets with the Polaris-based chips but has been absent at the high end with 14nm chips.  AMD still produces and sells Fury and Nano offerings that somewhat address the area above the midrange, but they do not adequately compete with the NVIDIA GTX 1070 and 1080 products.  Vega looks to be competitive with what NVIDIA has at the high end, and there is certainly pent-up demand for an AMD card in that market.

AMD had a solid 2016 that showed the current management team can successfully lead the company through some very challenging times.  The company continues to move forward, and we shall see new CPUs, GPUs, and motherboards that should all materially contribute to and expand AMD's bottom line.

Source: AMD

Podcast #432 - Kaby Lake, Vega, CES Review

Subject: Editorial | January 12, 2017 - 04:42 PM |
Tagged: Vega, Valerie, snapdragon, podcast, nvidia, msi, Lenovo, kaby lake, hdr, hdmi, gus, FreeSync2, dell, coolermaster, CES, asus, AM4, acer, 8k

PC Perspective Podcast #432 - 01/12/17

Join us this week as we discuss the DasKeyboard, Samsung 750 EVO, CES predictions and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Allyn Malventano, Josh Walrath, Jeremy Hellstrom

Program length: 1:45:28

Podcast topics of discussion:
 
  1. Week in Review:
  2. News items of interest:
  3. Hardware/Software Picks of the Week
    1. Jeremy: 1:42:11 They did it, they beat the hairbrush
  4. Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!


CES 2017: AMD Vega Running DOOM at 4K

Subject: Graphics Cards | January 6, 2017 - 11:27 PM |
Tagged: Vega, doom, amd

One of the demos that AMD had at CES was their new Vega architecture running DOOM with Vulkan on Ultra settings at 4K resolution. With this configuration, the pre-release card was coasting along in the high 60s to low 70s of frames per second. Compared to PC Gamer's benchmarks of the Vulkan patch (ours focused on 1080p), this puts Vega somewhat ahead of the GTX 1080, which averages in the low 60s.

Some of the comments note that, during one of the melee kills, the frame rate stutters a bit, dropping down to about 37 FPS. That’s true, and I included a screenshot of it below, but momentary dips sometimes just happen. It could even be a bug in pre-release drivers for a brand new GPU architecture, after all.

amd-2017-ces-vegadip.png

Yes, the frame rate dipped in the video, but stutters happen. No big deal.

As always, this is a single, vendor-controlled data point. There will be other benchmarks, and NVIDIA has both GP102 and Volta to consider. The GTX 1080 is only ~314 mm², so there's a lot more room for enthusiast GPUs to expand on 14nm, but this test suggests Vega will at least surpass it. (When a process node is fully mature, you will typically see low-yield chips up to around 600 mm².)

Coverage of CES 2017 is brought to you by NVIDIA!


Follow all of our coverage of the show at http://pcper.com/ces!

High Bandwidth Cache

Apart from AMD's other new architecture due out in 2017, its Zen CPU design, no product has had as much build-up and excitement surrounding it as its Vega GPU architecture. After the world learned that Polaris would be a mainstream-only design, released as the Radeon RX 480, the focus for enthusiasts turned straight to Vega. It has been on the public-facing roadmaps for years and signifies the company's return to the world of high-end GPUs, something it has been missing since the release of the Fury X in mid-2015.

slides-2.jpg

Let's be clear: today does not mark the release of the Vega GPU or products based on Vega. In reality, we don't even know enough to make highly educated guesses about performance without more details on the specific implementations. That being said, the information released by AMD today is interesting and shows that Vega will be much more than simply an increase in shader count over Polaris. It reminds me a lot of the build-up to the Fiji GPU release, when information and speculation flourished about how HBM would affect power consumption, form factor, and performance. What we can hope for, and what AMD's goal needs to be, is a cleaner and more consistent product release than the Fury X turned out to be.

The Design Goals

AMD began its discussion about Vega last month by talking about the changes in the world of GPUs and how data sets and workloads have evolved over the last decade. No longer are GPUs only concerned with games; they must also address professional, enterprise, and scientific workloads. Even more interestingly, just as we have discussed the growing gap between CPU performance and memory bandwidth, AMD posits that the gap between memory capacity and GPU performance is a significant hurdle and a limiter to performance and expansion. Game installs, professional graphics sets, and compute data sets continue to skyrocket. Game installs are now regularly over 50GB, and compute workloads can exceed petabytes. Even as we have seen GPU memory capacities increase from megabytes to gigabytes, reaching as high as 12GB in high-end consumer products, AMD thinks there should be more.

slides-8.jpg

Coming from a company that chose to release a high-end product limited to 4GB of memory in 2015, it’s a noteworthy statement.

slides-11.jpg

The High Bandwidth Cache

Bold enough to claim a direct nomenclature change, Vega 10 will feature an HBM2-based high-bandwidth cache (HBC) along with a new memory hierarchy to call it into play. This HBC will be a collection of memory on the GPU package, just like we saw on Fiji with the first HBM implementation, and it will be measured in gigabytes. Why AMD has moved to calling it a cache is covered below. (But can't we all get behind the retirement of the term "frame buffer"?) Interestingly, this HBC doesn't have to be HBM2; in fact, I was told that you can expect to see other memory systems on lower cost products going forward, so cards that integrate this new memory topology with GDDR5X or some equivalent seem assured.

slides-13.jpg

Continue reading our preview of the AMD Vega GPU Architecture!

AMD's Drumming Up Excitement for Vega

Subject: Graphics Cards | January 2, 2017 - 02:56 PM |
Tagged: Vega, amd

Just ahead of CES, AMD has published a teaser page with, currently, a single YouTube video and a countdown widget. In the video, a young man is walking down the street while tapping on a drum and passing by Red Team propaganda posters. It also contains subtle references to Vega on walls and things, in case the explicit references, including the site’s URL, weren’t explicit enough.

amd-2017-ces-vega.jpg

How subtle, AMD.

Speaking of references to Vega, the countdown widget claims to lead up to the architecture preview. We were expecting AMD to launch their high-end GPU line at CES, and this is the first (official) day of the show. Until it happens, I don’t really know whether it will be a more technical look, or if they will be focusing on the use cases.

The countdown ends at 9am (EST) on January 5th.

Source: AMD

AMD has the Instinct, if not the license, to kill

Subject: Graphics Cards | December 12, 2016 - 04:05 PM |
Tagged: vega 10, Vega, training, radeon, Polaris, machine learning, instinct, inference, Fiji, deep neural network, amd

Ryan was not the only one at AMD's Radeon Instinct briefing covering their shot across the bow of NVIDIA's HPC products.  The Tech Report just released their coverage of the event and the tidbits which AMD provided about the MI25, MI8 and MI6; no relation to a certain British governmental department.  They focus a bit more on the technologies incorporated into GEMM and point out that AMD's top card is not matched by an equivalent NVIDIA add-in product, as the GP100 GPU does not come as an add-in card.  Pop by to see what else they had to say.

dad_pierce_gbs_bullet.jpg

"Thus far, Nvidia has enjoyed a dominant position in the burgeoning world of machine learning with its Tesla accelerators and CUDA-powered software platforms. AMD thinks it can fight back with its open-source ROCm HPC platform, the MIOpen software libraries, and Radeon Instinct accelerators. We examine how these new pieces of AMD's machine-learning puzzle fit together."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Author:
Manufacturer: AMD

AMD Enters Machine Learning Game with Radeon Instinct Products

NVIDIA has been diving into the world of machine learning for quite a while, positioning themselves and their GPUs at the forefront of artificial intelligence and neural net development. Though the strategies are still filling out, I have seen products like the DIGITS DevBox place a stake in the ground for neural net training, and platforms like Drive PX perform inference tasks on those neural nets in self-driving cars. Until today AMD had remained mostly quiet on its plans to enter and address this growing and complex market, instead depending on the compute prowess of its latest Polaris and Fiji GPUs to make a general statement on their own.

instinct-18.jpg

The new Radeon Instinct brand of accelerators based on current and upcoming GPU architectures will combine with an open-source approach to software and present researchers and implementers with another option for machine learning tasks.

The statistics and requirements that come along with the machine learning evolution in the compute space are mind-boggling. More than 2.5 quintillion bytes of data are generated daily and stored on phones, PCs, and servers, both on-site and in cloud infrastructure. That includes 500 million tweets, 4 million hours of YouTube video, 6 billion Google searches, and 205 billion emails.

instinct-6.jpg

Machine intelligence is going to allow software developers to address some of the most important areas of computing for the next decade. Automated cars depend on deep learning for training, medical fields can utilize this compute capability to more accurately and expeditiously diagnose and find cures for cancer, and security systems can use neural nets to locate potential and current risk areas before they affect consumers; there are more uses for this kind of network and capability than we can imagine.

Continue reading our preview of the AMD Radeon Instinct machine learning processors!