AMD Working on GDDR6 Memory Controller For Future Graphics Cards

Subject: General Tech, Graphics Cards | December 4, 2017 - 05:47 PM |
Tagged: navi, HBM2, hbm, gddr6, amd

WCCFTech reports that AMD is working on a GDDR6 memory controller for its upcoming graphics cards. Starting with an AMD Technical Engineer listing GDDR6 on his portfolio, the site claims to have verified through sources familiar with the matter that AMD is, in fact, supporting the new graphics memory standard and will be using their own controller to support it (rather than licensing one).

roadmap.jpg

AMD is not abandoning HBM2 memory, though. The company is sticking to its previously released roadmaps, and Navi will still utilize HBM2 memory, at least on the high-end SKUs. While AMD has so far only released RX Vega 64 and RX Vega 56 graphics cards, the company may well release lower-end Vega-based cards with GDDR5 at some point, although for now the Polaris architecture is handling the lower end. AMD supporting GDDR6 is a good thing and should enable cheaper mid-range cards that are not limited by the supply shortages of the more expensive (albeit much higher bandwidth) High Bandwidth Memory that have seemingly plagued both NVIDIA and AMD at various points in time.

GDDR6 offers several advantages over GDDR5: almost twice the speed (16 Gbps versus 9 Gbps) at a lower voltage (1.35 V versus 1.5 V), along with higher density and more underlying technology optimizations than even GDDR5X. While G5X memory is capable of hitting the same 16 Gbps launch speeds as GDDR6, the newer memory technology offers up to 32Gb dies* versus 16Gb and a two-channel design (which ends up being a bit more efficient and easier for GPU manufacturers to wire up). GDDR6 will represent a nice speed bump for mid-range cards (the very low end may well stick with GDDR5, save for mobile parts, which could benefit from GDDR6's lower power), while letting AMD earn slightly better profit margins on these lower-margin SKUs and produce more cards to satisfy demand. HBM2 is nice to have, but right now it is better suited to compute-oriented cards for workstation and data center usage than to gaming, and GDDR6 can offer more price-to-performance for consumer gaming cards.
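
To put those per-pin rates in perspective, here is a rough back-of-the-envelope sketch of what they mean for a whole card. The 256-bit bus width is an assumption picked to represent a typical mid-range board, and the function below is purely illustrative.

```python
# Back-of-the-envelope bandwidth sketch for a hypothetical 256-bit mid-range
# card; the per-pin data rates come from the paragraph above, while the
# 256-bit bus width (eight 32-bit devices) is an assumed configuration.

def card_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int = 256) -> float:
    """Peak bandwidth in GB/s = per-pin rate (Gbps) * bus width (bits) / 8 bits per byte."""
    return per_pin_gbps * bus_width_bits / 8

for name, gbps in [("GDDR5 @ 9 Gbps", 9.0),
                   ("GDDR5X @ 16 Gbps", 16.0),
                   ("GDDR6 @ 16 Gbps", 16.0)]:
    print(f"{name}: {card_bandwidth_gbs(gbps):.0f} GB/s on a 256-bit bus")
# GDDR5  @  9 Gbps -> 288 GB/s
# GDDR5X @ 16 Gbps -> 512 GB/s
# GDDR6  @ 16 Gbps -> 512 GB/s
```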

As for the question of why AMD would want to design their own GDDR6 memory controller rather than license one, I think that comes down to AMD thinking long-term. It will be more expensive up front to design their own controller, but AMD will be able to more fully integrate it and tune it to work with their graphics cards such that it can be more power efficient. Also, having their own GDDR6 memory controller means they can use it in other areas such as their APUs and SoCs offered through their Semi Custom Business Unit (e.g. the SoCs used in gaming consoles). Being able to offer that controller to other companies in their semi-custom SoCs free of third party licensing fees is a good thing for AMD.

Micron GDDR5X.png

With GDDR6 becoming readily available early next year, there is a good chance AMD will be ready to use the new memory technology as soon as Navi, though likely not until closer to the end of 2018 or early 2019, when AMD launches new lower-end and mid-range (consumer-level) gaming cards based on Navi and/or Vega.

*At launch it appears that GDDR6 from the big three (Micron, Samsung, and SK Hynix) will use 16Gb dies, but the standard allows for up to 32Gb dies. The G5X standard allows for up to 16Gb dies.

Source: WCCFTech

(Leak) AMD Vega 10 and Vega 20 Information Leaked

Subject: Graphics Cards | January 8, 2017 - 03:53 AM |
Tagged: vega 11, vega 10, navi, gpu, amd

During CES, AMD showed off demo machines running Ryzen CPUs and Vega graphics cards, and gave the world a bit of information on the underlying architecture of Vega in an architectural preview that you can read about (or watch) here. AMD's Vega GPU is coming, and it is poised to compete with NVIDIA on the high end (an area that has been left to NVIDIA for a while now) in a big way.

Thanks to Videocardz, we have a bit more info on the products that we might see this year and what we can expect to see in the future. Specifically, the slides suggest that Vega 10, the first GPUs to be based on the company's new architecture, may be available by the end of the first half of 2017. Following that, a dual-GPU Vega 10 product is slated for release in Q3 or Q4 of 2017, and a refreshed GPU on a smaller process node with more HBM2 memory, called Vega 20, is due in the second half of 2018. The leaked slides also suggest that Navi (Vega's successor) might launch as soon as 2019 and will come in two variants called Navi 10 and Navi 11 (with Navi 11 being the smaller / less powerful GPU).

AMD Vega Leaked Info.jpg

The 14nm Vega 10 GPU allegedly offers up 64 NCUs and as much as 12 TFLOPS of single precision and 750 GFLOPS of double precision compute performance. Half precision performance is twice that of FP32, at 24 TFLOPS (which would be good for things like machine learning); the NCUs allegedly run FP16 at 2x and DPFP at 1/16 of the FP32 rate. If each NCU has 64 shaders like Polaris 10 and other GCN GPUs, then we are looking at a top-end Vega 10 chip having 4096 shaders, which rivals Fiji. Further, Vega 10 supposedly has a TDP of up to 225 watts.

For comparison, the 28nm 8.9 billion transistor Fiji-based R9 Fury X ran at 1050 MHz with a TDP of 275 watts and had a rated peak compute of 8.6 TFLOPS. While we do not know clock speeds of Vega 10, the numbers suggest that AMD has been able to clock the GPU much higher than Fiji while still using less power (and thus putting out less heat). This is possible with the move to the smaller process node, though I do wonder what yields will be like at first for the top end (and highest clocked) versions.
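
Since the leak does not include clock speeds, here is a quick sanity-check sketch of the math implied by those figures. The 64-shaders-per-NCU and 2-FLOPs-per-clock assumptions are carried over from Polaris and other GCN parts, not stated on the slide.

```python
# Peak-compute sanity check, assuming 64 shaders per NCU and 2 FLOPs per
# shader per clock (one fused multiply-add), as on Polaris/other GCN parts.

def peak_tflops(shaders: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    """Peak throughput in TFLOPS = shaders * FLOPs/clock * clock (GHz) / 1000."""
    return shaders * flops_per_clock * clock_ghz / 1000.0

shaders = 64 * 64  # 64 NCUs * 64 shaders each (assumption)

print(peak_tflops(4096, 1.050))  # Fiji / Fury X at 1050 MHz -> ~8.6 TFLOPS
implied_clock_ghz = 12.0 * 1000.0 / (shaders * 2)
print(f"Clock implied by 12 TFLOPS: ~{implied_clock_ghz * 1000:.0f} MHz")  # ~1465 MHz
print(peak_tflops(shaders, implied_clock_ghz) * 2)  # FP16 at 2x the FP32 rate -> ~24 TFLOPS
```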

Vega 10 will be paired with two stacks of HBM2 memory on package, which will offer 16GB of memory with 512 GB/s of memory bandwidth. The increase in per-stack memory bandwidth is thanks to the move from HBM to HBM2 (Fiji needed four HBM stacks to hit 512 GB/s and had only 4GB).
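
For reference, the bandwidth arithmetic works out as shown below. The 1024-bit-per-stack interface width and the nominal HBM and HBM2 per-pin rates are my assumptions, not figures from the leak.

```python
# HBM bandwidth sketch: each HBM/HBM2 stack exposes a 1024-bit interface,
# so bandwidth = stacks * 1024 bits * per-pin rate (Gbps) / 8.
# Per-pin rates below are the nominal HBM (1 Gbps) and HBM2 (2 Gbps) figures.

def hbm_bandwidth_gbs(stacks: int, per_pin_gbps: float) -> float:
    return stacks * 1024 * per_pin_gbps / 8

print(hbm_bandwidth_gbs(4, 1.0))  # Fiji: four HBM1 stacks   -> 512 GB/s
print(hbm_bandwidth_gbs(2, 2.0))  # Vega 10: two HBM2 stacks -> 512 GB/s
```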

The slide also hints at a "Vega 10 x2" in the second half of the year, which is presumably a dual-GPU product. The slide states that Vega 10 x2 will have four stacks of HBM2 (1TB/s), though it is not clear if they are simply adding the two stacks per GPU to claim the 1TB/s number or if both GPUs will have four stacks (the latter is unlikely, though, as there does not appear to be room on the package for two more stacks each, and I am not sure they could make the package big enough to make room for them either). Even if we assume that they really mean 2x 512 GB/s per GPU for memory bandwidth (and maybe they can get more out of that in specific workloads across both GPUs), the doubling of cores and at least potential compute performance will be big. This is going to be a big number-crunching and machine-learning card, as well as for games of course. Clock speeds will likely have to be much lower compared to the single-GPU Vega 10 (especially with a stated TDP of 300W), and workloads won't scale perfectly, so potential compute performance will not be quite 2x but should still be a decent per-card boost.
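
As a rough illustration of why "not quite 2x" is the right expectation, the sketch below applies an assumed clock reduction and an assumed multi-GPU scaling efficiency to the 12 TFLOPS single-GPU figure; both factors are placeholders for illustration, not leaked numbers.

```python
# Hypothetical dual-GPU scaling sketch. The clock_scale (to stay inside a
# 300 W TDP) and scaling_eff (real-world multi-GPU efficiency) values are
# illustrative assumptions only.

def dual_gpu_effective_tflops(single_gpu_tflops: float,
                              clock_scale: float,
                              scaling_eff: float) -> float:
    return 2 * single_gpu_tflops * clock_scale * scaling_eff

# e.g. clocks backed off ~15% and ~85% scaling in a given workload:
print(dual_gpu_effective_tflops(12.0, 0.85, 0.85))  # ~17.3 TFLOPS vs 24 "on paper"
```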

AMD-Vega-GPU.jpg

Raja Koduri holds up a Vega GPU at CES 2017 via eTeknix

Moving into the second half of 2018, the leaked slides suggest that a Vega 20 GPU will be released on a 7nm process node with 64 CUs and paired with four stacks of HBM2 for 16 GB or 32 GB of memory with 1TB/s of bandwidth. Interestingly, the shaders will be set up such that the GPU can still do half precision calculations at twice the single precision rate, but will not take nearly the hit on double precision that Vega 10 does, running at 1/2 the single precision rate rather than 1/16. The GPU(s) will use between 150W and 300W of power, and it seems these are set to be the real professional and workstation workhorses. A Vega-10-class chip with 1/2-rate DPFP compute would hit 6 TFLOPS of double precision, which is not bad (and it would hopefully be more than this due to faster clocks and architecture improvements).
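
The double-precision difference is easiest to see by applying the two rate ratios to the same single-precision figure; the sketch below uses Vega 10's quoted 12 TFLOPS as a stand-in baseline, since the slide gives no absolute compute numbers for Vega 20.

```python
# Double-precision rate comparison, using Vega 10's quoted 12 TFLOPS of
# single precision as a stand-in baseline (an assumption for illustration).
fp32_tflops = 12.0
print(fp32_tflops / 16)  # Vega 10 style 1/16-rate DP -> 0.75 TFLOPS (750 GFLOPS)
print(fp32_tflops / 2)   # Vega 20 style 1/2-rate DP  -> 6 TFLOPS
```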

Beyond that, the slides mention Navi's existence and that it will come in Navi 10 and Navi 11 variants, but no other details were shared, which makes sense as it is still far off.

You can see the leaked slides here. In all, it is an interesting look at potential Vega 10 and beyond GPUs, but definitely keep in mind that this is leaked information and that it allegedly came from an internal presentation that likely showed the graphics processors in their best possible/expected light. It does add a bit more fuel to the fire of excitement for Vega, though, and I hope that AMD pulls it off, as my unlocked 6950 is no longer supported and it is only a matter of time before new games perform poorly or not at all!

Source: eTeknix.com

Podcast #391 - AMD's news from GDC, the MSI Vortex, and Q&A!

Subject: General Tech | March 17, 2016 - 11:07 PM |
Tagged: podcast, video, amd, XConnect, gdc 2016, Vega, Polaris, navi, razer blade, Sulon Q, Oculus, vive, raja koduri, GTX 1080, msi, vortex, Intel, skulltrail, nuc

PC Perspective Podcast #391 - 03/17/2016

Join us this week as we discuss AMD's news from GDC, the MSI Vortex, and Q&A!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!