(Leak) AMD Vega 10 and Vega 20 Information Leaked

Subject: Graphics Cards | January 8, 2017 - 03:53 AM |
Tagged: vega 11, vega 10, navi, gpu, amd

During CES, AMD showed off demo machines running Ryzen CPUs and Vega graphics cards, and gave the world a bit of information on the underlying architecture of Vega in an architectural preview that you can read about (or watch) here. AMD's Vega GPU is coming, and it is poised to compete with NVIDIA on the high end (an area that has been left to NVIDIA for a while now) in a big way.

Thanks to Videocardz, we have a bit more info on the products that we might see this year and what we can expect to see in the future. Specifically, the slides suggest that Vega 10 – the first GPUs to be based on the company's new architecture – may be available by the end of the first half of 2017. Following that, a dual-GPU Vega 10 product is slated for release in Q3 or Q4 of 2017, and a refreshed GPU on a smaller process node with more HBM2 memory, called Vega 20, is slated for the second half of 2018. The leaked slides also suggest that Navi (Vega's successor) might launch as soon as 2019 and will come in two variants called Navi 10 and Navi 11 (with Navi 11 being the smaller / less powerful GPU).


The 14nm Vega 10 GPU allegedly offers up 64 NCUs, as much as 12 TFLOPS of single precision compute, and 750 GFLOPS of double precision. Half precision performance is twice that of FP32 at 24 TFLOPS (which would be good for things like machine learning); the NCUs allegedly run FP16 at 2x and DPFP at 1/16 the single precision rate. If each NCU has 64 shaders like Polaris 10 and other GCN GPUs, then we are looking at a top-end Vega 10 chip having 4096 shaders, which rivals Fiji. Further, Vega 10 supposedly has a TDP of up to 225 watts.
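The leaked numbers hang together if you run the usual GCN back-of-envelope math. As a sketch (assuming the standard GCN rate of 2 FLOPs per shader per clock via FMA — NCU internals are unconfirmed at this point):

```python
# Back-of-envelope GCN throughput math.
# Assumption: 2 FLOPs per shader per clock (fused multiply-add), as on prior GCN parts.
SHADERS = 64 * 64  # 64 NCUs x 64 shaders each, if NCUs match GCN CUs

def peak_tflops(shaders, clock_ghz, flops_per_clock=2):
    """Theoretical peak in TFLOPS: shaders * FLOPs/clock * clock (GHz) / 1000."""
    return shaders * flops_per_clock * clock_ghz / 1000.0

# Working backwards from the leaked 12 TFLOPS figure:
implied_clock_ghz = 12_000 / (SHADERS * 2)  # ~1.465 GHz
print(f"{SHADERS} shaders, implied clock ~{implied_clock_ghz:.3f} GHz")
print(f"FP16 at 2x:   {peak_tflops(SHADERS, implied_clock_ghz) * 2:.1f} TFLOPS")
print(f"FP64 at 1/16: {peak_tflops(SHADERS, implied_clock_ghz) / 16 * 1000:.0f} GFLOPS")
```

That implied ~1.47 GHz clock is notably higher than Fiji's 1050 MHz, which is consistent with the comparison below.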

For comparison, the 28nm 8.9 billion transistor Fiji-based R9 Fury X ran at 1050 MHz with a TDP of 275 watts and had a rated peak compute of 8.6 TFLOPS. While we do not know clock speeds of Vega 10, the numbers suggest that AMD has been able to clock the GPU much higher than Fiji while still using less power (and thus putting out less heat). This is possible with the move to the smaller process node, though I do wonder what yields will be like at first for the top end (and highest clocked) versions.
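Comparing the rated peaks per watt makes the generational jump concrete — a rough sketch using only the figures quoted above (rated peaks, not real-world performance):

```python
# Rough efficiency comparison from rated peak compute and TDP.
fury_x = 8.6 / 275    # Fiji R9 Fury X: 8.6 TFLOPS at 275W
vega10 = 12.0 / 225   # leaked top Vega 10: 12 TFLOPS at 225W

print(f"Fury X : {fury_x:.3f} TFLOPS/W")
print(f"Vega 10: {vega10:.3f} TFLOPS/W ({vega10 / fury_x:.2f}x Fiji)")
```

Roughly a 1.7x improvement in peak FLOPS per watt, which is in line with what a 28nm-to-14nm node shrink plus architecture tweaks could plausibly deliver.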

Vega 10 will be paired with two stacks of HBM2 memory on package which will offer 16GB of memory with memory bandwidth of 512 GB/s. The increase in memory bandwidth is thanks to the move to HBM2 from HBM (Fiji needed four HBM dies to hit 512 GB/s and had only 4GB).
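The bandwidth figures line up with per-stack HBM math. A small sketch, assuming the standard 1024-bit interface per stack and per-pin data rates of 1 Gb/s for first-gen HBM and 2 Gb/s for HBM2 (the spec figures that match the quoted numbers):

```python
def stack_bandwidth_gbs(bus_bits=1024, pin_gbps=2.0):
    """Per-stack bandwidth in GB/s: bus width * per-pin rate / 8 bits per byte."""
    return bus_bits * pin_gbps / 8

hbm2 = stack_bandwidth_gbs(pin_gbps=2.0)  # 256 GB/s per HBM2 stack
hbm1 = stack_bandwidth_gbs(pin_gbps=1.0)  # 128 GB/s per first-gen HBM stack

print(f"Vega 10: 2 stacks x {hbm2:.0f} GB/s = {2 * hbm2:.0f} GB/s, 2 x 8 GB = 16 GB")
print(f"Fiji:    4 stacks x {hbm1:.0f} GB/s = {4 * hbm1:.0f} GB/s, 4 x 1 GB = 4 GB")
```

So HBM2's doubled per-pin rate lets Vega 10 match Fiji's 512 GB/s with half the stacks, while 8GB stacks quadruple the capacity.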

The slide also hints at a "Vega 10 x2" in the second half of the year, which is presumably a dual GPU product. The slide states that Vega 10 x2 will have four stacks of HBM2 (1 TB/s), though it is not clear if they are simply adding the two stacks per GPU to claim the 1 TB/s number or if each GPU will get four stacks (the latter is unlikely though, as there does not appear to be room on the package for two more stacks per GPU, and I am not sure they could make the package big enough to fit them either). Even if we assume they really mean 2x 512 GB/s per GPU (and maybe specific workloads can get more out of that across both GPUs), the doubling of cores and potential compute performance will be big. This is going to be a big number crunching and machine learning card, as well as for games of course. Clock speeds will likely have to be much lower than the single GPU Vega 10 (especially with the stated TDP of 300W), and workloads won't scale perfectly, so potential compute performance will not be quite 2x but should still be a decent per-card boost.


Raja Koduri holds up a Vega GPU at CES 2017 via eTeknix

Moving into the second half of 2018, the leaked slides suggest that a Vega 20 GPU will be released on a 7nm process node with 64 CUs, paired with four stacks of HBM2 for 16 GB or 32 GB of memory with 1 TB/s of bandwidth. Interestingly, the shaders will be set up such that the GPU can still do half precision at twice the single precision rate, but will not take nearly the hit on double precision that Vega 10 does: 1/2 the single precision rate rather than 1/16. The GPU(s) will use between 150W and 300W of power, and it seems these are set to be the real professional and workstation workhorses. A GPU with Vega 10's single precision throughput and a 1/2 DPFP rate would hit 6 TFLOPS of double precision, which is not bad (and it would hopefully be more than this due to faster clocks and architecture improvements).
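The double precision ratios make the Vega 10 vs. Vega 20 split clear. A sketch of that math, assuming (as the paragraph above does) that Vega 20 at least matches Vega 10's 12 TFLOPS of single precision — an assumption, since Vega 20's clocks are unknown:

```python
def fp64_tflops(fp32_tflops, ratio):
    """Double precision throughput given FP32 throughput and the DP:SP rate ratio."""
    return fp32_tflops * ratio

VEGA10_FP32 = 12.0  # leaked single precision figure, TFLOPS

print(f"Vega 10 at 1/16 rate: {fp64_tflops(VEGA10_FP32, 1/16):.2f} TFLOPS (750 GFLOPS)")
print(f"Vega 20 at 1/2 rate:  {fp64_tflops(VEGA10_FP32, 1/2):.1f} TFLOPS (same FP32 assumed)")
```

An 8x jump in DP throughput at the same FP32 rate is what separates a gaming-first part from an HPC/workstation part.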

Beyond that, the slides mention Navi's existence and that it will come in Navi 10 and Navi 11 variants, but no other details were shared, which makes sense as it is still far off.

You can see the leaked slides here. In all, it is an interesting look at potential Vega 10 and beyond GPUs, but definitely keep in mind that this is leaked information that allegedly came from an internal presentation, which likely showed the graphics processors in their best possible/expected light. It does add a bit more fuel to the fire of excitement for Vega though, and I hope that AMD pulls it off, as my unlocked 6950 is no longer supported and it is only a matter of time before new games perform poorly or not at all!


Source: eTeknix.com

January 8, 2017 | 04:53 AM - Posted by Tim Verry

I feel like I missed the perfect opportunity and didn't use allegedly enough (probably one of my most used words since I started here like 5 years ago haha).

 

Note that if you click on the first image, you can view the full sized version which makes the slides easier to read. I put them in a single image so that the article didn't take up the whole front page :-).

January 8, 2017 | 12:21 PM - Posted by JohnGR

Well, the 4096 stream processor count looks like a wall they just can't pass, and 225W for one 14nm GPU looks like a factory-overclocked chip just to get where the competition is. AMD seems to need much more time to become, and to stay, competitive in the high end.

January 8, 2017 | 01:05 PM - Posted by Anonymous (not verified)

Look at the one that Linus Tech Tips looked at, when they were allowed to pop open the case and look inside and talk with Raja Koduri about it! It's a GPU engineering sample, and the PCIe card with the power pins covered in gaffer's tape is also for testing (look at that USB testing port on the GPU's card). So who knows the final wattage! The card is not scheduled for release until the 1st half of 2017, so sometime in the April to June time frame most likely. It's an engineering sample, not final stepping, and more tweaks to hardware, drivers, microcode, etc. are going to happen, as well as games/gaming engine and DX12/Vulkan API tweaks with respect to Vega's development, before its final release!

The sense of entitlement among gamers is out of this world! And I really hope that AMD/RTG can get more of the server/HPC/workstation market so they are not beholden to gamers too much longer for AMD's very existence! The consumer/gaming market is too damn fickle for any high technology company to depend on alone for any form of business viability!

January 9, 2017 | 03:18 AM - Posted by JohnGR

It's obvious that AMD is mostly targeting the professional market, even if they are running Doom at every event. Raja showed his disappointment in another interview about people giving their money to the competition. Even in cases where AMD is offering an equal or better product, people will find an excuse to go with Nvidia. The GTX 1060 is outselling the RX 480 3 or even 4 to 1. People buy custom GTX 1050 Tis at RX 470 price levels. People line up to get the GTX 2080 at $999.

January 9, 2017 | 11:36 AM - Posted by Anonymous (not verified)

AMD is trying to get into that professional market so it can have the revenues to afford gaming SKUs, and more power to AMD for doing that! Nvidia has been doing that for quite some time; look at Nvidia's financial state. AMD will be able, even more than Nvidia and Intel, to offer total package deals with AMD's Zen server CPU SKUs paired with its Radeon Pro WX and Instinct GPU SKUs, for a slice of that very high margin professional market that AMD really needs to remain in business.

Even AMD's professional SKUs tend to be more affordable relative to Nvidia's and Intel's professional SKUs. AMD's Radeon Pro Duo is like the Titan X for compute, without having to go all the way to the very expensive certified professional GPU variants, so AMD will probably have some affordable compute-focused SKUs in addition to the professional SKUs that cost much more. AMD's Instinct line is targeting the AI market, and AMD will continue to have its gaming-oriented SKUs, with a lot of the R&D for the gaming SKUs paid for by the professional market revenues.

The consumer/gaming markets are not enough to pay the bills and allow AMD to really prosper, so gamers should hope for AMD to have great success in the professional markets, with their deep pockets, their ability to write off professional GPU/CPU/motherboard costs on business taxes, and the mad markup revenues that pay the bills and R&D expenses and please the shareholders! Vega is coming, and some very new interconnect technology after that for connecting up more than one GPU die on an interposer, and interposers can be spliced together to get past that reticle limit. Nvidia is always going to price its SKUs higher until it starts losing enough market share, so AMD will have to price its CPU and GPU SKUs aggressively this time around in order to get its market share numbers up and better compete. This is a very good time for the GPU/CPU market, with Ryzen and Vega and some very new IP from AMD on both its CPU and GPU offerings.

January 8, 2017 | 12:54 PM - Posted by CNote

Are they going to keep working on Polaris? 490 etc...

January 8, 2017 | 01:15 PM - Posted by Anonymous (not verified)

Sure, for the laptop/mobile Polaris rebrands (labeled RX 500M), and maybe there will be some Polaris tweaks incoming to get a little better performance. I'll even bet there were some hardware features on Polaris that were not able to be certified in time for the Polaris release-to-market schedule that are going to be used in Vega, and the primitive discard accelerator IP used in Polaris hints at that new primitive shader IP that will be used in Vega!

January 8, 2017 | 02:18 PM - Posted by Tim Verry

I could see them keeping Polaris around for a bit and having it be low to mid range as it has been, rather than scaling Vega that far down. It could be a similar split to what NVIDIA did with big Kepler and regular Kepler (GK110 and GK104? I forget the names atm heh).

January 8, 2017 | 02:51 PM - Posted by Anonymous (not verified)

People are speculating that the card is running at 1,500 MHz, which would be huge for AMD and the new architecture. If this card has any headroom we may actually have some real competition. People seem to forget that Nvidia used to sell you their biggest chip for $500 not too long ago; that chip will now run you $1,299 on a much smaller die. Food for thought.

January 8, 2017 | 10:56 PM - Posted by John H (not verified)

It's been a while. Since March 2011 Nvidia's top end has always been $700 or more.

January 8, 2017 | 04:23 PM - Posted by Timothy Edgin (not verified)

This crippling of the FP64 means I will have to wait for Navi. Wtf is wrong with AMD? They keep following Nvidia, but the only thing that has separated the two (until recently) was that AMD didn't cripple the consumer cards. There is no way I am "upgrading" to a card with less DP than my 7970. Yes, I use FP64.

January 8, 2017 | 05:28 PM - Posted by Anonymous (not verified)

Gaming doesn't need FP64 from what I gather, so someone who does need it would surely buy a FirePro GPU and not one specifically made for gaming?
A bit like buying a sports car and expecting it to be good off-road?

January 9, 2017 | 01:47 AM - Posted by Cyric (not verified)

You are correct, there are very few applications that use double precision. It's mostly for CAD, where you may also be involved in the production of complicated machinery. That's where you need double precision in order to be precise to the thousandths of a millimeter, in the design, the work emulation, and the actual production.
So if you are in that kind of industry, getting a couple of Teslas with that capability is not even a problem.
For the rest of us, single and half precision is enough.

January 9, 2017 | 10:41 AM - Posted by DDearborn (not verified)

Hmmm

The problem with buying the likes of the FirePro and Quadro cards is that you are paying a massive premium for application-specific drivers that in many cases aren't applicable to your requirements. The costs are almost a full order of magnitude higher. Particularly for smaller firms with multiple seats, the costs quickly become prohibitive. In fact, one can make a legitimate argument that this price gouging represents a significant barrier to small business proliferation and innovation. And let's be clear here: the cost to AMD/Nvidia to activate dormant features in the chip is zero.

January 9, 2017 | 08:57 AM - Posted by Anonymously Anonymous (not verified)

@Pcper, you guys are slipping; why hasn't this knucklehead's offensiveness been deleted yet?

January 9, 2017 | 12:37 PM - Posted by Jeremy Hellstrom

I'm not going to do 24/7 comment checking mate, they go when I get to them.

January 9, 2017 | 11:56 AM - Posted by Anonymous (not verified)

"workloads wont scale perfectly so potential computer performance will not be quite 2x but should still be a decent per-card boost."

Speculation under DX12.

January 9, 2017 | 07:47 PM - Posted by Anonymous (not verified)

No, I think he was referring to "compute performance" for applications not games with that specific comment.

("computer performance" in the article seems to be a typo. I think he meant "compute performance" as it was said right above)

January 10, 2017 | 01:32 AM - Posted by Tim Verry

D'oh! I'll fix that typo.