New AMD Polaris 10 and Polaris 11 GPU Details Emerge

Subject: Editorial, Graphics Cards | May 18, 2016 - 01:18 PM
Tagged: rumor, Polaris, opinion, HDMI 2.0, gpu, gddr5x, GDDR5, GCN, amd, 4k

While Nvidia's Pascal has held the spotlight in the news recently, it is not the only new GPU architecture debuting this year. AMD will soon bring its Polaris-based graphics cards to market for notebooks and mainstream desktop users. While several different code names have been thrown around for these new chips, in general terms they are consistently referred to as Polaris 10 and Polaris 11. AMD's Raja Koduri stated in an interview with PC Perspective that the numbers in the naming scheme hold no special significance, but that Polaris will eventually be used across the entire performance lineup (from low end to high end graphics).

Naturally, there are going to be many rumors and leaks as the launch gets closer. In fact, Tech Power Up recently came across a number of interesting details about AMD's plans for Polaris-based graphics in 2016, including specifications and which areas of the market each chip is aimed at.


Citing the usual "industry sources" familiar with the matter (take that for what it's worth, but the specifications do not seem out of the realm of possibility), Tech Power Up revealed that there are two lines of Polaris-based GPUs that will be made available this year. Polaris 10 will allegedly occupy the mid-range (mainstream) graphics option in desktops as well as being the basis for high end gaming notebook graphics chips. On the other hand, Polaris 11 will reportedly be a smaller chip aimed at thin-and-light notebooks and mainstream laptops.

Now, for the juicy bits of the leak: the rumored specifications!

AMD's "Polaris 10" GPU will feature 32 compute units (CUs), which TPU estimates – based on the assumption that each CU still contains 64 shaders on Polaris – works out to 2,048 shaders. The GPU further features a 256-bit memory interface along with a memory controller supporting both GDDR5 and GDDR5X (though not at the same time, heh). This leaves room for cheaper Polaris 10 derived products with fewer than 32 CUs and/or cheaper GDDR5 memory. Graphics cards would have as much as 8GB of memory initially clocked at 7 Gbps. Reportedly, the full 32 CU GPU is rated at 5.5 TFLOPS of single precision compute power and runs at a TDP of no more than 150 watts.
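As a quick sanity check on those numbers (all rumored, not confirmed by AMD): GCN's peak single-precision rate is 2 FLOPS per shader per clock, so the shader count and the TFLOPS figure together imply the clock speed, and vice versa. A minimal sketch, assuming the 64-shaders-per-CU carryover holds:

```python
# Quick sanity check of the leaked Polaris 10 figures (rumored, not
# confirmed by AMD). 64 shaders per CU is an assumption carried over
# from earlier GCN parts.
SHADERS_PER_CU = 64
cus = 32
shaders = cus * SHADERS_PER_CU               # 2,048 shaders

# GCN peak single-precision FLOPS = 2 ops/clock (FMA) * shaders * clock
clock_hz = 1343e6                            # the back-calculated ~1343 MHz estimate
tflops = 2 * shaders * clock_hz / 1e12
print(f"{shaders} shaders, ~{tflops:.2f} TFLOPS")  # 2048 shaders, ~5.50 TFLOPS
```

The numbers are self-consistent: 2,048 shaders at roughly 1343 MHz lands almost exactly on the rumored 5.5 TFLOPS.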

Compared to the existing Hawaii-based R9 390X, the upcoming R9 400 series Polaris 10 GPU has fewer shaders and less memory bandwidth. Its memory is clocked 1 GHz higher (effective), but the 256-bit GDDR5X bus is half the width of the 390X's 512-bit GDDR5 bus, which works out to 224 GB/s of memory bandwidth for Polaris 10 versus 384 GB/s on Hawaii. The R9 390X has a slight edge in compute performance at 5.9 TFLOPS versus Polaris 10's 5.5 TFLOPS; however, the Polaris 10 GPU uses much less power and easily wins on performance per watt! It nearly matches the 390X's single precision compute at close to half the power, which is impressive if it holds true!
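The bandwidth figures follow directly from the bus widths and effective data rates, and the performance-per-watt claim can be checked from the rumored numbers. A rough sketch (every input here is from the leak, not from AMD):

```python
# Where the bandwidth numbers come from, and a rough perf/watt check.
# bandwidth (GB/s) = effective data rate (Gbps per pin) * bus width (bits) / 8
polaris10_bw = 7 * 256 / 8     # GDDR5X at 7 Gbps on a 256-bit bus -> 224 GB/s
hawaii_bw = 6 * 512 / 8        # GDDR5 at 6 Gbps on a 512-bit bus  -> 384 GB/s

# Performance per watt from the rumored peak compute and TDP figures
polaris10_ppw = 5.5 / 150      # TFLOPS per watt
r9_390x_ppw = 5.9 / 275
advantage = polaris10_ppw / r9_390x_ppw
print(f"{polaris10_bw:.0f} GB/s vs {hawaii_bw:.0f} GB/s, "
      f"perf/W advantage ~{advantage:.2f}x")   # ~1.71x
```

If the leaked TDPs hold, Polaris 10 would deliver roughly 1.7x the single precision compute per watt of the 390X.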

                 R9 390X            R9 390             R9 380            R9 400 "Polaris 10"
GPU Code name    Grenada (Hawaii)   Grenada (Hawaii)   Antigua (Tonga)   Polaris 10
GPU Cores        2816               2560               1792              2048
Rated Clock      1050 MHz           1000 MHz           970 MHz           ~1343 MHz
Texture Units    176                160                112               ?
ROP Units        64                 64                 32                ?
Memory           8GB                8GB                4GB               8GB
Memory Clock     6000 MHz           6000 MHz           5700 MHz          7000 MHz
Memory Interface 512-bit            512-bit            256-bit           256-bit
Memory Bandwidth 384 GB/s           384 GB/s           182.4 GB/s        224 GB/s
TDP              275 watts          275 watts          190 watts         150 watts (or less)
Peak Compute     5.9 TFLOPS         5.1 TFLOPS         3.48 TFLOPS       5.5 TFLOPS
MSRP (current)   ~$400              ~$310              ~$199             unknown

Note: Polaris GPU clock estimated by assuming 5.5 TFLOPS peak compute and an accurate shader count. (Thanks Scott.)

Another comparison that can be made is to the Radeon R9 380, a Tonga-based GPU with a similar TDP. In this matchup, the Polaris 10 based chip will – at a slightly lower TDP – pack in more shaders, twice as much (and faster-clocked) memory with 23% more bandwidth, and a 58% increase in single precision compute horsepower. Not too shabby!
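Those percentages check out against the rumored figures in the table above:

```python
# Checking the claimed gains over the Tonga-based R9 380 using the
# rumored table figures.
bw_gain = 224 / 182.4 - 1          # memory bandwidth increase
compute_gain = 5.5 / 3.48 - 1      # single-precision compute increase
print(f"bandwidth +{bw_gain:.0%}, compute +{compute_gain:.0%}")
# bandwidth +23%, compute +58%
```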

Likely, a good portion of these gains comes from the move to a smaller process node with FinFET "tri-gate"-like transistors on the Samsung/GlobalFoundries 14LPP FinFET manufacturing process, though AMD has also made some architectural tweaks and hardware additions to the GCN 4.0 based processors. A brief high level introduction was said to be given today in a webinar for AMD's partners (though AMD preemptively said that no nitty-gritty technical details would be divulged yet). (Update: Tech Altar summarized the partner webinar. Unfortunately, there were no major reveals other than that AMD will not limit AIB partners from pushing for the highest factory overclocks they can achieve.)

Moving on from Polaris 10 for a bit: Polaris 11 is rumored to be a smaller GCN 4.0 chip that will top out at 14 CUs (an estimated 896 shaders/stream processors) and 2.5 TFLOPS of single precision compute power. These chips, aimed at mainstream and thin-and-light laptops, will have 50 watt TDPs and will be paired with up to 4GB of GDDR5 memory. There is apparently no GDDR5X option for these, which makes sense at this price point and performance level. The 128-bit bus is a bit limiting, but this is a low end mobile chip we are talking about here...

                 R7 370                             R7 400 Series "Polaris 11"
GPU Code name    Trinidad (Pitcairn)                Polaris 11
GPU Cores        1024                               896
Rated Clock      925 MHz base (975 MHz boost)       ~1395 MHz
Texture Units    64                                 ?
ROP Units        32                                 ?
Memory           2 or 4GB                           4GB
Memory Clock     5600 MHz                           ? MHz
Memory Interface 256-bit                            128-bit
Memory Bandwidth 179.2 GB/s                         ? GB/s
TDP              110 watts                          50 watts
Peak Compute     1.89 TFLOPS                        2.5 TFLOPS
MSRP (current)   ~$140 (less after rebates/sales)   ?

Note: Polaris GPU clock estimated by assuming 2.5 TFLOPS peak compute and an accurate shader count. (Thanks Scott.)
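The same back-of-envelope math used for Polaris 10 yields the ~1395 MHz figure here, again assuming 64 shaders per CU (an assumption, since AMD has not confirmed the CU layout):

```python
# Back-calculating the Polaris 11 clock from the rumored 2.5 TFLOPS,
# assuming 14 CUs * 64 shaders and GCN's 2 FLOPS/shader/clock.
shaders = 14 * 64                        # 896 stream processors
clock_mhz = 2.5e12 / (2 * shaders) / 1e6
print(f"~{clock_mhz:.0f} MHz")           # ~1395 MHz
```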

Fewer details were unveiled concerning Polaris 11, as you can see from the chart above. From what we know so far, it should be a promising successor to the R7 370 series: even with the narrower memory bus and lower shader count, the GPU should be clocked higher (M-series mobile variants may also have more shaders than the 370-and-lower mobile series) and carry a much lower TDP for at least equivalent, if not noticeably better, performance. The lower power usage in particular will be hugely welcome in mobile devices, as it should result in longer battery life under the same workloads. I picked the R7 370 as the comparison because it has 4 gigabytes of memory and not that many more shaders, and as a desktop chip it may be more widely familiar to readers. Polaris 11 also appears to sit between the R7 360 and R7 370 in terms of shader count and other features, but is allegedly going to be faster than both of them while using (at least on paper) less than half the power.

Of course, these are still rumors until AMD makes Polaris officially, well, official with a product launch. The claimed specifications appear reasonable though, and based on them I have a few important takeaways and thoughts.


The first thing on my mind is that AMD is taking an interesting direction here. NVIDIA has chosen to start its new generation at the top, announcing "big Pascal" GP100 and actually launching the GP104-based GTX 1080 (one of its highest end consumer cards) yesterday, then introducing lower end products over the course of the year. AMD has opted for the opposite approach: it will start closer to the lower end with a mainstream notebook chip and a high end notebook/mainstream desktop GPU (Polaris 11 and 10, respectively), then flesh out its product stack over the following year (remember, Raja Koduri stated Polaris and GCN 4 would be used across the entire product stack), building up to bigger and higher end GPUs and finally topping off with its highest end consumer (and professional) GPUs based on "Vega" in 2017.

This means that for some time after both architectures launch, AMD's and NVIDIA's newest GPUs will not be directly competing with each other. I'm not sure whether this was planned by either company or is just how it worked out as each followed its own GPU philosophy (I'm inclined to think the latter). Eventually they should meet in the middle (maybe late this year?) with a mid-range desktop graphics card, and it will be interesting to see how they stack up at similar price points and hardware levels. Then, once "Vega"-based GPUs hit (sadly, probably in time for NVIDIA's big Pascal to launch, heh; I'm not sure if Vega is only a Fury X replacement or goes beyond that to compete with a 1080 Ti or even GP100), we should see GCN 4 on the new smaller process node square up against NVIDIA's 16nm Pascal products across the entire lineup. Which will have the better performance, and which will win out in power usage, performance per watt, and performance per dollar? All questions I wish I knew the answers to, but sadly do not!

Speaking of price and performance per dollar: Polaris is actually looking pretty good so far at hitting much lower TDPs and power usage targets while delivering at least similar performance, if not a good bit more. Both AMD and NVIDIA appear to be bringing out GPUs better than I expected as far as improvements in performance and power usage (these die shrinks have really helped, even though that trend isn't really going to continue from here on out...). I hope that AMD can at least match NVIDIA in these areas at the mid range, even if it does not have a high end GPU coming out soon (not until sometime after these cards launch, and not really until Vega, the high end GCN successor). At least on paper, based on the leaked information, the GPUs look good. My only worry is pricing, which I think is going to make or break these cards. AMD will need to price them competitively and aggressively to ensure their adoption and success.

I hope that doing the rollout this way (starting with lower end chips) helps AMD iron out the new smaller process node, and that they are able to get good yields so that they can be aggressive with pricing here and eventually at the high end!

I am looking forward to more information on AMD's Polaris architecture and the graphics cards based on it!

I will admit that I am not 100% up on all the rumors and I apologize for that. With that said, I would love to hear what your thoughts are on AMD's upcoming GPUs and what you think about these latest rumors!


May 18, 2016 | 01:35 PM - Posted by Anonymous (not verified)

AMD Navi? "HEY!"

May 18, 2016 | 01:36 PM - Posted by Cristiano (not verified)

I might get Polaris 10 this time around. I just love these consumer-friendly products, and if it's cheaper than the 1070 while offering around the same performance, it would be awesome. And also power efficient xD

May 18, 2016 | 07:17 PM - Posted by cheeseballs (not verified)

Based on what we know about the 1070 (about 6.5 TFLOPS), Polaris 10 is likely to be quite a bit slower. If the price is also a lot lower, then Polaris 10 will definitely be a great value (assuming these leaks are true).

May 18, 2016 | 09:59 PM - Posted by Qwehuy (not verified)

The 1070 is 5.6 TFLOPS at base clock though.

May 18, 2016 | 01:42 PM - Posted by JohnGR

Have you seen this?

Also someone tried to guess Polaris' performance based on clock speeds and whatever info he had or thought he had.

Any ideas on those two links are welcomed. The only thing that looks certain, is that the whole story about Polaris failing at 850MHz was a joke.

May 18, 2016 | 01:44 PM - Posted by Jeremy Hellstrom

I really hope you are right, but I am still worried about them.

May 18, 2016 | 01:46 PM - Posted by JohnGR

I think we all learned to worry about AMD's future in the last decade. Me, you, everybody. :D

May 18, 2016 | 03:47 PM - Posted by Anonymous (not verified)

Would be incredible if true, not only would Polaris 10 be stronger than 1070 but would actually be on pace with 1080 at half the cost. AMD has some mighty big shoes to fill with all these rumors.

May 18, 2016 | 01:43 PM - Posted by Anonymous (not verified)

It is rumoured that the Polaris 10 5.5 TFLOPS figure is actually for a mobile GPU instead of desktop; the desktop variant is likely to have higher TFLOPS.

May 18, 2016 | 03:05 PM - Posted by Tim Verry

That would be neat. It is definitely possible that the specs TPU has been told are only for one Polaris 10 GPU; there could (probably) be others. It's hard to say if this one is the full Polaris 10 or one in the middle of the product stack. Something that lends that idea a bit of credibility is the sheer number of different specific GPU code names being thrown around in rumors. They could all be Polaris 10 in general but cut down, etc. The GPU spec'd above is rumored to go in both mainstream to mid range desktops and high end laptops. That does not rule out a part aimed solely at the desktop... perhaps the 40 CU Polaris part that was rumored/expected by TPU previously (though 40 seems a strange number to me for some reason).


In general I am, I believe the term is, skeptically optimistic about Polaris and am waiting patiently (or trying to, heh) for more concrete details. I do think it is safe to say that I personally have not been pre-briefed on anything Polaris or put under NDA, which is good because we can all talk about it and speculate, but sucks because I don't know more about it! ;-)

May 18, 2016 | 05:24 PM - Posted by Mr Buster (not verified)

Mobile gpu with 150 Watt TDP? Not buying that story. Polaris 11 as mobile part makes perfect sense. Polaris 10 not so much.

May 18, 2016 | 01:43 PM - Posted by Anonymous (not verified)

It is rumoured that the Polaris 10 5.5 TFLOPS figure is actually for a mobile GPU instead of desktop; the desktop variant is likely to have higher TFLOPS.

May 18, 2016 | 01:58 PM - Posted by remc86007

I think and hope this is correct. It doesn't make sense to me that AMD would aim to merely match the 390X in performance. I hope it is within 5% of the 1070, and if it is I am buying on day one. If not, I guess I'll wait for Vega.

I've had way too many bad experiences with my 970s to give Nvidia any more money. Having to go into safe-mode to recover from their screwed up driver upgrade during finals week was the last straw, but I've had countless other bad experiences with them in the past two years that have culminated in my disdain for Nvidia. (I still believe marketing the 970 with the wrong memory bandwidth was malicious despite what they say)

May 18, 2016 | 01:46 PM - Posted by StephanS

So Nvidia gave review sites fully working cards and is ready to go on sale, while with AMD we don't even know the specs of Polaris 10, a card that is expected to be 2x slower than the 1080?

And the best we can do is speculate... Could AMD have made a mistake by switching away from TSMC? 14nm clock speeds seem to be 30% slower than TSMC 16nm... And TSMC already has working 10nm?
So Nvidia will jump to 10nm way ahead of AMD.

Also, it seems from the benchmarks that the 1070 will make the Fiji line of cards unprofitable for AMD. By that I mean AMD won't be able to charge enough money to make a profit, so it will have to sell the cards at a loss to clear inventory.

So far, 2016 looks like it's going to be a very bad year for AMD :(

May 18, 2016 | 01:56 PM - Posted by JohnGR

Polaris 10 is expected to be close to the 1070 in performance. If not, it will be a sub-$300 card and Polaris 11 a $150 card. It makes you wonder if those two companies are enjoying their duopoly, splitting the market and laughing at fanboys butchering each other online.

Where did you read that 14nm will be slower? If you are looking at clock speeds, you are wrong; you can't compare clock speeds across different architectures.

Fury is EOL. All Fiji GPUs will go to the Radeon Pro Duo and the Nano, which still doesn't have competition (in dimensions, not performance).

2016 will be great for AMD. They are not just a GPU company. 7th gen APUs will be announced on June 1st, the new PS4 and probably a new Xbox are coming at the end of the year, and then Zen follows. Even if Zen is not good enough against Kaby Lake, it will probably be much better in IPC than Excavator, which means that Zen-based APUs next year will offer much better performance.

May 18, 2016 | 01:59 PM - Posted by Anonymous (not verified)

Comparing the rumored specs of a low/mid-range GPU with the specs of two cut-down high-end GPUs and then assuming AMD will have a bad year?

Brilliant post.

May 18, 2016 | 03:00 PM - Posted by Keven Harvey (not verified)

AMD also had lower clocks than Nvidia during the last gen, so the improvement could be just as big as Nvidia's, percentage-wise.

And who cares if it's not as powerful? Most people care about performance/$; high end products are fine, but they're not what most people buy.

May 18, 2016 | 08:13 PM - Posted by Not_Anonymous (not verified)

AMD is still signed on with TSMC and will have access to 10nm there at the same time as Nvidia. Samsung is ahead on 10nm, btw. The new Apple SoC will be in mass production by late summer on Samsung's 10nm process, and both TSMC's and Samsung's 10nm are more like a 10/14nm hybrid.
I am still suspicious that Vega may come out on TSMC 16nm, depending on yields for larger chips.

May 19, 2016 | 07:57 AM - Posted by Anonymous (not verified)

Having the Fury lineup at 550+ already doomed the idiots. Those cards should have been priced at 400 tops.

May 18, 2016 | 01:57 PM - Posted by Kusanagi (not verified)

There should be tons of overclocking headroom as well.

May 18, 2016 | 01:58 PM - Posted by Randal_46

Additional site with info, haven't looked into veracity:

Buried lede: "Interestingly, according to VCZ this 5.5 TFLOP GPU is not even a desktop class chip but a mobility variant. This would bring R9 390X/390 class performance to notebooks."

May 18, 2016 | 02:01 PM - Posted by Anonymous (not verified)

I want to believe.

May 18, 2016 | 02:21 PM - Posted by mikesheadroom

"we should see GCN 4 on the new smaller process node square up against NVIDIA and it's 14nm Pascal products across the board"

Should that say "16nm Pascal", or will we be seeing 14nm Pascal parts in the future?

May 18, 2016 | 02:35 PM - Posted by Tim Verry

You are correct. Sorry about that. I will fix that typo. Thanks.


PSA for everyone: it's hard for me to reply on mobile since the text box doesn't like it when I try to zoom in; it keeps jumping to the top of the page and/or zooming all the way out. So if I don't reply until tonight, I'm not ignoring you heh.

May 18, 2016 | 02:53 PM - Posted by Maurice Fortin (not verified)

Stephan, AMD is using GlobalFoundries/Samsung for their GPUs this year, and who knows exactly what we will see until we see it. Lisa Su and her team are doing very well at "keeping quiet", which means either it may not be what we would like OR they know it will stun their customers with its performance. ALSO, Samsung is already preparing 7nm production capability in VOLUME for 2017, so AMD has as much of a chance of surpassing Nvidia at this point. Again, we DON'T know; rumors are just rumors.

All we can do is wait and see over this entire year. I am quite optimistic that Lisa Su and her team have done VERY well with Zen/Polaris/AM4; her "team" is top-notch engineers, many of whom were brought in since 2010, which is where many of the designs we have seen since then (Radeon 7000 series, AM3+, etc.) have come from. (It takes many years to design and implement these products, so one mess-up can also take many years to move away from; keep that in mind.)

May 18, 2016 | 02:55 PM - Posted by Maurice Fortin (not verified)

Pascal uses 16nm TSMC, AMD will be 14nm GF/Samsung for the GPU of Radeon and APU

May 18, 2016 | 03:11 PM - Posted by Anonymous (not verified)

It's really confusing when they say Polaris will be used across the entire performance lineup, but at the same time there are only 2 chips and they say they won't compete with Nvidia's high end. That's not a full lineup at all; at best it's mid range and low end.

I'm hoping the full P10 will at least compete with the 1070. The GTX 1080 is an impressive card, but it's 789€ in Europe; that's more than what the 600 mm² 980 Ti sold for. What a joke.

May 18, 2016 | 03:28 PM - Posted by Batismul (not verified)

Definitely agree, but considering GDDR%x is new and 8GB of it plus some new improvements to VR I can see why they want to charge more.

May 18, 2016 | 03:28 PM - Posted by Batismul (not verified)

Edit: GDDR5x

May 18, 2016 | 04:20 PM - Posted by CommanderEdge (not verified)

Typically, though, there are only 2-3 chips, and they are scaled by having cores or other units fused off. So they could use Polaris 11 on 4 different cards, or Polaris 10 on 3. Nvidia does the same thing.

May 18, 2016 | 10:46 PM - Posted by Tim Verry

Yup, and it is possible the GPU specs in the article are for a Polaris 10 GPU but not the top/full Polaris 10 GPU.

May 20, 2016 | 02:48 AM - Posted by Anonymous (not verified)

These GPUs have such different performance characteristics that it is difficult to say where they fall precisely. The 390 falls anywhere between a 970 and a 980 Ti, depending on the test. In DX12, it seems AMD parts usually do a bit better. I suspect that Polaris 10 will be close to a 1080 in the full configuration. Polaris 11 and Polaris 10 will almost certainly be marketed as 480/480X and 490/490X, but not as a Fury part. The 490X may match a 1080, or it may not; it will probably be cheaper either way. The 490X will certainly be faster than a 390X, and the 390X still compares quite well. There is a very low probability that the 490X will match the 1080 exactly; that just doesn't happen. I would expect a similar situation to today, with performance varying depending on the application, but going forward with DX12 games I expect AMD parts will compare quite well. It would be interesting to compare performance with raw FLOPS (based on actual clock speed in game) to get some idea of actual efficiency.

In the past, we have had a lot of previous generation parts staying in production as rebranded parts, but all of them were on 28 nm for years, so it didn't make much sense to do a redesign of the part on the same process. They had 3 different die, and essentially updated one at a time. With the move to 14 nm (finally) they will probably want to get all of their products on 14 nm as soon as possible. This makes some argument for getting Vega out as soon as possible, if it really is just a big Polaris with HBM. SK-Hynix HBM2 may be ramping up in time for an early release. Although, I think those rumors are specifically to try to spread FUD implying that Polaris 10 can't compete with a 1080, and Vega is needed for that. I don't think that is the case though. If Vega was planned for 2017, it is unlikely that they have the ability to rush it out.

May 20, 2016 | 01:50 AM - Posted by Anonymous (not verified)

They can get more products out of them than they used to. It isn't like CPUs though. Intel can make an 18-core CPU and get maybe a dozen different products or more out of it with varying numbers of cores active, varying amounts of cache, and varying clock speeds. Customers will not accept a dozen different versions of a GPU in the desktop market. They can make at least 2 to 3 different versions of one die for desktop products.

They can also take the same die and use it as a mobile part. Mobile parts often have varying number of units enabled compared to desktop parts. This allows them to make the best use of all salvaged parts. The mobile parts will generally be binned for low power consumption in addition to the number of functional units. This isn't as fine grained as what can be done with CPUs, but they still should be able to use a lot of salvaged parts if they can sell in the mobile market. If you look at Nvidia's GM204 die, they get 980s, 970s, 980Ms, and 970Ms, all with different numbers of units enabled and different clock speed ranges.

If you look at the wiki for AMD's mobile 3xx parts, it looks like they have up to 5 versions of a single die in some cases. Cape Verde cores are listed as M365X, M370X, M375, M375X, and M380. Wikipedia lists the M490 and M490X as Polaris 10 and the M480 and M480X as Polaris 11, but this is almost certainly based on speculation. There would, of course, be desktop parts based on these die also. I wouldn't be surprised to see several lower end parts for the OEM market based on the small-die Polaris 11 as well.

May 18, 2016 | 10:50 PM - Posted by Anonymous (not verified)

I suspect supply of 1080 GPUs will be very constrained. There is a reason smaller die are usually made on new processes first. Even if you want to spend the money on a 1080, you may not be able to buy one. Hopefully it isn't a complete paper launch to get publicity. They get a huge amount of good publicity by launching first, since their part will be compared to previous generation parts; of course it will look great in that context.

May 18, 2016 | 03:26 PM - Posted by Batismul (not verified)

This year will either make or break AMD. If only they actually release a new product.

May 18, 2016 | 10:58 PM - Posted by Anonymous (not verified)

They seem to be doing pretty well with the console business. The needs of consoles are actually relatively close to the needs of a mobile gaming laptop. I don't think anyone will be able to compete with AMD APUs for mobile, especially when we get HBM based APUs. Intel doesn't have the graphics processor, and they seem to have given up on developing it. Nvidia doesn't have a CPU except for their ARM core. I wouldn't buy an ARM based laptop just yet.

May 19, 2016 | 09:32 AM - Posted by Batismul (not verified)

Very true indeed, I would definitely buy a notebook or laptop with AMD's APU in the future with HBM and 390 performance :D

May 18, 2016 | 03:39 PM - Posted by Scratch (not verified)

""Vega" in 2017."

Vega has been moved to October this year.

May 18, 2016 | 04:17 PM - Posted by CommanderEdge (not verified)

That hasn't been confirmed yet... sadly.

May 18, 2016 | 05:01 PM - Posted by JohnGR

That rumor probably came out to support the other rumor about Polaris having problems. On its own, the Polaris rumor was looking to be a stupid lie, but adding the rumor about AMD rushing Vega to October made the Polaris (disaster) rumor more believable.

May 18, 2016 | 07:53 PM - Posted by Tim Verry

Hmm, it would be cool if that were true (Vega in October) but I'm feeling a lot of doubt about that rumor. Maybe they will rush to get it to market now, though; October seems super early!

May 18, 2016 | 11:49 PM - Posted by renz (not verified)

The rumor about Polaris not being able to hit 850 MHz came much later. From the initial rumors I heard, AMD did not expect Pascal (GP104, to be exact) to be so aggressive with its clocks. It is not that Polaris is having problems, but simply that Polaris might not compete with GP104 at all, forcing AMD to launch Vega early to compete with GP104. No one expected the 1080 to be so close to GP100 in terms of SP performance.

May 20, 2016 | 12:58 AM - Posted by Anonymous (not verified)

It wouldn't be that uncommon for early engineering samples to be clocked very low. I haven't kept up with the rumors, but it wouldn't be that surprising to me if an early sample was clocked at 850. That doesn't tell us much of anything about the final clock. The stuff Nvidia is talking about is kind of ridiculous; it is a pretty standard part of the process to go through and try to fix any critical, clock-speed-limiting paths. Nvidia is talking about that as if it is something new.

May 18, 2016 | 04:06 PM - Posted by Pholostan

I have a hard time getting excited about laptop GPUs. Just as I have a hard time caring much about paper launches. Wake me up when I can buy an actual graphics card. Well, Vega next year I guess.

May 18, 2016 | 04:32 PM - Posted by Anonymous (not verified)

These kinds of articles about "OMG Polaris 10 slower than 390X because lower TFLOPS and lower power!" are getting boring.

Either those are double standards due to Nvidia bias, or you are all forgetting that the 970, while having half the TDP of the 780 Ti on the SAME node and much lower TFLOPS, also matched and even surpassed the 780 Ti in many cases.

Where did tech journalism go these days?

May 19, 2016 | 05:02 AM - Posted by Anonymous (not verified)

Haha, 780 Ti >> 970. My OC'd 780 Ti Phantom gets very similar performance to my friend's reference 980.

May 18, 2016 | 04:56 PM - Posted by wcg

Good to see a more optimistic AMD-related post. The last one claiming AMD's doom and gloom from some unsourced Nordic AIB was pretty sad.

AMD (in the hopes somebody is listening): You really need to compete on price for this generation. You can't provide equal performance for the same price versus Nvidia, it just won't work.

May 18, 2016 | 05:05 PM - Posted by Batismul (not verified)

I work with 2 new contractors who both use AMD products, not because they are fans but because they like the tinkering and tweaking and making it their own (they're software devs). They know all about Nvidia and its stuff, but they prefer tweaking things so they can be on par with or better than Nvidia's offerings while paying half the price. It's like a competition to them haha. Yes, they paid half of what I paid and are getting incredible performance compared to what benchmark reviews show.

May 18, 2016 | 07:26 PM - Posted by Anonymous (not verified)

There are a lot of very nice AMD APU based industrial PCs from people like Fit-PC and others, with some very nice connectivity options: the fitlet-X with 4 GbE LAN ports, and other Fitlet options! Hopefully there will be more options with Carrizo, Bristol Ridge (the Carrizo refresh), and even Zen/Zen-lite options for some Linux based industrial PCs without any CPU lock-in to any one OS or OS version (see the Intel/Windows 10 antics for just what nefarious plans the WINTEL weasels are up to)!

May 19, 2016 | 09:36 AM - Posted by Batismul (not verified)

Very interesting, thanks for that I will share it with those guys and see if they know about that. I think they probably do

May 18, 2016 | 05:06 PM - Posted by JohnGR

Petersen was coming for a live stream event and excitement about the 1080 review ought to be sky high, so the site had to look as Nvidia-friendly as possible. That's why that article was never updated as it should have been.

The live event is finished and the 1080 review is in the past, so now it is time to balance things a little, or Raja will not give an exclusive interview again. Not to mention that last year AMD was excluding sites from getting Fiji cards.

Nice, isn't it?

May 18, 2016 | 08:22 PM - Posted by Tim Verry

Heheh, I am sorry that I wasn't able to get this out as early as I wanted to this week, but I did finish it ASAP. The coincidental timing is pretty funny though, so you can go with BiosGate if you want! :D

In other news, Morry is still investigating CMOSGate!

May 19, 2016 | 01:23 AM - Posted by JohnGR

You (PCPer) do update articles when you have new info, don't you? There was new info, the article wasn't updated. Coincidences.

May 19, 2016 | 03:10 AM - Posted by Tim Verry

That's your prerogative to believe. To answer your question though, yes we generally do directly update the stories with new information, especially if it's something major. In my opinion it would not have hurt anything to update it with the AMD response, and I probably would have added it in especially since it was direct from the company in question, but perhaps Jeremy felt differently.

Looking at the AMD response (on eTeknix which was at least pointed to in the comments section), they did not come out and directly disprove the rumor. They said that they will be at Computex and that Polaris is on schedule. They never said that they would be bringing Polaris to the show. Heck, if the conspiracy was to make AMD look bad, Jeremy probably should have included the quote from them ;-). (Only kidding, them not confirming they would be bringing Polaris is standard practice..)

fwiw, I don't think those rumors from Guru3D will hold true, especially the clockspeed aspect (showing or not showing the cards at Computex won't make or break them though AIB partners are likely pushing for AMD to let them show off the new hardware heh).

May 19, 2016 | 03:19 AM - Posted by JohnGR

Thanks for the reply.

May 19, 2016 | 08:11 PM - Posted by Tim Verry

You're welcome. I, what's the term, welcome criticism. If you have some specific ideas on things PCPer could do better, I am open to hearing them and would do what I can to advocate for changes that I believe in. A standard template and/or process for updating articles with company responses sounds like a good thing that would be worth implementing.

May 19, 2016 | 11:20 AM - Posted by funandjam

Take a look around, lots of people feel the same way. Not that you should answer this here publicly, but PCPer should ask itself just how many people need to point out that the way that rumor and AMD's response were handled was in bad taste. I thought you guys were above this type of thing.

If this had been some kind of major publication, say the New York Times, and this "rumor" was posted on the front page somewhere and NO ONE went back and did an update on AMD's response? That publication would get roasted alive! One of the problems here is that PCPer is usually very good with both quality of content and quality of presentation, professional most of the time, but stunts like what Jeremy did really start to make PCPer look more like all the rest of the "National Enquirer" style rumor-mill websites.

I get it, PCPer needs the traffic generated by juicy rumors like the one Jeremy posted. It's the reason why PCPer has resorted to Patreon; they need the money. And to be quite honest, posting rumors is OK, as long as they are properly disclosed as such. And Jeremy did a good job on the rumor article; there's no mistaking what it is. But...

The issue is that while PCPer is OK with posting rumors, PCPer failed to post ANY KIND OF UPDATE about AMD's official response. No new article, no update to the rumor article, nothing. It had to be pointed out in the comment section by readers. This looks extremely bad on PCPer: it not only makes PCPer look biased against AMD, but it also makes PCPer look like WCCFTech or videocardz, sites that are well known for posting straight-up fabrications in order to generate traffic. I sincerely hope that PCPer isn't trying to stoop this low; tell me it isn't so!

Also, just because Jeremy might or might not have felt like posting the update doesn't mean someone else couldn't do it. We've all seen PLENTY of articles that were updated by someone other than whoever originally posted them. So to say that Jeremy felt differently about posting the update is not a good reason. Anyone at PCPer could have updated Jeremy's article or posted AMD's response as a separate article.

I like the content that PCPer does, usually very in depth and asking all the right questions. Even the majority of the blurbs that Jeremy or Scott post make for some interesting reading, but please, PCPer, you guys should be better than this.

May 19, 2016 | 08:22 PM - Posted by Tim Verry

Thank you for the feedback and for keeping this civil. I cannot speak for the site as a whole, but since we are all talking here I will speak from my POV and say that there are always ways we can improve and aspire to be the "go to" site for reviews, editorials, and news on all sorts of PC hardware, which is why we are all here (our love of and interest in technology and computers). We could have / should have done better with that article. It will be a learning experience that we can build on to do better in the future. Thank you for your continued readership.

May 18, 2016 | 05:54 PM - Posted by Anonymous (not verified)

This is actually wrong information. AMD talk on Reddit and NeoGAF points to 36 CUs, not 32.

May 18, 2016 | 10:45 PM - Posted by Tim Verry

Hmm, that would be good and, again, not out of the realm of possibility, so we might see that at some point.
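For what it's worth, the rumored figures are easy to sanity-check. A minimal sketch, assuming GCN's usual 64 shaders per CU and 2 FLOPs per shader per clock (FMA); the clock speeds below are back-solved from the rumor, not confirmed by AMD:

```python
def gcn_tflops(cus, clock_ghz, shaders_per_cu=64):
    """Peak single-precision TFLOPS for a GCN part:
    shaders * 2 FLOPs per clock (fused multiply-add) * clock in GHz."""
    shaders = cus * shaders_per_cu
    return shaders * 2 * clock_ghz / 1000.0

# 5.5 TFLOPS from 32 CUs (2,048 shaders) implies a clock near
# 5500 / (2048 * 2) ~= 1.34 GHz:
print(gcn_tflops(32, 1.343))  # ~5.5
# A 36-CU part (2,304 shaders) would reach the same figure at a lower clock:
print(gcn_tflops(36, 1.194))  # ~5.5
```

Either CU count is consistent with the 5.5 TFLOPS figure at plausible clocks, so the TFLOPS number alone doesn't settle the 32-vs-36 question.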

May 18, 2016 | 06:55 PM - Posted by Anonymous (not verified)

I'm more interested in Zen/Polaris APU talk for laptops, and in some Zen/Polaris APU based laptops getting an option for a discrete Polaris GPU to go along with the APU's integrated Polaris graphics. I'd also like to see AMD get some Zen/Polaris APU Linux OS based laptop OEM design wins (with discrete Polaris GPU options also), for after 2020 when a lot of people will tell M$ to GTFO!

Linux kernel OS/Vulkan will have a wider Vulkan graphics API install base than any proprietary OS based proprietary graphics APIs!

May 20, 2016 | 02:52 AM - Posted by Tim Verry

Yeah, I hope AMD gets better design wins this go-around with Zen + Polaris APUs for mobile; they got a crap deal in recent years with very limited options for people to buy, especially higher-TDP options. Even if mobile Zen-based APUs end up being really good, I'm skeptical of how well they will be picked up by laptop OEMs. I hope that I'm proven wrong on this though!

May 20, 2016 | 12:29 PM - Posted by Anonymous (not verified)

If AMD creates a Zen/Polaris APU on an interposer with HBM2, even with just two stacks of HBM2, then no one else will be able to match it for raw effective bandwidth (CPU/GPU to HBM, and CPU die to GPU die) in such a small (space-saving) area as an interposer module.

Also, AMD will be able to have the Zen cores die fabricated separately from the Polaris die and increase yields. AMD will also have the option, for a low-power mobile Zen cores design that is normally clocked lower than the desktop variants anyway, of using its GPU-style high-density design libraries on some of its mobile Zen core variants' layouts. That gets the extra 30% planar space savings on top of the 14nm process node shrink, fitting even more Zen cores into a smaller die area, or freeing up more die area for GPU/other on-die resources!

AMD could still elect to put some Zen light cores and Polaris GPUs/ACE units on a single monolithic die, with both the CPU and GPU layouts done using the high-density design libraries, and then wire that monolithic CPU/GPU die up to some HBM2 stacks for some of the smaller laptop SKUs. For the higher-power Zen core dies in APUs on an interposer package, AMD could fabricate a separate Zen cores die using the normal low-density design libraries for the higher-clocked desktop variants, and wire the Zen cores up to a separate, larger Polaris GPU die designed with the high-density design libraries normal for GPU layouts.

I believe that AMD's future APU-on-an-interposer designs will supplant most of AMD's monolithic APU designs for the desktop APU market, while AMD may still fabricate its Zen "light" CPU/GPU single-monolithic-die SKUs, with separate HBM stacks wired up via a silicon interposer package, for the laptop/mobile APU-on-an-interposer variants.

So all the advantages of the APU-on-an-interposer package, with wide memory traces to the HBM memory stacks, can be had, while adding the ability to wire the CPU die directly to the GPU die with thousands of interposer traces. No narrower PCIe-based CPU-to-GPU interconnect will be able to compete on raw effective bandwidth, at lower power-saving clock speeds, with a wide parallel etched interposer connection from CPU die to GPU die and from all processor dies to HBM memory!
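To put rough numbers on the bandwidth argument, here is a sketch comparing the rumored 256-bit GDDR5 setup with a hypothetical two-stack HBM2 configuration (the per-pin rates are assumptions drawn from published JEDEC figures, not AMD specs):

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: (bus width in bits / 8 bits per byte)
    multiplied by the per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

# Rumored Polaris 10 card: 256-bit GDDR5 at 7 Gbps per pin
print(peak_bandwidth_gbs(256, 7))       # 224.0 GB/s
# Hypothetical APU with two HBM2 stacks: 1024 bits each at 2 Gbps per pin
print(peak_bandwidth_gbs(2 * 1024, 2))  # 512.0 GB/s
```

On those assumed numbers, two HBM2 stacks would more than double the peak bandwidth of the rumored GDDR5 card, which is the commenter's point about interposer-based APUs.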

May 18, 2016 | 07:39 PM - Posted by Anonymous (not verified)

"a good portion of these increases are made possible by the move to a smaller process node and utilizing FinFET "tri-gate" like transistors on the Samsung/Globalfoundries 14LPP FinFET manufacturing process"

In other words, the 14 nm measure is a pure marketing invention!

Take a 45 nm transistor, add two more gates to it, and you have a miraculous 14 nm transistor... however, a 3D transistor still leaks like a 2D transistor, just in a smaller piece of silicon.

After rebranding old video cards, they succeeded in rebranding old node process... nice move AMD! :o)

May 18, 2016 | 11:02 PM - Posted by Qwehuy (not verified)

The nodes aren't even made by AMD, so you can't blame them. Also, TSMC is doing the exact same thing, so is Nvidia also rebranding? And on top of that, AMD has both a TSMC and a Samsung contract, so they can choose which node they want.

May 19, 2016 | 01:03 AM - Posted by Anonymous (not verified)

Of course I can blame AMD for node rebranding, since GlobalFoundries, owned by AMD shareholders, licensed the Samsung 14 nm marketing process for producing chips from its own fabs...

If AMD were a bit honest, the process would be sold as a 45 nm node with tri-gate transistors.

May 20, 2016 | 01:02 PM - Posted by Anonymous (not verified)

Samsung's 14nm FinFET node (the one GF is licensing) still has 14nm-wide gates, just as Intel's 14nm gate sizes are 14nm; it's just that Intel's circuit pitch (the distance between 14nm gates) is smaller, so Intel can pack more transistors per unit area. The 14nm gate advantage is the same for Samsung and Intel, as it's the actual gate size and gate geometry that give the advantage, not the circuit pitch. Now, Intel still has a little advantage with its 2nd-generation FinFET, but Samsung is tweaking its 14nm process likewise, and some third-party fabrication businesses (TSMC) are getting some 10nm designs certified to work for some ARM ISA based designs!

P.S. GlobalFoundries is not wholly owned just by AMD shareholders. There are other investors in GlobalFoundries, including IBM and others, and IBM, GlobalFoundries, and Samsung have been in a chip fabrication technology/IP sharing foundation for some years now! IBM has gone fabless and is using GlobalFoundries for its market supply of Power8/Power9 parts, and both GlobalFoundries and Samsung are in line to maybe get some of Google's Power9 fabrication business, because IBM, via its OpenPower entity (Google is a member), is going to be licensing the Power8/Power9 designs to third parties, and the number of OpenPower licensees is growing! So that Samsung 14nm process licensed to GlobalFoundries, probably at IBM's request, is going to do just fine for IBM/GlobalFoundries/Samsung, the many OpenPower licensees, AMD's x86, Polaris/Vega, and maybe K12 custom ARM designs, and anyone else using GlobalFoundries' Samsung-licensed 14nm process node foundry services; Samsung will be using its 14nm process for many clients as well!

May 22, 2016 | 02:47 AM - Posted by Anonymous (not verified)

I wholly appreciate your attempt to use facts and logic to explain the situation to him, but he's way too invested in his belief that AMD is lying (and only AMD is lying, not GloFo (even though it's their process) or Samsung (even though they licensed it to GloFo) or TSMC (even though their measurements are even more "off" by his definition) or Nvidia (even though his logic would place it on them instead of TSMC) or anybody, just AMD) because believing AMD is lying is the only way he can justify his otherwise unintelligible hatred for AMD.

May 21, 2016 | 03:59 PM - Posted by Not_Anonymous (not verified)

This is exactly why I suggest not using the term "tri-gate". There are not two more gates; it's just a FinFET. Its gate isn't flat, which allows more contact between the gate and junction in a smaller package. Technically, even planar transistors are dual-gate, but almost no one uses them that way.

May 19, 2016 | 12:17 AM - Posted by Anonymous (not verified)

Are we going to get a Polaris-based design with HBM1? In the PCPer interview with Raja Koduri a while back, it sounded like they might make another HBM1 product before moving on to HBM2. If Vega is for 2017, though, there isn't much time to sell such a product.

May 19, 2016 | 02:41 AM - Posted by Tim Verry

Hmm, I do not know if we will end up seeing that part, especially if the rumor about them moving up the launch of Vega to October is true.

May 19, 2016 | 08:51 AM - Posted by The_O (not verified)

Hmmm... Starting an article about AMD by talking about Nvidia. When will the biased behaviour end?

May 19, 2016 | 10:25 AM - Posted by Anonymous (not verified)

TechPowerUp as source? That's a nice way to write down "UNRELIABLE"...

Benchmark entries have disproven most of their speculations over the past weeks. The gaps in the mobile lineup also paint a pretty clear picture of Polaris 11 and 10.

May 19, 2016 | 11:20 PM - Posted by Not_Anonymous (not verified)

Please don't use the "trigate" or "3D" terminology. Trigate could refer to multigate and 3D implies stacked dies. Blame Intel and Samsung for trying to market something "new".

May 20, 2016 | 02:05 AM - Posted by Anonymous (not verified)

What are you suggesting as replacement?

You can't only use the "14 nm" expression to define this marketing scam!

If GlobalFoundries calls it "14 nm 3D FinFET", the most relevant part is "3D FinFET", since "14 nm" isn't a real measure of the transistor gate.

I'm afraid you can't prevent people from using the terms "trigate" or "3D" to identify what is commercially available under these names.

Blame AMD for trying to scam people with lies...

May 20, 2016 | 06:25 PM - Posted by Anonymous (not verified)

It should be called 3D rather than planar FinFET, because nothing can exist in two dimensions! And the term FinFET was invented and coined by University of California, Berkeley researchers. From Wikipedia:

"The term FinFET (Fin Field Effect Transistor) was coined by University of California, Berkeley researchers (Profs. Chenming Hu, Tsu-Jae King-Liu and Jeffrey Bokor) to describe a nonplanar, double-gate transistor built on an SOI substrate,[8] based on the earlier DELTA (single-gate) transistor design.[9] The distinguishing characteristic of the FinFET is that the conducting channel is wrapped by a thin silicon "fin", which forms the body of the device. The thickness of the fin (measured in the direction from source to drain) determines the effective channel length of the device. The Wrap-around gate structure provides a better electrical control over the channel and thus helps in reducing the leakage current and overcoming other short-channel effects.

In current usage the term FinFET has a less precise definition. Among microprocessor manufacturers, AMD, IBM, and Freescale describe their double-gate development efforts as FinFET[10] development whereas Intel avoids using the term to describe their closely related tri-gate architecture.[11] In the technical literature, FinFET is used somewhat generically to describe any fin-based, multigate transistor architecture regardless of number of gates." (1)

Also, you say: "since '14 nm' isn't a real measure of the transistor gate," when the process node (any size process node), 14nm or whatever, is named after the actual gate size and has nothing to do with the circuit pitch (distance between circuits). Go do some reading! A 14nm gate size is a 14nm gate size, and the different processes from different fab companies have different circuit pitches, but the gate size is what gives the benefits, not the circuit pitch, so go figure on that one! Intel has a smaller circuit pitch, so it can cram more 14nm gates into the same unit area, but Samsung's 14nm gates are 14nm also, and that gate size/gate geometry is what matters most. Also, "tri-gate" is an Intel marketing term for Intel's FinFET designs!

May 21, 2016 | 07:43 AM - Posted by Anonymous (not verified)

Actually, there is no correlation between the 14 nm figure and the physical dimensions of the transistors. The pitch between the source and the drain of a 14 nm FinFET transistor is 42 nm.

May 21, 2016 | 09:49 AM - Posted by Anonymous (not verified)

That's for Intel's process; other processes have different pitches, but still, the 14nm gate size is what is used to name the process 14nm, so Samsung and others with 14nm gates may have different pitch sizes. Intel's process can cram more transistors into a unit area than the other processes can, but that 14nm gate advantage is still available from the other chip fab processes. Intel's process is more mature, but Intel is still trying to get its CISC x86 designs performing at the same total-SOC low-power metrics at 14nm as some of the custom ARM designs that are/were at 28nm! The ARM RISC micro-architecture has a simpler ISA that takes fewer transistors overall to implement, and the custom ARM SOCs at 14nm will have more room and lower power use than any x86 CISC designs, which take more transistors to implement! Some of the custom ARM RISC designs even have enough extra room to implement mobile integrated GPUs/graphics that outperform Intel's graphics on the price/performance front!

Intel's x86 ISA Atoms bombed in the mobile market, and wait until AMD rolls out its custom K12 ARMv8-A designs; even Apple will have problems competing against AMD's K12 APUs with AMD's Polaris graphics! Intel will still be trying to shoehorn x86 ISA designs into the same low-power metrics as the ARMv8-A designs, which are now at 14 and 16nm, and an ARM Holdings reference design was just demonstrated at 10nm (TSMC process). Also, since ARM CPUs are used in the low-power markets, there is nothing stopping AMD from using its high-density design libraries to get 30% more planar space savings on its custom K12 ARM core's layout, in addition to any 14nm node planar space savings, leaving even more APU die space available for more ACE units alongside any custom K12 ARM CPU core's layout.

Intel is not going to make much headway into the mobile devices market, which is dominated by both custom ARMv8 ISA designs and the ARM Holdings reference designs. Apple's A10/A11 and AMD's K12 custom designs will be the ones to watch, and AMD's Polaris/newer GPUs will go head to head with the PowerVR GPU designs at some future time, so that comparison will be interesting to observe. AMD's first K12 ARM-based SKUs are aimed at servers, but there will be tablet SKUs in there also from AMD.

For sure, the mobile devices market OEMs will not repeat the mistakes of the Intel-dominated PC/laptop CPU/SOC supply market: their licensed-IP ARM ISA based designs give the mobile device makers much more competitive control over their SOC parts supply chains! The OpenPower Power8/Power9 server/HPC market licensees will have the same CPU parts supply chain freedoms, ARM Holdings licensed-IP business model style, from the OpenPower server/HPC market OEMs. AMD is currently at least a two-ISA company, and maybe AMD could pick up a third ISA with OpenPower and profit from that market also; Nvidia sure is going to make mad money in the OpenPower market! Google is getting some Power9s, so the x86-only market is not going to be as profitable going forward!

May 21, 2016 | 02:09 PM - Posted by Anonymous (not verified)

Blah blah blah...

Anyway your verbose speech won't reverse the downtrend on your AMD stocks. :o)

May 21, 2016 | 04:11 PM - Posted by Not_Anonymous (not verified)

All of the above is why journalists should stay away from terms like 3D or tri-gate. As you said, planar is also 3D, so it's not a good term to differentiate. Tri-gate is a marketing term from Intel and shouldn't be used across the board. I also dislike that term because the transistors aren't used as if they have three separate gates. FinFET is the common industry term that I find to be the most accurate.

June 5, 2016 | 08:52 PM - Posted by Anonymous (not verified)

The RX 480 or 480X: how much faster would it be if AMD used GDDR5X RAM instead of the standard GDDR5? The clock speeds would be interesting; it would be nice to see the bump. And is there a card down the road that we don't know about, or one AIB partners may build later? Anyone hear any rumors?
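As a rough answer, the peak-bandwidth uplift from a GDDR5X swap on the same 256-bit bus is just the ratio of per-pin data rates. A sketch with illustrative rates (8 Gbps GDDR5 vs. 10 Gbps GDDR5X; neither is a confirmed spec for any particular card):

```python
def peak_gbs(bus_width_bits, data_rate_gbps):
    # Peak bandwidth in GB/s on a given bus at a given per-pin data rate.
    return bus_width_bits / 8 * data_rate_gbps

gddr5 = peak_gbs(256, 8)     # 256.0 GB/s
gddr5x = peak_gbs(256, 10)   # 320.0 GB/s
print(f"bandwidth uplift: {gddr5x / gddr5 - 1:.0%}")  # bandwidth uplift: 25%
```

Actual game-performance gains would likely be smaller than the raw bandwidth uplift, since memory bandwidth is only one of several possible bottlenecks.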
