AMD Confirms Tonga 384-bit Memory Bus, Not Enabled For Any Products

Subject: Graphics Cards | December 3, 2015 - 10:34 PM
Tagged: Tonga XT, tonga, Radeon R9 380X, Radeon R9 285, Radeon R9 280X, Radeon R9 280, radeon, amd, 384-bit

A year ago, articles sourcing the same PC Watch report claimed that AMD's Tonga XT GPU had a 384-bit memory bus, yet when the Radeon R9 380X launched last month it arrived with a Tonga XT GPU sporting a 256-bit memory interface.


The full Tonga core features a 384-bit GDDR5 memory bus (Credit: PC Watch)

Reports of the upcoming card had consistently referenced the wider 384-bit bus, and tonight we can officially confirm that Tonga (not just Tonga XT) has been 384-bit capable all along, though AMD never enabled it. The reason? The company never found the right price/performance combination.

AMD's Raja Koduri confirmed Tonga's 384-bit bus tonight, and our own Ryan Shrout broke the news on Twitter.

So does this mean an upcoming Tonga GPU could offer this wider memory bus? Tonga was a follow-up to Tahiti (R9 280/280X), which did have a 384-bit bus, but AMD chose from the start to keep the updated core at 256-bit.

Now, more than a year after the launch of Tonga, a new part featuring a fully enabled memory bus doesn't seem realistic, but it's still interesting to know that significantly more memory bandwidth is locked away from owners of these cards.


December 3, 2015 | 11:14 PM - Posted by arbiter

AMD kept it locked for more performance in an upcoming refresh. Something that was touched on in the podcast.

December 3, 2015 | 11:45 PM - Posted by Anonymous (not verified)

I agree with you. It's a smart move, not saying it's necessarily right but smart. I would think Nvidia or Intel would do the same. Kind of tired of refreshes, but it is what it is.

December 3, 2015 | 11:52 PM - Posted by arbiter

If Nvidia did this, what you think certain group of people would let them off without bashing them?

December 4, 2015 | 01:03 AM - Posted by Anonymous (not verified)

Probably not, but I'm not partial to either side. If they did it, it would be a smart move on their par too, but you are probably right they would have a whole lot to say about it, even though it's just business.

December 4, 2015 | 03:24 AM - Posted by JohnGR

If Nvidia had done it, how much of the memory would be running at full speed? Nvidia was criticized for having 0.5GB of ultra-slow memory on the 970 and for published specs that were not the real specs. Someone would also criticize Nvidia for selling different cards under the same name and advertising wrong feature support, as in the case of the GT 730, with two models being Kepler and one model being Fermi. And yes, they post DX12 support for the Fermi-based model when in fact it does NOT support it.
The same goes for the GT 630. They have one Kepler-based model and two Fermi-based models. They again advertise DX12 support for the Fermi-based models. Lies. Lies everywhere.

AMD did NOT advertise a 384-bit data bus on their card boxes. There is a huge difference, but as always you try to present white as black when talking about AMD and black as white when talking about Nvidia. A true Nvidia defender online, misleading people every way you can, just like Nvidia advertising wrong specs all the time.

December 4, 2015 | 04:24 AM - Posted by TopJoe (not verified)

Bro, you must be referring to Arbiter, and not myself. As I stated, I don't have a dog in the fight from either side.
I think the back-and-forth arguing about companies that are trying to make as much money as possible off of us is totally asinine.
I think both sides have done some questionable things over time, but we need both sides to keep completion for the consumer. If this is the case for AMD to refresh Tonga down the line, so be it. I don't have a problem with it, and I get why they did not release the 384-bit card. No harm, no foul either way.
But man come on, I see his point and you just proved it. No need to point fingers about something we can't control.

December 4, 2015 | 04:27 AM - Posted by TopJoe (not verified)

edit* competition-freaking auto correct

December 4, 2015 | 04:40 AM - Posted by JohnGR

I proved what? No one says that companies don't try to take the money from our pockets, but please. Give me a break. His point was closer to trolling than reality. If you can not understand what I am saying, forget it. It will be pointless to continue.

December 4, 2015 | 05:10 AM - Posted by TopJoe (not verified)

Dude if you cant see that you are still pointing fingers, then yeah it is pointless to continue this particular conversation. I'm not going to carry on with grown ass men on the internet about minute bs. Constructive conversation we need. Other stuff we don't. Have a nice day.
P.S.- you seemed to be able to carry a good conversation when you want to, but the beef you have with the other guy is between you two, I'm not a part of that one.

December 4, 2015 | 06:13 AM - Posted by JohnGR

The problem here is that you don't know some facts and still think you know enough to be right on this one. That's why it is pointless.

December 5, 2015 | 10:19 PM - Posted by tbris84

Feel the rage of the AMD fanboys.

December 4, 2015 | 11:03 AM - Posted by funandjam

You must be new around here. It isn't so much about why a company did what it did, it's more about some people around here consistently trolling and spreading misinformation, and Arbiter is one of those. Don't get me wrong, JohnGR isn't a saint either, but he was definitely right for calling Arbiter out on it.
Hopefully I won't jinx it, but at least we haven't seen the other two main ones post on here, those two make me cringe.

December 4, 2015 | 11:24 AM - Posted by Anonymous (not verified)

Arbiter is the PCPer forum's version of Chizow on WCCF Tech's forum! All FUD, all of the time!

December 4, 2015 | 03:21 PM - Posted by Anonymous (not verified)

I'm not new to reading this forum; I just don't talk in the forums a lot because of this type of crap on the internet. I am aware of all three of these guys. I get why people get bent when they start, but to me it's senseless; to them I guess not. I don't have a problem with talking to any of them, as long as it is civil.

December 4, 2015 | 01:34 PM - Posted by Anonymous (not verified)

"Nvidia was criticized for having 0.5GB of memory on 970 that is ultra slow"

Sadly, more a case of popular misinformation. The 0.5GB partition ran at exactly the same speed as the rest of the RAM when read or written separately. The problem came when you tried to read or write SIMULTANEOUSLY from both sides of the partition (i.e. you could read from the 3.5GB at full speed and write to the 0.5GB at full speed without issue). In that case, the last 0.5GB would run at half speed due to the missing ROP/L2 unit on the memory controller handling that last 0.5GB.

December 4, 2015 | 01:53 PM - Posted by JohnGR

Any proofs? A link would be nice.

December 4, 2015 | 11:11 PM - Posted by Anonymous (not verified)

He's lying. It's that simple. The 0.5 GB partition runs at a maximum of 28 GB/s and has XOR contention.

December 5, 2015 | 02:37 AM - Posted by JohnGR

Yes I know. That's why I asked for a link.

December 4, 2015 | 03:20 PM - Posted by Anonymous (not verified)

I don't think you understand how it works. Graphics cards achieve their read speed by striping data across many 32-bit memory channels, similar to how a RAID array works. Since one of the memory channels in the 970 is part of another partition, it cannot be used for striping. This means that the maximum bandwidth quoted by Nvidia is incorrect and false advertising; it is only 7/8 of the 980's max, since it can only stripe across 7 channels, not the full 8 channels available in a 980. Nvidia still listed it as having read bandwidth equivalent to the 980. Also, the eighth channel is mostly worthless, which reduces the effective memory size to 3.5 GB. When you read or write to the 0.5 GB section separately, you only get 1/8th of the bandwidth. Apparently it cannot be read from at the same time as the other partition, since it shares read hardware with the 7th channel of the other partition. It supposedly can be written to simultaneously, but I don't see when that would be useful.
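[Ed: the striping arithmetic in the comment above can be sketched in a few lines of Python. The channel width and the 7 Gb/s effective pin speed are the commonly cited GTX 980/970 figures, not official spec-sheet quotes, and the helper function is hypothetical.]

```python
# Back-of-the-envelope sketch of GDDR5 bandwidth striping, using the
# commonly cited GTX 980/970 figures: 32-bit channels, 7 Gb/s per pin.

CHANNEL_BITS = 32      # width of one GDDR5 memory channel
PIN_SPEED_GBPS = 7.0   # effective data rate per pin, in Gb/s

def peak_bandwidth_gbs(channels: int) -> float:
    """Peak bandwidth in GB/s when striping across `channels` channels."""
    return channels * CHANNEL_BITS * PIN_SPEED_GBPS / 8

gtx_980  = peak_bandwidth_gbs(8)  # all 8 channels: 224.0 GB/s
fast_970 = peak_bandwidth_gbs(7)  # 3.5 GB partition: 196.0 GB/s (7/8 of a 980)
slow_970 = peak_bandwidth_gbs(1)  # 0.5 GB partition: 28.0 GB/s (1/8 of a 980)
```

Note that the one-channel figure comes out to 28 GB/s, the same number quoted earlier in the thread for the slow 0.5 GB partition.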

December 16, 2015 | 03:35 AM - Posted by Anonymous (not verified)

This is the correct answer.

December 4, 2015 | 03:03 AM - Posted by Anonymouse (not verified)

So isn't it still locked in the refresh?
And isn't disabling features for segmentation purposes par for the course?

December 4, 2015 | 01:00 AM - Posted by Master Chen (not verified)

Holy shit, that's STUPID. "Finding the right price segment" for a FRIGGIN' BUS WIDTH, I mean. That is outright RETARDED, IMHO.

December 4, 2015 | 02:02 AM - Posted by Anonymous (not verified)

It isn't only the bus width. Utilizing the entire 384-bit bus would mean 3 GB or 6 GB cards. High-speed GDDR5 isn't the cheapest memory, so it does affect the price of the card. With the 256-bit bus, they will be making 2 or 4 GB cards, which puts them in the proper price range.
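[Ed: as a rough illustration of the capacity math above, assuming one memory chip per 32-bit channel and the 2 Gb (256 MB) / 4 Gb (512 MB) GDDR5 densities common at the time. The helper function is hypothetical.]

```python
# Sketch: which VRAM sizes a given bus width implies, assuming one
# GDDR5 chip per 32-bit channel and chip densities of 2 Gb (256 MB)
# or 4 Gb (512 MB), the common options in 2015.

def vram_options_gb(bus_width_bits: int) -> list:
    chips = bus_width_bits // 32            # one chip per channel
    return [chips * mb / 1024 for mb in (256, 512)]

print(vram_options_gb(256))  # [2.0, 4.0] -> the shipping 380X configs
print(vram_options_gb(384))  # [3.0, 6.0] -> what a full Tonga bus implies
```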

December 4, 2015 | 03:21 AM - Posted by JohnGR

A 384-bit data bus also needs a more expensive PCB. With the price of the 390 down to only a few dozen dollars more than the 380X, who would buy a 6GB 380X instead of an 8GB 390?

December 4, 2015 | 05:25 AM - Posted by Jann5s

+1, also to the anonymous post which John replies to

December 4, 2015 | 02:38 PM - Posted by Master Chen (not verified)

256-bit can do up to 4GB just fine without choking at all.
384-bit just gives a little bit of extra room for that same amount, but it can't do 5 or 6 GB; that's why Titan was a total and utter failure.
But it's AMD we're talking about here. Not including a 512-bit bus on EVERY non-HBM 4GB Radeon card by this point is outright LAME, and "couldn't find a good enough price segment for a wider bus" is an absolutely RETARDED kind of "excuse" to gimp Tonga XT. This is LAAAAAME.

December 4, 2015 | 03:44 PM - Posted by Anonymous (not verified)

Ridiculous. As someone already indicated, the price of a PCB to route the traces of a 512-bit bus is quite high. The 980 only uses a 256-bit bus, but there is a trade-off between a wide bus with slower, cheaper memory and a narrow bus with faster, more expensive memory. Not all variations are economically feasible.

December 4, 2015 | 10:46 PM - Posted by Master Chen (not verified)

>the price of a PCB to route the traces of a 512-bit bus is quite high
Your bullshit won't work on me.

December 4, 2015 | 11:16 PM - Posted by Anonymous (not verified)

I agree. Logic and reason have no effect on trolls.

December 6, 2015 | 12:17 AM - Posted by Master Chen (not verified)

Whatever makes you feel better and more secure about yourself, kiddo. Keep living in that little pink world of delusional denials of yours, I guess.

December 6, 2015 | 08:18 PM - Posted by Anonymous (not verified)

Are you my father? He is the only one who called me "kiddo". He would be almost 80 years old if he were still alive today. Have you taken up trolling in your retirement? Age gets you more respect in a lot of cultures, but not really in American or western cultures in general, so trying to sound like a wise old person will not get you very far here; in fact it probably significantly lowers your stature for tech-related knowledge. You should consider changing your tactics to be a more effective troll.

Anyway, I have worked in the computer industry in Santa Clara, but not for any of the companies involved. I am 100% certain that a wider memory bus takes more board space, usually more layers on the board, and more memory chips, which makes it a lot more expensive. Intel gets hundreds of dollars for a high-end consumer processor and thousands for professional chips. Graphics cards often cost a similar amount for the entire board with GPU and memory. They sell with much lower margins, just like motherboards. Motherboard makers have been making so-called ATX boards which are much smaller than 9.6". I was just looking at a Gigabyte "ATX" board that is only 7.83" wide. Margins on motherboards are not that good. The idea that it is the same price to implement a 512-bit bus rather than a 256-bit bus is preposterous. Almost everyone reading this forum should know this, so there isn't much point in replying, but it is Sunday afternoon on a rainy day.

December 4, 2015 | 05:18 PM - Posted by JohnGR

Tonga XT is not "gimped". AMD doesn't advertise a feature or hardware that it doesn't have, try to emulate it in software, or anything like that. Is the 970 "gimped" because it uses only 1664 CUDA cores out of the 2048 that GM204 has?

December 4, 2015 | 08:21 PM - Posted by renz (not verified)

lol, not 'gimped' huh, just because AMD did not advertise it. i'm sure even if nvidia did exactly the same thing as AMD you WOULD do your best to spin the story to bash nvidia.

December 5, 2015 | 02:40 AM - Posted by JohnGR

It's funny how Nvidia trolls try to create a fuss out of nothing. You avoided the 970 example I wrote. Should we bash Nvidia for cutting CUDA cores on GTX 970? On GTX 950?

December 5, 2015 | 08:39 AM - Posted by renz (not verified)

Doesn't matter what the specs are. You will continue to bash nvidia no matter what. And looking at how many posts you have here, it seems the ADF is scrambling to do damage control.

December 5, 2015 | 10:36 AM - Posted by JohnGR

You're avoiding answering because even you realize that you have posted something stupid, like the rest of the Nvidia trollboys in here.

December 7, 2015 | 01:17 AM - Posted by renz (not verified)

Why should i? Even if i came up with a valid point you were not going to accept it anyway. And i never mentioned the 970 once in my post.

December 7, 2015 | 02:45 AM - Posted by JohnGR

Ridiculous excuse. After so many posts, you would have been happy to post a valid point if you had one.

December 8, 2015 | 09:15 PM - Posted by renz (not verified)

It's useless to entertain the ADF. Not like they would take any reasonable/logical point anyway. Everything must bend to their reality distortion. And people like you actually tarnish AMD's rep more in the public eye.

December 7, 2015 | 02:44 AM - Posted by JohnGR

-double post-

December 4, 2015 | 10:49 PM - Posted by Master Chen (not verified)

Are you even for real?
Tonga XT has had the ability to fully utilize a 384-bit bus by default, from day one of its existence. They locked that ability, cut it down to 256 bits, and now come up with lame excuses and PR bullshit about "why" they decided to do this. In other words, they've completely shit themselves from head to toe while standing on their hands, and now they're backing out by producing BS and fairy tales. That's 100% gimpage.

December 4, 2015 | 11:14 PM - Posted by Anonymous (not verified)

No, that's what Nvidia did by putting 28 GB/s with XOR contention into an enthusiast-grade card and lying about it to the public.

That's half the speed of the VRAM in a midrange card from 2007.

December 5, 2015 | 02:43 AM - Posted by JohnGR

All the Nvidia trolls are here posting nonsense, because only nonsense is logic to them. You also avoided the 970 example, like the other troll. Let's forget the 970, because it probably confuses you. The 970 IS a gimped chip, advertised with different specs compared to what it really was. Let's use the GTX 950 as an example. Is the 950 gimped by using fewer CUDA cores than the GTX 960? They are the same GPU.

December 6, 2015 | 12:18 AM - Posted by Master Chen (not verified)

STFU, lamer.

December 6, 2015 | 07:56 AM - Posted by JohnGR

Hahaha. Feeling stupid? You should be.

December 4, 2015 | 01:08 AM - Posted by Anonymous (not verified)

Intel and Nvidia already do this. It's business. It is easy to make a badass part and enable segments of the product for different price brackets. Nothing to see here, moving on...

December 4, 2015 | 04:15 AM - Posted by dragosmp (not verified)

I appreciate that Raja is giving out this type of information. Here's to a more honest and open AMD; gwd knows they need to regain our trust.

December 4, 2015 | 05:11 AM - Posted by Anonymous (not verified)

Regain trust? AMD has messed up in many ways recently, but they were never really untrustworthy or deceptive; certainly nothing like Nvidia's false advertising of the 970 specs and async compute.

December 4, 2015 | 11:17 AM - Posted by Anonymous (not verified)

"they were never really untrustworthy or deceptive"
How old are you, 7 or 8?

December 4, 2015 | 05:09 PM - Posted by arbiter

Really? The last 4 years have been one AMD lie after another. AMD has done nothing but misrepresent their products as faster than they really are for years. "Fury X is 30% faster in fps than the 980 Ti" — remember that one? That was outright misrepresentation of the truth.

As for trust, Nvidia has a lot more truth when they say how fast their cards are and with any tech they release. AMD tends to claim their cards are faster than they really are, and then when they're not, they blame someone else for their own lie.

December 5, 2015 | 12:38 AM - Posted by looncraz (not verified)

You misconstrue rumor as AMD statements?

AMD never said anything you said they said, the rumor mill did.

Right now, nVidia has claimed Pascal will be 10x faster than Maxwell. At least the CEO called it for what it was "CEO math."

The reality is that both nVidia and AMD are beholden to the same laws of physics and the same processes, which shows given how close in performance their products tend to be (which is also a result of insider information).

For example, right now it looks like Pascal is trying to hit higher performance levels than AMD's Arctic Islands. However, we know that, in theory, AMD's hardware is actually far more capable; it is just underutilized for a number of reasons. nVidia is very efficiently utilizing their hardware resources, which means AMD only has to improve utilization to make a surprising jump in performance, whereas nVidia must rely on hardware growth and improvements.

Arctic Islands is supposed to have a new GCN ISA, which is what would be required to extract more of the theoretical performance. You should also note that the R9 Nano is faster AND more efficient than the GTX 980. With AMD having higher transistor density, a new ISA, and potentially using 14nm FinFET vs nVidia's 16nm FinFET (at TSMC), things may change very quickly. What would you complain about then?

December 4, 2015 | 07:33 AM - Posted by booboop (not verified)

Finding the right perf/$ slot is going to be inherently difficult when Tonga barely improved upon the January 2012 Tahiti GPU in terms of performance, power, and die size.

Maybe AMD should consider making GPUs that are better than last gen's GPUs? That might make them a tad easier to sell.

December 5, 2015 | 12:40 AM - Posted by looncraz (not verified)

They don't need to be any better; you pay a certain amount for a certain level of performance. I don't see nVidia doing any differently.

December 4, 2015 | 08:32 AM - Posted by Anonymous (not verified)

Even with 384-bit, it would still get crushed by Hawaii, so I don't see what the problem is.

December 5, 2015 | 08:58 PM - Posted by Anonymous (not verified)

Any chance to spread FUD will not be ignored.

December 4, 2015 | 08:54 AM - Posted by BBMan (not verified)

I'm not sure what the performance difference would be if they enabled it. Any theoretical guesses?

I guess there is a part of me that thinks they might not have been technically ready to release it either. But how does one prove that?

December 4, 2015 | 10:35 AM - Posted by YTech

There's the other approach: it's cheaper to manufacture the parts with certain elements locked.

Intel does it all the time with their CPUs for various reasons, such as parts falling below QA standards.

And I agree. It could have also been locked due to other technical issues they were experimenting with for their new/current products. -- not technically ready --

You never know. If it was properly implemented on those cards, it could eventually be unlocked, just like Sony and Microsoft did for the cores of their console CPUs.

Lately, drivers have been unlocking new features and improvements on their cards.

At least it wasn't advertised as having a 384-bit memory bus and secretly locked at 256-bit.
I'd rather get Easter eggs than a fake cake :P

In the end, it just tells us that AMD has been looking into increasing the memory bus in an effective manner for a while, and that the new cards will probably have it properly implemented. Look at HBM!

December 4, 2015 | 03:59 PM - Posted by Anonymous (not verified)

"You never know. If it was properly implemented on those cards, it could eventually be unlocked just like Sony and Microsoft did for the cores of their CPU consoles."

This isn't an "unlockable" feature. The traces and memory chips to connect to the GPU are not present on the cards. Graphics cards essentially use 32-bit memory channels, with each channel usually connected to a separate memory chip. Moving up to a 384-bit bus would require a PCB with traces for 4 more channels (128 more bits) and 4 more memory chips. The 384-bit bus is apparently present on the GPU die, but probably fused off so it cannot be activated. There is nothing you can do to upgrade an existing card, since the traces and chips are not present on the PCB. It would be a huge waste of money to put these on the card but leave them deactivated. No manufacturer would do that.
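[Ed: the board-level cost described above reduces to trivial arithmetic; the helper function below is hypothetical, assuming one memory chip per 32-bit channel as the comment states.]

```python
# Sketch of why the wider bus can't be "unlocked" on an existing card:
# each extra 32-bit channel needs its own memory chip and its own set
# of PCB traces, which simply are not present on a 256-bit board.

def extra_parts(old_bits: int, new_bits: int) -> dict:
    channels = (new_bits - old_bits) // 32   # one chip per 32-bit channel
    return {"extra_channels": channels, "extra_chips": channels}

print(extra_parts(256, 384))  # {'extra_channels': 4, 'extra_chips': 4}
print(extra_parts(256, 512))  # {'extra_channels': 8, 'extra_chips': 8}
```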

December 4, 2015 | 09:32 AM - Posted by Anonymous (not verified)

I would suggest that AMD throws away all this new and bloated GPU junk and goes back to the ET4000 chipset.

I mean who really needs 384bit when you can have 256bit....or maybe 16bit is really the best and all you will possibly ever need...

December 4, 2015 | 10:19 AM - Posted by Anonymous (not verified)

Well, if you claim something has official confirmation without an actual official confirmation (papers, docs, please), the least you can offer is a literal citation of the source's words.

Where is that? I don't see it here.

How, where, and what exactly did Raja say to you? Because in a few days there will be another piece of news clearing up this "misinformation".

If true, it could be the dumbest thing AMD has done in its history: making a chip with a 384-bit bus (wasting die size, and possibly disabling some L2 blocks too, though I don't know for sure) that NEVER uses that bus.

The market isn't an excuse; you can launch many variants of a GPU. The R9 285 only came with 2 GB of VRAM, so that was the perfect time for an "uber"-285 with the full bus and 3 GB.

A launch now would make sense too: 3-6 GB configurations against the 2-4 GB of cheaper variants. In fact, the R9 380X would be easier to sell with 3-6 GB of VRAM and more bandwidth than the regular 380 with 2-4 GB.

This "news" is bullshit: no real data, no citations, and a lot of hype for the dumbest explanation in the history of GPUs.

December 4, 2015 | 10:37 AM - Posted by BBMan (not verified)

English much?

December 7, 2015 | 11:47 AM - Posted by Anonymous (not verified)

Ryan Shrout was in Sonoma, sitting at a table at AMD with Raja who told him this. What sort of citation do you want? You want paper? Would this be more credible to you if Ryan wrote down what Raja told him on a piece of paper and then posted a screenshot of that as his "official confirmation"? Would that be an acceptable citation to you?

Oh, wait, no. You just want to hate on AMD.

December 4, 2015 | 10:31 AM - Posted by Anonymous (not verified)

" Didn't find a perfect perf/$ slot."

Sure. Because between 229$ and 329$ (MSRPs of 380x and 390) you can't find any price, soooo near both. ¬_¬

So, AMD is selling:

380x for 229$ with 4GB of VRAM and "low" bandwidth.

390 for 329$ with uber-8GB of VRAM and tons of bandwidth.

And AMD doesn't see the "sense" of introducing one product with, I wanna be creative, maybe... 6GB of VRAM and very decent bandwith? No? 279$ is "madness" for AMD?

If you don't show true information about what exaclty said Ratja to you, then you are going to be a unrealiable source of information.

Why Ratja said that to you if until now that info was "secret". One year after the launch of this chip, with technical data. And why that info with hiden channels of info.

Is it so difficult from AMD to make a public declaration or a doc about this technical data?

That is FUD.

December 4, 2015 | 10:45 AM - Posted by YTech

It appears that you're answering your own questions :)

There's a lot that can't be divulged openly, even if each party really would love to.

December 4, 2015 | 11:30 AM - Posted by funandjam

You are trying to make a mountain out of a molehill.

You are not quite right about the price differences. The 380X starts out at $229 and goes up depending on the brand and model; some have really beefy specs and go for $259 and up.
The 390 currently starts out at $299, so there really isn't that big of a gap to put a high-bandwidth Tonga in. At your suggested starting price of $279, it is just too close.

December 4, 2015 | 11:39 AM - Posted by Owlofminerva (not verified)

It's galling to see so many people so ignorant of how processors work comment on this. "I JUST KNOW BIGGER NUMBERS IS BIGGER PERFORMANCE AMD STUPID, EVERYTHING SHOULD ALWAYS BE BIGGER NUMBERS." As if AMD made an intentional decision to screw over customers because money. This is the most infantile naivety. If it were simply a matter of slapping in a 384-bit bus for drastically better performance, or pulling a magically and drastically redesigned computing architecture out of their asses, they'd have done it. In either case, AMD cards significantly outperform Nvidia cards at the same price point, so this all seems like AMD is simply outgunned on advertising/propaganda budget. I would think another $20 million in PR could scrub the frankly meaningless word "refresh" from our collective vocabulary. I'm thankful they don't waste money like that.

December 4, 2015 | 12:25 PM - Posted by Anonymous (not verified)

AMD needs to get its GPUs into the HPC/server/workstation market ASAP, and not focus so much on the gaming market alone. So AMD may have had a larger bus on Tonga available for an even higher-cost SKU; they went with a narrower, lower-cost, lower-power design and saved the wider bus for a possible more expensive variant. Hell, folks were glad to pay for Nvidia's gimping of FP and of fully-in-hardware asynchronous compute to get the power savings at the expense of FP performance.

AMD made no false claims about the narrower bus used on all its Tonga SKUs, and maybe AMD had some power-usage metric it wanted to achieve on the 380X and other Tonga parts. It's not unusual for a base GPU microarchitecture like Tonga to have a larger bus for higher-priced SKUs or for future variants, consumer or Pro, with AMD able to fuse off portions to make lower-cost, lower-power parts. Hell, maybe there is some crossbar or other bus re-routing pathway that allows AMD to bin parts with defective bus paths but still use them for lower-binned SKUs; that is not unheard of in the CPU/GPU industry. So AMD has fewer new tape-outs to do for its lower-binned, lower-cost Tonga-derived SKUs, and also has spare bus paths should defects in some paths reduce its targeted design's yields, which it would not have if it had not over-provisioned the bus resources in Tonga's base design. The other question about the 380X, or any other Tonga variant, is whether the parts even use up all the available bandwidth on the narrower bus. I thought AMD was going more with lossless compression algorithms to save on bandwidth and power.

Maybe AMD is saving the 384-bit memory bus for a FirePro workstation variant; AMD is still introducing new FirePro versions on earlier versions of GCN.

AMD is barely surviving by a thread, so maybe some of the more affluent gamers could front AMD a few billion dollars to throw out some more Tonga variants with the wide bus. AMD needs to focus on Zen and Arctic Islands as much as possible and does not have the funds to placate every gamer's request; new tape-outs and design variants cost millions to make!

December 4, 2015 | 04:10 PM - Posted by Anonymous (not verified)

It is always interesting how "enthusiasts" seem to think that they know better than the actual engineers who work for these companies.

December 4, 2015 | 11:16 PM - Posted by Anonymous (not verified)

It's not engineers who come up with brilliance like 28 GB/s + XOR in enthusiast-grade parts plus lying to the public about the specs.

That's management brilliance.

December 5, 2015 | 08:48 PM - Posted by Anonymous (not verified)

Not sure what you are implying. The engineers obviously came up with the setup on the 970; marketing doesn't design memory controllers or caches. It is really 3.5 GB with a 224-bit bus, though, rather than the 4 GB and 256-bit bus claimed by marketing. The 0.5 GB segment is relatively worthless since the read bandwidth is so slow.

Without the way it is segmented, it would have been reduced to 3 GB on a 192-bit bus instead. This design works around a defective memory controller, cache, or other component on the die. With previous designs it would have been cut down more significantly. The engineers managed to give us an extra memory channel and an extra 512 MB of memory. This was a good design from the engineers, but apparently it was not communicated to marketing properly, or someone along the chain deliberately misled people. Performance was good when it came out, so if they had just been truthful about the specs, there would not have been any issues.

Last I checked, they still listed the bandwidth of the 970 as the same as the 980, which is false. With one 32-bit memory controller connected via a shared read path internally, it can only achieve 7/8ths of the bandwidth of a 980 across 7 channels. When reading from the last controller, it will only achieve 1/8th of the bandwidth of a 980, which is why it will be seldom used. Because they did not update this information as soon as it was discovered to be incorrect internally, which was probably before reviewers discovered it, I think they are guilty of false advertising.

December 4, 2015 | 04:48 PM - Posted by dreamer77dd

Does a wider 384-bit bus really change things in benchmarks or anything, and by how much?

December 4, 2015 | 08:46 PM - Posted by Anonymous (not verified)

Well, that 384-bit bus has been superseded by four 1024-bit buses to HBM memory, so that's going to be an argument for the history books. I expect that once most GPUs and APUs eventually move onto the interposer, all the wide buses that can't be supported on a PCB will be supplanted by the interposer, with its ability to host tens of thousands of wide parallel bus traces etched into its silicon substrate. So as HBM spreads from the high-end SKUs to all GPUs, and to even more powerful desktop gaming APUs in a few years' time, the bandwidth issue will be solved for the most part. PCBs are on their way out as the primary hosts of memory traces, with even more moving onto the interposer for GPUs and APUs/SOCs alike. So thanks go out to AMD for getting another new and much better memory technology, HBM, out there and accepted into the JEDEC standards, for even Nvidia to use.

Hey Arbiter, Nvidia is USING another technology spearheaded by AMD. I'll bet you will figure out some insane way to make AMD the villain in this advancement too!
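[Ed: rough numbers for the HBM-vs-GDDR5 comparison in the comment above. The figures assumed are first-gen, Fiji-style HBM (four 1024-bit stacks at 1 Gb/s per pin) against a 384-bit GDDR5 bus at 7 Gb/s per pin; the helper function is hypothetical.]

```python
# Sketch: peak bandwidth of a 384-bit GDDR5 bus vs. first-gen HBM
# as used on Fiji (4 stacks x 1024 bits, 1 Gb/s per pin).

def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s for a bus of `bus_bits` at `gbps_per_pin`."""
    return bus_bits * gbps_per_pin / 8

gddr5 = bandwidth_gbs(384, 7.0)       # 336.0 GB/s for 384-bit GDDR5 at 7 Gb/s
hbm   = bandwidth_gbs(4 * 1024, 1.0)  # 512.0 GB/s, the Fury X figure
```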

December 4, 2015 | 11:14 PM - Posted by Anonymous (not verified)

It would put it close to the price range of a 290 or 390, but it would not perform as well. Why would you buy a 380X if just a little bit more money will get you a higher-end part? There just isn't room in the line-up for a 380X with a 384-bit bus.

December 7, 2015 | 04:18 AM - Posted by Anonymous (not verified)

Yeah!!! I demand the 8-bit Zen platform. Do it AMD!!! Don't listen to those idiots who want 64-bit or 128-bit computing. 8-bit RULEZ!


December 7, 2015 | 01:37 AM - Posted by Anonymous (not verified)

Anyone know how many layers are used for current graphics card PCBs? I went looking for it, but didn't find much. They used to report such information in reviews occasionally. It would be interesting to compare the differences. I guess they could also trade some complexity in the GPU package for less complexity in the PCB. If they use a larger package substrate with more layers, then they may be able to reduce the number of layers needed to route the memory bus on the graphics card PCB. This would be of interest for the current story, with its comparison between different bus widths. It would also be interesting for silicon interposers. Since the memory bus is on the silicon interposer, the routing on the PCB is very simple. Cards with Fiji parts should be able to use a very cheap PCB, but the interposer will obviously be more expensive than a standard GPU.

