Nein, Zeneration 3 is best

Subject: General Tech | December 6, 2018 - 01:14 PM |
Tagged: amd, Zen 2, Ryzen 5 3600, Ryzen 3 3300, Ryzen 9 3800, leak, Ryzen 7 3700, Ryzen 3000

If the rumours The Inquirer are helping spread are true, then AMD really does believe the third time's the charm.  The new series of Ryzen 3000 chips will use Zen 2 cores and will follow Intel's addition of a 9 series, though the quoted price of £400 for the Ryzen 9 3850X is a lot more attractive than Intel's pricing.  That chip will sport a 5.1GHz peak clock on its pair of Zen 2 dies with eight cores apiece, though the 135W TDP will need some taming.

Check out the variety of other chips in the Ryzen 3, 5, and 7 families which have leaked out.


"The upcoming third-generation Ryzen chip, slated for release next year, will be based on Team Red's Zen 2 architecture, the successor to its rather successful Zen architecture found in Ryzen 1 and 2 CPUs and EPYC server processors."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer


December 6, 2018 | 03:20 PM - Posted by OlderGenZenAndZen+RyzenAndTR1AndTR2DealsOnceRyzen3/TR3Drops (not verified)

That should be Ryzeneration 3, and yes, the Ryzen-3 lineup will be based on the Zen-2 CPU micro-arch, what with that Zen+ 12nm node-shrink/slight tweaking (mostly the memory controller and other uncore tweaks/fixes on Zen+) having thrown off the cadence.

So Ryzen-3/TR-3 will be based on the Zen-2 micro-arch and more naming confusion from Team Red as usual.

Maybe, with the phone market in a period of decline, there will be sufficient 7nm production slots available for AMD to begin tweaking that I/O die design used for Epyc/Rome enough to get it production-certified at 7nm as well, before the Zen-3 based Epyc/Milan designs are frozen and ready for tapeout.

Once Ryzen-3 arrives, the Ryzen-1s and Ryzen-2s will become available at fire-sale pricing, and Intel's non-overclockable low end will be further fustigated via two generations of older Zen and Zen+ based offerings. And those Threadripper 1 and 2 offerings will likewise put even more hurt on Intel's non-HEDT higher-end mainstream offerings, as the 1950X is sitting below $500, those Cyber Monday sales had it close to $400 at times, and that's not even including any X399/TR-1-CPU bundle offers.

Globalfoundries will still be fabbing plenty of Zen-1 based Epyc/Naples SKUs for 3 or more years, just to supply first-generation Epyc/Naples market demand and the extended product availability that the server market demands. That also transfers over to Epyc/Rome, with GF fabbing the 14nm I/O die. But AMD should have that I/O die certified at 7nm (TSMC) as well by the time Epyc/Milan rolls out for sampling.

AMD's first-generation Zen/Ryzen-1 based SKUs at great sale pricing are already putting more hurt on Intel's Core i5 and lower lineup, even before any Zen+/Ryzen 2 competition is added to the mix. Those Black Friday Zen/Ryzen-1 8-core offerings are down in the $150-$180 range, and that's 2 more cores than Intel's Core i5 8000 series offers, with the Ryzen 7 1700 costing less. And Intel's 6-core i5s have their SMT (HT) disabled.

December 6, 2018 | 07:17 PM - Posted by Isaac Johnson

I read as far as 'According to a video by AdoredTV', didn't really see the point in continuing past that.

December 7, 2018 | 03:50 AM - Posted by Baldrick's Trousers (not verified)

You should. He's been absolutely nailing it.

December 10, 2018 | 05:22 AM - Posted by WhyMe (not verified)

Baldrick's Trousers "You should. He's been absolutely nailing it."

Even a broken clock is right twice a day.

The only thing he cares about is stirring up controversy and the money he gets from views and Patreon. The guy has zero credibility, and citing him degenerates any argument a poster tries to make.

February 20, 2019 | 08:37 AM - Posted by WhyMeSnowflakeIsMeltingMeltingMelting (not verified)

Wow, did he do something else to you that's hidden deep down in your subconscious? Really, just reading the name AdoredTV appears to trigger you. AdoredTV does what he does, and he peer-reviews the technology press sometimes.

You're another of those similar to chipman, with your head firmly stuck in the sand a good foot past your chin, and really the AdoredTV guy is no worse or better than any of the legitimate online technology press. You are just angry because your little snowflake got melted by some truth that you can't handle. Your "credibility" is in the below-goose-eggs range of: -infinity!

December 7, 2018 | 05:44 AM - Posted by othertomperson (not verified)

Agreed. 5.1GHz on a 16 core chip for 135W. May as well have claimed it defecates rainbows and saves babies from the gnashing maw of Intel. He's the most biased and unscrupulous fanboy in the youtube tech "media".

December 7, 2018 | 12:23 PM - Posted by NonCPUbasedProcessorIP (not verified)

"gnashing maw of Intel" no that's more like the Cash Stuffed brown envelopes of Intel that has earned Intel such infamy and billion dollar+ fines.

And Intel was not fined large enough and should have been forced to pay AMD at least 5 billion dollars+ in restitution.

Maybe you are also outed as an opposing fanboy by your dislike of AdoredTV's content, but Intel's history of nefarious/illegal market tactics is well documented.

December 8, 2018 | 10:38 AM - Posted by Othertomperson (not verified)

Ultimately it doesn’t matter if you assume I’m a fanboy for distrusting an extremely biased media outlet or not. I’m just a random person on the Internet. AdoredTV, however, is a media production outlet and should start behaving like one, instead of like a corporate mouthpiece.

I don’t really care about something Intel did around 20 years ago. It doesn’t affect the current product stack, and I was a child at the time, so it doesn’t matter to me. What I do see, however, is Adored backing up AMD when they have been caught lying to consumers, and criticising reviewers for not deliberately fudging their results to make Ryzen look better than it is.

It’s not his criticism of Intel or Nvidia that’s the problem (when his criticisms are actually fair and valid, and not just lies by omission or otherwise), but his complete favouritism of one specific company that makes his videos not worth the disk space they occupy on Google’s servers.

And this “5.1 JIGGAHERTS” crap just sounds like yet another “NVIDIA ARE DYING” or “POOR VOLTA” hyperbolic clickbait rant from this petulant child.

December 8, 2018 | 01:12 PM - Posted by YouDoNotUnderstandModernProcessorTechnologyInTheLeastBit (not verified)

But AdoredTV has stated that Nvidia has won the desktop gaming GPU contest since Maxwell arrived with its mobile-first design that was scaled up for desktop gaming. And Nvidia had the funds to create a separate gaming-only line of base-die tapeouts that were stripped of all but the minimal amount of shader cores needed for gaming workloads.

His criticisms of Nvidia's market tactics are right on the money, because Nvidia only has its GPUs for most of its income, while AMD will be making more markup on a 32-core Epyc/Naples or 64-core Epyc/Rome SKU than Nvidia could ever hope to get for any GPU die of larger mm^2 dimensions. Do you see how much the x86 CPU makers get/charge per mm^2 on an x86 die?

Nvidia has already lost billions in market cap over something as stupid as a crash in the coin mining market, while AMD has seen a much smaller drop in market cap value. AMD has its Epyc/Naples and soon-to-appear Epyc/Rome SKUs that cost so little in total BOM to make but sell for way more per core/mm^2 of die than any GPU.

Nvidia has to win in GPUs or it's dead, because Nvidia has no x86 license or any CPU design of its own that's as competitive as the x86 or Power8/Power9 designs, which are not Nvidia's IP.

AMD really does not have to be as concerned with the low-profit flagship GPU market, as the consumer markups are not really there to justify the BOM for any flagship GPU die production. AMD only targets the mainstream GPU market at the GTX 1080/RTX 2080 levels of performance, so Vega 10 (big die) and Navi (big die) only have to be as good as the GTX 1080 and RTX 2080 in gaming performance, because that's where the real revenues are. AMD will still be earning more from its Epyc/Naples and Epyc/Rome CPU business sales, without ever having to be as dependent on consumer gaming revenues going forward.

Nvidia mostly has only the GPU market, and Nvidia is way more extended into the desktop consumer/gaming market than AMD; Nvidia has more to lose and AMD only has more to win, so low is AMD's desktop gaming GPU market share. And AMD's professional Epyc/Radeon Pro WX/MI CPU/GPU line of products will continue to grow until it dwarfs AMD's consumer GPU market's sales/revenues into just some little background noise.

CPUs obtain more revenue/profit per mm^2 of die space than GPUs, so much so that it boggles the mind when one thinks about how easy it is to produce all those Zen/Zeppelin dies and how much the overall markup per mm^2 of CPU die space is compared to GPUs. GPUs cost billions, those largest of processor dies have terrible die-per-wafer yield rates, and the mask sets for even the mid-range GPUs cost more to design and produce than any CPU die.
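The yield point above is easy to illustrate. Here is a minimal sketch using the classic Poisson yield model and a standard first-order dies-per-wafer estimate; the die areas and defect density are hypothetical illustrative numbers, not figures from the comment or from AMD:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """First-order estimate: wafer area / die area, minus an edge-loss term."""
    d, a = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / a - math.pi * d / math.sqrt(2 * a))

def poisson_yield(die_area_mm2, defects_per_cm2=0.2):
    """Classic Poisson yield model: Y = exp(-A * D0), A in cm^2."""
    return math.exp(-(die_area_mm2 / 100) * defects_per_cm2)

# Hypothetical die sizes: a small CPU chiplet vs. a reticle-busting GPU.
for name, area in [("~75 mm^2 chiplet", 75), ("~750 mm^2 big GPU", 750)]:
    good = dies_per_wafer(area) * poisson_yield(area)
    print(f"{name}: {poisson_yield(area):.0%} yield, ~{good:.0f} good dies/wafer")
```

The small die wins twice: far more candidates fit on the wafer, and a far larger fraction of them are defect-free.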

Nvidia is like a wounded animal as far as the GPU market is concerned and has to fight to survive mostly on GPUs, while AMD's got its x86 CPU business and ATI's GPU business, where AMD only ever has to target mainstream desktop GPUs, and its semi-custom console market sales will still be producing revenues on the consumer side of things.

If at some point in time AMD starts selling enough Vega 20 dies, production will always result in some unacceptable Vega 20 die bins. So AMD could maybe think of bringing out some Vega 20 gaming variant that targets the gaming market, rather than just throwing away the Vega 20 dies that are not performant enough for professional market usage.

And Navi mid-range dies may just be able to make use of some Infinity Fabric links to allow customers to purchase 2 Navi mid-range GPUs and wire them up via the Infinity Fabric to act and perform just like one larger flagship GPU. Nvidia is sure not turning on all of its NVLink IP on its consumer/gaming GPU variants. But AMD will make use of the full Infinity Fabric xGMI links on its consumer gaming SKUs if that will enable it to make more sales.

AMD also could take 2 mid-range Navi dies, mount them on a single PCIe card, and wire them up via xGMI (Infinity Fabric) links and get a flagship that way, as AMD has done in the past.

February 20, 2019 | 09:22 AM - Posted by WhyMeSnowflakeIsMeltingMeltingMelting (not verified)

Really, the convicted abusive monopoly interest, Intel, the one that did not pay enough restitution to AMD, and not enough in fines, for getting caught engaging in anticompetitive market practices.

And Intel is still incentivizing the OEM laptop market with its in-kind marketing assistance and engineering assistance to the laptop OEMs. Ever wonder why AMD APU based laptops come with only one of the laptop's 2 memory channels populated, and why the reviewers steadfastly go about reviewing the laptops with only one memory channel populated, knowing full well that it gimps the integrated Vega graphics even more? And that's if the laptop OEMs even include more than one memory channel on their AMD APU based laptop offerings.

Just look at all the reviews lately of first-generation Raven Ridge 2000 U series laptops that are already last year's offerings, and no great attempts at sourcing any Zen+/12nm Raven Ridge/Picasso 3000 series laptops. Really, even folks with half a brain know what's going on, and that's the result of the Intel CPU trust and its monopolistic hold over the OEM laptop market.

Why, Intel even had its Ultrabook Initiative to foist that Apple-style thin-and-useless design ethos onto the non-Apple laptop market. And Intel was, at the time, big on reducing the i7 U series SKUs to 2 cores/4 threads, down from the previous generation of mobile/laptop i7 SKUs that came with 4 cores/8 threads.

Oh, Intel had a great time effectively doubling the per-core cost over its previous-generation quad-core i7 mobile variants with dual-core i7 U series Ultrabook processors that cost even more than the older quad-core i7 mobile SKUs. And by reducing the mobile i7 U series core counts to 2, Intel saved some serious die space and effectively increased the number of U series i7 dies it could fit on a single wafer.

Folks, this is why monopolies are bad for the consumer: for a good long while Intel was not in the technology business, it was in the technology milking business. And Intel is still doing that to some degree with all of its product segmentation across its line of offerings.

Currently, where Intel cannot compete with AMD on the integrated graphics front, Intel will incentivize the laptop OEMs with in-kind marketing and engineering assistance that amounts to a quid pro quo that stifles competition in the OEM laptop market. Intel has less pull in the PC market, but that's only because there is a home PC builders' market where folks can source their own parts and build their own systems. Even the OEM PC market is influenced by Intel, to a lesser degree than the laptop OEM market, but Intel still has pull.

For Intel the fines are just the cost of doing business, and so in that OEM laptop market AMD's better graphics is relatively underrepresented. And, currently, just look at all those reviews of last year's 14nm 2000 series Raven Ridge APU based laptops relative to the reviews of this year's Zen+/12nm Raven Ridge/Picasso 3000 series APU based laptops. And look at the features on the AMD laptops still being lowballed by the laptop OEMs, and the laptop reviewers who should be called out for that single-memory-channel benchmarking they are doing.

Really, the OEM laptop market is so artificially tilted towards one monopoly CPU parts supplier that if it were a ship it would have capsized already!

December 7, 2018 | 02:36 PM - Posted by Jeremy Hellstrom

See, we can have fun in the comments! 

December 7, 2018 | 03:27 PM - Posted by James

I wouldn’t trust the clock speeds, but 16 cores is probably easily doable. They just need an IO die with 2 IFOP links; the original Zen 1 die actually had 4 IFOP links. It will be interesting to see 16-core, 7nm Ryzen up against Intel's 10-core, 14nm++ (not sure how many pluses at this point). Going to more than 10 cores would probably require yet another socket revision from Intel due to power consumption; it needs better power delivery, with possibly more pins. The current rumored 10-core plans may be difficult to support on current boards at 14nm as it is.

December 7, 2018 | 10:10 AM - Posted by Kareful (not verified)

AdoredTV works hard at taking available info and trying to find the big picture. I watch everything he puts up and value the input (and so am a patron). You can dismiss him if you want, but it's your loss. Though maybe you know of somebody who does the same thing but better? And puts it all out for free (which omits Demerjian)?

--> AND this PC Perspective post should have linked to AdoredTV since he - not the Inquirer - did all the work and has put his name on the results! That's the proper practice.

December 7, 2018 | 12:03 PM - Posted by Anonymously Anonymous (not verified)

To add to that last bit, The Inquirer got their info from AdoredTV; they quote him as their source, but no mention of him from PCPer?

speaking of source, here is the video:

February 20, 2019 | 09:28 AM - Posted by WhyMeSnowflakeIsMeltingMeltingMelting (not verified)

It's the article writer's responsibility to fact-check sources and include the original source of the information that's quoted/referenced.

The Inquirer is definitely just a shell of its former self that's mostly become a news aggregator rather than a source of in-depth content.

December 7, 2018 | 01:39 PM - Posted by James

His previous video talking about GDDR6 and HBM interfaces on an IO die is almost certainly wrong. Trying to pass through a high speed memory interface gets expensive fast. Infinity fabric is fast, but not that fast. Also, an interface die would be a massive waste of space on an interposer. I have some ideas about what is on the IO die and it is mostly what AMD said is there. At this point, I don’t think it even has L4 cache. Also, if the PS5 is multi-chip, then I would expect it to have a custom gpu connected directly to GDDR6 memory and a Zen 2 chiplet.

The rumors/leaks that he came out with are not anything unexpected if you have been paying attention. Given that, the whole thing could be fabricated. It has a reasonable chance of matching what AMD eventually releases, although I would definitely not trust the clock speed estimates / rumors.

December 7, 2018 | 08:02 PM - Posted by Chiseled Badger (not verified)

But this is exactly how the new 7nm Epyc is being done.

Remember that the Zen series is all about modular designs on small high yield dies. They will not be designing another Zen 2 chip for the consumer market. It'll be the same chiplet as Epyc.

Furthermore, with Zen2 there'll be a second generation of Infinity Fabric, which won't magically cure all the drawbacks, but it will be much much better.

December 8, 2018 | 06:43 AM - Posted by James

What is “exactly how the new 7nm Epyc is being done”? There is a big difference between DDR4 speeds and GDDR6 or HBM speeds. HBM will be reaching 1 TB/s. Infinity fabric links are not that fast, even if you use a ridiculously large number of them. Using any kind of interface die would also waste very limited space on the interposer. GDDR6 and HBM memory will be directly connected to the device that uses it. They will have high-speed links between devices; we already know that the Radeon Instinct MI60 and MI50 cards have 2 infinity fabric links for connection to other GPUs, up to 4 in a single cluster. Those are only 50 GB/s per link though. That is tiny compared to the hundreds of GB/s possible with GDDR6 or HBM memory (1 TB/s = 1024 GB/s). You would need more than 20 of those infinity fabric links to support 1 TB/s. Does it sound reasonable to have a 4096-bit HBM interface and 20 infinity fabric links?
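The "more than 20 links" figure above checks out with two lines of arithmetic (the 50 GB/s per-link and 1 TB/s numbers are the ones cited in the comment):

```python
# Figures quoted above: ~50 GB/s per Infinity Fabric link on MI50/MI60,
# versus ~1 TB/s (1024 GB/s) of aggregate HBM bandwidth.
IF_LINK_GB_S = 50
HBM_TARGET_GB_S = 1024  # 1 TB/s

# Ceiling division: how many IF links would be needed to match the HBM interface.
links_needed = -(-HBM_TARGET_GB_S // IF_LINK_GB_S)
print(links_needed)  # -> 21, i.e. "more than 20"
```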

I don’t think his Ryzen 3000 video is wrong. A lot of it is what I expected. I wouldn’t trust some of the details though. His previous video claiming that the IO die, or an IO die, could have GDDR6 or HBM memory controllers is almost certainly completely wrong though. They could have a design that could make efficient use of either one, but the actual die will be different and they will not use a separate interface chip for graphics memory.

I think I have worked out how the memory controllers have changed between Zen 1 and Zen 2. The topology I have in mind would mean that there is no L4 cache on the IO die, which I believe AMD has stated. That became a lot less likely with the 16 MB L3 cache size rumor anyway; there isn’t much need for L4 with that much L3. I expect an IO die for desktop Ryzen 3000 with 2 IFOP infinity fabric links for connection to up to 2 chiplets. Intel may come out with a 10-core mainstream CPU, so more than 8 may be required in the mainstream market. I don’t know if the IO die could be a diced-up Epyc IO die, as AdoredTV claimed in one of their videos; I have never heard of doing that. I also expect a single-die solution with integrated GPU eventually, possibly with 8 CPU cores rather than the 4 in current Ryzen APUs. This would be for mobile parts and possibly lower-end desktop parts. Having everything on a single die, with other optimizations for mobile (like skewing the process for lower power), is important for a laptop part. While the chiplet solution is incredibly flexible, it will not be as optimal for a mobile part. The mobile part may come later, on a different version of the process tech.

There are a few other things that people haven’t thought of yet, but I am not inclined to try to iterate them here.

December 8, 2018 | 12:16 PM - Posted by YouDoNotUnderstandModernProcessorTechnologyInTheLeastBit (not verified)

Graphics memory has higher bandwidth but worse latency, and CPUs perform better on gaming workloads at lower latency.

I do not think that you properly understand what SerDes IP is and how much faster it has become; every fabric interface on a modern CPU/GPU makes use of SerDes IP that's way faster than any PCIe or other I/O standard. All that Infinity Fabric IP makes use of SerDes links, and that can go way higher than even PCIe 5.0.

Looking at how large that I/O die is in relation to the chiplets, it's not going to be hard to include plenty of memory controller IP, and if you could take a moment to pull your foot out of your mouth and look at the JEDEC HBM2 standard, you will see that each 1024-bit HBM2 stack interface is divided into 8 independently operating 128-bit channels, with the ability to further subdivide each 128-bit channel into 2 64-bit pseudo channels. So maybe there can be many sorts of I/O support on the 14nm I/O die.
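The channel subdivision described in that JEDEC HBM2 paragraph is straightforward arithmetic:

```python
# Per the HBM2 description above: a 1024-bit stack interface split into
# 128-bit channels, each divisible into two 64-bit pseudo channels.
STACK_BITS = 1024
CHANNEL_BITS = 128
PSEUDO_BITS = 64

channels = STACK_BITS // CHANNEL_BITS        # 8 independent channels per stack
pseudo_channels = STACK_BITS // PSEUDO_BITS  # 16 pseudo channels per stack
print(channels, pseudo_channels)  # -> 8 16
```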

Having an L4 cache on that I/O die would still improve latency hiding, no matter how large the L3 cache sizes are on the Epyc/Rome Zen-2 die/chiplet. L4 cache could be used as a coalescing buffer to DDR4 memory, and having more code staged at any cache level is still going to be faster than any slow, latency-inducing memory accesses. IBM uses its Centaur chip, which acts as a 16MB L4 cache (per memory channel), to save on memory access latency for its Power8 and Power9 SKUs.

So that I/O die can support many different features for many MCM-based CPU and APU configurations, and I'll bet there could be many variants produced, making use of many different I/O die tapeouts whose options will give AMD plenty of flexibility. The only issue with the Epyc/Rome MCM design for gaming workloads is that all memory accesses are now a one-hop far access to the I/O die, but for server workloads that can be helpful, averaging all memory accesses out to exactly one hop between CPU chiplet and I/O die.

For the consumer gaming market, AMD does not have to change its Zen-2 consumer designs over from the Zen-1/Zeppelin die/memory controller structure if it so desires. But even if AMD does use a dedicated I/O die on its consumer variants, that larger L3 cache, and maybe some L4 on the I/O die, is going to pretty much reduce memory access latencies to the point that gaming workloads may not be adversely affected at all by all memory accesses being far accesses via a dedicated I/O die. Twice as much L3 cache means it's twice as likely that any needed data/code is going to reside in L3, and any L4 cache on the I/O die just means there will be fewer direct memory accesses needed, as L4 will likely be holding what was pushed out of L3. And memory accesses for the main gaming logic will not be needed at all if things are managed properly on the game/gaming-engine side of development.

Games and gaming engines will be optimized for Zen-2 and that dedicated I/O die, especially if the next generation of console SOCs/APUs makes use of that MCM design. I could even see some FPGA logic added to help accelerate new DX12 and Vulkan graphics API features on any MCM-based next-generation APUs. Dedicated tensor-core dies could also be added to the MCM for AI-based upscaling, and even ray tracing IP could be added at a later time via FPGAs until any ASIC-based ray tracing die/IP is ready.

AMD's modular die/chiplet IP on an MCM or silicon interposer will be what AMD uses going forward, even in variants where the MCM's organic substrate carries most of the low-density connections and some form of embedded silicon interposer handles the high-density GPU-to-HBM2 interface parts of the MCM.

As far as the Infinity Fabric is concerned, the linkage between 2 MI60s/50s is only for passing cache coherency traffic between GPUs; most memory accesses will be via the PCIe 4.0 bandwidth, transferred via the DMA/HBM2 memory subsystems onto the MI60's/50's available HBM2 stacks. The IF is there to allow 2 or more MI60s to share a common pooled HBM2 and virtual VRAM memory space, GPU-HBM2 to GPU-HBM2, and out onto regular system DRAM and SSD/hard drive as well. So there will still be plenty of bandwidth via those 2 xGMI links (based on the Infinity Fabric), plus PCIe 4.0 to manage most DRAM memory accesses/transfers, with cache coherency traffic in the background while the MI60s/50s do their work unimpeded.

You do not understand what the Infinity Fabric cache coherency IP is for; the Infinity Fabric is not there solely for memory transfers, as that is the job of the memory controllers and the respective processor's cache subsystems. You lack the basic understanding of how any modern processor's cache and memory subsystems operate independently and in concert with the processor cores, and that's a big mistake on your part.

December 10, 2018 | 02:21 AM - Posted by James

I have read all kinds of enthusiast theories about Zen 2. It is always unclear to me why enthusiasts like yourself think that they somehow know better than engineers. The idea that AMD would use the ~420 square mm IO die on a silicon interposer is not plausible. That IO die would be larger than the GPU, and the GPU plus HBM is already near the reticle limit; there is no space for an interface die. Also, the pad size on the chips must be different. Dies for use on interposers use a much smaller grid; they have to, to fit the thousands of microballs needed for HBM interfaces. How many microballs for a 4096-bit interface on the MI60? Also, did you completely miss the part about needing 20 Zen 2 IFOP links to equal the HBM interface on an MI60? How would that ever make sense compared to directly connecting the HBM to the GPU die? Attempting to pass high-speed graphics memory through an interface chip is just not going to be worth it, for many different reasons; I don’t think I have even covered them all. It is much better to have multiple die variants.

The IO die for Epyc 2 seems to contain just what AMD said it contains, most likely with a few changes that others have not considered. The amount of uncore on Zen 1 is very large, so I don’t think the over 400 square mm is out of line just for all of the interfaces that we already know it contains.

The video that this article is about is probably mostly correct and mostly expected, unlike his previous video. Two chiplets will only be about 140 to 150 square mm, leaving room for an IO die with two IFOP links. The actual raw memory latency might be slightly higher than Zen 1, but there are several different things that will improve latency in Zen 2, not just the massive L3.

February 20, 2019 | 11:22 AM - Posted by WhyMeSnowflakeIsMeltingMeltingMelting (not verified)

There's not going to be any HBM2 interfacing via any I/O die, as the HBM2 interface currently is only for GPU-die-to-HBM2 usage. Now it may be possible to have a GPU and a single HBM2 die stack on that MCM, if they forgo a second CPU die/chiplet for some consumer MCM-based APU variant, but that would be a single 1024-bit-wide bridge interposer wiring the GPU's memory controller to the HBM2 stack. The GPU-die/HBM2-stack combo would then link to the I/O die via narrower IF-Ver2 links. So the GPU to I/O die direct link would be IF-2 based, and the GPU to HBM2 stack would be JEDEC/HBM2 based.

WikiChip's Zen 2 micro-arch entry lists IF-Ver2 as:

"◾ Infinity Fabric 2

...◾ 2.3x transfer rate per link (25 GT/s, up from ~10.6 GT/s)" (1)


Wikichip's Fuse article has this to say about IF-2.0:

"interesting detail AMD disclosed with their GPU announcement is that the infinity fabric now supports 100 GB/s (BiDir) per link. If we assume the Infinity Fabric 2 still uses 16 differential pairs as with first-generation IF, it would mean the IF 2 now operates at 25 GT/s, identical to NVLink 2.0 data rate. However, since AMD’s IF is twice as wide, it provides twice the bandwidth per link over Nvidia’s NVLink." (2)

So it would take 10 32-lane (16 differential pair) IF-2 links to equal 1 TB/s of bandwidth; 320 IF-2 lanes could match the bandwidth of the 4096 lanes to 4 HBM2 stacks. That many IF lanes would be power hungry.
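That arithmetic can be checked directly (100 GB/s per 32-lane link from the WikiChip quote above; ~1 TB/s as the 4-stack HBM2 target):

```python
IF2_LINK_GB_S = 100       # bidirectional bandwidth per IF-2 link (WikiChip quote)
LANES_PER_LINK = 32       # 16 differential pairs
HBM2_TARGET_GB_S = 1000   # ~1 TB/s across four 1024-bit HBM2 stacks

links = HBM2_TARGET_GB_S // IF2_LINK_GB_S  # IF-2 links needed
lanes = links * LANES_PER_LINK             # total IF-2 lanes vs 4096 HBM2 lanes
print(links, lanes)  # -> 10 320
```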

I'd rather the I/O die have L4 cache than any excess memory controller logic that's not going to be utilized by all variants: L4 cache, and enough memory controllers to service up to 16 cores. Having only 2 memory controllers, for AM4/MB socket compatibility, would really be helped by an L4 cache reducing the stress on the peak memory bandwidth limitations. The more cache levels, the more opportunities there are for latency hiding, and the higher the chances that the code/data the CPU needs will reside in some cache level instead of slow, higher-latency system DRAM.


"Zen 2 - Microarchitectures - AMD "


"AMD Discloses Initial Zen 2 Details"

December 11, 2018 | 07:45 PM - Posted by rukur

Don't worry :) If Adored is wrong next year. PC-PER will absolutely call out Adored and not the Inquirer :)
