There have been some pretty breathless analyses of a single leaked block diagram that supposedly comes from AMD.  This is one of the first indications of what the Zen architecture looks like from a CPU core standpoint.  The block diagram is very simple, but it is in the same style as what we have seen from AMD before.  There are some labels, but this is almost a 50,000 foot view of the architecture rather than a slightly clearer 10,000 foot view.

There are a few things we know for sure about Zen.  It is a clean-sheet design that moves away from what AMD was pursuing with their Bulldozer family of cores.  Zen gives up CMT (clustered multithreading) in favor of SMT (simultaneous multithreading) for handling more threads.  The design has a cluster of four cores sharing 8 MB of L3 cache, with each core having access to 512 KB of L2 cache.  There is a lot of optimism that AMD can buck the trend of falling further and further behind Intel every year with this particular design.  Jim Keller is viewed very positively due to his work at AMD in the K7 through K8 days, as well as what he accomplished at Apple with their ARM-based offerings.
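For a sense of how a cache hierarchy like that shows up to software, on a Linux box you can ask what cache sizes the OS reports for whatever CPU you are running today.  A minimal sketch, nothing Zen-specific; the _SC_LEVEL*_CACHE_SIZE names are glibc extensions and may report 0 or -1 where the information is not exposed:

```c
/* cache_sizes.c - print the cache sizes the OS reports for the current CPU.
 * Build: gcc -O2 cache_sizes.c -o cache_sizes
 * The _SC_LEVEL*_CACHE_SIZE names are glibc extensions; they may return 0
 * or -1 on systems that do not expose this information. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long l1d = sysconf(_SC_LEVEL1_DCACHE_SIZE);  /* bytes, per core */
    long l2  = sysconf(_SC_LEVEL2_CACHE_SIZE);
    long l3  = sysconf(_SC_LEVEL3_CACHE_SIZE);

    printf("L1 data cache: %ld KB\n", l1d / 1024);
    printf("L2 cache:      %ld KB\n", l2  / 1024);
    printf("L3 cache:      %ld KB\n", l3  / 1024);
    return 0;
}
```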

One of the first sites to pick up this diagram wrote quite a bit about what they saw.  There was a lot of talk along the lines of, “right off the bat just by looking at the block diagram we can tell that Zen will have substantially higher single threaded performance compared to Excavator and the Bulldozer family.”  There was also the assumption that because it has two 256-bit FMACs, it could fuse them to handle a single 512-bit AVX operation.
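For some context on what that claim implies: a single 256-bit FMAC performs one fused multiply-add across eight packed single-precision floats, so a 512-bit operation would need two of them working in tandem.  Here is a minimal sketch of a pair of independent 256-bit FMAs using standard AVX2/FMA3 compiler intrinsics; nothing about it is Zen-specific, and whether two units could actually be fused is exactly the part a block diagram cannot tell us:

```c
/* fma256.c - two independent 256-bit fused multiply-adds (a*b + c).
 * Build: gcc -O2 -mavx2 -mfma fma256.c -o fma256
 * Each _mm256_fmadd_ps works on eight packed floats; a 512-bit operation
 * would effectively be two of these issued together. */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    float a[16], b[16], c[16], out[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f; c[i] = 1.0f; }

    /* Lower 256 bits (elements 0-7) and upper 256 bits (elements 8-15) each
     * get their own FMA - on a core with two 256-bit FMAC units these could
     * execute in parallel. */
    __m256 lo = _mm256_fmadd_ps(_mm256_loadu_ps(a),     _mm256_loadu_ps(b),     _mm256_loadu_ps(c));
    __m256 hi = _mm256_fmadd_ps(_mm256_loadu_ps(a + 8), _mm256_loadu_ps(b + 8), _mm256_loadu_ps(c + 8));

    _mm256_storeu_ps(out,     lo);
    _mm256_storeu_ps(out + 8, hi);

    for (int i = 0; i < 16; i++)
        printf("%.1f ", out[i]);   /* prints 2*i + 1 for each element */
    printf("\n");
    return 0;
}
```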

These assumptions are pretty silly.  This is a very simple block diagram that leaves most of the important questions about the architecture unanswered.  Yes, it shows six integer pipelines, but we don’t know how many of those are address generation units versus execution units.  We don’t know how wide decode is.  We don’t know the latency to L2 cache, much less how L3 is connected and shared out.  So just because we see more integer pipelines per core does not automatically mean, “Da, more is better, strong like tractor!”  We don’t know what improvements or simplifications we will see in the schedulers.  There is no mention of the front-end other than Fetch and Decode.  How about branch prediction?  What is the latency for the memory controller when addressing external memory?
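The latency questions in particular only get answered with silicon in hand and a microbenchmark.  The usual technique is a dependent pointer chase through a randomly shuffled buffer sized to fit within (or spill past) a given cache level.  A rough sketch of that approach follows; the working-set sizes and iteration count are arbitrary choices for illustration and have nothing to do with the leak itself:

```c
/* ptr_chase.c - rough cache/memory latency probe via dependent pointer chasing.
 * Build: gcc -O2 ptr_chase.c -o ptr_chase
 * Each node is padded to a 64-byte cache line (assuming 64-bit pointers) so
 * every chase step forces a new line fill once the buffer exceeds a cache level. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct node { struct node *next; char pad[56]; };   /* one node per 64-byte line */

static double chase(size_t bytes, size_t iters)
{
    size_t n = bytes / sizeof(struct node);
    struct node *nodes = malloc(n * sizeof *nodes);
    size_t *order = malloc(n * sizeof *order);
    if (!nodes || !order) { perror("malloc"); exit(1); }

    for (size_t i = 0; i < n; i++) order[i] = i;
    for (size_t i = n - 1; i > 0; i--) {             /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < n; i++)                   /* link into one random cycle */
        nodes[order[i]].next = &nodes[order[(i + 1) % n]];

    struct node *p = &nodes[order[0]];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++) p = p->next;  /* dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    if (p == NULL) puts("unreachable");              /* keep p live */

    double ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    free(nodes); free(order);
    return ns / (double)iters;                       /* average ns per load */
}

int main(void)
{
    size_t sizes[] = { 16 << 10, 256 << 10, 4 << 20, 64 << 20 };  /* 16 KB .. 64 MB */
    for (int i = 0; i < 4; i++)
        printf("%6zu KB : %6.2f ns per load\n",
               sizes[i] >> 10, chase(sizes[i], 20u * 1000 * 1000));
    return 0;
}
```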

Essentially, this looks like a simplified way of expressing to analysts that AMD is attempting to retain their per-core integer performance while boosting floating point/AVX throughput to a comparable degree.  Other than that, there is very little that can be gleaned from this simple block diagram.

Other interesting leaks concerning Zen cover the formats these products will be integrated into.  One leak detailed an HPC-aimed APU that features 16 Zen cores with 32 MB of L3 cache attached to a very large GPU.  Another leak detailed a server-level chip that will support 32 cores and will be seen in 2P systems.  Zen certainly appears to be very flexible, and in some ways it reminds me of a much beefier Jaguar-type CPU.  My gut feeling is that AMD will get closer to Intel than it has been in years, and perhaps they can catch Intel by surprise with a few extra features.  The reality of the situation is that AMD is far behind, and only now are we seeing pure-play foundries start to get even close to Intel in terms of process technology.  AMD is very much at a disadvantage here.

Still, the company needs to release new, competitive products that will refill its coffers.  The previous quarter’s loss has dug into cash reserves, but AMD is still stable in terms of cash on hand and long-term debt.  2015 will see new GPUs, an APU refresh, and the release of the new Carrizo parts.  2016 looks to be the make-or-break year with Zen and K12.

Edit 2015-04-28:  Thanks to SH STON we have a new slide that has been leaked from the same deck as this one.  It has some interesting info in that AMD may be moving away from exclusive cache designs.  Exclusive caching was a good idea when cache was small and expensive, as data was not replicated through each level of the hierarchy (L1 contents were not duplicated in L2, and L2 contents were not duplicated in L3).  Intel has been using inclusive caches for a very long time, where data is replicated across levels and simpler to handle.  Now it looks like AMD is moving towards inclusive as well.  This is not necessarily a bad thing, as the 512 KB of L2 can easily contain what looks to be 128 KB of L1 per core, and the shared 8 MB of L3 can easily contain the combined 2 MB of L2 data from the four cores.  Here is the link to that slide.
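Taking the leaked figures at face value, the duplication cost of going inclusive really is modest.  A quick back-of-envelope sketch (the 128 KB L1 figure is read off the slide, not confirmed):

```c
/* inclusive_overhead.c - back-of-envelope capacity cost of inclusive caching
 * using the cache sizes from the leaked Zen slides: 4 cores per cluster,
 * 128 KB L1 + 512 KB L2 per core, 8 MB shared L3.
 * Build: gcc inclusive_overhead.c -o inclusive_overhead */
#include <stdio.h>

int main(void)
{
    const double l1_kb = 128.0, l2_kb = 512.0, l3_kb = 8192.0;
    const int cores = 4;

    /* An inclusive L2 holds a copy of everything in its core's L1;
     * an inclusive L3 holds a copy of everything in all four L2s. */
    printf("L1 copies use %.0f%% of each L2\n", 100.0 * l1_kb / l2_kb);
    printf("L2 copies use %.0f%% of the shared L3\n",
           100.0 * cores * l2_kb / l3_kb);          /* both work out to 25% */
    return 0;
}
```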

The new slide in question.