
Zen and the Art of CPU Design

Author:
Subject: Editorial
Manufacturer: AMD

Zen vs. 40 Years of CPU Development

Zen is nearly upon us.  AMD is releasing its next generation CPU architecture to the world this week, and we saw CPU demonstrations and upcoming AM4 motherboards at CES in early January.  We have been shown tantalizing glimpses of the performance and capabilities of the “Ryzen” products that will presumably fill the desktop market from $150 to $499.  I have yet to be briefed on the product stack that AMD will be offering, but we know enough to start to think about how positioning and placement will be addressed by these new products.


To get a better understanding of how Ryzen will stack up, we should probably take a look back at what AMD has accomplished in the past and how Intel has responded to some of its stronger products.  AMD has been in business for 47 years now and has been a major player in semiconductors for most of that time.  It has really only been since the 1990s, when AMD started to battle Intel head to head, that people became passionate about the company and its products.

The industry is a complex and ever-shifting one, and AMD and Intel have been two stalwarts over the years.  Even though AMD has had more than a few challenging years over the past decade, it still moves forward and expects to compete at the highest level with its much larger and better funded competitor.  2017 could very well be a breakout year for the company, with a return to solid profitability in both CPU and GPU markets.  I am not the only one who thinks this, considering that AMD shares that traded around the $2 mark ten months ago are now sitting around $14.

 

AMD Through 1996

AMD became a force in the CPU industry due to IBM’s requirement to have a second source for its PC business.  Intel originally entered into a cross licensing agreement with AMD to allow it to produce x86 chips based on Intel designs.  AMD eventually started to produce its own versions of these parts and became a favorite in the PC clone market.  Intel eventually tightened down on this agreement and then cancelled it, but through near endless litigation AMD ended up with an x86 license deal with Intel.

AMD produced its own Am286 chip, the first real break from the second sourcing agreement with Intel.  Intel balked at sharing its 386 design with AMD and eventually forced the company to develop its own clean room version.  The Am386 was released in the early 90s, well after Intel had been producing those chips for years.  AMD then developed its own Am486, which later morphed into the Am5x86.  The company made some good inroads with these speedy parts and typically clocked them faster than their Intel counterparts (e.g. the Am486 at 40 MHz and 80 MHz vs. the Intel 486 DX33 and DX66).  AMD priced these parts lower so users could achieve better performance per dollar using the same chipsets and motherboards.


Intel released its first Pentium chips in 1993.  The initial version ran hot and featured the infamous FDIV bug.  AMD made some inroads against these parts by introducing faster Am486 and Am5x86 chips that achieved clockspeeds of 133 MHz to 150 MHz at the very top end.  The 150 MHz part was very comparable in overall performance to the Pentium 75 MHz chip, and we saw the introduction of the dreaded “P-rating” on processors.

There is no denying that Intel continued its dominance throughout this time by being the gold standard in x86 manufacturing and design.  AMD slowly chipped away at its larger rival and continued to profit off of the lucrative x86 market.  William Sanders III set the bar high for where he wanted the company to go, and he started it down a much more aggressive path than many expected the company to take.


AMD K5 and K6 Era

The K5 was the first CPU designed entirely by AMD.  It was a very aggressive design that could have panned out very well for AMD had the company been able to execute on it effectively.  It was a superscalar, out-of-order CPU design that utilized a RISC based core with an x86 decoder front end.  To this point most designs were entirely CISC based.  There was nothing inherently wrong with a fully CISC based CPU, but the overall architecture essentially limited clockspeeds and induced more complexity than many designers wanted.  Going with an x86 decoder feeding a “risc-y” core solved a lot of problems, and we have essentially had that solution ever since.
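The decode-into-micro-ops idea can be sketched with a toy example.  This is purely illustrative (the instruction format and the micro-op names are invented for the sketch, not AMD's actual decoder), but it captures why the approach works: one complex memory-operand instruction becomes a few simple operations that an out-of-order core can schedule independently.

```python
# Toy illustration (not AMD's actual decoder) of an x86 front end
# feeding a RISC-like core: a complex CISC-style instruction with a
# memory operand is cracked into simple micro-ops.

def decode(instruction):
    """Crack a CISC-style instruction tuple into a list of micro-ops."""
    op, dst, src = instruction
    if src.startswith("["):                 # memory operand, e.g. "[ebx]"
        addr = src.strip("[]")
        return [
            ("LOAD", "tmp0", addr),         # load from memory into a temp reg
            (op, dst, "tmp0"),              # now a simple register-register op
        ]
    return [(op, dst, src)]                 # already simple: pass through

# "ADD eax, [ebx]" becomes a load plus a register-register add.
print(decode(("ADD", "eax", "[ebx]")))
```

The payoff is that the core behind the decoder only ever sees a small set of uniform, simple operations, which is what made the higher clocks and out-of-order scheduling tractable.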

The K5 was more akin to the Pentium Pro of the time than the original Pentium.  It had better IPC than the Pentium and was more on par with the Pentium Pro, but without that extremely expensive cache unit.  Unfortunately, AMD was unable to get clockspeeds up to a point where it could be competitive overall with the Pentium.  AMD continued with the “P-rating” on this part.  The K5 ran well and was highly compatible in x86 applications.  It utilized the Socket 5 and Socket 7 infrastructure without problem.  Sadly, it ran slower and hotter than the competing Pentiums.


It was a solid attempt at competing with Intel.  Sadly, issues with the design and AMD’s inability to match Intel in manufacturing meant that it was not nearly as competitive as the company would have liked.  It did lay the foundation of good design ideas being put into silicon at AMD.  The company would learn from this product and improve upon the design strategies that it was able to implement from scratch.

AMD was also very aware of the industry around it and would take advantage of opportunities.  DEC was hitting some real financial difficulties, and AMD was busy hiring engineers from the company.  AMD also noticed another CPU company called NexGen that was designing a CPU very comparable to the upcoming Pentium II from Intel.  AMD quickly bought up this company and its upstart CPU and renamed it the K6.

The K6 is probably the most important CPU from AMD up to this point.  It had wide appeal due to being Socket 7 compatible and providing Pentium II performance in most integer applications.  This was one of the first CPUs to cause Intel to radically change its plans.  AMD introduced the K6 in April of 1997.  Intel released the Pentium II a month later.  For that month, AMD had the fastest x86 processor in the world.  What is interesting is that Intel felt the need to release the Pentium II well before it had planned.  Intel had been stockpiling CPUs for quite some time, but it did not have the 440 LX chipset platform ready when AMD started pushing forward on releasing the K6.  Intel took the aging 440 FX chipset that had powered the Pentium Pro for years and utilized it for the Pentium II.  This meant no AGP or SDRAM support for the PII right off the bat.  AMD, on the other hand, was able to utilize the mature 430 HX and the new and shiny 430 TX chipsets, as well as the SiS, ALi, and VIA based chipsets supporting Socket 7 at the time.

 

February 28, 2017 | 11:10 AM - Posted by denstieg (not verified)

Yeahhh history lessons, was 17 when the 8086 hit the markets. Can remember a time the teachers at school swore Windows was never going to be something (mbo informatica).

February 28, 2017 | 11:23 AM - Posted by Josh Walrath

Apple IIe ruled the land... 

February 28, 2017 | 11:35 AM - Posted by David Simons (not verified)

Surely you mean Commodore?  They dominated the industry and were really THE company to push home computing forward. Apple were quite small in comparison back then.

February 28, 2017 | 11:50 AM - Posted by John H (not verified)

Or if you are in the UK, Sinclair ruled... In France, the Amstrad CPC. :)

My money was on the Atari 800. Custom GPU and sound chip. 1979.

February 28, 2017 | 06:42 PM - Posted by Jeremy Hellstrom

Alas poor Cyrix, I knew thee well.

March 2, 2017 | 04:38 AM - Posted by denstieg (not verified)

It was worse...... DOS was it and all there would ever be or was needed.
The school and personal PCs of the staff couldn't run Windows, so it wouldn't be anything. Meanwhile my weekend job as a mover for KPN made me move thousands of Windows-running PCs.

February 28, 2017 | 11:21 AM - Posted by Anonymous (not verified)

AMD K6-II was my first build that was all my own from beginning to end with my own money. W95-C!!!

February 28, 2017 | 11:21 AM - Posted by Josh Walrath

Heh, the only Win95 worth a damn.

February 28, 2017 | 11:43 AM - Posted by JohnGR

First processor an Intel Celeron 333A (I am still cursing the sales person who convinced me to choose that model over the 300A). 2 years later I wanted to upgrade. So I was waiting for Intel to lower its prices to buy a new processor at around 600-700MHz. I was young and naive, thinking that Intel was/is a company that lowers prices. Yeah right.
Next processor a Duron 700. After that a 1800+ Thoroughbred-B. Intel had almost lost me as a customer by that time. I was thinking about an Intel Pentium 4 when the Athlon XP was falling behind and AMD was just increasing PR numbers to create the illusion that it competed with the latest Pentium 4 models. Then the Athlon64 came out and I never looked back.
By the time Bulldozer was out, I was a much calmer person, not pursuing the highest performance out there, so I bought a Thuban and remained with it until today.

February 28, 2017 | 12:35 PM - Posted by Voldenuit (not verified)

Wha? The 333 was a great chip. Most 300As easily hit 450 MHz, but the 333 would hit 504 MHz at the same bus settings.

February 28, 2017 | 05:18 PM - Posted by JohnGR

No, not really. It was easier to go to 450MHz with the 300A than over 500MHz with the 333A. If I remember correctly the best Pentium II back then was at around 500MHz, so going at 500MHz stable was very difficult. So I was playing with 75MHz or 83MHz bus speeds back then. I was lucky that the graphics card was stable enough to not bother me at 83MHz (not that I knew anything about stability back then anyway, but it was working).

February 28, 2017 | 07:52 PM - Posted by Topweasel (not verified)

It was a twofold issue. First, if you had a Celeron A not made in Costa Rica, you weren't getting to 500MHz easily. Second, if you couldn't run the bus at 100MHz then you were overclocking the PCI and AGP buses. Turned out not to be a big issue for the AGP bus, but PCI cards went mental. So basically you either got a CPU that ran at 450, 500, or 550 with the 300, 333, 366, or you ran it at the default speed or just over. With the 300A just about any chip could hit 450. Getting a good 500 or 550 part required you to know what serial numbers to look for.

Apparently most retail packages came from Costa Rica, but the OEMs became a crapshoot. On top of that, in the early days most OEMs' employees would test all their OEM chips to cherry pick the best for them and their friends.
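The arithmetic behind those numbers is just bus speed times a locked multiplier (the stock figures below are the well-known ones for these Celerons; the helper is a sketch, not anything from the article):

```python
# These Celerons had locked multipliers, so overclocking meant raising
# the front-side bus; the core clock follows directly.

def core_clock(fsb_mhz, multiplier):
    """CPU core clock in MHz for a given front-side bus and multiplier."""
    return fsb_mhz * multiplier

print(core_clock(66, 4.5))    # Celeron 300A stock: ~300 MHz
print(core_clock(100, 4.5))   # 300A on a 100 MHz bus: 450 MHz
print(core_clock(100, 5.0))   # Celeron 333 on a 100 MHz bus: 500 MHz
print(core_clock(83, 5.0))    # off-spec 83 MHz bus: ~415 MHz
```

The off-spec 75/83 MHz bus settings were the painful middle ground: the PCI and AGP clocks were derived by dividing the FSB, so running off-spec overclocked every card in the system too.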

March 1, 2017 | 10:51 AM - Posted by GURVINDER S PARMAR (not verified)

You were using Windows 95... Stability hadn't yet been invented...Lol.

February 28, 2017 | 12:36 PM - Posted by Josh Walrath

I also made that mistake back in the day. Thought I could get 500 MHz out of that 333A rather than a guaranteed 450 from the 300A.

February 28, 2017 | 09:13 PM - Posted by chaseyDog (not verified)

Celeron 300A/450 was my first build and still one of the most rewarding in terms of price for performance. Initially I kicked myself for not holding off for the 333, but the more I read on the newsgroups the better I felt about only getting 450 out of the 300.

February 28, 2017 | 02:39 PM - Posted by Anonymous (not verified)

The Athlon X2 on an Nvidia motherboard was one of the best computing platforms ever.

February 28, 2017 | 03:32 PM - Posted by Martin Trautvetter

You must be one of the lucky few who didn't run into major issues with Nforce chipsets.

February 28, 2017 | 05:16 PM - Posted by Josh Walrath

I had some really solid experiences with the nForce 2.  Loved SoundStorm!  Memory support was robust and it could handle some decent overclocking.  I believe I had the Asus version at the time... A7N8X Deluxe?

February 28, 2017 | 05:23 PM - Posted by JohnGR

Nvidia graphics onboard and SoundStorm for the audio, from the first nForce. Oh my...
And later the nForce2 Ultra 400 for crazy bus speeds.

February 28, 2017 | 06:03 PM - Posted by Pete in Mtl

A8N SLI Asus with 2 6600GTS
Was a nice combination

Built a system for the wife a bit later and used an Asus with a 1100 series processor. Thing still runs quite well years later.

March 1, 2017 | 01:34 PM - Posted by BrightCandle (not verified)

I think I had a pair of 366's on a BP6 motherboard, both overclocked to 550 MHz. That machine was really great and showed the potential of dual cores to make a system feel smoother; I didn't see that sense of smoothness when loading programs until much later with the Core 2 Duo.

February 28, 2017 | 11:47 AM - Posted by Fubar (not verified)

AMD K6-II and a godly Nokia 449 monitor which did 1280 x 1024 @ 75 Hz (I could not stand the 60 Hz monitors). All that hard work during the whole summer, the choice between a moped or a computer for which my mother would pay half. No regrets going the computer route :)

February 28, 2017 | 08:53 PM - Posted by Avoozl (not verified)

Haha. I just got rid of a Nokia 446xt last year. So I can appreciate that. :-)

February 28, 2017 | 12:35 PM - Posted by Anonymous (not verified)

"GF seems to have provided a very solid 14nm FinFET process that utilizes high density libraries to allow AMD to have a reasonably sized die given 8 cores and 20 MB of cache."

GF's 14nm FF process is licensed from Samsung, and any high density design library engineering is probably done at layout by AMD via its in-house automated CAD layout systems for CPU/GPU design. So that layout is done mostly in house by AMD's engineers, with some assistance from GF and Samsung engineers (for basic transistor design only, to fit the 14nm Samsung/GF process) for final tapeout.

I do not think that the desktop Ryzen and Zen server CPU SKUs are making use of any high density design layout libraries. The high density libraries, normally used for GPUs to get more shaders per unit area, cannot be clocked as high as the nominal lower density libraries used for high performance, higher clocked desktop CPU cores, for transistor density/thermal dissipation reasons.

AMD’s first Carrizo Excavator cores were for mobile usage only, so AMD could take advantage of the lower-clocks tradeoff for the higher circuit density gained by using those nominally GPU high density layout libraries on a mobile-only CPU core design. Mobile parts are going to be clocked lower anyway, regardless of the design process (low or high density libraries), so why bother using the nominal low density, high performance CPU libraries on mobile parts that are never going to be clocked that high to begin with.

For mobile CPU SKUs engineered with high density design libraries, which mostly live in thermally constrained form factors like laptops or tablets, it makes more sense to make the clock speed tradeoff to get more functionality on the processor die, as those parts are clocked lower anyway. For desktop parts aimed at the enthusiast market, the nominal low density CPU libraries are more appropriate because of the higher clock speeds that can be had at the cost of a larger die size.

I think that the Ryzen/Zen desktop consumer/server parts are probably making use of the nominal low density, higher performance CPU layout libraries to get those needed extra clock speeds, and that for any Ryzen/Raven Ridge APUs, AMD will probably make use of the high density libraries to get more space for more Vega NCUs on the mobile/laptop-only APU SKUs.

The Ryzen/Zen x86 core being smaller than the competing Intel x86 core has more to do with Ryzen's smaller FP units relative to Intel’s larger FP units than it does with any high density design libraries being used. This is not to say that AMD has not employed some tweaks to its Zen core designs under a low density design library process, as AMD does have the option of hand tweaking some of the layouts generated by any automated layout process to get smaller cores.

February 28, 2017 | 01:01 PM - Posted by Josh Walrath

GF did license 14LPP from Samsung, but it has also added in some of its own secret sauce to the mix to differentiate it subtly from what Samsung uses.  GF engineers have worked very closely with AMD (as they are the primary customer of the company) to develop structures that will work better with AMD's designs.  For example, AMD and GF developed specialized capacitors that exist on the power plane of CPU designs and provide instant power upon wakeup of those structures without inducing excessive voltage droop.

I'm working on trying to find the documentation about using a high density design for Zen that has improved its die size advantages.  I do know that one aspect is the notched SRAM design that AMD/GF uses that allows much smaller caches than expected from a company not called Intel.  While the lower complexity FPU is a reason for the smaller Zen die size, also consider that Zen has double the L2 cache as compared to the competing Intel design.

February 28, 2017 | 02:11 PM - Posted by Anonymous (not verified)

Yes, all foundry partners work with their customers to get the basic transistor geometry of their 14nm/any other FF process working, and also the basic elemental doping and diffusion work engineered, as that is part of the licensed process, 14nm or otherwise. The layout and the actual architectural work of designing the CPU functional units made up of those transistors is mostly an in-house task for AMD, using AMD’s CAD layout software (AMD’s own or licensed from a third party) to design and lay out the CPU core’s logic.

So most of the GF/Licensed from Samsung 14nm fabrication process work involves GF and Samsung engineers working with AMD. And because GF is a licensee of Samsung’s 14nm process Samsung probably has to provide some consulting engineers to GF to help in the engineering of the licensed diffusion process formulas/transistor geometry design that is composed/built using mostly Samsung IP/Some GF IP/Industry wide IP.

The open market foundry business is very similar to the construction market business where architects/engineers(AMD) design the building and the construction company and its engineers(GF/Samsung) build the designs.

It should be noted that for some years now IBM, Samsung, GF, and others have been in a chip foundry IP/technology sharing partnership/foundation, and that GF is now also doing all of IBM’s commercial fabrication work, with IBM still in control of some research fabrication capacity as well as IBM’s important chip foundry research/commercial IP. IBM is a very large licensor of a lot of very important foundry IP and CPU designs (OpenPower).

So very rarely is there a third party fabrication plant without some on fabrication site engineering offices with loads of engineers from various companies made up of the customer’s engineers and consulting engineers as well as the other engineers from the companies that provide the diffusion machinery and other services.

The reworking of the AMD/GF wafer agreement also allows AMD to take the process to Samsung/other fabs should extra capacity be needed, with GF getting some payment even for wafers produced outside of GF. The IP licensing across the third party chip foundry industry is a very interesting subject in itself, and there is a lot of sharing and cooperation among Samsung, GF, and IBM, and plenty of licensing among these parties, in an effort to pool resources and save on R&D.

March 2, 2017 | 06:50 PM - Posted by Anonymous (not verified)

I was under the impression that AMD has explicitly stated that they are using high density libraries with their CPU designs. They have multiple cell designs for any given cell that allow optimization depending on what is required, though. At this point, I doubt that critical paths due to transistor switching speed are the main issue. Enthusiasts constantly complain about the seemingly mobile-first strategy which supposedly limits desktop performance. This is probably completely wrong. Current processors are very heat limited.

For both manufacturers, a CPU core, even including an L3 cache slice, is only around 12 square mm. That is only 3.5 mm on a side, if square. That is a tiny area to conduct heat out of. The total TDP is not the issue like it was when the Pentium 4 was hitting 150 watts. It is the thermal density. GPUs can push 300 watts because of the large die size and the ratio of logic to memory. If it is not switching speed limited, then it is much better to use the denser versions of the standard cells. This gives you a larger transistor budget to pursue IPC and lower power. Zen based chips may be great for mobile since they should be able to achieve good performance at low clocks and very low power. Early on this looked like a gamble, but it seems to be paying off. Zen does have a small difference between base and boost clock though; that may be due to high density libraries. Intel can push high boost clocks, but they are thermally limited.

They will obviously want the dense cells being used for their APUs. Also, they are unlikely to have a completely different design or even a different die for most mobile parts. Each different die costs a lot of money to make. Redoing the layout just with different cells would be expensive; it would probably require completely new masks for all layers. Most CPUs are salvage parts. They only make 2 or 3 actual different die variants for each market. I would expect the 8 core variant and maybe a 4 core variant, since they use groups of 4. They will have APUs, but it is unclear whether these will have fewer than 4 cores. Going with a 4 core / 8 thread APU raises the minimum bar to 8 threads, which is what the consoles have anyway.

They may not have a 4 core die without on-die graphics though. They may go down to 4 cores / 4 threads with a Zen variant (salvage), as that would probably still be able to keep up with the 8 low power, single threaded cores used in the consoles. It is unclear whether Scorpio will use full Zen cores or not. For anything less than 4 cores, I would expect they may use a lightweight Zen core. Scorpio may include a full Zen 4 core module, maybe without or with less L3, since it will have access to a much faster memory system than a desktop CPU. It may also be a lightweight variant of Zen rather than full Zen. That isn't exactly relevant, since Microsoft will be paying the cost of that die variant anyway.
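The thermal-density point above can be put into rough numbers. The wattage and area figures below are illustrative assumptions for the sake of the comparison, not measured values:

```python
# Back-of-the-envelope heat-flux comparison: a small, busy CPU core
# versus a large GPU die. All figures are illustrative assumptions.

def power_density(watts, area_mm2):
    """Heat flux in watts per square millimetre of die area."""
    return watts / area_mm2

# One CPU core plus an L3 slice: ~12 mm^2, drawing perhaps ~10 W when busy.
cpu = power_density(10, 12)

# A big GPU: ~300 W spread across a ~500 mm^2 die of logic *and* memory.
gpu = power_density(300, 500)

print(f"CPU core: {cpu:.2f} W/mm^2")
print(f"GPU die:  {gpu:.2f} W/mm^2")
```

Under these assumptions the tiny CPU core actually runs at a higher heat flux than the 300 W GPU, which is the commenter's point: total TDP matters less than how concentrated the heat is.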

March 4, 2017 | 02:15 PM - Posted by Josh Walrath

You are exactly right on the variety of standard cells that AMD used for this design.  If you take a look at the last pic of my Zen arch overview, it shows density/power/perf of those different cells and how they are used.

https://www.pcper.com/reviews/Processors/AMD-Zen-Architecture-Overview-F...

February 28, 2017 | 12:56 PM - Posted by Anonymous (not verified)

I think a missing piece of the story for Ryzen is that while every Intel part has to reserve a significant die area for its iGPU, Ryzen (Summit Ridge) parts have no iGPU, and thus the full die is available for CPU, hence much better performance.

February 28, 2017 | 01:03 PM - Posted by Josh Walrath

You are correct that the GPU in the i7 series of desktop processors takes up over 1/3 of the space.  I did not however go into die size/perf characteristics in this particular article.  Once we know more about relative die sizes of these parts we can explore further what we think AMD can do to accentuate their profitability with regards to overall performance.

March 2, 2017 | 07:13 PM - Posted by Anonymous (not verified)

That is a somewhat ridiculous enthusiast complaint though. Intel makes CPUs without integrated graphics; they are just very expensive Xeon-based devices. The reason they don't show up except as expensive Extreme Edition (salvaged Xeon) parts is market segmentation, not anything to do with die size. Intel's profit margins are probably still quite large for their $350 parts with integrated graphics despite the die size. I would also wonder if a 4 core part is really worth making at those margins without an integrated GPU. It would only be around 80 square mm or so, I would think. I don't know how much pad space they need on the bottom of the die. It would also not sell in the mobile market.

February 28, 2017 | 01:13 PM - Posted by Anonymous (not verified)

No, Intel has a line of CPU-only SKUs for the desktop/enthusiast market!

Ryzen/Summit Ridge parts are AMD's line of desktop/enthusiast SKUs, with Ryzen/Raven Ridge being AMD's APUs (desktop and mobile SKUs).

I think you are confusing this with Intel's mainstream desktop SKUs, which do have integrated GPUs. There is still no complete list of desktop and mobile Ryzen/Raven Ridge APU variants at this time from AMD. I'd think that AMD will probably offer a few desktop APUs for the business market as well as the mobile/laptop APUs that will also have integrated graphics.

Also there is a whole new class of interposer based professional server/workstation/HPC APUs and interposer based consumer APU variants to consider. So that's a whole new class of interposer based APUs (CPU die, GPU die, and HBM2 stack/s on the interposer) from AMD that will definitely have an impact on the professional and consumer markets!

February 28, 2017 | 01:03 PM - Posted by Tralalak (not verified)

Mr. Walrath, you're missing some of the most successful x86/x86-64 AMD processor microarchitectures, the "cat cores": AMD Bobcat and AMD Jaguar / Puma... (BGA, AM1, PS4, XBO, PS4 Pro)

February 28, 2017 | 01:06 PM - Posted by Josh Walrath

Heh, the editorial was over 6000 words as is... I couldn't cover everything! Note I left out Duron, Sempron, and other chips that were also major products for AMD.  The cat cores provided a lot of great design aspects that have eventually made their way up and down the stack of AMD products.  Perhaps what is most interesting is that AMD was able to design the Zen architecture in such a way that it could address not only the high end market, but can be scaled down to sub-5W implementations that have pushed Jaguar out of future products.

February 28, 2017 | 01:30 PM - Posted by Tralalak (not verified)

Duron was the K7 microarchitecture, Sempron 64 / Sempron X2 were K8, and the Sempron 100 Series was K10, and you basically mention all the K7, K8, and K10 microarchitectures. But the small cores, of course, you do not: the first AMD APU, Brazos (Bobcat core), and the AMD Jaguar SoC.

I have only a minor note: Jim Keller, talking about AMD's upcoming Zen and K12 cores at the AMD Core Innovation Summit, May 2014: "What is interesting is AMD has two families of processors today: the Bulldozer family, focused on really high frequency, and the Jaguar family, super small cores. In our new generation, what I told my team is we've got to take the DNA of both, the best of both, and put it in there. So this is really nice: we know how to do high-frequency design, we know how to do dense design, and ARM gives us some inherent architectural efficiency, and the combination is pretty good."
source: https://www.youtube.com/watch?v=SOTFE7sJY-Q

February 28, 2017 | 01:32 PM - Posted by Maree (not verified)

The 16nm version of the cat cores (made by TSMC as an APU with integrated GPU) can be found in the Xbox One S and PlayStation 4 Slim/Pro consoles. So these cores will live on for some more time, at least till Microsoft & Sony replace the Xbox One & PS4 with their successors.

February 28, 2017 | 02:21 PM - Posted by Martin Trautvetter

Yes, and mainstream gaming will suffer from their presence much like mainstream operating systems have suffered from having to support multiple generations of god-awful Intel Atoms in the past decade.

February 28, 2017 | 03:51 PM - Posted by Tralalak (not verified)

But new Atom (Goldmont microarchitecture, Denverton platform) is cool...

• Intel Atom CPU C3955 @ 2.10GHz (16C 2.1GHz, 8x 2MB L2 cache, Goldmont - Denverton, 31W TDP): 229.47Mpix/s
source: http://ranker.sisoftware.net/show_run.php?%20q=c2ffcee889e8d5e3d4e5d1e9d...

• AMD FX-8350 Eight-Core Processor (4M 8T 4.22GHz, 2.2GHz IMC, 4x 2MB L2 cache, 8MB L3 cache, 125W TDP): 187.87Mpix/s
source: http://ranker.sisoftware.net/show_run.php?%20q=c2ffcee889e8d5e3d0e4ddeed...

• AMD Phenom II X6 1100T Processor (6C 3.72GHz, 2GHz IMC, 6x 512kB L2 cache, 6x 6MB L3 cache, 125W TDP): 92.82Mpix/s
source: http://ranker.sisoftware.net/show_run.php?%20q=c2ffcee889e8d5e3d4ecd9edd...

• AMD Eng Sample: 2D3151A2M88E4_35/31_N (8C / 16T, 3.14GHz, 8x 512kB L2 cache, 2x 8MB L3 cache, Zen, 65W ~ 95W TDP): 317.84Mpix/s
source: http://ranker.sisoftware.net/show_run.php?%20q=c2ffcee889e8d5e3d5e3d1e2d...

March 2, 2017 | 07:03 PM - Posted by Anonymous (not verified)

You can't put a high power CPU in a console for a variety of reasons. Developers are just going to have to put some work in to actually multi-thread their code. I use a 24 core machine at work (2 12-core Xeons with HT off, actually) and I am still stuck using one core for many applications. That is 4.17% of the available CPU resources. It is really nice when I have an application that can hit "2400%" CPU utilization. With the consoles running 8 threads (some reserved for the OS though), we will eventually have engines that can take good advantage of multiple CPU cores. We aren't there yet, I don't think.

March 1, 2017 | 12:51 PM - Posted by Tspudz (not verified)

Aaah yes, the Sempron got me through my poor college student days where I dumpster dove for the case and monitor. I was surprised about what I was able to get accomplished on that thing. It was nice of AMD to consider the destitute tier of consumers even for their most current socket (at least at the time).

February 28, 2017 | 08:23 PM - Posted by quest4glory

My first PC (which was a dumpster dive) had an Am286 (12 MHz.)

The PC I took with me to college was an Am386 DX-40.

First CPU that I purchased with my own money was an Am486 DX2-80.

I kind of miss the good old days. I have multiple machines now, none of which have an AMD processor in them. That's been relegated to my video game consoles.

February 28, 2017 | 01:30 PM - Posted by John H (not verified)

I have a lot of faith that Lisa Su will also execute well with Ryzen; better than the CEOs did with at least K8 and K7 (not enough capacity when appropriate).

February 28, 2017 | 05:43 PM - Posted by Anonymous (not verified)

Lisa Su is a good CEO, and CEOs do not operate in a vacuum, as there is the BOD and stockholders to contend with. But some of those past AMD CEOs, both engineers and non-engineers, made some crucial mistakes at crucial times, and that along with some of Intel's not so legal market tactics really made AMD falter.

Getting Raja Koduri managing AMD's new RTG, and Keller back working (as lead manager/engineer) on the new, almost clean sheet Zen x86 micro-arch with SMT abilities, has helped AMD greatly. Rory Read was brought in at the last moment just to try to stabilize AMD financially while a new, more permanent CEO (Lisa Su) could be chosen.

So far, with AMD's limited resources, we have Zen/Ryzen and Polaris with Vega on the way. So AMD's performance has been great over the past few years, with a turnaround that appears to be very successful.

February 28, 2017 | 02:05 PM - Posted by Mike S. (not verified)

Thanks! I was aware of some of the history but not all of it.

I had been under the impression that Phenom and Phenom 2 were fully competitive with Core 2 Duo and so forth, and AMD only fell behind with Bulldozer. Clearly I was wrong. (That's not snark. I didn't pay enough attention to benchmarks at the time to notice the details.)

February 28, 2017 | 02:15 PM - Posted by Martin Trautvetter

Hey Josh, M/C/W were launched in mid 2006, not 2007. ;)

February 28, 2017 | 06:05 PM - Posted by Josh Walrath

Ack! I knew that! I will blame my fingers slipping on 7 instead of 6.  Updated! Thanks for pointing that mistake out.

March 1, 2017 | 04:14 PM - Posted by Martin Trautvetter

Sure thing, sometimes fingers are the worst! :P

February 28, 2017 | 03:44 PM - Posted by agello (not verified)

Blow after blow after blow to AMD in this review. No wonder so many are Intel fans; it's reviews like this that people read. Not once did the reviewer tell how much R&D money Intel is blowing each quarter compared to how little AMD is blowing. I did not read about the Atom 2 failure, the Kaby Lake bending issues, the i7 4770K overheating issues, or the choking of the Pentium G dual core. Nothing about how not everyone is looking to blow a crapload of money on a processor that gives a 15% lead over the competition for $200 more. There is a saying, though: "If you build it, they will come!" Idiots.

February 28, 2017 | 03:59 PM - Posted by Josh Walrath

Sooo the whole part where AMD, at a fraction of the size and income of Intel, has had a wide variety of CPUs through the years that have not only competed, but exceeded Intel in performance throughout the history of the company... just sort of passed you by?

Anyway, this is an editorial and historical look at AMD through really the past 22 or so years from K5 through Ryzen.  Look at AMD's share price and CPU performance through the years and then compare it to when I said they had the high points and the low points.  If you think I'm going to sugarcoat what the Bulldozer arch did to AMD and how low it brought them, you got another thing coming.

Ryzen looks to be competitive and is getting AMD back in the game.  Their share price reflects the confidence in the company from shareholders and investors, and they look to have a much better year due to CPU and GPU refreshes.  So yeah, if you got out of those 5 pages that all I did was criticize AMD... you may need to seek some professional help.

February 28, 2017 | 04:05 PM - Posted by Master Chen (not verified)

The very first two computers I ever used were my father's brought-straight-from-Japan MSX and the GODLIKE X68000. The very first computer that was my own, but which I didn't build myself completely from scratch, was an Apple II. Quite honestly, after my father's X68000, the Apple II looked like a friggin' joke... and unfortunately, since that Apple II was a birthday present from my relatives and for "study purposes", I had to sit on it ALL THE WAY RIGHT UP UNTIL LATE 1997. I'm shitting you NOT, people. At the beginning it was "just fine", but by the mid-90s it turned into an absolute TORTURE. And the winter of 1997 is where it ALL started... 1997 was the year when I was finally born as a PC enthusiast. I built my very first custom PC configuration that winter, completely from scratch and by spending my own hard-earned money on all of the hardware needed. It was based on the 266MHz version of Klamath (Pentium II). I initially wanted to get a 300MHz-clocked one, but it was quite rare in my local hardware stores at that time, and in places where it was selling on an everyday basis it was super-effing-expensive. It was not the best thing available out there, but it was the first piece of hardware that was 100% my own by all means, so I'll always cherish that memory.

February 28, 2017 | 04:09 PM - Posted by Joseph Taylor (not verified)

Nice article Josh. I enjoyed it.

February 28, 2017 | 06:01 PM - Posted by Josh Walrath

Thanks! Was fun taking a jaunt down memory lane and figuring out what processors I had and what I most remember about them.

February 28, 2017 | 05:51 PM - Posted by Hitman3003 (not verified)

Very interesting and solid article! I appreciate it a lot; it was fun to read :) Speaking of AMD, fingers crossed for them. Hopefully it's the beginning of a new era for them, and for us customers. We need progress in this field, not slow evolution and robbery like Intel has been doing for several years.

If they are successful, there are bright years to come in the CPU industry :)

February 28, 2017 | 05:52 PM - Posted by pdjblum

Agreed.

February 28, 2017 | 06:03 PM - Posted by Josh Walrath

Remember, this is only the first iteration of Zen.  AMD has an aggressive game plan to extend the architecture and improve upon it.  It has a lot of legs for the years to come, and consumers now have a stronger AMD to leverage against Intel for innovation or lower prices (or hopefully both).

February 28, 2017 | 05:53 PM - Posted by CNote

I don't really remember using any AMD in the "old days", but my high school CAD/3D animation classes were on Pentium III machines and took days to render. My first custom computer, with a Pentium 4, cut that in half. I think I debated going AMD but bought a barebones system.

February 28, 2017 | 06:35 PM - Posted by Anonymous (not verified)

" Worst case scenario is that AMD will suffer a 20% performance hit in I/O operations."

Did you just pull that right out of your ass, or what?

February 28, 2017 | 06:44 PM - Posted by Josh Walrath

Would you consider that a worst-case scenario in regards to chipset quality?  I'm not spreading FUD and saying that is what is going to happen, but it would certainly put a big blot on overall system performance in the unlikely event that it did.

March 2, 2017 | 03:39 AM - Posted by Phil (not verified)

20 percent just seems like a lot to me here. Can you cite a specific example of the southbridge/chips etc weaknesses from the past I can research?

March 2, 2017 | 10:31 AM - Posted by Josh Walrath

Now that the reviews are out, we see that AMD is not suffering.  However, if you want historical examples of this problem, just look up any in-depth review of a motherboard using AMD's SB600 southbridge.  That had terrible SATA performance and was in the range of 20% slower than what Intel offered.

February 28, 2017 | 08:35 PM - Posted by Lance Ripplinger (not verified)

Great article Josh! Thank you for the trip down memory lane. The very first computer I bought with my own money, which I worked hard all summer to save up for, had a K6-2 400MHz in it. The board in it was a Jetway-branded thing, I believe. You mentioned in the article how AMD had issues with AGP, and I remember quite well the trouble I had. I remember the motherboard had an ALi chipset, which quite frankly was abysmal. Even though my ATI Rage 128 16MB worked OK on it, I could never get any other video card to work on it; even another ATI card wouldn't work. After that system died (the motherboard had smoke coming off of it, good riddance), I built a system for myself that had the early K7 Slot A Athlon 700MHz cartridge processor. I believe at the time I used a Biostar board. I think I gave that system to my parents while in college, because I built myself another Slot A Thunderbird Athlon system that ran at 800MHz. That thing was great for gaming. I also remember all of the despicable, very illegal things Intel was doing at the time, and how they got away with it. That really hurt AMD, although I think AMD really killed themselves when they way overpaid for ATI. Oh yeah, I had an AMD Sempron system I built back in 2005 or '06ish, with an Abit board. That board burned up though. Anyway, I could go on forever, but yeah, those were the days indeed.

February 28, 2017 | 08:52 PM - Posted by Josh Walrath

I don't miss the chipset and motherboard issues that plagued AMD for many years. When 890FX and later 900 series came out, they had so much better overall quality and support.  Took a while to get there.  Hope the new series is just as good, if not better.

March 1, 2017 | 03:41 AM - Posted by John Collins (not verified)

A very interesting article, made my day... much appreciated, thanks Mr. Walrath. Having a person with your many, many, many :) years of experience on the staff is definitely a credit to PCPER.

March 1, 2017 | 07:59 PM - Posted by Josh Walrath

Glad you enjoyed it. Thanks for calling me old!

March 2, 2017 | 12:55 AM - Posted by John Collins (not verified)

I thought the not too subtle reference might raise a smile :) Pushing 50 here and my first PC was an AMD386DX40. Unlike most of my peers, my addiction to building and tinkering with PCs never abated.
Currently my "high end machine" is a GTX1070/4790K rig. IMO the tech websites need more people such as yourself who have a more mature perspective, much like a fine wine, improving with age, not to mention keeping the whippersnappers like Peak and Shrout in check :)
I look forward to the next podcast.

March 2, 2017 | 01:03 AM - Posted by Josh Walrath

My first PC was a CompuAdd NEC 8086 knockoff running at 10 MHz. Family bought a TRS-80 well before that. First machine I upgraded was a 386 SX 16 MHz. Added a Pro Audio Spectrum 16 and 4 more MB of SIMM RAM to it. First real enthusiast grade kit was a Quantex Pentium 133. That was the one I really tweaked and added to. Put the fire in my belly for these things.

March 1, 2017 | 05:45 AM - Posted by Anonymous (not verified)

OK, so I don't know this site very well, but I gotta say that the headline was very misleading and had nothing to do with Zen. It almost looks like a small piece just to say "hey people, AMD has had bad CPUs," with nothing about their good ones or what they are just about to release (whether good or bad). I am neither an AMD nor an Intel fanboy, but the headline was very misleading.

March 1, 2017 | 10:45 AM - Posted by Josh Walrath

You do know that there are 5 whole pages to this article where I glowingly covered K6, K7 Athlon, and the Athlon 64?  Not to mention a whole page at the end where I describe Zen and how it compares to current competition? I'm guessing you didn't get past the first page?

March 1, 2017 | 02:31 PM - Posted by Anonymous (not verified)

Great article Josh - one of the best I've read on PCPer in a while. One for the old gits to reminisce over, methinks...

March 1, 2017 | 06:54 PM - Posted by Josh Walrath

I originally got into computers in high school in the 80s, but it was really expensive so I couldn't afford anything.  It really wasn't until 1996 that I had the funds to start exploring hardware.  That is when I bought my first machine myself, and within about 5 months I had started to fiddle with it.  Adding the 3dfx Voodoo Graphics card supercharged my interest.  I've been hooked ever since.  I wish I'd had the chance to play with some of the older AMD parts pre-95.

March 1, 2017 | 07:46 PM - Posted by Anonymous (not verified)

Pretty much the same as me. Amigas till the early 90s, then onto a 486 > P120 > Orchid Righteous 3D, yadda yadda. I turned 40 the other day, which is depressing!

March 1, 2017 | 06:07 PM - Posted by Anonymous (not verified)

Love it. I was around for a lot of this, but it was before I started building; very cool to know that this CPU race at least used to be a very close one. Can only hope that becomes the case again.

March 1, 2017 | 06:55 PM - Posted by Josh Walrath

These things seem to cycle around.  The only thing really different this time is that while Intel hasn't been aggressively pushing the industry, it is certainly not in a weaker position architecturally compared to the Pentium !!! and Pentium 4 days.

March 1, 2017 | 11:47 PM - Posted by notwalle@notemail.com (not verified)

hey josh and guys thanks for the history lesson.

In my article I thought I was using a K6, but I guess it was an Athlon.

Anyway thanks again.

March 4, 2017 | 02:18 PM - Posted by Josh Walrath

Thanks for reading!

March 2, 2017 | 12:46 PM - Posted by Harney (not verified)

Great write up josh

thank you for this

March 4, 2017 | 02:18 PM - Posted by Josh Walrath

Appreciate it!

March 2, 2017 | 07:24 PM - Posted by Anonymous (not verified)

"Going with a x86 decode with a “risc-y” core solved a lot of problems and we have essentially have had that solution ever since."

I think using statements like this causes confusion. The micro-ops are not the equivalent of RISC instructions. They are probably quite long and complicated, because they embed a lot of information about the original AMD64 instruction, and they may include a lot of run-time data as well, like register renaming state.

In my opinion, RISC and CISC are obsolete terms. Modern processors are closer to CISC with a few RISC-like features. The main thing you want is fixed-length instruction encoding to allow for easier pipelining and superscalar, out-of-order execution. You also don't want a large number of complicated addressing modes. Even with an old CISC ISA, those can mostly be worked around: the compilers just avoid the complex addressing modes, and the complex, irregular-length instruction encoding is converted to micro-ops that the backend can pipeline.

I don't consider even the ARM ISA to be anywhere close to a traditional RISC ISA. It has a huge number of very specialized instructions, which is the exact opposite of a RISC ISA. It is cleaner and simpler to decode than x86, but it is not RISC.
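To make the distinction concrete, here is a toy sketch of front-end "cracking" — a memory-operand instruction split into simpler internal operations. The micro-op names, tuple format, and temporary register are invented for illustration only and don't correspond to any real pipeline:

```python
def crack(instr):
    """Split one x86-style instruction tuple into a list of micro-ops.

    A memory-source ALU instruction becomes a load micro-op feeding an
    ALU micro-op through a temporary; register-register forms map 1:1.
    """
    op, dst, src = instr
    if op == "add" and src.startswith("["):
        # add rax, [rbx+8]  ->  load tmp0, [rbx+8] ; alu_add rax, tmp0
        return [("load", "tmp0", src), ("alu_add", dst, "tmp0")]
    return [("alu_" + op, dst, src)]

print(crack(("add", "rax", "[rbx+8]")))
# [('load', 'tmp0', '[rbx+8]'), ('alu_add', 'rax', 'tmp0')]
```

The point being: the internal operations are an implementation convenience that can carry whatever renaming and bookkeeping fields the backend needs — they are not an exposed RISC instruction set.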

March 3, 2017 | 02:22 PM - Posted by Josh Walrath

I seem to remember discussions back in the day about this way of decoding x86 instructions, and they would often term it "RISC-y".  It certainly is not RISC, but you can see why they would use such a term to describe it back in 1995.

March 12, 2017 | 01:41 PM - Posted by spkay31

Very good article covering the major CPU milestones for AMD. I hope the younger readers who may not be that familiar with past AMD successes take the time to understand the advances made by AMD and the effect on keeping Intel R&D moving forward at a more rapid pace. As in any market, competition brings out the best in everything. Better products, better pricing and more rapid advances. Let's hope AMD continues with the initial success of Ryzen.