
AMD's Processor Shift: The Future Really is Fusion

Author: Josh Walrath
Subject: Editorial
Manufacturer: AMD

Retiring the Workhorses

There is an inevitable shift coming.  Honestly, this has been quite obvious for some time, but it has taken AMD a bit longer to get here than many expected.  Some years back AMD unveiled its new motto, "The Future is Fusion".  While many dismissed it as an interesting but trite bit of marketing, it actually foreshadowed the massive shift from monolithic CPU cores to APUs.  Right now AMD's APUs are doing "ok" on the desktop and are gaining traction in mobile applications.  What most people do not realize is that AMD will be going all APU, all the time, in the very near future.


We can look over the past few years and see that AMD has been headed in this direction for some time, but they simply have not had all the materials in place to make this dramatic shift.  To get a better understanding of where AMD is heading, how they plan to address multiple markets, and what kind of pressures they are under, we have to look at the two major non-APU markets that AMD is currently hanging onto by a thread.  In some ways, timing has been against AMD, not to mention available process technologies.


 

The Desktop and the End of AM3+

Currently AMD has two tiers of desktop products: the FM2-based APUs and the AM3+ based FX CPUs (as well as the older Athlon II and Phenom II products that are still available).  The AM3+ socket infrastructure is a mature foundation, but it is still very fast and effective for what it does.  For a while AMD had some of the most feature-packed chipsets available for their processors, but these were often overshadowed by CPUs that performed below Intel's latest products of the time.  The release of the 890FX with the SB850 southbridge was a watershed event, as they arguably had the most advanced chipset out at the time.  The 40 PCI-E 2.0 lanes supported by the 890FX were complemented by the first SATA 6G southbridge available.  Not only that, but it was the first to natively support six SATA 6G devices.


The Asus Crosshair IV was an excellent example of the then cutting-edge 890FX/SB850 combo.  Too bad the base chipset has not been improved upon in years... and likely never will be.

AMD had a chipset that checked all the major boxes for OEMs.  They complemented the 890FX with cut-down versions of the chipset to hit different price points, and they still had very competitive integrated graphics with the 880G and 890GX.  The Phenom II CPUs were doing "ok" against the latest Intel offerings, but they were certainly not at the performance level of the i7 series of the time.  The promise of Bulldozer was initially good, and AMD was expecting to compete with Intel's fastest.  Then things went sideways.  Early Bulldozer results were very disappointing; the fastest chips off the line were barely hitting Phenom II X6 1090T performance.  A few delays later we finally got to see the much-ballyhooed FX processors, and were duly unimpressed.

To ready the market for Bulldozer support, AMD released the 990FX chipset family.  The 990FX was simply a rebrand of the 890FX; the new name signaled the change in VRM specifications from the AM3 to the AM3+ socket infrastructure, so as not to confuse customers.  During this time the decision was made to cancel the upcoming 1090FX northbridge and the SB1050 southbridge.  This was the first real tip-off that AM3+ would not be a supported platform beyond a certain point.

This was later confirmed when roadmaps were released showing that the Vishera lineup would be supported on the AM3+ platform throughout 2013 and 2014.  The Steamroller architecture, which will be introduced with the Kaveri parts, will not be represented on AM3+.  Steamroller does appear to be a big step up in both IPC and multi-processing as compared to the Zambezi and Piledriver architectures.  Sadly, for those users hoping for one last upgrade on the AM3+ platform, the prospects of a large Steamroller-based product on that platform are slim.

 

The Server Market

The original Opteron was another great success for AMD, and it catapulted them into the server market in a big way.  These processors featured HyperTransport connectivity, integrated memory controllers, native 64-bit processing, and performance that outpaced every Xeon on the market.  Unfortunately, the salad days did not last terribly long for AMD.  Intel came back with Core 2 based products and soon regularly outpaced AMD's latest.

Over the years AMD's market share in the server space has continued to erode.  They have been able to stave off extinction here by focusing on high core count products at very reasonable prices.  By keeping TDPs low and core counts high, AMD has been able to maintain a presence in this lucrative market.  The cracks are showing, though.  AMD has not really updated its server-side chipset offerings since around 2008.  The southbridge implemented on nearly every board is based on the old (and somewhat buggy) SB750.  That means no SATA 6G capability, not to mention slower I/O speeds even with modern SSDs.

Things are slightly different here than on the desktop.  AMD has announced Warsaw-based processors for the server market.  Very little information has been released about the specifics of these products, but the one thing AMD was quick to point out is that they will be faster and more power efficient than the current Opteron 6300 series.  These claims point to the chips being based on the new Steamroller architecture and being manufactured on 28 nm rather than the larger, older 32 nm PD-SOI process that current AMD CPUs and APUs are built on.


The Supermicro H8SGL is still considered a modern motherboard supporting G34-based Opterons.  Unfortunately, it is several years old and relies on a chipset design that was released in 2008/2009.

Here is where things get interesting.  If AMD was going to go ahead and design Steamroller parts for C32 and G34, why wouldn't they port those designs over to AM3+?  Also, why have they not updated their server chipsets if they are going to keep pushing these sockets?  There will be no PCI-E 3.0 northbridges, and I do not believe that AMD is putting in the extra effort on their latest A85X and A88X I/O hubs so that they can be certified to work on server platforms.  So why even go there?

The answer seems to be that Warsaw is a stopgap measure to keep AMD's foot in the door in the server market.  AMD has put in the time to create the Warsaw design, but they will not be focusing on optimizing it for high clock speeds.  Manufacturing and design appear to be aimed at getting the new product into the upper 2 GHz range while still supporting 4, 8, 12, and 16 thread models.  They look to be aiming at keeping these faster than the previous Interlagos-based chips without going overboard on TDP.  While designing any modern CPU is a time and manpower intensive operation, this particular scenario looks to be one where AMD is abandoning the high end and putting out a product that is better than its predecessors and good enough for its intended role.  Warsaw is designed to keep AMD relevant in the server market and keep their current partners pleased with the offerings.  It really does look like a stopgap measure, as the rest of the Opteron ecosystem has remained essentially untouched (and un-updated) for years.

September 4, 2013 | 03:45 PM - Posted by 16stone

I agree!

September 4, 2013 | 04:02 PM - Posted by Coupe

This is one of the most well written articles I have read in a long time.

September 4, 2013 | 05:09 PM - Posted by boothman

Was really hoping for an updated AM3+ chipset (995FX?) and perhaps the commercial release of the 95W FX-8300, or a further tweaked Vishera (not 9590 btw). Something to keep high core users engaged until Kaveri or Carrizo is ready with 6-8 core offerings.

I agree with Coupe, very well written article Josh. An excellent read!

September 4, 2013 | 05:31 PM - Posted by ArcRendition

Fantastic article Josh. Informative, concise and clearly articulated.

September 4, 2013 | 06:28 PM - Posted by praack

nice article, i remember being very much in support of amd's first white papers on processor design into the apu space, but then saw the result and the push to entry level.

so i never bought into the apu because the business side kept it as an entry level part.

if they can get the speeds up, get the graphics up, and allow them to communicate with more than low end parts - maybe it will be the processor amd needs. if not - well, more people will move to intel

September 4, 2013 | 06:48 PM - Posted by Brett from Australia (not verified)

Very well written article Josh. We have been waiting for some time to see where AMD was going with their CPUs/APUs. I was particularly interested to see what lay ahead with the FM2+ parts; an interesting roadmap lies ahead.

September 4, 2013 | 07:33 PM - Posted by Bill (not verified)

What I got out of that article is if you're waiting for AMD to come out with something better to compete with Intel, stop waiting and buy Intel now. As usual, AMD will only state that something better is on the horizon and it never materializes.

September 4, 2013 | 08:00 PM - Posted by ArcRendition

That's the opposite of what the article was conveying. AMD isn't trying to compete with Intel directly. They're trying to cultivate a new "Fusion" APU standard with hUMA and consequently change the landscape of CPU dynamism.

September 9, 2013 | 06:56 AM - Posted by Anonymous Coward (not verified)

The problem is that nobody needs "fusion", or "huma", whatever the hell that is.

The bottom end of the market cares about cost only and nothing else, and margins are so slim it doesn't really matter how many units you sell.

For the midrange and up the performance of the CPU cores matters. AMD seems hell bent on using words to convince others that this doesn't matter, but right now it's ALL that matters.

People want fast cores, otherwise what's the point in upgrading? 8 cores? Who cares, the market for 8 slow cores is almost zero. The market for huma is exactly zero.

September 9, 2013 | 07:22 AM - Posted by Josh Walrath

Qualcomm seems to be doing pretty good with low cost/low margin chips.

September 19, 2013 | 09:14 AM - Posted by Anonymous (not verified)

CPUs are already obsolete. Intel does not own any high-end GPUs and does not own high-end GPU IP.

The current Intel CPU architecture is already obsolete and there is not much you can do to enhance it, besides improving the chip foundry process, which is very expensive.

AMD has a new architecture that will really be released by the beginning of next year. The current AMD APU architecture is a work in progress.

The AMD APU fusion design is the way to go. Integrating the CPU and GPU functions into one seamless processor is the way to remove all the current architecture bottlenecks, i.e. two different memory banks, data clogging the bus, etc.

Soon the AMD APU's CPU/GPU elements will share the same memory, will be able to use GDDR5 for GPU performance, etc.

Intel doesn't have anything close to the AMD APUs, and they don't have any high-performance GPUs.

Systems with only an Intel CPU are complete dogs. Intel desperately needs, and is totally dependent on, NVIDIA and AMD high-end GPUs in a system for it to be considered a high-end system.

AMD has great CPUs, great GPUs, great CrossFire technology for multi-GPU performance scaling, etc.

Intel's marketing catches all the naive people who think that what makes a high-end system is the CPU.

Any high-end system needs a good CPU and one or more high-end GPUs, which Intel does not own and has failed miserably at designing (Larrabee, etc.).

So there is no high-end system made of Intel-only parts, but THERE ARE HIGH END SYSTEMS MADE OF AMD ONLY PARTS...!!

Intel needs NVIDIA and AMD to be able to build a high-end system; AMD DOES NOT need any other company.

So the bottom line is there are no INTEL high-end systems, because they all need NVIDIA or AMD high-end GPUs or it will not be called a high-end gaming/etc. system.

September 25, 2013 | 12:33 PM - Posted by RJ (not verified)

Intel will do exactly the same thing AMD did - buy a GPU company (NVIDIA will be easy prey). They will then take a while to chew it up and integrate everything, like AMD did. I am afraid, however, that at the end of that misstep Intel will still have a superior product to AMD, if nothing else because it has better (smaller nm) production.

It's going to be the story of 2000-2005 all over again: AMD comes up with a great idea, but after a couple of years INTC's money stash will make up the difference and Intel's technological processes will propel it to undisputed leader again.

January 1, 2014 | 03:29 AM - Posted by Eric Slattery (not verified)

Intel is not allowed to buy Nvidia or AMD, because AMD is their only big competitor (monopoly law) and Nvidia is in the same position with Tegra, so they cannot be owned by anyone else. Technically it is cheap Nvidia graphics on the Intel die; they are just that terrible though.

AMD is developing exclusively for itself, and then the other members of the HSA effort (ARM, Samsung, etc.) are all going to work to utilize the hardware, making it widely available and integrated. AMD may have patented its current idea to bridge the CPU and GPU for parallel processing and offloading, making Intel either pay them or come up with something of their own to use that setup, just as Intel pays AMD for the x64 architecture it manufactures. Right now APUs will be best used in the mobile market, but they will eventually catch up and be used widely on the desktop, and we are already seeing integration on servers too. AMD is also benefiting at the moment from their GPUs being bought like hot cakes for cryptomining.

If Intel was the sole provider, we would get a processor every year that is 3-5% better than last year's, with maybe a slightly smaller die. At least AMD has made innovations with their chips, and now that software is taking advantage of the CPU modules of Piledriver and Bulldozer, their multithreaded performance is booming in many applications, especially with the FX-8350 keeping up with the 300-500 dollar i7s. Intel has not really done much innovation on the CPU side, sadly: they did not push the smaller manufacturing sizes, they have someone else do the manufacturing process, and then there has to be a market for it, which is the mobile market driving lower power usage, but they have been rather stale for a few years.

September 4, 2013 | 09:38 PM - Posted by ezjohny

I think AMD made a good decision, but they still need some work on their "Single Core Speed" for the processor. For the desktop side, let's hope they get the hardcore gamers on APUs!!!

September 4, 2013 | 11:12 PM - Posted by puppetworx (not verified)

Cool article Josh. I like this prospective, analysis-based type of article and I'd like to see more of them. Some of this information and analysis would have been included in product reviews, but a lot of it wouldn't, and besides, it's nice to have it all brought together and extrapolated on. Very cool.

This all makes me wonder how, and if, NVIDIA is going to gain any traction.

@ezjohny Single core speed should with any luck be less significant in the near future, at least for gaming. Yeah I know, we've been hearing that one for years, but with the Xbone and PS4 both housing 8-core processors clocked at a measly 1.6GHz, multi-core optimization simply has to happen.

Hardcore gamers on APUs though... you crazy!

September 4, 2013 | 11:57 PM - Posted by Anonymous (not verified)

While I can certainly appreciate the fact that AMD's plans are coming along nicely, as a tech enthusiast living in South Africa this doesn't give me any solace in the fact that they are working on better products. I'm planning on upgrading my PC next year, but what exactly do I have to look forward to?

I mean, I have the choice of Haswell and a new chipset, AM3+ with outdated everything (but more cores) or FM2+, which will taper out at four cores and won't see anything capable of addressing six or more threads. I'd like to remain with AMD as my Athlon X3 has served me well, but at the rate they've been moving I might as well move to Intel.

And that sucks. I don't like gimped products that have been restricted primarily because Intel knows they have the market by the balls.

September 5, 2013 | 12:47 AM - Posted by imadman

Superb article, congrats Josh!

September 5, 2013 | 03:07 AM - Posted by Anonymous (not verified)

Warsaw will be Piledriver based, basically a refresh of current Opterons. So just a few tweaks, like Richland is to Trinity. Look at the roadmap pic here: http://www.pcper.com/news/General-Tech/AMDs-plans-keep-their-ARMs-server-room

The thing that bothers me about going all-APU is the fact that an APU will never be as powerful as a discrete CPU+GPU setup, simply because it's easier to make two 250 mm² chips than it is to make one 500 mm² chip. So for gamers and enthusiasts who want a discrete CPU, AMD won't have anything on offer (assuming Kaveri isn't a CPU jesus that crushes Intel's performance), and even Intel's chips will have 50% wasted on an unused GPU.

Perhaps some day someone will figure out how to make use of the iGPU for compute even while a discrete graphics card is in use, or perhaps we might see multi socket motherboards that you can plug 2-4 APUs into in some sort of crossfire arrangement?

It really bugs me to have so much wasted silicon lying around.
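
To put rough numbers on the die-size argument above, here is a minimal Python sketch using a simple Poisson defect-density model. The defect density and die areas are illustrative assumptions for the sake of the example, not figures from AMD or from this article.

    import math

    def die_yield(area_mm2: float, defects_per_mm2: float) -> float:
        """Poisson yield model: probability that a die of the given area
        contains zero killer defects."""
        return math.exp(-defects_per_mm2 * area_mm2)

    # Hypothetical defect density of 0.2 defects per cm^2 (purely illustrative).
    d0 = 0.2 / 100.0  # converted to defects per mm^2

    small = die_yield(250, d0)  # a 250 mm^2 CPU-only or GPU-only die
    large = die_yield(500, d0)  # a 500 mm^2 combined APU die

    print(f"Yield of a 250 mm^2 die: {small:.1%}")  # ~60.7%
    print(f"Yield of a 500 mm^2 die: {large:.1%}")  # ~36.8%
    # With two separate dies a defect scraps only half the silicon,
    # while a single defect anywhere on the big die scraps the whole part.

Real yield models also account for defect clustering and redundancy, but the direction of the effect is the same: smaller dies yield better, which is exactly the manufacturing pressure the comment describes.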

September 6, 2013 | 07:17 AM - Posted by Josh Walrath

I will be interested to finally hear what Warsaw actually is.  They haven't done well in educating the public/press about it.

All APU is not necessarily a bad thing.  With HSA there really could be some usage case scenarios where that GPU portion is utilized to a large degree, even with a standalone GPU in the system.  Collision and physics in game could really use that type of horsepower without having to dedicate the GPU to those calculations (and thereby potentially decreasing performance over a hUMA solution due to context switching, copying memory, etc.).  AMD has a long ways to go before it can be used in such a situation, but I think it is coming.  The APU is a good low end processor as well as you still get decent 3D graphics performance essentially for free.

October 15, 2013 | 02:35 AM - Posted by Anonymous (not verified)

The thing that bothers me about going all-APU is the fact that an APU will never be as powerful as a discrete CPU+GPU setup, simply because it's easier to make two 250 mm² chips than it is to make one 500 mm² chip. So for gamers and enthusiasts who want a discrete CPU, AMD won't have anything on offer (assuming Kaveri isn't a CPU jesus that crushes Intel's performance), and even Intel's chips will have 50% wasted on an unused GPU.

Perhaps some day someone will figure out how to make use of the iGPU for compute even while a discrete graphics card is in use...

DUDE, this is exactly what HSA means!
The CPU part covers the floating point calculations, while the iGPU covers the integer calcs. The discrete GPU then has more than enough headroom to take care of the rest.

So basically AMD turns their biggest disadvantage into their biggest advantage. Their CPUs excel at floating point calcs and they offset their poor integer calcs with the power of the igpu, making the APUs far more capable at pretty much everything than every traditional CPU could ever be.

January 8, 2014 | 09:30 AM - Posted by HikingMike (not verified)

Switch integer and floating point.

September 5, 2013 | 06:17 AM - Posted by gamerk2 (not verified)

"Multi-core aware software is still not entirely common, much less applications which can utilize more than four threads."

I take issue with this. Ever take a look at application thread counts in task manager/process explorer?

The main issue is that most of the work is serial; you do things in a logical order, with very little opportunity to break up processing. This results in non-scaling applications, irrespective of how many threads they use. (The side effect of this is giving Intel a performance edge in most tasks, another reason why AMD would be wise to abandon BD and its derivatives.) The things that do easily scale, such as media encoding, physics, and rendering, are already offloaded to the GPU, leaving the CPU with all the non-scaling workloads.

Likewise, thread management is not the domain of the developer, but of the OS. Windows assigns threads to cores. At any given instant, the highest priority thread(s) that are ready to run are executed. The scheduler does a lot of work behind the scenes to adjust priorities, but that's the gist of Windows thread management. After the developer invokes CreateThread() (or one of its many wrappers), their job ends.

Point being: any application that uses more than one thread (and almost all do) is, by definition, multi-core aware. The issue is one of processing: can you logically break up processing in such a way as to perform processing on two different tasks at the same time, and not reduce performance due to thread overhead, deadlocks, and all the other performance penalties that start to crop up as you utilize more threads? For most tasks, the answer is no; after two or three threads, it becomes too difficult to break up work in such a way that you gain performance. Hence the current state of software.

/rant
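
A quick way to see why gains often stop "after two or three threads" is Amdahl's law. The short Python sketch below computes the theoretical speedup limit for a few thread counts; the serial fractions are chosen purely for illustration and are not measurements from any particular application.

    def amdahl_speedup(serial_fraction: float, threads: int) -> float:
        """Amdahl's law: upper bound on speedup when only part of the
        work can be parallelized across threads."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / threads)

    # Illustrative serial fractions; real workloads vary widely.
    for serial in (0.10, 0.30, 0.50):
        speedups = ", ".join(
            f"{n} threads: {amdahl_speedup(serial, n):.2f}x" for n in (2, 4, 8, 16)
        )
        print(f"{int(serial * 100)}% serial -> {speedups}")

Even with only 30% of the work stuck in serial code, sixteen threads top out below a 3x speedup, which is why per-core performance still dominates most desktop workloads and why eight slow cores are a hard sell.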

September 6, 2013 | 07:23 AM - Posted by Josh Walrath

I think we are actually saying the same thing, but you were able to codify this a whole lot better than I did!  When multi-core CPUs first came out and MS announced that Windows supported multi-threaded environments, many people automatically assumed that Windows would magically cause many programs to go faster just because it could divide up the workload in any application.  Of course, this assumption is incorrect because as you mentioned above, not all programs will benefit from multi-threading due to their workload being more serial.

One of the things I am constantly reading is "all these lazy programmers who can't take the time to make their app more multi-thread friendly", and that of course is not entirely true.  Sure, there are some lazy programmers who could potentially extract more parallelism from their code, but for the most part these guys do understand their programs and have already done a lot of work to improve performance in a multi-core environment.  Plus, as mentioned before, some programs just can't be broken up to address multiple threads.

Great comments though!

September 5, 2013 | 09:29 AM - Posted by TopHatKiller (not verified)

"Anon" is correct; Warsaw is a base?-respun piledrive, done for cost no doubt. 2015 AMD launches new 2/4p server architecture and platform; intergrated gen3 so nearer fm then am? Very high end desktop cpu could/will be derived from that. Core count: up to 20/chip on 0.2. SteamrollerB or Excavator. Seems certain no am3 follow up though - this chip willbe fm2+ or 3. AMD has made clear their commitment to the GROWING high end server market, not just micro, and 15h chips will be released in this market, it is foolish and overly pesimistic to assume otherwise.
This made me laugh: Hasnot is a quad because Intel knows high core counts do not translate into higher performance on the desktop? Giggle, you silly silly man, Intel is screwing us all delivering pathetic low-cost parts, keeping price skyhigh and pocketing massive profits.So many sites and journos not only never combat intel on its policies, in particular the hugh screw-up over trigate,but say tar thanks oh mighty one for screwing my bankaccount.

September 6, 2013 | 07:28 AM - Posted by Josh Walrath

You should watch our podcast; we discuss how Intel is actually screwing us over.  I believe it is in the Ivy Bridge-E section that we take Intel to task for not delivering the goods in a timely manner and making us pay for it out the teeth.  Consider the overall performance difference between an i7 2600/2700K and the latest Haswell based 4770K.  Sure, there is a performance difference, but it is pretty marginal considering the number of years that the 2600K has been out.  We saw this back in the days of the K6/Pentium II, and Intel is acting remarkably similar (conservative roadmaps for CPUs with high prices attached to them).

Tri-gate is not a bad idea, but I think that Intel did not expect the huge jump in power when the speed was raised above 3.9 GHz with these chips.  It looks like planar/FD-SOI will actually have superior power/switching characteristics at 22/20 nm.  Sorta wish Intel had utilized FD-SOI, but they prefer to stay with bulk silicon and keep margins higher.

September 6, 2013 | 09:47 AM - Posted by TopHatKiller (not verified)

Thanks for the reply. Sadly I just never listen to podcasts. Perhaps the 'silly' repetition was a little rude.
I'm still expecting 2015 to be AMD payback time, but God knows AMD has cancelled so many CPUs over the last couple of years - who can really say what the high end server / culled desktop parts will actually be anymore?

September 6, 2013 | 10:20 AM - Posted by Josh Walrath

I think they are on a good track with Steamroller.  We will see some nice IPC improvements as well as much better thread handling (namely, two modules can simultaneously handle 4 threads, unlike current iterations which can do one thread per clock per module).  Can't wait to get our hands on Kaveri and see what the combination of Steamroller and GCN can give desktop users.

September 5, 2013 | 11:00 AM - Posted by MarkT (not verified)

I'll be honest, I was sad the entire time reading this article; nothing gave me hope. Why, oh why is this good news.....

September 5, 2013 | 11:27 AM - Posted by Computer Ed (not verified)

Welcome to the church, brothers; nice to see others arriving at this future. The thing I love is that so many web sites are finally starting to see this - I was talking about this very move in 2011 :-)

Better late than never, welcome to the show...

September 5, 2013 | 12:15 PM - Posted by Anonymouspipm1 (not verified)

It'll be disappointing if a 4-threaded Kaveri APU still doesn't have any L3 cache and doesn't outperform a 4-thread Vishera CPU with L3 cache. :(
