AMD's Processor Shift: The Future Really is Fusion

Author: Josh Walrath
Subject: Editorial
Manufacturer: AMD

Retiring the Workhorses

There is an inevitable shift coming.  Honestly, this has been obvious for some time; it has simply taken AMD a bit longer to get here than many expected.  Some years back we saw AMD unveil their new motto, “The Future is Fusion”.  While many dismissed it as interesting but trite, it actually foreshadowed the massive shift from monolithic CPU cores to their APUs.  Right now AMD’s APUs are doing “ok” in desktops and are gaining traction in mobile applications.  What most people do not realize is that AMD will be going all APU, all the time, in the very near future.


We can look over the past few years and see that AMD has been headed in this direction for some time, but they simply have not had all the materials in place to make this dramatic shift.  To get a better understanding of where AMD is heading, how they plan to address multiple markets, and what kind of pressures they are under, we have to look at the two major non-APU markets that AMD is currently hanging onto by a thread.  In some ways, timing has been against AMD, not to mention available process technologies.


 

The Desktop and the End of AM3+

Currently AMD has two levels of desktop products: the FM2 based APUs and the AM3+ based FX CPUs (as well as the older Athlon II and Phenom II products that are still available).  The AM3+ socket infrastructure is a mature one, but it is still very fast and effective for what it does.  For a while AMD had some of the most feature-packed chipsets available for their processors, but these were often overshadowed by CPUs that lagged behind Intel’s latest products of the time.  The release of the 890FX with the SB850 southbridge was a watershed event, as AMD arguably had the most advanced chipset out at the time.  The 40 PCI-E 2.0 lanes supported by the 890FX were complemented by the first SATA 6G based southbridge available.  Not only that, but it was the first to natively support six SATA 6G devices.


The Asus Crosshair IV was an excellent example of the then cutting-edge 890FX/SB850 combo.  Too bad the base chipset has not been improved upon in years... and likely never will be.

AMD had a chipset that hit all the major checkmarks for OEMs.  They complemented the 890FX with cut-down versions of the chipset to hit different price points, and they still had very competitive integrated graphics with the 880G and 890GX.  The Phenom II CPUs were doing “ok” against the latest Intel offerings, but they were certainly not at the performance level of the i7 series of the time.  Bulldozer initially looked promising, and AMD was expecting to compete with Intel’s fastest.  Then things went sideways.  Early Bulldozer results were very disappointing; the fastest chips off the line were barely hitting Phenom II X6 1090T performance.  A few delays later we finally got to see the much-ballyhooed FX processors, and were duly unimpressed.

To ready the market for Bulldozer support, AMD released the 990FX chipset family.  The 990FX was simply a rebrand of the 890FX; the new name reflected the change in VRM specifications from the AM3 to the AM3+ socket infrastructure, so as not to confuse customers.  During this time the decision was made to cancel the upcoming 1090FX northbridge and the SB1050 southbridge.  This was the first real tip-off that AM3+ would not be a supported platform after a certain point.

This was later confirmed when roadmaps were released showing that the Vishera lineup would be supported on the AM3+ platform throughout 2013 and 2014.  The Steamroller architecture, which will be introduced with the Kaveri parts, will not be represented on AM3+.  Steamroller does appear to be a big step up in both IPC and multi-processing compared to the Zambezi and Piledriver architectures.  Sadly, for those users hoping for one last upgrade on the AM3+ platform, the prospects of a large Steamroller based product on that platform look slim.

 

The Server Market

The original Opteron was another great success for AMD, and it catapulted them into the server market in a big way.  These processors featured HyperTransport connectivity, integrated memory controllers, native 64 bit processing, and performance that outpaced every Xeon on the market.  Unfortunately, the salad days did not last terribly long for AMD.  Intel came back with Core 2 based products and soon regularly outpaced AMD’s latest.

Over the years AMD’s market share in the server space has continued to erode.  They have been able to stave off extinction here by focusing on high core count products at very reasonable prices.  AMD keeps the TDPs low and the core count high, and has been able to maintain a presence in this lucrative market.  The cracks are showing, though.  AMD has not really updated its server side chipset offerings since around 2008.  The southbridge implemented in nearly every board is based on the old (and somewhat buggy) SB750.  This means no SATA 6G capabilities, not to mention slower I/O speeds even with modern SSDs.

Things are slightly different here than with the desktop.  AMD announced the Warsaw based processors for the server market.  Very little information has been released about the specifics of these products, but the one thing AMD was quick to point out is that they will be faster and more power efficient than the current Opteron 6300 series.  These claims point to the chips being based on the new Steamroller architecture and being manufactured on a 28 nm process rather than the larger, older 32 nm PD-SOI process that current AMD CPUs and APUs use.


The Supermicro H8SGL is still considered a modern motherboard supporting G34 based Opterons.  Unfortunately, it is still several years old and relies on a chipset design that was released in 2008/2009.

Here is where things get interesting.  If AMD was going to go ahead and design Steamroller parts for C32 and G34, why wouldn’t they port those designs over to AM3+?  Also, why have they not updated their server chipsets if they are going to keep pushing these sockets?  There will be no PCI-E 3.0 northbridges, and I do not believe that AMD is putting in the extra effort on their latest A85X and A88X I/O hubs so that they can be certified to work on server platforms.  So why even go there?

The answer to that one seems to be that Warsaw is a stopgap measure to keep AMD’s foot in the door in the server market.  AMD has put in the time to create the Warsaw design, but they will not be focusing on optimizing it for high clock speeds.  Manufacturing and design appear to be aimed at getting the new product into the upper 2 GHz range while still supporting 4, 8, 12, and 16 thread models.  They look to be aiming at keeping these faster than the previous Interlagos based chips without going overboard on the TDP.  While designing any modern CPU is a time and manpower intensive operation, this particular scenario looks to be one where AMD is abandoning the high end and putting out a product that is better than its predecessors and good enough for that aim.  Warsaw is designed to keep AMD relevant in the server market and keep their current partners pleased with the offerings.  It really does look to be a stopgap measure, as the rest of the Opteron ecosystem has remained essentially untouched (and un-updated) for years.

September 4, 2013 | 06:45 PM - Posted by 16stone

I agree!

September 4, 2013 | 07:02 PM - Posted by Coupe

This is one of the most well written articles I have read in a long time.

September 4, 2013 | 08:09 PM - Posted by boothman

Was really hoping for an updated AM3+ chipset (995FX?) and perhaps the commercial release of the 95W FX-8300, or a further tweaked Vishera (not 9590 btw). Something to keep high core users engaged until Kaveri or Carrizo is ready with 6-8 core offerings.

I agree with Coupe, very well written article Josh. An excellent read!

September 4, 2013 | 08:31 PM - Posted by ArcRendition

Fantastic article Josh. Informative, concise and clearly articulated.

September 4, 2013 | 09:28 PM - Posted by praack

nice article, i remember being very much in support with amd's first white papers on processor design into the apu space, but then saw the result and the push to entry level.

so never bought into the apu because the business side kept it as an entry level part.

if they can get the speeds up, get the graphics up and allow them to communicate with more than low end parts - maybe it will be the processor amd needs- if not - well more people will move to intel

September 4, 2013 | 09:48 PM - Posted by Brett from Australia (not verified)

Very well written article Josh. We have been waiting for some time to see where AMD was going with their CPUs/APUs. I was particularly interested to see what lay ahead with FM2+ parts; an interesting roadmap lies ahead.

September 4, 2013 | 10:33 PM - Posted by Bill (not verified)

What I got out of that article is if you're waiting for AMD to come out with something better to compete with Intel, stop waiting and buy Intel now. As usual, AMD will only state that something better is on the horizon and it never materializes.

September 4, 2013 | 11:00 PM - Posted by ArcRendition

That's the opposite of what the article was conveying. AMD isn't trying to compete with Intel directly. They're trying to cultivate a new "Fusion" APU standard with hUMA and consequently change the landscape of CPU dynamism.

September 9, 2013 | 09:56 AM - Posted by Anonymous Coward (not verified)

The problem is that nobody needs "fusion", or "huma", whatever the hell that is.

The bottom end of the market cares about cost only and nothing else, and margins are so slim it doesn't really matter how many units you sell.

For the midrange and up the performance of the CPU cores matters. AMD seems hell bent on using words to convince others that this doesn't matter, but right now it's ALL that matters.

People want fast cores, otherwise what's the point in upgrading? 8 cores? Who cares, the market for 8 slow cores is almost zero. The market for huma is exactly zero.

September 9, 2013 | 10:22 AM - Posted by Josh Walrath

Qualcomm seems to be doing pretty good with low cost/low margin chips.

September 19, 2013 | 12:14 PM - Posted by Anonymous (not verified)

CPU's are obsoleted already. Intel does not own any high-end GPU's and does not own high-end GPU IP.

The current Intel CPU architecture is already obsoleted and not much you can enhance it, besides improving the chip foundry process which is very expensive.

AMD has a new architecture that will be really released by the beginning of next year. The current AMD APU architecture is work in progress.

AMD APU fusion design is the way to go. Integrating the CPU and GPU functions into one seamless processor is the way to remote all the current architecture bottlenecks. i.e. two different memory banks, data clogging the bus, etc.

Soon the AMD APU's CPU/GPU elements will share the same memory, will be able to use GDDR5 for GPU performance, etc.

Intel doesn't have anything close to the AMD APU's, and they don't have any high performance GPU's.

Systems with only an Intel CPU are complete dogs. Intel desperately needs and is totally dependent on NVIDIA and AMD high-end GPU's in a system to be able to be considered an high end system.

AMD has both great CPU's, great GPU's, great Crossfire technology for multi-GPU performance scaling, etc.

Intel's marketing catches all naive people that think that what makes a high-end system is the CPU.

Any high end system needs a good CPU and one or more high-end GPU's, which Intel does not own and has failed miserably at designing (Larrabee, etc).

So there is no high-end system made of Intel only parts, but THERE ARE HIGH END SYSTEMS MADE OF AMD ONLY PARTS...!!

Intel needs NVIDIA and AMD to be able to become a high end system, AMD DOES NOT need any other company.

So the bottom line is there are no INTEL high end systems, because they all need NVIDIA and AMD high end GPU's or it will not be called a high end gaming/etc. system.

September 25, 2013 | 03:33 PM - Posted by RJ (not verified)

Intel will do exactly the same thing AMD did - buy a GPU company (nVidia will be easy prey). They will then take a while to chew it up and to integrate everything, like AMD did. I am afraid, however, that at the end of that misstep Intel will still have a superior product to AMD, if nothing else because it has better (smaller nm) production.

Its going to be the story of 2000-2005 all over again - AMD comes up with great idea but after couple of years INTC's money stash will make up the difference and Intel's technological processes will propel it to undisputed leader again.

January 1, 2014 | 06:29 AM - Posted by Eric Slattery (not verified)

Intel is not allowed to buy Nvidia or AMD because AMD is their only big competitor (Monopoly law) and Nvidia is with Tegra, so they cannot be owned by anyone else, and technically it is cheap Nvidia Graphics on the Intel die, they are just that terrible though.......AMD is developing exclusively for itself, and then other people in the HSA (ARM, Samsung, etc etc) are all going to work to utilize the hardware, making it widely available, and being integrated.

AMD may have patented its current idea to bridge the CPU and GPU for parallel processing and offloading, making Intel have to pay them or come up with something on their own to use that set up, just as Intel pays AMD for x64 architecture to manufacture. Right now APUs will be best used in the mobile market, but they will eventually catch up and be used on the Desktop, wide stream, and we are already seeing integration on servers too. AMD is benefiting at the moment too that their GPUs are being bought like hot cakes for cryptomining.

If intel was the sole provider, we would get a processor every year that is 3-5% better than last year, with maybe a slightly smaller die. At least AMD has made innovations with their chips, and now that software is taking advantage of the CPU modules of Piledriver and Bulldozer, their multithreaded performance is booming in many applications, especially with the FX-8350 keeping up with the 300-500 dollar i7s. Intel has not really done much innovation in the CPU side sadly......they did not push the smaller manufacturing sizes, they have someone else do the manufacturing process, and then there has to be a market for it, which is the mobile market and that is driving lower power usage, but they have been rather stale for a few years.

September 5, 2013 | 12:38 AM - Posted by ezjohny

I think AMD made a good decision, but they still need some work on their "Single Core Speed" for the processor. For the desktop side let's hope they get the hard core gamers on APU's!!!

September 5, 2013 | 02:12 AM - Posted by puppetworx (not verified)

Cool article Josh. I like this prospective, analysis-based type of article and I'd like to see more of them. Some of this information and analysis would have been included in product reviews but a lot of it wouldn't and besides it's nice to have it all brought together and extrapolated on. Very cool.

This all makes me wonder how and if NVIDIA is going to gain any traction

@ezjohnny Single core speed should with any luck be less significant in the near future, at least for gaming. Yeah I know, we've been hearing that one for years, but with the Xbone and PS4 both housing 8-core processors clocked at a measly 1.6GHz multi-core optimization simply has to happen.

Hardcore gamers on APUs though... you crazy!

September 5, 2013 | 02:57 AM - Posted by Anonymous (not verified)

While I can certainly appreciate the fact that AMD's plans are coming along nicely, as a tech enthusiast living in South Africa this doesn't give me any solace in the fact that they are working on better products. I'm planning on upgrading my PC next year, but what exactly do I have to look forward to?

I mean, I have the choice of Haswell and a new chipset, AM3+ with outdated everything (but more cores) or FM2+, which will taper out at four cores and won't see anything capable of addressing six or more threads. I'd like to remain with AMD as my Athlon X3 has served me well, but at the rate they've been moving I might as well move to Intel.

And that sucks. I don't like gimped products that have been restricted primarily because Intel knows they have the market by the balls.

September 5, 2013 | 03:47 AM - Posted by imadman

Superb article, congrats Josh!

September 5, 2013 | 06:07 AM - Posted by Anonymous (not verified)

Warsaw will be Piledriver based, basically a refresh of current Opterons. So just a few tweaks, like Richland is to Trinity. Look at the roadmap pic here: http://www.pcper.com/news/General-Tech/AMDs-plans-keep-their-ARMs-server-room

The thing that bothers me about going all-APU is the fact that an APU will never be as powerful as a discrete CPU+GPU setup, simply because it's easier to make two 250 mm² chips than it is to make one 500 mm² chip. So for gamers and enthusiasts who want a discrete CPU, AMD won't have anything on offer (assuming Kaveri isn't a CPU jesus that crushes intel's performance), and even intel's chips will have 50% wasted on an unused GPU.

Perhaps some day someone will figure out how to make use of the iGPU for compute even while a discrete graphics card is in use, or perhaps we might see multi socket motherboards that you can plug 2-4 APUs into in some sort of crossfire arrangement?

It really bugs me to have so much wasted silicon lying around.

September 6, 2013 | 10:17 AM - Posted by Josh Walrath

I will be interested to finally hear what Warsaw actually is.  They haven't done well in educating the public/press about it.

All APU is not necessarily a bad thing.  With HSA there really could be some use case scenarios where that GPU portion is utilized to a large degree, even with a standalone GPU in the system.  Collision and physics in games could really use that type of horsepower without having to dedicate the discrete GPU to those calculations (and thereby potentially decreasing performance versus a hUMA solution due to context switching, copying memory, etc.).  AMD has a long way to go before it can be used in such a situation, but I think it is coming.  The APU is a good low end processor as well, since you still get decent 3D graphics performance essentially for free.
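To make the copy-avoidance point concrete, here is a minimal conceptual sketch in C++.  The collide_* functions are hypothetical stand-ins for a GPU physics kernel, not a real HSA runtime or driver API; the only point is where the staging copies show up.

    // Conceptual sketch only: contrasts a discrete-GPU data path with a
    // hUMA-style shared-memory path.  Nothing here calls real GPU APIs.
    #include <cstdio>
    #include <vector>

    // Discrete-GPU style: the physics data is copied into a separate device
    // buffer, processed, and the results are copied back to the host.
    static void collide_discrete(const std::vector<float>& positions) {
        std::vector<float> device_copy(positions);      // host -> device copy
        for (float& p : device_copy) p += 0.016f;       // stand-in for the kernel
        std::vector<float> results(device_copy);        // device -> host copy
        std::printf("discrete path: %zu floats, 2 staging copies\n", results.size());
    }

    // hUMA/HSA style: CPU and GPU share one address space, so the "kernel"
    // works on the caller's buffer in place with no staging copies.
    static void collide_shared(std::vector<float>& positions) {
        for (float& p : positions) p += 0.016f;         // stand-in for the kernel
        std::printf("shared path:   %zu floats, 0 staging copies\n", positions.size());
    }

    int main() {
        std::vector<float> positions(1 << 20, 1.0f);
        collide_discrete(positions);
        collide_shared(positions);
    }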

October 15, 2013 | 05:35 AM - Posted by Anonymous (not verified)

The thing that bothers me about going all-APU is the fact that an APU will never be as powerful as a discrete CPU+GPU setup, simply because it's easier to make two 250 mm² chips than it is to make one 500 mm² chip. So for gamers and enthusiasts who want a discrete CPU, AMD won't have anything on offer (assuming Kaveri isn't a CPU jesus that crushes intel's performance), and even intel's chips will have 50% wasted on an unused GPU.

Perhaps some day someone will figure out how to make use of the iGPU for compute even while a discrete graphics card is in use...

DUDE, this is exactly what HSA means!
The CPU part covers the floating point calculations, while the iGPU covers the integer calcs. The discrete GPU then has more than enough headroom to take care of the rest.

So basically AMD turns their biggest disadvantage into their biggest advantage. Their CPUs excel at floating point calcs and they offset their poor integer calcs with the power of the igpu, making the APUs far more capable at pretty much everything than every traditional CPU could ever be.

January 8, 2014 | 12:30 PM - Posted by HikingMike (not verified)

Switch integer and floating point.

September 5, 2013 | 09:17 AM - Posted by gamerk2 (not verified)

"Multi-core aware software is still not entirely common, much less applications which can utilize more than four threads."

I take issue with this. Ever take a look at application thread counts in task manager/process explorer?

The main issue is that most of the work is serial; you do things in a logical order, with very little opportunity to break up processing. This results in non-scaling applications, irrespective of how many threads they use. (The side-effect of this is giving Intel a performance edge in most tasks, another reason why AMD would be wise to abandon BD and its derivatives). The things that do easily scale, such as media encoding, physics, and rendering, are already offloaded to the GPU, leaving the CPU with all the non-scaling workloads.

Likewise, thread management is not the domain of the developer, but the OS. Windows assigns threads to cores. At any given instant, the highest priority thread(s) that are ready to run are executed. The scheduler does a lot of work behind the scenes to adjust priorities, but that's the gist of Windows thread management. After the developer invokes CreateThread() (or one of its many wrappers), their job ends.

Point being: any application that uses more than one thread (and almost all do) is, by definition, multi-core aware. The issue is one of processing: can you logically break up processing in such a way as to perform processing on two different tasks at the same time, and not reduce performance due to thread overhead, deadlocks, and all the other performance penalties that start to crop up as you utilize more threads? For most tasks, the answer is no; after two or three threads, it becomes too difficult to break up work in such a way where you gain performance. Hence the current state of software.
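To put rough numbers on the serial-work argument, here is a minimal Amdahl's law sketch; the parallel fractions are illustrative, not measurements of any real application.

    // Amdahl's law: if only a fraction p of the work can run in parallel,
    // the speedup on n cores is 1 / ((1 - p) + p / n).
    #include <cstdio>

    static double amdahl(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main() {
        const double fractions[] = {0.50, 0.75, 0.95};  // parallelizable share of the work
        const int cores[] = {2, 4, 8};
        for (double p : fractions)
            for (int n : cores)
                std::printf("p = %.2f, %d cores -> %.2fx speedup\n", p, n, amdahl(p, n));
    }

With only half of the work parallelizable, even eight cores deliver well under a 2x speedup, which is exactly the scaling wall described above.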

/rant

September 6, 2013 | 10:23 AM - Posted by Josh Walrath

I think we are actually saying the same thing, but you were able to codify this a whole lot better than I did!  When multi-core CPUs first came out and MS announced that Windows supported multi-threaded environments, many people automatically assumed that Windows would magically cause many programs to go faster just because it could divide up the workload in any application.  Of course, this assumption is incorrect because as you mentioned above, not all programs will benefit from multi-threading due to their workload being more serial.

One of the things I constantly am reading is "all these lazy programmers who can't take the time to make their app more multi-thread friendly" and that of course is not entirely true.  Sure, there are some lazy programmers who could potentially extract more parallelism from their code, but for the most part these guys do understand their programs and have already done a lot of work to improve performance in a multi-core environment.  Plus, as mentioned before, some programs just can't be broken up to address multiple threads.

Great comments though!

September 5, 2013 | 12:29 PM - Posted by TopHatKiller (not verified)

"Anon" is correct; Warsaw is a base?-respun piledrive, done for cost no doubt. 2015 AMD launches new 2/4p server architecture and platform; intergrated gen3 so nearer fm then am? Very high end desktop cpu could/will be derived from that. Core count: up to 20/chip on 0.2. SteamrollerB or Excavator. Seems certain no am3 follow up though - this chip willbe fm2+ or 3. AMD has made clear their commitment to the GROWING high end server market, not just micro, and 15h chips will be released in this market, it is foolish and overly pesimistic to assume otherwise.
This made me laugh: Hasnot is a quad because Intel knows high core counts do not translate into higher performance on the desktop? Giggle, you silly silly man, Intel is screwing us all delivering pathetic low-cost parts, keeping price skyhigh and pocketing massive profits.So many sites and journos not only never combat intel on its policies, in particular the hugh screw-up over trigate,but say tar thanks oh mighty one for screwing my bankaccount.

September 6, 2013 | 10:28 AM - Posted by Josh Walrath

You should watch our podcast, where we discuss how Intel is actually screwing us over.  I believe it is in the Ivy Bridge-E section that we take Intel to task for not delivering the goods in a timely manner and making us pay through the teeth for it.  Consider the overall performance difference between an i7 2600/2700K and the latest Haswell based 4770K.  Sure, there is a difference, but it is pretty marginal considering the number of years that the 2600K has been out.  We saw this back in the days of the K6/Pentium II, and Intel is acting remarkably similar (conservative roadmaps for CPUs with high prices attached to them).

Tri-gate is not a bad idea, but I think that Intel did not expect the huge jump in power when the speed was raised above 3.9 GHz with these chips.  It looks like planar/FD-SOI will actually have superior power/switching characteristics at 22/20 nm.  Sorta wish Intel had utilized FD-SOI, but they prefer to stay with bulk silicon and keep margins higher.

September 6, 2013 | 12:47 PM - Posted by TopHatKiller (not verified)

Thanks for the reply. Sadly I just never listen to podcasts. Perhaps the 'silly' repetition was a little rude.
I'm still expecting 2015 to be AMD payback time, but God knows AMD has cancelled so many cpus over the last coupla years - who can really say what the high end server / culled desktop parts will actually be anymore?

September 6, 2013 | 01:20 PM - Posted by Josh Walrath

I think they are on a good track with Steamroller.  We will see some nice IPC improvements as well as much better thread handling (namely, two modules can simultaneously handle 4 threads, unlike current iterations which can do one thread per clock per module).  Can't wait to get our hands on Kaveri and see what the combination of Steamroller and GCN can give desktop users.

September 5, 2013 | 02:00 PM - Posted by MarkT (not verified)

I'll be honest I was sad the entire time reading this article, nothing gave me hope, why, oh why this is good news.....

September 5, 2013 | 02:27 PM - Posted by Computer Ed (not verified)

Welcome to the church brothers, nice to see others arriving at this future. The thing I love is so many web sites starting to finally see this and I was talking about this very move in 2011 :-)

Better late than never, welcome to the show...

September 5, 2013 | 03:15 PM - Posted by Anonymouspipm1 (not verified)

It'll be disappointing if a 4-threaded Kaveri APU still doesn't have any L3 cache and doesn't out-perform a 4-thread Vishera CPU with L3 cache. :(

September 6, 2013 | 12:50 AM - Posted by Anonymous (not verified)

The purpose of Fusion is not to force users to use a built in GPU but to allow massive parallelism within the compute unit (or APU). There will still be a place for discrete graphics. The Fusion concept allows massive levels of computing to be done if the workload is designed for it. It would only be enhanced with the addition of discrete GPU(s).
You won't be stuck with APU graphics. If you read up on the goals of DirectCompute and OpenCL you will see what I am talking about, though perhaps not so clearly worded. :-)

September 6, 2013 | 09:24 AM - Posted by Anonymous (not verified)

Nothing good I can see from the all-APU path for AMD, and I'm pretty sure this is bad, extremely bad for us end-users. I can't understand how AMD chose NOW for this APU path when Intel locked its CPUs for overclocking. NOW is the best chance for an unlocked DISCRETE CPU and they are switching to APUs?? This is so wrong I suspect some Intel-AMD deal, and I doubt Intel will ever have monopoly problems with the EU/US as before.

September 6, 2013 | 10:32 AM - Posted by Josh Walrath

With hUMA/HSA AMD is hoping to leverage the power of the GPU for more compute intensive applications.  They are not forcing us into APUs, they are trying to guide software dev and applications to utilize all of that floating point power that a GPU possesses.  The software to really implement this effectively is not yet out, but will be seeing the light of day in 2014.  Kaveri looks to be the driving factor for this transition.

September 6, 2013 | 10:54 AM - Posted by boothman

A video that complements your great article and podcast breakdown.

http://www.youtube.com/watch?v=A9YwkSLqpXw&feature=c4-overview-vl&list=P...

September 6, 2013 | 06:54 PM - Posted by Anonymous (not verified)

Ehh guys .. just a side note, Warsaw is neither 28nm nor Steamroller. It is 32nm Piledriver .. again. That is official information, just take a look at the official roadmap.

September 6, 2013 | 10:54 PM - Posted by Josh Walrath

Ugh, bummer.  Well, that makes more sense than my hypothesis.

September 7, 2013 | 04:00 AM - Posted by Anonymous (not verified)

Stuff happens ;-) It is not easy to follow all the little bits of information.

IMO the currently best and official future AMD outlook can be seen here:
https://www.youtube.com/watch?v=EPxFVzGucZg&feature=player_embedded#t=803

On that slide you can see that Hypertransport is phased out in favor of Seamicro's freedom fabric. That means no more ccNUMA systems, which means no more 2P or 4P Opteron systems, which also means no more Opteron MCMs (like G34).

Cache coherency is becoming a bigger problem, eating up die-size and consuming power. Just have a look at the recent Hotchips 25 presentations of Oracle and IBM. The interconnects are going to be insanely complex.

Therefore the overall strategy to spend die real estate on more GPU circuits on an APU instead of interconnect logic, and then "fuse" these 1P APU systems loosely together with a cheap I/O interconnect, is not that bad.

It won't be suitable for all computational problems, but it should be good enough for most. Let Intel's Xeons/Oracle/IBM fight for 5% of the market and take the rest. Instead of ccNUMA across whole server racks, concentrate memory coherency at the chip level and use heterogeneous chips (APUs). It should be fast and cheap.

September 7, 2013 | 04:10 AM - Posted by capawesome9870

Thanks Josh, this was a great lesson on the current standing of AMD.

I'm hoping that future games will take advantage of AMD cores better.

I just hope that future AMD APUs have the ability to keep the onboard GPU functioning and doing calculations when a large GPU is installed externally, so physics and hair calculations are done on the APU, and possibly an Nvidia PhysX workaround to get it to work on the APU's GPU instead of the APU's CPU. That is a long shot seeing that PhysX is a closed, Nvidia-only party.

Also, can AMD move to LGA already, or a PGA that has a 'lid' on the APU that will keep the APU on the motherboard? I am tired of changing heatsinks and pulling the CPU out with the heatsink.

September 7, 2013 | 05:10 AM - Posted by Anonymous (not verified)

So AMD is dead for enthusiast user??

I'm a big AMD fan but if I want to upgrade they propose nothing in the future....

It's a total disaster for a fan like me that looks for a top end PC

For this year the FX8150 IS quite good but in 2014 nothing is coming with 8 cores....

PLEASE PLEASE PLEASE AMD
Release an APU with 8 core !!!!!

or I will switch for intel sadly

September 7, 2013 | 06:04 AM - Posted by Anonymous (not verified)

8350 sorry

September 7, 2013 | 08:21 AM - Posted by Anonymous Coward (not verified)

All this HUMA, heterogeneous shit isn't the future, and was never going to be.

To see AMD reaffirm this crappy long term plan after several years of it taking them ever closer to bankruptcy is more than a little disappointing.

AMD needs to go back to how they used to run things: manual placement of transistors, high autonomy for engineers, and a focus on performance per core.

Instead, you have management that fires people and replaces their work with lousy automated programs that don't do nearly as good of a job, and when faced with garbage results management as always takes care of management. Instead of admitting a mistake, they say to themselves, so our per core performance is shit, just add more cores. What? Even with our multiple cores intel is light years ahead? Let's put a super-low-end ati chip onboard, call it an apu, and say intel are the ones behind the curve, because ZOMG THE FUTURE IS FUSION? WHAT IS FUSION, I JUST MADE IT UP BRO!

Nice try AMD.

Integrated graphics have ALWAYS been good enough for facebook and email, and NEVER been good enough for someone interested in a little more gaming than just farmville.

Get your processors up to modern standards AMD, or you will cease to exist. You can have the top dog spread his "vision" to his yes-men in the echo-chamber of the remaining AMD employees, but all the cult-like belief in the whole world won't bring in revenues when people don't want to buy your shit.

By the way, anyone know what the hell "The future is fusion" really means? If so, email AMD cause they sure as hell would like to know too.

October 28, 2013 | 11:08 PM - Posted by Principle (not verified)

Hey Genius, about 100 million people will be playing the latest games on AMD APUs in the Xbox One and PS4. A lot of people play games with less GPU than will be available on the Kaveri Desktop APU.

I cannot argue with your point on AMD's missteps in CPU design, and am frustrated with them myself, but I can agree that Fusion is a better future and will be much more powerful than any CPU going forward, once the onboard GPU is used as thousands of floating point units, just as any CPU has floating point units. It will eventually be the processor capable of the most computing, which is immediately great for the mobile world with small processors that sip power but also have a GPU onboard that can easily render your Excel spreadsheet and run Matlab simulations that could take several minutes on a normal CPU.

September 7, 2013 | 02:35 PM - Posted by IronMikeAce

AMD has already introduced their APUs into the server market with the Opteron X2100 and X1100 series chips. From reading the article above, it gives the impression that AMD has not released server APUs. They can be found here http://www.amd.com/us/products/server/processors/2100seriesplatform/Page... and have been available for a while now.
I think AMD is heading in the right direction with offloading workloads to their strong side (GPU), which is probably why their FPU performance isn't great compared to integer performance on their Bulldozer/Piledriver/Steamroller architectures, while their GCN is much more capable of handling these workloads. Optimizing the execution of different processes through different parts of their APU while working together produces a much more efficient/intelligent system. You are right about Intel holding their cards close to their chest. Even though Intel has integrated GPUs in their CPUs, it seems they have no intention of moving in the direction that AMD is. I think Intel is making a mistake by not really trying to improve their offerings like AMD is. If AMD continues the hard work of creating more efficient, intelligent and versatile processors capable of almost any workload, I think they will catch Intel off guard and will probably outpace them.

September 9, 2013 | 10:25 AM - Posted by Josh Walrath

I had forgotten about those.  Quick question though, have we seen any products actually using these Opteron chips?  I'm looking over the Seamicro site, and they don't seem to have these puppies.  http://www.seamicro.com/products

For the low power servers they are still using Atoms.

September 8, 2013 | 05:03 AM - Posted by razor512

I guess my next build will use a core i5 or core i7.

September 12, 2013 | 11:57 PM - Posted by anonymous (not verified)

The pink elephant AMD is ignoring is that even the fastest APU chips are stuck on a 128 bit, DDR3 interface. As such, they are starved for bandwidth and will never produce acceptable GPU performance for the mid-range and higher gaming market. (I question the 'compute' performance too; most GPU apps run much better on high bandwidth cards.) Sure, you can add a discrete GPU to the system, but at that point all those bazillions of gpu transistors on the APU are suddenly rendered 99% useless and are doing nothing more than taking up space that could/should have been dedicated to another CPU module or two. I'm hoping against hope that AMD releases a 3 and 4 module (6/8 core) steamroller derivative on FM2+ with minimal or no GPU components on-chip. Failing to do this will essentially throw away what mid-range+ market-share they currently have.
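Some rough peak-bandwidth arithmetic behind that claim; the dual-channel DDR3-2133 and Radeon HD 7850 class GDDR5 figures are typical examples from that era, not numbers for any specific Kaveri SKU.

    // Peak memory bandwidth = effective transfer rate * bus width in bytes.
    #include <cstdio>

    int main() {
        // 128-bit (dual-channel) DDR3-2133 feeding an FM2/FM2+ APU
        double apu  = 2133e6 * (128.0 / 8.0) / 1e9;   // ~34.1 GB/s
        // 256-bit GDDR5 at 4.8 Gbps per pin (Radeon HD 7850 class card)
        double dgpu = 4.8e9  * (256.0 / 8.0) / 1e9;   // ~153.6 GB/s
        std::printf("APU,  DDR3-2133, 128-bit bus : %5.1f GB/s\n", apu);
        std::printf("dGPU, GDDR5 4.8 Gbps, 256-bit: %5.1f GB/s\n", dgpu);
    }

Roughly a four to five times bandwidth gap, which is the starvation problem the comment is pointing at.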

OR, just maybe, they'll get a cluepon and port Steamroller to AM3+. Doing so can't be that hard. Such an upgrade path would smooth the transition for the enthusiast crowd while buying time for the Fusion platform to mature and gain a bit of desperately needed bandwidth when DDR4 is introduced.

September 13, 2013 | 05:06 PM - Posted by Anonymous (not verified)

"Software coding is still in the relative dark ages and will stay that way until tools like HSAIL (HSA Intermediate Layer) are implemented. C++ and Java are on their way to natively supporting hUMA type architectures, but they again are not quite there yet. Most of the heavy lifting on the software side will appear in late 2014."

This could be a BIG problem. This is exactly what went wrong with the Intel IA64 architecture. It was dependent on compiler tricks and programming to make use of the architecture. When they released it, those features were not yet available in compilers and it suffered. I hope that AMD isn't making the same mistake by releasing a platform with poor compiler support. AMD64 was fine because it could run the 32 bit instruction set, and I suppose Steamroller will be ok in that it will be backward compatible. However, if they miss on performance again for another year+ while the programming tools get up to speed, it could mean Intel catches up. Or it could mean people ignore the Steamroller CPUs and make the company rethink its strategy (as Intel did with IA64).

January 8, 2014 | 12:44 PM - Posted by HikingMike (not verified)

The difference is that nobody could use IA64 processors until that stuff was developed. AMD's APUs have floating point work done within the CPU portion right now (half the resources compared to integer work) so they are currently able to do all the required work that's out there. They won't shift that completely to the GPU portion until it makes sense with software support and with automatic switching in hardware.

October 9, 2013 | 01:26 AM - Posted by Optimist (not verified)

AMD consolidating on the FM2+ socket seems like a logical step to simplify development of products.

AMD can make an FM2+ compatible 8-core CPU (not APU) in an FM2+ socket if they wish. They won't be able to do it with an existing APU with a deactivated GPU; they would have to use a CPU-only die so they could fit the 8 cores.

I suspect as time goes by, integrated GPU performance will be so good that very few people will need or want to buy a discrete video card anyway.

October 28, 2013 | 10:55 PM - Posted by Principle (not verified)

You may be correct, but I don't think so. There are non-APU Steamroller based CPUs already slated for 2014; they are called Bald Eagle embedded 4-core CPUs. Now, why would AMD create Steamroller cores and make an embedded Steamroller CPU, but not a socketed one? There are about 10 million AM3+ motherboard owners just waiting to upgrade; otherwise AMD will most likely never get another sale from them. They will be alienated regardless of what AMD comes up with later in 2015.

Not launching it goes against AMD's own mantra of late: to leverage its IP and existing R&D. AMD already has the cores developed at 28nm with dual channel DDR3 RAM, and already has the AM3+ socket - voila! You have millions of instant sales, and it could even be sold at a premium with great margins and little development cost.

How do you think the HPC guys with $500 million supercomputers are going to feel with AMD leaving them hung out to dry on the normal socketed CPU update they have counted on to remain relevant, and which drove a lot of previous Opteron sales? There is value in drop-in plug and play upgrades that do not require all new motherboards when you have thousands of them.

Is AMD being devious and not releasing a socketed client computing roadmap so people buy their old stuff hoping for an upgrade? Or are they keeping it a secret for a CES surprise and not trying to overhype it for months before an actual release? Are there going to be FM2+ boards with 3 x16 slots for triple Crossfire GPUs? FX processors also have a GPU attach rate, and if you're buying an AMD FX CPU, you're getting an AMD GPU.

November 8, 2013 | 09:57 PM - Posted by RJX (not verified)

Pretty sad, because APUs are pretty slow, plus if I'm going to play with another GPU the APU's GPU will not be used.......
damn....
Hoping for the release of a Pentium IV with 12 cores :D

December 10, 2013 | 12:53 AM - Posted by Shawn L Ciampi (not verified)

APU's Suck! Dump the GPU and give me more CPU! I have video cards for the workload, in your choice of SLI or Crossfire! I want more threads and more cores, not less! Why would anyone want less?!?! Why?!?

January 28, 2014 | 09:43 PM - Posted by Anonymous (not verified)

Since I just read the article, I have to say it is very well written and to this day pretty accurate. Hats off to the author for having great vision and keeping well up to date. The only thing left out of an old article is the new Cortex A57 4-8 core 64-bit ARM CPU with 10-gigabit switches that will become AMD's Opteron A1100 series, featuring either 4 or 8 ARM Cortex A57 cores. Each core will run at a frequency somewhere north of 2GHz, each pair of cores shares a 1MB L2 cache, for a total of up to 4MB of L2 cache for the chip. All cores share a unified L3 cache of up to 8MB in size. The SoC is built on a 28nm process at GlobalFoundries. AMD also designed a new memory controller for the Opteron A1100 that is capable of supporting both DDR3 and DDR4. The memory interface is 128 bits wide and supports up to 4 SODIMMs, UDIMMs or RDIMMs. On top of all that it even comes with 8 lanes of PCI Express 3.0. Sounds very promising.
I am hoping, if the A1100 has enough CPU power, for a version of it in a passively cooled mini-ITX form factor with DDR4 memory and a PCIe x16 slot running at x8 @ 3.0, which is plenty fast since it is the same as PCIe x16 at 2.0 specs and slows down a high-end GTX 770 or 780 by a maximum of 10% with a light load and under a couple of percent with a heavy load. To be honest, PCIe 2.0 @ x8 doesn't even slow down my new GTX 770 Classified 4GB card made by EVGA. My P67-UD4-B3 Gigabyte motherboard loved 2 GTX 560 Ti superclocked cards running at 1GHz core and 2400 memory, scoring 10350 3DMark 11 performance-preset points, and my EVGA GTX 770 Classified scores 11600 3DMark points in the identical test running at PCIe 2.0 @ x16. Now, to finally get to my original point: 8 low-power ARM cores at between 2.5-2.7GHz, passively cooled, along with fast DDR4 memory and 8 PCIe 3.0 lanes holding a nice R7 260X or HD 7790 video card, would make for a fantastic HTPC and light 1080p gaming rig.
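A quick sanity check of the PCIe comparison above; the per-lane rates and encoding overheads follow the published PCIe 2.0 (5 GT/s, 8b/10b) and PCIe 3.0 (8 GT/s, 128b/130b) figures.

    // Usable bandwidth per lane = raw transfer rate * encoding efficiency / 8.
    #include <cstdio>

    int main() {
        double gen2_lane = 5.0e9 * (8.0 / 10.0)    / 8.0;  // ~500 MB/s per PCIe 2.0 lane
        double gen3_lane = 8.0e9 * (128.0 / 130.0) / 8.0;  // ~985 MB/s per PCIe 3.0 lane
        std::printf("PCIe 2.0 x16: %.2f GB/s\n", gen2_lane * 16 / 1e9);  // ~8.00
        std::printf("PCIe 3.0 x8 : %.2f GB/s\n", gen3_lane * 8  / 1e9);  // ~7.88
    }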

Hats off to the writer again...great vision!!
Vargis14

