AMD Announces Ryzen 5 Series of CPUs

Subject: Processors
Manufacturer: AMD

Here Comes the Midrange!

Today AMD is announcing the upcoming Ryzen 5 CPUs. A little was known about them from several weeks ago, when AMD first talked about its upcoming six core processors, but official specifications were lacking. Today we get to see what Ryzen 5 is all about.

There are four initial SKUs that AMD is talking about this evening, encompassing quad core and six core products. Two are "enthusiast" level SKUs carrying the X designation, while the other two are aimed at a less edgy crowd.

The two six core CPUs are the 1600 and 1600X. The X version features the higher extended frequency range (XFR) when combined with performance cooling. That unit is clocked at a base 3.6 GHz and achieves a boost of 4 GHz. This compares well to the top end R7 1800X, though it is short two cores and four threads. The price of the R5 1600X is a very reasonable $249. The 1600 does not feature the extended range, but it comes in at a 3.2 GHz base and 3.6 GHz boost. The R5 1600 has an MSRP of $219.

When we get to the four core, eight thread units we see much the same stratification. The top end 1500X comes in at $189 and features a base clock of 3.5 GHz and a boost of 3.7 GHz. What is interesting about this model is that the XFR headroom is raised by 100 MHz vs. other XFR CPUs. So instead of an extra 100 MHz boost when high end cooling is present, we can expect to see 200 MHz. In theory this could run at 3.9 GHz in the extended state. The lowest priced R5 is the 1400, which comes in at a very modest $169. It features a 3.2 GHz base clock and a 3.4 GHz boost.
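The XFR arithmetic above can be sketched in a few lines; a rough illustration only, using the clocks AMD announced (the variable names are mine):

```python
# XFR adds extra headroom on top of the boost clock when premium cooling
# is detected. On the 1500X the usual 100 MHz XFR step is doubled.
boost_ghz = 3.7      # R5 1500X boost clock
xfr_mhz = 200        # 1500X-specific doubled XFR headroom

max_extended_ghz = round(boost_ghz + xfr_mhz / 1000, 2)
print(max_extended_ghz)  # 3.9
```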

The 1400, 1500X, and 1600 CPUs come with Wraith cooling solutions. The 1600X comes bare, as it is assumed that users will want something a bit more robust. The R5 1400 comes with the lower end Wraith Stealth cooler, while the R5 1500X and R5 1600 come with the bigger Wraith Spire. The bottom three SKUs are all rated at a 65 watt TDP; the 1600X comes in at a higher 95 watt rating. All of the CPUs are unlocked for overclocking.

These chips flesh out the pricing structure of the Ryzen lineup and give users and enthusiasts lower cost options for investing in AMD again. They all run on the new AM4 platform, which is pretty strong in terms of features and I/O performance.

AMD is not shipping these parts today, but rather announcing them. Review samples are not in hand yet, and AMD expects worldwide availability by April 11. This is likely a very necessary step for AMD, as current AM4 motherboard availability is not at the level we were expecting. We are also seeing some pretty quick firmware updates from motherboard partners to address issues with these first AM4 boards. By April 11 I would expect to see most of the issues solved and a healthy supply of motherboards on the shelves to handle the influx of consumers waiting to buy these midrange priced CPUs from AMD.

What they did not cover or answer is how the four core products will be configured. Will each be a single CCX with only 8 MB of L3 cache, or will AMD disable two cores in each CCX and present 16 MB of L3? We currently do not have the answer. Considering the latency of accessing different CCX units, we can only hope they keep just one CCX active.

Ryzen has certainly been a success for AMD and I have no doubt that their quarter will be pretty healthy with the estimated sales of around 1 million Ryzen CPUs since launch.  Announcing these new chips will give the mainstream and budget enthusiasts something to look forward to and plan their purchases around.  AMD is not announcing the Ryzen 3 products at this time.

Update: AMD got back to me this morning on a question I asked about the makeup of cores, CCX units, and L3 cache. Here is their response.

1600X: 3+3 with 16MB L3 cache. 1600: 3+3 with 16MB L3 cache. 1500X: 2+2 with 16MB L3 cache. 1400: 2+2 with 8MB L3 cache. As with Ryzen 7, each core still has 512KB local L2 cache.
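AMD's confirmed configurations can be tabulated programmatically; a small sketch of the lineup (the dict name and the derived thread/L2 fields are mine, computed from the response above):

```python
# Confirmed Ryzen 5 core/CCX/L3 layouts per AMD's response.
ryzen5 = {
    "1600X": {"ccx": (3, 3), "l3_mb": 16},
    "1600":  {"ccx": (3, 3), "l3_mb": 16},
    "1500X": {"ccx": (2, 2), "l3_mb": 16},
    "1400":  {"ccx": (2, 2), "l3_mb": 8},
}

for sku, cfg in ryzen5.items():
    cores = sum(cfg["ccx"])
    threads = cores * 2      # SMT: two threads per core
    l2_kb = cores * 512      # each core keeps its private 512KB L2
    print(f"R5 {sku}: {cores}C/{threads}T, {cfg['l3_mb']}MB L3, {l2_kb}KB total L2")
```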

March 15, 2017 | 11:03 PM - Posted by Isaac Johnson

You were obviously paid by Intel to write this. But that's ok, because I was also paid by Intel to read it, and to write this. I'm not sure what they were trying to accomplish by all this but apparently they're just handing out money to anyone for all kinds of things these days.

March 16, 2017 | 12:54 AM - Posted by Rob (not verified)

Intel also paid me to comment on this. I also got paid a bonus extra from them for telling my girlfriend about posting on here. Just kidding, I don't have a girlfriend, only Intel loves me. (Intel didn't pay me to say that)

March 16, 2017 | 05:52 AM - Posted by Irishgamer01

Well, I was paid by ARM to say that you two are crazy and all you need is mobile low power devices.

March 15, 2017 | 11:11 PM - Posted by Anonymous (not verified)

I wonder if the overclocking headroom will be higher on the 4 and 6 core parts than the 8 core parts. Could be compelling for some single threaded workloads.

March 16, 2017 | 12:51 AM - Posted by Anonymous (not verified)

Unless they are specifically binning them for higher clock, I wouldn't expect much difference.

March 23, 2017 | 03:08 AM - Posted by Anonymous (not verified)

I've a Ryzen R7 1700. With all eight cores enabled it clocks to 3.85 GHz @ 1.3 V with the AMD Spire cooler, but becomes unstable when pushed any higher.

Disable 2 cores, and 3.9 GHz @ 1.35 V becomes stable.
Disable 4 cores, and 4.0 GHz @ 1.40 V becomes stable.
Disable 6 cores, and 4.05 GHz @ 1.45 V becomes stable.

A 200 MHz advantage merely by cutting off its legs!

I ought to try overvolting the Spire Fan, wonder
how many fingers I could lose at 14V???

Fastest NOP loop ever...

March 15, 2017 | 11:54 PM - Posted by Anonymous (not verified)

from ars - The 6 core parts use two 4-core Core Complexes (CCXes) with one core from each disabled. Each core within a CCX has a 2MB slice of level 3 cache, and interestingly, all 16MB of cache are available. The 1500X again uses two CCXes, this time with two cores from each disabled, but again still offers the full 16MB of level 3 cache. The cheapest 1400 part, however, is a single CCX with all four cores enabled, and only 8MB cache.

If it's possible to overclock that 1400... wouldn't it be something if it were the fastest gaming CPU AMD makes. LOL

March 16, 2017 | 03:21 AM - Posted by Anonymous (not verified)

Yeah, if the 1400 goes to 4GHz OC it's going to be insane. $169 ? That's going to sell like crazy. That's a 4C/8T that's cheaper than the cheapest Kaby Lake i5 !

And before someone says "Hur hur, so what, the i5 will be faster!", it won't. Because the cheapest Kaby Lake i5 is the 7400, and it's *locked* at 3.0GHz. So it won't even come close on performance, or VFM. The closest thing to the 1400 would probably be an i7-7700, which is a $300 chip.

Damn but Intel are screwed.

March 16, 2017 | 08:30 AM - Posted by Anonymous (not verified)

What makes you think Intel can't afford to drop the price of their chips? They're the ones that still have their own fabs. They also offer the nice extra of an iGPU.

March 16, 2017 | 09:03 AM - Posted by Anonymous (not verified)

"nice extra of an iGPU"

No, they waste tons of silicon that could be used for more cores or simply make smaller chips. IGP gaming is a waste of time, Quick Sync has terrible encoding quality, and it drives up the cost of the CPU for no good reason. I'm hoping that this pressure from AMD forces Intel to drop IGP and lower prices.

(Intel paid me to say this.)

March 16, 2017 | 10:41 AM - Posted by Anonymous (not verified)

iGPU gaming is a waste if you want to play current AAA games. Indies and older gems can run alright.
And gaming isn't the only thing one can do with a computer, you know. Just general purpose word processing/browsing machines suffer from the extra fan and sacrificed PCIe slot that comes with a dGPU. Even a home server can benefit from being able to be hooked up to a monitor in the event of a borked update.

March 18, 2017 | 03:22 PM - Posted by Anonymous (not verified)

^^^ This

Granted, you could argue that the APUs fill most of those needs.

But I would still rather have the APUs get beefy integrated graphics while the CPUs get anemic basic 2D desktop graphics.

March 16, 2017 | 10:02 AM - Posted by Anonymous (not verified)

Yes, Intel still has its own fabs, and also the billions of dollars of upkeep on those pricey fabs, while AMD's fab partner mostly foots those physical plant and process node R&D expenses. And GF has more customers than just AMD, so GF can spread the costs around its entire customer base.

Those Intel chip fabs can be a blessing for some things but a billions-of-dollars cash black hole for others, especially with AMD's Ryzen sales eating into Intel's unit sales figures. And Intel needs to keep those dollar hungry fab lines running as close to 100% capacity as possible, because the upkeep costs on those fabs run into the billions of dollars. Hell, GF is using a 14nm process licensed from Samsung, so the R&D expense for that process was mostly amortized by Samsung and Samsung's chip customers before GF ever licensed it.

Do you ever wonder why the auto makers source most of their parts from the third party parts industry? AMD is fabless and does not have to spend, spend, spend on expensive fab upkeep or process node R&D of its own. Your "Intel has its own chip fabs" reasoning does not hold, for economic reasons first and foremost: AMD's low overhead operation can price much lower without going into the red, while Intel's high dollar chip fab "flexibility" comes with cash draining overhead expenses that AMD does not have.

AMD will be able to fustigate Intel's high margin chip business with low pricing that nets AMD more market share, leaving Intel still having to maintain some idle fab capacity along with its fat, flabby corporate structural costs! AMD is so lean, in both the corporate structure sense and the no-fab-upkeep sense, that it has some of the very lowest overhead expenses relative to Intel's giant monthly bills for fab physical plant costs, process node R&D costs, and large corporate overhead that Intel must mostly shoulder on its own!

"They also offer the nice extra of an iGPU"

At a really high margin cost for that overpriced Intel integrated graphics dogfood. And really, how many of Intel's lower cost SKUs get Intel's "best" graphics? Ryzen/Raven Ridge APUs (BGA only) are coming this year with Ryzen/Vega graphics at 14nm, NOT 28nm, and next year the socketed APUs as well.

So how many more Vega NCUs will AMD be able to get on its APUs at 14nm, with even the Zen/Ryzen cores taking up less die space than Intel's cores?

March 17, 2017 | 02:15 AM - Posted by Anonymous (not verified)

Not all car makers do that. Toyota/Lexus have their own carbon fiber works, engine foundry, metallurgical research, and turbocharger manufacturing plant. I don't know of anyone else who makes their own turbos either. Koenigsegg has a 3D printed (DMLS type) turbine intake volute.

March 17, 2017 | 12:53 PM - Posted by Anonymous (not verified)

You are splitting hairs; the vast majority of the auto makers' parts are sourced from the third party parts industry. Some high technology Toyota parts are done in-house only because the process is so new or so critical that the IP does not exist in the third party parts industry to supply Toyota's needs. Ditto for other car makers.

Even IBM has chosen to go fabless, though IBM still has its research fabs in-house, as well as the latest chip fab IP, which IBM licenses to GF for use on the IBM-designed Power8s and Power9s, or for any OpenPower licensees that pay royalties to IBM via the OpenPower consortium/foundation!

You missed the point entirely concerning the economies of scale that the third party chip fabrication industry offers by spreading the process node R&D costs and fab physical plant upkeep costs around an entire market of fabless chip makers/designers.

March 17, 2017 | 06:06 PM - Posted by errorr (not verified)

Don't forget that IBM functionally paid GloFo over $1B for them to take the Fab off their hands.

March 17, 2017 | 06:11 PM - Posted by errorr (not verified)

Well Intel did just buy Mobileye so they are definitely in on the ground floor to compete with the next gen auto requirements for compute power as autonomous driving tech spreads and starts to demand significantly more robust levels of compute.

March 17, 2017 | 10:46 PM - Posted by Anonymous (not verified)

Ha! Mobileye, with contra revenue, and who wants Intel's ring through their nose? The ARM based market has most of the automobile control systems market, the same as the mobile devices market! How's Intel doing there? We do not even know the size of the mobile losses, because Intel folded its mobile division into a larger division to hide the bleeding on the corporate 10-K filings!

Hey Mobileye, Tesla says stick a pin in it; your business is not wanted!

March 21, 2017 | 06:59 PM - Posted by Anonymous (not verified)

With expensive 7/10nm FinFET vs even lower power, cheaper, more "harsh condition" robust 14nm FD-SOI? Sorry, but their process is waaay too expensive to compete in automotive.

They bought Mobileye so they could *force* them to use their process, because *nobody else* wants to! LOL, they ain't "in on the ground floor", they're digging another grave for themselves!

Look, the number of companies building 22FDX/FD-SOI stuff for ULP, including automotive, is around 80 and rising.

Companies beating down Intel's door? ...(tumbleweed)...

Stop repeating what Intel's PR says and actually *look* at what's going on! They are going to get nowhere because they can't admit that their FinFET process isn't "teh best at everything!" But it's *not*. For ULP and harsh environment stuff like automotive, FD-SOI is miles better.

March 17, 2017 | 04:54 PM - Posted by homerdog

Intel owning and running its own fabs (the best in the world no less) is absolutely a competitive advantage and to suggest otherwise is absurd.

March 17, 2017 | 05:32 PM - Posted by Anonymous (not verified)

Intel has to charge, charge, charge for the highest margins to pay for the upkeep on those pricy, pricy fabs! And with Ryzen taking some market share, those pricy-to-maintain fabs will bleed billions of dollars. Intel's economy of scale in its fab operations mostly exists to support Intel's own chip production, and that production alone is not enough for Intel to afford dumping too much money into excess chip plant capacity!

AMD's Ryzen will fustigate Intel's high margin chip business in both the consumer and server/HPC/workstation markets. Plus AMD will be able to package price Zen/Naples CPU SKUs with Vega Radeon Pro WX GPUs and Radeon Instinct AI GPUs for server/HPC/workstation, and get more of the professional market's business from both Intel and Nvidia! Ditto for any Zen/Ryzen paired with Vega consumer deals, or even the RX 500 series Polaris refresh SKUs, for some low cost package deals in the consumer gaming market.

Sit back and watch Zen/Ryzen/Vega fustigate those Intel/Nvidia high margins, because it very much appears that AMD is back with a vengeance!

March 24, 2017 | 11:57 PM - Posted by Anonymous (not verified)

Cheap cars use out-sourced parts... like cheap motherboards used cheap capacitors until... But expensive, quality, high-performance cars use ALL their own manufacture, from brake pads to wiring harness: kinda like an Intel CPU. Get the drift, pardner? Guess you buy lots of cheap product, like Chinese-caught half-rotted cod instead of Icelandic-sourced cod!

March 16, 2017 | 12:56 AM - Posted by Anonymous (not verified)

Anyone know what APIs are available for setting thread core affinities? That will be more important for the 6 core part and really important for a 4 core part spread across 2 CCXs. A 4 core part with 2 CCXs can have a huge amount of cache though. It probably would perform very well for some applications.
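To the commenter's question: on Linux the relevant syscalls are `sched_setaffinity`/`sched_getaffinity`, exposed in Python via the `os` module (Windows has `SetThreadAffinityMask`, and `psutil` offers a portable `cpu_affinity` wrapper). A minimal Linux-only sketch of pinning a process to one CPU set, which is how you would keep its threads on a single CCX:

```python
# Pin the calling process (pid 0) to logical CPU 0, verify, then restore.
# On a Ryzen part, pinning to the set of logical CPUs belonging to one CCX
# avoids cross-CCX cache traffic. Linux-only: os.sched_setaffinity.
import os

original = os.sched_getaffinity(0)      # current allowed-CPU set
try:
    os.sched_setaffinity(0, {0})        # restrict to logical CPU 0
    assert os.sched_getaffinity(0) == {0}
finally:
    os.sched_setaffinity(0, original)   # restore the original mask
```

Per-thread (rather than per-process) pinning from Python requires the native thread id and the raw syscall; C's `pthread_setaffinity_np` is the usual route there.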

March 16, 2017 | 01:56 AM - Posted by Anonymous (not verified)

Does anyone want to speculate on any "core unlocking" possibilities in these "not quite RYZEN 7" parts?

Why speculate?

It will be interesting to understand the CCX architecture of these chips relative to RYZEN 7 parts.

March 16, 2017 | 04:33 PM - Posted by Anonymous (not verified)

Given that yields on 14 nm may not really be that great, if a core is disabled, there is probably a good chance that it is really not functional.

March 16, 2017 | 04:21 AM - Posted by Master Chen (not verified)

I wonder what the 1200X will be. If they ask more than $120 for it, I'll pass. I'm currently preparing to build a seventh system on the 1800X this summer, which will be my new main station, replacing my current i7 2600K. But aside from the 1800X I've also eyed the 1200X and Raven Ridge as possible contenders for a small ITX build I'll be doing sometime at the end of 2017/beginning of 2018. I'll be fine with the 1200X at $80 to $120, but not more than that. I couldn't care less about six cores.

March 16, 2017 | 05:46 AM - Posted by Irishgamer01

Why launch when there are no motherboards available?
Kinda crazy.

March 16, 2017 | 09:04 AM - Posted by Anonymous (not verified)

You don't think AMD's partners will have motherboards available in the retail channel a month from now?

March 17, 2017 | 12:22 PM - Posted by Chaitanya (not verified)

Motherboard availability seems to be a problem only in the US. In India, Asus and Gigabyte AM4 motherboards are readily available.

March 16, 2017 | 05:46 AM - Posted by Irishgamer01

Why launch when there are no motherboards available?
Kinda crazy.

March 16, 2017 | 05:58 AM - Posted by Irishgamer01

I have a feeling these will all be put up against the Intel 7700K again, so Intel biased reviews will be incoming. The fact that they are different prices, core counts, and speeds won't come into it.

Intel great AMD poop.

Just wait and see.

March 16, 2017 | 09:06 AM - Posted by Anonymous (not verified)

The mistake every review site made - including this one - is testing gaming and broadcasting (OBS, etc.) at the same time. The i3, i5 and i7 fall flat on their faces when you do that.

March 16, 2017 | 09:08 AM - Posted by Anonymous (not verified)

Yeah, but anybody with an ounce of common sense will realize that the R5 1400 at $169 is half the price of a 7700K. The $170 you save can buy a much better graphics card, and that system will be waaaay faster than a 7700K paired with a card that's $170 cheaper.

Obviously the Intel loving press aren't going to point this out, but I think most people are smart enough to see it for what it is.

March 17, 2017 | 10:58 AM - Posted by Anonymous (not verified)

Holy crap you guys just don't freaking stop do you? Why is everything a big conspiracy ALL THE TIME?

March 16, 2017 | 09:21 AM - Posted by Martin Trautvetter

"What they did not cover or answer is how the four core products will be configured. Will each be a single CCX with only 8 MB of L3 cache, or will AMD disable two cores in each CCX and present 16 MB of L3? We currently do not have the answer. Considering the latency of accessing different CCX units, we can only hope they keep just one CCX active."

AMD have now confirmed that the quad-cores are, unfortunately, 2+2s.


March 16, 2017 | 10:33 AM - Posted by Anonymous (not verified)

Why "unfortunately"? The 2+2s have all the Infinity Fabric bandwidth to share among half the number of cores, and each 2-core CCX (with half the CCX unit's cores disabled) still gets its 8MB of L3 cache. That's 4 Ryzen cores getting 16 MB of L3 while sharing the same Infinity Fabric bandwidth that was specced out for 8 cores. With some minor tweaking of the games, all those CCX-to-CCX issues become moot, with that 4 core Ryzen bin netting extra L3 cache for each of the remaining cores to share!

Any two Ryzen cores on a half-disabled CCX unit with that much extra L3 cache really never have to look outside the CCX unit, provided the game manages CCX/core affinity properly. And as long as draw call threads are kept on the same core until draw call completion, all of that latency inducing cache coherence nonsense (caused by improper thread management) over the IF will not factor into the game play!

Also there is the question of incoming tweaks for games and any OS (Linux/other OS tweaking) that can help optimize for Ryzen's CCX unit/cache locality issues. Some parts of AMD's CCX/IF/cache latency are actually lower on Ryzen than on Intel's SKUs; it all depends on how things are managed. And Ryzen really likes higher clocked memory, as the IF gets faster with higher clocked memory.

March 16, 2017 | 11:49 AM - Posted by Martin Trautvetter

We already know how this choice affects performance:

The only scenario where 2+2 comes out ahead is 7-zip, and likely only for the 16MB 1500X, not the 1400 with its 8MB of L3 cache split across two CCXs.

March 16, 2017 | 12:47 PM - Posted by Anonymous (not verified)

Those are unoptimized games that do not manage CCX unit/core affinity properly! Also, the tests were done on an 8 core, 16 thread SKU with some cores disabled, and we do not even know how AMD is going to change up any fabric controller hardware/firmware or other things with the 5 series SKUs.

I do not expect the Ryzen 5 series to match the 7700K, but at the 5 series prices they will be competing with the Intel i5/i3 series, not the i7 SKUs, and AMD's prices will still be lower!

Let's see how high the Ryzen 5s can clock when overclocked, but really no one expects Ryzen 5 to outright beat Kaby Lake at gaming. The games will have to be optimized for Ryzen, and the Ryzen/Vega Raven Ridge APUs will be a different design than the desktop/CPU-only Summit Ridge SKUs.

I'll wait for the actual benchmarks on the RTM Ryzen 5 SKUs, but then I'm waiting for the Raven Ridge APUs to show up in some Linux laptop OEM offerings, because I'll pass on any of that Windows 10 crap that M$ is calling an OS! Linux/Vulkan all the way!

March 16, 2017 | 05:29 PM - Posted by Anonymous (not verified)

Ryzen 5 is the same die as the Ryzen 7 series processors. It seems like AMD is cranking out a single die variant at the moment; it may actually be the same die used in Naples. The die photos I have seen of Ryzen have a lot of uncore area, so I suspect it has a lot of extra high speed links that are not routed to the package with current Ryzen desktop processors. They seem to be using the same links for either IO or inter-processor communication. It is unclear how they are doing that though. I suspect they are either making the links configurable or they have the ability to tunnel multiple protocols over the same PCI Express physical layer. If the chips are placed on an MCM, the trace length would be a few millimeters, so they could run very high clock speeds for the on-package links. The 32 core Naples processor may be four standard Ryzen die on one MCM or package.

March 16, 2017 | 08:27 PM - Posted by Anonymous (not verified)

The Zen/Naples part uses the exact same 2 CCX die, times four, with some extra MCM IP added on, and the Infinity Fabric does tunnel multiple protocols, but over the Infinity Fabric IP/protocol and not PCIe. So look for direct attached GPUs from AMD, with the Infinity Fabric on Zen/Naples performing NVLink style functionality for direct attached GPUs, and even OpenCAPI and other protocol support over the Infinity Fabric.

When AMD has Zen/Naples at RTM and the Zen/Naples NDAs expire, AMD will release more information. The Infinity Fabric will be used across CPUs, GPUs, and other processors from AMD!(1)

"AMD Infinity Fabric underpins everything they will make"

March 17, 2017 | 04:16 AM - Posted by Anonymous (not verified)

It has to support pci express signaling directly for when the link is used for IO. Pci-e is a multi-layer protocol. Most of the high bandwidth links in use will be using a physical layer that is very similar to pci-e physical layer. That makes a lot of sense; there is no reason to reinvent the wheel. I don't think the infinity fabric is about the physical layer interconnect. It could be using the pci-e physical, data link, or transaction layers just with a different protocol on top of it. With pci-e, the physical layer can change without changing the layers above it, but it is also possible to use the same physical layer with other protocols on top. The physical layer isn't inherently high latency, so they could be using the physical layer with a low latency protocol on top, and it is probably switchable to several other protocols. When I said it is probably using pci-e, I mean it is using the pci-e physical layer signaling, not the entire pci-e stack. That would be too high of latency. I don't know if there is any unique physical layer for infinity fabric.

I don't think the physical layer signaling is that important other than the ability to use it as either interprocessor links or IO links. The routing tech may be the interesting part and distributed nature of it will allow for interesting things. They could run the links at very high speed in an MCM. The trace length would only be a few millimeters. Even a 16-bit link could have very high bandwidth. It is unclear whether they are going to do fully connected within the MCM. That takes 3 links to connect to the other 3 die. If they have something like 4 16 lane links per die then that would leave 64 links available for IO.

If a fully connected system isn't necessary, then they could use wider links. If you take a die with 64 lanes, for example, you can use 32 to connect to another die and still have 64 available, 32 from each die. You could then take a pair of two die MCMs and connect them together on the package using 32 lanes from each and you would, once again, still have 64 lanes available for external connection. Given the clock speed that this stuff can operate at with such short traces, having 3 hops might not be a big deal. The clock speed that pci-e is running at could allow for very low latency if a low level protocol layer is used.
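The lane budgeting in that paragraph checks out arithmetically; a quick sanity check, using only the commenter's hypothetical numbers (64 lanes per die, 32 spent per side on each inter-die link):

```python
# Hypothetical lane budget from the comment above.
LANES_PER_DIE = 64
LINK_WIDTH = 32  # lanes each side contributes to a die-to-die link

# Two die on an MCM, joined by one 32+32 lane link:
mcm_free = 2 * LANES_PER_DIE - 2 * LINK_WIDTH
print(mcm_free)       # 64 lanes still free (32 from each die)

# Join two such MCMs on a package with another 32+32 lane link:
package_free = 2 * mcm_free - 2 * LINK_WIDTH
print(package_free)   # again 64 lanes free for external IO
```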

Eventually, I would expect a CPU made to go on a silicon interposer. That may actually use the same type of router; it would just have a completely different physical layer that would be wide and lower clocked rather than narrow and high clock speed.

March 17, 2017 | 01:07 PM - Posted by Anonymous (not verified)

Read this research paper (PDF).(1) The entire paper describes a multi-interposer (active interposers, NOT passive interposers) design on a module made up of smaller modular CPU and modular GPU chiplets:

NOTE: [This government funded exascale research IP will find its way down into the consumer market, make no mistake about that!]

"Abstract — The challenges to push computing to exaflop levels are difficult given desired targets for memory capacity, memory bandwidth, power efficiency, reliability, and cost. This paper presents a vision for an architecture that can be used to construct exascale systems. We describe a conceptual Exascale Node Architecture (ENA), which is the computational building block for an exascale supercomputer. The ENA consists of an Exascale Heterogeneous Processor (EHP) coupled with an advanced memory system. The EHP provides a high-performance accelerated processing unit (CPU+GPU), in-package high-bandwidth 3D memory, and aggressive use of die-stacking and chiplet technologies to meet the requirements for exascale computing in a balanced manner. We present initial experimental analysis to demonstrate the promise of our approach, and we discuss remaining open research challenges for the community."

"Design and Analysis of an APU for Exascale Computing"


March 16, 2017 | 05:16 PM - Posted by Anonymous (not verified)

It is better to be consistent though. If AMD sold a huge number of 4 core, single CCX units then developers wouldn't have as much reason to actually add optimizations for multiple CCXs. With SMT enabled, the 4 core is still 4 threads per CCX. The larger cache hierarchy should be able to support multiple threads very well. Once some core affinity optimizations are added, I think the 4 core will perform very well. It doesn't take much of a hit in most applications as it is. I don't know how hard those optimizations are; it doesn't seem like it would be that big of an undertaking.

Developers are going to need to go more multi-threaded anyway for the consoles. That may also involve assigning thread core affinities. You have 8 relatively low performance cores in both major consoles. It is unclear what Scorpio will have though. A single CCX could do 8 threads, but there have been some rumors of a lower end, lower power core for APUs that does not have SMT. If they have such a light weight Zen core, then they could use 8 of those rather than a 4 core CCX with SMT. It would make some sense to have 8 smaller cores for mobile APUs. Cores can more easily be completely deactivated when idle. You could do a lot of things with just one core active. They could then sell a 4 core, 8 core, and maybe 6 core APUs. I don't think 2 core processors really make much sense anymore. Four core should be the minimum except for very low power devices like phones.

March 19, 2017 | 03:58 PM - Posted by Anonymous (not verified)

See Post #1003 (page 41) of the AnandTech "Ryzen: Strictly Technical" forum thread! For Ryzen, tamz_msc states: "2+2 seems to take the least hit in the sequential L3 benchmarking". So tamz_msc (Post #1003) is using the same source as you!

March 16, 2017 | 12:38 PM - Posted by Likeonions (not verified)

They're launching another line of processors and here I am still waiting on a motherboard to ship, despite being preordered 2 weeks early. First world problems, I guess.

March 16, 2017 | 02:16 PM - Posted by Anonymous (not verified)

That 1400 has to be 1 CCX, as it has only 8MB of cache. It makes no sense to manufacture one chip with a shared cache when no other chips operate like that. In all the other chips, each CCX has its own cache.

March 16, 2017 | 07:05 PM - Posted by Anonymous (not verified)

That isn't necessarily true. All of the cores in a CCX can access all of the cache slices. The latency is probably lower for a core's own slice, but it would still be better to go to another slice than all the way out to main memory. I don't know how that is arranged, but AnandTech is saying that it is 2+2 with 8 MB, so I am assuming that it has two cores and two cache slices disabled; 4 MB per CCX. The other processors that are 3+3 but still have all of the cache enabled are stranger. I would expect 12 MB of L3, 6 per CCX.

March 16, 2017 | 08:46 PM - Posted by Anonymous (not verified)

Well, I obviously could be wrong, as I'm just speculating. We'll know for sure when the benchmarks come in.

March 16, 2017 | 05:17 PM - Posted by Kronus (not verified)

I'm surprised they didn't push back the R5 release to try and figure out the CCX issues that literally every competent tech reviewer has pointed to.

Unless there's a workaround for keeping games on one CCX, or a method to increase performance across the CCXs, gamers are just going to get lower performing R7 processors.

Kinda disappointed; unless you can unlock cores like in the Phenom days, it looks like it wouldn't be worth purchasing an R5 for gaming. The cache only gets you as far as you can access it quickly.

I guess I'll just wait and see confirmation from benchmarks. Fingers crossed I'm wrong and the R5 series brings that mid tier gaming value AMD seems to keep shooting for.

March 16, 2017 | 08:49 PM - Posted by Anonymous (not verified)

It looks like it is only a small percentage decrease for most games. There are obviously a few where it is more like 20%, but performance should improve with a little optimization from developers. I am probably going to build a Ryzen system, but I am waiting for slightly more mature motherboards and such. I have other reasons for wanting a cheap processor capable of executing 16 threads though. YMMV.

March 17, 2017 | 05:56 PM - Posted by Kronus (not verified)

If I go for a Ryzen build I'll probably go the R7 route, just because I do plenty of other tasks that could take advantage of the extra cores and threads. For example, I frequently run virtual machines, FTP and VPN servers, and stream when I game. So the R7 would be a better fit for my use cases.

But like you I'm looking to see what happens as the platform matures. I'm not confident enough in the current motherboards because of things like IOMMU grouping; plus CCX speeds seem to be tied to RAM speeds, and I'm hoping that faster, more inexpensive RAM will show up in the coming year to make a build more affordable.

I guess we'll see what happens.

March 16, 2017 | 09:13 PM - Posted by Anonymous (not verified)

On a side note...

When is the Wraith Max Cooler going on sale?

March 22, 2017 | 02:56 PM - Posted by agello24 (not verified)

It says the B350 chipset does not support dual PCI Express, but ASRock has a B350 board with dual PCI Express slots. So my question is, does it or does it not support it?

March 23, 2017 | 01:25 AM - Posted by HauteDaug (not verified)

"1400: 2+2 with 8MB L3 cache"

"2+2" ?! Ouch !
