The AMD Ryzen 7 1800X Review: Now and Zen

Subject: Processors
Manufacturer: AMD

It’s finally here, and it’s finally time to talk about it. The AMD Ryzen processor is being released to the world, and based on the buildup of excitement since pre-orders began a week or so ago, details on just how Ryzen performs relative to Intel’s mainstream and enthusiast processors are a hot commodity. While leaks have been surfacing for months and details have been streaming out from those not bound by the same restrictions we have been, I think you are going to find our analysis of the Ryzen 7 1800X processor quite interesting, and maybe a little different as well.

Honestly, there isn’t much that has been left to the imagination about Ryzen, its chipsets, pricing, etc. with the slow trickle of information that AMD has been sending out since before CES in January. We know about the specifications, we know about the architecture, we know about the positioning; and while I will definitely recap most of that information here, the real focus is going to be on raw numbers. Benchmarks are what we are targeting with today’s story.

Let’s dive right in.

The Zen Architecture – Foundation for Ryzen

Actually, as it turns out, in typical Josh Walrath fashion, he wrote too much about the AMD Zen architecture to fit into this page. So, instead, you'll find his complete analysis of AMD's new baby right here: AMD Zen Architecture Overview: Focus on Ryzen


AMD Ryzen 7 Processor Specifications

Though we already detailed the most important specifications for the new AMD Ryzen processors when the preorders went live, it’s worth touching on them again and reemphasizing the important ones.

|                   | Ryzen 7 1800X | Ryzen 7 1700X | Ryzen 7 1700 | Core i7-6900K | Core i7-6800K | Core i7-7700K | Core i5-7600K | Core i7-6700K |
|-------------------|---------------|---------------|--------------|---------------|---------------|---------------|---------------|---------------|
| Architecture      | Zen | Zen | Zen | Broadwell-E | Broadwell-E | Kaby Lake | Kaby Lake | Skylake |
| Process Tech      | 14nm | 14nm | 14nm | 14nm | 14nm | 14nm+ | 14nm+ | 14nm |
| Cores/Threads     | 8/16 | 8/16 | 8/16 | 8/16 | 6/12 | 4/8 | 4/4 | 4/8 |
| Base Clock        | 3.6 GHz | 3.4 GHz | 3.0 GHz | 3.2 GHz | 3.4 GHz | 4.2 GHz | 3.8 GHz | 4.0 GHz |
| Turbo/Boost Clock | 4.0 GHz | 3.8 GHz | 3.7 GHz | 3.7 GHz | 3.6 GHz | 4.5 GHz | 4.2 GHz | 4.2 GHz |
| Cache             | 20MB | 20MB | 20MB | 20MB | 15MB | 8MB | 6MB | 8MB |
| Memory Support    | DDR4-2400, dual channel | DDR4-2400, dual channel | DDR4-2400, dual channel | DDR4-2400, quad channel | DDR4-2400, quad channel | DDR4-2400, dual channel | DDR4-2400, dual channel | DDR4-2133, dual channel |
| TDP               | 95 watts | 95 watts | 65 watts | 140 watts | 140 watts | 91 watts | 91 watts | 91 watts |
| Price             | $499 | $399 | $329 | $1050 | $450 | $350 | $239 | $309 |

All three of the currently announced Ryzen processors are 8-core, 16-thread designs, matching the Core i7-6900K from Intel in that regard. Though Intel does have a 10-core part branded for consumers, it comes in at a significantly higher price point (still over $1500). The clock speeds of Ryzen are competitive with the Broadwell-E platform options, though they are clearly behind the curve when it comes to the clock capabilities of Kaby Lake and Skylake. With admittedly lower IPC than Kaby Lake, Zen will struggle in any purely single-threaded workload with as much as a 500 MHz deficit in clock rate.

One interesting deviation from Intel's designs is Ryzen's more granular boost capability. AMD Ryzen CPUs can move between processor states in 25 MHz increments, while Intel is currently limited to 100 MHz steps. If implemented correctly and effectively through SenseMI, this allows Ryzen to pick up 25-75 MHz of additional performance in a scenario where it was too thermally constrained to hit the next 100 MHz step.
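
As a rough illustration of why step granularity matters, here is a hypothetical sketch (not AMD's actual SenseMI logic; only the 25 MHz and 100 MHz step sizes come from the text, and the base clock and thermal limit are made-up figures) of how a thermally limited chip quantizes its boost clock:

```python
def highest_boost(thermal_limit_mhz: int, step_mhz: int, base_mhz: int = 3600) -> int:
    """Highest clock reachable in `step_mhz` increments above `base_mhz`
    without exceeding the thermal limit."""
    steps = int((thermal_limit_mhz - base_mhz) // step_mhz)
    return base_mhz + max(steps, 0) * step_mhz

# Suppose thermals allow 3,975 MHz on a 3,600 MHz base clock:
limit = 3975
print(highest_boost(limit, 100))  # 3900 -- coarse 100 MHz steps leave 75 MHz unused
print(highest_boost(limit, 25))   # 3975 -- 25 MHz steps reclaim that headroom
```

The finer the step, the less headroom is left stranded below the thermal ceiling.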


XFR (Extended Frequency Range), supported on the Ryzen 7 1800X and 1700X (hence the "X"), "lifts the maximum Precision Boost frequency beyond ordinary limits in the presence of premium systems and processor cooling." The story goes that if you have better-than-average cooling, the 1800X will be able to scale up to 4.1 GHz in some instances for some undetermined amount of time. The better the cooling, the longer it can operate in XFR. While this was originally pitched to us as a game-changing feature that would bring extreme advantages to water cooling enthusiasts, it seems it was scaled back for the initial release. Getting only a 100 MHz performance increase in the best case seems a bit more like technology for technology's sake than a new capability for consumers.


Ryzen integrates a dual-channel DDR4 memory controller with speeds up to 2400 MHz, matching what Intel can do on Kaby Lake. Broadwell-E has the advantage with a quad-channel controller, but how useful that ends up being will be interesting to see as we step through our performance testing.

One area of interest is the TDP ratings. AMD and Intel have very different views on how this is calculated. Intel treats it as the maximum power draw of the processor, while AMD sees it as a target for thermal dissipation over time. This means that under stock settings the Core i7-7700K will not draw more than 91 watts and the Core i7-6900K will not draw more than 140 watts; in our testing, they are well under those ratings most of the time, whenever AVX code is not running. AMD’s 95-watt rating on the Ryzen 7 1800X, though, will very often be exceeded, and our power testing bears that out. The logic is that the slow nature of thermal propagation gives a cooler rated at 95 watts time to catch up. (Interestingly, this is the philosophy Intel has taken with its Kaby Lake mobile processors.)
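
To illustrate the "dissipation over time" reading of TDP, here is a toy model; the power trace and averaging window are invented for illustration, and only the 95-watt figure comes from the article:

```python
def window_averages(watts, window):
    """Mean power over each full sliding window -- the sustained load
    the cooling system actually has to dissipate."""
    return [sum(watts[i:i + window]) / window
            for i in range(len(watts) - window + 1)]

# One-second samples: short 110 W bursts separated by ~70 W lulls.
trace = [110, 110, 70, 70, 70, 110, 110, 70, 70, 70]

print(max(trace))                      # 110: instantaneous draw exceeds a 95 W TDP
print(max(window_averages(trace, 5)))  # 86.0: the 5-second average stays under it
```

Under AMD's definition, the cooler only needs to keep up with the average, even though individual samples spike well past the rating.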


Obviously the most important line here for many of you is the price. The Core i7-6900K is the lowest priced 8C/16T option from Intel for consumers at $1050. The Ryzen 7 1800X has a sticker price of less than half that, at $499. The R7 1700X vs. Core i7-6800K match is interesting as well, with the AMD CPU selling for $399 versus $450 for the 6800K. However, the 6800K has only 6 cores and 12 threads, giving the Ryzen part a 33% advantage in core and thread count. The 7700K and R7 1700 battle will be interesting as well, with a 4-core difference in capability and a $21 price advantage to AMD.
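
For the curious, these matchups work out to the following cost per thread. This is a sketch using only the list prices and thread counts from the spec table; it is a pricing comparison, not a performance claim, since real workloads never scale perfectly with thread count:

```python
# (list price in USD, hardware thread count) from the spec table above
cpus = {
    "Ryzen 7 1800X": (499, 16),
    "Core i7-6900K": (1050, 16),
    "Ryzen 7 1700X": (399, 16),
    "Core i7-6800K": (450, 12),
    "Ryzen 7 1700":  (329, 16),
    "Core i7-7700K": (350, 8),
}

for name, (price, threads) in cpus.items():
    print(f"{name:<14} ${price / threads:6.2f} per thread")
```

The 1800X lands near $31 per thread against roughly $66 for the 6900K, which is the half-price story in a single number.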

Continue reading our review of the new AMD Ryzen 7 1800X processor!


March 2, 2017 | 09:10 AM - Posted by Chaitanya (not verified)

Intel needs to stop rehashing their Nehalem architecture and overpricing their CPUs. Finally some good competition, and Intel has been caught naked and sleeping.

March 2, 2017 | 09:19 AM - Posted by Anonymous (not verified)

AMD really delivered an affordable 8-core / 16-thread CPU! It is also well within reach of people looking at Intel i7s. Hard to believe, but true.

March 3, 2017 | 12:08 AM - Posted by Anonymous (not verified)

huh? the ryzen costs twice as much as the 6700k!

March 3, 2017 | 03:52 AM - Posted by Anonymous (not verified)

A Ryzen 1700 has 8 cores, 16 threads, and costs $330.

March 3, 2017 | 01:18 PM - Posted by Clu (not verified)

You have to consider that the 6700k is also overpriced, AMD says that enthusiast prices should be where the 1800x is, it only goes to follow that the 6700k is too expensive for what it offers.

March 4, 2017 | 02:53 AM - Posted by DaKrawnik

AMD needs to stop rebranding.

March 5, 2017 | 08:38 PM - Posted by Anonymous (not verified)

Do you really think Intel has not been doing any work in R&D? Everyone seems to think that we currently have the best Intel has to offer. No one ever considers that Intel has a new tech they have been sitting on.

Why wouldn't Intel just release 5-10% speed increases per year with the old chips? They are THE performance chip, and are in a position that they can do exactly what they have been doing. Sitting at the top, drawing top dollar from their chips. Until now, there has been no reason to release anything new. They make billions on 10 year old Tech, while it took AMD 10 years to catch up.

If anything, Intel will finally release what they have been sitting on for Lord knows how long and destroy AMD sending them back tech wise 10 years.

March 8, 2017 | 09:18 AM - Posted by Anonymous (not verified)

You should follow Intel a little more closely as it seems you like their products.

They hit a wall with their 10nm process. We are on the 4th generation of the same process with Kaby Lake. Unheard of until now, since they operated on (invented, even) a tick-tock release cadence where we would get a new process every other year.

Plus massive layoffs. Terrible business acquisitions. Etc

Plus, it's a public company; they aren't sitting on anything unless they think losing money via the stock is a good idea. You think they laid off 10,000 employees because they just want to sit on something? Lol

We will see their 10nm Coffee Lake, assuming they get their issues sorted out, hopefully in Q4 of this year. It should be a significant jump in performance (unlike Kaby Lake).

August 10, 2017 | 06:57 PM - Posted by Mememememe (not verified)

"Everyone" seems to think Intel has been sitting on a new Magical Core, but have just decided to let their r&d investments sit and get mouldy because reasons.
If Intel actually had this magic tech they would release it and charge so much that only enterprises would pick them up (and they would because there are enough loads that prioritize perf over everything else, and companies who have enough volume to justify these purchases).
This sort of thought process is very like that which produced ideas like the Illuminati or $deity. It's magical thinking.

March 2, 2017 | 09:12 AM - Posted by Offler (not verified)

Hello guys.

Be so kind and share information about testing configuration. Ram frequencies, timings, bus frequencies.

Without those I would not consider the review to be complete, accurate or trustworthy.

March 2, 2017 | 09:36 AM - Posted by Anonymous (not verified)

Yeah, I'm also wondering about these things.

-Was the moon crescent at the time of the review?
-What was the temperature in Barbados?
-Did Ryan dab the thermal paste on his nipples before putting it on the CPU?


March 2, 2017 | 10:11 AM - Posted by appaws (not verified)

And can you post pictures of Ryan's turgid pink nipples, nipples that only a true ginger can have.

Go Big Blue!

March 2, 2017 | 09:55 AM - Posted by marley gibson (not verified)

Just because you didn't agree with them, doesn't make them wrong

March 2, 2017 | 10:20 AM - Posted by Offler (not verified)

I find the information provided to be incomplete. That's all.

Providing a screenshot of CPU-Z with the memory settings is not that hard, and we can both agree that different memory frequencies on different CPUs will produce different results.

March 2, 2017 | 10:21 AM - Posted by Ryan Shrout

Well, I definitely noted that the memory was running at 2400 MHz in the setup table. As for timings, I don't think I have the specifics, but they were the defaults in the BIOS; I can double check.

As for bus speeds...? A 100 MHz base clock on all of them is not assumed?

March 2, 2017 | 10:41 AM - Posted by Offler (not verified)

Just take care that the Intel board did not switch the RAM to an XMP profile for some reason (it's not commonly a default setting), and that the default settings used the same JEDEC profile on each mainboard. DIMM modules may contain more than one of these (the DDR3 DIMMs I own have three JEDEC settings, for 400, 666, and 800 MHz, plus one XMP profile).

The mechanisms that pick a profile and set the RAM accordingly may vary between boards, and "default" settings for RAM frequency and timings can give different results than manually entered values.

From my experience, the difference between default and fine-tuned memory settings (1600 9-9-9-24 vs. 1600 6-6-6-18) affects CPU performance by 5-10% in MFLOPS from LinX (66 went to 71).

In clock-for-clock comparisons it can have an impact on the accuracy of the test beyond the margin of error.

I would also test the quad-channel-capable CPUs with just 2 DIMMs to get an idea of how much the extra RAM bandwidth affects CPU performance, but that's a different story.

March 2, 2017 | 11:21 AM - Posted by khanmein

Ryan Shrout, I thought they sent all the reviewers Corsair’s Vengeance LPX 3000 MHz DDR4 kit (2x16 GB)?

Did AMD suggest testing at 2666 MHz, or what?

Can you guys hit 3000 MHz with the kit they offered?

I'm really hoping for a more in-depth review of the memory side.


March 2, 2017 | 07:35 PM - Posted by Anonymous (not verified)

Nobody can get over 2.6 GHz on the memory yet, as far as I can see.

March 2, 2017 | 03:59 PM - Posted by Eric (not verified)

The X370 and B350 boards support DDR4 3200 OC ram. I suspect that going from 2400 to 3200 may make a difference.

March 3, 2017 | 02:56 AM - Posted by Martin Trautvetter

From the reviews around the web, Ryzen CPUs seem to be limited to DDR4-2933 or 3000 even under the most favourable of circumstances. (single-rank, single module per channel)

March 2, 2017 | 09:23 AM - Posted by stevex291 (not verified)

The performance per dollar aspect can certainly not be ignored. While I don't think that many intel people will jump ship it certainly provides some actual competition this time around. I'd say if people are still on older chipsets and are looking to upgrade switching to Ryzen isn't a bad bet at all.

Hopefully they can get all the kinks and bugs ironed out on the gaming side of things. O.C. in the reviews seems a bit lackluster but not everybody needs to overclock.

March 2, 2017 | 10:20 AM - Posted by Mike S. (not verified)

Well, for the past few years CPU performance-per-dollar tends to max in the $100-$300 range and going much past $300 gives diminishing returns. For some hypothetical example, a $375 Core i7 outperforms the $250 Core i5, but you're paying 50% more for 20% more performance.

So I suspect if someone really cares about performance-per-dollar, the Ryzen 1700 or the upcoming Ryzen R5s will be more interesting than the $500 1800X.

March 2, 2017 | 09:25 AM - Posted by John H (not verified)

[H]ardOCP link in article not working right now (9:25am EST) unfortunately

March 2, 2017 | 09:28 AM - Posted by stesmithy83

Gaming performance is a bit of a letdown :( Would be nice to see a larger sampling of games, though.

I have only recently upgraded, so I just hope AMD can keep it up for my next upgrade cycle in 2-3 years' time. Competition is always good.

March 2, 2017 | 09:29 AM - Posted by Anonymous (not verified)

Idle power consumption is way lower than Intel's. 8-core/16-thread semi-passive systems are easily achievable! It will be very hard to resist for someone like me. Very happy :-)

March 2, 2017 | 09:31 AM - Posted by Anonymous (not verified)

With the video done some days back, what about any last-minute BIOS updates, and maybe any microcode updates for Ryzen, as well as firmware changes for issues with the new AM4 motherboards?

And of course there will be some Ryzen SKUs that come off the diffusion lines with a little better performance than other Ryzen SKUs, for that silicon-lottery sort of deal for some lucky consumers.

So will there be more looks at the overall certified database of results from the major benchmarking suites with regard to Ryzen, to check out the variance of benchmarking results?

I’m also very interested in knowing what compiler/libraries the benchmarking suites were compiled with, as well as the status of any AMD-optimized compiler products from AMD and third parties.

Motherboard and DRAM tweaking for all the new AM4 IP will have to be looked at with loads of re-benchmarking, as will any AMD tweaks to Ryzen microcode and UEFI/BIOS settings. So over the next few weeks there will be massive amounts of tweaking going on, which will mean more benchmarking for all sorts of things related to motherboards, memory, the CPU, and any brand new technology.

March 2, 2017 | 09:36 AM - Posted by derz

Thank You AMD Team!

March 2, 2017 | 09:37 AM - Posted by RWGTROLL

What are the system specs? I think that is important.

March 2, 2017 | 10:14 AM - Posted by Anonymous (not verified)

They should be included, but why is it actually important? This is a CPU analysis. If the CPUs are running at stock and the memory is the same speed then who gives a crap what PSU they used?

March 2, 2017 | 10:23 AM - Posted by Ryan Shrout

Look on the first page of benchmarks, we have a table with the system build out on that page.

March 2, 2017 | 09:40 AM - Posted by Dark_wizzie

You got the same exact fps for both pass 1 and 2 for x264 for 1800 Ryzen and 6800k? Weird.

March 2, 2017 | 10:02 AM - Posted by PCPerFan (not verified)

Yes, I noticed this too. I imagine this has to be an error.

March 2, 2017 | 10:24 AM - Posted by Ryan Shrout

Nope, no error. :) Just the average of four runs happened to work out that way.

March 2, 2017 | 09:45 AM - Posted by Anonymous (not verified)

The price/performance specs on the 1700X and 1700 will be even more interesting once the benchmarks come out for those SKUs. And maybe some newer 1800X results with any CPU/Motherboard/Memory Tweaks also.

So when are any benchmarking results going to be here that allow the Reviewers to use better memory/motherboard/cooling solutions above what was allowed using the benchmarking kits provided by AMD/Partners?

March 2, 2017 | 11:55 AM - Posted by Anonymous (not verified)


The number of programmers and business PC application users is more than 6x the number of people gaming.

Step out of your bubble.

March 2, 2017 | 09:55 AM - Posted by Anonymous (not verified)

Do any reviewers calculate price per frame (based on FPS benchmarks)?

March 2, 2017 | 10:06 AM - Posted by Yavor (not verified)

I would not have given this chip even a silver if it's advertised as a desktop chip, since that MAINLY means gaming, as that's the biggest market.

Let down, yet again, AMD.

March 2, 2017 | 10:22 AM - Posted by Mike S. (not verified)

They should have put an FX-8370 into the mix to show how far AMD has come. :D

March 2, 2017 | 10:22 AM - Posted by Anonymous (not verified)

I'm waiting for the AMD optimizing compiler to be used to compile any games/benchmarks that are currently compiled with Intel's optimizing compiler. AMD's Ryzen is still doing OK in some games even though the games have most likely been compiled/optimized for Intel's silicon.

There are differences between AMD's and Intel's underlying hardware/microcode that must be accounted for by using the respective CPU maker's optimizing compiler and compiled libraries, tuned for that maker's specific silicon/microcode!

There is also the issue of any reviews done days in advance of last-minute CPU-related UEFI/BIOS microcode and firmware tweaks, and likewise AM4 motherboard tweaks, as well as the memory options that will become available after those motherboard tweaks, or even better memory/cooling SKUs than what was supplied in the initial review kits.

Some review kits came with better cooling solutions than others so that will have to be looked at also.

March 2, 2017 | 11:30 PM - Posted by Anonymous (not verified)

Cry some more ignorant shill.
These are the best CPUs on the market, performance, price, efficiency, you name it.

The fact that it gets 10 less fps on games not optimized for its architecture while running on handicapped memory because of some early BIOS problems doesn't mean anything for the 99% of people who will use these CPUs like normal people.

100fps choice from a scumbag company with no ethics and inferior technology, or the 90fps choice that wrecks the former in everything else? Easy choice.

Guess some people are stupid enough that after 5 years of getting pounded from behind they don't know what it's like to not get raped.

March 2, 2017 | 10:15 AM - Posted by odizzido (not verified)

I would be interested in seeing a more in depth look at gaming performance vs intel's 8 core CPU. Including frame times and all of that.

March 2, 2017 | 10:17 AM - Posted by Anonymous (not verified)

Disabling SMT helps Intel 6-8 core CPUs, it should help Ryzen as well. I saw large gains in several games on my 5820k.

March 2, 2017 | 10:24 AM - Posted by Ryan Shrout

I would agree - but should you have to sacrifice that capability to game?

March 2, 2017 | 11:08 AM - Posted by Anonymous (not verified)

Good point. For me it was worth the tradeoff, I run office apps and games on my desktop, not large batch jobs like encoding.

March 2, 2017 | 11:52 AM - Posted by remc86007

One could in theory assign games to run only on odd-numbered threads, thereby simulating disabling SMT.
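
The affinity trick described in this thread can be sketched in a few lines. This is a hedged illustration using Linux's standard-library os.sched_setaffinity (a Linux-only call, not anything the commenters specify); it picks even-numbered logical CPUs, which approximates one thread per physical core only on systems that enumerate SMT siblings in adjacent pairs, so verify the topology before relying on it:

```python
import os

def alternate_cpus() -> set:
    """Every other logical CPU (0, 2, 4, ...). On systems that enumerate SMT
    siblings in adjacent pairs, this selects one logical CPU per physical core."""
    return set(range(0, os.cpu_count(), 2))

def pin_current_process() -> set:
    """Restrict this process to alternate logical CPUs, roughly simulating
    SMT-off for this workload. Linux-only (sched_setaffinity)."""
    chosen = alternate_cpus()
    os.sched_setaffinity(0, chosen)  # pid 0 = the calling process
    return chosen
```

On Windows the same idea is usually done through Task Manager's "Set affinity" dialog or `start /affinity`.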

March 2, 2017 | 10:26 AM - Posted by Anonymous (not verified)

Welp, the 45W power consumption advantage implied by the TDP difference was total bullshit.
Ryzen: it's better than Bulldozer ( and Intel if all you do is Cinebench).

March 2, 2017 | 10:40 AM - Posted by Questions (not verified)

What testing would best show game streaming stability? I would like to know how Ryzen compares when gaming, capturing, running external capture, recording locally, exporting, etc. all at once, as compared to X99 and Z270. Mainly meaning: beyond PCIe lane count, what are the real-world, balls-to-the-wall stability and/or hand-off characteristics under multifaceted use? I don't know if there is a benchmark for this, though, as Intel has remained unchallenged in I/O capabilities even more than in their domination over the FX-8350.

March 2, 2017 | 11:01 AM - Posted by Questions (not verified)

A more readable way to ask my question might have been: "When all of the PCIe lanes and the CPU performance are being maxed out, what do the errors and device dropouts look like with the new Ryzen processors and chipset?"

March 2, 2017 | 10:43 AM - Posted by Sunain (not verified)

I'm quite disappointed with PCPer's attention to detail in this review.

Which bios revision did you use? No disclaimer anywhere about the bios change.

Ryan made the excuse that they didn't have the cooler mounting hardware yet, but they have an ASUS Crosshair VI, which has both AM3 and AM4 mounting holes, which means any modern aftermarket cooler will work.

Why didn't you use a game that is known for CPU bottlenecking, like Cities: Skylines? Why didn't you also use a title like DOOM with Vulkan? No Ashes of the Singularity? The game selection seemed biased toward titles that are already known to perform well on Intel CPUs.

Audacity MP3 @ 128kbps? Why not use FLAC or AAC like most people these days? LAME is a lame benchmark for any modern review of a system. The first version of LAME 3.99 came out 7 years ago.

LAME is optimized for Intel processors, so it's an unfair benchmark to start with, and an obsolete test at that.

"Gaming results are particularly concerning as AMD has been pushing Ryzen as a gamers and enthusiasts dream solution"

Did you even do a test to replicate the AMD streaming and playing a game at the same time?

You also didn't address the fact that going forward, games are being optimized for PS4/XB1 with DX12 and Vulkan, which are AMD architectures, so the next year or so will significantly change the gaming landscape.

March 2, 2017 | 10:57 AM - Posted by Ryan Shrout

The BIOS revision on the ASUS Crosshair VI Hero was noted on the test setup page.

Your game suggestions are fine, they just aren't the ones I used. They were selected well before I had Ryzen hardware to play with.

Audacity and LAME MP3 encoding is still a VERY COMMON workload, and all audio encoding (MP3, AAC, etc.) uses nearly identical algorithms at this point. FLAC is different. I am confident that LAME has been open source long enough that it's a very fair test.

No, I didn't replicate AMD's streaming test. That's never been a workload we measure. Still interesting though.

Are you saying games are going to be optimized that way? And all of our game tests were using DX12 anyway.

Maybe you're right though. 

March 2, 2017 | 11:27 AM - Posted by Sunain (not verified)

I know time was limited, and that's AMD's fault for not giving reviewers enough time. I look forward to future write-ups, as time permits, with more in-depth reviews and discussions of the Ryzen processor.

I'm just concerned that the gaming conclusion/opinion in this review was a bit unjustified. The conclusion was based on just a couple of titles in DX12. Replicating AMD's gaming benchmarks to see if they are correct would, I personally thought, be the first thing a reviewer would double check. AMD has been touting Battlefield 1 at 4K on a Titan being faster than on Intel's chips. They were also showing off DOTA 2 / MOBAs playing and streaming at the same time. It may not be your standard review practice, but when a company shows this off a couple of times, it's something a review needs to cover, either to independently confirm the results or to call AMD out on them. Streaming to Twitch and YouTube is a big thing for gamers these days, and if Ryzen can do that better than Intel as they've been touting, that's something the review needs to address.

March 6, 2017 | 04:32 PM - Posted by Anonymous (not verified)

Frankly, I can't understand why Cities: Skylines and Planet Coaster aren't tested. Obviously a city with a 100k population or a large park would be a very CPU-heavy workload, yet they are ignored in these tests.

March 2, 2017 | 10:45 AM - Posted by JohnGR

I guess games could be the Achilles' heel for Ryzen. If AMD and its partners manage to somehow improve performance in games, then Ryzen will be the best option under $500, no matter what. Until then, people could still choose the 7700K (but with NO option to improve performance through a CPU upgrade in the future) or the 6800K.

March 2, 2017 | 11:29 AM - Posted by Anonymous (not verified)

Ironically gamers are probably the most eager to switch from Intel just like we switched from AMD when Core 2 Duo came out.

March 2, 2017 | 10:49 AM - Posted by Polycrastinator (not verified)

So what I'm getting from this is basically that Intel is probably still your default choice, but if you expect to do multithreaded work (especially in video) you should consider RyZen. I'll be interested in how this looks in 6 months, I'm sure bugs will be worked out, and games will be patched to better utilize what AMD is offering here.

March 2, 2017 | 11:03 AM - Posted by MrBook (not verified)

Your post mirrors my thoughts. People tend to forget that the games they play need to be first developed. Dev studios will benefit from the cost effective multithreaded performance.

March 2, 2017 | 10:50 AM - Posted by Martin Trautvetter

Sucks that AMD decided to blatantly lie about the power consumption of these chips. If the other tests are any indication, this becomes downright laughable with the "65 What?" 1700.

March 2, 2017 | 10:52 AM - Posted by Anonymous (not verified)

Anyone else notice the Handbrake 1.0.2 Chart has Ryzen 7 1700 included at the bottom. :)

March 2, 2017 | 10:58 AM - Posted by Ryan Shrout

Oops, a top secret early look at our upcoming story! 


March 2, 2017 | 11:09 AM - Posted by Anonymous (not verified)

Meh, I'm glad I canceled my pre-order.

They hyped this up for gaming, and it's no better than a 7600K that is $100 less than the 1700. Average consumers don't need these CPUs; maybe content creators and the like do. AMD just can't beat Intel when it comes to IPC.

I'll wait until the AIB boards iron out all the issues and bugs, then rethink things.

March 2, 2017 | 11:25 AM - Posted by Anonymous (not verified)

The first-generation Zen/Ryzen was never expected to beat Intel outright! AMD did beat its 40% IPC improvement target over Excavator, though, with a 52% IPC improvement. There are other things to consider, such as AM4 motherboard maturity (brand new) versus Intel's motherboard maturity (around for longer, with more tweaks available).

There are too many uncontrolled variables to consider this early in the game, but AMD and the AM4 motherboard partners have some tweaking to do to get more performance out of the brand-new Ryzen CPU/AM4 hardware/firmware ecosystem. The total gaming ecosystem is heavily tuned to Intel's hardware and compiler ecosystem, and even M$'s OS has some tweaks and optimizations ahead for Ryzen/AM4, along with its compiler-ecosystem tuning requirements for Ryzen/AM4 under Windows 10.

March 2, 2017 | 11:15 AM - Posted by RWGTROLL

You should try the 1800x with SMT off. I think there is a bug.

March 2, 2017 | 07:42 PM - Posted by Anonymous (not verified)

You do know that Ryzen has a lot of new CPU core thread priority IP built into the micro-architecture, and some benchmarks and games are not ready at this time to take advantage of these new features in Ryzen. But even for Intel, turning SMT (Hyper-Threading) off shows some improvement in some games.

There are still too many unknowns, with tweaks incoming, and some benchmarking software is still not working properly enough to be of any use for Ryzen benchmarking at this time. The benchmarking rigs are running full tilt, with more new information to come, so it's still a waiting game until more testing can be done.

March 2, 2017 | 11:36 AM - Posted by DaVolfman (not verified)

I'd like to see DOOM Vulkan (AMD card) benchmarks as a multi-threaded gaming sample.

March 2, 2017 | 11:36 AM - Posted by Mageoftheyear (not verified)

I've got to say I found the quality of this review very disappointing.

- No overclocking results
- Results structured as if the 1800X's rival is the 7700K and not the 6900K (very strange not to focus on the apples to apples comparison. Can we expect the Ryzen 5 1600X's rival in your next review to be the 6950X in opposite-land?)
- 3 games tested, one resolution, one API (come on guys)
- And basically: "Due to time constraints we couldn't be as thorough as we wanted to"

It's the last one that REALLY confuses me. The Ryzen release has been one of the most hotly anticipated CPU releases in a decade, and you didn't schedule your time around it? Why?

It feels like in light of the many leaks that you gave up on this review. I'm not saying that you guys didn't put effort into it, just that it wasn't enough for me to feel like I don't NEED to read more reviews to get a complete picture.

Hopefully GamersNexus, Anandtech or HardwareCanuks can do that.

Please keep these criticisms in mind for the Ryzen 5 1600X review.

March 2, 2017 | 11:42 AM - Posted by Anonymous (not verified)

Is it possible to ask AMD for some clarity regarding XFR? Your article says it's only available on the X chips, while other sites like Anandtech are saying it's available on all Ryzen chips.

That's not to denigrate your article, BTW; everyone covering Ryzen seems to be saying opposite things concerning XFR.

March 2, 2017 | 05:50 PM - Posted by Allyn Malventano

Ryan is confirming, but from what we understood, the X in the processor name stands for XFR.

March 2, 2017 | 07:57 PM - Posted by Anonymous (not verified)

According to Anandtech XFR is on all the SKUs but only the X versions get more of a boost. So the X means eXtended XFR boost above the XFR ability of the non X Ryzen SKUs

"All the CPUs are multiplier unlocked, allowing users to go overclocking when paired with the X370 or B350 chipset. At this point we’re unsure what the upper limit is for the multiplier. We have been told that all CPUs will also support XFR, whereby the CPU automatically adjusts the frequency rather than the OS based on P-states, but the CPUs with ‘X’ in the name allow the CPU to essentially overclock over the turbo frequency. XFR stands for ‘eXtended Frequency Range’, and indicates that the CPU will automatically overclock itself if it has sufficient thermal and power headroom. We’ll mention it later, but XFR works in jumps of 25 MHz by adjusting the multiplier, which also means that the multiplier is adjustable in 0.25x jumps (as they have 100 MHz base frequency). XFR does have an upper limit, which is processor dependent. All CPUs will support 25 MHz jumps though XFR above the P0 state, but only X CPUs will go beyond the turbo frequency.

A side note: As to be expected, XFR only works correctly if the correct setting in the BIOS is enabled. At this point the option seems to be hidden, but if exposed it means it is up to the motherboard manufacturers to enable it by default – so despite it being an AMD feature, it could end up at the whim of the motherboard manufacturers. I suspect we will see some boards with XFR enabled automatically, and some without. We had the same issue on X99 with Turbo Boost 3, and Multi-Core Turbo."(1)

(1)[Page 3 of the article]

"The AMD Zen and Ryzen 7 Review: A Deep Dive on 1800X, 1700X and 1700"
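The quoted passage's arithmetic (25 MHz XFR steps on a 100 MHz base clock, i.e. 0.25x multiplier increments) can be sketched as follows. This is illustrative only; the 4.0 GHz turbo and 4.1 GHz XFR ceiling in the example are the advertised 1800X figures, used here just to show the math.

```python
# Illustrative arithmetic based on the quoted Anandtech description:
# Ryzen uses a 100 MHz reference clock, so XFR's 25 MHz steps correspond
# to multiplier adjustments in 0.25x increments.

BASE_CLOCK_MHZ = 100  # Ryzen reference clock

def frequency_mhz(multiplier: float) -> float:
    """Core frequency for a given multiplier on the 100 MHz base clock."""
    return multiplier * BASE_CLOCK_MHZ

def xfr_steps(turbo_mhz: float, xfr_ceiling_mhz: float) -> int:
    """Number of 25 MHz XFR bumps available above the turbo frequency."""
    return int((xfr_ceiling_mhz - turbo_mhz) // 25)

# Example (advertised 1800X numbers): 4000 MHz turbo, 4100 MHz XFR ceiling.
print(frequency_mhz(40.25))   # 4025.0 -> one 0.25x step above a 40x multiplier
print(xfr_steps(4000, 4100))  # 4 steps of 25 MHz above turbo
```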

March 3, 2017 | 01:50 AM - Posted by Anonymous (not verified)

I get what you're saying, but "extended eXtended Frequency Range" sounds a little odd; that's what we get when putting the X from the chip next to the XFR from the technology.

Maybe I'm being slightly pedantic, but it just seems off to say the non-X chips don't have XFR when it seems they have a lesser version of it.

March 2, 2017 | 11:42 AM - Posted by Anonymous (not verified)

Great chip for game ... developers.

March 2, 2017 | 12:03 PM - Posted by SoundFX09

Allow me to explain the gaming performance numbers in a way people will understand...

Most games today don't use more than 4 cores and 4 threads. As a result, AMD's Ryzen architecture isn't able to show its full strength, due to limitations in game engines and programming that favors single-threaded quad-cores (aka Intel CPUs).

This is why, right now, if you are JUST gaming and nothing else, AMD's Ryzen 7 CPUs are not a very good choice, and you are better off waiting for Ryzen 5 or 3 (their 6- and 4-core variants) or going for Intel's Core i5 and i7 Kaby Lake options.

If you plan to do MORE than gaming, like video editing or streaming, or other complicated tasks like 3D rendering and advanced programming, then the Ryzen 7 line really hits the sweet spot at $330-500. The performance you get will no doubt match, come close to, or even beat Intel's Broadwell-E options, while being SIGNIFICANTLY cheaper and a stronger value for the money.

Heck, you could build a system based on the R7 1700 non-X for around the price of an i7-6900K alone, and get performance that is close to or better than it.

So keep being triggered by AMD's not-so-surprising gaming performance. AMD's Ryzen CPUs have certainly met their expectations. They may not beat Intel completely, but they are certainly fantastic and bring back MUCH needed competition to the CPU market.

-----End Explanation-----

Now for my final opinion on PCPer's review:

I personally think it would have made more sense to test more games (GTA V, BF1, COD:IW, Overwatch, Dota 2, CS:GO, The Witcher 3, Crysis 3, etc.) and to test video encoding using Adobe Premiere, image work using GIMP, and other encoding programs that I cannot mention or don't know about, since I believe this is where Ryzen can show its true colors.

This review could be a lot better than what it has come out to be. Instead of rushing out the review, why not give it more time, so that we have a more thorough review that can give us a clear idea of what Ryzen really is all about? As viewers, it's best we get the most accurate information available, not the most immediate.


March 3, 2017 | 01:53 PM - Posted by CNote

Buying a Ryzen 7 for just gaming is like buying a dump truck for commuting.

March 2, 2017 | 12:06 PM - Posted by Anonymous (not verified)

This is the first time I've been excited about a CPU launch in a while. All of the recent Intel launches have been a bunch of fizzles. Finally something to make me consider AMD again for a new build. The 1800X is out of my price range, so I'm really interested to see what you have to say about the 1700 and 1700X. I'd also be interested to see how these multi-threaded beasts handle research programs such as F@H and World Community Grid projects.

March 2, 2017 | 12:19 PM - Posted by Xukanik

That idle consumption is pretty nice!!!
This would make for a great home server.

March 2, 2017 | 12:34 PM - Posted by Topjet

Thanks PC Per, loved the info, way to go AMD, now just keep it up :)

March 2, 2017 | 12:54 PM - Posted by Anonymous (not verified)

Why did all the sites use the 6950X 10-core processor, which costs 3-4 times as much as the Ryzen chips? You could build two high-end Ryzen systems with decent graphics for the price of just one of those CPUs. These CPUs are an expensive joke and don't belong in these reviews. Did you simply follow the piss-weak Intel review guide to make sure any threaded benchmarks have an Intel core at the top? Wouldn't it have been better to show the progression from AMD's previous CPU cores? If you want to be impartial, why don't you benchmark an IBM POWER8 12-core with 96 threads under Linux and really put both AMD and Intel in their place under highly threaded workloads?

March 2, 2017 | 01:19 PM - Posted by Martin Trautvetter

Let's be real: if Ryzen were faster, you'd absolutely demand it be tested against the 6950X.

March 2, 2017 | 01:40 PM - Posted by Ryan Shrout

^ this.

March 2, 2017 | 01:47 PM - Posted by Anonymous (not verified)

You don't get the point here. There were reports of Intel being up to their dirty old tricks, getting to all the media groups to push their case and keep AMD down. There is no reason a $2400 AUD CPU is compared to a $699 AUD one unless the big guy wants to keep the little guy down. The only reason most of the sites have it in the charts is to ensure Ryzen doesn't win the heavily threaded tests. I personally have an Intel/NVIDIA system at home, but the fake media has the world conned if you can't see what is obvious here and on many sites which simply followed the Intel review guide. Competition is great, but fair competition is what we actually need to ensure everyone can get a fair product for their dollar. Why not bench it against the POWER8/9 cores, which feature crazy SMT (8x) abilities, and see how fast it is against a CPU 10-15x the cost in highly threaded performance? Especially as there are rumours of a possible future Zen core design expanding to SMT 4x; would that be a worthwhile design choice?

March 2, 2017 | 03:04 PM - Posted by Anonymous (not verified)

As soon as you go to the performance-per-dollar page, you will find that the Intel i7-6950X is quite a bitter deal.

March 2, 2017 | 01:15 PM - Posted by Anonymous (not verified)

What about real-world multitasking? Is there a good benchmark for that? I'm talking about the following:

1. AAA gaming while streaming 1080p video through Twitch with a high-quality webcam
2. Running Handbrake while AAA gaming
3. Running Handbrake while web browsing
4. Running Handbrake while watching a 1080p YouTube video
5. Running a medium/heavy compute application while playing a AAA video game

This is real-world use, and I'm curious whether the extra cores on Ryzen make your computer run smoother.

March 2, 2017 | 01:22 PM - Posted by realjjj (not verified)

You messed up the comparison at 3.5 GHz; the 6900K is running at default clocks. Compare the Cinebench results, for example: 151 at 3.5 GHz, and the same a few pages later at 3.7 GHz.

March 2, 2017 | 03:03 PM - Posted by Allyn Malventano

We double checked the data. It's just a coincidence. The 6900K ended up running at a similar speed when clocking freely under that workload.

March 2, 2017 | 01:28 PM - Posted by realjjj (not verified)

And how out of touch are reviewers?
90% of users buy GPUs below $250 and game at 1440p and below, mostly 1080p.

Anyone that has paid $700 to game at 1080p needs to seek medical help, and I guess reviewers are equally insane.
I've been through 20 reviews and nobody tests with a 480 or 1060.

March 2, 2017 | 02:59 PM - Posted by Dennis (not verified)

In the 1990s, when I was a computer hobbyist becoming a systems administrator, I would read reviews like these, buy the CPUs, and wonder why I was not getting similar FPS in games or encoding times. Of course, I later learned reviewers used high-end motherboards, GPUs, and most other components when testing the CPUs. However, this is actually a good experimental design, intended to mitigate bottlenecks so the CPU is not hampered by other parts of the system and made to look like it underperforms. As you pointed out, this does little for the average consumer with a small, and realistic, budget. The Ryzen 7 1800X was not designed for us mere mortals, but rather for people who would gladly pair a $500 CPU with an equally high-end GPU. Based on this review, if I had the need and the money for this performance, I would much rather get the 1800X than the 6900K.

That being said, this review really highlights the 7600K. The 7600K offers slower performance, but not so much slower that people would suffer; it is unlocked, has noticeably better power consumption, and has the best price-to-performance of all the CPUs in this review. If I were to build a machine now, I would pair the 7600K with a GTX 1060 and a 24" 1080p monitor for the best bang-for-buck for gaming and most everything else.

March 2, 2017 | 03:05 PM - Posted by Martin Trautvetter

What would be the point of testing scenarios limited by GPU performance for a CPU review?

March 2, 2017 | 03:05 PM - Posted by Allyn Malventano

It stands to reason that in a CPU review, you test games that are CPU limited, and choose GPUs that will not be as much as a limiting factor / influence on the results. It's the same reason why we test GPUs with as fast of a CPU as we can throw at them.

March 2, 2017 | 05:35 PM - Posted by Anonymous (not verified)

For those who bet that core count will be more important in upcoming games than IPC, it would be nice to show CPU utilization.

March 2, 2017 | 06:51 PM - Posted by quest4glory

It does, but it seems kind of like a vacuum if you only test in that fashion. For *most* gamers, I dare to say what matters most is overall performance across a variety of titles. Based on this, they should see results for both 1080p and 1440p, and for the highest end AMD processor, it's fair to test at 4K as well.

I know if I'm buying a $500 processor I'm doing it after reading reviews, and making a decision based on the aggregate results. If all I see are a lot of 1080p results, and I'm sitting here with a 1440p or 4K screen, I'm going to be begging for more information before I take the plunge.

I'm not an AMD CPU user. I have a 6700K machine and a 5930K machine. If I were in the market (which I'm not), I would be looking at AMD and asking what the value proposition is for both use cases (productivity and gaming). I don't have enough information if all I have are 1080p gaming benchmarks.

March 3, 2017 | 02:01 AM - Posted by Allyn Malventano

I'm with you, and in an ideal world with infinite manpower and testing time, we would have results for every possible combination of CPU and GPU. That not being practical, we observe the relative differences in various parts by performing comparison tests with the other hardware in the system being as out of the way as possible. We occasionally do lower end builds as well, but adding multiple CPU speeds to a GPU review would multiply what is already typically 4-6 rounds of testing.

March 2, 2017 | 08:56 PM - Posted by ddferrari (not verified)

Who would pair a $500 CPU with a $250 GPU?

March 4, 2017 | 07:28 PM - Posted by Anonymous Nvidia User (not verified)

You wouldn't be looking at pairing a $500 Ryzen with a 480 or 1060 either. These cards can be a GPU bottleneck even at 1080p, which doesn't give a full representation of the CPU's power.

Testing is done at 1080p to show the maximum frames a CPU can push; the lower the frame rate, the less capable the CPU.

Somehow I think you may already know this, but if not, you do now.

March 2, 2017 | 01:29 PM - Posted by Gabriel S (not verified)

I would like to see a Vishera system included in all these tests, to see how the platform as a whole has matured. I know it would pale in comparison, but all other things being equal it would at least show the potential of optimization, as the AM3 platform is very mature and well supported.

March 2, 2017 | 01:41 PM - Posted by Ryan Shrout

Fair. I wanted to include other AMD CPUs but ran out of time.

March 2, 2017 | 01:49 PM - Posted by pSz (not verified)

On page 3: attempting to draw far-reaching conclusions about the IPC of a chip based on the performance of Audacity and Cinebench, then stating that you'd trust the latter more? Chuckle... be a bit more rigorous, please. These are just two application-specific instruction streams that at best were never generated (let alone tuned) for the new AMD arch, and at worst are well-optimized for Intel. So you execute these mostly Intel-tuned binaries on the new chip to draw general conclusions about its IPC properties. Color me skeptical!

(Correct me if I'm wrong and you've received at least recompiled binaries, though!)

March 2, 2017 | 02:18 PM - Posted by Ryan Shrout

Definitely no re-compiles. This is off-the-shelf software as consumers that buy hardware today would be using.

March 2, 2017 | 03:10 PM - Posted by Allyn Malventano

I should remind you that AMD's own public demos of Ryzen used Cinebench.

March 2, 2017 | 02:33 PM - Posted by Mike S. (not verified)

A lot of these comments were predictable. "AMD only closed 80% of the performance gap between Bulldozer and Broadwell. This sucks!!!!!"

Sure, I wanted to see AMD leapfrog Intel and they didn't. For gaming, they didn't even hit parity. But:

1. Even if you're sticking with Intel, you can bet the price you pay going forward drops because they now have these parts as competition.

2. There are millions of gamers getting along just fine with a Haswell i7 or older. This is still a step forward from that.

3. AMD is at least back in the game well enough to generate a positive cash flow, which improves their chances of being able to invest in Ryzen++ or Ryzen 2 or whatever it is that they'll need to offer against Cannonlake/Icelake.

March 2, 2017 | 09:30 PM - Posted by StephanS

I read half a dozen reviews and I like the R7 1700, a lot.

It seems to squeeze nearly all you can get out of a $500 GPU at 1440p, and delivers 6800K performance in productivity apps.

The fact that a 7700K can get 130 fps vs 110 fps in some games at 720p doesn't bother me much. I prefer having double the cores and threads for the same price for all the other apps I run (Visual Studio C++ being my top app).

March 2, 2017 | 02:56 PM - Posted by StephanS

Has anyone tested the 1700X overclocked with 4 cores disabled?

This could be a preview of the $199 Ryzen 5 1500X (4 cores / 4 threads) coming in Q2...

For people that only care about gaming over the next 24 months, this might be the best option?

March 2, 2017 | 03:06 PM - Posted by MarkT (not verified)

As a person who has watched every single PCPer podcast, I say the following:

The lack of benchmarks is Ryan saying it's not worth it; move on unless you can't afford Intel.

Ryan is sick of Ryzen.

Honestly, I'm going to save up and wait to upgrade my 2700K... I cannot afford AMD inconsistency.

March 2, 2017 | 05:49 PM - Posted by Jeremy Hellstrom

Not sure if you caught it, but he was up at 5 AM and hasn't really slept on his Casper mattress in weeks.

Wait a bit until we have time to fully torture AMD's new offering; then it will become more obvious.

March 2, 2017 | 06:34 PM - Posted by MarkT (not verified)

I fucken love you.

March 2, 2017 | 06:36 PM - Posted by MarkT (not verified)

words matter ;)

March 2, 2017 | 03:35 PM - Posted by Anonymous (not verified)

Did you get the letter I sent you, Ryan?

March 2, 2017 | 04:32 PM - Posted by Anonymous (not verified)

I only watched the video, and was happy to see Ryan biting his tongue and giving an honest assessment of the processor instead of trolling as he usually does. I am looking forward to motherboard reviews to help me narrow down which one to buy (not to mention a desire for lots of vanity shots of the boards).

My only disappointment was the lack of benchmarks for the most recent top-end FX processor to contrast with the new CPU. As an AMD fanboy, I couldn't care less about Intel numbers.

March 2, 2017 | 05:56 PM - Posted by Allyn Malventano

I don't think it's considered trolling when there was simply nothing for AMD to compete with in that space for a number of years. We are more than happy to see them once again competitive on the CPU side.

March 2, 2017 | 07:12 PM - Posted by Moravid (not verified)

Does Ryzen Master not allow you to overclock cores individually?

March 2, 2017 | 07:40 PM - Posted by MarkT (not verified)

no, all or nothing

March 2, 2017 | 07:55 PM - Posted by Anonymous (not verified)

I can almost taste Master Chen's tears...maybe next time AMD. You were going to get my $$$.

March 2, 2017 | 08:50 PM - Posted by ddferrari (not verified)

So single-threaded performance is mediocre and this chip clearly lacks in the gaming department, but it still gets a gold rating? O_o

March 2, 2017 | 09:20 PM - Posted by StephanS

Because it's not 1999 anymore, and single-threaded apps are gone or going away?

Look at the plethora of real apps where the $330 R7 1700 smacks the $430 6800K.

The R7 1800X seems to deliver performance very close to a 6900K overall (even with games), for half the price.

March 2, 2017 | 11:35 PM - Posted by Anonymous (not verified)

Because it's great value, very efficient, delivers the smackdown in any real work, is incredibly tinkerer-friendly, and gives average people who are sick of getting their wallets raided for a powerful computer a new choice in the market?

Yeah, it's trash.

March 2, 2017 | 09:30 PM - Posted by Anonymous Nvidia User (not verified)

AMD used all the tricks they could to make Ryzen seem like it had gaming performance equal to Intel's. They sent emails telling sites to bottleneck the GPUs by testing at 1440p and 4K; look for this text under the 1440p benchmarks.

This was to serve as a "great equalizer". Others might call this cheating when employed by a company not named AMD.

March 4, 2017 | 12:33 AM - Posted by StephanS

So reviews should be made using a Titan X at 1280x720? You know, to remove even more of the GPU aspect when doing a GAMING evaluation...

This is dishonest reviewing, when a general conclusion is made from a single unrealistic configuration.

Even if the results are true, they don't reflect people's gaming experience.
At some point these stop being CPU/GPU gaming reviews and simply become CPU benchmarks: "CPU A can render at 280 FPS, while CPU B can only do 240 FPS... CPU A is the clear winner for PC gamers."

When in reality, both CPUs will show little to no difference.

Any review that shows gaming benchmarks with a GTX 1070 at 1440p?

March 4, 2017 | 08:01 PM - Posted by Anonymous Nvidia User (not verified)

Yes, a Titan X is more capable, but how about two? That's what's needed to push more than 60 fps at 4K. It gets more performance out of 1440p too, and shows more of the CPU's capability.

Yes, Ryzen is capable of maxing out any single video card made up to this point. An older i7 or a newer i5 can probably do this as well. Why limit the GPU by running at a higher resolution?

Most people gaming at 1080p or below won't be buying a $500 Ryzen.

It is a CPU review of its gaming competence. This processor is for an enthusiast build and had better be able to push frames.

There will be cheaper Ryzens coming down the pipe for more budget minded systems.

Most reviewers have a $1600 Intel for GPU benchmarks to not limit GPU performance. I agree this does not represent your average gamer in the least.

GPU bottlenecking, however, is akin to taking a Ferrari and a $20,000 car of any brand, putting a governor on the Ferrari to limit its speed to the cheaper car's maximum, equalizing everything else such as tires, running both on a track full of S-curves, and then saying the $20k car performs similarly to the Ferrari. Does that represent real-world performance?

Or, more simply put: rate a 20 gallon-per-minute pump against a 15 gallon-per-minute pump through the same 1-inch pipe and measure output after one minute; the results are going to be pretty similar. If you don't limit the size of the pipe, both pumps are allowed to reach their maximum.
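The pump-and-pipe analogy boils down to a simple rule: the measured frame rate is capped by whichever stage is slower. A minimal sketch, with hypothetical fps numbers chosen only for illustration:

```python
# Sketch of the pump-and-pipe analogy: the system delivers the minimum of
# what the CPU and GPU can each sustain, so a slow "pipe" (the GPU) hides
# the difference between two "pumps" (the CPUs).

def observed_fps(cpu_fps_cap: float, gpu_fps_cap: float) -> float:
    """Frame rate delivered when CPU and GPU each impose their own cap."""
    return min(cpu_fps_cap, gpu_fps_cap)

# Hypothetical numbers: CPU A can feed 180 fps, CPU B only 150 fps.
# With a GPU capped at 120 fps (high resolution), both CPUs look identical:
print(observed_fps(180, 120), observed_fps(150, 120))  # 120 120
# With a fast GPU at low resolution (360 fps cap), the CPU gap shows:
print(observed_fps(180, 360), observed_fps(150, 360))  # 180 150
```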

This is what all reviews of any computer product do. They can't possibly use a whole slew of processors, video cards, and tons of game/application benchmarks. If they bench a game you have with the new processor or video card, you can compare your system's results and see whether an upgrade is worth it.

Or, it's the internet age: if you look hard enough, you'll usually find what you need somewhere.

March 2, 2017 | 09:41 PM - Posted by Mr_raider (not verified)

Any chance you can run folding @ home benchmarks. Frogs want to know!

March 2, 2017 | 10:19 PM - Posted by Anonymous (not verified)

For those concerned about 1080p gaming performance, this video should alleviate some of your concerns; hopefully PCPer can replicate these stellar results in future Ryzen reviews.

March 3, 2017 | 02:28 AM - Posted by Martin Trautvetter

How is this supposed to alleviate anything when the results are clearly GPU-limited?

March 4, 2017 | 12:19 AM - Posted by StephanS

Limited? You mean real-life gaming, as with 99% of gaming PCs out there. Look at the GPU spec used...

Using a $700 GPU at 1080p (now $500 with the NVIDIA price drop) is not indicative of people's gaming experience.

It's like recommending an i7-7700K for a GTX 970 when an i5-7600K will be just as good and cost $100 less.

Also, choosing a CPU with half the cores because you can get 180 fps vs 150 fps at 720p is insane when your gaming rig is 1440p.

Frankly, I see a lack of objectivity in those reviews' conclusions regarding gaming. You can't make a general conclusion when you only test games on a $700 (now $500) GPU at 1080p.

I can't wait for some "pro" review site to take a 1080 Ti, do an entire Ryzen gaming review at 1600x900, and then make a general conclusion about Ryzen and modern gaming... sigh.

March 4, 2017 | 06:24 AM - Posted by Martin Trautvetter

Yes, GPU-limited.

And as such, it's completely meaningless as an effort to "alleviate some [...] concerns" about Ryzen's performance, as the OP put it.

March 3, 2017 | 02:26 AM - Posted by nick (not verified)

THE CACHE SPECS ARE WRONG IN THE ARTICLE!!!! The 8-core parts are a dual-module setup: two 4-core modules, each with 8 MB of L3 cache, so 2x8 = 16 MB, not 20. The 6-core SKUs will be two 4-core modules (the same as the 8-core) with one core from each module deactivated, along with the corresponding L2 cache for each disabled core; however, the L3 cache will remain at 16 MB.

March 3, 2017 | 02:38 AM - Posted by nick (not verified)

Well, oops on the double post; Chrome decided to be dumb.
And I guess you're counting L2+L3, which is a tad atypical, but it's not like it doesn't happen, so I will amend my statement. Pardon me.

March 3, 2017 | 02:48 AM - Posted by Martin Trautvetter

Adding L2 and L3 is based on the latter being implemented as a victim cache, i.e. not duplicating any of the former's content. It's also how AMD markets Ryzen.
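The 16 MB vs 20 MB figures in this thread reconcile once L2 is counted alongside the L3 victim cache: each of the 8 cores carries 512 KB of private L2, and each 4-core module has 8 MB of L3. A quick check of the arithmetic:

```python
# Reconciling the cache totals for the 8-core Ryzen 7 parts:
# 8 cores x 512 KB private L2, plus two 4-core modules x 8 MB L3.
# Because the L3 is a victim cache, its contents don't duplicate L2,
# so AMD adds the two tiers together in its marketing figure.

cores = 8
l2_per_core_kb = 512
modules = 2
l3_per_module_mb = 8

total_l2_mb = cores * l2_per_core_kb / 1024   # 4.0 MB of L2
total_l3_mb = modules * l3_per_module_mb      # 16 MB of L3
print(total_l2_mb + total_l3_mb)              # 20.0 -> the "20 MB cache" figure
```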

March 3, 2017 | 09:48 AM - Posted by Alamo

I was waiting for Josh's Ryzen review, but it turned out to be Ryan's... WHY?

March 5, 2017 | 03:47 AM - Posted by Anonymous (not verified)

Exactly what you wanted GUARANTEED!!!

March 3, 2017 | 02:02 PM - Posted by Anonymous (not verified)

I ran 7970 CrossFire with an FX-8320 @ 4.9 GHz on a 1080p monitor running at 2880x1620 for a few years, and was very happy with that setup. The R7 1700 looks like it's light-years ahead of the 8320; these are the best CPUs we have seen from AMD in, like, forever, even with this so-called 1080p issue.

I would love to see an SLI/CrossFire review.

March 3, 2017 | 05:19 PM - Posted by Anonymous Nvidia User (not verified)

AMD probably would prefer you not see multi-card performance. AMD is limited to x8/x8 PCI Express 3.0 (X370 chipset only); it is not x16/x16 PCI Express 3.0 like the upper-end Intel platforms. Ryzen only has 20 lanes available, with 4 reserved, so SSD speeds might be impacted if you have more than one card.

The benchmarks at lower resolutions show the raw frame throughput of the CPUs. The Intel CPUs were much better when not GPU limited.

Ryzen fans want to show only 1440p and 4K results. I agree, except they should do it on two Titan X Pascal cards and see who gets the higher frame rates. My money would be on the Intel.

But multi-card is dead, they say; DX12 is the future. As long as the abomination that is DX12 exists, so will DX11. DX11 just works, with no extra programming required.

March 3, 2017 | 07:49 PM - Posted by GB (not verified)

Good review, and well balanced.

I've read a few reviews around the web, and they seem to point to motherboard/chipset makers having a bit of catching up to do.

Compatible memory selections are currently limited and a touch confusing.

So time will tell... I would like to see some reviews revisited once v4 or v5 of the motherboard BIOSes are out.

That said, I really wish some reviewers had used an AMD GPU instead of NVIDIA...

Cheers, and thanks for the review.

March 4, 2017 | 12:45 AM - Posted by StephanS

Some questions I still have:

- What is the effect of RAM speed on the R7?

- What is the gaming experience at 1440p with a GTX 1070-class card?

- Are AMD GPU drivers better optimized for Ryzen or for Intel CPUs?

- How does the R7 1700X perform with 4 cores disabled?

March 4, 2017 | 07:20 AM - Posted by mAxius

Ryan, great coverage as always. I caught this: it looks like Ryzen is suffering from scheduling issues, and it's being addressed.

March 4, 2017 | 12:00 PM - Posted by zgradt

Hey, my 1700X came in today! Still no word on the motherboard I pre-ordered, though. Such BS. And now I'm second-guessing those two sticks of DDR4-3000 memory I got weeks ago. From what I've read, Bristol Ridge on AM4 is only rated for DDR4-2400, which probably should have tipped me off.

You know, since they already had Bristol Ridge CPUs on the AM4 socket, you would think they would have released the X370 boards ahead of the Ryzen launch, because at least then you could still use them with the older CPUs. Not to mention the AM4 adapter bracket for my 212 EVO, which I don't think has even shipped yet.

A lot of balls dropped, IMO. Very frustrating.

March 7, 2017 | 03:44 PM - Posted by Evan K (not verified)

So the AMD-to-Intel disparity in gaming apparently comes from devs optimizing for Intel's architecture due to their market dominance (or so AMD says). The question I have is: with the Ryzen hype being what it is, and the launch being as seemingly successful as it is, will devs work in some AMD optimization?

I know a lot of devs already see the PC port as an afterthought to their console development cycle, and in those cases I doubt the extra effort will be put in. But for the devs that really want to give a solid PC experience to their customers, I wonder if they'll try to better optimize their titles, and what level of difficulty that optimization will entail.

March 15, 2017 | 07:27 PM - Posted by Anonymous (not verified)

I am no fan of Intel, although I have a bunch of their super-expensive Xeon v4s, ranging from $2K to $4K per CPU... business needs.

I am not a big fan of AMD either; I was hoping they would go the IBM way: server and workstation market, multi-socket. I respect their graphics cards; the math is fast.

So, no bias on either side... I wish I could use IBM processors, but I'll have to wait for an open RISC architecture for that, unless I sell kidneys to buy 400 W IBM POWER CPUs.

Ryzen is well positioned in the average and higher consumer segments. AMD did a good job here, and I am glad people will have the chance to buy a decent CPU for a cheap price.

It was time that someone told Intel to buzz off with their overpriced CPUs (yeah, the top Xeon costs $4K), and finally office computers can use cheap AMD chips and have a better experience for the same price as a Celeron (yes, regular office computers usually have Celerons, if you didn't notice; that crap is still on the market, even using DDR4... WTF is Intel doing?).

Game over, Intel, with your Celerons and low-segment CPUs...

March 26, 2017 | 04:12 PM - Posted by Mr.Gold (not verified)

I finally got my 1800X running (I waited for the ASUS X370 Prime).

And my experience is... it's not worth overclocking.

Since it boosts all 8 cores to 3.7 GHz even in heavy apps like Cinebench, going to 4 GHz gives you an 8% boost at best.

Now, if that were "free" it would be fine, but:

a) you lose XFR
b) voltage ramps high (1.4 V @ 4 GHz on a chip that big is massive)
c) the P-states seem messed up on the ASUS board with manual clocking
(voltage/clock don't seem to scale; it's always at max)

So I decided to leave it all untouched and simply undervolt by 500 mV.

Result: the chip runs "cold" under Prime95 Small FFT,
and I still get my 4.1 GHz clock in single-threaded work.
(I have a Corsair H100 AIO set on silent mode; the fans barely spin and I peak at 45C in Prime95 Small FFT.)

But if I do a manual overclock to 4 GHz, I need 1.4 V, and the temps and wattage skyrocket. The fans don't ramp fast enough, so I need to bump up the cooling profile...

For a maximum 8% boost, at best, it's not worth it. At all.

The only thing I haven't done is update the BIOS (still from February), so I'm stuck at 2133 MHz RAM.

I also built another PC on the R7 1700... this one was overclocked to 3.7 GHz on an MSI Tomahawk. Amazingly, the stock cooler is super quiet under stress testing. And it looks amazing.
(I rotated the cooler so the AMD logo isn't sideways; it looks much better in a Corsair Crystal 460X case.)

Anyway, if I had known all that I know now, I would have gotten two R7 1700s with B350 motherboards.
The XFR on the 1800X is not worth it, and the ASUS X370 Prime is no better than the MSI Tomahawk.

The R7 1700 at 3.7 GHz (8% shy of 4 GHz) is where the sweet spot is.
And its stock cooler, with its copper core, is amazingly silent.
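The "8% at best" figure in the comment above follows directly from the clock ratio, assuming perfect frequency scaling (the real gain in most workloads would be smaller):

```python
# Quick check of the "8% at best" overclocking claim: going from an
# all-core boost of 3.7 GHz to a manual 4.0 GHz overclock.

def speedup_pct(before_ghz: float, after_ghz: float) -> float:
    """Best-case performance gain, assuming performance scales linearly with clock."""
    return (after_ghz / before_ghz - 1) * 100

print(round(speedup_pct(3.7, 4.0), 1))  # 8.1 -> roughly the 8% cited
```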

September 19, 2017 | 03:37 PM - Posted by deksman2 (not verified)

So, did it ever occur to the reviewer that the somewhat slower performance in some software (games included) is actually due to poor optimization?
The industry has spent the last decade or so specifically optimizing for Intel.
Ryzen is fairly new by comparison, but it has already demonstrated up to a 30% increase in performance through simple patches in games.

Audacity, and much other software like it, is not optimized for the Ryzen architecture.
Such programs take advantage of every possible trick in Intel's hand, and yet barely any of it actually benefits Ryzen performance-wise.

Plus, the Infinity Fabric in Ryzen is sensitive to RAM speeds.
2400 MHz RAM is simply inadequate for Ryzen... 3000 MHz would be better, as it would raise performance by about 10%.

Beyond RAM speeds, software optimizations are required to take advantage of Ryzen's capabilities.
Otherwise, you might as well be comparing apples and oranges right now.

It actually shows that Ryzen, via 'brute force', is highly competitive with all of Intel's products... just imagine what might happen if developers actively optimize for Ryzen. Of course, this will probably take time, as devs usually optimize for the hardware they are paid to optimize for, and as we know, both Intel and NVIDIA have deep pockets to sway devs to support their own hardware specifically and make AMD look bad (when in reality, it's anything but).

