AMD responds to 1080p gaming tests on Ryzen

Subject: Processors | March 2, 2017 - 11:29 AM |
Tagged: amd, ryzen, gaming, 1080p

One of the most interesting and concerning findings from today's launch of the AMD Ryzen processor is gaming performance. Many other reviewers have seen results similar to what I published in my article this morning: gaming at 1080p, even at "ultra" image quality settings, shows a deficit in many top games compared to Intel Kaby Lake and Broadwell-E processors.

I shared my testing results with AMD over a week ago, trying to get answers and hoping to find some instant fix (a BIOS setting, a bug in my firmware). As it turns out, that wasn't the case. To be clear, our testing was done on the ASUS Crosshair VI Hero motherboard with the 5704 BIOS, and any reports claiming that the deficits only existed on ASUS products are incorrect.


AMD responded to the issues late last night with the following statement from John Taylor, CVP of Marketing:

“As we presented at Ryzen Tech Day, we are supporting 300+ developer kits with game development studios to optimize current and future game releases for the all-new Ryzen CPU. We are on track for 1000+ developer systems in 2017. For example, Bethesda at GDC yesterday announced its strategic relationship with AMD to optimize for Ryzen CPUs, primarily through Vulkan low-level API optimizations, for a new generation of games, DLC and VR experiences.

Oxide Games also provided a public statement today on the significant performance uplift observed when optimizing for the 8-core, 16-thread Ryzen 7 CPU design – optimizations not yet reflected in Ashes of the Singularity benchmarking. Creative Assembly, developers of the Total War series, made a similar statement today related to upcoming Ryzen optimizations.

CPU benchmarking deficits to the competition in certain games at 1080p resolution can be attributed to the development and optimization of the game uniquely to Intel platforms – until now. Even without optimizations in place, Ryzen delivers high, smooth frame rates on all “CPU-bound” games, as well as overall smooth frame rates and great experiences in GPU-bound gaming and VR. With developers taking advantage of Ryzen architecture and the extra cores and threads, we expect benchmarks to only get better, and enable Ryzen excel at next generation gaming experiences as well.

Game performance will be optimized for Ryzen and continue to improve from at-launch frame rate scores.” John Taylor, AMD

The statement begins with Taylor reiterating AMD's momentum in supporting developers from both a GPU and a CPU technology angle. Getting hardware into the hands of programmers is the first and most important step to finding and fixing any problem areas Ryzen might have, so this is a great move to see taking place. Both Oxide Games and Creative Assembly, developers of Ashes of the Singularity and Total War respectively, have publicly stated their intent to demonstrate improved threading and performance on Ryzen platforms very soon.

Taylor then acknowledges the performance concerns at 1080p, attributing the deficits to years of optimization for Intel processors. It's difficult, if not impossible, to know how much weight this argument carries, but it makes some logical sense. Intel CPUs have been the automatic, de facto standard for gaming PCs for many years, and performance optimization and development would have been done on those same Intel processors. So it seems plausible that simply seeding Ryzen to developers, and having them watch its performance as development goes forward, would result in a positive change for AMD's situation.


For buyers today that are gaming at 1080p, the situation is likely to remain as we have presented it going forward. Until games get patched or new games are released from developers that have had access and hands-on time with Ryzen, performance is unlikely to change from some single setting/feature that AMD or its motherboard partners can enable. 

The question I would love answered is why this is happening at all. What architectural difference between Core and Zen is contributing to this delta? Is it fundamental to the pipeline design, to the caching structure, or to how SMT is implemented? Do Windows 10 and its handling of kernel processes have something to do with it? There is a lot to figure out as testing moves forward.

If you want to see the statements from both Oxide and Creative Assembly, they are provided below.

“Oxide games is incredibly excited with what we are seeing from the Ryzen CPU. Using our Nitrous game engine, we are working to scale our existing and future game title performance to take full advantage of Ryzen and its 8-core, 16-thread architecture, and the results thus far are impressive. These optimizations are not yet available for Ryzen benchmarking. However, expect updates soon to enhance the performance of games like Ashes of the Singularity on Ryzen CPUs, as well as our future game releases.” - Brad Wardell, CEO Stardock and Oxide

"Creative Assembly is committed to reviewing and optimizing its games on the all-new Ryzen CPU. While current third-party testing doesn’t reflect this yet, our joint optimization program with AMD means that we are looking at options to deliver performance optimization updates in the future to provide better performance on Ryzen CPUs moving forward." – Creative Assembly, Developers of the Multi-award Winning Total War Series

Source: AMD


March 2, 2017 | 11:47 AM - Posted by remc86007

I don't understand why "gaming" synthetics like 3DMark show it performing in line with what one would expect given the multithreaded application performance, but actual games are much lower.

Is Windows not spreading the load between cores properly?

March 2, 2017 | 12:11 PM - Posted by Anonymous (not verified)

RyZEN price/performance is great for game developers so it will be sorted out soon.

March 2, 2017 | 01:35 PM - Posted by derFunkenstein

So many games rely on a relatively low number of "main" threads that do all the work. I have a feeling in most games today, Ryzen 7 CPUs have cores that just sit idle. Meanwhile, Futuremark is living in the future and using all the resources the system can give.

It was never a surprise that Ryzen was still not totally caught up to Intel on a single-thread basis. Just look at AMD's own published single-core speeds for rendering tasks on the 1800X and compare those to the current gaming champ, the 7700K. Those single-core synthetics are more relevant to current-day games than what's coming down the pipeline.
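The idle-cores point above can be put in rough numbers with Amdahl's law. This is a minimal sketch with illustrative parallel fractions, not measured figures from any real game or benchmark:

```python
# Amdahl's law: best-case speedup from n cores when only a fraction p
# of the frame's work can run in parallel. Numbers are illustrative.

def amdahl_speedup(p: float, n: int) -> float:
    """Best-case speedup with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# A synthetic benchmark written to scale (p ~ 0.95) keeps 16 threads
# busy; a game dominated by one main thread (p ~ 0.30) barely benefits.
for p in (0.95, 0.30):
    for n in (4, 8, 16):
        print(f"p={p:.2f} n={n:2d} speedup={amdahl_speedup(p, n):.2f}x")
```

With p = 0.95, sixteen threads give roughly a 9x speedup; with p = 0.30, the same sixteen threads give barely 1.4x, which is why single-core results track today's games more closely than the synthetics do.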

March 2, 2017 | 03:27 PM - Posted by CNote

I'm waiting for the 4/4 or 4/8 parts to be the gaming chips; these are mostly overkill.

March 3, 2017 | 07:37 AM - Posted by Easy815 (not verified)

It's gonna be harder for an 8-core processor to reach the same clock speeds as a 4-core due to thermal issues. The single-thread thing is getting pretty old. Simple way to look at this is: Ryzen is really really fast. How fast? Really really fast. Ryzen has far exceeded what Intel has shown it is capable of with its own multicore designs. Take a look at the various Xeons out there: more cores mean lower clocks. Want to see single-threaded performance? Disable 7 cores and SMT on Ryzen 7 and then run the benches. ikr? Ridiculous. Do people really believe what is being said about Ryzen's gaming performance? Because if they do, they're basically saying that Ryzen 3 will be the ultimate AMD gaming platform. But that's ridiculous too, right? The fact is, Ryzen is packed with value and delivers a solid kick right to Intel's cojones. Poor performance in various games and excellent performance in others indicates that it is not an architectural problem with Ryzen, but simply software bugs that are the reason for the erratic performance. Putting up buggy benches for people to mull over is pretty underhanded, but hey, Intel's in an existential fight right now; it was never a surprise that they'd fight dirty.

March 3, 2017 | 07:57 AM - Posted by Martin Trautvetter

Among the applications showing significantly sub-par performance by Ryzen is AMD's very own poster-child "game" AotS.

March 3, 2017 | 06:43 PM - Posted by Anonymous (not verified)

"Oxide games is incredibly excited with what we are seeing from the Ryzen CPU. Using our Nitrous game engine, we are working to scale our existing and future game title performance to take full advantage of Ryzen and its 8-core, 16-thread architecture, and the results thus far are impressive. These optimizations are not yet available for Ryzen benchmarking. However, expect updates soon to enhance the performance of games like Ashes of the Singularity on Ryzen CPUs, as well as our future game releases.” - Brad Wardell, CEO Stardock and Oxide

Why don't you take it up with Brad Wardell? You always appear to be butthurt and taking it out on AMD! But Ryzen is very new technology, and it takes a while for regressions to be bisected and/or even to get the optimizing done before a new CPU micro-arch is on the market.

A lot of software development starts early on simulated/emulated CPU cores running in engineering modeling software on large server clusters, then moves onto engineering samples which may not have all the bugs fixed, with very little time available to get every software fix in place before the final RTM product is ready to be sold.

More tweaking is in order for Zen/Ryzen, and no brand-new or even older CPU SKU is ever a totally complete project as far as software/firmware/drivers are concerned; the tweaking happens over the lifetime of the CPU product until the hardware is declared legacy and only the necessary security-related fixes are maintained. Ditto for GPUs and other types of processors!

It's strange that everybody wanted Ryzen on the market last year and were crying when it was not, and now that AMD has 3 Ryzen SKUs available folks are still not happy, but that is not all AMD's fault.

Get over yourself! AMD is back in the game, and the RTM Zen/Ryzen IPC gains came in 12 points above the 40% that they initially promised over AMD's previous-generation CPU micro-arch. That's a 52% IPC gain for Zen/Ryzen over the Excavator micro-arch. Take that AotS issue that you have up with Brad!

March 4, 2017 | 06:32 AM - Posted by Martin Trautvetter

AMD had many months to evaluate Ryzen's performance before launch. AMD knew of their deficient performance in non-GPU-limited gaming scenarios. AMD is probably the major reason this application even exists, they have had a working relationship with the developer for years.

This is the absolute best-case scenario.

Yet AMD didn't even get these developers to optimize for Ryzen before launch. How will they fare elsewhere?

March 2, 2017 | 01:50 PM - Posted by Anonymous (not verified)

The thing is that actual games rarely have the kind of proper multi-threading you see in synthetic benchmarks, which were intentionally written to test under ideal circumstances.
It's difficult to parallelize time-critical components, so most developers let the heavy lifting be done by one or two threads while the others handle lighter loads. It's simpler and faster, and therefore costs a lot less to implement.
This is something software designers have to do on purpose. Windows and APIs like DirectX or Vulkan don't magically rewrite the code to fix this.

You often hear that game developers will catch up with the trends in CPUs and that good multithreading and optimizations are just around the corner. Something that's been repeated for over a decade now with nominal improvements.

Of course this can change in the future as there are few but very valid examples of how multithreading can be beneficial for the performance of a game. We'll have to see how it turns out.
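The point that parallelism has to be designed in deliberately can be shown with a toy sketch. The workload here is hypothetical, and in CPython these threads don't execute bytecode truly in parallel because of the GIL; the sketch only illustrates that spreading work across workers is an explicit decision the developer must make, not something the OS or API does for them:

```python
# Toy illustration: no API parallelizes your loop for you; the
# developer must partition the work explicitly. Hypothetical workload.
from concurrent.futures import ThreadPoolExecutor

def update_entity(e: int) -> int:
    # stand-in for per-entity game logic
    return e * e % 97

entities = list(range(10_000))

# "Main thread" style: everything runs serially on one thread.
serial = [update_entity(e) for e in entities]

# Explicitly partitioned across worker threads by the developer.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(update_entity, entities))

assert serial == parallel  # same results; only the scheduling differs
```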

March 3, 2017 | 05:37 AM - Posted by John L (not verified)

While true, I would expect the performance gap to be largely the same as the IPC difference if this indeed was the case.

It's possible that this difference is because of well-optimized middleware that favors Intel. Like, say, Havok...

But regardless what the case is, I think it's important to get to the bottom of this and not write it off as something AMD is solely at fault for.

March 2, 2017 | 09:11 PM - Posted by Anonymous (not verified)

Exactly! and that is exactly why John Taylor's statement makes so much sense. Games are presently optimized for Intel processors but that will soon be a thing of the past.

I am also tired of hearing the argument that Ryzen is slower than the 7700K in single-thread performance... of course it is! And so are Intel's very own 6-, 8- and 10-core chips when compared to their own 7700K!

Quite simply, what you have with Ryzen is performance on par with Intel chips (8 cores) costing twice as much, and at a much lower TDP! It is a game changer (pardon the pun)... they will soon be on par in gaming as well due to the collaborations with game makers... even the makers are saying they are impressed by the strides made as they optimize the code!

I expect they will even surpass Intel performance for gaming quite soon... expect patches for games to start coming out. They will start with Vulkan optimization but then will go deeper... let's talk again in 3 months, ok?

Please remember that, besides all this, we have AMD to thank for Intel slashing the prices on its processors in response. Even if you are an Intel fanboy you should still send a thank-you letter to Lisa Su.

March 3, 2017 | 02:03 AM - Posted by renz (not verified)

it's okay to stay optimistic but don't think everything will change in the short term. yes there will be game patches, but how many game developers will care outside triple-A titles? remember, pushing this "optimization patch" is not a free effort for a game developer. i doubt AMD has the money to pay the thousands of game developers out there to make sure every game can properly take advantage of their new CPU. for whatever it's worth, it's better to keep expectations in check. i still remember the bulldozer patch for windows 7. some people expected that magical patch would somehow make bulldozer on par with intel sandy.

March 3, 2017 | 06:27 AM - Posted by Anonymous (not verified)

Whatever you say AMD fanboi ...

just dont pop out a vein :)

March 2, 2017 | 11:48 AM - Posted by Anonymous (not verified)

There is no fully AMD-specific compiler-optimized games/software/OS/API ecosystem for the Zen/Ryzen and AM4 motherboard hardware at this point in time. So it's logical to assume that it will take some time after release, as Zen/Ryzen systems reach the general market in numbers, for a fully optimized Zen/Ryzen OS/API and gaming/benchmarking software ecosystem to catch up to Intel in this regard.

Intel has had market dominance all to itself for so long that this cannot be helped this early in the Zen/Ryzen/AM4 release, which only started today.

March 2, 2017 | 11:41 PM - Posted by Anonymous (not verified)

This guy gets it.

March 2, 2017 | 12:00 PM - Posted by zme-ul

yes, yes .. blame Intel again, AMD

but wait! weren't those same compilers used for the software where Zen showed itself to be better than Broadwell - yeah, that's what I thought

March 2, 2017 | 12:08 PM - Posted by Anonymous (not verified)

GPU drivers?

March 2, 2017 | 09:14 PM - Posted by Anonymous (not verified)

No... Ryzen underperformance is only at lower resolutions like 1080p, but at 4K (which is what the demo was at) the performance is on par with and even exceeds Intel. Check it out for yourself if you don't believe me...

March 3, 2017 | 05:54 AM - Posted by Anonymous (not verified)

That's just because at 4K the GPU is the bottleneck. Don't get me wrong, I believe those CPUs are very good, but the 4K benchmarks are not meaningful.

March 2, 2017 | 12:06 PM - Posted by John H (not verified)

The VR portion of the [H] review is pretty damning for Ryzen on gaming. 2600K @ 4.5 GHz --> 4.1 GHz 8 core Ryzen is often a downgrade in frame latency.

I really wish some technology would come along that would allow scaling of CPU performance like we saw in the 1990s..

March 2, 2017 | 12:06 PM - Posted by Anonymous (not verified)

"Intel CPUs have been the automatic, defacto standard for gaming PCs for many years, and any kind of performance optimizations and development would have been made on those same Intel processors."

Ryan, your statement does not say much about the sordid lack of advancements made by Intel in graphics devices. Intel i965 graphics are not much better than Intel i915 graphics or even more ancient Intel graphics devices.

Ryan, news for you: Intel IS NOT an industry leader in PC graphics. So why bother testing their schlock.

REAL GAMERS don't use the integrated GPU in Intel CPUs. Only LAMERS think they can get good gaming performance using the integrated GPUs of Intel CPUs. Srsly bro.

March 2, 2017 | 12:28 PM - Posted by Anonymous (not verified)

irispro6200/amd a10 masterrace! intel doesnt even hv a ip6200 eqv anymore, with ryzen doa, (thts wht amd gets for charging highend i7 prices lol, and to think we were all hoping for a rx480 miracle[before 1060 came and ruined the party]where prices for 390/970 dropped like rocks) im hoping at least raven ridge will kicks intels igpu to the curb in laptops, cause even sandy is overkill for anything a office worker needs, and i doubt raven will be atom bad, so wht else u gonna do but play aaa 5yroldgames on ure 6hr business flight?

March 2, 2017 | 07:40 PM - Posted by Jimmy J (not verified)

Soooooo, you'd rather have an Intel monopoly, charging 1700 bucks for a CPU that, when you get down to brass tacks, is 15% faster than a 500-buck CPU? In game, it's not even NOTEWORTHY. I game with an FX 8350, water cooled at 4.4 GHz; I play games in 1080p and they are silky smooth. I just ordered the Ryzen 1700X, and I'm sure when I'm done tweaking the BIOS, I'll be much happier having paid for infrastructure (socket AM4) that will be around for at least 4 years and myriad processors. The reviews were actually AWESOME for AMD, considering where they were with the last CPU iteration. Wake up, we need a STRONG AMD, because without one, Intel will continue doing what Intel has done for the last 10 years: SCREW us. The company has zero integrity and is still paying for its misconduct 16 years ago. I wouldn't pee on them on the side of the road if they were on fire............

March 2, 2017 | 12:59 PM - Posted by Ryan Shrout

I wasn't referring to integrated graphics in this statement. It's about CPU optimizations, yo.

March 2, 2017 | 02:21 PM - Posted by Mike S. (not verified)

Word, Ryan.


March 2, 2017 | 03:22 PM - Posted by Anonymous (not verified)

The ef are you talking about??!? He's talking about CPUs and you're here yapping about Intel gpus and acting smart... Just lol. Read what was written in the article again, please. Do yourself a favour.

March 2, 2017 | 12:22 PM - Posted by Anonymous (not verified)

Rypoo is garbage and loses most of the benchmarks as seen today all across the Internet.

March 2, 2017 | 12:37 PM - Posted by Anonymous (not verified)

them i7 prices and new benchies show then, we now know tht, amd is no dreamer-ryzen is pee, finalfantasy8 main theme. never bet on rice, always bet the lake. the only thing tht will save ryzen now, apart from alanwake2, is a new redfaction game bundle #betterredthandead warning poor volta

March 2, 2017 | 07:41 PM - Posted by Jimmy J (not verified)

You're an IDIOT, learn English, at least SOUND intelligent.

March 2, 2017 | 09:25 PM - Posted by Anonymous (not verified)

What benchmarks were you looking at? And what the hell were you snorting? Ryzen was only surpassed in SOME games and ONLY at 1080p; it totally trashed the 7700K in Blender, Premiere Pro, Cinebench R10, R12 and R15 and any application that is multithreaded... and in those apps it was on par with Intel's 6900K... tell you what? You go buy 100 6900Ks and I will go buy 200 Ryzen 1800Xs and we will both set up render farms and see who goes out of business first! Stop sniffing glue dude... seriously!

March 2, 2017 | 12:29 PM - Posted by Edkiefer (not verified)

Try disabling SMT; it seems Ryzen has high overhead like old Intel chips with regard to SMT, where in games you don't need 16 threads.

March 2, 2017 | 12:31 PM - Posted by Anonymous (not verified)


March 2, 2017 | 01:00 PM - Posted by Ryan Shrout

From what I've been told there is in fact a gain from that, but not enough to make up the deltas we are seeing here.

And I would guess buyers and gamers would be disappointed if they had to give up multi-threaded performance just to slightly improve gaming results.

March 2, 2017 | 01:50 PM - Posted by Edkiefer (not verified)

Yes, my statement wasn't to say that it's a useful fix, just an observation, and now I see other sites mentioning it.
It gets you about 50% of the way to where it should be, IMO.

It is still down a ways on some games.

More testing and time might give clue to whats happening.

March 2, 2017 | 03:05 PM - Posted by Michael K (not verified)

LoL, go look at Gamers Nexus review. They test with SMT enabled and disabled, and it was a single digit increase, like 2-4 fps, with it disabled, when it helped, not a 50% increase.

March 2, 2017 | 03:33 PM - Posted by Edkiefer (not verified)

hehe, I knew someone would misunderstand what I wrote.
What I meant was: say it was behind 10 fps (clock for clock, per core); with SMT disabled it closed 4-5 fps of that, so midway. It's still down.

March 2, 2017 | 10:18 PM - Posted by John H (not verified)

It's an extra step or two but processor affinity could be used for games that don't like SMT "cores" while keeping SMT on for everything else.

Would like to see more detailed OC with 2-4 cores at a higher speed than all 8.
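The affinity approach above can be sketched in a few lines. This assumes, as is common on Windows, that logical CPUs pair up as physical core then SMT sibling in even/odd order; that numbering is platform- and firmware-specific, so verify it on your own system before relying on it:

```python
# Build a CPU affinity mask covering only the first logical CPU of
# each physical core, assuming even/odd core/SMT-sibling pairing.
# (Logical CPU numbering varies by OS/firmware; check your system.)

def physical_core_mask(logical_cpu_count: int) -> int:
    mask = 0
    for cpu in range(0, logical_cpu_count, 2):  # skip SMT siblings
        mask |= 1 << cpu
    return mask

mask = physical_core_mask(16)  # an 8C/16T part like a Ryzen 7
print(hex(mask))               # bits set for CPUs 0, 2, 4, ..., 14

# Applying the mask is OS-specific, for example:
#   Windows: start /affinity 5555 game.exe
#   Linux:   os.sched_setaffinity(pid, {0, 2, 4, 6, 8, 10, 12, 14})
```

This keeps SMT on system-wide for everything else while steering a single SMT-averse game onto physical cores only.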

March 2, 2017 | 02:26 PM - Posted by Martin Trautvetter

Wait, am I understanding this correctly? AMD didn't even get their pet project (I refuse to refer to it as a game) optimized in time for their most important CPU launch in history?

And we're supposed to believe that they'll manage to convince studios to optimize games that are already out or deep into development?

March 2, 2017 | 07:29 PM - Posted by Anonymous (not verified)


March 3, 2017 | 02:58 AM - Posted by Martin Trautvetter

Thank you! :)

March 2, 2017 | 02:33 PM - Posted by StephanS

Who buys a $300+ CPU for 1080p gaming?

When I see "game" benchmarks at 1080p or 720p, I can't consider those reflective of real gaming. To me those are "synthetic" CPU tests.

Also, since I'm not a pure gamer, the 1700 seems like the better choice over the i7-7700K. Same price, within 10% in gaming at 1440p, but so much faster in productivity apps (compilers / Adobe / ...).

Even if I only did gaming... the i7-7700K might not age well.
By that I mean the 10% advantage today might turn into a 10% disadvantage a couple of years from now.

My Q6600, for example, was slower at gaming than the X6800 in 2007, but now the X6800 is obsolete while the Q6600 is still usable (dips to 50 fps in Overwatch at 1440p, but it's playable); a Core 2 Duo can't even boot some games at all.

Anyway, if you only do 1080p gaming and plan to upgrade in 2 years, I think the i5-7600K is a great option.

But if you do more than 1080p gaming and plan to keep this PC >2 years... I'm not sure Intel offers any good options (unless you spend >$500).

March 2, 2017 | 03:02 PM - Posted by Anonymous (not verified)

What kind of question is that?
It's a well-known trick to lower the resolution of a game if you want to find out whether your bottleneck is the CPU or the GPU, because CPU load hardly scales with the resolution used. And that's basically what they did in these tests.

Or in other words, what you saw in those AMD-released gaming benchmarks was mostly the performance of the graphics card, not the CPU.
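A toy model makes that diagnostic concrete (all figures invented for illustration): the delivered frame rate is capped by whichever of the CPU or GPU is slower, and only the GPU cap scales with resolution, so dropping to 1080p exposes the CPU cap:

```python
# Toy bottleneck model: delivered frame rate = min(CPU cap, GPU cap).
# The CPU cap is roughly resolution-independent; the GPU cap is not.
# All numbers are made up for illustration.

CPU_CAP_FPS = {"fast_cpu": 160, "slow_cpu": 120}
GPU_CAP_FPS = {"1080p": 200, "1440p": 130, "4k": 60}

def delivered_fps(cpu: str, res: str) -> int:
    return min(CPU_CAP_FPS[cpu], GPU_CAP_FPS[res])

for res in ("1080p", "1440p", "4k"):
    gap = delivered_fps("fast_cpu", res) - delivered_fps("slow_cpu", res)
    print(f"{res:>5}: CPU-to-CPU gap = {gap} fps")
```

In this sketch the 40 fps gap visible at 1080p shrinks to 10 fps at 1440p and to zero at 4K, even though the CPUs themselves haven't changed.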

March 2, 2017 | 03:15 PM - Posted by StephanS

Correct, and one could argue that the AMD benchmarks are more reflective of the REAL gaming experience people have at >1080p.

My point is, 1080p benchmarks are only useful for 1080p gamers.

Because, as you stated, the GPU quickly becomes the limiting factor at higher resolutions.

So I'm not saying those results are false. Just that people who are going to buy a $500 CPU are NOT going to game at 1080p.

And if games become GPU-limited at 1440p, it's possible that the Intel advantage almost completely evaporates.

I don't think I have seen one review yet that shows CPU/GPU scaling with Ryzen.

March 2, 2017 | 03:30 PM - Posted by Anonymous (not verified)

You're missing the point entirely.

If you looked at 4K benchmarks with a GPU that's not meant to render those resolutions, the differences between CPUs start to vanish. An AMD FX-6350 will appear to be a very good option in these cases. Why waste $500 on an 1800X when the difference from a $130 FX-6350 is only a few FPS, right?

I hope you're starting to see what I'm getting at. The benchmarks distort the reality of the hardware's capabilities. Whether it was done deliberately I can't say. If you want to test a CPU at high resolutions you go for something like a GTX 1080, or even SLI GTX 1080s. But I suppose AMD didn't want to use a competitor's graphics card and went with their own, which is a suboptimal choice in this case.

March 2, 2017 | 03:52 PM - Posted by StephanS

No, it's you who is arguing a different point.

1080p gaming benchmark results are only applicable to 1080p gaming, or used as a synthetic CPU-only benchmark.

If you game at 1440p (or 4K), those 1080p benchmarks are NOT reflective of your gaming experience. Hence you cannot use them to make an educated choice.

Why? Because if you are in some form GPU-limited, a CPU that is twice as fast won't give you twice the FPS.
So you might see a clear difference at 1080p... but that difference vanishes at 4K or even 1440p.

This is why some review sites do CPU/GPU scaling benchmarks.
And you can clearly see that there is a point of zero gain.

You must know this, as you mentioned 4K and GPU bottlenecks...

March 2, 2017 | 04:21 PM - Posted by Anonymous (not verified)

We're talking here mostly about the Ryzen CPUs, aren't we?
So I don't see why these 1080p benchmarks, which are better suited to show the performance of CPUs, aren't appropriate.

But if we're getting pedantic then let's examine AMD's 4K benchmarks.
They're only applicable to that specific hardware setup, namely a Ryzen with fitting RAM and a MoBo in conjunction with 2x RX 480.
It might look entirely different if the test was conducted with 2x GTX 1080.
To employ your logic: who's playing at 4K and then saves money by buying two $240 graphics cards?

But sure, let's wait for benchmarks where CPU-to-GPU performance scaling is tested. I'm curious what kind of excuses people will come up with in that case, whether they're Intel or AMD fanboys.

March 3, 2017 | 04:13 AM - Posted by RojasTKD (not verified)

So, 12 or so months from now when I get the newest, fastest, most awesome GPU, those deltas increase again?

I game at 1440 but will likely upgrade my GPU before my CPU and don't want my CPU choice to limit me once I get a new GPU.

March 4, 2017 | 07:05 PM - Posted by Anonymous Nvidia User (not verified)

Whoa, the denial is strong in this one; getting paid by AMD, or a BIG fanboy?

1080p benchmarks and lower are useful for showing the frame-pushing capability of the CPU. Limiting the CPU by the GPU bottleneck of higher resolutions is a misrepresentation.

If you want testing at 1440p/4K, 2 Titan X Pascals should be used, because they're the best available now and can get above 60 fps at 4K. This will give a truer picture of CPU performance, even though it may still be GPU-limited to a degree. Hell, even lowering the details to low at 1440p and 4K would be an improvement.

As GPUs of the future get faster, the Ryzen you bought today is going to lose even worse, as it can't push the same number of frames as an Intel, as shown by the lower-resolution benchmarks.

Of course in a GPU bottleneck an i3/FX can be comparable to Ryzen as well. Intel's performance advantage evaporates, indeed.

Why not bench with an RX 460 as well, to show the capabilities of the $500 Ryzen? LOL. Hint: that would be GPU-bottlenecked at even lower resolutions.

So Ryzen isn't as good at gaming; that doesn't mean it's a total failure.

March 2, 2017 | 03:03 PM - Posted by Anonymous (not verified)

You said it. You aren't a gamer. 144 fps, aka a 144-tick-rate Battlefield server.

You think gamers are gaming above 1080p on these servers?

Latency is all that matters... not resolution.

Do you have issues going to the movie theater and watching 24 fps movies? Shit looks like stop motion.

March 2, 2017 | 03:26 PM - Posted by StephanS

That is one game.

Look at the Overwatch benchmark results:
the Ryzen 1800X is a little faster than an i7-7700K.

Also, we are not talking about 24 fps gaming here.

More like 267 FPS in Overwatch.

I do play Overwatch, but I also work on my PC.

So if I get ~60 FPS gaming at 1440p (FreeSync), I'm more than happy.
What also matters to me is compile time and render time.
No way will I go with a CPU that's 2x slower at compile time just to go from 90 fps to 110 fps at 1080p gaming. (I don't game at 1080p, so I'm guessing the fps difference is minimal.)

March 2, 2017 | 03:41 PM - Posted by StephanS

Here is a BF1 MP benchmark.

March 4, 2017 | 06:54 AM - Posted by Anonymous (not verified)

Look at your own suggested charts. Slaughter.

March 2, 2017 | 02:58 PM - Posted by StephanS

Another point... the 1500X is upcoming (Q2) and is $199.

It's 4-core/8-thread, and is cheaper than the i7-7600k.

I wonder if the 1700X with 4 cores disabled and overclocked wouldn't give us a preview of what AMD will release with the Ryzen 5?

I still think spending $300+ for 1080p gaming is not right.

And if the 1500X performs like the 1800X in 1080p gaming, the price/performance ratio would be insane.

April 7, 2017 | 04:32 PM - Posted by Donnie Fisher (not verified)

i7-7600k ? What is that? I have searched and re-thought my recollections, but I cannot find anything on an i7-7600k! Don't you value correct statements enough to read back your comments for accuracy?

March 2, 2017 | 03:02 PM - Posted by StephanS

Can you guys run the Hitman benchmark with 4 cores disabled on the 1800X (and on the i7-6900K, if technically possible)?

March 2, 2017 | 04:24 PM - Posted by pdjblum

@StephanS why bother when they will not even acknowledge that you clearly understand the importance of doing the low res tests? I agree with all you have said. Gaming performance at low res will only improve and the rest of the performance numbers, including gaming at higher res, are great. This is an amazing achievement. I am more excited than ever.

March 2, 2017 | 04:45 PM - Posted by Anonymous (not verified)

Actually, don't assume people won't acknowledge anything. I looked at the BF1 link he sent. BF gamers can be a lot different than Overwatch gamers. Overwatch was made to make money. It's a Blizzard game. They are amazing. They make games to make money. They don't really care about pushing polygons. They take their lessons from Nintendo, where the game is all that matters.

I think that AMD's numbers will look better next year because of the console developers and those platforms.

The i5 is now back to its normal $200 price point thanks to AMD.

Oculus is still overpriced.

PS4 is still the best right now

Times change... Looking forward to Scorpio

March 2, 2017 | 06:08 PM - Posted by Anonymous (not verified)

Maybe things will improve, maybe not. We'll see. When I decide to buy something I usually look at benchmarks that represent what I'm doing with my machine and then decide. I really don't care whether it's an Intel or AMD or nVidia or AMD GPU, what matters is the performance and price.

But the current, independent benchmarks don't make it look so good for those who are primarily interested in gaming.

I mean, even if you play at 4K, where the differences start to disappear due to a shift in the bottleneck, why would you go for a $500 CPU over a $350 CPU that delivers pretty much the same performance in most games except for a few*? I can use that $150 difference to buy a motherboard that's better suited for overclocking and a better cooling solution.
*Of course those few games might be a driving factor for some individuals, since it's exactly the games that matter the most for them, but that doesn't necessarily apply to the majority of gamers.

Until game development has caught on to the multi-threading trend (which they've been doing for over a decade already), you'll probably want to upgrade your CPU again, since both AMD and Intel have made some improvements to their lineups.

If you're into gaming+streaming or video editing, sure the 1800X is a beast for that $500 compared to everything Intel has to offer. It's pretty amazing what they've done, but there's room for improvement.
I do hope that they're successful, since we probably all can agree that the CPU market needs competition.

March 2, 2017 | 07:12 PM - Posted by pdjblum

Copy that. Thanks. I am hearing a lot of young gamers are really into streaming while gaming, so those gamers, who may or may not be a significant number, might find this chip very compelling. But I really have no clue how many gamers do this or whether it is becoming more common.

March 2, 2017 | 07:25 PM - Posted by Raiderwolf (not verified)

What I would like to know is why AMD wasn't ironing all this out with game developers long before this. They must have known that games would play a large part in the reviews. They could do an NDA just like the one hardware manufacturers and reviewers are under. This looks bad for AMD for not having their ducks in a row before release. I hope that this just LOOKS bad and is not as bad as it seems, because it can be rectified.

March 2, 2017 | 10:21 PM - Posted by StephanS

I don't think AMD cares too much about the R7 and gaming.
The 1700 through 1800X are more productivity CPUs, and they got great results across the board.

But the fact that some gamers are considering buying the R7 is good.
By that I mean: how many gamers went for the 6800K, 6900K and the like over the 6700K and 6600K?

Now, the R5 1500X is, I think, the gamer CPU AMD needs to focus on.
This model has half the cores, so relative to its memory controller it gets twice the bandwidth per core... but I still think Intel has the edge there.

Since AMD prices it at $199, it's no competition for the $340 i7-7700K.
But it might be perfect for 1080p gaming, realistic 1080p gaming
(where you don't pair a $700 GPU with a $200 CPU).

March 2, 2017 | 07:42 PM - Posted by wcg

I'm optimistic we'll see consistent results in a few weeks after these teething pains are over.

March 2, 2017 | 07:48 PM - Posted by Anonymous (not verified)

I still use an i7-2600K at 4.4 GHz that runs 1080p games pretty darn well, and I'm not planning to upgrade until this summer: Ryzen 5 or i7-7700K, it all depends on what comes....

I was looking forward to the new Ryzen to jump back on the ship I was on back in the day when the FX-8150 (4 cores / 8 threads) launched in 2011. AMD said multithreading was the future... and here we are in 2017 with the FX-8150 getting stomped by an Intel G4620 with 2 cores / 4 threads (look it up if you don't know what it is).

Let's be serious for a moment and just admit that the R7 1800X is only good for rendering, video encoding and mostly very well optimized multithreaded situations; for anything multimedia or gaming, Intel is better with Kaby Lake.

Sure, the 1700/1700X and 1800X vs the 6800K/6900K is a good deal, can't say anyone can argue with that, but I don't know how many of us buy a $1000 CPU.... AMD's only hope of making a true comeback is to make the 1500X / 1600X competitive with the 7600K/7700K. People will not care if it is $50 cheaper but 10% slower; they will go Intel all the way, like they should, and whoever thinks that a 6-core/12-thread CPU will help them game more in the future must be talking about the 5% of games that will be optimized for that multithreaded situation.

I can praise AMD for bringing down Intel's prices; Intel has been a bit too long at the wheel and free of competition, and I would like to see AMD do better so we have a better CPU market. I hope R5 and R3 won't fail, because that would be the end of Ryzen.

March 3, 2017 | 02:02 AM - Posted by Finn (not verified)

I noticed that in this review, he got much closer scores for Ryzen in gaming:

He tested with a 'Gigabyte X370 Gaming 5, 16GB of Corsair Vengeance RAM at 3000MHz'

Can anyone replicate that? Is there a bios or RAM difference that could account for it?

March 3, 2017 | 08:45 AM - Posted by Martin Trautvetter

That test is using a slower and cheaper 6800K for comparison.

March 3, 2017 | 02:49 AM - Posted by Cyric (not verified)

In the Reddit AMA, AMD stated that this is an example of a game not utilizing Zen correctly. The example: as the workload moves from thread to thread, a process can end up on a core in a different CCX module of the processor, so the thread no longer finds its data in that module's cache, which leads to extra data calls (roughly).

March 3, 2017 | 04:08 AM - Posted by pdjblum

Seems as if that would be a relatively easy thing to correct. Awesome if that is the reason for the performance deficit in games at low res. This CPU will only get better.

March 3, 2017 | 04:58 AM - Posted by Anonymous (not verified)

That is a possibility. Intel CPUs do not have separate modules like that. I am not sure what the state of core parking is. Does it need to be done from within the code? My first thought was the code not checking available feature flags correctly for a non-Intel processor and falling back to a less optimal code path. That is seeming a bit unlikely though. Switching processes between modules or running two heavy threads on the same core with SMT could cause such issues. If it was an SMT issue, then disabling it would solve the problem though. A 4 core single module Zen part would not have issues with switching between modules so this may not be an issue with later Zen parts.

March 3, 2017 | 06:29 AM - Posted by Anonymous (not verified)

Trying to get software vendors to optimise for your hardware after launch is newbish, childish and lazy at best.

You work with them at least a year before the launch.

March 3, 2017 | 07:20 AM - Posted by Anonymous (not verified)

thanks for ruining my stock value ;P

March 3, 2017 | 08:35 AM - Posted by Anonymous (not verified)

so funny

March 3, 2017 | 08:54 AM - Posted by Anonymous (not verified)

I hope performance will increase over the next weeks and months, and be fixed before the 4- and 6-core launch. Those are the lineups that really matter to most gamers anyway.

March 3, 2017 | 08:09 PM - Posted by Surcrit (not verified)

Linus sent us.

March 3, 2017 | 08:09 PM - Posted by Anonymous (not verified)

linus sent us

March 3, 2017 | 08:09 PM - Posted by Anonymous (not verified)

Good stuff, btw Linus sent us.

March 4, 2017 | 05:57 AM - Posted by Mad Duck (not verified)

If it's a matter of games being optimised for Intel architecture, then fine, let's assume some of the most recent and future games will be made to work better on AMD. It's logical once AMD has a line of competitive CPUs.
But what about older games? Can anyone expect game developers to provide patches for their back catalogue just so people can play them on Ryzen as well as on Intel? I can't see that coming; that's an issue AMD will have to address themselves.

March 4, 2017 | 06:24 AM - Posted by Anonymous (not verified)

"March 3, 2017 | 05:54 AM - Posted by Anonymous (not verified)
Just cause at 4K the GPU is causing bottleneck. Don't get me wrong, I believe those cpus are very good but the 4K benchmarks are not meaningful."

This is a bit odd; of course the 4K benchmark is meaningful. If I consider myself an enthusiast and build a PC from the ground up, I would have an Intel X99 6900K platform, for example, paired with a GTX 1080 Ti, and I won't be gaming at 1080p. Of course at least 1440p, right? Now I have an option on the AMD platform which costs $500 less, and I have $500 more in the budget to purchase a 4K monitor.

That will be the reality, right?

March 4, 2017 | 07:20 PM - Posted by Anonymous Nvidia User (not verified)

The $500 Ryzen you're considering should be good enough for one 1080 Ti. Should you choose to add another one later, maybe not.

The more expensive Intel platform has more PCI Express lanes than the 24 in Ryzen. For multi-card configs Ryzen will be running at x8/x8 instead of x16/x16.

If you're planning on keeping your CPU at least 5 years, Ryzen may not be able to keep up with that Intel at pushing frames as video cards get faster.

Wait a bit before building if you can, to see if Intel cuts prices and/or a revision of Ryzen fixes its gaming deficits.

If not, a $300-350 i7-7700K can get you gaming for even less than Ryzen now.

Good luck with your build.

April 24, 2017 | 12:48 AM - Posted by DonaldRoss

Good to read!
