Review Index:

The Intel Core i9-7900X 10-core Skylake-X Processor Review

Author: Ryan Shrout
Subject: Processors
Manufacturer: Intel

1080p Gaming Performance and Overclocking

Okay everyone, buckle up. With all of the debate surrounding 1080p gaming on high-performance processors, we of course had to look at that with the Core i9-7900X. The results are… interesting.


First, the 7900X is never faster than the 6950X in the gaming tests we ran, although it is able to match performance in Ashes, Deus Ex: Mankind Divided, Ghost Recon Wildlands, and GTA V (mostly). However, take a look at Civ 6, Far Cry Primal, or even Hitman and Rise of the Tomb Raider: these show the new 7900X as slower than the 6950X in gaming, even with its substantial clock speed advantages. How? Remember back to this graph I showed you on thread-to-thread communication latency?


In a similar, though somewhat less substantial, manner, the behavior we are seeing on Skylake-X, with its longer LLC/L3 latencies, is analogous to the issues that concerned us with the Ryzen processor CCX implementation.
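For readers who want a feel for what a thread-to-thread "ping time" measurement looks like, here is a minimal sketch. This is an illustration only: every name in it is ours, and a real core-to-core benchmark (like the one behind these graphs) would pin threads to specific cores and bounce a shared cache line, rather than relying on Python's heavyweight synchronization primitives.

```python
# Illustrative thread-to-thread "ping time" sketch. A real core-to-core
# latency tool would pin threads to cores and use atomics on a shared
# cache line; Python events add scheduler overhead, so treat the numbers
# as a concept demo, not a hardware measurement.
import threading
import time

def ping_pong(iterations=10_000):
    """Bounce a token between two threads; return the mean one-way latency in seconds."""
    ping, pong = threading.Event(), threading.Event()

    def responder():
        for _ in range(iterations):
            ping.wait(); ping.clear()   # receive the "ping"
            pong.set()                  # answer with a "pong"

    t = threading.Thread(target=responder, daemon=True)
    t.start()
    start = time.perf_counter()
    for _ in range(iterations):
        ping.set()                      # send the "ping"
        pong.wait(); pong.clear()       # wait for the "pong"
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / iterations / 2     # halve the round trip for one-way latency

if __name__ == "__main__":
    print(f"approx one-way latency: {ping_pong() * 1e9:.0f} ns")
```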

Obviously this is going to raise a lot of eyebrows and give the AMD fans a reason to poke fun at Intel, but the Core i9-7900X is still faster in this regard than the Ryzen 7 1800X in 7 of our 8 tested games. CPU architectures are complex, and the decision to use a mesh design for inter-core communication was made for the Xeon product family, but there are clearly some interesting consequences for consumer workloads.

As we said in our Ryzen 7 review, as you increase the gaming resolutions above the 1080p mark, these deltas between ALL processors are going to be minimized. So, if you are a 4K gamer, or you plan to be one, any of these four CPUs is going to run splendidly.

Overclocking the Core i9-7900X

Though my time with the ASUS X299-Deluxe and the Core i9-7900X was short, I was able to get our sample overclocked to an impressive state. With a core voltage of 1.28v I was able to push all cores to 4.6 GHz and run the system mostly stable.


However, temperatures at this point would spike to over 100C (!!) and level off in the mid-90C range, which was too hot for me to run for an extended period with a clear conscience, even with a Corsair H100i GTX as the cooler.


I eventually settled on a 4.5 GHz overclock on all cores and a Vcore of 1.24v, which allowed the system to stabilize around 83C with our cooling setup. Even that is going to be high for a lot of users, but the performance advantage of running all 10 cores at 4.5 GHz should be around 10-12% in heavily threaded applications.
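As a quick sanity check on that estimate, the ideal clock-scaling math works out as follows, assuming a stock all-core turbo of roughly 4.0 GHz for the 7900X (our assumption for illustration; real-world scaling losses explain why the observed gain lands a bit below the ideal figure).

```python
# Ideal speedup from an all-core overclock, assuming a ~4.0 GHz stock
# all-core turbo (an assumption for illustration, not stated in the article).
stock_all_core_ghz = 4.0
overclock_ghz = 4.5
ideal_gain = (overclock_ghz / stock_all_core_ghz - 1) * 100
print(f"ideal clock scaling: {ideal_gain:.1f}%")  # 12.5%, bracketing the observed 10-12%
```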

We’ll dive more into this as time permits!


June 19, 2017 | 10:04 AM - Posted by Martin Trautvetter

A 15% increase in performance resulting from a 50% increase in power consumption seems to indicate that this processor is firmly out of its comfort zone in terms of efficiency.

Makes me wonder where it would land with similar clock rates as the 6950X.

As for the i9 line-up, I don't follow the argument that these CPUs are not the direct result of AMD's renewed competitiveness. Sure, 6- through 10-core CPUs would've been planned for long ago, but their final clocks were set post-Ryzen. The idiotic KBL-X were rushed post-Ryzen. The MCC-i9s are clearly a rush job (hence their late launch) trying to compete with Threadripper.

I'd be willing to bet that not a single CPU launched for this platform was planned exactly as-is 9 months ago.

June 19, 2017 | 10:48 AM - Posted by Ryan Shrout

Even if everything you say is true, is that a problem? Is that not what we want? Some competition to push things forward?

June 19, 2017 | 12:39 PM - Posted by Martin Trautvetter

Sorry, I might have simply misread/misunderstood your conclusion.

As far as I'm concerned, it was not giving enough credit to AMD for the final specs of these CPUs, as they are / will be shipping.

Anyways, thanks for testing the rejiggered caches and mesh topology and showing how it affects scaling when compared to its predecessor!

June 19, 2017 | 11:02 AM - Posted by Xebec

I am curious if future BIOS updates will affect mesh speed(ping time?), and what kinds of differences that will make.

I like the performance/$ metrics. There's so many ways to slice those -- CPU Only, including motherboard and RAM (which you have to buy anyway to use the CPU), or full system price. Pros/Cons to each.

Best internet line of the day:
"Until July. Or August. Or October..."

Great review PCPer!

June 19, 2017 | 11:13 AM - Posted by Ryan Shrout

Future BIOS updates should not have a direct effect on it, unless Intel changes its stance on the clocks of the cache. It runs at a slower clock than memory or the CPU itself, but it is controllable - I show the change on one of our graphs here looking at thread-to-thread "ping times".

On the performance / dollar, you are right, we could have included memory and motherboard in that and it might be worth doing in the future. But I think most people reading will understand that the X299 motherboard price average is higher than the X370 motherboard price average, so the differences would widen slightly.
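To make that concrete, here is one way to fold platform cost into a score-per-dollar metric. The helper function and the $150 board price are illustrative assumptions; the 1641 score and $440 price come from a reader comment in this thread.

```python
# Hypothetical perf-per-dollar helper. The score (1641) and CPU price ($440)
# are quoted from a reader comment; the $150 motherboard price is illustrative.
def perf_per_dollar(score, cpu_price, platform_price=0):
    return score / (cpu_price + platform_price)

print(round(perf_per_dollar(1641, 440), 2))       # CPU only: 3.73
print(round(perf_per_dollar(1641, 440, 150), 2))  # with a $150 board: 2.78
```

Adding a pricier board to one side of the comparison but not the other is exactly how the gap widens, which is why platform cost matters for these charts.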

June 24, 2017 | 08:40 PM - Posted by RandomUsername1234 (not verified)

X370 may not be a fair yardstick if you want price/performance. X370 is closer to X299 in features (though still a long way off), but if what you want is maximum price/performance, B350 is the way to go.

June 19, 2017 | 11:19 AM - Posted by DakTannon

Hey Ryan, great review. If possible, for the gaming benchmarks could you post the 1% and 0.1% low frame rates, or just the min FPS if that would be easier? I have found the enthusiast platform tends to excel in minimum FPS and smooth delivery of frames (less stutter), and that is what motivates my purchases more than max or average FPS. I would rather have a CPU with a min of 60 fps and a max of 85 fps than one with a max of 105 fps and a min of 45 fps, even if that means it has a lower average fps. Smoothness is everything for me.
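For reference, 1% and 0.1% lows are commonly derived from frame-time logs along these lines; this is a generic method for illustration, not necessarily the exact one PCPer uses.

```python
# Generic 1% / 0.1% "low" FPS calculation from per-frame render times.
# This is one common convention (average of the slowest slice of frames);
# tools differ in the exact definition.
def percentile_low_fps(frame_times_ms, fraction):
    """Average FPS over the slowest `fraction` of frames (e.g. 0.01 for 1% lows)."""
    worst_first = sorted(frame_times_ms, reverse=True)
    n = max(1, int(len(worst_first) * fraction))
    avg_ms = sum(worst_first[:n]) / n
    return 1000.0 / avg_ms

# Mostly 60 FPS frames (16.7 ms) with a handful of 30 FPS spikes (33.3 ms):
frames = [16.7] * 990 + [33.3] * 10
print(round(percentile_low_fps(frames, 0.01), 1))  # 1% low: 30.0 FPS
```

The example shows why lows matter: the average FPS of this run is near 60, but the 1% low exposes the stutter at 30 FPS.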

June 19, 2017 | 02:59 PM - Posted by StephanS

At what speed were you running the 1800X Infinity Fabric?

Also, your idle system wattage looks to be half that of other sites for the 1800x. I wonder what you or they are doing differently.

Cinebench value: not sure why, but I get a score of 1641 on a stock 1800x. 1641 / $440 (Amazon) = 3.72.
I think you are using the launch day price of $500?

note: I run my RAM at 2400 MHz (the rated XMP profile)

June 19, 2017 | 03:18 PM - Posted by Ryan Shrout

All of the 1800X data was generated at stock settings, DDR4-2400 memory. And yes, I am still using the $499 launch price for that data, as you note.

June 19, 2017 | 03:10 PM - Posted by amadsilentthirst

Great video Ryan, actually made me read the review... that was good too. Little exception: new parts, higher clocks, more cores.

Kinda wanted to see what the "NiceHash" daily BTC amount would be, you know for science.

Consider including it in your benchmarks for all the new CPU's?

June 19, 2017 | 03:18 PM - Posted by Ryan Shrout

Maybe...but CPUs, even 10-core CPUs, are very inefficient in comparison to even moderate GPUs.

June 19, 2017 | 03:52 PM - Posted by amadsilentthirst

You are totally right, and rather handsome,
However after Electrickery a Ryzen 7 1700X nets $600 per annum

Which is peanuts to golden haired tech gods granted, but some peeps may want to put one in a corner and let it pay for itself (with all the assumptions granted) while heating up their greenhouse.

As algos change, prices fluctuate, and releases get more cores, it'll be nice to keep an eye on hashing value.

Goes without saying that it will be awesome to have it on GPU charts.

You're obviously way too important and tall to take on such a task, maybe the smaller more condensed you (Ken) could take on such a burden of honour.

June 19, 2017 | 04:53 PM - Posted by quest4glory

Something I haven't seen much of is the (potential) benefit of this X299 plan to boutique system builders, and even larger mass producers of custom PCs such as HP with their Omen, and Dell / Alienware.

They could standardize on X299 for most of their builds, and then offer customers the choice of i5 and "entry level" i7 now, with the option to upgrade to a true HEDT system later on, while keeping the same chassis and main system components.

That and single-core performance should be best on those parts, especially when overclocked to their max.

June 19, 2017 | 05:53 PM - Posted by quest4glory

In terms of TDP, did you measure that at stock or overclocked? I'd have to assume stock, and if so, could the measurements be off due to the new platform?

I know you know this, but for anyone who wonders how Intel defines TDP:

"Intel defines TDP as follows: The upper point of the thermal profile consists of the Thermal Design Power (TDP) and the associated Tcase value. Thermal Design Power (TDP) should be used for processor thermal solution design targets. TDP is not the maximum power that the processor can dissipate. TDP is measured at maximum TCASE.1"

June 20, 2017 | 04:40 PM - Posted by Ryan Shrout

All measured at stock settings.

June 19, 2017 | 06:15 PM - Posted by Titan_Y

Seems to me that there has been a cost shift by Intel here, from the CPUs to the chipsets. The motherboards are about $100 more expensive than they should be. This way, Intel can make their CPUs out to be a better value than they actually are.

June 19, 2017 | 10:11 PM - Posted by Ryan Shrout

I don't think that's accurate. Intel is probably getting slightly more from the X299 than the Z270, but I would guess not much. If anything, the motherboard vendors know this is a higher end platform and audience, so they put higher end products together to serve it.

June 19, 2017 | 11:07 PM - Posted by boinc_oclock


Did overclocking the cache + using faster RAM have any effect on benchmarks?

June 19, 2017 | 11:50 PM - Posted by Ryan Shrout

I honestly did not have time to check, only to do the latency evaluation you saw on that page. We'll be following up - my expectation is that it will have an effect on things like 7zip and the 1080p gaming results, if at all.

June 22, 2017 | 10:16 PM - Posted by boinc_oclock

I'm looking at Guru3D's X299 motherboard reviews and it seems like the BIOSes that run the more conservative power profile have higher memory/L3 latency and do worse in games and in synthetics like Cinebench. The Cinebench scores matched your results, so I'm assuming these latency tests were done using the lower power profiles.

It will be interesting to see what your latency tester shows on the higher power profiles.

June 20, 2017 | 04:08 AM - Posted by chortbauer

Great Review!

Grammar Nazi:
On the last page, under the last picture

It is worth noting here that our early testing with the X299 motherboards has including troubling amounts of performance instability and questionable compatibility.

June 20, 2017 | 04:41 PM - Posted by Ryan Shrout

Ah, thanks. :)

June 20, 2017 | 04:27 AM - Posted by n19h7m4r3

Interesting to see the inter-core latency affect Skylake-X so much. Despite Ryzen's latency affecting games, it often competes well with Broadwell, usually despite lower clocks.

It's nearly the reverse in gaming with Skylake-X, where it's clocked higher and still loses.

I hope Ryan does some detailed tests with Skylake-X CPUs and Threadripper to see how the increased core and CCX counts will affect latency, and as a result affect some use cases.

June 20, 2017 | 06:40 AM - Posted by psuedonymous

"And to combat Threadripper, it seems clear that Intel was willing to bring forward the release of Skylake-X, to ensure that it maintained cognitive leadership in the high-end prosumer market."

Impressive Intel know the release date for Threadripper back in 2015 when they scheduled Basin Falls!


June 21, 2017 | 01:10 AM - Posted by Cellar Door

Hey Ryan,

Great job as always! Just wanted to give a little feedback about the graphs - the font is borderline unreadable and that is on a 1080p 27" ultrasharp Dell.

Otherwise keep on rocking!

June 24, 2017 | 11:19 AM - Posted by Anonymous123 (not verified)

Why is your latency test different from that of SiSoft Sandra?

Intel 7900X:
SiSoft: 79 ns
PCPer: 100 ns

AMD Infinity Fabric:
SiSoft: 122 ns
PCPer: 140 ns

June 24, 2017 | 08:15 PM - Posted by RandomUsername1234 (not verified)

Perhaps you guys should factor the platform cost into these reviews - B350 mobos can be had for ~$100, while these X299 mobos cost at least $400. It's hard to argue the i7-7800X is a suitable competitor to the 1700 when you have to pay another $400 for the motherboard and are still two cores short (though the higher clocks make up for this).

Intel needs to offer multi-core mainstream parts to truly compete with the 1700 in the future. Right now higher clocks trump twice the threads, but if games like Battlefield and the higher core counts of consoles are anything to go by, that won't last forever.

