Haswell-E: The Intel Core i7-5960X 8-core Processor Review

Author: Ryan Shrout
Subject: Processors
Manufacturer: Intel

Multi-GPU Gaming Performance

One of the key target markets for the new Haswell-E processor and the X99 chipset is high-end enthusiast PC gamers, in particular those of you willing to dive into multi-GPU configurations. For a quick look at how the new Core i7-5960X + X99 combination compares to other high-end options, we ran some games on three different systems, all using a set of three NVIDIA GeForce GTX 780 Ti graphics cards in SLI.

The first system was the ASUS X99-Deluxe motherboard and the Core i7-5960X processor, the target of our review today. The second system is our standard GPU test bed: the Core i7-3960X Sandy Bridge-E processor and ASUS X79 Rampage IV Extreme motherboard. Finally, to get some data for the "conservative crazy PC gamer" I put together a system using the Core i7-4790K on a Z87 motherboard with a PLX bridge chip, enabling 3- and 4-Way SLI configurations.


I used the 340.52 driver, the latest beta available from NVIDIA's website. We are looking to see whether the X99 platform and the new Haswell-E processor can improve multi-GPU scaling in any way, or whether the higher single-threaded clock speed of the Core i7-4790K still matters more for gaming despite claims that game engines are becoming increasingly multi-threaded.

Games were tested at a resolution of 3840x2160 (4K) to push the GPUs as hard as possible. The high pixel count also helps approximate the performance of multi-display gaming (Eyefinity or Surround).
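To put numbers on that comparison, here is a quick back-of-the-envelope check of why 4K approximates a triple-1080p Surround setup; the resolutions are the standard ones, and the helper function is just for illustration:

```python
# Pixel-count comparison: 4K UHD vs. a triple-1080p Surround/Eyefinity array.
def pixels(width, height):
    return width * height

uhd = pixels(3840, 2160)           # single 4K display
surround = 3 * pixels(1920, 1080)  # three 1080p displays

print(uhd)                        # 8294400
print(surround)                   # 6220800
print(round(uhd / surround, 2))   # 1.33 -- 4K pushes ~33% more pixels
```

So a single 4K panel is actually a somewhat heavier GPU load than triple 1080p, which is why it works as a stand-in for multi-display testing.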

If you are new to the way we test graphics cards and systems, the graphs below may be a bit confusing. We use a system I call Frame Rating that directly captures the GPU display output and then post-processes it to measure observed performance. Especially in the world of multiple GPUs in a single system, on-system measurement tools like FRAPS can misrepresent what the gamer actually sees thanks to micro-stuttering and more.
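The core idea of the post-processing step can be sketched in a few lines: given the timestamps at which each frame actually reached the display, you derive per-frame times and an observed FPS. This is a simplified illustration of the concept, not the actual Frame Rating tool, and the timestamps below are made up:

```python
# Sketch of capture-based frame analysis: derive per-frame times and an
# observed FPS from the timestamps (in ms) at which frames hit the display.
def frame_times(timestamps_ms):
    # Time between consecutive displayed frames.
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def observed_fps(timestamps_ms):
    # Frames delivered per second of wall-clock capture time.
    elapsed_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0
    return (len(timestamps_ms) - 1) / elapsed_s

stamps = [0.0, 10.0, 21.0, 29.5, 40.0, 50.5]  # hypothetical capture data
print(frame_times(stamps))             # [10.0, 11.0, 8.5, 10.5, 10.5]
print(round(observed_fps(stamps), 1))  # 99.0
```

The point of working from display-side timestamps is that uneven gaps (stutter) show up directly in the frame-time list, whereas a software counter like FRAPS only sees when the game engine submitted the frame.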

Let's start with BF4.


The first thing we notice in the Observed FPS graph is that average frame rates on all three platforms are very close, ranging between 99 and 104 FPS.

The frame times graph shows us individual frame times. The wider the line is on the Y-axis, the more frame time variance there is, which usually indicates frame stutter. Triple and quad-GPU systems tend to have far more trouble delivering consistent frame times, and I see no advantage for any of the three test setups in this case.

A quick glance at the Frame Variance graph shows more than 5ms of frame time variance (a noticeable amount) for 20% of the frames rendered (above the 80th percentile). That's not great, but the Core i7-4790K does appear to show slightly MORE variance than either SNB-E or HSW-E, telling us that the base Haswell platform is slightly underperforming here.
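One simple way to approximate that percentile view is to measure each frame time's deviation from the typical (median) frame time and ask what fraction of frames exceed the 5ms "noticeable" threshold. This is an approximation of the metric, not the exact Frame Rating formula, and the frame times below are hypothetical:

```python
# Simplified frame-variance check: fraction of frames whose time deviates
# from the median frame time by more than a noticeable threshold (5 ms).
import statistics

def variance_exceeding(frame_times_ms, threshold_ms=5.0):
    typical = statistics.median(frame_times_ms)
    deviations = [abs(t - typical) for t in frame_times_ms]
    over = sum(1 for d in deviations if d > threshold_ms)
    return over / len(frame_times_ms)  # fraction of "stuttery" frames

times = [10, 10, 11, 9, 17, 10, 3, 10, 18, 10]  # hypothetical, in ms
print(variance_exceeding(times))  # 0.3 -- 30% of frames past the threshold
```

By this kind of measure, a result like the BF4 one above (roughly 20% of frames over 5ms of variance) quantifies what the eye perceives as stutter even when the average FPS looks healthy.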



The Frame Time graph of Crysis 3 is quite different from BF4's - frames are delivered fairly consistently with low variance, indicating a better overall experience. Average frame rates hover at 34-35 FPS regardless of the processor used, and frame time variance is almost identical as well.



Here is our first indication that Haswell-E, and in particular the Core i7-5960X, might have some advantage in multi-GPU gaming. The average frame rate of the 5960X-based system is about 10% higher than that of either the 4790K or 3960X system. And even though there is more frame time variance than we want to see in our games, no single platform stands out from the others as worse (or better, really).



Metro: Last Light is another game that shows almost no differences between multi-GPU scaling performance on these three platforms. 


What can you take away from this quick look at multi-GPU scaling on Haswell-E? Clearly we have more testing to do, and we'd like to include AMD Radeon cards in the mix (if they can solve a dramatic bug we found in testing) to get a clearer picture. But, at least at first glance, there is very little difference in either average frame rate or frame time variance between the Core i7-5960X, Core i7-3960X and Core i7-4790K. To be honest, I was hoping for and expecting something a bit different.

August 29, 2014 | 12:11 PM - Posted by Anonymous (not verified)

Thank you for including the tests with gpus!

How does this overclock compare to the 5930K? I apologize if you already answered this question in the article.

August 29, 2014 | 12:20 PM - Posted by PapaDragon

Amazing! Wasn't expecting that high of an overclock with 8 cores, the power consumption, nor the H100 being able to keep it below 80C at 4.6. Wow. Intel did a great job and the Asus Deluxe looks stunning. Thanks for the review Ryan!

August 29, 2014 | 12:22 PM - Posted by Jack_Pearson (not verified)


Yeah, I know, "first" posts, but hey, I have been waiting for a DECENT upgrade for years!


August 29, 2014 | 12:34 PM - Posted by Anonymous (not verified)

Decent performance. Would have figured it would be a little better in multi-threaded applications, but that may be the clock speed holding it back a little negating the 2 extra cores.

Despite all that, it's mouth watering, but I'd never pay $999 for it; even extreme users will tell you those $999 Intel parts are a joke, and that's just a very high premium for those 2 extra cores unless you're running some serious applications.

August 29, 2014 | 04:57 PM - Posted by Anonymous (not verified)

8 cores, or the maximum for a server part - ray tracing will eat all those cores/threads and still take hours. It's time for the GPU makers to get some ray tracing hardware in a massively parallel GPU type of SKU and bid goodbye to the CPU for all graphics workloads. I'm talking thousands of cores doing ray interaction computations; not even the Xeon Phi can keep up when simulating billions of rays with multiple per-ray interactions through multiple transparent and semi-transparent surfaces. It gets hairy fast with rays, but those images cannot be simulated any better way than the original way the eye gets its information: from those trillions and trillions of photon interactions. Even for serious non-ray-tracing applications, it's better to LAN up some less expensive boxes together and do the work on a home cluster; there are some fine Linux distros that will enable some damn good asymmetrical multiprocessing among wired laptops and PCs, and those inexpensive AMD APUs/motherboards can be LANed up just fine.

November 10, 2014 | 01:35 AM - Posted by Anonymous (not verified)

I'm sure you are familiar with NVIDIA's latest Titans and the Titan Z (dual-GPU), which has 5760 CUDA cores and is aimed at 3D rendering and games. Then there is the Octane rendering program/plugin, which renders on the CUDA cores. I would say rendering is already moving to the GPU - some people have been doing it for years, but it is now becoming more mainstream and more affordable. No more waiting for Intel; they promised 32-core processors by now years ago. Well, I am not waiting for Intel anymore. NVIDIA has it going on now and for the future. All you need is a quad-core Intel processor, then stick 4 Titans on the motherboard and you have the equivalent of a supercomputer for rendering. That's 11,520 cores.

August 29, 2014 | 12:38 PM - Posted by Anonymous (not verified)

This is the best review on the net so far.

The idiots on the other sites are cooling the 5960x with a H80i and then complaining about temps.

August 29, 2014 | 07:40 PM - Posted by Ryan Shrout

LOL well I did do that at first, before moving to the H100i.

August 29, 2014 | 12:41 PM - Posted by Kingofkats

Infreakingcredible CPU! Fantastic work by Intel and Asus! Thanks for a super review, Ryan, and I'm still blinking at your overclocking results. Price and power consumption aren't exactly bargains, but I'm LOVING the fact that specs like these in a microATX box may finally push Thunderbolt into the mainstream, with consequent drops in prices for TB gear.

August 29, 2014 | 12:48 PM - Posted by Adam (not verified)

Awesome review Ryan. Can't wait for the live stream! Just ordered that motherboard and cpu from newegg!!

August 29, 2014 | 07:40 PM - Posted by Ryan Shrout

Wow, congrats!

August 29, 2014 | 01:42 PM - Posted by Robogeoff (not verified)

I'm wondering how much of a difference 5930k is from 4930k. Unfortunately, I can only afford the mid-level -E class CPUs.

August 29, 2014 | 02:21 PM - Posted by Nilbog

Why does the heat spreader have a hole in it?

Am i crazy to think the 3 GHz clocks seem a little conservative considering the target market? Or perhaps they did that to make overclocking seem more meaningful.

August 29, 2014 | 02:27 PM - Posted by Aaron (not verified)

Excellent review as always Ryan! The question is, will i ever be able to afford such a monster of a cpu.

My one and only question for you is in the testing, setup and SiSoft Sandra page. Why did you list the MSI A85 (Trinity) board when nowhere in the article was it even mentioned (let alone even closely compare in the stack)?

August 29, 2014 | 07:42 PM - Posted by Ryan Shrout

Oh, copy and pasted that table from a previous CPU story. :)

August 29, 2014 | 02:52 PM - Posted by Anonymous (not verified)

AMD 5GHz 8-core CPU: $269.99
Intel 3GHz 8-core CPU: $999

Cheapest AM3+ motherboard: $24.99
Cheapest 2011 motherboard: $228.79

AMD DDR3 16GB RAM cheapest: $149.99
Intel DDR4 16GB RAM cheapest: $199.99

Save $1000 and go with AMD.

August 29, 2014 | 03:27 PM - Posted by Anonymous (not verified)

1. A 5960X consumes 81 watts less than a 9590 at full load, so with the AMD chip your power bill will be larger and you'll need a stronger power supply.

2. The 5960X can be overclocked on air. Try to overclock a 9590 and not set your house on fire.

3. This was never meant to compete with anything that AMD has. If people just look at the number of cores as opposed to actual performance, of course they won't understand why this costs $1000.

4. A Haswell (socket 1150) will cost about the same as an AMD system and give similar performance.

August 29, 2014 | 03:55 PM - Posted by Anonymous (not verified)

Come on NVIDIA, get a Power8 license and build a line of home gaming servers with NVLink - that's 12 cores at 8 threads per core for the Power8 - for the trust fund kids and your Titans! All on a mezzanine module! People with money want to game too. Yes NVIDIA, get Power8 and you will never need an x86 license. Apple too: license some Power8s for your Mac Pros and stop giving your stockholders' money to Intel - yes, ARM for iStuff and Power8 for the Mac Pros, and eventually for your MacBooks. Those P.A. Semi folks have such a fine pedigree (Alpha 21064, StrongARM, others), and don't worry any more about licenses transferring, because Power8 is up for license just like ARM, and Apple bought you with couch change, so there is plenty of R&D funding to work your P.A. Semi magic on Power8! Reference Power8s for the Mac Pro, and Power8 derivatives for the MacBooks. New ISAs? No problem for Captain Cook - he has chests filled with royal jewels and gold doubloons, enough to port OSX to any ISA, x86 be damned! Hell, the good Captain just sealed a handshake with IBM for some cloud-connected iThings, so why not get some of that juicy Power8 going on the Mac Pros? The more people that license the Power8 ISA and reference designs and start fabbing their own flavors, the less high-performance home computing will cost - maybe even more than 40 PCIe 3.0 lanes for less than $300 on the CPU SKUs!

August 31, 2014 | 08:36 AM - Posted by Anonymous (not verified)

You're right

August 29, 2014 | 03:24 PM - Posted by Anonymous (not verified)

Clock for clock Haswell-E is about 15% faster than a 3 year old SNB-E. Eight cores makes sense if you need or want them. Otherwise, not much of an incentive for SNB-E owners to upgrade until Skylake-E perhaps.

I used to like having the latest and greatest, but this 5-10% clock-for-clock increase per generation BS is wearing thin. If an 8-core Haswell-E could consistently OC to 5GHz for daily use it would be another story. Unfortunately, they can't, and I doubt that 4.6GHz is the norm either. The average is probably around 4.4GHz for the 8-core.

August 29, 2014 | 07:43 PM - Posted by Ryan Shrout

You might be right - but I can tell you that 8-cores at 4.6 GHz is truly impressive to use.

November 17, 2014 | 04:45 AM - Posted by Anonymous (not verified)

Yeah, maybe true, but I agree on holding onto my Sandy-E - it has been at 4.9GHz 24/7 for years. I have a pretty good custom loop, but still, I don't think I will be upgrading my CPU for some time. Hoping Skylake-E can do it for me. I am really starting to think the overclocking days are starting to fade, sadly too. The only thing I see myself upgrading for is new features that come to the surface, and even then I will be getting an E-series CPU that I hope will overclock well. It will be really hard to move onto a CPU that can't OC as well as mine. I would settle for 4.7GHz, but I doubt that is even in reach the way overclocking has been the last couple of gens. Sad - I had high hopes for Haswell-E. Good CPU, but not enough to spend my money on. I guess I will keep my saved money for another generation or two. Fingers crossed AMD brings something to the table and finally pulls their heads out of their asses.

August 30, 2014 | 03:15 AM - Posted by MtRush (not verified)

I agree, after seeing these gaming benches. These games aren't taking advantage of these cores, only the GHz. I'll be waiting for Broadwell or Skylake, unless games can take advantage of a Haswell-E refresh by then.

August 29, 2014 | 08:51 PM - Posted by balls (not verified)

I have both the Ivy Bridge 4930K and the FX8350. Both are wonderful processors, and without a doubt I use the Intel system for converting media - it saves hours, but every time I see that performance for the dollar chart I love the fact that I own an AMD CPU. With respect to value - it is tough to beat and it is the choice I use when building additional machines and ones for family members. No guilt and good feelings knowing I spent the right amount and made people happy.

August 30, 2014 | 12:39 PM - Posted by Homey (not verified)

was looking forward to this board and the Intel i7-5960X chippy
UNTIL I saw the UK price hikes!!!
RRP $1,000 is £601.83 approx... and the board at $399 is £240.37

SADLY as is ALWAYS the case with the UK
Scan (A Very trustworthy source) has them for...
£759.80 and £287.16 respectively

that's a UK rip off price hike of over £205 (or $340.00)
No wonder the UK is so behind when it comes to sales to the gen pop
I for one won't be falling for UK's rip off pricing not again!

and I was so ready to lob out for the board chippy rammage and graphics cards with approx £1,200 but it seems it'll be more like £2,000

September 1, 2014 | 04:05 PM - Posted by Anonymous (not verified)

You do realise we have to pay 20% VAT in the UK, right? Not to mention the customs duty and import VAT that suppliers have to pay, which add to the cost. No "price hikes" here.

August 30, 2014 | 03:31 PM - Posted by Dumgi

Ryan, I'm a new fan of your website and youtube page. You guys deliver great reviews and content. I was waiting for this processor but sadly I need to save more :-(... lol

August 30, 2014 | 04:56 PM - Posted by QD (not verified)

Definitely the new king of the hill. Looks like the new enthusiast sweet spot will be the 5820K/5930K as the more cost-competitive solution.

Still, I expect a new dream machine soon ;D

August 30, 2014 | 06:43 PM - Posted by Anonymous (not verified)

This platform clearly wants 14nm Broadwell-E for better power efficiency and higher clocks.

September 2, 2014 | 10:20 AM - Posted by Anonymous (not verified)

Look around: Broadwell-E isn't scheduled until fall of NEXT year (2015), over a year away. Skylake-E may be more than double that. And then Cannonlake... well, with all the problems with the EUV sources and such, who really knows?

September 2, 2014 | 12:20 PM - Posted by BBMan (not verified)

Yep - this really is a pretty big process move, so speculation is out there.

August 31, 2014 | 08:33 AM - Posted by Anonymous (not verified)

Why did you not try with 167 MHz?
What good is AVX2 if you run at 100 MHz?

August 31, 2014 | 01:01 PM - Posted by 0utf0xZer0 (not verified)

Ryan, which Z87 board did you use for the multi-GPU test? If it's the same Intel DZ87KLT-75K you used for the rest of the review, I don't think the PLX chip adds additional graphics lanes; it just allows them to add Thunderbolt and extra expansion slots without affecting bandwidth to the USB and SATA controllers. In which case it's still running x8/x4/x4, which might explain the frame variance in BF4 at 4K?

August 31, 2014 | 08:47 PM - Posted by Anonymous (not verified)

Was the nature of the AMD Radeon cards "dramatic bug we found in testing" associated with the multiple GPU set up or something else? Specifically would there be a problem if using a single AMD Radeon graphics card on an X99 mother board?

September 1, 2014 | 06:28 PM - Posted by QD (not verified)

I wonder how much difference Broadwell will make in comparison when it comes out. While it looks like there is a difference in bandwidth, it looks like there was not much change in performance I could attribute to DDR4. The monster cache was also a ? to me. Perhaps that helps with some DCPs? At any rate, I was hoping for a more definitive toe-to-toe performance change.


December 23, 2014 | 08:09 PM - Posted by Jason Honingford (not verified)

To the guys claiming your apps need to be multithreaded to use all the cores: what are you running, MS-DOS???

February 6, 2015 | 02:43 PM - Posted by Mr_illa (not verified)

I have the new 5960X, and I think people miss the point of the product: "THIS IS NOT FOR GAMES". I am a 3D artist, and having to wait to see where you messed up in your renders is KEY - you have no idea how hard it is to make pretty artwork on an i5 or lower chip.

If you want to render footage for YouTube or just do video/photo editing then don't buy this chip; it's a waste, plus it shows the world you have no idea what this chip should be used for.

This chip should be used by 3D artists who can't buy a supercomputer but would like fast renders for stills or short animations - though for a long animation I would think you would use an online render farm for the finished piece. Yes, GPU rendering is great and becoming more mainstream, but even those render engines use the CPU alongside the GPU to make renders even faster; "the CPU will never go away".

Anyone wanting to judge this CPU against another needs to pick 3D rendering as the benchmark and nothing else. It's Intel's fault that they don't know how to market their CPUs (this is a Xeon-based CPU and people want to play games on it lol).

GROW SOME BALLS INTEL AND MAKE A CPU RANGE FOR 3D ARTISTS THAT IS NOT A £2000/3000 CHIP (WE KNOW YOU CAN DO IT). We don't need to render a whole animated film, but making high-end images fast so we can learn our craft faster is a must... we make your games and sell your products with our 3D skills, why not have more of us???

Well, that's my rant, and if you check A1OFFENDER on YouTube that would make me happy. Ohhhhhhhhhhhh, and I will say one thing that a lot of people don't seem to know: this CPU will give you fewer frame dips in games. If that is your only need for the CPU and you can afford it, do it (so many people say this chip won't help gaming, but it will help with the frame dips - is it worth the extra dough???)

Wrote this in a rush lol, peace out, off to eat dinner.

February 17, 2015 | 06:08 PM - Posted by 25 (not verified)

While I don't see the need for 2 (or 3) HDDs in an HTPC, it is because of noise and extra heat, more so than the loss of space. They had the space, since they wanted to make it the same width/depth dimensions of typical home theatre hardware, so they put it to use.

Arguably, moving the power supply out was extremely wise, since now it can be passively cooled, instead of forcing the chassis fans to go into overdrive venting the excess heat. Looking at the temps of a Pentium, without the addition of a graphics card or power supply, they really didn't have the headroom to put any other heat producer inside the chassis.
