
The Intel Core i7-6700K Review - Skylake First for Enthusiasts

Subject: Processors
Manufacturer: Intel

Light on architecture details

Our Intel Skylake launch coverage is intense! Make sure you hit up all the stories and videos that interest you!

The Intel Skylake architecture has been on our radar for quite a long time as Intel's next big step in CPU design. Through leaks and some official information discussed by Intel over the past few months, we know at least a handful of details: DDR4 memory support, 14nm process technology, modest IPC gains and impressive GPU improvements. But the details of how the Skylake "tock" on the 14nm process will differ from Broadwell and Haswell have remained a mystery.

Interestingly, due to some shifts in how Intel is releasing Skylake, we are going to be doing a review today with very little information on the Skylake architecture and design (at least officially). While we are very used to the company releasing new information at the Intel Developer Forum along with the launch of a new product, Intel has instead decided to time the release of the first Skylake products with Gamescom in Cologne, Germany. Parts will go on sale today (August 5th) and we are reviewing a new Intel processor without the background knowledge and details that will be needed to really explain any of the changes or differences in performance that we see. It's an odd move honestly, but it has some great repercussions for the enthusiasts that read PC Perspective: Skylake will launch first as an enthusiast-class product for gamers and DIY builders.

For many of you this won't change anything. If you are curious about the performance of the new Core i7-6700K, power consumption, clock for clock IPC improvements and anything else that is measurable, then you'll get exactly what you want from today's article. If you are a gear-head that is looking for more granular details on how the inner-workings of Skylake function, you'll have to wait a couple of weeks longer - Intel plans to release that information on August 18th during IDF.


So what does the addition of DDR4 memory, full range base clock manipulation and a 4.0 GHz base clock on a brand new 14nm architecture mean for users of current Intel or AMD platforms? Also, is it FINALLY time for users of the Core i7-2600K or older systems to push that upgrade button? (Let's hope so!)

Continue reading our review of the Intel Core i7-6700K Skylake processor!!

Intel's Skylake Enthusiast Push

Get this folks: PC gaming is a thriving business. Intel first attempted to make good with enthusiasts and overclockers with the release of Devil's Canyon, the Core i7-4790K. That CPU boosted clock speeds at a slight TDP increase while pushing clocks to their highest out-of-box settings yet. They were fully unlocked and priced very competitively against previous Haswell-based processors. They were great parts, and I was glad to see Intel investing some money and resources in a community that most of us thought it had written off.

Intel today offers another olive branch to our readers and fans.

"Take ye thine Skylake processor, with unlocketh overclock settings and DDR4 memory, and continueth innovation on the open and scalable platform.  Eth."

Kindheartedness aside, Intel is in it for the money: PC gaming continues to be big business. Game revenues are the largest form of software sales, gaming across all platforms leads all other forms of entertainment, and that crazy eSports segment is expected to grow from 89M to 145M units by 2017.


And as any PC gamer knows, the PC is where the real innovation occurs. Yes, Xbox owners have the Kinect, but nearly every other noteworthy shift happens on the PC. Virtual reality, downloadable gaming, streaming gaming, free to play, etc.: these are all technologies and directions that were started or perfected on the PC. Intel wants to make sure it is part of that, and not just in a passive manner.

Let's first take a look at the specifications of the two new Skylake 6th Generation Intel Core processors launching today.

| | Core i7-6700K | Core i5-6600K | Core i7-5775C | Core i7-4790K | Core i7-4770K | Core i7-3770K |
|---|---|---|---|---|---|---|
| Architecture | Skylake | Skylake | Broadwell | Haswell | Haswell | Ivy Bridge |
| Process Tech | 14nm | 14nm | 14nm | 22nm | 22nm | 22nm |
| Socket | LGA 1151 | LGA 1151 | LGA 1150 | LGA 1150 | LGA 1150 | LGA 1155 |
| Cores/Threads | 4/8 | 4/4 | 4/8 | 4/8 | 4/8 | 4/8 |
| Base Clock | 4.0 GHz | 3.5 GHz | 3.3 GHz | 4.0 GHz | 3.5 GHz | 3.5 GHz |
| Max Turbo Clock | 4.2 GHz | 3.9 GHz | 3.7 GHz | 4.4 GHz | 3.9 GHz | 3.9 GHz |
| Memory Tech | DDR4 | DDR4 | DDR3 | DDR3 | DDR3 | DDR3 |
| Memory Speeds | Up to 2133 MHz | Up to 2133 MHz | Up to 1600 MHz | Up to 1600 MHz | Up to 1600 MHz | Up to 1600 MHz |
| Cache (L4 Cache) | 8MB | 6MB | 6MB (128MB) | 8MB | 8MB | 8MB |
| System Bus | DMI3 - 8.0 GT/s | DMI3 - 8.0 GT/s | DMI2 - 6.4 GT/s | DMI2 - 5.0 GT/s | DMI2 - 5.0 GT/s | DMI2 - 5.0 GT/s |
| Graphics | HD Graphics 530 | HD Graphics 530 | Iris Pro 6200 | HD Graphics 4600 | HD Graphics 4600 | HD Graphics 4000 |
| Max Graphics Clock | ? | ? | 1.15 GHz | 1.25 GHz | 1.25 GHz | 1.15 GHz |
| TDP | 91W | 91W | 65W | 88W | 84W | 77W |
| MSRP | $350 | $243 | $366 | $339 | $339 | $332 |

Based on the same 14nm process as Broadwell, a CPU architecture that only recently saw a release for the DIY builder (and then only in a pair of options), Skylake is still built around a quad-core, HyperThreading-capable design. The base clock speed of 4.0 GHz matches that of the Core i7-4790K, though the maximum Turbo clock rate of 4.2 GHz is 200 MHz slower than we saw on the Devil's Canyon part. Still, with reasonable IPC improvements from Haswell to Skylake, that matched base clock should result in noticeable performance improvements.
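As a rough sanity check, single-threaded performance scales approximately with IPC times clock speed. A minimal sketch of that estimate, using a hypothetical 5-10% IPC gain over Haswell (an illustrative assumption, not an Intel-published figure):

```python
# Back-of-the-envelope: single-thread performance ~ IPC x clock.
# The IPC gain values used below are illustrative assumptions, not measured data.

def relative_perf(ipc_gain, clock_ghz, base_clock_ghz=4.0):
    """Performance relative to a baseline part at base_clock_ghz with IPC = 1.0."""
    return (1.0 + ipc_gain) * clock_ghz / base_clock_ghz

# At the matched 4.0 GHz base clock, a 5-10% IPC gain maps directly
# to a 5-10% single-thread uplift.
low = relative_perf(0.05, 4.0)
high = relative_perf(0.10, 4.0)
print(f"Estimated uplift at matched clocks: {low:.2f}x to {high:.2f}x")
```

The benchmarks later in this review are what actually pin the number down; the model simply explains why a matched base clock plus any IPC gain must land ahead of the 4790K per clock.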

One change that we did notice in the Skylake architecture is its move away from the FIVR, the Fully Integrated Voltage Regulator. It was first introduced during the Haswell architecture disclosure and was touted as revolutionary, simplifying board and platform design significantly. However, with Broadwell we already saw some backpedaling, as Intel admitted having to bypass the FIVR for ultra-low power implementations due to efficiency issues. This time the company says that removing it "enabled improved power efficiency" across a "much broader range of thermal design powers." This means that from 2-in-1s to desktop gaming PCs, the power delivery solution returns to the motherboard.


The biggest specification change (other than the new socket) is the move to DDR4 memory technology. Yes, Skylake will be able to support both DDR3L and DDR4. DDR3L integration will, with a few exceptions, only occur on low-cost solutions like notebooks and tablets. This marks the second consumer platform to integrate DDR4, following the release of Haswell-E and the X99 chipset nearly a year ago. Official specs limit the DDR4 memory speed to 2133 MHz but, in our testing with the new ASUS Z170-Deluxe motherboard and memory from Corsair and G.Skill, getting 3400 MHz is definitely possible.

The system bus design has also been upgraded this time; we are now on the third revision of DMI, with a bump in performance to 8.0 GT/s, equivalent to four lanes of PCI Express 3.0. This upgrade in bandwidth turns out to be incredibly useful for platform builders like ASUS that can take advantage of up to 20 lanes of PCIe 3.0 on the Z170 chipset itself! That means new storage options, configurations and more expandability for the non-E level of processors from Intel.
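To put those link rates in concrete terms, the quoted GT/s figures can be converted to peak payload bandwidth. A quick sketch (per-direction peak, ignoring protocol overhead):

```python
# Convert a serial link's transfer rate to peak payload bandwidth.
# DMI3 borrows PCIe 3.0 signaling: 8.0 GT/s per lane with 128b/130b
# encoding, while the older 5.0 GT/s DMI2 used 8b/10b encoding.

def link_gbs(gt_per_s, lanes, encoding=128 / 130):
    """Peak one-direction bandwidth in GB/s for a multi-lane serial link."""
    return gt_per_s * encoding / 8 * lanes  # /8 converts bits to bytes

dmi3 = link_gbs(8.0, 4)                   # ~3.94 GB/s, same as PCIe 3.0 x4
dmi2 = link_gbs(5.0, 4, encoding=8 / 10)  # 2.0 GB/s for 5.0 GT/s DMI2
print(f"DMI3: {dmi3:.2f} GB/s vs DMI2: {dmi2:.2f} GB/s per direction")
```

Nearly double the uplink bandwidth is what makes chipset-attached NVMe storage a realistic proposition on Z170.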

The integrated graphics on Skylake are drastically improved, though the details of how Intel did it remain a mystery until we get to IDF. Our IGP tests show nearly 50% performance improvements for the Intel HD Graphics 530 compared to the HD Graphics 4600 on Haswell. As it turns out, Broadwell and the Core i7-5775C, with its Iris Pro graphics and eDRAM implementation, are still going to blow the Core i7-6700K out of the water, but more on that later.

Thermal levels of the Core i7-6700K rest at 91 watts, just a few watts higher than the Core i7-4790K and 4770K. The Core i7-5775C is much lower thanks to its lower clock speeds. The important thing here is that Skylake scales well up to this TDP level and provides excellent performance along the way. The fear of Intel completely abandoning the DIY market, with process tech and CPU designs geared only for low-power implementations, has been alleviated for at least one more generation.


Finally, look at that price. With the Core i7-4790K selling today for $339, Intel is listing the tray price of the Core i7-6700K at $350! And, after the initial launch excitement, you can expect this price to settle another $10-20 lower. Of course, this will require you to upgrade your motherboard as well as your system memory, so the total upgrade cost is going to be higher, but Intel seems to be extending olive branches in several areas with today's announcement. I know more than a handful of PC gamers that will be using this opportunity to upgrade from aging Sandy Bridge / Ivy Bridge systems, so the price of entry is a welcome data point.

Though I am focusing on the Core i7-6700K for today's story, as that was the sample we were sent, Intel is also pushing out a Core i5-6600K today at a price of $243. That CPU runs four cores without HyperThreading, with a base clock of 3.5 GHz and a maximum Turbo clock of 3.9 GHz. Other than the 6MB of cache (as opposed to 8MB), the rest of the specifications and capabilities remain unchanged.


August 5, 2015 | 08:19 AM - Posted by Joe K (not verified)

What about x265? The dedicated hardware for HEVC is one of the biggest selling points for those of us looking to do next gen encoding.

August 5, 2015 | 10:03 AM - Posted by Anonymous (not verified)

What about it? My guess is that it's like Carrizo: it is present and it works. Other than testing that it functions, it's not like there is any real benchmark to perform.

August 5, 2015 | 10:40 AM - Posted by Joe K (not verified)

It "functions" on basically all relatively modern hardware. The question is how much of an improvement there is in performance with dedicated encoders built in versus just throwing a bunch of cores at it. And I would love to see some comparison involving GPU encoding, both with the similarly hardware-supported GTX 960 or 950 and with a GPU lacking hardware encoding, like an AMD or earlier NVIDIA part.
Basically, now that we have a couple of CPUs and GPUs with full hardware support for HEVC/H.265/x265, versus the hybrid/partial and full software options, I'd love to see some serious benchmarking to give us an idea of what kind of performance gains we could see, and how much horsepower we would have to throw at a media encoding box to handle HEVC encoding practically.

August 5, 2015 | 11:11 AM - Posted by Polycrastinator (not verified)

Quality is a big question there, also. How does the output of the dedicated encoder look in comparison to the output from CPU cores and GPU cores?

August 6, 2015 | 07:46 AM - Posted by Albert89 (not verified)

The reason they didn't include AMD's Carrizo in this Skylake review is that it would show Intel's iGPU hasn't come as far as we were all led to believe. You gotta love the PC Perspective team hating all things AMD!

September 5, 2015 | 07:36 AM - Posted by Jest (not verified)

And which carrizo can you buy for desktop and on what motherboard are they working?

January 21, 2016 | 08:44 PM - Posted by Anonymous (not verified)

Well the same argument holds for broadwell which was included...

August 5, 2015 | 11:27 AM - Posted by willmore (not verified)

x265 isn't ready for users yet, as it's still in very active development; x264 is strongly recommended until x265 gets into better shape. I don't believe that x265 uses HEVC hardware acceleration; rather, it competes with it.

August 5, 2015 | 09:51 AM - Posted by PCPerFan (not verified)

CPU performance improvement is better than I expected. GPU improvement is as I expected. Nice part.

I still don't feel any urgent need to upgrade my Nehalem 920 or Sandy 2600k.

August 6, 2015 | 08:47 AM - Posted by Albert89 (not verified)

That's one gas guzzler.

August 5, 2015 | 09:56 AM - Posted by lantian (not verified)

If the price in Europe were reasonable for the i7, this would be a must-have upgrade. As it stands right now, the i5 6600K is going for 250 euro, while the i7 4700K is going for 389 at cheapest, and the damn i7 5820K is the exact same price. That's almost a 150 euro price difference. WTF is wrong with them?

August 5, 2015 | 10:35 AM - Posted by jessterman21 (not verified)

The real question is, how does a 5GHz i7-2700K compare to a 4.3GHz i7-6700K? Since the OC headroom on Skylake will probably top out there...

August 5, 2015 | 10:36 AM - Posted by jessterman21 (not verified)

Nevermind - HardOCP got theirs to 4.7GHz...

August 7, 2015 | 12:43 PM - Posted by Nomingtons (not verified)

My POV-Ray test was 4665.59 PPS on a 2700K at 5 GHz, 1.475 V, HT on.
Cinebench 11.5 was 9.42.
TrueCrypt AES = 4.6 and Twofish = 386.

August 5, 2015 | 10:45 AM - Posted by Joe K (not verified)

We've all heard about the two instances of overclocking pre-release samples of the 6700K on air; now let's get some actual release samples out there to handle all the basic scenarios:
24x7 on air
24x7 on water
Max OC on air
Max OC on water
Max OC on LN2

August 5, 2015 | 12:06 PM - Posted by lantian (not verified)

These are all release/retail samples; Intel did not send any ES out this time. And from what people are seeing, 4.7 GHz is about average for the i7 SKU.

August 6, 2015 | 08:42 AM - Posted by Joe K (not verified)

The stories I've seen claimed 5.0 & 5.2

August 8, 2015 | 02:56 AM - Posted by Roon (not verified)

kitguru has an article on somebody using LN2 to get 6.5 ghz...

August 5, 2015 | 10:50 AM - Posted by jerrytsao (not verified)

Maybe it's time for Sandy Bridge users to upgrade, but people who were on X58 have either jumped to X99 already or are still waiting for Skylake-E, instead of going to a "downgraded" platform.

August 5, 2015 | 10:58 AM - Posted by Anonymous (not verified)

And now the wait for the Skylake Pentium and Skylake Celeron begins...

August 5, 2015 | 11:03 AM - Posted by AnthonyShea

I still don't understand why they keep including an iGPU on the i7 (non-mobile) product line. If you're going the HTPC route, an i5 or i3 will do, seeing that 4K decoding isn't exactly taxing on processors anymore. Also glad to see voltage regulation back where it belongs. Tbh this seems to be what should have followed up the 4770K, or the 2600K for that matter (excluding nm); the fact that it's taken them so long to implement some real change is disappointing.

And you know they had been sitting on at least part of this for a fair bit of time, seeing how long they could hose us for. At least it releases with a good MSRP; too bad you have to replace pretty much everything lol. It's like they have a pact with board vendors to change the socket as frequently as possible.

August 5, 2015 | 04:32 PM - Posted by Misa (not verified)

The iGPU is still useful for Quick Sync (think game streaming), or for running a second/third monitor that doesn't hang off the GPU/SLI configuration so the discrete card can be powered down when using social media/web.

August 5, 2015 | 11:07 AM - Posted by Polycrastinator (not verified)

1.4v seems awfully high to me. Has Intel given any guidance on a "safe" voltage for these chips? Even if you can get it stable, if it's fried in 6 months that's no help to anyone. Can the chips really take that sort of voltage consistently?

August 5, 2015 | 04:27 PM - Posted by bdubinator (not verified)

1.32-1.4 V is the normal stock operating voltage on the 6700K when it is turbo boosting to 4.2 GHz. Anandtech was able to undervolt their samples to 1.20 V @ 4.3 GHz, though, which is likely what I'll be doing with my watercooling setup. I'm hoping for 4.4 GHz at or below 1.30 V.

August 5, 2015 | 11:21 PM - Posted by Anonymous (not verified)

What's the point of watercooling if you're undervolting? You could probably even get away with a passive cooler

August 5, 2015 | 12:44 PM - Posted by nevzim (not verified)

Why is idle power consumption so high?

August 5, 2015 | 01:06 PM - Posted by jharderson

Do you know when retailers will start selling Skylake CPUs?

August 5, 2015 | 03:42 PM - Posted by Anonymous (not verified)

This is starting to look like a paper launch.

August 5, 2015 | 05:08 PM - Posted by Anonymous (not verified)

I was under the impression that DX12 will leverage the CPU's integrated graphics in tandem with a discrete graphics card. Similar to what AMD had previously "highly" touted with mantle.
Wouldn't this feature make Broadwell the better choice for gamers over Skylake in relation to game performance?

August 6, 2015 | 01:46 PM - Posted by Evil Sunbro

I didn't see your comment earlier and posted a similar one farther down.

No reviewer I have seen, yet, has brought up this point. I would like to see a comparison on just the integrated graphics since DirectX 12 will be utilizing it.

I posted a link to the article that I read on this in my comment.

August 5, 2015 | 08:02 PM - Posted by kenjo

half the chip is useless. Who buys this cpu and uses the built in gfx ??

If there was a competitive alternative Intel would not be able to sell a chip where 50% of the die is not used.

August 6, 2015 | 01:15 AM - Posted by Dr_b_

True, the die space would have been better spent on increasing PCIe lanes and/or adding more cores. 6 core skylake with 40pcie lanes anyone? OR just keep the four cores and pcie lanes as-is, and reduce the price?

Haswell-E is now two generations behind (> Broadwell > Skylake). Big differentiator? You don't have to pay for a GPU that you will never use, and you get more PCIe lanes so you can run more than just one x16 device. Say you want to have two GPUs, a sound card, and a couple of NVMe PCIe drives, or a 10Gb PCIe card, etc.

SoC's have their place, but not as an enthusiast or gamer part. amiright.

August 6, 2015 | 09:27 AM - Posted by BlackDove (not verified)

Now that Intel owns Altera they could put an FPGA and make a compiler to accelerate parts of code by 100X or so.

That would be a lot more useful than the integrated GPU but that will be on E5 and E7 Xeons way before their small desktop chip.

Unless they sell them as E3 Xeons for hyperscalers.

August 5, 2015 | 10:50 PM - Posted by Anonymous (not verified)

I would expect several of these tests (IGP gaming, streaming apps, etc.) to be bandwidth sensitive. It could be that some of the differences are due to bandwidth improvements rather than any actual core improvements. You could be testing the difference between DDR4-2133 and DDR3-1600 more than anything else.
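The commenter's concern is easy to quantify: peak theoretical memory bandwidth is the transfer rate times 8 bytes per transfer per channel. A quick sketch of the dual-channel comparison:

```python
# Peak theoretical bandwidth: MT/s x 8 bytes per transfer x channel count.
# Real-world throughput is lower; this only bounds the comparison.

def peak_gbs(mt_per_s, channels=2, bytes_per_transfer=8):
    """Peak dual-channel (by default) memory bandwidth in GB/s."""
    return mt_per_s * bytes_per_transfer * channels / 1000

ddr3_1600 = peak_gbs(1600)  # 25.6 GB/s dual-channel
ddr4_2133 = peak_gbs(2133)  # ~34.1 GB/s dual-channel
print(f"DDR4-2133 has {ddr4_2133 / ddr3_1600 - 1:.0%} more peak bandwidth")
```

Roughly a third more peak bandwidth is plausibly a confounder in any IGP or streaming result, which supports the suggestion of testing at matched memory speeds.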

August 5, 2015 | 11:28 PM - Posted by Anonymous (not verified)

It still seems to be the case that the chipset has significantly more PCIe bandwidth than can be uplinked to the CPU.

August 6, 2015 | 01:30 AM - Posted by ibnMuhammad (not verified)

As mentioned by 'nevzim', I was also wondering, in this day and age, and especially on a 14nm fab, why is the idle power consumption so high (with SSD)?!
Even the load at almost 150 watts is pretty ridiculous.

Can the author confirm if these figures are without the monitor and without discrete GPU?

August 6, 2015 | 06:06 AM - Posted by Anonymous (not verified)

where i can see "widi " ? or 166 mhz ?

August 6, 2015 | 08:35 AM - Posted by kail (not verified)

Colonel Sanders:Hey guys,we've focused on KFC ramen for years,it's really really good(though still tastes like crap for real ramen lovers).Buy now,only $9.99 and get 1 pcs of chicken for free.

August 6, 2015 | 10:07 AM - Posted by BBMan (not verified)

Nicely done review. But I don't see any compelling evidence to make me feel bad about my Haswell-E. At all. In fact, If I were on an older i7, I might still wait to see what's over the next hill.

It's a significant jump, but not a compelling one. I think we'll be into a next generation of HW in maybe another 2-5 years. Call it 2020. And I'm thinking that "faster" may have to give way more to "efficient".

August 6, 2015 | 10:47 AM - Posted by Arul

looks better!

August 6, 2015 | 12:35 PM - Posted by Evil Sunbro

Every reviewer keeps downing on the integrated graphics, but according to the linked article, DirectX 12 will utilize the integrated along with the discrete. They say it will be like having another graphics card...

'How many video cards do you have in your PC? Think carefully (I didn’t, and told Wardell, who asked me the same question, just one). Wardell reminded me most modern PCs have at least two (not counting extremely high-end systems with cards run in tandem, in which case the number would be three or more).

“Everyone forgets about the integrated graphics card on the motherboard that you’d never use for gaming if you have a dedicated video card,” says Wardell. “With DirectX 12, you can fold in that integrated card as a seamless coprocessor. The game doesn’t have to do anything special, save support DirectX 12 and have that feature enabled. As a developer I don’t have to figure out which thing goes to what card, I just turn it on and DirectX 12 takes care of it.”

Wardell notes the performance boost from pulling in the integrated video card is going to be heavily dependent on the specific combination—the performance gap between integrated video cards over the past half-decade isn’t small—but at the high end, he says it could be as significant as DirectX 12’s ability to tap the idle cores in your CPU. Add the one on top of the other and, if he’s right, the shift at a developmental level starts to sound like that rare confluence of evolutionary plus the letter ‘r’.'

August 7, 2015 | 10:58 AM - Posted by Angelo (not verified)

What's in this new CPU for video editors?

August 7, 2015 | 01:14 PM - Posted by zgradt

Dude, upgrade your Handbrake software. The new version is way better optimized for 4+ core setups.

Also, how come nobody is comparing the 6700K to the 5820K? I'd rather pay a little more and get 6 cores instead of 4.

November 26, 2015 | 11:05 AM - Posted by Anonymous (not verified)

if review sites compare 5820K with 6700K no one will buy the 6700K.

August 7, 2015 | 02:57 PM - Posted by Seb777

Looking forward to the next I7 WITHOUT integrated GPU... :-) CMonnnn Intel !

August 8, 2015 | 01:42 PM - Posted by msmith (not verified)

Lol I'll just stick with this Phenom II x6 for another year. Someone get back to me when CPU performance has increased by an order of magnitude (aka when Intel gets real competition again).

August 10, 2015 | 12:25 PM - Posted by obababoy

I hope PC gamers do not get these CPUs. I don't want to see people waste money on this platform if you are in the market for building a gaming PC. The cost of DDR4 right now is just stupid expensive and motherboards for this are expensive. There are no incredible gains with either and you'd save a ton of money going with a 4790k or something similar on LGA 1150.

August 14, 2015 | 12:10 PM - Posted by Anon (not verified)

Except if you actually look at prices, DDR4-2400 4GB sticks are cheaper than their DDR3 counterparts. $30 vs $33

September 3, 2015 | 09:03 PM - Posted by DisasterArea (not verified)

I'm one of the people hanging grimly on to my old X58 platforms, of which I have two on ASUS P6T motherboards.

I'm in no rush to upgrade GPU from a pair of GTX680's in SLI, and no rush for faster HDD with 2x 240GB in Raid, so the question I have is, is it REALLY worth Dropping $2k to upgrade a motherboard / ram / cpu / psu.

Last year, for AUS$100 each, I replaced my i7-920 parts with Xeon hexacore units, which at 4ghz / 6 thread+HT (12 thread) seem to do fine.

If I look at the IPC @ 3.5 GHz above, and reckon that a new 6700K part will be a full 1/3rd faster per core, it strikes me that I'm only behind on single-core performance, even with such an old setup! Six cores at 1.0 each vs four cores at 1.33 each, for example; even if I'm generous and say a 6700K performs at 1.5x per core of the Xeon, I'm still packing the same processing punch.
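That back-of-the-envelope comparison can be written out directly (treating aggregate throughput as cores times per-core speed, which ignores turbo behavior, memory, and multi-core scaling limits; the per-core ratios are the commenter's assumptions, not benchmarks):

```python
# Naive aggregate throughput model: cores x relative per-core performance.
# Per-core ratios below are the commenter's illustrative assumptions.

def aggregate(cores, per_core):
    """Total throughput in arbitrary units, assuming perfect scaling."""
    return cores * per_core

xeon_x58   = aggregate(6, 1.00)  # hex-core X58 Xeon at 4 GHz as the baseline
skylake_lo = aggregate(4, 1.33)  # 6700K if ~1/3 faster per core
skylake_hi = aggregate(4, 1.50)  # 6700K if 1.5x faster per core
print(f"Xeon: {xeon_x58:.2f} vs 6700K: {skylake_lo:.2f} to {skylake_hi:.2f}")
```

Under those assumptions the two setups do land in the same ballpark for well-threaded workloads, which is the commenter's point; lightly threaded games would still favor the faster cores.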

Is that a fair assessment, or am I missing something? -
What I'm really asking is, if you have an old X58 platform - and aren't aiming for quad-sli or something requiring more pcie lanes than you have, isn't a $100 i7 to Xeon upgrade going to be enough to keep you in the game?

Is the IPC increase on a quad core worth the $2000 price tag, because I suspect you wouldn't notice the difference much!

I tend to play slightly older games, just finished mass effect series and bioshock for example, so I suspect the very latest titles would give issues to my setup, so I know it's adequate for me as I never see the CPU taxed much.

April 8, 2016 | 02:15 AM - Posted by paullost (not verified)

Buying chips with crappy on-board GPUs will encourage them to make more. And the fact is that speed is topping out because of real physical limits: signal propagation through the material the chips are made of, the bus speed of external GPUs, and the speed limits of the expansion slots. Doing all the binary physics, mathing out the hardware to function three-dimensionally, and running the signal to and from an expansion slot takes time.
