The Intel Core i7-6700K Review - Skylake First for Enthusiasts

Author: Ryan Shrout
Subject: Processors
Manufacturer: Intel

Power Consumption, Perf per Dollar, Closing Thoughts

Power Consumption Testing

Idle power consumption is quite good on the new Z170 chipset motherboard and Skylake processor, measuring just 58 watts. Under a full load, the 6700K system pulls a total of 143.8 watts (full system, not the CPU alone), which is actually 12 watts less than the Core i7-4790K based system. Considering the move to 14nm process technology as well as the new (yet to be revealed) architectural changes, this is right in line with expectations.

Performance per Dollar

One thing we wanted to take into consideration with this review is the idea of performance per dollar. To get some interesting data, I selected three benchmarks (7-Zip, Cinebench 11 and x264 v5.0) and included current pricing from Newegg.com (or Amazon if out of stock at Newegg).

At $350, the Core i7-6700K is priced right in line with previous Intel mainstream CPU releases, including the Core i7-4770K, 4790K and 3770K. It's obvious that those Extreme Edition processors have a hell of a hill to climb with their $1000+ price tags.

Of all the Intel processors in this test, the Core i7-6700K has the highest performance per dollar, beating out the 4790K and the Haswell-E based Core i7-5960X as well. AMD's FX-8350 does very well here thanks to its extremely low price, so users who are primarily concerned with this metric, but know the limitations of that platform in other areas, should definitely give AMD's parts a look.
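
The metric itself is just a benchmark score divided by current street price. A minimal sketch of how these charts are derived (the scores and prices below are illustrative placeholders, not our measured results):

```python
# Perf per dollar = benchmark score / current street price.
# Scores and prices here are illustrative placeholders only.
cpus = {
    "Core i7-6700K": {"score": 10.0, "price": 350},
    "Core i7-4790K": {"score": 9.6,  "price": 340},
    "Core i7-5960X": {"score": 15.8, "price": 1050},
    "FX-8350":       {"score": 6.9,  "price": 170},
}

for name, d in sorted(cpus.items(),
                      key=lambda kv: kv[1]["score"] / kv[1]["price"],
                      reverse=True):
    print(f"{name:>14}: {d['score'] / d['price'] * 100:.2f} points per $100")
```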

Pricing and Availability

Today marks the launch of Skylake, but only for the two specific processors discussed here: the Core i7-6700K and the Core i5-6600K. Intel claims that both of these processors will be widely available this week, and we have heard from ASUS, MSI and Gigabyte, all three of which have plenty of motherboards based on the Z170 chipset ready to go.

The wider array of Skylake parts, including lower-cost desktop solutions, C-SKUs and mobile processors, will be revealed at a future date, with more information likely coming during IDF next week. It's great to see Intel giving at least a slight nod to the gamers and enthusiasts that really power and push this industry forward.

Closing Thoughts

I think it's safe to say that, although Skylake's performance isn't going to make Haswell users jump out of their seats to order immediately, consumers still on Sandy Bridge or Ivy Bridge might finally have a line in the sand to step across. The Core i7-6700K is an incredibly fast desktop processor that will be able to handle your gaming, media encoding and web browsing with incredible efficiency and at nearly the same performance level as those $1000+ processors still taunting us with their 8-core setups. The CPU is unlocked, overclockable and easy on the wallet (in the grand scheme of things).

I think it might be the new Z170 chipset that is the icing on the cake, teasing the sweet tooth of more users than either Z97 or Z87 was able to in recent history. The upgrade to DMI 3.0 and 20 lanes of PCIe 3.0 on the chipset means that motherboard vendors can offer up some fairly interesting and unique combinations of features, including high speed wireless, USB 3.1, Thunderbolt, M.2 and U.2 storage and more. The ASUS Z170-Deluxe is an amazing example of a flagship board that offers up enough useful (that's the key) features to warrant the added cost over more basic motherboards. If you are on a tighter budget, the Z170-A from ASUS looks like a great option. We have already seen leaks of boards from other vendors supporting 7 (!!) PCIe SSDs, so clearly we aren't the only ones thinking outside the box here.

As an enthusiast, I am excited for what Skylake brings to the table. And yes, we might finally have a part that gets those Nehalem and Sandy Bridge users off their butts and into the upgrade line.

Comments

August 5, 2015 | 08:19 AM - Posted by Joe K (not verified)

What about x265? The dedicated hardware for HEVC is one of the biggest selling points for those of us looking to do next gen encoding.

August 5, 2015 | 10:03 AM - Posted by Anonymous (not verified)

What about it? My guess is that it's like Carrizo: it is present and it works. Other than testing that it functions, it's not like there is any real benchmark to perform.

August 5, 2015 | 10:40 AM - Posted by Joe K (not verified)

It "functions" on basically all relatively modern hardware. The question is how much of an improvement is there in performance with dedicated encoders built in vs just throwing a bunch of cores at it? And I would love to see some comparison involving GPU encoding, both with the similarly hardware supported GTX 960 or 950, with a non-hardware encoding equipped GPU like an AMD or earlier nvidia part.
Basically, now that we have a couple CPU's & GPU's with full hardware support for HEVC/H.265/x265, vs the hybrid/partial, and full software options, I'd love to see some serious benchmarking to give us an idea of what kind of performance gains we could see, and how much horsepower we would have to throw at a media encoding box to handle HEVC encoding practically.
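
Even something as simple as timing the same clip through ffmpeg's software and hardware HEVC paths would be a start. A rough sketch of the kind of harness I mean (this assumes an ffmpeg build with libx265 and Intel Quick Sync (hevc_qsv) support; encoder availability varies by build and platform):

```python
import subprocess
import time

SOURCE = "test_clip.mp4"  # any reasonably long source clip

# Assumes an ffmpeg build with libx265 and Quick Sync (QSV) enabled.
encoders = {
    "software_x265":  ["-c:v", "libx265", "-preset", "medium"],
    "quicksync_hevc": ["-c:v", "hevc_qsv"],
}

for name, args in encoders.items():
    start = time.time()
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE] + args + [f"out_{name}.mkv"],
        check=True, capture_output=True,
    )
    print(f"{name}: {time.time() - start:.1f} seconds")
```

Quality would still need to be scored separately (SSIM or similar), since a faster encode that looks worse isn't a win.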

August 5, 2015 | 11:11 AM - Posted by Polycrastinator (not verified)

Quality is a big question there, also. How does the output of the dedicated encoder look in comparison to the output from CPU cores and GPU cores?

August 6, 2015 | 07:46 AM - Posted by Albert89 (not verified)

The reason they didn't include AMD's Carrizo in this Skylake review is that it would show Intel's iGPU hasn't come as far as we were all led to believe. You gotta love the PC Perspective team hating all things AMD!

September 5, 2015 | 07:36 AM - Posted by Jest (not verified)

And which Carrizo can you buy for the desktop, and on what motherboards does it work?

January 21, 2016 | 08:44 PM - Posted by Anonymous (not verified)

Well, the same argument holds for Broadwell, which was included...

August 5, 2015 | 11:27 AM - Posted by willmore (not verified)

x265 isn't ready for users yet, as it's still in very active development. x264 is strongly recommended until x265 gets into better shape. I don't believe x265 uses HEVC hardware acceleration; rather, it competes with it.

August 5, 2015 | 09:51 AM - Posted by PCPerFan (not verified)

CPU performance improvement is better than I expected. GPU improvement is as I expected. Nice part.

I still don't feel any urgent need to upgrade my Nehalem 920 or Sandy 2600k.

August 6, 2015 | 08:47 AM - Posted by Albert89 (not verified)

That's one gas guzzler.

August 5, 2015 | 09:56 AM - Posted by lantian (not verified)

If the price in Europe were reasonable for the i7, this would be a must-have upgrade. As it stands right now, the i5-6600K is going for 250 euro while the i7-6700K is going for 389 at the cheapest, and the damn i7-5820K is the exact same price. That's almost a 150 euro price difference. WTF is wrong with them?

August 5, 2015 | 10:35 AM - Posted by jessterman21 (not verified)

The real question is: how does a 5GHz i7-2700K compare to a 4.3GHz i7-6700K? Since the OC headroom on Skylake will probably top out there...

August 5, 2015 | 10:36 AM - Posted by jessterman21 (not verified)

Nevermind - HardOCP got theirs to 4.7GHz...

August 7, 2015 | 12:43 PM - Posted by Nomingtons (not verified)

My POV-Ray test was 4665.59 PPS on a 2700K at 5GHz, 1.475v, HT on.
Cinebench 11.5 was 9.42.
TrueCrypt AES = 4.6 and Twofish = 386.

August 5, 2015 | 10:45 AM - Posted by Joe K (not verified)

We've all heard about the all of two instances of overclocking pre-release samples of the 6700K on air; now let's get some actual release samples out there to handle all the basic scenarios:
24x7 on air
24x7 on water
Max OC on air
Max OC on water
Max OC on LN2

August 5, 2015 | 12:06 PM - Posted by lantian (not verified)

These are all release/retail samples; Intel did not send any ES out this time, and from what people are seeing, 4.7GHz for the i7 SKU is about average.

August 6, 2015 | 08:42 AM - Posted by Joe K (not verified)

The stories I've seen claimed 5.0 & 5.2

August 8, 2015 | 02:56 AM - Posted by Roon (not verified)

KitGuru has an article on somebody using LN2 to get 6.5GHz...

August 5, 2015 | 10:50 AM - Posted by jerrytsao (not verified)

Maybe it's time for Sandy Bridge users to upgrade, but people who were on X58 either jumped to X99 already or are still waiting for Skylake-E, instead of going to a "downgraded" platform.

August 5, 2015 | 10:58 AM - Posted by Anonymous (not verified)

And now the wait for the Skylake Pentium and Skylake Celeron begins...

August 5, 2015 | 11:03 AM - Posted by AnthonyShea

I still don't understand why they keep including an iGPU on the i7 (non-mobile) product line. If you're going the HTPC route, an i5 or i3 will do, seeing that 4K decoding isn't exactly taxing on processors anymore. Also glad to see voltage regulation back where it belongs. Tbh this seems to be what should have followed up the 4770K, or the 2600K for that matter (excluding nm); the fact that it's taken them so long to implement some real change is disappointing.

And you know they had been sitting on at least part of this for a fair bit of time, seeing how long they could hose us for. At least it releases with a good MSRP; too bad you have to replace pretty much everything lol. It's like they have a pact with board vendors to change the socket as frequently as possible.

August 5, 2015 | 04:32 PM - Posted by Misa (not verified)

The iGPU is still useful for Quick Sync (think game streaming) or for running a second/third monitor that doesn't hang off the GPU/SLI configuration, so the discrete card can be powered down when using social media/web.

August 5, 2015 | 11:07 AM - Posted by Polycrastinator (not verified)

1.4v seems awfully high to me. Has Intel given any guidance on a "safe" voltage for these chips? Even if you can get it stable, if it's fried in 6 months that's no help to anyone. Can the chips really take that sort of voltage consistently?

August 5, 2015 | 04:27 PM - Posted by bdubinator (not verified)

1.32-1.4 is the normal stock operating voltage on the 6700K when it is turbo boosting to 4.2GHz. AnandTech was able to undervolt their sample to 1.20 volts @ 4.3GHz, though, which is likely what I'll be doing with my watercooling setup. I'm hoping for 4.4GHz at or below 1.30v.

August 5, 2015 | 11:21 PM - Posted by Anonymous (not verified)

What's the point of watercooling if you're undervolting? You could probably even get away with a passive cooler

August 5, 2015 | 12:44 PM - Posted by nevzim (not verified)

Why is idle power consumption so high?

August 5, 2015 | 01:06 PM - Posted by jharderson

Do you know when retailers will start selling Skylake CPUs?

August 5, 2015 | 03:42 PM - Posted by Anonymous (not verified)

This is starting to look like a paper launch.

August 5, 2015 | 05:08 PM - Posted by Anonymous (not verified)

I was under the impression that DX12 will leverage the CPU's integrated graphics in tandem with a discrete graphics card, similar to what AMD had previously "highly" touted with Mantle.
Wouldn't this feature make Broadwell the better choice for gamers over Skylake in relation to game performance?

August 6, 2015 | 01:46 PM - Posted by Evil Sunbro

I didn't see your comment earlier and posted a similar one farther down.

No reviewer I have seen, yet, has brought up this point. I would like to see a comparison on just the integrated graphics since DirectX 12 will be utilizing it.

I posted a link to the article that I read on this in my comment.

August 5, 2015 | 08:02 PM - Posted by kenjo

Half the chip is useless. Who buys this CPU and uses the built-in graphics??

If there were a competitive alternative, Intel would not be able to sell a chip where 50% of the die goes unused.

August 6, 2015 | 01:15 AM - Posted by Dr_b_

True, the die space would have been better spent on increasing PCIe lanes and/or adding more cores. A 6-core Skylake with 40 PCIe lanes, anyone? Or just keep the four cores and PCIe lanes as-is and reduce the price?

Haswell-E is now 2 generations behind (>Broadwell>Skylake). Big differentiator? You don't have to pay for a GPU that you will never use, and you get more PCIe lanes so you can run more than just one x16 device. Say you want to have 2 GPUs, a sound card, a couple of NVMe PCIe drives, or a 10GbE PCIe card, etc.

SoCs have their place, but not as an enthusiast or gamer part. Amiright?

August 6, 2015 | 09:27 AM - Posted by BlackDove (not verified)

Now that Intel owns Altera, they could put an FPGA on the die and make a compiler to accelerate parts of code by 100X or so.

That would be a lot more useful than the integrated GPU, but that will show up on E5 and E7 Xeons way before their small desktop chips.

Unless they sell them as E3 Xeons for hyperscalers.

August 5, 2015 | 10:50 PM - Posted by Anonymous (not verified)

I would expect several of these tests (iGPU gaming, streaming apps, etc.) to be bandwidth sensitive. It could be that some of the differences are due to bandwidth improvements rather than any actual core improvements. You could be testing the difference between DDR4-2133 and DDR3-1600 more than anything else.
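
For a rough sense of that gap, peak theoretical bandwidth is just channels × bus width × transfer rate; a quick back-of-the-envelope calc (real-world throughput is of course lower):

```python
# Peak theoretical bandwidth = channels * bus width (bytes) * transfer rate (MT/s)
def peak_gb_per_s(mt_per_s, channels=2, bus_width_bytes=8):
    return channels * bus_width_bytes * mt_per_s / 1000.0

print(f"DDR4-2133 dual channel: {peak_gb_per_s(2133):.1f} GB/s")  # ~34.1 GB/s
print(f"DDR3-1600 dual channel: {peak_gb_per_s(1600):.1f} GB/s")  # ~25.6 GB/s
```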

August 5, 2015 | 11:28 PM - Posted by Anonymous (not verified)

It still seems to be the case that the chipset has significantly more PCIe bandwidth than can be uplinked to the CPU.

August 6, 2015 | 01:30 AM - Posted by ibnMuhammad (not verified)

As mentioned by 'nevzim', I was also wondering: in this day and age, especially with a 14nm fab, why is the idle power consumption so high (with an SSD)?!
Even the load figure at almost 150 watts is pretty ridiculous.

Can the author confirm whether these figures are without the monitor and without a discrete GPU?

August 6, 2015 | 06:06 AM - Posted by Anonymous (not verified)

Where can I see "WiDi"? Or 166MHz?

August 6, 2015 | 08:35 AM - Posted by kail (not verified)

Colonel Sanders: Hey guys, we've focused on KFC ramen for years, and it's really, really good (though it still tastes like crap to real ramen lovers). Buy now, only $9.99, and get 1 piece of chicken for free.

August 6, 2015 | 10:07 AM - Posted by BBMan (not verified)

Nicely done review. But I don't see any compelling evidence to make me feel bad about my Haswell-E. At all. In fact, if I were on an older i7, I might still wait to see what's over the next hill.

It's a significant jump, but not a compelling one. I think we'll be into the next generation of hardware in maybe another 2-5 years. Call it 2020. And I'm thinking that "faster" may have to give way more to "efficient".

August 6, 2015 | 10:47 AM - Posted by Arul

looks better!

August 6, 2015 | 12:35 PM - Posted by Evil Sunbro

Every reviewer keeps coming down on the integrated graphics, but according to the linked article, DirectX 12 will utilize the integrated GPU along with the discrete card. They say it will be like having another graphics card...

http://time.com/3975043/windows-10-microsoft-gamers/

'How many video cards do you have in your PC? Think carefully (I didn’t, and told Wardell, who asked me the same question, just one). Wardell reminded me most modern PCs have at least two (not counting extremely high-end systems with cards run in tandem, in which case the number would be three or more).

“Everyone forgets about the integrated graphics card on the motherboard that you’d never use for gaming if you have a dedicated video card,” says Wardell. “With DirectX 12, you can fold in that integrated card as a seamless coprocessor. The game doesn’t have to do anything special, save support DirectX 12 and have that feature enabled. As a developer I don’t have to figure out which thing goes to what card, I just turn it on and DirectX 12 takes care of it.”

Wardell notes the performance boost from pulling in the integrated video card is going to be heavily dependent on the specific combination—the performance gap between integrated video cards over the past half-decade isn’t small—but at the high end, he says it could be as significant as DirectX 12’s ability to tap the idle cores in your CPU. Add the one on top of the other and, if he’s right, the shift at a developmental level starts to sound like that rare confluence of evolutionary plus the letter ‘r’.'

August 7, 2015 | 10:58 AM - Posted by Angelo (not verified)

What's in this new CPU for video editors?

August 7, 2015 | 01:14 PM - Posted by zgradt

Dude, upgrade your Handbrake software. The new version is way better optimized for 4+ core setups.

Also, how come nobody is comparing the 6700K to the 5820K? I'd rather pay a little more and get 6 cores instead of 4.

November 26, 2015 | 11:05 AM - Posted by Anonymous (not verified)

If review sites compared the 5820K with the 6700K, no one would buy the 6700K.

August 7, 2015 | 02:57 PM - Posted by Seb777

Looking forward to the next i7 WITHOUT an integrated GPU... :-) C'monnnn Intel!

August 8, 2015 | 01:42 PM - Posted by msmith (not verified)

Lol I'll just stick with this Phenom II x6 for another year. Someone get back to me when CPU performance has increased by an order of magnitude (aka when Intel gets real competition again).

August 10, 2015 | 12:25 PM - Posted by obababoy

I hope PC gamers do not buy these CPUs. I don't want to see people waste money on this platform if they are in the market for building a gaming PC. DDR4 right now is just stupidly expensive, and motherboards for this platform are expensive too. There are no incredible gains with either, and you'd save a ton of money going with a 4790K or something similar on LGA 1150.

August 14, 2015 | 12:10 PM - Posted by Anon (not verified)

Except that if you actually look at prices, DDR4-2400 4GB sticks are cheaper than their DDR3 counterparts: $30 vs $33.

September 3, 2015 | 09:03 PM - Posted by DisasterArea (not verified)

I'm one of the people hanging grimly on to my old X58 platforms, of which I have two on ASUS P6T motherboards.

I'm in no rush to upgrade the GPU from a pair of GTX 680s in SLI, and in no rush for faster storage with 2x 240GB SSDs in RAID, so the question I have is: is it REALLY worth dropping $2k to upgrade motherboard / RAM / CPU / PSU?

Last year, for AUS$100 each, I replaced my i7-920 parts with Xeon hexacore units, which at 4GHz with 6 cores + HT (12 threads) seem to do fine.

If I look at the IPC @ 3.5GHz above, and reckon that a new 6700K part will be a full 1/3rd faster per core, it strikes me that I'm only behind on single-core performance even with such an old setup! 6 cores at 1.0 each vs 4 cores at 1.33 each, for example; even if I'm generous and say a 6700K performs at 1.5x per core of the Xeon, I'm still packing the same processing punch.
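
Spelling that arithmetic out (illustrative relative numbers only, and assuming perfect multi-threaded scaling, which real workloads rarely get):

```python
# Aggregate throughput = cores * relative per-core speed (perfect scaling assumed)
xeon_x58 = 6 * 1.00      # hexacore Xeon, baseline per-core speed -> 6.0
skylake_low = 4 * 1.33   # 6700K if +33% per core                 -> 5.32
skylake_high = 4 * 1.50  # 6700K if a generous +50% per core      -> 6.0
print(xeon_x58, skylake_low, skylake_high)
```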

Is that a fair assessment, or am I missing something?
What I'm really asking is: if you have an old X58 platform and aren't aiming for quad-SLI or something requiring more PCIe lanes than you have, isn't a $100 i7-to-Xeon upgrade going to be enough to keep you in the game?

Is the IPC increase on a quad core worth the $2000 price tag? Because I suspect you wouldn't notice the difference much!

I tend to play slightly older games (just finished the Mass Effect series and BioShock, for example), so while I suspect the very latest titles would give my setup issues, I know it's adequate for me as I never see the CPU taxed much.

April 8, 2016 | 02:15 AM - Posted by paullost (not verified)

Buying chipsets with crappy on-board GPUs will encourage them to make more. And the fact is that speed is topping out because of the limits of signal propagation through the material the chips are made of; the bus speed of external GPUs and the speed limits of the expansion slots are the real limits still left. That, and doing all the binary physics, mathing out the hardware to function three-dimensionally, and running the signal to and from an expansion slot all take time.
