The Intel Core i7-6700K Review - Skylake First for Enthusiasts

Author: Ryan Shrout
Subject: Processors
Manufacturer: Intel

Z170 Chipset and ASUS Deluxe: Adding 20 Lanes of PCIe 3.0

The Core i7-6700K and its Skylake architecture bring with them a new processor socket, LGA 1151. It is not backward compatible with any previous processors, and Haswell/Broadwell parts will not install into motherboards built for Skylake either. The move to DDR4 memory, as well as a faster DMI3 implementation, required Intel to move to another iteration of the LGA design and, this time at least, the advantages of the new chipset are noteworthy.

Intel's Z170 chipset will be included on the motherboards of choice for Skylake. Thus far I have only had hands-on time with the ASUS Z170-Deluxe (and Morry has had the ASUS Z170-A), but both offer compelling features and options that were previously unavailable on Z97 boards.

The most important feature of the Z170 chipset, and the one that enables many of its other new capabilities, is the move to the 8.0 GT/s DMI3 bus. This is the interface between the processor and the chipset itself, and it was previously limited to speeds that restricted some pretty basic configurations. Allyn knows all about being unable to utilize all six SATA ports in a RAID-0 array without limiting performance because of the slower DMI2 connection on Haswell-generation platforms. With Skylake this should no longer be an issue and, thanks to other changes on the chipset, it allows a lot more flexibility for motherboard designers.
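
To put rough numbers on that bottleneck, here is some quick back-of-the-envelope math - a sketch in Python, with per-lane usable-bandwidth figures derived from the standard PCIe/SATA encoding overheads rather than from Intel's briefing materials:

    # DMI is electrically a x4 PCIe-style link. PCIe 2.0 (DMI2) uses 8b/10b
    # encoding; PCIe 3.0 (DMI3) uses the leaner 128b/130b encoding.
    def link_bandwidth_gbs(gt_per_s, lanes, payload_fraction):
        """Usable bandwidth of a multi-lane serial link, in GB/s."""
        return gt_per_s * lanes * payload_fraction / 8  # bits -> bytes

    dmi2 = link_bandwidth_gbs(5.0, 4, 8 / 10)     # ~2.0 GB/s
    dmi3 = link_bandwidth_gbs(8.0, 4, 128 / 130)  # ~3.94 GB/s

    # Six SATA 6.0 Gb/s ports at ~600 MB/s usable each (8b/10b):
    sata_raid0_demand = 6 * 0.6                   # 3.6 GB/s

    print(f"DMI2 uplink: {dmi2:.2f} GB/s")  # 2.00 -> chokes the array
    print(f"DMI3 uplink: {dmi3:.2f} GB/s")  # 3.94 -> enough headroom

A six-drive SATA RAID-0 array wanting roughly 3.6 GB/s blows past DMI2's ~2 GB/s but fits under DMI3's ~3.94 GB/s.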

A close second in importance, in my mind, is the move to support up to 20 lanes of PCI Express 3.0 on the Z170 chipset! Compared to the Z87 and Z97 chipsets, which shipped with eight lanes of PCIe 2.0, this is a dramatic improvement and allows vendors to create a collection of accessories for Z170 motherboards that otherwise would not be possible. Think of multiple USB 3.1 controllers, additional PCIe storage connectivity, support for more (and faster) add-in cards, etc.
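
Just how dramatic is easy to see in aggregate, using the same usable-bandwidth figures as above (again my own arithmetic, not Intel's):

    # Aggregate chipset PCIe bandwidth, previous platform vs. Z170.
    z97_total  = 8 * 0.500    # 8 lanes of PCIe 2.0  -> ~4.0 GB/s
    z170_total = 20 * 0.985   # 20 lanes of PCIe 3.0 -> ~19.7 GB/s

    # ~19.7 GB/s far exceeds the ~3.94 GB/s DMI3 uplink, so the chipset
    # lanes are deliberately oversubscribed: fine for bursty I/O from many
    # devices, a ceiling if everything runs flat out at once.

Keep that oversubscription in mind: the 20 lanes are there for connectivity and flexibility, not because all of them can stream to the CPU at once.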

The only other notable change is a move to 10 USB 3.0 ports and/or 14 USB 2.0 ports. You still have access to six SATA III / 6.0 Gbps ports, Intel Rapid Storage Technology, a Gigabit Ethernet MAC, etc. On a side note, the Intel RST software will now support RAID configurations of PCIe and NVMe storage devices - so those of you crazy enough to want to run multiples of them in your system will now have that capability. Thanks to the DMI3 connection back to the processor, you can fully utilize a pair of x2 PCIe 3.0 SSDs in RAID-0 and get maximum performance.
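
A quick sanity check on that claim (editor's arithmetic, assuming ~985 MB/s of usable bandwidth per PCIe 3.0 lane):

    pcie3_lane  = 0.985                  # ~GB/s usable per PCIe 3.0 lane
    pair_of_x2  = 2 * 2 * pcie3_lane     # two drives, x2 each -> ~3.94 GB/s
    dmi3_uplink = 4 * pcie3_lane         # x4 uplink           -> ~3.94 GB/s
    print(f"{pair_of_x2:.2f} GB/s demand vs {dmi3_uplink:.2f} GB/s uplink")

The pair of x2 drives lines up almost exactly with the uplink, which also means a RAID-0 pair of x4 drives would be capped by DMI3 rather than running at full speed.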

On the processor side, Skylake still supports the same 16 lanes of PCI Express 3.0, which can be dedicated to a single slot for maximum performance, split into a pair of x8 slots, or even configured as one x8 plus two x4 slots. Interestingly, with the added support for 20 lanes of PCIe 3.0 on the Z170 chipset, it SHOULD be possible for NVIDIA to implement support for 3-Way SLI on Skylake - something it has not done for Haswell. Though as we continue to see the decline in support for 3-Way SLI and 3-Way CrossFire, I doubt this is a priority.
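
For clarity, those are the only CPU-side splits on the menu; here is a trivial sketch of that rule (my own illustration, not Intel documentation):

    # Valid CPU-side PCIe 3.0 lane splits on Skylake, per the paragraph
    # above: one x16, two x8, or one x8 plus two x4.
    VALID_SPLITS = [(16,), (8, 8), (8, 4, 4)]

    def supported(slots):
        """True if the requested slot widths match a supported split."""
        return tuple(sorted(slots, reverse=True)) in VALID_SPLITS

    print(supported([8, 8]))         # True  -> 2-way SLI/CrossFire
    print(supported([8, 4, 4]))      # True
    print(supported([4, 4, 4, 4]))   # False -> not a Skylake CPU split

Anything needing more simultaneous x16/x8 devices is what X99 and its 40-lane CPUs are for.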

The ASUS Z170-Deluxe

This is by no means a review of the ASUS Z170-Deluxe, but since it is the only Z170 motherboard in the office today, I thought it deserved a quick overview.

If you are looking for a board with a ton of features, this is the place to be. Here is a high-level overview of what it offers:

  • 6 x USB 3.1 ports
  • 1 x USB 3.1 Type-C port
  • 4 x front USB 3.0 support
  • Dual Gigabit Intel LAN
  • 3x3 802.11ac Wireless
  • DDR4-3466 memory support
  • Hyper M.2 Mini Adapter card
  • NVMe U.2 device support
  • HDMI 2.0 support (IGP)

Oh, and maybe the coolest feature of all: the LED lights on the chipset cooler are controllable across 256 colors, can be set to "dance to the beat of your music", or can be set to indicate CPU temperature. It is actually a really cool effect in person.

Of course, ASUS has tweaked the motherboard's UEFI implementation as well to improve fan controls, make overclocking easier, and improve the utility of functions like UEFI updates. Check out Morry's overview of the Z170-A for a bit more info on these areas, and check back soon for reviews of these and other Z170 motherboards!

Comments


August 5, 2015 | 08:19 AM - Posted by Joe K (not verified)

What about x265? The dedicated hardware for HEVC is one of the biggest selling points for those of us looking to do next gen encoding.

August 5, 2015 | 10:03 AM - Posted by Anonymous (not verified)

What about it? My guess is that it's like Carrizo: it is present and it works. Other than testing that it functions, it's not like there is any real benchmark to perform.

August 5, 2015 | 10:40 AM - Posted by Joe K (not verified)

It "functions" on basically all relatively modern hardware. The question is how much of an improvement is there in performance with dedicated encoders built in vs just throwing a bunch of cores at it? And I would love to see some comparison involving GPU encoding, both with the similarly hardware supported GTX 960 or 950, with a non-hardware encoding equipped GPU like an AMD or earlier nvidia part.
Basically, now that we have a couple CPU's & GPU's with full hardware support for HEVC/H.265/x265, vs the hybrid/partial, and full software options, I'd love to see some serious benchmarking to give us an idea of what kind of performance gains we could see, and how much horsepower we would have to throw at a media encoding box to handle HEVC encoding practically.

August 5, 2015 | 11:11 AM - Posted by Polycrastinator (not verified)

Quality is a big question there, also. How does the output of the dedicated encoder look in comparison to the output from CPU cores and GPU cores?

August 6, 2015 | 07:46 AM - Posted by Albert89 (not verified)

The reason they didn't include AMD's Carrizo in this Skylake review is that it would show Intel's iGPU hasn't come as far as we were all led to believe. You gotta love the PC Perspective team hating all things AMD!

September 5, 2015 | 07:36 AM - Posted by Jest (not verified)

And which Carrizo can you buy for desktop, and on what motherboards do they work?

January 21, 2016 | 08:44 PM - Posted by Anonymous (not verified)

Well, the same argument holds for Broadwell, which was included...

August 5, 2015 | 11:27 AM - Posted by willmore (not verified)

x265 isn't ready for users yet, as it's still in very active development. x264 is strongly recommended until x265 gets into better shape. I don't believe that x265 uses HEVC hardware acceleration; rather, it competes with it.

August 5, 2015 | 09:51 AM - Posted by PCPerFan (not verified)

CPU performance improvement is better than I expected. GPU improvement is as I expected. Nice part.

I still don't feel any urgent need to upgrade my Nehalem 920 or Sandy 2600k.

August 6, 2015 | 08:47 AM - Posted by Albert89 (not verified)

That's one gas guzzler.

August 5, 2015 | 09:56 AM - Posted by lantian (not verified)

If the price in Europe were reasonable for the i7, this would be a must-have upgrade. As it stands right now, the i5-6600K is going for 250 euro while the i7-6700K is going for 389 at the cheapest - the damn i7-5820K is the exact same price. That's almost a 150 euro difference. WTF is wrong with them?

August 5, 2015 | 10:35 AM - Posted by jessterman21 (not verified)

The real question is, how does a 5 GHz i7-2700K compare to a 4.3 GHz i7-6700K? Since the OC headroom on Skylake will probably top out there...

August 5, 2015 | 10:36 AM - Posted by jessterman21 (not verified)

Nevermind - HardOCP got theirs to 4.7GHz...

August 7, 2015 | 12:43 PM - Posted by Nomingtons (not verified)

My POV-Ray test was 4665.59 PPS on a 2700K at 5 GHz, 1.475 V, HT on.
Cinebench 11.5 was 9.42.
TrueCrypt AES = 4.6 and Twofish = 386.

August 5, 2015 | 10:45 AM - Posted by Joe K (not verified)

We've all heard about all of two instances of overclocking pre-release samples of the 6700K on air; now let's get some actual release samples out there to cover all the basic scenarios:
24x7 on air
24x7 on water
Max OC on air
Max OC on water
Max OC on LN2

August 5, 2015 | 12:06 PM - Posted by lantian (not verified)

These are all release/retail samples; Intel did not send any ES out this time, and from what people are seeing, 4.7 GHz for the i7 SKU is about average.

August 6, 2015 | 08:42 AM - Posted by Joe K (not verified)

The stories I've seen claimed 5.0 & 5.2

August 8, 2015 | 02:56 AM - Posted by Roon (not verified)

KitGuru has an article on somebody using LN2 to get 6.5 GHz...

August 5, 2015 | 10:50 AM - Posted by jerrytsao (not verified)

Maybe it's time for Sandy Bridge users to upgrade, but people who were on X58 either jumped to X99 already or are still waiting for Skylake-E, instead of going to a "downgraded" platform.

August 5, 2015 | 10:58 AM - Posted by Anonymous (not verified)

And now the wait for the Skylake Pentium and Skylake Celeron begins...

August 5, 2015 | 11:03 AM - Posted by AnthonyShea

I still don't understand why they keep including an iGPU on the (non-mobile) i7 product line. If you're going the HTPC route, an i5 or i3 will do, seeing that 4K decoding isn't exactly taxing on processors anymore. Also glad to see voltage regulation back where it belongs. Tbh, this seems to be what should have followed the 4770K, or the 2600K for that matter (excluding the process node); the fact that it's taken them so long to implement some real change is disappointing.

And you know they had been sitting on at least part of this for a fair bit of time, seeing how long they could hose us for. At least it releases with a good MSRP; too bad you have to replace pretty much everything, lol. It's like they have a pact with board vendors to change the socket as frequently as possible.

August 5, 2015 | 04:32 PM - Posted by Misa (not verified)

The iGPU is still useful for Quick Sync (think game streaming), or for running a second/third monitor that doesn't hang off the GPU/SLI configuration so the discrete cards can be powered down when you're on social media or the web.

August 5, 2015 | 11:07 AM - Posted by Polycrastinator (not verified)

1.4 V seems awfully high to me. Has Intel given any guidance on a "safe" voltage for these chips? Even if you can get it stable, if it's fried in 6 months that's no help to anyone. Can the chips really take that sort of voltage consistently?

August 5, 2015 | 04:27 PM - Posted by bdubinator (not verified)

1.32-1.4 V is the normal stock operating voltage on the 6700K when it is turbo boosting to 4.2 GHz. AnandTech was able to undervolt their samples to 1.20 V @ 4.3 GHz, though, which is likely what I'll be doing with my watercooling setup. I'm hoping for 4.4 GHz at or below 1.30 V.

August 5, 2015 | 11:21 PM - Posted by Anonymous (not verified)

What's the point of watercooling if you're undervolting? You could probably even get away with a passive cooler

August 5, 2015 | 12:44 PM - Posted by nevzim (not verified)

Why is idle power consumption so high?

August 5, 2015 | 01:06 PM - Posted by jharderson

Do you know when retailers will start selling Skylake CPUs?

August 5, 2015 | 03:42 PM - Posted by Anonymous (not verified)

This is starting to look like a paper launch.

August 5, 2015 | 05:08 PM - Posted by Anonymous (not verified)

I was under the impression that DX12 will leverage the CPU's integrated graphics in tandem with a discrete graphics card, similar to what AMD had previously "highly" touted with Mantle.
Wouldn't this feature make Broadwell the better choice for gamers over Skylake in relation to game performance?

August 6, 2015 | 01:46 PM - Posted by Evil Sunbro

I didn't see your comment earlier and posted a similar one farther down.

No reviewer I have seen, yet, has brought up this point. I would like to see a comparison on just the integrated graphics since DirectX 12 will be utilizing it.

I posted a link to the article that I read on this in my comment.

August 5, 2015 | 08:02 PM - Posted by kenjo

Half the chip is useless. Who buys this CPU and uses the built-in gfx??

If there were a competitive alternative, Intel would not be able to sell a chip where 50% of the die is not used.

August 6, 2015 | 01:15 AM - Posted by Dr_b_

True, the die space would have been better spent on increasing PCIe lanes and/or adding more cores. Six-core Skylake with 40 PCIe lanes, anyone? Or just keep the four cores and PCIe lanes as-is and reduce the price?

Haswell-E is now two generations behind (> Broadwell > Skylake). Big differentiator? You don't have to pay for a GPU that you will never use, and you get more PCIe lanes so you can run more than just one x16 device. Say you want two GPUs, a sound card, a couple of NVMe PCIe drives, or a 10GbE PCIe card, etc.

SoCs have their place, but not as an enthusiast or gamer part. Amiright.

August 6, 2015 | 09:27 AM - Posted by BlackDove (not verified)

Now that Intel owns Altera, they could put an FPGA on the package and make a compiler to accelerate parts of code by 100x or so.

That would be a lot more useful than the integrated GPU, but that will come to E5 and E7 Xeons way before their small desktop chips.

Unless they sell them as E3 Xeons for hyperscalers.

August 5, 2015 | 10:50 PM - Posted by Anonymous (not verified)

I would expect several of these tests (IGP gaming, streaming apps, etc.) to be bandwidth sensitive. It could be that some of the differences are due to bandwidth improvements rather than any actual core improvements. You could be testing the difference between DDR4-2133 and DDR3-1600 more than anything else.

August 5, 2015 | 11:28 PM - Posted by Anonymous (not verified)

It still seems to be the case that the chipset has significantly more PCIe bandwidth than can be uplinked to the CPU.

August 6, 2015 | 01:30 AM - Posted by ibnMuhammad (not verified)

As mentioned by 'nevzim', I was also wondering: in this day and age, especially on a 14nm process, why is the idle power consumption so high (with an SSD)?! Even the load figure at almost 150 watts is pretty ridiculous.

Can the author confirm whether these figures are without the monitor and without a discrete GPU?

August 6, 2015 | 06:06 AM - Posted by Anonymous (not verified)

Where can I see "WiDi"? Or 166 MHz?

August 6, 2015 | 08:35 AM - Posted by kail (not verified)

Colonel Sanders: Hey guys, we've focused on KFC ramen for years, and it's really, really good (though it still tastes like crap to real ramen lovers). Buy now, only $9.99, and get 1 piece of chicken for free.

August 6, 2015 | 10:07 AM - Posted by BBMan (not verified)

Nicely done review. But I don't see any compelling evidence to make me feel bad about my Haswell-E. At all. In fact, if I were on an older i7, I might still wait to see what's over the next hill.

It's a significant jump, but not a compelling one. I think we'll be into the next generation of hardware in maybe another 2-5 years. Call it 2020. And I'm thinking that "faster" may have to give way more to "efficient".

August 6, 2015 | 10:47 AM - Posted by Arul

looks better!

August 6, 2015 | 12:35 PM - Posted by Evil Sunbro

Every reviewer keeps putting down the integrated graphics, but according to the linked article, DirectX 12 will utilize the integrated GPU along with the discrete one. They say it will be like having another graphics card...

http://time.com/3975043/windows-10-microsoft-gamers/

'How many video cards do you have in your PC? Think carefully (I didn’t, and told Wardell, who asked me the same question, just one). Wardell reminded me most modern PCs have at least two (not counting extremely high-end systems with cards run in tandem, in which case the number would be three or more).

“Everyone forgets about the integrated graphics card on the motherboard that you’d never use for gaming if you have a dedicated video card,” says Wardell. “With DirectX 12, you can fold in that integrated card as a seamless coprocessor. The game doesn’t have to do anything special, save support DirectX 12 and have that feature enabled. As a developer I don’t have to figure out which thing goes to what card, I just turn it on and DirectX 12 takes care of it.”

Wardell notes the performance boost from pulling in the integrated video card is going to be heavily dependent on the specific combination—the performance gap between integrated video cards over the past half-decade isn’t small—but at the high end, he says it could be as significant as DirectX 12’s ability to tap the idle cores in your CPU. Add the one on top of the other and, if he’s right, the shift at a developmental level starts to sound like that rare confluence of evolutionary plus the letter ‘r’.'

August 7, 2015 | 10:58 AM - Posted by Angelo (not verified)

What's in this new CPU for video editors?

August 7, 2015 | 01:14 PM - Posted by zgradt

Dude, upgrade your Handbrake software. The new version is way better optimized for 4+ core setups.

Also, how come nobody is comparing the 6700K to the 5820K? I'd rather pay a little more and get 6 cores instead of 4.

November 26, 2015 | 11:05 AM - Posted by Anonymous (not verified)

If review sites compare the 5820K with the 6700K, no one will buy the 6700K.

August 7, 2015 | 02:57 PM - Posted by Seb777

Looking forward to the next I7 WITHOUT integrated GPU... :-) CMonnnn Intel !

August 8, 2015 | 01:42 PM - Posted by msmith (not verified)

Lol I'll just stick with this Phenom II x6 for another year. Someone get back to me when CPU performance has increased by an order of magnitude (aka when Intel gets real competition again).

August 10, 2015 | 12:25 PM - Posted by obababoy

I hope PC gamers do not buy these CPUs. I don't want to see people waste money on this platform if they are in the market for building a gaming PC. The cost of DDR4 right now is just stupid expensive, and motherboards for this are expensive too. There are no incredible gains with either, and you'd save a ton of money going with a 4790K or something similar on LGA 1150.

August 14, 2015 | 12:10 PM - Posted by Anon (not verified)

Except if you actually look at prices, DDR4-2400 4GB sticks are cheaper than their DDR3 counterparts. $30 vs $33

September 3, 2015 | 09:03 PM - Posted by DisasterArea (not verified)

I'm one of the people hanging grimly on to my old X58 platforms, of which I have two on ASUS P6T motherboards.

I'm in no rush to upgrade the GPUs from a pair of GTX 680s in SLI, and in no rush for faster storage with 2x 240GB in RAID, so the question I have is: is it REALLY worth dropping $2k to upgrade motherboard / RAM / CPU / PSU?

Last year, for AU$100 each, I replaced my i7-920 parts with hexacore Xeon units, which at 4 GHz with six cores + HT (12 threads) seem to do fine.

If I look at the IPC @ 3.5 GHz above, and reckon that a new 6700K part will be a full 1/3rd faster per core, it strikes me that I'm only behind on single-core performance, even with such an old setup! Six cores at 1.0 each vs four cores at 1.33 each, for example; even if I'm generous and say a 6700K performs at 1.5x the Xeon per core, I'm still packing the same processing punch.

Is that a fair assessment, or am I missing something?
What I'm really asking is: if you have an old X58 platform and aren't aiming for quad-SLI or something requiring more PCIe lanes than you have, isn't a $100 i7-to-Xeon upgrade going to be enough to keep you in the game?

Is the IPC increase on a quad core worth the $2000 price tag? Because I suspect you wouldn't notice the difference much!

I tend to play slightly older games - I just finished the Mass Effect series and BioShock, for example - and while I suspect the very latest titles would give my setup issues, I know it's adequate for me, as I never see the CPU taxed much.

April 8, 2016 | 02:15 AM - Posted by paullost (not verified)

Buying chips with crappy on-board GPUs will encourage them to make more. The fact is that speed is topping out because of the limits of signal propagation through the material the chips are made of; the bus speed of external GPUs and the speed limits of the expansion slots are the real limits still left - that, and doing all the binary physics, engineering the hardware to function three-dimensionally, and running the signal to and from an expansion slot all take time.
