Intel Kaby Lake Performance: Surprising Jump over Skylake

Author: Ryan Shrout
Subject: Processors
Manufacturer: Intel

General System Performance

What follows is a performance breakdown of these two notebooks, giving us an apples-to-apples comparison of Intel's 6th Generation (Skylake) and 7th Generation (Kaby Lake) mobile processors.

                  Core i7-7500U     Core i7-6500U
Architecture      Kaby Lake         Skylake
Process Tech      14nm+             14nm
Cores/Threads     2/4               2/4
Base Clock        2.7 GHz           2.5 GHz
Max Turbo Clock   3.5 GHz           3.1 GHz
Cache             4MB               4MB
TDP               15 watts          15 watts
Max. Memory       32GB              32GB
Graphics          HD Graphics 620   HD Graphics 520
Graphics Clocks   300 - 1050 MHz    300 - 1050 MHz
Tray Price        $393              $393

There are two key feature changes that affect the performance results we captured. First, the clock speed improvements enabled by the 14nm+ FinFET process technology net a 200 MHz base frequency gain and a 400 MHz jump in peak Turbo clock. Even though we know that heavy workloads won't be able to run at peak Turbo speeds indefinitely in thermally constrained environments like notebooks, those speed increases are substantial.
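Plugging the spec table's frequencies into a quick calculation shows why the Turbo bump matters more for short, bursty workloads; this is an illustrative sketch, not a figure from the review:

```python
# Illustrative calculation (not from the review): relative clock gains for
# the i7-7500U over the i7-6500U, using the spec table's frequencies.
base_old, base_new = 2.5, 2.7       # GHz, base clocks
turbo_old, turbo_new = 3.1, 3.5     # GHz, max Turbo clocks

base_gain = (base_new / base_old - 1) * 100     # what sustained loads see
turbo_gain = (turbo_new / turbo_old - 1) * 100  # what short bursts see

print(f"Base clock gain:  {base_gain:.1f}%")    # prints 8.0%
print(f"Turbo clock gain: {turbo_gain:.1f}%")   # prints 12.9%
```

The roughly 8% sustained and 13% burst headroom brackets the benchmark deltas discussed below.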

Second, the addition of HEVC (H.265) hardware accelerated decode will offer dramatic decreases in CPU utilization when watching content encoded in that format. As the HEVC codec becomes more popular, even being used in streaming media, this will equate to longer battery life, quieter systems and smooth viewing experiences for media consumption.

We are going to start with a few general performance benchmarks: SYSmark 2014 and WebXPRT.

SYSmark 2014

SYSmark® 2014 is the latest revision of the preeminent system performance benchmark series that measures and compares PC performance using real world applications. It features all-new workloads, support for Microsoft Windows 7, 8, and 8.1 (both 64-bit and 32-bit), and a new automatic system configuration manager.

SYSmark 2014 gives commercial and government IT decision makers, media, channel buyers, consultants, and system and component designers and manufacturers an objective, easy-to-use tool to evaluate PC performance across the wide range of activities that a user may encounter.

SYSmark 2014 is designed for those who want to:

  • Evaluate and compare desktop and mobile computers for purchase consideration based on system performance and application responsiveness.
  • Provide useful information to their audiences to assist in the evaluation and purchase of desktop and mobile computers.
  • Evaluate desktop and mobile computers to better optimize the performance of the system.

Unlike synthetic benchmarks, which artificially drive components to peak capacity or attempt to deduce performance using a static simulation of application behavior, SYSmark 2014 uses real applications, real user workloads, and real data sets to accurately measure how overall system performance impacts user experience.

SYSmark 2014 builds upon BAPCo’s 23-year history of building benchmarks to evaluate platform technologies. Benchmarks designed by BAPCo are the result of cooperative development between companies representing the breadth of the computing industry. They harness a consortium of knowledge to better reflect the business trends of today and tomorrow.

It has been a while since we have used SYSmark in our testing suite, but for mobile performance metrics it truly does offer one of the best user-experience-based testing suites available for professional users. Workloads for SYSmark 2014 include:

Office Productivity

The Office Productivity scenario models productivity usage including word processing, spreadsheet data manipulation, email creation/management and web browsing.

Media Creation

The Media Creation scenario models using digital photos and digital video to create, preview, and render advertisements for a fictional business.

Data/Financial Analysis

The Data/Financial Analysis scenario creates, compresses, and decompresses data to review, evaluate and forecast business expenses. Also, the performance and viability of financial investments is analyzed using past and projected performance data.


The overall performance advantage for the HP Spectre using Kaby Lake is 12.6% while the individual tests range from 10% to 16%. The largest gain is seen on the productivity side, likely due to the bursty nature of the workloads in Excel, etc. Longer workloads will see less of an advantage moving from Skylake simply because the clock speed advantage is less at the base frequencies. For business and consumer usage though, these results point to a noticeable performance increase, a surprise to many.
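To illustrate how an overall rating can land between the individual scenario gains, here is a sketch using hypothetical scenario scores (NOT the review's actual numbers) combined with a geometric mean, one common way composite benchmark ratings are built; treating SYSmark's rating as a geometric mean is an assumption here, not something the review states:

```python
import math

# Hypothetical scenario scores -- NOT the review's actual numbers.
# The per-scenario gains below (16%, 10%, 12%) mirror the 10-16% spread
# described in the text; the composite lands in between.
kaby = {"Office Productivity": 1160, "Media Creation": 1100, "Data/Financial": 1120}
skylake = {"Office Productivity": 1000, "Media Creation": 1000, "Data/Financial": 1000}

def overall(scores):
    """Composite rating as a geometric mean of scenario scores (assumed scheme)."""
    vals = list(scores.values())
    return math.prod(vals) ** (1 / len(vals))

advantage = (overall(kaby) / overall(skylake) - 1) * 100
print(f"Overall advantage: {advantage:.1f}%")  # lands between the extremes
```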


WebXPRT 2015

WebXPRT 2015 uses scenarios created to mirror the tasks you do every day to compare the performance of almost any Web-enabled device. It contains six HTML5- and JavaScript-based workloads: Photo Enhancement, Organize Album, Stock Option Pricing, Local Notes, Sales Graphs, and Explore DNA Sequencing.

It runs each of these six tests seven times:

  • Photo Enhancement: Measures the time to apply three effects (Sharpen, Emboss, and Glow) to two photos each, a set of six photos total.
  • Organize Album: Measures the time it takes to check for human faces in a set of five photos.
  • Stock Option Pricing: Measures the time to calculate financial indicators of a stock based on historical data and display the result in a dashboard.
  • Local Notes: Measures the time to store notes securely in the browser's local storage and display recent entries.
  • Sales Graphs: Measures the time to calculate and display multiple views of sales data.
  • Explore DNA Sequencing: Measures the time it takes to filter eight DNA sequences for specific characteristics.

Each test uses different combinations of HTML5 Canvas 2D and JavaScript, common elements in many Web pages, to gauge how well your device and browser work together in everyday Web browsing situations.
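The run-seven-times, lower-time-is-better methodology described above can be sketched in a few lines. This is an illustrative stand-in, not WebXPRT's actual code, and `photo_filter` is an invented proxy for its Photo Enhancement workload:

```python
import statistics
import time

def photo_filter(pixels):
    """Invented stand-in workload: a naive contrast-stretch over 0-255 values."""
    return [min(255, max(0, 2 * p - 128)) for p in pixels]

def run_workload(workload, data, iterations=7):
    """Time a workload several times and report the median (lower is better)."""
    times = []
    for _ in range(iterations):
        start = time.perf_counter()
        workload(data)
        times.append(time.perf_counter() - start)
    return statistics.median(times)  # median damps run-to-run noise

pixels = list(range(256)) * 2000  # fake "image" data
print(f"median time: {run_workload(photo_filter, pixels):.4f} s")
```

Reporting a median (or similar aggregate) over repeated runs is why browser benchmarks like this are comparatively stable across systems.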

Our testing was done with the latest version of the Microsoft Edge browser.


In our total score, the Kaby Lake based HP Spectre is about 20% faster than the previous generation model, a more significant increase than we saw in SYSmark. The individual scores (where lower times are better) show improvements ranging from just 10% (photo enhancements) all the way up to 25% (local note taking), indicating the range of possible performance advantages based on the workload. Again, I wager that many users and media would be surprised by the differences between the two most recent Intel processors in this space.

November 22, 2016 | 10:41 AM - Posted by JohnGR

Intel's 14nm+ offers lower power consumption, and in a laptop this is important. So the higher frequency of Kaby Lake, and probably the extra time it stays at boost speeds, offer all the extra performance you could expect.

Put Kaby Lake against Skylake on the desktop and you have a second Devil's Canyon. The only difference is that now you have an optimized process, not better thermal paste.

The only real advantage of Kaby Lake series is probably the better codec support, especially HEVC.

Desktop comparisons

November 22, 2016 | 02:27 PM - Posted by Anonymous (not verified)

And just in time, check out the quote "Netflix 4K streaming comes to the PC but it needs Kaby Lake CPU" and the quote "There's also the matter of hardware decoding support for 10-bit HEVC, the 4K codec used by Netflix and other streaming services". That's game over for all AMD's APUs, which can't decode 10-bit HEVC.

November 22, 2016 | 03:17 PM - Posted by JohnGR

How about software decoding? If the processor is strong enough, this is not a problem in a desktop/HTPC environment. How about discrete graphics cards? Are Pascal based cards useless compared to a Kaby Lake, or are you just super excited because of some marketing material you read that talks only about Kaby Lake (thanks to Intel's $$$)?

And it's not game over for APUs. As it is not game over for Skylake, or Broadwell, or even Haswell based PCs. It's just one more reason for people using older AMD APUs, or Intel processors, to upgrade to newer APUs or Intel CPUs, that will also have full HEVC decoding, HDR and HDMI 2.0 support, or just buy a new discrete graphics card. You know, that's how industry is moving forward, with newer models offering more features compared to older models, so people have a reason to upgrade. The same applies to laptops. People will upgrade to a laptop that will be able to do hardware decoding of HEVC. Kaby Lake, Bristol Ridge, or with a Pascal/Polaris in it.

One last thing. The planet is not a customer to Netflix. Netflix could be something important in US, but I doubt it is so much of a big deal outside US borders.

November 22, 2016 | 06:10 PM - Posted by Anonymous (not verified)

Software decoding may work for the fastest desktop CPUs, but for mobile CPUs it's not going to make the cut. For example, even mobile Skylake struggles with 10-bit HEVC despite having a hybrid HEVC decoder. Quote: "Our demanding 4K trailer (HEVC Main10, 50 Mbps, 60 fps) is handled smoothly by the i7-7500U at an average power consumption of just 3.2 Watts (CPU Package Power), while the video stuttered noticeably on the i7-6600U despite a consumption of 16.5 Watts".

And that mobile Skylake (Core i7-6600U) is faster than most of AMD's APUs, including desktop ones. For example, in both single- and multi-thread performance the Intel Core i7-6600U is still faster than AMD's latest A12-9800 (for the AM4 platform).

Installing a discrete graphics card that can support 10-bit HEVC decoding is only viable for desktop machines. As for notebooks, it depends on the CPU (preferably Kaby Lake) and mostly on its discrete graphics (preferably the latest Pascal or Polaris).

November 23, 2016 | 03:24 AM - Posted by JohnGR

The A12-9800 comes with hardware decoding for HEVC. I am not sure if that includes 4K 10-bit. It would be stupid not to, but it wouldn't be a first for AMD (Fury cards that didn't support HDMI 2.0, for example). It's really bad to see the Skylake stutter here. That low TDP limit probably doesn't help.

On the other hand, they do mention a "demanding 4K trailer", which could be a non-issue for most people out there, at least for now. They also mention that they didn't have problems with 8-bit 4K HEVC, which is going to be plenty for everyone not trying to use their laptops as media players on their ultra expensive new TVs. Anyway, I don't know if in 1-2 years support for that kind of video will be more important. If it is, it's good for both AMD and Intel. Good for their future sales.

In the case of a Pascal or Polaris GPU, it doesn't matter if you have a Kaby Lake in your laptop. The GPU will do the job anyway.

November 23, 2016 | 07:55 AM - Posted by Anonymous (not verified)

All of AMD's current APUs, including Carrizo and Bristol Ridge, still do not support 10-bit HEVC decoding (both have the same integrated GPU and UVD engine). There's a simple reason for that: "7th generation" Bristol Ridge is actually a "6th generation" Carrizo "refresh" with the DDR4 memory controller enabled (i.e. Carrizo originally had a DDR4 memory controller built in), thus Bristol Ridge only supports standard 8-bit HEVC decoding just like Carrizo (example quote: "HEVC 8-bit Main Profile Decode Only at Level 5.2").

And the majority of (ultraportable) notebooks and tablets do not have discrete GPUs. Also, having an extra GPU chip and video memory would reduce battery life. Likewise, many highly compact mini PCs also do not have discrete GPUs. Thus, there is still a large number of portable and smaller machines (using previous generation parts) that are incapable of decoding 10-bit HEVC.

Additionally, Intel's new Apollo Lake also has a 10-bit HEVC decoder. The same goes for many of the latest ARM SoCs on the market using the latest video engines, for example the ARM Mali-V550. Just like 10-bit H.264 (a.k.a. "Hi10p", which is still CPU-only decoding), 10-bit HEVC is beginning to make its presence known.

November 23, 2016 | 12:39 PM - Posted by JohnGR

I do know some of the stuff you posted. The first DDR4 AMD processors were Carrizo parts for the embedded market long ago. DDR4 was necessary for those models because they were coming with long support. Thanks for the rest of the info/reminders.

I guess 10-bit HEVC is not yet necessary for the majority of consumers, and probably won't be for some time; that's why it is not supported yet. But it would have been nice if it was included in the Bristol Ridge features, especially when those products are advertising their GPU performance. On the other hand, with Intel having hit some walls with its GPU in 3D performance, it's no surprise that it is concentrating on codec support.

November 25, 2016 | 12:53 PM - Posted by Anonymous (not verified)

You are not seeing the big picture here. If Intel starts supporting the 10-bit HEVC codec, then surely the rest of the content and VOD (video on demand) providers may start using it also. That's because Intel currently has the lion's share of the PC market, so the rest of the industry will start catering to the new hardware capabilities. Likewise, with more ARM SoCs featuring new video decoders that support 10-bit HEVC, the same effect follows (since ARM chips practically outnumber Intel chips in mobile, embedded, and consumer electronics like smart TVs, internet streaming video boxes, etc.).

November 26, 2016 | 06:22 PM - Posted by JohnGR

It's not me who is not seeing the big picture, but you who exaggerate and in a way rush to move the world to the 10-bit HEVC era. Just have a look at plain simple HEVC. Well, people are still happy with 1080p, or even 720p, H.264 movies. No one is rushing to pass to the HEVC era. We are at least 5 years away from 10-bit HEVC becoming something more than (rare) premium content for 5 or 10% of viewers.

Believe me, technology moves much slower than you probably think. Especially these last years. And don't forget that people are full of devices today (oh, poor poor mother Earth), so they will be less willing to throw them away (thank God for the environment) just for seeing movies at higher quality.

November 28, 2016 | 07:59 AM - Posted by Anonymous (not verified)

Confirmed, Netflix 4K works with Intel's Kaby Lake only. They also tested AMD's Polaris and NVIDIA's Pascal; unfortunately both failed due to the DRM support required. Since companies like Netflix are already using 10-bit HEVC, probably a few other VOD providers will follow soon as the amount of hardware capable of 10-bit HEVC decoding with DRM support increases.

November 27, 2016 | 10:43 AM - Posted by Anonymous (not verified)

The market doesn't upgrade like it used to. People commonly keep their PCs for 4+ years now. With no rush to Kaby Lake based PCs, since they're not offering much in the way of a performance increase over Skylake, much less Haswell, AMD has nothing to worry about.

Netflix isn't going to alienate all the existing Intel user base just to please content providers and Intel.

November 28, 2016 | 08:22 AM - Posted by Anonymous (not verified)

Introducing new hardware features like 10-bit HEVC is meant to drive sales by encouraging upgrades of current hardware. That said, most users already on AMD's low end APUs (on the dead-end FM2+ platform) may very likely be enticed to move to a newer platform (one that is more futureproof). The new 10-bit HEVC support will also attract HTPC builders, as previous generation hardware will soon become the least likely choice (including the almost forgotten AM1 platform and the FM2+ platform). As JPR's market watch reports have shown throughout the rest of 2016 (Q-to-Q -16.8% in Q1, -22.2% in Q2 and -10% in Q3), the number of desktop APUs shipped has declined significantly.

November 28, 2016 | 11:22 AM - Posted by Anonymous (not verified)

AMD's market share is so low right now that Intel isn't interested in trying to take it away from them with features like on-die HEVC hardware decoders. Anyone holding on to AMD CPUs/APUs at this point probably isn't doing it for very rational reasons anyway.

Intel is essentially competing with itself right now. That may change if Zen turns out to be as good as rumored but that is a wait n' see situation.

November 23, 2016 | 04:23 AM - Posted by Anonymous (not verified)

The PS4 has almost no CPU and can do Netflix 4K.

November 23, 2016 | 08:08 AM - Posted by Anonymous (not verified)

Netflix and Amazon 4K videos require the H.265 (a.k.a. "HEVC") codec; however, the current PS4 console supports the H.264 codec at HD resolutions only. Only the newer, upgraded PS4 Pro will be able to support 4K Netflix.

November 23, 2016 | 11:28 AM - Posted by Anonymous (not verified)

That is a total non-issue. How many AMD APU based notebooks will ever be connected to a 4K screen? Probably around zero, at least until we get a high end Zen based laptop. Also, not that many 4K computer screens are even capable of displaying deep color. I have a very expensive Dell UltraSharp desktop display with a 10-bit capable panel. Most of the displays on laptops are essentially 6-bit TN panels, or maybe 8-bit if you get an IPS touch screen. Looking at a lot of the laptops, even the supposed high-end option is often a 4K screen at only 72% NTSC or so. That isn't going to show off 10-bit HEVC very well. Most 4K TVs come with built-in Netflix capabilities that can be used to get the full quality without going over HDMI anyway.

November 25, 2016 | 12:41 PM - Posted by Anonymous (not verified)

Certain notebooks have 4K panels, for example the Dell XPS 15 (quote: "15.6" 4K Ultra HD (3840 x 2160) InfinityEdge touch"). Most desktop PC users will likely need a new discrete GPU that supports 10-bit HEVC. This also impacts HTPC builders, who must consider the hardware required if they want to use 10-bit HEVC. However, those already using compact mini PCs with only an integrated GPU will be impacted the most, as a full motherboard change will be required to enable 10-bit HEVC support. Likewise, notebooks using previous generation chips will be left out (and thus have to be replaced with new generation notebooks)...

November 27, 2016 | 10:48 AM - Posted by Anonymous (not verified)

Certain isn't commonplace.

4K monitors are still fairly niche and will be for a while yet. Virtually no one is going to upgrade their PC just to watch Netflix or Prime. They'll shrug and just use 1080p content and let their TV upscale the stream into 4K-ish, which they probably won't notice anyway unless they have a very large (70"+) TV or sit very close to it.

December 2, 2016 | 01:17 AM - Posted by Anonymous (not verified)

That's game over for all Intel-CPUs except Kaby Lake, too... if one follows your ridiculous logic.

November 22, 2016 | 11:13 AM - Posted by Anonymous (not verified)

Really, rendering workloads on a CPU's cores are only good for benchmarking purposes, as rendering so thoroughly taxes a CPU's hardware. But to be realistic, CPUs are a joke at rendering, and even Blender 3D has its Cycles renderer for rendering done on the GPU, much quicker than any CPU cores can manage. The one test that I really want to see done with any laptop APU/SoC is a simple high polygon count model/scene in Blender's (or other 3D software's) editor mode, where the UI/interface can sometimes become bogged down handling high polygon mesh models in 3D edit mode.

Intel’s integrated graphics appears to not have enough parallel shader resources relative to AMD’s or Nvidia’s GPU SKUs to allow for high polygon mesh models/scenes to be worked on in 3D edit mode and have the interface remain responsive. Intel’s graphics may be sufficient for the low polygon mesh models used for gaming but in Blender 3D’s Open GL based/rendered editing mode Intel’s SOC integrated graphics does not have enough shader cores to effectively handle high polygon mesh models while allowing the 3D editor software’s UI/Interface to maintain a fluid motion. I have had some high polygon count mesh models so bogged down under Blender 3D’s 3d editing that the entire mesh model and interface became so unresponsive under Intel’s integrated graphics that I could not get any productive work done. I would move the mouse and the model in Blender’s 3D edit mode took more than 5 seconds to respond using Intel’s Integrated graphics.

I will always need a discrete mobile GPU if the laptop comes with Intel's integrated graphics. Intel's CPUs may be good when paired with AMD's or Nvidia's discrete mobile GPUs in the laptop form factor, but any serious laptop graphics workload requires either AMD's or Nvidia's discrete mobile graphics to handle the really high polygon mesh models that are created for animation and other 3D non-gaming workloads.

I eagerly await AMD’s Zen/Vega APUs and not so much for the Zen core’s IPC metric but for that GPU shader count metric that will allow me to edit my high polygon count mesh models. Some of the rumored AMD Zen/Vega Interposer based APUs with 16 or more CUs on the APU’s GPU/Graphics will allow me to forgo having to have any discrete mobile GPU while allowing me to work with some very detailed high polygon mesh models in Blender 3D’s 3D editing mode. Getting 16 Vega compute units on an AMD Laptop APU SKU is going to be very popular for those that want to do some serious mesh Modeling/Editing on a laptop while having the UI/Interface remain very responsive. For rendering no one wants the CPU for that, as GPU’s are the way to go for rendering.

November 22, 2016 | 11:17 AM - Posted by Anonymous (not verified)

In terms of performance, it seems that it's due to the higher boost clock rather than architectural improvements. IPC improvements seem to be low, as usual for Intel.

What is really impressive is power consumption. Having higher clocks and improving consumption by this much is very good.

November 22, 2016 | 11:31 AM - Posted by Anonymous (not verified)

That probably has more to do with fab process node improvements, but for some laptop SKUs better power usage is a good thing, to a point. I'd rather have more processor power on my laptops than battery life, as even on the go I use my laptops plugged in mostly. The main limiting factor is graphics, as Intel is not improving its graphics much generation to generation, and Intel refuses to use its top tier graphics in more mainstream, more affordable laptop SKUs.

November 23, 2016 | 09:33 AM - Posted by Albert89 (not verified)

Intel are saying it's a 1% increase in IPC. But if compared at Skylake's base clock, then it's exactly the same. So Intel's Kaby Lake is returning to the days of the 'rebrand', at a ripoff to the customer.

November 24, 2016 | 05:51 AM - Posted by Anonymous (not verified)

Yeah, 12-20% better performance and 2 more hours of battery life for the same price. What a ripoff! How dare they!

November 27, 2016 | 10:49 AM - Posted by Anonymous (not verified)

It's more like 10% and maaaaybe an hour. And at Intel's prices, yes, how dare they.

November 22, 2016 | 11:58 AM - Posted by Buyers

A quick thing i noticed is on the Blender Graph:
1) Your BMW plot was labeled "BWM"
2) The time measurements are shown as seconds on the graph, but stated as minutes in the text below explaining the graph.

November 22, 2016 | 12:07 PM - Posted by Ryan Shrout

Ha, thanks for catching it. Will update soon!

November 22, 2016 | 02:31 PM - Posted by Anonymous (not verified)

There aren't any single thread tests in this review. I would like to see single thread results, to see whether there are changes to the IPC besides the clock increase and the Speed Shift thingy.

November 22, 2016 | 12:05 PM - Posted by Shambles (not verified)

Spoiler question for the upcoming review. Do you like the XPS 13 or the Spectre better?

November 22, 2016 | 12:07 PM - Posted by Ryan Shrout

Hmm, tough to say. I'd like to get the new Kaby Lake XPS 13 in first to decide. Connectivity might have a lot to do with it.

November 22, 2016 | 12:19 PM - Posted by Michael Rand (not verified)

So no need to upgrade from my 6700K then?

November 22, 2016 | 12:31 PM - Posted by Ryan Shrout

This is only looking at the notebook processors. Desktop situation could be very different - we'll know more in a few months.

November 22, 2016 | 06:41 PM - Posted by John H (not verified)

Jan-Feb we'll have the info on 7700K, Z270 (extra PCI-E lanes, and Optane support -- we'll know what that actually means), as well as AMD Zen. Short time to wait for any upgrades and see what is really out there.

November 22, 2016 | 01:26 PM - Posted by remc86007

Why don't they put a 16:10 screen on that HP and get rid of some of the ugly bottom bezel? Additional vertical screen real estate is very useful for getting work done.

November 22, 2016 | 02:14 PM - Posted by Paul Thomsen

Is the Spectre x360 with Kaby Lake using Connected Standby when sleeping and getting to DRIPS successfully?

Those were a couple of the biggest issues with Skylake. Microsoft struggled long and hard with them, releasing many firmware updates to try to get them right on the Surface series. Dell's XPS 13's didn't even try to support them with Skylake though predecessor models did.

(Connected Standby is important to give modern devices a 'smartphone-like' experience and DRIPS is important to maximize battery life while the device is on standby. I hope people are aware of them and keen for them, but that doesn't always seem to be the case.)


November 22, 2016 | 03:13 PM - Posted by Lucas B. (not verified)

Looks impressive on the battery life side of things!
Can't wait to see that in a surface pro 4, 11h maybe.
Surface book 2 probably 15h!!
And with the usual 5 to 10% gain nice.
Thanks for the article!

November 23, 2016 | 07:32 PM - Posted by Anonymous (not verified)

Don't get too excited, Microsoft might decide to use the added battery life to put in a smaller battery and make the Surface Pro 5 thinner. Because as we all know, thinner is what everyone craves. Apparently.

They did it with the SP4: the Skylake CPU offered better battery life, so they put in a smaller battery, and the overall battery life turned out slightly worse than the older model's. That's the only issue I have with my SP4 i5; the battery life is way too short, 3-4 hours is typical for my usage. I get battery anxiety when the needle dips below 80%, so this is bad for me.

Hopefully someone will explain to them that we don't REALLY crave ever thinner devices before they finish designing the SP5, there's still hope...

November 22, 2016 | 04:57 PM - Posted by Anonymous (not verified)

You don't do a clock-for-clock analysis, so it's a biased methodology, and a real CPU comparison can't be taken into consideration as it's not clock for clock.

November 22, 2016 | 06:44 PM - Posted by John H (not verified)

Ryan - what RAM (speed) was used in each of these systems?

November 22, 2016 | 06:52 PM - Posted by Thorburn (not verified)

May I suggest that the gaming performance difference you're seeing here is actually LARGELY down to something other than the CPU?

The specification for the Skylake Spectre 13 you tested lists it as having a single 8GB DIMM, whereas the Kaby Lake system has 8GB soldered down, likely in a dual channel configuration.

In CPU focused tests the 200MHz clock speed advantage is playing out, but in the gaming tests you're benefitting from the greater memory bandwidth on the Kaby Lake system. In my testing with the i3-7100U, the difference between single and dual channel DDR4-2133 is about 30%, whereas the gap between the i3-6100U and i3-7100U in 3DMark Sky Diver was just 6%, predominantly because of the CPU focused physics portion of the test.
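For context, the single vs. dual channel gap described here lines up with theoretical peak memory bandwidth. A back-of-the-envelope calculation (illustrative only, not from the comment):

```python
# Back-of-the-envelope peak bandwidth for DDR4-2133 (illustrative):
# each 64-bit channel moves 8 bytes per transfer at 2133 MT/s.
transfers_per_sec = 2133e6  # 2133 MT/s
bytes_per_transfer = 8      # 64-bit channel width

single_channel = transfers_per_sec * bytes_per_transfer / 1e9  # GB/s
dual_channel = 2 * single_channel

print(f"single channel: {single_channel:.1f} GB/s")  # prints 17.1 GB/s
print(f"dual channel:   {dual_channel:.1f} GB/s")    # prints 34.1 GB/s
```

Doubling the channels doubles theoretical bandwidth, which is why a bandwidth-starved integrated GPU gains so much more from a second DIMM than the CPU cores do.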

November 23, 2016 | 04:05 AM - Posted by JohnGR

Lower power consumption means that both the CPU and the GPU can run for more time at higher frequencies, if I am not wrong.

Also, that 30% from your tests seems pretty high, considering we are talking about an Intel GPU. I wouldn't expect that GPU to be able to gain that much from more bandwidth. I consider it too slow to gain that kind of performance from the extra bandwidth.

November 23, 2016 | 04:06 PM - Posted by Anonymous (not verified)

Bandwidth is a major bottleneck for integrated GPUs. I wouldn't be surprised that dual channel vs. single channel could make such a difference. If they are going to solder the memory on the board, it would be nice if they would use GDDR5 instead of just slow DDR3 or DDR4, but 8 GB of GDDR5 would burn a huge amount of power. It wouldn't be worth it unless a powerful gaming APU is used. But I guess we will have to wait for some HBM derivative to make it to mobile APUs.

November 23, 2016 | 04:54 PM - Posted by JohnGR

I have seen in the past an article where AMD integrated GPUs were gaining from faster memory, but Intel integrated GPUs were not. I guess Intel has improved in that.

November 24, 2016 | 06:19 PM - Posted by Thorburn (not verified)

Intel have improved their GPUs a LOT in recent generations.

November 25, 2016 | 03:08 AM - Posted by JohnGR

When you are at the bottom you have plenty of room for improvement. And just to be clear that I am not bashing Intel here: take, for example, the leap from Excavator to Zen.

November 25, 2016 | 01:11 PM - Posted by Anonymous (not verified)

As the earlier poster mentioned, Intel has greatly improved its integrated graphics performance, especially in the lower wattage areas. For example, compare Intel's new lowly Atom E3900 with AMD's latest A9-9410. That's low power "Atom class" versus ultraportable "notebook class". Another example, quote: "All in all, the HD Graphics 620 is roughly on par with a dedicated Nvidia GeForce 920M or AMD Radeon R7 M440".

November 26, 2016 | 06:41 PM - Posted by JohnGR

I don't really trust GFXBench for GPU tests. Show me game performance and then we have something to discuss. Not to mention that the A9 has a configurable TDP, and OEMs love to destroy AMD APU performance with single channel memory and low TDP limits.

As for the 7500U, it's not exactly a cheap CPU. And those two discrete GPUs from AMD and Nvidia use a combination of DDR3 with a 64-bit bus, making them equivalent to an integrated GPU anyway because of the low bandwidth, no matter how many shaders/cores they have. Put that Intel GPU next to a dedicated GPU with at least 30-40 GB/sec of bandwidth and things will look completely different.

November 28, 2016 | 08:32 AM - Posted by Anonymous (not verified)

You can compare the frames per second numbers for Intel's Kaby Lake Core i7-7500U here with AMD's Bristol Ridge A10-9600P here. Do note that the reviewed AMD Bristol Ridge powered notebook is using dual channel memory. Starting off with Diablo III, you can already see that Intel's Kaby Lake is way ahead, with 105.9 fps trouncing AMD Bristol Ridge's 50.9 fps. Even at medium to high settings, the pattern is the same. The trend continues with other games like Crysis 3, Bioshock Infinite, Metro Last Light, etc.

November 23, 2016 | 06:26 AM - Posted by John H (not verified)

The Skylake version seems to only be purchasable with 8 GB, but it looks like you can get the Kaby Lake with 8 or 16 GB. RAM speed is DDR3L-1866.

I can't determine whether either is single vs dual channel though.

November 22, 2016 | 06:54 PM - Posted by Thorburn (not verified)

A couple videos to demonstrate where I believe your results may be flawed:

i3-6100U and i3-7100U - 3DMark Sky Diver:
i3-7100U 1 x 8GB DDR4-2133 vs. 2 x 4GB DDR4-2133:

November 22, 2016 | 08:23 PM - Posted by Anonymous (not verified)

Wow, 60fps in Overwatch at medium is simply amazing for a non-dedicated GPU. I literally get less on my desktop with a GeForce GTX 460 sucking nearly 200W. Bravo.

November 22, 2016 | 11:37 PM - Posted by Anonymous (not verified)

Looked at the new MacBook Pros today; they should be called the airbook pro anorexic, with no 32GB RAM options. Again the non Radeon Pro SKUs do not have higher end Intel graphics options! Apple looks to be reducing the number of laptop SKUs, and now the MacBook Pro is not so pro anymore. Apple appears to be more interested in its brand than in any sort of functionality for professional graphics usage, so now there is the airbook not-so-pro, full-on anorexic, form over functionality, with the hide-the-escape-key fun bar! It's not looking good for Apple laptop refreshes; if they get any thinner they will disappear. RIP MacBook Pro!

November 23, 2016 | 04:27 PM - Posted by Anonymous (not verified)

They sacrifice too much for lower power consumption. I wouldn't buy most of their previous models due to the repair and upgrade issues; I want to be able to easily swap out storage at a minimum. My ancient MacBook Pro required removal of about two dozen screws to get to the hard drive, and it also had delicate tabs that got broken on the second hard drive upgrade, making the touchpad unusable. Not a good design. Most of the stuff Apple does is not needed to make the devices thinner, at least for laptops.

The newer ones would probably require removal of a bunch of screws and some work with a heat gun because of the glue. On the ones with the touch-sensitive strip, the SSD is actually soldered onto the board; this is unacceptable in my opinion. I guess the model without the Touch Bar doesn't have it soldered, but it uses a proprietary SSD. Definitely a case of planned failure. With heavy use, or even software bugs that hit the SSD hard, these things could be due for a full system-board replacement in much less time than I usually keep my devices. I guess they are hoping to get people onto the cell-phone model of upgrades: get the latest and greatest laptop every two years and pass the current one off on someone else, so the planned failure mode isn't your problem. I wouldn't recommend buying a used SSD, or a used SSD with a laptop wrapped around it. Good luck with the resale value.

November 23, 2016 | 08:23 PM - Posted by Anonymous (not verified)

I do not intend to ever buy an Apple now, and I guess I will have to keep my old Ivy Bridge Core i7 HP ProBook a little longer, but the AMD 7650M GPU is getting out of date and the laptop's keyboard is getting a little rough around the edges. Apple's MacBook SKUs are just so much overkill with that overly thin design. I'm hoping that AMD's Zen/Vega APUs will get some design wins, but I'll not be wanting Windows 10, so maybe there will be some Zen/Vega Linux-based laptops available before 2020.

It's mostly Intel/Nvidia offerings from the Linux-based OEM laptop makers currently, but I have hopes for some Zen/Vega APU design wins for AMD before 2020, and maybe some of the Linux-based laptop OEMs will get on board with Zen/Vega APU laptops in the future. I'm having a hard time finding any laptops with discrete mobile AMD GPUs newer than GCN 1.0/GCN gen 1; even Micro Center does not appear to carry any laptops other than ones with AMD's first-generation GCN discrete mobile GPUs.

It's really going to take Zen/Vega to get AMD any larger laptop APU design wins. So maybe by this time next year there will be Zen/Vega options from HP/Dell/others, and HP does have some ProBook models with Linux OS options, or at least it did for my model of ProBook when it was made. I have at least 3 years before Windows 7 loses all support, so I'll just rely on the older laptops I still have that came with Windows 7.

November 23, 2016 | 06:33 AM - Posted by Anonymous (not verified)

What good is 4K HEVC 10-bit support without HDCP 2.2?

November 23, 2016 | 11:06 AM - Posted by vox (not verified)

Comparing the two Spectre laptops, was there also a change from SATA to NVMe as the SSD bus?

November 24, 2016 | 06:19 PM - Posted by Anonymous (not verified)

I think you've all missed the point about Kaby Lake and Netflix.

Netflix will allow 4K on Windows because Kaby Lake supports HDCP 2.2. HEVC, 10-bit or not, it's all about DRM.

November 25, 2016 | 06:47 PM - Posted by Anonymous (not verified)

Hmm, I feel an upgrade coming on, from an i3-6320 to an i5-7600K. What do you guys think?

November 26, 2016 | 03:34 AM - Posted by cruiservodka1 (not verified)

Hi everyone! I came across your conversation while researching what laptop to buy. I am not good at specs and technical stuff, and I've read some of your comments here.

I am torn between an i5 (6th Gen) with NVIDIA GeForce 940MX (2 GB) and an AMD A9-9410 with Radeon R5 M430 (2 GB).

These are the actual notebooks I am looking at:

I'm choosing between the three. Basically, it will be my 2nd laptop, as I have a MacBook and I need to run some Windows programs for my studies, and I don't wanna run Boot Camp all the time. Thank you so much for all your advice.

November 26, 2016 | 06:06 AM - Posted by Anonymous (not verified)

Go for the HP. An SSD is a must in 2016.

November 27, 2016 | 02:23 PM - Posted by JohnGR

While the Intel will offer faster CPU performance and better efficiency (the laptop will last longer on battery), the HP offers an SSD and a 1080p display at almost the same price.

I think you should go with the HP 15.

If you decide to go with an Intel option, you should consider at least a model with a 1080p display, even if that means paying more. Don't buy a 1366x768 laptop.

Ignore the Acer and any laptop with a dual-core CPU, especially from AMD. Quad-core AMD APUs are fine.

November 27, 2016 | 03:51 AM - Posted by cruiservodka1 (not verified)

Hi. Thank you for the reply.

The HP has these specs: AMD Quad-Core A10-9600P, 8GB, 256GB SSD, AMD Radeon R5, FreeDOS.

Does it beat the 6th-generation i5 in the Asus F556UQ-XO528D / 15.6" HD / Intel Core i5-6198DU / 8GB RAM / 500GB HDD / GeForce 940MX?

December 26, 2016 | 05:06 PM - Posted by danwat1234 (not verified)

The 7700HQ laptop CPU has a 45 W TDP and is clocked at 2.8 GHz base / 3.8 GHz Turbo. Pretty much the same clock speed as a high-end Haswell laptop CPU... maybe 15% faster; whoopee for a three-generation gap.
Coffee Lake might actually bring something interesting to the table, but I'm really not holding my breath.
Perhaps 6-core laptop CPUs eventually, perhaps another 10% more clock-for-clock performance, but that's about it.
AMD might be able to match it. They'd have to release laptop CPUs with more than a 35 W TDP for once in the last 5 years to be remotely competitive with Intel's high-end laptop CPUs.
