
Skylake vs. Sandy Bridge: Discrete GPU Showdown

Manufacturer: Intel


Our Intel Skylake launch coverage is intense! Make sure you check out all of the stories and videos that interest you!

Today marks the release of Intel's newest CPU architecture, codenamed Skylake. I already posted my full review of the Core i7-6700K processor, so if you are looking for CPU performance and specification details on that part, you should start there. What we are looking at in this story is the answer to a very simple, but also very important, question:

Is it time for gamers using Sandy Bridge systems to finally bite the bullet and upgrade?

I think you'll find the answer depends on a few things, including your gaming resolution and your appetite for multi-GPU configurations, but even I was surprised by the differences I saw in testing.


Our testing scenario was quite simple: compare the gaming performance of an Intel Core i7-6700K processor and Z170 motherboard, running both a single GTX 980 and a pair of GTX 980s in SLI, against an Intel Core i7-2600K and Z68 motherboard using the same GPUs. I installed both the latest NVIDIA GeForce drivers and the latest Intel system drivers on each platform.

                Skylake System              Sandy Bridge System
Processor       Intel Core i7-6700K         Intel Core i7-2600K
Motherboard     ASUS Z170-Deluxe            Gigabyte Z68-UD3H B3
Memory          16GB DDR4-2133              8GB DDR3-1600
Graphics Card   1x GeForce GTX 980          1x GeForce GTX 980
                2x GeForce GTX 980 (SLI)    2x GeForce GTX 980 (SLI)
OS              Windows 8.1                 Windows 8.1

Our testing methodology follows our Frame Rating system, which uses a capture-based system to measure frame times at the screen (rather than trusting the software's interpretation).

If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we use a secondary PC to capture the output from the tested graphics card directly, then post-process the resulting video to determine frame rates, frame times, frame variance, and much more.

This amount of data can be pretty confusing if you attempt to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options. So please, read up on the full discussion of our Frame Rating methods before moving forward!

While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we generate with additional code of our own.

If you need some more background on how we evaluate gaming performance on PCs, just check out my most recent GPU review for a full breakdown.
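If you're curious what that extra code does, here is a minimal sketch (in Python, with illustrative names; our actual FCAT post-processing scripts are more involved) of how per-frame display times pulled from the captured video become the average FPS, FPS-by-percentile, and frame variance numbers you'll see in the graphs below:

    import numpy as np

    def frame_metrics(frame_times_ms):
        """Illustrative post-processing over per-frame display times (ms)
        extracted from captured video -- not FRAPS-reported numbers."""
        ft = np.asarray(frame_times_ms, dtype=float)

        # Average FPS over the run: total frames / total seconds.
        avg_fps = 1000.0 * len(ft) / ft.sum()

        # FPS by percentile: convert each percentile of frame time into an
        # instantaneous FPS. The slowest frames live at the high percentiles,
        # which is where a curve "tails off".
        pct = np.arange(1, 100)
        fps_by_pct = 1000.0 / np.percentile(ft, pct)

        # Frame variance: how much consecutive frame times differ; spikes
        # here read as stutter even when the average FPS looks fine.
        variance = np.abs(np.diff(ft))

        return avg_fps, pct, fps_by_pct, variance

    # Example: steady 60 FPS pacing with 5% of frames hitching to 50 ms
    # barely dents the average but craters the percentile tail.
    times = [16.7] * 95 + [50.0] * 5
    avg, pct, curve, var = frame_metrics(times)
    print(f"avg {avg:.1f} FPS, 99th percentile {curve[-1]:.1f} FPS, worst swing {var.max():.1f} ms")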

I only had time to test four different PC titles:

  • Bioshock Infinite
  • Grand Theft Auto V
  • GRID 2
  • Metro: Last Light

Continue reading our look at discrete GPU scaling on Skylake compared to Sandy Bridge!

Bioshock Infinite (DirectX 11)



BioShock Infinite is a first-person shooter like you’ve never seen. Just ask the judges from E3 2011, where the Irrational Games title won over 85 editorial awards, including the Game Critics Awards’ Best of Show. Set in 1912, players assume the role of former Pinkerton agent Booker DeWitt, sent to the flying city of Columbia on a rescue mission. His target? Elizabeth, imprisoned since childhood. During their daring escape, Booker and Elizabeth form a powerful bond -- one that lets Booker augment his own abilities with her world-altering control over the environment. Together, they fight from high-speed Sky-Lines, in the streets and houses of Columbia, on giant zeppelins, and in the clouds, all while learning to harness an expanding arsenal of weapons and abilities, and immersing players in a story that is not only steeped in profound thrills and surprises, but also invests its characters with what Game Informer called “An amazing experience from beginning to end."

Our Settings for Bioshock Infinite


Not only is the GTX 980 8% faster on the Core i7-6700K than on the 2600K, you can also see some differences in frame time consistency. Look at the FPS by Percentile graph, where the orange line representing Sandy Bridge tails off sooner than the black line representing Skylake.


At 2560x1440, all of this deviation between Skylake and Sandy Bridge is essentially gone, thanks to the added weight the extra pixels place on the GPU. This isn't entirely surprising, but it does hint that gamers with older systems driving higher resolution screens may not need to upgrade just yet.

GTX 980 SLI Results - 2560x1440


Well, this is interesting: as we dive into the world of SLI and the hiccups of multi-GPU, even at 2560x1440 things start to look better for Skylake. In this case the Core i7-6700K delivers about a 10% higher average frame rate than Sandy Bridge, along with much tighter frame time consistency.

Let's see how the other games in our testing fare.


August 5, 2015 | 08:11 AM - Posted by Mike Stringfellow (not verified)

Ryan,

In your opinion, would this make any difference at 4K? I'm running Titan X in SLI at 4K, with a 4K G-Sync monitor smoothing the whole experience out. I am running a Sandy Bridge-E 3930K. Would upgrading to Skylake make any difference in my usage scenario?

August 5, 2015 | 08:39 AM - Posted by Ryan Shrout

Maybe. At 4K I would lean towards the CPU meaning even less, but since you are running SLI, our 2560x1440 results show that it does in fact matter!

August 18, 2015 | 08:58 AM - Posted by Anonymous (not verified)

My advice would NOT be to change your CPU. Instead, for games which drop below 60 FPS, drop the resolution to 2560x1440, which not only may look IDENTICAL or very close, but will also boost the frame rate significantly.

I've done a lot of testing at 4K and have found it extremely difficult to find ANY games which look better to me than at 1440p, including CIV5, which has really small text.

(and going forward we'll see a slow shift to DX12 which will likely eliminate much of the need to upgrade your current CPU)

That's a friend's PC. For myself, I won't be considering 4K since I'll want a high refresh rate such as the Acer Predator 1440p GSYNC monitor.

Other:
You may want to investigate how to force a frame rate cap for games so you always stay inside the variable refresh window. I'm not sure how that works myself, though, since I've had no access to a G-Sync monitor. I've heard people say they could force (globally?) something like 135Hz on a 144Hz panel, as apparently it had to be slightly below the max refresh.
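The concept behind such a cap is simple, even though real limiters (RTSS or driver-level tools) operate at a much lower level with far better timers than this. A rough Python sketch of the idea, using the 135/144 figures mentioned above:

    import time

    CAP_HZ = 135.0          # just below the panel's 144Hz max refresh
    FRAME_BUDGET = 1.0 / CAP_HZ

    def limited_loop(render_frame):
        """Crude software frame limiter: never present frames faster than
        CAP_HZ, so a variable-refresh display stays inside its G-Sync window."""
        while True:
            start = time.perf_counter()
            render_frame()
            # Sleep off whatever is left of this frame's time budget.
            leftover = FRAME_BUDGET - (time.perf_counter() - start)
            if leftover > 0:
                time.sleep(leftover)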

January 1, 2016 | 07:52 AM - Posted by Huebie (not verified)

Ryan? I know this is an out-of-the-box comparison, but at that price point you can compare a 3930K with the 6700K. There you can run PCIe 3.0 (with a well-known tiny tool) and clock the CPU at 4 / 4.2 GHz. Now your frame times will be a lot better than with the old-fashioned 2600K (clocking at 3.4 / 3.8 GHz). There is no need for a sidegrade from SB-E / IVY-E to Skylake. Would you please make a benchmark like this???

August 5, 2015 | 08:12 AM - Posted by John H (not verified)

Thanks for this Ryan - I have a 970 SLI / 2600K setup and was curious what a 6700K would do. PCI Express 2.0 vs 3.0 is one difference that might matter here. Also, with the Oculus Rift in mind, it looks like Skylake gives a bit of a boost to minimum FPS - which is going to be key for the right experience there. I'd like to see more investigation of minimum FPS as VR takes off next year.

As for making a case vs. the 2600K - other than the I/O portion of Skylake, and the larger memory capacity possible for high-end use cases with DDR4, I don't see a compelling case for the 6700K over the 2600K. Some of the difference is clock speed (4.0/4.2 vs 3.4/3.8) - and an overclocked Sandy Bridge can easily match Skylake's clocks on air.

In any case, I was really happy to see this 980 SLI comparison of the two chips - thanks PCPer!

August 5, 2015 | 08:53 AM - Posted by jj_white (not verified)

Am I missing the overclocking? Why not compare overclocked performance? Sandy Bridge clocks way higher than Skylake from what I've heard.
Also, I'm guessing not a lot of people are running the 2600K stock; hell, I even run my non-K 2600 overclocked, and I never run into a situation where I feel I am lacking performance.
Was this a deliberate choice?

August 5, 2015 | 01:58 PM - Posted by Anonymous (not verified)

Word. All these new reviews are comparing unlocked processors at stock clocks. We're not idiots. I bet there is little to no performance increase if they compared the CPUs at a minimum of 4.5GHz, which is what my 2500K is running at.

August 5, 2015 | 04:03 PM - Posted by Senpai (not verified)

The better test would be to do it at the same speeds to make it a more apples-to-apples comparison - http://www.pcper.com/reviews/Processors/Intel-Core-i7-6700K-Review-Skyla...

But that review does not show what it means for gaming.

The point of this article was to show the out-of-the-box experience, so readers can see what they would get from a potential upgrade.

I am sure Ryan and company will do a clock-for-clock comparison soon. I'm quite sure they had very limited time under their NDAs to get these stories out on time.

August 9, 2015 | 10:26 PM - Posted by Anonymous (not verified)

But no one runs their 2600K at stock clocks, which makes the comparison pointless as far as customers being able to tell what kind of upgrade they'd be getting.

August 29, 2015 | 04:50 AM - Posted by Divine (not verified)

Why would a "clock for clock" comparison matter? The video/article is clearly trying to say that it's time to upgrade from Sandy Bridge to Skylake, but if a Sandy Bridge processor can OC to 4.5-4.8 GHz and perform at a much higher level, it makes the whole upgrade debate presented here irrelevant.

August 5, 2015 | 02:55 PM - Posted by A Zebra (not verified)

That's what I need to know: whether it's worth upgrading from my 4.5GHz 2500K - whether there is much performance advantage in gaming between a well-OC'd 2500K or 2600K and the new Skylake offerings.

August 5, 2015 | 08:58 AM - Posted by MAXLD (not verified)

Still don't see a definite reason to upgrade yet if you have a Sandy. Much better to wait for Zen (plus a few proper DX12 games) and then decide. If it proves worthy, go for Zen; if it doesn't, then consider Skylake at better prices and possible discounts by then (same for DDR4 kits).

August 5, 2015 | 09:19 AM - Posted by Anonymous (not verified)

Did you see over at Tom's Hardware? They ran some of their benchmarks on Windows 10 and it gave Skylake a 40% boost over Windows 8.1. Not sure what happened there.

Is PCPer going to do a review of W10 soon?

August 18, 2015 | 09:06 AM - Posted by Anonymous (not verified)

There really should be NEGLIGIBLE performance differences between Windows 7, 8, and 10, except for some niche cases with software optimization issues.

I remember Windows 8 working better than Windows 7 for Battlefield 4 with Intel processors due to a core parking issue, which appeared to be TRUE.

Anyway, if there is any truth to the performance difference, it should NOT be representative of most gaming or other application scenarios; otherwise there is a serious bug or other issue not yet sorted out on the new CPU.

August 20, 2015 | 02:35 PM - Posted by Anonymous (not verified)

I ran all 4 (including 8.0) and tested each with a myriad of games. 7 was great, but had lots of overhead. As time went on, that overhead shrank, and now 10 is doing marvelously - until a recent update. Intel no longer supports Win10 on the Sandy Bridge cores. http://www.intel.com/support/graphics/sb/CS-034343.htm

August 5, 2015 | 09:30 AM - Posted by Anonymous (not verified)

Run BF4 and see if you get a memory leak with the 980s on the new drivers for Win 10. I'm still getting a memory leak with my 780s in SLI. Disabling SLI works perfectly.

August 7, 2015 | 11:03 AM - Posted by Anonymous (not verified)

I was playing BF4 on Win 10 with SLI 980s, using the drivers Win 10 installed during setup (353.62), and don't seem to have any memory leak issues. Game settings were ultra @ 1440p.

Played for about 2.5 hours and haven't rebooted since, and everything still looks fine.

August 5, 2015 | 09:55 AM - Posted by Garry (not verified)

It'd be interesting to see how close the figures are with the same amount of RAM.

I have the 2600K, and will go for a 980 Ti. My motherboard is almost the same; I think mine is the UD5H. The difference between the 980 and the 980 Ti is far less than the upgrade cost of RAM+CPU+MB.

With 16GB of 1600MHz RAM and an 850 Pro SSD, it's already pretty quick in Win7/64 and Linux Mint. It currently only has a 560 Ti card, but I'm waiting for 1440p G-Sync screens to become affordable...

I'm not sold on Win8/10 whatsoever. Ugly is a fair call IMHO.

August 5, 2015 | 09:59 AM - Posted by Anonymous (not verified)

Ryan,

I'm guessing you were running the 2600K at its stock speed of 3.4GHz? Since they can OC to the same speed, it's really not a fair comparison.

August 5, 2015 | 10:27 AM - Posted by RedShirt (not verified)

GJ on your Skylake coverage. IMO PCPER is by far the best tech site out there.

Who runs a 2500K/2600K stock? It would have been interesting to see the 2600K at 4.5-5GHz to see how it compares to a stock (and OC'd) 6700K.

Most people on Sandy Bridge compare their overclocked chip with the stock speed of whatever the new i7 is. It's hard to justify buying a new CPU/mobo/RAM when you have to overclock it just to see a decent improvement.

I'm running a 6-core Westmere chip (4GHz) on an X58 mobo, and the only thing compelling me to perhaps upgrade is the feature set on the motherboards. Old versions of USB, SATA, and PCIe, and no UEFI, make me a sad panda.

I suppose it's a good thing my setup is still decent; ever since I moved in with my girlfriend I never seem to have any money :(

August 5, 2015 | 10:54 AM - Posted by klyde (not verified)

Ryan, were the processors tested at stock speeds?

August 5, 2015 | 01:33 PM - Posted by Ryan Shrout

Yup.

Both will overclock to about the same frequency so I don't expect much to change in our comparison.

August 5, 2015 | 05:35 PM - Posted by hJ (not verified)

You don't expect an extra 30% CPU speed to change the CPU/GPU bottlenecks in your graphs at all? What?

August 7, 2015 | 04:00 AM - Posted by Anonymous (not verified)

You don't expect comparing a 4.8GHz SB to a 4.8GHz SL will show different results than comparing a 3.4GHz SB to a 4.0GHz SL?

Rly?

Srly?

August 7, 2015 | 09:18 AM - Posted by tackle70

So... you think that a 40% overclock on a 2600k and a 15% overclock on the 6700k (which is about what you have when you clock both at 4.8 GHz) will yield the same results? Really?

WTB Math....
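For anyone following the back-and-forth on percentages, the arithmetic depends on whether you measure from base or turbo clocks. A quick Python sketch using Intel's published clocks for both chips and the 4.8GHz target from these comments:

    # Overclock headroom to 4.8GHz, measured from base and turbo clocks.
    chips = {"2600K": (3.4, 3.8), "6700K": (4.0, 4.2)}  # (base, turbo) in GHz
    target = 4.8

    for name, (base, turbo) in chips.items():
        print(f"{name}: +{target/base - 1:.0%} over base, "
              f"+{target/turbo - 1:.0%} over turbo")
    # 2600K: +41% over base, +26% over turbo
    # 6700K: +20% over base, +14% over turbo

So "40% vs 15%" mixes the 2600K's base clock with the 6700K's turbo clock; either way, the Sandy Bridge part gains considerably more from a 4.8GHz overclock.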

August 9, 2015 | 08:23 PM - Posted by Anonymous (not verified)

Skylake has higher IPC, so at 4.8GHz on both, the Skylake will still be faster. That's what Ryan's point was. Where's your math?

August 9, 2015 | 10:13 PM - Posted by Anonymous (not verified)

That's not what he said though.

August 9, 2015 | 11:29 PM - Posted by Anonymous (not verified)

"Both will overclock to about the same frequency so I don't expect much to change in our comparison."

Either it's going to scale linearly, or both will reach a plateau. Impossible to know for sure without him saying. But it seems clear he meant the former.

August 11, 2015 | 01:47 AM - Posted by tackle70

You are interpreting my comment correctly. You can look at any set of benchmarks on different CPUs and find that CPUs fairly quickly hit a "good enough" point with high end graphics cards where there is really no difference between them. I think a stock 2600k is below that point (and has been for some time), primarily due to its low clock frequency.

I'd be *extremely* surprised to see significant differences between an overclocked 2600k and an overclocked 6700k, however. I would expect to see no more than a 1-5% difference between the two at 1080p+.

August 8, 2015 | 04:29 PM - Posted by Anonymous (not verified)

I really like PCPer in general, but you guys lost a lot of credibility with this article by not comparing overclocked speeds. It comes across as if you were paid off by Intel to write this article to make Skylake look better than it is.

August 9, 2015 | 10:14 PM - Posted by Anonymous (not verified)

That would only make sense if they had similar stock speeds in the first place.

December 7, 2015 | 11:27 AM - Posted by Anonymous (not verified)

Lol, but they are clocked differently at stock speeds...

April 10, 2016 | 09:27 PM - Posted by tg (not verified)

They will overclock to the same frequency, which means they would then be on an even playing field, minus the IPC improvements.

A stock 3.4GHz 2600K against a 4GHz 6700K is a bit different than running the 2600K and the 6700K at 4.5-4.7GHz.

Which is what we are trying to point out. The gap would lessen in this situation and the test conclusions would be more accurate. Not only that, but you need to use the SAME amount of RAM in both test benches. 8 vs 16 will also cause issues at 2K resolution, especially in games like GTA, which is a poor port.

I suggest you try the suggested things and then give us an actual comparison that doesn't appear to just be pushing for sales.

August 5, 2015 | 11:38 AM - Posted by Anonymous (not verified)

I too would like to see overclocked Sandy Bridge results compared. To do this test without those numbers is absolutely criminal. The difference between my 4.7GHz i5 2500K and this new processor has to be fairly minimal, and besides that, I found GTA V to be extremely playable @ 2560x1440, so I don't believe it was far exceeding 4ms anymore. Come on guys! Re-test with max overclocks, maybe include those numbers for comparison too.

August 5, 2015 | 02:00 PM - Posted by Anonymous (not verified)

I want support for ECC memory!

August 6, 2015 | 11:58 AM - Posted by Anonymous (not verified)

Then you will have to go with the Xeon options at a higher cost. You will get no support for ECC in consumer SKUs from Intel, as that competes with the workstation part of their business. Expect to dole out more for the motherboard, too - and a motherboard without all the overclocking options at that - for the ECC capability.

Until AMD can begin to field its Zen-based SKUs, with even more integration with its GPUs and eventually the ability to dispatch FP computations directly to the GPU from the CPU cores, things will not improve on the consumer side of the equation. Even if Zen only comes up to Sandy Bridge levels, that HSA ability to send calculations to the GPU will be what puts Intel at more than just a price disadvantage, even with current-generation HSA 1.0 compliance and before the future direct dispatching of FP workloads to the GPU.

There are plenty of Ivy Bridge and Haswell parts that will be on sale, and not as many Broadwell parts because of the 14nm delays. But why pay for the latest from Intel when it does not beat the previous SKUs by a wide enough margin to justify the cost of a new motherboard? Just upgrade to an earlier Ivy Bridge, Haswell, or Broadwell and wait it out.

August 5, 2015 | 03:58 PM - Posted by semiconductorslave

I am still on a 3770K on Z87 board with CrossFire, so probably not time to upgrade.

May 24, 2016 | 07:40 PM - Posted by Anonymous (not verified)

How are you running an LGA 1155 chip in an LGA 1150 mobo?

August 5, 2015 | 04:35 PM - Posted by Lance Ripplinger (not verified)

I guess if I were anxious to get into 4K, then I would upgrade. But I am perfectly happy with my i7-860 and GTX 660 Ti right now at 1080p. How many people are still on hardware as old as mine, or even older? You'd be very surprised.

November 17, 2015 | 10:01 AM - Posted by Anonymous (not verified)

I'm on an LGA771 Xeon E5450 @ 3.85GHz in an old ASUS mATX LGA775 board with 8GB DDR2 and a GeForce 560 Ti, and I still make it work with Battlefield 4 at 1080p, although I'm about to drop the coin on a new Skylake system.

August 5, 2015 | 07:44 PM - Posted by Anonymous75543 (not verified)

I want to see what AMD does with Zen (though I'm not expecting a miracle); until then I'm sticking with my Sandy Bridge.

I'd rather spend the money on a G-Sync or FreeSync setup at this point.

August 5, 2015 | 08:06 PM - Posted by KingKookaluke (not verified)

I don't understand. Why give the new processor twice as much RAM? Why not do the inverse and see if there are any changes?

August 5, 2015 | 11:13 PM - Posted by BiT42 (not verified)

Yeah! What's up with the Skylake system having twice as much RAM as the Sandy Bridge system?

August 5, 2015 | 11:25 PM - Posted by Anonymous (not verified)

I would be interested to see how much of the difference is due to memory speed and PCIe speed. It isn't really relevant to a purchasing decision, since you can't separate the processor from the rest of the platform improvements, but it may be interesting to turn the memory clock down and run a few tests, if you have the time. I don't know if it is possible to explicitly set PCIe 2.0 mode, though. Some of the platform power consumption differences may also be due to lower DDR4 power consumption.

August 6, 2015 | 03:09 AM - Posted by Benjamin (not verified)

I need to see GTA V tested with an equal 16GB of RAM before I can trust those results. That game chews through memory.

I would also like to echo the people asking for testing with Sandy Bridge overclocked. The whole point of the K parts is to overclock them, so it seems odd not to test that (although I completely understand the time constraints you had).

I couldn't make a buying decision without having these questions answered, to be honest.

August 6, 2015 | 11:01 AM - Posted by tackle70

Come on guys... why no clock-for-clock testing? I don't know a single person who ran a 2600K at stock speeds, and it's clocked quite a bit lower at stock than the 6700K. Fail review is fail... nothing but useless info here.

Run them both at 4.7+ GHz and then you'd have some meaningful information.

For what it's worth, I am running heavily overclocked Titan X SLI at 4K. I upgraded from a 4.8GHz 2600K to a 4.6GHz 5930K and the differences were minimal (I did have a PLX Z68 motherboard, however). Crysis 3 got slightly higher FPS in the very CPU-intensive sections, GTA V got a little smoother at the same FPS, and that was about it.

August 6, 2015 | 03:05 PM - Posted by Anonymous (not verified)

I second overclocking as well. It should just be a comparison at the typical, widely recognized air-cooled overclock for all the K processors.

And thank you for your articles; I don't think we say that enough.

August 7, 2015 | 01:26 AM - Posted by unacom

I'd love to see this test include a Q6600 as well, for more perspective.
It's going to be 10 years old soon; comparing it to the newer generations would be neat.

August 8, 2015 | 06:34 AM - Posted by windwalker

I appreciate the testing very much, but at the same time I strongly disagree with your conclusion.
Would you recommend people spend $600 on a new graphics card that delivers somewhere between 7% and 25% improvement depending on the scenario?
Because that's what you're recommending with this motherboard, CPU, and RAM combination.

August 9, 2015 | 11:25 PM - Posted by Anonymous (not verified)

7-25% isn't huge? What do you want??? Geez. 50%? 100%? Ridiculous expectations out of people. This chip is great.

August 10, 2015 | 02:27 PM - Posted by windwalker

7-25% is pathetic at that price.
Would you buy a new $600 graphics card for that kind of pitiful improvement?

August 9, 2015 | 10:18 PM - Posted by Anonymous (not verified)

I have a feeling the difference wouldn't be nearly as large if both were clocked the same, and that things would be far more GPU-bound if they were both clocked north of 4.5GHz.

August 10, 2015 | 10:54 AM - Posted by Anonymous (not verified)

This article is basically useless for the intended audience.

I would guess that most PCPer readers still using their SB chips are doing so because they are great OCers, with most people getting around 4.5GHz.

To not factor that into the analysis of an article whose subtitle is "Is it time for gamers using Sandy Bridge systems to finally bite the bullet and upgrade?" is a pretty glaring oversight, and it turns what could have been an extremely helpful and useful article into a curiosity.

August 10, 2015 | 03:02 PM - Posted by moonbogg (not verified)

My 3930K @ 4.6 still spoils me with insane FPS. Upgrading from this chip is a really bad idea right now. I just went to 980 Ti SLI @ 1440p/144Hz and it's so fast it just spoils me, really. If Skylake-E manages to impress, maybe I'll consider upgrading to that just for the lulz.

August 11, 2015 | 12:12 PM - Posted by D1RTYD1Z619

Looks like I'll have to scratch my upgrade itch next gen. I'm running a 2600K @ 4.2 with a GTX 980 at 4K and it runs fine for me. But if I win the contest for that Gigabyte rig... GOODBYE SANDY BRIDGE!!!!

August 24, 2015 | 06:16 PM - Posted by Anonymous (not verified)

First off, I'm running an i5 2500K at 4.5GHz and an AMD R9 280. If you are playing the games you want to play without any problems, smooth gameplay, etc., why would you even consider upgrading any part of your computer? What is the real-world difference between a stable 60fps and 120fps visually? No difference!

Stop being ridiculous in recommending upgrades that don't benefit the vast majority of gamers operating at 1080p, because they won't see a difference worth a $600 price tag. As someone else mentioned, it looks like you are being paid by Intel to make these suggestions and comparisons when they have very little impact on 1080p gamers.

I won't be upgrading my processor anytime soon. The video card will be my next upgrade, when my gameplay gets closer to 30fps with new games, which I estimate will be at least 2 years from now according to the trends. DX12 improvements might push that upgrade even further into the future. So let's stick to real-world comparisons in the future!

May 24, 2016 | 07:38 PM - Posted by Anonymous (not verified)

You are high, there is a huge difference between 60 and 120 fps :P

November 23, 2015 | 07:29 AM - Posted by Stephen Howe (not verified)

I'll keep my 4GHz 2500K with 16GB of DDR3, thanks.

Running both chips at the same clock speed and with the same amount of RAM would have shown the true generational improvement. Pretty disappointing that this was not done.

I definitely don't think the 5% improvement is worth the cost of upgrading to the Skylake platform and DDR4, though.

December 11, 2015 | 10:15 PM - Posted by ScrewB (not verified)

The test wasn't fair. Memory matters in CPU-limited benchmarks, and the CPUs were not OC'd. Who the hell gets a 6700K or a 2700K and does not OC? That is sort of the point of the K series.

December 25, 2015 | 01:33 AM - Posted by vargis 14 (not verified)

Yes, I have to agree with everyone here who has stated that the benchmarks/tests were not performed at overclocked speeds or with the same amount of memory.

Just to show you the difference: the percentage increase with my 2600K, compared against your stock apples-to-apples Skylake review numbers, shows a dramatic improvement in my SiSoft Dhrystone and Whetstone scores. At 4750MHz, a 35.71% faster clock speed, my Dhrystone score went from 116.86 @ 3500MHz to 186.59 @ 4750MHz, an amazing 59.6% increase. Whetstone went from 73.44 @ 3500MHz to 97.41 @ 4750MHz, a 32.64% increase, which is much closer to the clock speed increase than the Dhrystone result. That 59.6% Dhrystone jump amazed me and I cannot account for it, since I was expecting it to be close to the 35.71% clock speed change. I would be grateful to anyone who can explain the dramatic 59.6% increase over the clock speed percentage increase.

Also, I am repeating what everyone else said: no one buys a 2500K/2600K to run it at stock speeds. Heck, if you give me that Skylake platform I will give you my 5.1+ GHz capable 2600K, which I keep cool with a simple but great-performing Cooler Master Nepton 140XL, push-pull fans at 40% getting air through its 38mm thick radiator, silent for the most part. It out-cools most 240mm (not 280mm) radiators with its powerful pump and very large copper cold plate, which has more microfins than any DIY water cooling CPU block, according to FrostyTech, I believe.

I have to give you a huge thumbs up for adding SLI to the test, even though you did not OVERCLOCK the 5-year-old KING of mainstream CPUs, good ole Sandy Bridge, for the tests. The 2600K is the best CPU I have ever purchased, and I do not think I will ever match the payback that CPU has given me, and is STILL giving me, in performance on a 24/7 daily basis.

Another fantastic thing about Skylake is that they removed the idiotic on-die VRMs; they are back on the motherboard, nice and big. I feel Haswell's on-die VRMs have caused more CPU failures and chip degradation problems than I ever really hear about; those tiny VRMs in the CPU die are not only too small to put many volts through, they also add heat to the die itself. Luckily Skylake has them back on the motherboard, where they can be cooled correctly. It can even help you choose a motherboard: if two boards have everything you need and you need a tiebreaker, pick the one WITH THE MOST VRMs, and thus the best power delivery for the CPU, leading to less vdroop, etc.

I am not going to do any more SiSoft tests, because they make me want to clock my 2600K to 5100MHz, clock my SLI'd EVGA GTX 770 Classified cards to 1400MHz core and 8000 memory, and do some benchmarking I do not really need. I am using an LG 34UM95 34" 21:9 3440x1440 IPS monitor, with 8-bit to 10-bit color by dithering and a 60Hz cap ("tried overclocking the panel with no luck"), so I do not use Vsync, but I do set a frame rate target of 67fps with EVGA's Precision software. It saves me a lot of unneeded power and keeps the cards cool, since they are not pushing out every frame they possibly can (120+fps) all the time, and I get no tearing or stuttering; every game is buttery smooth. Yes, I did do a 30 minute test with the G-Sync-enabled 34" 3440x1440 Predator monitor, but nothing I had time to play ran under 60fps, and my current rig runs everything above 60fps, with my main game right now being War Thunder Ground Forces, which has fantastic graphics and gameplay... blows World of Tanks outta the water, plus if you're into it you can fly planes also. It includes
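As a quick check on the Dhrystone scaling question above (numbers taken straight from the comment), purely linear scaling with clock speed would predict a much lower score than what was measured:

    # Commenter's SiSoft Dhrystone scores and clocks, as quoted above.
    score_stock, score_oc = 116.86, 186.59
    clk_stock, clk_oc = 3.5, 4.75  # GHz

    clock_gain = clk_oc / clk_stock - 1                    # ~ +35.7%
    linear_expectation = score_stock * clk_oc / clk_stock  # ~ 158.6
    actual_gain = score_oc / score_stock - 1               # ~ +59.7%

    print(f"clock +{clock_gain:.1%}, linear expectation {linear_expectation:.1f}, "
          f"actual {score_oc} (+{actual_gain:.1%})")

A memory or uncore clock rising along with the core overclock would be a plausible suspect for the super-linear jump, but that is speculation from these numbers alone.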

February 1, 2016 | 09:41 PM - Posted by G-- (not verified)

I listen to your podcast pretty frequently and like you guys, BUT, yep, pretty obvious you guys got paid for this one. Shame on you!!!

February 2, 2016 | 01:53 PM - Posted by Jeremy Hellstrom

You got us, we each received a portion of this island chain in Dubai. 


April 10, 2016 | 09:17 PM - Posted by TG (not verified)

I don't think this test is exactly even. First off, you didn't test the performance of these CPUs in their overclocked state. As these are K processors, chances are the people who want these comparisons will be overclocking them, so a base-vs-base comparison is already biased, since the chips have vastly different base clocks.

I can speak from experience that most 2600Ks can reach 4.5GHz, and of those, most can reach 4.7GHz with decent cooling. I mention this because the higher the CPU speed goes, the less likely the CPU is to bottleneck the GPUs.

The next thing I noticed in your test is the difference in RAM: 8GB vs 16GB. While some might not think this matters, I have seen some of these games easily eat up over 8GB when run at 1440p or higher resolutions.

So my point here is that your test beds were not as similar as possible. You might not be able to use the same RAM or speed of RAM, but you should have at least matched the amount of RAM in both machines.

I think if you were to do the two things I suggest, you would see different results. There might still be an advantage to Skylake, but I think the gap would be much more minimal; 1-4% would be my guess.
