Subject: Processors
Manufacturer: Intel

Light on architecture details

Our Intel Skylake launch coverage is intense! Make sure you hit up all the stories and videos that are interesting for you!

The Intel Skylake architecture has been on our radar for quite a long time as Intel's next big step in CPU design. Through leaks and some official information discussed by Intel over the past few months, we know at least a handful of details: DDR4 memory support, 14nm process technology, modest IPC gains and impressive GPU improvements. But the details of how Skylake, the "tock" on the 14nm process, will actually differ from Broadwell and Haswell have remained a mystery.

Interestingly, due to some shifts in how Intel is releasing Skylake, we are going to be doing a review today with very little information on the Skylake architecture and design (at least officially). While we are very used to the company releasing new information at the Intel Developer Forum along with the launch of a new product, Intel has instead decided to time the release of the first Skylake products with Gamescom in Cologne, Germany. Parts will go on sale today (August 5th) and we are reviewing a new Intel processor without the background knowledge and details that will be needed to really explain any of the changes or differences in performance that we see. It's an odd move honestly, but it has some great repercussions for the enthusiasts that read PC Perspective: Skylake will launch first as an enthusiast-class product for gamers and DIY builders.

For many of you this won't change anything. If you are curious about the performance of the new Core i7-6700K, power consumption, clock for clock IPC improvements and anything else that is measurable, then you'll get exactly what you want from today's article. If you are a gear-head looking for more granular details on how the inner workings of Skylake function, you'll have to wait a couple of weeks longer - Intel plans to release that information on August 18th during IDF.


So what does the addition of DDR4 memory, full range base clock manipulation and a 4.0 GHz base clock on a brand new 14nm architecture mean for users of current Intel or AMD platforms? Also, is it FINALLY time for users of the Core i7-2600K or older systems to push that upgrade button? (Let's hope so!)

Continue reading our review of the Intel Core i7-6700K Skylake processor!!

Manufacturer: Intel

Bioshock Infinite Results

Our Intel Skylake launch coverage is intense! Make sure you hit up all the stories and videos that are interesting for you!

Today marks the release of Intel's newest CPU architecture, code named Skylake. I already posted my full review of the Core i7-6700K processor so, if you are looking for CPU performance and specification details on that part, you should start there. What we are looking at in this story is the answer to a very simple, but also very important question:

Is it time for gamers using a Sandy Bridge system to finally bite the bullet and upgrade?

I think you'll find that answer will depend on a few things, including your gaming resolution and aptitude for multi-GPU configurations, but even I was surprised by the differences I saw in testing.


Our testing scenario was quite simple. Compare the gaming performance of an Intel Core i7-6700K processor and Z170 motherboard running both a single GTX 980 and a pair of GTX 980s in SLI against an Intel Core i7-2600K and Z77 motherboard using the same GPUs. I installed both the latest NVIDIA GeForce drivers and the latest Intel system drivers for each platform.

                Skylake System               Sandy Bridge System
Processor       Intel Core i7-6700K          Intel Core i7-2600K
Motherboard     ASUS Z170-Deluxe             Gigabyte Z68-UD3H B3
Memory          16GB DDR4-2133               8GB DDR3-1600
Graphics Card   1x GeForce GTX 980           1x GeForce GTX 980
                2x GeForce GTX 980 (SLI)     2x GeForce GTX 980 (SLI)
OS              Windows 8.1                  Windows 8.1

Our testing methodology follows our Frame Rating system, which uses a capture-based system to measure frame times at the screen (rather than trusting the software's interpretation).

If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we use a secondary PC to capture the output from the tested graphics card directly, then post-process the resulting video to determine frame rates, frame times, frame variance and much more.

This amount of data can be pretty confusing if you attempt to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options.  So please, read up on the full discussion about our Frame Rating methods before moving forward!!

While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we generate with additional code of our own.
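
To give you a sense of the math involved once a capture has been processed, here is a minimal sketch of the core arithmetic: the time each frame reached the display is turned into frame times, an average frame rate, and a simple frame time variability figure. This is not our actual FCAT or analysis code, and the timestamps below are made up purely for illustration.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical timestamps (in ms) at which each frame appeared on screen,
    // as would be extracted from the captured video. Values are illustrative.
    std::vector<double> displayTimesMs = {0.0, 16.7, 33.9, 49.8, 67.2, 83.5, 99.9};

    // Frame time = gap between consecutive frames at the display.
    std::vector<double> frameTimesMs;
    for (size_t i = 1; i < displayTimesMs.size(); ++i)
        frameTimesMs.push_back(displayTimesMs[i] - displayTimesMs[i - 1]);

    double sum = 0.0;
    for (double ft : frameTimesMs) sum += ft;
    const double mean = sum / frameTimesMs.size();

    // Standard deviation of frame times as a simple "frame variance" metric.
    double varSum = 0.0;
    for (double ft : frameTimesMs) varSum += (ft - mean) * (ft - mean);
    const double stdDev = std::sqrt(varSum / frameTimesMs.size());

    std::printf("Average frame time: %.2f ms (%.1f FPS)\n", mean, 1000.0 / mean);
    std::printf("Frame time variability (std. dev.): %.2f ms\n", stdDev);
    return 0;
}
```

The real pipeline does much more than this (it detects dropped and runt frames from colored overlay bars in the captured video, for example), but the basic idea of working from display-side timings rather than software-reported ones is the same.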

If you need some more background on how we evaluate gaming performance on PCs, just check out my most recent GPU review for a full breakdown.

I only had time to test four different PC titles:

  • Bioshock Infinite
  • Grand Theft Auto V
  • GRID 2
  • Metro: Last Light

Continue reading our look at discrete GPU scaling on Skylake compared to Sandy Bridge!!

Manufacturer: Lenovo


After you've spent some time in the computer hardware industry, it's easy to become jaded about trade shows and unannounced products. The vast majority of hardware we see at events like CES every year is completely expected beforehand. While this doesn't mean that these products are bad by any stretch, they can be difficult to get excited about.

Every once in a while, however, we find ourselves with our hands on something completely unexpected. Hidden away in a back room of Lenovo's product showcase at CES this year, we were told, was a product that would amaze us: the LaVie.


And they were right.

Unfortunately, the Lenovo LaVie-Z is one of those products that you can't truly understand until you get it in your hands. Billed as the world's lightest 13.3" notebook, the standard LaVie-Z comes in at a weight of just 1.87 lbs. The touchscreen-enabled LaVie-Z 360 gains a bit of weight, coming in at 2.04 lbs.

While these numbers are a bit difficult to wrap your head around, I'll try to provide a bit of context. For example, the Google Nexus 9 weighs 0.94 lbs. For just over twice the weight of Google's flagship tablet, Lenovo has provided a full Windows notebook with an ultra-mobile Core i7 processor.


Furthermore the new 12" Apple MacBook which people are touting as being extremely light comes in at 2.03 lbs, almost the same weight as the touchscreen version of the LaVie-Z. For the same weight, you also gain a much more powerful Intel i7 processor in the LaVie, when compared to the Intel Core-M option in the MacBook.

All of this comes together to provide an experience that is quite unbelievable. Anyone that I have handed one of these notebooks to has been absolutely amazed that it's a real, functioning computer. The closest analog that I have been able to come up with for picking up the LaVie-Z is one of the cardboard placeholder laptops they have at furniture stores.

The personal laptop that I carry day-to-day is an 11" MacBook Air, which itself weighs just 2.38 lbs, but the LaVie-Z feels infinitely lighter.

However, as impressive as the weight (or lack thereof) of the LaVie-Z is, let's dig deeper into what the experience of using the world's lightest notebook is actually like.

Click here to continue reading our review of the Lenovo LaVie-Z and LaVie-Z 360

Report: Intel Core i7-6700K and i5-6600K Retail Box Photos and Pricing Leak

Subject: Processors | August 3, 2015 - 10:58 AM |
Tagged: Skylake, leak, Intel, i7-6700K, Core i7-6700K

Leaked photos of what appear to be the full retail boxes of the upcoming Intel Core i7-6700K and i5-6600K "Skylake" unlocked CPUs have surfaced on imgur, making the release of these processors feel ever closer.


Is this really the new box graphic for the unlocked i7?

While the authenticity of these photos can't be verified through any official channel, they certainly do look real. We have heard of Skylake leaks - a.k.a. Skyleaks - for a while now, and the rumors point to an August release for these new LGA 1151 chips (sorry LGA 1150 motherboard owners!).


Looks real. But we do live in a Photoshop world...

We only have about four weeks to wait at the most if an August release is, in fact, imminent. If not, I blame Jeremy for getting our hopes up with terms like Skyleak™. I encourage you to direct all angry correspondence to his inbox.


These boxes are very colorful (or colourful, if you will)

Update: A new report has emerged with US retail pricing for the upcoming Skylake lineup. Here is the chart from WCCFTech:


Chart taken from WCCFTech

The pricing of the top i7 part at $316 would be a welcome reduction from the current $339 retail of the i7-4790K. Now whether the 6700K can beat out that Devil's Canyon part remains to be seen. Doubtless we will have benchmarks and complete coverage once Intel officially releases these parts.

Source: imgur

Podcast #360 - Intel XPoint Memory, Windows 10 and DX12, FreeSync displays and more!

Subject: General Tech | July 30, 2015 - 02:45 PM |
Tagged: podcast, video, Intel, XPoint, nand, DRAM, windows 10, DirectX 12, freesync, g-sync, amd, nvidia, benq, uhd420, wasabi mango, X99, giveaway

PC Perspective Podcast #360 - 07/30/2015

Join us this week as we discuss Intel XPoint Memory, Windows 10 and DX12, FreeSync displays and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Sebastian Peak

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!


Breaking: Intel and Micron announce 3D XPoint Technology - 1000x Faster Than NAND

Subject: Storage | July 28, 2015 - 12:41 PM |
Tagged: XPoint, non-volatile RAM, micron, memory, Intel

Everyone who reads SSD reviews knows that NAND Flash memory comes with advantages and disadvantages. The cost is relatively good compared to RAM, and the data remains intact even with power removed (non-volatile), but the penalty is relatively slow programming (write) speeds. To help solve this, Intel and Micron today jointly launched a new type of memory technology.


XPoint (spoken 'cross point') is a new class of memory technology with some amazing characteristics. 10x the density (vs. DRAM), 1000x the speed, and most importantly, 1000x the endurance as compared to current NAND Flash technology.


128Gb XPoint memory dies, currently being made by Intel / Micron, offer a similar capacity to current-generation NAND dies. This is impressive for a first-generation part, especially since an XPoint die is physically smaller than a current-gen NAND die of the same capacity.

Intel stated that the method used to store bits is vastly different from what is used in NAND flash memory today: the 'whole cell' properties change as a bit is programmed, the fundamental physics involved is different, and the memory is writable in small amounts (NAND flash must be erased in large blocks). While Intel did not specifically say so, it looks like phase change memory (*edit* at the Q&A Intel stated this is not Phase Change). The cost of this technology should end up falling somewhere between the cost of DRAM and NAND Flash.


3D XPoint memory is already being produced at the Intel / Micron Flash Technology plant in Lehi, Utah. We toured this facility a few years ago.

Intel and Micron said this technology is coming very soon. 2016 was given as the launch year, and a wafer was shown to us on stage:


You know I'm a sucker for good wafer / die photos. As soon as this session breaks I'll get a better shot!

There will be more analysis to follow on this exciting new technology, but for now I need to run to a Q&A meeting with the engineers who worked on it. Feel free to throw some questions in the comments and I'll answer what I can!

*edit* - here's a die shot:


Added note - this wafer was manufactured on a 20nm process, and consists of a 2-layer matrix. Future versions should scale with additional layers to achieve higher capacities.

Press blast after the break.

Source: Intel

Something is cooking in San Francisco

Subject: Storage | July 28, 2015 - 11:26 AM |
Tagged: Intel, micron, flash


...stay tuned!

Manufacturer: PC Perspective

... But Is the Timing Right?

Windows 10 is about to launch and, with it, DirectX 12. Apart from the massive increase in draw calls, Explicit Multiadapter, both Linked and Unlinked, has been the cause of a few pockets of excitement here and there. I am a bit concerned, though. People seem to think this is a new, novel concept that gives game developers tools they've never had before. It really isn't. Depending on what you want to do with secondary GPUs, game developers could have used them for years. Years!

Before we talk about the cross-platform examples, we should talk about Mantle. It is the closest analog to DirectX 12 and Vulkan that we have. It served as the base specification for Vulkan, which the Khronos Group modified to use SPIR-V instead of HLSL, among other changes. Some claim that it was also the foundation of DirectX 12, which would not surprise me given what I've seen online and in the SDK. Allow me to show you how the API works.


Mantle is an interface that mixes graphics, compute, and DMA (memory access) work into queues of commands. This is easily done in parallel, as each thread can build commands on its own, which is great for multi-core processors. Each queue, a list of commands headed for the GPU, can be handled independently, too. An interesting side effect is that, since every device works on standard data structures, such as IEEE 754 floating-point numbers, no one cares where these queues go as long as the work gets done quickly enough.

Since each queue is independent, an application can choose to manage many of them. None of these lists really needs to know what is happening in any other. As such, they can be pointed at multiple, even wildly different, graphics devices. GPUs of different models, with different capabilities, can work together, as long as they support the core of Mantle.


DirectX 12 and Vulkan took this metaphor so their respective developers could use this functionality across vendors. Mantle did not invent the concept, however. What Mantle did was expose this architecture to graphics, which can make use of all the fixed-function hardware that is unique to GPUs. Prior to AMD's usage, this was how GPU compute architectures were designed. Game developers could have spun up an OpenCL workload to process physics, audio, pathfinding, visibility, or even lighting and post-processing effects... on a secondary GPU, even one from a completely different vendor.
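
To make that metaphor concrete, here is a minimal, hedged sketch of how DirectX 12 exposes the same split: one device with independent direct (graphics), compute, and copy (DMA) queues created on it. This is illustrative boilerplate rather than code from any shipping engine; it simply grabs adapter 0 and keeps error handling to bare checks.

```cpp
// Windows 10 only; link against d3d12.lib and dxgi.lib.
#pragma comment(lib, "d3d12.lib")
#pragma comment(lib, "dxgi.lib")

#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    // Naively pick the first adapter; a real application would choose deliberately.
    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter))) return 1;

    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) return 1;

    // Three independent queue types, mirroring the graphics/compute/DMA split.
    const D3D12_COMMAND_LIST_TYPE types[] = {
        D3D12_COMMAND_LIST_TYPE_DIRECT,   // graphics (can also run compute and copy work)
        D3D12_COMMAND_LIST_TYPE_COMPUTE,  // compute
        D3D12_COMMAND_LIST_TYPE_COPY      // copy / DMA
    };

    ComPtr<ID3D12CommandQueue> queues[3];
    for (int i = 0; i < 3; ++i) {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = types[i];
        if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queues[i]))))
            return 1;
    }

    std::printf("Created direct, compute, and copy queues on adapter 0\n");
    return 0;
}
```

Command lists recorded on separate threads are then submitted to whichever of these queues is appropriate, which is exactly the kind of parallelism the Mantle description above is getting at.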

Vista's multi-GPU bug might have gotten in the way, but spinning up compute work on a second GPU like that was possible in Windows 7 and, I believe, XP too.
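
To illustrate just how little was required even then, here is a hedged OpenCL sketch that enumerates every GPU the installed runtimes can see, regardless of vendor, and creates a compute queue on the second one it finds, the sort of device you might hand physics or post-processing work. The device-selection logic is deliberately naive and purely illustrative.

```cpp
// Requires an OpenCL SDK and runtime; link against OpenCL.lib / libOpenCL.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    // Collect every GPU device across all vendors' platforms.
    std::vector<cl_device_id> gpus;
    for (cl_platform_id p : platforms) {
        cl_uint numDevices = 0;
        if (clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, 0, nullptr, &numDevices) != CL_SUCCESS)
            continue;  // platform has no GPUs (e.g. a CPU-only runtime)
        std::vector<cl_device_id> devices(numDevices);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, numDevices, devices.data(), nullptr);
        gpus.insert(gpus.end(), devices.begin(), devices.end());
    }

    if (gpus.size() < 2) {
        std::printf("Only %zu GPU(s) found; no secondary device to offload to.\n", gpus.size());
        return 0;
    }

    // Naively treat the second GPU found as the "helper" device.
    cl_device_id helper = gpus[1];
    char name[256] = {};
    clGetDeviceInfo(helper, CL_DEVICE_NAME, sizeof(name), name, nullptr);

    cl_int err = CL_SUCCESS;
    cl_context ctx = clCreateContext(nullptr, 1, &helper, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, helper, 0, &err);

    std::printf("Secondary compute queue created on: %s\n", name);

    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}
```

From there, a developer would compile kernels and enqueue work on that queue exactly as they would on the primary GPU, with the rendering workload none the wiser.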

Read on to see a couple of reasons why we are only getting this now...


Subject: General Tech | July 24, 2015 - 03:39 PM |
Tagged: Intel Skylake, Intel

As always, you should take these leaks with a bit of salt, but if they are accurate, Skylake may well offer enough enhancements to make a convincing argument for buying a new machine.  The GPU portion of the high-end mobile processors will be 34-41% faster than the models available now, with the desktop seeing a moderate 28% jump for those who do not have an add-in card.  The low-power mobile model's performance is not much improved over the previous generation, but the claimed 80% reduction in power usage is more than enough to make up for that.


SPECint benchmarks show that Skylake will offer a performance boost of a bit over 10%, but the added 1.4 hours of battery life is rather impressive, and even the desktop part is more efficient with a 65W TDP.  As for accessories, Skylake will support 4K cameras, new and improved RealSense 3D cameras, Wake on Voice, and improved touch sensors.  You can see the other two leaked slides at FanlessTech.

Source: Fanless Tech

Meet the Intel Core i7-5775C Broadwell CPU

Subject: Processors | July 20, 2015 - 05:58 PM |
Tagged: Intel, i7-5775C, LGA1150, Broadwell, crystalwell

To keep it interesting and to drive tech reviewers even crazier, Intel has changed their naming scheme again, with C now designating an unlocked CPU as opposed to K on the new Broadwell models.  Compared to the previous 4770K, the TDP is down to 65W from 84W, the L3 cache has shrunk from 8MB to 6MB, and the base and turbo clocks have both dropped 200MHz. It does have the Iris Pro 6200 graphics core, finally available on an LGA chip.  Modders Inc. took the opportunity to clock both the flagship Haswell and Broadwell chips to 4GHz to do a clock-for-clock comparison of the architectures.  Check out the review right here.


"While it is important to recognize one's strengths and leverage it as an asset, accepting shortcomings and working on them is equally as important for the whole is greater than the sum of its parts."



Source: Modders Inc