Intel Loves Exponential Trends: Shrinking Mini-PCs

Subject: General Tech, Cases and Cooling, Systems, Shows and Expos | September 12, 2014 - 11:20 AM |
Tagged: idf, idf 2014, nuc, Intel, SFF, small form factor

A few years ago, Intel introduced the NUC line of small form factor PCs. At this year's IDF, they announced plans for even smaller, and cheaper, specifications intended for OEMs to install Windows, Linux, Android, and Chrome OS on. The initiative is not yet named, but it will consist of mostly soldered components, leaving basically just the wireless adapters user-replaceable - a step down from the more user-serviceable NUC.

intel-idf-mini-pc.jpg

Image Credit: Liliputing

Being the owner of Moore's Law, Intel just couldn't help but fit this trend to some type of exponential curve. While the scaling is with respect to generation, not time, Intel expects the new, currently unnamed form factor to halve both the volume (size) and bill of materials (BOM) cost of the NUC. They then said that the generation after that ("Future SFF") will halve the BOM cost again, to a quarter of the NUC's.
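As a quick sanity check on that projection, the halving-per-generation claim works out like this (a minimal sketch; the normalized costs are illustrative, since Intel has not published actual BOM figures):

```python
# Sketch of Intel's stated cost curve: BOM cost halves each generation.
# The 1.0 baseline is a normalized NUC cost, not a real dollar figure.
nuc_bom = 1.0

generations = ["NUC", "New SFF", "Future SFF"]
for gen_index, name in enumerate(generations):
    cost = nuc_bom / (2 ** gen_index)
    print(f"{name}: {cost:.2f}x NUC BOM cost")
# Future SFF lands at 0.25x - the "quarter of the NUC" figure above.
```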

What do our readers think? Would you be willing to give up socketed components for smaller and cheaper devices in this category or does this just become indistinguishable from mobile devices (which we already know can be cheap and packed into small spaces)?

Source: Liliputing

Podcast #317 - ASUS X99 Deluxe Review, Core M Performance, 18 Core Xeons and much more news from IDF!

Subject: General Tech | September 11, 2014 - 11:30 AM |
Tagged: podcast, video, asus, X99, X99 Deluxe, Intel, core m, xeon e5-2600 v3, idf, idf 2014, fortville, 40GigE, dell, 5k, nvidia, GM204, maxwell

PC Perspective Podcast #317 - 09/11/2014

Join us this week as we discuss our ASUS X99 Deluxe Review, Core M Performance, 18 Core Xeons and much more news from IDF!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Josh Walrath, Allyn Malventano, and Morry Teitelman

Program length: 1:33:48

  1. Week in Review:
  2. IDF News:
  3. News items of interest:
  4. Hardware/Software Picks of the Week:
    1. Allyn: Read our IDF news!
  5. Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

 

IDF 2014: Through Silicon Via - Connecting memory dies without wires

Subject: Storage, Shows and Expos | September 10, 2014 - 12:34 PM |
Tagged: TSV, Through Silicon Via, memory, idf 2014, idf

If you're a general computer user, you might have never heard the term "Through Silicon Via". If you geek out on photos of chip dies and wafers, and how chips are assembled and packaged, you might have heard about it. Regardless of your current knowledge of TSV, it's about to be a thing that impacts all of you in the near future.

Let's go into a bit of background first. We're going to talk about how chips are packaged. Micron has an excellent video on the process here:

The part we are going to focus on appears at 1:31 in the above video:

die wiring.png

This is how chip dies are currently connected to the outside world. The dies are stacked (four high in the above pic) and a machine has to individually wire them to a substrate, which in turn communicates with the rest of the system. As you might imagine, things get more complex with this process as you stack more and more dies on top of each other:

chip stacking.png

16 layer die stack, pic courtesy NovaChips

...so we have these microchips with extremely small features, but to connect them we are limited to a relatively bulky process (called package-on-package). Stacking these flat planes of storage is tricky, and one would naturally want to limit the number of wires that need to be connected. The catch is that those wires also equate to available throughput from the device (i.e. one wire per bit of the data bus). So how can we improve this method and increase data bus widths, throughput, etc.?
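The wire-per-bit relationship described above can be sketched with a toy calculation (the bus widths and transfer rate below are illustrative numbers, not the specs of any real package):

```python
# Toy model of the wire-count/throughput trade-off: each bonding wire
# carries one bit of the data bus, so peak throughput scales with wire
# count times the per-wire transfer rate.

def peak_throughput_mb_s(bus_wires: int, transfers_per_sec: float) -> float:
    """Bytes/s = (wires / 8 bits-per-byte) * transfer rate; returned in MB/s."""
    return bus_wires / 8 * transfers_per_sec / 1e6

# An 8-bit bus vs. a 64-bit bus at the same 200 MT/s rate:
print(peak_throughput_mb_s(8, 200e6))   # 200.0 MB/s
print(peak_throughput_mb_s(64, 200e6))  # 1600.0 MB/s
```

Widening the bus means more wires to bond, which is exactly why the bulky package-on-package wiring becomes the bottleneck.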

Before I answer that, let me lead up to it by showing how flash memory has just taken a leap in performance. Samsung has recently made the jump to VNAND:

vnand crop--.png

By stacking flash memory cells vertically within a die, Samsung was able to make many advances in flash memory, simply because they had more room within each die. Because of the complexity of the process, they also had to revert to an older (larger) feature size. That compromise meant that the capacity of each die is similar to current 2D NAND tech, but the bonus is the speed, longevity, and power reduction advantages of the new process.

I showed you the VNAND example because it bears a striking resemblance to what is now happening in the area of die stacking and packaging. Imagine if you could stack dies by punching holes straight through them and making the connections directly through the bottom of each die. As it turns out, that's actually a thing:

tsv cross section.png

Read on for more info about TSV!

Author:
Manufacturer: Intel

Core M 5Y70 Early Testing

During a press session today with Intel, I was able to get some early performance results on Broadwell-Y in the form of the upcoming Core M 5Y70 processor.

llama1.jpg

Testing was done on a reference design platform code named Llama Mountain and at the heart of the system is the Broadwell-Y designed dual-core CPU, the Core M 5Y70, which is due out later this year. Power consumption of this system is low enough that Intel has built it with a fanless design. As we posted last week, this processor has a base frequency of just 1.10 GHz but it can boost as high as 2.6 GHz for extra performance when it's needed.

Before we dive into the actual results, you should keep in mind a couple of things. First, we weren't able to analyze the systems to check driver revisions, etc., so we are going on Intel's word that these are set up as you would expect to see them in the real world. Next, because of the disjointed nature of the tests we were able to run, the comparisons in our graphs aren't as complete as I would like. Still, the results for the Core M 5Y70 are here should you want to compare them to any other scores you like.

First, let's take a look at old faithful: CineBench 11.5.

cb11.png

UPDATE: A previous version of this graph showed the TDP for the Intel Core M 5Y70 as 15 watts, not the 4.5 watts listed here now. The reasons are complicated. Even though the Intel Ark website lists the TDP of the Core M 5Y70 at 4.5 watts, Intel has publicly stated the processor will make very short "spikes" to 15 watts when in its highest Turbo Boost modes. It comes down to a discussion of semantics, really. The cooling capability of the tablet is only targeted at 4.5-6.0 watts, and those very short 15 watt spikes can be dissipated without the need for extra heatsink surface...because they are so short. SDP, anyone? END UPDATE

With a score of 2.77, the Core M 5Y70 processor puts up an impressive fight against CPUs with much higher TDP ratings. For example, Intel's own Pentium G3258 scores 2.71 in CB11, and does so with a considerably higher thermal envelope. The Core i3-4330 scores 38% higher than the Core M 5Y70, but it requires a TDP 3.6-times larger to do so. Both of AMD's APUs in the 45 watt envelope fail to keep up with Core M.

Continue reading our preview of Intel Core M 5Y70 Performance!!

IDF 2014: Skylake Silicon Up and Running for 2H 2015 Release

Subject: Shows and Expos | September 9, 2014 - 02:27 PM |
Tagged: Skylake, Intel, idf 2014, idf, 14nm

2015 is shaping up to be an interesting year for Intel's consumer processor product lines. We are still expected to see Broadwell make some kind of debut in a socketed form in addition to the mobile releases trickling out beginning this holiday, but it looks like we will also get our first taste of Skylake late next year.

skylake1.jpg

Skylake is Intel's next microarchitecture and will be built on the same 14nm process technology currently shipping with Broadwell-Y. Intel stated that it expects to see dramatic improvements in all areas of measurement including performance, power consumption and silicon efficiency.

On stage, the company demoed Skylake running the 3DMark Fire Strike benchmark, though without providing any kind of performance result (obviously). The graphics demo was running on an engineering development board and platform, and though it looked incredibly good from where we were sitting, we can't make any guess as to the performance quite yet.

skylake3.jpg

Intel then surprised us by bringing a notebook out from behind the monitor, showing Skylake up and running in a mobile form factor while decoding and playing back 4K video. Once again, the demo was smooth and impressive, though you would expect nothing less from a heavily rehearsed keynote.

skylake2.jpg

Intel concluded that it was "excited about the health of Skylake" and that the chip should be in mass production in the first quarter of 2015, with samples going out to customers. Looking even further down the rabbit hole, the company believes it has a "great line of sight to 10nm and beyond."

Even though details were sparse, it is a good sign for Intel that they are willing to show Skylake so early, and yet I can't help but worry about a potentially shorter-than-expected life span for Broadwell in the desktop space. Mobile users will find the increased emphasis on power efficiency a big win for thin and light notebooks, but enthusiasts are still on the lookout for a new product to really drive performance up in the mainstream.

IDF 2014: Western Digital announces new Ae HDD series for archival / cold storage

Subject: Storage, Shows and Expos | September 9, 2014 - 01:51 PM |
Tagged: WDC, Western Digital, WD, idf 2014, idf, hdd, Cold, Archival, Ae

We talked about helium-filled, shingled HDDs from HGST earlier today. Helium may give you reduced power demands, but at the added expense of hermetically sealed enclosures over conventional HDDs. Shingling may give added capacity, but at the expense of being forced into specific writing methods. Now we know Western Digital's angle on archival / cold storage:

WD_AE_PRN.jpg

...so instead of going with higher-cost newer technologies, WD is taking their consumer products and making them more robust. They are also getting rid of conventional thinking on capacity increments and are moving to 100GB increments. The idea is that once a large company or distributor has qualified a specific HDD model on their hardware, that model will stick around for a while, but be continued at increased capacities as platter density yields improve over time. WD has also told me that capacities may even be mixed and matched within a 20-box of drives, so long as the average capacity matches the box label. This works in the field of archival / cold storage for a few reasons:

  • Archival storage systems generally do not use conventional RAID (where an entire array of matching capacity disks are spinning simultaneously). Drives are spun up and written to individually, or spun up individually to service the occasional read request. This saves power overall, and it also means the individual drives can vary in capacity with no ill effects.
  • Allowing for variable capacity binning helps WD ship more usable platters/drives overall (i.e. not rejecting drives that can't meet 6TB). This should drive overall costs down.
  • Increasing capacity by only a few hundred GB per drive turns into *huge* differences in cost when you scale that difference up to the number of drives you would need to handle a very large total capacity (i.e. Exabytes).

So the idea here is that WD is choosing to stick with what they do best, which they can potentially do even more cheaply than their consumer products. That said, this is really meant for enterprise use and not as a way for a home power user to save a few bucks on a half-dozen drives for their home NAS. You really need an infrastructure in place that can handle variable capacity drives seamlessly. While these drives do not employ SMR to get greater capacity, that may work out as a bonus, as writes can be performed in a way that all systems are currently compatible with (even though I suspect they will be tuned more for sequential write workloads).

Here's an illustration of this difference:

capacity 1.png

The 'old' method meant that drives on the left half of the above bell curve would have to be sold as 5TB units.

capacity 2.png

With the 'new' method, drives can be sold based on a spec closer to their actual capacity yield. For a given model, shipping capacities would increase as time goes on (top to bottom of the above graphic).
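The difference between the two binning schemes can be sketched numerically (the platter yields below are made-up sample values, and the 1TB/100GB bin sizes are assumptions based on the description above):

```python
# Sketch of the two binning schemes: the 'old' method rounds every drive
# down to the nearest whole-TB bin, the 'new' method down to the nearest
# 100GB (0.1 TB). Yields are illustrative, not real WD figures.
import math

actual_yields_tb = [5.23, 5.47, 5.61, 5.88, 5.94]

old_bins = [math.floor(c) for c in actual_yields_tb]            # 1 TB steps
new_bins = [math.floor(c * 10) / 10 for c in actual_yields_tb]  # 100 GB steps

print(old_bins)  # [5, 5, 5, 5, 5] -> every drive sold as a 5TB unit
print(new_bins)  # [5.2, 5.4, 5.6, 5.8, 5.9]
print(f"capacity recovered: {sum(new_bins) - sum(old_bins):.1f} TB")
```

Scaled up to the exabyte deployments mentioned above, that recovered per-drive capacity is where the cost savings come from.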

To further clarify what is meant by the term 'cold storage' - the data itself is cold, as in rarely if ever accessed:

tiers.png

Examples of this would be Facebook posts / images from months or years ago. That data may be rarely touched, but it needs to be accessible enough to be browsed to via the internet. The few-second archival HDD spin-up can handle this sort of thing, while a tape system would take far too long and would likely time out the data request.

WD's Ae press blast after the break.

IDF 2014: HGST announces 3.2TB NVMe SSDs, shingled 10TB HDDs

Subject: Storage, Shows and Expos | September 9, 2014 - 11:00 AM |
Tagged: ssd, SMR, pcie, NVMe, idf 2014, idf, hgst, hdd, 10TB

It's the first day of IDF, so it's only natural that we see a bunch of non-IDF news start pouring out :). I'll kick them off with a few announcements from HGST. First item up is their new SN100 line of PCIe SSDs:

Ultrastar_SN100_Family_CMYK_Master.jpg

These are NVMe-capable PCIe SSDs, available in capacities from 800GB to 3.2TB and in 2.5" (PCIe-based - not SATA) as well as half-height PCIe card form factors.

Next up is an expansion of their HelioSeal (Helium filled) drive line:

10TB_Market_applications_HR.jpg

Through the use of Shingled Magnetic Recording (SMR), HGST can make an even bigger improvement in storage density. This does not come completely free: due to the way SMR writes to the disk, it is primarily meant to be a sequential-write / random-read storage device. Picture roofing shingles, but for hard drives. The tracks are slightly overlapped as they are written to disk. This increases density greatly, but writing to the middle of a shingled section is not possible without potentially overwriting two shingled tracks simultaneously. Think of it as CD-RW writing, but for hard disks. This tech is primarily geared towards 'cold storage', or data that is not actively being written - think archival data. The ability to still read that data randomly and on demand makes these drives more appealing than retrieving the same data from tape-based archival methods.
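The shingled-write constraint can be illustrated with a toy model (the zone and track sizes are invented for illustration; real SMR drives manage zones in firmware or via the host):

```python
# Toy model of the SMR constraint: tracks in a shingled zone overlap, so
# rewriting track i clobbers the tracks shingled on top of it. A write
# into the middle of a zone therefore forces a read-modify-write of the
# whole tail of the zone, while appending at the end stays cheap.

class ShingledZone:
    def __init__(self, tracks: int):
        self.data = [b""] * tracks

    def write(self, track: int, payload: bytes) -> int:
        """Write one track; return how many tracks had to be (re)written."""
        tail = self.data[track + 1:]   # tracks overlapped by this write
        self.data[track] = payload
        self.data[track + 1:] = tail   # read-modify-write of the tail
        return 1 + len(tail)

zone = ShingledZone(tracks=16)
print(zone.write(15, b"x"))  # appending at the end rewrites 1 track
print(zone.write(0, b"y"))   # rewriting track 0 touches all 16 tracks
```

This is why the drives favor sequential (append-style) writes while random reads remain unaffected.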

Further details on the above releases are scarce at present, but we will keep you posted as more information develops.

Full press blast for the SN100 after the break.

Source: HGST

IDF 2014: Intel and Google Announce Reference Design Program, Guaranteed 2 Week AOSP Updates

Subject: Mobile | September 9, 2014 - 10:00 AM |
Tagged: tablet, reference design program, Intel, idf 2014, idf, google, aosp, Android

During today's keynote of the Intel Developer Forum, Google and Intel jointly announced a new program aimed to ease the burden of Android deployment and speed up the operating system update adoption rates that have often plagued the ecosystem.

In today's Android market, whether we are talking about x86 or ARM-based SoC designs, the process to release a point update to the operating system is quite complicated. ODMs have to build unique operating system images for each build, and each individual SKU has to pass Google Mobile Services (GMS) certification. This can be cumbersome and time consuming, slowing down or preventing operating system updates from ever making it to the consumer.

androidref.jpg

With the Intel Reference Design Program, the company will provide its partners with a single binary that lets them choose from a pre-qualified set of components or a complete bill of materials specification. Obviously this BOM will include Intel x86 processors like Bay Trail, but it should help speed up the development time of new hardware platforms. Even better, OEMs and ODMs won't have to worry about the process of passing GMS certification, leaving the hardware vendor to simply release the hardware to the market.

IMG_0182.JPG

But an even bigger step forward is Intel's commitment on the software side. Everyone knows how fragmented the Android OS market is, with just 20% of the hardware on the Play Store running Android KitKat. For devices built on the Reference Design Program, Intel is going to guarantee software updates within 2 weeks of AOSP (Android Open Source Project) updates. And that update support will continue for two years after the launch of the device.

AOSP-logo.jpg

This combination of hardware and software support from Intel to its hardware ODMs should help ignite some innovation and sales in the x86 Android market. No partners have announced support for this Reference Design Program yet, but hopefully we'll hear about some before the end of IDF. It will be very interesting to see how ARM (and its partners) respond. There are plenty of roadblocks holding back the quick uptake of x86 Android tablets, but those companies would be blind to ignore the weight that Intel can shift when they want to.

Intel Developer Forum (IDF) 2014 Keynote Live Blog

Subject: Processors, Shows and Expos | September 9, 2014 - 08:02 AM |
Tagged: idf, idf 2014, Intel, keynote, live blog

Today is the beginning of the 2014 Intel Developer Forum in San Francisco!  Join me at 9am PT for the first of our live blogs of the main Intel keynote where we will learn what direction Intel is taking on many fronts!

intelicon.jpg

Author:
Subject: Processors
Manufacturer: Intel

Server and Workstation Upgrades

Today, on the eve of the Intel Developer Forum, the company is taking the wraps off its new server and workstation class high performance processors, Xeon E5-2600 v3. Known previously by the code name Haswell-EP, the release marks the entry of the latest microarchitecture from Intel to multi-socket infrastructure. Though we don't have hardware today to offer you in-house benchmarks quite yet, the details Intel shared with me last month in Oregon are simply stunning.

slides01.jpg

Starting with the E5-2600 v3 processor overview, there are more changes in this product transition than we saw in the move from Sandy Bridge-EP to Ivy Bridge-EP. First and foremost, the v3 Xeons will be available in core counts as high as 18, with HyperThreading allowing for 36 accessible threads in a single CPU socket. A new socket, LGA2011-v3 or R3, allows the Xeon platforms to run a quad-channel DDR4 memory system, very similar to the upgrade we saw with the Haswell-E Core i7-5960X processor we reviewed just last week.

The move to a Haswell-based microarchitecture also means that the Xeon line of processors is getting AVX 2.0, known also as Haswell New Instructions, allowing for 2x the FLOPS per clock per core. It also introduces some interesting changes to Turbo Mode and power delivery we'll discuss in a bit.

slides02.jpg

Maybe the most interesting architectural change to the Haswell-EP design is per-core P-states, allowing each of the up to 18 cores on a single Xeon processor to run at independent voltages and clocks. This is something that the consumer variants of Haswell do not currently support - every core is tied to the same P-state. It turns out that when you have up to 18 cores on a single die, this ability is crucial to supporting maximum performance on a wide array of compute workloads and to maintaining power efficiency. This is also the first processor to allow independent uncore frequency scaling, giving Intel the ability to improve performance with available headroom even when the CPU cores aren't the bottleneck.

Continue reading our overview of the new Intel Xeon E5-2600 v3 Haswell-EP Processors!!