Intel Pushes Xeon to the Edge With Refreshed Skylake-Based Xeon D SoCs

Subject: Processors | February 7, 2018 - 09:01 AM |
Tagged: Xeon D, xeon, servers, networking, micro server, Intel, edge computing, augmented reality, ai

Intel announced a major refresh of its Xeon D system-on-chip (SoC) processors aimed at high density servers that bring the power of the datacenter as close to end user devices and sensors as possible to reduce TCO and application latency. The new Xeon D-2100 series SoCs are built on Intel’s 14nm process technology and feature the company’s new mesh architecture (gone are the days of the ring bus). According to Intel, the new chips are squarely aimed at “edge computing” and offer up to 2.9-times the network performance, 2.8-times the storage performance, and 1.6-times the compute performance of the previous generation Xeon D-1500 series.

Intel New Xeon D-2100.png

Intel has managed to pack in up to 18 Skylake-based processing cores, Quick Assist Technology co-processing (for things like hardware accelerated encryption/decryption), four DDR4 memory channels addressing up to 512 GB of DDR4 2666 MHz ECC RDIMMs, four Intel 10 Gigabit Ethernet controllers, 32 lanes of PCI-E 3.0, and 20 lanes of flexible high speed I/O that can be configured as up to 14 lanes of SATA 3.0, four USB 3.0 ports, or 20 lanes of PCI-E. Of course, the SoCs support Intel’s Management Engine, hardware virtualization, HyperThreading, Turbo Boost 2.0, and AVX-512 instructions with 1 FMA (fused multiply-add) unit as well.
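
As a rough back-of-the-envelope check (these are standard JEDEC figures, not numbers Intel quoted), the theoretical peak memory bandwidth of that four-channel DDR4-2666 configuration works out like this:

```python
# Theoretical peak DRAM bandwidth: channels * transfers per second * bytes per transfer.
# DDR4-2666 performs 2666 million transfers/second across a 64-bit (8-byte) channel.
channels = 4
transfers_per_sec = 2666e6
bytes_per_transfer = 8  # 64-bit channel width

peak_bw_gb_s = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"Peak theoretical bandwidth: {peak_bw_gb_s:.1f} GB/s")  # ~85.3 GB/s
```

Real-world sustained bandwidth will of course land well below that theoretical ceiling.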

xeond2100-14.jpg

Suffice it to say, there is a lot going on with these new chips, which represent a big step up in capabilities (and TDPs), further bridging the gap between the Xeon E3 v5 family, the Xeon E5 family, and the new Xeon Scalable Processors. Xeon D is aimed at datacenters where power and space are limited, and while the soldered SoCs are single socket (1P) setups, high density is achieved by filling racks with as many single processor Mini ITX boards as possible. Xeon D does not quite match the per-core clockspeeds of the “proper” Xeons, but it has significantly more cores than Xeon E3 and much lower TDPs and cost than Xeon E5. Its many lower clocked, lower power cores excel at burstable tasks such as serving up websites, where many threads may be generated and maintained for long periods without needing much processing power, and when new page requests do come in the cores are able to turbo boost to meet demand. For example, Facebook is using Xeon D processors to serve its front end websites in its Yosemite OpenRack servers, where each server rack holds 192 Xeon D 1540 SoCs (four Xeon D boards per 1U sled) for 1,536 Broadwell cores. Other applications include edge routers, network security appliances, self-driving vehicles, and augmented reality processing clusters.

The autonomous vehicle use case is perhaps the best example of just what the heck edge computing is. Rather than fighting the laws of physics to transfer sensor data back to a datacenter for processing and then back to the car in time for it to safely act on the processed information, the idea of edge computing is to bring most of the processing, networking, and storage power as close as possible to both the input sensors and the device (and human) that relies on accurate and timely data to make decisions.
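
To make the "fighting the laws of physics" point concrete, here is a rough illustration with hypothetical distances. It considers only fiber-optic propagation delay (signals travel at roughly two thirds the speed of light in fiber) and ignores routing, queuing, and processing time, so real round trips would be even worse:

```python
# Lower bound on network round-trip time from propagation delay alone.
SPEED_IN_FIBER_KM_S = 200_000  # light in fiber is ~2/3 of c

def round_trip_ms(distance_km):
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

for label, km in [("distant datacenter", 1500), ("regional datacenter", 300), ("edge node", 5)]:
    rtt = round_trip_ms(km)
    # distance a car at 120 km/h (~33.3 m/s) covers while waiting on the network
    travel_m = 120 / 3.6 * rtt / 1000
    print(f"{label:20s} {km:5d} km  RTT >= {rtt:6.2f} ms  car travels {travel_m:.2f} m")
```

Even in this best case, a car talking to a datacenter 1,500 km away rolls half a meter blind on every exchange, which is exactly the gap edge computing aims to close.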

xeond2100-15.jpg

As far as specifications, Intel’s new Xeon D lineup includes 14 processor models broken up into three main categories. The Edge Server and Cloud SKUs include eight, twelve, and eighteen core options with TDPs ranging from 65W to 90W. Interestingly, the 18 core Xeon D does not feature the integrated 10 GbE networking the lower end models have, though it supports higher DDR4 memory frequencies. The two remaining classes of Xeon D SoCs are the “Network Edge and Storage” and “Integrated Intel Quick Assist Technology” SKUs. These are roughly similar, with two eight core, one 12 core, and one 16 core processor in each (the former also has a quad core option that isn’t present in the latter category), though there is a big differentiator in clockspeeds. It seems customers will have to choose between core clockspeeds and Quick Assist acceleration (up to 100 Gbps): the chips that do have QAT are clocked much lower than the chips without the co-processor hardware, which makes sense because they have similar TDPs, so clocks needed to be sacrificed to maintain the same core count. Thanks to the updated architecture, Intel is encroaching a bit on the per-core clockspeeds of the Xeon E3 and Xeon E5s, though when turbo boost comes into play the Xeon Ds can’t compete.

Intel Xeon D-2100 SKU Information.png

The flagship Xeon D 2191 offers two more cores (four additional threads) than the previous Broadwell-based flagship Xeon D 1577, as well as higher clockspeeds at 1.6 GHz base versus 1.3 GHz and 2.2 GHz turbo versus 2.1 GHz. The Xeon D 2191 does lack the integrated networking, though. Looking at the two 16 core refreshed Xeon Ds compared to the 16 core Xeon D 1577, Intel has managed to increase clocks significantly (up to 2.2 GHz base and 3.0 GHz boost versus 1.3 GHz base and 2.1 GHz boost), double the number of memory channels and network controllers, and increase the maximum amount of memory from 128 GB to 512 GB. All those increases did come at the cost of TDP, though, which went from 45W to 100W.

xeond2100-6.jpg

Xeon D has always been an interesting platform both for enthusiasts running VM labs and home servers and for big data enterprise clients building and serving up the 'next big thing' built on the astonishing amounts of data people create and consume on a daily basis. (Intel estimates a single self-driving car would generate as much as 4TB of data per day, the average person in 2020 will generate 1.5 GB of data per day, and VR recordings such as NFL True View will generate up to 3TB a minute!) With Intel ramping up the core count, per-core performance, and I/O, the platform is starting not only to bridge the gap between the single socket Xeon E3 and dual socket Xeon E5 but to claim a place of its own in the fast-growing server market.
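
Those volume estimates are easier to appreciate as sustained throughput. This sketch simply converts Intel's figures into per-second rates:

```python
# Convert Intel's data-volume estimates into sustained data rates.
def sustained_rate(bytes_total, seconds):
    return bytes_total / seconds  # bytes per second

car = sustained_rate(4e12, 86_400)   # 4 TB per day, per self-driving car
print(f"Self-driving car: {car / 1e6:.1f} MB/s sustained")  # ~46.3 MB/s

true_view = sustained_rate(3e12, 60)  # 3 TB per minute for VR recording
print(f"NFL True View: {true_view / 1e9:.1f} GB/s ({true_view * 8 / 1e9:.0f} Gbps)")  # 50 GB/s, 400 Gbps
```

A single car's ~46 MB/s is manageable, but 400 Gbps of VR capture is exactly the kind of firehose that needs compute sitting next to the cameras rather than across a WAN link.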

I am looking forward to seeing how Intel's partners and the enthusiast community take advantage of the new chips and what new projects they will enable. It is also going to be interesting to see the responses from AMD (e.g. Snowy Owl and, to a lesser extent, the lower end and niche Great Horned Owl, which has fewer CPU cores but a built in GPU) and the various ARM players (Qualcomm Centriq, X-Gene, Ampere, etc.*) as they vie for this growth market with higher powered SoC options in 2018 and beyond.


*Note that X-Gene and Ampere are now both backed by the Carlyle Group, with MACOM having sold X-Gene to the Carlyle-backed Project Denver Holdings and the ex-Intel employee led Ampere sharing the same backer.

Source: Intel

All I want for Christmas ... is an Intel firmware patch

Subject: General Tech | November 24, 2017 - 01:22 PM |
Tagged: Intel, 7th generation core, 6th generation core, 8th generation core, apollo lake, xeon, security

The issue with Intel's processors is widespread and a fix will not be available for some time yet. The flaws in their security features are present in 6th through 8th gen Core chips, as well as a variety of Xeons, Celerons, and Apollo Lake CPUs, which accounts for a wide variety of systems, from gaming machines to NAS devices. All suffer from the vulnerability, which allows compromised code to run on a system invisibly, as it is executed below the OS on the actual chip. From what The Register gleaned from various manufacturers, only Dell will release a patch before 2018, and even that will only cover a very limited number of machines. The end of 2017 is going to be a little too interesting for many sysadmins.

Capture.PNG

"As Intel admitted on Monday, multiple flaws in its Management Engine, Server Platform Services, and Trusted Execution Engine make it possible to run code that operating systems – and therefore sysadmins and users – just can't see."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

Benchmarking a beast of a box, a dual Xeon Scalable Gold Server

Subject: Systems | August 30, 2017 - 03:42 PM |
Tagged: linux, xeon, Xeon Gold 6138, dual cpu, LGA-3647, Intel

The core counts and amount of RAM on enthusiast systems are growing quickly, especially with Threadripper, but we won't be seeing a system quite like this one under our desks in the near future. The server which Phoronix tested sports dual Xeon Gold 6138 processors for a total of 40 physical cores and 80 threads, with each CPU having 48GB of RAM for a total of 96GB of DDR4-2666. Not only did Phoronix run this system through a variety of tests, they did so on eight different Linux distros. Can any benchmark push this thing to its limits? Was there a clear winner for the OS? Find out in the full review.

image.php_.jpg

"While we routinely run various Linux distribution / operating system comparisons at Phoronix, they tend to be done on desktop class hardware and the occasional servers. This is our look at the most interesting enterprise-focused Linux distribution comparison to date as we see how Intel's Xeon Scalable platform compares on different GNU/Linux distributions when using the Tyan GT24E-B7106 paired with two Dual Xeon Gold 6138 processors."

Here are some more Systems articles from around the web:

Systems


Source: Phoronix

Intel Announces Xeon W and Xeon Scalable Workstation Processors

Subject: Processors | August 29, 2017 - 12:00 PM |
Tagged: Xeon W, xeon scalable, xeon, workstation, processor, Intel, cpu

Intel has officially announced their new workstation processor lineup, with Xeon Scalable and Xeon W versions aimed at both professional and mainstream workstation systems.

"Workstations powered by Intel Xeon processors meet the most stringent demands for professionals seeking to increase productivity and rapidly bring data to life. Intel today disclosed that the world-record performance of the Intel Xeon Scalable processors is now available for next-generation expert workstations to enable photorealistic design, modeling, artificial intelligence (AI) analytics, and virtual-reality (VR) content creation."

Slide 1.png

The first part of Intel’s product launch announcement is the new Xeon Scalable processors, first announced in July; these are dual-socket solutions targeting professional workstations. Versions with up to 56 cores/112 threads are available, and frequencies of up to 4.20 GHz are possible via Turbo Boost. Intel is emphasising the large performance impact of upgrading to these new Xeon processors with a comparison to older equipment (a trend in the industry of late), which is relevant when considering the professional market, where upgrade cycles are far slower than in the enthusiast desktop segment:

“Expert workstations will experience up to a 2.71x boost in performance compared to a 4-year-old system and up to 1.65x higher performance compared to the previous generation.”

Slide 2.png

The second part of the announcement is the new Xeon W processors, which will be part of Intel’s mainstream workstation offering. These are single-socket processors, with up to 18 cores/36 threads and Turbo Boost frequencies up to 4.50 GHz. The performance uplift with these new Xeon W CPUs compared to previous generations is not as great as with the Xeon Scalable processors above, as Intel offers the same comparison to older hardware for the Xeon W:

“Mainstream workstations will experience up to a 1.87x boost in performance compared to a 4-year-old system and up to 1.38x higher performance compared to the previous generation.”

Slide 3.png

Full PR is available from Intel's newsroom.

Source: Intel

Podcast #458 - Intel Xeons, ThunderBolt 3 GPU chassis, Affordable 10GbE, and more!

Subject: General Tech | July 13, 2017 - 11:40 AM |
Tagged: xeon, x299, video, thunderbolt 3, sapphire, RX470, rift, radeon, podcast, nand, Intel, HDK2, gigabyte, external gpu, asus, 10GbE

PC Perspective Podcast #458 - 07/13/17

Join us for Intel Xeon launch, external ThunderBolt3 GPUs, 10Gb Ethernet, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, Allyn Malventano

Peanut Gallery: Ken Addison, Alex Lustenberg

Program length: 1:38:08
Podcast topics of discussion:
  1. Week in Review:
  2. News items of interest:
  3. Hardware/Software Picks of the Week
    1. Ryan: ASUS XG-C100C lol
    2. Jeremy: Um, well I keep meaning to play Deserts of Kharak
  4. Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Subject: Processors
Manufacturer: Intel

A massive lineup

The amount and significance of the product and platform launches occurring today with the Intel Xeon Scalable family is staggering. Intel is launching more than 50 processors and 7 chipsets under the Xeon Scalable product brand, targeting data centers and enterprise customers in a wide range of markets and segments. From SMB users to “Super 7” data center clients, the new Xeon lineup is likely to have an option targeting each of them.

All of this comes at an important point in time, with AMD fielding its new EPYC family of processors and platforms, becoming competitive in the space for the first time in nearly a decade. That decade of clear dominance in the data center has been good to Intel, giving it the ability to bring in profits and high margins without the direct fear of a strong competitor. Intel did not spend those 10 years flat-footed though; instead it has been developing complementary technologies including new Ethernet controllers, ASICs, Omni-Path, FPGAs, solid state storage tech and much more.

cpus.jpg

Our story today will give you an overview of the new processors and the changes that Intel’s latest Xeon architecture offers to business customers. The Skylake-SP core has some significant upgrades over the Broadwell design before it, but in other aspects the processors and platforms will be quite similar. What changes can you expect with the new Xeon family?

01-11 copy.jpg

Per-core performance has been improved with the updated Skylake-SP microarchitecture and a new cache memory hierarchy that we had a preview of with the Skylake-X consumer release last month. The memory and PCIe interfaces have been upgraded with more channels and more lanes, giving the platform more flexibility for expansion. Socket-level performance also goes up with higher core counts available and the improved UPI interface that makes socket to socket communication more efficient. AVX-512 doubles the peak FLOPS/clock on Skylake over Broadwell, beneficial for HPC and analytics workloads. Intel QuickAssist improves cryptography and compression performance to allow for faster connectivity implementation. Security and agility get an upgrade as well with Boot Guard, RunSure, and VMD for better NVMe storage management. While on the surface this is a simple upgrade, there is a lot that gets improved under the hood.
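
The "AVX-512 doubles the peak FLOPS/clock" claim follows directly from the vector width: a sketch of the arithmetic (standard figures for these microarchitectures, not Intel marketing numbers):

```python
# Peak double-precision FLOPs per clock per core:
# (vector lanes) * (2 FLOPs per fused multiply-add) * (FMA units per core)
def dp_flops_per_clock(vector_bits, fma_units):
    lanes = vector_bits // 64      # 64-bit doubles packed per vector register
    return lanes * 2 * fma_units   # each FMA = one multiply + one add

broadwell = dp_flops_per_clock(256, 2)    # AVX2, 2 FMA units per core
skylake_sp = dp_flops_per_clock(512, 2)   # AVX-512, 2 FMA units per core
print(broadwell, skylake_sp, skylake_sp / broadwell)  # 16 32 2.0
```

Note this is the 2-FMA configuration; as the SKU list below shows, the lower tiers ship with a single FMA unit and therefore half the peak vector throughput.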

01-12 copy.jpg

We already had a good look at the new mesh architecture used for the inter-core component communication. This transition away from the ring bus that was in use since Nehalem gives Skylake-SP a couple of unique traits: slightly longer latencies but with more consistency and room for expansion to higher core counts.

01-18 copy.jpg

Intel has changed the naming scheme with the Xeon Scalable release, moving away from “E5/E7” and “v4” to a Platinum, Gold, Silver, Bronze nomenclature. The product differentiation remains much the same, with the Platinum processors offering the highest feature support including 8-sockets, highest core counts, highest memory speeds, connectivity options and more. To be clear: there are a lot of new processors and trying to create an easy to read table of features and clocks is nearly impossible. The highlights of the different families are:

  • Xeon Platinum (81xx)
    • Up to 28 cores
    • Up to 8 sockets
    • Up to 3 UPI links
    • 6-channel DDR4-2666
    • Up to 1.5TB of memory
    • 48 lanes of PCIe 3.0
    • AVX-512 with 2 FMA per core
  • Xeon Gold (61xx)
    • Up to 22 cores
    • Up to 4 sockets
    • Up to 3 UPI links
    • 6-channel DDR4-2666
    • AVX-512 with 2 FMA per core
  • Xeon Gold (51xx)
    • Up to 14 cores
    • Up to 2 sockets
    • 2 UPI links
    • 6-channel DDR4-2400
    • AVX-512 with 1 FMA per core
  • Xeon Silver (41xx)
    • Up to 12 cores
    • Up to 2 sockets
    • 2 UPI links
    • 6-channel DDR4-2400
    • AVX-512 with 1 FMA per core
  • Xeon Bronze (31xx)
    • Up to 8 cores
    • Up to 2 sockets
    • 2 UPI links
    • No Turbo Boost
    • 6-channel DDR4-2133
    • AVX-512 with 1 FMA per core
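
The tiering above lends itself to a simple lookup table. This sketch (values transcribed from the list above; the dictionary and function names are just illustrative, not any Intel tooling) encodes the per-family ceilings:

```python
# Feature ceilings per Xeon Scalable family, transcribed from the list above.
XEON_FAMILIES = {
    "Platinum 81xx": {"max_cores": 28, "max_sockets": 8, "ddr4": 2666, "fma_units": 2},
    "Gold 61xx":     {"max_cores": 22, "max_sockets": 4, "ddr4": 2666, "fma_units": 2},
    "Gold 51xx":     {"max_cores": 14, "max_sockets": 2, "ddr4": 2400, "fma_units": 1},
    "Silver 41xx":   {"max_cores": 12, "max_sockets": 2, "ddr4": 2400, "fma_units": 1},
    "Bronze 31xx":   {"max_cores": 8,  "max_sockets": 2, "ddr4": 2133, "fma_units": 1},
}

def max_cores_per_system(family):
    """Largest core count a single system of this family can reach."""
    f = XEON_FAMILIES[family]
    return f["max_cores"] * f["max_sockets"]

print(max_cores_per_system("Platinum 81xx"))  # 224 cores in an 8-socket box
print(max_cores_per_system("Bronze 31xx"))    # 16 cores in a 2-socket box
```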

That’s…a lot. And it only gets worse when you start to look at the entire SKU lineup with clocks, Turbo Speeds, cache size differences, etc. It’s easy to see why the simplicity argument that AMD made with EPYC is so attractive to an overwhelmed IT department.

01-20 copy.jpg

Two sub-categories exist with the T or F suffix: the former indicates a 10-year life cycle (with thermal-specific ratings), while the F indicates units that integrate the Omni-Path fabric on package. M models can address 1.5TB of system memory. The diagram above, which you should click to see in a larger view, shows the scope of the Xeon Scalable launch in a single slide. This release offers buyers flexibility, but at the expense of configuration complexity.

Continue reading about the new Intel Xeon Scalable Skylake-SP platform!

Microcode Bug Affects Intel Skylake and Kaby Lake CPUs

Subject: Processors | June 26, 2017 - 08:53 AM |
Tagged: xeon, Skylake, processor, pentium, microcode, kaby lake, Intel, errata, cpu, Core, 7th generation, 6th generation

A microcode bug affecting Intel Skylake and Kaby Lake processors with Hyper-Threading has been discovered by Debian developers (who describe it as "broken hyper-threading"), a month after this issue was detailed by Intel in errata updates back in May. The bug can cause the system to behave 'unpredictably' in certain situations.

Intel CPUs.jpg

"Under complex micro-architectural conditions, short loops of less than 64 instructions that use AH, BH, CH or DH registers as well as their corresponding wider register (eg RAX, EAX or AX for AH) may cause unpredictable system behaviour. This can only happen when both logical processors on the same physical processor are active."

Until motherboard vendors begin to address the bug with BIOS updates, the only way to prevent the possibility of this microcode error is to disable HyperThreading. From the report at The Register (source):

"The Debian advisory says affected users need to disable hyper-threading 'immediately' in their BIOS or UEFI settings, because the processors can 'dangerously misbehave when hyper-threading is enabled.' Symptoms can include 'application and system misbehaviour, data corruption, and data loss'."

The affected models are 6th and 7th-gen Intel processors with HyperThreading, which include Core CPUs as well as some Pentiums, and Xeon v5 and v6 processors.
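
Affected parts can be identified by the family/model pair reported in /proc/cpuinfo. The sketch below uses the model numbers listed in the Debian advisory for Skylake (78, 94) and Kaby Lake (142, 158); treat the list as an assumption and check the advisory for your exact stepping:

```python
# Sketch: flag potentially affected CPUs by family/model, as parsed from /proc/cpuinfo.
# Model numbers taken from the Debian advisory (assumed complete): Skylake 78/94, Kaby Lake 142/158.
AFFECTED_MODELS = {78, 94, 142, 158}  # applies to cpu family 6 only

def is_affected(cpuinfo_text):
    fields = {}
    for line in cpuinfo_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return (fields.get("cpu family") == "6"
            and int(fields.get("model", "0")) in AFFECTED_MODELS)

# Example: a Skylake i7-6700K reports family 6, model 94.
sample = "cpu family\t: 6\nmodel\t\t: 94\nmodel name\t: Intel(R) Core(TM) i7-6700K"
print(is_affected(sample))  # True
```

On a real system you would feed it `open("/proc/cpuinfo").read()`; remember this only tells you the CPU is in the affected family, not whether a fixed microcode is already loaded.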

Source: The Register

Intel Skylake-X and Skylake-SP Utilize Mesh Architecture for Intra-Chip Communication

Subject: Processors | June 15, 2017 - 04:00 PM |
Tagged: xeon scalable, xeon, skylake-x, skylake-sp, skylake-ep, ring, mesh, Intel

Though we are just days away from the release of Intel’s Core i9 family based on Skylake-X, and a bit further away from the Xeon Scalable Processor launch using the same fundamental architecture, Intel is sharing a bit of information on how the insides of this processor tick. Literally. One of the most significant changes to the new processor design comes in the form of a new mesh interconnect architecture that handles the communications between the on-chip logical areas.

Since the days of Nehalem-EX, Intel has utilized a ring bus architecture for processor design. The ring bus operated in a bi-directional, sequential method that cycled through various stops. At each stop, the control logic would determine if data was to be collected or deposited with that module. These ring bus stops are located at the memory controllers, CPU cores / caches, the PCI Express interface, the LLC slices, etc. This ring bus was fairly simple and easily expandable by simply adding more stops on the ring itself.

xeon-processor-5.jpg

However, over several generations, the ring bus has become quite large and unwieldy. Compare the ring bus from Nehalem above to the one for last year’s Xeon E5 v4 platform.

intel-xeon-e5-v4-block-diagram-hcc.jpg

The spike in core counts and other modules caused a ballooning of the ring that eventually turned into multiple rings, complicating the design. As you increase the stops on the ring bus, you also increase the physical latency of the messaging and data transfer, for which Intel compensated by increasing the bandwidth and clock speed of this interface. That comes at the expense of power and efficiency.

For an on-die interconnect to remain relevant, it needs to be flexible in bandwidth scaling, reduce latency, and remain energy efficient. With 28-core Xeon processors imminent, and new IO capabilities coming along with it, the time for the ring bus in this space is over.

Starting with the HEDT and Xeon products released this year, Intel will be using a new on-chip design called a mesh that Intel promises will offer higher bandwidth, lower latency, and improved power efficiency. As the name implies, the mesh architecture is one in which each node relays messages through the network between source and destination. Though I cannot share many of the details on performance characteristics just yet, Intel did share the following diagram.

intelmesh.png

As Intel indicates in its blog on the mesh announcements, this generic diagram “shows a representation of the mesh architecture where cores, on-chip cache banks, memory controllers, and I/O controllers are organized in rows and columns, with wires and switches connecting them at each intersection to allow for turns. By providing a more direct path than the prior ring architectures and many more pathways to eliminate bottlenecks, the mesh can operate at a lower frequency and voltage and can still deliver very high bandwidth and low latency. This results in improved performance and greater energy efficiency similar to a well-designed highway system that lets traffic flow at the optimal speed without congestion.”

The bi-directional mesh design allows a many-core design to offer lower node to node latency than the ring architecture could provide, and by adjusting the width of the interface, Intel can control bandwidth (and by relation frequency). Intel tells us that this can offer lower average latency without increasing power. Though it wasn’t specifically mentioned in this blog, the assumption is that because nothing is free, this has a slight die size cost to implement the more granular mesh network.
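
A toy hop-count model illustrates why the mesh scales better than the ring (illustrative only; real interconnects add switching delay, wire-length, and frequency effects this ignores): average distance on a bidirectional ring grows linearly with node count, while on a 2D mesh it grows only with the grid dimensions.

```python
# Average hop count between distinct nodes: bidirectional ring vs. 2D mesh,
# brute-forced over all ordered node pairs.
from itertools import product

def ring_avg_hops(n):
    # shortest path on a bidirectional ring is min of clockwise/counterclockwise
    dists = [min(abs(i - j), n - abs(i - j))
             for i in range(n) for j in range(n) if i != j]
    return sum(dists) / len(dists)

def mesh_avg_hops(rows, cols):
    # shortest path on a 2D mesh is the Manhattan distance
    nodes = list(product(range(rows), range(cols)))
    dists = [abs(a[0] - b[0]) + abs(a[1] - b[1])
             for a in nodes for b in nodes if a != b]
    return sum(dists) / len(dists)

for n, (r, c) in [(16, (4, 4)), (28, (4, 7))]:
    print(f"{n} nodes: ring {ring_avg_hops(n):.2f} avg hops, "
          f"mesh {mesh_avg_hops(r, c):.2f} avg hops")
```

At 16 nodes the mesh already averages roughly 2.7 hops against the ring's 4.3, and the gap widens at the 28-core scale this generation targets.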

Using a mesh architecture offers a couple of capabilities and also requires a few changes to the cache design. By dividing up the IO interfaces (think multiple PCI Express banks, or memory channels), Intel can provide better average access times to each core by intelligently spacing the location of those modules. Intel will also be breaking up the LLC into different segments which will share a “stop” on the network with a processor core. Rather than the previous design of the ring bus where the entirety of the LLC was accessed through a single stop, the LLC will perform as a divided system. However, Intel assures us that performance variability is not a concern:

"Negligible latency differences in accessing different cache banks allows software to treat the distributed cache banks as one large unified last level cache. As a result, application developers do not have to worry about variable latency in accessing different cache banks, nor do they need to optimize or recompile code to get significant performance boosts out of their applications."

There is a lot to dissect when it comes to this new mesh architecture for Xeon Scalable and Core i9 processors, including its overall effect on the LLC cache performance and how it might affect system memory or PCI Express performance. In theory, the integration of a mesh network-style interface could drastically improve the average latency in all cases and increase maximum memory bandwidth by giving more cores access to the memory bus sooner. But, it is also possible this increases maximum latency in some fringe cases.

Further testing awaits for us to find out!

Source: Intel

AMD Compares 1x 32-Core EPYC to 2x 12-Core Xeon E5s

Subject: Processors | May 17, 2017 - 04:05 AM |
Tagged: amd, EPYC, 32 core, 64 thread, Intel, Broadwell-E, xeon

AMD has formally announced their EPYC CPUs. While Sebastian covered the product specifications, AMD has also released performance claims against a pair of Intel’s Broadwell-E Xeons. While Intel’s E5-2650 v4 processors have an MSRP of around $1170 USD each, we don’t know how that price will compare to AMD’s offering. At first glance, pitting thirty-two cores against two twelve-core chips seems a bit unfair, although it could end up being a very fair comparison if the prices align.

amd-2017-epyc-ubuntucompile.jpg

Image Credit: Patrick Moorhead

Patrick Moorhead, who was at the event, tweeted out photos of a benchmark where Ubuntu was compiled with GCC. It looks like EPYC completed the build in just 33.7s while the Broadwell-E system took 37.2s (making AMD’s part roughly 9.5% faster). While this advantage, again, stems from having a third more cores, its value depends on how much AMD is going to charge for them versus Intel’s current pricing structure.
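
For clarity on where a "~9.5% faster" figure comes from with a lower-is-better benchmark, the same result can be stated two ways:

```python
# Two ways to state the same compile-time result (lower time is better).
epyc_s, xeon_s = 33.7, 37.2

time_saved = (xeon_s - epyc_s) / xeon_s  # fraction of the Xeon's time shaved off
speedup = xeon_s / epyc_s                # throughput ratio

print(f"EPYC finishes in {time_saved:.1%} less time")              # ~9.4% less time
print(f"equivalent to a {speedup:.3f}x speedup (~{speedup - 1:.1%} faster)")  # ~10.4% faster
```

The quoted ~9.5% matches the time-reduction framing; expressed as a throughput speedup it is closer to 10.4%.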

amd-2017-epyc-threads.jpg

Image Credit: Patrick Moorhead

This one chip also has 128 PCIe lanes, rather than Intel’s 80 total lanes spread across two chips.

ioSafe Launches 5-Bay Xeon-Based 'Server 5' Fireproof NAS

Subject: Storage | March 8, 2017 - 09:58 PM |
Tagged: xeon, raid, NAS, iosafe, fireproof

ioSafe, makers of excellent fireproof external storage devices and NAS units, has introduced what they call the 'Server 5':

Server5-front2-.jpg

The Server 5 is a completely different twist for an ioSafe NAS. While previous units have essentially been a fireproof drive cage surrounding Synology NAS hardware, the Server 5 is a full blown server: a Xeon D-1520 or D-1521 quad core with HyperThreading, 16GB of DDR4, and an Areca ARC-1225-8i hardware RAID controller (though only 5 ports are connected to the fireproof drive cage). ioSafe supports the Server 5 with Windows Server 2012 R2, or you can throw your preferred flavor of Linux on there. The 8-thread CPU and 16GB of RAM mean that you can have plenty of other services running straight off of this unit. It's not a particularly speedy CPU, but keep in mind that the Areca RAID card offloads all parity calculations from the host.

Server5-rear.jpg

Overall the Server 5 looks nearly identical to the ioSafe 1515+, but with an extra inch or two of height added to the bottom to accommodate the upgraded hardware. The Server 5 should prove to be a good way to keep local enterprise / business data protected and available immediately after a disaster. While only the hard drives will be protected in a fire, they can be popped out of the charred housing and shifted to a backup Server 5 or just migrated to another Areca-driven NAS system. For those wondering what a typical post-fire ioSafe looks like, here ya go:

1515+.jpg

Note how clean the cage and drives are (and yes, they all still work)!

Press blast appears after the break.

Source: ioSafe