An abundance of new processors
During its press conference at Computex 2017, Intel officially announced the upcoming release of an entire new family of HEDT (high-end desktop) processors, along with a new chipset and platform to power it. Though it has only been a year since Intel launched the Core i7-6950X, a Broadwell-E processor with 10 cores and 20 threads, it feels like much longer. At the time, Intel was accused of “sitting” on the market, offering only slight performance upgrades while raising prices on the segment, with a flagship CPU costing $1,700. Facing what can only be described as a scathing press circuit, coupled with a revived and aggressive competitor in AMD and its Ryzen product line, Intel and its executive teams have decided it’s time to take the enthusiast and high-end prosumer markets seriously once again.
Though the company doesn’t want to admit to anything publicly, it seems obvious that Intel feels threatened by the release of the Ryzen 7 product line. The Ryzen 7 1800X launched at $499 and offered 8 cores and 16 threads of processing, competing well in most tests against the likes of the Intel Core i7-6900K that sold for over $1000. Adding to the pressure was the announcement at AMD’s Financial Analyst Day that a new brand of processors called Threadripper would be coming this summer, offering up to 16 cores and 32 threads of processing for that same high-end consumer market. Even without pricing, clocks, or availability timeframes, it was clear that AMD was going to come after the HEDT market by adapting its EPYC server silicon for consumers, just as Intel does with Xeon.
The New Processors
Normally I would jump into the new platform, technologies and features added to the processors, or something like that before giving you the goods on the CPU specifications, but that’s not the mood we are in. Instead, let’s start with the table of nine (9!!) new products and work backwards.
| | Core i9-7980XE | Core i9-7960X | Core i9-7940X | Core i9-7920X | Core i9-7900X | Core i7-7820X | Core i7-7800X | Core i7-7740X | Core i5-7640X |
|---|---|---|---|---|---|---|---|---|---|
| Architecture | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Kaby Lake-X | Kaby Lake-X |
| Base Clock | ? | ? | ? | ? | 3.3 GHz | 3.6 GHz | 3.5 GHz | 4.3 GHz | 4.0 GHz |
| Turbo Boost 2.0 | ? | ? | ? | ? | 4.3 GHz | 4.3 GHz | 4.0 GHz | 4.5 GHz | 4.2 GHz |
| Turbo Boost Max 3.0 | ? | ? | ? | ? | 4.5 GHz | 4.5 GHz | N/A | N/A | N/A |
| Cache | 16.5MB (?) | 16.5MB (?) | 16.5MB (?) | 16.5MB (?) | 13.75MB | 11MB | 8.25MB | 8MB | 6MB |
| Memory | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Dual Channel | DDR4-2666 Dual Channel |
| TDP | 165 watts (?) | 165 watts (?) | 165 watts (?) | 165 watts (?) | 140 watts | 140 watts | 140 watts | 112 watts | 112 watts |
There is a lot to take in here. The most interesting points are that Intel plans to one-up AMD's Threadripper by offering an 18-core processor, but it also wants to change the perception of the X299-class platform by offering lower-priced, lower-core-count CPUs like the quad-core, non-HyperThreaded Core i5-7640X. We also see the first-ever use of the Core i9 branding.
Intel only provided detailed specifications up to the Core i9-7900X, a 10-core / 20-thread processor with a base clock of 3.3 GHz and a Turbo peak of 4.5 GHz using the new Turbo Boost Max Technology 3.0. It sports 13.75MB of cache thanks to an updated cache configuration, includes 44 lanes of PCIe 3.0 (an increase of 4 lanes over Broadwell-E), quad-channel DDR4 memory at up to 2666 MHz, and a 140 watt TDP, and it uses the new LGA2066 socket. Pricing for this CPU is set at $999, which is interesting for a couple of reasons. First, it is $700 less than the starting MSRP of the 10c/20t Core i7-6950X from one year ago, obviously a big plus. Second, there is now quite a ways UP the stack to climb, with the 18c/36t Core i9-7980XE coming in at a cool $1999.
The next CPU down the stack is compelling as well. The Core i7-7820X is the new 8-core / 16-thread HEDT option from Intel, with similar clock speeds to the 10-core above it, save the higher base clock. It has 11MB of L3 cache and 28 lanes of PCI Express (4 higher than Broadwell-E), but carries a $599 price tag. Compared to the 8-core Core i7-6900K, that is ~$400 lower, while the new Skylake-X part also brings a 700 MHz clock speed advantage. That’s huge, and it is a direct attack on the AMD Ryzen 7 1800X, which sells for $499 today and cut Intel off at the knees this March. In fact, the base clock of the Core i7-7820X is only 100 MHz lower than the maximum Turbo Boost clock of the Core i7-6900K!
Subject: Processors | May 30, 2017 - 03:00 AM | Ryan Shrout
Tagged: Intel, computex 2017, computex, coffee lake, 8th generation core
During its keynote at Computex today, Intel announced the high-performance Skylake-X and Kaby Lake-X platforms, with CPU core counts as high as 18 (!!), but also gave a brief mention of its upcoming Coffee Lake product, the 8th Generation Core family.
To quote directly from the Intel press information:
"As we move toward the next generation of computing, Intel also shared its commitment to deliver 8th generational Intel® Core™ processor-based devices by the holiday season, boasting more than 30 percent improvement in performance versus the 7th Gen Intel® Core™ processor."
That is quite the claim, but let's dive into the details.
Based on SYSmark* 2014 v1.5 (Windows Desktop Application Performance). Comparing 7th Gen i7-7500U, PL1=15W TDP, 2C4T, Turbo up to 3.5GHz, Memory: 2x4GB DDR4-2133, vs. Estimates for 8th Gen Core i7: PL1=15W TDP, 4C8T, Turbo up to 4 GHz, Memory: 2x4GB DDR4-2400, Storage: Intel® SSD, Windows* 10 RS2. Power policy assumptions: AC mode. Note: Kaby Lake U42 performance estimates are Pre-Silicon and are subject to change. Pre-Si projections have +/- 7% margin of error.
In a more readable format:
| Code name | Coffee Lake | Kaby Lake |
|---|---|---|
| Process Tech | 14nm Double Plus Good | 14nm+ |
| Base Clock | ? | 2.7 GHz |
| Turbo Clock | 4.0 GHz | 3.5 GHz |
| TDP | 15 watts | 15 watts |
| Memory Clock | 2400 MHz | 2133 MHz |
The 30% performance claim comes from both a doubling of core and thread count (2 to 4 cores) and a 500 MHz higher peak Turbo clock in the move from Kaby Lake to Coffee Lake. The testing was done using SYSmark 2014 v1.5, a benchmark that is very burst-centric and meant to mirror common productivity tasks. Even with a ~15% increase in peak clock speed and 2x the core/thread count, Intel is still able to maintain a 15 watt TDP with this CPU.
While we might at first expect much larger performance gains with those clock and core count differences, keep in mind that SYSmark as a test has never scaled in such a way. We don't yet know what other considerations might be in place for the 8th Generation Core processor platforms, and how they might affect performance for single- or multi-threaded applications.
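For a rough sense of how those spec changes relate to the claimed 30% gain, here is a back-of-the-envelope sketch. The figures come from Intel's benchmark footnote above; the scaling model itself is our own illustration, not Intel's SYSmark methodology:

```python
# Figures from Intel's benchmark footnote; the scaling model is an
# illustrative assumption, not Intel's actual SYSmark methodology.
kaby_turbo_ghz = 3.5     # 7th Gen Core i7-7500U, 2C/4T
coffee_turbo_ghz = 4.0   # 8th Gen Core i7 estimate, 4C/8T

clock_gain = coffee_turbo_ghz / kaby_turbo_ghz - 1   # ~14.3% higher peak clock
core_factor = 8 / 4                                  # 2x cores and threads

# A perfectly parallel workload could, in theory, see both gains multiply:
ideal_speedup = (1 + clock_gain) * core_factor       # ~2.29x

# SYSmark 2014 is burst-centric, so most of the extra cores sit idle;
# Intel's claim works out to a 1.30x overall gain.
claimed_speedup = 1.30

print(f"Clock gain: {clock_gain:.1%}")
print(f"Ideal fully threaded speedup: {ideal_speedup:.2f}x")
print(f"Claimed SYSmark speedup: {claimed_speedup:.2f}x")
```

The gap between the ~2.29x ceiling and the 1.30x claim is exactly the point made above: SYSmark rewards responsiveness and bursts, not sustained thread scaling.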
Intel has given us very little information today on the Coffee Lake designs, but it seems we'll know all about this platform before the end of the year.
Subject: Systems | May 29, 2017 - 07:01 AM | Scott Michaud
Tagged: RX 570, kaby lake, Intel, dell, AIO, amd
Dell has refreshed their XPS 27 All-in-One with two new models. Both have their GPU upgraded to the AMD RX 570 and their CPU refreshed to the Core i7-7700, which Dell highlights for its VR readiness. The difference between the two is that the lower-end model, $1999.99 USD, has a non-touch screen and a 2TB hard drive backed by a 32GB M.2 SATA SSD cache, while the higher-end model, $2649.99 USD, has a touch screen and a 512GB PCIe SSD, which gives it a quarter of the storage but much faster performance. Both ship with 16GB of RAM and can be configured up to 64GB.
About two weeks ago, Kyle Wiggers of Digital Trends had some hands-on time with the refreshed all-in-one. He liked the vibrant, 4K panel that was apparently calibrated to AdobeRGB (although I can’t find any listing for how much it covers). The purpose of that color space is to overlap with both non-HDR video and with the gamut of commercial printers, which is useful for multiple types of publishers.
The Dell XPS 27 All-in-one is available now.
Subject: General Tech, Memory, Storage | May 26, 2017 - 10:14 PM | Tim Verry
Tagged: XPoint, Intel, HPC, DIMM, 3D XPoint
Intel recently teased a bit of new information on its 3D XPoint DIMMs and held the first public demonstration of the technology at the SAP Sapphire conference, where SAP’s HANA in-memory data analytics software was shown working with the new “Intel persistent memory.” Slated to arrive in 2018, the new Intel DIMMs based on the 3D XPoint technology developed by Intel and Micron will work in systems alongside traditional DRAM to provide a pool of fast, low latency, high density nonvolatile storage, a middle ground between expensive DDR4 and cheaper NVMe SSDs and hard drives. Looking at the storage stack, density and latency both increase as storage sits further from the CPU; conversely, as storage and memory get closer to the processor, bandwidth increases, latency decreases, and cost per unit of storage rises. Intel is hoping to bridge the gap between system DRAM and PCI-E and SATA storage.
According to Intel, system RAM offers up 10 GB/s per channel and approximately 100 nanoseconds of latency. 3D XPoint DIMMs will offer 6 GB/s per channel and about 250 nanoseconds of latency. Below that are the 3D XPoint-based NVMe SSDs (e.g. Optane) on a PCI-E x4 bus, where they max out the bandwidth of the bus at ~3.2 GB/s with 10 microseconds of latency. Intel claims that non-XPoint NVMe NAND solid state drives have around 100 microseconds of latency, and of course it gets worse from there with NAND-based SATA SSDs or hard drives hanging off the SATA bus.
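Laid out side by side, those tiers look like this. The values are Intel's round numbers as reported above, not independent measurements:

```python
# Intel's reported bandwidth/latency figures for each storage tier.
# (Round numbers from Intel's slides, not independent measurements.)
tiers = [
    # (tier, bandwidth in GB/s, latency in nanoseconds)
    ("DDR4 DRAM (per channel)",        10.0,     100),
    ("3D XPoint DIMM (per channel)",    6.0,     250),
    ("Optane NVMe SSD (PCIe x4)",       3.2,  10_000),
    ("NAND NVMe SSD",                  None, 100_000),
]

dram_latency = tiers[0][2]
for name, bw, lat in tiers:
    bw_text = f"{bw:.1f} GB/s" if bw is not None else "n/a"
    print(f"{name:<30} {bw_text:>10}  {lat:>8,} ns  ({lat / dram_latency:,.0f}x DRAM)")
```

The takeaway is how the XPoint DIMM slots into the gap: 2.5x DRAM's latency, versus 100x for an Optane SSD and roughly 1,000x for NAND NVMe.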
Intel’s new XPoint DIMMs have persistent storage and will offer more capacity than will be possible and/or cost effective with DDR4 DRAM. In giving up some bandwidth and latency, enterprise users will be able to have a large pool of very fast storage for their databases and other latency- and bandwidth-sensitive workloads. Intel does note that there are security concerns with the XPoint DIMMs being nonvolatile, in that an attacker with physical access could easily pull the DIMM and walk away with the data (it is at least theoretically possible to grab some data from RAM as well, but it will be much easier to grab it from the XPoint sticks). Encryption and other security measures will need to be implemented to secure the data, both in use and at rest.
Interestingly, Intel is not positioning the XPoint DIMMs as a replacement for RAM, but as a supplement. RAM and XPoint DIMMs will be installed in different slots of the same system, with the DDR4 RAM used for the OS and system-critical applications while the XPoint pool stores the data applications work on, much like a traditional RAM disk, but without needing to load and save the data to a different medium for persistence, and offering a lot more gigabytes for the money.
While XPoint is set to arrive next year along with Cascade Lake Xeons, it will likely be a couple of years before the technology takes off. Adoption will require hardware and software support on workstations and servers, as well as developers willing to take advantage of it when writing their specialized applications. Fortunately, Intel started shipping the memory modules to its partners for testing earlier this year. It is an interesting technology, and the DIMM form factor with a direct CPU interface should really let 3D XPoint memory shine and reach its full potential. It will primarily be useful for the enterprise, scientific, and financial industries, where there is a huge need for faster, lower latency storage that can accommodate massive (multiple terabyte+) data sets that continue to get larger and more complex. It is a technology that likely will not trickle down to consumers for a long time, but I will be ready when it does. In the meantime, I am eager to see what kinds of things it will enable the big data companies and researchers to do! Intel claims it will be useful not only for supporting massive in-memory databases and accelerating HPC workloads, but also for things like virtualization, private clouds, and software-defined storage.
What are your thoughts on this new memory tier and the future of XPoint?
- Intel Has Started Shipping Optane Memory Modules
- Intel Optane Memory 32GB Review - Faster Than Lightning
- A Closer Look at Intel's Optane SSD DC P4800X Enterprise SSD Performance
Subject: General Tech | May 25, 2017 - 12:19 PM | Jeremy Hellstrom
Tagged: nvidia, Intel, Lake Crest, Knights Crest
DigiTimes has heard about Intel's plans to reveal its next hardware devoted to AI functionality at Computex. Lake Crest is Intel's deep learning hardware to support a new generation of neural-network-based computing, while Knights Crest, a result of Intel's $350 million purchase of the deep learning company Nervana, will be based on the familiar Xeon and Xeon Phi families of processors.
Jen-Hsun Huang will deliver a keynote about NVIDIA's current AI projects, along with the company's advancements in autonomous driving and deep learning, but we have not heard any juicy rumours about hardware announcements yet. Love him or hate him, Jen-Hsun's keynotes are never a waste of time to listen to.
"Nvidia and Intel are expected to unveil their latest plans on hardware platforms for artificial intelligence (AI) applications at Computex 2017, according to sources from the upstream supply chain."
Here is some more Tech News from around the web:
- Fat-thumbed dev slashes Samba security @ The Register
- Google now mingles everything you've bought with everywhere you've been @ The Register
- Windows 10 Creators Update Disappoints @ Hardware Secrets
- Intel pitches a Thunderbolt 3-for-all @ The Register
- Tt eSPORTS X COMFORT (XC500) Gaming Chair Review @ NikKTech
- 8 out of 10 cats fear statistics – AI doesn't have this problem @ The Register
Subject: Motherboards | May 24, 2017 - 03:45 PM | Jeremy Hellstrom
Tagged: msi, Z270 Krait Gaming, intel z270, Intel
For around $150 the MSI Krait Gaming motherboard is a decent deal for anyone building a computer around an LGA 1151 Intel processor. With three PCIe 3.0 x16 slots and an additional three PCIe 3.0 x1 slots, you have a lot of space for add-in cards. The storage is equally expansive, with six SATA 6Gbps ports as well as two M.2 slots for newer-generation SSDs, and there are a total of 16 USB ports split between 3.0 and 2.0, including a Type-C port. The overclocking potential is also impressive: [H]ard|OCP easily configured their Core i7-7700K to run at 5.1GHz with memory at 3600MHz. Overall the board is a great mix of price and features and well worth considering.
"While it is generally the flagship motherboards that grab the most attention, it's the midrange offerings that see the most sales. MSI's Z270 Krait Gaming motherboard is one of those bread and butter type offerings. It has everything the gamer needs without the unnecessary and expensive fluff."
Here are some more Motherboard articles from around the web:
- Biostar Z270GT8 @ techPowerUp
- ASUS ROG STRIX Z270I GAMING ITX Review @ Hardware Canucks
- Biostar Racing B350GT3 @ Modders-Inc
- ASRock AB350 Gaming K4 @ Modders-Inc
- ASUS ROG Strix X99 Gaming Review @ OCC
Subject: Mobile | May 23, 2017 - 12:25 PM | Ryan Shrout
Tagged: shrout research, play store, Intel, Chromebook, arm, Android
Please excuse the bit of self-promotion here. Oh, and disclaimer: Shrout Research and PC Perspective share management and ownership.
Based on testing done by Shrout Research and published in a paper this week, the introduction of Android applications on Chromebooks directly through the Play Store has added a new wrinkle to the platform selection decision. Android applications, unlike Chromebook-native apps, skew heavily toward the Android phone and tablet ecosystem, with de facto optimization for the ARM-based processors and platforms that represent 98%+ of that market. As a result, there are some noticeable and noteworthy differences when running Android apps on Chromebooks powered by an ARM SoC versus an Intel x86 SoC.
With that market dominance as common knowledge, all Android applications are developed targeting ARM hardware. Compilers and performance profiling software have been built and refined to improve the experience and efficiency of apps running on ARMv7 (32-bit) and ARMv8 (64-bit) architectures. This brings consumers an improved overall experience, including better application compatibility and better performance.
Using a pair of Acer Chromebooks, the R11 based on the Intel Celeron N3060 and the R13 based on the MediaTek MT8173C, testing was done to compare the performance, loading times, and overall stability of various Android Play Store applications. A range of application categories were addressed including games, social, and productivity.
Through 19 tested Android apps we found that the ARM-powered R13 Chromebook performed better than the Intel-powered R11 Chromebook in 9 of them. In 8 of the apps tested, both platforms performed equally well. In 2 of the test applications, the Intel-powered system performed better (Snapchat and Google Maps).
The paper also touches on power consumption, and between these two systems, the ARM-based MediaTek SoC used 11.5% less power to accomplish the same tasks.
Our testing indicates the Acer R13, using the ARM-powered processor, uses 11.5% less power on average in our 150 minutes of use through our education simulation. This is a significant margin and would indicate that with two systems equally configured, one with the MediaTek ARM processor and another with the Intel Celeron processor, the ARM-powered platform would get 11.5% additional usage time before requiring a charge. Based on typical Chromebook battery life (11 hours), the ARM system would see an additional 75 minutes of usability.
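The runtime figure is straightforward arithmetic on the quoted numbers; here is a quick sketch using the typical 11-hour battery life cited above (the inverse-scaling line is our own refinement, not a figure from the paper):

```python
# Sanity-checking the battery-life math from the quoted figures.
battery_min = 11 * 60   # typical Chromebook battery life, in minutes
power_savings = 0.115   # ARM system used 11.5% less power on average

# First-order estimate (what the ~75 minute figure corresponds to):
extra_first_order = battery_min * power_savings            # ~76 minutes

# Exact scaling: runtime is inversely proportional to power draw,
# so the true gain is slightly larger than the first-order figure.
extra_exact = battery_min * (1 / (1 - power_savings) - 1)  # ~86 minutes

print(f"First-order extra runtime: {extra_first_order:.0f} min")
print(f"Exact extra runtime:       {extra_exact:.0f} min")
```

The paper's ~75 minute figure matches the first-order estimate; treating runtime as inversely proportional to power draw would put the gain slightly higher still.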
There is a lot more detail in the white paper on ShroutResearch.com, including a discussion about the impact that the addition of Android applications on Chromebooks might have for the market as whole:
...bringing a vast library of applications from the smart phone market to the Chromebook would create a combination of capabilities that would turn the computing spectrum sideways. This move alleviates the sustained notion that Chromebooks are connected-only devices and gives an instant collection of usable offline applications and tools to the market.
Subject: General Tech | May 17, 2017 - 12:30 PM | Jeremy Hellstrom
Tagged: Intel, amd, rumour, release dates, ryzen, skylake-x, kaby lake x, Threadripper, X399, coffee lake
DigiTimes has posted an article covering the probable launch dates of AMD's new CPUs and GPUs, as well as Intel's reaction to the releases. Not all of these dates are confirmed, but they are worth noting, as these rumours are often close to the dates eventually announced. Naples will be first, with the server chips launching at the end of June, but that is just the start. July is the big month for AMD, with the lower-end Ryzen 3 chips hitting the market as well as the newly announced 16-core Threadrippers and the X399 chipset. That will also be the month we see Vega Frontier Edition graphics cards arrive.
Intel's Basin Falls platform (Skylake-X and Kaby Lake-X, along with the associated X299 chipset) is still scheduled for a Computex reveal and a late June or early August release. Coffee Lake is getting pulled ahead, however; its launch has been moved up to late August instead of the beginning of next year.
Even with Intel's counters, AMD's balance sheet is likely to be looking better and better as the year goes on which is great news for everyone ... except perhaps Intel and NVIDIA.
"Demand for AMD's Ryzen 7- and Ryzen 5-series CPU products has continued rising, which may allow the chipmaker to narrow its losses to below US$50 million for the second quarter of 2017. With Intel also rumored to pay licensing fees to AMD for its GPUs, some market watchers believe AMD may turn profitable in the second quarter or in the third."
Here is some more Tech News from around the web:
- Spitballing the performance of AMD's Vega Frontier Edition graphics card @ The Tech Report
- AMD Financial Analyst Day Lisa Su Presentation @ [H]ard|OCP
- AMD Financial Analyst Day Raja Koduri Presentation @ [H]ard|OCP
- Microsoft goes all Sean Spicer when we ask about WannaCry XP patching @ The Inquirer
- Qualcomm Sues Apple Contract Manufacturers @ Slashdot
- HPE shows off ARM-powered 'The Machine' prototype with 160TB memory @ The Inquirer
- Monoprice Releases Their Mini Delta Printer (On Indiegogo) @ Hack a Day
- Chrome on Windows has credential theft bug @ The Register
- Bell Canada hacked: 2m account details swiped by mystery miscreants @ The Register
- Why Microsoft's Windows game plan makes us WannaCry @ The Register
Subject: Processors | May 17, 2017 - 04:05 AM | Scott Michaud
Tagged: amd, EPYC, 32 core, 64 thread, Intel, Broadwell-E, xeon
AMD has formally announced their EPYC CPUs. While Sebastian covered the product specifications, AMD has also released performance claims against a pair of Intel’s Broadwell-E Xeons. While Intel’s E5-2650 v4 processors have an MSRP of around $1170 USD each, we don’t know how that price will compare to AMD’s offering. At first glance, pitting thirty-two cores against two twelve-core chips seems a bit unfair, although it could end up being a very fair comparison if the prices align.
Image Credit: Patrick Moorhead
Patrick Moorhead, who was at the event, tweeted out photos of a benchmark in which Ubuntu was compiled with GCC. It looks like EPYC completed the run in just 33.7s while the Broadwell-E system took 37.2s, a ~9.4% reduction in compile time (roughly 10% faster). While this advantage, again, stems partly from having a third more cores, its value depends on how much AMD is going to charge you for them versus Intel’s current pricing structure.
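For what the photographed times actually work out to:

```python
# Working out the percentages from the photographed compile times.
epyc_s = 33.7        # 32-core EPYC, single socket
broadwell_s = 37.2   # pair of 12-core Xeon E5-2650 v4

time_reduction = 1 - epyc_s / broadwell_s   # ~9.4% less wall-clock time
speedup = broadwell_s / epyc_s - 1          # ~10.4% higher throughput

print(f"Compile time reduced by {time_reduction:.1%}")
print(f"EPYC finished {speedup:.1%} faster")
```

Note the two percentages differ slightly depending on which system is used as the baseline, which is why "faster" figures for benchmarks like this are worth double-checking.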
Image Credit: Patrick Moorhead
This one chip also has 128 PCIe lanes, rather than Intel’s 80 total lanes spread across two chips.
Application Profiling Tells the Story
It should come as no surprise to anyone that has been paying attention over the last two months that the latest AMD Ryzen processors and architecture are getting a lot of attention. Ryzen 7 launched with a $499 part that bested Intel's $1000 CPU in heavily threaded applications, and Ryzen 5 launched with great value as well, positioning a 6-core/12-thread CPU against quad-core parts from the competition. But part of the story that permeated both the Ryzen 7 and Ryzen 5 processor launches was the situation surrounding gaming performance, in particular 1080p gaming, and the surprising delta that we see in some games.
Our team has done quite a bit of research and testing on this topic. This included a detailed look at the first asserted reason for the performance gap, the Windows 10 scheduler. Our summary there was that the scheduler was working as expected and that minimal difference was seen when moving between different power modes. We also talked directly with AMD to find out its then current stance on the results, backing up our claims on the scheduler and presenting a better outlook for gaming going forward. When AMD wanted to test a new custom Windows 10 power profile to help improve performance in some cases, we took part in that too. In late March we saw the first gaming performance update occur courtesy of Ashes of the Singularity: Escalation, where an engine update to utilize more threads resulted in as much as a 31% increase in average frame rate.
As a part of that dissection of the Windows 10 scheduler story, we also discovered interesting data about the CCX construction and how the two modules on the 1800X communicate. The result was significantly longer thread-to-thread latencies than we had seen on any platform before, a consequence of the fabric implementation that AMD integrated with the Zen architecture.
This has led me down another rabbit hole recently: wondering whether we could further compartmentalize the gaming performance of the Ryzen processors using memory latency. As I showed in my Ryzen 5 review, memory frequency and throughput directly correlate with gaming performance improvements, on the order of 14% in some cases. But what about looking at memory latency alone?