Intel today made a number of product and strategy announcements that are all coordinated to continue the company’s ongoing “data-centric transformation.” Building off of recent events such as last August’s Data-Centric Innovation Summit but with roots spanning back years, today’s announcements further solidify Intel’s new strategy: a shift from the “PC-centric” model that for decades drove hundreds of billions of dollars in revenue but is now on the decline, to the rapidly growing and ever changing “data-centric” world of cloud computing, machine learning, artificial intelligence, automated vehicles, Internet-connected devices, and the seemingly unending growth of data that all of these areas generate.
Rather than abandon its PC roots in this transition, Intel’s plan is to leverage its existing technologies and market share advantages in order to attack the data-centric needs of its customers from all angles. Intel sees a huge market opportunity when considering the range of requirements “from edge to cloud and back:” that is, addressing the needs of everything from IoT devices, to wireless and cellular networking, to networked storage, to powerful data center and cloud servers, and all of the processing, analysis, and security that goes with it.
Intel’s goal, at least as I interpret it, is to be a ‘one stop shop’ for businesses and organizations of all sizes who are transitioning alongside Intel to data-centric business models and workloads. Sure, Intel will be happy to continue selling you Xeon-based servers and workstations, but they can also address your networking needs with new 100Gbps Ethernet solutions, speed up your storage-speed-limited workloads with Optane SSDs, increase performance and reduce costs for memory-dependent workloads by supplementing DRAM with Optane, and address specialized workloads with highly optimized Xeon SKUs and FPGAs. In short, Intel isn’t the company that makes your processor or server, it’s now (or rather wants to be) the platform that can handle your needs from end-to-end. Or, as the company’s recent slogan states: “move faster, store more, process everything.”
Shopping for a CPU in 2018 has been a bit of a moving target. Between the launch of AMD's Ryzen 2000 series processors at the beginning of the year, new AMD Threadripper X and WX-series products, and a consumer CPU refresh from Intel last month, it's been difficult to keep track of it all.
Now we are rounding out 2018 with new products for the last remaining platform that hasn't seen a refresh this year, Intel's Core X-series of processors, namely the Intel Core i9-9980XE.
Join us, as we talk about Intel's new 9th-generation Core X-series processors, and the current landscape of HEDT desktop platforms.
| | Core i9-9980XE | Core i9-7980XE | Threadripper 2990WX | Threadripper 2970WX | Threadripper 2950X | Threadripper 2920X |
|---|---|---|---|---|---|---|
| Base Clock | 3.0 GHz | 2.6 GHz | 3.0 GHz | 3.0 GHz | 3.5 GHz | 3.5 GHz |
| Boost Clock | 4.4 GHz | 4.2 GHz | 4.2 GHz | 4.2 GHz | 4.4 GHz | 4.3 GHz |
| L3 Cache | 24.75 MB | 24.75 MB | 64 MB | 64 MB | 32 MB | 32 MB |
| Memory Support | DDR4-2666 (Quad-Channel) | DDR4-2666 (Quad-Channel) | DDR4-2933 (Quad-Channel) | DDR4-2933 (Quad-Channel) | DDR4-2933 (Quad-Channel) | DDR4-2933 (Quad-Channel) |
| TDP | 165 W | 165 W | 250 W | 250 W | 180 W | 180 W |
A quick refresher and Dynamic Local Mode
In general, the rollout of AMD's second-generation Ryzen Threadripper processors has been a bit unconventional. While the full lineup was announced back in August, there has been a staggered release period.
Later in August, we first got our hands on the Threadripper 2950X and 2990WX, the 16 and 32-core variants. Even though both of these parts were reviewed at the same time, the 2990WX was available first, with the 2950X coming a few weeks later.
Now, more than two months later, we are taking a look at the 12-core Threadripper 2920X and the 24-core Threadripper 2970WX, which were announced alongside the Threadripper parts that have already been shipping for quite a while now.
Will these new Threadripper processors be worth the wait?
| | Threadripper 2990WX | Threadripper 2970WX | Threadripper 2950X | Threadripper 2920X | Core i9-7980XE | Core i9-9900K |
|---|---|---|---|---|---|---|
| Architecture | Zen+ | Zen+ | Zen+ | Zen+ | Skylake-X | Coffee Lake Refresh |
| Base Clock | 3.0 GHz | 3.0 GHz | 3.5 GHz | 3.5 GHz | 2.6 GHz | 3.6 GHz |
| Boost Clock | 4.2 GHz | 4.2 GHz | 4.4 GHz | 4.3 GHz | 4.2 GHz | 5.0 GHz |
| L3 Cache | 64 MB | 64 MB | 32 MB | 32 MB | 24.75 MB | 16 MB |
| Memory Support | DDR4-2933 (Quad-Channel) | DDR4-2933 (Quad-Channel) | DDR4-2933 (Quad-Channel) | DDR4-2933 (Quad-Channel) | DDR4-2666 (Quad-Channel) | DDR4-2666 (Dual-Channel) |
| TDP | 250 W | 250 W | 180 W | 180 W | 165 W | 95 W |
| Price (MSRP) | $1799 | $1299 | $899 | $649 | $1999 | $499 ($580 street) |
One of the most radical changes to happen in the last two years in the PC hardware space has to be the launch of AMD's Ryzen processors. Despite the failure that was the FX-series with their Bulldozer architecture, AMD managed to shock the industry with the performance of their next generation Zen architecture.
After generations upon generations of consumer processors topping out at four cores going back to the Core 2 days, Intel finally launched their first 6-core processor for consumers with the 8700K almost exactly a year ago.
AMD continued to persevere with the launch of the second-generation Ryzen 7 2700X earlier this year, which managed to narrow the single-threaded performance gap between AMD and Intel.
Still, this performance gap existed, leaving room for what Intel is launching today, their first 8-core mainstream consumer processor, the Core i9-9900K. Finally having core count parity with AMD, and still holding an advantage in single-threaded performance, this launch has garnered a lot of attention.
| | Core i9-9900K | Ryzen 7 2700X | Threadripper 2950X | Core i9-7900X | Core i7-8700K | Core i7-7700K |
|---|---|---|---|---|---|---|
| Architecture | Coffee Lake Refresh | Zen+ | Zen+ | Skylake-X | Coffee Lake | Kaby Lake |
| Base Clock | 3.6 GHz | 3.7 GHz | 3.5 GHz | 3.3 GHz | 3.7 GHz | 4.2 GHz |
| Boost Clock | 5.0 GHz | 4.3 GHz | 4.4 GHz | 4.3 GHz | 4.7 GHz | 4.5 GHz |
| Memory Support | DDR4-2666 (Dual-Channel) | DDR4-2933 (Dual-Channel) | DDR4-2933 (Quad-Channel) | DDR4-2666 (Quad-Channel) | DDR4-2666 (Dual-Channel) | DDR4-2400 (Dual-Channel) |
| TDP | 95 W | 105 W | 180 W | 140 W | 95 W | 91 W |
Retesting the 2990WX
Earlier today, NVIDIA released version 399.24 of their GeForce drivers for Windows, citing Game Ready support for some newly released games including Shadow of the Tomb Raider, the Call of Duty: Black Ops 4 Blackout beta, and the Assetto Corsa Competizione early access release.
While this in and of itself is a normal event, we shortly started to get some tips from readers about an interesting bug fix found in NVIDIA's release notes for this specific driver revision.
Specifically addressing performance differences between 16-core/32-thread processors and 32-core/64-thread processors, this patched issue immediately recalled our experience benchmarking the AMD Ryzen Threadripper 2990WX back in August, where we saw some games running with frame rates around 50% lower than on the 16-core Threadripper 2950X.
This particular patch note led us to update our Ryzen Threadripper 2990WX test platform to this latest NVIDIA driver release and see if there were any noticeable changes in performance.
The full testbed configuration is listed below:
| Test System Setup | |
|---|---|
| CPU | AMD Ryzen Threadripper 2990WX |
| Motherboard | ASUS ROG Zenith Extreme - BIOS 1304 |
| Memory | 16GB Corsair Vengeance DDR4-3200 (operating at DDR4-2933) |
| Storage | Corsair Neutron XTi 480 SSD |
| Graphics Card | NVIDIA GeForce GTX 1080 Ti 11GB |
| Graphics Drivers | NVIDIA 398.26 and 399.24 |
| Power Supply | Corsair RM1000x |
| Operating System | Windows 10 Pro x64 RS4 (17134.165) |
Included at the end of this article are the full results from our entire suite of game benchmarks from our CPU testbed, but first, let's take a look at some of the games that provided particularly bad issues with the 2990WX previously.
The interesting data points for this testing are the 2990WX scores on both driver revisions (398.26, which we tested across every CPU, and the new 399.24), as well as the results from the 1/4-core compatibility mode and from the Ryzen Threadripper 2950X. From the wording of the patch notes, we would expect gaming performance between the 16-core 2950X and the 32-core 2990WX to be very similar.
Grand Theft Auto V
GTA V was previously one of the worst offenders in our original 2990WX testing, with the frame rate almost halving compared to the 2950X.
However, with the newest GeForce driver update, we see this gap shrinking to around a 20% difference.
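For clarity on how we express these gaps, the arithmetic is a simple relative difference; here is a minimal sketch, using hypothetical frame rates rather than our measured GTA V numbers:

```python
# Minimal sketch of how a frame-rate deficit is expressed as a percentage.
# The frame rates below are hypothetical, not our measured results.

def percent_deficit(slower_fps: float, faster_fps: float) -> float:
    """How much slower (in percent) the first result is versus the second."""
    return (faster_fps - slower_fps) / faster_fps * 100.0

# Old driver: "almost halving" relative to the 2950X
print(percent_deficit(51.0, 100.0))  # 49.0

# New driver: "around a 20% difference"
print(percent_deficit(80.0, 100.0))  # 20.0
```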
Widening the Offerings
Today, we are talking about something that would have seemed impossible just a few short years ago: a 32-core processor for consumers. While I realize that talking about the history of computer hardware can be considered superfluous in a processor review, I think it's important to understand the context of why this is such a momentous shift for the industry.
May 2016 marked the launch of what was then the highest core count consumer processor ever seen, the Intel Core i7-6950X. At 10 cores and 20 threads, the 6950X was easily the highest performing consumer CPU in multi-threaded tasks but came at a staggering $1700 price tag. In what we will likely be able to look back on as the peak of Intel's sole dominance of the x86 CPU space, it was an impossible product to recommend to almost any consumer.
Just over a year later saw the launch of Skylake-X with the Intel Core i9-7900X. Retaining the same core count as the 6950X, the 7900X would have been relatively unremarkable on its own. However, a $700 price drop and the future of upcoming 12, 14, 16, and 18-core processors on this new X299 platform showed an aggressive new course for Intel's high-end desktop (HEDT) platform.
This aggressiveness was brought on by the success of AMD's Ryzen platform, and the then upcoming Threadripper platform. Promising up to 16 cores/32 threads, and 64 lanes of PCI Express connectivity, it was clear that Intel would for the first time have a competitor on their hands in the HEDT space that they created back with the Core i7-920.
Fast forward another year, and we have the release of the 2nd-generation Threadripper. Promising the same advancements we saw with the Ryzen 7 2700X, AMD is pushing Threadripper into an even more competitive position with higher performance and lower cost.
Will Threadripper finally topple Intel from their high-end desktop throne?
A Weekend of Misadventure
Last Friday marked the release of the Intel Core i7-8086K to consumers through retail channels like Amazon, Newegg, and Microcenter. Announced just earlier that week at Computex, the i7-8086K is essentially an i7-8700K running slightly higher clock speeds, and is meant as a limited-edition item to commemorate the 40th anniversary of Intel's 8086 processor, which marked the beginning of the x86 architecture.
Eager to test this new CPU, I picked one up from our local Microcenter on Friday evening, and plugged it into our Coffee Lake CPU testbed, powered by a Gigabyte Z370 Aorus Gaming 7 motherboard (updated to the latest BIOS), let my first pass of automated CPU benchmarks run, and went off with the rest of my evening.
On Saturday, when I came back to look at the results, they seemed mediocre at best, with the i7-8086K trading blows with the i7-8700K. While the extra 300 MHz of clock speed seemed like it should provide more of a benefit than I was seeing, it wasn't entirely unexpected that performance might not be spectacularly higher than the i7-8700K's, so I continued to run through the rest of our standard CPU benchmarking suite, as well as our CPU gaming benchmarks.
Finally, looking at all of the data together, it appeared there was no change from the i7-8700K to the i7-8086K, leading me to do some more digging.
Equipped with Intel’s Extreme Tuning Utility, I began to measure the clock speeds during several benchmarks.
Much to my surprise, even in a purely single-threaded workload such as Cinebench R15 in single-core mode, the processor wasn't getting close to its 5.0 GHz single-core Turbo Boost frequency; in fact, I never saw it get above 4.5 GHz. We corroborated these findings with another piece of CPU monitoring software, HWiNFO64.
As you can see in the screenshot from XTU, the processor was sitting at a cool 48°C while this was going on, and no other alerts, such as motherboard power delivery or current limit throttling, were flagged during our testing.
Moving to another motherboard, the ASUS Strix Z370-H Gaming, again on the latest UEFI release, we saw the same behavior.
So far, we have been unable to get this processor to operate at the advertised 5.0GHz Turbo Boost frequency, on a multitude of different hardware and software setups.
However, if we manually overclock the processor, we can get an all-core frequency of 5.1GHz, although with a temperature around 85C.
At this point, we are left puzzled and disappointed by the launch of the i7-8086K. This is the same hardware and software setup we used for all of our CPU benchmarking for the recent Ryzen 7 2700X review, with no issues. We even tried a fresh, fully updated Windows install on a separate SSD, to help eliminate any potential for weird software issues.
Jeff at The Tech Report used the same Gigabyte Z370 Aorus Gaming 7 motherboard as us, and while he didn’t see great performance overall, you can see explicit scaling in pure single-threaded workloads like Cinebench in his review.
As far as the ASUS motherboard we also tried is concerned, the i7-8086K is listed on ASUS’ CPU compatibility list for UEFI 1301 (which we are running), so it seems there should be no issue.
This morning, the i7-8086K we ordered on Amazon showed up, and did the exact same thing, in both test setups.
To be fair, based on the reviews that have popped up thus far, including The Tech Report's, even when things are configured correctly the resulting performance doesn't appear to be worth the extra cost.
What was meant to be a celebration of Intel's 40 years of the x86 architecture seems more like a rushed release than a fully baked product. Remember, we bought this processor directly from a retail outlet with no intervention from Intel. Without the proper BIOS-level support in place at retail, this could be a more widespread issue affecting normal consumers building machines with the i7-8086K.
A Year Later
Despite what might be considered an overall slump in enthusiast PC building due to record low GPU availability and sky-high memory prices, 2017 was one of the most exciting and competitive years in recent history when it comes to CPU innovation. On the desktop side alone, we saw the launch of AMD's new Zen CPU architecture with the Ryzen 1000 series of parts starting last March; we also saw new HEDT platforms from both Intel and AMD, and Intel's first 6-core mainstream CPUs.
Although the timeline doesn't quite work out for Ryzen to have affected the engineering-side of Intel's decision to release a 6-core desktop processor, it's evident AMD's pressure changed Intel's pricing and release schedule.
With little desktop competition, it's likely that the i7-8700K would have been a more expensive part released later. It's also likely that Coffee Lake would have seen a full-stack product launch in early 2018, as opposed to the staggered launch we experienced, where only one compatible chipset and a subset of CPUs were available for months.
AMD and Ryzen have put significant pressure on Intel to remain competitive, which is good for the industry as a whole.
We're now just over a year past AMD's first Ryzen processor releases, and looking at the first appearance of the Pinnacle Ridge CPUs. Launching today are the Ryzen 7 2700X and 2700, and the Ryzen 5 2600X and 2600 processors. Can AMD keep moving the needle forward in the CPU space? Let's take a look.
Announced at Intel's Developer Forum in 2012, and launched later that year, the Next Unit of Computing (NUC) project was initially a bit confusing to the enthusiast PC press. In a market that appeared to be discarding traditional desktops in favor of notebooks, it seemed a bit odd to launch a product that still depended on a monitor, mouse, and keyboard, yet didn't provide any more computing power.
Despite this criticism, the NUC lineup has rapidly expanded over the years, seeing success in areas such as digital signage and enterprise environments. However, the enthusiast PC market has mostly eluded the lure of the NUC.
Intel's Skylake-based Skull Canyon NUC was the company's first attempt to cater to the enthusiast market, straying slightly from the traditional 4-in x 4-in form factor and adopting their best-ever integrated graphics solution in the Iris Pro. Additionally, the ability to connect external GPUs via Thunderbolt 3 meant Skull Canyon offered more of a focus on high-end PC graphics.
However, Skull Canyon mostly fell on deaf ears among hardcore PC users, and it seemed that Intel lacked the proper solution to make a "gaming-focused" NUC device—until now.
Announced at CES 2018, the lengthily named 8th Gen Intel® Core™ processors with Radeon™ RX Vega M Graphics (henceforth referred to by the code name, Kaby Lake-G) mark a new direction for Intel. By partnering with one of the leaders in high-end PC graphics, AMD, Intel can now pair their processors with graphics capable of playing modern games at high resolutions and frame rates.
The first product to launch using the new Kaby Lake-G family of processors is Intel's own NUC, the NUC8i7HVK (Hades Canyon). Will the marriage of Intel and AMD finally provide a NUC capable of at least moderate gaming? Let's dig a bit deeper and find out.
It's clear by now that AMD's latest CPU releases, the Ryzen 3 2200G and the Ryzen 5 2400G, are compelling products. We've already taken a look at them in our initial review, as well as investigated how memory speed affects the graphics performance of the integrated GPU, but it seemed there was something missing.
Recently, it's been painfully clear that GPUs excel at more than just graphics rendering. With the rise of cryptocurrency mining, OpenCL and CUDA performance are as important as ever.
Cryptocurrency mining certainly isn't the only application where having a powerful GPU can help system performance. We set out to see how much of an advantage the Radeon Vega 11 graphics in the Ryzen 5 2400G provided over the significantly less powerful UHD 630 graphics in the Intel i5-8400.
| Test System Setup | |
|---|---|
| CPU | AMD Ryzen 5 2400G / Intel Core i5-8400 |
| Motherboard | Gigabyte AB350N-Gaming WiFi / ASUS STRIX Z370-E Gaming |
| Memory | 2 x 8GB G.SKILL FlareX DDR4-3200 (all memory running at 3200 MHz) |
| Storage | Corsair Neutron XTi 480 SSD |
| Graphics | AMD Radeon Vega 11 Graphics / Intel UHD 630 Graphics |
| Graphics Drivers | AMD 17.40.3701 |
| Power Supply | Corsair RM1000x |
| Operating System | Windows 10 Pro x64 RS3 |
Before we take a look at some real-world examples of where a powerful GPU can be utilized, let's look at the relative power of the Vega 11 graphics on the Ryzen 5 2400G compared to the UHD 630 graphics on the Intel i5-8400.
SiSoft Sandra is a suite of benchmarks covering a wide array of system hardware and functionality, including an extensive range of GPGPU tests, which we are looking at today.
Comparing the raw shader performance of the Ryzen 5 2400G and the Intel i5-8400 provides a clear snapshot of what we are dealing with. In every precision category, the Vega 11 graphics in the AMD part are significantly more powerful than the Intel UHD 630 graphics. This all combines to provide a 175% increase in aggregate shader performance over Intel for the AMD part.
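To put that 175% figure in perspective: an increase of 175% means the Vega 11 result is 2.75x the UHD 630 result, not 1.75x. A quick sketch with placeholder scores (Sandra's actual units don't matter for the ratio):

```python
# Sanity-checking what "a 175% increase" means in ratio terms.
# Scores are hypothetical placeholders; only the ratio is meaningful.

def percent_increase(new_score: float, old_score: float) -> float:
    """Percent increase of new_score over old_score."""
    return (new_score - old_score) / old_score * 100.0

uhd_630 = 100.0   # hypothetical aggregate shader score
vega_11 = 275.0   # 2.75x the Intel result

print(percent_increase(vega_11, uhd_630))  # 175.0
```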
Now that we've taken a look at the theoretical power of these GPUs, let's see how they perform in real-world applications.
Memory speed is not a factor that the average gamer thinks about when building their PC. For the most part, memory performance hasn't had much of an effect on modern processors running high-speed memory such as DDR3 and DDR4.
With the launch of AMD's Ryzen processors last year, a platform emerged that was more sensitive to memory speed. By running Ryzen processors with higher-frequency, lower-latency memory, users can see significant performance improvements, especially in 1080p gaming scenarios.
However, the Ryzen processors are not the only ones to exhibit this behavior.
Gaming on integrated GPUs is a perfect example of a memory-starved situation. Take, for instance, the new AMD Ryzen 5 2400G and its Vega-based GPU cores. In a full Vega 56 or 64, these Vega cores are fed by blazingly fast HBM2 memory. However, due to constraints such as die space and cost, this processor does not integrate HBM.
Instead, the CPU portion and the graphics portion of the APU must depend on the same pool of DDR4 system memory. DDR4 is significantly slower than the memory traditionally found on graphics cards, such as GDDR5 or HBM. As a result, APU performance is usually memory-limited to some extent.
In the past, we've done memory speed testing with AMD's older APUs; however, with the launch of the new Ryzen- and Vega-based R3 2200G and R5 2400G, we decided to take another look at this topic.
For our testing, we are running the Ryzen 5 2400G at three different memory speeds: 2400 MHz, 2933 MHz, and 3200 MHz. While the maximum supported JEDEC memory speed for the R5 2400G is DDR4-2933, the memory provided by AMD for our processor review supports overclocking to 3200 MHz just fine.
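The reason these memory clocks matter so much for an APU comes down to raw bandwidth. Theoretical peak bandwidth for DDR4 is the transfer rate times the 8-byte channel width times the channel count; a back-of-the-envelope sketch:

```python
# Back-of-the-envelope peak bandwidth for dual-channel DDR4.
# "DDR4-3200" denotes 3200 MT/s (mega-transfers per second), and each
# DDR4 channel is 64 bits (8 bytes) wide.

def ddr4_peak_gbs(transfers_mt_s: int, channels: int = 2) -> float:
    """Theoretical peak bandwidth in GB/s (decimal gigabytes)."""
    bytes_per_transfer = 8  # 64-bit channel width
    return transfers_mt_s * bytes_per_transfer * channels / 1000.0

for speed in (2400, 2933, 3200):
    print(f"DDR4-{speed} dual-channel: {ddr4_peak_gbs(speed):.1f} GB/s")
```

Even at DDR4-3200, the APU's roughly 51 GB/s theoretical peak is an order of magnitude below the ~410 GB/s that HBM2 feeds a full Vega 56, which is why these memory clocks move the needle here.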
Raven Ridge Desktop
As we approach the one-year anniversary of the release of the Ryzen family of processors, the full breadth of the releases AMD put forth inside of 12 months is more apparent than ever. Though I feel like I have written summations of 2017 for AMD numerous times, it still feels like an impressive accomplishment as I reflect for today’s review. Starting with the Ryzen 7 family of processors targeting enthusiasts, AMD iterated through Ryzen 5, Ryzen 3, Ryzen Threadripper, Ryzen Pro, EPYC, and Ryzen Mobile.
Today, though it is labeled as a 2000-series part, we are completing what most would consider the first full round of the Ryzen family. As the first consumer desktop APU (AMD's term for a processor with tightly integrated on-die graphics), the Ryzen 5 2400G and the Ryzen 3 2200G look very much like the Ryzen parts before them and like the Ryzen mobile APUs that we previously looked at in notebook form. In fact, from an architectural standpoint, these are the same designs.
Before diving into the hardware specifications and details, I think it is worth discussing the opportunity that AMD has with the Ryzen with Vega graphics desktop part. By most estimates, more than 30% of the desktop PCs sold around the world ship without a discrete graphics card installed. This means they depend on the integrated graphics of the processor to handle general compute and any and all gaming that might happen locally. Until today, AMD has been unable to address that market with its current family of Ryzen processors, as they require discrete graphics solutions.
While most of our readers fall into the camp of not just using a discrete solution but requiring one for gaming purposes, there are a lot of locales and situations where the Ryzen APU is going to provide more than enough graphics horsepower. The emerging markets in China and India, for example, are regularly using low-power systems with integrated graphics, often based on Intel HD Graphics or previous generation AMD solutions. These gamers and consumers will see dramatic increases in performance with the Zen + Vega solution that today’s processor releases utilize.
Let’s not forget about secondary systems, small form factor designs, and PCs designed for the entertainment center as possible outlets for Ryzen APUs, even for the most hardcore of enthusiasts. Mom or Dad need a new PC for basic tasks on a budget? Again, AMD is hoping to make a case today for those sales.
The SDM845 Reference Platform and CPU Results
The Snapdragon 845 is Qualcomm’s latest flagship mobile platform, officially announced on December 6 and known officially as the SDM845 (moving from the MSMxxxx nomenclature of previous iterations). At a recent media event we had a chance to go hands-on with a development platform device for a preview of this new Snapdragon's performance, the results of which we can now share. Will the Snapdragon 845 be Qualcomm's Android antidote to Apple's A11? Read on to find out!
The SDM845 QRD (Qualcomm Reference Design) Device
While this article will focus on CPU and GPU performance with a few known benchmarks, the Snapdragon 845 is of course a full mobile platform which combines an 8-core Kryo 385 CPU, Adreno 630 graphics, a Hexagon 685 DSP (which includes the Snapdragon Neural Processing Engine), a Spectra 280 image processor, an X20 LTE modem, and more. The reference device was packaged like a typical 5.5-inch Android smartphone, which helps provide a real-world picture of thermal management during benchmarking.
Qualcomm Reference Design Specifications:
- Baseband Chipset: SDM845
- Memory: 6 GB LPDDR4X (PoP)
- Display: 5.5-inch 1440x2560
- Front: IMX320 12 MP Sensor
- Rear: IMX386 12 MP Sensor
- No 3.5 mm headset jack (Analog over USB-C)
- 4 Digital Microphones
- Connector: USB 3.1 Type-C
- DisplayPort over USB-C
At the heart of the Snapdragon 845 is the octa-core Kryo 385 CPU, configured with four performance cores and four efficiency cores and offering clock speeds of up to 2.8 GHz. In comparison, the Snapdragon 835 had a similar 8-core CPU configuration (Kryo 280) clocked up to 2.45 GHz. The SDM845 is produced on 10 nm LPP process technology, while the SD835 (MSM8998) was the first to be manufactured at 10 nm (LPE). It is not surprising that Qualcomm is getting higher clock speeds from this new chip at the same process node, and the efficiency gains of the new 10 nm LPP FinFET process should theoretically result in similar, or possibly even lower, power draw at these higher clocks.
The end of the world as we know it?
It came as a surprise to most in the industry that such a thing would really occur: AMD and Intel announced in November a partnership that would bring Radeon graphics to Intel processors in 2018. The details were minimal at the time and only told us the specifics of the business relationship: this was a product purchase and not a license, no IP was changing hands, this was considered a semi-custom design for the AMD group, and Intel was handling all of the integration and packaging. Though we knew that the product would use HBM2 memory, the same memory utilized on the RX Vega products released last year, it was possible that the “custom” part was a Polaris architecture that had been retrofitted. Also, details of the processor side of this technology were left a mystery.
Today we have our answers and our first hands-on with systems utilizing what was previously known as Kaby Lake-G and what is now officially titled the “8th Generation Intel Core Processors with Radeon RX Vega M Graphics.” I’m serious.
For what I still call Kaby Lake-G, as it is easier to type and understand, this launch introduces a new product line that we have not seen addressed in a very long time: high-performance processors with high-performance integrated graphics. Even though the combined part is not a single piece of silicon but instead a multi-chip package, it serves the same purpose in the eyes of the consumer and the OEM. The marriage of Intel’s highest-performance mobile processor cores, the 8th Generation H-series, and one of, if not THE, fastest mobile graphics cores in a reasonable thermal envelope, the Vega M, is incredibly intriguing for all kinds of reasons. Even the currently announced AMD APUs and those in the public roadmaps don’t offer a combined performance package as impressive as this. Ryzen Mobile is interesting in its own right, but Kaby Lake-G is on a different level.
From a business standpoint, KBL-G is a design meant to attack NVIDIA. The green giant has become one of the most important computing companies on the planet in the last couple of years, leaning into its graphics processor dominance and turning it into cash and mindshare in the world of machine learning and AI. More than any other company, Intel is worried about the growth and capability of NVIDIA. Though not as sexy as “machine learning,” NVIDIA has dominated the mobile graphics market as well, offering discrete GPU solutions to pair with Intel-based notebooks. In turn, NVIDIA eats up much of the margin and profitability that these mainstream gaming and content creation machines can generate. Productization of things like Max-Q gives the market reason to believe that NVIDIA is the true innovator in the space, regardless of the legitimate answer to that question. Intel sees that as no bueno – it wants to remain the leader in the market completely.
Overview and CPU Performance
When Intel announced their quad-core mobile 8th Generation Core processors in August, I was immediately interested. As a user who gravitates towards "Ultrabook" form-factor notebooks, it seemed like a no-brainer—gaining two additional CPU cores with no power draw increase.
However, the hardware reviewer in me was skeptical. Could this "Kaby Lake Refresh" CPU provide the headroom to fit two more physical cores on a die while maintaining the same 15W TDP? Would this mean that the processor fans would have to run out of control? What about battery life?
Now that we have our hands on our first two notebooks with the i7-8550U, it's time to take a more in-depth look at Intel's first mobile offerings of the 8th Generation Core family.
A potential game changer?
I thought we were going to be able to make it through the rest of 2017 without seeing AMD launch another family of products. But I was wrong. And that’s a good thing. Today AMD is launching the not-so-cleverly-named Ryzen Processor with Radeon Vega Graphics product line that will bring the new Zen processor architecture and Vega graphics architecture onto a single die for the ultrathin mobile notebook platforms. This is no minor move for them – just as we discussed with the AMD EPYC processor launch, this is a segment that has been utterly dominated by Intel. After all, Intel created the term Ultrabook to target these designs, and though that brand is gone, the thin and light mindset continues to this day.
The claims AMD makes about its Ryzen mobile APU (combination CPU+GPU accelerated processing unit, to use an older AMD term) are not to be made lightly. Right up front in our discussion, I was told this is going to be the “world’s fastest for ultrathin” machines. Considering that AMD had previously been unable to even enter those markets with its prior products, due to both technological and business roadblocks, AMD is taking a risk by painting this launch in such a light. Thanks to its ability to combine CPU and GPU technology on a single die, though, AMD has some flexibility today that it simply did not have access to previously.
From the days when AMD first announced the acquisition of ATI, the company has touted the long-term benefits of owning both a high-performance processor division and a graphics division. By combining the architectures on a single die, the whole could become greater than the sum of its parts, leveraging new software directions and the oft-discussed HSA (Heterogeneous System Architecture) that AMD helped create a foundation for. Though the first rounds of APUs were able to hit modest sales, the truth was that AMD’s graphics advantage over Intel was often overshadowed by the performance and power efficiency advantages that Intel held on the CPU front.
But with the introduction of the first products based on Zen earlier this year, AMD has finally made good on the promises of catching up to Intel in many of the areas where it matters the most. The new from-the-ground-up design resulted in greater than 50% IPC gains, improved area efficiency compared to Intel’s latest Kaby Lake core design, and enormous gains in power efficiency compared to the previous CPU designs. When looking at the new Ryzen-based APU products with Vega built-in, AMD claims that they tower over the 7th generation APUs with up to 200% more CPU performance, 128% more GPU performance, and 58% lower power consumption. Again, these are bold claims, but it gives AMD confidence that it can now target premium designs and form factors with a solution that will meet consumer demands.
AMD is hoping that the release of the Ryzen 7 2700U and Ryzen 5 2500U can finally help turn the tides in the ultrathin notebook market.
| | Core i7-8650U | Core i7-8550U | Core i5-8350U | Core i5-8250U | Ryzen 7 2700U | Ryzen 5 2500U |
|---|---|---|---|---|---|---|
| Architecture | Kaby Lake Refresh | Kaby Lake Refresh | Kaby Lake Refresh | Kaby Lake Refresh | Zen + Vega | Zen + Vega |
| Base Clock | 1.9 GHz | 1.8 GHz | 1.7 GHz | 1.6 GHz | 2.2 GHz | 2.0 GHz |
| Max Turbo Clock | 4.2 GHz | 4.0 GHz | 3.8 GHz | 3.6 GHz | 3.8 GHz | 3.6 GHz |
| System Bus | DMI3 - 8.0 GT/s | DMI3 - 8.0 GT/s | DMI2 - 6.4 GT/s | DMI2 - 5.0 GT/s | N/A | N/A |
| Graphics | UHD Graphics 620 | UHD Graphics 620 | UHD Graphics 620 | UHD Graphics 620 | Vega (10 CUs) | Vega (8 CUs) |
| Max Graphics Clock | 1.15 GHz | 1.15 GHz | 1.1 GHz | 1.1 GHz | 1.3 GHz | 1.1 GHz |
The Ryzen 7 2700U will run 200 MHz higher than the 2500U on both the CPU base and boost clocks, and 200 MHz higher on the peak GPU clock. Though both parts have 4 cores and 8 threads, the GPU on the 2700U also gets two additional compute units (CUs).
Specifications and Summary
As seems to be the trend for processor reviews as of late, today marks the second in a two-part reveal of Intel’s Coffee Lake consumer platform. We essentially know all there is to know about the new mainstream and DIY PC processors from Intel, including specifications, platform requirements, and even pricing; all that is missing is performance. That is the story we get to tell you today in our review of the Core i7-8700K and Core i5-8400.
Coffee Lake is the second spoke of Intel's “8th generation” wheel that began with the Kaby Lake-R release featuring quad-core 15-watt notebook processors for the thin and light market. Though today’s release of the Coffee Lake-S series (the S is the designation for consumer desktop) doesn’t share the same code name, it does share the same microarchitecture, same ring bus design (no mesh here), and same underlying technology. They are both built on the Intel 14nm process technology.
And much like Kaby Lake-R on the notebook front, Coffee Lake is here to raise the core count and performance profile of Intel's mainstream CPU playbook. When AMD launched the Ryzen 7 series of processors, bringing 8 cores and 16 threads of compute, it fundamentally shook the mainstream consumer market. Intel was still on top in IPC and core clock speeds, giving it the edge in single- and lightly-threaded workloads, but AMD had released a part with double the core and thread count that was able to dominate most multi-threaded workloads against comparable Intel offerings.
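The tradeoff described here, more cores winning multi-threaded workloads while IPC and clocks win lightly-threaded ones, follows directly from Amdahl's law. A minimal illustration (the parallel fractions below are hypothetical, not benchmark data):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Speedup over one core when only a fraction of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A heavily threaded workload (95% parallel) rewards extra cores...
print(round(amdahl_speedup(0.95, 8), 2))  # ~5.93x on 8 cores
# ...but a lightly threaded one (50% parallel) barely does,
# which is where higher IPC and clock speed win instead.
print(round(amdahl_speedup(0.50, 8), 2))  # ~1.78x on 8 cores
```

This is why doubling cores shook multi-threaded benchmarks without dethroning Intel in single-threaded ones.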
Much like Skylake-X before it, Coffee Lake had been on Intel's roadmap from the beginning, but new pressure from a revived AMD meant bringing that technology to the forefront sooner rather than later, in an effort to stem any potential shifts in market share and, maybe more importantly, mind share among investors, gamers, and builders. Coffee Lake, and the Core i7, Core i5, and Core i3 processors that make up this 8000-series release, increase core counts across the board and generally raise clock speeds too. Intel is hoping that by bumping its top mainstream CPU to 6 cores, coupled with better IPC and higher clocks, it can neutralize the advantages AMD holds with Ryzen.
But does it?
That’s what we are here to find out today. If you need a refresher on the build up to this release, we have the specifications and slight changes in the platform and design summarized for you below. Otherwise, feel free to jump on over to the benchmarks!
A New Standard
With a physical design that is largely unchanged other than the addition of a glass back for wireless charging support, and featuring incremental improvements to the camera system, most notably on the Plus version, the iPhone 8 and 8 Plus are interesting largely due to the presence of a new Apple SoC. The upcoming iPhone X (pronounced "ten") stole the show at Apple's keynote announcement earlier this month, but the new A11 Bionic chip powers all 2017 iPhone models, and for the first time Apple has a fully custom GPU after its highly publicized split with Imagination Technologies, makers of the PowerVR graphics found in previous Apple SoCs.
The A11 Bionic powering the 2017 iPhones contains Apple’s first 6-core processor, which is comprised of two high performance cores (code-named ‘Monsoon’) and four high efficiency cores (code-named ‘Mistral’). Hugely important to its performance is the fact that all six cores are addressable with this new design, as Apple mentions in their description of the SoC:
"With six cores and 4.3 billion transistors, A11 Bionic has four efficiency cores that are up to 70 percent faster than the A10 Fusion chip, and two performance cores that are up to 25 percent faster. The CPU can even harness all six cores simultaneously when you need a turbo boost."
Previous Apple SoCs had to rely on IPC and clock speed improvements to boost per-core performance. The previous-generation A10 Fusion, for example, contained a quad-core CPU split evenly into two performance and two efficiency cores, but app performance did not benefit beyond the two performance cores; in real-world use the additional cores were limited to background tasks (and, as we saw, the A10 Fusion did not improve battery life over previous efforts either).
The A11 Bionic on the iPhone 8 system board (image credit: iFixit)
Just how big an impact this new six-core CPU design has can be observed instantly in the CPU benchmarks to follow, and on the next page we will find out how Apple's in-house GPU solution compares to both the A10 Fusion's PowerVR graphics and the market-leading Qualcomm Adreno 540 found in the Snapdragon 835. We will begin with the CPU benchmarks.
Specifications and Architecture
It has been an interesting 2017 for Intel. Though still the dominant market share leader in consumer processors of all shapes and sizes, from DIY PCs to notebooks to servers, it has come under pressure from AMD unlike any it has felt in nearly a decade. It started with the release of the AMD Ryzen 7 family of processors aimed at the mainstream user and enthusiast markets. That was followed by the EPYC processor release, moving in on Intel's enterprise turf. And most recently, Ryzen Threadripper took a swing (and hit) at the HEDT (high-end desktop) market that Intel had created and held as its own since the days of the Nehalem-based Core i7-920 CPU.
Between the time Threadripper was announced and when it shipped, Intel made an interesting move: it announced and launched its updated family of HEDT processors, dubbed Skylake-X. Available only in a 10-core model at first, the Core i9-7900X was the fastest processor we had tested in our labs at the time, but it was rather quickly overtaken by the Threadripper 1950X and its 16 cores and 32 threads. Intel had already revealed that its HEDT lineup would scale up to 18-core options, though availability and exact clock speeds remained hidden until recently.
| | i9-7980XE | i9-7960X | i9-7940X | i9-7920X | i9-7900X | i7-7820X | i7-7800X | TR 1950X | TR 1920X | TR 1900X |
|---|---|---|---|---|---|---|---|---|---|---|
| Base Clock | 2.6 GHz | 2.8 GHz | 3.1 GHz | 2.9 GHz | 3.3 GHz | 3.6 GHz | 3.5 GHz | 3.4 GHz | 3.5 GHz | 3.8 GHz |
| Turbo Boost 2.0 | 4.2 GHz | 4.2 GHz | 4.3 GHz | 4.3 GHz | 4.3 GHz | 4.3 GHz | 4.0 GHz | 4.0 GHz | 4.0 GHz | 4.0 GHz |
| Turbo Boost Max 3.0 | 4.4 GHz | 4.4 GHz | 4.4 GHz | 4.4 GHz | 4.5 GHz | 4.5 GHz | N/A | N/A | N/A | N/A |
| Memory Support | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2400 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel |
| TDP | 165 watts | 165 watts | 165 watts | 140 watts | 140 watts | 140 watts | 140 watts | 180 watts | 180 watts | 180 watts? |
Today we are looking at both the Intel Core i9-7980XE and the Core i9-7960X, 18-core and 16-core processors, respectively. Intel's goal with this release is clear: retake the crown for the highest performing consumer processor on the market. It will do that, but at $700-1000 over the price of the Threadripper 1950X.
Who is this for, anyway?
Today is a critically important day for AMD. With the launch of reviews and the on-sale date for its new Ryzen Threadripper processor family, AMD is reentering the world of high-end consumer processors, a world from which it has been absent for a decade, if not longer. Intel has dominated this high-priced but high-margin segment of the market since the 2008 release of the Core i7-900 series of Nehalem CPUs, which brought workstation- and server-class hardware down to the content creator and enthusiast markets. Even at that point AMD had no competitive answer, with only the Phenom X4 in our comparison charts. It didn't end well.
AMD has made no attempt at stealth with the release of Ryzen Threadripper, instead adopting the "tease and repeat" campaign style that Radeon has used in recent years. The result is an already knowledgeable group of pre-order-ready consumers; that is no coincidence. Today I will summarize what we already know for those of you just joining us, then dive into the new information we can provide: interesting technical details on the multi-die implementation and latency, overclocking, thermals, why AMD has a NUMA/UMA issue, gaming performance, and of course general system and workload benchmarks.
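The NUMA/UMA issue stems from Threadripper's multi-die design: memory attached to one die is slower to reach from the other, so the OS may expose the chip as multiple NUMA nodes. A minimal Linux-oriented sketch of how software can inspect this (the `/sys` path is standard sysfs on Linux; on other OSes the node count simply reads as unknown):

```python
import os

# Count logical CPUs visible to this process. sched_getaffinity is
# Linux-only, so fall back to os.cpu_count() elsewhere.
try:
    cpus = len(os.sched_getaffinity(0))
except AttributeError:
    cpus = os.cpu_count() or 1

# Each NUMA node appears as a nodeN directory under sysfs. A single-die
# (UMA) system reports one node; a multi-die part in NUMA mode reports
# several, and memory-latency-sensitive software can place itself accordingly.
node_root = "/sys/devices/system/node"
nodes = ([d for d in os.listdir(node_root) if d.startswith("node")]
         if os.path.isdir(node_root) else [])

print(f"logical CPUs: {cpus}, NUMA nodes: {len(nodes) or 'unknown'}")
```

Games and other latency-sensitive workloads are exactly the cases where ignoring this topology can cost performance, which is why it gets its own section in the review.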
A Summary of Threadripper
AMD has been pumping up interest and excitement for Ryzen Threadripper since May, when it announced the parts at the company's financial analyst day. It teased 16 cores and 32 threads of performance in a single consumer socket, something we had never seen before. At Computex, Jim Anderson took the stage and told us that each Threadripper processor would have access to 64 lanes of PCI Express, exceeding the 40 lanes of Intel's top HEDT platforms and going well above the 28 lanes offered at the lower end of Intel's family.
In mid-July came the official announcement of the Ryzen Threadripper 1950X and 1920X, with CEO Lisa Su and CVP John Taylor doing the honors. This announcement delivered most of the important information, including core counts, clock speeds, pricing, and a single performance benchmark (Cinebench). On July 24th, pictures of the Threadripper packaging started showing up on AMD social media accounts, getting far more attention than anyone expected a box for a CPU could drum up. At the end of July AMD announced a third Threadripper processor (due in late August). Finally, on August 3rd, I was allowed to share an unboxing of the review kit and the CPU itself, and to demonstrate the new installation method for this sled-based processor.
It’s been a busy summer.