Retesting the 2990WX
Earlier today, NVIDIA released version 399.24 of their GeForce drivers for Windows, citing Game Ready support for some newly released games including Shadow of the Tomb Raider, The Call of Duty: Black Ops 4 Blackout Beta, and Assetto Corsa Competizione early access.
While this in and of itself is a normal event, we soon started getting tips from readers about an interesting bug fix in NVIDIA's release notes for this specific driver revision.
Specifically addressing performance differences between 16-core/32-thread and 32-core/64-thread processors, this patched issue immediately rang true with our experience benchmarking the AMD Ryzen Threadripper 2990WX back in August, where we saw frame rates in some games around 50% lower than on the 16-core Threadripper 2950X.
This particular patch note led us to update our Ryzen Threadripper 2990WX test platform to the latest NVIDIA driver release and see if there were any noticeable changes in performance.
The full testbed configuration is listed below:
|Test System Setup|
|Processor||AMD Ryzen Threadripper 2990WX|
|Motherboard||ASUS ROG Zenith Extreme - BIOS 1304|
|Memory||16GB Corsair Vengeance DDR4-3200 (operating at DDR4-2933)|
|Storage||Corsair Neutron XTi 480 SSD|
|Graphics Card||NVIDIA GeForce GTX 1080 Ti 11GB|
|Graphics Drivers||NVIDIA 398.26 and 399.24|
|Power Supply||Corsair RM1000x|
|Operating System||Windows 10 Pro x64 RS4 (17134.165)|
Included at the end of this article are the full results from our entire suite of game benchmarks on our CPU testbed, but first, let's take a look at some of the games that exhibited particularly severe issues on the 2990WX previously.
The interesting data points for this testing are the 2990WX scores on both driver revisions (398.26, which we tested across every CPU, and the new 399.24), as well as the results from the 1/4 core compatibility mode and the Ryzen Threadripper 2950X. From the wording of the patch notes, we would expect gaming performance between the 16-core 2950X and the 32-core 2990WX to be very similar.
Grand Theft Auto V
GTA V was previously one of the worst offenders in our original 2990WX testing, with the frame rate almost halving compared to the 2950X.
However, with the newest GeForce driver update, we see this gap shrinking to around a 20% difference.
Subject: Processors, Mobile | September 9, 2018 - 04:50 PM | Ryan Shrout
Tagged: p20 pro, Kirin 970, Kirin, Huawei
Last week the gang at Anandtech posted a story discovering systematic cheating by Huawei in smartphone benchmarks. In its story, AT focused on 3DMark and GFXBench, looking at how the Chinese-based silicon and phone provider was artificially increasing benchmark scores to gain an advantage in its battles with other smartphone providers and SoC vendors like Qualcomm.
As a result of that testing, UL Benchmarks (which acquired Futuremark) delisted several Huawei smartphones from 3DMark, taking the artificial scores down from the leaderboards. This puts the existing device reviews in question while also casting a cloud over the recently announced (and impressive-sounding) Kirin 980 SoC, meant to battle the Snapdragon 845 and Qualcomm's next-gen product. The Kirin 980 will be the first shipping processor to integrate high-performance Arm Cortex-A76 cores, so the need to cheat on performance claims is questionable.
Just a day after this story broke, UL and Huawei released a joint statement that is, quite honestly, laughable.
"In the discussion, Huawei explained that its smartphones use an artificial intelligent resource scheduling mechanism. Because different scenarios have different resource needs, the latest Huawei handsets leverage innovative technologies such as artificial intelligence to optimize resource allocation in a way so that the hardware can demonstrate its capabilities to the fullest extent, while fulfilling user demands across all scenarios."
To somehow assert that any kind of AI processing is happening on Huawei devices that is responsible for the performance differences that Anandtech measured is at best naïve and at worst an outright lie. This criticism is aimed at both Huawei and UL Benchmarks – I would assume that a company with as much experience in performance evaluation would not succumb to this kind of messaging.
After that AT story was posted, I started talking with the team that builds Geekbench, one of the most widely used and respected benchmarks for processors on mobile devices and PCs. It provides a valuable resource of comparative performance and leaderboards. As it turns out, Huawei devices are exhibiting the same cheating behavior in this benchmark.
Below I have compiled results from Geekbench that were run by developer John Poole on a Huawei P20 Pro device powered by the Kirin 970 SoC. (Private app results, public app results.) To be clear: the public version is the application package as downloaded from the Google Play Store while the private version is a custom build he created to test against this behavior. It uses absolutely identical workloads and only renames the package and does basic string replacement in the application.
Clearly the Huawei P20 Pro is increasing performance on the public version of the Geekbench test and not on the private version, despite using identical workloads on both. In the single threaded tests, the total score is 6.5% lower with the largest outlier being in the memory performance sub-score, where the true result is 14.3% slower than the inaccurate public version result. Raw integer performance drops by 3.7% and floating-point performance falls by 5.6%.
The multi-threaded score differences are much more substantial. Floating point performance drops by 26% in the private version of Geekbench, taking a significant hit that would no doubt affect its placement in the leaderboards and reviews of flagship Android smartphones.
Overall, the performance of the Huawei P20 Pro is 6.5% slower in single threaded testing and 16.7% slower in multi-threaded testing when the artificial score inflation in place within the Huawei customized OS is removed. Despite claims to the contrary, and that somehow an AI system is being used to recognize specific user scenarios and improve performance, this is another data point to prove that Huawei was hoping to pull one over on the media and consumers with invalid performance comparisons.
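For clarity, the gaps above are simple percentage drops from the public (inflated) score to the private (true) score. A quick Python sketch of that arithmetic, using hypothetical illustrative scores rather than the actual Geekbench numbers:

```python
def slowdown_pct(public_score: float, private_score: float) -> float:
    """Percentage by which the true (private) score trails the public one."""
    return (public_score - private_score) / public_score * 100.0

# Hypothetical scores chosen only to illustrate the math, not actual results:
public_st, private_st = 2000.0, 1870.0
print(f"single-threaded gap: {slowdown_pct(public_st, private_st):.1f}%")  # 6.5%
```

Any pair of scores in the same 2000-to-1870 ratio would produce the 6.5% single-threaded gap reported above; the 16.7% multi-threaded figure works the same way.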
Some have asked me why this issue matters; if the hardware is clearly capable of performance like this, why should Huawei and HiSilicon not be able to present it that way? The higher performance results that 3DMark, GFXBench, and now Geekbench show are not indicative of the performance consumers get with their devices on real applications. The entire goal of benchmarks and reviews is to try to convey the experience a buyer would get for a smartphone, or anything else for that matter.
If Huawei wanted one of its devices to offer this level of performance in games and other applications, it could do so, but at the expense of other traits. Skin temperature, battery life, and device lifespan could all be impacted – something that would definitely affect the reviews and reception of a smartphone. Hence, the practice of cheating in an attempt to have the best of both.
The sad part about all of this is that Huawei’s flagship smartphones have been exceptional in nearly every way. Design, screen quality, camera integration, features; the Mate and P-series devices have been excellent representations of what an Android device can be. Unfortunately, for enthusiasts that follow the market, this situation will follow the company and cloud some of those positives.
Today’s data shows that the story of Huawei and benchmarks goes beyond just 3DMark and GFXBench. We will be watching this closely to see how Huawei responds and if any kinds of updates to existing hardware are distributed. And, as the release of Kirin 980 devices nears, you can be sure that testing and evaluation of these will get a more scrutinizing eye than ever.
Subject: General Tech, Processors | September 6, 2018 - 01:22 PM | Jeremy Hellstrom
Tagged: amd, athlon, Zen, Vega, 200GE, PRO 200GE, ryzen, Ryzen 7 PRO 2700X, Ryzen 7 PRO 2700, Ryzen 5 PRO 2600
AMD is returning the Athlon name to active service with the arrival of the Athlon 200GE, combining their current Zen cores with Radeon Vega 3 graphics (three GPU compute units) clocked at 1GHz. The dual-core, four-thread processor will run at 3.2GHz with a TDP of 35W, which should give you an idea of where you will find this new chip.
Along with the new Athlon come four new Pro chips: the AMD Athlon PRO 200GE, Ryzen 7 PRO 2700X, Ryzen 7 PRO 2700, and Ryzen 5 PRO 2600. These will be more traditional desktop processors with enterprise-level features to ensure the security of your systems while also offering flexibility, at a cost somewhat lower than the competition's.
Subject: Processors, Mobile | September 2, 2018 - 11:45 AM | Sebastian Peak
Tagged: SoC, octa-core, mobile, Mali-G76, Kirin, Huawei, HiSilicon, gpu, cpu, Cortex-A76, arm, 8-core
Huawei has introduced their subsidiary HiSilicon’s newest mobile processor in the Kirin 980, which, along with Huawei's claim of the world's first commercial 7nm SoC, is the first SoC to use Arm Cortex A76 CPU cores and Arm’s Mali G76 GPU.
Huawei is aiming squarely at Qualcomm with this announcement, claiming better performance than a Snapdragon 845 during the presentation. One of its primary differences from the current Snapdragon is the composition of the Kirin 980's eight CPU cores: the usual big.LITTLE Arm CPU core configuration for an octa-core design gives way to a revised organization with three groups, as illustrated by AnandTech here:
Of the four Cortex A76 cores, just two are clocked up to 2.60 GHz to maximize performance in certain applications such as gaming (and, likely, benchmarks), while the other two serve as more efficient performance cores at 1.92 GHz. The remaining four A55 cores operate at 1.80 GHz and are used for lower-performance tasks. A full breakdown of the CPU core configuration as well as slides from the event are available at AnandTech.
Huawei claims that the improved CPU in the Kirin 980 is "75 percent more powerful and 58 percent more efficient compared to their previous generation" (the Kirin 970), which the company says translates into 37% better performance and 32% greater efficiency than Qualcomm's Snapdragon 845.
The GPU also gets a much-needed lift this year from Arm's latest GPU, the Mali-G76, which features "new, wider execution engines with double the number of lanes" and "provides dramatic uplifts in both performance and efficiency for complex graphics and Machine Learning (ML) workloads", according to Arm.
Real-world testing with shipping handsets is needed to verify Huawei's performance claims, of course. In fact, the results shown by Huawei at the presentation carry this disclaimer, sourced from today's press release:
"The specifications of Kirin 980 does not represent the specifications of the phone using this chip. All data and benchmark results are based on internal testing. Results may vary in different environments."
The upcoming Mate 20 from Huawei will be powered by this new Kirin 980 - and could very well provide results consistent with the full potential of the new chip - with an official launch set for October 16.
The full press release is available after the break.
Subject: Processors | August 31, 2018 - 10:36 AM | Ken Addison
Tagged: Threadripper, ryzen, 2nd generation threadripper, 2990wx, 2950x
Today, AMD's 2nd generation Ryzen Threadripper 2950X has finally reached retail availability. As you might remember from the launch a few weeks ago, the 32-core Threadripper 2990WX has already been on store shelves, but the 2950X was set to arrive on August 31st.
For those that need a bit of a refresher on 2nd generation Threadripper, you can check out our full review of both the 2950X and 2990WX. Ultimately, we found the Threadripper 2950X is a great CPU for people looking to bridge the gap between content creation and gaming, with near top-level performance in both areas.
The 12-core and 24-core variants of 2nd generation Threadripper are still set to arrive later this year.
Subject: Processors, Mobile | August 28, 2018 - 04:00 PM | Ken Addison
Tagged: whiskey lake, mobile, Intel, ifa 2018, amber lake, 8th generation
Tonight at the IFA consumer electronics trade show in Berlin, Intel announced their latest processors aimed at thin-and-light notebooks and 2-in-1 devices. Continuing the ever-expanding 8th generation processor family from Intel, these new mobile CPUs comprise both 5W (Amber Lake-Y) and 15W (Whiskey Lake-U) parts.
|Core i7-8565U||Core i7-8550U||Core i5-8265U||Core i5-8250U||Core i3-8145U|
|Architecture||Whiskey Lake||Kaby Lake Refresh||Whiskey Lake||Kaby Lake Refresh||Whiskey Lake|
|Base Clock||1.8 GHz||1.8 GHz||1.6 GHz||1.6 GHz||2.1 GHz|
|Max Turbo Clock||4.6 GHz||4.0 GHz||3.9 GHz||3.6 GHz||3.9 GHz|
Just as we saw with the Kaby Lake Refresh CPUs last year, these 15W parts maintain the same quad-core, eight-thread configurations.
On the highest-end part, the i7-8565U, we see an increase of 600MHz in the max turbo clock, while the base clock remains the same. The i5-8265U sees a smaller 300MHz uptick in boost clock while also keeping the same 1.6GHz base clock as the previous generation.
|Core i7-8500Y||Core i7-7Y75||Core i5-8200Y||Core i5-7Y75||Core m3-8100Y||Core m3-7Y32|
|Architecture||Amber Lake||Kaby Lake||Amber Lake||Kaby Lake||Amber Lake||Kaby Lake|
|Base Clock||1.5 GHz||1.3 GHz||1.4 GHz||1.2 GHz||2.1 GHz||1.1 GHz|
|Max Turbo Clock||4.2 GHz||3.6 GHz||3.9 GHz||3.3 GHz||3.9 GHz||3.0 GHz|
As we can see, the Amber Lake CPUs provide a significant frequency advantage over the previous Kaby Lake-Y processors, especially in turbo frequencies, with improvements ranging from 600 to 900MHz.
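Those uplifts read straight off the comparison table; a quick Python sketch (clock figures copied from the table above):

```python
# Max turbo clocks in GHz: (new Amber Lake part, GHz, prior Kaby Lake-Y part, GHz),
# taken from the comparison table above.
turbo_clocks = {
    "Core i7": ("Core i7-8500Y", 4.2, "Core i7-7Y75", 3.6),
    "Core i5": ("Core i5-8200Y", 3.9, "Core i5-7Y75", 3.3),
    "Core m3": ("Core m3-8100Y", 3.9, "Core m3-7Y32", 3.0),
}

for tier, (new, new_ghz, old, old_ghz) in turbo_clocks.items():
    uplift_mhz = round((new_ghz - old_ghz) * 1000)  # 600, 600, and 900 MHz
    print(f"{new} vs {old}: +{uplift_mhz} MHz turbo")
```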
These higher frequencies give these low power processors a substantial performance uptick from the previous generation, as long as the thermal solutions in the end product notebooks are up to the task of actually achieving these high turbo boost frequencies.
Across the board, Intel is marketing these CPU platforms as having increased connectivity options, with built-in 802.11ac 160MHz dual-band Wi-Fi support (which Intel refers to as Gigabit Wi-Fi). Additionally, both the Amber Lake and Whiskey Lake families have the option to be paired with Intel LTE modems for cellular connectivity.
Also on the connectivity side, we see support for native USB 3.1 Gen 2 (10 Gbps) ports through the chipset on Whiskey Lake-U.
Intel is also touting battery life improvements with "16 hours on a single charge with power-optimized systems targeted to achieve about 19 hours" on the Whiskey Lake-U platform. However, as always, take these specifications with a grain of salt until we see real products with these processors integrated into them and benchmarked.
Subject: Processors | August 21, 2018 - 03:51 PM | Jeremy Hellstrom
Tagged: 2990wx, threadripper 2, linux, windows 10, amd
Windows 10 is much better at dealing with multithreaded tasks than its predecessors, but Linux has been optimized for both high core counts and NUMA for quite a while, so looking at the performance difference is quite interesting. Phoronix tested a variety of Linux flavours as well as Windows 10 Pro, and the performance differences are striking; in some cases results are twice as fast on Linux as on Win10. That does not hold true for all tests, as there are some benchmarks at which Windows excels. Take a look at the full review as well as those under the fold for a fuller picture.
"Complementing the extensive Linux benchmarks done earlier today of the AMD Threadripper 2990WX in our review (as well as on the Threadripper 2950X), in this article are our first Windows 10 vs. Linux benchmarks of this 32-core / 64-thread $1799 USD processor. Tests were done from Microsoft Windows 10 against Clear Linux, Ubuntu 18.04, the Arch-based Antergos 18.7-Rolling, and openSUSE Tumbleweed."
Here are some more Processor articles from around the web:
- AMD's Ryzen Threadripper 2950X @ The Tech Report
- Threadripper 2990WX - 2950X & Wraith Ripper DIY Install @ [H]ard|OCP
- Linux vs. Windows Benchmark: Threadripper 2990WX vs. Core i9-7980XE Tested
- A Look At The Windows vs. Linux Scaling Performance Up To 64 Threads With The AMD 2990WX @ Phoronix
- The Mega-Tasking Test: AMD Threadripper 2990WX Heavy Multitasking Benchmark @ Techspot
- Armari AMD Ryzen Threadripper 2990WX – 32-Core Threadripper 2 Workstation @ Kitguru
- A Quick Look At The Windows Server vs. Linux Performance On The Threadripper 2990WX @ Phoronix
Subject: Processors | August 13, 2018 - 02:18 PM | Jeremy Hellstrom
Tagged: Zen+, Threadripper, second generation threadripper, ryzen, Intel, Core i9, 7980xe, 7960x, 7900x, 2990wx, 2950x
The 2950X and 2990WX are both Threadripper 2 chips but are very different beasts under the hood. The 2950X has two active die, similar to the original chips, while the 2990WX has four active die, two of which rely on an Infinity Fabric link to the other two to reach the memory subsystem. The W in the naming convention indicates the 2990WX is designed for workstation tasks, and the benchmarks support that designation. You will have seen our results here, but there are many other sources to read through. [H]ard|OCP offers up a different set of benchmarks in their review, with a similar result: with Threadripper, AMD has a winner. The 2990WX is especially important as it opens up the lucrative lower-cost workstation market for AMD.
"AMD teased us a bit last week by showing off its new 2nd Generation Threadripper 2990WX and 2950X packaging and specifications. This week AMD lets us share all our Threadripper data we have been collecting. The 2990WX is likely a lot different part than many people were expecting, and it turns out that it might usher AMD into a newly created market."
Here are some more Processor articles from around the web:
- AMD's Ryzen Threadripper 2990WX @ The Tech Report
- AMD Ryzen Threadripper 2950X and 2990WX @ Guru of 3D
- AMD Ryzen Threadripper 2990WX & 2950X @ TechSpot
- AMD Ryzen Threadripper 2950X @ TechPowerUp
- AMD Threadripper 2950X Offers Great Linux Performance At $900 USD @ Phoronix
- AMD Threadripper 2990WX Linux Benchmarks: The 32-Core / 64-Thread Beast @ Phoronix
- AMD Threadripper 2990WX Cooling Performance - Testing Five Heatsinks & Two Water Coolers @ Phoronix
Widening the Offerings
Today, we are talking about something that would have seemed impossible just a few short years ago: a 32-core processor for consumers. While I realize that talking about the history of computer hardware can be considered superfluous in a processor review, I think it's important to understand the context of why this is such a momentous shift for the industry.
May 2016 marked the launch of what was then the highest core count consumer processor ever seen, the Intel Core i7-6950X. At 10 cores and 20 threads, the 6950X was easily the highest performing consumer CPU in multi-threaded tasks but came with a staggering $1700 price tag. In what we will likely look back on as the peak of Intel's sole dominance of the x86 CPU space, it was an impossible product to recommend to almost any consumer.
Just over a year later came the launch of Skylake-X and the Intel Core i9-7900X. Retaining the same core count as the 6950X, the 7900X would have been relatively unremarkable on its own. However, a $700 price drop and the promise of upcoming 12, 14, 16, and 18-core processors on the new X299 platform showed an aggressive new course for Intel's high-end desktop (HEDT) platform.
This aggressiveness was brought on by the success of AMD's Ryzen platform and the then-upcoming Threadripper platform. Promising up to 16 cores/32 threads and 64 lanes of PCI Express connectivity, Threadripper made it clear that Intel would, for the first time, have a real competitor in the HEDT space they created back with the Core i7-920.
Fast forward another year, and we have the release of 2nd Generation Threadripper. Promising the same advancements we saw with the Ryzen 7 2700X, AMD is pushing Threadripper to an even more competitive position with higher performance and lower cost.
Will Threadripper finally topple Intel from their high-end desktop throne?
Subject: Processors | August 9, 2018 - 04:36 PM | Jeremy Hellstrom
Tagged: Ryzen 7 2700, amd, Zen+
There is a ~$30 difference between the Ryzen 7 2700 and the 2700X, which raises the question of who would choose the former over the latter. The Tech Report points out another major difference between the two processors: the 2700 has a 65W TDP while the 2700X is 105W, pointing to one possible reason for choosing the less expensive part. The question remains as to what you will be missing out on, and whether there is any reason not to go with the even less expensive and highly overclockable Ryzen 7 1700. Find out the results of their tests and get the answer right here.
"AMD's Ryzen 7 2700 takes all the benefits of AMD's Zen+ architecture and wraps eight of those cores up in a 65-W TDP. We tested the Ryzen 7 2700's performance out in stock and overclocked tune to see what it offers over the hugely popular Ryzen 7 1700."
Here are some more Processor articles from around the web:
- One year with Threadripper @ TechSpot
- Battle of the Workstations: AMD Ryzen Threadripper vs Intel Core X-Series @ Techgage
- We Test a $1,000 CPU From 2010 vs. Ryzen 3 @ TechSpot
- Intel's Spectre 'Variant 4' Performance Tested: Speculative Store Bypass @ TechSpot
- Qualcomm's Snapdragon 670 packs high-end features into a mid-range chip @ The Inquirer