The software team at AMD and the Radeon Technologies Group is releasing Radeon Crimson ReLive Edition 17.7.2 this evening, and it includes a host of new features, improved performance capabilities, and stability improvements to boot. This isn’t the major reboot of the software that we have come to expect on an annual basis, but rather an attempt to get the software team’s work out in front of media and gamers before the onslaught of RX Vega and Threadripper steals the attention.
AMD’s software team is big on its user satisfaction ratings, which it should be after the many years of falling behind NVIDIA in this department. With 16 individual driver releases in 2017 (so far) and 20 new games optimized and supported with day-one releases, the 90% rating seems to be about right. Much of the work to fix multi-GPU and other critical problems is now more than a calendar year behind us, so it seems reasonable that Radeon gamers would be in a good place in terms of software support.
One big change for Crimson ReLive today is that all of those lingering settings that remained in the old Catalyst Control Panel now reside in the proper Radeon Settings. This means a matching UI and a more streamlined interface.
The ReLive capture and streaming capability sees a handful of upgrades today, including a bump in maximum bit rate from 50 Mbps to 100 Mbps, transparency support for webcams, optimizations that lower memory usage (and thus the overhead of running ReLive), notifications for replays and record timers, and audio controls for microphone volume and push-to-talk.
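For a sense of what that bit-rate bump means for disk space, here is a quick back-of-the-envelope sketch; the bit rates come from the release notes above, and the arithmetic is the only thing added:

```python
# Rough storage math for ReLive's new 100 Mbps bit-rate ceiling.
# Assumes a constant-bit-rate capture; real files vary with encoder settings.

def recording_size_mb(bitrate_mbps: float, seconds: float) -> float:
    """Approximate file size in megabytes for a constant-bit-rate capture."""
    return bitrate_mbps / 8 * seconds  # Mbps -> MB/s, then scale by duration

for rate in (50, 100):  # old vs. new maximum bit rate
    per_minute = recording_size_mb(rate, 60)
    print(f"{rate} Mbps -> ~{per_minute:.0f} MB per minute of footage")
```

At the new ceiling, a minute of footage roughly doubles in size versus the old maximum, which is why the accompanying memory-usage optimizations matter.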
In the original premise for today’s story, I had planned to do a standard and straight-forward review of the iPad Pro 10.5-inch model, the latest addition to Apple’s line of tablet devices. After receiving the 12.9-in variant, with the same processor upgrade but a larger and much more substantial screen, I started using them both as my daily-driver computing device. I was surprised at how well both handled the majority of tasks I tossed their way but there was still some lingering doubt in my mind about the usefulness of the iOS system as it exists today for my purposes.
The next step was for me to acquire an equivalent Windows 10-based tablet, make THAT my everyday computer, and see how my experiences changed. I picked up the new Surface Pro (2017) model that was priced nearly identically to the iPad Pro 12.9-in device. That did mean sacrificing some specifications I usually would not, including moving down to 4GB of memory and a 128GB SSD. This brought the total of the iPad Pro + Pencil + keyboard within $90 of the Surface Pro and matching accessories.
I should mention at the outset that with the pending release of iOS 11 due in the fall, the Apple iPad Pro line could undergo enough of a platform upgrade to change some of the points in this story. At that time, we can reevaluate our stance and conclusions.
Let’s start our editorial with a comparison of the hardware being tested in the specification department. Knowing that we are looking at two ARM-based devices and an x86 system, we should realize that core counts, clocks, and the like are even less comparable and relatable than in the Intel/AMD debates. However, it does give us a good bearing on how the hardware landscape looks when we get into the benchmarking section of this story.
|Surface Pro (2017) vs. iPad Pro (2017) Comparison|
|Processor||Intel Core i5-7300U (Kaby Lake)||Apple A10X Fusion (3x high-performance Hurricane + 3x high-efficiency Zephyr cores)|
|Graphics||Intel HD Graphics 620||12-core Custom PowerVR|
|Screen||12.3-in 2736x1824 IPS||12.9-in 2732x2048 IPS 120 Hz / 10.5-in 2224x1668 IPS 120 Hz|
|Camera|| ||12MP Rear + OIS|
|Battery||45 Wh||12.9-in: 41 Wh / 10.5-in: 30.4 Wh|
|Dimensions||11.50-in x 7.93-in x 0.33-in||12.9-in: 12.04-in x 8.69-in x 0.27-in / 10.5-in: 9.87-in x 6.85-in x 0.24-in|
|OS||Windows 10||iOS 10|
|Price||$999 - Amazon.com||12.9-in: $899 / 10.5-in: $749 - Amazon.com|
A Premium Mechanical Option Under $100
In the past year or two we have seen a number of sub-$100 mechanical gaming keyboards hit the market, and several of these have passed through our hands here at the PC Perspective offices. The latest to garner our attention is the ZALMAN ZM-K900M, a premium gaming design featuring RGB lighting effects and Kailh Blue key switches, along with a 1000 Hz polling rate and full N-key rollover. It retails for $89.99, though it can currently be found for as little as $79.99 with a little googling. How impressive is it in person? Read on to find out!
The ZM-K900M offers a variety of RGB effects
The ZM-K900M certainly checks the right boxes as a gaming keyboard: the above-mentioned 1000 Hz polling rate (which ZALMAN calls 'Z-Engine'), customizable RGB lighting, simultaneous key presses across the full 104 keys, and programmable macro keys. All of the keyboard's features are controlled via hot keys on the ZM-K900M itself, eliminating the need for software.
“The ZM-K900M requires no software installation and is universally compatible with any operating system. The macros automatically remember the time interval between the inputs and run exactly as you typed. The keyboard stores the data inside the keyboard so you can instantly run your macros on any computer.”
Features and specifications from ZALMAN:
- Simple and minimal design
- Equipped with Z-Engine
- Supports USB and PS/2 connection
- Intelligent hardware macro with option to add mouse clicks
- Multimedia hotkeys
- 4-stage macro speed adjustment
- 6-key and N-Key rollover
- Option to lock Windows key or entire keyboard
- High quality laser-etched keycaps
- Model: ZM-K900M
- Keyboard Layout: 104-key
- Key Switch: Kailh Blue mechanical key switch
- Keyboard Matrix: USB & PS/2 N key rollover (anti-ghost function)
- Key cap type: Step Sculpture 2
- Interface: USB
- Cable length: 5.6 ft
- Dimensions: 17.32 x 5.51 x 1.34 inches, 2.75 lbs
- ZALMAN ZM-K900M Keyboard: $89.99 - Amazon.com
A few months ago at Computex, NVIDIA announced their "GeForce GTX with Max-Q Design" initiative. Essentially, the heart of this program is the use of specifically binned GTX 1080, 1070 and 1060 GPUs. These GPUs have been tested and selected during the manufacturing process to ensure lower power draw at the same performance levels when compared to the GPUs used in more traditional form factors like desktop graphics cards.
In order to gain access to these "Max-Q" binned GPUs, notebook manufacturers have to meet specific NVIDIA guidelines on noise levels at thermal load (sub-40 dBA). To be clear, NVIDIA doesn't seem to be offering partners reference notebook designs (as demonstrated by the variability in design across the Max-Q notebooks), but rather ideas on how they can accomplish the given goals.
At the show, NVIDIA and some of their partners showed off several Max-Q notebooks. We hope to take a look at all of these machines in the coming weeks, but today we're focusing on one of the first, the ASUS ROG Zephyrus.
|ASUS ROG Zephyrus (configuration as reviewed)|
|Processor||Intel Core i7-7700HQ (Kaby Lake)|
|Graphics||NVIDIA GeForce GTX 1080 with Max-Q Design (8GB)|
|Memory||24GB DDR4 (8GB Soldered + 8GBx2 DIMM)|
|Screen||15.6-in 1920x1080 120Hz G-SYNC|
|Storage||512GB Samsung SM961 NVMe|
|Ports||4 x USB 3.0, audio combo jack|
|Power||50 Wh Battery, 230W AC Adapter|
|Dimensions||378.9mm x 261.9mm x 17.01-17.78mm (14.92" x 10.31" x 0.67"-0.70")|
|Weight||4.94 lbs (2240.7 g)|
|OS||Windows 10 Home|
|Price||$2700 - Amazon.com|
As you can see, the ASUS ROG Zephyrus has the specifications of a high-end gaming desktop, let alone a gaming notebook. In some gaming notebook designs, the bottleneck comes down to CPU horsepower more than GPU horsepower. That doesn't seem to be the case here. The powerful GTX 1080 GPU is paired with a quad-core, Hyper-Threaded Intel processor capable of boosting up to 3.8 GHz.
Kal Simpson recently had the chance to sit down for an extensive interview with Alex Mayberry, Chief Product Officer at Cognitive Code, the company behind SILVIA. Cognitive Code specializes in conversational AI that can be adapted to a variety of platforms and applications. Kal's comments are in bold while Alex's are in italics.
Always good to speak with you, Alex. Whether it's the latest triple-A video game release or the progress being made in changing the way we play (virtual reality, for instance), your views and developments within the gaming space as a whole remain impressive. Before we begin, I’d like to give the audience a brief flashback of your career history. Prominent within the video game industry, you’ve been involved with many, many titles, primarily within the PC gaming space: Quake 2: The Reckoning, America’s Army, a plethora of World of Warcraft titles.
Those more familiar with your work know you as the lead game producer for Diablo 3 / Reaper of Souls, as well as the executive producer for Star Citizen. The former of which we spoke on during the release of the game for PC, PlayStation 4 and the Xbox One, back in 2014.
So I ask, given your huge involvement with some of the most popular titles, what sparked your interest within the development of intelligent computing platforms? No-doubt the technology can be adapted to applications within gaming, but what’s the initial factor that drove you to Cognitive Code – the SILVIA technology?
AM: Conversational intelligence was something that I had never even thought about in terms of game development. My experience arguing with my Xbox and trying to get it to change my television channel left me pretty sceptical about the technology. But after leaving Star Citizen, my paths crossed with Leslie Spring, the CEO and Founder of Cognitive Code, and the creator of the SILVIA platform. Initially, Leslie was helping me out with some engineering work on VR projects I was spinning up. After collaborating for a bit, he introduced me to his AI, and I became intrigued by it. Although I was still very focused on VR at the time, my mind kept drifting to SILVIA.
I kept pestering Leslie with questions about the technology, and he continued to share some of the things that it could do. It was when I saw one of his game engine demos showing off a sci-fi world with freely conversant robots that the light went on in my head, and I suddenly got way more interested in artificial intelligence. At the same time, I was discovering challenges in VR that needed solutions. Not having a keyboard in VR creates an obstacle for capturing user input, and floating text in your field of view is really detrimental to the immersion of the experience. Also, when you have life-size characters in VR, you naturally want to speak to them. This is when I got interested in using SILVIA to introduce an entirely new mechanic to gaming and interactive entertainment. No more do we have to rely on conversation trees and scripted responses.
No more do we have to read a wall of text from a quest giver. With this technology, we can have a realistic and free-form conversation with our game characters, and speak to them as if they are alive. This is such a powerful tool for interactive storytelling, and it will allow us to breathe life into virtual characters in a way that’s never before been possible. Seeing the opportunity in front of me, I joined up with Cognitive Code and have spent the last 18 months exploring how to design conversationally intelligent avatars. And I’ve been having a blast doing it.
Introduction and Technical Specifications
Courtesy of ASUS
With the latest revision of the TUF line, ASUS made the decision to drop the well-known "Sabertooth" moniker from the board's name, using the TUF branding only. The TUF Z270 Mark 1 motherboard is the flagship board in ASUS' TUF (The Ultimate Force) product line designed around the Intel Z270 chipset. The board offers support for the latest Intel Kaby Lake processor line as well as dual-channel DDR4 memory. While the MSRP for the board may be a bit higher than expected, its $239 price is more than justified by the board's build quality and "armored" offerings.
Courtesy of ASUS
Courtesy of ASUS
Courtesy of ASUS
Courtesy of ASUS
The TUF Z270 Mark 1 motherboard is built with the same quality and attention to detail that you've come to expect from TUF-branded motherboards. Its appearance follows the standard TUF design, with a tan plastic armor overlay on top, and the board is driven by a 10-phase digital power system. ASUS also chose to include the armored backplate with the TUF Z270 Mark 1 motherboard, dubbed the "TUF Fortifier". The board contains the following integrated features: six SATA 3 ports; two M.2 PCIe x4 capable ports; dual GigE controllers - an Intel I219-V Gigabit NIC and an Intel I211 Gigabit NIC; three PCI-Express x16 slots; three PCI-Express x1 slots; an 8-channel audio subsystem; MEM OK! and USB BIOS Flashback buttons; integrated DisplayPort and HDMI; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.
Specifications and Design
Just a couple of short weeks ago we looked at the Radeon Vega Frontier Edition 16GB graphics card in its air-cooled variety. The results were interesting: gaming performance proved to fall somewhere between the GTX 1070 and the GTX 1080 from NVIDIA’s current generation of GeForce products. That falls below many of the estimates from players in the market, including media, fans, and enthusiasts. But before we get to the RX Vega product family that is targeted at gamers, AMD has another data point for us to look at with a water-cooled version of Vega Frontier Edition. At a $1500 MSRP, which we shelled out ourselves, we are very interested to see how it changes the face of performance for the Vega GPU and architecture.
Let’s start with a look at the specifications of this version of the Vega Frontier Edition, which will be…familiar.
|Vega Frontier Edition (Liquid)||Vega Frontier Edition||Titan Xp||GTX 1080 Ti||Titan X (Pascal)||GTX 1080||TITAN X||GTX 980||R9 Fury X|
|Base Clock||1382 MHz||1382 MHz||1480 MHz||1480 MHz||1417 MHz||1607 MHz||1000 MHz||1126 MHz||1050 MHz|
|Boost Clock||1600 MHz||1600 MHz||1582 MHz||1582 MHz||1480 MHz||1733 MHz||1089 MHz||1216 MHz||-|
|Memory Clock||1890 MHz||1890 MHz||11400 MHz||11000 MHz||10000 MHz||10000 MHz||7000 MHz||7000 MHz||1000 MHz|
|Memory Interface||2048-bit HBM2||2048-bit HBM2||384-bit G5X||352-bit||384-bit G5X||256-bit G5X||384-bit||256-bit||4096-bit (HBM)|
|Memory Bandwidth||483 GB/s||483 GB/s||547.7 GB/s||484 GB/s||480 GB/s||320 GB/s||336 GB/s||224 GB/s||512 GB/s|
|TDP||300 watts||250 watts||250 watts||250 watts||250 watts||180 watts||250 watts||165 watts||275 watts|
|Peak Compute||13.1 TFLOPS||13.1 TFLOPS||12.0 TFLOPS||10.6 TFLOPS||10.1 TFLOPS||8.2 TFLOPS||6.14 TFLOPS||4.61 TFLOPS||8.60 TFLOPS|
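As a quick sanity check on the peak compute row above: a GPU's peak FP32 throughput is shaders × 2 ops per FMA × clock. Assuming Vega 10's published 4096-stream-processor configuration (the shader count is not in the table and is our addition) and the 1600 MHz boost clock from the table:

```python
# Sanity-check the 13.1 TFLOPS peak-compute figure for Vega Frontier Edition.
# Shader count is Vega 10's published configuration (our assumption here);
# the boost clock comes from the spec table.

stream_processors = 4096
boost_clock_ghz = 1.600
flops_per_clock = 2          # one fused multiply-add = 2 floating-point ops

peak_tflops = stream_processors * flops_per_clock * boost_clock_ghz / 1000
print(f"Peak FP32 compute: {peak_tflops:.1f} TFLOPS")
```

The result lands on the 13.1 TFLOPS figure AMD quotes, which also shows why both Frontier Edition models list identical peak compute: the rated boost clock is unchanged.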
The base specs remain unchanged and AMD lists the same memory frequency and even GPU clock rates across both models. In practice though, the liquid cooled version runs at higher sustained clocks and can overclock a bit easier as well (more details later). What does change with the liquid cooled version is a usable BIOS switch on top of the card that allows you to move between two distinct power draw states: 300 watts and 350 watts.
First, it’s worth noting this is a change from the “375 watt” TDP that this card was listed at during the launch and announcement. AMD was touting a 300-watt and 375-watt version of Frontier Edition, but it appears the company backed off a bit on that, erring on the side of caution to avoid breaking any of the specifications of PCI Express (board slot or auxiliary connectors). Even more concerning is that AMD chose to have the default state of the switch on the Vega FE Liquid card at 300 watts rather than the more aggressive 350 watts. AMD claims this is to avoid any problems with lower quality power supplies that may struggle to supply slightly over 150 watts of power draw (and the resulting current) through the 8-pin power connections. I would argue that any system that is going to host a $1500 graphics card can and should be prepared to provide the necessary power, but for the professional market, AMD leans towards caution. (It’s worth pointing out that the RX 480 power issues that may have prompted this internal decision making were more problematic because they impacted the power delivery through the motherboard, while the 6- and 8-pin connectors are generally much safer to exceed the ratings on.)
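For reference, the PCI Express power budget arithmetic behind those numbers looks like this; the spec limits are standard, while the even split of draw across the two 8-pin connectors is a simplifying assumption on our part:

```python
# PCI Express power budget: 75 W from the slot plus 150 W per 8-pin connector.
# The even-split model below is a simplification; real cards balance draw
# between the slot and connectors differently.

SLOT_W = 75
EIGHT_PIN_W = 150

spec_ceiling = SLOT_W + 2 * EIGHT_PIN_W
print(f"Spec ceiling with two 8-pin connectors: {spec_ceiling} W")

for board_power in (300, 350):  # the card's two BIOS-switch states
    per_connector = (board_power - SLOT_W) / 2
    print(f"{board_power} W mode -> ~{per_connector:.1f} W per 8-pin connector")
```

The 375-watt figure AMD originally quoted is exactly the combined spec ceiling, which helps explain why the shipping card retreats to 350 watts for headroom.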
Even without clock speed changes, the move to water cooling should result in better and more consistent performance by removing the overheating concerns that surrounded our first Radeon Vega Frontier Edition review. But let’s dive into the card itself and see how the design process created a unique liquid cooled solution.
Introduction and Specifications
The ZenBook 3 UX390UA is a 12.5-inch thin-and-light which offers a 1920x1080 IPS display, choice of 7th-generation Intel Core i5 or Core i7 processors, 16GB of DDR4 memory, and a roomy 512GB PCIe SSD. It also features just a single USB Type-C port, eschewing additional I/O in the vein of recent Apple MacBooks (more on this trend later in the review). How does it stack up? I had the pleasure of using it for a few weeks and can offer my own usage impressions (along with those ever-popular benchmark numbers) to try answering that question.
A thin-and-light (a.k.a. ‘Ultrabook’) is certainly an attractive option when it comes to portability, and the ZenBook 3 delivers with a slim 0.5-inch thickness and 2 lb weight from its aluminum frame. Another aspect of thin-and-light designs is the typically low-power processor, though the “U” series in Intel’s 7th-generation lineup still offers good performance numbers for portable machines. Looking at the spec sheet it is clear that ASUS paid attention to performance with this ZenBook, and we will see later on if a good balance has been struck between performance and battery life.
Our review unit was equipped with a Core i7-7500U processor, a 2-core/4-thread part with a 15W TDP and speeds ranging from 2.70 - 3.50 GHz, along with the above-mentioned 16GB of RAM and 512GB SSD. With an MSRP of $1599 for this configuration it faces some stiff competition from the likes of the Dell XPS line and recent Lenovo ThinkPad and Apple MacBook offerings, though it can of course be found for less than its MSRP (and this configuration currently sells on Amazon for about $1499). The ZenBook 3 certainly offers style if you are into blade-like aluminum designs, and, while not a touchscreen, nothing short of Gorilla Glass 4 was employed to protect the LCD display.
“ZenBook 3’s design took some serious engineering prowess and craftsmanship to realize. The ultra-thin 11.9mm profile meant we had to invent the world’s most compact laptop hinge — just 3mm high — to preserve its sleek lines. To fit in the full-size keyboard, we had to create a surround that’s just 2.1mm wide at the edges, and we designed the powerful four-speaker audio system in partnership with audiophile specialists Harman Kardon. ZenBook is renowned for its unique, stunning looks, and you’ll instantly recognize the iconic Zen-inspired spun-metal finish on ZenBook 3’s all-metal unibody enclosure — a finish that takes 40 painstaking steps to create. But we’ve added a beautiful twist, using a special 2-phase anodizing process to create stunning golden edge highlights. To complete this sophisticated new theme, we’ve added a unique gold ASUS logo and given the keyboard a matching gold backlight.”
Just a little taste
In a surprise move with no real indication as to why, AMD has decided to reveal some of the most exciting and interesting information surrounding Threadripper and Ryzen 3, both due out in just a few short weeks. AMD CEO Lisa Su and CVP of Marketing John Taylor (along with guest star Robert Hallock) appear in a video launching on the AMD YouTube channel today to divulge the naming, clock speeds, and pricing for the new flagship HEDT product line under the Ryzen brand.
We already know a lot about Threadripper, AMD’s answer to the X299/X99 high-end desktop platforms from Intel, including that it would be coming this summer, have up to 16 cores and 32 threads of compute, and include 64 lanes of PCI Express 3.0 for a massive amount of connectivity for the prosumer.
Now we know that there will be two models launching and available in early August: the Ryzen Threadripper 1920X and the Ryzen Threadripper 1950X.
|Core i9-7980XE||Core i9-7960X||Core i9-7940X||Core i9-7920X||Core i9-7900X||Core i7-7820X||Core i7-7800X||Threadripper 1950X||Threadripper 1920X|
|Base Clock||?||?||?||?||3.3 GHz||3.6 GHz||3.5 GHz||3.4 GHz||3.5 GHz|
|Turbo Boost 2.0||?||?||?||?||4.3 GHz||4.3 GHz||4.0 GHz||4.0 GHz||4.0 GHz|
|Turbo Boost Max 3.0||?||?||?||?||4.5 GHz||4.5 GHz||N/A||N/A||N/A|
|Cache||16.5MB (?)||16.5MB (?)||16.5MB (?)||16.5MB (?)||13.75MB||11MB||8.25MB||40MB||?|
|Memory||DDR4-2666 Quad Channel|
|TDP||165 watts (?)||165 watts (?)||165 watts (?)||165 watts (?)||140 watts||140 watts||140 watts||180 watts||180 watts|
|Threadripper 1950X||Threadripper 1920X||Ryzen 7 1800X||Ryzen 7 1700X||Ryzen 7 1700||Ryzen 5 1600X||Ryzen 5 1600||Ryzen 5 1500X||Ryzen 5 1400|
|Base Clock||3.4 GHz||3.5 GHz||3.6 GHz||3.4 GHz||3.0 GHz||3.6 GHz||3.2 GHz||3.5 GHz||3.2 GHz|
|Turbo/Boost Clock||4.0 GHz||4.0 GHz||4.0 GHz||3.8 GHz||3.7 GHz||4.0 GHz||3.6 GHz||3.7 GHz||3.4 GHz|
|Memory||DDR4-2666 Quad Channel||DDR4-2400 Dual Channel|
|TDP||180 watts||180 watts||95 watts||95 watts||65 watts||95 watts||65 watts||65 watts||65 watts|
To say that the consumer wired networking market has stagnated would be an understatement. While we've seen generational improvements on NICs from companies like Intel, and companies like Rivet trying to add their own unique spin on things with their Killer products, the basic idea has remained mostly unchanged.
And for its time, Gigabit networking was an amazing thing. In the era of hard drive-based storage as your only option, 100 MB/s seemed like a great data transfer speed for your home network — who could want more?
Now that we've moved well into the era of flash-based storage technologies capable of upwards of 3 GB/s transfer speeds, and even high capacity hard drives hitting the 200 MB/s category, Gigabit networking is a frustrating bottleneck when trying to move files from PC to PC.
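A rough worked example makes the bottleneck concrete; the 50 GB file size is an arbitrary illustration, and these are raw line rates that ignore protocol overhead, so real transfers run somewhat slower:

```python
# Time to move a large file across each link speed, at raw line rate.
# The 50 GB example size is arbitrary; overhead is ignored for simplicity.

FILE_GB = 50

def transfer_seconds(file_gb: float, link_gbps: float) -> float:
    """Seconds to move file_gb gigabytes over a link_gbps line rate."""
    return file_gb * 8 / link_gbps  # GB -> gigabits, divided by line rate

for link in (1, 10):
    t = transfer_seconds(FILE_GB, link)
    print(f"{link} GbE: ~{t / 60:.1f} minutes")
```

Over Gigabit that copy ties up the link for several minutes, while a single fast NVMe drive could in principle read the same data in well under a minute, which is exactly the mismatch described above.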
For the enterprise market, there has been a solution to this for a long time. 10 Gigabit networking has been available in enterprise equipment for over 10 years, and it is even old news there, with faster specifications like 40 and 100 Gbps interfaces now available.
So why, then, are consumers mostly stuck at 1 Gbps? As is the case with most enterprise technologies, 10 Gigabit equipment still carries a high premium compared to its slower sibling. In fact, we've only just started to see enterprise-level 10 Gigabit NICs integrated on consumer motherboards, like the ASUS X99-E 10G WS at a staggering $650 price point.
However, there is hope. Companies like Aquantia are starting to aggressively push down the price point of 10 Gigabit networking, which brings us to the product we are taking a look at today — the ASUS XG-C100C 10 Gigabit Network Adapter.
A massive lineup
The amount and significance of the product and platform launches occurring today with the Intel Xeon Scalable family is staggering. Intel is launching more than 50 processors and 7 chipsets falling under the Xeon Scalable product brand, targeting data centers and enterprise customers in a wide range of markets and segments. From SMB users to “Super 7” data center clients, the new lineup of Xeon parts is likely to have an option targeting each of them.
All of this comes at an important point in time, with AMD fielding its new EPYC family of processors and platforms, for the first time in nearly a decade becoming competitive in the space. That decade of clear dominance in the data center has been good to Intel, giving it the ability to bring in profits and high margins without the direct fear of a strong competitor. Intel did not spend those 10 years flat-footed though, and instead it has been developing complementary technologies including new Ethernet controllers, ASICs, Omni-Path, FPGAs, solid state storage tech and much more.
Our story today will give you an overview of the new processors and the changes that Intel’s latest Xeon architecture offers to business customers. The Skylake-SP core has some significant upgrades over the Broadwell design before it, but in other aspects the processors and platforms will be quite similar. What changes can you expect with the new Xeon family?
Per-core performance has been improved with the updated Skylake-SP microarchitecture and a new cache memory hierarchy that we had a preview of with the Skylake-X consumer release last month. The memory and PCIe interfaces have been upgraded with more channels and more lanes, giving the platform more flexibility for expansion. Socket-level performance also goes up with higher core counts available and the improved UPI interface that makes socket to socket communication more efficient. AVX-512 doubles the peak FLOPS/clock on Skylake over Broadwell, beneficial for HPC and analytics workloads. Intel QuickAssist improves cryptography and compression performance to allow for faster connectivity implementation. Security and agility get an upgrade as well with Boot Guard, RunSure, and VMD for better NVMe storage management. While on the surface this is a simple upgrade, there is a lot that gets improved under the hood.
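The "doubles the peak FLOPS/clock" claim can be worked out for double precision; the vector widths and FMA-unit counts below reflect Broadwell's AVX2 and Skylake-SP's AVX-512 on the 2-FMA SKUs:

```python
# Per-core peak double-precision FLOPS per clock, Broadwell vs. Skylake-SP.
# One fused multiply-add (FMA) counts as 2 floating-point operations.

def dp_flops_per_clock(vector_bits: int, fma_units: int) -> int:
    lanes = vector_bits // 64           # 64-bit doubles per vector register
    return lanes * fma_units * 2        # x2: multiply + add in each FMA

broadwell = dp_flops_per_clock(256, 2)   # AVX2, 2 FMA units
skylake_sp = dp_flops_per_clock(512, 2)  # AVX-512, 2 FMA units (top SKUs)
print(f"Broadwell: {broadwell} DP FLOPS/clock, Skylake-SP: {skylake_sp}")
```

Note that, per the SKU breakdown later in this article, only the parts with 2 FMA units per core (Platinum and 61xx Gold) see the full doubling; the 1-FMA parts land in between.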
We already had a good look at the new mesh architecture used for the inter-core component communication. This transition away from the ring bus that was in use since Nehalem gives Skylake-SP a couple of unique traits: slightly longer latencies but with more consistency and room for expansion to higher core counts.
Intel has changed the naming scheme with the Xeon Scalable release, moving away from “E5/E7” and “v4” to a Platinum, Gold, Silver, Bronze nomenclature. The product differentiation remains much the same, with the Platinum processors offering the highest feature support including 8-sockets, highest core counts, highest memory speeds, connectivity options and more. To be clear: there are a lot of new processors, and trying to create an easy-to-read table of features and clocks is nearly impossible. The highlights of the different families are:
- Xeon Platinum (81xx)
- Up to 28 cores
- Up to 8 sockets
- Up to 3 UPI links
- 6-channel DDR4-2666
- Up to 1.5TB of memory
- 48 lanes of PCIe 3.0
- AVX-512 with 2 FMA per core
- Xeon Gold (61xx)
- Up to 22 cores
- Up to 4 sockets
- Up to 3 UPI links
- 6-channel DDR4-2666
- AVX-512 with 2 FMA per core
- Xeon Gold (51xx)
- Up to 14 cores
- Up to 2 sockets
- 2 UPI links
- 6-channel DDR4-2400
- AVX-512 with 1 FMA per core
- Xeon Silver (41xx)
- Up to 12 cores
- Up to 2 sockets
- 2 UPI links
- 6-channel DDR4-2400
- AVX-512 with 1 FMA per core
- Xeon Bronze (31xx)
- Up to 8 cores
- Up to 2 sockets
- 2 UPI links
- No Turbo Boost
- 6-channel DDR4-2133
- AVX-512 with 1 FMA per core
That’s…a lot. And it only gets worse when you start to look at the entire SKU lineup with clocks, Turbo Speeds, cache size differences, etc. It’s easy to see why the simplicity argument that AMD made with EPYC is so attractive to an overwhelmed IT department.
Two sub-categories exist with the T or F suffix. The former indicates a 10-year life cycle (thermal specific) while the F indicates units that integrate the Omni-Path fabric on package. M models can address 1.5TB of system memory. The diagram above, which you should click to see a larger view, shows the scope of the Xeon Scalable launch in a single slide. This release offers buyers flexibility, but at the expense of configuration complexity.
There has been a lot of news lately about the release of cryptocurrency-specific graphics cards from both NVIDIA and AMD add-in board partners. While we covered the current cryptomining phenomenon in an earlier article, today we are taking a look at one of these cards geared towards miners.
It's worth noting that I purchased this card myself from Newegg, and neither AMD nor Sapphire is involved in this article. I saw this card pop up on Newegg a few days ago, and my curiosity got the best of me.
There has been a lot of speculation, and little official information from vendors about what these mining cards will actually entail.
From the outward appearance, it is virtually impossible to distinguish this "new" RX 470 from the previous Sapphire Nitro+ RX 470, besides the lack of additional display outputs beyond the DVI connection. Even the branding and labels on the card identify it as a Nitro+ RX 470.
In order to test the hashing rates of this GPU, we are using Claymore's Dual Miner Version 9.6 (mining Ethereum only) against a reference design RX 470, also from Sapphire.
On the reference RX 470 out of the box, we hit rates of about 21.8 MH/s while mining Ethereum.
Once we moved to the Sapphire mining card, rates rose to at least 24 MH/s right from the start.
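From those two measurements, the uplift works out as a simple percentage:

```python
# Relative Ethereum hash-rate uplift of the mining card over the reference
# RX 470, using the two measured figures above.

reference_mhs = 21.8
mining_card_mhs = 24.0

uplift = (mining_card_mhs - reference_mhs) / reference_mhs * 100
print(f"Mining card uplift: ~{uplift:.0f}%")
```

Roughly a 10% out-of-the-box improvement, before any manual tuning of clocks or memory timings.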
Introduction and Technical Specifications
Courtesy of GIGABYTE
For the launch of the Intel X299 chipset motherboards, GIGABYTE chose their AORUS brand to lead the charge. The AORUS branding differentiates the enthusiast and gamer-friendly products from other GIGABYTE product lines, similar to how ASUS uses the ROG branding to differentiate their high performance product line. The X299 AORUS Gaming 3 is among GIGABYTE's initial release boards offering support for the latest Intel HEDT chipset and processor line. Built around the Intel X299 chipset, the board supports the Intel LGA2066 processor line, including the Skylake-X and Kaby Lake-X processors, with support for Quad-Channel DDR4 memory running at 2666MHz. The X299 AORUS Gaming 3 can be found in retail with an MSRP of $279.99.
Courtesy of GIGABYTE
GIGABYTE integrated the following features into the X299 AORUS Gaming 3 motherboard: eight SATA III 6Gbps ports; two M.2 PCIe Gen3 x4 32Gbps capable ports with Intel Optane support built-in; an Intel I219-V Gigabit RJ-45 port; five PCI-Express x16 slots; Realtek® ALC1220 8-Channel audio subsystem; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.
Courtesy of GIGABYTE
To power the board, GIGABYTE integrated a 9-phase digital power delivery system into the X299 AORUS Gaming 3's design. The digital power system was designed with IR digital power controllers and PowIRstage ICs, Server Level Chokes, and Durable Black capacitors.
Courtesy of GIGABYTE
Designed to withstand the punishment of even the largest video cards, GIGABYTE's Ultra Durable PCIe Armor gives added strength and retention force to the primary and secondary PCIe x16 video card slots (PCIe X16 slots 1 and 3). The PCIe slots are reinforced with a metal overlay that is anchored to the board, giving the slot better hold capabilities (both side-to-side and card retention) when the board is used in a vertical orientation.
A long time coming
External video cards for laptops have long been a dream of many PC enthusiasts, and for good reason. It’s compelling to have a thin-and-light notebook with great battery life for things like meetings or class, with the ability to plug it into a dock at home and enjoy your favorite PC games.
Many times we have been promised that external GPUs for notebooks would be a viable option. Over the years there have been many commercial solutions involving both industry standard protocols like ExpressCard, as well as proprietary connections to allow you to externally connect PCIe devices. Enterprising hackers have also tried their hand at this for many years, cobbling together interesting solutions using mPCIe and M.2 ports on their notebooks which were meant for other devices.
With the introduction of Intel’s Thunderbolt standard in 2011, there was a hope that we would finally achieve external graphics nirvana. A modern, Intel-backed protocol promising PCIe x4 speeds (PCIe 2.0 at that point) sounded like it would be ideal for connecting GPUs to notebooks, and in some ways it was. Once again the external graphics communities managed to get it to work through the use of enclosures meant to connect other non-GPU PCIe devices such as RAID and video capture cards to systems. However, software support was still a limiting factor. You were required to use an external monitor to display your video, and it still felt like you were just riding the line between usability and a total hack. It felt like we were never going to get true universal support for external GPUs on notebooks.
Then, seemingly out of nowhere, Intel decided to promote native support for external GPUs as a priority when they introduced Thunderbolt 3. Fast forward, and we've already seen a much larger adoption of Thunderbolt 3 on PC notebooks than we ever did with the previous Thunderbolt implementations. Taking all of this into account, we figured it was time to finally dip our toes into the eGPU market.
For our testing, we decided on the AKiTio Node for several reasons. First, at around $300, it's by far the lowest cost enclosure built to support GPUs. Additionally, it seems to be one of the most compatible devices currently on the market according to the very helpful comparison chart over at eGPU.io. The eGPU site is a wonderful resource for everything external GPU, over any interface possible, and I would highly recommend heading over there to do some reading if you are interested in trying out an eGPU for yourself.
The Node unit itself is a very utilitarian design. Essentially you get a folded sheet metal box with a Thunderbolt controller and 400W SFX power supply inside.
In order to install a GPU into the Node, you must first unscrew the enclosure from the back and slide the outer shell off of the device.
Once inside, we can see that there is ample room for any graphics card you might want to install in this enclosure. In fact, it seems a little too large for any of the GPUs we installed, including GTX 1080 Ti models. Here, you can see a more reasonable RX 570 installed.
Beyond opening up the enclosure to install a GPU, there is very little configuration required. My unit required a firmware update, but that was easily applied with the tools from the AKiTio site.
From here, I simply connected the Node to a ThinkPad X1, installed the NVIDIA drivers for our GTX 1080 Ti, and everything seemed to work — including using the 1080 Ti with the integrated notebook display and no external monitor!
Now that we've got the Node working, let's take a look at some performance numbers.
Two Vegas...ha ha ha
When the preorders for the Radeon Vega Frontier Edition went up last week, I made the decision to place orders in a few different locations to make sure we got it in as early as possible. Well, as it turned out, we actually had the cards show up very quickly…from two different locations.
So, what is a person to do if TWO of the newest, most coveted GPUs show up on their doorstep? After you do the first, full review of the single-GPU iteration, you plug both into your system and do some multi-GPU CrossFire testing!
There of course needs to be some discussion up front about this testing and our write-up. If you read my first review of the Vega Frontier Edition, you will clearly note my stance on the ideas that “this is not a gaming card” and that “the drivers aren’t ready.” Essentially, I said these potential excuses for performance were distractions and unwarranted given the current state of Vega development and the proximity of the consumer iteration, Radeon RX Vega.
But for multi-GPU, it’s a different story. Both competitors in the GPU space will tell you that developing drivers for CrossFire and SLI is incredibly difficult. Much more than simply splitting the work across different processors, multi-GPU requires extra attention to specific games, game engines, and effects rendering that are not required in single GPU environments. Add to that the fact that the market size for CrossFire and SLI has been shrinking, from an already small state, and you can see why multi-GPU is going to get less attention from AMD here.
Even more, when CrossFire and SLI support gets a focus from the driver teams, it is often late in the process, nearly last in the list of technologies to address before launch.
With that in mind, we should all understand that the results we are going to show you might be indicative of CrossFire scaling when Radeon RX Vega launches, but they very well might not be. I would look at the data we are presenting today as the “current state” of CrossFire for Vega.
Performance not two-die four.
When designing an integrated circuit, you are attempting to fit as much complexity as possible within your budget of space, power, and so forth. One harsh limitation for GPUs is that, while your workloads could theoretically benefit from more and more processing units, the number of usable chips from a batch shrinks as designs grow, and the reticle limit of a fab’s manufacturing node is basically a brick wall.
What’s one way around it? Split your design across multiple dies!
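The yield pressure described above can be sketched with a toy Poisson defect model; the defect density and die areas below are hypothetical values chosen purely for illustration:

```python
import math

def die_yield(area_mm2, d0=0.001):
    # Simple Poisson defect model: P(defect-free die) = exp(-area * defect_density).
    # d0 (defects per mm^2) is a hypothetical illustrative value.
    return math.exp(-area_mm2 * d0)

mono = die_yield(600)      # one big 600 mm^2 monolithic die
module = die_yield(150)    # one 150 mm^2 module of a four-die design

print(f"monolithic die yield: {mono:.1%}")   # ~54.9%
print(f"per-module yield:     {module:.1%}") # ~86.1%
# A package assembled from four known-good 150 mm^2 modules only discards
# the ~14% of small modules that fail, instead of scrapping ~45% of the
# big dies -- and the big die can't grow past the reticle limit at all.
```

The model is crude, but it captures why shipping several small dies beats one huge die: defects scrap much less silicon, and the reticle ceiling stops applying to the aggregate design.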
NVIDIA published a research paper discussing just that. Their diagram shows two configurations. In the first, the GPU is a single, typical die surrounded by four stacks of HBM, like GP100; the second breaks the GPU into five dies, four GPU modules and an I/O controller, with each GPU module attached to a pair of HBM stacks.
NVIDIA ran simulations to determine how this chip would perform, and, across various workloads, they found that it outperformed the largest possible single-chip GPU by about 45.5%. They also scaled up the single-chip design until it had the same number of compute units as the multi-die design, even though this wouldn’t work in the real world because no fab could actually lithograph it. Regardless, that hypothetical, impossible design was only ~10% faster than the actually-possible multi-chip one, showing that, according to their simulation, the overhead of splitting the design is only around that much. The multi-die design was also faster than the multi-card equivalent by 26.8%.
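Taking the paper's percentages at face value, the three figures can be normalized to the largest buildable single-chip GPU to see how the configurations stack up relative to one another:

```python
# All values relative to the largest buildable single-chip GPU (= 1.0).
mcm = 1.455                  # multi-die package: +45.5% over the buildable chip
hypothetical = mcm * 1.10    # the un-buildable monolithic design is ~10% faster still
multi_card = mcm / 1.268     # the multi-die package beats multi-card by 26.8%

print(f"hypothetical monolithic: {hypothetical:.2f}x")  # ~1.60x
print(f"multi-card equivalent:   {multi_card:.2f}x")    # ~1.15x
```

In other words, per these simulations, the multi-die package captures most of the benefit of a monolithic chip twice the buildable size, while a multi-card setup captures comparatively little of it.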
While NVIDIA’s simulations, run on 48 different benchmarks, have accounted for this, I still can’t visualize how this would work in an automated way. I don’t know how the design would automatically account for fetching data that’s associated with other GPU modules, as this would probably be a huge stall. That said, they spent quite a bit of time discussing how much bandwidth is required within the package, and figures of 768 GB/s to 3TB/s were mentioned, so it’s possible that it’s just the same tricks as fetching from global memory. The paper touches on the topic several times, but I didn’t really see anything explicit about what they were doing.
If you’ve been following the site over the last couple of months, you’ll note that this is basically the same as AMD is doing with Threadripper and EPYC. The main difference is that CPU cores are isolated, so sharing data between them is explicit. In fact, when that product was announced, I thought, “Huh, that would be cool for GPUs. I wonder if it’s possible, or if it would just end up being Crossfire / SLI.”
Apparently not? It should be possible?
I should note that I doubt this will be relevant for consumers. The GPU is the most expensive part of a graphics card. While the thought of four GP102-level chips working together sounds great for 4K (which is 4x1080p in resolution) gaming, quadrupling the expensive part sounds like a giant price-tag. That said, the market of GP100 (and the upcoming GV100) would pay five-plus digits for the absolute fastest compute device for deep-learning, scientific research, and so forth.
The only way I could see this working for gamers is if NVIDIA finds the sweet-spot for performance-to-yield (for a given node and time) and they scale their product stack with multiples of that. In that case, it might be cost-advantageous to hit some level of performance, versus trying to do it with a single, giant chip.
This is just my speculation, however. It’ll be interesting to see where this goes, whenever it does.
GTX 1060 keeps on kicking
Despite the market for graphics cards being disrupted by the cryptocurrency mining craze, board partners like Galax continue to build high quality options for gamers...if they can get their hands on them. We recently received a new Galax GTX 1060 EXOC White 6GB card that offers impressive performance and features as well as a visual style to help it stand out from the crowd.
We have worked with GeForce GTX 1060 graphics cards quite a bit on PC Perspective, so there is no need to dive into the history of the GPU itself. If you need a refresher on this GP106 GPU and where it stands in the pantheon of the current GPU market, check out my launch review of the GTX 1060 from last year. The release of AMD’s Radeon RX 580 did change things a bit in the market landscape, so that review might be worth looking at too.
Our quick review of the Galax GTX 1060 EXOC White will look at performance (briefly), overclocking, and cost. But first, let’s take a look at the card itself.
The Galax GTX 1060 EXOC White
As the name implies, the EXOC White card from Galax is both overclocked and uses a white fan shroud to add a little flair to the design. The PCB is a standard black color, but with the fan and back plate both a bright white, the card will be a point of interest for nearly any PC build. Pairing this with a white-accented motherboard, like the recent ASUS Prime series, would be an excellent visual combination.
The fans on the EXOC White have clear-ish white blades that are illuminated by the white LEDs that shine through the fan openings on the shroud.
The cooler that Galax has implemented is substantial, with three heatpipes used to distribute the load from the GPU across the fins. There is a 6-pin power connector (standard for the GTX 1060) but that doesn’t appear to hold back the overclocking capability of the GPU.
There is a lot of detail on the heatsink shroud – and either you like it or you don’t.
Galax has included a white backplate that doubles as both styling and a heatsink. I do think that, with most users’ cases showcasing the rear of the graphics card more than the front, a good quality back plate is a big selling point.
The output connectivity includes a pair of DVI ports, a full-size HDMI and a full-size DisplayPort; more than enough for nearly any buyer of this class of GPU.
Introduction and Features
Seasonic is in the process of overhauling their entire PC power supply lineup. They began with the introduction of the PRIME Series in late 2016 and are now introducing the new FOCUS family, which will include three different series ranging from 450W up to 850W output capacity with either Platinum or Gold efficiency certification. In this review we will be taking a detailed look at the new Seasonic FOCUS PLUS Gold (FX) 650W power supply. And just to prove that reviewers are not being sent hand-picked golden samples, our PSU was delivered straight from Newegg.com inventory.
The Seasonic FOCUS PLUS Gold series includes four models: 550W, 650W, 750W, and 850W. In addition to 80 Plus Gold certification, the FOCUS Plus (FX) series features a small footprint chassis (140mm deep), all modular cables, high quality components, and comes backed by a 10-year warranty.
• FOCUS PLUS Gold (FX) 550W: $79.90 USD
• FOCUS PLUS Gold (FX) 650W: $89.90 USD
• FOCUS PLUS Gold (FX) 750W: $99.90 USD
• FOCUS PLUS Gold (FX) 850W: $109.90 USD
Seasonic FOCUS PLUS 650W Gold (FX) PSU Key Features:
• 650W Continuous DC output at up to 50°C (715W peak)
• 80 PLUS Gold certified for high efficiency
• Small footprint: chassis measures just 140mm (5.5”) deep
• Micro Tolerance load regulation @ 1%
• Fully-modular cables
• DC-to-DC Voltage converters
• Single +12V output
• Multi-GPU Technology support
• Quiet 120mm Fluid Dynamic Bearing (FDB) cooling fan
• Haswell support
• Active Power Factor correction with Universal AC input (100 to 240 VAC)
• Safety protections: OPP, OVP, UVP, OCP, OTP and SCP
• 10-Year warranty
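The 80 PLUS Gold certification in the list above implies minimum efficiencies of 87%, 90%, and 87% at 20%, 50%, and 100% load (at 115V input). A quick sketch of what that means for worst-case wall draw on this 650W unit:

```python
# 80 PLUS Gold minimum efficiency thresholds at 115V input,
# keyed by fraction of rated load.
RATED_W = 650
gold_115v = {0.20: 0.87, 0.50: 0.90, 1.00: 0.87}

for load_frac, eff in gold_115v.items():
    dc_out = RATED_W * load_frac
    ac_in = dc_out / eff  # worst-case watts drawn from the wall at this load
    print(f"{dc_out:5.0f} W out -> {ac_in:5.1f} W in ({eff:.0%} min efficiency)")
```

So even at the full 650W rated load, a unit meeting the Gold minimums draws no more than about 747W from the wall; actual review measurements typically come in a bit better than the certification floor.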
An interesting night of testing
Last night I did our first-ever live benchmarking session using the just-arrived Radeon Vega Frontier Edition air-cooled graphics card. Purchasing the card directly from a reseller, rather than receiving a sample from AMD, gave us the opportunity to test a new flagship product without an NDA in place to keep us silenced, so I thought it would be fun to let the audience and community go along for the ride of a traditional benchmarking session. Though I didn’t get all of what I wanted done in that 4.5-hour window, it was great to see the interest and excitement for the product and the results that we were able to generate.
But to the point of the day – our review of the Radeon Vega Frontier Edition graphics card. Based on the latest flagship GPU architecture from AMD, the Radeon Vega FE card has a lot riding on its shoulders, despite not being aimed at gamers. It is the FIRST card to be released with Vega at its heart. It is the FIRST instance of HBM2 being utilized in a consumer graphics card. It is the FIRST in a new attempt from AMD to target the group of users between gamers and professional users (like NVIDIA has addressed with Titan previously). And, it is the FIRST to command as much attention and expectation for the future of a company, a product line, and a fan base.
Other than the architectural details that AMD gave us previously, we honestly haven’t been briefed on the performance expectations or the advancements in Vega that we should know about. The Vega FE products were released to the market with very little background, only well-spun turns of phrase emphasizing the value of the high performance and compatibility for creators. There has been no typical “tech day” for the media to learn fully about Vega, and there were no samples from AMD to media or analysts (that I know of). Unperturbed by that, I purchased one (several, actually, to see which would show up first) and decided to do our testing.
On the following pages, you will see a collection of tests and benchmarks that range from 3DMark to The Witcher 3 to SPECviewperf to LuxMark, attempting to give as wide a viewpoint of the Vega FE product as I can in a rather short time window. The card is sexy (maybe the best looking I have yet seen), but it will disappoint many on the gaming front. For professional users who can live without certified drivers, performance is more likely to raise some impressed eyebrows.
Radeon Vega Frontier Edition Specifications
Through leaks and purposeful information dumps over the past couple of months, we already knew a lot about the Radeon Vega Frontier Edition card prior to the official sale date this week. But now with final specifications in hand, we can start to dissect what this card actually is.
|Vega Frontier Edition||Titan Xp||GTX 1080 Ti||Titan X (Pascal)||GTX 1080||TITAN X||GTX 980||R9 Fury X||R9 Fury|
|GPU||Vega||GP102||GP102||GP102||GP104||GM200||GM204||Fiji XT||Fiji Pro|
|Base Clock||1382 MHz||1480 MHz||1480 MHz||1417 MHz||1607 MHz||1000 MHz||1126 MHz||1050 MHz||1000 MHz|
|Boost Clock||1600 MHz||1582 MHz||1582 MHz||1480 MHz||1733 MHz||1089 MHz||1216 MHz||-||-|
|Memory Clock||1890 MHz||11400 MHz||11000 MHz||10000 MHz||10000 MHz||7000 MHz||7000 MHz||1000 MHz||1000 MHz|
|Memory Interface||2048-bit HBM2||384-bit G5X||352-bit||384-bit G5X||256-bit G5X||384-bit||256-bit||4096-bit (HBM)||4096-bit (HBM)|
|Memory Bandwidth||483 GB/s||547.7 GB/s||484 GB/s||480 GB/s||320 GB/s||336 GB/s||224 GB/s||512 GB/s||512 GB/s|
|TDP||300 watts||250 watts||250 watts||250 watts||180 watts||250 watts||165 watts||275 watts||275 watts|
|Peak Compute||13.1 TFLOPS||12.0 TFLOPS||10.6 TFLOPS||10.1 TFLOPS||8.2 TFLOPS||6.14 TFLOPS||4.61 TFLOPS||8.60 TFLOPS||7.20 TFLOPS|
The Vega FE shares enough of a specification listing with the Fury X that it deserves special recognition. Both cards sport 4096 stream processors, 64 ROPs and 256 texture units. The Vega FE is running at much higher clock speeds (35-40% higher) and also upgrades to the next generation of high-bandwidth memory and quadruples capacity. Still, there will be plenty of comparisons between the two products, looking to measure IPC changes from the CUs (compute units) from Fiji to the NCUs built for Vega.
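The headline numbers in the table above can be sanity-checked from the base specs: peak FP32 throughput is two FLOPs (one FMA) per stream processor per clock, and memory bandwidth is bus width times effective per-pin data rate. A quick sketch for the Vega FE column:

```python
# Peak FP32 throughput: 2 FLOPs (fused multiply-add) per stream processor per clock.
stream_processors = 4096
boost_clock_ghz = 1.6
tflops = 2 * stream_processors * boost_clock_ghz / 1000
print(f"peak compute: {tflops:.1f} TFLOPS")   # 13.1 TFLOPS

# Memory bandwidth: bus width (bits -> bytes) x effective data rate per pin.
bus_bits = 2048                 # two HBM2 stacks, 1024 bits each
data_rate_gbps = 1.89           # 1890 MHz effective
bandwidth = bus_bits / 8 * data_rate_gbps
print(f"bandwidth: {bandwidth:.0f} GB/s")     # ~484 GB/s
```

Both figures land on the table's values (the 483 GB/s entry is just rounded differently), which also makes clear why the "peak" 1600 MHz clock matters: the TFLOPS rating assumes it, even though the typical clock sits well below.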
The Radeon Vega GPU
The clock speeds also see another shift this time around with the adoption of “typical” clock speeds. This is something that NVIDIA has been using for a few generations since the introduction of GPU Boost, and it tells the consumer how high they should expect clocks to go in a nominal workload. Normally I would say a gaming workload, but since this card is supposedly for professional users and the like, I assume this applies across the board. So even though the GPU is rated at a “peak” clock rate of 1600 MHz, the “typical” clock rate is 1382 MHz. (As an early aside, I did NOT see 1600 MHz in any of my testing time with our Vega FE; it instead settled in at ~1440 MHz most of the time.)
Introduction and Specifications
The Galaxy S8 Plus is Samsung's first ‘big’ phone since the Note7 fiasco, and just looking at it, the design and engineering effort seems to have paid off. Simply put, the GS8/GS8+ might just be the most striking handheld devices ever made. The U.S. version sports the newest and fastest Qualcomm platform with the Snapdragon 835, while the international version of the handset uses Samsung’s Exynos 8895 Octa SoC. We have the former on hand, and it was this MSM8998-powered version of the 6.2-inch GS8+ that I spent some quality time with over the past two weeks.
There is more to a phone than its looks, and even in that department the Galaxy S8+ raises questions about durability with that large, curved glass screen. With the front and back panels wrapping around as they do, the phone has a very slim, elegant look that feels fantastic in hand. And while one drop could easily ruin your day with any smartphone, this design is particularly unforgiving - and screen replacement costs with these new S8 phones are particularly high due to the difficulty in repairing the screen and the need to replace the AMOLED display along with the laminated glass.
Setting aside the fragility for a moment and embracing the case-free lifestyle I was so tempted to adopt (I didn't want to dull the best in-hand feel I've had from a handset, or hide its knockout design), I got down to objectively assessing the phone's performance. This is the first production phone we have had on hand with the new Snapdragon 835 platform, so we will be able to draw some definitive performance conclusions compared to the SoCs in other shipping phones.
|Samsung Galaxy S8+ Specifications (US Version)|
|Display||6.2-inch 1440x2960 AMOLED|
|SoC||Qualcomm Snapdragon 835 (MSM8998)|
|CPU Cores||4x 2.45 GHz Kryo + 4x 1.90 GHz Kryo|
|GPU Cores||Adreno 540|
|RAM||4 / 6 GB LPDDR4 (6 GB with 128 GB storage option)|
|Storage||64 / 128 GB|
|Network||Snapdragon X16 LTE|
|Connectivity||Bluetooth 5.0; A2DP, aptX; USB 3.1 (Type-C)|
|Battery||3500 mAh Li-Ion|
|Dimensions||159.5 x 73.4 x 8.1 mm, 173 g|