Author:
Subject: General Tech
Manufacturer: SILVIA

Intelligent Gaming

Kal Simpson recently had the chance to sit down for an extensive interview with Alex Mayberry, Chief Product Officer at Cognitive Code, the company behind SILVIA. Cognitive Code specializes in conversational AI that can be adapted to a variety of platforms and applications. Kal's comments are in bold while Alex's are in italics.

SILVIA virtual assistant.jpg

Always good to speak with you Alex. Whether it's the latest Triple-A video game release or the progress being made in changing the way we play – virtual reality, for instance – your views and developments within the gaming space as a whole remain impressive. Before we begin, I’d like to give the audience a brief flashback of your career history. Prominent within the video game industry, you’ve been involved with many, many titles – primarily within the PC gaming space: Quake 2: The Reckoning, America’s Army, a plethora of World of Warcraft titles.

Those more familiar with your work know you as the lead game producer for Diablo 3 / Reaper of Souls, as well as the executive producer for Star Citizen. We spoke about the former during the game's release for PC, PlayStation 4, and Xbox One back in 2014.

So I ask, given your huge involvement with some of the most popular titles, what sparked your interest in the development of intelligent computing platforms? No doubt the technology can be adapted to applications within gaming, but what’s the initial factor that drove you to Cognitive Code – the SILVIA technology?

AM: Conversational intelligence was something that I had never even thought about in terms of game development. My experience arguing with my Xbox and trying to get it to change my television channel left me pretty sceptical about the technology. But after leaving Star Citizen, I crossed paths with Leslie Spring, the CEO and Founder of Cognitive Code, and the creator of the SILVIA platform. Initially, Leslie was helping me out with some engineering work on VR projects I was spinning up. After collaborating for a bit, he introduced me to his AI, and I became intrigued by it. Although I was still very focused on VR at the time, my mind kept drifting to SILVIA.

I kept pestering Leslie with questions about the technology, and he continued to share some of the things that it could do. It was when I saw one of his game engine demos showing off a sci-fi world with freely conversant robots that the light went on in my head, and I suddenly got way more interested in artificial intelligence. At the same time, I was discovering challenges in VR that needed solutions. Not having a keyboard in VR creates an obstacle for capturing user input, and floating text in your field of view is really detrimental to the immersion of the experience. Also, when you have life-size characters in VR, you naturally want to speak to them. This is when I got interested in using SILVIA to introduce an entirely new mechanic to gaming and interactive entertainment. No more do we have to rely on conversation trees and scripted responses.

how-silvia-work1.jpg

No more do we have to read a wall of text from a quest giver. With this technology, we can have a realistic and free-form conversation with our game characters, and speak to them as if they are alive. This is such a powerful tool for interactive storytelling, and it will allow us to breathe life into virtual characters in a way that’s never before been possible. Seeing the opportunity in front of me, I joined up with Cognitive Code and have spent the last 18 months exploring how to design conversationally intelligent avatars. And I’ve been having a blast doing it.
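
To make the contrast with conversation trees concrete, here is a minimal sketch of the two approaches in Python. The silvia_query() stub is purely hypothetical and simply stands in for whatever API a conversational platform like SILVIA actually exposes; it is not Cognitive Code's real interface.

```python
# Illustrative contrast: a scripted conversation tree vs. a free-form query.
# silvia_query() below is a hypothetical stand-in, not a real SILVIA API call.

conversation_tree = {
    "greeting":    {"text": "Welcome, traveler. Do you seek work?",
                    "options": {"yes": "quest_offer", "no": "farewell"}},
    "quest_offer": {"text": "Clear the old mine of spiders.", "options": {}},
    "farewell":    {"text": "Safe travels.", "options": {}},
}

def scripted_response(node_id: str, choice: str) -> str:
    """Conversation tree: the player can only pick from pre-authored options."""
    next_id = conversation_tree[node_id]["options"].get(choice, node_id)
    return conversation_tree[next_id]["text"]

def silvia_query(context: str, utterance: str) -> str:
    """Hypothetical placeholder for a conversational AI backend."""
    return f"[free-form reply to '{utterance}' for {context}]"

def freeform_response(npc_context: str, player_speech: str) -> str:
    """Free-form: arbitrary player speech goes straight to the AI engine."""
    return silvia_query(npc_context, player_speech)

print(scripted_response("greeting", "yes"))
print(freeform_response("blacksmith NPC", "Can you repair my sword before dawn?"))
```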

Click here to continue reading the entire interview!

Subject: Motherboards
Manufacturer: ASUS

Introduction and Technical Specifications

Introduction

02-board-all.jpg

Courtesy of ASUS

With the latest revision of the TUF line, ASUS made the decision to drop the well-known "Sabertooth" moniker from the board's name, naming the board with the TUF branding only. The TUF Z270 Mark 1 motherboard is the flagship board in ASUS' TUF (The Ultimate Force) product line designed around the Intel Z270 chipset. Thanks to that chipset, the board supports the latest Intel Kaby Lake processor line as well as dual-channel DDR4 memory. While the MSRP for the board may be a bit higher than expected, its $239 price is more than justified by the board's build quality and "armored" offerings.

03-board.jpg

Courtesy of ASUS

04-board-back.jpg

Courtesy of ASUS

05-board-flyapart.jpg

Courtesy of ASUS

06-board-pwr-comps.jpg

Courtesy of ASUS

The TUF Z270 Mark 1 motherboard is built with the same quality and attention to detail that you've come to expect from TUF-branded motherboards. Its appearance follows the standard TUF look, with a tan plastic armor overlay on top, and the board is driven by a 10-phase digital power system. ASUS also chose to include the armored backplate with the TUF Z270 Mark 1 motherboard, dubbed the "TUF Fortifier". The board contains the following integrated features: six SATA 3 ports; two M.2 PCIe x4 capable ports; dual GigE controllers - an Intel I219-V Gigabit NIC and an Intel I211 Gigabit NIC; three PCI-Express x16 slots; three PCI-Express x1 slots; an 8-channel audio subsystem; MEM OK! and USB BIOS Flashback buttons; integrated DisplayPort and HDMI; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.

Continue reading our preview of the ASUS TUF Z270 Mark 1 motherboard!

Author:
Manufacturer: AMD

Specifications and Design

Just a couple of short weeks ago we looked at the Radeon Vega Frontier Edition 16GB graphics card in its air-cooled variety. The results were interesting – gaming performance proved to fall somewhere between the GTX 1070 and the GTX 1080 from NVIDIA’s current generation of GeForce products. That is below many of the estimates from players in the market, including media, fans, and enthusiasts. But before we get to the RX Vega product family that is targeted at gamers, AMD has another data point for us to look at with a water-cooled version of Vega Frontier Edition. At a $1500 MSRP, which we shelled out ourselves, we are very interested to see how it changes the face of performance for the Vega GPU and architecture.

Let’s start with a look at the specifications of this version of the Vega Frontier Edition, which will be…familiar.

  Vega Frontier Edition (Liquid) Vega Frontier Edition Titan Xp GTX 1080 Ti Titan X (Pascal) GTX 1080 TITAN X GTX 980 R9 Fury X
GPU Vega Vega GP102 GP102 GP102 GP104 GM200 GM204 Fiji XT
GPU Cores 4096 4096 3840 3584 3584 2560 3072 2048 4096
Base Clock 1382 MHz 1382 MHz 1480 MHz 1480 MHz 1417 MHz 1607 MHz 1000 MHz 1126 MHz 1050 MHz
Boost Clock 1600 MHz 1600 MHz 1582 MHz 1582 MHz 1480 MHz 1733 MHz 1089 MHz 1216 MHz -
Texture Units ? ? 224 224 224 160 192 128 256
ROP Units 64 64 96 88 96 64 96 64 64
Memory 16GB 16GB 12GB 11GB 12GB 8GB 12GB 4GB 4GB
Memory Clock 1890 MHz 1890 MHz 11400 MHz 11000 MHz 10000 MHz 10000 MHz 7000 MHz 7000 MHz 1000 MHz
Memory Interface 2048-bit HBM2 2048-bit HBM2 384-bit G5X 352-bit 384-bit G5X 256-bit G5X 384-bit 256-bit 4096-bit (HBM)
Memory Bandwidth 483 GB/s 483 GB/s 547.7 GB/s 484 GB/s 480 GB/s 320 GB/s 336 GB/s 224 GB/s 512 GB/s
TDP 300 / ~350 watts 300 watts 250 watts 250 watts 250 watts 180 watts 250 watts 165 watts 275 watts
Peak Compute 13.1 TFLOPS 13.1 TFLOPS 12.0 TFLOPS 10.6 TFLOPS 10.1 TFLOPS 8.2 TFLOPS 6.14 TFLOPS 4.61 TFLOPS 8.60 TFLOPS
Transistor Count ? ? 12.0B 12.0B 12.0B 7.2B 8.0B 5.2B 8.9B
Process Tech 14nm 14nm 16nm 16nm 16nm 16nm 28nm 28nm 28nm
MSRP (current) $1499 $999 $1200 $699 $1200 $599 $999 $499 $649

The base specs remain unchanged and AMD lists the same memory frequency and even GPU clock rates across both models. In practice though, the liquid cooled version runs at higher sustained clocks and can overclock a bit easier as well (more details later). What does change with the liquid cooled version is a usable BIOS switch on top of the card that allows you to move between two distinct power draw states: 300 watts and 350 watts.

IMG_4728.JPG

First, it’s worth noting this is a change from the “375 watt” TDP that this card was listed at during the launch and announcement. AMD was touting a 300-watt and 375-watt version of Frontier Edition, but it appears the company backed off a bit on that, erring on the side of caution to avoid breaking any of the specifications of PCI Express (board slot or auxiliary connectors). Even more concerning is that AMD chose to have the default state of the switch on the Vega FE Liquid card at 300 watts rather than the more aggressive 350 watts. AMD claims this is to avoid any problems with lower quality power supplies that may struggle to deliver slightly over 150 watts of power draw (and the resulting current) from the 8-pin power connections. I would argue that any system that is going to install a $1500 graphics card can and should be prepared to provide the necessary power, but for the professional market, AMD leans towards caution. (It’s worth pointing out the RX 480 power issues that may have prompted this internal decision making were more problematic because they impacted the power delivery through the motherboard, while the 6- and 8-pin connectors are generally much safer to exceed the ratings.)
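
As a rough sanity check on those numbers, here is a back-of-the-envelope look at how the two power states map onto the PCI Express limits mentioned above. The even split between the slot and the card's two 8-pin connectors is our own simplifying assumption; AMD has not published a per-connector breakdown.

```python
# Rough PCIe power budget check for the two BIOS states of the Vega FE Liquid.
# Assumption: 75 W comes from the x16 slot and the remainder is split evenly
# across two 8-pin auxiliary connectors (rated at 150 W each).
SLOT_LIMIT_W = 75
EIGHT_PIN_LIMIT_W = 150

for board_power_w in (300, 350):
    aux_w = board_power_w - SLOT_LIMIT_W      # power left for the aux connectors
    per_connector_w = aux_w / 2               # assumed even split across two 8-pins
    print(f"{board_power_w} W state: ~{per_connector_w:.0f} W per 8-pin "
          f"({per_connector_w / EIGHT_PIN_LIMIT_W:.0%} of the 150 W rating)")
```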

Even without clock speed changes, the move to water cooling should result in better and more consistent performance by removing the overheating concerns that surrounded our first Radeon Vega Frontier Edition review. But let’s dive into the card itself and see how the design process created a unique liquid cooled solution.

Continue reading our review of the Radeon Vega Frontier Edition Liquid-Cooled card!!

Subject: Mobile
Manufacturer: ASUS

Introduction and Specifications

The ZenBook 3 UX390UA is a 12.5-inch thin-and-light which offers a 1920x1080 IPS display, choice of 7th-generation Intel Core i5 or Core i7 processors, 16GB of DDR4 memory, and a roomy 512GB PCIe SSD. It also features just a single USB Type-C port, eschewing additional I/O in the vein of recent Apple MacBooks (more on this trend later in the review). How does it stack up? I had the pleasure of using it for a few weeks and can offer my own usage impressions (along with those ever-popular benchmark numbers) to try answering that question.

DSC_0144.jpg

A thin-and-light (a.k.a. ‘Ultrabook’) is certainly an attractive option when it comes to portability, and the ZenBook 3 delivers with a slim 0.5-inch thickness and 2 lb weight from its aluminum frame. Another aspect of thin-and-light designs is the typically low-power processor, though the “U” series in Intel’s 7th-generation processor lineup still offers good performance numbers for portable machines. Looking at the spec sheet it is clear that ASUS paid attention to performance with this ZenBook, and we will see later on if a good balance has been struck between performance and battery life.

Our review unit was equipped with a Core i7-7500U processor, a 2-core/4-thread part with a 15W TDP and speeds ranging from 2.70 - 3.50 GHz, along with the above-mentioned 16GB of RAM and 512GB SSD. With an MSRP of $1599 for this configuration it faces some stiff competition from the likes of the Dell XPS line and recent Lenovo ThinkPad and Apple MacBook offerings, though it can of course be found for less than its MSRP (and this configuration currently sells on Amazon for about $1499). The ZenBook 3 certainly offers style if you are into blade-like aluminum designs, and, while not a touchscreen, nothing short of Gorilla Glass 4 was employed to protect the LCD display.

“ZenBook 3’s design took some serious engineering prowess and craftsmanship to realize. The ultra-thin 11.9mm profile meant we had to invent the world’s most compact laptop hinge — just 3mm high — to preserve its sleek lines. To fit in the full-size keyboard, we had to create a surround that’s just 2.1mm wide at the edges, and we designed the powerful four-speaker audio system in partnership with audiophile specialists Harman Kardon. ZenBook is renowned for its unique, stunning looks, and you’ll instantly recognize the iconic Zen-inspired spun-metal finish on ZenBook 3’s all-metal unibody enclosure — a finish that takes 40 painstaking steps to create. But we’ve added a beautiful twist, using a special 2-phase anodizing process to create stunning golden edge highlights. To complete this sophisticated new theme, we’ve added a unique gold ASUS logo and given the keyboard a matching gold backlight.”

DSC_0147.jpg

Continue reading our review of the ASUS ZenBook 3 UX390UA laptop!

Author:
Subject: Processors
Manufacturer: AMD

Just a little taste

In a surprise move with no real indication as to why, AMD has decided to reveal some of the most exciting and interesting information surrounding Threadripper and Ryzen 3, both due out in just a few short weeks. AMD CEO Lisa Su and CVP of Marketing John Taylor (along with guest star Robert Hallock) appear in a video launching on AMD's YouTube channel today to divulge the naming, clock speeds, and pricing for the new flagship HEDT product line under the Ryzen brand.

people.jpg

We already know a lot about Threadripper, AMD’s answer to the X299/X99 high-end desktop platforms from Intel, including that they would be coming this summer, have up to 16 cores and 32 threads of compute, and that they would all include 64 lanes of PCI Express 3.0 for a massive amount of connectivity for the prosumer.

Now we know that there will be two models launching and available in early August: the Ryzen Threadripper 1920X and the Ryzen Threadripper 1950X.

  Core i9-7980XE Core i9-7960X Core i9-7940X Core i9-7920X Core i9-7900X Core i7-7820X Core i7-7800X Threadripper 1950X Threadripper 1920X
Architecture Skylake-X Skylake-X Skylake-X Skylake-X Skylake-X Skylake-X Skylake-X Zen Zen
Process Tech 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm 14nm
Cores/Threads 18/36 16/32 14/28 12/24 10/20 8/16 6/12 16/32 12/24
Base Clock ? ? ? ? 3.3 GHz 3.6 GHz 3.5 GHz 3.4 GHz 3.5 GHz
Turbo Boost 2.0 ? ? ? ? 4.3 GHz 4.3 GHz 4.0 GHz 4.0 GHz 4.0 GHz
Turbo Boost Max 3.0 ? ? ? ? 4.5 GHz 4.5 GHz N/A N/A N/A
Cache 16.5MB (?) 16.5MB (?) 16.5MB (?) 16.5MB (?) 13.75MB 11MB 8.25MB 40MB ?
Memory Support ? ? ? ? DDR4-2666 Quad Channel DDR4-2666 Quad Channel DDR4-2666 Quad Channel DDR4-2666 Quad Channel DDR4-2666 Quad Channel
PCIe Lanes ? ? ? ? 44 28 28 64 64
TDP 165 watts (?) 165 watts (?) 165 watts (?) 165 watts (?) 140 watts 140 watts 140 watts 180 watts 180 watts
Socket 2066 2066 2066 2066 2066 2066 2066 TR4 TR4
Price $1999 $1699 $1399 $1199 $999 $599 $389 $999 $799

 

  Threadripper 1950X Threadripper 1920X Ryzen 7 1800X Ryzen 7 1700X Ryzen 7 1700 Ryzen 5 1600X Ryzen 5 1600 Ryzen 5 1500X Ryzen 5 1400
Architecture Zen Zen Zen Zen Zen Zen Zen Zen Zen
Process Tech 14nm 14nm 14nm 14nm 14nm 14nm 14nm 14nm 14nm
Cores/Threads 16/32 12/24 8/16 8/16 8/16 6/12 6/12 4/8 4/8
Base Clock 3.4 GHz 3.5 GHz 3.6 GHz 3.4 GHz 3.0 GHz 3.6 GHz 3.2 GHz 3.5 GHz 3.2 GHz
Turbo/Boost Clock 4.0 GHz 4.0 GHz 4.0 GHz 3.8  GHz 3.7 GHz 4.0 GHz 3.6  GHz 3.7 GHz 3.4 GHz
Cache 40MB ? 20MB 20MB 20MB 16MB 16MB 16MB 8MB
Memory Support DDR4-2666 Quad Channel DDR4-2666 Quad Channel DDR4-2400 Dual Channel DDR4-2400 Dual Channel DDR4-2400 Dual Channel DDR4-2400 Dual Channel DDR4-2400 Dual Channel DDR4-2400 Dual Channel DDR4-2400 Dual Channel
PCIe Lanes 64 64 20 20 20 20 20 20 20
TDP 180 watts 180 watts 95 watts 95 watts 65 watts 95 watts 65 watts 65 watts 65 watts
Socket TR4 TR4 AM4 AM4 AM4 AM4 AM4 AM4 AM4
Price $999 $799 $499 $399 $329 $249 $219 $189 $169
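
Using the prices and core counts straight from the tables above, a quick price-per-core comparison shows where AMD is positioning Threadripper against both Skylake-X and its own Ryzen 7 parts.

```python
# Price per core for a few of the parts listed above (values taken from the tables).
parts = {
    "Threadripper 1950X": (999, 16),
    "Threadripper 1920X": (799, 12),
    "Core i9-7900X":      (999, 10),
    "Core i9-7920X":      (1199, 12),
    "Ryzen 7 1800X":      (499, 8),
}

for name, (price_usd, cores) in parts.items():
    print(f"{name}: ${price_usd / cores:.0f} per core")
```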

Continue reading about the announcement of the Ryzen Threadripper and Ryzen 3 processors!

Author:
Manufacturer: ASUS

Overview

To say that the consumer wired networking market has stagnated would be an understatement. While we've seen generational improvements on NICs from companies like Intel, and companies like Rivet trying to add their own unique spin on things with their Killer products, the basic idea has remained mostly unchanged.

And for its time, Gigabit networking was an amazing thing. In the era of hard drive-based storage as your only option, 100 MB/s seemed like a great data transfer speed for your home network — who could want more?

Now that we've moved well into the era of flash-based storage technologies capable of upwards of 3 GB/s transfer speeds, and even high capacity hard drives hitting the 200 MB/s category, Gigabit networking is a frustrating bottleneck when trying to move files from PC to PC.
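
To put that bottleneck in perspective, here is a quick estimate of how long a large transfer takes at the raw line rate of each link; real-world throughput will be somewhat lower once protocol overhead is factored in.

```python
# Time to move a 100 GB folder at various raw link speeds (ignoring protocol overhead).
FILE_SIZE_GB = 100

links_gbps = {"Gigabit Ethernet": 1, "10 Gigabit Ethernet": 10}

for name, gbps in links_gbps.items():
    mb_per_s = gbps * 1000 / 8                     # raw line rate in MB/s
    minutes = FILE_SIZE_GB * 1000 / mb_per_s / 60  # transfer time in minutes
    print(f"{name}: ~{mb_per_s:.0f} MB/s, ~{minutes:.1f} minutes for {FILE_SIZE_GB} GB")
```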

For the enterprise market, there has been a solution to this for a long time. 10 Gigabit networking has been available in enterprise equipment for over 10 years, and it's already old news there, with even faster specifications like 40 and 100 Gbps interfaces available.

So why then are consumers mostly stuck at 1Gbps? As is the case with most enterprise technologies, the cost for 10 Gigabit equipment still carries a high premium compared to its slower sibling. In fact, we've only just started to see enterprise-level 10 Gigabit NICs integrated on consumer motherboards, like the ASUS X99-E 10G WS at a staggering $650 price point.

However, there is hope. Companies like Aquantia are starting to aggressively push down the price point of 10 Gigabit networking, which brings us to the product we are taking a look at today — the ASUS XG-C100C 10 Gigabit Network Adapter.

IMG_4714.JPG

Continue reading about the ASUS XG-C100C 10GigE add-in card!

Author:
Subject: Processors
Manufacturer: Intel

A massive lineup

The number and significance of the product and platform launches occurring today with the Intel Xeon Scalable family are staggering. Intel is launching more than 50 processors and 7 chipsets falling under the Xeon Scalable product brand, targeting data centers and enterprise customers in a wide range of markets and segments. From SMB users to “Super 7” data center clients, the new lineup of Xeon parts is likely to have an option targeting them.

All of this comes at an important point in time, with AMD fielding its new EPYC family of processors and platforms, becoming competitive in the space for the first time in nearly a decade. That decade of clear dominance in the data center has been good to Intel, giving it the ability to bring in profits and high margins without the direct fear of a strong competitor. Intel did not spend those 10 years flat-footed though; instead it has been developing complementary technologies including new Ethernet controllers, ASICs, Omni-Path, FPGAs, solid state storage tech and much more.

cpus.jpg

Our story today will give you an overview of the new processors and the changes that Intel’s latest Xeon architecture offers to business customers. The Skylake-SP core has some significant upgrades over the Broadwell design before it, but in other aspects the processors and platforms will be quite similar. What changes can you expect with the new Xeon family?

01-11 copy.jpg

Per-core performance has been improved with the updated Skylake-SP microarchitecture and a new cache memory hierarchy that we had a preview of with the Skylake-X consumer release last month. The memory and PCIe interfaces have been upgraded with more channels and more lanes, giving the platform more flexibility for expansion. Socket-level performance also goes up with higher core counts available and the improved UPI interface that makes socket to socket communication more efficient. AVX-512 doubles the peak FLOPS/clock on Skylake over Broadwell, beneficial for HPC and analytics workloads. Intel QuickAssist improves cryptography and compression performance to allow for faster connectivity implementation. Security and agility get an upgrade as well with Boot Guard, RunSure, and VMD for better NVMe storage management. While on the surface this is a simple upgrade, there is a lot that gets improved under the hood.
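
That "doubles the peak FLOPS/clock" claim is easy to verify with a little arithmetic: the relevant Skylake-SP cores have two 512-bit FMA units where Broadwell cores have two 256-bit units, and an FMA counts as two floating-point operations. A minimal sketch:

```python
# Peak double-precision FLOPS per core per clock:
#   (64-bit elements per vector) * 2 ops per FMA * FMA units per core
def peak_dp_flops_per_clock(vector_bits: int, fma_units: int) -> int:
    elements = vector_bits // 64          # doubles packed into one vector register
    return elements * 2 * fma_units       # FMA = one multiply + one add

print("Broadwell  (AVX2,    2x 256-bit FMA):", peak_dp_flops_per_clock(256, 2))  # 16
print("Skylake-SP (AVX-512, 2x 512-bit FMA):", peak_dp_flops_per_clock(512, 2))  # 32
```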

01-12 copy.jpg

We already had a good look at the new mesh architecture used for the inter-core component communication. This transition away from the ring bus that was in use since Nehalem gives Skylake-SP a couple of unique traits: slightly longer latencies but with more consistency and room for expansion to higher core counts.

01-18 copy.jpg

Intel has changed the naming scheme with the Xeon Scalable release, moving away from “E5/E7” and “v4” to a Platinum, Gold, Silver, Bronze nomenclature. The product differentiation remains much the same, with the Platinum processors offering the highest feature support including 8-socket configurations, the highest core counts, the highest memory speeds, connectivity options and more. To be clear: there are a lot of new processors, and trying to create an easy-to-read table of features and clocks is nearly impossible. The highlights of the different families are:

  • Xeon Platinum (81xx)
    • Up to 28 cores
    • Up to 8 sockets
    • Up to 3 UPI links
    • 6-channel DDR4-2666
    • Up to 1.5TB of memory
    • 48 lanes of PCIe 3.0
    • AVX-512 with 2 FMA per core
  • Xeon Gold (61xx)
    • Up to 22 cores
    • Up to 4 sockets
    • Up to 3 UPI links
    • 6-channel DDR4-2666
    • AVX-512 with 2 FMA per core
  • Xeon Gold (51xx)
    • Up to 14 cores
    • Up to 2 sockets
    • 2 UPI links
    • 6-channel DDR4-2400
    • AVX-512 with 1 FMA per core
  • Xeon Silver (41xx)
    • Up to 12 cores
    • Up to 2 sockets
    • 2 UPI links
    • 6-channel DDR4-2400
    • AVX-512 with 1 FMA per core
  • Xeon Bronze (31xx)
    • Up to 8 cores
    • Up to 2 sockets
    • 2 UPI links
    • No Turbo Boost
    • 6-channel DDR4-2133
    • AVX-512 with 1 FMA per core

That’s…a lot. And it only gets worse when you start to look at the entire SKU lineup with clocks, Turbo Speeds, cache size differences, etc. It’s easy to see why the simplicity argument that AMD made with EPYC is so attractive to an overwhelmed IT department.

01-20 copy.jpg

Two sub-categories exist with the T or F suffix. The former indicates a 10-year life cycle (thermal specific) while the F is used to indicate units that integrate the Omni-Path fabric on package. M models can address 1.5TB of system memory. The diagram above shows the scope of the Xeon Scalable launch in a single slide. This release offers buyers flexibility, but at the expense of configuration complexity.
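
Based purely on the naming rules summarized above (leading digit for the tier, optional T/F/M suffix), a small decoder illustrates how the scheme reads in practice. This is our own summary of the convention, not an Intel-provided tool.

```python
# Decode a Xeon Scalable model number into its tier and suffix features,
# following the naming rules described in this article.
TIERS = {"8": "Platinum", "6": "Gold", "5": "Gold", "4": "Silver", "3": "Bronze"}
SUFFIXES = {
    "T": "extended 10-year life cycle",
    "F": "integrated Omni-Path fabric",
    "M": "1.5TB memory support",
}

def decode_xeon(model: str) -> str:
    tier = TIERS.get(model[0], "unknown tier")
    features = [SUFFIXES[ch] for ch in model if ch in SUFFIXES]
    extras = f" ({', '.join(features)})" if features else ""
    return f"Xeon {tier} {model}{extras}"

print(decode_xeon("8180M"))  # Xeon Platinum 8180M (1.5TB memory support)
print(decode_xeon("6148F"))  # Xeon Gold 6148F (integrated Omni-Path fabric)
```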

Continue reading about the new Intel Xeon Scalable Skylake-SP platform!

Author:
Manufacturer: Sapphire

Overview

There has been a lot of news lately about the release of cryptocurrency-specific graphics cards from both NVIDIA and AMD add-in board partners. While we covered the current cryptomining phenomenon in an earlier article, today we are taking a look at one of these cards geared towards miners.

IMG_4681.JPG

It's worth noting that I purchased this card myself from Newegg, and neither AMD nor Sapphire is involved in this article. I saw this card pop up on Newegg a few days ago, and my curiosity got the best of me.

There has been a lot of speculation, and little official information from vendors, about what these mining cards actually entail.

From the outward appearance, it is virtually impossible to distinguish this "new" RX 470 from the previous Sapphire Nitro+ RX 470, besides the lack of additional display outputs beyond the DVI connection. Even the branding and labels on the card identify it as a Nitro+ RX 470.

In order to test the hashing rates of this GPU, we are using Claymore's Dual Miner Version 9.6 (mining Ethereum only) against a reference design RX 470, also from Sapphire.

IMG_4684.JPG

On the reference RX 470 out of the box, we hit rates of about 21.8 MH/s while mining Ethereum. 

Once we moved to the Sapphire mining card, we saw at least 24 MH/s right from the start.
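
That out-of-the-box difference works out to roughly a 10% improvement before any tuning:

```python
# Stock Ethereum hashrate comparison from the numbers above.
reference_mhs = 21.8     # reference-design RX 470
mining_card_mhs = 24.0   # Sapphire mining card
print(f"Improvement: {(mining_card_mhs / reference_mhs - 1) * 100:.1f}%")  # ~10.1%
```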

Continue reading about the Sapphire Radeon RX 470 Mining Edition!

Subject: Motherboards
Manufacturer: GIGABYTE

Introduction and Technical Specifications

Introduction

02-board_0.jpg

Courtesy of GIGABYTE

For the launch of the Intel X299 chipset motherboards, GIGABYTE chose their AORUS brand to lead the charge. The AORUS branding differentiates the enthusiast- and gamer-friendly products from other GIGABYTE product lines, similar to how ASUS uses the ROG branding to differentiate their high performance product line. The X299 AORUS Gaming 3 is among GIGABYTE's initial release boards offering support for the latest Intel HEDT chipset and processor line. Built around the Intel X299 chipset, the board supports the Intel LGA2066 processor line, including the Skylake-X and Kaby Lake-X processors, with support for Quad-Channel DDR4 memory running at 2667MHz. The X299 AORUS Gaming 3 can be found in retail with an MSRP of $279.99.

03-board-profile_0.jpg

Courtesy of GIGABYTE

GIGABYTE integrated the following features into the X299 AORUS Gaming 3 motherboard: eight SATA III 6Gbps ports; two M.2 PCIe Gen3 x4 32Gbps capable ports with Intel Optane support built-in; an Intel I219-V Gigabit RJ-45 port; five PCI-Express x16 slots; Realtek® ALC1220 8-Channel audio subsystem; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.

04-pwr-system.jpg

Courtesy of GIGABYTE

To power the board, GIGABYTE integrated a 9-phase digital power delivery system into the X299 AORUS Gaming 3's design. The digital power system was designed with IR digital power controllers and PowIRstage ICs, Server Level Chokes, and Durable Black capacitors.

05-pcie-armor.jpg

Courtesy of GIGABYTE

Designed to withstand the punishment of even the largest video cards, GIGABYTE's Ultra Durable PCIe Armor gives added strength and retention force to the primary and secondary PCIe x16 video card slots (PCIe X16 slots 1 and 3). The PCIe slots are reinforced with a metal overlay that is anchored to the board, giving the slot better hold capabilities (both side-to-side and card retention) when the board is used in a vertical orientation.

Continue reading our preview of the GIGABYTE X299 AORUS Gaming 3 motherboard!

Author:
Manufacturer: AKiTiO

A long time coming

External video cards for laptops have long been a dream of many PC enthusiasts, and for good reason. It’s compelling to have a thin-and-light notebook with great battery life for things like meetings or class, with the ability to plug it into a dock at home and enjoy your favorite PC games.

Many times we have been promised that external GPUs for notebooks would be a viable option. Over the years there have been many commercial solutions involving both industry standard protocols like ExpressCard, as well as proprietary connections to allow you to externally connect PCIe devices. Enterprising hackers have also had a hand in this for many years, cobbling together interesting solutions using mPCIe and M.2 ports on their notebooks which were meant for other devices.

With the introduction of Intel’s Thunderbolt standard in 2011, there was a hope that we would finally achieve external graphics nirvana. A modern, Intel-backed protocol promising PCIe x4 speeds (PCIe 2.0 at that point) sounded like it would be ideal for connecting GPUs to notebooks, and in some ways it was. Once again the external graphics communities managed to get it to work through the use of enclosures meant to connect other non-GPU PCIe devices such as RAID and video capture cards to systems. However, software support was still a limiting factor. You were required to use an external monitor to display your video, and it still felt like you were just riding the line between usability and a total hack. It felt like we were never going to get true universal support for external GPUs on notebooks.
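
For context on what "PCIe x4 speeds" means across those generations, here is a quick bandwidth estimate from the PCIe signaling rates; the Thunderbolt link itself adds its own overhead, so actual payload throughput lands a bit lower still.

```python
# Approximate usable bandwidth of a PCIe x4 link by generation.
# (rate in GT/s per lane, fraction of bits that carry data after line encoding)
PCIE_GENS = {
    "PCIe 2.0 x4 (original Thunderbolt era)": (5.0, 8 / 10),     # 8b/10b encoding
    "PCIe 3.0 x4 (Thunderbolt 3)":            (8.0, 128 / 130),  # 128b/130b encoding
}

for name, (gt_per_s, encoding) in PCIE_GENS.items():
    gb_per_s = gt_per_s * encoding * 4 / 8   # 4 lanes, 8 bits per byte
    print(f"{name}: ~{gb_per_s:.1f} GB/s")
```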

Then, seemingly out of nowhere, Intel decided to promote native support for external GPUs as a priority when they introduced Thunderbolt 3. Fast forward, and we've already seen a much larger adoption of Thunderbolt 3 on PC notebooks than we ever did with the previous Thunderbolt implementations. Taking all of this into account, we figured it was time to finally dip our toes into the eGPU market.

For our testing, we decided on the AKiTio Node for several reasons. First, at around $300, it's by far the lowest cost enclosure built to support GPUs. Additionally, it seems to be one of the most compatible devices currently on the market according to the very helpful comparison chart over at eGPU.io. The eGPU site is a wonderful resource for everything external GPU, over any interface possible, and I would highly recommend heading over there to do some reading if you are interested in trying out an eGPU for yourself.

The Node unit itself is a very utilitarian design. Essentially you get a folded sheet metal box with a Thunderbolt controller and 400W SFX power supply inside.

DSC03490.JPG

In order to install a GPU into the Node, you must first unscrew the enclosure from the back and slide the outer shell off of the device.

DSC03495.JPG

Once inside, we can see that there is ample room for any graphics card you might want to install in this enclosure. In fact, it seems a little too large for any of the GPUs we installed, including GTX 1080 Ti models. Here, you can see a more reasonable RX 570 installed.

Beyond opening up the enclosure to install a GPU, there is very little configuration required. My unit required a firmware update, but that was easily applied with the tools from the AKiTio site.

From here, I simply connected the Node to a ThinkPad X1, installed the NVIDIA drivers for our GTX 1080 Ti, and everything seemed to work — including using the 1080 Ti with the integrated notebook display and no external monitor!

Now that we've got the Node working, let's take a look at some performance numbers.

Continue reading our look at external graphics with the Thunderbolt 3 AKiTiO Node!

Author:
Manufacturer: AMD

Two Vegas...ha ha ha

When the preorders for the Radeon Vega Frontier Edition went up last week, I made the decision to place orders in a few different locations to make sure we got it in as early as possible. Well, as it turned out, we actually had the cards show up very quickly…from two different locations.

dualvega.jpg

So, what is a person to do if TWO of the newest, most coveted GPUs show up on their doorstep? After you do the first, full review of the single GPU iteration, you plug those both into your system and do some multi-GPU CrossFire testing!

There of course needs to be some discussion up front about this testing and our write up. If you read my first review of the Vega Frontier Edition you will clearly note my stance on the idea that “this is not a gaming card” and that “the drivers aren’t ready.” Essentially, I said these potential excuses for performance were a distraction and unwarranted based on the current state of Vega development and the proximity of the consumer iteration, Radeon RX.

IMG_4688.JPG

But for multi-GPU, it’s a different story. Both competitors in the GPU space will tell you that developing drivers for CrossFire and SLI is incredibly difficult. Much more than simply splitting the work across different processors, multi-GPU requires extra attention to specific games, game engines, and effects rendering that are not required in single GPU environments. Add to that the fact that the market size for CrossFire and SLI has been shrinking, from an already small state, and you can see why multi-GPU is going to get less attention from AMD here.

Even more, when CrossFire and SLI support gets a focus from the driver teams, it is often late in the process, nearly last in the list of technologies to address before launch.

With that in mind, we all should understand that the results we are going to show you might be indicative of the CrossFire scaling when Radeon RX Vega launches, but they very well might not be. I would look at the data we are presenting today as a “current state” of CrossFire for Vega.

Continue reading our look at a pair of Vega Frontier Edition cards in CrossFire!

Manufacturer: NVIDIA

Performance not two-die four.

When designing an integrated circuit, you are attempting to fit as much complexity as possible within your budget of space, power, and so forth. One harsh limitation for GPUs is that, while your workloads could theoretically benefit from more and more processing units, the number of usable chips from a batch shrinks as designs grow, and the reticle limit of a fab’s manufacturing node is basically a brick wall.

What’s one way around it? Split your design across multiple dies!

nvidia-2017-multidie.png

NVIDIA published a research paper discussing just that. In their diagram, they show two examples. In the first configuration, the GPU is a single, typical die that’s surrounded by four stacks of HBM, like GP100; the second configuration breaks the GPU into five dies, four GPU modules and an I/O controller, with each GPU module attached to a pair of HBM stacks.

NVIDIA ran simulations to determine how this chip would perform, and, in various workloads, they found that it out-performed the largest possible single-chip GPU by about 45.5%. They scaled up the single-chip design until it had the same number of compute units as the multi-die design, even though this wouldn’t work in the real world because no fab could actually lithograph it. Regardless, that hypothetical, impossible design was only ~10% faster than the actually-possible multi-chip one, showing that the overhead of splitting the design is only around that much, according to their simulation. The multi-chip design was also faster than the multi-card equivalent by 26.8%.
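
Pulling the paper's headline numbers together, this is how the configurations stack up if everything is normalized to the largest buildable single die; these are simply the figures quoted above, restated.

```python
# Relative performance figures quoted from the NVIDIA paper,
# normalized to the largest buildable single-die GPU (= 1.0).
single_die = 1.0
multi_chip = single_die * 1.455               # MCM beats the buildable die by ~45.5%
hypothetical_monolithic = multi_chip * 1.10   # impossible equal-size die is ~10% faster
multi_card = multi_chip / 1.268               # MCM beats the multi-card setup by 26.8%

for name, value in [
    ("Largest buildable single die", single_die),
    ("Multi-chip module", multi_chip),
    ("Hypothetical monolithic (same units)", hypothetical_monolithic),
    ("Multi-card equivalent", multi_card),
]:
    print(f"{name}: {value:.2f}x")
```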

While NVIDIA’s simulations, run on 48 different benchmarks, have accounted for this, I still can’t visualize how this would work in an automated way. I don’t know how the design would automatically account for fetching data that’s associated with other GPU modules, as this would probably be a huge stall. That said, they spent quite a bit of time discussing how much bandwidth is required within the package, and figures of 768 GB/s to 3TB/s were mentioned, so it’s possible that it’s just the same tricks as fetching from global memory. The paper touches on the topic several times, but I didn’t really see anything explicit about what they were doing.

amd-2017-epyc-breakdown.jpg

If you’ve been following the site over the last couple of months, you’ll note that this is basically the same as AMD is doing with Threadripper and EPYC. The main difference is that CPU cores are isolated, so sharing data between them is explicit. In fact, when that product was announced, I thought, “Huh, that would be cool for GPUs. I wonder if it’s possible, or if it would just end up being Crossfire / SLI.”

Apparently not? It should be possible?

I should note that I doubt this will be relevant for consumers. The GPU is the most expensive part of a graphics card. While the thought of four GP102-level chips working together sounds great for 4K (which is 4x1080p in resolution) gaming, quadrupling the expensive part sounds like a giant price-tag. That said, the market of GP100 (and the upcoming GV100) would pay five-plus digits for the absolute fastest compute device for deep-learning, scientific research, and so forth.

The only way I could see this working for gamers is if NVIDIA finds the sweet-spot for performance-to-yield (for a given node and time) and they scale their product stack with multiples of that. In that case, it might be cost-advantageous to hit some level of performance, versus trying to do it with a single, giant chip.

This is just my speculation, however. It’ll be interesting to see where this goes, whenever it does.

Author:
Manufacturer: Galax

GTX 1060 keeps on kicking

Despite the market for graphics cards being disrupted by the cryptocurrency mining craze, board partners like Galax continue to build high quality options for gamers...if they can get their hands on them. We recently received a new Galax GTX 1060 EXOC White 6GB card that offers impressive performance and features as well as a visual style to help it stand out from the crowd.

We have worked with GeForce GTX 1060 graphics cards quite a bit on PC Perspective, so there is no need to dive into the history of the GPU itself. If you need a refresher on this GP106 GPU and where it stands in the pantheon of the current GPU market, check out my launch review of the GTX 1060 from last year. The release of AMD’s Radeon RX 580 did change things a bit in the market landscape, so that review might be worth looking at too.

Our quick review of the Galax GTX 1060 EXOC White will look at performance (briefly), overclocking, and cost. But first, let’s take a look at this thing.

The Galax GTX 1060 EXOC White

As the name implies, the EXOC White card from Galax is both overclocked and uses a white fan shroud to add a little flair to the design. The PCB is a standard black color, but with the fan and back plate both a bright white, the card will be a point of interest for nearly any PC build. Pairing this with a white-accented motherboard, like the recent ASUS Prime series, would be an excellent visual combination.

IMG_4674.JPG

The fans on the EXOC White have clear-ish white blades that are illuminated by the white LEDs that shine through the fan openings on the shroud.

IMG_4675.JPG

IMG_4676.JPG

The cooler that Galax has implemented is substantial, with three heatpipes used to distribute the load from the GPU across the fins. There is a 6-pin power connector (standard for the GTX 1060) but that doesn’t appear to hold back the overclocking capability of the GPU.

IMG_4677.JPG

There is a lot of detail on the heatsink shroud – and either you like it or you don’t.

IMG_4678.JPG

Galax has included a white backplate that doubles as artistic style and heatsink. I do think that with most users’ cases showcasing the rear of the graphics card more than the front, a good quality back plate is a big selling point.

IMG_4680.JPG

The output connectivity includes a pair of DVI ports, a full-size HDMI and a full-size DisplayPort; more than enough for nearly any buyer of this class of GPU.

Continue reading about the Galax GTX 1060 EXOC White 6GB!

Author:
Manufacturer: Seasonic

Introduction and Features

Introduction

Seasonic is in the process of overhauling their entire PC power supply lineup. They began with the introduction of the PRIME Series in late 2016 and are now introducing the new FOCUS family, which will include three different series ranging from 450W up to 850W output capacity with either Platinum or Gold efficiency certification. In this review we will be taking a detailed look at the new Seasonic FOCUS PLUS Gold (FX) 650W power supply. And just to prove that reviewers are not being sent hand-picked golden samples, our PSU was delivered straight from Newegg.com inventory.

2-Banner.jpg

The Seasonic FOCUS PLUS Gold series includes four models: 550W, 650W, 750W, and 850W. In addition to 80 Plus Gold certification, the FOCUS Plus (FX) series features a small footprint chassis (140mm deep), all modular cables, high quality components, and comes backed by a 10-year warranty.

•    FOCUS PLUS Gold (FX) 550W: $79.90 USD
•    FOCUS PLUS Gold (FX) 650W: $89.90 USD
•    FOCUS PLUS Gold (FX) 750W: $99.90 USD
•    FOCUS PLUS Gold (FX) 850W: $109.90 USD

3-Diag-cables.jpg

Seasonic FOCUS PLUS 650W Gold (FX) PSU Key Features:

•    650W Continuous DC output at up to 50°C (715W peak)
•    80 PLUS Gold certified for high efficiency
•    Small footprint: chassis measures just 140mm (5.5”) deep
•    Micro Tolerance load regulation @ 1%
•    Fully-modular cables
•    DC-to-DC Voltage converters
•    Single +12V output
•    Multi-GPU Technology support
•    Quiet 120mm Fluid Dynamic Bearing (FDB) cooling fan
•    Haswell support
•    Active Power Factor correction with Universal AC input (100 to 240 VAC)
•    Safety protections: OPP, OVP, UVP, OCP, OTP and SCP
•    10-Year warranty

Please continue reading our review of the FOCUS PLUS 650W Gold PSU!!!

Author:
Manufacturer: AMD

An interesting night of testing

Last night I did our first ever live benchmarking session using the just-arrived Radeon Vega Frontier Edition air-cooled graphics card. Purchasing the card directly from a reseller, rather than being sampled by AMD, gave us the opportunity to test a new flagship product without an NDA in place to keep us silenced, so I thought it would be fun to let the audience and community go along for the ride of a traditional benchmarking session. Though I didn’t get all of what I wanted done in that 4.5-hour window, it was great to see the interest and excitement for the product and the results that we were able to generate.

But to the point of the day – our review of the Radeon Vega Frontier Edition graphics card. Based on the latest flagship GPU architecture from AMD, the Radeon Vega FE card has a lot riding on its shoulders, despite not being aimed at gamers. It is the FIRST card to be released with Vega at its heart. It is the FIRST instance of HBM2 being utilized in a consumer graphics card. It is the FIRST in a new attempt from AMD to target the group of users between gamers and professional users (like NVIDIA has addressed with Titan previously). And, it is the FIRST to command as much attention and expectation for the future of a company, a product line, and a fan base.

IMG_4621.JPG

Other than the architectural details that AMD gave us previously, we honestly haven’t been briefed on the performance expectations or the advancements in Vega that we should know about. The Vega FE products were released to the market with very little background, only well-spun turns of phrase emphasizing the value of the high performance and compatibility for creators. There has been no typical “tech day” for the media to learn fully about Vega and there were no samples from AMD to media or analysts (that I know of). Unperturbed by that, I purchased one (several actually, seeing which would show up first) and decided to do our testing.

On the following pages, you will see a collection of tests and benchmarks that range from 3DMark to The Witcher 3 to SPECviewperf to LuxMark, attempting to give as wide a viewpoint of the Vega FE product as I can in a rather short time window. The card is sexy (maybe the best looking I have yet seen), but will disappoint many on the gaming front. For professional users that are okay not having certified drivers, performance there is more likely to raise some impressed eyebrows.

Radeon Vega Frontier Edition Specifications

Through leaks and purposeful information dumps over the past couple of months, we already knew a lot about the Radeon Vega Frontier Edition card prior to the official sale date this week. But now with final specifications in hand, we can start to dissect what this card actually is.

  Vega Frontier Edition Titan Xp GTX 1080 Ti Titan X (Pascal) GTX 1080 TITAN X GTX 980 R9 Fury X R9 Fury
GPU Vega GP102 GP102 GP102 GP104 GM200 GM204 Fiji XT Fiji Pro
GPU Cores 4096 3840 3584 3584 2560 3072 2048 4096 3584
Base Clock 1382 MHz 1480 MHz 1480 MHz 1417 MHz 1607 MHz 1000 MHz 1126 MHz 1050 MHz 1000 MHz
Boost Clock 1600 MHz 1582 MHz 1582 MHz 1480 MHz 1733 MHz 1089 MHz 1216 MHz - -
Texture Units ? 224 224 224 160 192 128 256 224
ROP Units 64 96 88 96 64 96 64 64 64
Memory 16GB 12GB 11GB 12GB 8GB 12GB 4GB 4GB 4GB
Memory Clock 1890 MHz 11400 MHz 11000 MHz 10000 MHz 10000 MHz 7000 MHz 7000 MHz 1000 MHz 1000 MHz
Memory Interface 2048-bit HBM2 384-bit G5X 352-bit 384-bit G5X 256-bit G5X 384-bit 256-bit 4096-bit (HBM) 4096-bit (HBM)
Memory Bandwidth 483 GB/s 547.7 GB/s 484 GB/s 480 GB/s 320 GB/s 336 GB/s 224 GB/s 512 GB/s 512 GB/s
TDP 300 watts 250 watts 250 watts 250 watts 180 watts 250 watts 165 watts 275 watts 275 watts
Peak Compute 13.1 TFLOPS 12.0 TFLOPS 10.6 TFLOPS 10.1 TFLOPS 8.2 TFLOPS 6.14 TFLOPS 4.61 TFLOPS 8.60 TFLOPS 7.20 TFLOPS
Transistor Count ? 12.0B 12.0B 12.0B 7.2B 8.0B 5.2B 8.9B 8.9B
Process Tech 14nm 16nm 16nm 16nm 16nm 28nm 28nm 28nm 28nm
MSRP (current) $999 $1200 $699 $1200 $599 $999 $499 $649 $549

The Vega FE shares enough of a specification listing with the Fury X that it deserves special recognition. Both cards sport 4096 stream processors, 64 ROPs and 256 texture units. The Vega FE is running at much higher clock speeds (35-40% higher) and also upgrades to the next generation of high-bandwidth memory and quadruples capacity. Still, there will be plenty of comparisons between the two products, looking to measure IPC changes from the CUs (compute units) from Fiji to the NCUs built for Vega.

DSC03536 copy.jpg

The Radeon Vega GPU

The clock speeds also see another shift this time around with the adoption of “typical” clock speeds. This is something that NVIDIA has been using for a few generations with the introduction of GPU Boost, and it tells the consumer how high they should expect clocks to go in a nominal workload. Normally I would say a gaming workload, but since this card is supposedly for professional users and the like, I assume this applies across the board. So even though the GPU is rated at a “peak” clock rate of 1600 MHz, the “typical” clock rate is 1382 MHz. (As an early aside, I did NOT see 1600 MHz in any of my testing time with our Vega FE but did settle in at a ~1440 MHz clock most of the time.)
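
For reference, the 13.1 TFLOPS peak compute figure in the table falls straight out of the stream processor count and that 1600 MHz peak clock; run the same math at the 1382 MHz "typical" clock and the number drops accordingly.

```python
# Peak FP32 throughput = stream processors * 2 ops per clock (FMA) * clock speed
def peak_tflops(stream_processors: int, clock_ghz: float) -> float:
    return stream_processors * 2 * clock_ghz / 1000

print(f"At the 1600 MHz peak clock:    {peak_tflops(4096, 1.600):.1f} TFLOPS")  # ~13.1
print(f"At the 1382 MHz typical clock: {peak_tflops(4096, 1.382):.1f} TFLOPS")  # ~11.3
```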

Continue reading our review of the AMD Radeon Vega Frontier Edition!

Subject: Mobile
Manufacturer: Samsung

Introduction and Specifications

The Galaxy S8 Plus is Samsung's first ‘big’ phone since the Note7 fiasco, and just looking at it, the design and engineering process seems to have paid off. Simply put, the GS8/GS8+ might just be the most striking handheld devices ever made. The U.S. version sports the newest and fastest Qualcomm platform with the Snapdragon 835, while the international version of the handset uses Samsung’s Exynos 8895 Octa SoC. We have the former on hand, and it was this MSM8998-powered version of the 6.2-inch GS8+ that I spent some quality time with over the past two weeks.

DSC_0836.jpg

There is more to a phone than its looks, and even in that department the Galaxy S8+ raises questions about durability with that large, curved glass screen. With the front and back panels wrapping around as they do, the phone has a very slim, elegant look that feels fantastic in hand. And while one drop could easily ruin your day with any smartphone, this design is particularly unforgiving - and screen replacement costs with these new S8 phones are particularly high due to the difficulty of repairing the screen and the need to replace the AMOLED display along with the laminated glass.

Forgetting the fragility for a moment, I was tempted to embrace the case-free lifestyle, both because I didn't want to change the best in-hand feel I've had from a handset and because I didn't want to hide its knockout design. Eventually, though, I got down to objectively assessing the phone's performance. This is the first production phone we have had on hand with the new Snapdragon 835 platform, and we will be able to draw some definitive performance conclusions compared to SoCs in other shipping phones.

DSC_0815.jpg

Samsung Galaxy S8+ Specifications (US Version)
Display 6.2-inch 1440x2960 AMOLED
SoC Qualcomm Snapdragon 835 (MSM8998)
CPU Cores 4x 2.45 GHz Kryo
4x 1.90 GHz Kryo
GPU Cores Adreno 540
RAM 4 / 6 GB LPDDR4 (6 GB with 128 GB storage option)
Storage 64 / 128 GB
Network Snapdragon X16 LTE
Connectivity 802.11ac Wi-Fi
2x2 MU-MIMO
Bluetooth 5.0; A2DP, aptX
USB 3.1 (Type-C)
NFC
Battery 3500 mAh Li-Ion
Dimensions 159.5 x 73.4 x 8.1 mm, 173 g
OS Android 7.0
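
From the resolution and panel size in the table, the pixel density works out to roughly 530 pixels per inch, assuming the quoted 6.2-inch diagonal covers the full 1440x2960 panel:

```python
# Approximate pixel density of the GS8+ display from the spec table above.
import math

width_px, height_px, diagonal_in = 1440, 2960, 6.2
ppi = math.hypot(width_px, height_px) / diagonal_in
print(f"~{ppi:.0f} pixels per inch")  # ~531 PPI
```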

Continue reading our review of the Samsung Galaxy S8+ smartphone!

Subject: Storage
Manufacturer: Intel

Introduction, Specifications and Packaging

Introduction

Today Intel is launching a new line of client SSDs - the SSD 545S Series. These are simple, 2.5" SATA parts that aim to offer good performance at an economical price point. Low-cost SSDs are not typically Intel's strong suit, mainly because the company is extremely rigorous in its design and testing, but the ramping up of IMFT 3D NAND, now entering its second generation stacked to 64 layers, should finally help get the cost/GB down to levels previously enjoyed by other manufacturers.

diag.jpg

Intel and Micron jointly announced 3D NAND just over two years ago, and a year ago we talked about the next IMFT capacity bump coming as a 'double' move. Well, that's only partially happening today. The 545S line will carry the new IMFT 64-layer flash, but the capacity per die remains the same 256Gbit (32GB) as the previous generation parts. The dies will be smaller, meaning more can fit on a wafer, which drives down production costs, but the larger 512Gbit dies won't be coming until later on (and in a different product line - Intel told us they do not intend to mix die types within the same lines as we've seen Samsung do in the past).

Specifications

specs.png

There are no surprises here, though I am happy to see a 'sustained sequential performance' specification stated by an SSD maker, and I'm happier to see Intel claiming such a high figure for sustained writes (implying this is the TLC writing speed as the SLC cache would be exhausted in sustained writes).

I'm also happy to see sensible endurance specs for once. We've previously seen oddly non-scaling figures in prior SSD releases from multiple companies. Clearly stating a specific TBW 'per 128GB' makes a lot of sense here, and the number itself isn't bad, either.
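
To illustrate why a per-128GB endurance rating scales sensibly across the lineup, here is the arithmetic with a placeholder value; the real TBW figure lives in Intel's spec sheet above, so treat the 72 TBW below as a stand-in rather than the published number.

```python
# Illustration of a 'TBW per 128GB' endurance spec scaling with capacity.
# The 72 TBW figure is a placeholder, NOT Intel's published specification.
TBW_PER_128GB = 72

for capacity_gb in (128, 256, 512, 1024):
    tbw = TBW_PER_128GB * capacity_gb / 128
    print(f"{capacity_gb} GB drive: {tbw:.0f} TBW rated endurance")
```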

Packaging

packaging.jpg

Simplified packaging from Intel here, apparently to help further reduce shipping costs.

Read on for our full review of the Intel 545S 512GB SSD!

Introduction and Technical Specifications

Introduction

02-cooler-all.jpg

Courtesy of Thermalright

Thermalright is a well-established brand name, known for their high performance air coolers. Their newest addition to the TRUE Spirit Series line of air coolers, the TRUE Spirit 140 Direct, is a redesigned version of their TRUE Spirit 140 Power air cooler, offering a similar level of performance at a lower price point. The Thermalright TRUE Spirit 140 Direct cooler is a slim, single tower cooler featuring a nickel-plated copper base and an aluminum radiator with a 140mm fan. Additionally, Thermalright designed the cooler to be compatible with all modern platforms. The TRUE Spirit 140 Direct cooler is available with an MSRP of $46.95.

03-cooler-front-fan.jpg

Courtesy of Thermalright

04-cooler-profile-nofan.jpg

Courtesy of Thermalright

05-cooler-side-fan.jpg

Courtesy of Thermalright

06-cooler-specs.jpg

Courtesy of Thermalright

The Thermalright TRUE Spirit 140 Direct cooler consists of a single finned aluminum tower radiator fed by five 6mm diameter nickel-plated copper heat pipes in a U-shaped configuration. The cooler can accommodate up to two 140mm fans, but comes standard with a single fan only. The fans are held to the radiator tower using metal clips through the radiator tower body. The cooler is held to the CPU using screws on either side of the mount plate that fix to the unit's mounting cage installed to the motherboard.

Continue reading our review of the Thermalright TRUE Spirit 140 Direct CPU air cooler!

Subject: General Tech
Manufacturer: Logitech

Introduction and Specifications

Logitech has been releasing gaming headphones with a steady regularity of late, and this summer we have another new model to examine in the G433 Gaming Headset, which has just been released (along with the G233). This wired, 7.1-channel capable headset is quite different visually from previous Logitech models, as it is finished with an interesting “lightweight, hydrophobic fabric shell” and offered in various colors (our review pair is a bright red). But the G433s have function to go along with the style, as Logitech has focused on both digital and analog sound quality with this third model to incorporate Logitech’s Pro-G drivers. How do they sound? We’ll find out!

DSC_0555.jpg

One of the main reasons to consider a gaming headset like this in the first place is the ability to take advantage of multi-channel surround sound from your PC, and with the G433s (as with the previously reviewed G533) this is accomplished via DTS Headphone:X, a technology which in my experience is capable of producing a convincing sound field that is very close to that of multiple surround drivers. All of this is created via the same pair of left/right drivers that handle music, and here Logitech is able to boast of some very impressive engineering that produced the Pro-G driver introduced two years ago. An included DAC/headphone amp interfaces with your PC via USB to drive the surround experience, and without this you still have a standard stereo headset that can connect to anything with a 3.5 mm jack.

g433_colors.jpg

The G433 is available in four colors, of which we have the red on hand today

If you have not read up on Logitech’s exclusive Pro-G driver, you will find in their description far more similarities to an audiophile headphone company than what we typically associate with a computer peripheral maker. Logitech explains the thinking behind the technology:

“The intent of the Pro-G driver design innovation is to minimize distortion that commonly occurs in headphone drivers. When producing lower frequencies (<1kHz), most speaker diaphragms operate as a solid mass, like a piston in an engine, without bending. When producing many different frequencies at the same time, traditional driver designs can experience distortion caused by different parts of the diaphragm bending when other parts are not. This distortion caused by rapid transition in the speaker material can be tuned and minimized by combining a more flexible material with a specially designed acoustic enclosure. We designed the hybrid-mesh material for the Pro-G driver, along with a unique speaker housing design, to allow for a more smooth transition of movement resulting in a more accurate and less distorted output. This design also yields a more efficient speaker due to less overall output loss due to distortion. The result is an extremely accurate and clear sounding audio experience putting the gamer closer to the original audio of the source material.”

Logitech’s claims about the Pro-G have, in my experience with the previous models featuring these drivers (G633/G933 Artemis Spectrum and G533 Wireless), been spot on, and I have found them to produce a clarity and detail that rivals ‘audiophile’ stereo headphones.

DSC_0573.jpg

Continue reading our review of the Logitech G433 7.1 Wired Surround Gaming Headset!

Author:
Manufacturer: ASUS

Overview

It feels like forever that we've been hearing about 802.11ad. For years it's been an up-and-coming technology, seeing some releases in devices like Dell's WiGig-powered wireless docking stations for Latitude notebooks.

However, with the release of the first wave of 802.11ad routers earlier this year from Netgear and TP-Link there has been new attention drawn to more traditional networking applications for it. This was compounded with the announcement of a plethora of X299-chipset based motherboards at Computex, with some integrating 802.11ad radios.

That brings us to today, where we have the ASUS Prime X299-Deluxe motherboard, which we used in our Skylake-X review. This almost $500 motherboard is the first device we've had our hands on which features both 802.11ac and 802.11ad networking, which presented a great opportunity to get experience with WiGig. With promises of wireless transfer speeds up to 4.6Gbps how could we not?

For our router, we decided to go with the Netgear Nighthawk X10. While the TP-Link and Netgear options appear to share the same model radio for 802.11ad usage, the Netgear has a port for 10 Gigabit networking, something necessary to test the full bandwidth promises of 802.11ad from a wired connection to a wireless client.

IMG_4611.JPG

The Nighthawk X10 is a beast of a router (with a $500 price tag to match) in its own right, but today we are solely focusing on it for 802.11ad testing.

Making things a bit complicated, the Nighthawk X10's 10GbE port utilizes an SFP+ connector, and the 10GbE NIC on our test server, with the ASUS X99‑E‑10G WS motherboard, uses an RJ45 connection for its 10 Gigabit port. In order to remedy this in a manner where we could still move the router away from the test client to test the range, we used a Netgear ProSAFE XS716E 10GigE switch as the go-between.

IMG_4610.JPG

Essentially, it works like this. We are connecting the Nighthawk X10 to the ProSAFE switch through a SFP+ cable, and then to the test server through 10GBase-T. The 802.11ad client is of course connected wirelessly to the Nighthawk X10.

On the software side, we are using the tried and true iPerf3. You run this software in server mode on the host machine and connect to that machine through the same piece of software in client mode. In this case, we are running iPerf with 10 parallel clients, over a 30-second period which is then averaged to get the resulting bandwidth of the connection.
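
For anyone wanting to replicate the setup, the test boils down to running iperf3 in server mode on one machine ("iperf3 -s") and pointing the client at it with the same parameters we used: 10 parallel streams over 30 seconds. The sketch below drives that from Python and assumes iperf3 is already installed and on the PATH; the server address is a placeholder.

```python
# Run an iperf3 client (10 parallel streams, 30 seconds, JSON output) against a
# server started elsewhere with "iperf3 -s", then report the average throughput.
import json
import subprocess

SERVER_IP = "192.168.1.10"  # placeholder address of the iperf3 server

result = subprocess.run(
    ["iperf3", "-c", SERVER_IP, "-P", "10", "-t", "30", "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
bits_per_second = report["end"]["sum_received"]["bits_per_second"]
print(f"Average throughput: {bits_per_second / 1e9:.2f} Gbps")
```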

bandwith-comparison.png

There are two main takeaways from this chart - the maximum bandwidth comparison to 802.11ac, and the scaling of 802.11ad with distance.

First, it's impressive to see such high bandwidth over a wireless connection. In a world where the vast majority of the Ethernet connections are still limited to 1Gbps, seeing up to 2.2Gbps over a wireless connection is very promising.

However, when you take a look at the bandwidth drops as we move the router and client further and further away, we start to see some of the main issues with 802.11ad.

Instead of using more traditional frequency ranges like 2.4GHz and 5.0GHz, as we've seen from Wi-Fi for so many years, 802.11ad uses frequencies in the unlicensed 60GHz spectrum. Without getting too technical about RF technology, this essentially means that 802.11ad is capable of extremely high bandwidth rates, but it cannot penetrate walls, and line of sight between devices is ideal. In our testing, we even found that the orientation of the router made a big difference; rotating the router 180 degrees was the difference between connecting and not connecting in some scenarios.
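
The range behavior follows directly from the higher carrier frequency: free-space path loss grows with frequency, so a 60GHz link gives up roughly 21-22dB relative to 5GHz over the same distance before walls and furniture even enter the picture. A quick calculation under ideal line-of-sight conditions:

```python
# Free-space path loss: FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c),
# with distance d in meters and frequency f in Hz (ideal line of sight, no obstructions).
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    c = 299_792_458.0  # speed of light in m/s
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

for ghz in (2.4, 5.0, 60.0):
    print(f"{ghz:>4} GHz at 10 m: {fspl_db(10, ghz * 1e9):.1f} dB")
```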

As you can see, the drop off in bandwidth for the 802.11ad connection between our test locations 15 feet away from the client and 35 feet away from the client was quite stark. 

That being said, taking another look at our results you can see that in all cases the 802.11ad connection is faster than the 802.11ac results, which is good. For the promised applications of 802.11ad where the device and router are in the same room of reasonable size, WiGig seems to be delivering most of what is promised.

IMG_4613.JPG

It is likely we won't see high adoption rates of 802.11ad for networking computers. The range limitations are just too stark to be a solution that works for most homes. However, I do think WiGig has a lot of promise to replace cables in other situations. We've seen notebook docks utilizing WiGig and there has been a lot of buzz about VR headsets utilizing WiGig for wireless connectivity to gaming PCs.

802.11ad networking is in its infancy, so this is all subject to change. Stay tuned to PC Perspective for continuing news on 802.11ad and other wireless technologies!