Introduction and Technical Specifications
Courtesy of GIGABYTE
With the release of the Intel Z270 chipset, GIGABYTE is unveiling its AORUS line of products. The AORUS branding differentiates enthusiast- and gamer-friendly products from the company's other product lines, much as ASUS uses the ROG branding for its high-performance offerings. The Z270X-Gaming 5 is among the first boards released as part of GIGABYTE's AORUS line. It features the black and white styling common to the AORUS products, with the rear panel cover and chipset heatsink carrying the brand logos. The board is designed around the Intel Z270 chipset with built-in support for the latest Intel LGA1151 Kaby Lake processors (as well as Skylake processors) and dual-channel DDR4 memory at 2400 MHz. The Z270X-Gaming 5 can be found at retail with an MSRP of $189.99.
GIGABYTE integrated the following features into the Z270X-Gaming 5 motherboard: three SATA-Express ports; one U.2 32Gbps port; two M.2 PCIe x4 capable ports with Intel Optane support built-in; two RJ-45 GigE ports - an Intel I219-V Gigabit NIC and a Rivet Networks Killer E2500 NIC; three PCI-Express x16 slots; three PCI-Express x1 slots; ASMedia 8-Channel audio subsystem; integrated DisplayPort and HDMI video ports; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.
- Corsair Bulldog 2.0 (includes case, PSU, MB and CPU cooler)
- Intel Core i7-7700K
- 16GB Corsair Vengeance DDR4
- Corsair Z270 mini-ITX motherboard
- Corsair Hydro GTX 1080
- 480GB Neutron GTX SSD
- 600 watt Corsair PSU
The barebones kit starts at $399 through Corsair.com and includes the case, the motherboard, CPU cooler and 600-watt power supply. Not a bad price for those components!
You won't find any specific benchmarks in the video above, but you will find some impressions playing Resident Evil 7 in HDR mode at 4K resolution with the specs above, all on an LG OLED display. (Hint: it's awesome.)
Is Mechanical Mandatory?
The Logitech G213 Prodigy gaming keyboard offers the company's unique Mech-Dome keys and customizable RGB lighting effects, and it faces some stiff competition in a market overflowing with gaming keyboards for every budget (including mechanical options). But it really comes down to performance, feel, and usability; and I was interested in giving these new Mech-Dome keys a try.
“The G213 Prodigy gaming keyboard features Logitech Mech-Dome keys that are specially tuned to deliver a superior tactile response and performance profile similar to a mechanical keyboard. Mech-Dome keys are full height, deliver a full 4mm travel distance, 50g actuation force, and a quiet sound operation.
The G213 Prodigy gaming keyboard was designed for gaming, featuring ultra-quick, responsive feedback that is up to 4x faster than the 8ms report rate of standard keyboards and an anti-ghosting matrix that keeps you in control when you press multiple gaming keys simultaneously.”
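The "up to 4x faster" claim is easy to sanity-check with some quick arithmetic of my own (not official Logitech figures): an 8 ms report interval works out to 125 reports per second, so a report up to 4x faster implies a 2 ms interval, or 500 reports per second.

```python
# Back-of-the-envelope check on the report-rate claim quoted above.
# My own arithmetic, not Logitech's spec sheet.

def report_rate_hz(interval_ms: float) -> float:
    """Convert a report interval in milliseconds to reports per second."""
    return 1000.0 / interval_ms

standard_ms = 8.0               # "standard keyboard" interval per the quote
g213_ms = standard_ms / 4       # "up to 4x faster" implies 2 ms

print(report_rate_hz(standard_ms))  # 125.0 reports/sec
print(report_rate_hz(g213_ms))      # 500.0 reports/sec
```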
I will say that at $69.99 the G213 occupies a somewhat odd space in the current gaming keyboard market. It would be well positioned in a retail setting; at a local Best Buy it would be a compelling option against a $100+ mechanical board. But savvy internet shoppers see the growing number of sub-$70 mechanical keyboards available and might question the need for a 'quasi-mechanical' option like this. I don't review products from a marketing perspective, however, and I simply set out to determine whether the G213 is a well-executed product on the hardware front.
Living the Mesh Life
Mesh networking is the current hot topic when it comes to Wi-Fi. Breaking from the trend of increasingly powerful standalone Wi-Fi routers that has dominated the home networking scene over the past few years, mesh networking solutions aim to provide wider and more even Wi-Fi coverage in your home or office through a system of multiple self-configuring and self-managing hotspots. In theory, this approach not only provides better wireless coverage overall, it also makes the setup and maintenance of a Wi-Fi network easier for novice and experienced users alike.
Multiple companies have recently launched Wi-Fi mesh systems, including familiar names such as Google, Netgear, and Linksys. But this new approach to networking has also attracted newcomers, including San Francisco-based eero, one of the first companies to launch a consumer-targeted Wi-Fi mesh platform. eero loaned us their primary product, the 3-piece eero Home WiFi System, and we've spent a few weeks testing it as our home router.
This review is the first part of a series of articles looking at Wi-Fi mesh systems, and it will focus on the capabilities and user experience of the eero Home WiFi System. Future articles will compare eero to other mesh platforms and traditional standalone routers, and look at comparative wireless performance and coverage.
Box Contents & Technical Specifications
As mentioned, we're looking at the 3-pack eero Home WiFi System (hereafter referred to simply as "eero"), a bundle that gives you everything you need to get your home or office up and running with a Wi-Fi mesh system. The box includes three eeros, three power adapters, and a 2-foot Ethernet cable.
Each eero device is identical in terms of design and capability, measuring in at 4.75 inches wide, 4.75 inches deep, and 1.34 inches tall. They each feature two Gigabit Ethernet ports and a single USB 2.0 port (currently restricted to diagnostic use only), and contain two 2x2 MIMO Wi-Fi radios supporting 802.11 a/b/g/n/ac. In addition, an eero network supports WPA2 Personal encryption, static IPs, manual DNS, IP reservations and port forwarding, and Universal Plug and Play (UPnP).
Living Long and Prospering
The open fork of AMD’s Mantle, the Vulkan API, was released exactly a year ago with, as we reported, a hard launch. This meant public (but not main-branch) drivers for developers, a few public SDKs, a proof-of-concept patch for The Talos Principle, and, of course, the ratified specification. That launch set the API up to find success right out of the gate, and we can now look back over the year since.
Thor's hammer, or a tempest in a teapot?
The elephant in the room is DOOM. This game has successfully integrated the API and it uses many of its more interesting features, like asynchronous compute. Because the API is designed in a sort-of “make a command, drop it on a list” paradigm, the driver is able to select commands based on priority and available resources. AMD’s products got a significant performance boost, relative to OpenGL, catapulting their Fury X GPU up to the enthusiast level that its theoretical performance suggested.
Mobile developers have been picking up the API, too. Google, who is known for banishing OpenCL from their Nexus line and challenging OpenGL ES with their Android Extension Pack (later integrated into OpenGL ES with version 3.2), has strongly backed Vulkan. The API was integrated as a core feature of Android 7.0.
On the engine and middleware side of things, Vulkan is currently “ready for shipping games” as of Unreal Engine 4.14. It is also included in Unity 5.6 Beta, which is expected for full release in March. Frameworks for emulators are also integrating Vulkan, often just to say they did, but sometimes to emulate the quirks of these systems’ offbeat graphics co-processors. Many other engines, from Source 2 to Torque 3D, have also announced or added Vulkan support.
Finally, for the API itself, The Khronos Group announced (pg 22 from SIGGRAPH 2016) areas that they are actively working on. The top feature is “better” multi-GPU support. While Vulkan, like OpenCL, allows developers to enumerate all graphics devices and target them individually with work, it doesn’t have certain mechanisms, like the ability to directly ingest output from one GPU into another. They haven’t announced a timeline for this.
Introduction and Specifications
The Mate 9 is the current version of Huawei’s signature 6-inch smartphone, building on last year’s iteration with the company’s new Kirin 960 SoC (featuring ARM's next-generation Bifrost GPU architecture), improved industrial design, and exclusive Leica-branded dual camera system.
In the ultra-competitive smartphone world there is little room at the top, and most companies are simply looking for a share of the market. Apple and Samsung have occupied the top two spots for some time, with HTC, LG, Motorola, and others far behind. But the new #3 emerged not from the usual suspects but from a name many of us in the USA had not heard until recently: Huawei, the manufacturer of the Mate 9. Compared to the preceding Mate 8 (which we looked at this past August), this new handset is a significant improvement in most respects.
With this phone Huawei has really come into their own with their signature phone design, and 2016 was a very good product year with the company’s smartphone offerings. The P9 handset launched early in 2016, offering not only solid specs and impressive industrial design, but a unique camera that was far more than a gimmick. Huawei’s partnership with Leica has resulted in a dual-camera system that operates differently than systems found on phones such as the iPhone 7 Plus, and the results are very impressive. The Mate 9 is an extension of that P9 design, adapted for their larger Mate smartphone series.
Introduction and Technical Specifications
Courtesy of ECS
The ECS Z170-Lightsaber motherboard is the newest offering in ECS' L337 product line, built around the Intel Z170 Express chipset. The Z170-Lightsaber builds on their previous Z170-based product, adding several enthusiast-friendly features such as enhanced audio and RGB LED support. With an MSRP of $180, ECS has priced the Lightsaber as a higher-tier offering, justified by its additional features and functionality compared to their previous Z170-based product.
ECS designed the Z170-Lightsaber with a 14-phase digital power delivery system, using high-efficiency chokes and MOSFETs as well as solid core capacitors for optimal board performance. ECS integrated the following features into the Z170-Lightsaber board: six SATA 3 ports; one SATA-Express port; a PCIe x2 M.2 port; a Qualcomm Killer GigE NIC; three PCI-Express x16 slots; four PCI-Express x1 slots; a 3-digit diagnostic LED display; on-board power, reset, quick overclock, BIOS set, BIOS update, BIOS backup, and clear CMOS buttons; a Realtek audio solution; integrated DisplayPort and HDMI video output; and USB 2.0, 3.0, and 3.1 Gen2 port support.
Get your brains ready
Just before the weekend, Josh and I got a chance to speak with David Kanter about the AMD Zen architecture and what it might mean for the Ryzen processor due out in less than a month. For those of you not familiar with David and his work, he is an analyst and consultant on processor architecture and design through Real World Tech, while also serving as a writer and analyst for the Microprocessor Report as part of the Linley Group. If you want to see a discussion forum that focuses on architecture at an incredibly detailed level, the Real World Tech forum will have you covered - it's an impressive place to learn.
David was kind enough to spend an hour with us to talk about a recently-made-public report he wrote on Zen. It's definitely a discussion that dives into details most articles and stories on Zen don't broach, so be prepared to do some pausing and Googling of phrases and technologies you may not be familiar with. Still, for any technology enthusiast who wants an expert's opinion on how Zen compares to Intel Skylake and how Ryzen might fare when it's released this year, you won't want to miss it.
Intro, Exterior and Internal Features
Lenovo sent over an OLED-equipped ThinkPad X1 Yoga a while back. I was mid-development on our client SSD test suite and had some upcoming travel. Given that the new suite’s result number crunching spreadsheet extends out to column FHY (column 4,289, for those counting), I really needed a higher-res screen and improved computing horsepower in a mobile package. I commandeered the X1 Yoga OLED for the trip, and to say it grew on me quickly is an understatement. While I do tend to reserve my heavier duty computing tasks and crazy spreadsheets for desktop machines and 40” 4K displays, the compute power of the X1 Yoga proved quite reasonable for a mobile platform. Sure, there is a built-in pen that comes in handy when employing the Yoga’s flip-over convertibility into tablet mode, but the real beauty of this particular laptop comes with its optional 2560x1440 14” OLED display.
OLED is just one of those things you need to see in person to truly appreciate. Photos of these screens just can’t capture the perfect blacks and vivid colors. In productivity use, something about either the pixel pattern or the amazing contrast made me feel like the effective resolution of the panel was higher than its rating. It really is a shame that you are likely reading this article on an LCD, because the OLED panel on this particular model of Lenovo laptop really is the superstar. I’ll dive more into the display later on, but for now let’s cover the basics:
The new EVGA GTX 1080 FTW2 with iCX Technology
Back in November of 2016, EVGA had a problem on its hands. The company had a batch of GTX 10-series graphics cards using the new ACX 3.0 cooler solution leave the warehouse missing thermal pads required to keep the power management hardware on its cards within reasonable temperature margins. To its credit, the company took the oversight seriously and instituted a set of solutions for consumers to select from: RMA, a new VBIOS to increase fan speeds, or installing thermal pads on your hardware manually. Still, as is the case with any product quality lapse like that, there were (and are) lingering questions about EVGA’s ability to deliver reliable products while adding features and new options that don’t compromise the basics.
Internally, the drive to correct these lapses was…strong. From the very top of the food chain on down, it was hammered home that something like this simply couldn’t occur again, and even more so, EVGA was to develop and showcase a new feature set and product lineup demonstrating its ability to innovate. Thus was born, and accelerated, the EVGA iCX Technology infrastructure. While this was something in the pipeline for some time already, it was moved up to counter any negative bias that might have formed for EVGA’s graphics cards over the last several months. The goal was simple: prove that EVGA was the leader in graphics card design and prove that EVGA has learned from previous mistakes.
EVGA iCX Technology
Previous issues aside, the creation of iCX Technology is built around one simple question: is one GPU temperature sensor enough? For nearly all of today’s graphics cards, cooling is based around the temperature of the GPU silicon itself, as measured by NVIDIA’s integrated sensor (the case for all of EVGA’s cards). This is how fan curves are built, how GPU clock speeds are handled with GPU Boost, how noise profiles are created, and more. But as process technology has improved and GPU designs have shifted towards power efficiency, the GPU itself is often no longer the thermally limiting factor.
As it turns out, converting 12V (from the power supply) to ~1V (necessary for the GPU) is a simple process that creates a lot of excess heat. The thermal images above clearly demonstrate that, and EVGA isn’t the only card vendor to take notice. EVGA’s product issue from last year was related to this – the fans were only spinning fast enough to keep the GPU cool and did not take into account the temperature of memory or power delivery.
The fix from EVGA is to ratchet up the number of sensors on the card PCB and wrap them with intelligence in the form of MCUs, updated Precision XOC software and user viewable LEDs on the card itself.
EVGA graphics cards with iCX Technology will include 9 total thermal sensors on the board, independent of the GPU temperature sensor directly integrated by NVIDIA. There are three sensors for memory, five for power delivery and an additional sensor for the GPU temperature. Some are located on the back of the PCB to avoid any conflicts with trace routing between critical components, including the secondary GPU sensor.
NVIDIA P100 comes to Quadro
At the start of the SOLIDWORKS World conference this week, NVIDIA took the cover off of a handful of new Quadro cards targeting professional graphics workloads. Though the bulk of NVIDIA’s discussion covered lower cost options like the Quadro P4000, P2000, and below, the most interesting product sits at the high end, the Quadro GP100.
As you might guess from the name alone, the Quadro GP100 is based on the GP100 GPU, the same silicon used on the Tesla P100 announced back in April of 2016. At the time, the GP100 GPU was specifically billed as an HPC accelerator for servers. It had a unique form factor with a passive cooler that required additional chassis fans. Just a couple of months later, a PCIe version of the GP100 was released under the Tesla GP100 brand with the same specifications.
Today that GPU hardware gets a third iteration as the Quadro GP100. Let’s take a look at the Quadro GP100 specifications and how it compares to some recent Quadro offerings.
| | Quadro GP100 | Quadro P6000 | Quadro M6000 | Full GP100 |
|---|---|---|---|---|
| FP32 CUDA Cores / SM | 64 | 64 | 64 | 64 |
| FP32 CUDA Cores / GPU | 3584 | 3840 | 3072 | 3840 |
| FP64 CUDA Cores / SM | 32 | 2 | 2 | 32 |
| FP64 CUDA Cores / GPU | 1792 | 120 | 96 | 1920 |
| Base Clock | 1303 MHz | 1417 MHz | 1026 MHz | TBD |
| GPU Boost Clock | 1442 MHz | 1530 MHz | 1152 MHz | TBD |
| FP32 TFLOPS (SP) | 10.3 | 12.0 | 7.0 | TBD |
| FP64 TFLOPS (DP) | 5.15 | 0.375 | 0.221 | TBD |
| Memory Interface | 1.4 Gbps HBM2 | 9 Gbps GDDR5X | 6.6 Gbps GDDR5 | 1.4 Gbps HBM2 |
| Memory Bandwidth | 716 GB/s | 432 GB/s | 316.8 GB/s | ? |
| Memory Size | 16 GB | 24 GB | 12 GB | 16 GB |
| TDP | 235 W | 250 W | 250 W | TBD |
| Transistors | 15.3 billion | 12 billion | 8 billion | 15.3 billion |
| GPU Die Size | 610 mm² | 471 mm² | 601 mm² | 610 mm² |
There are some interesting stats here that may not be obvious at first glance. Most interesting is that despite the pricing and segmentation, the GP100 is not the de facto fastest Quadro card from NVIDIA depending on your workload. With 3584 CUDA cores running at somewhere around 1400 MHz at Boost speeds, the single precision (32-bit) rating for GP100 is 10.3 TFLOPS, less than the recently released P6000 card. Based on GP102, the P6000 has 3840 CUDA cores running at something around 1500 MHz for a total of 12 TFLOPS.
GP100 (full) Block Diagram
Clearly the placement for Quadro GP100 is based around its 64-bit, double precision performance, and its ability to offer real-time simulations on more complex workloads than other Pascal-based Quadro cards can offer. The Quadro GP100 offers a 1/2 DP compute rate, totaling 5.2 TFLOPS. The P6000, on the other hand, is only capable of 0.375 TFLOPS with the standard, consumer-level 1/32 DP rate. Inclusion of ECC memory support on GP100 is also something no other recent Quadro card has.
Raw graphics performance and throughput remain open questions until someone does some testing, but it seems likely that the Quadro P6000 will still be the best solution there by at least a slim margin. With a higher CUDA core count, higher clock speeds, and an equivalent architecture, the P6000 should run games, graphics rendering, and design applications very well.
There are other important differences offered by the GP100. The memory system is built around a 16GB HBM2 implementation, which means more total memory bandwidth but at a lower capacity than the 24GB Quadro P6000. That 66% memory bandwidth advantage should benefit applications that are pixel-throughput bound, as long as the compute capability keeps up on the backend.
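The TFLOPS and bandwidth figures above fall out of standard peak-throughput formulas, so here is a rough sketch of the arithmetic (my own math, using the clock and core counts from the table; NVIDIA's official figures may round differently):

```python
# Peak TFLOPS = cores * 2 ops per clock (fused multiply-add) * clock rate.
# Clock and core values are the boost figures from the spec table above.

def peak_tflops(cores: int, boost_mhz: float, ops_per_clock: int = 2) -> float:
    return cores * ops_per_clock * boost_mhz * 1e6 / 1e12

# Quadro GP100: 3584 FP32 cores at ~1442 MHz boost
print(round(peak_tflops(3584, 1442), 1))  # ~10.3 TFLOPS single precision

# Quadro P6000: 3840 FP32 cores at ~1530 MHz boost
print(round(peak_tflops(3840, 1530), 1))  # ~11.8 TFLOPS (NVIDIA rounds to 12)

# GP100 runs FP64 at 1/2 the FP32 rate (1792 FP64 cores)
print(round(peak_tflops(1792, 1442), 2))  # ~5.17 TFLOPS double precision

# HBM2 bandwidth advantage over the P6000's GDDR5X
print(round(716 / 432 - 1, 2))            # ~0.66 -> the "66% more" figure
```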
Mini-STX is the newest, smallest PC form-factor that accepts a socketed CPU, and in this review we'll be taking a look at a complete mini-STX build that will occupy just 1.53 liters of space. With a total size of just 6.1 x 5.98 x 2.56 inches, the SilverStone VT01 case offers a very small footprint, and the ECS H110S-2P motherboard accepts Intel desktop CPUs up to 65W (though I may have ignored this specification).
PS3 controller for scale. (And because it's the best controller ever.)
The Smallest Form-Factor
The world of small form-factor PC hardware is divided between tiny kit solutions such as the Intel NUC (and the host of mini-PCs from various manufacturers), and the mini-ITX form-factor for system builders. The advantage of mini-ITX is its ability to host standard components, such as desktop-class processors and full-length graphics cards. However, mini-ITX requires a significantly larger enclosure than a mini-PC, and the "thin mini-ITX" standard has been something of a bridge between the two, essentially halving the height requirement of mini-ITX. Now, an even smaller standard has emerged, and it almost makes mini-ITX look big in comparison.
Left: ECS H110S-2P (mini-STX) / Right: EVGA Z170 Stinger (mini-ITX)
Mini-STX had been teased for a couple of years (I wrote my first news post about it in January of 2015), and was originally an Intel concept called "5x5"; though the motherboard is actually about 5.8 x 5.5 inches (147 x 140 mm). At CES 2016 I was able to preview a SilverStone enclosure design for these systems, and ECS is one of the manufacturers producing mini-STX motherboards with an Intel H110-based board introduced this past summer. We saw some shipping products for the newest form-factor in 2016, and both companies were kind enough to send along a sample of these micro-sized components for a build. With the parts on hand it is now time to assemble my first mini-STX system, and of course I'll cover the process - and results - right here!
In conjunction with Ericsson, Netgear, and Telstra, Qualcomm officially unveiled the first Gigabit LTE-ready network, with Sydney, Australia the first city to have the new cellular spec deployed through Telstra. Gigabit LTE, dubbed 4GX by Telstra, offers up to 1 Gbps download speeds and 150 Mbps upload speeds with a supported device. Making Gigabit LTE a reality took a partnership between all four companies: Ericsson provided the backend hardware and software infrastructure and upgrades; Qualcomm designed its next-gen Snapdragon 835 SoC and Snapdragon X16 modem for Gigabit LTE support; Netgear developed the Nighthawk M1 Mobile router, which leverages the Snapdragon 835; and Telstra brought it all together on its Australian cellular network. Qualcomm, Ericsson, and Telstra all see the 4GX implementation as a solid step forward on the path to 5G, with 4GX acting as the foundation layer for next-gen 5G networks and providing a fallback, much as 3G acted as a fallback for current 4G LTE networks.
Gigabit LTE Explained
Courtesy of Telstra
What exactly is meant by Gigabit LTE (or 4GX, as Telstra has dubbed the new cellular technology)? Gigabit LTE increases both the download and upload speeds of current-generation 4G LTE, to 1 Gbps down and 150 Mbps up, by leveraging several technologies that optimize signal transmission between the consumer device and the cellular network itself. Qualcomm designed the Snapdragon X16 modem to operate on dual 60MHz signals with 4x4 MIMO support, or dual 80MHz signals without 4x4 MIMO. Further, they increased the modem's QAM support to 256-QAM (8 bits per symbol) from the current 64-QAM (6 bits), enabling 33% more data per stream - an increase from 75 Mbps to 100 Mbps per stream. The X16 modem leverages a total of 10 communication streams to deliver up to 1 Gbps performance, and also offers access to previously inaccessible frequency bands using LAA (License Assisted Access) to meet the increased spectrum needs of Gigabit LTE.
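The per-stream and aggregate numbers in the paragraph above check out with some quick arithmetic (my own back-of-the-envelope math, not Qualcomm's spec sheet):

```python
# Sanity-check of the Gigabit LTE arithmetic described above.
import math

bits_64qam = math.log2(64)    # 6 bits per symbol
bits_256qam = math.log2(256)  # 8 bits per symbol

# 8/6 bits per symbol -> one third more data per stream
gain = bits_256qam / bits_64qam - 1
print(round(gain * 100))      # ~33 (% more data per stream)

per_stream_64qam = 75.0       # Mbps, per the article
per_stream_256qam = per_stream_64qam * bits_256qam / bits_64qam
print(per_stream_256qam)      # 100.0 Mbps per stream

streams = 10
print(per_stream_256qam * streams)  # 1000.0 Mbps -> ~1 Gbps aggregate
```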
Introduction, Specifications and Packaging
Micron paper-launched their 5100 Series enterprise SATA SSD lineup early last month. The new line promised many sought-after features for such a part: high performance, high performance consistency, high capacities, and relatively low cost/GB (thanks to IMFT 3D NAND, which is now well into volume production after launching nearly two years ago). The highs and lows I just rattled off are not only good for enterprise, they are good for general consumers as well. Enterprises deal in large SSD orders, which translates to increased production and ultimately a reduction in the production cost of the raw NAND that also goes into client SSDs and other storage devices.
The 5100 Series comes in three tiers and multiple capacities per tier (with even more launching over the next few months). Micron sampled us a 2TB 'ECO' model and a 1TB 'MAX'. The former is optimized more for read intensive workloads, while the latter is designed to take a continuous random write beating.
I'll be trying out some new QoS tests in this review, with plans to expand out with comparisons in future pieces. This review will stand as a detailed performance verification of these two parts - something we are uniquely equipped to accomplish.
Introduction and Features
In this review we are going to take a detailed look at FSP Technology Inc.’s new Twins 500W redundant power supply. It’s been quite a while since we reviewed a redundant power supply in the ATX form factor. This should be interesting! (Actually, it turned out to be very interesting, with a few surprises along the way.)
The FSP Twins 500W redundant power supply is targeted towards use in home and small businesses for mail or web server systems that require maximum up time. The Twins 500W PSU incorporates two 520W modular power supplies inside one standard ATX housing. Under normal operation the two power supplies operate in parallel, sharing the load. If one of the power supply modules should fail, the other one automatically takes over with no down time. And since the power supply modules are hot-swappable, a faulty unit can be replaced without having to turn off the system.
FSP claims the Twins 500W is a server-grade power supply designed to deliver stable power and is certified 80 Plus Gold for high efficiency. The ATX chassis measures 190mm (7.4”) deep and is fitted with fixed, ribbon-style cables. Each modular power supply uses a 40mm fan for cooling and the Twins 500W comes backed by a 5-year warranty. Users can also download and install FSP’s Guardian software to monitor power input, power output, and efficiency, along with other parameters in real time if desired.
FSP Twins 500W Redundant PSU Key Features:
• ATX PS2 redundant size ideal for mail, web and home server
• Server-grade design provides stable power
• Hot-swappable module design
• 80 Plus 230V Internal Gold certification
• Digital-controlled power supply supports FSP Guardian monitoring software
• Smart power supply supports Alarm Guard and status LED indicators
• Flat ribbon-style cables with two 4+4 pin CPU connectors
• Complies with ATX 12V and EPS 12V standards
• Protections: OCP, OVP, SCP, FFP (Fan Failure Protection)
• 5-Year Manufacturer’s warranty
• MSRP: $399.00 USD
Bluetooth has come a long way since the technology was introduced in 1998. The addition of the Advanced Audio Distribution Profile (A2DP) in 2003 brought support for high-quality audio streaming, but Bluetooth still didn’t offer anywhere near the quality of a wired connection. This unfortunate fact is often overlooked in favor of the technology's convenience factor, but what if we could have the best of both worlds? This is where Qualcomm's aptX comes in, and it is a departure from the methods in place since the introduction of Bluetooth audio.
What is aptX audio? It's actually a codec that compresses audio in a very different manner than that of the standard Bluetooth codec, and the result is as close to uncompressed audio as the bandwidth-constrained Bluetooth technology can possibly allow. Qualcomm describes aptX audio as "a bit-rate efficiency technology that ensures you receive the highest possible sound quality from your Bluetooth audio device," and there is actual science to back up this claim. After doing quite a bit of reading on the subject as I prepared for this review, I found that the technology behind aptX audio, and its history, is very interesting.
A Brief History of aptX Audio
The aptX codec has actually been around since long before Bluetooth, with its invention in the 1980s and first commercial applications beginning in the 1990s. The version now found in compatible Bluetooth devices is 4th-generation aptX, and in the very beginning it was actually a hardware product (the APTX100ED chip). The technology has had a continued presence in pro audio for three decades now, with a wider reach than I had ever imagined when I started researching the topic. For example, aptX is used for ISDN line connections for remote voice work (voice over, ADR, foreign language dubs, etc.) in movie production, and even for mix approvals on film soundtracks. In fact, aptX was also the compression technology behind DTS theater sound, which had its introduction in 1993 with Jurassic Park. It is in use in over 30,000 radio stations around the world, where it has long been used for digital music playback.
So, while it is clear that aptX is a respected technology with a long history in the audio industry, how exactly does this translate into improvements for someone who just wants to listen to music over a bandwidth-constrained Bluetooth connection? The nature of the codec and its differences/advantages vs. A2DP is a complex topic, but I will attempt to explain in plain language how it actually can make Bluetooth audio sound better. Having science behind the claim of better sound goes a long way in legitimizing perceptual improvements in audio quality, particularly as the high-end audio industry is full of dubious - and often ridiculous - claims. There is no snake-oil to be sold here, as we are simply talking about a different way to compress and uncompress an audio signal - which is the purpose of a codec (code, decode) to begin with.
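To put a number on what a codec has to accomplish here: aptX's widely published fixed 4:1 compression ratio, applied to CD-quality stereo PCM, yields a bitrate that fits comfortably within Bluetooth's constrained bandwidth. A quick illustration of that arithmetic (the bitrates are my own calculation from the standard CD parameters, not figures from Qualcomm):

```python
# What 4:1 compression means for a CD-quality stereo stream.

def pcm_bitrate_kbps(sample_rate_hz: int, bit_depth: int, channels: int) -> float:
    """Raw (uncompressed) PCM bitrate in kilobits per second."""
    return sample_rate_hz * bit_depth * channels / 1000.0

cd_quality = pcm_bitrate_kbps(44_100, 16, 2)
print(cd_quality)       # 1411.2 kbps of uncompressed stereo PCM

aptx_rate = cd_quality / 4   # aptX's fixed 4:1 compression ratio
print(aptx_rate)        # 352.8 kbps - a load Bluetooth can realistically carry
```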
Introduction and Features
EVGA offers one of the most complete lines of PC power supplies on the market today and they are continually striving to improve their product offerings. The new Supernova G3 Series power supplies are based on EVGA’s popular G2 series but now come in a smaller chassis measuring only 150mm (5.9”) deep. The G3 Series also uses a 130mm cooling fan with a hydraulic dynamic bearing for quiet operation and EVGA claims the new G3 units offer even better performance than the G2 models. The Supernova G3 Series is now available in five different models ranging from 550W up to 1000W. We will be taking a detailed look at the Supernova 850W G3 power supply in this review.
• EVGA SuperNOVA 550W G3 ($99.99 USD)
• EVGA SuperNOVA 650W G3 ($109.99 USD)
• EVGA SuperNOVA 750W G3 ($129.99 USD)
• EVGA SuperNOVA 850W G3 ($149.99 USD)
• EVGA SuperNOVA 1000W G3 ($169.99 USD)
The Supernova G3 series power supplies are 80 Plus Gold certified for high efficiency and feature all modular cables, high-quality Japanese brand capacitors, and EVGA’s ECO Intelligent Thermal Control System which enables fan-less operation at low to mid power. All G3 series power supplies are NVIDIA SLI and AMD Crossfire Ready and are backed by either a 7-year (550W and 650W) or 10-year (750W, 850W and 1000W) EVGA warranty.
EVGA SuperNOVA 850W G3 PSU Key Features:
• 850W Continuous DC output at up to 50°C
• 10-Year warranty with unparalleled EVGA Customer Support
• 80 PLUS Gold certified, with up to 90%/92% efficiency (115VAC/240VAC)
• Highest quality Japanese brand capacitors ensure long-term reliability
• Fully modular cables to reduce clutter and improve airflow
• Quiet 130mm hydraulic dynamic bearing fan for long life
• ECO Intelligent Thermal Control allows silent, fan-less operation at low power
• NVIDIA SLI & AMD Crossfire Ready
• Active Power Factor correction (0.99) with Universal AC input
• Heavy-duty protections: OVP, UVP, OCP, OPP, SCP, and OTP
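The efficiency and power factor figures above translate directly into wall draw and waste heat. The numbers below are simple arithmetic from the spec sheet (90% efficiency at 115VAC, 0.99 power factor), assuming a full 850W DC load; real-world efficiency varies with load level.

```python
# Back-of-envelope numbers for the 850W G3 at full load on 115 VAC,
# using the spec-sheet figures (90% efficiency, 0.99 power factor).
dc_output_w = 850.0
efficiency = 0.90        # 80 Plus Gold figure at 115 VAC
power_factor = 0.99      # active PFC

ac_input_w = dc_output_w / efficiency      # real power drawn from the wall
waste_heat_w = ac_input_w - dc_output_w    # dissipated inside the PSU as heat
apparent_va = ac_input_w / power_factor    # volt-amps the AC circuit supplies

print(f"Wall draw:     {ac_input_w:.0f} W")
print(f"Waste heat:    {waste_heat_w:.0f} W")
print(f"Apparent load: {apparent_va:.0f} VA")
```

In other words, a fully loaded 850W G3 pulls roughly 944W from the wall and sheds about 94W as heat, which is the load the 130mm fan (or fan-less convection at lower power) has to deal with.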
Price and Other Official Information
Since our last Nintendo Switch post, the company has held its full reveal event, which confirmed the two most critical details: it will launch on March 3rd for $299.99 USD ($399.99 CAD). This is basically what the rumors have pointed to for a little while, and it makes sense. That was last week, but this week gave rise to a lot more information, mostly from either an interview with Nintendo of America's President and COO, Reggie Fils-Aimé, or from footage that was recorded and analyzed by third parties, like Digital Foundry.
From the GameSpot interview, above, Reggie was asked about the launch bundle and why it didn't include a pack-in game, like 1-2-Switch. His response was blunt and honest: they wanted to hit $299 USD, and the game fell below the cut-off point. While I can respect that, I cannot see enough people buying the title at full price for it to have been green-lit in the first place. If Nintendo wasn't willing to simply eat the cost of the game's development to shape public (and developer) perception, then at least its cost wasn't baked into the system's price; although they might still end up taking the loss if the game doesn't sell anyway.
Speaking of price, we are also seeing what the accessories will sell for.
On the controller side, the more conventional option, the Nintendo Switch Pro Controller, has an MSRP of $69.99 USD. That is notably higher than its competitors: the DualShock 4 for the PlayStation 4 and the Xbox Wireless Controller for the Xbox One both sell for $49. While the Pro Controller has a bunch of interesting features, like "HD rumble", motion sensing, and some amiibo support, its competitors offer similar capabilities for $20 less.
The Switch-specific controllers, called “Joy-Con”, are $10 more expensive than the Pro Controller, at $79.99 USD for the pair, or just $49.99 USD for the left or right halves. (Some multiplayer titles only require a half, so Nintendo doesn’t force you to buy the whole pair at the expense of extra SKUs, which is also probably helpful if you lose one.) This seems high, and could be a significant problem going forward.
As for its availability? Nintendo has disclosed that they are pushing 2 million units into the channel, so they are not expecting shortages like the NES Classic had. They do admit that demand is a bit up in the air, though.
Courtesy of ASUS
With the latest revision of the TUF line, ASUS made the decision to drop the well-known "Sabertooth" moniker from the board name, labeling the boards with the TUF branding only. The TUF Z270 Mark 1 motherboard is the flagship board in ASUS' TUF (The Ultimate Force) product line, designed around the Intel Z270 chipset. Thanks to that chipset, the board offers support for the latest Intel Kaby Lake processor line as well as Dual Channel DDR4 memory. While the MSRP for the board may be a bit higher than expected, its $239 price is more than justified by the board's build quality and "armored" offerings.
Courtesy of ASUS
The TUF Z270 Mark 1 motherboard is built with the same quality and attention to detail that you've come to expect from TUF-branded motherboards. It carries the line's signature tan plastic armor overlay and is built around a 10-phase digital power design. The board contains the following integrated features: six SATA 3 ports; two M.2 PCIe x4 capable ports; dual GigE controllers - an Intel I219-V Gigabit NIC and an Intel I211 Gigabit NIC; three PCI-Express x16 slots; three PCI-Express x1 slots; an 8-channel audio subsystem; MEM OK! and USB BIOS Flashback buttons; integrated DisplayPort and HDMI; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.
Courtesy of ASUS
ASUS also chose to include the armored backplate with the TUF Z270 Mark 1 motherboard, dubbed the "TUF Fortifier".
Performance and Impressions
This content was sponsored by AMD.
Last week in part 1 of our look at the Radeon RX 460 as a budget gaming GPU, I detailed our progress through component selection. Centered around an XFX 2GB version of the Radeon RX 460, we built a machine using an Intel Core i3-6100, ASUS H110M motherboard, 8GB of DDR4 memory, both an SSD and a HDD, as well as an EVGA power supply and Corsair chassis. Part 1 discussed the reasons for our hardware selections as well as an unboxing and preview of the giveaway to come.
In today's short write-up and video, I will discuss my impressions of the system overall as well as touch on its performance in a handful of games. Despite the low price, and despite the budget moniker attributed to this build, a budding PC gamer or converted console gamer will find plenty of capability in this system.
Let's quickly recap the components making up our RX 460 budget build.
Our Radeon RX 460 Build
**Budget Radeon RX 460 Build**

| Component | Selection |
| --- | --- |
| Processor | Intel Core i3-6100 - $109 |
| Cooler | CRYORIG M9i - $19 |
| Motherboard | ASUS H110M-A/M.2 - $54 |
| Memory | 2 x 4GB Crucial Ballistix DDR4-2400 - $51 |
| Graphics Card | XFX Radeon RX 460 2GB - $98 |
| Storage | 240GB SanDisk SSD Plus - $68; 1TB Western Digital Blue - $49 |
| Case | Corsair Carbide Series 88R - $49 |
| Power Supply | EVGA 500 Watt - $42 |
| Monitor | Nixeus VUE24A 1080p 144Hz FreeSync - $251 |
| Total Price | $549 on Amazon; $799 with monitor on Amazon |
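As a sanity check on the budget, the snippet below simply sums the listed component prices. Note that street prices on Amazon fluctuate daily, so the sum of the individual line items can land a few dollars off the cart totals quoted in the table.

```python
# Sum the listed component prices for the budget RX 460 build.
# (Street prices fluctuate, so cart totals at purchase time may differ slightly.)
parts = {
    "Intel Core i3-6100": 109,
    "CRYORIG M9i cooler": 19,
    "ASUS H110M-A/M.2": 54,
    "2 x 4GB Crucial Ballistix DDR4-2400": 51,
    "XFX Radeon RX 460 2GB": 98,
    "240GB SanDisk SSD Plus": 68,
    "1TB Western Digital Blue": 49,
    "Corsair Carbide 88R": 49,
    "EVGA 500 Watt PSU": 42,
}
monitor = 251  # Nixeus VUE24A 1080p 144Hz FreeSync

system_total = sum(parts.values())
print(f"System only:  ${system_total}")
print(f"With monitor: ${system_total + monitor}")
```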
For just $549 I was able to create a shopping list of hardware that provides very impressive performance for the investment.
The completed system is damn nice looking, if I do say so myself. The Corsair Carbide 88R case sports a matte black finish with a large window to peer in at the hardware contained within. Coupled with the Nixeus FreeSync display and some Logitech G mouse and keyboard hardware we love, this is a configuration that any PC gamer would be proud to display.