Manufacturer: PC Perspective

Living Long and Prospering

The open fork of AMD’s Mantle, the Vulkan API, was released exactly a year ago with, as we reported, a hard launch. This meant public, but not main-branch, drivers for developers, a few public SDKs, a proof-of-concept patch for The Talos Principle, and, of course, the ratified specification. This set the API up to find success right out of the gate, and we can now look back over the year since.

khronos-2017-vulkan-alt-logo.png

Thor's hammer, or a tempest in a teapot?

The elephant in the room is DOOM. This game has successfully integrated the API and it uses many of its more interesting features, like asynchronous compute. Because the API is designed in a sort-of “make a command, drop it on a list” paradigm, the driver is able to select commands based on priority and available resources. AMD’s products got a significant performance boost, relative to OpenGL, catapulting their Fury X GPU up to the enthusiast level that its theoretical performance suggested.
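
As a loose illustration of that submission model, here is a toy Python sketch (not actual Vulkan code; the class and priority scheme are invented for illustration) of a driver-like scheduler picking submitted work by priority:

```python
import heapq

# Toy model of the "build a command, drop it on a list" paradigm -- a
# driver-like scheduler pops whichever submitted work item has the
# highest priority (lowest number) when execution resources free up.
class CommandQueue:
    def __init__(self):
        self._heap = []
        self._order = 0  # tie-breaker keeps submission order stable

    def submit(self, name, priority):
        heapq.heappush(self._heap, (priority, self._order, name))
        self._order += 1

    def execute_next(self):
        _, _, name = heapq.heappop(self._heap)
        return name

q = CommandQueue()
q.submit("graphics: shadow pass", priority=1)
q.submit("async compute: particle sim", priority=2)
q.submit("graphics: g-buffer pass", priority=1)

executed = [q.execute_next() for _ in range(3)]
print(executed)
```

The point of the sketch is that the compute work can slot in whenever the higher-priority graphics work leaves resources idle, which is roughly what asynchronous compute buys DOOM on AMD hardware.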

Mobile developers have been picking up the API, too. Google, who is known for banishing OpenCL from their Nexus line and challenging OpenGL ES with their Android Extension Pack (later integrated into OpenGL ES with version 3.2), has strongly backed Vulkan. The API was integrated as a core feature of Android 7.0.

On the engine and middleware side of things, Vulkan is currently “ready for shipping games” as of Unreal Engine 4.14. It is also included in Unity 5.6 Beta, which is expected for full release in March. Frameworks for emulators are also integrating Vulkan, often just to say they did, but sometimes to emulate the quirks of these systems’ offbeat graphics co-processors. Many other engines, from Source 2 to Torque 3D, have also announced or added Vulkan support.

Finally, for the API itself, The Khronos Group announced (pg 22 from SIGGRAPH 2016) areas that they are actively working on. The top feature is “better” multi-GPU support. While Vulkan, like OpenCL, allows developers to enumerate all graphics devices and target them individually with work, it doesn’t have certain mechanisms, like the ability to directly ingest output from one GPU into another. They haven’t announced a timeline for this.

Subject: Mobile
Manufacturer: Huawei

Introduction and Specifications

The Mate 9 is the current version of Huawei’s signature 6-inch smartphone, building on last year’s iteration with the company’s new Kirin 960 SoC (featuring ARM's next-generation Bifrost GPU architecture), improved industrial design, and exclusive Leica-branded dual camera system.

Mate9_Main.jpg

In the ultra-competitive smartphone world there is little room at the top, and most companies are simply looking for a share of the market. Apple and Samsung have occupied the top two spots for some time, with HTC, LG, Motorola, and others far behind. But the new #3 emerged not from the usual suspects but from a name many of us in the USA had not heard until recently: Huawei, the manufacturer of the Mate 9. And compared to the preceding Mate 8 (which we looked at this past August), this new handset is a significant improvement in most respects.

With this phone Huawei has really come into their own on design, and 2016 was a very good year for the company’s smartphone offerings. The P9 handset launched early in 2016, offering not only solid specs and impressive industrial design, but a unique camera that was far more than a gimmick. Huawei’s partnership with Leica has resulted in a dual-camera system that operates differently than the systems found on phones such as the iPhone 7 Plus, and the results are very impressive. The Mate 9 is an extension of that P9 design, adapted for the larger Mate smartphone series.

DSC_0649.jpg

Continue reading our review of the Huawei Mate 9!

Subject: Motherboards
Manufacturer: ECS

Introduction and Technical Specifications

Introduction

02-board.jpg

Courtesy of ECS

The ECS Z170-Lightsaber motherboard is the newest offering in ECS' L337 product line, built around the Intel Z170 Express chipset. The Z170-Lightsaber builds on their previous Z170-based product, adding several enthusiast-friendly features like enhanced audio and RGB LED support. With an MSRP of $180, ECS priced the Lightsaber as a higher-tiered offering, justified by its additional features and functionality compared to their previous Z170-based product.

03-board-profile.jpg

Courtesy of ECS

ECS designed the Z170-Lightsaber with a 14-phase digital power delivery system, using high-efficiency chokes and MOSFETs, as well as solid core capacitors, for optimal board performance. ECS integrated the following features into the Z170-Lightsaber board: six SATA 3 ports; one SATA-Express port; a PCIe x2 M.2 port; a Qualcomm Killer GigE NIC; three PCI-Express x16 slots; four PCI-Express x1 slots; a 3-digit diagnostic LED display; on-board power, reset, quick overclock, BIOS set, BIOS update, BIOS backup, and clear CMOS buttons; a Realtek audio solution; integrated DisplayPort and HDMI video outputs; and USB 2.0, 3.0, and 3.1 Gen2 ports.

Continue reading our review of the ECS Z170-Lightsaber Motherboard!

Author:
Subject: Processors
Manufacturer: AMD

Get your brains ready

Just before the weekend, Josh and I got a chance to speak with David Kanter about the AMD Zen architecture and what it might mean for the Ryzen processor due out in less than a month. For those of you not familiar with David and his work, he is an analyst and consultant on processor architecture and design through Real World Tech, while also serving as a writer and analyst for the Microprocessor Report as part of the Linley Group. If you want to see a discussion forum that focuses on architecture at an incredibly detailed level, the Real World Tech forum will have you covered - it's an impressive place to learn.

zenpm-4.jpg

David was kind enough to spend an hour with us to talk about a recently-made-public report he wrote on Zen. It's definitely a discussion that dives into details most articles and stories on Zen don't broach, so be prepared to do some pausing and Googling of phrases and technologies you may not be familiar with. Still, if you're a technology enthusiast who wants an expert's opinion on how Zen compares to Intel Skylake and how Ryzen might fare when it's released this year, you won't want to miss it.

Subject: Mobile
Manufacturer: Lenovo
Tagged: yoga, X1, Thinkpad, oled, Lenovo

Intro, Exterior and Internal Features

Lenovo sent over an OLED-equipped ThinkPad X1 Yoga a while back. I was mid-development on our client SSD test suite and had some upcoming travel. Given that the new suite’s number-crunching spreadsheet extends out to column FHY (4289 for those counting), I really needed a higher-res screen and more compute horsepower in a mobile package. I commandeered the X1 Yoga OLED for the trip, and to say it grew on me quickly is an understatement. While I do tend to reserve my heavier-duty computing tasks and crazy spreadsheets for desktop machines and 40” 4K displays, the compute power of the X1 Yoga proved quite reasonable for a mobile platform. Sure, there is a built-in pen that comes in handy when employing the Yoga’s flip-over convertibility into tablet mode, but the real beauty of this particular laptop comes with its optional 2560x1440 14” OLED display.
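
Incidentally, the column-label math checks out; a few lines of Python can convert a spreadsheet column label to its number:

```python
def column_to_number(label):
    """Convert a spreadsheet column label (A, B, ..., Z, AA, AB, ...)
    to its 1-based column number: A -> 1, Z -> 26, AA -> 27."""
    n = 0
    for ch in label.upper():
        # Base-26 digits, except there's no zero: A..Z map to 1..26.
        n = n * 26 + (ord(ch) - ord("A") + 1)
    return n

print(column_to_number("FHY"))  # 4289, matching the count above
```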

170209-180903.jpg

OLED is just one of those things you need to see in person to truly appreciate. Photos of these screens just can’t capture the perfect blacks and vivid colors. In productivity use, something about either the pixel pattern or the amazing contrast made me feel like the effective resolution of the panel was higher than its rating. It really is a shame that you are likely reading this article on an LCD, because the OLED panel on this particular model of Lenovo laptop really is the superstar. I’ll dive more into the display later on, but for now let’s cover the basics:

Read on for our review of the ThinkPad X1 Yoga!

Author:
Manufacturer: EVGA

The new EVGA GTX 1080 FTW2 with iCX Technology

Back in November of 2016, EVGA had a problem on its hands. The company had a batch of GTX 10-series graphics cards using the new ACX 3.0 cooler solution leave the warehouse missing thermal pads required to keep the power management hardware on its cards within reasonable temperature margins. To its credit, the company took the oversight seriously and instituted a set of solutions for consumers to select from: an RMA, a new VBIOS to increase fan speeds, or manually installing thermal pads on your hardware. Still, as is the case with any product quality lapse like that, there were (and are) lingering questions about EVGA’s ability to maintain a reliable product with features and new options that don’t compromise the basics.

Internally, the drive to correct these lapses was…strong. From the very top of the food chain on down, it was hammered home that something like this simply couldn’t occur again, and even more so, EVGA was to develop and showcase a new feature set and product lineup demonstrating its ability to innovate. Thus was born, and accelerated, the EVGA iCX Technology infrastructure. While this was something in the pipeline for some time already, it was moved up to counter any negative bias that might have formed for EVGA’s graphics cards over the last several months. The goal was simple: prove that EVGA was the leader in graphics card design and prove that EVGA has learned from previous mistakes.

EVGA iCX Technology

Previous issues aside, the creation of iCX Technology is built around one simple question: is one GPU temperature sensor enough? For nearly all of today’s graphics cards, cooling is based around the temperature of the GPU silicon itself, as measured by NVIDIA’s own sensor (for all of EVGA’s cards). This is how fan curves are built, how GPU clock speeds are handled with GPU Boost, how noise profiles are created, and more. But as process technology has improved, and GPU design has tilted towards power efficiency, the GPU itself is often no longer the thermally limiting factor.

slides05.jpg

As it turns out, converting 12V (from the power supply) to ~1V (necessary for the GPU) is an inherently lossy process that creates a lot of excess heat. The thermal images above clearly demonstrate that, and EVGA isn’t the only card vendor to take notice. In fact, EVGA’s product issue from last year was related to this – the fans were only spinning fast enough to keep the GPU cool and did not take into account the temperature of the memory or power delivery.

The fix from EVGA is to ratchet up the number of sensors on the card’s PCB and wrap them with intelligence in the form of MCUs, updated Precision XOC software, and user-viewable LEDs on the card itself.

slides10.jpg

EVGA graphics cards with iCX Technology will include 9 total thermal sensors on the board, independent of the GPU temperature sensor directly integrated by NVIDIA. There are three sensors for memory, five for power delivery and an additional sensor for the GPU temperature. Some are located on the back of the PCB to avoid any conflicts with trace routing between critical components, including the secondary GPU sensor.
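
To make the idea concrete, here is a hypothetical Python sketch of multi-sensor fan control. EVGA's actual firmware and Precision XOC logic are not public, so the curve shapes and temperature values below are invented purely for illustration:

```python
# Hypothetical per-subsystem fan curves: (temperature C, fan duty %).
# These values are invented for illustration, not EVGA's real curves.
GPU_CURVE = [(40, 30), (60, 50), (80, 100)]
VRM_CURVE = [(50, 30), (70, 60), (90, 100)]
MEM_CURVE = [(50, 30), (75, 55), (95, 100)]

def fan_percent(temp, curve):
    """Linearly interpolate a fan duty cycle from a (temp, fan%) curve."""
    if temp <= curve[0][0]:
        return curve[0][1]
    for (t0, f0), (t1, f1) in zip(curve, curve[1:]):
        if temp <= t1:
            return f0 + (f1 - f0) * (temp - t0) / (t1 - t0)
    return curve[-1][1]

def target_fan(gpu_c, vrm_c, mem_c):
    # The key idea behind multi-sensor cooling: don't track the GPU
    # alone -- drive the fan to satisfy the hottest subsystem relative
    # to its own curve.
    return max(fan_percent(gpu_c, GPU_CURVE),
               fan_percent(vrm_c, VRM_CURVE),
               fan_percent(mem_c, MEM_CURVE))

# A cool GPU no longer guarantees a slow fan if the VRM is running hot.
print(target_fan(gpu_c=55, vrm_c=85, mem_c=60))
```

With a single GPU sensor, the 55°C GPU reading would have kept the fan near 45%; the hot VRM pushes the sketch's target to 90% instead, which is exactly the failure mode the original thermal-pad issue exposed.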

Continue reading about EVGA iCX Technology!

Author:
Manufacturer: NVIDIA

NVIDIA P100 comes to Quadro

At the start of the SOLIDWORKS World conference this week, NVIDIA took the cover off of a handful of new Quadro cards targeting professional graphics workloads. Though the bulk of NVIDIA’s discussion covered lower cost options like the Quadro P4000, P2000, and below, the most interesting product sits at the high end, the Quadro GP100.

As you might guess from the name alone, the Quadro GP100 is based on the GP100 GPU, the same silicon used on the Tesla P100 announced back in April of 2016. At the time, the GP100 GPU was specifically billed as an HPC accelerator for servers. It had a unique form factor with a passive cooler that required additional chassis fans. Just a couple of months later, a PCIe version of the GP100 was released under the Tesla P100 brand with the same specifications.

quadro2017-2.jpg

Today that GPU hardware gets a third iteration as the Quadro GP100. Let’s take a look at the Quadro GP100 specifications and how it compares to some recent Quadro offerings.

| | Quadro GP100 | Quadro P6000 | Quadro M6000 | Full GP100 |
|---|---|---|---|---|
| GPU | GP100 | GP102 | GM200 | GP100 (Pascal) |
| SMs | 56 | 60 | 48 | 60 |
| TPCs | 28 | 30 | 24 | (30?) |
| FP32 CUDA Cores / SM | 64 | 64 | 64 | 64 |
| FP32 CUDA Cores / GPU | 3584 | 3840 | 3072 | 3840 |
| FP64 CUDA Cores / SM | 32 | 2 | 2 | 32 |
| FP64 CUDA Cores / GPU | 1792 | 120 | 96 | 1920 |
| Base Clock | 1303 MHz | 1417 MHz | 1026 MHz | TBD |
| GPU Boost Clock | 1442 MHz | 1530 MHz | 1152 MHz | TBD |
| FP32 TFLOPS (SP) | 10.3 | 12.0 | 7.0 | TBD |
| FP64 TFLOPS (DP) | 5.15 | 0.375 | 0.221 | TBD |
| Texture Units | 224 | 240 | 192 | 240 |
| ROPs | 128? | 96 | 96 | 128? |
| Memory Interface | 1.4 Gbps 4096-bit HBM2 | 9 Gbps 384-bit GDDR5X | 6.6 Gbps 384-bit GDDR5 | 4096-bit HBM2 |
| Memory Bandwidth | 716 GB/s | 432 GB/s | 316.8 GB/s | ? |
| Memory Size | 16GB | 24 GB | 12GB | 16GB |
| TDP | 235 W | 250 W | 250 W | TBD |
| Transistors | 15.3 billion | 12 billion | 8 billion | 15.3 billion |
| GPU Die Size | 610 mm2 | 471 mm2 | 601 mm2 | 610 mm2 |
| Manufacturing Process | 16nm | 16nm | 28nm | 16nm |

There are some interesting stats here that may not be obvious at first glance. Most interesting is that, despite the pricing and segmentation, the GP100 is not automatically the fastest Quadro card from NVIDIA; it depends on your workload. With 3584 CUDA cores running at around 1400 MHz at Boost speeds, the single-precision (32-bit) rating for GP100 is 10.3 TFLOPS, less than the recently released P6000. Based on GP102, the P6000 has 3840 CUDA cores running at around 1500 MHz for a total of 12 TFLOPS.

gp102-blockdiagram.jpg

GP100 (full) Block Diagram

Clearly the placement for Quadro GP100 is based around its 64-bit, double precision performance, and its ability to offer real-time simulations on more complex workloads than other Pascal-based Quadro cards can offer. The Quadro GP100 offers a 1/2 DP compute rate, totaling 5.2 TFLOPS. The P6000, on the other hand, is only capable of 0.375 TFLOPS at the standard, consumer-level 1/32 DP rate. The inclusion of ECC memory support on GP100 is also something no other recent Quadro card offers.
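
The peak-rate math behind these figures is straightforward: cores × 2 ops per clock (fused multiply-add) × clock speed. A quick sketch (the clocks are approximate, so the results land near, not exactly on, the quoted numbers):

```python
def tflops(cores, ops_per_clock, boost_mhz):
    """Peak TFLOPS = cores x ops/clock (2 for FMA) x clock speed."""
    return cores * ops_per_clock * boost_mhz * 1e6 / 1e12

# Quadro GP100: 3584 FP32 cores, 1792 FP64 cores (1/2 rate), ~1442 MHz
print(round(tflops(3584, 2, 1442), 1))   # ~10.3 FP32 TFLOPS
print(round(tflops(1792, 2, 1442), 2))   # ~5.17 FP64 TFLOPS

# Quadro P6000: 3840 FP32 cores at ~1530 MHz, but only 1/32-rate FP64
print(round(tflops(3840, 2, 1530), 1))   # ~11.8 FP32 TFLOPS
print(round(tflops(3840 // 32, 2, 1530), 3))  # ~0.367 FP64 TFLOPS
```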

quadro2017-3.jpg

Raw graphics performance and throughput is going to be questionable until someone does some testing, but it seems likely that the Quadro P6000 will still be the best solution for that by at least a slim margin. With a higher CUDA core count, higher clock speeds and equivalent architecture, the P6000 should run games, graphics rendering and design applications very well.

There are other important differences offered by the GP100. The memory system is built around a 16GB HBM2 implementation, which means more total memory bandwidth but a lower capacity than the 24GB Quadro P6000. Offering 66% more memory bandwidth does give the GP100 an advantage in applications that are memory-bandwidth bound, as long as the compute capability keeps up on the back end.
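
The bandwidth figures follow directly from bus width and per-pin data rate; a quick check:

```python
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s = bus width (bits) x rate / 8."""
    return bus_width_bits * gbps_per_pin / 8

gp100 = bandwidth_gbs(4096, 1.4)   # 4096-bit HBM2 at 1.4 Gbps
p6000 = bandwidth_gbs(384, 9.0)    # 384-bit GDDR5X at 9 Gbps
print(gp100, p6000)                # ~716.8 vs 432.0 GB/s
print(round((gp100 / p6000 - 1) * 100))  # ~66 (% more bandwidth)
```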

m.jpg

Continue reading our preview of the new Quadro GP100!

Manufacturer: PC Perspective

Introduction

Mini-STX is the newest, smallest PC form-factor that accepts a socketed CPU, and in this review we'll be taking a look at a complete mini-STX build that will occupy just 1.53 liters of space. With a total size of just 6.1 x 5.98 x 2.56 inches, the SilverStone VT01 case offers a very small footprint, and the ECS H110S-2P motherboard accepts Intel desktop CPUs up to 65W (though I may have ignored this specification).
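
The 1.53-liter figure is easy to verify from the case dimensions; a quick check in Python:

```python
LITERS_PER_CUBIC_INCH = 0.016387  # 1 inch = 2.54 cm exactly

def volume_liters(w_in, d_in, h_in):
    """External volume of an enclosure given dimensions in inches."""
    return w_in * d_in * h_in * LITERS_PER_CUBIC_INCH

# SilverStone VT01: 6.1 x 5.98 x 2.56 inches
print(round(volume_liters(6.1, 5.98, 2.56), 2))  # ~1.53 liters
```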

DSC_0490.jpg

PS3 controller for scale. (And because it's the best controller ever.)

The Smallest Form-Factor

The world of small form-factor PC hardware is divided between tiny kit solutions such as the Intel NUC (and the host of mini-PCs from various manufacturers), and the mini-ITX form-factor for system builders. The advantage of mini-ITX is its ability to host standard components, such as desktop-class processors and full-length graphics cards. However, mini-ITX requires a significantly larger enclosure than a mini-PC, and the "thin mini-ITX" standard has been something of a bridge between the two, essentially halving the height requirement of mini-ITX. Now, an even smaller standard has emerged, and it almost makes mini-ITX look big in comparison.

DSC_0268.jpg

Left: ECS H110S-2P (mini-STX) / Right: EVGA Z170 Stinger (mini-ITX)

Mini-STX had been teased for a couple of years (I wrote my first news post about it in January of 2015), and was originally an Intel concept called "5x5"; though the motherboard is actually about 5.8 x 5.5 inches (147 x 140 mm). At CES 2016 I was able to preview a SilverStone enclosure design for these systems, and ECS is one of the manufacturers producing mini-STX motherboards with an Intel H110-based board introduced this past summer. We saw some shipping products for the newest form-factor in 2016, and both companies were kind enough to send along a sample of these micro-sized components for a build. With the parts on hand it is now time to assemble my first mini-STX system, and of course I'll cover the process - and results - right here!

DSC_0421.jpg

Continue reading our review of a mini-STX computer build featuring ECS and SilverStone!

Subject: Mobile
Manufacturer: Qualcomm

Introduction

In conjunction with Ericsson, Netgear, and Telstra, Qualcomm officially unveiled the first Gigabit LTE ready network. Sydney, Australia is the first city to have the new cellular spec deployed, through Telstra. Gigabit LTE, dubbed 4GX by Telstra, offers up to 1 Gbps download speeds and 150 Mbps upload speeds with a supported device. Making Gigabit LTE a reality took a partnership between all four companies: Ericsson provided the back-end hardware and software infrastructure and upgrades; Qualcomm designed its next-gen Snapdragon 835 SoC and Snapdragon X16 modem for Gigabit LTE support; Netgear developed the Nighthawk M1 mobile router, which leverages the Snapdragon 835; and Telstra brought it all together on its Australian cellular network. Qualcomm, Ericsson, and Telstra all see the 4GX implementation as a solid step forward on the path to 5G, with 4GX acting as the foundation layer for next-gen 5G networks and providing a fallback, much the same as 3G acts as a fallback for current 4G LTE networks.

Gigabit LTE Explained

02-telstra-gigabit-lte-explained.jpg

Courtesy of Telstra

What exactly is meant by Gigabit LTE (or 4GX, as Telstra has dubbed the new cellular technology)? Gigabit LTE increases the speeds of current-generation 4G LTE to 1 Gbps download and 150 Mbps upload by leveraging several technologies that optimize signal transmission between the consumer device and the cellular network. Qualcomm designed the Snapdragon X16 modem to operate on dual 60MHz signals with 4x4 MIMO support, or dual 80MHz signals without 4x4 MIMO. Further, they increased the modem's QAM support from the current 64-QAM (6 bits per symbol) to 256-QAM (8 bits per symbol), enabling 33% more data per stream - an increase from 75 Mbps to 100 Mbps per stream. The X16 modem leverages a total of 10 communication streams to deliver up to 1 Gbps, and also offers access to previously inaccessible frequency bands using LAA (License Assisted Access) to meet the increased bandwidth needs of Gigabit LTE.
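
The per-stream arithmetic works out as described; a quick check in Python:

```python
import math

def qam_bits(order):
    """Bits per symbol for a QAM constellation, e.g. 64-QAM -> 6."""
    return int(math.log2(order))

base_stream_mbps = 75  # per-stream rate at 64-QAM, per the article

# 256-QAM carries 8 bits/symbol vs 6 for 64-QAM: 33% more per stream
gain = qam_bits(256) / qam_bits(64)
per_stream = base_stream_mbps * gain
print(round((gain - 1) * 100))   # ~33 (% more data per stream)
print(round(per_stream))         # 100 Mbps per stream
print(round(per_stream * 10))    # 1000 Mbps across 10 streams
```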

Continue reading our coverage of the Gigabit LTE technology!

Subject: Storage
Manufacturer: Micron

Introduction, Specifications and Packaging

Introduction:

Micron paper-launched their 5100 Series enterprise SATA SSD lineup early last month. The new line promised many sought-after features for such a part: high performance, consistent performance, high capacities, and relatively low cost/GB (thanks to IMFT 3D NAND, which is now well into volume production after launching nearly two years ago). The highs and lows I just rattled off are not only good for enterprise; they are good for general consumers as well. Enterprises deal in large SSD orders, which translates to increased production and, ultimately, a reduction in the production cost of the raw NAND that also goes into client SSDs and other storage devices.

170130-192150.jpg

The 5100 Series comes in three tiers and multiple capacities per tier (with even more launching over the next few months). Micron sampled us a 2TB 'ECO' model and a 1TB 'MAX'. The former is optimized more for read intensive workloads, while the latter is designed to take a continuous random write beating.

I'll be trying out some new QoS tests in this review, with plans to expand out with comparisons in future pieces. This review will stand as a detailed performance verification of these two parts - something we are uniquely equipped to accomplish.

Read on for our full review of the Micron 5100 MAX 960GB and 5100 ECO 1920GB Enterprise SATA SSDs!

Author:
Manufacturer: FSP Technology Inc.

Introduction and Features

Introduction
                    
In this review we are going to take a detailed look at FSP Technology Inc.’s new Twins 500W redundant power supply. It’s been quite a while since we reviewed a redundant power supply in the ATX form factor. This should be interesting! (Actually, it turned out to be very interesting, with a few surprises along the way.)

2-Banner-1.jpg

The FSP Twins 500W redundant power supply is targeted towards use in home and small businesses for mail or web server systems that require maximum uptime. The Twins 500W PSU incorporates two 520W modular power supplies inside one standard ATX housing. Under normal operation the two power supplies operate in parallel, sharing the load. If one of the power supply modules should fail, the other one automatically takes over with no downtime. And since the power supply modules are hot-swappable, a faulty unit can be replaced without having to turn off the system.

3-Twins-500-rear.jpg

FSP claims the Twins 500W is a server-grade power supply designed to deliver stable power and is certified 80 Plus Gold for high efficiency. The ATX chassis measures 190mm (7.4”) deep and is fitted with fixed, ribbon-style cables. Each modular power supply uses a 40mm fan for cooling and the Twins 500W comes backed by a 5-year warranty. Users can also download and install FSP’s Guardian software to monitor power input, power output, and efficiency, along with other parameters in real time if desired.

4a-Twins-500-front.jpg

FSP Twins 500W Redundant PSU Key Features:

•    ATX PS2 redundant size ideal for mail, web and home server
•    Server-grade design provides stable power
•    Hot-swappable module design
•    80 Plus 230V Internal Gold certification
•    Digital-controlled power supply supports FSP Guardian monitoring software
•    Smart power supply supports Alarm Guard and status LED indicators
•    Flat ribbon-style cables with two 4+4 pin CPU connectors
•    Complies with ATX 12V and EPS 12V standards
•    Protections: OCP, OVP, SCP, FFP (Fan Failure Protection)
•    5-Year Manufacturer’s warranty
•    MSRP: $399.00 USD

Please continue reading our review of the FSP Twins 500W PSU!!!

Subject: General Tech
Manufacturer: Qualcomm aptX

Introduction

Bluetooth has come a long way since the technology was introduced in 1998. The addition of the Advanced Audio Distribution Profile (A2DP) in 2003 brought support for high-quality audio streaming, but Bluetooth still didn’t offer anywhere near the quality of a wired connection. This unfortunate fact is often overlooked in favor of the technology's convenience factor, but what if we could have the best of both worlds? This is where Qualcomm's aptX comes in, and it is a departure from the methods in place since the introduction of Bluetooth audio.

DSC_0370.jpg

What is aptX audio? It's actually a codec that compresses audio in a very different manner than that of the standard Bluetooth codec, and the result is as close to uncompressed audio as the bandwidth-constrained Bluetooth technology can possibly allow. Qualcomm describes aptX audio as "a bit-rate efficiency technology that ensures you receive the highest possible sound quality from your Bluetooth audio device," and there is actual science to back up this claim. After doing quite a bit of reading on the subject as I prepared for this review, I found that the technology behind aptX audio, and its history, is very interesting.

A Brief History of aptX Audio

The aptX codec has actually been around since long before Bluetooth, with its invention in the 1980s and first commercial applications beginning in the 1990s. The version now found in compatible Bluetooth devices is 4th-generation aptX, and in the very beginning it was actually a hardware product (the APTX100ED chip). The technology has had a continued presence in pro audio for three decades now, with a wider reach than I had ever imagined when I started researching the topic. For example, aptX is used for ISDN line connections for remote voice work (voice over, ADR, foreign language dubs, etc.) in movie production, and even for mix approvals on film soundtracks. In fact, aptX was also the compression technology behind DTS theater sound, which had its introduction in 1993 with Jurassic Park. It is in use in over 30,000 radio stations around the world, where it has long been used for digital music playback.

Qualcomm aptX.jpg

So, while it is clear that aptX is a respected technology with a long history in the audio industry, how exactly does this translate into improvements for someone who just wants to listen to music over a bandwidth-constrained Bluetooth connection? The nature of the codec and its differences/advantages vs. A2DP is a complex topic, but I will attempt to explain in plain language how it actually can make Bluetooth audio sound better. Having science behind the claim of better sound goes a long way in legitimizing perceptual improvements in audio quality, particularly as the high-end audio industry is full of dubious - and often ridiculous - claims. There is no snake oil to be sold here, as we are simply talking about a different way to compress and decompress an audio signal - which is the purpose of a codec (coder, decoder) to begin with.
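
For a sense of the numbers involved, here is a quick check in Python. It assumes aptX's widely cited fixed 4:1 compression ratio applied to CD-quality PCM audio:

```python
def pcm_kbps(sample_rate_hz, bit_depth, channels):
    """Raw (uncompressed) PCM bitrate in kbps."""
    return sample_rate_hz * bit_depth * channels / 1000

# CD-quality audio: 44.1 kHz, 16-bit, stereo
cd_quality = pcm_kbps(44_100, 16, 2)
print(cd_quality)       # 1411.2 kbps uncompressed

# aptX uses a fixed 4:1 compression ratio, so the Bluetooth link only
# needs to carry about a quarter of the raw rate.
print(cd_quality / 4)   # 352.8 kbps
```

That 352.8 kbps figure fits comfortably inside A2DP's available bandwidth, which is what lets the codec avoid the more aggressive psychoacoustic trimming of the default SBC encoder.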

Continue reading our review of Qualcomm aptX audio technology!

Author:
Subject:
Manufacturer: EVGA
Tagged:

Introduction and Features

Introduction

2-Banner-850.jpg

EVGA offers one of the most complete lines of PC power supplies on the market today and they are continually striving to improve their product offerings. The new Supernova G3 Series power supplies are based on EVGA’s popular G2 series but now come in a smaller chassis measuring only 150mm (5.9”) deep. The G3 Series also uses a 130mm cooling fan with a hydraulic dynamic bearing for quiet operation and EVGA claims the new G3 units offer even better performance than the G2 models. The Supernova G3 Series is now available in five different models ranging from 550W up to 1000W. We will be taking a detailed look at the Supernova 850W G3 power supply in this review.

•    EVGA SuperNOVA 550W G3 ($99.99 USD)
•    EVGA SuperNOVA 650W G3 ($109.99 USD)
•    EVGA SuperNOVA 750W G3 ($129.99 USD)
•    EVGA SuperNOVA 850W G3 ($149.99 USD)
•    EVGA SuperNOVA 1000W G3 ($169.99 USD)

3-850-G3-side.jpg

The Supernova G3 series power supplies are 80 Plus Gold certified for high efficiency and feature all modular cables, high-quality Japanese brand capacitors, and EVGA’s ECO Intelligent Thermal Control System which enables fan-less operation at low to mid power. All G3 series power supplies are NVIDIA SLI and AMD Crossfire Ready and are backed by either a 7-year (550W and 650W) or 10-year (750W, 850W and 1000W) EVGA warranty.

4a-850-G3-diag.jpg

EVGA SuperNOVA 850W G3 PSU Key Features:

•    850W Continuous DC output at up to 50°C
•    10-Year warranty with unparalleled EVGA Customer Support
•    80 PLUS Gold certified, with up to 90%/92% efficiency (115VAC/240VAC)
•    Highest quality Japanese brand capacitors ensure long-term reliability
•    Fully modular cables to reduce clutter and improve airflow
•    Quiet 130mm hydraulic dynamic bearing fan for long life
•    ECO Intelligent Thermal Control allows silent, fan-less operation at low power
•    NVIDIA SLI & AMD Crossfire Ready
•    Active Power Factor correction (0.99) with Universal AC input
•    Heavy-duty protections: OVP, UVP, OCP, OPP, SCP, and OTP
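
As a quick illustration of what the efficiency figures above mean at the wall (a sketch based on the quoted 90%/92% numbers; real efficiency varies with load):

```python
def wall_draw_w(dc_load_w, efficiency):
    """AC input power needed to deliver a given DC load."""
    return dc_load_w / efficiency

load = 850  # full rated output, in watts
for mains, eff in (("115VAC", 0.90), ("240VAC", 0.92)):
    draw = wall_draw_w(load, eff)
    print(f"{mains}: draws ~{draw:.0f} W, ~{draw - load:.0f} W lost as heat")
```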

Please continue reading our review of the EVGA 850W G3 PSU!!!

Subject: General Tech
Manufacturer: Nintendo

Price and Other Official Information

Since our last Nintendo Switch post, the company has held their full reveal event, which confirmed the two most critical values: it will launch on March 3rd for $299.99 USD ($399.99 CDN). This is basically what the rumors have pointed to for a little while, and it makes sense. That was last week, but this week gave rise to a lot more information, mostly from either an interview with Nintendo of America’s President and COO, Reggie Fils-Aimé, or from footage that was recorded and analyzed by third parties, like Digital Foundry.

From the GameSpot interview, above, Reggie was asked about the launch bundle, and why it didn’t include any game, like 1 - 2 - Switch. His response was blunt and honest: they wanted to hit $299 USD, and the game found itself below the cut-off point. While I can respect that, I cannot see enough people bothering with the title at full price for it to have been green-lit in the first place. If Nintendo wasn’t willing to simply eat the cost of that game’s development to shape public (and developer) perception, then at least it wasn’t factored into the system’s price - though they might end up taking the loss anyway if the game doesn’t sell.

nintendo-2017-switchhero.jpg

Speaking of price, we are also seeing what the accessories sell at.

nintendo-2017-joy-con_pro_controller.jpg

On the controller side of things, the more conventional option, the Nintendo Switch Pro Controller, has an MSRP of $69.99 USD. If you look at its competitors - the DualShock 4 for the PlayStation 4 at $49, and the Xbox Wireless Controller for the Xbox One at the same price - this is notably higher. While it has a bunch of interesting features, like “HD rumble”, motion sensing, and some support for amiibo, its competitors are similar, but $20 cheaper.

nintendo-2017-joy-con_pair_gray.jpg

The Switch-specific controllers, called “Joy-Con”, are $10 more expensive than the Pro Controller, at $79.99 USD for the pair, or just $49.99 USD for the left or right halves. (Some multiplayer titles only require a half, so Nintendo doesn’t force you to buy the whole pair at the expense of extra SKUs, which is also probably helpful if you lose one.) This seems high, and could be a significant problem going forward.

As for availability? Nintendo has disclosed that they are pushing 2 million units into the channel, so they are not expecting shortages like the NES Classic had. They do state that demand is up in the air a bit, though.

Read on to find out about their online component and new performance info!

Subject: Motherboards
Manufacturer: ASUS

Introduction

02-board-all_0.jpg

Courtesy of ASUS

With the latest revision of the TUF line, ASUS made the decision to drop the well-known "Sabertooth" moniker from the board name, naming the boards with the TUF branding only. The TUF Z270 Mark 1 motherboard is the flagship board in ASUS' TUF (The Ultimate Force) product line designed around the Intel Z270 chipset. The board offers support for the latest Intel Kaby Lake processor line as well as dual-channel DDR4 memory via its integrated Intel Z270 chipset. While the MSRP for the board may be a bit higher than expected, its $239 price is more than justified by the board's build quality and "armored" offerings.

03-board_0.jpg

Courtesy of ASUS

The TUF Z270 Mark 1 motherboard is built with the same quality and attention to detail that you've come to expect from TUF-branded motherboards. Its appearance follows the standard tan plastic armor overlay on the top with a 10-phase digital power system. The board contains the following integrated features: six SATA 3 ports; two M.2 PCIe x4 capable ports; dual GigE controllers - an Intel I219-V Gigabit NIC and an Intel I211 Gigabit NIC; three PCI-Express x16 slots; three PCI-Express x1 slots; an 8-channel audio subsystem; MEM OK! and USB BIOS Flashback buttons; integrated DisplayPort and HDMI; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.

04-board-back.jpg

Courtesy of ASUS

ASUS also chose to include the armored backplate with the TUF Z270 Mark 1 motherboard, dubbed the "TUF Fortifier".

Continue reading our preview of the ASUS TUF Z270 Mark 1 motherboard!

Author:
Manufacturer: AMD

Performance and Impressions

This content was sponsored by AMD.

Last week in part 1 of our look at the Radeon RX 460 as a budget gaming GPU, I detailed our progress through component selection. Centered around an XFX 2GB version of the Radeon RX 460, we built a machine using an Intel Core i3-6100, ASUS H110M motherboard, 8GB of DDR4 memory, both an SSD and a HDD, as well as an EVGA power supply and Corsair chassis. Part 1 discussed the reasons for our hardware selections as well as an unboxing and preview of the giveaway to come.

In today's short write-up and video, I will discuss my impressions of the system overall as well as touch on the performance in a handful of games. Despite the low price, and despite the budget moniker attributed to this build, a budding PC gamer or converted console gamer will find plenty of capability in this system.

Check out prices of Radeon RX 460 graphics cards on Amazon!!

Let's quickly recap the components making up our RX 460 budget build.

Our Radeon RX 460 Build

  Budget Radeon RX 460 Build
Processor: Intel Core i3-6100 - $109
Cooler: CRYORIG M9i - $19
Motherboard: ASUS H110M-A/M.2 - $54
Memory: 2 x 4GB Crucial Ballistix DDR4-2400 - $51
Graphics Card: XFX Radeon RX 460 2GB - $98
Storage: 240GB SanDisk SSD Plus - $68
Storage: 1TB Western Digital Blue - $49
Case: Corsair Carbide Series 88R - $49
Power Supply: EVGA 500 Watt - $42
Monitor: Nixeus VUE24A 1080p 144Hz FreeSync - $251
Total Price: $549 on Amazon; $799 with monitor on Amazon
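For readers pricing out their own variant of this build, the line items can be tallied with a quick script (a hypothetical sketch; Amazon prices fluctuate daily, so the quoted totals in the table reflect pricing at the time of writing and may differ slightly from a sum of the listed figures):

```python
# Tally the budget RX 460 build from its line-item prices (USD).
parts = {
    "Intel Core i3-6100": 109,
    "CRYORIG M9i": 19,
    "ASUS H110M-A/M.2": 54,
    "2 x 4GB Crucial Ballistix DDR4-2400": 51,
    "XFX Radeon RX 460 2GB": 98,
    "240GB SanDisk SSD Plus": 68,
    "1TB Western Digital Blue": 49,
    "Corsair Carbide Series 88R": 49,
    "EVGA 500 Watt": 42,
}
monitor = 251  # Nixeus VUE24A 1080p 144Hz FreeSync, priced separately

base_total = sum(parts.values())
print(f"Base build: ${base_total}")
print(f"With monitor: ${base_total + monitor}")
```

Swapping a line item (say, a different SSD or case) only requires changing one entry before re-running the tally.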

For just $549 I was able to create a shopping list of hardware that provides very impressive performance for the investment.

02.jpg

The completed system is damn nice looking, if I do say so myself. The Corsair Carbide 88R case sports a matte black finish with a large window to peer in at the hardware contained within. Coupled with the Nixeus FreeSync display and some Logitech G mouse and keyboard hardware we love, this is a configuration that any PC gamer would be proud to display.

03.jpg

Continue reading our performance thoughts on the RX 460 budget PC build!

Author:
Manufacturer: Seasonic

Introduction and Features

2-Banner-3.jpg

We are going to kick off the New Year on a high note by taking another look at Seasonic’s PRIME Titanium series power supplies. When we first tested the new PRIME 750W Titanium PSU last summer, we concluded it was the best overall PC power supply we had reviewed to date. So was that just a fluke? Did we get a hand-picked power supply? Or did our review sample truly represent the outstanding performance that a regular user would experience after buying one of the Seasonic PRIME Titanium series power supplies through normal retail channels? Today, we are going to test two more PRIME Titanium PSUs and compare our findings with the original PRIME 750W Titanium power supply test results. This provides a unique opportunity to compare three different power supplies from the same series to see how consistent their performance really is, which can be a good indicator of quality.

Sea Sonic Electronics Co., Ltd has been designing and building PC power supplies since 1981 and they are one of the most highly respected manufacturers in the world. Not only do they market power supplies under their own Seasonic name but they are the OEM for numerous big name brands.

Seasonic’s new PRIME lineup was introduced earlier this year with the Titanium Series, which currently includes three models: 850W, 750W, and 650W (with more to follow). Additional PRIME models with both Platinum and Gold efficiency certifications are also making their way into retail channels with models ranging from 850W up to 1200W.

3a-Prime-diag.jpg

The two new power supplies we have in for review are the PRIME 650W and 850W Titanium models. These units also come with all modular cables and are certified to comply with the 80 Plus Titanium efficiency criteria; the highest available. The power supplies are designed to deliver extremely tight voltage regulation on the three primary rails (+3.3V, +5V and +12V) and they provide superior AC ripple and noise suppression with an extended hold-up time. Add in a super-quiet 135mm cooling fan with a Fluid Dynamic Bearing and a 10-year warranty, and you have the makings for an outstanding power supply.

4a-850-Front.jpg

Seasonic PRIME 850W Titanium PSU

Seasonic PRIME Titanium Series PSU Key Features:

•    850W, 750W or 650W continuous DC output
•    Ultra-high efficiency, 80 PLUS Titanium certified
•    Micro-Tolerance Load Regulation (MTLR)
•    Top-quality 135mm Fluid Dynamic Bearing fan
•    Premium Hybrid Fan Control (allows fanless operation at low power)
•    Superior AC ripple and noise suppression (under 20 mV)
•    Extended Hold-up time (above 30 ms)
•    Fully modular cabling design
•    Multi-GPU technologies supported
•    Gold-plated high-current terminals
•    Protections: OPP, OVP, UVP, SCP, OCP and OTP
•    10-Year Manufacturer’s warranty

Please continue reading our review of the Seasonic PRIME Titanium Series PSUs!!!

Author:
Manufacturer: AMD

Our Radeon RX 460 Build

This content was sponsored by AMD.

Be sure you check out part 2 of our story where we detail the performance our RX 460 build provides as well as our contest page where you can win this PC from AMD and PC Perspective!

Just before CES this month, AMD came to me asking about our views and opinions on its Radeon RX 460 line of graphics cards, how the GPU is perceived in the market, and how I felt they could better position it to the target audience. It was at that point that I had to openly admit to never actually having installed and used an RX 460 GPU before. I know, shame on me.

I like to pride myself and PC Perspective on being one of the top sources of technical information in the world of PCs, gaming or otherwise, and in particular on GPUs. But a pitfall that I fall into, and I imagine many other reviewers and media do as well, is that I overly emphasize the high end of the market, and that I tend to shift what is considered a “budget” product up the scale more than I should. Is a $250 graphics card really a budget product that the mass market is going to purchase? No, and the numbers clearly point to that as fact. More buyers purchase cards in the sub-$150 segment than in any other, upgrading OEM PCs and building low-cost boxes for themselves and for family and friends.

So, AMD came to me with a proposal to address this deficiency in my mental database. If we were willing to build a PC based on the RX 460, testing it and evaluating it honestly, and then give that built system back to the community, they would pay for the hardware and promotion of such an event. So here we are.

To build out the RX 460-based PC, I went to the experts in the world of budget PC builds, the /r/buildapc subreddit. The community here is known for being the best at penny-pinching and maximizing the performance-per-dollar implementations on builds. While not the only types of hardware they debate and discuss in that group, it definitely is the most requested. I started a thread there to ask for input and advice on building a system with the only requirements being inclusion of the Radeon RX 460 and perhaps an AMD FreeSync monitor.

Check out prices of Radeon RX 460 graphics cards on Amazon!!

The results were impressive; a solid collection of readers and contributors gave me suggestions for complete builds based around the RX 460. Processors varied, memory configurations varied, storage options varied, but in the end I had at least a dozen solid options that ranged in price from $400-800. With the advice of the community at hand, I set out to pick the components for our own build, which are highlighted below:

Our Radeon RX 460 Build

  Budget Radeon RX 460 Build
Processor: Intel Core i3-6100 - $109
Cooler: CRYORIG M9i - $19
Motherboard: ASUS H110M-A/M.2 - $54
Memory: 2 x 4GB Crucial Ballistix DDR4-2400 - $51
Graphics Card: XFX Radeon RX 460 2GB - $98
Storage: 240GB SanDisk SSD Plus - $68
Storage: 1TB Western Digital Blue - $49
Case: Corsair Carbide Series 88R - $49
Power Supply: EVGA 500 Watt - $42
Monitor: Nixeus VUE24A 1080p 144Hz FreeSync - $251
Total Price: $549 on Amazon; $799 with monitor on Amazon

I’ll go in order of presentation for simplicity’s sake. First up is the selection of the Intel Core i3-6100 processor. This CPU was the most popular offering in the /r/buildapc group and has been the darling of budget gaming builds for a while. It is frequently used because of its $109 price tag, along with dual-core, HyperThreaded performance at 3.7 GHz, giving you plenty of headroom for single-threaded applications. Since most games aren’t going to utilize more than four threads, the PC gaming performance will be excellent as well. One frequent suggestion in our thread was the Intel Pentium G4560, a Kaby Lake based part that will sell for ~$70. That would have been my choice, but it’s not shipping yet, and I don’t know when it will be.

cpu.jpg

Continue reading our budget build based on the Radeon RX 460!

Manufacturer: Cooler Master

Introduction

Cooler Master's MasterLiquid Maker 92 is a unique liquid CPU cooler that fits all of its parts into one cluster atop the processor, and does it with a clever, hinged construction that allows it to be switched from an upright to a horizontal position at will. While the Maker 92 only occupies about as much space as a large tower air cooler in its upright position, the ability to fold it down provides both enhanced clearance and the option of directing airflow down to help cool motherboard components. But the big question for this cooler is just how effective can a closed-loop system be when it’s this compact? We’re about to find out!

Maker_92.jpg

Let's get one point out of the way right off the bat: specialty small form-factor products generally don't offer competitive price/performance numbers, and critics are quick to point to this aspect of SFF computing. The small form-factor side of enthusiast PC building is a pretty small niche, and a product like the Maker 92 might not be for you. What is important to consider when looking at a specialty product like this is the performance for its size, as the most compact cooling components typically sacrifice something in this regard given their reduced surface area, smaller fan diameter, etc.

Most SFF solutions for processor cooling are of the air variety, with liquid being an option if a given enclosure supports your AiO (or custom loop) cooling of choice. Ultra low-profile CPU air coolers are popular for slim builds, and a product like the Maker 92 isn’t going to replace one of these if your enclosure of choice has a very low profile. Any system using a standard height PCI Express graphics card will work, though that top fan may have to come off depending on the case - which of course will affect cooling performance (in theory, anyway). But enough speculation! Let’s take a close look at this cooler and test out the fit and cooling prowess in both orientations.

DSC_0308.jpg

Continue reading our review of the Cooler Master MasterLiquid Maker 92 CPU cooler!!

Subject: Motherboards
Manufacturer: ASUS

Introduction

02-board-full.jpg

Courtesy of ASUS

The Prime Z270-A motherboard is one of ASUS' initial offerings built around the Intel Z270 chipset. The board features ASUS' Channel line aesthetics with a black PCB and white plastic accents. The Z270 chipset provides support for the latest Intel LGA1151 Kaby Lake processor line as well as dual-channel DDR4 memory. Offered at a price-competitive MSRP of $164, the Prime Z270-A offers a compelling price point with respect to its integrated features and performance potential.

03-board.jpg

Courtesy of ASUS

ASUS does not cut corners on any of their boards with the Prime Z270 sharing similar power component circuitry as its higher tiered siblings, featuring a 10-phase digital power delivery system. ASUS integrated the following features into the Prime Z270-A board: six SATA 3 ports; two M.2 PCIe x4 capable ports; an Intel I219-V Gigabit NIC; three PCI-Express x16 slots; four PCI-Express x1 slots; on-board power and MemOK! buttons; an EZ XMP switch; Crystal Sound 3 audio subsystem; integrated DisplayPort, HDMI, and DVI video ports; and USB 3.0 and 3.1 Type-A and Type-C port support.

Continue reading our preview of the ASUS Prime Z270-A motherboard!