Subject: Storage
Manufacturer: Western Digital

Introduction and Specifications

Introduction

Storage devices for personal computers have always been a tricky proposition. While the majority of computer parts are solid state, the computer industry has spent most of its life storing bits on electromechanical devices like tapes and floppy disks. Relatively speaking, it was only recently (less than a decade ago) that solid state storage became mainstream, and even today the costs of flash production make rotating media the better option for bulk data storage. Hard drives are typically vented to atmosphere, as the Bernoulli Effect is part of what keeps the drive heads flying above the rotating platters. With any vented enclosure, there is always the risk of atmospheric contaminants finding their way in. Sure, there are HEPA-class filters at the vent holes, but they can’t stop organic vapors that may slightly degrade the disk surface over time.

By filling a hard disk with an inert gas and hermetically sealing the disk housing, we can eliminate those potential issues. An added bonus is that if helium is used, its lower density means less drag on the rotating platters, which translates to lower power consumption when compared to an equivalent air-filled HDD. Ever since HGST released their helium-filled drives, I’ve been waiting for this technology to trickle down to consumer products, and Western Digital has recently brought such a product to market. Today we will be diving into our full performance review of the Western Digital 8TB Red.

DSC00234.jpg

Specifications (source)

specs.png

Compared to the 6TB Red, the 8TB model doubles its cache size to 128MB. We also see a slight bump in claimed transfer rates, while idle power consumption rises slightly due to the different electronics in use. The power/capacity figures check out as well (more on that later, as we will include detailed power testing in this article).

Continue reading our review of the 8TB Western Digital Red Helium-filled HDD!!

Author:
Subject: Systems
Manufacturer: Various

The entry point for PC VR

Early this year I started getting request after request for hardware suggestions for upcoming PC builds for VR. The excitement surrounding the Oculus Rift and the HTC Vive has caught fire across the technology spectrum, from PC enthusiasts to gaming enthusiasts to those of you simply interested in a technology that has been "right around the corner" for decades. The requests for build suggestions spanned our normal readership as well as those that had previously only focused on console gaming, and thus the need for a selection of build guides began.

Looking for all of the PC Perspective Spring 2016 VR guides?

This build will focus on the $900 price point for a complete PC. Months and months ago, when Palmer Luckey started discussing pricing for the Rift, he mentioned a "total buy-in cost of $1500." When it was finally revealed that the purchase price for the retail Rift was $599, the math works out to a $900 PC.

system1.jpg

With that in mind, let's jump right into the information you are looking for: the components we recommend.

VR Build Guide
$900 Spring 2016
Component       Selection                         Amazon.com   B&H Photo
Processor       Intel Core i5-6500                $204         $204
Motherboard     Gigabyte H170-Gaming 3            $94          -
Memory          8GB G.Skill Ripjaws DDR4-2400     $43          -
Graphics Card   EVGA GeForce GTX 970 Superclock   $309         $334
Storage         250GB Samsung 850 EVO             $88          $88
                Seagate 2TB Barracuda             $71          $71
Power Supply    EVGA 500 watt 80+ Bronze          $49          -
CPU Cooler      Cooler Master Hyper 212 EVO       $29          $28
Case            Corsair SPEC-01 Red               $52          $69
Total Price     Full cart - $939

For those of you interested in a bit more detail on the why of the parts selection, rather than just the what, I have some additional information for you.

cpu.jpg

Starting at the beginning, the Core i5-6500 is a true quad-core processor that slightly exceeds the minimum specification requirement from Oculus. It is based on Skylake, so you are getting Intel's latest architecture, and it is unlikely that you'll find an instance where any PC game, standard or VR, will require more processor horsepower. The motherboard from Gigabyte is based on the H170 chipset, which is lower cost but offers fewer features than Z170-class products. For a gamer, though, the result will be nearly identical - stock performance and features are still impressive. 8GB of DDR4 memory should be enough for gaming and decent PC productivity as well.

Looking to build a PC for the very first time, or need a refresher? You can find our recent step-by-step build videos to help you through the process right here!!

The GPU is still the most important component of any VR system, and with the EVGA GeForce GTX 970 selected here we are reaching the recommended specifications from Oculus and HTC/Valve. The Maxwell 2.0 architecture that the GTX 970 is based on launched in late 2014 and was very well received. The equivalent part from the AMD side is the Radeon R9 290/390, so if you are interested in that route, you can find some here.

Continue reading our selections for a $900 VR PC Build!!

Subject: Storage
Manufacturer: Samsung

Introduction

Since Samsung’s August 2015 announcement of their upcoming 48-layer V-NAND, we’ve seen it trickle into recent products like the SSD T3, where it enabled 2TB of capacity in a very small form factor. What we have not yet seen was that same flash introduced in a more common product that we could directly compare against the old. Today we are going to satisfy our (and your) curiosity by comparing a 1TB 850 EVO V1 (32-layer - V2) to a 1TB 850 EVO V2 (48-layer - V3).

**edit**

While Samsung has produced three versions of their V-NAND (the first was 24-layer V1 and only available in one of their enterprise SSDs), there have only been two versions of the 850 EVO. Despite this, Samsung internally labels this new 850 EVO as a 'V3' product, as they go by the flash revision in this particular case.

**end edit**

DSC00214.jpg

While Samsung’s plan is to enable higher capacities with this new flash (think 4TB 850 EVO and PRO), they also intend to silently push that same flash down into the smaller capacities of those same lines. Samsung’s VP of Marketing assured me that they would not allow performance to drop due to higher per-die capacity, and we can confirm that in part with their decision to drop the 120GB 850 EVO during the switch to 48-layer in favor of a planar 750 EVO, which can keep performance up. Smaller capacity SSDs work better with higher numbers of small capacity dies, and since 48-layer V-NAND in TLC form comes in at 32GB per die, that would have meant only four 48-layer dies in a 120GB SSD.
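To put that die-count math in perspective, here is a quick back-of-the-envelope sketch. The 32GB-per-die figure comes from the flash discussed above; the capacity points are chosen purely for illustration:

```python
# Rough interleaving math: fewer dies means fewer flash operations the
# controller can run in parallel. 32GB/die is the 256Gbit 48-layer TLC
# figure mentioned above; capacities are illustrative only.
DIE_CAPACITY_GB = 32

for ssd_capacity_gb in (120, 250, 500, 1000):
    dies = max(1, round(ssd_capacity_gb / DIE_CAPACITY_GB))
    print(f"{ssd_capacity_gb:>5} GB SSD -> ~{dies:2d} dies to interleave across")
```

A 120GB model lands at roughly four dies, which is exactly the parallelism problem the 750 EVO sidesteps by using smaller planar dies.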

48-V-NAND.png

Samsung's 48-Layer V-NAND, dissected by TechInsights
(Similar analysis on 32-Layer V-NAND here)

Other companies have tried silently switching flash memory types on the same product line in the past, and it usually does not go well. Any drop in performance metrics for a product with the same model and spec sheet is never welcome in tech enthusiast circles, but such issues are rarely discovered, since companies will typically only sample their products at their initial launch. On the flip side, Samsung appears extremely confident in their mid-line flash substitution, as they have voluntarily offered to sample us a 1TB 48-layer 850 EVO for direct comparison to our older 1TB 32-layer 850 EVO. The older EVO we have here had not yet been through our test suite, so we will be comparing these two variations directly against each other, starting from the same fresh-out-of-the-box and completely unwritten state. Every test will be run on both SSDs in the exact same sequence, and while we are only performing an abbreviated round of testing for these products, the important point is that I will be pulling out our Latency Percentile test for detailed performance evaluation at a few queue depths. Latency Percentile testing has proven itself far more consistent and less prone to data scatter than any other available benchmark, so we’ll be trusting it to give us the true detailed scoop on any performance differences between these two types of flash.
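For those curious what a Latency Percentile-style result boils down to, here is a toy sketch of the concept. This is not our actual test code, and the simulated latencies are made up purely for illustration:

```python
# Toy Latency Percentile concept: sort every recorded I/O latency and ask
# "below what latency did p% of all I/Os complete?" Simulated data only.
import random

def latency_percentiles(latencies_us, percentiles=(50, 90, 99, 99.9)):
    ordered = sorted(latencies_us)
    return {p: ordered[min(int(len(ordered) * p / 100), len(ordered) - 1)]
            for p in percentiles}

# Pretend QD=1 random reads: ~90us typical, with rare multi-ms outliers
sample = [random.gauss(90, 10) + (2500 if random.random() < 0.001 else 0)
          for _ in range(100_000)]
for p, lat in latency_percentiles(sample).items():
    print(f"{p:>5}th percentile: {lat:8.1f} us")
```

The appeal is that rare stalls show up clearly at the 99th and 99.9th percentiles even when the average looks identical, which is why this method resists data scatter so well.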

Read on for our comparison of the new and the old!
(I just referred to a 3D Flash part as 'old'. Time flies.)

Author:
Manufacturer: AMD

Some Hints as to What Comes Next

On March 14 at the Capsaicin event at GDC, AMD disclosed their roadmap for GPU architectures through 2018. There were two new names in attendance, as well as some hints at what technology will be implemented in these products. It was only one slide, but some interesting information can be inferred from what we have seen and what was said at the event and afterwards during interviews.

Polaris is the next generation of GCN products from AMD and has been shown off for the past few months. Previously, in December and at CES, we saw the Polaris 11 GPU on display. Very little is known about this product except that it is small and extremely power efficient. Last night we saw the Polaris 10 being run, and we only know that it is competitive with current mainstream performance and is larger than Polaris 11. These products are purportedly based on Samsung/GLOBALFOUNDRIES 14nm LPP.

roadmap.jpg

The source of near endless speculation online.

In the slide AMD showed, Polaris is listed as having 2.5X the performance per watt of the previous 28 nm products in AMD’s lineup. This is impressive, but not terribly surprising. AMD and NVIDIA both skipped the 20 nm planar node because it just did not offer up the type of performance and scaling to make sense economically. Simply put, the expense was not worth the results in terms of die size improvements and, more importantly, power scaling. 20 nm planar just could not offer the overall performance that GPU manufacturers could achieve with 2nd and 3rd generation 28nm processes.

What was missing from the slide is any mention of whether Polaris will integrate HBM1 or HBM2. Vega, the architecture after Polaris, does in fact list HBM2 as the memory technology it will be packaged with. It promises another tick up in terms of performance per watt, but that is going to come more from aggressive design optimizations and likely improvements in FinFET process technologies. Vega will be a 2017 product.

Beyond that we see Navi. It again boasts an improvement in performance per watt, as well as the inclusion of a new memory technology beyond HBM. Current conjecture is that this could be HMC (Hybrid Memory Cube). I am not entirely certain of that particular conjecture, as it does not necessarily improve upon the advantages of current generation HBM and upcoming HBM2 implementations. Navi will not show up until 2018 at the earliest. This *could* be a 10 nm part, but considering the struggle the industry has had getting to 14/16nm FinFET, I am not holding my breath.

AMD provided few details about these products other than what we see here. From here on out, everything is conjecture based upon industry trends, analysis of known roadmaps, and the limitations of the process and memory technologies that are already well known.

Click here to read the rest about AMD's upcoming roadmap!

Shedding a little light on Monday's announcement

Most of our readers should have some familiarity with GameWorks, which is a series of libraries and utilities that help game developers (and others) create software. While many hardware and platform vendors provide samples and frameworks, taking the brunt of the work required to solve complex problems, this is NVIDIA's branding for their suite of technologies. Their hope is that it pushes the industry forward, which in turn drives GPU sales as users see the benefits of upgrading.

nvidia-2016-gdc-gameworksmission.png

This release, GameWorks SDK 3.1, contains three complete features and two “beta” ones. We will start with the first three, each of which target a portion of the lighting and shadowing problem. The last two, which we will discuss at the end, are the experimental ones and fall under the blanket of physics and visual effects.

nvidia-2016-gdc-volumetriclighting-fallout.png

The first technology is Volumetric Lighting, which simulates the way light scatters off dust in the atmosphere. Game developers have been approximating this effect for a long time. In fact, I remember a particular section of Resident Evil 4 where you walk down a dim hallway that has light rays spilling in from the windows. GameCube-era graphics could only do so much, though, and certain camera positions show that the effect was just a translucent, one-sided, decorative plane. It was a cheat that was hand-placed by a clever artist.

nvidia-2016-gdc-volumetriclighting-shaftswireframe.png

GameWorks' Volumetric Lighting goes after the same effect, but with a much different implementation. It looks at the generated shadow maps and, using hardware tessellation, extrudes geometry from the unshadowed portions toward the light. These little bits of geometry accumulate, depending on how deep the volume is, which translates into the required highlight. Also, since it's hardware tessellated, it probably has a smaller impact on performance, because the GPU only needs to store enough information to generate the geometry, not store (and update) the geometry data for all possible light shafts themselves -- and it needs to store those shadow maps anyway.
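To make the summation idea concrete, here is a tiny CPU-side sketch of the integral that the extruded geometry effectively evaluates. To be clear, this is a simplified ray march, not NVIDIA's tessellation-based implementation, and every name and constant is made up:

```python
# The light-shaft brightness along a view ray is (roughly) a scattering
# coefficient integrated over the unshadowed distance the ray travels.
# The extruded geometry accumulates this sum via additive blending; here
# we brute-force the same sum instead.
def shaft_intensity(ray_samples, is_lit, scattering=0.02, step_len=1.0):
    total = 0.0
    for point in ray_samples:              # march from near to far
        if is_lit(point):                  # shadow-map lookup in a real renderer
            total += scattering * step_len # deeper lit volume = brighter shaft
    return total

# Example: a ray crossing a 10-unit lit gap between two shadowed regions
samples = [float(d) for d in range(40)]
print(shaft_intensity(samples, lambda p: 15.0 <= p < 25.0))  # -> ~0.2
```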

nvidia-2016-gdc-volumetriclighting-shaftsfinal.png

Even though it seemed like this effect was independent of render method, since it basically just adds geometry to the scene, I asked whether it was locked to deferred rendering methods. NVIDIA said that it should be unrelated, as I suspected, which is good for VR. Forward rendering is easier to anti-alias, which makes the uneven pixel distribution (after lens distortion) appear smoother.

Read on to see the other four technologies, and a little announcement about source access.

Author:
Manufacturer: Thermaltake

Introduction and Features

Introduction

2-Banner-1.jpg

There has been growing interest in quiet computing in recent years, with more users looking for components that will help them build a quieter PC. Thermaltake’s new Suppressor F31 Silent ATX mid-tower chassis is aimed squarely at this audience. Thermaltake has been around since 1999 and is a well-respected name in the PC industry. They offer a full line of cases, power supplies, cooling components, and accessories.

The Suppressor F31 chassis is available with or without a side window and comes in black. Thermaltake also offers several other variations in the Suppressor line, which include the Suppressor F31 Power Cover Edition (large baffle located over the PSU area) and the Suppressor F51 Mid-Tower case (slightly larger chassis capable of mounting an Extended-ATX form factor motherboard). We will be taking a detailed look at the Suppressor F31 Window ATX Mid-Tower Chassis in this review.

The Thermaltake Suppressor F31 is wider than most mid-tower enclosures (250mm/9.8”) and incorporates sound dampening panels on the front, top and both sides. Note: the Window version replaces the left side panel sound dampening material with a large acrylic window. The top panel has three separate sound dampening panels that can easily be removed to make room for additional case fans or a top mounted liquid cooling radiator. The Suppressor F31 comes with two quiet case fans installed: one 120mm intake on the front and one 120mm exhaust on the back.

3-Inside.jpg

The roomy chassis offers numerous options for adding more case fans (up to nine total) for increased airflow as well as several options for installing liquid cooling systems of various sizes (single, dual, and/or triple fan/radiators). All of the potential fan locations are designed to mount either 120mm or 140mm fans. The front panel can mount one 200mm fan and the top panel can mount two 200mm fans if desired.

4-F31-built.jpg

(Courtesy of Thermaltake)

Suppressor F31 Window ATX Mid-Tower Case Key Features:
•    Mid-Tower ATX enclosure (HxWxD, 497x250x515mm, 19.5x9.8x20.3”)
•    Large clear acrylic side window (also available without a side window)
•    Supports ATX, Micro-ATX and Mini-ITX motherboards (F51 supports E-ATX)
•    Extremely quiet case for noise sensitive applications
•    Sound dampening panels on front, side, and top
•    Easily removed dust filters on front, top and bottom panels
•    Two included Thermaltake fans (120mm intake and 120mm exhaust)
•    Numerous cooling options for adding fans and/or liquid cooling
•    (2) USB 3.0, (2) USB 2.0 and HD audio jacks on the top I/O panel
•    Three internal 3.5” hard drive / 2.5” SSD trays
•    Three optional 3.5”/2.5” drive mounting locations behind mobo tray
•    Two external 5.25” drive bays
•    Tool-free mounting for all 3.5” internal and 5.25” external drives
•    Up to 278mm (10.9”) clearance for graphic cards
•    Up to 420mm (16.5”) for long graphic cards (with HDD cage removed)
•    Up to 180mm (7.1”) of space for tall CPU coolers
•    Price: $99.99 USD

Please continue reading our Suppressor F31 case review!!!

Author:
Manufacturer: GitHub

A start to proper testing

During all the commotion last week surrounding the release of a new Ashes of the Singularity DX12 benchmark, Microsoft's launch of Gears of War Ultimate Edition on the Windows Store, and the company's supposed desire to merge Xbox and PC gaming, a constant source of insight for me was one Andrew Lauritzen. Andrew is a graphics guru at Intel with extensive knowledge of DirectX, rendering, engines, etc., and he has always been willing to teach and educate me on topics that crop up. The entire DirectX 12 and Universal Windows Platform situation was definitely one such instance.

Yesterday morning Andrew pointed me to a GitHub release for a tool called PresentMon, a small sample of code written by a colleague of Andrew's that might be the beginnings of a way to properly monitor the performance of DX12 games and even UWP games.

The idea is simple and its implementation even simpler: PresentMon monitors the Windows event tracing stack for Present commands and records data about them to a CSV file. Anyone familiar with the kind of ETW data you can gather will appreciate that PresentMon culls out nearly all of the headache of data gathering by simplifying the results into application name/ID, Present call deltas, and a bit more.

gears.jpg

Gears of War Ultimate Edition - the debated UWP version

The "Present" method in Windows is what produces a frame and shows it to the user. PresentMon looks at the Windows events running through the system, takes note of when those Present commands are received by the OS for any given application, and records the time between them. Because this tool runs at the OS level, it can capture Present data from all kinds of APIs, including DX12, DX11, OpenGL, Vulkan and more. It does have limitations, though - it is read-only, so producing an overlay on the game/application being tested isn't possible today. (Or maybe ever, in the case of UWP games.)

What PresentMon offers us at this stage is an early look at a Fraps-like performance monitoring tool. Like Fraps, it looks for Present commands from Windows and records them, at a very similar point in the rendering pipeline as well. What is important and unique about PresentMon is that it is API independent and useful for all types of games and programs.
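The raw output is already quite usable. Here is a hedged sketch that reduces a PresentMon-style CSV to familiar numbers; column names have varied between PresentMon builds, so the "MsBetweenPresents" field and the file name below are assumptions to check against the header of your own capture:

```python
# Reduce a PresentMon-style CSV to average FPS and 99th percentile frame
# time. "MsBetweenPresents" is an assumed column name -- verify it against
# the capture produced by your PresentMon build.
import csv
import statistics

def frametime_stats(csv_path, column="MsBetweenPresents"):
    with open(csv_path, newline="") as f:
        deltas = [float(row[column]) for row in csv.DictReader(f)
                  if row.get(column)]
    deltas.sort()
    return {
        "frames": len(deltas),
        "avg_fps": 1000.0 / statistics.mean(deltas),
        "99th_percentile_ms": deltas[int(len(deltas) * 0.99)],
    }

print(frametime_stats("presentmon_capture.csv"))  # hypothetical file name
```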

presentmonscreen.png

PresentMon at work

The first and obvious question for our readers is how this performance monitoring tool compares with Frame Rating, the FCAT-based capture benchmarking platform we have used on GPUs and CPUs for years now. To be honest, it's not the same and should not be considered an analog to it. Frame Rating and capture-based testing look for smoothness, dropped frames and performance at the display, while Fraps and PresentMon look at performance closer to the OS level, before the graphics driver really gets the final say in things. I am still aiming for universal DX12 Frame Rating testing with exclusive fullscreen-capable applications and expect that to be ready sooner rather than later. However, what PresentMon does give us is at least an early universal look at DX12 performance, including games that are locked behind the Windows Store rules.

Continue reading our look at the new PresentMon tool!!

Manufacturer: Corsair
Tagged:

Introduction and First Impressions

The Corsair Carbide 400C is a mid-tower enclosure that offers a very large window to show off your build through a side panel that’s also a hinged, latching door.

DSC_0659.jpg

With the Carbide 600 series, Corsair introduced a new full-tower enclosure with an understated style and some very nice features. The matching 400 series offers a slightly smaller version of these enclosures, with a few changes. These new Carbide 600 and 400 series cases come in both noise-reducing quiet versions (600Q, 400Q) and clear side-panel versions (600C, 400C). We recently looked at the full-tower Carbide 600Q, which performed well from a thermal standpoint and, to an even greater extent, with noise output.

The case we’ll be taking a look at today is the mid-tower cousin of the Carbide 600C, and this 400C drops the larger enclosure’s inverse ATX design in favor of a standard layout, but retains most other aspects of the design. Corsair’s 400 series duplicates many aspects of the larger, and more expensive, 600 series full-tower enclosures, and is priced $50 less. From the outside the 400C looks like a slightly smaller version, but once inside there are some notable differences.

DSC_0662.jpg

The Carbide 400C does not offer the noise dampening of the 400Q, dropping it in favor of a large window and a hinged, latching door. This tradeoff has allowed Corsair to market both versions at the same price point, leaving it up to the consumer to decide where their priorities are. As mentioned, the 400 series are not simply smaller versions of the 600C/600Q, as the internals are quite different. For instance, the PSU mount (and the plastic shroud covering it) moves down to the case floor with the 400 series, and there are no 5.25-inch bays this time.

Beyond the changed layout, this clear version will likely produce different results than we saw with the 600Q. Thermal performance might be affected by the standard ATX layout, but the lack of insulation could mitigate this. Another factor is the noise output of a “C” version, which would presumably be significantly louder than the very quiet 600Q previously tested. We’ll cover all of that - along with build quality and ease of installation - in this review.

Continue reading our review of the Corsair Carbide 400C case!!

Subject: Motherboards
Manufacturer: ASUS

Introduction and Technical Specifications

Introduction

02-board.jpg

Courtesy of ASUS

03-armor-compoenents.jpg

Courtesy of ASUS

The ASUS Sabertooth Z170 Mark 1 motherboard is the latest addition to the TUF (The Ultimate Force) product series. ASUS updated the armored version of their Sabertooth board for Skylake processor support using the Intel Z170 chipset, including other functionality upgrades and design changes for a better user experience. The Sabertooth Z170 Mark 1 comes with an MSRP of $259.00, with features to more than justify its cost.

04-board-flyapart.jpg

Courtesy of ASUS

05-power-components.jpg

Courtesy of ASUS

06-esd-guard2.jpg

Courtesy of ASUS

ASUS continues to evolve the armored Sabertooth board, enhancing and improving the last generation design with the Sabertooth Z170 Mark 1. The board includes the line's signature TUF Thermal Armor board overlay and TUF Fortifier back plate. With the new revision of the Thermal Armor, ASUS integrated two fan ports into the rear panel and middle of the board, as well as flow ports over the wrap-around CPU VRM heat sinks to adjust the airflow through the Thermal Armor. The board is designed with an 8+4 phase digital power system featuring alloy chokes redesigned to run cooler than previous generation chokes, 10k-rated Titanium capacitors, and military-grade TUF MOSFETs. Additionally, ASUS integrated on-board ESD (electrostatic discharge) protection into the rear panel keyboard/mouse, USB 2.0, USB 3.0, LAN, and audio ports to protect the motherboard and integrated components from voltage fluctuations and power surges.

Continue reading our review of the ASUS Sabertooth Z170 Mark 1 motherboard!

Author:
Subject: Processors
Manufacturer: AMD

Clockspeed Jump and More!

On March 1st, AMD announced the availability of two new processors and shared more information on the A10-7860 APU.

The two new units are the A10-7890K and the Athlon X4 880K. These are both Kaveri-based parts, though of course the Athlon has the GPU portion disabled. Product refreshes for the past several years have followed a far different schedule than the days of yore. Remember when the Phenom II series and the competing Core 2 series would get clockspeed updates yearly, if not every half year, with a slightly faster top end performer to garner top dollar from consumers?

amd_lineup.png

Things have changed, for better or worse. We have so far seen two clockspeed bumps for the Kaveri/Godavari based APUs. Kaveri was first introduced over two years ago with the A10-7850K and its lower end derivatives. The 7850K has a clockspeed that ranges from 3.7 GHz to a max of 4 GHz with boost. The GPU portion is clocked at 720 MHz. This is a 95 watt TDP part and one of the introductory units on GLOBALFOUNDRIES' 28 nm HKMG process.

Today the new top end A10-7890K is clocked at 4.1 GHz base to a 4.3 GHz max. The GPU receives a significant boost in performance with a clockspeed of 866 MHz. The combination of CPU and GPU clockspeed increases pushes the total performance of the part past 1 TFLOPS. It features the same dual module/quad core Godavari design as well as the 8 GCN compute units. The interesting part here is that the APU does not exceed the 95 watt TDP that it shares with the older and slower 7850K. It is also a boost in performance from last year's refresh, the A10-7870K, which is clocked 200 MHz slower on the CPU portion but has the same 866 MHz GPU clock. This APU is fully unlocked, so a user can easily overclock both the CPU and GPU cores.
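A quick sanity check on that 1 TFLOPS figure, using the clocks above. The per-clock throughput numbers (2 FLOPs per shader ALU via FMA, 8 per CPU core) are standard assumptions on my part rather than AMD's published math:

```python
# 8 GCN CUs x 64 ALUs, 866 MHz GPU clock, 4.3 GHz CPU boost (from above);
# 2 FLOPs/ALU/clock (FMA) and 8 FLOPs/core/clock are assumed figures.
gpu_alus = 8 * 64
gpu_gflops = gpu_alus * 2 * 0.866   # ~887 GFLOPS from the GPU
cpu_gflops = 4 * 8 * 4.3            # ~138 GFLOPS from the four CPU cores
print(f"Total: {(gpu_gflops + cpu_gflops) / 1000:.2f} TFLOPS")  # ~1.02
```

Under those assumptions the combined figure lands right around 1.02 TFLOPS, which lines up with the claim.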

amd_7890k.png

The Athlon X4 880K is still based on the Godavari family rather than the Carrizo update that the X4 845 uses. This part is clocked from 4.0 GHz to 4.2 GHz, and it again retains the 95 watt TDP rating of the previous Athlon X4 CPUs. Previously the X4 860K was the highest clocked unit at 3.7 GHz base to 4.0 GHz boost, so the 880K's 300 MHz gain in base clock is pretty significant, as is stretching that ceiling to 4.2 GHz. The Godavari modules retain their full amount of L2 cache, so the 880K has 4 MB available to it. These parts are very popular with budget enthusiasts and gaming builds, as they are extremely inexpensive, perform at an acceptable level, and have free overclocking thrown in.

Click here to read more about AMD's March 2016 Refresh!

Manufacturer: Reeven

Introduction and Technical Specifications

Introduction

02-okeanos-profile.jpg

Courtesy of Reeven

03-brontes-profile.jpg

Courtesy of Reeven

Reeven is a newcomer to the computer cooling space, offering a wide range of coolers and fans for all aspects of the industry. The latest members of their cooler lineup, the Okeanos and Brontes, offer high performance in two very different form factors. The Okeanos is a large dual radiator, dual fan cooler, while the Brontes is a low profile C-shaped cooler touting compatibility across most form factors. While the Okeanos offers support for all current CPU and board types, the Brontes supports all mainstream systems with the exception of Intel LGA 2011-based CPUs and boards.

Continue reading our review of the Reeven CPU coolers!

Author:
Manufacturer: Microsoft

Things are about to get...complicated

Earlier this week, the team behind Ashes of the Singularity released an updated version of its early access game, expanding its features and capabilities. With support for DirectX 11 and DirectX 12, and adding in multiple graphics card support, the game features a benchmark mode that got quite a lot of attention. We saw stories based on that software posted by Anandtech, Guru3D and ExtremeTech, all of which had varying views on the advantages of one GPU or another.

That isn’t the focus of my editorial here today, though.

Shortly after the initial release, a discussion began around results from the Guru3D story that measured frame time consistency and smoothness with FCAT, a capture-based testing methodology much like the Frame Rating process we have here at PC Perspective. In a post on ExtremeTech, Joel Hruska claims that the results and conclusion from Guru3D are wrong because the FCAT capture method assumes the output matches what the user actually experiences. Maybe everyone is wrong?

First a bit of background: I have been working with Oxide and the Ashes of the Singularity benchmark for a couple of weeks, hoping to get a story that I was happy with and felt was complete, before having to head out the door to Barcelona for the Mobile World Congress. That didn’t happen – such is life with an 8-month old. But, in my time with the benchmark, I found a couple of things that were very interesting, even concerning, that I was working through with the developers.

vsyncoff.jpg

FCAT overlay as part of the Ashes benchmark

First, the initial implementation of the FCAT overlay, which Oxide should be PRAISED for including, since we don't have and likely won't have a universal DX12 variant of FCAT, was implemented incorrectly, with duplication of color swatches that made the results from capture-based testing inaccurate. I don't know if Guru3D used that version to do its FCAT testing, but I was able to get some updated EXEs of the game through the developer in order to get the overlay working correctly. Once that was corrected, I found yet another problem: an issue of frame presentation order on NVIDIA GPUs that likely has to do with asynchronous shaders. Whether that issue is on the NVIDIA driver side or the game engine side is still being investigated by Oxide, but it's interesting to note that this problem couldn't have been found without a proper FCAT implementation.

With all of that under the bridge, I set out to benchmark this latest version of Ashes and DX12 to measure performance across a range of AMD and NVIDIA hardware. The data showed some abnormalities, though. Some results just didn’t make sense in the context of what I was seeing in the game and what the overlay results were indicating. It appeared that Vsync (vertical sync) was working differently than I had seen with any other game on the PC.

For the NVIDIA platform, tested using a GTX 980 Ti, the game seemingly randomly starts up with Vsync on or off, with no clear indicator of what was causing it, despite the in-game settings being set how I wanted them. But the Frame Rating capture data was still working as I expected – just because Vsync is enabled doesn't mean you can't examine captured results. I have written stories on what Vsync-enabled captured data looks like, and what it means, as far back as April 2013. Obviously, to get the best and most relevant data from Frame Rating, setting vertical sync off is ideal. Running into more frustration than answers, I moved over to an AMD platform.

Continue reading PC Gaming Shakeup: Ashes of the Singularity, DX12 and the Microsoft Store!!

Introduction and Technical Specifications

Introduction

***Editor's Note*** - Before getting into the nuts and bolts of the Core X9, please understand that this initial review is meant as a detailed introduction into the capabilities and build strengths of the Core X9 E-ATX Cube Chassis. A deeper look into the advanced capabilities of this monstrous case will be explored in a soon to be released follow-up article. Stay tuned to PC Perspective for the follow-up.

02-case-profile-filled.jpg

Courtesy of Thermaltake

The Thermaltake Core X9 E-ATX Cube Chassis is one of the largest and most configurable cases they've developed. The case is roughly cube shaped, with a steel and plastic construction. The height and depth of the unit allow the Core X9 to support up to quad-fan radiators mounted to its top or sides and up to a tri-fan radiator in front. At an MSRP of $169.99, the Core X9 E-ATX Cube Chassis carries a competitive price in light of its size and configurability.

03-parts.jpg

Courtesy of Thermaltake

04-air-cooled-flyapart.jpg

Courtesy of Thermaltake

The Core X9 case was designed to be fully modular, supporting a variety of build configurations so it can adapt to whatever build style the end user can dream up. The case comes with a variety of mounts for attaching fans or liquid cooling radiators to the top, side, or bottom of the case. Additionally, Thermaltake integrated three 5.25" device bays as well as two hard drive bays supporting up to three drives each. The motherboard tray is removable as well, for easy installation of the motherboard outside the system. The chassis itself can be easily segregated into upper and lower sections for controlling system and component heat flow if desired.

05-radiator-support.jpg

Courtesy of Thermaltake

06-fan-support.jpg

Courtesy of Thermaltake

Until you can accurately visualize just how many radiators and fans this case supports, you really don't have a feel for the immense size of the Core X9. From front to back, the case supports 4 x 120mm fans or a 480mm radiator along either of its lower sides or in the dual top mounts. On top, you can actually mount a total of eight 120mm fans or dual 480mm radiators if you so choose. And that doesn't take into account the additional two 140mm fans that can be mounted in the upper and lower sections of the case's rear panel, nor the three 120mm fans, dual 200mm fans, or 360mm radiator that can be mounted to the case's front panel.

Continue reading our review of the Thermaltake Core X9 Cube chassis!

Author:
Subject: General Tech
Manufacturer: Monoprice

It Begins

3D printing has been an interest of the staff here at PC Perspective for a few years now. Seeing how inexpensive it has gotten to build your own or buy an entry level 3D printer, we were interested in doing some sort of content around 3D printing, but we weren't quite sure what to do.

DSC_0274.jpg

However, an idea arose after seeing Monoprice's new 3D printer offerings at this year's CES. What if we put Ryan, someone who has no idea how 3D printers actually work, in front of one of their entry level models and told him to print something?

Late last week we received the Maker Select 3D printer from Monoprice, along with a box full of different types of filament, and proceeded to live stream our attempts to make it all work.

And thus, the new series "Ryan 3D Prints" was born.

While I've had some limited 3D printing experience in the past, Ryan honestly went into this knowing virtually nothing about the process.

3dprinting_part1_source.00_12_52_09.Still001.png

The Maker Select printer isn't ready to print out of the box and requires a bit of assembly, with the setup time from unboxing to first print ultimately taking about 90 minutes for us on the stream. Keep in mind that we were going pretty slow and attempting to explain as best as we could as we went, so someone working by themselves could probably get up and running a bit quicker.

IMG_4266.JPG

I was extremely impressed with how quickly we were printing successful, high-quality objects. Beyond having to take a second try at leveling the print bed, we ran into no issues during setup.

IMG_4288.JPG

Monoprice includes a microSD card with 4 sample models you can print on the Maker Select, and we went ahead and printed an example of each of them. While I don't know what resolution these models were sliced at, I am impressed with the quality considering the $350 price tag on the Maker Select.

This certainly isn't the end of our 3D printing experience. Our next steps involve taking this printer and hooking it up to a PC and attempting to print our own models with an application like Cura.

116141.jpg

Beyond that, we plan to compare different types of filament, take a look at the Dual Extruder Monoprice printer, and maybe even future offerings like the SLA printer they showed off at CES. Stay tuned to see what we end up making!

Introduction

Gaming headsets are an ever-growing segment, with seemingly every hardware company offering their own take on this popular concept these days. Logitech is far from a new player in this space, with a number of headsets on the market over the years. Their most recent lineup included the top-end G930, and this headset has been superseded by the new G933 (wireless) and G633 (wired) models. We’ll take a look - and listen - in this review.

artemis_cover.jpg

With the Artemis Spectrum headsets, Logitech is introducing their new 40 mm Pro-G drivers, which the company says will offer high-fidelity sound:

"Patent pending advanced Pro-G audio drivers are made with hybrid mesh materials that provide the audiophile-like performance gaming fans have been demanding. From your favorite music to expansive game soundtracks, the Pro-G drivers deliver both clean and accurate highs as well as a deep rich bass that you would expect from premium headphones."

These are more than a pair of stereo headphones, of course: the Artemis Spectrum G933 and G633 feature (simulated) 7.1-channel surround via selectable Dolby or DTS Headphone:X technology. How convincing this effect might be is a focus of this review, and we will take a close look at audio performance.

While these two pairs of gaming headphones might look identical, the G933 differentiates itself from the G633 by offering 2.4 GHz wireless capability. Both headsets also feature two fully customizable RGB lighting zones, with 16.8 million colors controlled through the Logitech Gaming Software on your PC. But a computer isn't required to use these headsets; both the G933 and G633 are fully compatible with the Xbox One and PlayStation 4, and with a 3.5 mm audio cable (included with both) they can be used as a stereo headset with just about anything, including smartphones.

artemis_main.jpg

Continue reading our review of the Logitech Artemis Spectrum G933/G633 headsets!!

Subject: Storage
Manufacturer: Samsung

Introduction, Specifications and Packaging

Introduction

Around this same time last year, Samsung launched their Portable SSD T1. This was a nifty little external SSD with some very good performance and capabilities. Despite its advantages and the cool factor of having a thin and light 1TB SSD barely noticeable in your pocket, there was some feedback from consumers that warranted a few tweaks to the design. There was also the need for a new line as Samsung was switching over their VNAND from 32 to 48 layer, enabling a higher capacity tier for this portable SSD. All of these changes were wrapped up into the new Samsung Portable SSD T3:

160217-180734.jpg

Specifications

T3 specs.png

Most of these specs are identical to the previous T1, with some notable exceptions. Consumer feedback prompted a newer / heavier metal housing, as the T1 (coming in at only 26 grams) was almost too light. With that newer housing came a slight increase in dimensions. We will do some side-by-side comparisons later in the review.

Read on for our full review of the new Samsung T3!

Author:
Subject: Systems
Manufacturer: Various

Part 1 - Picking the Parts

I'm guilty. I am one of those PC enthusiasts that thinks everyone knows how to build a PC. Everyone has done it before, and all you need from the tech community is a recommendation for parts, right? Turns out that isn't the case at all, and as more and more gamers and users come into our community, they are overwhelmed and often underserved. It's time to fix that.

This cropped up for me personally when my nephew asked me about getting him a computer. At just 14 years old, he had never built a PC, or even watched a PC be constructed - nothing of that sort. Even though his uncle had built computers nearly every week for 15 years or more, he had little to no background on what the process was like. I decided that this was the perfect opportunity to teach him and create a useful resource for the community at large, to help empower another generation to adopt the DIY mindset.

I decided to start with three specific directions:

  • Part 1 - Introduce the array of PC components, what the function of each is and why we picked the specific hardware we did.
     
  • Part 2 - Show him the process of actual construction from CPU install to cable routing
     
  • Part 3 - Walk through the installation of Windows and get him set up with Steam and the idea of modern PC gaming.

Each of the above sections was broken up into a separate video during our day at the office, and will be presented here and on our YouTube channel.

I would like to thank Gigabyte for sponsoring this project with us, providing the motherboard, graphics card and helping work with the other vendors to get us a great combination of hardware. Visit them at Gigabyte.com for the full lineup of motherboard, graphics cards and more!!

gbsponsor.jpg

Part 1 - Picking the Parts

Selecting the parts to build a PC can be a daunting task for a first timer. What exactly is a motherboard and do you need one? Should you get 2 or 4 or more memory modules? SSD vs HDD? Let's lay it all out there for you.

The specific configuration used in Austin's PC build is pretty impressive!

Austin's First PC Build
Processor       Intel Core i5-6600K - $249
Motherboard     Gigabyte Z170X-Gaming 5 - $189
Memory          Corsair Vengeance LPX 16GB DDR4-3200 - $192
Graphics Card   Gigabyte GTX 970 Gaming Xtreme - $374
Storage         Corsair Neutron XT 480GB - $184
                Western Digital 3TB Red - $109
Case            Corsair Obsidian 450D - $119
Power Supply    Corsair RM550x - $117
Keyboard        Logitech G910 Orion Spark - $159
Mouse           Logitech G602 - $51
Headset         Logitech G933 Artemis Spectrum - $192
Monitor         Acer XB280HK - $699
OS              Windows 10 Home - $119
Total Price     $2054 (not including the monitor) - Amazon.com Cart

Continue reading My First PC Build on PC Perspective!!

Subject: Mobile
Manufacturer: Apple

It's Easier to Be Convincing than Correct

This is a difficult topic to discuss. Some perspectives assume that law enforcement has terrible, Orwellian intentions. Meanwhile, law enforcement officials, with genuinely good intentions, don't understand that the road to Hell is paved with those. Bad things are much more likely to happen when human flaws are justified away, which is easy to do when your job is preventing mass death and destruction. Human beings like to use large pools of evidence to validate assumptions without realizing it, rather than to discover truth.

Ever notice how essays can always find sources, regardless of thesis? With increasing amounts of data, you are progressively more likely to make a convincing argument, but not necessarily a more true one. Mix in good intentions, which promotes complacency, and mistakes can happen.

apple-2016-iphonenot.png

HOPEFULLY NOT...

But this is about Apple. Recently, the FBI demanded that Apple create a version of iOS that can be broken into by law enforcement. They frequently use the term “back door,” while the government prefers other terminology. Really, words are words, and the only thing that matters is what they describe -- and what they describe is a mechanism to compromise the device's security in some way.

This introduces several problems.

The common line that I hear is, “I don't care, because I have nothing to hide.” Well... that's wrong in a few ways. First, having nothing to hide is irrelevant if the person who wants access to your data assumes that you have something to hide, and is looking for evidence to convince themselves that they're right. Second, you need to consider all the people who want access to this data. The FBI will not be the only one demanding a back door, nor even the United States as a whole. There are a whole lot of nations that trust individuals, including their own respective citizens, less than the United States does. You can expect that each of them would request a backdoor.

You can also expect each of them, and organized criminals, to want to break into each other's.

Lastly, we've been here before, and what it comes down to is criminalizing math. Encryption is just a mathematical process that is easy to perform, but hard to invert. It all started because it is easy to multiply two numbers together, but hard to factor the result. The straightforward method is trial division by every possible number up to the square root of the product. If the two factors are prime, then you are stuck searching for one number out of all those possibilities (the other prime will be greater than the square root). In the 90s, numbers over a certain size were legally classified as weapons. That may sound ridiculous, and there would be good reason for that feeling. Either way, it changed; as a result, online banks and retailers thrived.
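A few lines of Python make the asymmetry obvious. This is the naive trial division described above, with two arbitrarily chosen primes standing in for a real key (real cryptographic primes are hundreds of digits long, and real attacks use cleverer algorithms):

```python
# Multiplying two primes is instant; recovering them by trial division
# means testing candidates up to the square root of the product.
import math

def factor(n):
    for candidate in range(2, math.isqrt(n) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    return n, 1  # n itself is prime

p, q = 104729, 1299709        # two arbitrary primes
n = p * q                     # easy direction: one multiplication
print(factor(n))              # hard direction: ~100,000 trial divisions
```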

Apple closes their letter with the following statement:

While we believe the FBI’s intentions are good, it would be wrong for the government to force us to build a backdoor into our products. And ultimately, we fear that this demand would undermine the very freedoms and liberty our government is meant to protect.

Good intentions lead to complacency, which is where the road to (metaphorical) Hell starts.

Subject: Storage
Manufacturer: Samsung

Introduction, Specifications and Packaging

Introduction

The steady increase in flash memory capacity per die is necessary for bringing SSD costs down, but SSDs need a minimum number of dies present to maintain good performance. Back when Samsung announced their 48-layer V-NAND, their Senior VP of Marketing assured me that the performance drop that comes along with the low die count of lower capacity models would be dealt with properly. At the time, Unsoo Kim mentioned the possibility of Samsung producing 128Gbit 48-layer V-NAND, but it now appears that they have opted to put everything into 256Gbit on the 3D side. Fortunately, they still have a planar (2D) NAND production line going, and they will be using that same flash in a newer line of low capacity models. When their 850 Series transitions over to 48-layer (enabling 2TB capacities), Samsung will drop the 120GB capacity of that line and replace it with a new OEM / system builder destined 750 EVO:

160210-175142.jpg

The SSD 750 EVO Series is essentially a throwback to the 840 EVO, but without all of the growing pains experienced by that line. Samsung assured me that the same corrections that ultimately fixed the long-term read-based slow-down issues with the 840 EVO also apply to the 750 EVO, and despite the model number being smaller, these should actually perform a bit better than their predecessor. Since it would be silly to launch just a single 120GB capacity to make up for the soon-to-be-dropped 850 EVO 120GB, we also get a 250GB model, which should make for an interesting price point.

Specifications

specs.png

Baseline specs are very similar to the older 840 EVO series, with some minor differences (to be shown below). There are some unlisted specs that are carried over from the original series. For those we need to reference the slides from the 840 EVO launch:

DSC04638.JPG

Read on for the full review of these two new models!

Manufacturer: PC Perspective

Caught Up to DirectX 12 in a Single Day

The wait for Vulkan is over.

I'm not just talking about the specification. Members of the Khronos Group have also released compatible drivers, SDKs and tools to support them, conformance tests, and a proof-of-concept patch for Croteam's The Talos Principle. To reiterate, this is not a soft launch. The API, and its entire ecosystem, is out and ready for the public on Windows (at least 7+ at launch but a surprise Vista or XP announcement is technically possible) and several distributions of Linux. Google will provide an Android SDK in the near future.

khronos-2016-vulkan-why.png

I'm going to editorialize for the next two paragraphs. There was a concern that Vulkan would be too late. The thing is, as of today, Vulkan is now just as mature as DirectX 12. Of course, that could change at a moment's notice; we still don't know how the two APIs are being adopted behind the scenes. A few DirectX 12 titles are planned to launch in a few months, but no full, non-experimental, non-early access game currently exists. Each time I say this, someone links the Wikipedia list of DirectX 12 games. If you look at each entry, though, you'll see that all of them are either: early access, awaiting an unreleased DirectX 12 patch, or using a third-party engine (like Unreal Engine 4) that only list DirectX 12 as an experimental preview. No full, released, non-experimental DirectX 12 game exists today. Besides, if the latter counts, then you'll need to accept The Talos Principle's proof-of-concept patch, too.

But again, that could change. While today's launch speaks well to the Khronos Group and the API itself, it still needs to be adopted by third party engines, middleware, and software. These partners could, like the Khronos Group before today, be privately supporting Vulkan with the intent to flood out announcements; we won't know until they do... or don't. With the support of popular engines and frameworks, dependent software really just needs to enable it. This has not happened for DirectX 12 yet, and, now, there doesn't seem to be anything keeping it from happening for Vulkan at any moment. With the Game Developers Conference just a month away, we should soon find out.

khronos-2016-vulkan-drivers.png

But back to the announcement.

Vulkan-compatible drivers are launching today across multiple vendors and platforms, but I do not have a complete list. On Windows, I was told to expect drivers from NVIDIA for Windows 7, 8.x, 10 on Kepler and Maxwell GPUs. The standard is compatible with Fermi GPUs, but NVIDIA does not plan on supporting the API for those users due to its low market share. That said, they are paying attention to user feedback and they are not ruling it out, which probably means that they are keeping an open mind in case some piece of software gets popular and depends upon Vulkan. I have not heard from AMD or Intel about Vulkan drivers as of this writing, one way or the other. They could even arrive day one.

On Linux, NVIDIA, Intel, and Imagination Technologies have submitted conformant drivers.

Drivers alone do not make a hard launch, though. SDKs and tools have also arrived, including the LunarG SDK for Windows and Linux. LunarG is a company co-founded by Jens Owen, who had a previous graphics software company that was purchased by VMware. LunarG is backed by Valve, which also backed Vulkan in several other ways. The LunarG SDK helps developers validate their code, inspect what the API is doing, and otherwise debug. Even better, it is also open source, which means that the community can rapidly enhance it, even though it's in a releasable state as it is. RenderDoc, the open-source graphics debugger by Crytek, will also add Vulkan support. ((Update (Feb 16 @ 12:39pm EST): Baldur Karlsson has just emailed me to let me know that it was a personal project at Crytek, not a Crytek project in general, and their GitHub page is much more up-to-date than the linked site.))

vulkan_gltransition_maintenance1.png

The major downside is that Vulkan (like Mantle and DX12) isn't simple.
These APIs are verbose and very different from previous ones, which requires more effort.

Image Credit: NVIDIA

There really isn't much to say about the Vulkan launch beyond this. What graphics APIs really try to accomplish is standardizing signals that enter and leave video cards, such that the GPUs know what to do with them. For the last two decades, we've settled on an arbitrary, single, global object that you attach buffers of data to, in specific formats, and call one of a half-dozen functions to send it.

Compute APIs, like CUDA and OpenCL, decided it was more efficient to handle queues, allowing the application to write commands and send them wherever they need to go. Multiple threads can write commands, and multiple accelerators (GPUs in our case) can be targeted individually. Vulkan, like Mantle and DirectX 12, takes this metaphor and adds graphics-specific instructions to it. Moreover, GPUs can schedule memory, compute, and graphics instructions at the same time, as long as the graphics task has leftover compute and memory resources, and / or the compute task has leftover memory resources.
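To illustrate the queue metaphor (and only the metaphor -- this is plain Python, not the Vulkan API, and every name below is made up), the shift is from one global context toward threads recording their own command buffers for submission to a queue:

```python
# Conceptual sketch of the command-queue model: each CPU thread records
# an independent command buffer, and the buffers are then submitted to
# the device queue. Names are illustrative, not real Vulkan calls.
from concurrent.futures import ThreadPoolExecutor

def record_commands(objects):
    buf = []
    for obj in objects:
        buf.append(("bind_pipeline", obj))   # state setup
        buf.append(("draw", obj))            # draw call
    return buf

scene_chunks = [["tree"] * 50, ["debris"] * 50, ["crowd"] * 50]
with ThreadPoolExecutor() as pool:           # recording scales with CPU cores
    buffers = list(pool.map(record_commands, scene_chunks))

device_queue = []                            # stands in for a GPU queue
for buf in buffers:
    device_queue.extend(buf)                 # cheap, ordered submission
print(f"{len(device_queue)} commands submitted")
```

The point of the sketch is that the expensive work (recording) parallelizes across threads, while submission itself stays cheap and ordered.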

This is not necessarily a “better” way to do graphics programming... it's different. That said, it has the potential to be much more efficient when dealing with lots of simple tasks that are sent from multiple CPU threads, especially to multiple GPUs (which currently require the driver to figure out how to convert draw calls into separate workloads -- leading to simplifications like mirrored memory and splitting workload by neighboring frames). Lots of tasks aligns well with video games, especially ones with lots of simple objects, like strategy games, shooters with lots of debris, or any game with large crowds of people. As it becomes ubiquitous, we'll see this bottleneck disappear and games will not need to be designed around these limitations. It might even be used for drawing with cross-platform 2D APIs, like Qt or even webpages, although those two examples (especially the Web) each have other, higher-priority bottlenecks. There are also other benefits to Vulkan.

khronos-2016-vulkan-middleware.png

The WebGL comparison is probably not as common knowledge as Khronos Group believes.
Still, Khronos Group was criticized when WebGL launched for being "too tough for Web developers".
It didn't need to be easy. Frameworks arrived and simplified everything. It's now ubiquitous.
In fact, Adobe Animate CC (the successor to Flash Pro) is now a WebGL editor (experimentally).

Open platforms are required for this to become commonplace. Engines will probably target several APIs from their internal management APIs, but you can't target users who don't fit in any bucket. Vulkan brings this capability to basically any platform, as long as it has a compute-capable GPU and a driver developer who cares.

Thankfully, it arrived before any competitor established market share.