Manufacturer: NVIDIA

A Beautiful Graphics Card

As a surprise to nearly everyone, on July 21st NVIDIA announced the existence of the new Titan X graphics card, which is based on the brand new GP102 Pascal GPU. Though it shares a name, for some unexplained reason, with the Maxwell-based Titan X launched in March of 2015, this card is a significant performance upgrade. Using the largest consumer-facing Pascal GPU to date (with only the GP100 used in the Tesla P100 exceeding it), the new Titan X is going to be a very expensive, and very fast, gaming card.

As has been the case since the introduction of the Titan brand, NVIDIA claims that this card is for gamers that want the very best in graphics hardware as well as for developers that need an ultra-powerful GPGPU device. GP102 does not integrate improved FP64 / double precision compute cores, so we are basically looking at an upgraded and improved GP104 Pascal chip. That’s nothing to sneeze at, of course, and you can see in the specifications below that we expect (and can now show you) that Titan X (Pascal) is a gaming monster.

|   | Titan X (Pascal) | GTX 1080 | GTX 980 Ti | TITAN X | GTX 980 | R9 Fury X | R9 Fury | R9 Nano | R9 390X |
|---|---|---|---|---|---|---|---|---|---|
| GPU | GP102 | GP104 | GM200 | GM200 | GM204 | Fiji XT | Fiji Pro | Fiji XT | Hawaii XT |
| GPU Cores | 3584 | 2560 | 2816 | 3072 | 2048 | 4096 | 3584 | 4096 | 2816 |
| Rated Clock | 1417 MHz | 1607 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1000 MHz | up to 1000 MHz | 1050 MHz |
| Texture Units | 224 | 160 | 176 | 192 | 128 | 256 | 224 | 256 | 176 |
| ROP Units | 96 | 64 | 96 | 96 | 64 | 64 | 64 | 64 | 64 |
| Memory | 12GB | 8GB | 6GB | 12GB | 4GB | 4GB | 4GB | 4GB | 8GB |
| Memory Clock | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz | 500 MHz | 6000 MHz |
| Memory Interface | 384-bit G5X | 256-bit G5X | 384-bit | 384-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) | 512-bit |
| Memory Bandwidth | 480 GB/s | 320 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 512 GB/s | 512 GB/s | 512 GB/s | 320 GB/s |
| TDP | 250 watts | 180 watts | 250 watts | 250 watts | 165 watts | 275 watts | 275 watts | 175 watts | 275 watts |
| Peak Compute | 11.0 TFLOPS | 8.2 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS | 8.19 TFLOPS | 5.63 TFLOPS |
| Transistor Count | 12.0B | 7.2B | 8.0B | 8.0B | 5.2B | 8.9B | 8.9B | 8.9B | 6.2B |
| Process Tech | 16nm | 16nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $1,200 | $599 | $649 | $999 | $499 | $649 | $549 | $499 | $329 |

GP102 features 40% more CUDA cores than the GP104 at slightly lower clock speeds. The rated 11 TFLOPS of single precision compute of the new Titan X is 34% higher than that of the GeForce GTX 1080 and I would expect gaming performance to scale in line with that difference.
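
For reference, those peak numbers come from the standard cores × clock × 2 calculation (each CUDA core retires one fused multiply-add, or two floating point operations, per clock). A quick sketch — note that NVIDIA's 11.0 TFLOPS figure for Titan X uses its 1531 MHz boost clock, while the GTX 1080's 8.2 TFLOPS figure uses its 1607 MHz base clock:

```typescript
// Peak FP32 throughput = CUDA cores × clock (GHz) × 2 ops (one FMA) per clock.
function peakTflops(cudaCores: number, clockGHz: number): number {
  return (cudaCores * clockGHz * 2) / 1000;
}

console.log(peakTflops(3584, 1.531)); // Titan X (Pascal): ~11.0 at its 1531 MHz boost clock
console.log(peakTflops(2560, 1.607)); // GTX 1080: ~8.2 at its 1607 MHz base clock
console.log(((11.0 / 8.2 - 1) * 100).toFixed(0) + "%"); // the 34% gap cited above
```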

Titan X (Pascal) does not utilize the full GP102 GPU; the recently announced Quadro P6000 does, however, which gives that card a CUDA core count of 3,840 (256 more than Titan X).


A full GP102 GPU

Compared to the complete GPU, the new Titan X effectively gives up 7% of its compute capability, although that cut is likely to help increase available clock headroom and yield.

The new Titan X will feature 12GB of GDDR5X memory, not HBM as the GP100 chip has, so this is clearly a unique chip with a new memory interface. NVIDIA claims it has 480 GB/s of bandwidth on a 384-bit memory controller interface running at the same 10 Gbps as the GTX 1080.
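
The math on that bandwidth claim checks out: divide the bus width by 8 to get bytes transferred per cycle, then multiply by the per-pin data rate. A quick check:

```typescript
// Peak memory bandwidth = (bus width in bits / 8 bits per byte) × per-pin data rate.
function bandwidthGBps(busWidthBits: number, dataRateGbps: number): number {
  return (busWidthBits / 8) * dataRateGbps;
}

console.log(bandwidthGBps(384, 10)); // Titan X (Pascal), 384-bit G5X at 10 Gbps: 480 GB/s
console.log(bandwidthGBps(256, 10)); // GTX 1080, 256-bit G5X at 10 Gbps: 320 GB/s
```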

Continue reading our review of the new NVIDIA Titan X (Pascal) Graphics Card!!

Manufacturer: Realworldtech

Realworldtech with Compelling Evidence

Yesterday David Kanter of Realworldtech posted a fascinating article and video that explored the two latest NVIDIA architectures and how they have branched away from traditional immediate mode rasterization units.  He revealed through testing that, with Maxwell and Pascal, NVIDIA has gone to a tiled method of rasterization.  This is a significant departure for the company considering they have utilized the same basic immediate mode rasterization model since the 90s.


The Videologic Apocalypse 3Dx based on the PowerVR PCX2.

(photo courtesy of Wikipedia)

Tiling is an interesting subject and we can harken back to the PowerVR days to see where it was first implemented.  There are many advantages to tiling and deferred rendering when it comes to overall efficiency in power and memory bandwidth.  Those first TBDRs (Tile Based Deferred Renderers) offered great performance per clock and could utilize slower memory as compared to other offerings of the day (namely Voodoo Graphics).  There were some significant drawbacks to the technology, however.  Essentially a lot of work had to be done by the CPU and driver in scene setup and geometry sorting.  On fast CPU systems the PowerVR boards could provide very good performance, but they suffered on lower end parts as compared to the competition.  This is a very simple explanation of what was going on, but the long and short of it is that TBDR did not take over the world due to limitations in its initial implementations.  Traditional immediate mode rasterizers would improve in efficiency and performance with aggressive Z checks and other optimizations that borrowed from the TBDR playbook.

Tiling is also present in a lot of mobile parts.  Imagination’s PowerVR graphics technologies have been licensed by companies such as Intel, Apple, and Mediatek.  Qualcomm (Adreno) and ARM (Mali) both implement tiler technologies to improve power consumption and performance while increasing bandwidth efficiency.  Perhaps most interestingly, we can remember back to the Gigapixel days with the GP-1 chip, which implemented a tiling method that seemed to work very well without the CPU hit and driver overhead that had plagued the PowerVR chips up to that point.  3dfx bought Gigapixel for some $150 million at the time, went on to file for bankruptcy a year later, and its IP was acquired by NVIDIA.


Screenshot of the program used to uncover the tiling behavior of the rasterizer.

It now appears as though NVIDIA has evolved their raster units to embrace tiling.  This is not a full TBDR implementation, but rather an immediate mode tiler that still breaks up the scene into tiles but does not implement deferred rendering.  This change should improve bandwidth efficiency when it comes to rasterization, but it does not affect the rest of the graphics pipeline by forcing it to be deferred (tessellation, geometry setup and shaders, etc. are not impacted).  NVIDIA has not done a deep dive on this change for editors, so we do not know the exact implementation or what advantages we can expect.  We can only look at the evidence we have and speculate where those advantages exist.
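
To visualize what an immediate-mode tiler does, here is a loose conceptual sketch (our own illustration of how tile binning works in general, not NVIDIA's undisclosed implementation): triangles are still processed in submission order, but they are binned to screen-space tiles so each tile's raster traffic can stay in on-chip memory until the tile is flushed.

```typescript
// Loose conceptual sketch of an immediate-mode tiler (NOT NVIDIA's actual
// implementation, which has not been disclosed). Geometry is still processed
// in submission order, but rasterization is grouped by screen-space tile so
// each tile's pixels stay on chip until the tile is written out once.

interface Triangle { minX: number; minY: number; maxX: number; maxY: number; }

const TILE = 16; // tile edge in pixels (the size here is an assumption)

function binTriangles(tris: Triangle[], screenW: number): Map<number, Triangle[]> {
  const tilesX = Math.ceil(screenW / TILE);
  const bins = new Map<number, Triangle[]>();
  for (const t of tris) {
    // A triangle lands in every tile its bounding box overlaps.
    for (let ty = Math.floor(t.minY / TILE); ty <= Math.floor(t.maxY / TILE); ty++) {
      for (let tx = Math.floor(t.minX / TILE); tx <= Math.floor(t.maxX / TILE); tx++) {
        const key = ty * tilesX + tx;
        if (!bins.has(key)) bins.set(key, []);
        bins.get(key)!.push(t);
      }
    }
  }
  return bins;
}

// Each tile is then rasterized against a small on-chip buffer and written to
// the frame buffer once, instead of scattering reads/writes across DRAM.
```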

The video where David Kanter explains his findings


Bandwidth and Power

Tilers have typically taken the tiled regions and buffered them on the chip.  This is a big improvement in both performance and power efficiency as the raster data does not have to be cached and written out to the frame buffer and then swapped back.  This makes quite a bit of sense considering the overall lack of big jumps in memory technologies over the past five years.  We have had GDDR-5 since 2007/2008.  The speeds have increased over time, but the basic technology is still much the same.  We have seen HBM introduced with AMD’s Fury series, but large scale production of HBM 2 is still to come.  Samsung has released small amounts of HBM 2 to the market, but not nearly enough to handle the needs of a mass produced card.  GDDR-5X is an extension of GDDR-5 that does offer more bandwidth, but it is still not a next generation memory technology like HBM 2.

By utilizing a tiler, NVIDIA is able to lower memory bandwidth needs for the rasterization stage. Considering that both the Maxwell and Pascal architectures rely on GDDR-5 and GDDR-5X technologies, it makes sense to save as much bandwidth as possible where they can.  This is again probably one reason, among many, that we saw a much larger L2 cache in Maxwell vs. Kepler (2048KB vs. 256KB respectively).  Every little bit helps when we are looking at hard, real world bandwidth limits for a modern GPU.

The area of power efficiency has also come up in discussion of the move to a tiler.  Tilers have traditionally been more power efficient due to how the raster data is tiled and cached, requiring fewer reads and writes to main memory.  The first impulse is to say, “Hey, this is the reason why NVIDIA’s Maxwell was so much more power efficient than Kepler and AMD’s latest parts!”  Sadly, this is not exactly true.  The tiler is more power efficient, but it is only a small part of the power savings on a GPU.


The second fastest Pascal based card...

A modern GPU is very complex.  There are some 7.2 billion transistors on the latest Pascal GP104 that powers the GTX 1080.  The vast majority of those transistors are implemented in the shader units of the chip.  While the raster units are very important, they are but a fraction of that transistor budget, with the rest taken up by power regulation, PCI-E controllers, and memory controllers.  In the big scheme of things the raster portion is going to be dwarfed in power consumption by the shader units.  This does not mean that it is not important, though.  Going back to the hated car analogy, one does not achieve weight savings by focusing on one aspect alone.  It requires going over every single part of the car and shaving ounces here and there, in the end achieving significant savings by addressing every single piece of a complex product.

This does appear to be the long and short of it.  This is one piece of a very complex ASIC that improves upon memory bandwidth utilization and power efficiency.  It is not the whole story, but it is an important part.  I find it interesting that NVIDIA did not disclose this change to editors with the introduction of Maxwell and Pascal, but if it is transparent to users and developers alike then there is no need.  There is a lot of “secret sauce” that goes into each architecture, and this is merely one aspect.  The one question that I do have is how much of the technology is based upon the Gigapixel IP that 3dfx bought at such a premium?  I believe that particular tiler was an immediate mode renderer as well, as it did not have as many driver and overhead issues as PowerVR exhibited back in the day.  Obviously it would not be a copy/paste of technology developed back in the 90s, but it would be interesting to see if it was the basis for this current implementation.

Subject: Systems, Mobile
Manufacturer: Dell

Introduction and Specifications

Dell's premium XPS notebook family includes both 15-inch and 13-inch variants, which ship with the latest 6th-generation Intel Skylake processors and all of the latest hardware. But the screens are what will grab your immediate attention: bright, rich, and with the narrowest bezels on any notebook, courtesy of Dell's InfinityEdge displays.


Since Ryan’s review of the XPS 13 (now his daily driver), Dell has added the XPS 15, the smallest 15-inch notebook design you will find anywhere. The XPS 13 is already "the smallest 13-inch laptop on the planet", according to Dell, giving their XPS series a significant advantage in the ultrabook market. The secret is in the bezel, or lack thereof, which allows Dell to squeeze these notebooks into much smaller physical dimensions than you might expect given their display sizes.

But you get more than just a compact size with these XPS notebooks, as the overall quality of the machines rivals anything else you will find; these may just be the best Windows notebooks you can buy right now. Is this simply bluster? Notebooks, like smartphones, are personal things. They need to conform to the user to provide a great experience, and there are obviously many different kinds of users to satisfy. Ultimately, however, Dell has produced what could easily be described as class leaders with these machines.


Continue reading our review of the Dell XPS 13 and 15 notebooks!!

Subject: Storage
Manufacturer: DeepSpar

Introduction, Packaging, and Internals


Being a bit of a storage nut, I have run into my share of failed and/or corrupted hard drives over the years. I have therefore used many different data recovery tools to try to get that data back when needed. Thankfully, I now employ a backup strategy that should minimize the need for such a tool, but there will always be instances of fresh data on a drive that went down before a recent backup took place or a neighbor or friend that did not have a backup.

I’ve got a few data recovery pieces in the cooker, but this one will be focusing on ‘physical data recovery’ from drives with physically damaged or degraded sectors and/or heads. I’m not talking about so-called ‘logical data recovery’, where the drive is physically fine but has suffered some corruption that makes the data inaccessible by normal means (undelete programs also fall into this category). There are plenty of ‘hard drive recovery’ apps out there, and most if not all of them claim seemingly miraculous results on your physically failing hard drive. While there are absolutely success stories out there (most plastered all over testimonial pages at those respective sites), one must take those with an appropriate grain of salt. Someone who just got their data back with a <$100 program is going to be very vocal about it, while those who had their drive permanently fail during the process are likely to go cry quietly in a corner while saving up for a clean-room capable service to repair their drive and attempt to get their stuff back. I'll focus more on the exact issues with using software tools for hardware problems later in this article, but for now, surely there has to be some way to attempt these first few steps of data recovery without resorting to software tools that can potentially cause more damage?


Well now there is. Enter the RapidSpar, made by DeepSpar, who hope this little box can bridge the gap between dedicated data recovery operations and home users risking software-based hardware recoveries. DeepSpar is best known for making advanced tools used by big data recovery operations, so they know a thing or two about this stuff. I could go on and on here, but I’m going to save that for after the intro page. For now let’s get into what comes in the box.

Note: In this video, I read the MFT prior to performing RapidNebula Analysis. It's optimal to reverse those steps. More on that later in this article.

Read on for our full review of the RapidSpar!

Subject: Storage
Manufacturer: Angelbird

Cool your jets


Introduction to the Angelbird Wings PX1

PCIe-based M.2 storage has been one of the more exciting topics in the PC hardware market during the past year. With tremendous performance packed into a small design no larger than a stick of chewing gum, PCIe M.2 SSDs open up new levels of storage performance and flexibility for both mobile and desktop computing. But these tiny, powerful drives can heat up significantly under load, to the point where thermal performance throttling was a critical concern when the drives first began to hit the market.

While thermal throttling is less of a concern for the latest generation of NVMe M.2 SSDs, Austrian SSD and accessories firm Angelbird wants to squash any possibility of performance-killing heat with its Wings line of PCIe SSD adapters. The company's first Wings-branded product is the PX1, an x4 PCIe adapter that houses an M.2 SSD in a custom-designed heatsink.


Angelbird claims that its aluminum-coated copper-core heatsink design can lower the operating temperature of hot M.2 SSDs like the Samsung 950 Pro, thereby preventing thermal throttling. But at a list price of $75, this potential protection doesn't come cheap. We set out to test the PX1's design to see if Angelbird's claims about reduced temperatures and increased performance hold true.

PX1 Design & Installation

PC Perspective's Allyn Malventano was impressed with the build quality of Angelbird's products when he reviewed its "wrk" series of SSDs in late 2014. Our initial impression of the PX1 revealed that Angelbird hasn't lost a step in that regard during the intervening years.


The PX1 features an attractive black design and removable heatsink, which is affixed to the PCB via six hex screws. A single M-key M.2 port resides in the center of the adapter, with mounting holes to accommodate 2230, 2242, 2260, 2280, and 22110-length drives.

Continue reading our review of the Angelbird Wings PX1 Heatsink PCIe Adapter!

Manufacturer: XSPC

Introduction and Technical Specifications



Courtesy of XSPC


Courtesy of XSPC


Courtesy of XSPC

XSPC is a well established name in the enthusiast cooling market, offering a wide range of custom cooling components and kits. Their newest CPU waterblock, the Raystorm Pro, offers a new look and an optimized design in comparison to their last generation Raystorm CPU waterblock. The block features an all-copper design with a dual metal / acrylic hold-down plate for illumination around the outside edge of the block. The Raystorm Pro is compatible with all current CPU sockets with the correct mounting kit.

Continue reading our review of the XSPC Raystorm Pro CPU waterblock!

Subject: General Tech
Manufacturer: Microsoft

Make Sure You Understand Before the Deadline

I'm fairly sure that any of our readers who want Windows 10 have already gone through the process to get it, and the rest have made it their mission to block it at all costs (or they don't use Windows).


Regardless, there has been quite a bit of misunderstanding over the last couple of years, so it's better to explain it now than a week from now. Upgrading to Windows 10 will not destroy your original Windows 7 or Windows 8.x license. What you are doing is using that license to register your machine with Windows 10, which Microsoft will create a digital entitlement for. That digital entitlement will be good “for the supported lifetime of the Windows 10-enabled device”.

There are three misconceptions that keep recurring from the above paragraph.

First, “the supported lifetime of the Windows 10-enabled device” doesn't mean that Microsoft will deactivate Windows 10 on you. Instead, it apparently means that Microsoft will continue to update Windows 10, and require that users will keep the OS somewhat up to date (especially the Home edition). If an old or weird piece of hardware or software in your device becomes incompatible with that update, even if it is critical for the device to function, then Microsoft is allowing itself to shrug and say “that sucks”. There's plenty of room for legitimate complaints about this, and Microsoft's recent pattern of weakened QA and support, but the specific complaint that Microsoft is just trying to charge you down the line? False.

Second, even though I already stated it earlier in this post, I want to be clear: you can still go back to Windows 7 or Windows 8.x. Microsoft is granting the Windows 10 license for the Windows 7 or Windows 8.x device in addition to the original Windows 7 or Windows 8.x license granted to it. The upgrade process even leaves the old OS on your drive for a month, allowing the user to roll back through a recovery process. I've heard people say that, occasionally, this process can screw a few things up. It's a good idea to manage your own backup before upgrading, and/or plan on re-installing Windows 7 or 8.x the old fashioned way.

This brings us to the third misconception: you can re-install Windows 10 later!

If you upgrade to Windows 10, decide that you're better with Windows 7 or 8.x for a while, but decide to upgrade again in a few years, then your machine (assuming the hardware didn't change enough to look like a new device) will still use that Windows 10 entitlement that was granted to you on your first, free upgrade. You will need to download the current Windows 10 image from Microsoft's website, but, when you install it, you should be able to just input an empty license key (if they still ask for it by that point) and Windows 10 will pull down validation from your old activation.

If you have decided to avoid Windows 10, but based that decision on the above three, incorrect points? You now have the tools to make an informed decision before time runs out. Upgrading to Windows 10 (update: wait until it verifies that it has successfully activated!) and rolling back is annoying, and it could be a hassle if it doesn't go cleanly (or if you go super-safe and back up ahead of time), but it might save you some money in the future.

On the other hand, if you don't want Windows 10, and never want Windows 10, then Microsoft will apparently stop asking Windows 7 and Windows 8.x users starting on the 29th, give or take.


Introduction and Features



SFX form factor cases and power supplies continue to grow in popularity and in market share. As one of the original manufacturers of SFX power supplies, SilverStone Technology Co. is meeting demand with new products; continuing to raise the bar in the SFX power supply arena with the introduction of their new SX700-LPT unit.

(SX=SFX Form Factor, 700=700W, L=Lengthened, PT=Platinum certified)

SilverStone has a long-standing reputation for providing a full line of high quality enclosures, power supplies, cooling components, and accessories for PC enthusiasts. With a continued focus on smaller physical size and support for small form-factor enthusiasts, SilverStone added the new SX700-LPT to their SFX form factor series. There are now seven power supplies in the SFX Series, ranging in output capacity from 300W to 700W. The SX700-LPT is the second SFX unit to feature a lengthened chassis. The SX700-LPT enclosure is 30mm (1.2”) longer than a standard SFX chassis, which allows using a quieter 120mm cooling fan rather than the typical 80mm fan used in most SFX power supplies.

The new SX700-LPT power supply was designed for small form factor cases but it can also be used in place of a standard ATX power supply (in small cases) with an optional mounting bracket. In addition to its small size, the SX700-LPT features high efficiency (80 Plus Platinum certified), all modular flat ribbon-style cables, and provides up to 700W of continuous DC output (750W peak). The SX700-LPT also operates in semi-fanless mode and incorporates a very quiet 120mm cooling fan.


SilverStone SX700-LPT PSU Key Features:

•    Small Form Factor (SFX-L) design
•    700W continuous power output rated for 24/7 operation
•    Very quiet with semi-fanless operation
•    120mm cooling fan optimized for low noise
•    80 Plus Platinum certified for high efficiency
•    Powerful single +12V rail with 58.4A capacity
•    All-modular, flat ribbon-style cables
•    High quality construction with all Japanese capacitors
•    Strict ±3% voltage regulation and low AC ripple and noise
•    Support for high-end GPUs with four PCI-E 8/6-pin connectors
•    Safety Protections: OCP, OVP, UVP, SCP, OTP, and OPP

Please continue reading our review of the SilverStone SX700-LPT PSU!!!

Manufacturer: AMD

Introduction: Rethinking the Stock Cooler

AMD's Wraith cooler was introduced at CES this January, and has been available with select processors from AMD for a few months. We've now had a chance to put one of these impressive-looking CPU coolers through its paces on the test bench to see how much it improves on the previous model, and see if aftermarket cooling is necessary with AMD's flagship parts anymore.


While a switch in the bundled stock cooler might not seem very compelling, the fact that AMD has put effort into improving this aspect of their retail CPU offering is notable. AMD processors already present a great value relative to Intel's offerings for gaming and desktop productivity, but the stock coolers have to this point warranted a replacement.

Intel went the other direction with the current generation of enthusiast processors, as CPUs such as my Core i5-6600k no longer ship with a cooler of any kind. If AMD has upgraded the stock CPU cooler to the point that it now cools efficiently without significant noise, this will save buyers a little more cash when planning an upgrade, which is always a good thing.


The previous AMD stock cooler (left) and the AMD Wraith cooler (right)

A quick search for "Wraith" on Amazon yields retail-box products like the A10-7890K APU, and the FX-8370 CPU; options which have generally required an aftermarket cooler for the highest performance. In this review we’ll take a close look at the results with the previous cooler and the Wraith, and throw in results from the most popular aftermarket cooler of them all; the Cooler Master Hyper 212 EVO.

Continue reading our review of the AMD Wraith CPU Cooler!!


Yes, We're Writing About a Forum Post

Update - July 19th @ 7:15pm EDT: Well that was fast. Futuremark published their statement today. I haven't read it through yet, but there's no reason to wait to link it until I do.

Update 2 - July 20th @ 6:50pm EDT: We interviewed Jani Joki, Futuremark's Director of Engineering, on our YouTube page. The interview is embedded just below this update.

Original post below

The comments of a previous post notified us of a forum thread whose author claims that 3DMark's implementation of asynchronous compute is designed to show NVIDIA in the best possible light. At the end of the linked post, they note that asynchronous compute is a general blanket term, and that we should better understand what is actually going on.


So, before we address the controversy, let's actually explain what asynchronous compute is. The main problem is that it actually is a broad term. Asynchronous compute could describe any optimization that allows tasks to execute when it is most convenient, rather than just blindly doing them in a row.

I will use JavaScript as a metaphor. In this language, you can assign tasks to be executed asynchronously by passing functions as parameters. This allows events to execute code when it is convenient. JavaScript, however, is still only single threaded (without Web Workers and newer technologies). It cannot run callbacks from multiple events simultaneously, even if you have an available core on your CPU. What it does, however, is allow the browser to manage its time better. Many events can be delayed until the browser renders the page, until it performs other high-priority tasks, or until the asynchronous code has everything it needs, like assets that are loaded from the internet.
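
A minimal example of that single-threaded asynchronous behavior (in TypeScript, with setTimeout standing in for a slow event source like a network request):

```typescript
// Single-threaded asynchronous scheduling, JavaScript/TypeScript style:
// callbacks run when the (one) thread is free, not in parallel.

console.log("kick off request");

// setTimeout stands in for a network request or other slow event source.
setTimeout(() => {
  console.log("callback runs later, when the thread is idle");
}, 0);

// This synchronous work finishes first; the callback above waits its turn
// in the event loop even though it was "ready" immediately.
for (let i = 0; i < 3; i++) {
  console.log("synchronous work", i);
}
```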


This is asynchronous computing.

However, if JavaScript were designed differently, it would have been possible to run callbacks on any available thread, not just the main thread when available. Again, JavaScript is not designed this way, but this is where I pull the analogy back to AMD's Asynchronous Compute Engines. In an ideal situation, a graphics driver will be able to see all the functionality that a task will require and shove that work down to an already-busy GPU, provided the specific resources the task requires are not fully utilized by the existing work.
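
As a toy model of that idea (purely illustrative; the unit names and scheduling rule here are our own invention, not AMD's actual hardware logic):

```typescript
// Rough analogy for async compute (a conceptual model, not a driver API):
// a task is dispatched alongside existing work only if the units it needs
// are not already saturated.

type Unit = "shaders" | "copyEngine" | "rasterizer";

interface Task { name: string; needs: Unit[]; }

const busy = new Set<Unit>(["rasterizer"]); // graphics work is using the rasterizer

function tryDispatch(task: Task): boolean {
  if (task.needs.some((u) => busy.has(u))) return false; // would contend, so it waits
  task.needs.forEach((u) => busy.add(u));                // occupy the idle units
  console.log(`dispatched ${task.name} alongside existing work`);
  return true;
}

tryDispatch({ name: "physics compute", needs: ["shaders"] }); // runs concurrently
tryDispatch({ name: "shadow pass", needs: ["rasterizer"] });  // has to wait
```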

Read on to see how this is being implemented, and what the controversy is.

Manufacturer: NVIDIA

GP106 Specifications

Twelve days ago, NVIDIA announced its competitor to the AMD Radeon RX 480, the GeForce GTX 1060, based on a new Pascal GPU: GP106. Though that story was just a brief preview of the product, and a pictorial of the GTX 1060 Founders Edition card we were initially sent, it set the community ablaze with discussion around which mainstream enthusiast platform was going to be the best for gamers this summer.

Today we are allowed to show you our full review: benchmarks of the new GeForce GTX 1060 against the likes of the Radeon RX 480, the GTX 970 and GTX 980, and more. Starting at $250, the GTX 1060 has the potential to be the best bargain in the market today, though much of that will be decided based on product availability and our results on the following pages.

Does NVIDIA’s third consumer product based on Pascal make enough of an impact to dissuade gamers from buying into AMD Polaris?


All signs point to a bloody battle this July and August, and retail cards based on the GTX 1060 are making their way to our offices sooner than even those based around the RX 480 did. It is those cards, and not the reference/Founders Edition option, that will be the real competition that AMD has to go up against.

First, however, it’s important to find our baseline: where does the GeForce GTX 1060 find itself in the wide range of GPUs?

Continue reading our review of the GeForce GTX 1060 6GB graphics card!!

Manufacturer: RIOTORO

Introduction and First Impressions

A newcomer in the PC enclosure space, RIOTORO has a lineup of unique-looking products to offer in a market flooded with options at every price-point. With this full-tower PRISM CR1280 enclosure the company says that they are providing not just a home for your components, but "the world’s 1st fully RGB case with unparalleled personalization options".


Clearly, RGB lighting has been one of the biggest trends in PC hardware for the past year or so, and if you are so inclined the PRISM CR1280 promises fully customizable color with lighted accents on the front of the case, and included RGB intake fans.


Beyond the RGB lighting, however, the PRISM CR1280 has a rather unusual industrial design. There is angular black plastic over a steel body, and a large edge-to-edge side panel window (not to mention those bare aluminum feet). It looks like a premium enclosure, and it’s certainly priced like one with an MSRP of $169.99 (selling for $149 currently). Is it worth it? Read on to find out!


Continue reading our review of the Riotoro Prism CR1280 Enclosure!!

Manufacturer: Futuremark

Through the looking glass

Futuremark has been the most consistent and most utilized benchmark company for PCs for quite a long time. While other companies have faltered and faded, Futuremark continues to push forward with new benchmarks and capabilities in an attempt to maintain a modern way to compare performance across platforms with standardized tests.

Back in March of 2015, 3DMark added support for an API Overhead test to help gamers and editors understand the performance advantages of Mantle and DirectX 12 compared to existing APIs. Though the results were purely “peak theoretical” numbers, the data helped showcase to consumers and developers what low-level APIs brought to the table.


Today Futuremark is releasing a new benchmark that focuses on DX12 gaming. No longer just a feature test, Time Spy is a fully baked benchmark with its own rendering engine and scenarios for evaluating the performance of graphics cards and platforms. It requires Windows 10 and a DX12-capable graphics card, and includes two different graphics tests and a CPU test. Oh, and of course, there is a stunningly gorgeous demo mode to go along with it.

I’m not going to spend much time here dissecting the benchmark itself, but it does make sense to have an idea of what kind of technologies are built into the game engine and tests. The engine is based purely on DX12, and integrates technologies like asynchronous compute, explicit multi-adapter and multi-threaded workloads. These are highly topical ideas and will be the focus of my testing today.

Futuremark provides an interesting diagram to demonstrate the advantages DX12 has over DX11. Below you will find a listing of the average number of vertices, triangles, patches and shader calls in 3DMark Fire Strike compared with 3DMark Time Spy.


It’s not even close here – the new Time Spy engine issues more than ten times the number of processing calls for some of these items. As Futuremark states, however, this kind of capability isn’t free.

With DirectX 12, developers can significantly improve the multi-thread scaling and hardware utilization of their titles. But it requires a considerable amount of graphics expertise and memory-level programming skill. The programming investment is significant and must be considered from the start of a project.

Continue reading our look at 3DMark Time Spy Asynchronous Compute performance!!

Subject: Storage
Manufacturer: Samsung

Introduction, Specifications, and Packaging


Everyone expects SSD makers to keep pushing out higher and higher capacity SSDs, but the thing holding them back is whether there is sufficient market demand for that capacity. With that said, it appears Samsung has decided it was high time for a 4TB model of their 850 EVO. Today we will be looking at this huge capacity point, and paying close attention to any performance dips that can result from pushing a given SSD controller / architecture to extreme capacities.


This new 4TB model benefits from the higher density of Samsung’s 48-layer V-NAND. We performed a side-by-side comparison of 32-layer and 48-layer products back in March, and found the newer flash brought Latency Percentile profiles closer to those of the MLC-equipped Pro model than the 32-layer (TLC) EVO managed:


Latency Percentile showing reduced latency of Samsung’s new 48-layer V-NAND

We’ll be looking into all of this in today’s review, along with trying our hand at some new mixed paced workload testing, so let’s get to it!

Read on for our full review of the Samsung 850 EVO 4TB SATA SSD!

Manufacturer: Primochill

Introduction and Technical Specifications



Courtesy of Primochill

The Praxis WetBench open-air test bench is the newest version of Primochill's test bench line of cases. The updated version of the WetBench features a dual steel and acrylic-based design, offering a stronger base than the original. The acrylic accents give the test bench a unique and compelling aesthetic, offered in over 20 different configurations. The open design and quick-remove panels allow for easy access to the motherboard and PCIe cards without the hassle of removing case panels and mounting screws associated with a motherboard change-out in a typical case. With a starting MSRP of $184.99, the Praxis WetBench is competitively priced when compared with other test bench solutions.


Courtesy of Primochill


Courtesy of Primochill

Like its predecessor, the Praxis WetBench is unique in its design - built to support custom water cooling solutions from the ground up and re-engineered with a stronger structure for added support. Primochill designed the Praxis so that the water cooling kit's radiator mounts to the back panel, with support for up to a 280mm or 360mm radiator (or 2 x 140mm or 3 x 120mm fans). The back panel is designed to allow for radiator mounting to the inside or outside of the panel surface.

Continue reading our review of the Primochill Praxis WetBench kit!

Manufacturer: AMD

Radeon Software 16.7.1 Adjustments

Last week we posted a story that looked at a problem found with the new AMD Radeon RX 480 graphics card’s power consumption. The short version of the issue was that AMD’s new Polaris 10-based reference card was drawing more power than its stated 150 watt TDP, and that it was drawing more power through the motherboard PCI Express slot than the connection was rated for. And sometimes that added power draw was significant, both at stock settings and overclocked. Seeing current draw over a connection rated at just 5.5A peaking over 7A at stock settings raised an alarm (validly), and our initial report detailed the problem very specifically.

AMD responded initially that “everything was fine here” but the company eventually saw the writing on the wall and started to work on potential solutions. The Radeon RX 480 is a very important product for the future of Radeon graphics and this was a launch that needed to be as perfect as it could be. Though the risk to users’ hardware from the higher than expected current draw is muted somewhat by motherboard-based over-current protection, it’s crazy to think that AMD actually believed that was an acceptable scenario. Depending on the “circuit breaker” in any system to save you, when standards exist for exactly that purpose, is nuts.


Today AMD has released a new driver, version 16.7.1, that actually introduces a pair of fixes for the problem. One of them is hard coded into the software and adjusts power draw from the different +12V sources (PCI Express slot and 6-pin connector) while the other is an optional flag in the software that is disabled by default.

Reconfiguring the power phase controller

The Radeon RX 480 uses a very common power controller (IR3567B) on its PCB to cycle through the 6 power phases providing electricity to the GPU itself. Allyn did some simple multimeter trace work to tell us which phases were connected to which sources and the result is seen below.


The power controller is responsible for pacing the power coming in from the PCI Express slot and the 6-pin power connection to the GPU, in phases. Phases 1-3 come in from the power supply via the 6-pin connection, while phases 4-6 source power from the motherboard directly. At launch, the RX 480 drew nearly identical amounts of power from both the PEG slot and the 6-pin connection, essentially giving each of the 6 phases at work equal time.
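
Some quick back-of-the-envelope math shows why that even split is a problem (a sketch that assumes the +12V rail carries essentially all of the load):

```typescript
// Why an even phase split is a problem: convert the PEG slot's +12V current
// limit to watts and compare against half of the card's power draw.
const RAIL_VOLTS = 12;
const PEG_12V_AMP_LIMIT = 5.5; // PCI Express spec for +12V through the slot

const pegLimitWatts = RAIL_VOLTS * PEG_12V_AMP_LIMIT; // 66 W
const evenSplitWatts = 150 / 2;                       // 75 W per source at the 150 W TDP
const observedPeakWatts = RAIL_VOLTS * 7;             // 84 W at the 7 A peaks we measured

console.log({ pegLimitWatts, evenSplitWatts, observedPeakWatts });
// { pegLimitWatts: 66, evenSplitWatts: 75, observedPeakWatts: 84 }
```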

That might seem okay, but it’s far from the standard of what we have seen in the past. In no other case have we measured a graphics card drawing equal power from the PEG slot as from an external power connector on the card. (Obviously for cards without external power connections, that’s a different discussion.) In general, with other AMD and NVIDIA based graphics cards, the motherboard slot would provide no more than 50-60 watts of power, while any above that would come from the 6/8-pin connections on the card. In many cases I saw that power draw through the PEG slot was as low as 20-30 watts if the external power connections provided a lot of overage for the target TDP of the product.

Continue reading our analysis of the new AMD 16.7.1 driver that fixed RX 480 power concerns!!

Manufacturer: NVIDIA

GP106 Preview

It’s probably not going to come as a surprise to anyone that reads the internet, but NVIDIA is officially taking the covers off its latest GeForce card in the Pascal family today, the GeForce GTX 1060. As the number scheme would suggest, this is a more budget-friendly version of NVIDIA’s latest architecture, lowering performance in line with expectations. The GP106-based GPU will still offer impressive specifications and capabilities and will probably push AMD’s new Radeon RX 480 to its limits.


Let’s take a quick look at the card’s details.

|   | GTX 1060 | RX 480 | R9 390 | R9 380 | GTX 980 | GTX 970 | GTX 960 | R9 Nano | GTX 1070 |
|---|---|---|---|---|---|---|---|---|---|
| GPU | GP106 | Polaris 10 | Grenada | Tonga | GM204 | GM204 | GM206 | Fiji XT | GP104 |
| GPU Cores | 1280 | 2304 | 2560 | 1792 | 2048 | 1664 | 1024 | 4096 | 1920 |
| Rated Clock | 1506 MHz | 1120 MHz | 1000 MHz | 970 MHz | 1126 MHz | 1050 MHz | 1126 MHz | up to 1000 MHz | 1506 MHz |
| Texture Units | 80 (?) | 144 | 160 | 112 | 128 | 104 | 64 | 256 | 120 |
| ROP Units | 48 (?) | 32 | 64 | 32 | 64 | 56 | 32 | 64 | 64 |
| Memory | 6GB | 4GB/8GB | 8GB | 4GB | 4GB | 4GB | 2GB | 4GB | 8GB |
| Memory Clock | 8000 MHz | 7000 MHz / 8000 MHz | 6000 MHz | 5700 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 8000 MHz |
| Memory Interface | 192-bit | 256-bit | 512-bit | 256-bit | 256-bit | 256-bit | 128-bit | 4096-bit (HBM) | 256-bit |
| Memory Bandwidth | 192 GB/s | 224 GB/s / 256 GB/s | 384 GB/s | 182.4 GB/s | 224 GB/s | 196 GB/s | 112 GB/s | 512 GB/s | 256 GB/s |
| TDP | 120 watts | 150 watts | 275 watts | 190 watts | 165 watts | 145 watts | 120 watts | 175 watts | 150 watts |
| Peak Compute | 3.85 TFLOPS | 5.1 TFLOPS | 5.1 TFLOPS | 3.48 TFLOPS | 4.61 TFLOPS | 3.4 TFLOPS | 2.3 TFLOPS | 8.19 TFLOPS | 5.7 TFLOPS |
| Transistor Count | ? | 5.7B | 6.2B | 5.0B | 5.2B | 5.2B | 2.94B | 8.9B | 7.2B |
| Process Tech | 16nm | 14nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm | 16nm |
| MSRP (current) | $249 | $199 | $299 | $199 | $379 | $329 | $279 | $499 | $379 |

The GeForce GTX 1060 will sport 1280 CUDA cores with a GPU Boost clock speed rated at 1.7 GHz. Though the card will be available in only 6GB varieties, the reference / Founders Edition will ship with 6GB of GDDR5 memory running at 8.0 GHz / 8 Gbps. With 1280 CUDA cores, the GP106 GPU is essentially one half of a GP104 in terms of compute capability. NVIDIA decided not to cut the memory interface in half though, instead going with a 192-bit design compared to the GP104 and its 256-bit option.

The rated GPU clock speeds paint an interesting picture for peak performance of the new card. At the rated boost clock speed, the GeForce GTX 1070 produces 6.46 TFLOPS of performance. The GTX 1060 by comparison will hit 4.35 TFLOPS, a 48% difference. The GTX 1080 offers nearly the same performance delta above the GTX 1070; clearly NVIDIA has settled on the scaling it wants between Pascal products.

NVIDIA wants us to compare the new GeForce GTX 1060 to the GeForce GTX 980 in gaming performance, but the peak theoretical performance results don’t really match up. The GeForce GTX 980 is rated at 4.61 TFLOPS at BASE clock speed, while the GTX 1060 doesn’t hit that number at its Boost clock. Obviously Pascal improves on performance with memory compression advancements, but the 192-bit memory bus is only able to run at 192 GB/s, compared to the 224 GB/s of the GTX 980. Obviously we’ll have to wait for performance results from our own testing to be sure, but it seems possible that NVIDIA’s performance claims might depend on technologies like Simultaneous Multi-Projection and VR gaming to be validated.
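
For those keeping score, the same cores × clock × 2 arithmetic reproduces the numbers above, using the rated clocks as inputs:

```typescript
// Peak FP32 TFLOPS = cores × clock (GHz) × 2 (one FMA per core per clock).
const tflops = (cores: number, clockGHz: number) => (cores * clockGHz * 2) / 1000;

const gtx1070 = tflops(1920, 1.683); // ~6.46 TFLOPS at its rated boost clock
const gtx1060 = tflops(1280, 1.700); // ~4.35 TFLOPS at the rated 1.7 GHz boost
const gtx980  = tflops(2048, 1.126); // ~4.61 TFLOPS at its BASE clock

console.log(gtx1070.toFixed(2), gtx1060.toFixed(2), gtx980.toFixed(2)); // 6.46 4.35 4.61
console.log(((gtx1070 / gtx1060 - 1) * 100).toFixed(0) + "% gap");      // ~48%
// Bandwidth works the same way: 192-bit / 8 × 8 Gbps = 192 GB/s for the GTX 1060.
```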

Continue reading our preview of the new NVIDIA GeForce GTX 1060!!

Subject: Storage
Manufacturer: Micron
Tagged: U.2, ssd, pro, pcie, NVMe, micron, MAX, HHHL, 9100

Introduction, Specifications and Packaging


It's been too long since we took a look at enterprise SSDs here at PC Perspective, so it's high time we get back to it! The delay has stemmed from some low-level re-engineering of our test suite to unlock some really cool QoS and Latency Percentile possibilities involving PACED workloads. We've also done a lot of work to distill hundreds of hours of test results into fewer yet more meaningful charts. More on that as we get into the article. For now, let's focus on today's test subject:


Behold the Micron 9100 MAX Series. Inside that unassuming 2.5" U.2 enclosure sits 4TB of flash and over 4GB of DRAM. It's capable of 3 GB/s reads, 2 GB/s writes, and 750,000 IOPS. All from inside that little silver box! There's not a lot more to say here because nobody is going to read much past that 3/4 MILLION IOPS figure I just slipped in, so I'll just get into the rest of the article now :).



The 9100s come in two flavors and two form factors. The MAX series (1.2TB and 2.4TB in the above list) comes with very high levels of performance and endurance, while the PRO series comes with lower overprovisioning, enabling higher capacity points for a given flash loadout (800GB, 1.6TB, 3.2TB). Those five different capacity / performance points are available in both PCIe (HHHL) and U.2 (2.5") form factors, making for 10 total available SKUs. All products are PCIe 3.0 x4, using NVMe as their protocol. They should all be bootable on systems capable of UEFI/NVMe BIOS enumeration.
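
The capacity gap between MAX and PRO is just overprovisioning arithmetic. Assuming the same raw flash loadout behind both SKUs (the 2.4TB MAX reviewed here carries 4TB of raw flash), a quick sketch:

```typescript
// Overprovisioning (OP) = spare raw flash beyond user-visible capacity,
// expressed relative to the usable space. More OP -> more endurance and
// more consistent performance; less OP -> more sellable capacity.
function opPercent(rawTB: number, usableTB: number): number {
  return ((rawTB - usableTB) / usableTB) * 100;
}

// Same assumed 4TB raw flash loadout behind both SKUs:
console.log(opPercent(4.0, 2.4).toFixed(0) + "% OP"); // 9100 MAX 2.4TB: ~67% OP
console.log(opPercent(4.0, 3.2).toFixed(0) + "% OP"); // 9100 PRO 3.2TB: ~25% OP
```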

Idle power consumption is a respectable 7W, while active consumption is selectable in 20W, 25W, and 'unlimited' settings. While >25W operation technically exceeds the PCIe specification for non-GPU devices, we know that the physical slot is capable of 75W for GPUs, so why can't SSDs have some more fun too! That said, even in unlimited mode, the 9100s should still stick relatively close to 25W, and in our testing did not exceed 29W at any workload. Detailed power testing is coming in future enterprise articles, but for now, the extent will be what was measured and noted in this paragraph.


Our 9100 MAX samples came only in anti-static bags, so no fancy packaging to show here. Enterprise parts typically come in white/brown boxes with little flair.

Read on for our full review of the Micron 9100 MAX 2.4TB U.2 Enterprise SSD!

Subject: General Tech
Manufacturer: Thrustmaster

The New Corinthian Leather?

I really do not know what happened to me, but I used to hate racing games.  I mean, really hate them.  I played old, old racing games on Atari.  I had some of the first ones available on PC.  They did not appeal to me in the least.  Instant buyer’s remorse for the most part.  Then something strange happened.  3D graphics technology changed that opinion.  Not only did hardware accelerated 3D help me get over my dislike, but the improvements in physical simulations also allowed a greater depth of experience.  Throw in getting my first force feedback device and NFS: Porsche Unleashed and I was hooked from then on out.


The front of the box shows the lovely Ferrari 599XX supercar with the wheel in the foreground.

The itch to improve the driving experience only grows as time goes on.  More and more flashy looking titles are released, some of which actually improve upon the simulation with complex physics rewrites, all of which consume more horsepower from the CPU and GPU.  This then leads to more hardware upgrades.  The next thing a person knows they are ordering multiple monitors so they can just experience racing in Surround/Eyefinity (probably the best overall usage for the technology).

One bad thing about having a passion for something is that the itch to improve the experience never goes away.  DiRT 2 inspired me to purchase my first FFB wheel, the TM Ferrari F420 model.  Several games later, my disappointment with the F420’s 270 degrees of steering had me pursue my next purchase: a TX F458 Ferrari Edition racing wheel.  This featured the TX base, the stock/plastic Ferrari wheel, and the two pedal set.  This was a tremendous upgrade from the older TM F420, and the move to 900 degrees of rotation and far better FFB effects made a huge difference.  Not only that, but the TX platform could be upgraded.  The gate leading to madness was now open.

The TX base can fit a variety of 2 and 3 pedal systems, but the big push is towards the actual wheel itself.  Thrustmaster has several products that fit the base and feature materials such as plastic, rubber, and leather.  These products go from $120 on up to around $150.  They comprise three GT style wheels and one F1 wheel.  All of them look pretty interesting and are a big step up from the bundled F458 replica that comes standard with the TX set.


The rear shows the rim itself at actual size.

I honestly had not thought about upgrading to any of these units as I was pleased with the feel and performance of the stock wheel.  It seemed to fit my needs.  Then it happened.  Thrustmaster announced the Ferrari 599XX EVO wheel with honest-to-goodness Alcantara™ construction.  The more I read about this wheel, the more I wanted it.  The only problem in my mind was that it is priced at a rather dramatic $179.  I had purchased the entire TX F458 setup on sale for only $280 some months before!  Was the purchase of the 599XX worth it?  Would it dramatically change my gaming experience?  I guess there is only one way to find out.  I hid the credit card statement and told my wife, “Hey, look what I got in for review!”

Click here to read the entire Thrustmaster 599XX EVO Alcantara Edition Wheel Review!

Subject: Systems, Mobile
Manufacturer: Lenovo

Introduction and Specifications

Lenovo made quite a splash with the introduction of the original X1 Carbon notebook in 2012; with its ultra-thin, ultra-light, carbon fiber-infused construction, it became the flagship ThinkPad notebook. Fast-forward to late 2013 and the introduction of the ThinkPad Yoga, the business version of the previous year's consumer Yoga 2-in-1. The 360-degree hinge was novel for a business machine at the time, and the ThinkPad Yoga had a lot of promise, though it was far from perfect.


Now we fast-forward again, to the present day. It's 2016, and Lenovo has merged their ThinkPad X1 Carbon and ThinkPad Yoga together to create the X1 Yoga. This new notebook integrates the company's Yoga design (in appearance this is akin to the recent ThinkPad Yoga 260/460 revision) into the flagship ThinkPad X lineup, and provides what Lenovo is calling "the world's lightest 14-inch business 2-in-1".

Yoga and Carbon Merge

When Lenovo announced the marriage of the X1 Carbon notebook with the ThinkPad Yoga, I took notice. As a buyer of the original ThinkPad Yoga S1 (with which I had a love/hate relationship), I wondered if the new X1 version of the business-oriented Yoga convertible would win me over. On paper it checks all the right boxes, and the slim new design looks great. I couldn't wait to get my hands on one for some real-world testing, and to see if my complaints about the original TP Yoga design were still valid.


As one would expect from a notebook carrying Lenovo’s ThinkPad X1 branding, this new Yoga is quite slim, and made from lightweight materials. Comparing this new Yoga to the X1 Carbon directly, the most obvious difference is that 360° hinge, which is the hallmark of the Yoga series, and exclusive to those Lenovo designs. This hinge allows the X1 Yoga to be used as a notebook, tablet, or any other imaginable position in between.


|   | Lenovo ThinkPad X1 Yoga (base configuration, as reviewed) |
|---|---|
| Processor | Intel Core i5-6200U (Skylake) |
| Graphics | Intel HD Graphics 520 |
| Memory | 8GB LPDDR3-1866 |
| Screen | 14-in 1920x1080 IPS Touch (with digitizer, active pen) |
| Storage | 256GB M.2 SSD |
| Camera | 720p / Digital Array Microphone |
| Wireless | Intel 8260 802.11ac + BT 4.1 (Dual Band, 2x2) |
| Connections | OneLink+, Mini DisplayPort, 3x USB 3.0, Audio combo jack |
| Dimensions | 333mm x 229mm x 16.8mm (13.11" x 9.01" x 0.66") |
| Weight | 2.8 lbs. (1270 g) |
| OS | Windows 10 Pro |
| Price | $1349 |

Continue reading our review of the Lenovo ThinkPad X1 Yoga Notebook!!