Introduction, How PCM Works, Reading, Writing, and Tweaks

I’ve seen a bit of flawed logic floating around in discussions about 3D XPoint technology. Some are directly comparing the cost per die to NAND flash (you can’t - 3D XPoint likely has fewer fab steps than NAND, especially when compared with 3D NAND). Others are repeating a bunch of terminology and element names without taking the time to actually explain how it works, and far too many folks out there can't even pronounce it correctly (it's pronounced 'cross-point'). My plan is to address as much of the confusion as I can with this article, and I hope you walk away understanding how XPoint and its underlying technologies (most likely) work. While we do not have absolute confirmation of the precise material compositions, there is a significant amount of evidence pointing to one particular set of technologies. With Optane Memory now out in the wild and purchasable by folks wielding electron microscopes and mass spectrometers, I have seen enough additional information to assume XPoint is, in fact, PCM-based.

XPoint.png

XPoint memory. Note the shape of the cell/selector structure. This will be significant later.

While we were initially told at the XPoint announcement event Q&A that the technology was not phase change based, there is overwhelming evidence to the contrary, and it is likely that Intel did not want to let the cat out of the bag too early. The funny thing about that is that both Intel and Micron were briefing on PCM-based memory developments five years earlier, and nearly everything about those briefings lines up perfectly with what appears to have ended up in the XPoint that we have today.

comparison.png

Some die-level performance characteristics of various memory types. source

The above figures were sourced from a 2011 paper and may be a bit dated, but they do a good job putting some actual numbers with the die-level performance of the various solid state memory technologies. We can also see where the ~1000x speed and ~1000x endurance comparisons with XPoint to NAND Flash came from. Now, of course, those performance characteristics do not directly translate to the performance of a complete SSD package containing those dies. Controller overhead and management must take their respective cuts, as is shown with the performance of the first generation XPoint SSD we saw come out of Intel:

gap.png

The ‘bridging the gap’ Latency Percentile graph from our Intel SSD DC P4800X review.
(The P4800X comes in at 10us above).

There have been a few very vocal folks out there chanting 'not good enough', without the basic understanding that the first publicly available iteration of a new technology never represents its ultimate performance capabilities. It took NAND flash decades to make it into usable SSDs, and another decade before climbing to the performance levels we enjoy today. Time will tell if this holds true for XPoint, but given Micron's demos and our own observed performance of Intel's P4800X and Optane Memory SSDs, I'd argue that it is most certainly off to a good start!

XPoint Die.jpg

A 3D XPoint die, submitted for your viewing pleasure (click for larger version).

You want to know how this stuff works, right? Read on to find out!

Author:
Manufacturer: AMD

We are up to two...

UPDATE (5/31/2017): Crystal Dynamics was able to get back to us with a couple of points on the changes that were made with this patch to affect the performance of AMD Ryzen processors.

  1. Rise of the Tomb Raider splits rendering tasks to run on different threads. By tuning the size of those tasks – breaking some up, allowing multicore CPUs to contribute in more cases, and combining some others, to reduce overheads in the scheduler – the game can more efficiently exploit extra threads on the host CPU.
     
  2. An optimization was identified in texture management that improves the combination of AMD CPU and NVIDIA GPU.  Overhead was reduced by packing texture descriptor uploads into larger chunks.

There you have it, a bit more detail on the software changes made to help adapt the game engine to AMD's Ryzen architecture. Not only that, but it does confirm our information that there was slightly MORE to address in the Ryzen+GeForce combinations.

END UPDATE

Despite a couple of growing pains out of the gate, the Ryzen processor launch appears to have been a success for AMD. Both the Ryzen 7 and the Ryzen 5 releases proved to be very competitive with Intel’s dominant CPUs in the market and took significant leads in areas of massive multi-threading and performance per dollar. An area that AMD has struggled in though has been 1080p gaming – performance in those instances on both Ryzen 7 and 5 processors fell behind comparable Intel parts by (sometimes) significant margins.

Our team continues to watch the story to see how AMD and game developers work through the issue. Most recently I posted a look at the memory latency differences between Ryzen and Intel Core processors. As it turns out, the memory latency differences are a significant part of the initial problem for AMD:

Because of this, I think it is fair to claim that some, if not most, of the 1080p gaming performance deficits we have seen with AMD Ryzen processors are a result of this particular memory system intricacy. You can combine memory latency with the thread-to-thread communication issue we discussed previously into one overall system level complication: the Zen memory system behaves differently than anything we have seen prior and it currently suffers in a couple of specific areas because of it.

In that story I detailed our coverage of the Ryzen processor and its gaming performance succinctly:

Our team has done quite a bit of research and testing on this topic. This included a detailed look at the first asserted reason for the performance gap, the Windows 10 scheduler. Our summary there was that the scheduler was working as expected and that minimal difference was seen when moving between different power modes. We also talked directly with AMD to find out its then-current stance on the results, backing up our claims on the scheduler and presenting a better outlook for gaming going forward. When AMD wanted to test a new custom Windows 10 power profile to help improve performance in some cases, we took part in that too. In late March we saw the first gaming performance update occur courtesy of Ashes of the Singularity: Escalation, where an engine update to utilize more threads resulted in as much as a 31% increase in average frame rate.

Hot on the heels of the Ryzen 7 release, AMD worked with the developer Oxide on the Ashes of the Singularity: Escalation engine. Through tweaks and optimizations, the game was able to showcase as much as a 30% increase in average frame rate on the integrated benchmark. While this was only a single use case, it does prove that through work with developers, AMD has the ability to improve the 1080p gaming positioning of Ryzen against Intel.

rotr-screen4-small.jpg

Fast forward to today, and I was surprised to find a new patch (Patch #12, v1.0.770.1) for Rise of the Tomb Raider, a game that was actually one of the worst-case scenarios for AMD with Ryzen. The patch notes mention the following:

The following changes are included in this patch

- Fix certain DX12 crashes reported by users on the forums.

- Improve DX12 performance across a variety of hardware, in CPU bound situations. Especially performance on AMD Ryzen CPUs can be significantly improved.

While we expect this patch to be an improvement for everyone, if you do have trouble with this patch and prefer to stay on the old version we made a Beta available on Steam, build 767.2, which can be used to switch back to the previous version.

We will keep monitoring for feedback and will release further patches as it seems required. We always welcome your feedback!

Obviously the data point that stood out for me was the improved DX12 performance “in CPU bound situations. Especially on AMD Ryzen CPUs…”

Remember how the situation appeared in April?

rotr.png

The Ryzen 7 1800X was 24% slower than the Intel Core i7-7700K – a dramatic difference for a processor that should only have been ~8-10% slower in single threaded workloads.

How does this new patch to RoTR affect performance? We tested it on the same Ryzen 7 1800X benchmark platform from previous testing, including the ASUS Crosshair VI Hero motherboard, 16GB of DDR4-2400 memory, and a GeForce GTX 1080 Founders Edition using the 378.78 driver. All testing was done under the DX12 code path.

tr-1.png

tr-2.png

The Ryzen 7 1800X score jumps from 107 FPS to 126.44 FPS, an increase of roughly 18%! That is a significant boost in performance at 1080p while still running at the Very High image quality preset, indicating that the developer (and likely AMD) was able to find substantial inefficiencies in the engine. For comparison, the 8-core / 16-thread Intel Core i7-6900K only sees a 2.4% increase from this new game revision. This tells us that the changes to the game were specific to Ryzen processors and their design, but that no performance was taken away from the Intel platforms.
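As a quick sanity check on percentage gains like these, the math is simply the frame rate delta over the baseline. A minimal sketch using the figures from our testing above:

```python
# Percentage change between two average frame rates.
def pct_change(before, after):
    return (after - before) / before * 100

# Ryzen 7 1800X in RoTR, pre-patch vs. patched benchmark runs
gain = pct_change(107, 126.44)
print(f"Ryzen 7 1800X: +{gain:.1f}%")  # roughly +18%
```

The same helper applied to the earlier April numbers confirms the ~24% deficit the 1800X showed against the Core i7-7700K before the patch.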

Continue reading our look at the new Rise of the Tomb Raider patch for Ryzen!

Author:
Manufacturer: Intel

An abundance of new processors

During its press conference at Computex 2017, Intel officially announced the upcoming release of an entire new family of HEDT (high-end desktop) processors, along with a new chipset and platform to power it. Though it has only been a year since Intel launched the Core i7-6950X, a Broadwell-E processor with 10 cores and 20 threads, it feels like it has been much longer than that. At the time, Intel was accused of “sitting” on the market – offering only slight performance upgrades and raising prices on the segment, with a flagship CPU cost of $1700. With what can only be described as a scathing press circuit, coupled with a revived and aggressive competitor in AMD and its Ryzen product line, Intel and its executive teams have decided it’s time to take the enthusiast and high-end prosumer markets seriously once again.

slides-3.jpg

Though the company doesn’t want to admit to anything publicly, it seems obvious that Intel feels threatened by the release of the Ryzen 7 product line. The Ryzen 7 1800X was launched at $499 and offered 8 cores and 16 threads of processing, competing well in most tests against the likes of the Intel Core i7-6900K that sold for over $1000. Adding to the pressure was the announcement at AMD’s Financial Analyst Day that a new brand of processors called Threadripper would be coming this summer, offering up to 16 cores and 32 threads of processing for that same high-end consumer market. Even without pricing, clocks, or availability timeframes, it was clear that AMD was going to come after this HEDT market with a rebranded derivative of its EPYC server processors, just as Intel does with Xeon.

The New Processors

Normally I would jump into the new platform, technologies and features added to the processors, or something like that before giving you the goods on the CPU specifications, but that’s not the mood we are in. Instead, let’s start with the table of nine (9!!) new products and work backwards.

|  | Core i9-7980XE | Core i9-7960X | Core i9-7940X | Core i9-7920X | Core i9-7900X | Core i7-7820X | Core i7-7800X | Core i7-7740X | Core i5-7640X |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Architecture | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Kaby Lake-X | Kaby Lake-X |
| Process Tech | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ |
| Cores/Threads | 18/36 | 16/32 | 14/28 | 12/24 | 10/20 | 8/16 | 6/12 | 4/8 | 4/4 |
| Base Clock | ? | ? | ? | ? | 3.3 GHz | 3.6 GHz | 3.5 GHz | 4.3 GHz | 4.0 GHz |
| Turbo Boost 2.0 | ? | ? | ? | ? | 4.3 GHz | 4.3 GHz | 4.0 GHz | 4.5 GHz | 4.2 GHz |
| Turbo Boost Max 3.0 | ? | ? | ? | ? | 4.5 GHz | 4.5 GHz | N/A | N/A | N/A |
| Cache | 16.5MB (?) | 16.5MB (?) | 16.5MB (?) | 16.5MB (?) | 13.75MB | 11MB | 8.25MB | 8MB | 6MB |
| Memory Support | ? | ? | ? | ? | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Dual Channel | DDR4-2666 Dual Channel |
| PCIe Lanes | ? | ? | ? | ? | 44 | 28 | 28 | 16 | 16 |
| TDP | 165 watts (?) | 165 watts (?) | 165 watts (?) | 165 watts (?) | 140 watts | 140 watts | 140 watts | 112 watts | 112 watts |
| Socket | 2066 | 2066 | 2066 | 2066 | 2066 | 2066 | 2066 | 2066 | 2066 |
| Price | $1999 | $1699 | $1399 | $1199 | $999 | $599 | $389 | $339 | $242 |

There is a lot to take in here. The most interesting points are that Intel plans to one-up AMD's Threadripper by offering an 18-core processor, but it also wants to change the perception of the X299-class platform by offering lower-priced, lower-core-count CPUs like the quad-core, non-HyperThreaded Core i5-7640X. We also see the first-ever use of the Core i9 branding.

Intel only provided detailed specifications up to the Core i9-7900X, a 10-core / 20-thread processor with a base clock of 3.3 GHz and a Turbo peak of 4.5 GHz using the new Turbo Boost Max Technology 3.0. It sports 13.75MB of cache thanks to an updated cache configuration, includes 44 lanes of PCIe 3.0 (an increase of 4 lanes over Broadwell-E), quad-channel DDR4 memory up to 2666 MHz, and a 140 watt TDP. The new LGA2066 socket will be utilized. Pricing for this CPU is set at $999, which is interesting for a couple of reasons. First, it is $700 less than the starting MSRP of the 10c/20t Core i7-6950X from one year ago; obviously a big plus. However, there is quite a ways to go UP the stack, with the 18c/36t Core i9-7980XE coming in at a cool $1999.
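To put the pricing in perspective, here's a quick back-of-the-envelope look at dollars per physical core for the SKUs with published prices (a sketch using the table's list prices, not an official Intel comparison):

```python
# Dollars per physical core, from the spec table's core counts and prices.
lineup = {
    "Core i9-7980XE": (18, 1999),
    "Core i9-7900X":  (10, 999),
    "Core i7-7820X":  (8, 599),
    "Core i7-7800X":  (6, 389),
    "Core i5-7640X":  (4, 242),
}

for name, (cores, price) in lineup.items():
    print(f"{name}: ${price / cores:.2f} per core")
```

Even the flagship 7980XE lands near $111 per core, versus roughly $170 per core for last year's $1700, 10-core Core i7-6950X.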

intel1.jpg

The next CPU down the stack is compelling as well. The Core i7-7820X is the new 8-core / 16-thread HEDT option from Intel, with similar clock speeds to the 10-core above it, save the higher base clock. It has 11MB of L3 cache and 28 lanes of PCI Express (4 more than Broadwell-E), but carries a $599 price tag. Compared to the 8-core 6900K, that is ~$400 lower, while the new Skylake-X part includes a 700 MHz clock speed advantage. That’s huge, and it is a direct attack on the AMD Ryzen 7 1800X that sells for $499 today and cut Intel off at the knees this March. In fact, the base clock of the Core i7-7820X is only 100 MHz lower than the maximum Turbo Boost clock of the Core i7-6900K!

Continue reading about the Intel Core i9 series announcement!

Author:
Manufacturer: ARM

ARM Refreshes All the Things

This past April, ARM invited us to visit Cambridge, England, so they could discuss their plans for the next year.  Quite a bit has changed for the company since our last ARM Tech Day in 2016.  They were acquired by SoftBank, but continue to essentially operate as their own company.  They now have access to more funds, are less risk-averse, and have a greater ability to expand in the ever-growing mobile and IoT marketplaces.

dynamiq_01.png

The ARM of today is certainly quite different from what we knew 10 years ago, when we saw their technology used in the first iPhone.  The company back then had good technology, but a relatively small head count.  They kept pace with the industry, but were not nearly as aggressive as other chip companies in some areas.  Through the past 10 years they have grown not only in numbers, but in the technologies that they have constantly expanded upon.  The company became more PR savvy and communicated more effectively with the press and, ultimately, their primary users.  Where once ARM would announce new products and not expect to see shipping products for upwards of 3 years, we are now seeing the company be much more aggressive with their designs and getting them out to their partners, so that production happens in months rather than years.

Several days of meetings and presentations left us a bit overwhelmed by what ARM is bringing to market towards the end of 2017 and, most likely, the beginning of 2018.  On the surface it appears that ARM has only done a refresh of the CPU and GPU products, but once we start looking at these products in the greater scheme and how they interact with DynamIQ, we see that ARM has changed the mobile computing landscape dramatically.  This new computing concept allows greater performance, flexibility, and efficiency in designs.  Partners will have far more control over these licensed products to create more value and differentiation as compared to years past.

dynamiq_02.png

We have previously covered DynamIQ at PCPer this past March.  ARM wanted to seed that concept before they jumped into more discussions on their latest CPUs and GPUs.  Previous Cortex products cannot be used with DynamIQ.  To leverage that technology we must have new CPU designs.  In this article we are covering the Cortex-A55 and Cortex-A75.  These two new CPUs on the surface look more like a refresh, but when we dig in we see that some massive changes have been wrought throughout.  ARM has taken the concepts of the previous A53 and A73 and expanded upon them fairly dramatically, not only to work with DynamIQ but also by removing significant bottlenecks that have impeded theoretical performance.

Continue reading our overview of the new family of ARM CPUs and GPU!

Subject: Motherboards
Manufacturer: MSI

Introduction and Technical Specifications

Introduction

02-board.jpg

Courtesy of MSI

The MSI Z270 Gaming Pro Carbon board features a black PCB with carbon fiber overlay covering the board's heat sinks and rear panel cover. MSI also liberally sprinkled RGB LED-enabled components across the board's surface and under the board for an interesting ground effects type look. The board is designed around the Intel Z270 chipset with built-in support for the latest Intel LGA1151 Kaby Lake processor line (as well as support for Skylake processors) and Dual Channel DDR4 memory running at 2400 MHz. The Z270 Gaming Pro Carbon can be found at retail with an MSRP of $174.99.

03-board-profile.jpg

Courtesy of MSI

04-board-flyapart.jpg

Courtesy of MSI

MSI integrated the following features into the Z270 Gaming Pro Carbon motherboard: six SATA III 6Gbps ports; two M.2 PCIe Gen3 x4 32Gbps capable ports with Intel Optane support built-in; an RJ-45 Intel I219-V Gigabit NIC; three PCI-Express x16 slots; three PCI-Express x1 slots; a Realtek ALC1220 8-Channel audio subsystem; integrated DVI-D and HDMI video ports; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.

05-military-class-v.jpg

Courtesy of MSI

To power the Z270 Gaming Pro Carbon motherboard, MSI integrated a 10-phase (8+2) digital power delivery system dubbed Military Class V. The Military Class V components include Titanium chokes, 10-year-rated Dark capacitors, and Dark chokes.

06-rear-panel-flyapart.jpg

Courtesy of MSI

Continue reading our preview of the MSI Z270 Gaming Pro Carbon motherboard!

Manufacturer: Alphacool

Introduction and Specifications

Alphacool's Eisbaer is a line of pre-assembled liquid CPU coolers using standard parts that add quick-release connections to make adding components to the loop simple. Today we'll have a look at the 360 mm and 280 mm versions of the Eisbaer and see what kind of performance you can expect from an all-in-one solution from a respected brand in custom liquid cooling.

DSC_0203.jpg

"With the Alphacool “Eisbaer”, we’re offering an extremely quiet high-performance cooler for every CPU on the market currently. A closed water cooling system that’s easy to install and can be easily and safely expanded with its quick-lock closure."

Not all AiO liquid coolers are created equal, of course, with different materials and approaches, and there are generally tradeoffs to be made between design and pricing. The best-performing units can have pumps and fans that produce more noise than a high-performance air solution, while some liquid coolers manage to balance noise and performance in a way that makes liquid a far more attractive option - especially when overclocking a CPU.

DSC_0205.jpg

True to the premium nature of a product line like this, Alphacool has incorporated first-rate components into the Eisbaer series, including all-copper radiators, high-performance fans, and touches like anti-kink springs for the hoses. The ability to easily add a GPU to the loop with the quick-lock closure (Alphacool offers a line of GPU products called "Eiswolf" that connect with these quick-lock closures) is a nice plus, and the use of standard G1/4 fittings ensures compatibility with custom parts for future expansion/modification.

DSC_0100.jpg

The Eisbaer 360 spending some quality time on the test bench

Continue reading our review of the Alphacool Eisbaer 360 and 280 liquid CPU coolers!

Manufacturer: The Khronos Group

The Right People to Interview

Last week, we reported that OpenCL’s roadmap would be merging into Vulkan, and OpenCL would, starting at some unspecified time in the future, be based “on an extended version of the Vulkan API”. This was based on quotes from several emails between myself and the Khronos Group.

Since that post, I had the opportunity to have a phone interview with Neil Trevett, president of the Khronos Group and chairman of the OpenCL working group, and Tom Olson, chairman of the Vulkan working group. We spent a little over a half hour going over Neil’s International Workshop on OpenCL (IWOCL) presentation, discussing the decision, and answering a few lingering questions. This post will present the results of that conference call in a clean, readable way.

khronos-officiallogo-Vulkan_500px_Dec16.png

First and foremost, while OpenCL is planning to merge into the Vulkan API, the Khronos Group wants to make it clear that “all of the merging” is coming from the OpenCL working group. The Vulkan API roadmap is not affected by this decision. Of course, the Vulkan working group will be able to take advantage of technologies that are dropping into their lap, but those discussions have not even begun yet.

Neil: Vulkan has its mission and its roadmap, and it’s going ahead on that. OpenCL is doing all of the merging. We’re kind-of coming in to head in the Vulkan direction.

Does that mean, in the future, that there’s a bigger wealth of opportunity to figure out how we can take advantage of all this kind of mutual work? The answer is yes, but we haven’t started those discussions yet. I’m actually excited to have those discussions, and are many people, but that’s a clarity. We haven’t started yet on how Vulkan, itself, is changed (if at all) by this. So that’s kind-of the clarity that I think is important for everyone out there trying to understand what’s going on.

Tom also prepared an opening statement. It’s not as easy to abbreviate, so it’s here unabridged.

Tom: I think that’s fair. From the Vulkan point of view, the way the working group thinks about this is that Vulkan is an abstract machine, or at least there’s an abstract machine underlying it. We have a programming language for it, called SPIR-V, and we have an interface controlling it, called the API. And that machine, in its full glory… it’s a GPU, basically, and it’s got lots of graphics functionality. But you don’t have to use that. And the API and the programming language are very general. And you can build lots of things with them. So it’s great, from our point of view, that the OpenCL group, with their special expertise, can use that and leverage that. That’s terrific, and we’re fully behind it, and we’ll help them all we can. We do have our own constituency to serve, which is the high-performance game developer first and foremost, and we are going to continue to serve them as our main mission.

So we’re not changing our roadmap so much as trying to make sure we’re a good platform for other functionality to be built on.

Neil then went on to mention that the decision to merge OpenCL’s roadmap into the Vulkan API took place only a couple of weeks ago. The purpose of the press release was to reach OpenCL developers and get their feedback. According to him, they did a show of hands at the conference, with a room full of a hundred OpenCL developers, and no-one was against moving to the Vulkan API. This gives them confidence that developers will accept the decision, and that their needs will be served by it.

Next up is the why. Read on for more.

Author:
Subject: Storage
Manufacturer: Sony
Tagged: ssd, ps4 pro, ps4, consoles

Intro and Upgrading the PS4 Pro Hard Drive

When Sony launched the PS4 Pro late last year, it introduced an unusual mid-cycle performance update to its latest console platform. But in addition to increased processing and graphics performance, Sony also addressed one of the original PS4's shortcomings: the storage bus.

The original, non-Pro PlayStation 4 utilized a SATA II bus, capping speeds at 3Gb/s. This was more than adequate for keeping up with the console's stock hard drive, but those who elected to take advantage of Sony's user-upgradeable storage policy and install an SSD faced the prospect of a storage bus bottleneck. As we saw in our original look at upgrading the PS4 Pro with a solid state drive, the SSD brought some performance improvements in terms of load times, but these improvements weren't always as impressive as we might expect.

ps4-pro-ssd-vs-hdd.jpg

We therefore set out to see what performance improvements, if any, could be gained by the inclusion of SATA III in the PS4 Pro, and if this new Pro model makes a stronger case for users to shell out even more cash for a high capacity solid state drive. We weren't the only ones interested in this test. Digital Foundry conducted their own tests of the PS4 Pro's SATA III interface. They found that while a solid state drive in the PS4 Pro clearly outperformed the stock hard drive in the original PS4, it generally didn't offer much improvement over the SATA II-bottlenecked SSD in the original PS4, or even, in some cases, the stock HDD in the PS4 Pro.

ocz-trion-100.jpg

But we noticed a major issue with Digital Foundry's testing process. For their SSD tests, they used the OCZ Trion 100, an older SSD with relatively mediocre performance compared to its latest competitors. The Trion 100 also has a relatively low write endurance and we therefore don't know the condition and performance characteristics of Digital Foundry's drive.

samsung-850-evo-1tb.jpg

To address these issues, we conducted our tests with a brand new 1TB Samsung 850 EVO. While far from the cheapest, or even most reasonable option for a PS4 Pro upgrade, our aim is to assess the "best case scenario" when it comes to SSD performance via the PS4 Pro's SATA III bus.

Continue reading our analysis of PS4 Pro loading times with an SSD upgrade!

Author:
Subject: Systems, Mobile
Manufacturer: Apple

What have we here?

The latest iteration of the Apple MacBook Pro has been a polarizing topic for both Mac and PC enthusiasts. Replacing the aging Retina MacBook Pro introduced in 2012, the Apple MacBook Pro 13-inch with Touch Bar introduced late last year offered some radical design changes. After much debate (and a good Open Box deal), I decided to pick up one of these MacBooks to see if it could replace my 11" MacBook Air from 2013, which was certainly starting to show its age.

DSC02852.JPG

I'm sure that a lot of our readers, even if they aren't Mac users, are familiar with some of the major changes Apple made with this new MacBook Pro. One of the biggest changes comes when you take a look at the available connectivity on the machine. Gone are the ports you might expect, like USB Type-A, HDMI, and Mini DisplayPort. These have been replaced with four Thunderbolt 3 ports and a single 3.5mm headphone jack.

While it seems like USB-C (which is compatible with Thunderbolt 3) is eventually poised to take over the peripheral market, there are obvious issues with replacing all of the connectivity on a machine aimed at professionals with Type-C connectors. Currently, Type-C devices are few and far between, meaning you will have to rely on a series of dongles to connect the devices you already own.

DSC02860.JPG

I will say, however, that it ultimately hasn't been that much of an issue for me so far in the limited time that I've owned this MacBook. In order to evaluate how bad the dongle issue was, I only purchased a single, simple adapter with my MacBook, which provided a Type-A USB port and a pass-through Type-C port for charging.

Continue reading our look at using the MacBook Pro with Windows!

Author:
Manufacturer: Seasonic

Introduction and Features

Introduction

2-Prime-logo.jpg

Sea Sonic Electronics Co., Ltd has been designing and building PC power supplies since 1981 and they are one of the most highly respected manufacturers on the planet. Not only do they market power supplies under their own Seasonic name but they are the OEM for many other big name brands.

Seasonic began introducing their new PRIME Series power supplies last year, and we reviewed several of the flagship Titanium units, finding them to be among the best power supplies we have tested to date. The PRIME Series now includes many more units with both Platinum and Gold level efficiency certification.

3-Prime-compare.jpg

The power supply we have in for review is the Seasonic PRIME 1200W Gold. This unit comes with all modular cables and is certified to comply with the 80 Plus Gold criteria for high efficiency. The power supply is designed to deliver very tight voltage regulation on the three primary rails (+3.3V, +5V and +12V) and provides superior AC ripple and noise suppression. Add in a super-quiet 135mm cooling fan with a Fluid Dynamic Bearing, top-quality components and a 12-year warranty, and you have the makings for another outstanding power supply.

4a-Side.jpg

Seasonic PRIME Gold Series PSU Key Features:

•    650W, 750W, 850W, 1000W or 1200W continuous DC output
•    High efficiency, 80 PLUS Gold certified
•    Micro-Tolerance Load Regulation (MTLR)
•    Top-quality 135mm Fluid Dynamic Bearing fan
•    Premium Hybrid Fan Control (allows fanless operation at low power)
•    Superior AC ripple and noise suppression
•    Fully modular cabling design
•    Multi-GPU technologies supported
•    Gold-plated high-current terminals
•    Protections: OPP, OVP, UVP, SCP, OCP, and OTP
•    12-Year Manufacturer’s warranty
•    MSRP for the PRIME 1200W Gold is $199.90 USD

4b-Front-cables.jpg

Here is what Seasonic has to say about the new PRIME power supply line: “The creation of the PRIME Series is a renewed testimony of Sea Sonic’s determination to push the limits of power supply design in every aspect. This elegant-looking, exclusive lineup of new products will include 80 Plus Titanium – in the range of 650W to 1000W, and Platinum-Gold-rated units in the range of 650W to 1200W, with excellent electrical characteristics, top-level components and fully modular cabling.

Seasonic employs the most efficient manufacturing methods, uses the best materials and works with the most reliable suppliers to produce reliable products. The PRIME Series layout, revolutionary manufacturing solutions and solid design attest to the highest level of ingenuity of Seasonic’s engineers and product developers. Demonstrating confidence in its power supplies, Seasonic stands out in the industry by offering the PRIME Series a generous 12-year manufacturer’s warranty period.”

Please continue reading our review of the Seasonic PRIME 1200W Gold PSU!

Author:
Manufacturer: AMD

Is it time to buy that new GPU?

Testing commissioned by AMD. This means that AMD paid us for our time, but had no say in the results or presentation of them.

Earlier this week Bethesda and Arkane Studios released Prey, a first-person shooter that is a re-imagining of the 2006 game of the same name. Fans of System Shock will find a lot to love about this new title and I have found myself enamored with the game…in the name of science of course.

Prey-2017-05-06-15-52-04-16.jpg

While doing my due diligence and performing some preliminary testing to see if we would utilize Prey for graphics testing going forward, AMD approached me to discuss this exact title. With the release of the Radeon RX 580 in April, one of the key storylines is that the card offers a reasonably priced upgrade path for users of 2+ year old hardware. With that upgrade you should see some substantial performance improvements and as I will show you here, the new Prey is a perfect example of that.

Targeting the Radeon R9 380, a graphics card that was originally released back in May of 2015, the RX 580 offers substantially better performance at a very similar launch price. The same is true for the GeForce GTX 960: launched in January of 2015, it is slightly longer in the tooth. AMD’s data shows that 80% of the users on Steam are running R9 380X or slower graphics cards and that only 10% of them upgraded in 2016. Considering the great GPUs that were available then (including the RX 480 and the GTX 10-series), it seems more and more likely that we are going to hit an upgrade inflection point in the market.

slides-5.jpg

A simple experiment was set up: does the new Radeon RX 580 offer a worthwhile upgrade path for the many users of R9 380 or GTX 960 class graphics cards (or older)?

                   Radeon RX 580             Radeon R9 380   GeForce GTX 960
GPU                Polaris 20                Tonga Pro       GM206
GPU Cores          2304                      1792            1024
Rated Clock        1340 MHz                  918 MHz         1127 MHz
Memory             4GB / 8GB                 4GB             2GB / 4GB
Memory Interface   256-bit                   256-bit         128-bit
TDP                185 watts                 190 watts       120 watts
MSRP (at launch)   $199 (4GB) / $239 (8GB)   $219            $199

Continue reading our look at the Radeon RX 580 in Prey!

Subject: General Tech
Manufacturer: The Khronos Group

It Started with an OpenCL 2.2 Press Release

Update (May 18 @ 4pm EDT): A few comments across the internet believe that the statements from The Khronos Group were inaccurately worded, so I emailed them yet again. The OpenCL working group has released yet another statement:

OpenCL is announcing that their strategic direction is to support CL style computing on an extended version of the Vulkan API. The Vulkan group is agreeing to advise on the extensions.

In other words, this article was and is accurate. The Khronos Group are converging OpenCL and Vulkan into a single API: Vulkan. There was no misinterpretation.

Original post below

Earlier today, we published a news post about the finalized specifications for OpenCL 2.2 and SPIR-V 1.2. This was announced through a press release that also contained an odd little statement at the end of the third paragraph.

We are also working to converge with, and leverage, the Khronos Vulkan API — merging advanced graphics and compute into a single API.

khronos-vulkan-logo.png

This statement seems to suggest that OpenCL and Vulkan are expecting to merge into a single API for compute and graphics at some point in the future. This seemed like a huge announcement to bury that deep into the press blast, so I emailed The Khronos Group for confirmation (and any further statements). As it turns out, this interpretation is correct, and they provided a more explicit statement:

The OpenCL working group has taken the decision to converge its roadmap with Vulkan, and use Vulkan as the basis for the next generation of explicit compute APIs – this also provides the opportunity for the OpenCL roadmap to merge graphics and compute.

This statement adds a new claim: The Khronos Group plans to merge OpenCL into Vulkan, specifically, at some point in the future. Making the move in this direction, from OpenCL to Vulkan, makes sense for a handful of reasons, which I will highlight in my analysis, below.

Going Vulkan to Live Long and Prosper?

The first reason for merging OpenCL into Vulkan, from my perspective, is that Apple, who originally created OpenCL, still owns the trademarks (and some other rights) to it. The Khronos Group licenses these bits of IP from Apple. Vulkan, based on AMD’s donation of the Mantle API, should be easier to manage from the legal side of things.

khronos-2016-vulkan-why.png

The second reason for going in that direction is the actual structure of the APIs. When Mantle was announced, it looked a lot like an API that wrapped OpenCL with a graphics-specific layer. Also, Vulkan isn’t specifically limited to GPUs in its implementation.

Aside: When you create a device queue, you can query the driver to see what type of device it identifies as by reading its VkPhysicalDeviceType. Currently, as of Vulkan 1.0.49, the options are Other, Integrated GPU, Discrete GPU, Virtual GPU, and CPU. While this is just a clue, to make it easier to select a device for a given task, and isn’t useful to determine what the device is capable of, it should illustrate that other devices, like FPGAs, could support some subset of the API. It’s just up to the developer to check for features before they’re used, and target it at the devices they expect.
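To make the aside concrete, here is a sketch of that device-type check in C. The enum values mirror VkPhysicalDeviceType from the Vulkan headers, but device_type_name and pick_device are hypothetical helpers of my own; a real application would obtain the type by calling vkGetPhysicalDeviceProperties on each handle returned by vkEnumeratePhysicalDevices and reading the deviceType field of the properties struct.

```c
#include <string.h>

/* These values mirror VkPhysicalDeviceType in vulkan_core.h (Vulkan 1.0). */
enum {
    DEVICE_TYPE_OTHER          = 0, /* VK_PHYSICAL_DEVICE_TYPE_OTHER */
    DEVICE_TYPE_INTEGRATED_GPU = 1,
    DEVICE_TYPE_DISCRETE_GPU   = 2,
    DEVICE_TYPE_VIRTUAL_GPU    = 3,
    DEVICE_TYPE_CPU            = 4
};

/* Map a reported device type to a human-readable name (hypothetical helper). */
static const char *device_type_name(int type) {
    switch (type) {
        case DEVICE_TYPE_INTEGRATED_GPU: return "Integrated GPU";
        case DEVICE_TYPE_DISCRETE_GPU:   return "Discrete GPU";
        case DEVICE_TYPE_VIRTUAL_GPU:    return "Virtual GPU";
        case DEVICE_TYPE_CPU:            return "CPU";
        default:                         return "Other";
    }
}

/* A simple selection policy of the kind described above: prefer a discrete
   GPU, fall back to an integrated one, and return -1 if neither exists.
   The type alone is just a clue -- feature support must still be queried. */
static int pick_device(const int *types, int count) {
    int fallback = -1;
    for (int i = 0; i < count; i++) {
        if (types[i] == DEVICE_TYPE_DISCRETE_GPU) return i;
        if (types[i] == DEVICE_TYPE_INTEGRATED_GPU && fallback < 0)
            fallback = i;
    }
    return fallback;
}
```

The point of the sketch is that nothing in this selection logic assumes the device is a GPU at all; an FPGA driver could report itself as "Other" and still expose a usable subset of the API.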

If you were to go in the other direction, you would need to wedge graphics tasks into OpenCL. You would be creating Vulkan all over again. From my perspective, pushing OpenCL into Vulkan seems like the path of least resistance.

The third reason (that I can think of) is probably marketing. DirectX 12 isn’t attempting to seduce FPGA developers. Telling a game studio to program their engine on a new, souped-up OpenCL might make them break out in a cold sweat, even if both parties know that it’s an evolution of Vulkan with cross-pollination from OpenCL. OpenCL developers, on the other hand, are probably using the API because they need it, and are less likely to be shaken off.

What OpenCL Could Give Vulkan (and Vice Versa)

From the very outset, OpenCL and Vulkan have occupied similar spaces, but there are some things that OpenCL does “better”. The most obvious, and previously mentioned, element is that OpenCL supports a wide range of compute devices, such as FPGAs. That’s not the limit of what Vulkan can borrow, though, and it could make for an interesting landscape if FPGAs become commonplace in the coming years and decades.

khronos-SYCL_Color_Mar14_154_75.png

Personally, I wonder how SYCL could affect game engine development. This standard attempts to guide GPU- (and other device-) accelerated code into a single-source, C++ model. For over a decade, Tim Sweeney of Epic Games has talked about writing engines like he did back in the software-rendering era, but without giving up the ridiculous performance (and efficiency) provided by GPUs.

Long-time readers of PC Perspective might remember that I was investigating GPU-accelerated software rendering in WebCL (via Nokia’s implementation). The thought was that I could concede the raster performance of modern GPUs and make up for it with added control, the ability to explicitly target secondary compute devices, and the ability to run in a web browser. This took place in 2013, before AMD announced Mantle and browser vendors expressed a clear disinterest in exposing OpenCL through JavaScript. Seeing the idea was about to be crushed, I pulled out the GPU-accelerated audio ideas into a more-focused project, but that part of my history is irrelevant to this post.

The reason for bringing up this anecdote is because, if OpenCL is moving into Vulkan, and SYCL is still being developed, then it seems likely that SYCL will eventually port into Vulkan. If this is the case, then future game engines can gain benefits that I was striving toward without giving up access to fixed-function features, like hardware rasterization. If Vulkan comes to web browsers some day, it would literally prune off every advantage I was hoping to capture, and it would do so with a better implementation.

microsoft-2015-directx12-logo.jpg

More importantly, SYCL is something that Microsoft cannot provide with today’s DirectX.

Admittedly, it’s hard to think of something that OpenCL can acquire from Vulkan, besides just a lot more interest from potential developers. Vulkan was already somewhat of a subset of OpenCL that had graphics tasks (cleanly) integrated over top of it. On the other hand, OpenCL has been struggling to acquire mainstream support, so that could, in fact, be Vulkan’s greatest gift.

The Khronos Group has not provided a timeline for this change. It’s just a roadmap declaration.

Subject: General Tech
Manufacturer: YouTube TV

YouTube Tries Everything

Back in March, Google-owned YouTube announced a new live TV streaming service called YouTube TV to compete with the likes of Sling, DirecTV Now, PlayStation Vue, and upcoming offerings from Hulu, Amazon, and others. All these services aim to deliver curated bundles of channels to cord cutters, running over the top of customers’ internet-only connections as replacements for, or additions to, cable television subscriptions. YouTube TV is the latest entrant to this market, with the service currently available in only a handful of test markets, but it is off to a good start with a decent selection of content and features, including both broadcast and cable channels, on-demand media, and live and DVR viewing options. A responsive user interface and a generous number of family sharing options (six account logins and three simultaneous streams) will need to be balanced against the requirement to watch ads (even on some DVR’ed shows) and the $35 per month cost.

Get YouTube TV 1.jpg

YouTube TV launched in five cities, with more on the way. Fortunately, I live close enough to Chicago to be in-market and was able to test out Google’s streaming TV service. While not a full review, the following are my first impressions of YouTube TV.

Setup / Sign Up

YouTube TV is available with a one month free trial, after which you will be charged $35 a month. Sign up is a simple affair and can be started by going to tv.youtube.com or clicking the YouTube TV link from the “hamburger” menu on YouTube. If you are on a mobile device, YouTube TV uses a separate app from the default YouTube app and weighs in at 9.11 MB for the Android version. The sign up process is very simple. After verifying your location, the following screens show you the channels available in your market and give you the option of adding Showtime ($11) and/or Fox Soccer ($15) for additional monthly fees. After that, you are prompted for a payment method, which can be the one already linked to your Google account and used for app purchases and other subscriptions. As far as the free trial, I was not charged anything and there was no hold on my account for the $35. I like that Google makes it easy to see exactly how many days you have left on your trial and when you will be charged if you do not cancel. Further, the cancel link is not buried away and is intuitively found by clicking your account photo in the upper right > Personal > Membership. Google is doing things right here. After signup, a tour is offered to show you the various features, but you can skip this if you want to get right to it.

In my specific market, I have the following channels. When I first started testing, some of the channels were not available and were just added today. I hope to see more networks added, and if Google can manage that, YouTube TV and its $35/month price are going to shape up to be a great deal.

  • ABC 7, CBS 2, Fox 32, NBC 5, ESPN, CSN, CSN Plus, FS1, CW, USA, FX, Free Form, NBC SN, ESPN 2, FS2, Disney, E!, Bravo, Oxygen, BTN, SEC ESPN Network, ESPN News, CBS Sports, FXX, Syfy, Disney Junior, Disney XD, MSNBC, Fox News, CNBC, Fox Business, National Geographic, FXM, Sprout, Universal, Nat Geo Wild, Chiller, NBC Golf, YouTube Red Originals
  • Plus: AMC, BBC America, IFC, Sundance TV, We TV, Telemundo, and NBC Universal (just added).
  • Optional Add-Ons: Showtime and Fox Soccer.

I tested YouTube TV out on my Windows PCs and an Android phone. You can also watch YouTube TV on iOS devices, and on your TV using Android TV devices and Chromecasts (at the time of writing, Google will send you a free Chromecast after your first month). (See here for a full list of supported devices.) There are currently no Roku or Apple TV apps.

Get YouTube TV_full list.jpg

Each YouTube TV account can share out the subscription to 6 total logins, where each household member gets their own login and DVR library. Up to three people can be streaming TV at the same time. While out and about, I noticed that YouTube TV required me to turn on location services in order to use the app. Looking further into it, the YouTube TV FAQ states that you will need to verify your location in order to stream live TV, and will only be able to stream live TV if you are physically in the markets where YouTube TV has launched. You can watch your DVR shows anywhere in the US. However, if you are traveling internationally you will not be able to use YouTube TV at all (I’m not sure if VPNs will get around this or if YouTube TV blocks this like Netflix does). Users will need to log in from their home market at least once every 3 months to keep their account active and able to stream content (every month for MLB content).

YouTube TV verifying location in Chrome (left) and on the Android app (right).

On one hand, I can understand this was probably necessary in order for YouTube TV to negotiate a licensing deal, and their terms do seem pretty fair. I will have to do more testing on this as I wasn’t able to stream from the DVR without turning on location services on my Android – I can chalk this up to growing pains though and it may already be fixed.

Features & First Impressions

YouTube TV has an interface that is perhaps best described as a slimmed down YouTube that takes cues from Netflix (things like the horizontal scrolling of shows in categories). The main interface is broken down into three sections: Library, Home, and Live with the first screen you see when logging in being Home. You navigate by scrolling and clicking, and by pulling the menus up from the bottom while streaming TV like YouTube.

YouTube TV Home.jpg

Continue reading for my first impressions of YouTube TV!

Subject: Processors
Manufacturer: Various

Application Profiling Tells the Story

It should come as no surprise to anyone who has been paying attention the last two months that the latest AMD Ryzen processors and architecture are getting a lot of attention. Ryzen 7 launched with a $499 part that bested the Intel $1000 CPU at heavily threaded applications, and Ryzen 5 launched with great value as well, positioning a 6-core/12-thread CPU against quad-core parts from the competition. But part of the story that permeated through both the Ryzen 7 and the Ryzen 5 processor launches was the situation surrounding gaming performance, in particular 1080p gaming, and the surprising delta that we see in some games.

Our team has done quite a bit of research and testing on this topic. This included a detailed look at the first asserted reason for the performance gap, the Windows 10 scheduler. Our summary there was that the scheduler was working as expected and that minimal difference was seen when moving between different power modes. We also talked directly with AMD to find out its then-current stance on the results, backing up our claims on the scheduler and presenting a better outlook for gaming going forward. When AMD wanted to test a new custom Windows 10 power profile to help improve performance in some cases, we took part in that too. In late March we saw the first gaming performance update occur courtesy of Ashes of the Singularity: Escalation, where an engine update to utilize more threads resulted in as much as a 31% increase in average frame rate.

ping-amd.png

As a part of that dissection of the Windows 10 scheduler story, we also discovered interesting data about the CCX construction and how the two modules on the 1800X communicated. The result was significantly longer thread-to-thread latencies than we had seen on any platform before, and it was because of the fabric implementation that AMD integrated with the Zen architecture.

This has led me down another hole recently, wondering if we could further compartmentalize the gaming performance of the Ryzen processors using memory latency. As I showed in my Ryzen 5 review, memory frequency and throughput directly correlate to gaming performance improvements, on the order of 14% in some cases. But what about looking solely at memory latency alone?

Continue reading our analysis of memory latency, 1080p gaming, and how it impacts Ryzen!!

Introduction and Specifications

Fractal Design is well known in PC enthusiast circles for their excellent cases. They entered the self-contained liquid CPU cooler market in 2014 with the Kelvin, and today they are releasing a brand new cooler lineup called Celsius. Two models are being introduced, the 360 mm Celsius S36 and the 240 mm Celsius S24, the latter of which we have for review today.

DSC_0100.jpg

While on the surface this might appear to be a standard 240 mm all-in-one liquid CPU cooler, there are some key features that help to differentiate the Celsius lineup in an increasingly saturated market. The hoses (themselves flexible rubber in nice-looking sleeves) are attached at both ends with metal fittings, with those on the radiator side being the standard (and removable) G1/4 variety, and the fans connect via an unusual radiator-mounted header that receives power through a hidden fan cable in one of the sleeved hoses. Additionally, the Celsius coolers offer a dual-mode setting with the choice of automatic fan control or PWM passthrough from the motherboard, and this is controlled via a clever switch built into the trim ring around the pump.

DSC_0030.jpg

I have been impressed with the low noise of Fractal Design fans in the past, and I went into this review expecting a very quiet cooling experience. How did the Celsius S24 fare on the test bench? Read on to find out!

Continue reading our review of the Fractal Design Celsius S24 Liquid CPU Cooler!

Subject: Motherboards
Manufacturer: ASUS

Introduction and Technical Specifications

Introduction

02-board-all.jpg

Courtesy of ASUS

The Prime Z270-A motherboard is one of ASUS' initial offerings integrating the Intel Z270 chipset. The board features ASUS' Channel line aesthetics with a black PCB and white plastic accents. The Z270 chipset brings support for the latest Intel LGA1151 Kaby Lake processor line as well as dual-channel DDR4 memory. Offered at a price-competitive MSRP of $164, the Prime Z270-A offers a compelling price point with respect to its integrated features and performance potential.

03-board.jpg

Courtesy of ASUS

04-board-flyapart.jpg

Courtesy of ASUS

ASUS does not cut corners on any of their boards, with the Prime Z270-A sharing similar power component circuitry with its higher-tiered siblings, featuring a 10-phase digital power delivery system. ASUS integrated the following features into the Prime Z270-A board: six SATA 3 ports; two M.2 PCIe x4 capable ports; an Intel I219-V Gigabit NIC; three PCI-Express x16 slots; four PCI-Express x1 slots; on-board power and MemOK! buttons; an EZ XMP switch; the Crystal Sound 3 audio subsystem; integrated DisplayPort, HDMI, and DVI video ports; and USB 3.0 and 3.1 Type-A and Type-C port support.

05-board-profile.jpg

Courtesy of ASUS

ASUS enhanced several design aspects with the Prime Z270-A including integrated RGB support, mount points for custom 3D printed panels along the top right of the board, and metal reinforced PCIe x16 slots for the primary and secondary slots.

Continue reading our preview of the ASUS Prime Z270-A motherboard!

Subject: Memory
Manufacturer: Corsair

Show me your true colors

It's no secret that RGB accessories and components have been quite popular in the past few years. One of the most recent introductions in the quest to make everything related to your computer RGB LED customizable is system memory. 

DSC02834-2.JPG

Today, we're taking a look at Corsair's RGB DDR4 offering, the Vengeance RGB memory kit.

DSC02837.JPG

As you might expect, from the outside the Vengeance RGB DIMMs look mostly like standard memory modules. The heat spreader is full metal and has a matte texture, giving it a nice flat appearance and feel.

IMG_4559.JPG

The real magic lies underneath the removable top portion of the heat spreader. Taking this piece off will reveal the lightbar in all its glory. This removable portion of the heat spreader allows you to choose between maximum LED visibility and the more subtle appearance of the "slotted" design. For detail-oriented people like me, it's also nice that you can flip the lid of the heat spreader so that the Corsair logo is oriented the same way when you have 4 DIMMs installed in a motherboard.

Unlike the GEIL EVO X RGB memory that we used in our Ryzen 5 CPU review, the Corsair Vengeance RGB memory does not depend on your motherboard having headers for external RGB strips, but rather is fully controlled through Corsair Link software on your PC.

corsair-link.PNG

With Corsair Link installed on a supported platform (more on that later), it's very easy to customize the look of the Vengeance RGB modules. These LEDs are individually addressable so you can do patterns like Color Pulse and Shift as well as a Rainbow effect. You can also pair together modules into groups so that the effects are synchronized together.

After getting the memory installed and customized to our liking, we decided to run a couple of memory benchmarks on this kit at the stock DDR4-2400 speeds for the Kaby Lake platform, and at DDR4-3000, which this kit is certified for. It's worth noting, though, that Corsair claims this memory is very overclockable.

ddr4-speeds-chart.png

In synthetic memory benchmarks, you can definitely see the expected difference in performance from running at DDR4-3000 vs DDR4-2400. Read, write, and copy speeds as well as overall memory bandwidth see a nice increase. Although, as we have seen over the years, increases in memory bandwidth don't seem to translate to large performance increases in real-world applications.

However, with the advent of AMD's latest Ryzen CPUs, we have seen renewed importance placed on memory speed in relation to certain applications, including gaming. While we managed to run the Vengeance RGB memory at DDR4-3000 speeds on our ASUS Crosshair VI Hero platform with no issues, you do lose the RGB functionality.

Currently, the Corsair Link software utilizes the Intel Management Engine software to enable support for changing the RGB LEDs over the DDR4 bus. This means that when you install the memory into a Ryzen system, you are unable to customize the LED patterns, with the memory modules staying in their default state of cycling through colors in an unsynchronized fashion.

Corsair has said that Ryzen support for RGB customization is coming, and we will be on the lookout for when the updated version of Corsair Link software is available.

IMG_6635.JPG

At $160 for the 16GB kit, the Corsair Vengeance RGB DDR4-3000 memory carries about a $30-$40 price premium over similar non-RGB kits. While it may seem a bit ridiculous to spend extra money just to get light-up RAM, if you are working on a color scheme for your system and already have things like an RGB motherboard and GPU, Corsair Vengeance RGB memory could be the final touch you are looking for.

Introduction and Technical Specifications

Introduction

02-full-kit.jpg

Alphacool NexXxos Cool Answer 360 D5/UT kit
Courtesy of Alphacool

Alphacool is a Germany-based company, known in liquid cooling enthusiasts' circles for their high performance and innovative product designs. Alphacool provided us with one of their NexXxos Cool Answer cooling kits, featuring one of their 360mm (3 x 120mm) UT copper radiators, a Repack dual bay acrylic reservoir with integrated VPP655 D5 pump, and the NexXxos XP3 Light CPU block. With a retail price of 314.95 euros (approximately $330 USD), the kit comes at a competitive price compared with other higher-end DIY kits.

03-reservoir.jpg

5.25 Dual Bay Reservoir
Courtesy of Alphacool

04-radiator.jpg

NexXxoS UT60 Full Copper 360mm radiator
Courtesy of Alphacool

05-block-profile.jpg

NexXxos XP3 Light CPU Waterblock
Courtesy of Alphacool

Alphacool bundled many of their high-end components into the NexXxos Cool Answer 360 D5/UT kit, including the NexXxos XP3 Light CPU block, the NexXxoS UT60 Full Copper 360mm triple fan radiator, the Repack 5.25 Dual Bay reservoir with integrated VPP655 D5 pump, three meters of 10mm (3/8") inner diameter / 13mm outer diameter clear tubing, six black chrome compression barbs, three 1200 RPM NB-eLoop - Bionic Lüfter fans, 1000ml of their CKC Cape Kelvin Catcher Clear coolant, and all the hardware necessary to put it all together. The Repack dual-bay reservoir has an anti-cyclone design on the inlets and directly feeds the rear-mounted D5 pump. The included VPP655 D5 pump is rated for a 350ml/hr flow rate. All components are copper, brass, Acetal, or acrylic to minimize the possibility of mixed-metal corrosion occurring in the loop.

Continue reading our review of the Alphacool NexXxos Cool Answer 360 D5/UT water cooling kit!

Manufacturer: Zalman

Introduction and Features

Introduction

2-Banner.jpg

Zalman is well known for supplying cases, cooling solutions, power supplies and accessories to PC enthusiasts around the world. It’s been quite a while since we last reviewed one of Zalman’s power supplies and today we are going to take a detailed look at one of their new Acrux Series units, the ZM1000-ARX. There are currently four models in the Acrux Series ranging in output capacity from 750W up to 1200W.

Zalman Acrux Series Power Supplies:
•    ZM750-ARX
•    ZM850-ARX
•    ZM1000-ARX
•    ZM1200-ARX

The Acrux Series is Zalman’s flagship line of power supplies. They all feature 80 Plus Platinum certification for high efficiency and come with modular cables for flexibility and ease of cable routing. The ARX power supplies are designed for quiet, reliable operation and come backed by a 7-year warranty.

3-ZM1000-diag.jpg

Zalman ZM1000-ARX Platinum PSU Key Features:
 
•    1000W DC power output at 40°C
•    80 Plus Platinum certification for high efficiency (89-94% @ 20-100% load)
•    All Modular cable design
•    Smart fan control system for very quiet operation
•    Quiet 140mm cooling fan with ball bearings
•    100% High quality Japanese made capacitors with 105°C rating
•    High current single +12V rail (83A/996W)
•    DC-to-DC converters for +5V and +3.3V outputs
•    Multi-GPU ready with eight PCI-E connectors
•    Supports NVIDIA SLI and AMD Crossfire
•    Meets ErP 2013 Lot6 standby power standards and Haswell ready
•    Compliant with Intel ATX12V Ver 2.3 & EPS 12V Ver 2.92 standards
•    Universal AC input (100-240V) with Active PFC
•    DC Output protections: UVP, OVP, OPP, SCP, OCP, and OTP
•    Dimensions: 150mm (W) x 86mm (H) x 180mm (L)
•    7-Year warranty
•    MSRP : $199.99 USD
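Those efficiency figures translate directly into waste heat: at 92% efficiency, delivering the full 1000W draws roughly 1087W from the wall, leaving about 87W to be dissipated inside the unit. A quick sketch of that arithmetic (the helper names are mine, not Zalman's):

```c
/* AC power drawn from the wall for a given DC output and efficiency. */
static double input_watts(double output_w, double efficiency) {
    return output_w / efficiency;
}

/* Power lost as heat inside the PSU: input minus output. */
static double heat_watts(double output_w, double efficiency) {
    return input_watts(output_w, efficiency) - output_w;
}
```

For example, heat_watts(1000.0, 0.92) works out to roughly 87 watts, which is what the 140mm fan ultimately has to exhaust at full load.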

4-ZM1000-Side.jpg

Please continue reading our review of the Zalman 1000W Platinum PSU!

Manufacturer: Corsair

Overview

Despite its surprise launch a few weeks ago, the Corsair ONE feels like it was inevitable. Corsair's steady expansion from RAM modules to power supplies, cases, SSDs, CPU coolers, co-branded video cards, and most recently barebones systems pointed to an eventual complete Corsair system. However, what we did not expect was the form it would take.

DSC02840.JPG

Did Corsair hit it out of the park on their first foray into prebuilt systems, or do they still have some work to do?

It's a bit difficult to get an idea of the scale of the Corsair ONE. Even the joke of "Is it bigger than a breadbox?" doesn't quite work here, as the system is impressively close to a breadbox in both size and shape.

Essentially, when you don't take the fins on the top and the bottom into account, the Corsair ONE is as tall as a full-size graphics card — such as the GeForce GTX 1080 — and that's no coincidence. 

Corsair ONE Pro (configuration as reviewed)
Processor      Intel Core i7-7700K (Kaby Lake)
Graphics       NVIDIA GeForce GTX 1080, water-cooled
Memory         16GB DDR4-2400
Motherboard    Custom MSI Z270 Mini-ITX
Storage        960GB Corsair Force LE
Power Supply   Corsair SF400 80+ Gold SFX
Wireless       Intel 8265 802.11ac + BT 4.2 (dual band, 2x2)
Connections    1x USB 3.1 Gen2 Type-C
               3x USB 3.1 Gen1 Type-A
               2x USB 2.0 Type-A
               1x PS/2 port
               1x HDMI 2.0
               2x DisplayPort
               1x S/PDIF
Dimensions     7.87 x 6.93 x 14.96 inches (20 x 17.6 x 38 cm)
Weight         15.87 lbs. (7.2 kg)
OS             Windows 10 Home
Price          $2299.99 - Corsair.com

Taking a look at the full specifications, we see all the components for a capable gaming PC. In addition to the aforementioned GTX 1080, you'll find Intel's flagship Core i7-7700K, a Mini-ITX Z270 motherboard produced by MSI, a 960GB SSD, and 16GB of DDR4 memory.

Click here to continue reading our review of the Corsair ONE Pro Gaming PC!