Introduction and Features
Sea Sonic Electronics Co., Ltd has been designing and building PC power supplies since 1981, and they are one of the most highly respected manufacturers on the planet. Not only do they market power supplies under their own Seasonic name, they also serve as the OEM for many other big-name brands.
Seasonic began introducing their new PRIME Series power supplies last year, and we reviewed several of the flagship Titanium units, finding them to be among the best power supplies we have tested to date. The PRIME Series has since expanded to include many more units with both Platinum and Gold efficiency certification.
The power supply we have in for review is the Seasonic PRIME 1200W Gold. This unit comes with fully modular cables and is certified to comply with the 80 Plus Gold criteria for high efficiency. The power supply is designed to deliver very tight voltage regulation on the three primary rails (+3.3V, +5V, and +12V) and provides superior AC ripple and noise suppression. Add in a super-quiet 135mm cooling fan with a fluid dynamic bearing, top-quality components, and a 12-year warranty, and you have the makings of another outstanding power supply.
Seasonic PRIME Gold Series PSU Key Features:
• 650W, 750W, 850W, 1000W or 1200W continuous DC output
• High efficiency, 80 PLUS Gold certified
• Micro-Tolerance Load Regulation (MTLR)
• Top-quality 135mm Fluid Dynamic Bearing fan
• Premium Hybrid Fan Control (allows fanless operation at low power)
• Superior AC ripple and noise suppression
• Fully modular cabling design
• Multi-GPU technologies supported
• Gold-plated high-current terminals
• Protections: OPP, OVP, UVP, SCP, OCP and OTP
• 12-Year Manufacturer’s warranty
• MSRP for the PRIME 1200W Gold is $199.90 USD
Here is what Seasonic has to say about the new PRIME power supply line: “The creation of the PRIME Series is a renewed testimony of Sea Sonic’s determination to push the limits of power supply design in every aspect. This elegant-looking, exclusive lineup of new products will include 80 Plus Titanium units in the range of 650W to 1000W, and Platinum- and Gold-rated units in the range of 650W to 1200W, with excellent electrical characteristics, top-level components and fully modular cabling.
Seasonic employs the most efficient manufacturing methods, uses the best materials and works with the most reliable suppliers to produce reliable products. The PRIME Series layout, revolutionary manufacturing solutions and solid design attest to the highest level of ingenuity of Seasonic’s engineers and product developers. Demonstrating confidence in its power supplies, Seasonic stands out in the industry by offering the PRIME Series a generous 12-year manufacturer’s warranty period.”
Is it time to buy that new GPU?
Testing commissioned by AMD. This means that AMD paid us for our time, but had no say in the results or presentation of them.
Earlier this week Bethesda and Arkane Studios released Prey, a first-person shooter that is a re-imagining of the 2006 game of the same name. Fans of System Shock will find a lot to love about this new title and I have found myself enamored with the game…in the name of science of course.
While doing my due diligence and performing some preliminary testing to see if we would utilize Prey for graphics testing going forward, AMD approached me to discuss this exact title. With the release of the Radeon RX 580 in April, one of the key storylines is that the card offers a reasonably priced upgrade path for users of 2+ year old hardware. With that upgrade you should see some substantial performance improvements and as I will show you here, the new Prey is a perfect example of that.
Targeting the Radeon R9 380, a graphics card that was originally released back in May of 2015, the RX 580 offers substantially better performance at a very similar launch price. The same is true for the GeForce GTX 960: launched in January of 2015, it is slightly longer in the tooth. AMD’s data shows that 80% of the users on Steam are running R9 380X or slower graphics cards and that only 10% of them upgraded in 2016. Considering the great GPUs that were available then (including the RX 480 and the GTX 10-series), it seems more and more likely that we are going to hit an upgrade inflection point in the market.
A simple experiment was set up: does the new Radeon RX 580 offer a worthwhile upgrade path for the many users of R9 380 or GTX 960 class graphics cards (or older)?
| | Radeon RX 580 | Radeon R9 380 | GeForce GTX 960 |
| --- | --- | --- | --- |
| GPU | Polaris 20 | Tonga Pro | GM206 |
| Rated Clock | 1340 MHz | 918 MHz | 1127 MHz |
| TDP | 185 watts | 190 watts | 120 watts |
| MSRP (at launch) | $199 (4GB) | | |
It Started with an OpenCL 2.2 Press Release
Update (May 18 @ 4pm EDT): A few commenters around the internet believe that the statements from The Khronos Group were inaccurately worded, so I emailed them yet again. The OpenCL working group has released yet another statement:
OpenCL is announcing that their strategic direction is to support CL style computing on an extended version of the Vulkan API. The Vulkan group is agreeing to advise on the extensions.
In other words, this article was and is accurate. The Khronos Group are converging OpenCL and Vulkan into a single API: Vulkan. There was no misinterpretation.
Original post below
Earlier today, we published a news post about the finalized specifications for OpenCL 2.2 and SPIR-V 1.2. This was announced through a press release that also contained an odd little statement at the end of the third paragraph.
We are also working to converge with, and leverage, the Khronos Vulkan API — merging advanced graphics and compute into a single API.
This statement seems to suggest that OpenCL and Vulkan are expecting to merge into a single API for compute and graphics at some point in the future. This seemed like a huge announcement to bury that deep into the press blast, so I emailed The Khronos Group for confirmation (and any further statements). As it turns out, this interpretation is correct, and they provided a more explicit statement:
The OpenCL working group has taken the decision to converge its roadmap with Vulkan, and use Vulkan as the basis for the next generation of explicit compute APIs – this also provides the opportunity for the OpenCL roadmap to merge graphics and compute.
This statement adds a new claim: The Khronos Group plans to merge OpenCL into Vulkan, specifically, at some point in the future. Making the move in this direction, from OpenCL to Vulkan, makes sense for a handful of reasons, which I will highlight in my analysis, below.
Going Vulkan to Live Long and Prosper?
The first reason for merging OpenCL into Vulkan, from my perspective, is that Apple, who originally created OpenCL, still owns the trademarks (and some other rights) to it. The Khronos Group licenses these bits of IP from Apple. Vulkan, based on AMD’s donation of the Mantle API, should be easier to manage from the legal side of things.
The second reason for going in that direction is the actual structure of the APIs. When Mantle was announced, it looked a lot like an API that wrapped OpenCL with a graphics-specific layer. Also, Vulkan isn’t specifically limited to GPUs in its implementation.
Aside: Before creating a logical device, you can query each physical device to see what type it identifies as by reading its VkPhysicalDeviceType. Currently, as of Vulkan 1.0.49, the options are Other, Integrated GPU, Discrete GPU, Virtual GPU, and CPU. While this is just a hint to make it easier to select a device for a given task, and isn’t useful for determining what the device is capable of, it should illustrate that other devices, like FPGAs, could support some subset of the API. It’s up to the developer to check for features before they’re used, and to target the devices they expect.
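To make that concrete, here is a minimal sketch of enumerating physical devices and reading the type each one reports. It assumes the standard Vulkan headers and a working loader are installed; error handling is trimmed to the bare minimum.

```cpp
// Enumerate Vulkan physical devices and print each one's reported type.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkInstanceCreateInfo info = {};
    info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    VkInstance instance;
    if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);
        const char* type = "Other";
        switch (props.deviceType) {
            case VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU: type = "Integrated GPU"; break;
            case VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU:   type = "Discrete GPU";   break;
            case VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU:    type = "Virtual GPU";    break;
            case VK_PHYSICAL_DEVICE_TYPE_CPU:            type = "CPU";            break;
            default: break;
        }
        printf("%s: %s\n", props.deviceName, type);
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

Nothing in this enumeration would stop a hypothetical FPGA driver from reporting itself under Other, which is exactly the door the working group appears to be leaving open.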
If you were to go in the other direction, you would need to wedge graphics tasks into OpenCL. You would be creating Vulkan all over again. From my perspective, pushing OpenCL into Vulkan seems like the path of least resistance.
The third reason (that I can think of) is probably marketing. DirectX 12 isn’t attempting to seduce FPGA developers. Telling a game studio to program their engine on a new, souped-up OpenCL might make them break out in a cold sweat, even if both parties know that it’s an evolution of Vulkan with cross-pollination from OpenCL. OpenCL developers, on the other hand, are probably using the API because they need it, and are less likely to be shaken off.
What OpenCL Could Give Vulkan (and Vice Versa)
From the very outset, OpenCL and Vulkan have occupied similar spaces, but there are some things that OpenCL does “better”. The most obvious, previously mentioned element is that OpenCL supports a wide range of compute devices, such as FPGAs. That’s not the limit of what Vulkan can borrow, although device breadth alone could make for an interesting landscape if FPGAs become commonplace in the coming years and decades.
Personally, I wonder how SYCL could affect game engine development. This standard attempts to guide GPU- (and other device-) accelerated code into a single-source, C++ model. For over a decade, Tim Sweeney of Epic Games has talked about writing engines like he did back in the software-rendering era, but without giving up the ridiculous performance (and efficiency) provided by GPUs.
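For readers unfamiliar with it, here is a minimal sketch of what single-source SYCL looks like in the SYCL 1.2 style of today’s implementations; the vector size and the `scale` kernel name are arbitrary illustrations. The kernel lambda sits in the same C++ file as the host code, and the runtime dispatches it to whatever device the default selector picks.

```cpp
// Single-source SYCL: host code and device kernel in one C++ file.
#include <CL/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    std::vector<float> data(1024, 1.0f);
    {
        cl::sycl::queue q;  // default selector picks a GPU, CPU, etc.
        cl::sycl::buffer<float, 1> buf(data.data(),
                                       cl::sycl::range<1>(data.size()));
        q.submit([&](cl::sycl::handler& h) {
            auto acc = buf.get_access<cl::sycl::access::mode::read_write>(h);
            // This lambda is the device kernel, compiled from the same source.
            h.parallel_for<class scale>(cl::sycl::range<1>(data.size()),
                                        [=](cl::sycl::id<1> i) { acc[i] *= 2.0f; });
        });
    }  // buffer goes out of scope and copies results back into data
    std::cout << data[0] << std::endl;  // prints 2
    return 0;
}
```

That is about as close to “write the engine like it’s all just C++” as accelerated code gets today.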
The reason for bringing up that anecdote is that, if OpenCL is moving into Vulkan, and SYCL is still being developed, it seems likely that SYCL will eventually be ported to Vulkan. If that happens, future game engines can gain the benefits I was striving toward without giving up access to fixed-function features, like hardware rasterization. If Vulkan comes to web browsers some day, it would prune off every advantage I was hoping to capture, and it would do so with a better implementation.
More importantly, SYCL is something that Microsoft cannot provide with today’s DirectX.
Admittedly, it’s hard to think of something that OpenCL can acquire from Vulkan, besides just a lot more interest from potential developers. Vulkan was already somewhat of a subset of OpenCL that had graphics tasks (cleanly) integrated over top of it. On the other hand, OpenCL has been struggling to acquire mainstream support, so that could, in fact, be Vulkan’s greatest gift.
The Khronos Group has not provided a timeline for this change. It’s just a roadmap declaration.
YouTube Tries Everything
Back in March, Google-owned YouTube announced a new live TV streaming service called YouTube TV to compete with the likes of Sling, DirecTV Now, PlayStation Vue, and upcoming offerings from Hulu, Amazon, and others. All of these services aim to deliver curated bundles of channels to cord cutters, running over the top of customers’ internet-only connections as replacements for, or in addition to, cable television subscriptions. YouTube TV is the latest entrant to this market, with the service only available in seven test markets currently, but it is off to a good start with a decent selection of content and features including both broadcast and cable channels, on-demand media, and live and DVR viewing options. A responsive user interface and a generous number of family sharing options (six account logins and three simultaneous streams) must be weighed against the requirement to watch ads (even on some DVR’ed shows) and the $35-per-month cost.
YouTube TV initially launched in five cities, with more markets on the way. I am lucky enough to live close enough to Chicago to be in-market, so I was able to test out Google’s streaming TV service. While not a full review, the following are my first impressions of YouTube TV.
Setup / Sign Up
YouTube TV is available with a one-month free trial, after which you will be charged $35 a month. Sign-up is a simple affair and can be started by going to tv.youtube.com or clicking the YouTube TV link in the “hamburger” menu on YouTube. On a mobile device, YouTube TV uses a separate app from the default YouTube app, weighing in at 9.11 MB for the Android version. The sign-up process is very simple: after verifying your location, the following screens show you the channels available in your market and give you the option of adding Showtime ($11) and/or Fox Soccer ($15) for additional monthly fees. After that, you are prompted for a payment method, which can be the one already linked to your Google account and used for app purchases and other subscriptions. As for the free trial, I was not charged anything and there was no hold on my account for the $35. I like that Google makes it easy to see exactly how many days you have left on your trial and when you will be charged if you do not cancel. Further, the cancel link is not buried away; it is intuitively found by clicking your account photo in the upper right > Personal > Membership. Google is doing things right here. After sign-up, a tour is offered to show you the various features, but you can skip this if you want to get right to it.
In my specific market, I have the following channels. When I first started testing, some of these channels were not available; they were just added today. I hope to see more networks added, and if Google can manage that, YouTube TV and its $35/month price are going to shape up to be a great deal.
- ABC 7, CBS 2, Fox 32, NBC 5, ESPN, CSN, CSN Plus, FS1, CW, USA, FX, Free Form, NBC SN, ESPN 2, FS2, Disney, E!, Bravo, Oxygen, BTN, SEC ESPN Network, ESPN News, CBS Sports, FXX, Syfy, Disney Junior, Disney XD, MSNBC, Fox News, CNBC, Fox Business, National Geographic, FXM, Sprout, Universal, Nat Geo Wild, Chiller, NBC Golf, YouTube Red Originals
- Plus: AMC, BBC America, IFC, Sundance TV, We TV, Telemundo, and NBC Universal (just added).
- Optional Add-Ons: Showtime and Fox Soccer.
I tested YouTube TV on my Windows PCs and an Android phone. You can also watch YouTube TV on iOS devices, and on your TV using Android TV devices and Chromecast (at the time of writing, Google will send you a free Chromecast after your first month). (See here for a full list of supported devices.) There are currently no Roku or Apple TV apps.
Each YouTube TV account can share the subscription across six total logins, where each household member gets their own login and DVR library. Up to three people can be streaming TV at the same time. While out and about, I noticed that YouTube TV required me to turn on location services in order to use the app. Looking further into it, the YouTube TV FAQ states that you will need to verify your location in order to stream live TV, and that you will only be able to stream live TV if you are physically in a market where YouTube TV has launched. You can watch your DVR shows anywhere in the US. However, if you are traveling internationally you will not be able to use YouTube TV at all (I’m not sure if VPNs will get around this or if YouTube TV blocks them like Netflix does). Users will need to log in from their home market at least once every three months to keep their account active and able to stream content (every month for MLB content).
YouTube TV verifying location in Chrome (left) and in the Android app (right).
On one hand, I can understand this was probably necessary for YouTube TV to negotiate its licensing deals, and the terms do seem pretty fair. I will have to do more testing on this, as I wasn’t able to stream from the DVR without turning on location services on my Android phone – though I can chalk this up to growing pains, and it may already be fixed.
Features & First Impressions
YouTube TV has an interface that is perhaps best described as a slimmed-down YouTube that takes cues from Netflix (things like the horizontal scrolling of shows in categories). The main interface is broken into three sections: Library, Home, and Live, with Home being the first screen you see when logging in. You navigate by scrolling and clicking, and by pulling menus up from the bottom while streaming, much like the standard YouTube player.
Application Profiling Tells the Story
It should come as no surprise to anyone who has been paying attention the last two months that the latest AMD Ryzen processors and architecture are getting a lot of attention. Ryzen 7 launched with a $499 part that bested Intel’s $1000 CPU in heavily threaded applications, and Ryzen 5 launched with great value as well, positioning a 6-core/12-thread CPU against quad-core parts from the competition. But part of the story that permeated both the Ryzen 7 and Ryzen 5 launches was the situation surrounding gaming performance, in particular 1080p gaming, and the surprising deltas that we see in some games.
Our team has done quite a bit of research and testing on this topic. This included a detailed look at the first asserted reason for the performance gap, the Windows 10 scheduler. Our summary there was that the scheduler was working as expected and that minimal difference was seen when moving between different power modes. We also talked directly with AMD to find out its then-current stance on the results, backing up our claims on the scheduler, and presented a better outlook for gaming going forward. When AMD wanted to test a new custom Windows 10 power profile to help improve performance in some cases, we took part in that too. In late March we saw the first gaming performance update, courtesy of Ashes of the Singularity: Escalation, where an engine update to utilize more threads resulted in as much as a 31% increase in average frame rate.
As part of that dissection of the Windows 10 scheduler story, we also discovered interesting data about the CCX construction and how the two modules on the 1800X communicate. The result was significantly longer thread-to-thread latencies than we had seen on any platform before, due to the fabric implementation that AMD integrated with the Zen architecture.
This led me down another rabbit hole recently: could we further compartmentalize the gaming performance of the Ryzen processors using memory latency? As I showed in my Ryzen 5 review, memory frequency and throughput directly correlate with gaming performance improvements, on the order of 14% in some cases. But what about memory latency alone?
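As a sketch of what measuring memory latency means in practice (the general technique, not necessarily our exact test methodology), the standard trick is a pointer chase: build a single random cycle through a buffer much larger than the caches, then time a long chain of dependent loads that neither the prefetcher nor out-of-order execution can hide. The buffer size and iteration count below are arbitrary choices.

```cpp
// Pointer-chase latency microbenchmark sketch.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const size_t n = 64 * 1024 * 1024 / sizeof(size_t);  // 64 MB, well past any L3
    std::vector<size_t> chain(n);
    std::iota(chain.begin(), chain.end(), 0);

    // Sattolo's algorithm: a random permutation that forms one single cycle,
    // so the chase is forced to touch the whole buffer.
    std::mt19937_64 rng{42};
    for (size_t k = n - 1; k > 0; --k) {
        std::uniform_int_distribution<size_t> pick(0, k - 1);
        std::swap(chain[k], chain[pick(rng)]);
    }

    const size_t iters = 20 * 1000 * 1000;
    size_t i = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (size_t k = 0; k < iters; ++k)
        i = chain[i];                      // each load depends on the previous one
    auto t1 = std::chrono::steady_clock::now();

    double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
    printf("~%.1f ns per dependent load (result: %zu)\n", ns / iters, i);
    return 0;
}
```

Run the same chase at different DRAM frequencies and timings and the per-load figure shifts directly, which is exactly the variable I want to isolate from bandwidth.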
Introduction and Specifications
Fractal Design is well known in PC enthusiast circles for their excellent cases. They also entered the self-contained liquid CPU cooler market in 2014 with the Kelvin, and today they are releasing a brand new cooler lineup called Celsius. Two models are being introduced, the 360 mm Celsius S36 and the 240 mm Celsius S24, the latter of which we have for review today.
While on the surface this might appear to be a standard 240 mm all-in-one liquid CPU cooler, there are some key features that help to differentiate the Celsius lineup in an increasingly saturated market. The hoses (themselves flexible rubber in nice-looking sleeves) are attached at both ends with metal fittings, with the radiator side the standard (and removable) G1/4 variety, and the fans connect via an unusual radiator-mounted header that receives power via a hidden fan cable in one of the sleeved hoses. Additionally, the Celsius coolers offer a dual-mode setting with the choice of automatic fan control or PWM passthrough from the motherboard - and this is controlled via a clever switch built into the trim ring around the pump.
I have been impressed with the low noise of Fractal Design fans in the past, and I went into this review expecting a very quiet cooling experience. How did the Celsius S24 fare on the test bench? Read on to find out!
Introduction and Technical Specifications
Courtesy of ASUS
The Prime Z270-A motherboard is one of ASUS' initial offerings integrating the Intel Z270 chipset. The board features ASUS' Channel line aesthetics, with a black PCB and white plastic accents. The Z270 chipset brings support for the latest Intel LGA1151 Kaby Lake processor line as well as dual channel DDR4 memory. Offered at an MSRP of $164, the Prime Z270-A hits a compelling price point with respect to its integrated features and performance potential.
Courtesy of ASUS
ASUS does not cut corners on any of their boards, with the Prime Z270-A sharing similar power component circuitry with its higher-tiered siblings, featuring a 10-phase digital power delivery system. ASUS integrated the following features into the Prime Z270-A board: six SATA 3 ports; two M.2 PCIe x4 capable ports; an Intel I219-V Gigabit NIC; three PCI-Express x16 slots; four PCI-Express x1 slots; on-board power and MemOK! buttons; an EZ XMP switch; the Crystal Sound 3 audio subsystem; integrated DisplayPort, HDMI, and DVI video ports; and USB 3.0 and 3.1 Type-A and Type-C port support.
Courtesy of ASUS
ASUS enhanced several design aspects with the Prime Z270-A including integrated RGB support, mount points for custom 3D printed panels along the top right of the board, and metal reinforced PCIe x16 slots for the primary and secondary slots.
Show me your true colors
It's no secret that RGB accessories and components have been quite popular in the past few years. One of the most recent introductions in the quest to make everything related to your computer RGB LED customizable is system memory.
Today, we're taking a look at Corsair's RGB DDR4 offering, the Vengeance RGB memory kit.
As you might expect, from the outside the Vengeance RGB DIMMs look mostly like standard memory modules. The heat spreader is full metal and has a matte texture, giving it a nice flat appearance and feel.
The real magic lies underneath the removable top portion of the heat spreader. Taking this piece off reveals the light bar in all its glory. This removable portion of the heat spreader allows you to choose between maximum LED visibility and the more subtle appearance of the "slotted" design. For detail-oriented people like me, it's also nice that you can flip the lid of the heat spreader so that the Corsair logo is oriented the same way across all four DIMMs installed in a motherboard.
Unlike the GEIL EVO X RGB memory that we used in our Ryzen 5 CPU review, the Corsair Vengeance RGB memory does not depend on your motherboard having headers for external RGB strips, but rather is fully controlled through Corsair Link software on your PC.
With Corsair Link installed on a supported platform (more on that later), it's very easy to customize the look of the Vengeance RGB modules. These LEDs are individually addressable so you can do patterns like Color Pulse and Shift as well as a Rainbow effect. You can also pair together modules into groups so that the effects are synchronized together.
After getting the memory installed and customized to our liking, we ran a couple of memory benchmarks on this kit, at the stock DDR4-2400 speed for the Kaby Lake platform and at the DDR4-3000 speed this kit is certified for. It's worth noting that Corsair claims this memory is very overclockable.
In synthetic memory benchmarks, you can definitely see the expected difference in performance between DDR4-3000 and DDR4-2400: read, write, and copy bandwidth all see a nice increase. As we have seen over the years, though, increases in memory bandwidth don't seem to translate into large performance gains in real-world applications.
However, with the advent of AMD's latest Ryzen CPUs, we have seen new importance placed on memory speed for certain applications, including gaming. While we managed to run the Vengeance RGB memory at DDR4-3000 speeds on our ASUS Crosshair VI Hero platform with no issues, you do lose the RGB functionality.
Currently, the Corsair Link software utilizes the Intel Management Engine software to enable support for changing the RGB LEDs over the DDR4 bus. This means that when you install the memory into a Ryzen system, you are unable to customize the LED patterns, with the memory modules staying in their default state of cycling through colors in an unsynchronized method.
Corsair has said that Ryzen support for RGB customization is coming, and we will be on the lookout for when the updated version of Corsair Link software is available.
At $160 for the 16GB kit, the Corsair Vengeance RGB DDR4-3000 memory carries about a $30-$40 price premium over similar non-RGB kits. While it may seem a bit ridiculous to spend extra money just to get light-up RAM, if you are working on a color scheme for your system and already have things like an RGB motherboard and GPU, Corsair Vengeance RGB memory could be the final touch you are looking for.
Introduction and Technical Specifications
Alphacool NexXxos Cool Answer 360 D5/UT kit
Courtesy of Alphacool
Alphacool is a German-based company, known in liquid cooling enthusiast circles for their high-performance and innovative product designs. Alphacool provided us with one of their NexXxos Cool Answer cooling kits, featuring one of their 360mm (3 x 120mm) UT copper radiators, a Repack dual-bay acrylic reservoir with integrated VPP655 D5 pump, and the NexXxos XP3 Light CPU block. With a retail price of 314.95 euros (approximately $330 USD), the kit is competitively priced against other higher-end DIY kits.
5.25 Dual Bay Reservoir
Courtesy of Alphacool
NexXxoS UT60 Full Copper 360mm radiator
Courtesy of Alphacool
NexXxos XP3 Light CPU Waterblock
Courtesy of Alphacool
Alphacool bundled many of their high-end components into the NexXxos Cool Answer 360 D5/UT kit, including the NexXxos XP3 Light CPU block, the NexXxoS UT60 Full Copper 360mm triple-fan radiator, the Repack 5.25" dual-bay reservoir with integrated VPP655 D5 pump, three meters of 10mm (3/8") inner diameter / 13mm outer diameter clear tubing, six black chrome compression fittings, three 1200 RPM NB-eLoop fans, 1000ml of their CKC Cape Kelvin Catcher Clear coolant, and all the hardware necessary to put it all together. The Repack dual-bay reservoir has an anti-cyclone design on the inlets and directly feeds the rear-mounted D5 pump. The included VPP655 D5 pump is rated for a 350ml/hr flow rate. All components are copper, brass, acetal, or acrylic to minimize the possibility of mixed-metal corrosion in the loop.
Introduction and Features
Zalman is well known for supplying cases, cooling solutions, power supplies and accessories to PC enthusiasts around the world. It’s been quite a while since we last reviewed one of Zalman’s power supplies and today we are going to take a detailed look at one of their new Acrux Series units, the ZM1000-ARX. There are currently four models in the Acrux Series ranging in output capacity from 750W up to 1200W.
Zalman Acrux Series Power Supplies:
The Acrux Series is Zalman’s flagship line of power supplies. They all feature 80 Plus Platinum certification for high efficiency and come with modular cables for flexibility and ease of cable routing. The ARX power supplies are designed for quiet, reliable operation and come backed by a 7-year warranty.
Zalman ZM1000-ARX Platinum PSU Key Features:
• 1000W DC power output at 40°C
• 80 Plus Platinum certification for high efficiency (89-94% @ 20-100% load)
• All Modular cable design
• Smart fan control system for very quiet operation
• Quiet 140mm cooling fan with ball bearings
• 100% High quality Japanese made capacitors with 105°C rating
• High current single +12V rail (83A/996W)
• DC-to-DC converters for +5V and +3.3V outputs
• Multi-GPU ready with eight PCI-E connectors
• Supports NVIDIA SLI and AMD Crossfire
• Meets ErP 2013 Lot6 standby power standards and Haswell ready
• Compliant with Intel ATX12V Ver 2.3 & EPS 12V Ver 2.92 standards
• Universal AC input (100-240V) with Active PFC
• DC Output protections: UVP, OVP, OPP, SCP, OCP, and OTP
• Dimensions: 150mm (W) x 86mm (H) x 180mm (L)
• 7-Year warranty
• MSRP : $199.99 USD
Despite its surprise launch a few weeks ago, the Corsair ONE feels like it was inevitable. Corsair's steady expansion from RAM modules to power supplies, cases, SSDs, CPU coolers, co-branded video cards, and most recently barebones systems pointed to an eventual complete Corsair system. However, what we did not expect was the form it would take.
Did Corsair hit it out of the park on their first foray into prebuilt systems, or do they still have some work to do?
It's a bit difficult to get an idea of the scale of the Corsair ONE. Even the joke of "Is it bigger than a breadbox?" doesn't quite work here, as the Corsair ONE is impressively close to a breadbox in both size and shape.
Essentially, when you don't take the fins on the top and the bottom into account, the Corsair ONE is as tall as a full-size graphics card — such as the GeForce GTX 1080 — and that's no coincidence.
| Corsair ONE Pro (configuration as reviewed) | |
| --- | --- |
| Processor | Intel Core i7-7700K (Kaby Lake) |
| Graphics | NVIDIA GeForce GTX 1080, watercooled |
| Motherboard | Custom MSI Z270 Mini-ITX |
| Storage | 960 GB Corsair Force LE |
| Power Supply | Corsair SF400 80+ Gold SFX |
| Wireless | Intel 8265 802.11ac + BT 4.2 (dual band, 2x2) |
| Connections | 1x USB 3.1 Gen2 Type-C, 3x USB 3.1 Gen1 Type-A, 2x USB 2.0 Type-A, 1x PS/2, 1x HDMI 2.0, 2x DisplayPort, 1x S/PDIF |
| Dimensions | 7.87 x 6.93 x 14.96 inches (20 x 17.6 x 38 cm), 15.87 lbs (7.2 kg) |
| OS | Windows 10 Home |
| Price | $2299.99 - Corsair.com |
Taking a look at the full specifications, we see all the components of a capable gaming PC. In addition to the aforementioned GTX 1080, you'll find Intel's flagship Core i7-7700K, a Mini-ITX Z270 motherboard produced by MSI, a 960GB SSD, and 16GB of DDR4 memory.
Back in February we took a quick initial look at the eero Home Wi-Fi System, one of several new entrants in the burgeoning mesh networking industry. Like its competitors, eero aims to increase home Wi-Fi performance and coverage by switching from a system based upon a powerful standalone router to one that utilizes multiple lower-power wireless base stations positioned throughout a home.
The idea is that these multiple wireless access points, which are configured to communicate with each other automatically via proprietary software, can not only increase the range of your home Wi-Fi network, but also reduce the burden of our ever-increasing number of wireless devices on any one single access point.
There are a number of mesh Wi-Fi systems already available from both established networking companies as well as industry newcomers, with more set for release this year. We don't have every system ready to test just yet, but join us as we take a look at three popular options to see if mesh networking performance lives up to the hype.
The Dell Inspiron 15 7000 Gaming series has been part of the increasingly interesting sub-$1000 gaming notebook market since its introduction in 2015. We took a look at last year’s offering and were very impressed with the performance it had to offer, but slightly disappointed in the build quality.
Dell is back this year with an all-new industrial design for the Inspiron 15 7000 Gaming, along with updated graphics in the form of the GeForce GTX 1050 Ti. Can an $850 gaming notebook possibly live up to expectations? Let’s take a closer look.
After three generations of the Dell Inspiron 15 Gaming product, it’s evident that Dell takes this market segment seriously. Alienware seems to have lost a bit of the hearts and minds of gamers in the high-end segment, but Dell has carved out a nice corner of the gaming market.
| Dell Inspiron 15 7567 Gaming (configuration as reviewed) | |
| --- | --- |
| Processor | Intel Core i5-7300HQ (Kaby Lake) |
| Graphics | NVIDIA GeForce GTX 1050 Ti (4GB) |
| Memory | 8GB DDR4-2400 (one DIMM) |
| Screen | 15.6-in 1920x1080 |
| Storage | 256GB SanDisk X400 SATA M.2, available 2.5" drive slot |
| Camera | 720p / dual digital array microphone |
| Wireless | Intel 3165 802.11ac + BT 4.2 (dual band, 1x1) |
| Connections | 3x USB 3.0, audio combo jack |
| Dimensions | 384.9mm x 274.73mm x 25.44mm (15.15" x 10.82" x 1"), 5.76 lbs (2620 g) |
| OS | Windows 10 Home |
| Price | $849 - Dell.com |
Let's just get this out of the way: for the $850 price tag of the model Dell sent us for review, this is an amazing collection of hardware. Traditionally, laptops under $1000 have an obvious compromise, but it's difficult to find one here. Dedicated graphics, flash storage, a 1080p screen, and a large battery are all features that I look for in notebooks. Needless to say, my expectations for the Inspiron 15 Gaming are quite high.
Introduction and Technical Specifications
Courtesy of EKWB
EK's Supremacy line of CPU waterblocks is well known for its performance and style. The latest version in this block line, the Supremacy MX, advances the design in hopes of delivering optimized performance in a less costly version of their award-winning block series. The base Supremacy MX CPU waterblock is of copper and plexi construction, using the same jet-impingement and micro-channel design as their previous block versions. The block comes fully assembled from the factory with a single CPU mounting bracket type (in this case, the Intel version); additional CPU mounting kits are available for purchase. With an MSRP of $54.99, the Supremacy MX waterblock offers a compelling purchase in light of its performance potential.
Courtesy of EKWB
The block is assembled with hex-head screws going through the copper base plate, with rubber grommets ensuring the integrity of the block internals. The top aluminum cover plate is held to the plexi top using short hex-head screws that thread directly into the plexi top plate. The center inlet feeds the micro-channels embedded in the copper base plate through the jet-impingement assembly. The mounting bracket sits between the top plexi plate and the copper base plate, making for an interesting bit of disassembly if you want to switch out the CPU mounting plate to use the block on a different CPU family (going from Intel to AMD Ryzen, for example). The aluminum top plate gives the block a sleek appearance and acts to redirect illumination from the side-mounted LEDs (if you choose to use LEDs with the block, that is).
Specifications and Design
When the GeForce GTX 1080 Ti launched last month it became the fastest consumer graphics card on the market, taking over a spot that NVIDIA had already laid claim to since the launch of the GTX 1080, and arguably before that with the GTX 980 Ti. Passing on the notion that the newly released Titan Xp is a graphics card gamers should actually consider for their cash, the 1080 Ti continues to stand alone at the top. That is, until NVIDIA comes up with another new architecture, or AMD surprises us all with the release of the Vega chip this summer.
NVIDIA board partners have the flexibility to build custom hardware around the GTX 1080 Ti design, and the EVGA GeForce GTX 1080 Ti SC2 sporting iCX Technology is one of those new models. Today’s story gives you my thoughts and impressions of this card in a review – one with fewer benchmarks than you are used to seeing, but one that covers all the primary points of differentiation from the reference/Founders Edition options.
The EVGA GTX 1080 Ti SC2 with iCX Technology takes the same GPU and memory technology shown off with the GTX 1080 Ti launch and gussies it up with higher clocks, a custom PCB with thermal sensors in 9 different locations, LEDs for externally monitoring the health of your card and a skeleton-like cooler design that is both effective and aggressive.
| | EVGA 1080 Ti SC2 | GTX 1080 Ti | Titan X (Pascal) | GTX 1080 | GTX 980 Ti | TITAN X | GTX 980 | R9 Fury X | R9 Fury |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPU | GP102 | GP102 | GP102 | GP104 | GM200 | GM200 | GM204 | Fiji XT | Fiji Pro |
| Base Clock | 1557 MHz | 1480 MHz | 1417 MHz | 1607 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1000 MHz |
| Boost Clock | 1671 MHz | 1582 MHz | 1480 MHz | 1733 MHz | 1076 MHz | 1089 MHz | 1216 MHz | - | - |
| Memory Clock | 11000 MHz | 11000 MHz | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz |
| Memory Interface | 352-bit | 352-bit | 384-bit G5X | 256-bit G5X | 384-bit | 384-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) |
| Memory Bandwidth | 484 GB/s | 484 GB/s | 480 GB/s | 320 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 512 GB/s | 512 GB/s |
| TDP | 250 watts | 250 watts | 250 watts | 180 watts | 250 watts | 250 watts | 165 watts | 275 watts | 275 watts |
| Peak Compute | 11.1 TFLOPS | 10.6 TFLOPS | 10.1 TFLOPS | 8.2 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS |
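As a quick sanity check on the table, the bandwidth row falls straight out of the effective memory data rate and the bus width. For the two GTX 1080 Ti entries, for example:

$$ \frac{11\,\text{Gb/s per pin} \times 352\,\text{pins}}{8\,\text{bits per byte}} = 484\,\text{GB/s} $$

The same arithmetic with 10 Gb/s on a 384-bit bus yields the Titan X (Pascal)'s 480 GB/s.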
Out of the box, EVGA has overclocked the GTX 1080 Ti SC2 above reference specs. With a base clock of 1557 MHz and a GPU Boost clock of 1671 MHz, it has a 77 MHz jump on base and an 89 MHz jump on boost. Though moderate by some overclockers’ standards, that’s a healthy increase of 5.6% over the typical boost clock rate. The memory speed remains 11.0 Gbps on 11GB, unchanged from the Founders Edition.
I’m not going to walk through the other specifications of the GeForce GTX 1080 Ti GPU in general – I assume if you are looking at this story you are already well aware of its features and capabilities. If you need a refresher on this oddly-designed 352-bit memory bus behemoth, just read over the first page of my GeForce GTX 1080 Ti launch review.
Introduction, Specifications, and Requirements
Finally! Optane Memory sitting in our lab! Sure, it’s not the mighty P4800X we remotely tested over the past month, but this is right here, sitting on my desk. It’s shipping, too, meaning it could be sitting on your desk (or more importantly, in your PC) in just a matter of days.
The big deal about Optane is that it uses XPoint Memory, which has fast-as-lightning (faster, actually) response times of less than 10 microseconds. Compare this to the fastest modern NAND flash at ~90 microseconds, and the differences are going to add up fast. What’s wonderful about these response times is that they still hold true even when scaling an Optane product all the way down to just one or two dies of storage capacity. When you consider that managing fewer dies means less work for the controller, we can see latencies fall even further in some cases (as we will see later).
Introduction and Features
Earlier this year we looked at the Enigma 850W PSU from Riotoro, their first entry into the PC power supply market. Today we are going to take a detailed look at two more power supplies in Riotoro’s new Onyx Series. Formed in 2014 and based in California, Riotoro originally started their PC hardware business with a focus on cases, mice, and LED fans targeted towards the gaming community. They are continuing to expand their product offerings to include two new power supply lines, the Enigma and Onyx Series, along with two liquid CPU coolers and several RGB gaming keyboards.
Riotoro announced the introduction of the three power supplies last year at Computex 2016: the Enigma 850W, Onyx 750W, and Onyx 650W. All three power supplies were developed in partnership with Great Wall and are based on a new platform designed to hit the sweet spot for practical real-world performance, reliability, and price. The Enigma line kicked off with the 850W unit and the Onyx line will initially be available in 650W and 750W models.
The main differences between the Enigma Series and the Onyx Series are listed in the table above. The Riotoro Onyx Series power supplies are certified to comply with the 80 Plus Bronze criteria for efficiency, come with semi-modular cables, and both use a quiet 120mm variable speed fan for cooling.
Riotoro Onyx Series PSU Key Features:
• 650W or 750W Continuous DC output at up to 40°C
• 80 PLUS Bronze level efficiency certification
• Semi-modular cables
• Quiet 120mm cooling fan
• Japanese made bulk (electrolytic) capacitors
• Compatible with Intel and AMD processors and motherboards
• Active Power Factor correction with Universal AC input (100 to 240 VAC)
• Safety protections: OVP, UVP, OCP, OPP, and SCP
• 3-Year warranty
• MSRP: Onyx 650W $64.99 and Onyx 750W $74.99 USD
Introduction and Specifications
XPoint. Optane. QuantX. We've been hearing these terms thrown around for two years now. They all refer to a form of 3D-stackable non-volatile memory that promised 10x the density of DRAM and 1000x the speed and endurance of NAND. These were bold statements, and over the following months we would see them misunderstood and misconstrued by many in the industry. These misconceptions were further amplified by some poor demo choices on the part of Intel (fortunately countered by some better choices made by Micron). Cooler heads ultimately prevailed, as Jim Handy and other industry analysts helped explain that a 1000x improvement at the die level does not translate into the same improvement at the device level, especially when the first round of devices must comply with what will soon become a legacy method of connecting a persistent storage device to a PC.
Did I just suggest that PCIe 3.0 and the NVMe protocol – developed just for high-speed storage – are already legacy tech? Well, sorta.
The 'Future NVM' bar at the bottom of the chart above was a two-year-old prototype iteration of what is now Optane. Note that while NVMe was able to shrink the yellow bar a bit, as you introduce faster and faster storage, the rest of the equation (meaning software, including the OS kernel) has a larger and larger impact in limiting the ultimate speed of the device.
NAND Flash simplified schematic (via Wikipedia)
Before getting into the first retail product to push all of these links in the storage chain to the limit, let's explain how XPoint works and what makes it faster. Taking random writes as an example, NAND flash (above) must program cells in pages and erase cells in blocks. As modern flash has increased in capacity, the sizes of those pages and blocks have scaled up roughly proportionally; today we are at page sizes >4KB and block sizes in the megabytes. When it comes to randomly writing to an already-full section of flash, simply changing the contents of one byte on one page requires clearing and rewriting the entire block. The ratio between what you wanted to write and what the flash had to rewrite to accomplish that operation is called the write amplification factor. It's something that must be dealt with in flash memory management, but for XPoint it is a completely different story:
XPoint is bit-addressable. The 'cross' structure means you can select very small groups of data via wordlines, with the selection ultimately resolving down to a single bit.
Since the programmed element effectively acts as a resistor, its state is read directly and quickly. Even better, none of that write amplification nonsense mentioned above applies here at all. There are no pages or blocks; if you want to write a byte, go ahead. Better still, bits can be changed regardless of their former state, meaning no erase or clear cycle must take place before writing – you just overwrite directly over what was previously stored. Is that 1000x faster / 1000x more write endurance than NAND thing starting to make more sense now?
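To put the NAND half of that comparison into rough numbers, here is a toy worst-case calculation. The 16 KB page and 4 MB block sizes are hypothetical (real geometries vary by flash generation), but they are in the right ballpark:

```cpp
// Worst-case NAND write amplification for a one-byte random write:
// the entire erase block must be read, erased, and rewritten.
#include <cstdio>

int main() {
    const double page_bytes  = 16.0 * 1024;        // program unit (hypothetical)
    const double block_bytes = 4.0 * 1024 * 1024;  // erase unit (hypothetical)
    const double host_bytes  = 1.0;                // what we actually changed

    printf("pages rewritten: %.0f\n", block_bytes / page_bytes);
    printf("write amplification factor: %.0fx\n", block_bytes / host_bytes);
    return 0;
}
```

Real drives avoid this worst case with remapping, over-provisioning, and garbage collection, but that bookkeeping itself costs latency. XPoint's bit addressability makes the entire problem vanish.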
Ok, with all of the background out of the way, let's get into the meat of the story. I present the P4800X:
Introduction and First Impressions
The A4-SFX takes the minimalist, full-length-GPU-capable mini-ITX chassis design down to stunningly compact dimensions, and does so with a precise all-aluminum build and refined industrial design. Created by the one-man company DAN Cases and funded on Kickstarter, the A4-SFX shares the spirit of the crowdfunded NCASE M1 that preceded it, but takes even that tiny enclosure's dimensions down considerably. It is, as the company puts it, "the world's smallest gaming tower case".
What was omitted to bring the size down this far? Comparing the A4-SFX to the aforementioned NCASE M1 (an inevitability as both were crowd-funded and manufactured by Lian Li), the A4-SFX drops support for compact ATX power supplies in favor of SFX/SFX-L units, and CPU cooling is limited to a height of 48 mm, with no liquid cooling support. Many low-profile CPU coolers - including Intel’s stock design - fit this description, but the cooling limitation suggests stock CPU speeds are the tradeoff for such a compact case design.
So how compact is this case, exactly? The A4-SFX has a volume of just 7.25L compared to the NCASE M1 at 12.6L. Yet the A4-SFX can still house a powerful, gaming-ready system with standard components including a full sized GPU (up to 295 mm in length) and any mini-ITX motherboard and CPU.
What is old is new again
Trust me on this one – AMD is aware that launching the RX 500-series of graphics cards, including the RX 580 we are reviewing today, is an uphill battle. Besides battling the sounds on the hills that whisper “reeebbrraannndd”, AMD needs to work with its own board partners to offer up total solutions that compete well with NVIDIA’s stronghold on the majority of the market. Just putting out Radeon RX 580 and RX 570 cards with the same coolers and specs as the RX 400-series would be a recipe for ridicule. AMD knows this and is being surprisingly proactive in the story it is telling consumers and the media.
- If you already own a Radeon RX 400-series card, the RX 500-series is not expected to be an upgrade path for you.
- The Radeon RX 500-series is NOT based on Vega. Polaris here everyone.
- Target users are those with Radeon R9 380 class cards and older – Polaris is still meant as an upgrade for that very large user base.
The story being told is more compelling than you might expect. With more than 500 million gamers using graphics cards two years old or older, based on Steam survey data, there is a HUGE audience that would benefit from an RX 580 graphics card upgrade. Older cards may lack support for FreeSync, HDR, higher refresh rate HDMI output, and hardware encode/decode for 4K content. And while the GeForce GTX 1060 family would also check those boxes, AMD wants to make the case that the Radeon family is the way to go.
The Radeon RX 500-series is based on the same Polaris architecture as the RX 400-series, though AMD would tell us that the technology has been refined since initial launch. More time with the 14nm FinFET process technology has given the fab facility, and AMD, some opportunities to refine. This gives the new GPUs the ability to scale to higher clocks than they could before (though not without the cost of additional power draw). AMD has tweaked multi-monitor efficiency modes, allowing idle power consumption to drop a handful of watts thanks to a tweaked pixel clock.
Maybe the most substantial change with the RX 580 release is the removal of power consumption constraints for board partners. The Radeon RX 480 launch was marred by issues surrounding the amount of power AMD claimed the boards would use compared to how much they DID use. This time around, all RX 580 graphics cards will ship with AT LEAST an 8-pin power connector, allowing overclocked models to use as much as 225 watts. Some cards will have an 8+6-pin configuration to go even higher. Considering the RX 480 launched with a supposed 150 watt TDP (that it never lived up to), that’s quite an increase.
AMD is hoping to convince gamers that Radeon Chill is a good solution for some specific instances of excessive power draw. Recent drivers have added support for games like League of Legends and DOTA 2, adding to The Witcher 3, Deus Ex: Mankind Divided, and more. I will freely admit that while the technology behind Chill sounds impressive, I don’t yet have enough experience with it to confirm or refute its supposed ability to cut power draw without sacrificing the user experience.