The Expected Unexpected
Last night we received word that Raja had resigned from AMD while on the sabbatical he took after the launch of Vega. The initial statement was that Raja would come back to resume his position at AMD in a December/January timeframe. Even then there was some doubt as to whether Raja would in fact return, as “sabbaticals” in the tech world often lead the individual to take stock of their situation and move on to what they consider greener pastures.
Raja has dropped by the PCPer offices in the past.
Initially it was thought that Raja would take the time off and then eventually jump to another company and tackle the issues there. This behavior is quite common in Silicon Valley, and Raja is no stranger to it. Raja cut his teeth on 3D graphics at S3, but in 2001 he moved to ATI. While there he worked on a variety of programs, including the original Radeon, the industry-changing Radeon 9700 series, and finally the strong HD 4000 series of parts. During this time ATI was acquired by AMD, and he became one of the top graphics gurus at that company. In 2009 he quit AMD and moved on to Apple, where he was Director of Graphics Architecture, though little is known about what he actually did there. During that time Apple utilized AMD GPUs and licensed Imagination Technologies graphics technology. Apple could have been working on developing its own architecture at this point, which has recently shown up in the latest iPhone products.
In 2013 Raja rejoined AMD as a corporate VP of Visual Computing, and in 2015 he was promoted to lead the Radeon Technologies Group after Lisa Su became CEO of the company. While there Raja worked to get AMD back on an even footing under pretty strained conditions. AMD had not had the greatest of years and had seen its primary moneymakers start taking on water. AMD had competitive graphics for the most part, and the Radeon technology integrated into AMD’s APUs truly was class leading. On the discrete side AMD compared favorably to NVIDIA with the HD 7000 and later R9 200 series of cards, but after NVIDIA released its Maxwell-based chips, AMD had a hard time keeping up. The general consensus here is that RTG saw its headcount reduced by company-wide cuts, along with a decrease in R&D funding.
Here comes a new challenger
The release of the GeForce GTX 1070 Ti has been an odd adventure. Launched into a narrow window of the product stack between the GTX 1070 and the GTX 1080, the GTX 1070 Ti is a result of competition from the AMD RX Vega product line. Sure, NVIDIA might have specced out and prepared an in-between product for some time, but it was the release of competitive high-end graphics cards from AMD (for the first time in what seems like forever) that pushed NVIDIA to launch the product you see before you today.
With MSRPs of $399 and $499 for the GTX 1070 and GTX 1080 respectively, a new product that fits between them performance wise has very little room to stretch its legs. Because of that, there are some interesting peculiarities involved with the release cycle surrounding overclocks, partner cards, and more.
But before we get into that concoction, let’s first look at the specifications of this new GPU option from NVIDIA as well as the reference Founders Edition and EVGA SC Black Edition cards that made it to our offices!
GeForce GTX 1070 Ti Specifications
We start with our classic table of details.
| | RX Vega 64 Liquid | RX Vega 64 Air | RX Vega 56 | Vega Frontier Edition | GTX 1080 Ti | GTX 1080 | GTX 1070 Ti | GTX 1070 |
|---|---|---|---|---|---|---|---|---|
| Base Clock | 1406 MHz | 1247 MHz | 1156 MHz | 1382 MHz | 1480 MHz | 1607 MHz | 1607 MHz | 1506 MHz |
| Boost Clock | 1677 MHz | 1546 MHz | 1471 MHz | 1600 MHz | 1582 MHz | 1733 MHz | 1683 MHz | 1683 MHz |
| Memory Clock | 1890 MHz | 1890 MHz | 1600 MHz | 1890 MHz | 11000 MHz | 10000 MHz | 8000 MHz | 8000 MHz |
| Memory Interface | 2048-bit HBM2 | 2048-bit HBM2 | 2048-bit HBM2 | 2048-bit HBM2 | 352-bit G5X | 256-bit G5X | 256-bit | 256-bit |
| Memory Bandwidth | 484 GB/s | 484 GB/s | 410 GB/s | 484 GB/s | 484 GB/s | 320 GB/s | 256 GB/s | 256 GB/s |
| TDP | 345 watts | 295 watts | 210 watts | 300 watts | 250 watts | 180 watts | 180 watts | 150 watts |
| Peak Compute | 13.7 TFLOPS | 12.6 TFLOPS | 10.5 TFLOPS | 13.1 TFLOPS | 11.3 TFLOPS | 8.2 TFLOPS | 7.8 TFLOPS | 5.7 TFLOPS |
If you have followed the leaks and stories over the last month or so, the information here isn’t going to be a surprise. The CUDA core count of the GTX 1070 Ti is 2432, only one SM unit less than the GTX 1080. Base and boost clock speeds are the same as the GTX 1080. The memory system includes 8GB of GDDR5 running at 8 GHz, matching the performance of the GTX 1070 in this case. The TDP gets a bump up to 180 watts, in line with the GTX 1080 and slightly higher than the GTX 1070.
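If you want to sanity-check the table, the headline figures fall out of two standard formulas: peak FP32 compute is shader count times clock times 2 (one fused multiply-add per core per cycle), and memory bandwidth is bus width times effective data rate. A quick sketch of that arithmetic (ours, not NVIDIA's spec sheet):

```python
def peak_tflops(cuda_cores: int, clock_mhz: float) -> float:
    """Peak single-precision throughput in TFLOPS (2 FLOPs per core per cycle via FMA)."""
    return cuda_cores * clock_mhz * 1e6 * 2 / 1e12

def bandwidth_gbs(bus_width_bits: int, data_rate_mhz: float) -> float:
    """Peak memory bandwidth in GB/s (bus width in bytes times effective data rate)."""
    return bus_width_bits / 8 * data_rate_mhz * 1e6 / 1e9

# GTX 1070 Ti: 2432 cores at the 1607 MHz base clock, 256-bit GDDR5 at 8 GHz effective
print(round(peak_tflops(2432, 1607), 1))   # ~7.8 TFLOPS, matching the table
print(round(bandwidth_gbs(256, 8000)))     # ~256 GB/s, matching the table
```

Note that the compute figures in the table are rated at base clock; real boost behavior will push actual throughput higher.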
Overview and CPU Performance
When Intel announced their quad-core mobile 8th Generation Core processors in August, I was immediately interested. As a user who gravitates towards "Ultrabook" form-factor notebooks, it seemed like a no-brainer—gaining two additional CPU cores with no power draw increase.
However, the hardware reviewer in me was skeptical. Could this "Kaby Lake Refresh" design really fit two more physical cores on a die while maintaining the same 15W TDP? Would this mean that the processor fans would have to run out of control? What about battery life?

Now that we have our hands on our first two notebooks with the Core i7-8550U inside, it's time to take a more in-depth look at Intel's first mobile offerings of the 8th Generation Core family.
Providers and Devices
"Cutting the Cord," the process of ditching traditional cable and satellite content providers for cheaper online-based services, is nothing new. For years, consumers have cancelled their cable subscriptions (or declined to even subscribe in the first place), opting instead to get their entertainment from companies like Netflix, Hulu, and YouTube.
But the recent introduction of online streaming TV services like Sling TV, new technologies like HDR, and the slow online adoption of live local channels has made the idea of cord cutting more complicated. While cord cutters who are happy with just Netflix and YouTube need not worry, what are the solutions for those who don't like the idea of high cost cable subscriptions but also want to preserve access to things like local channels and the latest 4K HDR content?
This article is the first in a three-part series that will look at this "high-end" cord cutting scenario. We'll be taking a look at the options for online streaming TV, access to local "OTA" (over the air) channels, and the devices that can handle it all, including DVR support, 4K output, and HDR compliance.
There are two approaches that you can take when considering the cord cutting process. The first is to focus on capabilities: Do you want 4K? HDR? Lossless surround sound audio? Voice search? Gaming?
The second approach is to focus on content: Do you want live TV or à la carte downloads? Can you live without ESPN, or must it and your other favorite networks still be available? Are you heavily invested in iTunes content? Perhaps most importantly for those concerned with the "Spousal Acceptance Factor" (SAF), do you want the majority of your content contained in a single app, which can keep you and your family members from having to jump between apps or devices to find what you want?
While most people on the cord cutting path will consider both approaches to a certain degree, it's easier to focus on the one that's most important to you, as that will make other choices involving devices and content easier. Of course, there are those of us out there that are open to purchasing and using multiple devices and content sources at once, giving us everything at the expense of increased complexity. But most cord cutters, especially those with families, will want to pursue a setup based around a single device that accommodates most, if not all, of their needs. And that's exactly what we set out to find.
Introduction, Specifications and Packaging
It’s been two long years since we first heard about 3D XPoint technology. Intel and Micron serenaded us with tales of ultra-low latency and very high endurance, but when would we have this new media in our hot little hands? We got a taste of things with Optane Memory (caching) back in April, and later that same month we got a much bigger, albeit remotely tested, taste in the form of the P4800X. Since April all has been quiet, with all of us storage freaks waiting for a consumer version of Optane with enough capacity to act as a system drive. Sure, we’ve played around with Optane Memory parts in various forms of RAID, but as we found in our testing, Optane’s strongest benefits are the very performance traits that do not effectively scale with additional drives added to an array. The preferred route is to just get a larger single SSD with more 3D XPoint memory installed on it, and we have that very thing today (in two separate capacities, no less)!
You might have seen various rumors centered around the 900P lately. The first is that the 900P would supposedly support PCIe 4.0. This is not true, and after digging back a bit it appears to be a case of a foreign vendor confusing PCIe x4 (four lanes) with the recently drafted PCIe 4.0 specification. Another set of rumors centered around pre-order listings and potential pricing for the 280GB and 480GB variants of the 900P. We are happy to report that those prices (at the time of this writing) are way higher than Intel’s stated MSRPs for these new models. I’ll even go as far as to say that the 480GB model can be had for less than what the 280GB model is currently listed for! More on that later in the review.
Performance specs are one place where the rumors were all true, but since all the folks had to go on was a leaked Intel press deck slide listing figures identical to the P4800X, we’re not really surprised here.
Lots of technical stuff above, but the high points are <10µs typical latency (‘regular’ SSDs run between 60-100µs), 2.5/2.0 GB/s sequential read/write, and 550k/500k IOPS random read/write. Yes I know, don’t tell me, you’ve seen higher sequentials on smaller form factor devices. I agree, and we’ve even seen higher maximum performance from unreleased 3D XPoint-equipped parts from Micron, but Intel has done what it needed to do to make this a viable shipping retail product, which likely means sacrificing the ‘megapixel race’ figures in favor of offering the lowest possible latencies and best possible endurance at this price point.
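To see why those latency figures matter more than the sequential numbers, consider queue depth 1, where the drive can only work on one request at a time and IOPS is simply the reciprocal of per-I/O latency. A back-of-envelope sketch (our arithmetic, not Intel's spec sheet):

```python
def qd1_iops(latency_us: float) -> float:
    """IOPS achievable at queue depth 1, where throughput = 1 / per-I/O latency."""
    return 1e6 / latency_us

print(round(qd1_iops(10)))   # Optane-class ~10 us latency  -> ~100,000 IOPS at QD1
print(round(qd1_iops(80)))   # typical NAND ~80 us latency  ->  ~12,500 IOPS at QD1
```

This is why a low-latency part can feel dramatically faster in light desktop workloads even when its peak sequential figures look unremarkable: most real-world consumer I/O happens at very low queue depths.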
Packaging is among the nicest we’ve seen from an Intel SSD. It actually reminds me of how the Fusion-io ioDrives used to come.
Also included with the 900P is a Star Citizen ship. The Sabre Raven has been a topic of gossip and speculation for months now, and it appears to be a pretty sweet looking fighter. For those unaware, Star Citizen is a space-based MMO, and with a ‘ship purchase’ also comes a license to play the game. The Sabre Raven counts as such a purchase and apparently comes with lifetime insurance, meaning it will always be tied to your account in case it gets blown up doing data runs. Long story short, you get the game for free with the purchase of a 900P.
A potential game changer?
I thought we were going to be able to make it through the rest of 2017 without seeing AMD launch another family of products. But I was wrong. And that’s a good thing. Today AMD is launching the not-so-cleverly-named Ryzen Processor with Radeon Vega Graphics product line that will bring the new Zen processor architecture and Vega graphics architecture onto a single die for the ultrathin mobile notebook platforms. This is no minor move for them – just as we discussed with the AMD EPYC processor launch, this is a segment that has been utterly dominated by Intel. After all, Intel created the term Ultrabook to target these designs, and though that brand is gone, the thin and light mindset continues to this day.
The claims AMD makes about its Ryzen mobile APU (a combination CPU+GPU “accelerated processing unit,” to use an older AMD term) are not made lightly. Right up front in our discussion I was told this is going to be the “world’s fastest for ultrathin” machines. Considering that AMD had previously been unable to even enter these markets with prior products, due to both technological and business roadblocks, AMD is taking a risk by painting this launch in such a light. Thanks to its ability to combine CPU and GPU technology on a single die, though, AMD has flexibility today that it simply did not have previously.
From the day AMD first announced the acquisition of ATI, the company has touted the long-term benefits of owning both a high-performance processor division and a graphics division. By combining the architectures on a single die, it could become greater than the sum of its parts, leveraging new software directions and the oft-discussed HSA (Heterogeneous System Architecture) that AMD helped lay the foundation for. Though the first rounds of APUs managed modest sales, the truth was that AMD’s advantage over Intel on the graphics front was often overshadowed by the performance and power efficiency advantages that Intel held on the CPU front.
But with the introduction of the first products based on Zen earlier this year, AMD has finally made good on the promises of catching up to Intel in many of the areas where it matters the most. The new from-the-ground-up design resulted in greater than 50% IPC gains, improved area efficiency compared to Intel’s latest Kaby Lake core design, and enormous gains in power efficiency compared to the previous CPU designs. When looking at the new Ryzen-based APU products with Vega built in, AMD claims that they tower over the 7th generation APUs with up to 200% more CPU performance, 128% more GPU performance, and 58% lower power consumption. Again, these are bold claims, but they give AMD confidence that it can now target premium designs and form factors with a solution that will meet consumer demands.
AMD is hoping that the release of the Ryzen 7 2700U and Ryzen 5 2500U can finally help turn the tides in the ultrathin notebook market.
| | Core i7-8650U | Core i7-8550U | Core i5-8350U | Core i5-8250U | Ryzen 7 2700U | Ryzen 5 2500U |
|---|---|---|---|---|---|---|
| Architecture | Kaby Lake Refresh | Kaby Lake Refresh | Kaby Lake Refresh | Kaby Lake Refresh | Zen+Vega | Zen+Vega |
| Base Clock | 1.9 GHz | 1.8 GHz | 1.7 GHz | 1.6 GHz | 2.2 GHz | 2.0 GHz |
| Max Turbo Clock | 4.2 GHz | 4.0 GHz | 3.8 GHz | 3.6 GHz | 3.8 GHz | 3.6 GHz |
| System Bus | DMI3 - 8.0 GT/s | DMI3 - 8.0 GT/s | DMI2 - 6.4 GT/s | DMI2 - 5.0 GT/s | N/A | N/A |
| Graphics | UHD Graphics 620 | UHD Graphics 620 | UHD Graphics 620 | UHD Graphics 620 | Vega (10 CUs) | Vega (8 CUs) |
| Max Graphics Clock | 1.15 GHz | 1.15 GHz | 1.1 GHz | 1.1 GHz | 1.3 GHz | 1.1 GHz |
The Ryzen 7 2700U will run 200 MHz higher on the base and boost clocks for the CPU and 200 MHz higher on the peak GPU core clock. Though both chips have four cores and eight threads, the GPU on the 2700U has two additional compute units (CUs).
Forza Motorsport 7 Performance
The first full Forza Motorsport title available for the PC, Forza Motorsport 7 on Windows 10 launched simultaneously with the Xbox version earlier this month. With native 4K assets, HDR support, and new visual features like fully dynamic weather, this title is an excellent showcase of what modern PC hardware can do.
Now that both AMD and NVIDIA have released drivers optimized for Forza 7, we've taken an opportunity to measure performance across an array of different GPUs. After some significant performance mishaps with last year's Forza Horizon 3 at launch on PC, we are excited to see if Forza Motorsport 7 brings any much-needed improvements.
For this testing, we used our standard GPU testbed, including an 8-core Haswell-E processor and plenty of memory and storage.
| PC Perspective GPU Testbed | |
|---|---|
| Processor | Intel Core i7-5960X Haswell-E |
| Motherboard | ASUS Rampage V Extreme X99 |
| Memory | G.Skill Ripjaws 16GB DDR4-3200 |
| Storage | OCZ Agility 4 256GB (OS), Adata SP610 500GB (games) |
| Power Supply | Corsair AX1500i 1500 watt |
| OS | Windows 10 x64 |
| Drivers | AMD: 17.10.1 (Beta) |
As with a lot of modern console-first titles, Forza 7 defaults to "Dynamic" image quality settings. This means that the game engine is supposed to find the best image settings for your hardware automatically, and dynamically adjust them so that you hit a target frame rate (adjustable between 30 and 60fps) no matter what is going on in the current scene that is being rendered.
While this is a good strategy for consoles, and even for casual PC gamers, it poses a problem for us trying to measure equivalent performance across GPUs. Luckily the developers of Forza Motorsport 7, Turn 10 Studios, still let you disable the dynamic control and configure the image quality settings as you desire.
One quirk, however, is that in order for V-Sync to be disabled, the rendering resolution within the game must match the native resolution of your monitor. This means that if you want to run 2560x1440 on your 4K monitor, you must first set the resolution within Windows to 2560x1440 in order to run the game with V-Sync off.
We did our testing with an array of three different resolutions (1080p, 1440p, and 4K) at maximum image quality settings. We tested both AMD and NVIDIA graphics cards in similar price and performance segments. The built-in benchmark mode for this game was used, which does feature some variance due to dynamic weather patterns. However, our testing within the full game matched the results of the benchmark mode closely, so we used it for our final results.
Right off the bat, I have been impressed at how well optimized Forza Motorsport 7 seems to be on the PC. Compared to the unoptimized disaster that was Forza Horizon 3 when it launched on PC last year, it's clear that Turn 10 Studios and Microsoft have come a long way.
Even gamers looking to play on a 4K display at 60Hz can seemingly get away with cheaper, more mainstream GPUs such as the RX 580 or the GTX 1060, with acceptable performance in most scenarios.

Gamers on high-refresh-rate displays don't have the same luxury. If you want to game at a resolution such as 2560x1440 at a full 144Hz, neither the RX Vega 64 nor the GTX 1080 will manage it at maximum image quality settings, although both GPUs appear to be within the margin where turning down a few settings would achieve your full refresh rate.
For some reason, the RX Vega cards didn't seem to show any scaling in performance when moving from 2560x1440 to 1920x1080, unlike the Polaris-based RX 580 and the NVIDIA options. We aren't quite sure of the cause of this and have reached out to AMD for clarification.
As far as frame times are concerned, we also gathered some data with our Frame Rating capture analysis system.
Taking a look at the first chart, we can see that while the GTX 1080 frame times are extremely consistent, the RX Vega 64 shows some additional variance.
However, the frame time variance chart shows that over 95% of the frame times of the RX Vega 64 come in at under 2ms of variance, which will still provide a smooth gameplay experience in most scenarios. This matches with our experience while playing on both AMD and NVIDIA hardware where we saw no major issues with gameplay smoothness.
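For readers curious how a figure like that is derived, frame time "variance" here means the difference between consecutive frame times. A minimal sketch of the calculation, using made-up frame times rather than our actual capture data:

```python
def variance_under(frame_times_ms, threshold_ms=2.0):
    """Fraction of frame-to-frame deltas that fall below threshold_ms."""
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    return sum(d < threshold_ms for d in deltas) / len(deltas)

# Hypothetical capture: mostly ~16.7 ms (60 fps) frames with two visible hitches
sample = [16.7, 16.9, 16.5, 18.0, 16.6, 16.8, 21.0, 16.7, 16.6, 16.9]
print(f"{variance_under(sample):.0%} of frame-time deltas under 2 ms")
```

A high percentage of small deltas is what produces the perception of smooth motion, even when the average frame rate of two cards is identical.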
Forza Motorsport 7 seems to be a great addition to the PC gaming world (if you don't mind using the Microsoft Store exclusively) and will run great on a wide array of hardware. Whether you have an NVIDIA or AMD GPU, you should be able to enjoy this fantastic racing simulator.
Introduction and Technical Specifications
Courtesy of ASUS
The ASUS Crosshair VI Hero board features a black PCB with a plastic armor overlay covering the board's rear panel and audio subsystem components. ASUS added RGB LED backlighting to the rear panel cover and chipset heat sink to illuminate the board and ASUS ROG logos, as well as under-board lighting along the sound PCB separator line. ASUS designed the board around the AMD X370 chipset, offering support for AMD's Ryzen processor line and dual-channel DDR4 memory running at 2400 MHz. The Crosshair VI Hero motherboard can be found in the wild at an MSRP of $254.99.
Courtesy of ASUS
To power the Ryzen CPU, ASUS integrated a 12 phase digital power delivery system into the Crosshair VI Hero, providing enough juice to push your CPU to its limits. The following features have been integrated into the board: eight SATA III 6Gbps ports; an M.2 PCIe Gen3 x4 32Gbps capable port; an RJ-45 port featuring the Intel I211-AT Gigabit NIC; three PCI-Express x16 slots; two PCI-Express x1 slots; the ASUS SupremeFX S1220 8-Channel audio subsystem; integrated DVI-D and HDMI video ports; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.
Courtesy of ASUS
For superior audio performance, ASUS built the Crosshair VI Hero's audio subsystem around the SupremeFX CODEC, featuring Nichicon audio capacitors, switching MOSFETs, a high-precision clock source, an ESS ES9023P DAC, and an RC4580 audio buffer.
Courtesy of ASUS
To appease their AMD user population, ASUS designed the CPU cooler mount for compatibility with both the AM3 and AM4 style coolers. This gives users a wider selection of cooling solutions available to use with the board.
How a ThinkPad is born
During Lenovo's recent ThinkPad 25th Anniversary Event in Yokohama, Japan, we were given an opportunity to learn a lot about the evolution of the ThinkPad brand over the years.
One of the most significant sources of pride mentioned by the Lenovo executives in charge of the ThinkPad division during this event was the team's Yamato Laboratory. Formerly located in Yamato City (hence the name) and relocated to Yokohama in 2011, the Yamato Labs have been responsible for every ThinkPad product, dating back to the IBM days and the original ThinkPad 700C.
This continuity from the earliest days of ThinkPad has helped provide a standard of quality and education passed down from engineer to engineer over the last 25 years of the ThinkPad brand. In fact, some of the original engineers from 1992 are still with the company and working on the latest and greatest ThinkPad innovations. It's impressive to see such continuity and pride in the Japanese development team considering Lenovo's acquisition of the brand back in 2005.
One of the most exciting things was a peek at some of the tests that every device bearing the ThinkPad name must go through, including non-notebook devices like the X1 Tablet.
Introduction and Features
FSP’s new Hydro PTM lineup is part of their top-tier Premium Series and currently includes three models: 750W, 650W, and 550W. We will be taking a detailed look at the 750W Platinum model in this review. FSP Group has been designing and building PC power supplies under its own brand since 2003. Not only does it market power supplies under the FSP name, but it is also the OEM for many other big-name brands. Now, you might be thinking “Hydro” refers to water cooling, but as we saw last year with the Hydro G series power supplies, the Platinum Series all use conventional air cooling. The “Hydro” actually refers to the hydro dynamic bearing used in the cooling fan (more commonly referred to as an FDB, or fluid dynamic bearing).
FSP developed the Hydro Platinum Series with an advanced thermal layout design. The units come with all modular cables and are certified to comply with the 80 Plus Platinum efficiency criteria. The power supplies are designed to deliver tight voltage regulation with excellent AC ripple and noise suppression. All Hydro PTM Series power supplies incorporate a quiet 135mm cooling fan and they come backed with a 10-year warranty!
FSP Hydro PTM Series PSU Key Features:
• 550W, 650W or 750W continuous DC output @ 50°C
• High efficiency, 80 PLUS Platinum certified ≥92%
• Complies with the newest ATX12V v2.4 & EPS12V v2.92 standards
• 100% Japanese-made electrolytic capacitors
• Quiet 135mm Fluid Dynamic Bearing fan
• Powerful single +12V rail design
• Advanced thermal layout design
• Fully modular with flat ribbon-style cables
• SLI, Crossfire, and VR ready
• Protections: OVP, UVP, OCP, OPP, SCP and OTP
• 10-Year Manufacturer’s warranty
• $124.99 USD (Amazon.com, Oct. 2017)
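As a quick aside on what that Platinum rating means in practice: AC draw at the wall is DC output divided by efficiency, so the rating directly determines how much power is wasted as heat. A back-of-envelope sketch using the 92% Platinum floor quoted above (our arithmetic, not FSP's test data):

```python
def wall_draw_watts(dc_output_w: float, efficiency: float) -> float:
    """AC input power required at the wall for a given DC output and efficiency."""
    return dc_output_w / efficiency

# At the full 750 W rated load and the 80 Plus Platinum floor of 92% efficiency:
ac_draw = wall_draw_watts(750, 0.92)
print(round(ac_draw))              # ~815 W pulled from the wall
print(round(ac_draw - 750))        # ~65 W dissipated inside the PSU as heat
```

Actual efficiency varies with load (80 Plus specifies thresholds at 20%, 50%, and 100% load), so treat 92% as the worst case for a compliant Platinum unit.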
Specifications and Summary
As seems to be the trend for processor reviews as of late, today marks the second in a two-part reveal of Intel’s Coffee Lake consumer platform. We essentially know all there is to know about the new mainstream and DIY PC processors from Intel, including specifications, platform requirements, and even pricing; all that is missing is performance. That is the story we get to tell you today in our review of the Core i7-8700K and Core i5-8400.
Coffee Lake is the second spoke of Intel's “8th generation” wheel that began with the Kaby Lake-R release featuring quad-core 15-watt notebook processors for the thin and light market. Though today’s release of the Coffee Lake-S series (the S is the designation for consumer desktop) doesn’t share the same code name, it does share the same microarchitecture, same ring bus design (no mesh here), and same underlying technology. They are both built on the Intel 14nm process technology.
And much like Kaby Lake-R on the notebook front, Coffee Lake is here to raise the core count and performance profile of the mainstream Intel CPU playbook. When AMD launched the Ryzen 7 series of processors, bringing 8 cores and 16 threads of compute, it fundamentally shook the mainstream consumer market. Intel was still on top in terms of IPC and core clock speeds, giving it the edge in single and lightly threaded workloads, but AMD had released a part with double the core and thread count that was able to dominate most multi-threaded workloads compared to similar Intel offerings.
Much like Skylake-X before it, Coffee Lake had been on Intel’s roadmap from the beginning, but new pressure from a revived AMD meant bringing that technology to the forefront sooner rather than later in an effort to stem any potential shifts in market share and, maybe more importantly, mind share among investors, gamers, and builders. Coffee Lake, and the Core i7, Core i5, and Core i3 processors that will be a part of this 8000-series release, increase the core count across the board, and generally raise clock speeds too. Intel is hoping that by bumping its top mainstream CPU to 6 cores, and coupling that with better IPC and higher clocks, it can alleviate the advantages that AMD has with Ryzen.
But does it?
That’s what we are here to find out today. If you need a refresher on the build up to this release, we have the specifications and slight changes in the platform and design summarized for you below. Otherwise, feel free to jump on over to the benchmarks!
We've been hearing about Intel's VROC (NVMe RAID) technology for a few months now. ASUS began slipping clues into their X299 motherboard releases back in May. The idea was very exciting, as prior NVMe RAID implementations on Z170 and Z270 platforms were bottlenecked by the chipset's PCIe 3.0 x4 DMI link to the CPU, and they also had to trade away SATA ports for M.2 PCIe lanes to accomplish the feat. X99 motherboards supported SATA RAID and even sported four additional ports, but they were left out of bootable NVMe RAID altogether. It would be foolish of Intel to launch a successor to its higher-end workstation-class platform without a feature available in two (soon to be three) generations of its consumer platform.
To get a grip on what VROC is all about, let's set up some context with a few slides:
First, we have a slide laying out what the acronyms mean:
- VROC = Virtual RAID on CPU
- VMD = Volume Management Device
What's a VMD you say?
...so the VMD is extra logic present on Intel Skylake-SP CPUs, which enables the processor to group up to 16 lanes of storage (4x4) into a single PCIe storage domain. There are three VMD controllers per CPU.
VROC is the next logical step, and takes things a bit further. While boot support is restricted to within a single VMD, PCIe switches can be added downstream to create a bootable RAID possibly exceeding 4 SSDs. So long as the array need not be bootable, VROC enables spanning across multiple VMDs and even across CPUs!
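Putting those figures together, the drive-count ceilings fall out of simple arithmetic: three VMD controllers per CPU, up to 16 lanes per VMD, and four lanes per typical NVMe SSD. A quick sketch (our own math based on the slides, not an Intel sizing tool):

```python
# Per-CPU VMD topology figures from Intel's Skylake-SP slides
VMDS_PER_CPU = 3     # VMD controllers per CPU
LANES_PER_VMD = 16   # PCIe lanes groupable into one VMD storage domain (4x4)
LANES_PER_SSD = 4    # lanes used by a typical NVMe SSD

ssds_per_vmd = LANES_PER_VMD // LANES_PER_SSD   # ceiling for a bootable array
ssds_per_cpu = VMDS_PER_CPU * ssds_per_vmd      # spanning all VMDs (non-bootable)

print(ssds_per_vmd)   # 4 SSDs per VMD domain without a PCIe switch
print(ssds_per_cpu)   # 12 SSDs per CPU when spanning VMDs
```

Downstream PCIe switches can raise the per-VMD count beyond four drives, and non-bootable arrays can span VMDs and even CPUs, as noted above.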
Assembling the Missing Pieces
Unlike prior Intel storage technology launches, the VROC launch has been piecemeal at best and contradictory at worst. We initially heard that VROC would only support Intel SSDs, but Intel later published a FAQ that stated 'selected third-party SSDs' would also be supported. One thing they have remained steadfast on is the requirement for a hardware key to unlock RAID-1 and RAID-5 modes - a seemingly silly requirement given their consumer chipset supports bootable RAID-0,1,5 without any key requirement (and VROC only supports one additional SSD over Z170/Z270/Z370, which can boot from 3-drive arrays).
On the 'piecemeal' topic, we need three things for VROC to work:
- BIOS support for enabling VMD Domains for select groups of PCIe lanes.
- Hardware for connecting a group of NVMe SSDs to that group of PCIe lanes.
- A driver for OS mounting and managing of the array.
Let's run down this list and see what is currently available:
Check. Hardware for connecting multiple drives to the configured set of lanes?
Check (960 PRO pic here). Note that the ASUS Hyper M.2 X16 Card will only work on motherboards supporting PCIe bifurcation, which allows the CPU to split PCIe lanes into subgroups without the need of a PLX chip. You can see two bifurcated modes in the above screenshot - one intended for VMD/VROC, while the other (data) selection enables bifurcation without enabling the VMD controller. This option presents the four SSDs to the OS without the need of any special driver.
With the above installed, and the slot configured for VROC in the BIOS, we are greeted by the expected disappointing result:
Now for that pesky driver. After a bit of digging around the dark corners of the internet:
Check! (well, that's what it looked like after I rapidly clicked my way through the array creation)
Don't even pretend like you won't read the rest of this review!
A New Standard
With a physical design that is largely unchanged other than the addition of a glass back for wireless charging support, and featuring incremental improvements to the camera system (most notably in the Plus version), the iPhone 8 and 8 Plus are interesting largely due to the presence of a new Apple SoC. The upcoming iPhone X (pronounced "ten") stole the show at Apple's keynote announcement earlier this month, but the new A11 Bionic chip powers all 2017 iPhone models, and for the first time Apple has a fully custom GPU after its highly publicized split with Imagination Technologies, makers of the PowerVR graphics found in previous Apple SoCs.
The A11 Bionic powering the 2017 iPhones contains Apple’s first 6-core processor, composed of two high-performance cores (code-named ‘Monsoon’) and four high-efficiency cores (code-named ‘Mistral’). Hugely important to its performance is the fact that all six cores are addressable with this new design, as Apple mentions in its description of the SoC:
"With six cores and 4.3 billion transistors, A11 Bionic has four efficiency cores that are up to 70 percent faster than the A10 Fusion chip, and two performance cores that are up to 25 percent faster. The CPU can even harness all six cores simultaneously when you need a turbo boost."
Previous Apple SoCs had to rely on improvements to IPC and clock speed to boost per-core performance. The previous A10 Fusion part contained a quad-core CPU split in an even arrangement of 2x performance + 2x efficiency cores, but that quad-core design did not improve app performance beyond the two performance cores, with the additional cores limited to background tasks in real-world use (and, as we saw, the A10 Fusion did not provide any improvement to battery life over previous efforts).
The A11 Bionic on the iPhone 8 system board (image credit: iFixit)
Just how big an impact this new six-core CPU design has can be instantly observed in the CPU benchmarks to follow, and on the next page we will find out how Apple's in-house GPU solution compares to both the previous A10 Fusion's PowerVR graphics and the market-leading Qualcomm Adreno 540 found in the Snapdragon 835. We will begin with the CPU benchmarks.
When we first saw the product page for the Marseille mCable Gaming Edition, a wave of skepticism washed across the PC Perspective offices. At first blush, an HDMI cable that claims to improve image quality while gaming sounds like the snake oil that "audiophile" companies like AudioQuest have been peddling for years.
However, after looking into some of the more technical details offered by Marseille, their claims seemed more and more plausible. Using a signal processor embedded inside the HDMI connector itself, Marseille appears to manipulate the video signal to improve quality in ways applicable to gaming. Specifically, their claim of anti-aliasing applied to any video signal has us interested.
So for curiosity's sake, we ordered the $150 mCable Gaming Edition and started to do some experimentation.
Even from the initial unboxing, there are some unique aspects to the mCable. First, you might notice that the connectors are labeled "Source" and "TV." Since the mCable has a signal processor in it, this distinction, which is normally meaningless, starts to matter a great deal.
Similarly, on the "TV" side, there is a USB cable used to power the signal processing chip. Marseille claims that most modern TVs with USB connections will be able to power the mCable.
While much of Marseille's marketing material focuses on upgrading the visual fidelity of console games that lack adjustable image quality settings, we decided to take aim at a market segment we are intimately familiar with: PC gaming. Since we can selectively turn off anti-aliasing in a given game, and PC games usually implement several types of AA, this seemed like the most interesting testing methodology.
Specifications and Architecture
It has been an interesting 2017 for Intel. Though still the dominant market share leader in consumer processors of all shapes and sizes, from DIY PCs to notebooks to servers, it has come under pressure from AMD unlike any it has felt in nearly a decade. It started with the release of Ryzen 7 and a family of processors aimed at the mainstream and enthusiast markets. That was followed by the EPYC processor release, moving in on Intel’s turf in the enterprise markets. And most recently, Ryzen Threadripper took a swing (and connected) at the HEDT (high-end desktop) market that Intel had created and held to itself since the days of the Nehalem-based Core i7-920 CPU.
Between the time Threadripper was announced and when it shipped, Intel made an interesting move: it announced and launched its updated family of HEDT processors, dubbed Skylake-X. Available only in a 10-core model at first, the Core i9-7900X was, at the time, the fastest processor we had tested in our labs. But it was rather quickly overtaken by the Threadripper 1950X, which ran with 16 cores and 32 threads of processing. Intel had already revealed that its HEDT lineup would scale up to 18-core options, though availability and exact clock speeds remained in hiding until recently.
| ||i9-7980XE||i9-7960X||i9-7940X||i9-7920X||i9-7900X||i7-7820X||i7-7800X||TR 1950X||TR 1920X||TR 1900X|
|Base Clock||2.6 GHz||2.8 GHz||3.1 GHz||2.9 GHz||3.3 GHz||3.6 GHz||3.5 GHz||3.4 GHz||3.5 GHz||3.8 GHz|
|Turbo Boost 2.0||4.2 GHz||4.2 GHz||4.3 GHz||4.3 GHz||4.3 GHz||4.3 GHz||4.0 GHz||4.0 GHz||4.0 GHz||4.0 GHz|
|Turbo Boost Max 3.0||4.4 GHz||4.4 GHz||4.4 GHz||4.4 GHz||4.5 GHz||4.5 GHz||N/A||N/A||N/A||N/A|
|Memory Support||DDR4-2666 Quad Channel||DDR4-2666 Quad Channel||DDR4-2666 Quad Channel||DDR4-2666 Quad Channel||DDR4-2666 Quad Channel||DDR4-2666 Quad Channel||DDR4-2400 Quad Channel||DDR4-2666 Quad Channel||DDR4-2666 Quad Channel||DDR4-2666 Quad Channel|
|TDP||165 watts||165 watts||165 watts||140 watts||140 watts||140 watts||140 watts||180 watts||180 watts||180 watts|
Today we are looking at both the Intel Core i9-7980XE and the Core i9-7960X, 18-core and 16-core processors, respectively. Intel's goal with this release is clear: retake the crown for the highest-performing consumer processor on the market. It does so, but at $700-1000 over the price of the Threadripper 1950X.
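To put that premium in concrete terms, here is a quick back-of-the-envelope sketch. It assumes the launch MSRPs of $1,999 and $1,699 for the two Intel parts and $999 for the Threadripper 1950X, figures from public launch pricing rather than from this review:

```python
# Launch MSRPs in USD (assumed from public launch pricing, not stated in this review)
prices = {
    "Core i9-7980XE": 1999,
    "Core i9-7960X": 1699,
    "Threadripper 1950X": 999,
}

# Premium each Skylake-X flagship commands over AMD's 16-core part
for chip in ("Core i9-7980XE", "Core i9-7960X"):
    premium = prices[chip] - prices["Threadripper 1950X"]
    print(f"{chip}: ${premium} more than the 1950X")
```

That difference is the entire price of a second high-end system's worth of parts, which frames the value question the benchmarks need to answer.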
Introduction and Technical Specifications
Courtesy of GIGABYTE
With the release of the Intel Z270 chipset, GIGABYTE unveiled its AORUS line of products. The AORUS branding differentiates the enthusiast- and gamer-friendly products from other GIGABYTE product lines, much as ASUS uses the ROG branding to differentiate its high performance product line. The Z270X-Gaming 8 is one of two "enhanced" boards in the AORUS line, factory-customized with a Bitspower-designed VRM hybrid water block. The board features the black and white styling common to the AORUS product line, with the rear panel cover and chipset carrying the brand logos. The board is designed around the Intel Z270 chipset with built-in support for the latest Intel LGA1151 Kaby Lake processor line (as well as Skylake processors) and dual channel DDR4 memory running at 2400MHz. The Z270X-Gaming 8 can be found at retail with an MSRP of $399.99.
Courtesy of GIGABYTE
Courtesy of GIGABYTE
GIGABYTE integrated the following features into the Z270X-Gaming 8 motherboard: four SATA III 6Gbps ports; two SATA-Express 10Gbps ports; two U.2 PCIe Gen3 x4 32Gbps ports; two M.2 PCIe Gen3 x4 32Gbps capable ports with Intel Optane support built-in; two RJ-45 GigE ports - an Intel I219-V Gigabit NIC and a Rivet Networks Killer E2500 NIC; a Rivet Networks Killer 802.11ac 2x2 Wireless adapter; four PCI-Express x16 slots; two PCI-Express x1 slots; Creative® Sound Core 3D 8-Channel audio subsystem; integrated DisplayPort and HDMI video ports; Intel Thunderbolt 40Gbps support; G-Chill hybrid VRM water block (designed by Bitspower); and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.
Courtesy of GIGABYTE
GIGABYTE partnered with Bitspower in designing the integrated cooling solution for the Z270X-Gaming 8 motherboard. The integrated VRM hybrid block, dubbed G-Chill by GIGABYTE, can operate with or without coolant. The block itself consists of a nickel-plated copper base plate, an acrylic top plate, a metal overplate, and a plastic cover to give it a unified appearance with the rest of the board components. The inlet and outlet ports are sealed with port covers by default, and are G1/4" threaded for use with any after-market water fittings currently available.
Thanks go out to CUK, Computer Upgrade King, for supplying the MSI GS63VR notebook for our testing and evaluation.
It's been a few weeks since we took a look at our first gaming notebook with NVIDIA's Max-Q design, the ASUS ROG Zephyrus. We briefly touched on the broad array of announced Max-Q notebooks in that review, and today we are taking a look at the MSI GS63VR Stealth Pro.
One of the first notebooks to feature the GTX 1070 with Max-Q Design, the MSI GS63VR has a more traditional notebook form factor than the GTX 1080-toting ASUS ROG Zephyrus. In fact, the GS series has been a long-running line of thin-and-light gaming notebooks from MSI. What is new is the availability of a GTX 1070-class option in this chassis; the GS63VR previously topped out with the GTX 1060 as the highest-end option.
|MSI GS63VR Stealth Pro-002 (configuration as reviewed)|
|Processor||Intel Core i7-7700HQ (Kaby Lake)|
|Graphics||NVIDIA GeForce GTX 1070 with Max-Q Design (8GB)|
|Screen||15.6-in 1920x1080 120Hz|
|Memory||32GB|
|Storage||512GB Samsung PM871a M.2 SATA SSD + 1TB Seagate 5400RPM HDD|
|Wireless||Intel 8265 802.11ac (2x2) + BT 4.1|
|Dimensions||379.98mm x 248.92mm x 17.53mm (14.96" x 9.80" x 0.69")|
|Weight||3.96 lbs. (1792 g)|
|OS||Windows 10 Pro|
|Price||$2399 - Newegg.com CUKUSA|
Taking a look at the exact notebook configuration we are testing, we find a well-equipped gaming notebook. In addition to the GTX 1070 Max-Q, we find a 45W quad-core mobile CPU from Intel, 32GB of system RAM, and plentiful storage options including both M.2 SSD and traditional 2.5" SATA drive configurations. This specific notebook is equipped with a SATA M.2 SSD, but the same M.2 port will also support PCIe devices.
Introduction and Features
Seasonic’s new FOCUS Plus family of power supplies currently includes two different series ranging from 550W up to 850W output capacity with either Platinum or Gold efficiency certification. Earlier this year we looked at the FOCUS Plus Gold (FX) 650W power supply and found it to be an excellent new addition to Seasonic’s lineup. In this review we will be taking a detailed look at the Seasonic FOCUS Plus Platinum (PX) 550W power supply. And to ensure that reviewers are not being sent hand-picked golden samples, Seasonic once again arranged to have our sample delivered straight from Newegg.com inventory.
The Seasonic FOCUS Plus Platinum series includes four models: 550W, 650W, 750W, and 850W. In addition to 80 Plus Platinum certification, the FOCUS Plus (PX) series features a small footprint chassis (140mm deep), all modular cables, high quality components, and comes backed by a 10-year warranty.
• FOCUS Plus Platinum (PX) 550W: $99.90 USD
• FOCUS Plus Platinum (PX) 650W: $109.90 USD
• FOCUS Plus Platinum (PX) 750W: $119.90 USD
• FOCUS Plus Platinum (PX) 850W: $139.90 USD
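Those price points make for an easy value comparison. A short sketch dividing each MSRP from the list above by rated continuous wattage shows the 750W model comes out cheapest per watt:

```python
# MSRP (USD) keyed by rated continuous output, from the price list above
models = {550: 99.90, 650: 109.90, 750: 119.90, 850: 139.90}

# Dollars per watt for each FOCUS Plus Platinum model
for watts, price in sorted(models.items()):
    print(f"PX {watts}W: ${price / watts:.3f} per watt")

# Identify the lowest cost-per-watt model in the lineup
best = min(models, key=lambda w: models[w] / w)
print(f"Best value per watt: PX {best}W")
```

Of course, cost per watt only matters if your build can actually load the larger unit; for a typical single-GPU system the 550W unit reviewed here is the more sensible buy.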
Seasonic FOCUS Plus 550W Platinum (PX) PSU Key Features:
• 550W Continuous DC output at up to 50°C
• 80 PLUS Platinum certified for high efficiency
• Small footprint: chassis measures just 140mm (5.5”) deep
• Tight voltage regulation ±3% (3.3V, 5V, and 12V)
• Fully-modular cables
• DC-to-DC Voltage converters
• Single +12V output
• Multi-GPU Technology support
• Quiet 120mm Fluid Dynamic Bearing (FDB) cooling fan
• Seasonic Hybrid Silent Fan Control
• Active Power Factor correction with Universal AC input (100 to 240 VAC)
• Safety protections: OPP, OVP, UVP, OCP, OTP and SCP
• 10-Year warranty
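The ±3% regulation figure above translates into concrete voltage windows for each rail. A minimal sketch of that arithmetic, assuming the tolerance applies symmetrically around nominal:

```python
# ±3% regulation window on each regulated rail (tolerance per Seasonic's spec above)
TOLERANCE = 0.03
rails_v = [3.3, 5.0, 12.0]

for nominal in rails_v:
    low = nominal * (1 - TOLERANCE)   # minimum in-spec voltage
    high = nominal * (1 + TOLERANCE)  # maximum in-spec voltage
    print(f"{nominal}V rail must stay between {low:.2f}V and {high:.2f}V")
```

So, for example, a +12V rail reading below 11.64V under load would fall outside the claimed regulation window.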
Here is what Seasonic has to say about their new FOCUS Plus 550W Platinum PSU:
“The fully modular, 80 PLUS® Platinum rated FOCUS+ Platinum 550 watt unit is based on the DC to DC Converter Design and its electric performance characteristics are impressive with their tight voltage regulation. These power supplies use high temperature Japanese capacitors to ensure stability under extreme conditions. Besides its stability and remarkable output characteristics, this unit has the highest power density in its class. Silence enthusiasts will appreciate the Seasonic Hybrid Silent Fan Control system that is the industry’s first three-phased advanced thermal control to achieve optimal balance between silence and cooling.
Due to its small form factor the FOCUS PLUS power supply is a great solution for those who want to build smaller systems and care about the airy arrangement inside their computer case. The fully modular cables are easy to install, save space inside the computer case and provide ample customization and upgrading possibilities at the same time. This mid power range power supply is not only small, but mean too. It boasts the highest power density in its class and it also has remarkable stability and output characteristics during operation.”
A Tale of Two Form-Factors
HyperX (a division of Kingston) entered the mechanical keyboard market a year ago with the Alloy series, which began as a pair of 104-key designs with the Alloy Elite and Alloy FPS. Both keyboards feature Cherry MX keys, with the FPS sporting a minimalist design and a compact frame to save room on a desk. Now a TKL (tenkeyless) version of the FPS has arrived, the FPS Pro, to complement the 104-key version already at the PC Perspective offices, and in this review we will test out both versions of this gaming keyboard.
Both keyboards feature adjustable red backlighting
Features from HyperX for the Alloy FPS:
- Compact design frees desktop space — waste less time reorienting the mouse
- Solid-steel frame for stability, giving you supreme confidence in your controls
- Ultra-portable design with detachable cable is great for LAN parties and tournaments
- Cherry MX mechanical keys for tactile feedback and reliable keypresses
- Convenient USB charge port allows you to charge other devices
- Game mode, 100-percent Anti-Ghosting and full N-key rollover features ensure your inputs are correct
- HyperX red backlit keys with customizable, dynamic lighting functions
- Additional colored, textured keycaps spotlight the most important keys
Now take virtually the same feature list (minus the additional keycaps) and subtract the number pad, and you have the Alloy FPS Pro, an “ultra-minimalistic tenkeyless design ideal for FPS pros”, according to HyperX. This reduction in size and number of keys is accompanied by a reduction in price, and the Alloy FPS Pro will be 20% less expensive than the 104-key FPS when it launches in late August. How do these mechanical keyboards stack up? Read on for our full review!