Despite its large global presence in smartphones, Huawei isn't a brand widely known to US consumers. While this has improved year by year with the introduction of unlocked phones under their Mate brand, I don't think most Americans realize just how large a consumer electronics company Huawei is.
One of the more recent categories that Huawei has entered is the Windows notebook and tablet market. Starting with the announcement of the original MateBook at Mobile World Congress in 2016 (see our subsequent review here), the MateBook line was expanded this year to include two traditional notebook form factors—the thin-and-light MateBook X, and the more mainstream MateBook D.
With the introduction of these new products, the 2-in-1 tablet formerly known as just the MateBook has been slightly revised and renamed to the MateBook E, the product that we are looking at today.
|Huawei MateBook E (configuration as reviewed)|
|Processor||Intel Core m3-7Y30|
|Graphics||Intel HD Graphics 615|
|Screen||12-in 2160x1440 IPS|
|Storage||128GB SanDisk SATA SSD|
|Wireless||Intel 8275 802.11ac + BT 4.2 (Dual Band, 2x2)|
|Connections||1 x USB 3.1 Gen 1 Type-C, audio combo jack|
|Dimensions||278.8mm x 194.1mm x 6.8mm (10.98" x 7.64" x 0.27")|
|Weight||2.43 lb (1100 g)|
|OS||Windows 10 Home|
|Price||$699 - Amazon.com|
A quiet facade
Iceberg Interactive, which you may know from games like Killing Floor or the Stardrive series, has released a new strategy game called Oriental Empires, and happened to send me a copy to try out.
On initial inspection it resembles recent Civilization games, but with a more focused design: you take on a tribe in ancient China and attempt to become Emperor, or at least make your neighbours sorry that they ever met you. Until you have been through 120 turns of the Grand Campaign you cannot access many of the tribes; that is not a bad thing, as that first game serves as your tutorial. Apart from an advisor popping up during turns or events, the game does not hold your hand and instead lets you figure things out on your own.
That minimalist ideal is carried throughout the entire game, offering one of the cleanest interfaces I've seen. All of the information you need to maintain and grow your empire is contained in a tiny percentage of the screen or in a handful of in-game menus. This works well, as the terrain and look of the campaign map are quite striking and vary noticeably with the season.
Spring features cherry blossom trees as well as the occasional flooding.
Summer is a busy season for your workers and perhaps your armies.
Fall colours indicate the coming of winter and snow.
Winter also shrouds the peaks in fog. The atmosphere thus created is quite relaxing, somewhat at odds with many 4X games, and perhaps the most interesting thing about this game.
In these screenshots you can see the entire GUI that gives you the information you need to play. The upper right shows your turn, income, and occasionally a helpful advisor offering suggestions. Below that you will find a banner that toggles between three lists. The first covers your cities, with their current build queues and population information; the second lists your armies' compositions and whether they currently have any orders; and the last displays any events which affect your burgeoning empire. The bottom shows your leader and his authority, which, among other things, determines the number of cities you can support before unrest starts to climb quickly.
The right-hand side brings up the only other five menus you use in this game. From top to bottom they offer diplomacy, technology, Imperial edicts you can apply (or have applied) to your Empire, player statistics to let you know how you are faring, and detailed statistics of your empire and of the competing tribes you have met.
A Trio of Air Coolers
Scythe is a major player in the air cooling space, with the Japanese company offering a dizzying array of coolers for virtually any application. In addition to some of the most compact coolers in the business, Scythe also offers some of the highest performing - and quietest - tower coolers available. Two of the largest coolers in the lineup are the new Mugen 5 Rev. B and the Grand Kama Cross 3 - the latter being one of their most outlandish designs.
Rounding out this review we also have a compact tower option from Scythe in the Byakko, which is a 130 mm tall cooler that can fit in a greater variety of enclosures than the Mugen 5 or Grand Kama Cross due to its lower profile. So how did each perform on the cooler test bench? We put these Scythe coolers against the Intel Core i7-7700K to see how potent their cooling abilities are when facing a CPU that gets quite toasty under load. Read on to see how this trio responded to the challenge!
YouTube TV for NVIDIA SHIELD
When YouTube TV first launched earlier this year, it had one huge factor in its favor compared to competing subscription streaming services: local channels. The service wasn't available everywhere, but in the markets where it was available, users were able to receive all of their major local networks. This factor, combined with its relatively low subscription price of $35 per month, immediately made YouTube TV one of the best streaming options, but it also had a downside: device support.
At launch YouTube TV was only available via the Chrome browser, iOS and Android, and newer Chromecast devices. There were no native apps for popular media devices like the Roku, Amazon Fire TV, or Apple TV. But perhaps the most surprising omission was support for Android TV via devices like the NVIDIA SHIELD. Most of the PC Perspective staff personally use the SHIELD due to its raw power and capabilities, and the lack of YouTube TV support on Google's own media platform was disappointing.
Thankfully, Google recently addressed this omission and has finally brought a native YouTube TV app to the SHIELD with the SHIELD TV 6.1 Update.
Introduction and Specifications
Back in April, we finally got our mitts on some actual 3D XPoint to test, but there was a catch. We had to do so remotely. The initial round of XPoint testing done (by all review sites) was on a set of machines located on the Intel campus. Intel had their reasons for this unorthodox review method, but we were satisfied that everything was done above board. Intel even went as far as walking me over to the very server that we would be remoting into for testing. Despite this, there were still a few skeptics out there, and today we can put all of that to bed.
This is a 750GB Intel Optane SSD DC P4800X - in the flesh and this time on *our* turf. I'll be putting it through the same initial round of tests we conducted remotely back in April. I intend to follow up at a later date with additional testing depth, as well as an evaluation of kernel response times across Windows and Linux (IRQ, Polling, Hybrid Polling, etc.), but for now, we're here to confirm the results on our own testbed and to evaluate whether the higher capacity point takes any sort of performance hit. We may actually see a performance increase in some areas, as Intel has had several months to further tune the P4800X.
This video is for the earlier 375GB model launch, but all points apply here
(except that the 900P has now already launched)
The baseline specs remain the same as they were back in April, with a few notable exceptions:
The endurance figure for the 375GB capacity has nearly doubled to 20.5 PBW (PetaBytes Written), with the 750GB capacity logically following suit at 41 PBW. These figures are based on a 30 DWPD (Drive Writes Per Day) rating spanned across a 5-year period (the arithmetic is sketched after this list). The original product brief is located here, but do note that it may be out of date.
We now have official sequential throughput ratings: 2.0 GB/s writes and 2.4 GB/s reads.
We also have been provided detailed QoS figures and those will be noted as we cover the results throughout the review.
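For anyone who wants to sanity-check those endurance numbers, the PBW figures follow directly from capacity × DWPD × days. A minimal sketch of that arithmetic (the helper function name is just for illustration):

```python
def petabytes_written(capacity_gb, dwpd, years):
    """Approximate endurance in PB from a Drive-Writes-Per-Day rating."""
    total_gb = capacity_gb * dwpd * 365 * years
    return total_gb / 1_000_000  # GB -> PB (decimal units)

print(round(petabytes_written(375, 30, 5), 1))  # ~20.5 PBW for the 375GB model
print(round(petabytes_written(750, 30, 5), 1))  # ~41.1 PBW for the 750GB model
```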
The Expected Unexpected
Last night we received word that Raja had resigned from AMD, during the sabbatical he began after the company launched Vega. The initial statement had been that Raja would return to resume his position at AMD in a December/January timeframe. During this time there was some doubt as to whether Raja would in fact come back to AMD, as “sabbaticals” in the tech world often lead the individual to take stock of their situation and move on to what they consider greener pastures.
Raja has dropped by the PCPer offices in the past.
Initially it was thought that Raja would take the time off and then eventually jump to another company and tackle the issues there. This behavior is quite common in Silicon Valley, and Raja is no stranger to it. Raja cut his teeth on 3D graphics at S3, but in 2001 he moved to ATI. While there he worked on a variety of programs, including the original Radeon, the industry-changing Radeon 9700 series, and finishing up with the strong HD 4000 series of parts. During this time ATI was acquired by AMD and he became one of the top graphics gurus at that company. In 2009 he quit AMD and moved on to Apple. He was Director of Graphics Architecture at Apple, but little is known about what he actually did there. During that time Apple utilized AMD GPUs and licensed Imagination Technologies graphics technology. Apple could have been working on developing their own architecture at this point, which has recently shown up in the latest iPhone products.
In 2013 Raja rejoined AMD and became a corporate VP of Visual Computing, but in 2015 he was promoted to lead the Radeon Technologies Group after Lisa Su became CEO of the company. While there Raja worked to get AMD back on an even footing under pretty strained conditions. AMD had not had the greatest of years and had seen their primary moneymakers start taking on water. AMD had competitive graphics for the most part, and the Radeon technology integrated into AMD’s APUs truly was class leading. On the discrete side AMD was able to compare favorably to NVIDIA with the HD 7000 and later R9 200 series of cards. After NVIDIA released their Maxwell based chips, however, AMD had a hard time keeping up. The general consensus is that the RTG group saw its headcount reduced by the company-wide cuts, along with a decrease in R&D funds.
Here comes a new challenger
The release of the GeForce GTX 1070 Ti has been an odd adventure. Launched into a narrow window of the product stack between the GTX 1070 and the GTX 1080, the GTX 1070 Ti is a result of competition from the AMD RX Vega product line. Sure, NVIDIA might have specced out and prepared an in-between product for some time, but it was the release of competitive high-end graphics cards from AMD (for the first time in forever, it seems) that pushed NVIDIA to launch what you see before us today.
With MSRPs of $399 and $499 for the GTX 1070 and GTX 1080 respectively, a new product that fits between them performance wise has very little room to stretch its legs. Because of that, there are some interesting peculiarities involved with the release cycle surrounding overclocks, partner cards, and more.
But before we get into that concoction, let’s first look at the specifications of this new GPU option from NVIDIA as well as the reference Founders Edition and EVGA SC Black Edition cards that made it to our offices!
GeForce GTX 1070 Ti Specifications
We start with our classic table of details.
|RX Vega 64 Liquid||RX Vega 64 Air||RX Vega 56||Vega Frontier Edition||GTX 1080 Ti||GTX 1080||GTX 1070 Ti||GTX 1070|
|Base Clock||1406 MHz||1247 MHz||1156 MHz||1382 MHz||1480 MHz||1607 MHz||1607 MHz||1506 MHz|
|Boost Clock||1677 MHz||1546 MHz||1471 MHz||1600 MHz||1582 MHz||1733 MHz||1683 MHz||1683 MHz|
|Memory Clock||1890 MHz||1890 MHz||1600 MHz||1890 MHz||11000 MHz||10000 MHz||8000 MHz||8000 MHz|
|Memory Interface||2048-bit HBM2||2048-bit HBM2||2048-bit HBM2||2048-bit HBM2||352-bit G5X||256-bit G5X||256-bit||256-bit|
|Memory Bandwidth||484 GB/s||484 GB/s||410 GB/s||484 GB/s||484 GB/s||320 GB/s||256 GB/s||256 GB/s|
|TDP||345 watts||295 watts||210 watts||300 watts||250 watts||180 watts||180 watts||150 watts|
|Peak Compute||13.7 TFLOPS||12.6 TFLOPS||10.5 TFLOPS||13.1 TFLOPS||11.3 TFLOPS||8.2 TFLOPS||7.8 TFLOPS||5.7 TFLOPS|
If you have followed the leaks and stories over the last month or so, the information here isn’t going to be a surprise. The CUDA core count of the GTX 1070 Ti is 2432, only one SM unit less than the GTX 1080. Base and boost clock speeds are the same as the GTX 1080. The memory system includes 8GB of GDDR5 running at 8 GHz, matching the performance of the GTX 1070 in this case. The TDP gets a bump up to 180 watts, in line with the GTX 1080 and slightly higher than the GTX 1070.
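As a rough cross-check of the table above, peak compute and memory bandwidth both fall out of the listed specs: peak FP32 is shader count × 2 FLOPs per clock × clock speed (the 7.8 TFLOPS entry for the GTX 1070 Ti appears to correspond to its base clock), and bandwidth is bus width in bytes × effective memory data rate. A quick sketch:

```python
def peak_tflops(shaders, clock_mhz):
    """Peak FP32 throughput: 2 FLOPs per shader per clock."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

def bandwidth_gbs(bus_bits, effective_mhz):
    """Memory bandwidth: bus width in bytes times effective data rate."""
    return (bus_bits / 8) * effective_mhz * 1e6 / 1e9

print(peak_tflops(2432, 1607))   # GTX 1070 Ti at base clock: ~7.8 TFLOPS
print(bandwidth_gbs(256, 8000))  # 256-bit GDDR5 at 8 GHz effective: 256 GB/s
```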
Overview and CPU Performance
When Intel announced their quad-core mobile 8th Generation Core processors in August, I was immediately interested. As a user who gravitates towards "Ultrabook" form-factor notebooks, I saw it as a no-brainer—two additional CPU cores with no increase in power draw.
However, the hardware reviewer in me was skeptical. Could this "Kaby Lake Refresh" CPU provide the headroom to fit two more physical cores on a die while maintaining the same 15W TDP? Would this mean that the processor fans would have to run out of control? What about battery life?
Now that we have our hands on our first two notebooks with the Core i7-8550U inside, it's time to take a more in-depth look at Intel's first mobile offerings of the 8th Generation Core family.
Providers and Devices
"Cutting the Cord," the process of ditching traditional cable and satellite content providers for cheaper online-based services, is nothing new. For years, consumers have cancelled their cable subscriptions (or declined to even subscribe in the first place), opting instead to get their entertainment from companies like Netflix, Hulu, and YouTube.
But the recent introduction of online streaming TV services like Sling TV, new technologies like HDR, and the slow online adoption of live local channels has made the idea of cord cutting more complicated. While cord cutters who are happy with just Netflix and YouTube need not worry, what are the solutions for those who don't like the idea of high cost cable subscriptions but also want to preserve access to things like local channels and the latest 4K HDR content?
This article is the first in a three-part series that will look at this "high-end" cord cutting scenario. We'll be taking a look at the options for online streaming TV, access to local "OTA" (over the air) channels, and the devices that can handle it all, including DVR support, 4K output, and HDR compliance.
There are two approaches that you can take when considering the cord cutting process. The first is to focus on capabilities: Do you want 4K? HDR? Lossless surround sound audio? Voice search? Gaming?
The second approach is to focus on content: Do you want live TV or à la carte downloads? Can you live without ESPN, or must it and your other favorite networks still be available? Are you heavily invested in iTunes content? Perhaps most importantly for those concerned with the "Spousal Acceptance Factor" (SAF), do you want the majority of your content contained in a single app, which can prevent you and your family members from having to jump between apps or devices to find what they want?
While most people on the cord cutting path will consider both approaches to a certain degree, it's easier to focus on the one that's most important to you, as that will make other choices involving devices and content easier. Of course, there are those of us out there that are open to purchasing and using multiple devices and content sources at once, giving us everything at the expense of increased complexity. But most cord cutters, especially those with families, will want to pursue a setup based around a single device that accommodates most, if not all, of their needs. And that's exactly what we set out to find.
Introduction, Specifications and Packaging
It’s been two long years since we first heard about 3D XPoint Technology. Intel and Micron serenaded us with tales of ultra-low latency and very high endurance, but when would we have this new media in our hot little hands? We got a taste of things with Optane Memory (caching) back in April, and later that same month we got a much bigger, albeit remotely-tested taste in the form of the P4800X. Since April all was quiet, with all of us storage freaks waiting for a consumer version of Optane with enough capacity to act as a system drive. Sure we’ve played around with Optane Memory parts in various forms of RAID, but as we found in our testing, Optane’s strongest benefits are the very performance traits that do not effectively scale with additional drives added to an array. The preferred route is to just get a larger single SSD with more 3D XPoint memory installed on it, and we have that very thing today (and in two separate capacities)!
You might have seen various rumors centered around the 900P lately. The first was that the 900P would supposedly support PCIe 4.0. This is not true, and after digging back a bit it appears to stem from a foreign vendor confusing PCIe x4 (4 lanes) with the recently drafted PCIe 4.0 specification. Another set of rumors centered around pre-order listings and potential pricing for the 280GB and 480GB variants of the 900P. We are happy to report that those prices (at the time of this writing) are way higher than Intel’s stated MSRPs for these new models. I’ll even go as far as to say that the 480GB model can be had for less than what the 280GB model is currently listed for! More on that later in the review.
Performance specs are one place where the rumors were all true, but since all the folks had to go on was a leaked Intel press deck slide listing figures identical to the P4800X, we’re not really surprised here.
Lots of technical stuff above, but the high points are <10us typical latency (‘regular’ SSDs run between 60-100us), 2.5/2.0 GB/s sequential reads/writes, and 550k/500k random read/write performance. Yes I know, don’t tell me, you’ve seen higher sequentials on smaller form factor devices. I agree, and we’ve even seen higher maximum performance from unreleased 3D XPoint-equipped parts from Micron, but Intel has done what they needed to do in order to make this a viable shipping retail product, which likely means sacrificing the ‘megapixel race’ figures in favor of offering the lowest possible latencies and best possible endurance at this price point.
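To put that latency figure in perspective, queue-depth-1 throughput is bounded directly by per-I/O latency, which is exactly where 3D XPoint shines. A back-of-the-envelope sketch, assuming 4KB transfers and that latency is the only limiter (the NAND figure below is just a representative value from the 60-100us range mentioned above):

```python
def qd1_iops(latency_us):
    """At queue depth 1, one I/O completes per latency interval."""
    return 1_000_000 / latency_us

def qd1_mb_per_s(latency_us, io_size_kb=4):
    """Throughput implied by QD1 IOPS at a given transfer size."""
    return qd1_iops(latency_us) * io_size_kb / 1024

print(qd1_iops(10), qd1_mb_per_s(10))  # ~100,000 IOPS, ~390 MB/s at 10us (Optane-class)
print(qd1_iops(80), qd1_mb_per_s(80))  # ~12,500 IOPS, ~49 MB/s at 80us (typical NAND SSD)
```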
Packaging is among the nicest we’ve seen from an Intel SSD. It actually reminds me of how the Fusion-io ioDrives used to come.
Also included with the 900P is a Star Citizen ship. The Sabre Raven has been a topic of gossip and speculation for months now, and it appears to be a pretty sweet looking fighter. For those unaware, Star Citizen is a space-based MMO, and with a ‘ship purchase’ also comes a license to play the game. The Sabre Raven counts as such a purchase and apparently comes with lifetime insurance, meaning it will always be tied to your account in case it gets blown up doing data runs. Long story short, you get the game for free with the purchase of a 900P.
A potential game changer?
I thought we were going to be able to make it through the rest of 2017 without seeing AMD launch another family of products. But I was wrong. And that’s a good thing. Today AMD is launching the not-so-cleverly-named Ryzen Processor with Radeon Vega Graphics product line that will bring the new Zen processor architecture and Vega graphics architecture onto a single die for the ultrathin mobile notebook platforms. This is no minor move for them – just as we discussed with the AMD EPYC processor launch, this is a segment that has been utterly dominated by Intel. After all, Intel created the term Ultrabook to target these designs, and though that brand is gone, the thin and light mindset continues to this day.
The claims AMD makes about its Ryzen mobile APU (a combination CPU+GPU accelerated processing unit, to use an older AMD term) are not to be made lightly. Right up front in our discussion I was told this is going to be the “world’s fastest for ultrathin” machines. Considering that AMD had previously been unable to even enter those markets with prior products, due to both technological and business roadblocks, AMD is taking a risk by painting this launch in such a light. Thanks to its ability to combine CPU and GPU technology on a single die, though, AMD has some flexibility today that it simply did not have access to previously.
From the days that AMD first announced the acquisition of ATI, the company has touted the long-term benefits of owning both a high-performance processor and a graphics division. By combining the architectures on a single die, they could become greater than the sum of their parts, leveraging new software directions and the oft-discussed HSA (heterogeneous systems architecture) that AMD helped create a foundation for. Though the first rounds of APUs managed modest sales, the truth was that AMD’s advantage over Intel on the graphics technology front was often overshadowed by the performance and power efficiency advantages that Intel held on the CPU front.
But with the introduction of the first products based on Zen earlier this year, AMD has finally made good on the promises of catching up to Intel in many of the areas where it matters the most. The new from-the-ground-up design resulted in greater than 50% IPC gains, improved area efficiency compared to Intel’s latest Kaby Lake core design, and enormous gains in power efficiency compared to the previous CPU designs. When looking at the new Ryzen-based APU products with Vega built-in, AMD claims that they tower over the 7th generation APUs with up to 200% more CPU performance, 128% more GPU performance, and 58% lower power consumption. Again, these are bold claims, but it gives AMD confidence that it can now target premium designs and form factors with a solution that will meet consumer demands.
AMD is hoping that the release of the Ryzen 7 2700U and Ryzen 5 2500U can finally help turn the tides in the ultrathin notebook market.
|Core i7-8650U||Core i7-8550U||Core i5-8350U||Core i5-8250U||Ryzen 7 2700U||Ryzen 5 2500U|
|Architecture||Kaby Lake Refresh||Kaby Lake Refresh||Kaby Lake Refresh||Kaby Lake Refresh||Zen+Vega||Zen+Vega|
|Base Clock||1.9 GHz||1.8 GHz||1.7 GHz||1.6 GHz||2.2 GHz||2.0 GHz|
|Max Turbo Clock||4.2 GHz||4.0 GHz||3.8 GHz||3.6 GHz||3.8 GHz||3.6 GHz|
|System Bus||DMI3 - 8.0 GT/s||DMI3 - 8.0 GT/s||DMI2 - 6.4 GT/s||DMI2 - 5.0 GT/s||N/A||N/A|
|Graphics||UHD Graphics 620||UHD Graphics 620||UHD Graphics 620||UHD Graphics 620||Vega (10 CUs)||Vega (8 CUs)|
|Max Graphics Clock||1.15 GHz||1.15 GHz||1.1 GHz||1.1 GHz||1.3 GHz||1.1 GHz|
The Ryzen 7 2700U will run 200 MHz higher on the base and boost clocks for the CPU and 200 MHz higher on the peak GPU core clock. Though both systems have 4-cores and 8-threads, the GPU on the 2700U will have two additional CUs / compute units.
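AMD lists clocks and CU counts rather than peak GPU compute for these APUs, but with the standard 64 stream processors per Vega CU you can estimate it yourself. A minimal sketch under that assumption:

```python
def vega_tflops(cus, clock_ghz, sp_per_cu=64):
    """Peak FP32: CUs x stream processors per CU x 2 FLOPs per clock."""
    return cus * sp_per_cu * 2 * clock_ghz / 1000

print(vega_tflops(10, 1.3))  # Ryzen 7 2700U: ~1.66 TFLOPS
print(vega_tflops(8, 1.1))   # Ryzen 5 2500U: ~1.13 TFLOPS
```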
Forza Motorsport 7 Performance
The first full Forza Motorsport title available for the PC, Forza Motorsport 7 on Windows 10 launched simultaneously with the Xbox version earlier this month. With native 4K assets, HDR support, and new visual features like fully dynamic weather, this title is an excellent showcase of what modern PC hardware can do.
Now that both AMD and NVIDIA have released drivers optimized for Forza 7, we've taken an opportunity to measure performance across an array of different GPUs. After some significant performance mishaps with last year's Forza Horizon 3 at launch on PC, we are excited to see if Forza Motorsport 7 brings any much-needed improvements.
For this testing, we used our standard GPU testbed, including an 8-core Haswell-E processor and plenty of memory and storage.
|PC Perspective GPU Testbed|
|Processor||Intel Core i7-5960X Haswell-E|
|Motherboard||ASUS Rampage V Extreme X99|
|Memory||G.Skill Ripjaws 16GB DDR4-3200|
|Storage||OCZ Agility 4 256GB (OS), Adata SP610 500GB (games)|
|Power Supply||Corsair AX1500i 1500 watt|
|OS||Windows 10 x64|
|Drivers||AMD: 17.10.1 (Beta)|
As with a lot of modern console-first titles, Forza 7 defaults to "Dynamic" image quality settings. This means that the game engine is supposed to find the best image settings for your hardware automatically, and dynamically adjust them so that you hit a target frame rate (adjustable between 30 and 60fps) no matter what is going on in the current scene that is being rendered.
While this is a good strategy for consoles, and even for casual PC gamers, it poses a problem for us trying to measure equivalent performance across GPUs. Luckily the developers of Forza Motorsport 7, Turn 10 Studios, still let you disable the dynamic control and configure the image quality settings as you desire.
One quirk, however, is that in order for V-Sync to be disabled, the rendering resolution within the game must match the native resolution of your monitor. This means that if you want to run 2560x1440 on your 4K monitor, you must first set the desktop resolution within Windows to 2560x1440 in order to run the game in V-Sync off mode.
We did our testing with an array of three different resolutions (1080p, 1440p, and 4K) at maximum image quality settings. We tested both AMD and NVIDIA graphics cards in similar price and performance segments. The built-in benchmark mode for this game was used, which does feature some variance due to dynamic weather patterns. However, our testing within the full game matched the results of the benchmark mode closely, so we used it for our final results.
Right off the bat, I have been impressed at how well optimized Forza Motorsport 7 seems to be on the PC. Compared to the unoptimized disaster that was Forza Horizon 3 when it launched on PC last year, it's clear that Turn 10 Studios and Microsoft have come a long way.
Even gamers looking to play on a 4K display at 60Hz can seemingly get away with the cheaper, and more mainstream GPUs such as the RX 580 or the GTX 1060 with acceptable performance in most scenarios.
Gamers on high-refresh-rate displays don't appear to have the same luxury. If you want to game at a resolution such as 2560x1440 at a full 144Hz, neither the RX Vega 64 nor the GTX 1080 will manage it with maximum image quality settings, although both GPUs appear to be close enough that you could turn down a few settings to achieve your full refresh rate.
For some reason, the RX Vega cards didn't seem to show any scaling in performance when moving from 2560x1440 to 1920x1080, unlike the Polaris-based RX 580 and the NVIDIA options. We aren't quite sure of the cause of this and have reached out to AMD for clarification.
As far as frame times are concerned, we also gathered some data with our Frame Rating capture analysis system.
Taking a look at the first chart, we can see that while the GTX 1080 frame times are extremely consistent, the RX Vega 64 shows some additional variance.
However, the frame time variance chart shows that over 95% of the frame times of the RX Vega 64 come in at under 2ms of variance, which will still provide a smooth gameplay experience in most scenarios. This matches our experience playing on both AMD and NVIDIA hardware, where we saw no major issues with gameplay smoothness.
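For readers curious how a figure like "95% of frame times under 2ms of variance" is derived, the idea is to look at how much each frame time differs from its neighbors and then take a percentile of those differences. A minimal illustration of the concept (this is not the actual Frame Rating tooling, and the capture data below is hypothetical):

```python
import numpy as np

def variance_percentile(frame_times_ms, percentile=95):
    """Frame-to-frame variance: absolute change between consecutive frame times."""
    deltas = np.abs(np.diff(frame_times_ms))
    return np.percentile(deltas, percentile)

# Hypothetical capture: mostly ~11 ms frames with one spike
capture = [11.1, 11.3, 10.9, 11.2, 14.8, 11.0, 11.2, 11.1]
print(variance_percentile(capture))  # 95th-percentile frame-to-frame variance, in ms
```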
Forza Motorsport 7 seems to be a great addition to the PC gaming world (if you don't mind using the Microsoft Store exclusively) and will run great on a wide array of hardware. Whether you have an NVIDIA or AMD GPU, you should be able to enjoy this fantastic racing simulator.
Introduction and Technical Specifications
Courtesy of ASUS
The ASUS Crosshair VI Hero board features a black PCB with a plastic armor overlay covering the board's rear panel and audio subsystem components. ASUS added RGB LED backlighting to the rear panel cover and chipset heat sink to illuminate the board and ASUS ROG logos, as well as under-board lighting along the sound PCB separator line. ASUS designed the board around the AMD X370 chipset, offering support for AMD's Ryzen processor line and dual channel DDR4 memory running at a 2400MHz speed. The Crosshair VI Hero motherboard can be found in the wild at an MSRP of $254.99.
Courtesy of ASUS
To power the Ryzen CPU, ASUS integrated a 12 phase digital power delivery system into the Crosshair VI Hero, providing enough juice to push your CPU to its limits. The following features have been integrated into the board: eight SATA III 6Gbps ports; an M.2 PCIe Gen3 x4 32Gbps capable port; an RJ-45 port featuring the Intel I211-AT Gigabit NIC; three PCI-Express x16 slots; two PCI-Express x1 slots; the ASUS SupremeFX S1220 8-Channel audio subsystem; integrated DVI-D and HDMI video ports; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.
Courtesy of ASUS
For superior audio performance, ASUS built the Crosshair VI Hero's audio subsystem around the SupremeFX CODEC, featuring Nichicon audio capacitors, switching MOSFETs, a high-precision clock source, an ESS ES9023P DAC, and an RC4580 audio buffer.
Courtesy of ASUS
To appease their AMD user population, ASUS designed the CPU cooler mount for compatibility with both the AM3 and AM4 style coolers. This gives users a wider selection of cooling solutions available to use with the board.
How a ThinkPad is born
During Lenovo's recent ThinkPad 25th Anniversary Event in Yokohama, Japan, we were given an opportunity to learn a lot about the evolution of the ThinkPad brand over the years.
One of the most significant sources of pride mentioned by the Lenovo executives in charge of the ThinkPad division during this event was the team's Yamato Laboratory. Formerly located in Yamato City (hence the name) and relocated to Yokohama in 2011, the Yamato Labs have been responsible for every ThinkPad product, dating back to the IBM days and the original ThinkPad 700C.
This continuity from the earliest days of ThinkPad has helped provide a standard of quality and education passed down from engineer to engineer over the last 25 years of the ThinkPad brand. In fact, some of the original engineers from 1987 are still with the company and working on the latest and greatest ThinkPad innovations. It's impressive to see such continuity and pride in the Japanese development team considering Lenovo's acquisition of the brand back in 2005.
One of the most exciting things was a peek at some of the tests that every device bearing the ThinkPad name must go through, including non-notebook devices like the X1 Tablet.
Introduction and Features
FSP’s new Hydro PTM lineup is part of their top-tier Premium Series and currently includes three models: 750W, 650W, and 550W. We will be taking a detailed look at the 750W Platinum model in this review. FSP Group Inc. has been designing and building PC power supplies under their own brand since 2003. Not only do they market power supplies under their own FSP name, but they are also the OEM for many other big-name brands. Now you might be thinking “Hydro” refers to water-cooling, but as we saw last year with the Hydro G series power supplies, the Platinum series all use conventional air cooling. The Hydro actually refers to the “Hydro Dynamic Bearing” used in the cooling fan (more commonly referred to as an FDB - Fluid Dynamic Bearing).
FSP developed the Hydro Platinum Series with an advanced thermal layout design. The units come with all modular cables and are certified to comply with the 80 Plus Platinum efficiency criteria. The power supplies are designed to deliver tight voltage regulation with excellent AC ripple and noise suppression. All Hydro PTM Series power supplies incorporate a quiet 135mm cooling fan and they come backed with a 10-year warranty!
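As a quick illustration of what that Platinum rating means at the wall, efficiency is simply DC output divided by AC input, so input power at a given load is output divided by efficiency. A small sketch (the 92% point is the 50%-load figure FSP cites; the full-load value below is an assumed, lower efficiency, since efficiency typically drops toward 100% load):

```python
def ac_input_watts(dc_load_w, efficiency):
    """AC power drawn from the wall for a given DC load at a given efficiency."""
    return dc_load_w / efficiency

print(ac_input_watts(375, 0.92))  # ~408 W from the wall at 50% load on the 750W unit
print(ac_input_watts(750, 0.89))  # ~843 W at full load, assuming ~89% efficiency there
```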
FSP Hydro PTM Series PSU Key Features:
• 550W, 650W or 750W continuous DC output @ 50°C
• High efficiency, 80 PLUS Platinum certified ≥92%
• Complies with the newest ATX12V v2.4 & EPS12V v2.92 standards
• 100% Japanese made electrolytic capacitors
• Quiet 135mm Fluid Dynamic Bearing fan
• Powerful single +12V rail design
• Advanced thermal layout design
• Fully modular with flat ribbon-style cables
• SLI, Crossfire, and VR ready
• Protections: OVP, UVP, OCP, OPP, SCP and OTP
• 10-Year Manufacturer’s warranty
• $124.99 USD (Amazon.com, Oct. 2017)
Specifications and Summary
As seems to be the trend for processor reviews as of late, today marks the second in a two-part reveal of Intel’s Coffee Lake consumer platform. We essentially know all there is to know about the new mainstream and DIY PC processors from Intel, including specifications, platform requirements, and even pricing; all that is missing is performance. That is the story we get to tell you today in our review of the Core i7-8700K and Core i5-8400.
Coffee Lake is the second spoke of Intel's “8th generation” wheel that began with the Kaby Lake-R release featuring quad-core 15-watt notebook processors for the thin and light market. Though today’s release of the Coffee Lake-S series (the S is the designation for consumer desktop) doesn’t share the same code name, it does share the same microarchitecture, same ring bus design (no mesh here), and same underlying technology. They are both built on the Intel 14nm process technology.
And much like Kaby Lake-R on the notebook front, Coffee Lake is here to raise the core count and performance profile of the mainstream Intel CPU playbook. When AMD first launched the Ryzen 7 series of processors, bringing 8 cores and 16 threads of compute, it fundamentally shook the mainstream consumer market. Intel was still on top in terms of IPC and core clock speeds, giving it the edge in single and lightly threaded workloads, but AMD had released a part with double the core and thread count and was able to dominate in most multi-threaded workloads compared to similar Intel offerings.
Much like Skylake-X before it, Coffee Lake had been on Intel’s roadmap from the beginning, but new pressure from a revived AMD meant bringing that technology to the forefront sooner rather than later, in an effort to stem any potential shifts in market share and, maybe more importantly, mind share among investors, gamers, and builders. Coffee Lake, and the Core i7, Core i5, and Core i3 processors that will be a part of this 8000-series release, increase the core count across the board and generally raise clock speeds too. Intel is hoping that by bumping its top mainstream CPU to 6 cores, and coupling that with better IPC and higher clocks, it can alleviate the advantages that AMD has with Ryzen.
But does it?
That’s what we are here to find out today. If you need a refresher on the build up to this release, we have the specifications and slight changes in the platform and design summarized for you below. Otherwise, feel free to jump on over to the benchmarks!
We've been hearing about Intel's VROC (NVMe RAID) technology for a few months now. ASUS started slipping clues in with their X299 motherboard releases starting back in May. The idea was very exciting, as prior NVMe RAID implementations on Z170 and Z270 platforms were bottlenecked by the chipset's PCIe 3.0 x4 DMI link to the CPU, and they also had to trade away SATA ports for M.2 PCIe lanes in order to accomplish the feat. X99 motherboards supported SATA RAID and even sported four additional ports, but they were left out of NVMe bootable RAID altogether. It would be foolish of Intel to launch a successor to their higher end workstation-class platform without a feature available in two (soon to be three) generations of their consumer platform.
To get a grip on what VROC is all about, let's set up some context with a few slides:
First, we have a slide laying out what the acronyms mean:
- VROC = Virtual RAID on CPU
- VMD = Volume Management Device
What's a VMD you say?
...so the VMD is extra logic present on Intel Skylake-SP CPUs, which enables the processor to group up to 16 lanes of storage (4x4) into a single PCIe storage domain. There are three VMD controllers per CPU.
VROC is the next logical step, and takes things a bit further. While boot support is restricted to within a single VMD, PCIe switches can be added downstream to create a bootable RAID possibly exceeding 4 SSDs. So long as the array need not be bootable, VROC enables spanning across multiple VMDs and even across CPUs!
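To make that lane math concrete, here is a small sketch of how the numbers in those slides add up, with the 4-lanes-per-NVMe-SSD assumption stated explicitly:

```python
# Each VMD domain groups 16 PCIe lanes; a typical NVMe SSD uses 4 lanes.
LANES_PER_VMD = 16
LANES_PER_SSD = 4
VMDS_PER_CPU = 3

ssds_per_vmd = LANES_PER_VMD // LANES_PER_SSD  # 4 drives per (bootable) VMD domain
ssds_per_cpu = ssds_per_vmd * VMDS_PER_CPU     # 12 directly attached drives per CPU

print(ssds_per_vmd, ssds_per_cpu)  # more drives require PCIe switches or spanning VMDs
```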
Assembling the Missing Pieces
Unlike prior Intel storage technology launches, the VROC launch has been piecemeal at best and contradictory at worst. We initially heard that VROC would only support Intel SSDs, but Intel later published a FAQ that stated 'selected third-party SSDs' would also be supported. One thing they have remained steadfast on is the requirement for a hardware key to unlock RAID-1 and RAID-5 modes - a seemingly silly requirement given their consumer chipset supports bootable RAID-0,1,5 without any key requirement (and VROC only supports one additional SSD over Z170/Z270/Z370, which can boot from 3-drive arrays).
On the 'piecemeal' topic, we need three things for VROC to work:
- BIOS support for enabling VMD Domains for select groups of PCIe lanes.
- Hardware for connecting a group of NVMe SSDs to that group of PCIe lanes.
- A driver for OS mounting and managing of the array.
Let's run down this list and see what is currently available:
Check. Hardware for connecting multiple drives to the configured set of lanes?
Check (960 PRO pic here). Note that the ASUS Hyper M.2 X16 Card will only work on motherboards supporting PCIe bifurcation, which allows the CPU to split PCIe lanes into subgroups without the need of a PLX chip. You can see two bifurcated modes in the above screenshot - one intended for VMD/VROC, while the other (data) selection enables bifurcation without enabling the VMD controller. This option presents the four SSDs to the OS without the need of any special driver.
With the above installed, and the slot configured for VROC in the BIOS, we are greeted by the expected disappointing result:
Now for that pesky driver. After a bit of digging around the dark corners of the internet:
Check! (well, that's what it looked like after I rapidly clicked my way through the array creation)
Don't even pretend like you won't read the rest of this review! (click here now!)
A New Standard
With a physical design that is largely unchanged other than the addition of a glass back for wireless charging support, and featuring incremental improvements to the camera system (most notably with the Plus version), the iPhone 8 and 8 Plus are interesting mainly due to the presence of a new Apple SoC. The upcoming iPhone X (pronounced "ten") stole the show at Apple's keynote announcement earlier this month, but the new A11 Bionic chip powers all 2017 iPhone models, and for the first time Apple has a fully custom GPU after their highly publicized split with Imagination Technologies, makers of the PowerVR graphics found in previous Apple SoCs.
The A11 Bionic powering the 2017 iPhones contains Apple’s first 6-core processor, which is comprised of two high performance cores (code-named ‘Monsoon’) and four high efficiency cores (code-named ‘Mistral’). Hugely important to its performance is the fact that all six cores are addressable with this new design, as Apple mentions in their description of the SoC:
"With six cores and 4.3 billion transistors, A11 Bionic has four efficiency cores that are up to 70 percent faster than the A10 Fusion chip, and two performance cores that are up to 25 percent faster. The CPU can even harness all six cores simultaneously when you need a turbo boost."
Previous Apple SoCs had to rely on improvements to IPC and clock speed to boost per-core performance. The previous A10 Fusion part, for example, contained a quad-core CPU split evenly into 2x performance + 2x efficiency cores, and that arrangement did not improve app performance beyond the two performance cores, with the additional cores limited to background tasks in real-world use (though the A10 Fusion did not provide any improvement to battery life over previous efforts, as we saw).
The A11 Bionic on the iPhone 8 system board (image credit: iFixit)
Just how big an impact this new six-core CPU design has can be instantly observed in the CPU benchmarks to follow, and on the next page we will find out how Apple's in-house GPU solution compares to both the previous A10 Fusion's PowerVR graphics and the market-leading Qualcomm Adreno 540 found in the Snapdragon 835. We will begin with the CPU benchmarks.
When we first saw the product page for the Marseille mCable Gaming Edition, a wave of skepticism swept across the PC Perspective offices. At first glance, an HDMI cable that claims to improve image quality while gaming sounds like the snake oil that "audiophile" companies like AudioQuest have been peddling for years.
However, looking into some of the more technical details offered by Marseille, their claims seemed more and more plausible. By using a signal processor embedded inside the HDMI connector itself, Marseille appears to be manipulating the video signal to improve quality in ways applicable to gaming. Specifically, their claim of anti-aliasing applied to all video signals has us interested.
So for curiosity's sake, we ordered the $150 mCable Gaming Edition and started to do some experimentation.
Even from the initial unboxing, there are some unique aspects to the mCable. First, you might notice that the connectors are labeled with "Source" and "TV." Since the mCable has a signal processor in it, this distinction, which is normally meaningless, starts to matter a great deal.
Similarly, on the "TV" side, there is a USB cable used to power the signal processing chip. Marseille claims that most modern TVs with USB ports will be able to power the mCable.
While a lot of Marseille's marketing materials are based on upgrading the visual fidelity of console games that don't have adjustable image quality settings, we decided to place our aim on a market segment we are intimately familiar with—PC gaming. Since we could selectively turn off anti-aliasing in a given game, and PC games usually implement several types of AA, it seemed like the most interesting testing methodology.