Introduction and Technical Specifications
Courtesy of MSI
Unique is a word I don't throw around much in motherboard reviews, but the latest revision of MSI's XPower series is nothing if not unique. The Z170A-XPower Gaming Titanium Edition sports a stunning silver and black aesthetic for a look unlike any of its competition in the LGA1151 realm, with a silver-colored PCB and black and red accents in key areas of the board for additional eye-catching value. The board features branding on its VRM and chipset heat sinks as well. By integrating the Intel Z170 chipset, the motherboard supports the latest Intel LGA1151 Skylake processor line as well as dual-channel DDR4 memory. With an MSRP of $299, the Z170A-XPower Gaming Titanium Edition sits at a competitive price point for the features it offers.
To give the board some teeth, MSI paired the Z170A-XPower Gaming Titanium (Ti) Edition motherboard with its Military Class 5 component architecture. This latest iteration includes a 16-phase digital power delivery system with DrMOS 3-in-1 chip MOSFETs, Titanium chokes, and a mix of Hi-c and DARK capacitors to ensure stable operation under the most extreme circumstances as well as the greatest degree of power efficiency possible. MSI also integrated a high quality audio subsystem, centered around its Nahimic sound technology, with Chemi-Con sourced audio capacitors and headphone amplifiers on an isolated PCB. MSI chose to integrate the following features into the Z170A-XPower Gaming Titanium Edition board: four SATA 3 ports; two SATA-Express ports; two M.2 PCIe x4 capable ports; an Intel I219-V GigE controller; four PCI-Express x16 slots; three PCI-Express x1 slots; a 2-digit diagnostic LED display; on-board power, reset, CMOS clear, BIOS Flashback, and Game Boost buttons; Multi-BIOS, Hotkey, and PCI CeaseFire switches; OC Dashboard device support; BIOS Flashback+ BIOS updater; integrated V-Check voltage measurement points; dual HDMI video ports; a DisplayPort video port; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.
Meet EMTEC's powerful portable peripherals
You may not think you have heard of EMTEC before, but if you are old enough to remember audio and video tape then their original name will ring a bell: they were once known as BASF Magnetics. The rebranded division officially launched in 2000, focused on modern storage such as hard drives and flash memory.
If you have seen an oddly shaped flash drive or ones made in the shapes of Looney Tunes or Angry Birds then you have run into EMTEC products. As you can see above their product line is quite varied and includes the Power Connect for Mobile devices and the Wi-Fi Hard Drive P600, both of which they have sent for us to review.
The packaging is reminiscent of a gaming mouse's, with Velcro so you can open the box to see the device inside. Even better is the lack of clamshell packaging; you won't have to risk a finger trying to open them.
The WiFi Drive P600 is designed for portability: it is slightly larger than a deck of cards and is available in 1TB as well as 500GB models, the latter of which is the version we received. You can sync the hard drive wirelessly, over the LAN connection present on the bottom of the device, or through the USB 3.0 port, which also serves as the drive's recharging port. You can connect your devices to this drive in numerous ways, including setting it up as a Samba server or through DLNA if your devices are compatible.
The Power Connect U600 is perhaps the more interesting of the two devices for people on the go. With a large enough MicroSD card installed it can fulfill the same role as the WiFi HDD, as it offers the same connectivity choices, including a LAN port, with the exception of DLNA functionality, which is replaced with UPnP. In addition to offering portable storage, it can function as a WiFi hotspot, and with the internal 5200 mAh battery it will be able to charge your phone when you are away from power.
It's like Legos for the working man
Way back in January of 2015 at CES we were shown a new line of accessories from Lenovo called ThinkPad Stack. The company is targeting the professional user on the go with a collection of four devices that can be used together in a stackable form that offers up some impressive capability and function in a small package, though it does come with a business-user markup.
Last week Lenovo sent us a full set of the ThinkPad Stack devices including a portable router, external USB 3.0 hard drive, Bluetooth speaker and external battery. With a price tag totaling nearly $400 for the entire set, there is a pretty high expectation for functionality, build quality and usability that Lenovo needs to hit, and honestly they do a better job of hitting it than I expected. You don't have to buy all of the available Stack accessories, and that is part of the charm of the new product line - you can customize it to your own needs.
Though it's not for everyone, I do find myself enjoying the idea of Lenovo's ThinkPad Stack products and how it enables the mobile professional. Let's take a look at what it is, how it works and if it's something you need.
Last month NVIDIA introduced the world to the GTX 980 in a new form factor for gaming notebooks. Using the same Maxwell GPU and the same performance levels, but with slightly tweaked power delivery and TDPs, notebooks powered by the GTX 980 promise to be a noticeable step faster than anything before them.
Late last week I got my hands on the updated MSI GT72S Dominator Pro G, the first retail ready gaming notebook to not only integrate the new GTX 980 GPU but also an unlocked Skylake mobile processor.
This machine is something to behold - though it looks very similar to previous GT72 versions, it hides hardware unlike anything we have been able to carry in a backpack before. And the sexy red exterior with the MSI Dragon Army logo emblazoned across the back definitely helps it stand out in a crowd. If you happen to be in a crowd of notebooks.
A quick spin around the GT72S reveals a sizeable collection of hardware and connections. On the left you'll find a set of four USB 3.0 ports as well as four audio inputs and outputs and an SD card reader.
On the opposite side there are two more USB 3.0 ports (totaling six) and the optical / Blu-ray burner. With that many USB 3.0 ports you should never struggle with accessory availability - headset, mouse, keyboard, hard drive and portable fan? Check.
What you never knew you didn't know
While researching a few upcoming SD / microSD product reviews here at PC Perspective, I quickly found myself swimming in a sea of ratings and specifications. This write-up was initially meant to explain and clarify these items, but it quickly grew too large to include in every SD card article, so I have spun it off here as a standalone reference. We hope it is as useful to you as it will be to our upcoming SD card reviews.
SD card speed ratings are a bit of a mess, so I'm going to do my best to clear things up here. I'll start with classes and grades. These are specs that define the *minimum* speed a given SD card should meet when reading or writing (both directions are used for the test). As with all flash devices, the write speed tends to be the more limiting factor. Without getting into gory detail, the tests used assume mostly sequential large writes and random reads occurring at no smaller than the minimum memory unit of the card (typically 512KB). The tests match the typical use case of an SD card, which is typically writing larger files (or sequential video streams), with minimal small writes (file table updates, etc).
In the above chart, we see speed Classes 2, 4, 6, and 10. The SD card spec calls out very specific requirements for these ratings, but the gist is that an unfragmented SD card will be able to write at a minimum speed in MB/s corresponding to its rated class (e.g. Class 6 = 6 MB/s minimum transfer speed). The workload specified is meant to represent a typical media device writing to an SD card, with buffering to account for slower FAT table updates (small writes). With higher bus speed modes (more on that later), we also get higher classes. Older cards that are not rated under this spec are referred to as 'Class 0'.
As we move higher than Class 10, we get to U1 and U3, which are referred to as UHS Speed Grades (contrary to the above table, which states 'Class') in the SD card specification. The changeover from Class to Grade has to do with speed modes, which also relate to the capacity standard of the card being used:
U1 and U3 correspond to 10 and 30 MB/s minimums, but the test conditions are slightly different for these specs (so Class 10 is not *exactly* the same as a U1 rating, even though they both equate to 10 MB/sec). Cards not performing to U1 are classified as 'Speed Grade 0'. One final note here is that a U rating also implies a UHS speed mode (see the next section).
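To make the class-to-speed mapping concrete, here is a minimal Python sketch of the minimum write speeds these ratings guarantee. The dictionary and helper function are purely illustrative (they are not part of any official SD tooling), and they ignore the small differences in test conditions between Classes and Grades noted above:

```python
# Minimum sustained write speeds implied by SD ratings, per the discussion above.
# Illustrative sketch only -- not any official SD specification API.
MIN_WRITE_MBPS = {
    "Class 2": 2,
    "Class 4": 4,
    "Class 6": 6,
    "Class 10": 10,
    "U1": 10,   # UHS Speed Grade 1 (tested under UHS bus conditions)
    "U3": 30,   # UHS Speed Grade 3
}

def seconds_to_fill(rating: str, capacity_gb: int) -> float:
    """Worst-case time to fill a card writing at its guaranteed minimum speed."""
    mbps = MIN_WRITE_MBPS[rating]
    return (capacity_gb * 1000) / mbps  # card marketing uses 1 GB = 1000 MB

# Example: a 64GB U3 card fills in at most ~36 minutes of sustained writing.
fill_time = seconds_to_fill("U3", 64)
```

This is also why the ratings matter for video capture: a camera writing a 30 MB/s stream needs a card whose *minimum* (not peak) write speed clears that bar, i.e. U3.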
New Components, New Approach
After 20 or so enclosure reviews over the past year and a half and some pretty inconsistent test hardware along the way, I decided to adopt a standardized test bench for all reviews going forward. Makes sense, right? Turns out choosing the best components for a cases and cooling test system was a lot more difficult than I expected going in, as special consideration had to be made for everything from form-factor to noise and heat levels.
Along with the new components I will also be changing the approach to future reviews by expanding the scope of CPU cooler testing. After some debate as to the type of CPU cooler to employ I decided that a better test of an enclosure would be to use both closed-loop liquid and air cooling for every review, and provide thermal and noise results for each. For CPU cooler reviews themselves I'll be adding a "real-world" load result to the charts to offer a more realistic scenario, running a standard desktop application (in this case a video encoder) in addition to the torture-test result using Prime95.
But what about this new build? It isn't completely done but here's a quick look at the components I ended up with so far along with the rationale for each selection.
CPU – Intel Core i5-6600K ($249, Amazon.com)
The introduction of Intel's 6th generation Skylake processors provided the perfect excuse for an upgrade after using an AMD FX-6300 system for the last couple of enclosure reviews. After toying with the idea of the new i7-6700K, and quickly realizing it was likely overkill and (more importantly) completely unavailable for purchase at the time, I went with the more "reasonable" option in the i5. There has long been a debate about the need for Hyper-Threading in gaming (though this may be changing with the introduction of DX12), but in any case this is still a very powerful processor, and when stressed it should produce a challenging enough thermal load to adequately test both CPU coolers and enclosures going forward.
GPU – XFX Double Dissipation Radeon R9 290X ($347, Amazon.com)
This was by far the most difficult selection. I don’t think of my own use when choosing a card for a test system like this, as it must meet a set of criteria to be a good fit for enclosure benchmarks. If I choose a card that runs very cool and with minimal noise, GPU benchmarks will be far less significant as the card won’t adequately challenge the design and thermal characteristics of the enclosure. There are certainly options that run at greater temperatures and higher noise (a reference R9 290X for example), but I didn’t want a blower-style cooler with the GPU. Why? More and more GPUs are released with some sort of large multi-fan design rather than a blower, and for enclosure testing I want to know how the case handles the extra warm air.
Noise was an important consideration, as levels from an enclosure of course vary based on the installed components. With noise measurements a GPU cooler that has very low output at idle (or zero, as some recent cooler designs permit) will allow system idle levels to fall more on case fans and airflow than a GPU that might drown them out. (This would also allow a better benchmark of CPU cooler noise - particularly with self-contained liquid coolers and audible pump noise.) And while I wanted very quiet performance at idle, at load there must be sufficient noise to measure the performance of the enclosure in this regard, though of course nothing will truly tax a design quite like a loud blower. I hope I've found a good balance here.
GPU Enthusiasts Are Throwing a FET
NVIDIA is rumored to launch Pascal in early (~April-ish) 2016, although some are skeptical that it will even appear before the summer. The design was finalized months ago, and unconfirmed shipping information claims that chips are being stockpiled, which is typical when preparing to launch a product. It is expected to compete against AMD's rumored Arctic Islands architecture, which will, according to its also rumored numbers, be very similar to Pascal.
This architecture is a big one for several reasons.
Image Credit: WCCFTech
First, it will jump two full process nodes. Current desktop GPUs are manufactured at 28nm, which was first introduced with the GeForce GTX 680 all the way back in early 2012, but Pascal will be manufactured on TSMC's 16nm FinFET+ technology. Smaller features have several advantages, but a huge one for GPUs is the ability to fit more complex circuitry in the same die area. This means that you can include more copies of elements, such as shader cores, and do more in fixed-function hardware, like video encode and decode.
That said, we got a lot more life out of 28nm than we really should have. Chips like GM200 and Fiji are huge, relatively power-hungry, and complex, which is a terrible idea to produce when yields are low. I asked Josh Walrath, who is our go-to for analysis of fab processes, and he believes that FinFET+ is probably even more complicated today than 28nm was in the 2012 timeframe, which was when it launched for GPUs.
It's two full steps forward from where we started, but we've been tiptoeing since then.
Image Credit: WCCFTech
Second, Pascal will introduce HBM 2.0 to NVIDIA hardware. HBM 1.0 was introduced with AMD's Radeon Fury X, and it helped in numerous ways -- from smaller card size to a triple-digit percentage increase in memory bandwidth. The 980 Ti can talk to its memory at about 300GB/s, while Pascal is rumored to push that to 1TB/s. Capacity won't be sacrificed, either. The top-end card is expected to contain 16GB of global memory, which is twice what any console has. This means less streaming, higher resolution textures, and probably even left-over scratch space for the GPU to generate content in with compute shaders. Also, according to AMD, HBM is an easier architecture to communicate with than GDDR, which should mean a savings in die space that could be used for other things.
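The rumored bandwidth jump is straightforward arithmetic: peak bandwidth is just bus width times per-pin data rate. The sketch below uses the 980 Ti's published 384-bit / 7 Gbps GDDR5 configuration, while the Pascal figures (four HBM2 stacks at 2 Gbps per pin) are assumptions drawn from the HBM2 spec, not confirmed numbers:

```python
def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s = bus width in bytes x per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# GTX 980 Ti: 384-bit GDDR5 bus at an effective 7 Gbps per pin
gddr5 = mem_bandwidth_gbs(384, 7.0)       # 336 GB/s

# Rumored Pascal: 4 HBM2 stacks, each with a 1024-bit interface at 2 Gbps per pin
# (stack count and pin rate are assumptions, not confirmed specs)
hbm2 = mem_bandwidth_gbs(4 * 1024, 2.0)   # 1024 GB/s -- the rumored ~1TB/s
```

The roughly 3x jump comes almost entirely from bus width: HBM trades a narrow, very fast interface for an extremely wide, slower one stacked right next to the GPU.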
Third, the architecture includes native support for three levels of floating point precision. Maxwell, due to how limited 28nm was, saved on complexity by reducing 64-bit IEEE 754 floating point performance to 1/32nd the rate of 32-bit, because FP64 values are rarely used in video games. This saved transistors, but was a huge, order-of-magnitude step back from the 1/3rd ratio found on the Kepler-based GK110. While it probably won't be back to the 1/2 ratio found in Fermi, Pascal should be much better suited for GPU compute.
Image Credit: WCCFTech
Mixed precision could help video games too, though. Remember how I said it supports three levels? The third one is 16-bit, half the width of the 32-bit format commonly used in video games. Sometimes, that is sufficient. If so, Pascal is said to do these calculations at twice the rate of 32-bit. We'll need to see whether enough games (and other applications) are willing to drop down in precision to justify the die space these dedicated circuits require, but it should double the performance of anything that does.
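The precision ratios discussed across these generations can be summarized in a quick sketch. The Pascal FP16 entry reflects the rumor described here, not a confirmed spec:

```python
# Peak throughput per precision as a fraction of each architecture's FP32 rate,
# using the ratios discussed in the text (Pascal's FP16 figure is rumored).
FP_RATIOS = {
    "Fermi":            {"fp32": 1.0, "fp64": 1 / 2},
    "GK110":            {"fp32": 1.0, "fp64": 1 / 3},
    "Maxwell":          {"fp32": 1.0, "fp64": 1 / 32},
    "Pascal (rumored)": {"fp16": 2.0, "fp32": 1.0},
}

def relative_rate(arch: str, precision: str) -> float:
    """Throughput for a precision relative to that architecture's FP32 rate."""
    return FP_RATIOS[arch][precision]

# Maxwell's FP64 regression vs GK110: (1/3) / (1/32) = 32/3, about 10.7x slower
# relative to FP32 -- the "order-of-magnitude step back" mentioned above.
slowdown = relative_rate("GK110", "fp64") / relative_rate("Maxwell", "fp64")
```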
So basically, this generation should provide a massive jump in performance that enthusiasts have been waiting for. Increases in GPU memory bandwidth and the amount of features that can be printed into the die are two major bottlenecks for most modern games and GPU-accelerated software. We'll need to wait for benchmarks to see how the theoretical maps to practical, but it's a good sign.
Introduction and Technical Specifications
Courtesy of GIGABYTE
As the flagship board in their Intel Z170-based G1 Gaming Series product line, GIGABYTE integrated all the premium features you could ever want into the Z170X-Gaming G1 motherboard. The board features a black PCB with red and white accents spread throughout its surface to make for a very appealing aesthetic. GIGABYTE chose to integrate plastic shields covering the rear panel assembly, the VRM heat sinks, the audio components, and the chipset and SATA ports. With the addition of the Intel Z170 chipset, the motherboard supports the latest Intel LGA1151 Skylake processor line as well as dual-channel DDR4 memory. Offered at a premium MSRP of $499, the Z170X-Gaming G1 is priced to appeal to premium enthusiasts.
GIGABYTE over-engineered the Z170X-Gaming G1 to take anything you could think of throwing at it with a massive 22-phase digital power delivery system, featuring 4th gen IR digital controllers and 3rd gen IR PowerIRStage ICs as well as Durable Black solid capacitors rated at 10k operational hours. GIGABYTE integrated the following features into the Z170X-Gaming G1 board: four SATA 3 ports; three SATA-Express ports; two M.2 PCIe x4 capable ports; dual Qualcomm® Atheros Killer E2400 NICs; a Killer™ Wireless-AC 1535 802.11ac WiFi controller; four PCI-Express x16 slots; three PCI-Express x1 slots; a 2-digit diagnostic LED display; on-board power, reset, CMOS clear, ECO, and CPU Overclock buttons; Dual-BIOS and active BIOS switches; audio gain control switch; Sound Blaster Core 3D audio solution; removable audio OP-AMP port; integrated voltage measurement points; Q-Flash Plus BIOS updater; integrated HDMI video port; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.
Setup, Game Selection
Yesterday NVIDIA officially announced the new GeForce NOW streaming game service, the culmination of the years-long beta and development process known as NVIDIA GRID. As I detailed in my story yesterday about the reveal, GeForce NOW is a $7.99/mo. subscription service that will offer on-demand, cloud-streamed games to NVIDIA SHIELD devices, including a library of 60 games for that $7.99/mo. fee in addition to 7 titles in the “purchase and play” category. There are several advantages that NVIDIA claims make GeForce NOW a step above any other game streaming service, including PlayStation Now, OnLive and others. Those include load times, resolution and frame rate, combined local PC and streaming game support, and more.
I have been able to use and play with the GeForce NOW service on our SHIELD Android TV device in the office for the last few days and I thought I would quickly go over my initial thoughts and impressions up to this point.
Setup and Availability
If you have an NVIDIA SHIELD Android TV (or a SHIELD Tablet) then the setup and getting started process couldn’t be any simpler for new users. An OS update is pushed that changes the GRID application on your home screen to GeForce NOW and you can sign in using your existing Google account on your Android device, making payment and subscription simple to manage. Once inside the application you can easily browse through the included streaming games or look through the smaller list of purchasable games and buy them if you so choose.
Playing a game is as simple as selecting a title from the grid list and hitting play.
Let’s talk about that game selection first. For $7.99/mo. you get access to 60 titles for unlimited streaming. I have included a full list below, originally posted in our story yesterday, for reference.
Choosing the Right Platform
Despite what most people may think, our personal workstations here at the PC Perspective offices aren’t exactly comprised of cutting-edge hardware. Just as in every other production environment, we place a real premium on stability in the machines that we write, photo edit, and in this case, video edit on.
The current video editing workstation at the PC Perspective offices is quite old when you consider the generations upon generations of hardware we have reviewed in the years since it was built. In fact, it has hardly been touched since early 2011. Built around the then-$1000 Intel Core i7-990X, 24GB of DDR3, a Fermi-based NVIDIA Quadro 5000, and a single 240GB SandForce 2-based SSD, this machine has edited a lot of 1080p video for us with few problems.
However, after starting to explore the Panasonic GH4 and 4K video a few months ago, the age of this machine became quite apparent. Real-time playback of high bit rate 4K content was choppy at best, and scrubbing through the timeline next to impossible. Transcoding to a lower resolution mezzanine file, or turning down the playback quality in Premiere Pro worked to some extent, but made the visual quality we gained more difficult to deal with. It was clear that we were going to need a new workstation sooner than later.
The main question was what platform to build upon. My initial thought was to build around the 8-core Intel Core i7-5960X and the X99 platform. The main applications we use, Adobe Premiere Pro and its associated Media Encoder app, are heavily multithreaded. But going from 6 cores with the i7-990X to 8 cores with the i7-5960X, with only a modest improvement in IPC, didn’t seem like a big enough gain, nor very future proof.
Luckily, we had a pair of Xeon E5-2680v2’s around from another testbed that had been replaced. These processors each provide 10 cores (Hyperthreading enabled for a resulting 20 threads each) at a base frequency of 2.8GHz, with the ability to boost up to 3.6GHz. By going with two of these processors in a dual CPU configuration, we will be significantly increasing our compute power and hopefully providing some degree of future proofing. Plus, we already use the slightly higher clocked Xeon E5-2690v2’s in our streaming server, so we have some experience with a very similar setup.
Introduction and Features
Corsair has just expanded their RM Series of PC power supplies to include a third line, the RMx Series, in addition to the original RM and RMi Series. The new RMx power supplies will be available in 550W, 650W, 750W, 850W and 1000W models and are designed by Corsair and built by Channel Well Technologies (CWT). We will be taking a detailed look at the new RM850x 850W PSU in this review.
The RMx Series power supplies are equipped with fully modular cables and optimized for very quiet operation and high efficiency. RMx Series power supplies incorporate Zero RPM Fan Mode, which means the fan does not spin until the power supply is under a moderate to heavy load. The cooling fan is designed to deliver low noise and high static pressure. All of the RMx power supplies are 80 Plus Gold certified for high efficiency.
The Corsair RMx Series is built with high-quality components, including all Japanese made electrolytic capacitors, and Corsair guarantees these PSUs to deliver clean, stable, continuous power, even at ambient temperatures up to 50°C.
Corsair’s new RMx Series power supplies are nearly identical to the current RMi Series units except for these differences:
• Lower cost
• No Corsair Link interface
• 135mm fan vs. 140mm fan
• Additional 550W model
The following table provided by Corsair gives a good summary of the differences and similarities between the RM, RMx, and RMi Series power supplies.
(Courtesy of Corsair)
Corsair RM850x PSU Features summary:
• 850W continuous DC output (up to 50°C)
• 7-Year Warranty and Comprehensive Customer Support
• 80 PLUS Gold certified, at least 90% efficiency under 50% load
• Fully modular cables for easy installation
• Flat ribbon-style, low profile cables help optimize airflow
• Zero RPM Fan Mode for silent operation up to 40% load
• Quiet NR135L fan for long life and quiet operation
• High quality components including all Japanese electrolytic capacitors
• Active Power Factor correction (0.99) with Universal AC input
• Safety protections: OCP, OVP, UVP, SCP, OTP, and OPP
• MSRP for the RM850x: $149.99 USD
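The 80 Plus Gold rating translates directly into wall-socket draw: AC input is simply DC output divided by efficiency. A quick sketch of the arithmetic, using the 50% load point from the feature list above as the example:

```python
def ac_draw_watts(dc_load_w: float, efficiency: float) -> float:
    """AC power pulled from the wall = DC output delivered / efficiency."""
    return dc_load_w / efficiency

# RM850x at 50% load (425W DC) against its 80 Plus Gold 90% efficiency floor:
wall = ac_draw_watts(425, 0.90)  # ~472W drawn from the wall
heat = wall - 425                # ~47W dissipated as heat inside the PSU
```

That ~47W of waste heat is what the NR135L fan ultimately has to remove at load, and why the Zero RPM Fan Mode can only hold out up to the 40% load point.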
When Microsoft approached me a couple of weeks ago with the chance to take an early look at an upcoming performance benchmark built on a DX12 game pending release later this year, I was of course excited for the opportunity. Our adventure into the world of DirectX 12 and performance evaluation started with the 3DMark API Overhead Feature Test back in March and was followed by the release of the Ashes of the Singularity performance test in mid-August. Both of these tests pinpointed one particular aspect of the DX12 API - the ability to improve CPU throughput and efficiency with higher draw call counts, thus enabling higher frame rates on existing GPUs.
This game and benchmark are beautiful...
Today we dive into the world of Fable Legends, an upcoming free-to-play title set in the world of Albion. It will be released on the Xbox One and for Windows 10 PCs, and it will require the use of DX12. Though the game is scheduled for release in Q4 of this year, Microsoft and Lionhead Studios allowed us early access to a specific performance test built on the UE4 engine and the world of Fable Legends. UPDATE: It turns out that the game will have a fall-back DX11 mode that will be enabled if the game detects a GPU incapable of running DX12.
This benchmark focuses more on the GPU side of DirectX 12 - on improved rendering techniques and visual quality rather than on the CPU scaling aspects that made Ashes of the Singularity stand out from other graphics tests we have utilized. Fable Legends is more representative of what we expect to see with the release of AAA games using DX12. Let's dive into the test and our results!
Taking Racing Games a Step Further
I remember very distinctly the first racing game I had ever played and where. It was in the basement of a hotel in Billings, MT where I first put a couple of quarters through the ATARI Night Driver arcade machine. It was a very basic simulator with white dots coming at you as if they were reflectors on poles. The game had a wheel and four gears available through a shifter. It had an accelerator and no brake. It was the simplest racing game a person could play. I was pretty young, so it was not as fun to me because I did not do well actually playing it. Like most kids that age, fun is in the anticipation of playing and putting the quarter in rather than learning the intricacies of a game.
Throughout the years there were distinct improvements. I played Pole Position and Enduro on the ATARI 2600, I had my first PC racer with Test Drive (the Ferrari Testarossa was my favorite vehicle) using only the keyboard. I took a break for a few years and did not get back into racing games until I attended the 3dfx T-buffer demo when I saw the latest NFS 4 (High Stakes) played at 1024x768 with AA enabled. Sure, it looked like the cars were covered in baby oil, but that was not a bad thing at the time.
One of the real breakthrough titles for me was NFS: Porsche Unleashed. EA worked with Porsche to create a game that was much closer to a simulation than the previous arcade racers. It was not perfect, but it was one of the first titles to support Force Feedback in racing. I purchased a Microsoft Sidewinder Force Feedback 2 joystick. The addition of FFB was a tremendous improvement in the game as I could feel the tires start to slip and experience the increased resistance to turns. This was my first real attempt at a racing game and actually completing it. I still have fond memories and it would be great to get a remastered version with better graphics and physics, while still retaining the simulation roots.
After Porsche Unleashed I again stopped playing racers. The release of Project Gotham Racing for the Xbox rekindled that a bit, but I soon tired of the feel of the controller and its rumble rather than real FFB effects. Fast forward to Quakecon 2009, when I saw the first gameplay videos of the upcoming DiRT 2. This title was one of the first to adopt DX11 and would push the HD 5800 and GTX 480 video cards for all they were worth. This re-ignited my desire to race. I purchased DiRT 2 as soon as it was available for the PC and played with the aging (but still solid) Sidewinder FFB 2.
The box was a little beat up when it got to me, but everything was intact.
Something was missing though. I really wanted more out of my racing game. The last time I had used a wheel on a racing game was probably an Outrun arcade machine in the late 80s. I did some shopping around and decided on the Thrustmaster F430 Ferrari FFB wheel. It was on sale at the time for a low, low price of $76. It had a 270 degree rotation which is more apt for arcade racers than sims, but it was a solid wheel for not a whole lot of money. It was a fantastic buy for the time and helped turn me into a racing enthusiast.
During this time I purchased my kids a couple of low end wheels that use the bungee cord centering mechanism. These of course lack any FFB features, but the Genius one I acquired was supposed to have some basic feedback and rumble effects: it never worked as such. So, my experience to this point has been joysticks, bungee wheels, and a 270 degree F430 wheel. This does not make me an expert, but it does provide an interesting background for the jump to a higher level of product.
Introduction, Specifications and Packaging
What's better than an 18-channel NVMe PCIe Datacenter SSD controller in a Half Height Half Length (HHHL) package? *TWO* 18-channel NVMe PCIe Datacenter controllers in a HHHL package! I'm sure words to this effect were uttered in an Intel meeting room some time in the past, because such a device now exists, and is called the SSD DC P3608:
The P3608 is essentially a pair of P3600s glued together on a single PCB, much like how some graphics cards merge a pair of GPUs to deliver the combined performance of two cards in a single one:
What is immediately impressive here is that Intel has pulled off this same trick in a quarter of the space (HHHL compared to a typical graphics card). We can only imagine the potential of a pair of P3600 SSDs, so let's get right into the specs, disassembly, and testing!
Pack a full GTX 980 on the go!
For many years, the idea of a truly mobile gaming system has been attainable if you were willing to pay the premium for high performance components. But anyone that has done research in this field would tell you that though they were named similarly, the mobile GPUs from both AMD and NVIDIA had a tendency to be noticeably slower than their desktop counterparts. A GeForce GTX 970M, for example, only had a CUDA core count that was slightly higher than the desktop GTX 960, and it was 30% lower than the true desktop GTX 970 product. So even though you were getting fantastic mobile performance, there continued to be a dominant position that desktop users held over mobile gamers in PC gaming.
This fall, NVIDIA is changing that with the introduction of the GeForce GTX 980 for gaming notebooks. Notice I did not put an 'M' at the end of that name; it's not an accident. NVIDIA has found a way, through binning and component design, to cram the entirety of a GM204-based Maxwell GTX 980 GPU inside portable gaming notebooks.
The results are impressive and the implications for PC gamers are dramatic. Systems built with the GTX 980 will include the same 2048 CUDA cores, 4GB of GDDR5 running at 7.0 GHz and will run at the same base and typical GPU Boost clocks as the reference GTX 980 cards you can buy today for $499+. And, while you won't find this GPU in anything called a "thin and light", 17-19" gaming laptops do allow for portability of gaming unlike any SFF PC.
So how did they do it? NVIDIA has found a way to get a desktop GPU with a 165 watt TDP into a form factor that has a physical limit of 150 watts (for the MXM module implementations at least) through binning, component selection and improved cooling. Not only that, but there is enough headroom to allow for some desktop-class overclocking of the GTX 980 as well.
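As a back-of-the-envelope check on those memory specs, the quoted 7.0 GHz GDDR5 data rate translates to bandwidth as follows. A quick sketch: the 256-bit bus width is carried over from the desktop GM204 GTX 980 and is not stated above.

```python
# GDDR5 effective data rate in gigatransfers per second
# (the "7.0 GHz" figure quoted for the GTX 980's memory).
data_rate_gtps = 7.0

# Memory bus width of the desktop GM204-based GTX 980, in bits.
bus_width_bits = 256

# Bandwidth in GB/s: transfers per second times bits per transfer,
# divided by 8 bits per byte.
bandwidth_gbps = data_rate_gtps * bus_width_bits / 8
print(f"{bandwidth_gbps:.0f} GB/s")  # 224 GB/s
```

Since the notebook part keeps the full memory configuration, it retains the full 224 GB/s of the desktop card rather than a cut-down figure.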
A Diverse Lineup
ThinkPads have always been one of our favorite notebook brands here at PC Perspective. While there certainly has been some competition from well-designed portables such as the Dell XPS 13 and Microsoft Surface Pro 3, the ThinkPad line remains a solid choice for power users.
We had the chance to look at a lot of Lenovo's ThinkPad lineup for Broadwell, and as this generation comes to a close we decided to give a brief overview of the diversity available. Skylake-powered notebooks may be just on the horizon, but the comparisons of form factor and usability should remain mostly applicable into the next generation.
Within the same $1200-$1300 price range, Lenovo offers a myriad of portable machines with roughly the same hardware in vastly different form factors.
First, let's take a look at the more standard ThinkPads.
Lenovo ThinkPad T450s
The ThinkPad T450s is my default recommendation for anyone looking for a notebook in the $1000+ range. Featuring a 14" 1080p display and an Intel Core i5-5300U processor, it will perform great for the majority of users. While you won't be using this machine for 3D modeling or CAD/CAM applications, general productivity tasks will feel right at home here.
Technically classified as an Ultrabook, the T450s won't exactly be turning any heads with its thinness. Lenovo strikes a balance here, making the notebook as thin as possible at 0.83" while retaining features such as a gigabit Ethernet port, three USB 3.0 ports, an SD card reader, and plenty of display connectivity with Mini DisplayPort and VGA.
Introduction and Technical Specifications
Water cooling has become very popular over the last few years with the rise of all-in-one (AIO) coolers. These coolers combine a single- or dual-fan radiator with a combination CPU block / pump unit, pre-filled from the factory and maintenance free. They are a good alternative to an air-based CPU cooler, but are limited in their expandability. That is where DIY water cooling components come into play: they allow you to build a customized loop for cooling everything from the CPU to the chipset and GPUs (and more). However, DIY loops are much more maintenance intensive than AIO coolers because of the need to flush and refill them periodically to maintain performance and component health.
With the increased popularity of liquid CPU coolers and the renewed interest in and availability of enthusiast-friendly parts following the introduction of the Intel Z97, X99, and Z170 chipsets, it was past time to measure how well different CPU water blocks perform on an Intel X99 board paired with an Intel Core i7-5960X (LGA2011-v3) processor. The five water blocks compared include the following:
- Koolance CPU-360 water block
- Koolance CPU-380I water block
- Swiftech Apogee HD water block
- Swiftech Apogee XL water block
- XSPC Raystorm water block
Technical Specifications (taken from the manufacturer websites)
|Water Block Specifications|CPU-360|CPU-380I|Apogee HD|Apogee XL|Raystorm|
|---|---|---|---|---|---|
|Block Top Material|Nickel-plated Brass|Nickel-plated Brass|POM Acetal|POM Acetal|POM Acetal|
|Base Plate Material|Nickel-plated Copper|Nickel-plated Copper|Copper|Copper|Copper|
|Water Inlet|Jet Impingement Plate|Jet Impingement Plate|Straight Pass-Thru|Straight Pass-Thru|Jet Impingement Plate|
The Killer 1535 Wi-Fi adapter was the first 2x2 MU-MIMO compatible adapter on the market when it launched earlier this year, and is only found in a few products right now. We had a chance to test it out with the recently reviewed MSI G72 Dominator Pro G-Sync laptop, using the new Linksys EA8500 MU-MIMO router. How did it perform, and just what is MU-MIMO? Read on to find out!
Killer Networks certainly hasn’t skimped on the hardware with its new wireless adapter, as the Wireless-AC 1535 features two external 5 GHz signal amplifiers and is 802.11ac Wave 2 compliant, with support for MU-MIMO and transmit beamforming. And while the adapter itself certainly sounds impressive, the real star here – besides the MU-MIMO support – is the Killer software. With these two technologies Killer has a unique product on the market, and if it works as advertised it makes for an attractive alternative to the typical Wi-Fi solution.
MU-MIMO: What is it?
With an increasing number of devices using Wi-Fi in the average connected home the strain on a wireless network can often be felt. Just as one download can bring your internet connection to a crawl, one computer can hog nearly all available bandwidth from your router. MU-MIMO offers a solution to the network limitations of a typical multi-user home, and in fact the MU in MU-MIMO stands for Multi-User. The technology is part of the Wave 2 spec for 802.11ac, and it works differently than standard MIMO (multiple input, multiple output) technology. What’s the difference?
With standard MIMO (also known as Single-User MIMO), compatible devices take advantage of multiple spatial data streams to achieve higher throughput than a single stream could deliver to one device. Multiple antennas on both the base station and the client device create the separate transmit/receive channels those streams need, and the number of streams matches the number of antennas on each side (1x1 supports one stream, 2x2 supports two streams, and so on).
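To put numbers on the stream scaling described above, here is a rough sketch of how 802.11ac per-stream data rates add up. The subcarrier count, modulation, coding rate, and symbol duration below are the standard values for an 80 MHz channel at MCS 9 with a short guard interval; they are illustrative figures from the 802.11ac spec, not taken from the adapter's documentation.

```python
# 802.11ac PHY rate for an 80 MHz channel at MCS 9
# (256-QAM with rate-5/6 coding, short guard interval).
data_subcarriers = 234    # data-bearing OFDM subcarriers in an 80 MHz channel
bits_per_symbol = 8       # 256-QAM carries 8 bits per subcarrier per symbol
coding_rate = 5 / 6       # rate-5/6 forward error correction
symbol_duration_us = 3.6  # OFDM symbol time with short guard interval

# Per-stream rate in Mbps: data bits per OFDM symbol / symbol time.
per_stream_mbps = (data_subcarriers * bits_per_symbol * coding_rate
                   / symbol_duration_us)

# MIMO scales linearly with spatial streams; a 2x2 adapter like the
# Killer 1535 gets two streams.
for streams in (1, 2, 3, 4):
    print(f"{streams}x{streams}: {streams * per_stream_mbps:.1f} Mbps")
```

This is where the familiar 433.3 Mbps (1x1) and 866.7 Mbps (2x2) link rates for 802.11ac devices come from; MU-MIMO does not change these per-client numbers, but lets the router serve multiple clients' streams simultaneously.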
For multi-device users
One of the things that we still wrestle with here at PC Perspective is keeping a host of phones, tablets and mobile gaming devices charged and ready to go when we need them. Reviewing items means we need to have multiple devices ready to go to run tests and benchmarks at any given time. Keeping that collection of technology powered up can be a pain in the rear - adapters everywhere, cables strewn across the shelf, etc.
The same is true for me at home - even though we are only a two adult household, my wife and I each have a tablet we use regularly, smartphones and a host of accessories like wireless headphones, smart watches and more. And when company comes over it is expected that at least someone will need to top off the power to their phone.
Skiva has a USB charging accessory to help alleviate much of the headache involved in these situations. The Powerflow 7 Stand Charger combines a 7-port USB charger capable of delivering 2.4A to each port with a simple stand that holds 7 tablets and phones vertically. The result is a neatly organized set of hardware that is accessible when you need it.
Specs and Hardware
The AMD Radeon Nano graphics card is unlike any product we have ever tested at PC Perspective. As I wrote and described to the best of my ability (without hardware in my hands) late last month, AMD is targeting a totally unique and different classification of hardware with this release. As a result, there is quite a bit of confusion, criticism, and concern about the Nano, and, to be upfront, not all of it is unwarranted.
After spending the past week with an R9 Nano here in the office, I am prepared to say this immediately: for users matching specific criteria, there is no other option that comes close to what AMD is putting on the table today. That specific demographic though is going to be pretty narrow, a fact that won’t necessarily hurt AMD simply due to the obvious production limitations of the Fiji and HBM architectures.
At $650, the R9 Nano carries a flagship price, but it does so knowing full well that it will not compete in raw performance against the likes of the GTX 980 Ti or AMD’s own Radeon R9 Fury X. However, much like Intel has done with the Ultrabook and ULV platforms, AMD is attempting to carve out a new market for dense, modest-power GPUs in small form factors. Whether or not they have succeeded is what I am looking to determine today. Ride along with me as we journey on the roller coaster of a release that is the AMD Radeon R9 Nano.