Introduction and Specifications
Lenovo made quite a splash with the introduction of the original X1 Carbon notebook in 2012; with its ultra-thin, ultra-light, carbon fiber-infused construction, it became the flagship ThinkPad notebook. Fast-forward to late 2013 and the introduction of the ThinkPad Yoga, the business version of the previous year's consumer Yoga 2-in-1. The 360-degree hinge was novel for a business machine at the time, and the ThinkPad Yoga showed a lot of promise, though it was far from perfect.
Now we fast-forward again, to the present day. It's 2016, and Lenovo has merged their ThinkPad X1 Carbon and ThinkPad Yoga together to create the X1 Yoga. This new notebook integrates the company's Yoga design (in appearance this is akin to the recent ThinkPad Yoga 260/460 revision) into the flagship ThinkPad X lineup, and provides what Lenovo is calling "the world's lightest 14-inch business 2-in-1".
Yoga and Carbon Merge
When Lenovo announced the marriage of the X1 Carbon notebook with the ThinkPad Yoga, I took notice. As a buyer of the original ThinkPad Yoga S1 (with which I had a love/hate relationship), I wondered if the new X1 version of the business-oriented Yoga convertible would win me over. On paper it checks all the right boxes, and the slim new design looks great. I couldn't wait to get my hands on one for some real-world testing, and to see if my complaints about the original ThinkPad Yoga design were still valid.
As one would expect from a notebook carrying Lenovo’s ThinkPad X1 branding, this new Yoga is quite slim, and made from lightweight materials. Comparing this new Yoga to the X1 Carbon directly, the most obvious difference is that 360° hinge, which is the hallmark of the Yoga series, and exclusive to those Lenovo designs. This hinge allows the X1 Yoga to be used as a notebook, tablet, or any other imaginable position in between.
|Lenovo ThinkPad X1 Yoga (base configuration, as reviewed)|
|Processor||Intel Core i5-6200U (Skylake)|
|Graphics||Intel HD Graphics 520|
|Screen||14-in 1920x1080 IPS Touch (with digitizer, active pen)|
|Storage||256GB M.2 SSD|
|Camera||720p / Digital Array Microphone|
|Wireless||Intel 8260 802.11ac + BT 4.1 (Dual Band, 2x2)|
|Connectivity||3x USB 3.0, Audio combo jack|
|Dimensions||333mm x 229mm x 16.8mm (13.11" x 9.01" x 0.66")|
|Weight||2.8 lbs (1270 g)|
|OS||Windows 10 Pro|
|Price||$1349 - Amazon.com|
Pre and Post Update Testing
Samsung launched their 840 Series SSDs back in May of 2013, over three years ago as of this writing. The 840 was well received as a budget model, but it was rapidly eclipsed by the follow-on release of the 840 EVO.
A quick check of our test 840 revealed inconsistent read speeds.
We broke the news of Samsung's TLC SSDs being affected by time-based degradation of read speeds in September of 2014, and since then we have seen nearly every affected product patched by Samsung, with one glaring exception: the original 840 SSD. While the 840 EVO was a TLC SSD with a built-in SLC static data cache, the preceding 840 was a pure TLC drive. With the focus on the newer, more popular drives, I had done only spot-check testing of our base 840 sample here at the lab, but once I heard there was finally a patch for this unit, I set out to do some pre-update testing so that I could gauge any improvements to read speed from the update.
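A spot check like this boils down to timing sequential reads of files of different ages. Here is a minimal sketch of that measurement (block size and file choices are illustrative, not our exact lab procedure); on an unpatched drive you would run it against both freshly written files and files that have sat untouched for months:

```python
import time

def measure_read_speed(path, block_size=1024 * 1024):
    """Sequentially read one file and return throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed

# Compare results for new files vs. months-old files; on an affected
# 840, the older (stale) data reads back measurably slower.
```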
As a refresher, 'stale' data on an 840 EVO would see reduced read speeds over a period of months after those files were written to the drive. This issue was properly addressed in a firmware update issued back in April of 2015, but there were continued grumbles from owners of other affected drives, namely the base model 840. With the Advanced Performance Optimization patch being issued so long after the others, I'm left wondering why there was such a long delay on this one. Differences in how the base 840 demonstrates this issue revealed themselves in my pre-patch testing:
Too much power to the people?
UPDATE (7/1/16): I have added a third page to this story that looks at the power consumption and power draw of the ASUS GeForce GTX 960 Strix card. This card was pointed out by many readers on our site and on reddit as having the same problem as the Radeon RX 480. As it turns out...not so much. Check it out!
UPDATE 2 (7/2/16): We have an official statement from AMD this morning.
As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU's tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016).
Honestly, that doesn't tell us much. And AMD appears to be deflecting slightly by using words like "some RX 480 boards". I don't believe this is limited to a subset of cards, or review samples only. AMD does indicate that the 8 Gbps memory on the 8GB variant might be partially to blame - which is an interesting correlation to test out later. The company does promise a fix for the problem via a driver update on Tuesday - we'll be sure to give that a test and see what changes are measured in both performance and in power consumption.
The launch of the AMD Radeon RX 480 has generally been considered a success. Our review of the new reference card shows impressive gains in architectural efficiency, improved positioning against NVIDIA's competing parts in the same price range, and VR-ready gaming performance starting at $199 for the 4GB model. AMD has every right to be proud of the new product and should hold this position alone until the GeForce product line brings a Pascal card down into the same price category.
If you read carefully through my review, some interesting data cropped up around the power consumption and delivery of the new RX 480. Looking at our power consumption numbers, measured directly from the card rather than at the wall, it was drawing slightly more than its advertised 150 watt TDP. This was done at 1920x1080 and tested in both Rise of the Tomb Raider and The Witcher 3.
When overclocked, the results were even higher, approaching the 200 watt mark in Rise of the Tomb Raider!
A portion of the review over at Tom’s Hardware produced similar results but detailed the power consumption from the motherboard PCI Express connection versus the power provided by the 6-pin PCIe power cable. There has been a considerable amount of discussion in the community about the amount of power the RX 480 draws through the motherboard, whether it is out of spec and what kind of impact it might have on the stability or life of the PC the RX 480 is installed in.
As it turns out, we have the ability to measure the exact same kind of data, albeit through a different method than Tom's, and we wanted to see whether our results broke down in the same way.
Our Testing Methods
This is a complex topic so it makes sense to detail the methodology of our advanced power testing capability up front.
How do we do it? It's simple in theory but surprisingly difficult in practice: we intercept the power being sent through the PCI Express bus as well as the PCIe power connectors before they reach the graphics card, and we directly measure power draw with a 10 kHz DAQ (data acquisition) device. A huge thanks goes to Allyn for getting the setup up and running. We built a PCI Express bridge that is tapped to measure both 12V and 3.3V power, and we built some Corsair power cables that measure the 12V coming through those as well.
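Conceptually, each tap reduces to multiplying sampled voltage by sampled current and summing across the measured rails. A minimal sketch of that arithmetic (the sample values here are made up for illustration, not captured DAQ data):

```python
def rail_power(volts, amps):
    """Per-sample instantaneous power for one tapped rail (P = V * I)."""
    return [v * i for v, i in zip(volts, amps)]

def total_power(rails):
    """Sum per-sample power across every measured rail to get the
    combined draw of the graphics card."""
    return [sum(samples) for samples in zip(*rails)]

# Two-sample traces for three taps: slot 12V, slot 3.3V, and cable 12V.
slot_12v = rail_power([12.0, 12.1], [2.0, 2.1])
slot_3v3 = rail_power([3.3, 3.3], [0.5, 0.5])
cable_12v = rail_power([12.0, 12.0], [8.0, 8.2])
combined = total_power([slot_12v, slot_3v3, cable_12v])
```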
The result is data that looks like this.
What you are looking at here is the power measured from the GTX 1080. From time 0 to about 8 seconds the system is idle; from 8 seconds to about 18 seconds Steam is starting up the title; from 18-26 seconds the game is at the menus; we load the game from 26-39 seconds; and we play through our benchmark run after that.
There are four lines drawn in the graph, the 12V and 3.3V results are from the PCI Express bus interface, while the one labeled PCIE is from the PCIE power connection from the power supply to the card. We have the ability to measure two power inputs there but because the GTX 1080 only uses a single 8-pin connector, there is only one shown here. Finally, the blue line is labeled total and is simply that: a total of the other measurements to get combined power draw and usage by the graphics card in question.
From this we can see a couple of interesting data points. First, the idle power of the GTX 1080 Founders Edition is only about 7.5 watts. Second, under a gaming load of Rise of the Tomb Raider, the card pulls about 165-170 watts on average, though there are plenty of intermittent spikes. Keep in mind we are sampling the power thousands of times per second, so this kind of behavior is more or less expected.
Different games and applications impose different loads on the GPU and can cause it to draw drastically different power. Even if a game runs slowly, it may not be drawing maximum power from the card if a certain system on the GPU (memory, shaders, ROPs) is bottlenecking other systems.
One interesting note on our data compared to what Tom's Hardware presents: we are using a second-order low-pass filter to smooth out the data, making it more readable and more indicative of how power draw is handled by the components on the PCB. Tom's story reported "maximum" power draw at 300 watts for the RX 480, and while that is technically accurate, those figures represent instantaneous power draw. That is interesting data in some circumstances, and it may actually indicate other potential issues with excessively noisy power circuitry, but to us it makes more sense to sample data at a high rate (10 kHz) but filter it and present it in a more readable way that better meshes with the continuous power delivery capabilities of the system.
Image source: E2E Texas Instruments
An example of instantaneous voltage spikes on power supply phase changes
Some gamers have expressed concern over that “maximum” power draw of 300 watts on the RX 480 that Tom’s Hardware reported. While that power measurement is technically accurate, it doesn’t represent the continuous power draw of the hardware. Instead, that measure is a result of a high frequency data acquisition system that may take a reading at the exact moment that a power phase on the card switches. Any DC switching power supply that is riding close to a certain power level is going to exceed that on the leading edges of phase switches for some minute amount of time. This is another reason why our low pass filter on power data can help represent real-world power consumption accurately. That doesn’t mean the spikes they measure are not a potential cause for concern, that’s just not what we are focused on with our testing.
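To illustrate why filtering matters, here is a toy second-order low-pass built as two cascaded first-order stages (this demonstrates the concept; it is not our exact DAQ filter design): a single instantaneous 300 W sample riding on a steady 165 W load barely registers in the filtered trace.

```python
import math

def low_pass(samples, sample_rate, cutoff_hz):
    """First-order RC low-pass (exponential smoothing)."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    out = [samples[0]]
    for s in samples[1:]:
        out.append(out[-1] + alpha * (s - out[-1]))
    return out

def second_order_low_pass(samples, sample_rate, cutoff_hz):
    """Cascade two first-order stages for a second-order roll-off."""
    return low_pass(low_pass(samples, sample_rate, cutoff_hz),
                    sample_rate, cutoff_hz)

# A steady 165 W load with one instantaneous 300 W sample: after
# filtering, the trace stays near the continuous draw instead of
# reporting the single-sample spike as "maximum power".
samples = [165.0] * 100 + [300.0] + [165.0] * 100
filtered = second_order_low_pass(samples, sample_rate=10_000, cutoff_hz=100)
```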
Polaris 10 Specifications
It would be hard at this point NOT to know about the Radeon RX 480 graphics card. AMD and the Radeon Technologies Group have been talking publicly about the Polaris architecture since December of 2015, with lofty ambitions. Given the precarious position the company is in, well behind in market share and struggling to compete with the dominant player in the market (NVIDIA), the team was willing to sacrifice sales of current generation parts (the 300-series) in order to excite the user base for the upcoming move to Polaris. It is a risky bet and one that will play out over the next few months in the market.
Since then, AMD has released information a bit at a time. First there were details on the new display support, then information about the advantages of the 14nm process technology. We then saw demos of working silicon at CES with targeted form factors, and at events in Macau the company walked the press through the full architectural details. At Computex AMD announced rough performance metrics and a price point. Finally, at E3, AMD discussed the RX 460 and RX 470 cousins and the release date of…today. It's been quite a whirlwind.
Today the rubber meets the road: is the Radeon RX 480 the groundbreaking and stunning graphics card that we have been promised? Or does it struggle again to keep up with the behemoth that is NVIDIA’s GeForce product line? AMD’s marketing team would have you believe that the RX 480 is the start of some kind of graphics revolution – but will the coup be successful?
Join us for our second major graphics architecture release of the summer and learn for yourself if the Radeon RX 480 is your next GPU.
A new competitor has entered the arena!
When we first saw the announcement of the MateBook in Spain back in March, pricing was immediately impressive. The base model of the tablet starts at just $699; $200 less than the lowest-priced Surface Pro 4, with features and performance that pretty closely match one another.
The MateBook ships only with Core m processors, a necessity of the incredibly thin, fanless design that Huawei is using. That will obviously put the MateBook behind other tablets and notebooks that use Core i3/i5/i7 processors, but with a power consumption advantage along the way. Honestly, the performance differences between the Core m3, m5, and m7 parts are pretty small – all share the same 4.5 watt TDP, and all have fairly low base clock speeds and high boost clocks. The Core m5-6Y54 in our test sample has a base clock of 1.1 GHz and a maximum Turbo Boost clock of 2.7 GHz; the top-end Core m7-6Y75 has a base of 1.2 GHz and a boost of 3.1 GHz. The secret, of course, is that these processors run at Turbo clocks very infrequently; only during touch interactions and when applications demand performance.
If your workload regularly requires intensive transcoding, video editing, or even high-resolution photo manipulation, the Core m parts are going to be slower than the Core i-series options available in other solutions. If you just occasionally need to use an application like Photoshop, the MateBook has no problem doing so.
|Huawei MateBook Tablet PC|
|Screen||12-in 2160x1440 IPS|
|CPU||Intel Core m3 / m5 / m7|
|GPU||Intel HD Graphics 515|
|Network||802.11ac MIMO (2.4 GHz, 5.0 GHz), Gigabit Ethernet (MateDock)|
|Display Output||HDMI / VGA (through MateDock)|
|Connectivity||USB 3.0 Type-C, USB 3.0 x 2 (MateDock)|
|Audio||Dual Digital Mic|
|Weight||640g (1.41 lbs)|
|Dimensions||278.8mm x 194.1mm x 6.9mm (10.9-in x 7.6-in x 0.27-in)|
|Operating System||Windows 10 Home / Pro|
Update: The Huawei Matebook is now available on Amazon.com!
At the base level, both the Surface Pro 4 and the MateBook have identical specs, but the Huawei unit is priced $200 lower. After that, things get more complicated, as the Surface Pro 4 moves to Core i5 and Core i7 processors while the MateBook sticks with m5 and m7 parts. Storage capacities and memory sizes scale similarly across both lines, though. The lowest entry point for the MateBook with 256GB of storage and 8GB of memory is $999 and comes with a Core m5 processor; a comparable Surface Pro 4 uses a Core i5 CPU instead but will run you $1199. If you want to move from 256GB to 512GB of storage, Microsoft wants $400 more for your SP4, while Huawei's price only goes up $200.
Introduction and Features
In this review we will take a detailed look at one of be quiet!’s top of the line power supplies, the Dark Power Pro 11 750W. There are currently six power supplies in the Dark Power Pro 11 Series, which include 550W, 650W, 750W, 850W, 1000W and 1200W models. As you might expect, be quiet! continues to be focused on delivering virtually silent power supplies and they are one of the top selling brands in Europe. All of the Dark Power Pro 11 models are certified for high efficiency (80 Plus Platinum) and come with modular cables.
be quiet! designed the Dark Power Pro 11 Series to provide high efficiency with minimal noise for systems that demand whisper-quiet operation without compromising on power quality. In addition to the Dark Power Pro 11 Series, be quiet! offers a full range of power supplies in ATX, SFX, and TFX form factors.
(Courtesy of be quiet!)
All of the Dark Power Pro 11 Series power supplies are semi-modular (all cables are modular except for the fixed 24-pin ATX cable). Along with 80 Plus Platinum certified high efficiency and quiet operation, the Dark Power Pro 11 750W PSU features an “overclocking” key to select between multi-rail and single rail +12V outputs. The power supply uses be quiet!’s latest SilentWings3 135mm fan for virtually silent operation. The fan speed starts out very slow and remains slow and quiet through mid-power levels. And the Dark Power Pro 11 power supplies allow connecting up to four case fans, whose speed will be controlled by the PSU.
be quiet! Dark Power Pro 11 750W PSU Key Features:
• 750W continuous DC output (ATX12V v2.4, EPS 2.92 compliant)
• Virtually inaudible SilentWings3 135mm FDB cooling fan
• 80 PLUS Platinum certified efficiency (up to 94%)
• Premium 105°C rated parts enhance stability and reliability
• Powerful GPU support with seven PCI-E connectors
• User-friendly cable management reduces clutter and improves airflow
• NVIDIA SLI Ready and AMD CrossFire X certified
• ErP 2014 ready and meets Energy Star 6.0 guidelines
• Zero load design supports Intel’s Deep Power Down C6 & C7 modes
• Overclocking key selects between single or multiple +12V rails
• Active Power Factor correction (0.99) with Universal AC input
• Intelligent speed control for up to four case fans
• Safety Protections: OCP, OVP, UVP, SCP, OTP, and OPP
• 5-Year warranty
Here is what be quiet! has to say about the Dark Power Pro 11 750W PSU: "It is a fact of the modern world that high technology requires constant refinement and unending improvement – and that is even truer for those who would be leaders. Dark Power Pro power supplies are renowned as the world’s quietest and most efficient high-performance PSUs. The Dark Power Pro 11 750W model takes that a step further with a power conversion topology that delivers 80Plus Platinum performance, add to that an unparalleled array of enhancements that augment this unit’s compatibility, convenience of use, reliability, and safety, and the result is the most technologically-advanced power supply be quiet! has ever built.”
Introduction, Specifications, and Packaging
Western Digital launched their My Passport Wireless nearly two years ago. It was a nifty device that could back up or offload SD cards without the need for a laptop, making it ideal for photographers in the field. I came away from that review wondering just how much more you could pack into a device like that, and today I get to find out:
Not to be confused with the My Passport Pro (a TB-connected portable RAID storage device), the My Passport Wireless Pro is meant for on-the-go photographers who seek to back up their media while in the field but also lighten their backpacks. The concept is simple - have a small device capable of offloading (or backing up) SD cards without having to lug along your laptop and a portable hard drive to do so. Add in a wireless hotspot with WAN pass-through along with mobile apps to access the media and you can almost get away without bringing a laptop at all. Oh, and did I mention this one can also import photos and videos from your smartphone while charging it via USB?
- Capacity: 2TB and 3TB
- Battery: 6,400 mAh / 24 Wh
- UHS-I SD Card Reader
- USB 3.0 (upstream) port for data and charging
- USB 2.0 (downstream) port for importing and charging smartphones
- 802.11ac + n dual band (2.4 / 5 GHz) WiFi
- 2.4A Travel Charge Adapter (included)
- Plex Media Server capable
- Available 'My Cloud' mobile apps
No surprises here. A 2.4A power adapter is included this time around, which is a nice touch.
ARM Releases Egil Specs
The final product that ARM showed us at that Austin event is the latest video processing unit that will be integrated into their Mali GPUs. The Egil video processor is a next generation unit that will be appearing later this year with the latest products that utilize Mali GPUs up and down the spectrum. It is not tied to the latest G71 GPU, but rather can be used with a multitude of current Mali products.
Video is one of the biggest use cases for modern SoCs in mobile devices. People constantly stream and record video from their handhelds and tablets, and current video processor products from a variety of sources have some real drawbacks. We have seen an amazing increase in pixel density on phones and tablets, and the power required to render video effectively on these products has gone up. We have also seen the introduction of new codecs that require serious processing capability to decode.
Egil is a scalable design that can go from one core to six. A single core can play back video from a variety of codecs at 1080p and up to 80 fps, while the six-core solution can play back 4K video at 120 Hz. This assumes the Egil processor is produced on a 16nm FinFET process or smaller and running at 800 MHz. This scalability gives SoC manufacturers a lot of flexibility to tailor their products for specific targets and markets.
The cores themselves are fixed-function blocks with dedicated controllers and control logic. Previous video processors leaned more heavily on decode than encode. Now that streaming from mobile devices is more pervasive and cameras/optics can support higher resolutions and bitrates, ARM has redesigned Egil to offer extensive encoding capabilities: it can decode 4K video while simultaneously encoding four 1080p30 streams.
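Those scaling claims line up neatly with raw pixel throughput. Treating capability as pure pixel rate (a sanity check only, since it ignores per-codec complexity), six cores at 1080p80 equals exactly one 4K120 stream, and four 1080p30 encodes equal one 4K30 stream:

```python
def pixel_rate(width, height, fps):
    """Raw pixels per second for a given resolution and frame rate."""
    return width * height * fps

one_core = pixel_rate(1920, 1080, 80)          # single core: 1080p at 80 fps
six_cores = 6 * one_core                       # claimed to scale linearly
uhd_120 = pixel_rate(3840, 2160, 120)          # six-core claim: 4K at 120
four_1080p30 = 4 * pixel_rate(1920, 1080, 30)  # simultaneous-encode claim
uhd_30 = pixel_rate(3840, 2160, 30)            # ...equals one 4K30 stream
```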
Egil will eventually find its way into other products such as TVs. These custom SoCs will become even more important as 4K playback and media become more common, along with potential new functionality that has yet to be implemented effectively on TVs. For the time being we will likely see Egil in mobile first, with the initial products hitting the market in the second half of 2016.
ARM is certainly on a roll this year, introducing new CPU, GPU, and now video processor designs. We will start to see these products being introduced throughout the end of this year and into the next. The company certainly has not been resting or letting potential competitors get the edge on them. Their products have always focused on low power consumption, but the potential performance looks to satisfy even power-hungry users in the mobile and appliance markets. Egil is another solid-looking addition to the lineup, bringing impressive performance and codec support for both decoding and encoding.
Introduction, Dynamic Write Acceleration, and Packaging
Micron joined Intel in announcing their joint venture production of IMFT 3D NAND just a bit over a year ago. The industry was naturally excited, since IMFT has historically enabled relatively efficient production, ultimately resulting in reduced SSD prices over time. I suspect this time will be no different, as IMFT's 3D flash has aimed at high die capacities since its inception, and its second generation should *double* per-die capacities while keeping speeds reasonable thanks to a quad-plane design implemented from the start of this endeavor. Of course, I'm getting ahead of myself a bit, as there are no consumer products sporting this flash just yet - well, not until today at least:
Marketed under Micron's consumer brand Crucial, the MX300 is the first consumer SSD sporting IMFT 3D NAND. Crucial is known for their budget-minded SSDs, and for the MX300 they chose to go with the best cost/GB they could manage with what they had to work with. That meant putting this new 3D NAND into TLC mode. There are many TLC haters out there, but remember this is 3D NAND. Samsung's 850 EVO can exceed 500 MB/sec writes to TLC at its 500GB capacity point, and the MX300 is launching with *only* a 750GB capacity, so its TLC speed should be at least reasonable.
(the return of) Dynamic Write Acceleration
Dynamic Write Acceleration in action during a sequential fill - that last, slowest portion was my primary concern for the MX300.
TLC is not the only story here, because Crucial has included their Dynamic Write Acceleration (DWA) technology in the MX300. This is a technology where the SSD controller can dynamically switch the flash programming mode of the flash pool, doing so at the block level. It appears to be unique to IMFT flash, as every other 'hybrid' SSD we have tested had a static SLC cache area. DWA's ability to switch flash modes on the fly has always fascinated me on paper, but I just haven't been impressed by Micron's previous attempts to implement it. The M600 was a bit all over the place in its write consistency, and that SSD was flipping blocks between SLC and MLC. With the MX300 flipping between SLC and *TLC*, there was a possibility of far more noticeable slowdowns in cases where large writes were taking place and the controller was caught trying to scavenge space in the background.
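The performance concern reduces to simple arithmetic: once a large write exhausts whatever SLC-mode window the controller can scavenge, the remainder programs at the slower native rate. A sketch of that model (the cache size and speeds below are illustrative placeholders, not measured MX300 figures):

```python
def fill_time_seconds(total_gb, slc_window_gb, slc_mb_s, tlc_mb_s):
    """Rough time to write total_gb sequentially when the first portion
    lands in a fast SLC-mode window and the rest programs as native TLC.
    All figures passed in are illustrative, not measured numbers."""
    cached = min(total_gb, slc_window_gb)
    direct = total_gb - cached
    return cached * 1024 / slc_mb_s + direct * 1024 / tlc_mb_s

# 100 GB burst: 12 GB absorbed at 500 MB/s, the remainder at 180 MB/s.
seconds = fill_time_seconds(100, 12, 500, 180)
```

The takeaway is that the blended speed of a long transfer is dominated by the post-cache rate, which is exactly the slow tail visible in the sequential-fill plot above.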
New Latency Percentile vs. legacy IO Percentile, shown here highlighting a performance inconsistency seen in the Toshiba OCZ RD400. Note which line more closely represents the Latency Distribution (gray) also on this plot.
Introduction and Features
SilverStone is a veteran in the PC power supply industry, and they continue to offer a full line of enclosures, power supplies, fans, coolers, and PC accessories. They have raised the bar in their Strider power supply series, which now includes three 80 Plus Titanium certified units: the ST60F-TI, ST70F-TI, and ST80F-TI. These three units are billed as "the world's smallest 80 Plus Titanium, full-modular ATX power supplies," with a chassis that is only 150mm (5.9") deep.
The 80 Plus Titanium efficiency standards were introduced in 2012 and are the most demanding specifications to date. In addition to raising the efficiency requirements at 20%, 50%, and 100% loads, the Titanium standard adds a new requirement at 10% load. This ensures that a Titanium certified power supply will operate with at least 90% efficiency over the full range of loads and deliver up to 94% efficiency at a 50% load.
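Efficiency here is simply DC output divided by AC input, checked at the four mandated load points. A sketch using the commonly cited 115 V non-redundant Titanium thresholds (the example wall-draw figures are invented for illustration):

```python
# 80 Plus Titanium minimums (115 V non-redundant): load fraction -> efficiency
TITANIUM = {0.10: 0.90, 0.20: 0.92, 0.50: 0.94, 1.00: 0.90}

def efficiency(dc_out_watts, ac_in_watts):
    """Efficiency is DC power delivered divided by AC power consumed."""
    return dc_out_watts / ac_in_watts

def meets_titanium(rated_watts, ac_draw):
    """ac_draw maps each test load fraction to measured AC input watts."""
    return all(
        efficiency(rated_watts * load, ac_draw[load]) >= TITANIUM[load]
        for load in TITANIUM
    )

# A hypothetical 600 W unit drawing 66 / 129.5 / 317 / 662 W at the wall:
ok = meets_titanium(600, {0.10: 66.0, 0.20: 129.5, 0.50: 317.0, 1.00: 662.0})
```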
I’m also happy to report that SilverStone is now providing a 5-year warranty on both the Strider Titanium and Strider Platinum series power supplies; up from their standard 3-year warranty.
SilverStone ST60F-TI Power Supply Key Features:
• 80 Plus Titanium certified for super-high efficiency
• Compact design with a depth of 150mm for easy integration
• All-modular, flat ribbon-style cables
• 100% all Japanese made capacitors
• Strict ±3% voltage regulation and low AC ripple and noise
• Powerful single +12V rail
• Four PCI-E connectors for multiple GPU support
• Safety Protections: OCP, OTP, OPP, UVP, OVP, and SCP
• Quiet 120mm fan with Fluid Dynamic bearing
• 140mm dust filter
• 5-Year warranty
Introduction and Specifications
The Corsair VOID Surround Gaming Headset is a hybrid product of sorts, combining a traditional stereo gaming headset with a Dolby Headphone-enabled USB dongle to unlock virtual 7.1 surround sound. We’ll have a look, and listen, in this review.
The market for gaming headsets being what it is, one of the most important factors with each new product inevitably becomes price. There are different tiers of products out there from many companies, and Corsair themselves offer a few different choices at various price points. With the VOID Surround we have a pretty affordable option at $79.99, which is about half the price of the previous wired gaming headset I looked at, Logitech's G633.
One of the advantages Corsair offers with this VOID headset is a pair of 50mm drivers, which theoretically offer better bass than 40mm options (though of course size alone is not a guarantee). The 7.1 surround effect is via Dolby Headphone, which is a virtual effect that is commonly found with single-driver options such as this. If the effect is convincing, a headset like the VOID can save the user a lot of money over the pricey discrete multi-driver options on the market.
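Conceptually, virtual surround like Dolby Headphone renders each virtual speaker through a pair of head-related impulse responses and sums everything down to two ears. This toy sketch shows the idea (the tiny impulse responses are made up; this is not Dolby's actual processing):

```python
def convolve(signal, impulse):
    """Direct-form convolution (real implementations use FFTs)."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def virtualize(channels, hrtfs):
    """channels: {speaker: samples}; hrtfs: {speaker: (left_ir, right_ir)}.
    Each virtual speaker is rendered through a per-ear impulse response,
    then all speakers are summed into one stereo pair."""
    n = max(len(sig) + max(len(hrtfs[c][0]), len(hrtfs[c][1])) - 1
            for c, sig in channels.items())
    left, right = [0.0] * n, [0.0] * n
    for c, sig in channels.items():
        for ear, bus in ((0, left), (1, right)):
            for i, v in enumerate(convolve(sig, hrtfs[c][ear])):
                bus[i] += v
    return left, right

# A single impulse from a virtual 'rear left' speaker: it arrives at the
# left ear sooner and louder than at the (head-shadowed) right ear.
channels = {"rear_left": [1.0, 0.0, 0.0]}
hrtfs = {"rear_left": ([0.9, 0.1], [0.0, 0.4, 0.1])}
left, right = virtualize(channels, hrtfs)
```

The interaural delay and level differences baked into those impulse responses are what make a single pair of drivers read to the brain as sound arriving from behind.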
Introduction and Features
Corsair continues to expand their extensive power supply lineup with the addition of two new small form factor (SFX) units, the SF450 and SF600. The SF Series power supplies are fully modular and optimized for quiet operation and high efficiency. Both power supplies feature Zero RPM Fan Mode, which means the fan doesn't start to spin until the power supply is under a moderate to heavy load. The SF450 and SF600 are 80 Plus Gold certified for high efficiency and come with a 7-year warranty.
While the SF Series is designed for use in small form factor enclosures, Corsair's SF Series power supplies can also be used in standard ATX cases via the optional SFX-to-ATX adapter bracket. As you can see in the photo below, the SF Series power supply is much smaller in all three dimensions than a standard ATX power supply. We will be taking a detailed look at the new SF600 power supply in this review.
SF Series 600W vs. ATX Series 650W
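The size advantage is easy to quantify from nominal form factor dimensions (SFX spec body vs. a typical PS/2 ATX body; the SF600's exact measurements may differ slightly):

```python
def volume_liters(w_mm, h_mm, d_mm):
    """Enclosure volume in liters from millimeter dimensions."""
    return w_mm * h_mm * d_mm / 1_000_000

sfx = volume_liters(125, 63.5, 100)  # nominal SFX form factor dimensions
atx = volume_liters(150, 86, 140)    # a typical ATX (PS/2) power supply body
ratio = sfx / atx                    # SFX occupies well under half the volume
```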
Corsair SF Series 600W PSU Key Features:
• Small Form Factor (SFX) design
• Very quiet with Zero RPM Fan Mode
• 92mm cooling fan optimized for low noise
• 80 Plus Gold certified for high efficiency
• All-modular, flat ribbon-style cables
• 100% all Japanese made 105°C capacitors
• ATX12V v2.4 and EPS 2.92 compliant
• 6th Generation Intel Core processor Ready
• Safety Protections: OCP, OVP, UVP, SCP, OTP, and OPP
• 7-Year warranty
Introduction and Unboxing
A few years ago, Ryan reviewed the Couchmaster. It was a simple keyboard and mouse holder that suspended those parts above your lap, much like a computer chair, but at your couch. It was a cool concept, but at the time, living room PC gaming hadn't gained much popularity. While we don't all suddenly have living room PCs, the concept has gained some steam. We've seen recent launches of devices like the Corsair Bulldog - a rather beefy DIY living room PC meant to handle enough hardware to support living room gaming at up to 4K resolutions. This left a bit of a gap in Corsair's lineup. They make keyboards, mice, and now a living room PC, but where do you put those peripherals while sitting on your couch? Enter the Corsair Lapdog:
Above is the completed setup, with the keyboard and mouse plugged into the integrated 4-port USB 3.0 hub. Note that we did not need to plug in both keyboard connectors, as there is no need to use the USB pass-through feature of these keyboards - the mouse gets its own dedicated port. Owners of the older K70 RGBs might note that even though the early models did not come with a pass-through port, they still had a second connector to draw additional USB current. Fear not, as that second plug is also not needed here: the Lapdog uses a powered USB 3.0 hub that can provide sufficient current to light up those models over a single connector.
The cable that combines both power and USB connection from the Lapdog to the wall/PC is 16 feet long, which should provide plenty of reach for just about any TV + couch combination. It was a great idea by Corsair to combine the USB cable and power cable in this way, minimizing the mess and cable clutter that reaches across the floor. You get another 5 feet or so of length from the 12V power adapter as well, so installation should be a breeze.
Here we see the removable block-off plate. This comes pre-installed in case the user intends to use a K65 (short-body) keyboard. For those cases, the plate keeps the surface flush while covering the area normally used by the number pad. We are installing a K70 model and will be removing the plate for our configuration.
In case you're wondering how to remove the various cover plates and mouse pad in order to complete the installation, there is a mini hex driver built-in to the back of the foam lap pad.
Looking at the bottom of the Lapdog keyboard/mouse housing, we see six magnets that mate with corresponding spots on the bottom of the foam lap pad. The pad is made of cloth-covered polyurethane foam. It does not appear to be memory foam and is fairly rigid, which is desirable, as the keyboard and mouse need a reasonably firm surface when used on a lap.
On the right edge of the Lapdog we have rear-facing ports for power and the USB 3.0 connection back to the PC, and on the side we have another pair of USB 3.0 ports off of the internal powered hub. This lets you do other handy things like plugging in portable USB storage, or connecting and charging your phone.
With the build complete, I'd just like to comment on how seamlessly the Corsair keyboards blend with the rest of the Lapdog. The anodized brushed aluminum is a perfect match, though it does add some weight to the completed product. There is a slight lip at the bottom and right edges of the mouse pad which keeps it from sliding off when not in use.
After setup, I spent some quality time with the Lapdog. In gaming, it definitely works as advertised. With the device on your lap, WASD + mouse gaming is essentially where your hands naturally rest with the default positioning, making gaming just about the same as doing so on a desktop. The lap pad design helps to keep it from sliding around on your lap while in use, and the overall bulk and heft of the unit keep it firmly planted on your lap. It is not overly heavy, and I feel that going any lighter would negatively impact stability.
I also tried some actual writing on the Lapdog (I used it to write this article). While the typical gaming position is natural when centered, the left offset of the keyboard means that any serious typing requires you to scoot everything over to the right. The keyboard side is heavier than the mousing side, so there are no tipping issues when doing so. Even if you were to place the center of the Lapdog over your right leg, centering the keyboard on your lap, its weight will still keep the Lapdog planted on your left, so no issues there. Long periods of typing may put a strain on your back if you tend to lean forward off of the front edge of your couch, but the Lapdog is really meant to be a 'lay back' experience, and extended typing is certainly doable in that position with a bit of practice.
The Corsair Lapdog is available for $119.99, which I feel is a fair price given the high-grade components and solid build quality. If you're into PC gaming from the comfort of your couch, the Corsair Lapdog looks to be the best solution for you!
Fractal Design has shrunk their excellent Define S enclosure all the way down from ATX to mini-ITX, and the resulting Define Nano S offers plenty of room for a small form-factor build.
Large mini-ITX cases have become the trend in the past year or so, with the NZXT Manta the most recent (and possibly the most extreme) example. Fractal Design's Nano S isn't quite as large as the Manta, but it is cavernous inside thanks to a completely open internal layout. There are no optical drive bays, no partitions for PSU or storage, and really not much of anything inside the main compartment at all as Fractal Design has essentially miniaturized the Define S enclosure.
We have the windowed version of the Define Nano S for review here, which adds some interest to a very understated design. There is still something very sophisticated about this sort of industrial design, and I must admit to liking it quite a bit myself. Details such as the side vents for front panel air intake do add some interest, and that big window helps add some style as well (and builders could always add some increasingly ubiquitous RGB lighting inside!).
AMD gets aggressive
At its Computex 2016 press conference in Taipei today, AMD has announced the branding and pricing, along with basic specifications, for one of its upcoming Polaris GPUs shipping later this June. The Radeon RX 480, based on Polaris 10, will cost just $199 and will offer more than 5 TFLOPS of compute capability. This is an incredibly aggressive move obviously aimed at continuing to gain market share at NVIDIA's expense. Details of the product are listed below.
| | RX 480 | GTX 1070 | GTX 980 | GTX 970 | R9 Fury | R9 Nano | R9 390X | R9 390 |
|---|---|---|---|---|---|---|---|---|
| GPU | Polaris 10 | GP104 | GM204 | GM204 | Fiji Pro | Fiji XT | Hawaii XT | Grenada Pro |
| Rated Clock | ? | 1506 MHz | 1126 MHz | 1050 MHz | 1000 MHz | up to 1000 MHz | 1050 MHz | 1000 MHz |
| Memory Clock | 8000 MHz | 8000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz | 6000 MHz | 6000 MHz |
| Memory Interface | 256-bit | 256-bit | 256-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) | 512-bit | 512-bit |
| Memory Bandwidth | 256 GB/s | 256 GB/s | 224 GB/s | 196 GB/s | 512 GB/s | 512 GB/s | 384 GB/s | 384 GB/s |
| TDP | 150 watts | 150 watts | 165 watts | 145 watts | 275 watts | 175 watts | 275 watts | 230 watts |
| Peak Compute | >5.0 TFLOPS | 5.7 TFLOPS | 4.61 TFLOPS | 3.4 TFLOPS | 7.20 TFLOPS | 8.19 TFLOPS | 5.63 TFLOPS | 5.12 TFLOPS |
The RX 480 will ship with 36 CUs totaling 2304 stream processors based on the current GCN breakdown of 64 stream processors per CU. AMD didn't list clock speeds and instead is only telling us that the performance offered will exceed 5 TFLOPS of compute; how much is still a mystery and will likely change based on final clocks.
The memory system is powered by a 256-bit GDDR5 memory controller running at 8 Gbps and hitting 256 GB/s of throughput. This is the same resulting memory bandwidth as NVIDIA's new GeForce GTX 1070 graphics card.
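As a sanity check, both headline numbers fall out of simple arithmetic. Here is a quick sketch of that math; note that the implied core clock is our own back-of-the-envelope estimate, not an announced AMD figure:

```python
# Back-of-the-envelope math on AMD's published RX 480 figures.

# Memory bandwidth: 256-bit bus at an 8 Gbps effective GDDR5 data rate.
bus_width_bits = 256
data_rate_gbps = 8
bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps  # bytes per transfer x rate
print(bandwidth_gbs)  # 256.0 GB/s, matching the GTX 1070

# Peak compute: 36 CUs x 64 stream processors, 2 FLOPs (one FMA) per clock.
stream_processors = 36 * 64  # 2304
target_tflops = 5.0
# Clock needed to hit exactly 5 TFLOPS (estimate, not an AMD spec)
implied_clock_mhz = target_tflops * 1e12 / (2 * stream_processors) / 1e6
print(round(implied_clock_mhz))  # ~1085 MHz to reach 5 TFLOPS
```

In other words, a core clock somewhere north of roughly 1.08 GHz would be enough to clear the ">5 TFLOPS" claim with this shader count.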
AMD also tells us that the TDP of the card is 150 watts, again matching the GTX 1070, though without more accurate performance data it's hard to assume anything about the new architectural efficiency of the Polaris GPUs built on the 14nm Global Foundries process.
Obviously the card will support FreeSync and all of AMD's VR features, in addition to being DP 1.3 and 1.4 ready.
AMD stated that the RX 480 will launch on June 29th.
I know that many of you will want us to start guessing at what performance level the new RX 480 will actually fall, and trust me, I've been trying to figure it out. Based on TFLOPS rating and memory bandwidth alone, it seems possible that the RX 480 could compete with the GTX 1070. But if that were the case, I don't think even AMD is crazy enough to set the price this far below where the GTX 1070 launched, $379.
I would expect the configuration of the GCN architecture to remain mostly unchanged on Polaris, compared to Hawaii, for the same reasons that we saw NVIDIA leave Pascal's basic compute architecture unchanged compared to Maxwell. Moving to the new process node was the primary goal and adding to that with drastic shifts in compute design might overly complicate product development.
In the past, we have observed that AMD's GCN architecture tends to operate slightly less efficiently in terms of rated maximum compute capability versus realized gaming performance, at least compared to Maxwell and now Pascal. With that in mind, the >5 TFLOPS offered by the RX 480 likely lies somewhere between the Radeon R9 390 and R9 390X in realized gaming output. If that is the case, the Radeon RX 480 should have performance somewhere between the GeForce GTX 970 and the GeForce GTX 980.
AMD claims that the RX 480 at $199 is set to offer a "premium VR experience" that has previously been limited to $500 graphics cards (another reference to the original price of the GTX 980, perhaps...). AMD claims this should have a dramatic impact on increasing the TAM (total addressable market) for VR.
In a notable market survey, price was a leading barrier to adoption of VR. The $199 SEP for select Radeon™ RX Series GPUs is an integral part of AMD’s strategy to dramatically accelerate VR adoption and unleash the VR software ecosystem. AMD expects that its aggressive pricing will jumpstart the growth of the addressable market for PC VR and accelerate the rate at which VR headsets drop in price:
- More affordable VR-ready desktops and notebooks
- Making VR accessible to consumers in retail
- Unleashing VR developers on a larger audience
- Reducing the cost of entry to VR
AMD calls this strategy of starting with a mid-range product its "Water Drop" strategy, aimed "at releasing new graphics architectures in high volume segments first to support continued market share growth for Radeon GPUs."
So what do you guys think? Are you impressed with what Polaris is shaping up to be?
Bristol Ridge Takes on Mobile: E2 Through FX
It is no secret that AMD has faced an uphill battle since the release of Intel's original Core 2 processors. While they stayed mostly competitive through the Phenom II years, they hit major performance issues with the move to the Bulldozer architecture. On paper the idea of Clustered Multi-Threading sounded fantastic, but AMD was never able to get per-thread performance up to expectations. While their CPUs performed well in heavily multi-threaded applications, they were never seen in as positive a light as the competing Intel products.
The other part of the performance equation that has hammered AMD is the lack of a new process node that would allow it to compete more adequately with Intel. When AMD was at 32 nm PD-SOI, Intel had already introduced its 22 nm TriGate/FinFET process. AMD then transitioned to a 28 nm HKMG planar process that was more size-optimized than 32 nm, but did not drastically improve power or transistor switching performance.
So AMD had a double whammy on their hands: an underperforming architecture, and limited to no access to advanced process nodes that would actually improve their power and speed situation. They could not force their foundry partners to spend billions on a crash course in FinFET technology to bring it to market faster, so they had to iterate and innovate on their existing designs.
Bristol Ridge is the fruit of that particular labor. It is also the end point to the architecture that was introduced with Bulldozer way back in 2011.
It has been nearly two years since the release of the Haswell-E platform, which began with the launch of the Core i7-5960X processor. Back then, the introduction of an 8-core consumer processor was the primary selling point, along with the new X99 chipset and DDR4 memory support. At the time, I heralded the processor as “easily the fastest consumer processor we have ever had in our hands” and “nearly impossible to beat.” So what has changed over the course of 24 months?
Today Intel is launching Broadwell-E, the follow-up to Haswell-E, and things look very much the same as they did before. There are definitely a couple of changes worth noting and discussing, including the move to a 10-core processor option as well as Turbo Boost Max Technology 3.0, which is significantly more interesting than its marketing name implies. Intel is sticking with the X99 platform (good for users who might want to upgrade), though the cost of these new processors is more than slightly disappointing based on trends elsewhere in the market.
This review of the new Core i7-6950X 10-core Broadwell-E processor is going to be quick, and to the point: what changes, what is the performance, how does it overclock, and what will it cost you?
New Products for 2017
PC Perspective was invited to Austin, TX on May 11 and 12 to participate in ARM’s yearly tech day. Also invited were a handful of editors and analysts that cover the PC and mobile markets. Those folks were all pretty smart, so it is confusing as to why they invited me. Perhaps word of my unique talent of screenshotting PDFs into near-unreadable JPGs preceded me? Regardless of the reason, I was treated to two full days of in-depth discussion of the latest generation of CPU and GPU cores, 10nm test chips, and information on new licensing options.
Today ARM is announcing their next CPU core, the Cortex-A73. They are also unwrapping the latest Mali-G71 graphics technology, and other pieces such as the CCI-550 interconnect are being revealed as well. It is a busy and important day for ARM, especially in light of Intel seemingly abandoning the low-power mobile market.
ARM previously announced the Cortex-A72 in February 2015. Since then it has appeared in most flagship mobile devices through late 2015 and 2016. The market continues to evolve, and changing workloads and form factors have pushed ARM to continue developing and improving their CPU technology.
The Sofia Antipolis, France design group is behind the new A73; the previous several core architectures had been developed by the Cambridge group. As such, the new design differs quite dramatically from the previous A72. I was actually somewhat taken aback by the differences in design philosophy between the two groups and the changes from the A72 to the A73, but the generational jumps we have seen in the past now make a bit more sense to me.
The marketplace is constantly changing when it comes to workloads and form factors. More and more complex applications are being ported to mobile devices, including hot technologies like AR and VR. Other technologies include 3D/360 degree video, greater than 20 MP cameras, and 4K/8K displays and their video playback formats. Form factors on the other hand have continued to decrease in size, especially in overall height. We have relatively large screens on most premium devices, but the designers have continued to make these phones thinner and thinner throughout the years. This has put a lot of pressure on ARM and their partners to increase performance while keeping TDPs in check, and even reducing them so they more adequately fit in the TDP envelope of these extremely thin devices.
GP104 Strikes Again
It’s only been three weeks since NVIDIA unveiled the GeForce GTX 1080 and GTX 1070 graphics cards at a live streaming event in Austin, TX. But it feels like those two GPUs, one of which hasn't even been reviewed until today, have already drastically shifted the landscape of graphics, VR and PC gaming.
Half of the “new GPU” stories are told, with AMD due to follow up soon with Polaris, but it was clear to anyone watching the enthusiast segment with a hint of history that a line was drawn in the sand that day. There is THEN, and there is NOW. Today’s detailed review of the GeForce GTX 1070 completes NVIDIA’s first wave of NOW products, following closely behind the GeForce GTX 1080.
Interestingly, and in a move that is very uncharacteristic of NVIDIA, detailed specifications of the GeForce GTX 1070 were released on GeForce.com well before today’s reviews. With information on the CUDA core count, clock speeds, and memory bandwidth it was possible to get a solid sense of where the GTX 1070 performed; and I imagine that many of you already did the napkin math to figure that out. There is no more guessing though - reviews and testing are all done, and I think you'll find that the GTX 1070 is as exciting, if not more so, than the GTX 1080 due to the performance and pricing combination that it provides.
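For the curious, that napkin math is straightforward. Here is a quick sketch using NVIDIA's published GTX 1070 specifications (1920 CUDA cores, 1506 MHz base clock):

```python
# Peak FP32 throughput from the published GTX 1070 specs.
cuda_cores = 1920
base_clock_mhz = 1506
# Each CUDA core can retire one FMA (2 FLOPs) per clock.
peak_tflops = 2 * cuda_cores * base_clock_mhz * 1e6 / 1e12
print(round(peak_tflops, 2))  # 5.78 TFLOPS at base clock
```

Boost clocks push the realized figure higher still, which is why rated TFLOPS alone only gives you a rough sense of where a card will land.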
Let’s dive in.
We’ve probably all lost data at some point, and many of us have tried various drive recovery solutions over the years. Of these, Disk Drill has been available for Mac OS X users for some time, but the company now offers a Windows-compatible version, released last year. The best part? It’s totally free (and not in the ad-ridden, drowning-in-popups kind of way). So does it work? Using some of my own data as a guinea pig, I decided to find out.
The interface is clean and simple
To begin with I’ll list the features of Disk Drill as Clever Files describes it on their product page:
- Any Drive
- Our free data recovery software for Windows PC can recover data from virtually any storage device - including internal and external hard drives, USB flash drives, iPods, memory cards, and more.
- Recovery Options
- Disk Drill has several different recovery algorithms, including Undelete, Protected Data, Quick Scan, and Deep Scan. It will run through them one at a time until your lost data is found.
- Speed & Simplicity
- It’s as easy as one click: Disk Drill scans start with just the click of a button. There’s no complicated interface with too many options, just click, sit back and wait for your files to appear.
- All File Systems
- Different types of hard drives and memory cards have different ways of storing data. Whether your media has a FAT, exFAT or NTFS file system, is an HFS+ Mac drive, or uses Linux EXT2/3/4, Disk Drill can recover deleted files.
- Partition Recovery
- Sometimes your data is still on your drive, but a partition has been lost or reformatted. Disk Drill can help you find the “map” to your old partition and rebuild it, so your files can be recovered.
- Recovery Vault
- In addition to deleted files recovery, Disk Drill also protects your PC from future data loss. Recovery Vault keeps a record of all deleted files, making it much easier to recover them.
- Disk Drill For Windows - Free download here
The Recovery Process
(No IDE hard drives were harmed in the making of this photo)
My recovery process involved an old 320GB IDE drive, which I used for backups until power outage-related corruption (I didn’t own a UPS at the time, and the drive was in the middle of writing) left me without a valid partition. At one point I gave up and formatted the drive, thinking all of my original backup was lost. Thankfully I didn’t use it much after that, and it has been sitting on a shelf for years.
There are different methods that can be employed to recover lost or deleted data. One of these is to scan for file headers (or signatures), which contain information about what type of file it is (i.e. Microsoft Word, JPEG image, etc.). There are also advanced recovery methods that attempt to reconstruct an entire file system, preserving the folder structures and the original file names. Unfortunately, this is not a simple (or fast) process, and is generally left to the professionals.
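To make the header-scanning idea concrete, here is a minimal sketch of signature-based detection. This is illustrative only, not Disk Drill's actual implementation, though the magic numbers shown are the documented values for each format:

```python
# Map well-known file signatures ("magic numbers") to file types.
SIGNATURES = {
    b"\xff\xd8\xff": "JPEG image",
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"PK\x03\x04": "ZIP archive (also .docx/.xlsx)",
    b"%PDF": "PDF document",
}

def identify(raw: bytes) -> str:
    """Guess a file type from the first bytes of a raw data run."""
    for magic, name in SIGNATURES.items():
        if raw.startswith(magic):
            return name
    return "unknown"

print(identify(b"%PDF-1.7 ..."))  # PDF document
```

A real recovery tool scans every sector of the disk for runs beginning with one of these signatures and then "carves" out the bytes that follow; the hard part, as noted above, is rebuilding file names and folder structure, which signatures alone cannot provide.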