Introduction and Features
In this review we will take a detailed look at one of be quiet!’s top-of-the-line power supplies, the Dark Power Pro 11 750W. There are currently six power supplies in the Dark Power Pro 11 Series: 550W, 650W, 750W, 850W, 1000W and 1200W models. As you might expect, be quiet! continues to focus on delivering virtually silent power supplies, and they are one of the top-selling brands in Europe. All of the Dark Power Pro 11 models are certified for high efficiency (80 Plus Platinum) and come with modular cables.
be quiet! designed the Dark Power Pro 11 Series to provide high efficiency with minimal noise for systems that demand whisper-quiet operation without compromising on power quality. In addition to the Dark Power Pro 11 Series, be quiet! offers a full range of power supplies in ATX, SFX, and TFX form factors.
(Courtesy of be quiet!)
All of the Dark Power Pro 11 Series power supplies are semi-modular (all cables are modular except for the fixed 24-pin ATX cable). Along with 80 Plus Platinum certified high efficiency and quiet operation, the Dark Power Pro 11 750W PSU features an “overclocking” key to select between multi-rail and single-rail +12V outputs. The power supply uses be quiet!’s latest SilentWings3 135mm fan for virtually silent operation. The fan starts out very slow and remains slow and quiet through mid-power levels. The Dark Power Pro 11 power supplies also support connecting up to four case fans, whose speed is controlled by the PSU.
be quiet! Dark Power Pro 11 750W PSU Key Features:
• 750W continuous DC output (ATX12V v2.4, EPS 2.92 compliant)
• Virtually inaudible SilentWings3 135mm FDB cooling fan
• 80 PLUS Platinum certified efficiency (up to 94%)
• Premium 105°C rated parts enhance stability and reliability
• Powerful GPU support with seven PCI-E connectors
• User-friendly cable management reduces clutter and improves airflow
• NVIDIA SLI Ready and AMD CrossFire X certified
• ErP 2014 ready and meets Energy Star 6.0 guidelines
• Zero load design supports Intel’s Deep Power Down C6 & C7 modes
• Overclocking key selects between single or multiple +12V rails
• Active Power Factor correction (0.99) with Universal AC input
• Intelligent speed control for up to four case fans
• Safety Protections: OCP, OVP, UVP, SCP, OTP, and OPP
• 5-Year warranty
Here is what be quiet! has to say about the Dark Power Pro 11 750W PSU: "It is a fact of the modern world that high technology requires constant refinement and unending improvement – and that is even truer for those who would be leaders. Dark Power Pro power supplies are renowned as the world’s quietest and most efficient high-performance PSUs. The Dark Power Pro 11 750W model takes that a step further with a power conversion topology that delivers 80Plus Platinum performance, add to that an unparalleled array of enhancements that augment this unit’s compatibility, convenience of use, reliability, and safety, and the result is the most technologically-advanced power supply be quiet! has ever built.”
Introduction, Specifications, and Packaging
Western Digital launched their My Passport Wireless nearly two years ago. It was a nifty device that could back up or offload SD cards without the need for a laptop, making it ideal for photographers in the field. I came away from that review wondering just how much more you could pack into a device like that, and today I get to find out:
Not to be confused with the My Passport Pro (a TB-connected portable RAID storage device), the My Passport Wireless Pro is meant for on-the-go photographers who seek to back up their media while in the field but also lighten their backpacks. The concept is simple - have a small device capable of offloading (or backing up) SD cards without having to lug along your laptop and a portable hard drive to do so. Add in a wireless hotspot with WAN pass-through along with mobile apps to access the media and you can almost get away without bringing a laptop at all. Oh, and did I mention this one can also import photos and videos from your smartphone while charging it via USB?
- Capacity: 2TB and 3TB
- Battery: 6,400 mAh / 24 Wh
- UHS-I SD Card Reader
- USB 3.0 (upstream) port for data and charging
- USB 2.0 (downstream) port for importing and charging smartphones
- 802.11ac/n dual band (2.4 / 5 GHz) WiFi
- 2.4A Travel Charge Adapter (included)
- Plex Media Server capable
- Available 'My Cloud' mobile apps
No surprises here. A 2.4A power adapter is included this time around, which is a nice touch.
ARM Releases Egil Specs
The final product that ARM showed us at that Austin event is the latest video processing unit that will be integrated into their Mali GPUs. The Egil video processor is a next generation unit that will be appearing later this year with the latest products that utilize Mali GPUs up and down the spectrum. It is not tied to the latest G71 GPU, but rather can be used with a multitude of current Mali products.
Video is one of the biggest use cases for modern SOCs in mobile devices. People constantly stream and record video from their handhelds and tablets, and there are some real drawbacks in current video processor products from a variety of sources. We have seen the amazing increase in pixel density on phones and tablets, and the power draw to render video effectively on these products has gone up accordingly. We have also seen the introduction of new codecs that require a serious amount of processing capability to decode.
Egil is a scalable product that can go from one core to six. A single core can display video from a variety of codecs at 1080p and up to 80 fps, while the six-core solution can play back 4K video at 120 Hz. This assumes the Egil processor is produced on a 16nm FF process or smaller and running at 800 MHz. This provides a lot of flexibility for SOC manufacturers, allowing them to tailor their products for specific targets and markets.
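As a quick sanity check on those scaling figures (my arithmetic, not ARM's): a 4K frame is exactly four 1080p frames, so the per-core pixel throughput of the one-core and six-core configurations comes out identical, consistent with linear scaling.

```python
# Sanity-check the Egil scaling claim: 1 core -> 1080p at 80 fps,
# 6 cores -> 4K (3840x2160) at 120 Hz. Per-core pixel throughput
# should be constant if the design scales linearly.
def pixels_per_second(width, height, fps):
    return width * height * fps

one_core_rate = pixels_per_second(1920, 1080, 80)       # single-core config
per_core_rate = pixels_per_second(3840, 2160, 120) / 6  # six-core config, per core

print(one_core_rate)                    # 165888000 pixels/s
print(per_core_rate == one_core_rate)   # True -> linear scaling
```

Both configurations work out to roughly 166 megapixels per second per core, which suggests ARM's one-core and six-core data points describe the same per-core throughput at the stated 800 MHz clock.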
The cores themselves are fixed-function blocks with dedicated controllers and control logic. Previous video processors weighted decode much more heavily than encode. Now that we have more pervasive streaming from mobile devices, and cameras/optics that can support higher resolutions and bitrates, ARM has redesigned Egil to offer extensive encoding capabilities. Egil not only adds this capability, it can decode at 4K while simultaneously encoding four 1080p30 streams.
Egil will eventually find its way into other products such as TVs. These custom SOCs will be even more important as 4K playback and media become more common, along with potential new functionality that has yet to be implemented effectively on TVs. For the time being we will likely see this in mobile first, with the initial products hitting the market in the second half of 2016.
ARM is certainly on a roll this year, introducing new CPU, GPU, and now video processors. We will start to see these products being introduced through the end of this year and into the next. The company certainly has not been resting or letting potential competitors get the edge on them. Their products are always focused on consuming low amounts of power, but the potential performance looks to satisfy even power-hungry users in the mobile and appliance markets. Egil is another solid-looking member of the lineup that brings impressive performance and codec support for both decoding and encoding.
Introduction, Dynamic Write Acceleration, and Packaging
Micron joined Intel in announcing their joint-venture production of IMFT 3D NAND just a bit over a year ago. The industry was naturally excited, since IMFT has historically enabled relatively efficient production, ultimately resulting in reduced SSD prices over time. I suspect this time will be no different, as IMFT's 3D flash has been aimed at high die capacities since its inception, and the second generation should *double* per-die capacities while keeping speeds reasonable thanks to a quad-plane design implemented from the start of this endeavor. Of course, I'm getting ahead of myself a bit, as there are no consumer products sporting this flash just yet - well, not until today at least:
Marketed under Micron's consumer brand Crucial, the MX300 is the first consumer SSD sporting IMFT 3D NAND. Crucial is known for their budget-minded SSDs, and for the MX300 they went for the best cost/GB they could manage with what they had to work with. That meant putting this new 3D NAND into TLC mode. Now there are many TLC haters out there, but remember this is 3D NAND. Samsung's 850 EVO can exceed 500 MB/sec writes to TLC at its 500GB capacity point, and the MX300 is launching with *only* a 750GB capacity, so its TLC speed should be at least reasonable.
(the return of) Dynamic Write Acceleration
Dynamic Write Acceleration in action during a sequential fill - that last slowest part was my primary concern for the MX300.
TLC is not the only story here, because Crucial has included their Dynamic Write Acceleration (DWA) technology in the MX300. This is tech whereby the SSD controller can dynamically switch the programming mode of the flash pool, doing so at the block level. It appears to be a feature unique to IMFT flash, as every other 'hybrid' SSD we have tested had a static SLC cache area. DWA's ability to switch flash modes on-the-fly has always fascinated me on paper, but I just haven't been impressed by Micron's previous attempts to implement it. The M600 was a bit all over the place in its write consistency, and that SSD was flipping blocks between SLC and MLC. With the MX300 flipping between SLC and *TLC*, there was a possibility of far more noticeable slowdowns in cases where large writes were taking place and the controller was caught trying to scavenge space in the background.
New Latency Percentile vs. legacy IO Percentile, shown here highlighting a performance inconsistency seen in the Toshiba OCZ RD400. Note which line more closely represents the Latency Distribution (gray) also on this plot.
Introduction and Features
SilverStone is a veteran in the PC power supply industry and they continue to offer a full line of enclosures, power supplies, fans, coolers, and PC accessories. They have raised the bar in their Strider power supply series, which now includes three 80 Plus Titanium certified units, the ST60F-TI, ST70F-TI, and ST80F-TI. These three units are billed as being “the world’s smallest 80 Plus Titanium, full-modular ATX power supplies”, with a chassis that is only 150mm (5.9”) deep.
The 80 Plus Titanium efficiency standards were introduced in 2012 and are the most demanding specifications to date. In addition to raising the efficiency requirements at 20%, 50% and 100% loads, the Titanium standard adds a new requirement at 10% load. This ensures that a Titanium certified power supply will operate with at least 90% efficiency over the full range of loads and deliver up to 94% efficiency at a 50% load.
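Those thresholds can be expressed as a simple pass/fail check. The sketch below uses the commonly published 115V Titanium floors (90/92/94/90% at 10/20/50/100% load); the example wattage readings are purely illustrative, not SilverStone's numbers.

```python
# 80 Plus Titanium efficiency floors at 115V input, keyed by load
# fraction. Efficiency = DC watts out / AC watts in.
TITANIUM_THRESHOLDS = {0.10: 0.90, 0.20: 0.92, 0.50: 0.94, 1.00: 0.90}

def meets_titanium(measurements):
    """measurements: {load_fraction: (dc_watts_out, ac_watts_in)}"""
    for load, floor in TITANIUM_THRESHOLDS.items():
        dc_out, ac_in = measurements[load]
        if dc_out / ac_in < floor:
            return False
    return True

# Hypothetical readings for a 600W unit (DC out, AC in):
readings = {0.10: (60, 65.9), 0.20: (120, 129.6),
            0.50: (300, 317.8), 1.00: (600, 660.0)}
print(meets_titanium(readings))  # True
```

Note how the 50% load point is the tightest: 300W out on 317.8W in is barely 94.4% efficient, which is why the 50% figure is the one manufacturers advertise.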
I’m also happy to report that SilverStone is now providing a 5-year warranty on both the Strider Titanium and Strider Platinum series power supplies, up from their standard 3-year warranty.
SilverStone ST60F-TI Power Supply Key Features:
• 80 Plus Titanium certified for super-high efficiency
• Compact design with a depth of 150mm for easy integration
• All-modular, flat ribbon-style cables
• 100% all Japanese made capacitors
• Strict ±3% voltage regulation and low AC ripple and noise
• Powerful single +12V rail
• Four PCI-E connectors for multiple GPU support
• Safety Protections: OCP, OTP, OPP, UVP, OVP, and SCP
• Quiet 120mm fan with Fluid Dynamic bearing
• 140mm dust filter
• 5-Year warranty
Introduction and Specifications
The Corsair VOID Surround Gaming Headset is a hybrid product of sorts, combining a traditional stereo gaming headset with a Dolby Headphone-enabled USB dongle to unlock virtual 7.1 surround sound. We’ll have a look, and listen, in this review.
The market for gaming headsets being what it is, one of the most important factors with each new product inevitably becomes price. There are different tiers of products out there from many companies, and Corsair themselves offer a few different choices at various price points. With the VOID Surround we have a pretty affordable option at $79.99, which is about half the price of the previous wired pair of gaming headphones I looked at, Logitech’s G633.
One of the advantages Corsair offers with this VOID headset is a pair of 50mm drivers, which theoretically offer better bass than 40mm options (though of course size alone is not a guarantee). The 7.1 surround effect is via Dolby Headphone, which is a virtual effect that is commonly found with single-driver options such as this. If the effect is convincing, a headset like the VOID can save the user a lot of money over the pricey discrete multi-driver options on the market.
Introduction and Features
Corsair continues to expand their extensive power supply lineup with the addition of two new small form factor (SFX) units, the SF450 and SF600. The SF Series power supplies are fully modular and optimized for quiet operation and high efficiency. Both power supplies feature Zero RPM Fan Mode, which means the fan doesn’t start to spin until the power supply is under a moderate to heavy load. The SF450 and SF600 are 80 Plus Gold certified for high efficiency and come with a 7-year warranty.
While the SF Series is designed for use in small form factor enclosures, Corsair’s SF Series power supplies can also be used in standard ATX cases to save room via the optional SFX-to-ATX adapter bracket. As you can see in the photo below, the SF Series power supply is much smaller in all three dimensions than a standard ATX power supply. We will be taking a detailed look at the new SF600 power supply in this review.
SF Series 600W vs. ATX Series 650W
Corsair SF Series 600W PSU Key Features:
• Small Form Factor (SFX) design
• Very quiet with Zero RPM Fan Mode
• 92mm cooling fan optimized for low noise
• 80 Plus Gold certified for high efficiency
• All-modular, flat ribbon-style cables
• 100% all Japanese made 105°C capacitors
• ATX12V v2.4 and EPS 2.92 compliant
• 6th Generation Intel Core processor Ready
• Safety Protections: OCP, OVP, UVP, SCP, OTP, and OPP
• 7-Year warranty
Introduction and Unboxing
A few years ago, Ryan reviewed the Couchmaster. It was a simple keyboard and mouse holder that suspended those parts above your lap, much like a computer chair, but at your couch. It was a cool concept, but at the time, living room PC gaming hadn't gained much popularity. While we don't all suddenly have living room PCs, the concept has gained some steam. We've seen recent launches of devices like the Corsair Bulldog - a rather beefy DIY living room PC meant to handle enough hardware to support living room gaming at up to 4K resolutions. This left a bit of a gap in Corsair's lineup. They make keyboards, mice, and now a living room PC, but where do you put those peripherals while sitting on your couch? Enter the Corsair Lapdog:
Above is the setup process staged with the keyboard and mouse plugged into the integrated 4-port USB 3.0 hub. Note that we did not need to plug in both keyboard connectors: the mouse gets its own dedicated port, so there is no need to use the USB pass-through feature of these keyboards. Owners of the older K70 RGBs might note that even though the early models did not come with a pass-through port, they still had an additional connector for extra USB current. Fear not, as the second plug of those keyboards is also not needed here, since the Lapdog uses a powered USB 3.0 hub that can provide sufficient current to light up those models over that single connector.
The cable that combines both power and USB connection from the Lapdog to the wall/PC is 16 feet long, which should provide plenty of space to stretch between just about any TV + couch combination. It was a great idea by Corsair to combine the USB cable and power cable in this way, minimizing the mess and cable clutter that reaches across the floor. You get another 5 feet or so of length for the 12V power adapter as well, so install should be a breeze for users.
Here we see the removable block-off plate. This comes pre-installed in case the user intends to use a K65 (short-body) keyboard. For those cases, the plate keeps the surface flush while covering the area normally used by the number pad. We are installing a K70 model and will be removing the plate for our configuration.
In case you're wondering how to remove the various cover plates and mouse pad in order to complete the installation, there is a mini hex driver built-in to the back of the foam lap pad.
Looking at the bottom of the Lapdog keyboard/mouse housing, we see six magnets that mate with the appropriate places on the bottom of the foam lap pad. The pad is made of cloth covered polyurethane foam. It does not appear to be memory foam and is fairly rigid, which is desirable as we need to keep the keyboard and mouse on a reasonably firm surface when using it on a lap.
On the right edge of the Lapdog we have rear ports for power and USB 3.0 back to the PC, and on the side, we have another pair of USB 3.0 ports off of the internal powered hub. This lets you do other cool stuff like plugging in portable USB storage or even connecting and charging your phone.
With the build complete, I'd just like to comment on how seamlessly the Corsair keyboards blend with the rest of the Lapdog. The anodized brushed aluminum is a perfect match, though it does add some weight to the completed product. There is a slight lip at the bottom and right edges of the mouse pad which keeps it from sliding off when not in use.
After setup, I spent some quality time with the Lapdog. In gaming, it definitely works as advertised. With the device on your lap, WASD + mouse gaming is essentially where your hands naturally rest with the default positioning, making gaming just about the same as doing so on a desktop. The lap pad design helps to keep it from sliding around on your lap while in use, and the overall bulk and heft of the unit keep it firmly planted on your lap. It is not overly heavy, and I feel that going any lighter would negatively impact stability.
I also tried some actual writing on the Lapdog (I used it to write this article). While the typical gaming position is natural when centered, the left offset of the keyboard means that any serious typing requires you to scoot everything over to the right. The keyboard side is heavier than the mousing side, so there are no tipping issues when doing so. Even if you were to place the center of the Lapdog over your right leg, centering the keyboard on your lap, its weight will still keep the Lapdog planted on your left, so no issues there. Long periods of typing may put a strain on your back if you tend to lean forward off of the front edge of your couch, but the Lapdog is really meant to be a 'lay back' experience, and extended typing is certainly doable in that position with a bit of practice.
The Corsair Lapdog is available for $119.99, which I feel is a fair price given the high-grade components and solid build quality. If you're into PC gaming from the comfort of your couch, the Corsair Lapdog looks to be the best solution for you!
Fractal Design has reduced their excellent Define S enclosure all the way down from ATX to mini-ITX, and the Define Nano S offers plenty of room for a small form-factor case.
Large mini-ITX cases have become the trend in the past year or so, with the NZXT Manta the most recent (and possibly the most extreme) example. Fractal Design's Nano S isn't quite as large as the Manta, but it is cavernous inside thanks to a completely open internal layout. There are no optical drive bays, no partitions for PSU or storage, and really not much of anything inside the main compartment at all as Fractal Design has essentially miniaturized the Define S enclosure.
We have the windowed version of the Define Nano S for review here, which adds some interest to a very understated design. There is still something very sophisticated about this sort of industrial design, and I must admit to liking it quite a bit myself. Details such as the side vents for front panel air intake break up the clean lines, and that big window adds some style as well (and builders could always add some increasingly ubiquitous RGB lighting inside!).
AMD gets aggressive
At its Computex 2016 press conference in Taipei today, AMD has announced the branding and pricing, along with basic specifications, for one of its upcoming Polaris GPUs shipping later this June. The Radeon RX 480, based on Polaris 10, will cost just $199 and will offer more than 5 TFLOPS of compute capability. This is an incredibly aggressive move obviously aimed at continuing to gain market share at NVIDIA's expense. Details of the product are listed below.
| | RX 480 | GTX 1070 | GTX 980 | GTX 970 | R9 Fury | R9 Nano | R9 390X | R9 390 |
|---|---|---|---|---|---|---|---|---|
| GPU | Polaris 10 | GP104 | GM204 | GM204 | Fiji Pro | Fiji XT | Hawaii XT | Grenada Pro |
| Rated Clock | ? | 1506 MHz | 1126 MHz | 1050 MHz | 1000 MHz | up to 1000 MHz | 1050 MHz | 1000 MHz |
| Memory Clock | 8000 MHz | 8000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz | 6000 MHz | 6000 MHz |
| Memory Interface | 256-bit | 256-bit | 256-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) | 512-bit | 512-bit |
| Memory Bandwidth | 256 GB/s | 256 GB/s | 224 GB/s | 196 GB/s | 512 GB/s | 512 GB/s | 384 GB/s | 384 GB/s |
| TDP | 150 watts | 150 watts | 165 watts | 145 watts | 275 watts | 175 watts | 275 watts | 230 watts |
| Peak Compute | > 5.0 TFLOPS | 5.7 TFLOPS | 4.61 TFLOPS | 3.4 TFLOPS | 7.20 TFLOPS | 8.19 TFLOPS | 5.63 TFLOPS | 5.12 TFLOPS |
The RX 480 will ship with 36 CUs totaling 2304 stream processors based on the current GCN breakdown of 64 stream processors per CU. AMD didn't list clock speeds and instead is only telling us that the performance offered will exceed 5 TFLOPS of compute; how much is still a mystery and will likely change based on final clocks.
The memory system is powered by a 256-bit GDDR5 memory controller running at 8 Gbps and hitting 256 GB/s of throughput. This is the same resulting memory bandwidth as NVIDIA's new GeForce GTX 1070 graphics card.
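Some napkin math on those two specs (my arithmetic, not an AMD disclosure): GCN executes two FLOPs per stream processor per clock via fused multiply-add, and GDDR5 bandwidth is simply bus width times data rate. That lets us back out the clock speed implied by AMD's 5 TFLOPS figure.

```python
# Peak compute for a GCN part: stream processors x 2 FLOPs/clock x clock.
def peak_tflops(stream_processors, clock_hz):
    return stream_processors * 2 * clock_hz / 1e12

# GDDR5 bandwidth: bus width (bits) / 8 bytes x data rate (Gbps) = GB/s.
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

# Clock the RX 480's 2304 stream processors would need to hit 5 TFLOPS:
required_clock_mhz = 5e12 / (2304 * 2) / 1e6
print(round(required_clock_mhz))              # 1085 (MHz)
print(bandwidth_gb_s(256, 8))                 # 256.0 GB/s, same as GTX 1070
print(round(peak_tflops(2304, 1.085e9), 2))   # 5.0 TFLOPS at that clock
```

So "more than 5 TFLOPS" implies a sustained clock of at least roughly 1085 MHz, a plausible figure for a 14nm part given the clocks in the table above.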
AMD also tells us that the TDP of the card is 150 watts, again matching the GTX 1070, though without more accurate performance data it's hard to assume anything about the new architectural efficiency of the Polaris GPUs built on the 14nm Global Foundries process.
Obviously the card will support FreeSync and all of AMD's VR features, in addition to being DP 1.3 and 1.4 ready.
AMD stated that the RX 480 will launch on June 29th.
I know that many of you will want us to start guessing at what performance level the new RX 480 will actually fall, and trust me, I've been trying to figure it out. Based on TFLOPS rating and memory bandwidth alone, it seems possible that the RX 480 could compete with the GTX 1070. But if that were the case, I don't think even AMD is crazy enough to set the price this far below where the GTX 1070 launched, $379.
I would expect the configuration of the GCN architecture to remain mostly unchanged on Polaris, compared to Hawaii, for the same reasons that we saw NVIDIA leave Pascal's basic compute architecture unchanged compared to Maxwell. Moving to the new process node was the primary goal and adding to that with drastic shifts in compute design might overly complicate product development.
In the past, we have observed that AMD's GCN architecture tends to operate slightly less efficiently in terms of rated maximum compute capability versus realized gaming performance, at least compared to Maxwell and now Pascal. With that in mind, the >5 TFLOPS offered by the RX 480 likely lies somewhere between the Radeon R9 390 and R9 390X in realized gaming output. If that is the case, the Radeon RX 480 should have performance somewhere between the GeForce GTX 970 and the GeForce GTX 980.
AMD claims that the RX 480 at $199 is set to offer a "premium VR experience" that has previously been limited to $500 graphics cards (another reference to the original price of the GTX 980 perhaps...). The company says this should have a dramatic impact on increasing the TAM (total addressable market) for VR.
In a notable market survey, price was a leading barrier to adoption of VR. The $199 SEP for select Radeon™ RX Series GPUs is an integral part of AMD’s strategy to dramatically accelerate VR adoption and unleash the VR software ecosystem. AMD expects that its aggressive pricing will jumpstart the growth of the addressable market for PC VR and accelerate the rate at which VR headsets drop in price:
- More affordable VR-ready desktops and notebooks
- Making VR accessible to consumers in retail
- Unleashing VR developers on a larger audience
- Reducing the cost of entry to VR
AMD calls this strategy of starting with a mid-range product its "Water Drop" strategy, with the goal of "releasing new graphics architectures in high volume segments first to support continued market share growth for Radeon GPUs."
So what do you guys think? Are you impressed with what Polaris now looks like it's going to be?
Bristol Ridge Takes on Mobile: E2 Through FX
It is no secret that AMD has faced an uphill battle since the release of the original Core 2 processors from Intel. While they stayed mostly competitive through the Phenom II years, they hit some major performance issues when moving to the Bulldozer architecture. On paper the idea of Clustered Multi-Threading sounded fantastic, but AMD was never able to get per-thread performance up to expectations. While their CPUs performed well in heavily multi-threaded applications, they were never seen in as positive a light as the competing Intel products.
The other part of the performance equation that has hammered AMD is the lack of a new process node that would allow it to compete more adequately with Intel. When AMD was at 32nm PD-SOI, Intel had already introduced its 22nm TriGate/FinFET process. AMD then transitioned to a 28nm HKMG planar process that was more size-optimized than 32nm, but did not drastically improve power draw or transistor switching performance.
So AMD had a double whammy on their hands: an underperforming architecture and limited to no access to advanced process nodes that would actually improve their power and speed situation. They could not force their foundry partners to spend billions on a crash course in FinFET technology to bring it to market faster, so they had to iterate and innovate on their designs.
Bristol Ridge is the fruit of that particular labor. It is also the end point to the architecture that was introduced with Bulldozer way back in 2011.
It has been nearly two years since the release of the Haswell-E platform, which began with the launch of the Core i7-5960X processor. Back then, the introduction of an 8-core consumer processor was the primary selling point; along with the new X99 chipset and DDR4 memory support. At the time, I heralded the processor as “easily the fastest consumer processor we have ever had in our hands” and “nearly impossible to beat.” So what has changed over the course of 24 months?
Today Intel is launching Broadwell-E, the follow up to Haswell-E, and things look very much the same as they did before. There are definitely a couple of changes worth noting and discussing, including the move to a 10-core processor option as well as Turbo Boost Max Technology 3.0, which is significantly more interesting than its marketing name implies. Intel is sticking with the X99 platform (good for users that might want to upgrade), though the cost of these new processors is more than slightly disappointing based on trends elsewhere in the market.
This review of the new Core i7-6950X 10-core Broadwell-E processor is going to be quick, and to the point: what changes, what is the performance, how does it overclock, and what will it cost you?
New Products for 2017
PC Perspective was invited to Austin, TX on May 11 and 12 to participate in ARM’s yearly tech day. Also invited were a handful of editors and analysts that cover the PC and mobile markets. Those folks were all pretty smart, so it is confusing as to why they invited me. Perhaps word of my unique talent of screenshotting PDFs into near-unreadable JPGs preceded me? Regardless of the reason, I was treated to two full days of in-depth discussion of the latest generation of CPU and GPU cores, 10nm test chips, and information on new licensing options.
Today ARM is announcing their next CPU core with the introduction of the Cortex-A73. They are also unwrapping the latest Mali-G71 graphics technology, and other technologies such as the CCI-550 interconnect are being revealed as well. It is a busy and important day for ARM, especially in light of Intel seemingly abandoning the mobile SOC market.
ARM previously announced the Cortex-A72 in February, 2015. Since that time it has been seen in most flagship mobile devices in late 2015 and throughout 2016. The market continues to evolve, and as such the workloads and form factors have pushed ARM to continue to develop and improve their CPU technology.
The Sophia Antipolis, France design group is behind the new A73; the previous several core architectures had been developed by the Cambridge group. As such, the new design differs quite dramatically from the previous A72. I was somewhat taken aback by the differences in design philosophy between the two groups and the changes from the A72 to the A73, but they make the generational jumps we have seen in the past a bit easier to understand.
The marketplace is constantly changing when it comes to workloads and form factors. More and more complex applications are being ported to mobile devices, including hot technologies like AR and VR. Other technologies include 3D/360 degree video, greater than 20 MP cameras, and 4K/8K displays and their video playback formats. Form factors on the other hand have continued to decrease in size, especially in overall height. We have relatively large screens on most premium devices, but the designers have continued to make these phones thinner and thinner throughout the years. This has put a lot of pressure on ARM and their partners to increase performance while keeping TDPs in check, and even reducing them so they more adequately fit in the TDP envelope of these extremely thin devices.
GP104 Strikes Again
It’s only been three weeks since NVIDIA unveiled the GeForce GTX 1080 and GTX 1070 graphics cards at a live streaming event in Austin, TX. But it feels like those two GPUs, one of which hasn't even been reviewed until today, have already drastically shifted the landscape of graphics, VR and PC gaming.
Half of the “new GPU” stories are told, with AMD due to follow up soon with Polaris, but it was clear to anyone watching the enthusiast segment with a hint of history that a line was drawn in the sand that day. There is THEN, and there is NOW. Today’s detailed review of the GeForce GTX 1070 completes NVIDIA’s first wave of NOW products, following closely behind the GeForce GTX 1080.
Interestingly, and in a move that is very uncharacteristic of NVIDIA, detailed specifications of the GeForce GTX 1070 were released on GeForce.com well before today’s reviews. With information on the CUDA core count, clock speeds, and memory bandwidth it was possible to get a solid sense of where the GTX 1070 performed; and I imagine that many of you already did the napkin math to figure that out. There is no more guessing though - reviews and testing are all done, and I think you'll find that the GTX 1070 is as exciting, if not more so, than the GTX 1080 due to the performance and pricing combination that it provides.
Let’s dive in.
We’ve probably all lost data at some point, and many of us have tried various drive recovery solutions over the years. Of these, Disk Drill has been available to Mac OS X users for some time, and the company released a Windows-compatible version last year. The best part? It’s totally free (and not in the ad-ridden, drowning-in-popups kind of way). So does it work? Using some of my own data as a guinea pig, I decided to find out.
The interface is clean and simple
To begin with, I’ll list the features of Disk Drill as CleverFiles describes them on their product page:
- Any Drive
- Our free data recovery software for Windows PC can recover data from virtually any storage device - including internal and external hard drives, USB flash drives, iPods, memory cards, and more.
- Recovery Options
- Disk Drill has several different recovery algorithms, including Undelete, Protected Data, Quick Scan, and Deep Scan. It will run through them one at a time until your lost data is found.
- Speed & Simplicity
- It’s as easy as one click: Disk Drill scans start with just the click of a button. There’s no complicated interface with too many options, just click, sit back and wait for your files to appear.
- All File Systems
- Different types of hard drives and memory cards have different ways of storing data. Whether your media uses a FAT, exFAT or NTFS file system, is an HFS+ Mac drive, or a Linux EXT2/3/4 volume, Disk Drill can recover deleted files.
- Partition Recovery
- Sometimes your data is still on your drive, but a partition has been lost or reformatted. Disk Drill can help you find the “map” to your old partition and rebuild it, so your files can be recovered.
- Recovery Vault
- In addition to deleted files recovery, Disk Drill also protects your PC from future data loss. Recovery Vault keeps a record of all deleted files, making it much easier to recover them.
- Disk Drill For Windows - Free download here
The Recovery Process
(No IDE hard drives were harmed in the making of this photo)
My recovery process involved an old 320GB IDE drive, which I used for backups until power outage-related corruption left me without a valid partition (I didn’t own a UPS at the time, and the drive was mid-write when the power dropped). At one point I gave up and formatted the drive, thinking all of my original backup was lost. Thankfully I didn’t use it much after that, and it has been sitting on a shelf for years.
There are different methods that can be employed to recover lost or deleted data. One of these is to scan for the file headers (or signatures), which contain information about what type of file it is (i.e. Microsoft Word, JPEG image, etc.). There are more advanced recovery methods that attempt to reconstruct an entire file system, preserving the folder structure and the original file names. Unfortunately, that is not a simple (or fast) process, and it is generally left to the professionals.
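A toy illustration of the header-scanning approach (not Disk Drill's actual implementation, which is closed source): walk the raw bytes of a disk image looking for well-known magic numbers. The signatures below are real, but the scanner is deliberately simplified - a real carver would also estimate file lengths, handle fragmentation, and read the device in chunks rather than all at once.

```python
# Well-known file signatures ("magic numbers") and the types they indicate
SIGNATURES = {
    b"\xFF\xD8\xFF": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"PK\x03\x04": "zip/docx",
    b"%PDF": "pdf",
}

def scan_for_headers(raw: bytes):
    """Return (offset, type) for every signature found in a byte buffer."""
    hits = []
    for magic, ftype in SIGNATURES.items():
        start = 0
        while (pos := raw.find(magic, start)) != -1:
            hits.append((pos, ftype))
            start = pos + 1
    return sorted(hits)

# Example: a fake "disk image" with a PDF header buried at offset 512
image = bytes(512) + b"%PDF-1.4 ..." + bytes(100)
print(scan_for_headers(image))
```

This is essentially what a "Deep Scan" does at sector granularity, which is also why it recovers file contents but not the original names or folder layout.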
First, Some Background
NVIDIA's Rumored GP102
When GP100 was announced, Josh and I discussed internally how it would fit into the gaming industry. Recently, an article on WCCFTech cited anonymous sources, which should always be taken with a grain of salt, claiming that NVIDIA was planning a second chip, GP102, positioned between GP104 and GP100. As I was writing this editorial, relating the rumor to our own speculation about the physics of Pascal, VideoCardz reported being contacted by the developers of AIDA64, seemingly on the record, also citing a GP102 design.
I will retell chunks of the rumor, but also add my opinion to it.
In the last few generations, each architecture had a flagship chip that was released in both gaming and professional SKUs. Neither audience had access to a chip that was larger than the other's largest of that generation. Clock rates and disabled portions varied by specific product, with gaming usually getting the more aggressive performance for slightly better benchmarks. Fermi had GF100/GF110, Kepler had GK110/GK210, and Maxwell had GM200. Each of these was available in Tesla, Quadro, and GeForce cards, including Titans.
Maxwell was interesting, though. NVIDIA was unable to leave 28nm, which Kepler launched on, so they created a second architecture at that node. To increase performance without access to more feature density, you need to make your designs bigger, more optimized, or simpler. GM200 was giant and optimized, but, to reach the performance levels it achieved, it also needed to be simpler. Something had to go, and double-precision (FP64) performance was the big omission. NVIDIA was upfront about it at the Titan X launch, and told their GPU compute customers to keep purchasing Kepler if they valued FP64.
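To put that omission in numbers, here is a rough sketch using the published Titan X (GM200) and Titan Black (GK110) specifications; the 1/32 and 1/3 FP64 rates are the widely reported ratios for those chips, and the helper function is mine:

```python
def peak_tflops(cores, clock_mhz, fp64_ratio):
    """Rough peak throughput: 2 FLOPs per core per clock (FMA) for FP32,
    scaled down by the chip's FP64 execution rate for double precision."""
    fp32 = cores * clock_mhz * 1e6 * 2 / 1e12
    return fp32, fp32 * fp64_ratio

# GM200 (Titan X): 3072 cores, ~1000 MHz base, FP64 at 1/32 rate
gm200_fp32, gm200_fp64 = peak_tflops(3072, 1000, 1 / 32)
# GK110 (Titan Black): 2880 cores, ~889 MHz, FP64 at 1/3 rate
gk110_fp32, gk110_fp64 = peak_tflops(2880, 889, 1 / 3)

print(f"GM200: {gm200_fp32:.1f} FP32 / {gm200_fp64:.2f} FP64 TFLOPS")
print(f"GK110: {gk110_fp32:.1f} FP32 / {gk110_fp64:.2f} FP64 TFLOPS")
```

Despite being the newer, larger chip, GM200 delivers only a small fraction of GK110's double-precision throughput, which is exactly why NVIDIA steered FP64 customers back to Kepler.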
Introduction, Specifications and Packaging
The OCZ RevoDrive has been around for a good long while. We looked at the first ever RevoDrive back in 2010. It was a bold move for the time, as PCIe SSDs were both rare and very expensive. OCZ's innovation was a new VCA RAID controller that kept latencies low and scaled properly with increased queue depth. OCZ got a lot of use out of this formula, later expanding to the RevoDrive 3 x2, which ran four parallel SSDs, and all the way to the enterprise Z-Drive R4, which pushed that out to eight RAIDed SSDs.
OCZ's RevoDrive lineup circa 2011.
The latter was a monster of an SSD in both physical size and storage capacity, and its performance was also impressive given that it launched five years ago. After being acquired by Toshiba, OCZ re-spun the old VCA-driven design one last time as the RevoDrive 350, but it was the same old formula of high-latency SandForce controllers (updated with in-house Toshiba flash). The RevoDrive line needed to ditch that dated tech and move into the world of NVMe, and today it has!
Here is the new 'Toshiba OCZ RD400', branded as such under the recent rebadging that took place on OCZ's site (the Trion 150 and Vector 180 have likewise been relabeled as the TR150 and VT180). The RD400 brings some significant changes over the previous iterations of the line. The big one is that it is now a lean M.2 part, available with an optional adapter card for those without a spare M.2 slot.
Introduction and Technical Specifications
Courtesy of GIGABYTE
The X99P-SLI motherboard is the newest member of GIGABYTE's Ultra Durable board line, updated with the newest technological innovations, including USB 3.1 and Thunderbolt 3. The board supports all Intel LGA2011-3 processors paired with DDR4 memory in up to a quad-channel configuration. GIGABYTE priced the X99P-SLI at an approachable MSRP of $249.99.
Courtesy of GIGABYTE
Courtesy of GIGABYTE
Courtesy of GIGABYTE
Like all members of the Ultra Durable board line, the X99P-SLI is over-engineered to take whatever abuse is thrown its way, featuring a 6+4-phase digital power system with International Rectifier Gen 4 digital PWM controllers and Gen 3 PowIRstage ICs, server-level chokes, and long-life Durable Black solid capacitors. The board also features GIGABYTE's next-generation PCIe x16 slots with PCIe Metal Shielding - steel-reinforced overlays that provide extra vertical support for graphics cards with large, heavy coolers.
10nm Sooner Than Expected?
It seems like only yesterday that we had the first major GPU released on 16nm FF+, and now we are talking about ARM receiving their first 10nm FF test chips! Well, in fact it was yesterday that NVIDIA formally released performance figures for the latest GeForce GTX 1080, which is based on TSMC’s 16nm FF+ process technology. TSMC is currently going full bore on their latest process node and producing the fastest graphics chip around. It has taken the foundry industry as a whole a lot longer to develop FinFET technology than expected, but now that they seemingly have that piece of the puzzle mastered, they are moving to a new process node at an accelerated rate.
TSMC’s 10nm FF is not yet well understood by press and analysts, but we gather that it is more of a marketing term than a true drop to 10nm features. Intel has yet to move past 14nm and does not expect 10nm production until well into next year, while TSMC is promising their version in the second half of 2016. We cannot assume that TSMC’s version will match what Intel will be doing in terms of geometries and electrical characteristics, but we do know that it is a step past TSMC’s 16nm FF products. Lithography will likely get a boost with triple-patterning exposure, and my guess is that the back end will also move away from the “20nm metal” stages that we see with 16nm. All in all, it should be an improvement over 16nm, but time will tell if it can match the performance and density of the competing 10nm lines from Intel, Samsung, and GLOBALFOUNDRIES.
ARM has a history of porting their architectures to new process nodes, but they are being a bit more aggressive here than we have seen in the past. It used to be that ARM would announce a new core or technology and it would take up to two years for it to reach the market. Now we are seeing technology announcements and actual products hitting the scene about nine months later. With the mobile market continuing to grow, we expect products to come to market even more quickly.
The company designed a simplified test chip for production on the aforementioned 10nm FF process. The design was taped out in December 2015 and shipped to TSMC for mask production and wafer starts, and ARM expects the finished wafers to arrive this month.
A new architecture with GP104
Table of Contents
- Asynchronous compute discussion
- Is only 2-Way SLI supported?
- Overclocking over 2.0 GHz
- Dissecting the Founders Edition
- Benchmarks begin
- VR Testing
- Impressive power efficiency
- Performance per dollar discussion
- Ansel screenshot tool
The summer of change for GPUs has begun with today’s review of the GeForce GTX 1080. NVIDIA has endured leaks, speculation and criticism for months now, with enthusiasts calling out NVIDIA for not including HBM technology or for not having asynchronous compute capability. Last week NVIDIA’s CEO Jen-Hsun Huang went on stage and officially announced the GTX 1080 and GTX 1070 graphics cards with a healthy amount of information about their supposed performance and price points. Issues around cost and what exactly a Founders Edition is aside, the event was well received and clearly showed a performance and efficiency improvement that we were not expecting.
The question is, does the actual product live up to the hype? Can NVIDIA overcome some users’ negative view of the Founders Edition and craft a product message that gives the wide range of PC gamers looking for an upgrade path an option they’ll actually take?
I’ll let you know through the course of this review, but what I can tell you definitively is that the GeForce GTX 1080 clearly sits alone at the top of the GPU world.