Introduction and Features
SilverStone is a veteran of the PC power supply industry, and they continue to offer a full line of enclosures, power supplies, fans, coolers, and PC accessories. They have raised the bar in their Strider power supply series, which now includes three 80 Plus Titanium certified units: the ST60F-TI, ST70F-TI, and ST80F-TI. These three units are billed as “the world’s smallest 80 Plus Titanium, full-modular ATX power supplies,” with a chassis that is only 150mm (5.9”) deep.
The 80 Plus Titanium efficiency standard was introduced in 2012 and is the most demanding specification to date. In addition to raising the required efficiency at 20%, 50%, and 100% loads, the Titanium standard adds a new requirement at 10% load. This ensures that a Titanium certified power supply will operate with at least 90% efficiency over the full range of loads and deliver up to 94% efficiency at a 50% load.
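Expressed as code, the efficiency floors described above look like the following sketch. The per-load figures here are taken from this paragraph rather than the official 80 Plus test tables, so treat them as illustrative:

```python
# Minimum efficiency a Titanium unit must hit at each standard load point,
# per the description above (the official 80 Plus spec defines exact
# per-load minimums; these figures are illustrative, from the text).
TITANIUM_MIN_EFFICIENCY = {10: 0.90, 20: 0.90, 50: 0.94, 100: 0.90}

def meets_titanium(measured: dict) -> bool:
    """measured maps load percentage -> measured efficiency (0..1).
    A missing load point counts as a failure."""
    return all(measured.get(load, 0.0) >= floor
               for load, floor in TITANIUM_MIN_EFFICIENCY.items())

# Example: a unit measured at the four standard load points
sample = {10: 0.905, 20: 0.925, 50: 0.943, 100: 0.911}
print(meets_titanium(sample))  # True
```

The key point the check captures is that Titanium is the first 80 Plus tier to test at the 10% load point at all, which is where light-load desktop idle efficiency matters most.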
I’m also happy to report that SilverStone now provides a 5-year warranty on both the Strider Titanium and Strider Platinum series power supplies, up from their standard 3-year warranty.
SilverStone ST60F-TI Power Supply Key Features:
• 80 Plus Titanium certified for super-high efficiency
• Compact design with a depth of 150mm for easy integration
• All-modular, flat ribbon-style cables
• 100% all Japanese made capacitors
• Strict ±3% voltage regulation and low AC ripple and noise
• Powerful single +12V rail
• Four PCI-E connectors for multiple GPU support
• Safety Protections: OCP, OTP, OPP, UVP, OVP, and SCP
• Quiet 120mm fan with Fluid Dynamic bearing
• 140mm dust filter
• 5-Year warranty
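As a quick aside on the ±3% voltage regulation figure in the list above, the allowed window for each rail is easy to work out. Here is a minimal sketch of that arithmetic:

```python
def regulation_window(nominal: float, tolerance: float = 0.03):
    """Return the (min, max) voltage allowed under +/- tolerance regulation."""
    return nominal * (1 - tolerance), nominal * (1 + tolerance)

# The three main DC rails of an ATX power supply
for rail in (12.0, 5.0, 3.3):
    lo, hi = regulation_window(rail)
    print(f"{rail}V rail must stay within {lo:.2f}V .. {hi:.2f}V")
```

So a ±3% unit must hold its +12V rail between roughly 11.64V and 12.36V across the load range, which is noticeably tighter than the ±5% the ATX specification itself requires.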
Introduction and Specifications
The Corsair VOID Surround Gaming Headset is a hybrid product of sorts, combining a traditional stereo gaming headset with a Dolby Headphone-enabled USB dongle to unlock virtual 7.1 surround sound. We’ll have a look, and listen, in this review.
The market for gaming headsets being what it is, one of the most important factors with each new product inevitably becomes price. There are different tiers of products out there from many companies, and Corsair themselves offer a few different choices at various price points. With the VOID Surround we have a pretty affordable option at $79.99, which is about half the price of the previous wired gaming headset I looked at, Logitech’s G633.
One of the advantages Corsair offers with the VOID headset is a pair of 50mm drivers, which theoretically offer better bass than 40mm options (though of course size alone is not a guarantee). The 7.1 surround effect comes via Dolby Headphone, a virtual effect commonly found with single-driver designs like this one. If the effect is convincing, a headset like the VOID can save the user a lot of money over the pricey discrete multi-driver options on the market.
Introduction and Features
Corsair continues to expand their extensive power supply lineup with the addition of two new small form factor (SFX) units, the SF450 and SF600. The SF Series power supplies are fully modular and optimized for quiet operation and high efficiency. Both power supplies feature Zero RPM Fan Mode, which means the fan doesn’t start to spin until the power supply is under a moderate to heavy load. The SF450 and SF600 are 80 Plus Gold certified for high efficiency and come with a 7-year warranty.
While the SF Series is designed for use in small form factor enclosures, these power supplies can also be used in standard ATX cases via the optional SFX-to-ATX adapter bracket. As you can see in the photo below, an SF Series power supply is much smaller in all three dimensions than a standard ATX power supply. We will be taking a detailed look at the new SF600 power supply in this review.
SF Series 600W vs. ATX Series 650W
Corsair SF Series 600W PSU Key Features:
• Small Form Factor (SFX) design
• Very quiet with Zero RPM Fan Mode
• 92mm cooling fan optimized for low noise
• 80 Plus Gold certified for high efficiency
• All-modular, flat ribbon-style cables
• 100% all Japanese made 105°C capacitors
• ATX12V v2.4 and EPS 2.92 compliant
• 6th Generation Intel Core processor Ready
• Safety Protections: OCP, OVP, UVP, SCP, OTP, and OPP
• 7-Year warranty
Introduction and Unboxing
A few years ago, Ryan reviewed the Couchmaster. It was a simple keyboard and mouse holder that suspended those peripherals above your lap, recreating a desk-like setup at your couch. It was a cool concept, but at the time, living room PC gaming hadn't gained much popularity. While we don't all suddenly have living room PCs, the concept has gained some steam. We've seen recent launches of devices like the Corsair Bulldog, a rather beefy DIY living room PC meant to handle enough hardware to support living room gaming at up to 4K resolutions. This left a bit of a gap in Corsair's lineup: they make keyboards, mice, and now a living room PC, but where do you put those peripherals while sitting on your couch? Enter the Corsair Lapdog:
Above is the setup process staged with the keyboard and mouse plugged into the integrated 4-port USB 3.0 hub. Note that we did not need to plug in both keyboard connectors: the mouse gets its own dedicated port, so there is no need to use the keyboards' USB pass-through feature. Owners of the older K70 RGBs might note that even though the early models did not come with a pass-through port, they still had a second connector for extra USB current. Fear not, as that second plug is also not needed here; the Lapdog uses a powered USB 3.0 hub that can provide sufficient current to light up those models over a single connector.
The cable that combines both power and USB connectivity from the Lapdog to the wall/PC is 16 feet long, which should provide plenty of length to stretch across just about any TV + couch combination. It was a great idea by Corsair to combine the USB cable and power cable in this way, minimizing the mess and cable clutter that reaches across the floor. You get another 5 feet or so of length from the 12V power adapter as well, so installation should be a breeze for most users.
Here we see the removable block-off plate. This comes pre-installed in case the user intends to use a K65 (short-body) keyboard. For those cases, the plate keeps the surface flush while covering the area normally used by the number pad. We are installing a K70 model and will be removing the plate for our configuration.
In case you're wondering how to remove the various cover plates and mouse pad in order to complete the installation, there is a mini hex driver built into the back of the foam lap pad.
Looking at the bottom of the Lapdog keyboard/mouse housing, we see six magnets that mate with the appropriate places on the bottom of the foam lap pad. The pad is made of cloth-covered polyurethane foam. It does not appear to be memory foam and is fairly rigid, which is desirable, as we need to keep the keyboard and mouse on a reasonably firm surface when using them on a lap.
On the right edge of the Lapdog we have rear ports for power and USB 3.0 back to the PC, and on the side, we have another pair of USB 3.0 ports off of the internal powered hub. This lets you do other cool stuff like plugging in portable USB storage or even connecting and charging your phone.
With the build complete, I'd just like to comment on how seamlessly the Corsair keyboards blend with the rest of the Lapdog. The anodized brushed aluminum is a perfect match, though it does add some weight to the completed product. There is a slight lip at the bottom and right edges of the mouse pad which keeps the mouse from sliding off when not in use.
After setup, I spent some quality time with the Lapdog. In gaming, it definitely works as advertised. With the device on your lap, WASD + mouse gaming is essentially where your hands naturally rest with the default positioning, making gaming just about the same as doing so on a desktop. The lap pad design helps to keep it from sliding around on your lap while in use, and the overall bulk and heft of the unit keep it firmly planted on your lap. It is not overly heavy, and I feel that going any lighter would negatively impact stability.
I also tried some actual writing on the Lapdog (I used it to write this article). While the typical gaming position is natural when centered, the left offset of the keyboard means that any serious typing requires you to scoot everything over to the right. The keyboard side is heavier than the mousing side, so there are no tipping issues when doing so. Even if you were to place the center of the Lapdog over your right leg, centering the keyboard on your lap, its weight will still keep the Lapdog planted on your left, so no issues there. Long periods of typing may put a strain on your back if you tend to lean forward off of the front edge of your couch, but the Lapdog is really meant to be a 'lay back' experience, and extended typing is certainly doable in that position with a bit of practice.
The Corsair Lapdog is available for $119.99, which I feel is a fair price given the high-grade components and solid build quality. If you're into PC gaming from the comfort of your couch, the Corsair Lapdog looks to be the best solution for you!
Fractal Design has shrunk their excellent Define S enclosure all the way down from ATX to mini-ITX, and the resulting Define Nano S offers plenty of room for a small form-factor case.
Large mini-ITX cases have become the trend in the past year or so, with the NZXT Manta the most recent (and possibly the most extreme) example. Fractal Design's Nano S isn't quite as large as the Manta, but it is cavernous inside thanks to a completely open internal layout. There are no optical drive bays, no partitions for PSU or storage, and really not much of anything inside the main compartment at all as Fractal Design has essentially miniaturized the Define S enclosure.
We have the windowed version of the Define Nano S for review here, which adds some interest to a very understated design. There is still something very sophisticated about this sort of industrial design, and I must admit to liking it quite a bit myself. Details such as the side vents for front panel air intake do add some interest, and that big window helps add some style as well (and builders could always add some increasingly ubiquitous RGB lighting inside!).
AMD gets aggressive
At its Computex 2016 press conference in Taipei today, AMD has announced the branding and pricing, along with basic specifications, for one of its upcoming Polaris GPUs shipping later this June. The Radeon RX 480, based on Polaris 10, will cost just $199 and will offer more than 5 TFLOPS of compute capability. This is an incredibly aggressive move obviously aimed at continuing to gain market share at NVIDIA's expense. Details of the product are listed below.
| | RX 480 | GTX 1070 | GTX 980 | GTX 970 | R9 Fury | R9 Nano | R9 390X | R9 390 |
|---|---|---|---|---|---|---|---|---|
| GPU | Polaris 10 | GP104 | GM204 | GM204 | Fiji Pro | Fiji XT | Hawaii XT | Grenada Pro |
| Rated Clock | ? | 1506 MHz | 1126 MHz | 1050 MHz | 1000 MHz | up to 1000 MHz | 1050 MHz | 1000 MHz |
| Memory Clock | 8000 MHz | 8000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz | 6000 MHz | 6000 MHz |
| Memory Interface | 256-bit | 256-bit | 256-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) | 512-bit | 512-bit |
| Memory Bandwidth | 256 GB/s | 256 GB/s | 224 GB/s | 196 GB/s | 512 GB/s | 512 GB/s | 384 GB/s | 384 GB/s |
| TDP | 150 watts | 150 watts | 165 watts | 145 watts | 275 watts | 175 watts | 275 watts | 230 watts |
| Peak Compute | > 5.0 TFLOPS | 5.7 TFLOPS | 4.61 TFLOPS | 3.4 TFLOPS | 7.20 TFLOPS | 8.19 TFLOPS | 5.63 TFLOPS | 5.12 TFLOPS |
The RX 480 will ship with 36 CUs totaling 2304 stream processors based on the current GCN breakdown of 64 stream processors per CU. AMD didn't list clock speeds and instead is only telling us that the performance offered will exceed 5 TFLOPS of compute; how much is still a mystery and will likely change based on final clocks.
The memory system is powered by a 256-bit GDDR5 memory controller running at 8 Gbps and hitting 256 GB/s of throughput. This is the same resulting memory bandwidth as NVIDIA's new GeForce GTX 1070 graphics card.
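Both the bandwidth figure and the clock speed implied by the >5 TFLOPS claim can be sanity-checked with some napkin math. The sketch below derives the bandwidth from the bus width and data rate, then infers the minimum clock needed to reach 5 TFLOPS (the derived clock is purely our inference, not an AMD number):

```python
# Memory bandwidth: bus width (bits) * per-pin data rate (Gbps) / 8 bits-per-byte
bus_width_bits = 256
data_rate_gbps = 8
bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
print(bandwidth_gbs)  # 256.0 GB/s, matching the GTX 1070

# Peak FP32 compute: shaders * 2 ops per clock (fused multiply-add) * clock (Hz)
shaders = 2304  # 36 CUs * 64 stream processors per CU
target_tflops = 5.0
min_clock_mhz = target_tflops * 1e12 / (shaders * 2) / 1e6
print(round(min_clock_mhz))  # ~1085 MHz needed to clear 5 TFLOPS
```

In other words, AMD's ">5 TFLOPS" claim implies the RX 480 runs somewhere north of roughly 1085 MHz, well within reach of a 14nm part.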
AMD also tells us that the TDP of the card is 150 watts, again matching the GTX 1070, though without more accurate performance data it's hard to assume anything about the new architectural efficiency of the Polaris GPUs built on the 14nm Global Foundries process.
Obviously the card will support FreeSync and all of AMD's VR features, in addition to being DP 1.3 and 1.4 ready.
AMD stated that the RX 480 will launch on June 29th.
I know that many of you will want us to start guessing at what performance level the new RX 480 will actually fall, and trust me, I've been trying to figure it out. Based on TFLOPS rating and memory bandwidth alone, it seems possible that the RX 480 could compete with the GTX 1070. But if that were the case, I don't think even AMD is crazy enough to set the price this far below where the GTX 1070 launched, $379.
I would expect the configuration of the GCN architecture to remain mostly unchanged on Polaris, compared to Hawaii, for the same reasons that we saw NVIDIA leave Pascal's basic compute architecture unchanged compared to Maxwell. Moving to the new process node was the primary goal and adding to that with drastic shifts in compute design might overly complicate product development.
In the past, we have observed that AMD's GCN architecture tends to operate slightly less efficiently in terms of rated maximum compute capability versus realized gaming performance, at least compared to Maxwell and now Pascal. With that in mind, the >5 TFLOPS offered by the RX 480 likely lies somewhere between the Radeon R9 390 and R9 390X in realized gaming output. If that is the case, the Radeon RX 480 should have performance somewhere between the GeForce GTX 970 and the GeForce GTX 980.
AMD claims that the RX 480 at $199 is set to offer a "premium VR experience" that has previously been limited to $500 graphics cards (another reference to the original price of the GTX 980, perhaps). The company says this should have a dramatic impact on increasing the TAM (total addressable market) for VR.
In a notable market survey, price was a leading barrier to adoption of VR. The $199 SEP for select Radeon™ RX Series GPUs is an integral part of AMD’s strategy to dramatically accelerate VR adoption and unleash the VR software ecosystem. AMD expects that its aggressive pricing will jumpstart the growth of the addressable market for PC VR and accelerate the rate at which VR headsets drop in price:
- More affordable VR-ready desktops and notebooks
- Making VR accessible to consumers in retail
- Unleashing VR developers on a larger audience
- Reducing the cost of entry to VR
AMD calls this strategy of starting with the mid-range product its "Water Drop" strategy, with the goal of "releasing new graphics architectures in high volume segments first to support continued market share growth for Radeon GPUs."
So what do you guys think? Are you impressed with what it looks like Polaris is going to be?
Bristol Ridge Takes on Mobile: E2 Through FX
It is no secret that AMD has faced an uphill battle since the release of the original Core 2 processors from Intel. While they stayed mostly competitive through the Phenom II years, they hit some major performance issues when moving to the Bulldozer architecture. While on paper the idea of Clustered Multi-Threading sounded fantastic, AMD was never able to get per-thread performance up to expectations. While their CPUs performed well in heavily multi-threaded applications, they were never seen in as positive a light as the competing Intel products.
The other part of the performance equation that has hammered AMD is the lack of a new process node that would allow it to more adequately compete with Intel. When AMD was at 32 nm PD-SOI, Intel had introduced its 22nm TriGate/FinFET. AMD then transitioned to a 28nm HKMG planar process that was more size optimized than 32nm, but did not drastically improve upon power and transistor switching performance.
So AMD had a double whammy on their hands: an underperforming architecture, and limited to no access to advanced process nodes that would actually improve their power and speed situation. They could not force their foundry partners to spend billions on a crash course in FinFET technology to bring it to market faster, so they had to iterate and innovate on their designs.
Bristol Ridge is the fruit of that particular labor. It is also the end point to the architecture that was introduced with Bulldozer way back in 2011.
It has been nearly two years since the release of the Haswell-E platform, which began with the launch of the Core i7-5960X processor. Back then, the introduction of an 8-core consumer processor was the primary selling point, along with the new X99 chipset and DDR4 memory support. At the time, I heralded the processor as “easily the fastest consumer processor we have ever had in our hands” and “nearly impossible to beat.” So what has changed over the course of 24 months?
Today Intel is launching Broadwell-E, the follow up to Haswell-E, and things look very much the same as they did before. There are definitely a couple of changes worth noting and discussing, including the move to a 10-core processor option as well as Turbo Boost Max Technology 3.0, which is significantly more interesting than its marketing name implies. Intel is sticking with the X99 platform (good for users that might want to upgrade), though the cost of these new processors is more than slightly disappointing based on trends elsewhere in the market.
This review of the new Core i7-6950X 10-core Broadwell-E processor is going to be quick, and to the point: what changes, what is the performance, how does it overclock, and what will it cost you?
New Products for 2017
PC Perspective was invited to Austin, TX on May 11 and 12 to participate in ARM’s yearly tech day. Also invited were a handful of editors and analysts that cover the PC and mobile markets. Those folks were all pretty smart, so it is confusing as to why they invited me. Perhaps word of my unique talent of screenshotting PDFs into near-unreadable JPGs preceded me? Regardless of the reason, I was treated to two full days of in-depth discussion of the latest generation of CPU and GPU cores, 10nm test chips, and information on new licensing options.
Today ARM is announcing their next CPU core with the introduction of the Cortex-A73. They are also unwrapping the latest Mali-G71 graphics technology. Other technologies such as the CCI-550 interconnect are also revealed. It is a busy and important day for ARM, especially in light of Intel seemingly abandoning the low-power mobile market.
ARM previously announced the Cortex-A72 in February, 2015. Since that time it has been seen in most flagship mobile devices in late 2015 and throughout 2016. The market continues to evolve, and as such the workloads and form factors have pushed ARM to continue to develop and improve their CPU technology.
The Sofia Antipolis, France design group is behind the new A73, whereas the previous several core architectures had been developed by the Cambridge group. As such, the new design differs quite dramatically from the previous A72. I was actually somewhat taken aback by the differences in design philosophy between the two groups and the changes from the A72 to the A73, but they make the generational jumps we have seen in the past a bit easier to understand.
The marketplace is constantly changing when it comes to workloads and form factors. More and more complex applications are being ported to mobile devices, including hot technologies like AR and VR. Other technologies include 3D/360 degree video, greater than 20 MP cameras, and 4K/8K displays and their video playback formats. Form factors on the other hand have continued to decrease in size, especially in overall height. We have relatively large screens on most premium devices, but the designers have continued to make these phones thinner and thinner throughout the years. This has put a lot of pressure on ARM and their partners to increase performance while keeping TDPs in check, and even reducing them so they more adequately fit in the TDP envelope of these extremely thin devices.
GP104 Strikes Again
It’s only been three weeks since NVIDIA unveiled the GeForce GTX 1080 and GTX 1070 graphics cards at a live streaming event in Austin, TX. But it feels like those two GPUs, one of which hasn't even been reviewed until today, have already drastically shifted the landscape of graphics, VR and PC gaming.
Half of the “new GPU” stories are told, with AMD due to follow up soon with Polaris, but it was clear to anyone watching the enthusiast segment with a hint of history that a line was drawn in the sand that day. There is THEN, and there is NOW. Today’s detailed review of the GeForce GTX 1070 completes NVIDIA’s first wave of NOW products, following closely behind the GeForce GTX 1080.
Interestingly, and in a move that is very uncharacteristic of NVIDIA, detailed specifications of the GeForce GTX 1070 were released on GeForce.com well before today’s reviews. With information on the CUDA core count, clock speeds, and memory bandwidth, it was possible to get a solid sense of where the GTX 1070 would perform, and I imagine that many of you have already done the napkin math to figure that out. There is no more guessing, though: reviews and testing are all done, and I think you'll find that the GTX 1070 is as exciting, if not more so, than the GTX 1080 thanks to the combination of performance and pricing it provides.
Let’s dive in.
We’ve probably all lost data at some point, and many of us have tried various drive recovery solutions over the years. Of these, Disk Drill has been available for Mac OS X users for some time, and the company released a Windows-compatible version last year. The best part? It’s totally free (and not in the ad-ridden, drowning-in-popups kind of way). So does it work? Using some of my own data as a guinea pig, I decided to find out.
The interface is clean and simple
To begin with I’ll list the features of Disk Drill as Clever Files describes it on their product page:
- Any Drive
- Our free data recovery software for Windows PC can recover data from virtually any storage device - including internal and external hard drives, USB flash drives, iPods, memory cards, and more.
- Recovery Options
- Disk Drill has several different recovery algorithms, including Undelete, Protected Data, Quick Scan, and Deep Scan. It will run through them one at a time until your lost data is found.
- Speed & Simplicity
- It’s as easy as one click: Disk Drill scans start with just the click of a button. There’s no complicated interface with too many options, just click, sit back and wait for your files to appear.
- All File Systems
- Different types of hard drives and memory cards have different ways of storing data. Whether your media has a FAT, exFAT or NTFS file system, is HFS+ Mac drive or Linux EXT2/3/4, Disk Drill can recover deleted files.
- Partition Recovery
- Sometimes your data is still on your drive, but a partition has been lost or reformatted. Disk Drill can help you find the “map” to your old partition and rebuild it, so your files can be recovered.
- Recovery Vault
- In addition to deleted files recovery, Disk Drill also protects your PC from future data loss. Recovery Vault keeps a record of all deleted files, making it much easier to recover them.
- Disk Drill For Windows - Free download here
The Recovery Process
(No IDE hard drives were harmed in the making of this photo)
My recovery process involved an old 320GB IDE drive, which I used for backups until a power outage corrupted its data (I didn’t own a UPS at the time, and the drive was in the middle of a write), leaving me without a valid partition. At one point I had given up and formatted the drive, thinking all of my original backup was lost. Thankfully I didn’t use it much after that, and it has been sitting on a shelf for years.
There are different methods that can be employed to recover lost or deleted data. One of these is to scan for file headers (or signatures), which contain information about what type of file it is (e.g. a Microsoft Word document or JPEG image). There are more advanced recovery methods that attempt to reconstruct an entire file system, preserving the folder structure and the original file names. Unfortunately, this is not a simple (or fast) process, and it is generally left to the professionals.
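As a simplified illustration of the signature-scanning method described above (a toy sketch, not how Disk Drill is actually implemented), a scanner can walk a raw byte dump looking for known magic numbers:

```python
# Known file signatures ("magic numbers") mapped to a file type.
# A real recovery tool knows hundreds of these; two suffice to illustrate.
SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",  # JPEG Start-of-Image marker
    b"%PDF-": "pdf",          # PDF header
}

def scan_for_signatures(raw: bytes):
    """Return sorted (offset, file_type) pairs for every signature found."""
    hits = []
    for magic, ftype in SIGNATURES.items():
        start = 0
        while (pos := raw.find(magic, start)) != -1:
            hits.append((pos, ftype))
            start = pos + 1
    return sorted(hits)

# Example: a fake "disk image" with a JPEG header buried at offset 16
image = b"\x00" * 16 + b"\xff\xd8\xff\xe0JFIF..." + b"\x00" * 8 + b"%PDF-1.4..."
print(scan_for_signatures(image))  # [(16, 'jpeg'), (35, 'pdf')]
```

Note what this approach cannot do: it finds where files begin, but it knows nothing about folder structure or file names, which is exactly why full file-system reconstruction is the harder, slower job.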
First, Some Background
NVIDIA's Rumored GP102
When GP100 was announced, Josh and I discussed internally how it would make sense in the gaming industry. Recently, an article on WCCFTech cited anonymous sources (which should always be taken with a grain of salt) claiming that NVIDIA was planning a second large chip, GP102, positioned between GP104 and GP100. As I was writing this editorial about it, relating it to our own speculation about the physics of Pascal, VideoCardz claimed to have been contacted by the developers of AIDA64, seemingly on the record, also citing a GP102 design.
I will retell chunks of the rumor, but also add my opinion to it.
In the last few generations, each architecture had a flagship chip that was released in both gaming and professional SKUs. Neither audience had access to a chip that was larger than the other's largest of that generation. Clock rates and disabled portions varied by specific product, with gaming usually getting the more aggressive tuning for slightly better benchmarks. Fermi had GF100/GF110, Kepler had GK110/GK210, and Maxwell had GM200. Each of these was available in Tesla, Quadro, and GeForce cards, including the Titans.
Maxwell was interesting, though. NVIDIA was unable to move off of 28nm, the node Kepler launched on, so they created a second architecture for it. To increase performance without access to greater transistor density, you need to make your designs bigger, more optimized, or simpler. GM200 was giant and optimized but, to reach the performance levels it achieved, it also needed to be simpler. Something had to go, and double-precision (FP64) performance was the big omission. NVIDIA was upfront about this at the Titan X launch and told their GPU compute customers to keep purchasing Kepler if they valued FP64.
Introduction, Specifications and Packaging
The OCZ RevoDrive has been around for a good long while. We looked at the first ever RevoDrive back in 2010. It was a bold move for the time, as PCIe SSDs were both rare and very expensive. OCZ's innovation was to implement a new VCA RAID controller which kept latencies low and scaled properly with increased queue depth. OCZ got a lot of use out of this formula, later expanding to the RevoDrive 3 x2, which ran four parallel SSDs, all the way up to the enterprise Z-Drive R4, which further expanded that to eight RAIDed SSDs.
OCZ's RevoDrive lineup circa 2011.
The latter was a monster of an SSD in both physical size and storage capacity. Its performance was also impressive, given that it launched five years ago. After being acquired by Toshiba, OCZ re-spun the old VCA-driven SSD one last time in the form of the RevoDrive 350, but it was the same old formula of high-latency SandForce controllers (updated with in-house Toshiba flash). The RevoDrive line needed to ditch that dated tech and move into the world of NVMe, and today it has!
Here is the new 'Toshiba OCZ RD400', branded as such under the recent rebadging that took place on OCZ's site (the Trion 150 and Vertex 180 have also been relabeled as TR150 and VT180). This new RD400 brings some significant changes over previous iterations of the line. The big one is that it is now a lean M.2 part, available with an optional adapter card for those without a free M.2 slot.
Introduction and Technical Specifications
Courtesy of GIGABYTE
The X99P-SLI motherboard is the newest member of GIGABYTE's Ultra Durable board line, updated with the newest technological innovations including USB 3.1 and Thunderbolt 3. The board supports all Intel LGA2011-3 based processors paired with DDR4 memory in up to a quad channel configuration. GIGABYTE priced the X99P-SLI at an approachable MSRP of $249.99.
Like all members of the Ultra Durable board line, the X99P-SLI is over-engineered to take whatever abuse is thrown its way, featuring a 6+4-phase digital power system with International Rectifier 4th-generation digital PWM controllers and 3rd-generation PowIRstage controllers, server-level chokes, and long-life Durable Black solid capacitors. The board also features GIGABYTE's next-generation PCIe x16 slots with PCIe Metal Shielding: steel-reinforced overlays that provide extra vertical support for graphics cards with large and heavy coolers.
10nm Sooner Than Expected?
It seems like only yesterday that we had the first major GPU released on 16nm FF+, and now we are talking about ARM being about to receive their first 10nm FF test chips! Well, in fact it was yesterday that NVIDIA formally released performance figures for the latest GeForce GTX 1080, which is based on TSMC’s 16nm FF+ process technology. Currently TSMC is going full bore on their latest process node and producing the fastest current graphics chip around. It has taken the foundry industry as a whole a lot longer to develop FinFET technology than expected, but now that they have that piece of the puzzle seemingly mastered, they are moving to a new process node at an accelerated rate.
TSMC’s 10nm FF is not well understood by press and analysts yet, but we gather that it is more of a marketing term than a true drop to 10 nm features. Intel has yet to get past 14nm and does not expect 10 nm production until well into next year. TSMC is promising their version in the second half of 2016. We cannot assume that TSMC’s version will match what Intel will be doing in terms of geometries and electrical characteristics, but we do know that it is a step past TSMC’s 16nm FF products. Lithography will likely get a boost with triple patterning exposure. My guess is that the back end will also move away from the “20nm metal” stages that we see with 16nm. All in all, it should be an improved product from what we see with 16nm, but time will tell if it can match the performance and density of competing lines that bear the 10nm name from Intel, Samsung, and GLOBALFOUNDRIES.
ARM has a history of porting their architectures to new process nodes, but they are being a bit more aggressive here than we have seen in the past. It used to be that ARM would announce a new core or technology, and it would take up to two years for it to be introduced into the market. Now we are seeing technology announcements and actual products hitting the scene about nine months later. With the mobile market continuing to grow, we expect products to come to market even more quickly.
The company designed a simplified test chip to tape out and send to TSMC for test production on the aforementioned 10nm FF process. The chip was taped out in December, 2015. The design was shipped to TSMC for mask production and wafer starts. ARM is expecting the finished wafers to arrive this month.
A new architecture with GP104
Table of Contents
- Asynchronous compute discussion
- Is only 2-Way SLI supported?
- Overclocking over 2.0 GHz
- Dissecting the Founders Edition
- Benchmarks begin
- VR Testing
- Impressive power efficiency
- Performance per dollar discussion
- Ansel screenshot tool
The summer of change for GPUs has begun with today’s review of the GeForce GTX 1080. NVIDIA has endured leaks, speculation and criticism for months now, with enthusiasts calling out NVIDIA for not including HBM technology or for not having asynchronous compute capability. Last week NVIDIA’s CEO Jen-Hsun Huang went on stage and officially announced the GTX 1080 and GTX 1070 graphics cards with a healthy amount of information about their supposed performance and price points. Issues around cost and what exactly a Founders Edition is aside, the event was well received and clearly showed a performance and efficiency improvement that we were not expecting.
The question is, does the actual product live up to the hype? Can NVIDIA overcome some users’ negative view of the Founders Edition and craft a product message that convinces the wide range of PC gamers looking for an upgrade path that this is the option to take?
I’ll let you know through the course of this review, but what I can tell you definitively is that the GeForce GTX 1080 clearly sits alone at the top of the GPU world.
NVIDIA's Ansel Technology
“In-game photography” is an interesting concept. Not too long ago, it was difficult to simply capture the user's direct experience with a title. Print Screen could only hold a single screenshot at a time, which left room for Steam and FRAPS to provide a better user experience. FRAPS also made video capture more accessible to the end-user, but it output huge files and, while it wasn't too expensive, it needed to be purchased online, which was a significant hurdle ten-or-so years ago.
Seeing that their audience would enjoy video capture, NVIDIA introduced ShadowPlay a couple of years ago. The feature allowed users not only to record video, but also to retroactively save the last few minutes of gameplay. It did this with hardware acceleration, and it did this for free (on compatible GPUs). While I don't use ShadowPlay myself, preferring the control of OBS, it's a good example of how NVIDIA wants to support their users. They see these features as a value-add that draws people to their hardware.
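The "save the last few minutes" trick behind ShadowPlay-style capture is conceptually just a fixed-size ring buffer of recent frames. Here is a minimal Python sketch of that idea; the frame rate, window length, and the use of frame numbers as stand-ins for encoded video frames are all illustrative assumptions, not how NVIDIA's implementation actually works.

```python
from collections import deque

# Illustrative ring-buffer capture: keep only the most recent frames.
# A real recorder buffers minutes of hardware-encoded video; we keep
# the numbers small and use frame indices as stand-ins for frames.
FPS = 60
WINDOW_SECONDS = 5

# deque with maxlen silently drops the oldest entry once full.
buffer = deque(maxlen=FPS * WINDOW_SECONDS)

for frame_number in range(1000):  # stand-in for a live capture loop
    buffer.append(frame_number)

# "Save the last few seconds": the buffer holds only the newest frames.
saved = list(buffer)
print(saved[0], saved[-1], len(saved))  # 700 999 300
```

The appeal of this design is that memory use is bounded no matter how long the game runs, and "saving" is just flushing the buffer to disk on demand.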
Introduction, Specifications, and Packaging
ICY DOCK has made themselves into a sort of Swiss Army knife of dockable and hot-swappable storage solutions. From multi-bay desktop external devices to internal hot-swap enclosures, these guys have just about every conceivable way to convert storage form factors covered. We’ve looked at some of their other offerings in the past, but this week we will focus on a pair of their ToughArmor series products.
As you can no doubt see here, these two enclosures aim to cram as many 2.5” x 7mm form factor devices into the smallest space possible. They also offer hot swap capability and feature front panel power + activity LEDs. As the name would imply, these are built to be extremely durable, with ICY DOCK proudly running them over with a truck in some of their product photos.
Read on for our full review of the ICY DOCK ToughArmor MB998SP-B and MB993SK-B!
Lower Power, Same Performance
AMD is in a strange position: there is a lot of excitement about their upcoming Zen architecture, but we are still many months away from that introduction. AMD obviously needs to keep the dollars flowing in, and part of that means periodic refreshes of current products. The “Kaveri” chips that have powered the latest APUs from AMD have received one of those refreshes. AMD has done some redesigning of the chip and tweaked the process technology used to manufacture it. The resulting product is the “Godavari” refresh, which offers slightly higher clockspeeds and better overall power efficiency than the previous “Kaveri” parts.
One of the first refreshes was the A8-7670K, which hit the market in November 2015. This is a slightly cut-down part featuring 6 GPU compute units vs. the 8 of a fully enabled Godavari chip. It remains an FM2+ based chip with a 95 watt TDP. The CPU clockspeed of this part ranges from 3.6 GHz to 3.9 GHz. The GPU portion runs at the same 757 MHz as the original A10-7850K. It is interesting to note that it is still a 95 watt TDP part with essentially the same clockspeeds as the 7850K, but with two fewer GPU compute units.
The other product being covered here is a bit more interesting. The A10-7860K looks to be a larger improvement over the previous 7850K in terms of power and performance. It shares the same CPU clockspeed range as the 7850K (3.6 GHz to 3.9 GHz), but improves upon the GPU clockspeed by hitting around 800 MHz. At first this seems underwhelming, until we realize that AMD has lowered the TDP from 95 watts down to 65 watts. Less power consumed and less heat produced for the same CPU performance, plus improved GPU performance, is a nice advance.
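The TDP numbers above are worth putting in perspective. Taking the article's claim of equal CPU performance at face value, performance per watt scales inversely with TDP, so a quick back-of-the-envelope calculation shows how large the 95 W to 65 W drop really is:

```python
# Back-of-the-envelope comparison of the A10-7850K and A10-7860K TDPs.
# Assumes the article's claim of equal CPU performance, so
# performance-per-watt scales inversely with TDP here.
OLD_TDP_W = 95  # A10-7850K
NEW_TDP_W = 65  # A10-7860K

power_reduction_pct = (OLD_TDP_W - NEW_TDP_W) / OLD_TDP_W * 100
perf_per_watt_gain_pct = (OLD_TDP_W / NEW_TDP_W - 1) * 100

print(f"TDP reduction: {power_reduction_pct:.1f}%")          # ~31.6%
print(f"Perf-per-watt gain: {perf_per_watt_gain_pct:.1f}%")  # ~46.2%
```

Roughly a third less power for the same CPU throughput, or close to a 50% gain in performance per watt, is a substantial result for a tweak of an existing 28 nm design.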
AMD continues to utilize GLOBALFOUNDRIES' 28 nm bulk/HKMG process for their latest APUs and will continue to do so until Zen is released late this year. This is not the same 28 nm process that we were introduced to over four years ago. Over that time, refinements have improved yields and binning, and optimized power and clockspeed. GF can also adjust the process on a per-batch basis to emphasize certain aspects of a design (higher speed, more leakage, lower power, etc.). They cannot produce miracles, though. Do not expect 22 nm FinFET performance or density from these latest AMD products. Those kinds of improvements will show up with Samsung/GF's 14nm LPP and TSMC's 16nm FF+ lines. While AMD will be introducing GPUs on 14nm LPP this summer, the Zen launch in late 2016 will be the first AMD CPU to utilize that advanced process.
Introduction and Technical Specifications
Courtesy of ECS
The ECS Z170-Claymore motherboard is the newest offering in ECS' L337 product line with support for the Intel Z170 Express chipset. The Z170-Claymore is a more enthusiast-friendly design than some of their previous offerings, with a slew of features sure to entice gamers and power users alike. ECS priced this board competitively with an MSRP of $159.99, a price point sure to appeal to a wide swath of users given the board's integrated feature set.
ECS pulled out all the stops with the Z170-Claymore, integrating a host of features together with high-quality components for a compelling product. The board was designed with a 12-phase digital power delivery system, using high-efficiency chokes and MOSFETs, as well as solid core capacitors, for optimal board performance under any operating conditions. ECS integrated the following features into the Z170-Claymore: four SATA 3 ports; one SATA Express port; a PCIe x2 M.2 port; a Realtek GigE NIC; five PCI-Express x16 slots; a 2-digit diagnostic LED display; on-board power and reset buttons; a Realtek audio solution; integrated DisplayPort and HDMI video outputs; and USB 2.0, 3.0, and 3.1 Gen2 ports.