** Edit **
The tool is now available for download from Samsung here. Note also that they intend to release an ISO/DOS version of the tool at the end of the month (for Linux and Mac users). We assume this would be a file-system-agnostic version of the tool, which would either update all flash or wipe the drive. We suspect it would be the former.
** End edit **
As some of you may have been tracking, there was an issue with Samsung 840 EVO SSDs where 'stale' data (data which had not been touched for some period of time after being written) saw slower read speeds as the time since writing extended beyond a period of weeks or months. The rough effect was that the read speed of old data would begin to slow roughly one month after being written, and after a few more months would eventually reach a speed of ~50-100 MB/sec, varying slightly with room temperature. Speeds would plateau at this low figure, and more importantly, even at this slow speed, no users reported lost data while this effect was taking place.
An example of file read speeds slowing relative to file age.
Since we first published on this, we have been coordinating with Samsung to learn the root causes of this issue, how they will be fixed, and we have most recently been testing a pre-release version of the fix for this issue. First let's look at the newest statement from Samsung:
Because of an error in the flash management software algorithm in the 840 EVO, a drop in performance occurs on data stored for a long period of time AND has been written only once. SSDs usually calibrate changes in the statuses of cells over time via the flash management software algorithm. Due to the error in the software algorithm, the 840 EVO performed read-retry processes aggressively, resulting in a drop in overall read performance. This only occurs if the data was kept in its initial cell without changing, and there are no symptoms of reduced read performance if the data was subsequently migrated from those cells or overwritten. In other words, as the SSD is used more and more over time, the performance decrease disappears naturally. For those who want to solve the issue quickly, this software restores the read performance by rewriting the old data. The time taken to complete the procedure depends on the amount of data stored.
This partially confirms my initial theory that the slowdown was related to cell voltage drift over time. Here's what that looks like:
As you can see above, cell voltages shift to the left over time. The above example is for MLC; TLC in the EVO has not 4 but 8 divisions, meaning even smaller voltage shifts can cause the apparent flipping of bits when a read is attempted. An important point here is that all flash does this - the key is to correct for it, and that correction is what was not happening on the EVO. The correction is quite simple, really: if the controller sees errors during reading, it follows a procedure that adapts to cell drift by adjusting the voltage thresholds used to interpret the bits. With the thresholds adapted properly, the SSD can then read at full speed and without the need for error correction. This process was broken in the EVO, so that adaptation was not taking place, forcing the controller to perform error correction on *all* data once those voltages had drifted near their default thresholds. This slowed read speeds tremendously. Below is a worst case example:
We are happy to say that there is a fix, and while it won't be public until some time tomorrow, we have been green-lighted by Samsung to publish our findings.
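The threshold-adaptation idea described above can be sketched in a few lines. This is an illustrative model only (the voltage values and decode logic are made up for the example, not Samsung's actual firmware behavior): cells drift downward, fixed factory thresholds start misreading them, and shifting the read thresholds by the estimated drift restores correct reads.

```python
def decode(voltage, thresholds):
    """Map a cell voltage to a state by counting thresholds at or below it."""
    return sum(voltage >= t for t in thresholds)

# Hypothetical MLC setup: 4 states (2 bits/cell) separated by 3 read thresholds.
# TLC as in the EVO would have 8 states and 7 thresholds, with tighter margins.
factory_thresholds = [1.0, 2.0, 3.0]
programmed_levels = [0.5, 1.5, 2.5, 3.5]   # ideal voltage for states 0..3

# Charge leakage shifts every cell's voltage down over time.
drift = -0.6
aged_levels = [v + drift for v in programmed_levels]

# Reading aged cells against the original thresholds misreads most states,
# forcing read-retry/ECC on every access (the EVO's slow-read symptom).
misreads = [decode(v, factory_thresholds) for v in aged_levels]

# Recalibration: move the read thresholds by the same estimated drift.
adapted_thresholds = [t + drift for t in factory_thresholds]
recovered = [decode(v, adapted_thresholds) for v in aged_levels]

print(misreads)   # [0, 0, 1, 2] -- three of four states read back wrong
print(recovered)  # [0, 1, 2, 3] -- full-speed reads, no retries needed
```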
Introduction and Technical Specifications
Courtesy of GIGABYTE
The Z97X-UD5H motherboard is one of the middle tier offerings in GIGABYTE's channel line of boards. GIGABYTE updated the previous revision of their UD5H board, integrating the Intel Z97 Express chipset as well as updated heat sink and power circuitry design. At an MSRP of $189.99, the Z97X-UD5H offers a premium feature set at an affordable price.
Courtesy of GIGABYTE
Courtesy of GIGABYTE
GIGABYTE designed the board in accordance with the latest revision of their Ultra Durable design specifications, integrating a 12-phase digital power system so that the board remains stable under any operating conditions. Ultra Durable brings several high-end power components into the board's design: International Rectifier (IR) manufactured PowIRstage™ ICs and PWM controllers, Nippon Chemi-con manufactured Black Solid capacitors with a 10k hour operational rating at 105C, 15 micron gold plating on the CPU socket pins, and two 0.070mm copper layers embedded into the PCB for optimal heat dissipation. The Z97X-UD5H motherboard includes the following integrated features: six SATA 3 ports; one SATA Express 10 Gb/s port; one M.2 10 Gb/s port; dual Gigabit NICs - an Intel I217-V NIC and a Qualcomm® Atheros Killer E2201 NIC; three PCI-Express x16 slots; two PCI-Express x1 slots; two PCI slots; 2-digit diagnostic LED display; on-board power, reset, and CMOS clear buttons; Dual-BIOS and active BIOS switches; integrated voltage measurement points; and USB 2.0 and 3.0 port support.
Courtesy of GIGABYTE
SLI Setup and Testing Configuration
The idea of multi-GPU gaming is pretty simple on the surface. By adding another GPU to your gaming PC, the game and the driver can divide the workload of the game engine, sending half of the work to one GPU and half to the other, then combining that work on your screen in the form of successive frames. This should make the average frame rate much higher, improve smoothness, and basically make the gaming experience better. However, implementations of multi-GPU technologies like NVIDIA SLI and AMD CrossFire are much more difficult than the simple explanation above suggests. We have traveled many steps in this journey, and while things have improved in several key areas, there is still plenty of work to be done in others.
As it turns out, support for GPUs beyond two seems to be one of those areas ready for improvement.
When the new NVIDIA GeForce GTX 980 launched last month, my initial review of the product included performance results for GTX 980 cards running in a 2-Way SLI configuration, by far the most common derivative. As it happens though, another set of reference GeForce GTX 980 cards found their way to our office, and of course we needed to explore the world of 3-Way and 4-Way SLI support and performance on the new Maxwell GPU.
The dirty secret for 3-Way and 4-Way SLI (and CrossFire for that matter) is that it just doesn't work as well or as smoothly as 2-Way configurations. Much more work is put into standard SLI setups as those are by far the most common and it doesn't help that optimizing for 3-4 GPUs is more complex. Some games will scale well, others will scale poorly; hell some even scale the other direction.
Let's see what the current state of high GPU count SLI is with the GeForce GTX 980 and whether or not you should consider purchasing more than one of these new flagship parts.
Introduction and Features
Corsair recently released three new HXi Series Fully Modular power supplies: the HX1000i, HX850i, and the HX750i. All three power supplies are 80 Plus Platinum certified and support the Corsair Link digital interface. Corsair continues to offer a full line of high quality power supplies, memory components, cases, cooling components, SSDs and accessories to market for the PC enthusiast and professional alike. In this review we will be taking a detailed look at the HXi Series 1000W fully modular power supply.
All three of the new Corsair HXi Series power supplies are optimized for silence and high efficiency. Zero RPM fan mode means the fan doesn’t spin until the PSU is under heavy load, and the fan itself is custom-designed by Corsair for low noise operation even at high loads. Flat ribbon-style black cables are fully modular to facilitate fast, clean builds.
The Corsair HXi Series is built with high-quality components, including all Japanese electrolytic capacitors, and is guaranteed to deliver clean, stable, continuous power, even at ambient temperatures as high as 50°C. HXi Series users can also install Corsair Link software to monitor power usage, efficiency, and fan speed.
80 Plus Platinum: High Efficiency – Low Heat HXi Series PSUs are 80 Plus Platinum certified, making them among the most efficient on the market. With efficiency of at least 92% at 50% load, your PC will remain cool and quiet, potentially saving money in the process.
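The money-saving claim is easy to put numbers on. A quick back-of-envelope sketch (the Bronze-class efficiency, usage hours, and electricity price below are our assumptions, not Corsair figures):

```python
# Wall draw = DC load / efficiency; everything above the DC load becomes heat.
def wall_draw(dc_load_w, efficiency):
    return dc_load_w / efficiency

load = 500.0                       # W DC: 50% load on the HX1000i
platinum = wall_draw(load, 0.92)   # >=92% efficient at 50% load, per the spec
bronze = wall_draw(load, 0.85)     # assumed ~85% for a typical 80 Plus Bronze unit

saved_w = bronze - platinum        # watts not wasted as heat
hours = 8 * 365                    # assumed 8 hours/day at this load
price = 0.12                       # assumed $/kWh
yearly_savings = saved_w * hours / 1000 * price

print(f"~{saved_w:.0f} W less heat, ~${yearly_savings:.2f}/year saved")
```

At a sustained 500W load the Platinum unit pulls roughly 45 fewer watts from the wall, which is both less heat in the room and a modest but real yearly saving.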
Corsair Link Ready While the HXi is an analog power supply, it features a built-in analog-to-digital bridge to communicate vital information to the Corsair Link software via USB. This allows the user to monitor and log fan speed, current and voltage on the +3.3V, +5V, and +12V rails, monitor power out, display power in and efficiency, and enable/disable OCP on the +12V rails.
Zero RPM Fan Mode offers silent operation at low to moderate loads. Thanks to a highly efficient design, the HXi Series power supplies generate minimal heat and are able to operate in a silent, zero RPM Fan Mode for up to 40% of the PSU’s maximum load (at 25°C room temperature). This means the HXi power supply can be completely silent while the PC is performing less intensive tasks. As the load and temperatures rise within the PSU, the thermally controlled fan gradually spins up for quiet operation even during more demanding computing.
Optimized for Low Noise Corsair continues to branch out beyond memory and power supplies and is paying close attention to fans and their applications. Within a PSU, the most important feature of a fan is high static pressure, allowing the fan to push air through the relatively high density of components. The NR135P intake fan was specifically designed to move more air through the power supply components with less noise. Fan blades are properly balanced to prevent resonance at higher RPMs and the fan features fluid dynamic bearings for quiet operation and long life.
In addition to the specially designed Corsair cooling fan the components on the HXi Series PCB are laid out to allow air to easily flow between them. The HXi PSUs also include fully modular cables made flat for easy installation and reduced airflow resistance.
Corsair HX1000i PSU Features summary:
• 1,000W continuous DC output (up to 50°C)
• 7-Year Warranty and Comprehensive Customer Support
• 80 PLUS Platinum certified, at least 92% efficiency under 50% load
• Corsair Link ready for real-time monitoring and control
• Fully modular cables for easy installation
• Flat ribbon-style, low profile cables help optimize airflow
• Zero RPM Fan Mode for silent operation up to 40% load
• Quiet fluid dynamic fan bearing for long life and quiet operation
• High quality components including all Japanese electrolytic capacitors
• Active Power Factor correction (0.99) with Universal AC input
• Safety Protections : OCP, OVP, UVP, SCP, OTP, and OPP
• MSRP for the HX1000i: $229.99 USD
Introduction and Technical Specifications
Courtesy of ASUS
The ASUS Maximus VII Formula motherboard is one of the newest members of the ROG (Republic of Gamers) product line, integrating several new features to elevate the board to an entirely new level over its predecessor. In outward appearance the Maximus VII Formula looks very similar to its previous revision, the Maximus VI Formula. However, ASUS made some under-the-hood enhancements and minor layout adjustments to the board, utilizing the functionality of the integrated Intel Z97 chipset. The Maximus VII Formula comes with a premium MSRP of $369.00, but is well worth the cost given the premium feature set and performance potential of the board.
Courtesy of ASUS
Courtesy of ASUS
Courtesy of ASUS
ASUS designed the Maximus VII Formula motherboard with a top-rated 8-phase digital power delivery system, combining 60A-rated BlackWing chokes, NexFET MOSFETs with a 90% efficiency rating, and 10k Japanese-sourced Black Metallic capacitors, for unprecedented system stability under any circumstance. Additionally, ASUS integrated their updated SupremeFX Formula audio system for superior audio fidelity through the integrated audio ports. The Maximus VII Formula contains the following features integrated into its design: six SATA 3 ports; an M.2 (NGFF) 10 Gb/s port integrated into the ASUS mPCIe Combo III card; two SATA Express 10 Gb/s ports; an Intel I218V GigE NIC; an AzureWave (Broadcom chipset) 802.11ac Wi-Fi and Bluetooth controller integrated into the ASUS mPCIe Combo III card; three PCI-Express Gen3 x16 slots; three PCI-Express Gen2 x1 slots; 2-digit diagnostic LED display; on-board power, reset, CMOS clear, KeyBot, MemOK!, BIOS Flashback, ROG Connect, and Sonic SoundStage buttons; Probelt voltage measurement points; OC Panel support; SupremeFX Formula 2014 audio solution; CrossChill hybrid air and water cooled copper-based VRM cooling solution; ROG Armor overlay; and USB 2.0 and 3.0 port support.
Quick Performance Comparison
Earlier this week, we posted a brief story that looked at the performance of Middle-earth: Shadow of Mordor on the latest GPUs from both NVIDIA and AMD. Last week also marked the release of the v1.11 patch for Sniper Elite 3 that introduced an integrated benchmark mode as well as support for AMD Mantle.
I decided that this was worth a quick look with the same line up of graphics cards that we used to test Shadow of Mordor. Let's see how the NVIDIA and AMD battle stacks up here.
For those unfamiliar with the Sniper Elite series, it focuses on the impact of an individual sniper on a particular conflict, and Sniper Elite 3 doesn't change that formula much. If you have ever seen video of a bullet slowly going through a body, allowing you to see the bones and muscle of the particular enemy being killed...you've probably been watching the Sniper Elite games.
Gore and such aside, the game is fun and combines sniper action with stealth and puzzles. It's worth a shot if you are the kind of gamer that likes to use the sniper rifles in other FPS titles.
But let's jump straight to performance. You'll notice that in this story we are not using our Frame Rating capture performance metrics. That is a direct result of wanting to compare Mantle to DX11 rendering paths - since we have no way to create an overlay for Mantle, we have resorted to using FRAPS and the integrated benchmark mode in Sniper Elite 3.
Our standard GPU test bed was used with a Core i7-3960X processor, an X79 motherboard, 16GB of DDR3 memory, and the latest drivers for both parties involved. That means we installed Catalyst 14.9 for AMD and 344.16 for NVIDIA. We'll be comparing the GeForce GTX 980 to the Radeon R9 290X, and the GTX 970 to the R9 290. We will also look at SLI/CrossFire scaling at the high end.
Introduction and Technical Specifications
be quiet! is a relative newcomer to the US computer component market with an award-winning reputation for high quality power supplies and components in their native Germany. Recently, they have branched out into the highly competitive cooling space with their Dark Rock and Shadow Rock cooler lines. The Dark Rock Pro 3 is the newest member of their Dark Rock cooler line, featuring a massive dual tower radiator and dual fan design. The Shadow Rock Slim is a recent addition to their Shadow Rock line, featuring a smaller footprint single radiator design to maximize motherboard compatibility. The Dark Rock Pro 3 comes with a premium MSRP of $89.99, while the Shadow Rock Slim is available at an MSRP of $49.99.
Courtesy of be quiet!
Courtesy of be quiet!
As a flagship solution, the Dark Rock Pro 3 cooler has most of the nice-to-haves found on top-end coolers from other manufacturers - copper base plate and heat pipes, nickel plating on all copper surfaces, thick dual radiator construction, dual fans, and a dense array of heat pipes to wick heat away from the CPU surface as fast as possible. be quiet! designed the cooler with dual fans, a 120mm front fan and a 135mm inner fan. Heat is transferred from the copper base plate to the dual aluminum radiators by seven copper heat pipes. All surfaces are nickel coated for corrosion protection and scratch resistance, with the nickel plating colored black to give the cooler a sleek and menacing appearance.
If there is one message that I get from NVIDIA's GeForce GTX 900M-series announcement, it is that laptop gaming is a first-class citizen in their product stack. Before even mentioning the products, the company provided relative performance differences between high-end desktops and laptops. Most of the rest of the slide deck is showing feature-parity with the desktop GTX 900-series, and a discussion about battery life.
First, the parts. Two products have been announced: The GeForce GTX 980M and the GeForce GTX 970M. Both are based on the 28nm Maxwell architecture. In terms of shading performance, the GTX 980M has a theoretical maximum of 3.189 TFLOPs, and the GTX 970M is calculated at 2.365 TFLOPs (at base clock). On the desktop, this is very close to the GeForce GTX 770 and the GeForce GTX 760 Ti, respectively. This metric is most useful when you're compute bandwidth-bound, at high resolution with complex shaders.
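Those TFLOPs figures can be sanity-checked with the standard formula: peak FLOPS = shader cores × clock × 2 (a fused multiply-add counts as two operations per cycle). The core counts and base clocks below are the published specifications for these parts (an assumption here, since they do not appear in the table above): 1536 cores at 1038 MHz for the GTX 980M and 1280 cores at 924 MHz for the GTX 970M.

```python
# Peak single-precision throughput: cores x clock x 2 FLOPs (FMA) per cycle.
def peak_tflops(cores, clock_mhz):
    return cores * clock_mhz * 1e6 * 2 / 1e12

print(round(peak_tflops(1536, 1038), 3))  # 3.189 -- matches the GTX 980M figure
print(round(peak_tflops(1280, 924), 3))   # 2.365 -- matches the GTX 970M figure
```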
The full specifications are:
|GTX 980M||GTX 970M|
|Memory||Up to 4GB||Up to 3GB|
|Memory Rate||2500 MHz||2500 MHz|
As for the features, it should be familiar for those paying attention to both desktop 900-series and the laptop 800M-series product launches. From desktop Maxwell, the 900M-series is getting VXGI, Dynamic Super Resolution, and Multi-Frame Sampled AA (MFAA). From the latest generation of Kepler laptops, the new GPUs are getting an updated BatteryBoost technology. From the rest of the GeForce ecosystem, they will also get GeForce Experience, ShadowPlay, and so forth.
For VXGI, DSR, and MFAA, please see Ryan's discussion for the desktop Maxwell launch. Information about these features is basically identical to what was given in September.
BatteryBoost, on the other hand, is a bit different. NVIDIA claims that the biggest change is just raw performance and efficiency, giving you more headroom to throttle. Perhaps more interesting though, is that GeForce Experience will allow separate one-click optimizations for both plugged-in and battery use cases.
The power efficiency demonstrated by the Maxwell GPU in Ryan's original GeForce GTX 980 and GTX 970 review is even more beneficial for the notebook market, where thermal designs are physically constrained. Longer battery life, as well as thinner and lighter gaming notebooks, will see tremendous advantages from a GPU that can run at near peak performance on the maximum power output of an integrated battery. In NVIDIA's presentation, they mention that while notebooks on AC power can use as much as 230 watts of power, batteries tend to peak around 100 watts. The fact that a full-speed, desktop-class GTX 980 has a TDP of 165 watts, compared to the 250 watts of a Radeon R9 290X, translates into notebook GPU performance that will more closely mirror its desktop brethren.
Of course, you probably will not buy your own laptop GPU; rather, you will be buying devices which integrate these. There are currently five designs across four manufacturers that are revealed (see image above). Three contain the GeForce GTX 980M, one has a GTX 970M, and the other has a pair of GTX 970Ms. Prices and availability are not yet announced.
In what can most definitely be called the best surprise of the fall game release schedule, the open-world action game set in the Lord of the Rings world, Middle-earth: Shadow of Mordor has been receiving impressive reviews from gamers and the media. (GiantBomb.com has a great look at it if you are new to the title.) What also might be a surprise to some is that the PC version of the game can be quite demanding on even the latest PC hardware, pulling in frame rates only in the low-60s at 2560x1440 with its top quality presets.
Late last week I spent a couple of days playing around with Shadow of Mordor as well as the integrated benchmark found inside the Options menu. I wanted to get an idea of the performance characteristics of the game to determine if we might include this in our full-time game testing suite update we are planning later in the fall. To get some sample information I decided to run through a couple of quality presets with the top two cards from NVIDIA and AMD and compare them.
Without a doubt, the visual style of Shadow of Mordor is stunning – with the game settings cranked up high the world, characters and fighting scenes look and feel amazing. To be clear, in the build up to this release we had really not heard anything from the developer or NVIDIA (there is an NVIDIA splash screen at the beginning) about the title which is out of the ordinary. If you are looking for a game that is both fun to play (I am 4+ hours in myself) and can provide a “wow” factor to show off your PC rig then this is definitely worth picking up.
Introduction and Test System Setup
A while ago, in our review of the WD Red 6TB HDD, we noted an issue with the performance of queued commands. This could potentially impact the performance of those drives in multithreaded usage scenarios. While Western Digital acted quickly to get updated drives into the supply chain, some of the first orders might have been shipped unpatched drives. To be clear, an unpatched 5TB or 6TB Red still performs well, just not as well as it *could* perform with the corrected firmware installed.
We received updated samples from WD, as well as applying a firmware update to the samples used in our original review. We were able to confirm that the update does in fact work, and brings a WD60EFRX-68MYMN0 to the identical and improved performance characteristics of a WD60EFRX-68MYMN1 (note the last digit). In this article we will briefly clarify those performance differences, now that we have data more consistent with the vast majority of 5 and 6TB Reds that are out in the wild.
Test System Setup
We currently employ a pair of testbeds: a newer ASUS P8Z77-V Pro/Thunderbolt and an ASUS Z87-PRO. Storage performance variance between the two boards has been deemed negligible.
PC Perspective would like to thank ASUS, Corsair, and Kingston for supplying some of the components of our test rigs.
|Hard Drive Test System Setup|
|CPU||Intel Core i7-4770K|
|Motherboard||ASUS P8Z77-V Pro/TB / ASUS Z87-PRO|
|Memory||Kingston HyperX 4GB DDR3-2133 CL9|
|Hard Drive||G.Skill 32GB SLC SSD|
|Video Card||Intel® HD Graphics 4600|
|Power Supply||Corsair CMPSU-650TX|
|Operating System||Windows 8.1 X64 (Update 1)|
- PCMark Vantage and 7
- HDTach *omitted due to incompatibility with >2TB devices*
- PCPer File Copy Test
Introduction and Design
A little over a year ago, we posted our review of the Lenovo Y500, which was a gaming notebook that leveraged not one, but two discrete video adapters (2 x NVIDIA GeForce GT 650M in SLI, to be exact) to achieve respectable gaming performance at a reasonable price point (around $1,200 at the time of the review).
Well—take away nearly a pound of weight (to 5.7 lbs), slim the case down to around an inch thick, update the chipset, and remove one video card, and you’ve got the Lenovo Y50 Touch, which ought to be able to improve upon the Y500 in nearly every area if the specifications add up to typical results. Here’s the full list of what our review unit includes:
While the GTX 860M (2 GB) is a far cry from, say, the GTX 880M (8 GB) we had the pleasure of testing in MSI’s GT70 2PE, it’s still a very capable card that should provide satisfactory results without breaking the bank (or the back). The rest of the spec sheet is conventional fare for a budget gaming notebook, with the only other surprise being the inclusion of a touchscreen—an option which replaces the traditional matte LCD panel in the standard Y50.
The configuration we received has already been slightly updated to include a CPU that’s a nudge better than the i7-4700HQ: the i7-4710HQ (which gains it 100 MHz in Turbo Boost clock rate). Otherwise, the specs are identical, and the street price is very close to that of the Y500 we originally reviewed: $1,139. Currently, an extra 10 bucks will also score you an external DVD+/-RW drive, and just 90 bucks more will boost your GTX 860M’s VRAM to 4 GB (from 2 GB) and your system RAM to 16 GB from 8 GB. That’s really not a bad deal at all.
Installation and Overview
While once a very popular way to cool your PC, the art of custom water loops tapered off in the early 2000s as the benefits of better cooling, and overclocking in general, met with diminishing returns. In its place grew a host of companies offering closed loop systems: individually sealed coolers for processors and even graphics cards that offered some of the benefits of standard water cooling (noise, performance) without the hassle of setting up a water cooling configuration manually.
A bit of a resurgence has occurred in the last year or two though where the art and styling provided by custom water loop cooling is starting to reassert itself into the PC enthusiast mindset. Some companies never left (EVGA being one of them), but it appears that many of the users are returning to it. Consider me part of that crowd.
During a live stream we held with EVGA's Jacob Freeman, the very first prototype of the EVGA Hydro Copper was shown and discussed. Lucky for us, I was able to coerce Jacob into leaving the water block with me for a few days to do some of our testing and see just how much capability we could pull out of the GM204 GPU and a GeForce GTX 980.
Our performance preview today will look at the water block itself, installation, performance and temperature control. Keep in mind that this is a very early prototype, the first one to make its way to US shores. There will definitely be some changes and updates (in both the hardware and the software support for overclocking) before final release in mid to late October. Should you consider this ~$150 Hydro Copper water block for your GTX 980?
Introduction and Features
Today we have a double header for your reading enjoyment. Not one, but two Platinum Series power supplies from Seasonic. The two latest additions to Seasonic’s flagship product line are the Platinum 1050W and Platinum 1200W PSUs. The power supplies feature tight voltage regulation (±1~2%), quiet operation (fanless mode), and high efficiency (80Plus Platinum certified). Both PSUs are fully modular and come backed by a 7-year warranty.
Seasonic is a well known and highly respected OEM that produces some of the best PC power supplies on the market today. In addition to supplying power supplies to many big-name companies who re-brand the units with their own name, Seasonic also sells a full line of power supplies under the Seasonic name. Both new power supplies feature an improved Hybrid Fan control circuit and upgraded copper conduction bars on the main PCB, which together help increase efficiency and performance.
Seasonic Platinum 1050W & 1200W Special Features
Ultra Tight Voltage Regulation Improved load voltage regulation keeps the voltage fluctuations on the 12V output within +2% and -0% (no negative tolerance), and on the 3.3V and 5V outputs between +1% and -1%, which (under 80 Plus load conditions) results in smooth and stable operation.
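What those tolerances mean in absolute terms is simple arithmetic on the figures quoted above:

```python
# Convert a percentage tolerance band into absolute voltage limits.
def band(nominal, minus_pct, plus_pct):
    return nominal * (1 - minus_pct / 100), nominal * (1 + plus_pct / 100)

# Tolerances from the spec: +12V is +2%/-0%, the minor rails are +/-1%.
for rail, nominal, minus, plus in [("+12V", 12.0, 0, 2),
                                   ("+5V", 5.0, 1, 1),
                                   ("+3.3V", 3.3, 1, 1)]:
    low, high = band(nominal, minus, plus)
    print(f"{rail}: {low:.3f} V to {high:.3f} V")
```

So the +12V rail is specified never to dip below exactly 12.000 V, with a ceiling of 12.240 V, while the +5V and +3.3V rails stay within 4.950-5.050 V and 3.267-3.333 V respectively.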
Seasonic Hybrid Silent Fan Control This industry-first, advanced three-phase thermal control balances silence and cooling. The Hybrid Silent Fan Control provides three operational stages: Fanless, Silent, and Cooling Mode. In addition, a selector switch allows manual selection between the Seasonic S2FC (fan control without Fanless Mode) and S3FC (fan control including Fanless Mode) settings.
Reduced Cooling Fan Hysteresis is achieved by a new fan control IC, which optimizes how frequently the fan switches on and off. At 25°C ambient temperature the fan turns on when the load rises above 30% (±5%) and turns off when the load drops below 20% (±5%). Due to this lag in response the fan switches on and off less frequently, which reduces power loss in Fanless and Silent Mode.
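The hysteresis behavior described above is a classic control pattern, sketched below with the 30%/20% thresholds from the text. The controller logic itself is our illustration of the concept, not Seasonic's actual IC behavior:

```python
# Hysteresis: separate on/off thresholds keep a load that hovers between
# them from rapidly toggling the fan.
class FanHysteresis:
    def __init__(self, on_pct=30, off_pct=20):
        self.on_pct = on_pct
        self.off_pct = off_pct
        self.running = False

    def update(self, load_pct):
        if not self.running and load_pct > self.on_pct:
            self.running = True          # load climbed past the on threshold
        elif self.running and load_pct < self.off_pct:
            self.running = False         # load fell below the off threshold
        return self.running

fan = FanHysteresis()
# A load oscillating between 22% and 28% never trips the fan...
print([fan.update(load) for load in [22, 28, 22, 28]])  # [False, False, False, False]
print(fan.update(35))  # True  -- crossed the 30% on threshold
print(fan.update(25))  # True  -- stays on until load drops below 20%
print(fan.update(15))  # False
```

With a single threshold, that same 22-28% oscillation would switch the fan on and off every sample; the 10-point gap is what eliminates the chatter.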
Dual Copper Conduction Bars on the power supply PCB help reduce impedance and minimize voltage drop, which further improves efficiency and performance.
80Plus Platinum The Platinum 1050W & 1200W power supplies are certified in accordance with the 80 PLUS organization's Platinum standard, offering performance and energy savings with ≥92% efficiency and a true power factor greater than 0.9.
Full Modular Design (DC to DC) The Seasonic Platinum Series power supplies feature an integrated DC connector panel with onboard VRMs (Voltage Regulator Modules), enabling near-perfect DC-to-DC conversion with reduced current loss and impedance and increased efficiency, as well as fully modular DC cabling for maximum flexibility of integration and forward compatibility.
Seasonic Platinum Series 1050W & 1200W PSU Key Features:
• 80Plus Platinum certified super high efficiency
• Ultra tight voltage regulation
• Fully Modular Cable design with flat ribbon-style cables
• Seasonic DC Connector Panel with integrated VRMs
• DC to DC Converter design
• Hybrid Silent Fan Control (3 modes of operation: Fanless, Silent and Cooling)
• High-quality Sanyo Denki San Ace dual ball bearing fan with PWM
• Ultra-tight voltage regulation (+2% and -0% +12V rail)
• Dual copper conduction bars on PCB for improved efficiency and performance
• Supports multi-GPU technologies
• Conductive polymer aluminum solid capacitors
• High reliability 105°C Japanese made electrolytic capacitors
• ErP Lot 6 2013 compliant and Intel Haswell processor ready
• High current Gold plated terminals with Easy Swap connectors
• Active PFC (0.99 PF typical) with Universal AC input
• 7-Year manufacturer's warranty worldwide
Introduction and Specifications
Today Micron lifted the review embargo on their new M600 SSD lineup. We covered their press launch a couple of weeks ago, but as a recap, the headline addition is the new Dynamic Write Acceleration feature. As this is a new (and untested) feature that completely changes the way an SSD must be tested, we will be diving deep into it later in this article. For the moment, let's dispense with the formalities.
Here are the samples we received for testing:
It's worth noting that since all M600 models use 16nm 128Gbit dies, packaging is expected to have a negligible impact on performance. This means the 256GB MSATA sample should perform equally to its 2.5" SATA counterpart. The same goes for comparisons against M.2 form factor units. More detail is present in the specs below:
Highlights from the above specs are the increased write speeds (no doubt thanks to Dynamic Write Acceleration) and improved endurance figures. For reference, the prior gen Micron models were rated at 72TB (mostly regardless of capacity), so seeing figures upwards of 400TB indicates Micron's confidence in their 16nm process.
Sorry to disappoint here, but the M600 is an OEM targeted drive, meaning its 'packaging' will likely be the computer it comes installed in. If you manage to find it through a reseller, it will likely come in OEM-style brown/white box packaging.
We have been evaluating these samples for just under a week and have logged *many* hours on them, so let's get to it!
One Small Step
While most articles on the iPhone 6 and iPhone 6 Plus thus far have focused on user experience and the larger screen sizes, performance, and in particular the effect of Apple's transition to the 20nm process node for the A8 SoC, has been our main question about these new phones. Naturally, I decided to put my personal iPhone 6 through our usual round of benchmarks.
First, let's start with 3DMark.
Comparing the 3DMark scores of the new Apple A8 to even the last generation A7 shows a smaller improvement than we are used to seeing generation-to-generation with Apple's custom ARM implementations. When you compare the A8 to something like the NVIDIA Tegra K1, which utilizes desktop-class GPU cores, the K1's overall score blows Apple out of the water. Even taking a look at the CPU-bound physics score, the K1 is still the winner.
A 78% performance advantage in overall score when compared to the A8 shows just how much of a powerhouse NVIDIA has with the K1. (Though clearly power envelopes are another matter entirely.)
If we look at more CPU benchmarks, like the browser-based Google Octane and SunSpider tests, the A8 starts to shine more.
While the A8 edges out the A7 to be the best performing device and 54% faster than the K1 in SunSpider, the A8 and K1 are neck and neck in the Google Octane benchmark.
Moving back to a graphics-heavy benchmark, GFXBench's Manhattan test, the Tegra K1 has a 75% performance advantage over the A8, though the A8 is 36% faster here than the previous A7 silicon.
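The percentage advantages quoted throughout these comparisons all follow the same arithmetic. As a quick illustration (the scores below are hypothetical placeholders, not the actual benchmark numbers, which are in the charts above):

```python
def percent_advantage(score_a: float, score_b: float) -> float:
    """Percentage by which score_a exceeds score_b."""
    return (score_a - score_b) / score_b * 100.0

# Hypothetical scores chosen only to make the math concrete.
k1_score, a8_score = 178.0, 100.0
print(round(percent_advantage(k1_score, a8_score)))  # 78
```

Note the baseline matters: a 78% advantage for the K1 over the A8 is not the same as the A8 being 78% slower.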
These early results are certainly a disappointment compared to the usual generation-to-generation performance increase we see with Apple SoCs.
However, the other aspect to look at is power efficiency. With normal use I have noticed a substantial increase in battery life on my iPhone 6 over the last generation iPhone 5S. While this may be partly due to a small (about 1 Wh) increase in battery capacity, I think more can be credited to this being an overall more efficient device. Choices like sticking to a highly optimized dual-core CPU design and quad-core GPU, as well as the move to the 20nm process node, all contribute to increased battery life while still surpassing the performance of the last generation Apple A7.
In that way, the A8 moves the bar forward for Apple and is a solid first attempt at using the 20nm silicon technology at TSMC. With further refined parts (like the expected A8X for the iPad revisions), Apple should be able to further surpass 28nm silicon in both performance and efficiency.
Introduction and Technical Specifications
Courtesy of EVGA
The X99 Classified motherboard is EVGA's premier product offering in their Intel X99 chipset motherboard line. The board supports Intel LGA2011-3 based processors along with DDR4 memory in a quad channel configuration. The X99 Classified synthesizes the innovations from EVGA's previous boards into a single flagship offering. A premium product like this comes at a premium price point, with an MSRP of $399.99.
Courtesy of EVGA
The X99 Classified has a 10 phase digital power system and high performance solid state capacitors integrated to power the CPU under any circumstances thrown its way. EVGA designed the following features into the X99 Classified: 10 SATA 3 ports; two M.2 PCIe x4 capable ports; dual Intel Gigabit NICs - an Intel I217 and an Intel I210; five PCI-Express x16 slots; one PCI-Express x4 slot; 2-digit diagnostic LED display; on-board power, reset, and dual CMOS clear buttons; triple BIOS selector and Turbo switches; PCIe disable switch jumper block; integrated Probe IT voltage measurement system; GPU Link headers and cables; and USB 2.0 and 3.0 port support.
Courtesy of EVGA
Technical Specifications (taken from the EVGA website)
|Microprocessor support||Intel Socket 2011-3 Processor|
|PCH||Intel X99 chipset|
|System Memory support||Supports Quad channel DDR4 up to 3000MHz+ (OC).
Supports up to 128GB of DDR4 memory.
|USB 2.0 Ports||8 x from Intel X99 PCH – 6x external, 2x internal
Supports hot plug
Supports wake-up from S1 and S3 mode
Supports USB 2.0 protocol up to a 480 Mbps transmission rate
|USB 3.0 Ports||6 x from Intel X99 PCH – 4x external, 2x internal
Supports transfer speeds up to 5Gbps
Backwards compatible USB 2.0 and USB 1.1 support
|SATA Ports||Intel X99 PCH Controller
6 x SATA 3/6G (600 MB/s) data transfer rate
- Support for RAID 0, RAID 1, RAID 5, and RAID 10
- Supports hot plug
4 x SATA3/6G AHCI Only
|Onboard LAN||1 x Intel i217 Gigabit Ethernet PHY
1 x Intel i210 Gigabit Ethernet MAC
Supports 10/100/1000 Mb/sec Ethernet
|Audio||Creative Core 3D (CA0132) Controller
6 Channel HD Audio
|PCI-E Slots||5 x PCI-E x16 Mechanical Slots
- Arrangement - 1 x16, 2 x16, 3 x8, 4 x8*
1 x PCI-E x4 Slot
*PCI-E lane distribution listed REQUIRES 40 lane CPU
|Operating Systems||Supports Windows 8 / 7|
|Size||EATX form factor
12 inches x 10.375 inches (305x264mm)
Here they come - the G-Sync monitors are finally arriving at our doors! A little over a month ago we got to review the ASUS ROG Swift PG278Q, a 2560x1440 144 Hz monitor that was the first retail-ready display to bring NVIDIA's variable refresh technology to consumers. It was a great first option with a high refresh rate along with support for ULMB (ultra low motion blur) technology, giving users a shot at either option.
Today we are taking a look at our second G-Sync monitor, which will hit the streets sometime in mid-October with an identical $799 price point. The Acer XB280HK is a 28-in 4K monitor with a maximum refresh rate of 60 Hz and, of course, support for NVIDIA G-Sync.
The Acer XB280HK, first announced at Computex in June, is the first 4K monitor on the market to be announced with support for variable refresh. It isn't that far behind the first low-cost 4K monitors to hit the market, period: the ASUS PB287Q and the Samsung U28D590D both shipped in May of 2014 with very similar feature sets, minus G-Sync. I discussed much of the general usability benefits (and issues) that arose when using a consumer 4K panel with Windows 8.1 in those reviews, so you'll want to be sure you read up on that in addition to the discussion of 4K + G-Sync we'll have today.
While we dive into the specifics on the Acer XB280HK monitor today, I will skip over most of the discussion about G-Sync, how it works and why we want it. In our ASUS PG278Q review I had a good, concise discussion on the technical background of NVIDIA G-Sync technology and how it improves gaming.
The idea of G-Sync is pretty easy to understand, though the implementation method can get a bit more hairy. G-Sync introduces a variable refresh rate to a monitor, allowing the display to refresh at a wide range of rates rather than at fixed intervals. More importantly, rather than the monitor dictating to the PC what rate this refresh occurs at, the graphics card now tells the monitor when to refresh in a properly configured G-Sync setup. This allows a monitor to match its refresh rate to the draw rate of the game being played (frames per second), and that simple change drastically improves the gaming experience for several reasons.
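The timing difference described above can be sketched with a toy model. This is an illustrative simplification (not how the G-Sync module actually works internally): with a fixed refresh, a finished frame waits for the next scheduled scanout tick, while with variable refresh the GPU triggers scanout as soon as the frame is ready.

```python
def display_times_fixed(frame_done_ms, refresh_hz=60):
    """Fixed refresh: a finished frame waits for the next scanout tick."""
    interval = 1000.0 / refresh_hz
    out = []
    for t in frame_done_ms:
        ticks = -(-t // interval)      # ceiling division: next tick at or after t
        out.append(ticks * interval)
    return out

def display_times_gsync(frame_done_ms):
    """Variable refresh: scanout happens as soon as each frame completes."""
    return list(frame_done_ms)

frames = [10.0, 30.0, 55.0]            # hypothetical frame completion times (ms)
print(display_times_fixed(frames))      # roughly 16.7, 33.3, 66.7 ms at 60 Hz
print(display_times_gsync(frames))      # 10.0, 30.0, 55.0 ms
```

The third frame illustrates the pain point of a fixed refresh: it misses a tick by a few milliseconds and is held back a full refresh interval, which is the stutter that variable refresh eliminates.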
Introduction and Technical Specifications
Courtesy of Noctua
Noctua is a well known player in the CPU cooling business, with a focus on high quality solutions that don't kill your eardrums. The NH-D15 cooler is their current flagship product, building upon the design of their much loved NH-D14 cooler for an even higher performance offering. The NH-D15 is composed of dual cooling towers threaded through by six heat pipes. The heat pipes and copper base are all nickel-plated, giving the unit the signature Noctua look. We put the NH-D15 up against other high-performance solutions to best gauge its cooling abilities. High performance comes at a cost, and the NH-D15 is no exception at a $99.99 MSRP.
Courtesy of Noctua
Courtesy of Noctua
Courtesy of Noctua
The NH-D15 incorporates everything that Noctua has learned in designing its NH-D14 and U-series coolers, resulting in an extreme performance product that maintains almost universal motherboard compatibility. The cooler features twin 150mm wide cooling towers with airflow provided by dual NF-A15 150mm, 1500RPM fans. Heat transfers from the copper base plate to the aluminum radiator towers via six copper heat pipes. The copper base and heat pipes are all nickel-plated, providing scratch and corrosion resistance without affecting thermal transfer capabilities. To ensure optimal acoustics, the NF-A15 fans have rubber guards on all four corners to minimize fan vibration and vibration transfer to the radiator. The CPU base plate is seamless and polished to a mirror finish, ensuring an optimal mating surface.
Investigating the issue
** Edit ** (24 Sep)
We have updated this story with temperature effects on the read speed of old data. Additional info on page 3.
** End edit **
** Edit 2 ** (26 Sep)
New quote from Samsung:
"We acknowledge the recent issue associated with the Samsung 840 EVO SSDs and are qualifying a firmware update to address the issue. While this issue only affects a small subset of all 840 EVO users, we regret any inconvenience experienced by our customers. A firmware update that resolves the issue will be available on the Samsung SSD website soon. We appreciate our customer’s support and patience as we work diligently to resolve this issue."
** End edit 2 **
** Edit 3 **
The firmware update and performance restoration tool has been tested. Results are found here.
** End edit 3 **
Over the past week or two, there have been growing rumblings from owners of Samsung 840 and 840 EVO SSDs. A few reports scattered across internet forums gradually snowballed into lengthy threads as more and more people took a longer look at their own TLC-based Samsung SSD's performance. I've spent the past week following these threads, and the past few days evaluating this issue on the 840 and 840 EVO samples we have here at PC Perspective. This post is meant to inform you of our current 'best guess' as to just what is happening with these drives, and just what you should do about it.
The issue at hand is an apparent slow down in the reading of 'stale' data on TLC-based Samsung SSDs. Allow me to demonstrate:
You might have seen what looks like similar issues before, but after much research and testing, I can say with some confidence that this is a completely different and unique issue. The old X25-M bug was the result of random writes to the drive over time, but the above result is from a drive that only ever saw a single large file written to a clean drive. The above drive was the very same 500GB 840 EVO sample used in our prior review. It did just fine in that review, and afterwards I needed a quick temporary place to put a HDD image file and just happened to grab that EVO. The file was written to the drive in December of 2013, and if it wasn't already apparent from the above HDTach pass, it was 442GB in size. This brings on some questions:
- If random writes (i.e. flash fragmentation) are not causing the slow down, then what is?
- How long does it take for this slow down to manifest after a file is written?
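For anyone wanting to check their own drive, the HDTach-style sequential pass above can be approximated with a simple chunked-read timer. This is a rough sketch, not the tool we used; note that the OS page cache will inflate results for recently written data, so on a real stale file you would want to drop caches or use unbuffered I/O first.

```python
import os
import tempfile
import time

def read_throughput_profile(path, chunk_mb=1):
    """Read a file sequentially in fixed-size chunks, recording MB/s per chunk,
    similar in spirit to an HDTach-style sequential read pass."""
    chunk = chunk_mb * 1024 * 1024
    speeds = []
    with open(path, 'rb') as f:
        while True:
            t0 = time.perf_counter()
            data = f.read(chunk)
            dt = time.perf_counter() - t0
            if not data:
                break
            speeds.append((len(data) / (1024 * 1024)) / max(dt, 1e-9))
    return speeds

# Demo against a small temporary file (a real check would target the old file):
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(os.urandom(4 * 1024 * 1024))
    demo_path = tf.name
profile = read_throughput_profile(demo_path)
print(f"{len(profile)} chunks, min {min(profile):.0f} MB/s")
os.remove(demo_path)
```

On an affected drive, chunks covering data written months ago would show throughput sagging toward the ~50-100 MB/s floor while freshly written regions read at full speed.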
The GM204 Architecture
James Clerk Maxwell's equations are the foundation of our society's knowledge of optics and electrical circuits. It is a fitting tribute from NVIDIA to use Maxwell as the code name for a GPU architecture, and NVIDIA hopes that the features, performance, and efficiency they have built into the GM204 GPU would impress Maxwell himself. Without giving away the conclusion here in the lead, I can tell you that I have never seen a GPU perform as well as this one has this week, all while reshaping the power efficiency discussion in equally dramatic fashion.
To be fair though, this isn't our first experience with the Maxwell architecture. With the release of the GeForce GTX 750 Ti and its GM107 GPU, NVIDIA put the industry on watch and let us all ponder if they could possibly bring such a design to a high end, enthusiast class market. The GTX 750 Ti brought a significantly lower power design to a market that desperately needed it, and we were even able to showcase that with some off-the-shelf PC upgrades, without the need for any kind of external power.
That was GM107 though; today's release is the GM204, indicating that not only are we seeing the larger cousin of the GTX 750 Ti but we also have at least some moderate GPU architecture and feature changes from the first run of Maxwell. The GeForce GTX 980 and GTX 970 are going to be taking on the best of the best products from the GeForce lineup as well as the AMD Radeon family of cards, with aggressive pricing and performance levels to match. And, for those who understand the technology at a fundamental level, you will likely be surprised by how little power it requires to achieve these goals. Toss in support for things like a new AA method, Dynamic Super Resolution, and even improved SLI performance, and you can see why doing it all on the same process technology is impressive.
The NVIDIA Maxwell GM204 Architecture
The NVIDIA Maxwell GM204 graphics processor was built from the ground up with an emphasis on power efficiency. As it was stated many times during the technical sessions we attended last week, the architecture team learned quite a bit while developing the Kepler-based Tegra K1 SoC and much of that filtered its way into the larger, much more powerful product you see today. This product is fast and efficient, but it was all done while working on the same TSMC 28nm process technology used on the Kepler GTX 680 and even AMD's Radeon R9 series of products.
The fundamental structure of GM204 is set up like the GM107 product shipped as the GTX 750 Ti. There is an array of GPCs (Graphics Processing Clusters), each comprised of multiple SMs (Streaming Multiprocessors, also called SMMs for this Maxwell derivative), alongside external memory controllers. The GM204 chip (the full implementation of which is found on the GTX 980) consists of 4 GPCs, 16 SMMs, and four 64-bit memory controllers.
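The configuration above multiplies out to the headline totals for the full chip. A quick back-of-the-envelope check (using the 128 CUDA cores per Maxwell SMM that carried over from GM107):

```python
# Totals for the full GM204 configuration described above (GTX 980).
GPCS = 4
SMMS_PER_GPC = 4             # 16 SMMs total on the full chip
CORES_PER_SMM = 128          # Maxwell SMMs carry 128 CUDA cores each
MEM_CONTROLLERS = 4
BITS_PER_CONTROLLER = 64

smms = GPCS * SMMS_PER_GPC
cuda_cores = smms * CORES_PER_SMM
bus_width = MEM_CONTROLLERS * BITS_PER_CONTROLLER

print(smms, cuda_cores, bus_width)  # 16 2048 256
```

That works out to 2048 CUDA cores on a 256-bit memory bus for the full GM204; cut-down parts like the GTX 970 disable some SMMs from this same layout.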