Introduction and Features
In this review we will be taking a detailed look at High Power’s new Astro GD 1200W power supply. All of the power supplies in the Astro GD Series are fully modular, have a single +12V output, and are 80 Plus Gold certified for high efficiency. There are currently sixteen different power supplies in the Astro Series and nine models in the fully modular Astro GD Series. The new AGD-1200F is king of the hill with the highest rated output of 1,200 watts.
Along with 80 Plus Gold certified high efficiency, the Astro GD-1200W power supply has been designed for quiet operation. It uses a dual ball bearing 135mm fan with smart fan speed control, which automatically switches between two operating modes: silent mode and cooling mode. Unlike some other power supplies that keep the fan turned off at low output, the AGD-1200's fan spins all the time. The Smart Fan Control adjusts the fan's operating mode automatically according to system load and ambient temperature: the fan starts out slow and quiet and gradually ramps up as the load increases. The PSU also incorporates an off-delay feature that keeps the fan spinning for a few seconds after the system is turned off.
High Power Astro GD-1200W PSU Key Features:
• 1,200W continuous DC output
• 80 PLUS Gold certified (87%~90% efficiency at 20-100% load)
• Silent Design (automatically adjusts between silent and cooling modes)
• Advanced DC-to-DC converters (3.3V and 5V)
• Fully modular cables for easy installation
• Flat ribbon-style, low profile cables help optimize airflow
• High quality components including all Japanese made capacitors
• Active Power Factor correction (0.99) with Universal AC input
• Safety Protections : OCP, OVP, UVP, SCP, OTP, and OPP
• MSRP for the Astro GD-1200W PSU: $239.99 USD
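As a back-of-the-envelope illustration of what those Gold-level efficiency numbers mean at the wall, here is a quick sketch. The efficiency figures used are the generic 80 Plus Gold thresholds (87/90/87% at 20/50/100% load), not measured results for this unit:

```python
# Sketch: AC power pulled from the outlet to deliver a given DC load.
# Efficiency values are the 80 Plus Gold certification thresholds, used
# here as assumptions; real measured efficiency for the AGD-1200 may differ.

RATED_OUTPUT_W = 1200

def wall_draw(dc_load_w, efficiency):
    """AC draw at the wall for a given DC output and efficiency."""
    return dc_load_w / efficiency

for load_pct, eff in [(0.20, 0.87), (0.50, 0.90), (1.00, 0.87)]:
    dc = RATED_OUTPUT_W * load_pct
    print(f"{dc:6.0f} W DC load -> {wall_draw(dc, eff):6.0f} W at the wall ({eff:.0%})")
```

The takeaway: at full load a Gold unit like this one still pulls roughly 1,380 watts from the outlet, which matters for circuit planning as much as for the power bill.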
Meet the Inateck barebones tool-free HDD enclosure
Recently Inateck sent over two products to test out, the FEU3NS-1 USB 3.0 HDD Tool Free External Enclosure and the BP2001 10W Bluetooth Stereo Speaker. Inateck has been around for a while; though originally their products were only available in the EU, they have recently expanded to North America. They sell a variety of peripherals such as PCIe USB cards, cables, and chargers, as well as Bluetooth input devices and mobile device protectors, in addition to external HDD enclosures and, of course, Bluetooth speakers.
The first product to take a look at is the USB 3.0 enclosure, which ships with a USB cable and manual in addition to the tool-free enclosure itself. It is a very simple product at a very low price, and it is small enough to stick in a laptop bag without leaving an unsightly bulge. The base model is currently $14 on Amazon, and for an extra $5 you can get one that supports USB Attached SCSI Protocol, allowing an SSD to hit full speed when installed in the enclosure. The USB 3.0 cable is a dual male cable; no proprietary plugs or breakable adapters are needed, and since enough power can be provided over USB, this is the only cable you will need. The only compatibility issue concerns the relatively uncommon 12mm 2.5" drives, which will not fit; 9.5mm and 7mm drives are both acceptable, and there is a removable cushion to keep your 7mm drive nice and snug.
It could be a good... start.
So this is what happens when you install pre-release software on a production machine.
Sure, I only trusted it as far as a second SSD with Windows 7 installed, but it would be fair to say that I immersed myself in the experience. It was also not the first time that I evaluated an upcoming Microsoft OS on my main machine, having done the same for Windows Vista and Windows 7 while each was in development. Windows 8 was the odd one out; it was relegated to my laptop. In this case, I was in the market for a new SSD anyway and was thus willing to give it a chance, versus installing Windows 7 again.
So far, my experience has been roughly positive, although the first two builds have been glitchy. In the first three days, I rebooted my computer more times than I had all year (I average about 1-2 reboots per month). It could be the Windows Key + Arrow Key combinations dropping randomly, Razer Synapse going on strike a couple of times until I reinstalled it, the four-or-so reboots required to install a new build, and so forth. You then also have the occasional issue of a Windows service (or DWM.exe) deciding to max out a core or two.
But it is pre-release software! That is all stuff to ignore. The only reason I am even mentioning it is so people do not follow in my footsteps and install it on their production machines, unless they are willing to have pockets of downtime here or there. Even then, the latest build, 9879, has been fairly stable. It has been installed all day and has not given me a single issue. This is good, because it is the last build we will get until 2015.
What we will not ignore is the features. For the first two builds, Windows 10 was annoying to use with multiple monitors. Supposedly to make it easier to align items, the mouse cursor would remain locked inside each monitor's boundary until you provided enough velocity for it to escape to the next one. This was the case with Windows 8.1 as well, but there you were given registry entries to disable the feature; those keys did not work with Windows 10. With Build 9879, however, the behavior seems to be disabled unless you are currently dragging a window, in which case a quick movement pulls the window between monitors while a slow movement performs a Snap.
This is me getting ready to snap a window on the edge between two monitors with just my mouse.
In a single build, they turned this feature from something I wanted to disable, to something that actually performs better (in my opinion) than Windows 7. It feels great.
Now on to a not-so-pleasant experience: updating builds.
Simply put, you can click "Check Now" and "Download Update" all you want, but it will just sit there doing nothing until it feels like acting. During the update from 9860 to 9879, I waited with the PC Settings app open for three hours. At some point, I got suspicious and decided to monitor network traffic: nothing. So I did the close-app, open-app, re-check dance a few times, and eventually gave up. About half an hour after I closed PC Settings for the last time, my network traffic spiked to the maximum my internet connection allows, which Task Manager attributed to a Windows service.
Shortly after, I was given the option to install the update. After finishing what I was doing, I clicked the install button and... it didn't seem to do anything. After about half an hour, it prompted me to restart my computer with a full-screen message that you cannot click past to save your open windows - it is reboot now or postpone for one or more hours, with no in-between. About twenty minutes (and four or five reboots) after I chose to reboot, I was back up and running.
Is that okay? Sure. When you update, you clearly need to do stuff and that could take your computer several minutes. It would be unrealistic to complain about a 20-minute install. The only real problem is that it waits for extended periods of time doing nothing (measured, literally nothing) until it decides that the time is right, and that time is NOW! It may have been three hours after you originally cared, but the time is NOW!
Come on Microsoft, let us know what is going on behind the scenes, and give us reliable options to pause or suspend the process before the big commitment moments.
So that is where I am, one highly positive experience and one slightly annoying one. Despite my concerns about Windows Store (which I have discussed at length in the past and are still valid) this operating system seems to be on a great path. It is a work in progress. I will keep you up to date, as my machine is kept up to date.
MSI Redefines AM3+ Value
It is no secret that AMD’s AM3+ motherboard ecosystem has languished for the past year or so, with very few new products hitting the scene. This is understandable since AMD has not updated the chipset options for AM3+, and only recently did they release updated processors in the form of the FX-8370 and FX-8370e. It has been two years since the release of the original FX-8350 and over a year since the high-TDP FX-9000 series parts. For better or for worse, AMD is pushing their APUs to consumers far harder than the aging AM3+ platform.
MSI has refined their "Gaming" series of products with a distinctive look that catches the eye.
This does not mean that the AM3+ ecosystem is no longer viable for AMD or consumers. While Intel has stayed ahead of AMD in terms of IPC, TDP, and process technology, the overall competitiveness of the latest AM3+ parts is still quite good when considering price. Yes, these CPUs will run hotter and pull more power than the Intel parts they compete against directly, but when we look at the prices of comparable motherboards and the CPUs themselves, AMD still holds a price/performance advantage. The AM3+ processors that feature six and eight cores (3 and 4 modules) are solid performers in a wide variety of applications. The top-end eight-core products compete well against the latest Intel parts in many gaming scenarios, as well as in productivity applications that leverage multiple threads.
When the Vishera based FX processors were initially introduced we saw an influx of new AM3+ designs that would support these new processors, as well as the planned 220 watt TDP variants that would emerge later. From that point on we have only seen a smattering of new products based on AM3+. From all the available roadmaps from AMD that we have seen, we do not expect there to be new products based on Steamroller or Excavator architectures on the AM3+ platform. AMD is relying on their HSA enabled APUs to retain marketshare and hopefully drive new software technologies that will leverage these products. The Future really is Fusion…
MSI is bucking this trend. The company still sees value in the AM3+ market, and they are introducing a new product that better fits the financial realities of that marketplace. We already have high-end boards from MSI, ASRock, Asus, and Gigabyte that are feature-packed and, for enthusiast motherboards, relatively inexpensive. On the other end of the spectrum we have barebones motherboards based on even older chipsets (SB710/750). In between we often see AMD 970 based boards that offer a tolerable mix of features attached to a low price.
The bundle is fair, but not exciting. It offers the basics to get a user up and running quickly.
The MSI 970 Gaming motherboard is a different beast compared to the rest of the market. It is a Gaming-branded board that offers a host of features that can be considered high end, while being offered for less than $100 US. MSI looks to explore this sweet spot with a motherboard that punches far above its weight class. This board is a classic balance of price vs. features, but it addresses that balance in a rather unique way. Part of it might be marketing, but a good chunk of it is smart, solid engineering.
Introduction: The Core Series Shrinks Down
Image credit: Fractal Design
The Core 1100 from Fractal Design is a small micro-ATX case, essentially a miniature version of the previously reviewed Core 3300. With its small dimensions the Core 1100 targets micro-ATX and mini-ITX builders, and provides another option not only in Fractal Design's budget lineup, but in the crowded budget enclosure market.
The price level for the Core 1100 has fluctuated a bit on Amazon since I began this review, with prices ranging from a high of $50 down to a low of just $39. It is currently $39.99 at Newegg, so the price should soon stabilize at Amazon and other retailers. At the ~$40 level this could easily be a compelling option for a smaller build, though admittedly the design of these Core series cases is purely functional. Ultimately any enclosure recommendation will depend on ease of use and thermal performance/noise, which is exactly what we will look at in this review.
Introduction, Specifications and Packaging
G.Skill is likely better known for their RAM offerings, but they have actually been in the SSD field since the early days. My first SSD RAID was on a pair of G.Skill Flash SSDs. While they were outmaneuvered by the X25-M, they were equipped with SLC flash, and G.Skill offered them at a significantly lower price than the Samsung OEM units they were based on.
Since those early days of flash, G.Skill has introduced a few additional models but has not been known as a major player in the SSD market. That is set to change today, with their introduction of the Phoenix Blade PCIe SSD:
If you're eager to know what is inside or how it works, I'll set your mind at ease with this brief summary. The Phoenix Blade is essentially an OCZ RevoDrive 350, but with beefier specs and improved performance. The same SandForce 2281 controllers and Toshiba flash are used. The differences come down to a smaller form factor (half-height vs. full-height PCIe) and the type of PCIe-to-SATA bridge chip used. More on that on the disassembly page.
Core M 5Y70 Specifications
Back in August of this year, Intel invited me out to Portland, Oregon to talk about the future of processors and process technology. Broadwell is the first microarchitecture to ship on Intel's newest 14nm process technology and the performance and power implications of it are as impressive as they are complex. We finally have the first retail product based on Broadwell-Y in our hands and I am eager to see how this combination of technology is going to be implemented.
If you have not read through my article that dives into the intricacies of the 14nm process and the architectural changes coming with Broadwell, then I would highly recommend that you do so before diving any further into this review. Our Intel Core M Processor: Broadwell Architecture and 14nm Process Reveal story clearly explains the "how" and "why" for many of the decisions that determined the direction the Core M 5Y70 heads in.
As I stated at the time:
"The information provided by Intel about Broadwell-Y today shows me the company is clearly innovating and iterating on its plans set in place years ago with the focus on power efficiency. Broadwell and the 14nm process technology will likely be another substantial leap between Intel and AMD in the x86 tablet space and should make an impact on other tablet markets (like Android) as long as pricing can remain competitive. That 14nm process gives Intel an advantage that no one else in the industry can claim and unless Intel begins fabricating processors for the competition (not completely out of the question), that will remain a house advantage."
With a background on Intel's goals with Broadwell-Y, let's look at the first true implementation.
Introduction: The HTPC Slims Down
There are many reasons to consider a home theater PC (HTPC) these days; aside from the full functionality of a personal computer, an HTPC can provide unlimited access to digital content from various sources. “Cord-cutting”, the term adopted for cancelling one’s cable or satellite TV service in favor of streaming content online, is gaining steam. Of course there are great self-contained streaming solutions like the Roku and Apple TV, and one doesn't have to be a cord-cutter to use an HTPC for TV content, as CableCARD users will probably tell you. But for those of us who want more control over our entertainment experience, the limitless options provided by a custom build make the HTPC compelling. Small form-factor (SFF) computing is easier than ever with the maturation of the Mini-ITX form factor and decreasing component costs.
The Case for HTPC
For many prospective HTPC builders the case is a major consideration rather than an afterthought (it certainly is for me, anyway). This computer build is not only going into the most visible room in many homes, but the level of noise generated by the system is of concern as well. Clearly, searching for the perfect enclosure for the living room can be a major undertaking depending on your needs and personal style. And as SFF computing has gained popularity in the marketplace there are a growing number of enclosures being introduced by various manufacturers, which can only help in the search for the perfect case.
A manufacturer new on the HTPC enclosure scene is a company called Perfect Home Theater, a distributor of high-end home theater components. The enclosures from P.H.T. are slick looking aluminum designs supporting the gamut of form-factors from ATX all the way down to thin mini-ITX. The owner of Perfect Home Theater, Zygmunt Wojewoda, is also the designer of the ultra low-profile enclosure we’re looking at today, the T-ITX-6.
As you can see it is a wide enclosure, built to match the width of standard components. And it’s really thin. Only 40mm tall, or 48mm total including the feet. Naturally this introduces more tradeoffs for the end user, as the build is strictly limited to thin mini-ITX motherboards. Though the enclosure is wide enough to theoretically house an ATX motherboard, the extremely low height would prevent it.
Since the introduction of the first low cost 4K TVs in the form of the SEIKI SE50UY04, and then into the wild world of MST 4K monitors from ASUS and others, and finally with the release of single stream low cost 4K panels, PC Perspective has been covering the monitor resolution revolution heavily. Just look at these reviews:
- SEIKI SE50UY04 50-in 4K 3840x2160 TV Unboxing and Preview
- SEIKI SE39UY04 39-in 4K 3840x2160 TV Unboxing and Overview
- ASUS PQ321Q 31.5-in 4K 60 Hz Tiled Monitor Review
- Samsung U28D590D 28-in 4K Single Stream 60 Hz Monitor
- ASUS PB287Q 4K UHD 28-in Monitor Review
- Acer XB280HK 28-in 4K G-Sync Monitor Review
Today we bring in another vendor's 4K consumer monitor and put it to the test, pitting it against the formidable options from ASUS, Samsung, Acer, and others. The Philips 288P6LJEB 4K 60 Hz monitor closely mirrors many of the specifications and qualities of other low-cost 4K panels, but with a couple of twists that help it stand out.
The Philips display uses a 28-in-class TN panel and offers a 60 Hz refresh rate over its DisplayPort 1.2 connection, but it adds connectivity that most other 4K panels in this price range leave off. Here are the specs from Philips:
Given that we are anticipating a launch of the Samsung 850 EVO very shortly, it is a good time to backfill the complete performance picture of the 850 Pro series. We have done several full-capacity roundups of various SSD models over the past months, and the common theme in all of them is that as the die count is reduced in lower capacity models, so is the parallelism that can be achieved. This effect varies based on the type of flash memory die used, but the end result is mostly an apparent reduction in write performance. Fueling this issue is the increase in flash memory die capacity over time.
There are two different ways to counteract the effects of write speed reductions caused by larger capacity / fewer dies:
- Reduce die capacity.
- Increase write performance per die.
Recently there has been a trend back towards *lower* capacity dies. Micron makes their 16nm flash in both 128Gbit and 64Gbit capacities. Shifting back towards 64Gbit dies in lower capacity SSD models keeps the die count up, increasing overall parallelism and therefore keeping write speeds and random IO performance relatively high.
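The die-count arithmetic above is easy to sketch. The capacities below are raw flash totals chosen for illustration; real drives reserve some flash for overprovisioning, so retail capacities differ:

```python
# Illustrative sketch: fewer, larger dies means less parallelism.
# Capacities are raw flash totals, not retail capacities.

def die_count(raw_gb, die_gbit):
    """How many flash dies it takes to reach a given raw capacity."""
    die_gb = die_gbit / 8  # convert die size from Gbit to GB
    return int(raw_gb / die_gb)

for raw in (128, 256, 512):
    print(f"{raw} GB raw: {die_count(raw, 128)} x 128Gbit dies vs "
          f"{die_count(raw, 64)} x 64Gbit dies")
```

A 128GB-class drive built from 128Gbit dies has only 8 dies to interleave writes across; build it from 64Gbit dies and the controller has 16, which is why the smaller dies help the smaller capacities most.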
Mini-ITX Sized Package with a Full Sized GPU
PC components seem to be getting smaller. Micro-ATX was once not very popular with mainstream enthusiasts, but that has changed as of late. Mini-ITX is now the hot form factor, with plenty of integrated features on motherboards and interesting case designs to house them. Enthusiast graphics cards tend to be big, and that is a problem for some of these small cases. Manufacturers are responding by squeezing every ounce of cooling performance into smaller cards that better fit these small chassis.
MSI is currently offering their midrange cards in mini-ITX form factors. The card we have today is the GTX 760 Mini-ITX Gaming. The GTX 760 is a fairly popular card because it is fairly quick but not too expensive. It is still based on the GK104, though heavily cut down from a fully functional die: the GTX 760 features 1152 CUDA cores divided into 6 SMXs, while a fully functional GK104 has 1536 CUDA cores and 8 SMXs. The stock clock on the GTX 760 is 980 MHz with a boost up to 1033 MHz.
The pricing for the GTX 760 cards is actually fairly high as compared to similarly performing products from AMD. NVIDIA feels that they offer a very solid product at that price and do not need to compete directly with AMD on a performance per dollar basis. Considering that NVIDIA has stayed very steady in terms of marketshare, they probably have a valid point. Overall the GTX 760 performs in the same general area as a R9 270X and R9 280, but again the AMD parts have a significant advantage in terms of price.
The challenges for making a high performing, small form factor card are focused on power delivery and thermal dissipation. Can the smaller PCB still have enough space for all of the VRMs required with such a design? Can the manufacturer develop a cooling solution that will keep the GPU in the designed thermal envelope? MSI has taken a shot at these issues with their GTX 760 Mini-ITX OC edition card.
Often times, one of the suggestions for what to do with older PC components is to dedicate them to a Home Theater PC. While in concept this might seem like a great idea (you can do a lot of things with full control over the box hooked up to your TV), I think it's a flawed concept.
With an HTPC, some of the most desired traits are low power consumption and quiet operation, all while maintaining a high enough performance level to do things like transcode video quickly. Older components that you have outgrown don't tend to be nearly as efficient as newer ones. To have a good HTPC experience, you really want to pick components from the ground up, which is why I was excited to take a look at the Steiger Dynamics Maven Core HTPC.
As it was shipped to us, our Maven Core is equipped with an Intel Core i5-4690K and an NVIDIA GTX 980. By utilizing two of the most power efficient architectures available, Intel's Haswell and NVIDIA's Maxwell, the Maven should be able to sip power while maintaining low temperature and noise. While a GTX 980 might be overkill for just HTPC applications, it opens up a lot of possibilities for couch-style PC gaming with things like Steam Big Picture mode.
From the outside, the hand-brushed aluminum Steiger Dynamics system takes the form of traditional high-end home theater gear. At 6.85-in tall, or almost 4U if you are comfortable with that measurement system, the Maven Core is a large device, but it does not stand out in a collection of AV equipment. Additionally, when you consider that the standard Blu-Ray drive and available Ceton InfiniTV Quad PCIe CableCARD tuner give this system the capability of replacing both a cable set top box and a dedicated Blu-Ray player altogether, the size becomes easier to deal with.
Digging deeper into the hardware specs of the Maven Core we find some familiar components. The Intel Core i5-4690K sits in an ASUS Z97-A motherboard along with 8GB of Corsair DDR3-1866 memory. For storage we have a 250GB Samsung 840 EVO SSD paired with a Western Digital 3TB Hard Drive for mass storage of your media.
Cooling for the CPU is provided by a Corsair H90 with a single Phanteks fan to help keep the noise down. Steiger Dynamics shipped our system with a Seasonic Platinum-series 650W power supply, including their custom cabling option: for $100, they will ship your system with custom, individually sleeved power supply and SATA drive cables. The sleeving and cable management are impressive, but $100 is a difficult upsell on a PC whose interior you are likely never going to see.
As we mentioned earlier, this machine also shipped with a Ceton InfiniTV 4 PCIe CableCARD tuner. While CableCARD is a much maligned technology that never really took off, when you get it working it can be impressive. Our impressions of the InfiniTV can be found later in this review.
When Intel revealed their miniature PC platform in 2012, the new “Next Unit of Computing” (NUC) was a tiny motherboard with a custom case, and admittedly very little compute power. Well, maybe not so much with the admittedly: “The Intel NUC is an ultra-compact form factor PC measuring 4-inch by 4-inch. Anything your tower PC can do, the Intel NUC can do and in 4 inches of real estate.” That was taken from Intel’s NUC introduction, and though their assertion was perhaps a bit premature, technology does continue its rapid advance in the small form-factor space. We aren’t there yet by any means, but the fact that a mini-ITX computer can be built with the power of an ATX rig (limited to single-GPU, of course) suggests that it could happen for a mini-PC in the not so distant future.
With NUC the focus was clearly on efficiency over performance, and with very low power and noise there were practical applications for such a device to offset the marginal "desktop" performance. The viability of a NUC would definitely depend on the user and their particular needs, of course. If you could find a place for such a device (such as a living room) it may have been worth the cost, as the first of the NUC kits were fairly expensive (around $300 and up) and did not include storage or memory. These days a mini PC can be found starting as low as $100 or so, but most still do not include any memory or storage. They are tiny barebones PC kits after all, so adding components is to be expected...right?
It’s been a couple of years now, and the platform continues to evolve - and shrink to some startlingly small sizes. Of the Intel-powered micro PC kits on today’s market the LIVA from ECS manages to push the boundaries of this category in both directions. In addition to boasting a ridiculously small size - actually the smallest in the world according to ECS - the LIVA is also very affordable. It carries a list price of just $179 (though it can be found for less), and that includes onboard memory and storage. And this is truly a Windows PC platform, with full Windows 8.1 driver support from ECS (previous versions are not supported).
A Civ for a New Generation
Turn-based strategy games have long been defined by the Civilization series. Civ 5 took up hours and hours of the PC Perspective team's non-working hours (and likely the working ones too), and it looks like the new Civilization: Beyond Earth has a chance to do the same. Early reviews of the game from Gamespot, IGN, and Polygon are quite positive, and that's great news for a PC-only release; they can sometimes get overlooked in the gaming media.
For us, the game offers an interesting opportunity to discuss performance. Beyond Earth is definitely going to be more CPU-bound than the other games we tend to use in our benchmark suite, but the fact that this game is new, shiny, and even has a Mantle implementation (AMD's custom API) makes it interesting for at least a look at the current state of performance. Both NVIDIA and AMD have released drivers with specific optimizations for Beyond Earth as well. This game is likely to be popular, and it deserves the attention it gets.
Civilization: Beyond Earth, a turn-based strategy game that can take a very long time to complete, ships with an integrated benchmark mode to help users and the industry test performance under different settings and hardware configurations. To enable it, you simply add "-benchmark results.csv" to the Steam game launch options and then start the game normally. Rather than taking you to the main menu, you'll be transported into a view of a map that represents a fairly typical game state for a long-running session. The benchmark uses whatever settings the game last ran at without the modified launch options, so be sure to configure those before you benchmark.
The output is the "results.csv" file, saved to your Steam game install root folder. In there, you'll find a list of numbers, separated by commas, representing the frame times for each frame rendered during the run. You don't get averages, a minimum, or a maximum without doing a little work. Fire up Excel or Google Docs and remember the formula:
Avg FPS = 1000 / Average(all frame times, in milliseconds)
It's a crude measurement that doesn't take into account any errors, spikes, or other interesting statistical data, but at least you'll have something to compare with your friends.
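For readers who would rather script it than spreadsheet it, here is a minimal Python sketch of the same math. It assumes the CSV contains nothing but frame times in milliseconds (one or more comma-separated values per line), as described above; the exact layout may vary by game version:

```python
# Summarize a Beyond Earth benchmark results.csv: average, worst, and best FPS.
# Assumes the file holds only frame times in milliseconds.
import csv

def summarize(path):
    with open(path, newline="") as f:
        times_ms = [float(v) for row in csv.reader(f) for v in row if v.strip()]
    avg_ms = sum(times_ms) / len(times_ms)
    return {
        "frames": len(times_ms),
        "avg_fps": 1000.0 / avg_ms,           # the 1000 / average formula above
        "worst_fps": 1000.0 / max(times_ms),  # slowest single frame of the run
        "best_fps": 1000.0 / min(times_ms),   # fastest single frame of the run
    }
```

Spikes and frame-time variance still deserve a closer look, but this at least gets you averages and extremes in one call.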
Our testing settings
Just as I have done in recent weeks with Shadow of Mordor and Sniper Elite 3, I ran some graphics cards through the testing process with Civilization: Beyond Earth. These include only the GeForce GTX 980 and Radeon R9 290X, along with SLI and CrossFire configurations. The R9 290X was run in both DX11 and Mantle.
- Core i7-3960X
- ASUS Rampage IV Extreme X79
- 16GB DDR3-1600
- GeForce GTX 980 Reference (344.48)
- ASUS R9 290X DirectCU II (14.9.2 Beta)
Mantle Additions and Improvements
AMD is proud of this release as it introduces a few interesting things alongside the inclusion of the Mantle API.
- Enhanced-quality Anti-Aliasing (EQAA): Improves anti-aliasing quality by doubling the coverage samples (vs. MSAA) at each AA level. This is automatically enabled for AMD users when AA is enabled in the game.
- Multi-threaded command buffering: Utilizing Mantle allows a game developer to queue a much wider flow of information between the graphics card and the CPU. This communication channel is especially good for multi-core CPUs, which have historically gone underutilized with higher-level APIs. You’ll see in your testing that Mantle makes a notable difference in smoothness and performance in high-draw-call late-game testing.
- Split-frame rendering: Mantle empowers a game developer with total control of multi-GPU systems. That “total control” allows them to design an mGPU renderer that best matches the design of their game. In the case of Civilization: Beyond Earth, Firaxis has selected a split-frame rendering (SFR) subsystem. SFR eliminates the latency penalties typically encountered by AFR configurations.
EQAA is an interesting feature as it improves (somewhat) on the quality of MSAA by doubling the coverage sample count while maintaining the same color sample count. So 4xEQAA has 4 color samples and 8 coverage samples, while 4xMSAA has 4 of each. Interestingly, Firaxis has decided that EQAA will be enabled in Beyond Earth anytime a Radeon card is detected (running in Mantle or DX11) and AA is enabled at all. So even though the menus might show 4xMSAA enabled, you are actually running 4xEQAA. For NVIDIA users, 4xMSAA means 4xMSAA. Performance differences should be negligible, though, according to AMD (who would actually be "hurt" by this decision if it brought down FPS).
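The sample-count bookkeeping is simple enough to sketch; this is purely illustrative of the relationship described above, not anything from the game's code:

```python
# Illustrative sketch: EQAA keeps MSAA's color sample count per pixel
# but doubles the coverage samples at the same nominal AA level.

def samples(aa_level, eqaa=False):
    """Return (color_samples, coverage_samples) for a given AA level."""
    return (aa_level, aa_level * 2 if eqaa else aa_level)

print(samples(4))             # 4xMSAA: (4, 4)
print(samples(4, eqaa=True))  # 4xEQAA: (4, 8)
```

Since color samples dominate the memory and shading cost, doubling only the coverage samples is how EQAA buys its quality bump nearly for free.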
Introduction and Technical Specifications
Courtesy of MSI
MSI upped the ante with their X99S Gaming 9 AC board, combining their black-and-red Dragon-inspired design with support for the newest Intel LGA2011-3 socket processors and DDR4 memory modules. The board features heat sinks over all the expected areas, as well as a large LED-lit heat sink over the X99 chipset. MSI also integrates an armor-style overlay covering the audio components and an overlay cover for the rear panel. One of the most interesting additions is the MSI Streaming Engine, touted to assist with graphics encoding, making up for the lack of an integrated graphics processor in the Haswell-E CPUs. As a flagship board, the MSI X99S Gaming 9 AC comes at a flagship price with an MSRP of $429.99.
Courtesy of MSI
Courtesy of MSI
MSI integrated an 8-phase digital power delivery system into the X99S Gaming 9 AC, combining Hi-C and Dark capacitors with super ferrite chokes for optimal power delivery with enhanced power efficiency characteristics. The board includes the following integrated features: eight SATA 3 ports; one SATA Express port; one M.2 PCIe x4 32 Gb/s port; a Qualcomm® Atheros Killer E2205 NIC; Intel 802.11ac Wi-Fi and Bluetooth; five PCI-Express x16 slots; a 2-digit diagnostic LED display; on-board power, reset, CMOS clear, and OC-Genie buttons; Slow Mode boot, Multi-BIOS, OC Genie mode, and Audio Power switches; Realtek audio solution with isolated audio PCB and Nichicon audio capacitors; dedicated per-channel headphone OP-AMPs; integrated V-Check voltage measurement points; Streaming Engine with integrated AVerMedia HD H.264 encoding chip; and USB 2.0 and 3.0 port support.
Courtesy of MSI
GeForce GTX 980M Performance Testing
When NVIDIA launched the GeForce GTX 980 and GTX 970 graphics cards last month, part of the discussion at our meetings centered around the mobile variants of Maxwell. The NDA was a bit later though, and Scott wrote up a short story announcing the release of the GTX 980M and the GTX 970M mobility GPUs. Both of these GPUs are based on the same GM204 design as the desktop cards, though, as you should have come to expect by now, they do so with lower specifications than the similarly-named desktop options. Take a look:
| | GTX 980M | GTX 970M | GTX 980 | GTX 970 | GTX 880M |
|---|---|---|---|---|---|
| Memory | Up to 4GB | Up to 3GB | 4GB | 4GB | 4GB/8GB |
| Memory Rate | 2500 MHz | 2500 MHz | 7.0 GT/s | 7.0 GT/s | 2500 MHz |
Just like the desktop models, GTX 980M and GTX 970M are built on the 28nm process technology and are tweaked and built for power efficiency - one of the reasons the mobile release of this product is so interesting.
With a CUDA core count of 1536, the GTX 980M has 25% fewer shader cores than the desktop GTX 980's 2048, along with a slightly lower base clock speed. The result is a peak theoretical performance of 3.189 TFLOPs, compared to 4.6 TFLOPs on the desktop GTX 980. In fact, that is only slightly higher than the Kepler-based GTX 880M, which clocks in with the same CUDA core count (1536) but a TFLOP capability of 2.9. Bear in mind that the GTX 880M is using a different architecture design than the GTX 980M; Maxwell's design advantages go beyond just CUDA core count and clock speed.
The GTX 970M is even smaller, with a CUDA core count of 1280 and peak performance rated at 2.365 TFLOPs. Also notice that the memory bus width has shrunk from 256-bit to 192-bit for this part.
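Those TFLOP figures follow from simple arithmetic: CUDA cores × 2 FLOPs per core per clock (one fused multiply-add) × clock speed. A quick sketch, where the clock values are back-calculated from the TFLOP numbers in the text and appear to correspond to the rated clocks of these parts:

```python
# Peak single-precision throughput = cores x 2 FLOPs/cycle (FMA) x clock.
# Clock values below are back-calculated from the TFLOP figures quoted
# in the text (an assumption on our part, not an official spec sheet).
def peak_tflops(cuda_cores, clock_mhz):
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

print(round(peak_tflops(1536, 1038), 3))  # ~3.189 (GTX 980M)
print(round(peak_tflops(1280, 924), 3))   # ~2.365 (GTX 970M)
```

Run the same math on the desktop GTX 980's 2048 cores at its higher clocks and you get the ~4.6 TFLOPs quoted above.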
As is typically the case with mobile GPUs, the memory speed of the GTX 980M and GTX 970M is significantly lower than that of the desktop parts. While the GeForce GTX 980 and 970 installed in your desktop PC have memory running at 7.0 GHz, the mobile versions run at 5.0 GHz in order to conserve power.
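Those memory speeds translate directly into peak bandwidth: effective data rate (GT/s) times bus width in bytes. A quick sketch of the arithmetic:

```python
# Peak memory bandwidth (GB/s) = effective rate (GT/s) x bus width (bytes).
def bandwidth_gbps(bus_bits, rate_gtps):
    return rate_gtps * bus_bits / 8

print(bandwidth_gbps(256, 7.0))  # 224.0 GB/s (desktop GTX 980, 256-bit)
print(bandwidth_gbps(256, 5.0))  # 160.0 GB/s (GTX 980M, 256-bit)
print(bandwidth_gbps(192, 5.0))  # 120.0 GB/s (GTX 970M, 192-bit)
```

The 970M's narrower 192-bit bus compounds the slower memory clock, which is why its bandwidth deficit is larger than the clock difference alone suggests.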
From a feature-set standpoint, though, the GTX 980M/970M are very much the same as the desktop parts that I looked at in September. You will have support for VXGI (NVIDIA's new custom global illumination technology), Multi-Frame AA, and, maybe most interestingly, Dynamic Super Resolution (DSR). DSR allows you to render a game at a higher resolution and then use a custom filter to downsample it back to your panel's native resolution. For mobile gamers using 1080p screens (as our test sample shipped with), this is a good way to utilize the power of your GPU on less demanding games while getting a surprisingly good image at the same time.
Introduction and Features
EZ-Clone in Standalone disk-cloning mode
Kingwin’s new EZ-Clone (Model: USI-2535CLU3) is an HDD/SSD adapter that can be used as a standalone disk-cloning device or as an external hard drive adapter. When used in standalone mode, the self-powered EZ-Clone can quickly clone one SATA/IDE drive to a new SATA drive in minutes (IDE to SATA or SATA to SATA) without being connected to a PC.
EZ-Clone being used as an external drive adapter
When used as an external drive adapter the EZ-Clone provides connectors for attaching two SATA drives (SSD or HDD) and one IDE hard drive in 2.5” or 3.5” form factors. The EZ-Clone adapter connects to a PC using the high-speed USB 3.0 interface. When used as an external drive adapter, the user can access up to two external drives at the same time (two SATA drives or one SATA and one IDE drive).
Kingwin EZ-Clone Key Features: (from the Kingwin website)
• EZ-Clone model: USI-2535CLU3
• External USB 3.0 to dual-SATA & single-IDE clone adapter
• Standalone disk duplicator with One-Touch Clone Button (no PC required)
• Supports 2.5” and 3.5” IDE and SATA drives (HDD or SSD)
• Compatible with SATA I/II/III (1.5/3.0/6.0 Gbps)
• SATA Drive Hot-swap compatibility
• Supports hard drives up to 3TB disk size
• Dual output power supply with standard 4-pin and SATA power connectors
• Up to 5 Gbps data transfer rate with USB 3.0 (also compatible with USB 2.0)
• USB Plug-and-play capability
• Two Drive LEDs (red) and four Clone Progress LEDs (blue)
• Screw-less, easy to attach connectors
• Windows and Mac OS compatible (no driver installation required)
• 1-Year Warranty from Kingwin
• MSRP $39.99 USD ($33.99 from Amazon.com, Oct. 2014)
** Edit **
The tool is now available for download from Samsung here. Another note is that they intend to release an ISO / DOS version of the tool at the end of the month (for Linux and Mac users). We assume this would be a file-system-agnostic version of the tool, which would either update all flash or wipe the drive. We suspect it would be the former.
** End edit **
As some of you may have been tracking, there was an issue with Samsung 840 EVO SSDs where ‘stale’ data (data which had not been touched for some period of time after being written) saw slower read speeds as the time since writing extended beyond a period of weeks or months. The rough effect was that the read speed of old data would begin to slow roughly one month after being written, and after a few more months would eventually fall to ~50-100 MB/sec, varying slightly with room temperature. Speeds would plateau at this low figure and, more importantly, no users reported lost data while this effect was taking place.
An example of file read speeds slowing relative to file age.
Since we first published on this, we have been coordinating with Samsung to learn the root causes of this issue, how they will be fixed, and we have most recently been testing a pre-release version of the fix for this issue. First let's look at the newest statement from Samsung:
Because of an error in the flash management software algorithm in the 840 EVO, a drop in performance occurs on data stored for a long period of time AND has been written only once. SSDs usually calibrate changes in the statuses of cells over time via the flash management software algorithm. Due to the error in the software algorithm, the 840 EVO performed read-retry processes aggressively, resulting in a drop in overall read performance. This only occurs if the data was kept in its initial cell without changing, and there are no symptoms of reduced read performance if the data was subsequently migrated from those cells or overwritten. In other words, as the SSD is used more and more over time, the performance decrease disappears naturally. For those who want to solve the issue quickly, this software restores the read performance by rewriting the old data. The time taken to complete the procedure depends on the amount of data stored.
This partially confirms my initial theory that the slowdown was related to cell voltage drift over time. Here's what that looks like:
As you can see above, cell voltages will shift to the left over time. The above example is for MLC. TLC in the EVO will have not 4 but 8 divisions, meaning even smaller voltage shifts might cause the apparent flipping of bits when a read is attempted. An important point here is that all flash does this - the key is to correct for it, and that correction is what was not happening with the EVO. The correction is quite simple really. If the controller sees errors during reading, it follows a procedure that in part adapts to and adjusts for cell drift by adjusting the voltage thresholds for how the bits are interpreted. With the thresholds adapted properly, the SSD can then read at full speed and without the need for error correction. This process was broken in the EVO, and that adaptation was not taking place, forcing the controller to perform error correction on *all* data once those voltages had drifted near their default thresholds. This slowed the read speed tremendously. Below is a worst case example:
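The drift-and-adapt behavior described above can be modeled with a toy sketch (entirely our own illustration, not Samsung's actual algorithm): cells are programmed to one of the TLC voltage levels, leak charge over time, and read back incorrectly until the read thresholds are shifted to match the drift.

```python
# Toy model of read-threshold adaptation (our illustration only, not
# Samsung's real flash-management code). Cells hold one of 8 TLC voltage
# levels; voltages drift downward over time. Reading against the original
# thresholds misclassifies drifted cells (forcing read-retry / ECC);
# shifting the thresholds by the same amount restores clean reads.
STEP = 1.0  # nominal spacing between adjacent voltage levels

def program(bits):
    return [b * STEP for b in bits]          # nominal voltage per cell

def drift(voltages, amount):
    return [v - amount for v in voltages]    # charge leaks: voltages fall

def read(voltages, threshold_shift=0.0):
    # Classify each cell against (possibly shifted) read thresholds.
    return [round((v + threshold_shift) / STEP) for v in voltages]

data = [0, 3, 7, 5, 1]
stale = drift(program(data), amount=0.6)     # months-old, never-rewritten data

print(read(stale))                       # misreads: retries/ECC would kick in
print(read(stale, threshold_shift=0.6))  # thresholds adapted: reads correctly
```

In the real drive the "threshold shift" is what the controller's calibration routine is supposed to learn on its own; the EVO bug meant it never did, so every read of stale data paid the error-correction penalty instead.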
We are happy to say that there is a fix, and while it won't be public until some time tomorrow, we have been green-lighted by Samsung to publish our findings.
Introduction and Technical Specifications
Courtesy of GIGABYTE
The Z97X-UD5H motherboard is one of the middle tier offerings in GIGABYTE's channel line of boards. GIGABYTE updated the previous revision of their UD5H board, integrating the Intel Z97 Express chipset as well as updated heat sink and power circuitry design. At an MSRP of $189.99, the Z97X-UD5H offers a premium feature set at an affordable price.
Courtesy of GIGABYTE
Courtesy of GIGABYTE
GIGABYTE designed the board in accordance with the latest revision of their Ultra Durable design specifications, integrating a 12-phase digital power system so that the board remains stable under any operating conditions. Ultra Durable brings several high-end power components into the board's design: International Rectifier (IR) manufactured PowIRstage™ ICs and PWM controllers, Nippon Chemi-con manufactured Black Solid capacitors with a 10k hour operational rating at 105C, 15 micron gold plating on the CPU socket pins, and two 0.070mm copper layers embedded into the PCB for optimal heat dissipation. The Z97X-UD5H motherboard includes the following integrated features: six SATA 3 ports; one SATA Express 10 Gb/s port; one M.2 10 Gb/s port; dual Gigabit NICs - an Intel I217-V NIC and a Qualcomm® Atheros Killer E2201 NIC; three PCI-Express x16 slots; two PCI-Express x1 slots; two PCI slots; a 2-digit diagnostic LED display; on-board power, reset, and CMOS clear buttons; Dual-BIOS and active BIOS switches; integrated voltage measurement points; and USB 2.0 and 3.0 port support.
Courtesy of GIGABYTE
SLI Setup and Testing Configuration
The idea of multi-GPU gaming is pretty simple on the surface. By adding another GPU to your gaming PC, the game and the driver are able to divide the workload of the game engine, send half of the work to one GPU and half to the other, and then combine that work onto your screen in the form of successive frames. This should make the average frame rate much higher, improve smoothness, and basically make the gaming experience better. However, implementations of multi-GPU technologies like NVIDIA SLI and AMD CrossFire are much more difficult than the simple explanation above suggests. We have traveled many steps in this journey, and while things have improved in several key areas, there is still plenty of work to be done in others.
As it turns out, support for GPUs beyond two seems to be one of those areas ready for improvement.
When the new NVIDIA GeForce GTX 980 launched last month, my initial review of the product included performance results for GTX 980 cards running in a 2-Way SLI configuration, by far the most common derivative. As it happens though, another set of reference GeForce GTX 980 cards found their way to our office, and of course we needed to explore the world of 3-Way and 4-Way SLI support and performance on the new Maxwell GPU.
The dirty secret of 3-Way and 4-Way SLI (and CrossFire, for that matter) is that it just doesn't work as well or as smoothly as 2-Way configurations. Much more work is put into standard SLI setups, as those are by far the most common, and it doesn't help that optimizing for 3-4 GPUs is more complex. Some games will scale well, others will scale poorly; hell, some even scale in the other direction.
Let's see what the current state of high GPU count SLI is with the GeForce GTX 980 and whether or not you should consider purchasing more than one of these new flagship parts.