If you’re a fan of digital video and music, you’ve likely heard the name “Plex” floating around. Plex (not to be confused with EVE Online’s in-game subscription commodity) is free media center software that lets users manage and stream a wide array of videos, audio files, and pictures to virtually any computer and a growing number of mobile devices and electronics. As a Plex user from the very beginning, I’ve seen the software change and evolve over the years into the versatile and powerful service it is today.
My goal with this article is twofold. First, as an avid Plex user, I’d like to introduce the software to users who have yet to hear about or try it. Second, for those already using or experimenting with Plex, I hope that I can provide some “best practices” when it comes to configuring your servers, managing your media, or just using the software in general.
Before we dive into the technical aspects of Plex, let’s look at a brief overview of the software’s history and the main components that comprise the Plex ecosystem today.
Although now widely supported on a range of platforms, Plex was born in early 2008 as an OS X fork of the Xbox Media Center project (XBMC). Lovingly named “OSXBMC” (get it?) by its creators, the software was initially a simple media player for Mac, with roughly the same capabilities as the XBMC project from which it was derived. (Note: XBMC changed its name to “Kodi” in August, although you’ll still find plenty of people referring to the software by its original name).
A few months into the project, the OSXBMC team decided to change the name to “Plex” and things really started to take off for the nascent media software. Unlike the XBMC/Kodi community, which focused its efforts primarily on the playback client, the Plex team decided to bifurcate the project with two distinct components: a dedicated media server and a dedicated playback client.
The dedicated media server made Plex unique among its media center peers. Once properly set up, it gave users with very little technical knowledge the ability to maintain a server that was capable of delivering their movies, TV shows, music, and pictures on demand throughout the house and, later, the world. We'll take a more detailed look at each of the Plex components next.
The “brains” behind the entire Plex ecosystem is Plex Media Server (PMS). This software, available for Windows, Linux, and OS X, manages your media database, metadata, and any necessary transcoding, which is one of its best features. Although far from error-free, the PMS encoding engine can convert virtually any video codec and container on the fly to a format requested by a client device. Want to play a high-bitrate 1080p MKV file with a 7.1 DTS-HD MA soundtrack on your Roku? No problem; Plex will seamlessly transcode that high quality source file to the proper format for Roku, as well as your iPad, or your Galaxy S5, and many other devices, all without having to store multiple copies of your video files.
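The per-client playback decision described above can be sketched roughly like this. This is a hypothetical Python model for illustration only; the profile table, codec lists, and function name are our own assumptions, not Plex's actual device database or API:

```python
# Hypothetical sketch of the decision a media server like Plex makes for
# each client: direct-play the file if the client supports the source
# container and codecs, otherwise transcode on the fly.

# Illustrative client profiles (assumed, not Plex's real device database)
CLIENT_PROFILES = {
    "roku": {"containers": {"mp4", "mkv"}, "video": {"h264"}, "audio": {"aac", "ac3"}},
    "ipad": {"containers": {"mp4"},        "video": {"h264"}, "audio": {"aac"}},
}

def playback_decision(client, container, vcodec, acodec):
    """Return 'direct-play' or 'transcode' for a given source file."""
    profile = CLIENT_PROFILES[client]
    if (container in profile["containers"]
            and vcodec in profile["video"]
            and acodec in profile["audio"]):
        return "direct-play"
    return "transcode"

# A 1080p MKV with a DTS soundtrack fails the Roku's audio check,
# so the server transcodes rather than refusing to play it.
print(playback_decision("roku", "mkv", "h264", "dts"))  # transcode
print(playback_decision("ipad", "mp4", "h264", "aac"))  # direct-play
```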
There are smart people that work at AMD. A quick look at the company's products, including the APU lineup as well as the discrete GPU fields, clearly indicates a lineup of talent in engineering, design, marketing and business. It's not perfect of course, and very few companies can claim to be, but the strengths of AMD are there and easily discernible to those of us on the outside looking in with the correct vision.
Because AMD has smart people working hard to improve the company, they are also aware of its shortcomings. For many years now, the thorn of GPU software has been sticking in AMD's side, tarnishing the name of Radeon and the products it releases. Even though the Catalyst graphics driver has improved substantially year after year, the truth is that NVIDIA's driver team has consistently kept ahead of AMD in basically all regards: features, driver installation, driver stability, and performance improvements over time.
If knowing is half the battle, acting on that knowledge is at least another 49%. AMD is hoping to address driver concerns now and into the future with the release of the Catalyst Omega driver. This driver sets itself apart from previous releases in several different ways, starting with a host of new features, some incremental performance improvements and a drastically amped up testing and validation process.
AMD considers this a "special edition" driver, something they plan to repeat on a yearly basis. That note in itself is an interesting point - is that often enough to really change the experience and perception of the Catalyst driver program going forward? Though AMD does include some specific numbers of tested cases for its validation of the Omega driver (441,000+ automated test runs, 11,000+ manual test runs), we don't have side by side data from NVIDIA to compare it to. If AMD is only doing a roundup of testing like this once a year, but NVIDIA does it more often, then AMD might soon find itself back in the same position it has been in.
UPDATE: There has been some confusion based on this story that I want to correct. AMD informed us that it is still planning on releasing other drivers throughout the year that will address performance updates for specific games and bug fixes for applications and titles released between today and the pending update for the next "special edition." AMD is NOT saying that they will only have a driver drop once a year.
But before we worry about what's going to happen in the future, let's look into what AMD has changed and added to the new Catalyst Omega driver released today.
Introduction, Specifications and Packaging
Mid last year, Samsung introduced the 840 EVO. This was their evolutionary step from the 840 Pro, which had launched a year prior. While the Pro was a performance MLC SSD, the EVO was TLC, and for most typical use it proved just as speedy. The reason for this was Samsung’s inclusion of a small SLC cache on each TLC die. Dubbed TurboWrite, this write-back cache gave the EVO the best write performance of any TLC-based SSD on the market. Samsung had also introduced RAPID, a DRAM-based caching mode included with their Magician value-added software. The EVO has been among the top selling SSDs since its launch, despite a small hiccup quickly corrected by Samsung.
Fast forward to June of this year, when we saw the 850 Pro. Having tested the waters with 24-layer 3D VNAND, Samsung revised this design, increasing the layer count to 32 and reducing the die capacity from 128Gbit to 86Gbit. The smaller die capacity enables a 50% performance gain, stacked on top of the 100% write speed gain accomplished by the reduced cross talk of the 3D VNAND architecture. These changes did great things for the performance of the 850 Pro, especially in the lower capacities. While competing 120/128GB SSDs were typically limited to 150 MB/sec write speeds, the 128GB 850 Pro cruises along at over 3x that speed, nearly saturating the SATA interface. The performance might have been great, but so was the cost - 850 Pros have stuck around $0.70/GB since their launch, forcing budget conscious upgraders to seek competing solutions. What we needed was an 850 EVO, and now I can happily say here it is:
As the 840 EVO was a pretty big deal, I believe the 850 EVO has an equal chance of success, so instead of going for a capacity roundup, this first piece will cover the 120GB and 500GB capacities. A surprising number of our readers run a pair of smaller capacity 840 EVOs in a RAID, so we will be testing a matched pair of 850 EVOs in RAID-0. To demonstrate the transparent performance boosting of RAPID, I’ll also run both capacities through our full test suite with RAPID mode enabled. There is lots of testing to get through, so let’s get cracking!
In the last few years NZXT has emerged as a popular choice for computer builds with stylish cases for a variety of needs. The newest member of the H series, the H440, promises quiet performance and offers a clean look by eliminating optical drive bays entirely from the design. While this might be a deal-breaker for some, the days of the ODD seem to be numbered as more enclosures are making the move away from the 5.25" bay.
Image credit: NZXT
But we aren't looking at just any H440 today, as NZXT has sent along a completely custom version designed in alliance with gaming accessory maker Razer to be "the ultimate gamer's chassis". (This case is currently available direct from NZXT's online store.) In this review we'll look at just what makes this H440 different, and test out a complete build while we're at it. Performance will be as big a metric as appearance here since the H440 is after all an enclosure designed for silence, with noise dampening an integral part of NZXT's construction of the case.
Green with Envy?
From the outset you'll notice the Razer branding extends beyond just special paint and trim, as custom lighting is installed right out of the box to give this incarnation of the H440 a little more gaming personality (though this lighting can be switched off, if desired). Not only do the front and side logos and power button light up green, but the bottom of the case features effects lighting to cast an eerie green glow on your desktop or floor.
Image credit: NZXT
Introduction and Technical Specifications
Courtesy of ASUS
The ASUS Maximus VII Impact motherboard is among ASUS' ROG (Republic of Gamers) board offerings in their Intel Z97 Express product line. The board builds on the strengths of its predecessor with a similar layout and add-in card design implementation. ASUS augmented the new version of the board with an updated chipset as well as additional support for the latest storage and audio technologies. The Maximus VII Impact carries a premium price of $239.99 for its small stature, but it comes packed with enough features and power to more than justify the cost.
Courtesy of ASUS
Courtesy of ASUS
Courtesy of ASUS
ASUS did not pull any punches in designing the Maximus VII Impact board, integrating a similar 8-phase digital power system to the one found on the Maximus VII Formula ATX board. The power system combines 60A-rated BlackWing chokes, NexFET MOSFETs with a 90% efficiency rating, and 10K Japanese-sourced Black Metallic capacitors onto an upright daughterboard to minimize the footprint of those components. Additionally, ASUS integrated their updated SupremeFX Impact II audio system for superior audio fidelity via the included add-in card.
We’ve been tracking NVIDIA’s G-Sync for quite a while now. The comments section on Ryan’s initial article erupted with questions, and many of those were answered in a follow-on interview with NVIDIA’s Tom Petersen. The idea was radical – do away with the traditional fixed refresh rate and only send a new frame to the display when it has just completed rendering by the GPU. There are many benefits here, but the short version is that you get the low-latency benefit of V-SYNC OFF gaming combined with the image quality (lack of tearing) that you would see if V-SYNC was ON. Despite the many benefits, there are some potential disadvantages that come from attempting to drive an LCD panel at varying periods of time, as opposed to the fixed intervals that have been the norm for over a decade.
As the first round of samples came to us for review, the current leader appeared to be the ASUS ROG Swift. A G-Sync 144 Hz display at 1440p was sure to appeal to gamers who wanted faster response than the 4K 60 Hz G-Sync alternative was capable of. Due to what seemed to be large consumer demand, it has taken some time to get these panels into the hands of consumers. As our Storage Editor, I decided it was time to upgrade my home system, placed a pre-order, and waited with anticipation of finally being able to shift from my trusty Dell 3007WFP-HC to a large panel that can handle >2x the FPS.
Fast forward to last week. My pair of ROG Swifts arrived, and some other folks I knew had also received theirs. Before I could set mine up and get some quality gaming time in, my bro FifthDread and his wife both noted a very obvious flicker on their Swifts within the first few minutes of hooking them up. They reported the flicker during game loading screens and mid-game during background content loading in some RTS titles. Prior to hearing from them, the most I had seen were some conflicting and contradictory reports on various forums (not limited to the Swift, though that is the earliest panel and would therefore see the majority of early reports), but now we had something more solid to go on. That night I fired up my own Swift and immediately got to doing what I do best - trying to break things. We have reproduced the issue and intend to demonstrate it in a measurable way, mostly to put some actual data out there to go along with those trying to describe something that is borderline perceptible for mere fractions of a second.
First a bit of misnomer correction / foundation laying:
- The ‘Screen refresh rate’ option you see in Windows Display Properties is actually a carryover from the CRT days. In terms of an LCD, it is the maximum rate at which a frame is output to the display. It is not representative of the frequency at which the LCD panel itself is refreshed by the display logic.
- LCD panel pixels are periodically updated by a scan, typically from top to bottom. Newer / higher quality panels repeat this process at a rate higher than 60 Hz in order to reduce the ‘rolling shutter’ effect seen when panning scenes or windows across the screen.
- In order to engineer faster responding pixels, manufacturers must deal with the side effect of faster pixel decay between refreshes. This is balanced by increasing the frequency of scanning out to the panel.
- The effect we are going to cover here has nothing to do with motion blur, LightBoost, backlight PWM, or LightBoost combined with G-Sync (not currently a thing; even though Blur Busters has theorized on how it could work, their method would not work with how G-Sync is actually implemented today).
With all of that out of the way, let’s tackle what folks out there may be seeing on their own variable refresh rate displays. Based on our testing so far, the flicker only presented at times when a game enters a 'stalled' state. These are periods where you would see a split-second freeze in the action, like during a background level load during game play in some titles. It also appears during some game level load screens, but as those are normally static scenes, they would have gone unnoticed on fixed refresh rate panels. Since we were absolutely able to see that something was happening, we wanted to be able to catch it in the act and measure it, so we rooted around the lab and put together some gear to do so. It’s not a perfect solution by any means, but we only needed to observe differences between the smooth gaming and the ‘stalled state’ where the flicker was readily observable. Once the solder dust settled, we fired up a game that we knew could instantaneously swing from a high FPS (144) to a stalled state (0 FPS) and back again. As it turns out, EVE Online does this exact thing while taking an in-game screen shot, so we used that for our initial testing. Here’s what the brightness of a small segment of the ROG Swift does during this very event:
Measured panel section brightness over time during a 'stall' event. Click to enlarge.
The relatively small ripple to the left and right of center demonstrates the panel output at just under 144 FPS. Panel redraw is in sync with the frames coming from the GPU at this rate. The center section, however, represents what takes place when the input from the GPU suddenly drops to zero. In the above case, the game briefly stalled, then resumed a few frames at 144, then stalled again for a much longer period of time. Completely stopping the panel refresh would result in all TN pixels bleeding towards white, so G-Sync has a built-in failsafe to prevent this by forcing a redraw every ~33 msec. What you are seeing are the pixels intermittently bleeding towards white and periodically being pulled back down to the appropriate brightness by a scan. The low latency panel used in the ROG Swift does this all of the time, but it is less noticeable at 144 Hz, as you can see on the left and right edges of the graph. Another thing happening here is an apparent rise in average brightness during the event. We are still researching the cause of this on our end, but this brightness increase certainly helps draw attention to the flicker event, making it even more perceptible to those who might not otherwise have noticed it.
Some of you might be wondering why this same effect is not seen when a game drops to 30 FPS (or even lower) during the course of normal game play. While the original G-Sync upgrade kit implementation simply waited until 33 msec had passed until forcing an additional redraw, this introduced judder from 25-30 FPS. Based on our observations and testing, it appears that NVIDIA has corrected this in the retail G-Sync panels with an algorithm that intelligently re-scans at even multiples of the input frame rate in order to keep the redraw rate relatively high, and therefore keeping flicker imperceptible – even at very low continuous frame rates.
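Our inference about that re-scan behavior can be sketched numerically. To be clear, this is a hypothetical model of what we observed, not NVIDIA's actual firmware logic; the panel maximum and the 85 Hz "flicker floor" are assumed values for illustration:

```python
# Sketch of a multiple-scan scheme: scan each incoming frame out to the
# panel an integer number of times, so the effective scan rate stays high
# enough that pixel decay (and thus flicker) remains imperceptible.

PANEL_MAX_HZ = 144   # maximum refresh rate of the panel (ROG Swift)
MIN_SCAN_HZ = 85     # assumed floor below which decay becomes visible

def effective_scan_rate(input_fps):
    """Return (multiple, scan_hz): each frame is scanned 'multiple' times."""
    multiple = 1
    while (input_fps * multiple < MIN_SCAN_HZ
           and input_fps * (multiple + 1) <= PANEL_MAX_HZ):
        multiple += 1
    return multiple, input_fps * multiple

for fps in (144, 60, 30, 24):
    m, hz = effective_scan_rate(fps)
    print(f"{fps:>3} FPS -> scan each frame {m}x = {hz} Hz")
```

At a continuous 30 FPS this model scans each frame three times (90 Hz at the panel), which is consistent with why flicker is not seen at low but steady frame rates, only during sudden drops to 0 FPS.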
A few final points before we go:
- This is not limited to the ROG Swift. All variable refresh panels we have tested (including 4K) see this effect to a greater or lesser degree than reported here. Again, this only occurs when games instantaneously drop to 0 FPS, and not when those games dip into low frame rates in a continuous fashion.
- The effect is less perceptible (both visually and with recorded data) at lower maximum refresh rate settings.
- The effect is not present at fixed refresh rates (G-Sync disabled or with non G-Sync panels).
This post was primarily meant as a status update and to serve as something for G-Sync users to point to when attempting to explain the flicker they are perceiving. We will continue researching, collecting data, and coordinating with NVIDIA on this issue, and will report back once we have more to discuss.
During the research and drafting of this piece, we reached out to and worked with NVIDIA to discuss this issue. Here is their statement:
"All LCD pixel values relax after refreshing. As a result, the brightness value that is set during the LCD’s scanline update slowly relaxes until the next refresh.
This means all LCDs have some slight variation in brightness. In this case, lower frequency refreshes will appear slightly brighter than high frequency refreshes by 1 – 2%.
When games are running normally (i.e., not waiting at a load screen, nor a screen capture) - users will never see this slight variation in brightness value. In the rare cases where frame rates can plummet to very low levels, there is a very slight brightness variation (barely perceptible to the human eye), which disappears when normal operation resumes."
So there you have it. It's basically down to the physics of how an LCD panel works at varying refresh rates. While I agree that it is a rare occurrence, there are some games that present this scenario more frequently (and noticeably) than others. If you've noticed this effect in some games more than others, let us know in the comments section below.
(Editor's Note: We are continuing to work with NVIDIA on this issue and hope to find a way to alleviate the flickering with either a hardware or software change in the future.)
It has been a couple of months since the release of the GeForce GTX 970 and the GM204 GPU that it is based on. After the initial wave of stock on day one, NVIDIA had admittedly struggled to keep these products available. Couple that with rampant concerns over coil whine from some non-reference designs, and you could see why we were a bit hesitant to focus and spend our time on retail GTX 970 reviews.
These issues appear to be settled for the most part. Finding GeForce GTX 970 cards is no longer a problem and users with coil whine are getting RMA replacements from NVIDIA's partners. Because of that, we feel much more comfortable reporting our results with the various retail cards that we have in house, and you'll see quite a few reviews coming from PC Perspective in the coming weeks.
But let's start with the MSI GeForce GTX 970 4GB Gaming card. Based on user reviews, this is one of the most popular retail cards. MSI's Gaming series combines a custom cooler that typically runs quieter and more efficiently than the reference design with a price tag that is within arm's reach of the lower cost options.
The MSI GeForce GTX 970 4GB Gaming
MSI continues with its Dragon Army branding, and its associated black/red color scheme, which I think is appealing to a wide range of users. I'm sure NVIDIA would like to see a green or neutral color scheme, but hey, there are only so many colors to go around.
It has become increasingly apparent that flash memory die shrinks have hit a bit of a brick wall in recent years. The issues faced by the standard 2D Planar NAND process were apparent very early on. This was no real secret - here's a slide seen at the 2009 Flash Memory Summit:
Despite this, most flash manufacturers pushed the envelope as far as they could within the limits of 2D process technology, balancing shrinks with reliability and performance. One of the largest flash manufacturers was Intel, having joined forces with Micron in a joint venture dubbed IMFT (Intel Micron Flash Technologies). Intel remained in lock-step with Micron all the way up to 20nm, but chose to hold back at the 16nm step, presumably in order to shift full focus towards alternative flash technologies. This was essentially confirmed late last week, with Intel's announcement of a shift to 3D NAND production.
Intel's press briefing seemed to focus more on cost efficiency than performance, and after reviewing the very few specs they released about this new flash, I believe we can do some theorizing as to the potential performance of this new flash memory. From the above illustration, you can see that Intel has chosen to go with the same sort of 3D technology used by Samsung - a 32 layer vertical stack of flash cells. This requires the use of an older / larger process technology, as it is too difficult to etch these holes at a 2x nm size. What keeps the die size reasonable is the fact that you get a 32x increase in bit density. Going off of a rough approximation from the above photo, imagine that 50nm die (8 Gbit), but with 32 vertical NAND layers. That would yield a 256 Gbit (32 GB) die within roughly the same footprint.
Representation of Samsung's 3D VNAND in 128Gbit and 86 Gbit variants.
20nm planar (2D) = yellow square, 16nm planar (2D) = blue square.
Image republished with permission from Schiltron Corporation.
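The density arithmetic above is easy to check. The 8 Gbit planar figure is the rough approximation taken from the photo, as noted:

```python
# Back-of-the-envelope check of the density claim: a 32-layer stack on an
# older planar footprint that held 8 Gbit yields 32x the bits per die.

layers = 32
planar_die_gbit = 8                     # ~50nm-class planar die from the photo
stacked_gbit = planar_die_gbit * layers # bits scale with layer count
stacked_gb = stacked_gbit // 8          # 8 bits per byte

print(f"{stacked_gbit} Gbit = {stacked_gb} GB per die")  # 256 Gbit = 32 GB
```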
It's likely a safe bet that IMFT flash will be going for a cost/GB far cheaper than the competing Samsung VNAND, and going with a relatively large 256 Gbit (vs. VNAND's 86 Gbit) per-die capacity is a smart move there, but let's not forget that there is a catch - write speed. Most NAND is very fast on reads, but limited on writes. Shifting from 2D to 3D NAND netted Samsung a 2x speed boost per die, and another effective 1.5x speed boost due to their choice to reduce per-die capacity from 128 Gbit to 86 Gbit. This effective speed boost came from the fact that a given VNAND SSD has 50% more dies to reach the same capacity as an SSD using 128 Gbit dies.
Now let's examine how Intel's choice of a 256 Gbit die impacts performance:
- Intel SSD 730 240GB = 16x128 Gbit 20nm dies
- 270 MB/sec writes and ~17 MB/sec/die
- Crucial MX100 128GB = 8x128Gbit 16nm dies
- 150 MB/sec writes and ~19 MB/sec/die
- Samsung 850 Pro 128GB = 12x86Gbit VNAND dies
- 470MB/sec writes and ~40 MB/sec/die
If we do some extrapolation based on the assumption that IMFT's move to 3D will net the same ~2x write speed improvement seen by Samsung, combined with their die capacity choice of 256Gbit, we get this:
- Future IMFT 128GB SSD = 4x256Gbit 3D dies
- 40 MB/sec/die x 4 dies = 160MB/sec
Even rounding up to 40 MB/sec/die, we can see that doubling the die capacity effectively negates the performance improvement. While the IMFT flash equipped SSD will very likely be a lower cost product, it will (theoretically) see the same write speed limits seen in today's SSDs equipped with IMFT planar NAND. Now let's go one layer deeper on theoretical products and assume that Intel took the 18-channel NVMe controller from their P3700 Series and adapted it for a consumer PCIe SSD using this new 3D NAND. The larger die size limits the minimum capacity you can attain while still fully utilizing the 18-channel controller, so with one die per channel, you end up with this product:
- Theoretical 18 channel IMFT PCIE 3D NAND SSD = 18x256Gbit 3D dies
- 40 MB/sec/die x 18 dies = 720 MB/sec
- 18x32GB (die capacity) = 576GB total capacity
Overprovisioning decisions aside, the above would be the lowest capacity product that could fully utilize the Intel PCIe controller. While the write performance is on the low side by PCIe SSD standards, the cost of such a product could easily be in the $0.50/GB range, or even less.
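The per-die arithmetic above can be reproduced in a few lines. Keep in mind the 40 MB/sec/die figure and the ~2x 3D speedup it rests on are this article's extrapolations, not published Intel specs:

```python
# Reproducing the per-die write-speed figures from the drive list above,
# then extrapolating to the theoretical IMFT 3D NAND products.

drives = {
    "Intel SSD 730 240GB":   {"dies": 16, "seq_write": 270},  # 20nm planar
    "Crucial MX100 128GB":   {"dies": 8,  "seq_write": 150},  # 16nm planar
    "Samsung 850 Pro 128GB": {"dies": 12, "seq_write": 470},  # 32-layer VNAND
}
for name, d in drives.items():
    print(f"{name}: ~{d['seq_write'] / d['dies']:.0f} MB/sec/die")

# Theoretical IMFT 3D products at an assumed (rounded-up) 40 MB/sec/die:
per_die = 40
sata_128gb_write = per_die * 4    # 4x 256Gbit dies  -> 160 MB/sec
pcie_18ch_write = per_die * 18    # 18x 256Gbit dies -> 720 MB/sec
pcie_18ch_capacity = 18 * 32      # 18 dies x 32GB   -> 576 GB
print(sata_128gb_write, pcie_18ch_write, pcie_18ch_capacity)
```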
In summary, while we don't have any solid performance data, it appears that Intel's new 3D NAND is not likely to lead to a performance breakthrough in SSD speeds, but their choice on a more cost-effective per-die capacity for their new 3D NAND is likely to give them significant margins and the wiggle room to offer SSDs at a far lower cost/GB than we've seen in recent years. This may be the step that was needed to push SSD costs into a range that can truly compete with HDD technology.
It's that time of year again - when those of us lucky enough to have the means get to share the best in technology with our friends and family. You are already the family IT manager, so why not help spread the holiday cheer by picking up some items for them, and hey...maybe for you. :)
This year we are going to break up the guide into categories. We'll have a page dedicated to PC components, one for mobile devices like notebooks and tablets and one for PC accessories. Then, after those specific categories, we'll have an open ended collection of pages where each PC Perspective team member can throw in some wildcards.
We thank you for your support of PC Perspective through all of 2014. The links included below embed our affiliate code to Amazon.com (when applicable) and if you are doing other shopping for the holidays this year we would appreciate it if you used the button above before perusing Amazon.com. In case you want to know the affiliate code directly, it is: pcper04-20.
Intel Core i7-4790K Haswell Processor
Last year our pick for the best high-performance processor was the Core i7-4770K, and it sold for $379. This year we have a part running 500 MHz faster, though at higher power, for $80 less. If you are still waiting for a time to upgrade your processor (and hey, games will need more cores sooner rather than later!), the Core i7-4790K looks like a great option and now looks like a great time.
NVIDIA GeForce GTX 980 4GB
Likely the most controversial selection in our gift guide, the GeForce GTX 980 is an interesting product. It's expensive compared to the other options from AMD like the Radeon R9 290X or even the R9 290, but it is also a better performing part; just not by much. The selection process of a GTX 980 stems from other things: G-Sync support, game bundles with Far Cry 4 and The Crew available, GeForce Experience, driver stability and frequency, etc. The GTX 970 is another good choice along these lines but as you'll see below...AMD has a strong contender as well.
Introduction: Defining the Quiet Enclosure
The Define R5 is the direct successor to Fractal Design's R4 enclosure, and it arrives with the promise of a completely improved offering in the silent case market. Fractal Design has unveiled the case today, and we have the day-one review ready for you!
We've looked at a couple of budget cases recently from the Swedish enclosure maker, and though still affordable with an MSRP of $109.99 (a windowed version will also be available for $10 more) the Define R5 from Fractal Design looks like a premium part throughout. In keeping with the company's minimalist design aesthetic it features clean styling, and is a standard mid-tower form factor supporting boards from ATX down to mini-ITX. The R5 also offers considerable cooling flexibility with many mounting options for fans and radiators.
The Silent Treatment
One of two included 1000 RPM hydraulic-bearing GP-14 silent fans
There are always different needs to consider when picking an enclosure, from price to application. And with silent cases there is an obvious need for superior sound-dampening properties, though airflow must be maintained to prevent cooking components as well. With today's review we'll examine the case inside and out and see how a complete build performs with temperature and noise testing.
Introduction: Caged Beast
The D Frame Mini from In Win is a wild-looking, wildly expensive case that defies convention in many ways.
First of all, calling the In Win D Frame mini an enclosure is a bit of a stretch. The design is part open-air case, part roll cage. Of course open air cases are not a new concept, but this is certainly a striking implementation; a design almost more akin to a testbench in some ways. When installed the components will be more open to the air than otherwise, as only the sides of the frame are covered (with panels made of tempered glass).
The most noticeable design aspect of the D Frame mini is the welded tubing that makes up the frame. The tubes are aluminum and resemble the frame of an aluminum bicycle, right down to the carefully welded joints. Around the perimeter of the frame are rather sizable soft plastic/rubber bumpers that protect the enclosure and help eliminate vibrations. Due to the design there is no specific orientation required for the enclosure, and it sits equally well in each direction.
There is support for 240mm radiators, virtually unlimited water cooling support given the mostly open design, and room for extra-long graphics cards and power supplies. The frame looks and feels like it could withstand just about anything, but it should probably be kept away from small children and pets given the ease with which fans and other components could be touched. And the D Frame mini is extremely expensive at $350. Actually, it’s just kind of extreme in general!
Introduction, Specifications and Packaging
In recent years, Plextor has branched beyond their renowned lines of optical storage devices, and into the realm of SSDs. They have done fairly well so far, treading carefully on their selection of controllers and form factors. Their most recent offerings include the M6S and M6M (reviewed here), and are based on Marvell controllers coupled with Toshiba flash. Given that the most recent Marvell controllers are also available in a PCIe variant, Plextor also chose to offer their M6 series in PCIe half height and M.2 form factor. These last two offerings are not simply SATA SSDs bridged over to PCIe, they are natively PCIe 2.0 x2 (1 GB/s), which gives a nice boost over the current SATA limit of 6Gb/sec (600 MB/sec). Today we are going to kill two birds with one stone by evaluating the half-height PCIe version:
As you can see, this is nothing more than the M.2 version on a Plextor branded interposer board. All results of this review should be identical to the bare M.2 unit plugged into a PCIe 2.0 x2 capable M.2 port on either a motherboard or mobile device. Note that those devices need to support the 2280 form factor, which is 80mm in length.
Here's the M.2 version installed on an ASUS X99-Deluxe, as tested by Morry.
MFAA Technology Recap
In mid-September NVIDIA took the wraps off of the GeForce GTX 980 and GTX 970, the first products based on the GM204 GPU and the Maxwell architecture. Our review of the chip, those products, and the package that NVIDIA had put together was incredibly positive. Not only was performance impressive, but NVIDIA was able to offer that performance with power efficiency besting anything else on the market.
Of course, along with the new GPU came a set of new product features. Two of the most impressive were Dynamic Super Resolution (DSR) and Multi-Frame Sampled AA (MFAA), but only one was available at launch: DSR. With it, you could take advantage of the extreme power of the GTX 980/970 with older games, rendering at a higher resolution than your panel's and filtering the result down to match your screen in post. The results were great. But NVIDIA spent just as much time talking about MFAA (not mother-fu**ing AA, as it turned out) during the product briefings, and I was shocked to find that the feature wouldn't be ready to test, or even included, at launch.
That changes today with the release of NVIDIA's 344.75 driver, the first to implement support for the new and potentially important anti-aliasing method.
Before we dive into the results of our testing, both in performance and image quality, let's get a quick recap on what exactly MFAA is and how it works.
Here is what I wrote back in September in our initial review:
While most of the deep, architectural changes in GM204 are based around power and area efficiency, there are still some interesting feature additions NVIDIA has made to these cards that depend on some specific hardware implementations. First up is a new antialiasing method called MFAA, or Multi-Frame Sampled AA. This new method alternates the AA sample pattern, which is now programmable via software, in both temporal and spatial directions.
The goal is to change the AA sample pattern in a way that produces near-4x MSAA quality at the effective cost of 2x MSAA (in terms of performance). NVIDIA showed a couple of demos of this in action during the press meetings, but the only gameplay we saw was in a static scene. I do have some questions about how this temporal addition is affected by fast motion on the screen, though NVIDIA asserts that MFAA will very rarely fall below the image quality of standard 2x MSAA.
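NVIDIA has not published the exact algorithm, but the basic idea of alternating sample patterns across frames can be sketched in a few lines of Python. Everything here is an illustrative guess on my part, not NVIDIA's implementation: the two sample-offset patterns, the per-pixel history buffer, and the simple two-frame blend are all hypothetical stand-ins for whatever the hardware actually does:

```python
# Illustrative sketch only: alternate a 2-sample AA pattern every frame
# and blend the two most recent frames, so the blended result covers
# roughly the sample positions of a 4x pattern at 2x shading cost.

# Two hypothetical 2-sample patterns (sub-pixel offsets) that together
# cover a 4-sample rotated-grid-style layout.
PATTERN_A = [(0.25, 0.25), (0.75, 0.75)]
PATTERN_B = [(0.75, 0.25), (0.25, 0.75)]

def shade_pixel(scene, x, y, offsets):
    """Average the scene's coverage at each sub-pixel sample position."""
    return sum(scene(x + dx, y + dy) for dx, dy in offsets) / len(offsets)

def render(scene, x, y, frame_index, history):
    """Shade one pixel with this frame's pattern and blend with history."""
    pattern = PATTERN_A if frame_index % 2 == 0 else PATTERN_B
    current = shade_pixel(scene, x, y, pattern)
    prev = history.get((x, y), current)  # first frame: no history yet
    history[(x, y)] = current
    return (current + prev) / 2          # temporal blend of 2x + 2x
```

On a static edge, the blend of both patterns approximates a 4-sample result; my question above about fast motion amounts to asking what happens when `prev` was shaded from a very different scene than `current`.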
That information is still correct, but we now have a little more detail on how this works than we did before. For reasons pertaining to patents, NVIDIA seems a bit less interested in sharing exact details than I would like, but we'll work with what we have.
Introduction, Specifications and Packaging
At that time we only knew that Phison was going to team up with another SSD manufacturer to get these to market. We now know that manufacturer is Corsair, and their new product is to be called the Neutron XT. How do we know this? Well, we've got one sitting right here:
While the Neutron XT has not officially launched (pricing is not even available yet), we have been afforded an early look at the performance of this new controller/SSD combination. While it is expected to be a cost-effective entry in the SSD marketplace, for now all we can do is evaluate the performance, so let's get to it!
Introduction and Features
In this review we will be taking a detailed look at High Power’s new Astro GD 1200W power supply. All of the power supplies in the Astro GD Series are fully modular, have a single +12V output, and are 80 Plus Gold certified for high efficiency. There are currently sixteen different power supplies in the Astro Series and nine models in the fully modular Astro GD Series. The new AGD-1200F is king of the hill with the highest rated output of 1,200 watts.
Along with 80 Plus Gold certified high efficiency, the Astro GD 1200W power supply has been designed for quiet operation. It uses a dual ball-bearing 135mm fan with smart fan speed control, which automatically switches between two operating modes: silent mode and cooling mode. Unlike some other power supplies that keep the fan turned off at low output, the AGD-1200's fan spins at all times. The Smart Fan Control adjusts the fan's operating mode automatically according to system load and ambient temperature for quiet operation. The fan starts out slow and quiet and gradually ramps up as the load increases. The PSU also incorporates an off-delay fan feature that keeps the fan spinning for a few seconds after the system is turned off.
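The two-mode behavior described above can be pictured with a small sketch. To be clear, High Power does not publish its fan curve; the crossover load, ambient cutoff, and RPM range below are all hypothetical numbers chosen only to illustrate the silent/cooling split:

```python
# Hypothetical sketch of a two-mode fan controller; all thresholds and
# RPM values are illustrative, not High Power's actual curve.

SILENT_MAX_LOAD = 0.5      # assumed crossover between the two modes
SILENT_MAX_AMBIENT_C = 35  # assumed ambient temperature cutoff

def fan_rpm(load_fraction, ambient_c, min_rpm=600, max_rpm=1800):
    """Return a fan speed: constant in silent mode, ramping in cooling mode."""
    if load_fraction <= SILENT_MAX_LOAD and ambient_c < SILENT_MAX_AMBIENT_C:
        return min_rpm  # silent mode: slow, constant speed
    # cooling mode: ramp linearly toward full speed at 100% load
    over = max(load_fraction - SILENT_MAX_LOAD, 0) / (1 - SILENT_MAX_LOAD)
    return int(min_rpm + over * (max_rpm - min_rpm))
```

Note that either trigger (high load or high ambient temperature) kicks the controller out of silent mode, matching the description of the Smart Fan Control reacting to both.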
High Power Astro GD-1200W PSU Key Features:
• 1,200W continuous DC output
• 80 PLUS Gold certified (87%~90% efficiency at 20-100% load)
• Silent Design (automatically adjusts between silent and cooling modes)
• Advanced DC-to-DC converters (3.3V and 5V)
• Fully modular cables for easy installation
• Flat ribbon-style, low profile cables help optimize airflow
• High quality components including all Japanese made capacitors
• Active Power Factor correction (0.99) with Universal AC input
• Safety Protections: OCP, OVP, UVP, SCP, OTP, and OPP
• MSRP for the Astro GD-1200W PSU: $239.99 USD
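Those efficiency figures translate directly into wall draw and waste heat. A quick back-of-envelope sketch, using the 87-90% range quoted in the list above:

```python
# What 80 Plus Gold efficiency means at the wall, using the 87-90%
# range quoted above (Gold efficiency typically peaks near 50% load).

def wall_draw_watts(dc_output_w, efficiency):
    """AC power pulled from the outlet to deliver a given DC load."""
    return dc_output_w / efficiency

full_load = wall_draw_watts(1200, 0.87)  # ~1379 W from the wall
half_load = wall_draw_watts(600, 0.90)   # ~667 W at the 50% sweet spot
waste_heat = full_load - 1200            # ~179 W dissipated as heat
```

That ~179 W of waste heat at full load is what the 135mm fan ultimately has to move out of the chassis, which is why the fan curve ramps with load.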
Meet the Inateck barebones tool-free HDD enclosure
Recently Inateck sent over two products to test out: the FEU3NS-1 USB 3.0 Tool-Free External HDD Enclosure and the BP2001 10W Bluetooth Stereo Speaker. Inateck has been around for a while; though their products were originally only available in the EU, they have recently expanded to North America. They sell a variety of peripherals such as PCIe USB cards, cables, and chargers, as well as Bluetooth input devices and mobile device protectors, in addition to external HDD enclosures and, of course, Bluetooth speakers.
The first product to take a look at is the USB 3.0 enclosure, which ships with a USB cable and manual in addition to the tool-free enclosure itself. It is a very simple product at a very low price, and it is small enough to stick in a laptop bag without leaving an unsightly bulge. The base model is currently $14 on Amazon, and for an extra $5 you can get one that supports USB Attached SCSI Protocol, allowing an SSD to hit full speed when installed in the enclosure. The USB 3.0 cable is a dual male cable; no proprietary plugs or breakable adapters are needed, and since enough power can be provided over USB, this is the only cable you will need. The only compatibility issue concerns the relatively uncommon 12mm 2.5" drives, which will not fit; 9.5mm and 7mm drives are both acceptable, and there is a removable cushion to keep a 7mm drive nice and snug.
It could be a good... start.
So this is what happens when you install pre-release software on a production machine.
Sure, I only trusted it as far as a second SSD with Windows 7 still installed, but it would be fair to say that I immersed myself in the experience. It was also not the first time that I evaluated an upcoming Microsoft OS on my main machine, having done the same for Windows Vista and Windows 7 while both were in development. Windows 8 was the odd one out; it was given my laptop instead. In this case, I was in the market for a new SSD anyway and was thus willing to give Windows 10 a chance, versus installing Windows 7 again.
So far, my experience has been roughly positive, although the first two builds were glitchy. In the first three days, I rebooted my computer more times than I had all year (which is about 1-2 times per month). It could be the Windows Key + Arrow Key combinations randomly not registering, Razer Synapse deciding to go on strike a couple of times until I reinstalled it, the four-or-so reboots required to install a new build, and so forth. You then also have the occasional issue of a Windows service (or DWM.exe) deciding to max out a core or two.
But it is pre-release software! That is all stuff to ignore. The only reason I am even mentioning it is so people do not follow in my footsteps and install it on their production machines, unless they are willing to have pockets of downtime here or there. Even then, the latest build, 9879, has been fairly stable. It has been installed all day and has not given me a single issue. This is good, because it is the last build we will get until 2015.
What we will not ignore is the features. For the first two builds, Windows 10 was annoying to use with multiple monitors. Supposedly to make it easier to align items, the mouse cursor would remain locked inside each monitor's boundary until you provided enough velocity for it to escape to the next one. This was the case with Windows 8.1 as well, but there you were given registry entries to disable the feature; those keys did not work with Windows 10. With Build 9879, however, the behavior seems to be disabled unless you are currently dragging a window. In that case, a quick movement pulls the window between monitors, while a slow movement performs a Snap.
This is me getting ready to snap a window on the edge between two monitors with just my mouse.
In a single build, they turned this feature from something I wanted to disable, to something that actually performs better (in my opinion) than Windows 7. It feels great.
Now on to a not-so-pleasant experience: updating builds.
Simply put, you can click "Check Now" and "Download Update" all you want, but it will just sit there doing nothing until it feels like acting. During the update from 9860 to 9879, I waited with the PC Settings app open for three hours. At some point, I got suspicious and decided to monitor network traffic: nothing. So I did the close-app, open-app, re-check dance a few times, and eventually gave up. About half an hour after I closed PC Settings the last time, my network traffic spiked to the maximum my internet connection allows, which Task Manager attributed to a Windows service.
Shortly after, I was given the option to install the update. After finishing what I was doing, I clicked the install button and... it didn't seem to do anything. After about half an hour, it prompted me to restart my computer with a full-screen message that you cannot click past to save your open windows - it is do it now or postpone it one or more hours, with no in-between. About another twenty minutes (and four or five reboots) after I chose to reboot, I was back up and running.
Is that okay? Sure. When you update, you clearly need to do stuff and that could take your computer several minutes. It would be unrealistic to complain about a 20-minute install. The only real problem is that it waits for extended periods of time doing nothing (measured, literally nothing) until it decides that the time is right, and that time is NOW! It may have been three hours after you originally cared, but the time is NOW!
Come on Microsoft, let us know what is going on behind the scenes, and give us reliable options to pause or suspend the process before the big commitment moments.
So that is where I am, one highly positive experience and one slightly annoying one. Despite my concerns about Windows Store (which I have discussed at length in the past and are still valid) this operating system seems to be on a great path. It is a work in progress. I will keep you up to date, as my machine is kept up to date.
MSI Redefines AM3+ Value
It is no secret that AMD's AM3+ motherboard ecosystem has languished for the past year or so, with very few new products hitting the scene. This is understandable, since AMD has not updated the chipset options for AM3+, and only recently did they release updated processors in the form of the FX-8370 and FX-8370e. It has been two years since the release of the original FX-8350 and over a year since the high-TDP FX-9000 series parts. For better or for worse, AMD is pushing their APUs far harder to consumers than the aging AM3+ platform.
MSI has refined their "Gaming" series of products with a distinctive look that catches the eye.
This does not mean that the AM3+ ecosystem is non-viable for either AMD or consumers. While Intel has stayed ahead of AMD in terms of IPC, TDP, and process technology, the overall competitiveness of the latest AM3+ parts is still quite good when considering price. Yes, these CPUs will run hotter and pull more power than the Intel parts they directly compete against, but when we look at the prices of comparable motherboards and the CPUs themselves, AMD still holds a price/performance advantage. The AM3+ processors that feature six and eight cores (3 and 4 modules) are solid performers in a wide variety of applications. The top-end eight-core products compete well against the latest Intel parts in many gaming scenarios, as well as in productivity applications that leverage multiple threads.
When the Vishera based FX processors were initially introduced we saw an influx of new AM3+ designs that would support these new processors, as well as the planned 220 watt TDP variants that would emerge later. From that point on we have only seen a smattering of new products based on AM3+. From all the available roadmaps from AMD that we have seen, we do not expect there to be new products based on Steamroller or Excavator architectures on the AM3+ platform. AMD is relying on their HSA enabled APUs to retain marketshare and hopefully drive new software technologies that will leverage these products. The Future really is Fusion…
MSI is bucking this trend. The company still sees value in the AM3+ market and is introducing a new product that looks to more adequately fit the financial realities of that marketplace. We already have high-end boards from MSI, ASRock, Asus, and Gigabyte that are feature-packed and, for enthusiast motherboards, go for a relatively low price. On the other end of the spectrum we have barebones motherboards based on even older chipsets (SB710/750). In between, we often see AMD 970 based boards that offer a tolerable mix of features attached to a low price.
The bundle is fair, but not exciting. It offers the basics to get a user up and running quickly.
The MSI 970 Gaming motherboard is a different beast compared to the rest of the market. It is a Gaming-branded board that offers a host of features that can be considered high end, yet it is offered for less than $100 US. MSI looks to exploit this sweet spot with a motherboard that punches far above its weight class. The board is a classic balance of price vs. features, but it addresses that balance in a rather unique way. Part of it might be marketing, but a good chunk of it is smart and solid engineering.
Introduction: The Core Series Shrinks Down
Image credit: Fractal Design
The Core 1100 from Fractal Design is a small micro-ATX case, essentially a miniature version of the previously reviewed Core 3300. With its small dimensions the Core 1100 targets micro-ATX and mini-ITX builders, and provides another option not only in Fractal Design's budget lineup, but in the crowded budget enclosure market.
The price level for the Core 1100 has fluctuated a bit on Amazon since I began this review, with prices ranging from a high of $50 down to a low of just $39. It is currently $39.99 at Newegg, so the price should soon stabilize at Amazon and other retailers. At the ~$40 level this could easily be a compelling option for a smaller build, though admittedly the design of these Core series cases is purely functional. Ultimately any enclosure recommendation will depend on ease of use and thermal performance/noise, which is exactly what we will look at in this review.
Introduction, Specifications and Packaging
G.Skill is likely better known for their RAM offerings, but they have actually been in the SSD field since the early days. My first SSD RAID was on a pair of G.Skill Flash SSDs. While they were outmaneuvered by the X25-M, they were equipped with SLC flash, and G.Skill offered them at a significantly lower price than the Samsung OEM units they were based on.
Since those early days of flash, G.Skill has introduced a few additional models but has not been known as a major player in the SSD market. That is set to change today, with their introduction of the Phoenix Blade PCIe SSD:
If you're eager to know what is inside or how it works, I'll set your mind at ease with this brief summary. The Phoenix Blade is essentially an OCZ RevoDrive 350, but with beefier specs and improved performance. The same SandForce 2281 controllers and Toshiba flash are used. The difference comes in the form of a smaller form factor (half height vs. full height PCIe), and the type of PCIe to SATA bridge chip used. More on that on the disassembly page.