At the end of my first Frame Rating evaluation of the GTX 970 after the discovery of the memory architecture issue, I proposed the idea that SLI testing would need to be done to come to a more concrete conclusion on the entire debate. It seems that our readers and the community at large agreed with us in this instance, repeatedly asking for those results in the comments of the story. After spending the better part of a full day running and re-running SLI results on pairs of GeForce GTX 970 and GTX 980 cards, we have the answers you're looking for.
Today's story is going to be short on details and long on data, so if you want the full back story on what is going on and why we are taking a specific look at the GTX 970 in this capacity, read here:
- Part 1: NVIDIA issues initial statement
- Part 2: Full GTX 970 memory architecture disclosed
- Part 3: Frame Rating: GTX 970 vs GTX 980
- Part 4: Frame Rating: GTX 970 SLI vs GTX 980 SLI (what you are reading now)
Okay, are we good now? Let's dive into the first set of results in Battlefield 4.
Battlefield 4 Results
Just as I did with the first GTX 970 performance testing article, I tested Battlefield 4 at 3840x2160 (4K) and utilized the game's ability to linearly scale resolution to help me increase GPU memory allocation. In the game settings you can change that scaling option by a percentage: I went from 110% to 150% in 10% increments, increasing the load on the GPU with each step.
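Because the scale percentage applies to each axis, GPU load grows with its square. A quick sketch (our own illustration of the arithmetic, not part of the test process) shows how fast the pixel count climbs:

```python
# Sketch of how Battlefield 4's resolution scale inflates GPU load.
# Assumption: the percentage multiplies each axis, so pixel count
# (and framebuffer memory pressure) grows with the square of the slider.
base_w, base_h = 3840, 2160  # native 4K

for scale in (110, 120, 130, 140, 150):
    f = scale / 100
    w, h = round(base_w * f), round(base_h * f)
    rel = (w * h) / (base_w * base_h)
    print(f"{scale}%: {w}x{h} ({w * h / 1e6:.1f} MPix, {rel:.2f}x native)")
```

At 150% the card is rendering 2.25x the pixels of native 4K, which is why memory allocation climbs so quickly with each step.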
Memory allocation between the two SLI configurations was similar, but not as perfectly aligned with each other as we saw with our single GPU testing.
In a couple of cases, at 120% and 130% scaling, the GTX 970 cards in SLI are actually each using more memory than the GTX 980 cards. That difference is only ~100MB but that delta was not present at all in the single GPU testing.
It has been an abnormal week for us here at PC Perspective. Our typical review schedule has pretty much flown out the window, and the past seven days have been filled with learning, researching, retesting, and publishing. That might sound like the norm, but in these cases the process was initiated by tips from our readers. Last Saturday (24 Jan), a few things were brewing:
- Ryan was informed by NVIDIA that the memory layout of the GTX 970 was different than expected.
- The huge (now 168-page) Overclock.net forum thread about the Samsung 840 EVO slowdown was once again gaining traction.
- Someone got G-Sync working on a laptop integrated display.
We had to do a bit of triage here of course, as we can only research and write so quickly. Ryan worked the GTX 970 piece as it was the hottest item. I began a few days of research and testing on the 840 EVO slowdown issue reappearing on some drives, and we kept tabs on that third thing, which at the time seemed really farfetched. With the first two items taken care of, Ryan shifted his efforts to GTX 970 SLI testing while I shifted my focus to finding out if there was any credence to this G-Sync laptop thing.
A few weeks ago, an ASUS Nordic Support rep inadvertently leaked an interim build of the NVIDIA driver. This was a mobile driver build (version 346.87) aimed at their G751 line of laptops. One recipient of this driver link posted it to the ROG forum back on the 20th. A fellow by the name of Gamenab, who owns the same laptop cited in that thread, presumably stumbled across this driver, tried it out, and was more than likely greeted by this popup after the installation completed:
Now I know what you’re thinking, and it’s probably the same thing anyone would think. How on earth is this possible? To cut a long story short, while the link to the 346.87 driver was removed shortly after being posted to that forum, we managed to get our hands on a copy of it, installed it on the ASUS G751 that we had in for review, and wouldn’t you know it, we were greeted by the same popup!
OK, so it’s a popup - could it be a bug? We checked NVIDIA Control Panel and the options were consistent with those of a G-Sync connected system. We fired up the pendulum demo and watched the screen carefully, passing the machine around the office to be inspected by all. We then fired up some graphics benchmarks that were well suited to show off the technology (Unigine Heaven, Metro: Last Light, etc.), and everything looked great – smooth, steady pans with no juddering or tearing to be seen. Ken Addison, our Video Editor and jack of all trades, researched the panel type and found that it was likely capable of a 100 Hz refresh rate. We quickly created a custom profile, hit apply, and our 75 Hz G-Sync laptop was instantly transformed into a 100 Hz G-Sync laptop!
Ryan's Note: I think it is important here to point out that we didn't just look at demos and benchmarks for this evaluation but actually looked at real-world gameplay situations. Playing through Metro: Last Light showed very smooth pans and rotation, Assassin's Creed played smoothly as well, and flying through Unigine Heaven manually was a great experience, as were Crysis 3, Battlefield 4, and others. This was NOT just a couple of demos that we ran through - the variable refresh portion of this mobile G-Sync enabled panel was working, and working very well.
At this point in our tinkering, we had no idea how or why this was working, but there was no doubt that we were getting a similar experience as we have seen with G-Sync panels. As I digested what was going on, I thought surely this can’t be as good as it seems to be… Let’s find out, shall we?
Well here we are again with this Samsung 840 EVO slow down issue cropping up here, there, and everywhere. The story for this one is so long and convoluted that I’m just going to kick this piece off with a walk through of what was happening with this particular SSD, and what was attempted so far to fix it:
The Samsung 840 EVO is a consumer-focused TLC SSD. Normally TLC SSDs suffer from reduced write speeds when compared to their MLC counterparts, as writing operations take longer for TLC than for MLC (SLC is even faster). Samsung introduced a novel way of speeding things up with their TurboWrite caching method, which adds a fast SLC buffer alongside the slower flash. This buffer is several GB in size, and helps the 840 EVO maintain fast write speeds in most typical usage scenarios, but the issue with the 840 EVO is not its write speed – the problem is read speed. Initial reviews did not catch this issue as it only impacted data that had been stagnant for a period of roughly 6-8 weeks. As files aged their read speeds were reduced, starting from the speedy (and expected) 500 MB/sec and ultimately reaching a worst case speed of 50-100 MB/sec:
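To illustrate how an SLC buffer like TurboWrite shapes sustained write speeds, here is a toy model. The buffer size and transfer rates below are hypothetical values chosen only to show the shape of the behavior, not Samsung's published figures:

```python
def avg_write_speed(transfer_gb, buffer_gb=3.0, slc_mbps=500.0, tlc_mbps=270.0):
    """Average speed of one write burst: the first buffer_gb lands in the
    fast SLC cache, anything beyond it goes straight to the slower TLC."""
    fast_gb = min(transfer_gb, buffer_gb)
    slow_gb = max(transfer_gb - buffer_gb, 0.0)
    seconds = fast_gb * 1024 / slc_mbps + slow_gb * 1024 / tlc_mbps
    return transfer_gb * 1024 / seconds

for size_gb in (1, 3, 10, 30):
    print(f"{size_gb:>2} GB burst: ~{avg_write_speed(size_gb):.0f} MB/s")
```

Bursts that fit inside the buffer run at full cache speed, while larger transfers drift toward the raw TLC rate, which is why the caching scheme works so well for typical desktop usage.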
There were other variables that impacted the end result, which further complicated the flurry of reports coming in from seemingly everywhere. The slow speeds turned out to be the result of the SSD controller working extra hard to apply error correction to the data coming in from flash that was (reportedly) miscalibrated at the factory. This miscalibration caused the EVO to incorrectly adapt to cell voltage drifts over time (an effect that occurs in all flash-based storage – TLC being the most sensitive). Ambient temperature could even impact the slower read speeds as the controller was working outside of its expected load envelope and thermally throttled itself when faced with bulk amounts of error correction.
An example of file read speed slowing relative to age, thanks to a tool developed by Techie007.
Once the community reached sufficient critical mass to get Samsung’s attention, they issued a few statements and ultimately pushed out a combination firmware update and tool to fix EVOs that were seeing this issue. The 840 EVO Performance Restoration Tool was released just under two months after the original thread on the Overclock.net forums was started. Even counting a quick update a few weeks later, that was not a bad turnaround considering Intel took three months to correct a firmware issue on one of their own early SSDs. While the Intel patch restored full performance to their X25-M, the Samsung update does not appear to be faring so well now that users have logged a few additional months after applying the fix.
A Summary Thus Far
UPDATE 2/2/15: We have another story up that compares the GTX 980 and GTX 970 in SLI as well.
It has certainly been an interesting week for NVIDIA. It started with the release of the new GeForce GTX 960, a $199 graphics card that brought the latest iteration of Maxwell's architecture to a lower price point, competing with the Radeon R9 280 and R9 285 products. But then the proverbial stuff hit the fan with a memory issue on the GeForce GTX 970, the best-selling graphics card of the second half of 2014. NVIDIA responded to the online community on Saturday morning, but that was quickly followed up with a more detailed exposé on the GTX 970 memory hierarchy, which included a couple of important revisions to the specifications of the GTX 970 as well.
At the heart of all this technical debate is a performance question: does the GTX 970 suffer from lower performance because of the 3.5GB/0.5GB memory partitioning configuration? Many forum members and PC enthusiasts have been debating this for weeks, with many coming away with an emphatic yes.
The newly discovered memory system of the GeForce GTX 970
Yesterday I spent the majority of my day trying to figure out a way to validate or invalidate these types of performance claims. As it turns out, finding specific game scenarios that consistently hit targeted memory usage levels isn't as easy as it might first sound, and simple things like startup order and the order in which settings are changed can vary the results as well. Using Battlefield 4 and Call of Duty: Advanced Warfare though, I think I have presented a couple of examples that demonstrate the issue at hand.
Performance testing is a complicated story. Lots of users have attempted to measure performance on their own setup, looking for combinations of game settings that sit below the 3.5GB threshold and those that cross above it, into the slower 500MB portion. The issue for many of these tests is that they lack access to both a GTX 970 and a GTX 980 to really compare performance degradation between cards. That's the real comparison to make - the GTX 980 does not separate its 4GB into different memory pools. If it has performance drops in the same way as the GTX 970 then we can wager the memory architecture of the GTX 970 is not to blame. If the two cards perform differently enough, beyond the expected performance delta between two cards running at different clock speeds and with different CUDA core counts, then we have to question the decisions that NVIDIA made.
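For a rough baseline of that expected delta, we can compare raw shader throughput at reference boost clocks. This is strictly a back-of-envelope sketch that ignores memory bandwidth and ROP differences, and partner cards ship at higher clocks, so treat the ratio rather than the absolute numbers:

```python
# Back-of-envelope shader throughput from core counts and reference
# boost clocks (2 FMA flops per CUDA core per clock).
cards = {
    "GTX 970": (1664, 1178),  # CUDA cores, reference boost MHz
    "GTX 980": (2048, 1216),
}
tflops = {name: cores * 2 * mhz / 1e6 for name, (cores, mhz) in cards.items()}
for name, t in tflops.items():
    print(f"{name}: {t:.2f} TFLOPS")
ratio = tflops["GTX 980"] / tflops["GTX 970"]
print(f"Expected raw gap: {ratio:.0%} of the GTX 970")
```

If the GTX 970 falls much further behind than that roughly 27% gap once allocations cross 3.5GB, the split memory pools become the prime suspect.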
There has also been concern over the frame rate consistency of the GTX 970. Our readers are already aware of how deceptive an average frame rate alone can be, and why looking at frame times and frame time consistency is so much more important to guaranteeing a good user experience. Our Frame Rating method of GPU testing has been in place since early 2013 and it tests exactly that - looking for consistent frame times that result in a smooth animation and improved gaming experience.
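To see why an average alone can mislead, consider two invented frame time traces with identical averages, one steady and one alternating between fast and slow frames:

```python
from statistics import mean

# Two hypothetical frame time traces (in ms) with the same average.
smooth = [16.7] * 6
stutter = [8.0, 25.4, 8.0, 25.4, 8.0, 25.4]

for name, trace in (("smooth", smooth), ("stutter", stutter)):
    avg_fps = 1000 / mean(trace)  # both work out to ~60 FPS
    print(f"{name}: {avg_fps:.0f} FPS average, worst frame {max(trace):.1f} ms")
```

Both traces report about 60 FPS, but the 25.4 ms spikes in the second one are exactly the kind of inconsistency that shows up as stutter on screen and that Frame Rating is designed to catch.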
Users at reddit.com have been doing a lot of subjective testing
We will be applying Frame Rating to our testing today of the GTX 970 and its memory issues - does the division of memory pools introduce additional stutter into game play? Let's take a look at a couple of examples.
Introduction and Technical Specifications
Courtesy of Primochill
The Wet Bench open-air test bench is Primochill's premier case offering. This acrylic-based enclosure features an innovative design allowing easy access to the motherboard and PCIe cards without the hassle of removing the case panels and mounting screws that a motherboard change out requires in a typical case. With a starting MSRP of $139.95, the Wet Bench is priced competitively in light of the configurability and features offered with the case.
The Wet Bench is unique in its design - Primochill built it to support custom water cooling solutions from the ground up. The base kit supports mounting the water cooling kit's radiator to the back plate, up to a 360mm size (supporting 3x120mm fans). Primochill also offers an optional backplate with support for up to a 480mm radiator (supporting up to 4x120mm fans).
A few secrets about GTX 970
UPDATE 1/28/15 @ 10:25am ET: NVIDIA has posted in its official GeForce.com forums that they are working on a driver update to help alleviate memory performance issues in the GTX 970 and that they will "help out" those users looking to get a refund or exchange.
Yes, that last 0.5GB of memory on your GeForce GTX 970 does run slower than the first 3.5GB. More interesting than that fact is the reason why it does, and why the result is better than you might have otherwise expected. Last night we got a chance to talk with NVIDIA’s Senior VP of GPU Engineering, Jonah Alben on this specific concern and got a detailed explanation to why gamers are seeing what they are seeing along with new disclosures on the architecture of the GM204 version of Maxwell.
NVIDIA's Jonah Alben, SVP of GPU Engineering
For those looking for a little background, you should read over my story from this weekend that looks at NVIDIA's first response to the claims that the GeForce GTX 970 cards currently selling were only properly utilizing 3.5GB of the 4GB frame buffer. While it definitely helped answer some questions, it raised plenty more, which is why we requested a talk with Alben, even on a Sunday.
Let’s start with a new diagram drawn by Alben specifically for this discussion.
GTX 970 Memory System
Believe it or not, every issue discussed in any forum about the GTX 970 memory issue is going to be explained by this diagram. Along the top you will see 13 enabled SMMs, each with 128 CUDA cores for a total of 1664, as expected. (Three grayed out SMMs represent those disabled from a full GM204 / GTX 980.) The most important part here is the memory system though, connected to the SMMs through a crossbar interface. That interface has 8 total ports to connect to collections of L2 cache and memory controllers, all of which are utilized in a GTX 980. With a GTX 970 though, only 7 of those ports are enabled, taking one of the combination L2 cache / ROP units along with it. However, the 32-bit memory controller segment remains.
You should take two things away from that simple description. First, despite initial reviews and information from NVIDIA, the GTX 970 actually has fewer ROPs and less L2 cache than the GTX 980. NVIDIA says this was an error in the reviewer’s guide and a misunderstanding between the engineering team and the technical PR team on how the architecture itself functioned. That means the GTX 970 has 56 ROPs and 1792 KB of L2 cache compared to 64 ROPs and 2048 KB of L2 cache for the GTX 980. Before people complain about the ROP count difference as a performance bottleneck, keep in mind that the 13 SMMs in the GTX 970 can only output 52 pixels/clock and the seven segments of 8 ROPs each (56 total) can handle 56 pixels/clock. The SMMs are the bottleneck, not the ROPs.
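The arithmetic behind that claim is simple enough to sketch (the 4 pixels/clock per SMM figure is inferred from the totals NVIDIA quoted):

```python
# The pixel-throughput arithmetic behind "the SMMs are the bottleneck".
smm_count, px_per_smm = 13, 4          # 13 enabled SMMs at 4 pixels/clock each
rop_segments, rops_per_segment = 7, 8  # 7 enabled L2/ROP segments of 8 ROPs

smm_px = smm_count * px_per_smm            # what the shader side can feed
rop_px = rop_segments * rops_per_segment   # what the ROPs can consume
print(f"SMM output: {smm_px} px/clk, ROP capacity: {rop_px} px/clk")
print("Bottleneck:", "SMMs" if smm_px < rop_px else "ROPs")
```

With 52 pixels/clock feeding 56 pixels/clock of ROP capacity, the disabled ROP partition never becomes the limiting factor.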
A new GPU, a familiar problem
Editor's Note: Don't forget to join us today for a live streaming event featuring Ryan Shrout and NVIDIA's Tom Petersen to discuss the new GeForce GTX 960. It will be live at 1pm ET / 10am PT and will include ten (10!) GTX 960 prizes for participants! You can find it all at http://www.pcper.com/live
There are no secrets anymore. Calling today's release of the NVIDIA GeForce GTX 960 a surprise would be like calling another Avengers movie unexpected. If you didn't simply assume it was coming, chances are the dozens of leaked slides and performance numbers got your attention. So here it is, today's the day: NVIDIA finally upgrades the mainstream segment that had been fed by the GTX 760 for more than a year and a half. But does the brand new Maxwell-based GTX 960 move the needle?
But as you'll soon see, the GeForce GTX 960 is a bit of an odd duck among new GPU releases. As we have seen several times in the last year or two with a stagnant process technology landscape, the new cards aren't going to be wildly better performing than the current cards from either NVIDIA or AMD. In fact, there are some interesting comparisons to make that may surprise fans of both parties.
The good news is that Maxwell and the GM206 GPU will price out starting at $199, with overclocked models available at that level as well. But to understand what makes it different than the GM204 part, we first need to dive a bit into the GM206 GPU and how it matches up with NVIDIA's "small" GPU strategy of the past few years.
The GM206 GPU - Generational Complexity
First and foremost, the GTX 960 is based on the exact same Maxwell architecture as the GTX 970 and GTX 980. The power efficiency, the improved memory bus compression, and the new features all make their way into the smaller version of Maxwell selling for $199 as of today. If you missed the discussion on those new features, including MFAA, Dynamic Super Resolution, and VXGI, you should read that page of our original GTX 980 and GTX 970 story from last September for a bit of context; these are important aspects of Maxwell and the new GM206.
NVIDIA's GM206 is essentially half of the full GM204 GPU that you find on the GTX 980. That includes 1024 CUDA cores, 64 texture units and 32 ROPs for processing, a 128-bit memory bus and 2GB of graphics memory. This results in half of the memory bandwidth at 112 GB/s and half of the peak compute capability at 2.30 TFLOPS.
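Those headline figures fall straight out of the unit counts and clocks. A quick sanity check, assuming roughly NVIDIA's published reference figures of ~1127 MHz core and 7.0 Gbps effective memory:

```python
# Sanity-checking the GM206 headline numbers. Assumed reference figures:
# ~1127 MHz core clock and 7.0 Gbps effective memory on a 128-bit bus.
cores, core_mhz = 1024, 1127
bus_bits, mem_gbps = 128, 7.0

tflops = cores * 2 * core_mhz / 1e6   # 2 FMA flops per core per clock
bandwidth_gbs = bus_bits / 8 * mem_gbps  # bus width in bytes times data rate
print(f"Compute:   {tflops:.2f} TFLOPS")
print(f"Bandwidth: {bandwidth_gbs:.0f} GB/s")
```

Any small difference from the quoted 2.30 TFLOPS comes down to the exact clock used in the calculation.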
Introducing Windows 10 (Again)
I did not exactly make too many unsafe predictions, but let's recap the Windows 10 Consumer announcement anyway. The briefing was a bit on the slow side, at least if you are used to E3 keynotes, but it contained a fair amount of useful information. Some of the things discussed are future-oriented, but some will arrive soon. So let's get right into it.
Price and Upgrade Options
Microsoft has not announced an official price for Windows 10 when it is installed on a new PC. If you are upgrading a machine that currently runs Windows 7 or Windows 8.1, the upgrade is free if done within the first year. Windows Phone 8.1 users are also eligible for a no-cost upgrade to Windows 10 if done in the first year.
To quote Terry Myerson of Microsoft: “Once a device is upgraded to Windows 10, we will be keeping it current for the supported lifetime of the device.” This is not elaborated on, but it seems like a weird statement given what we have traditionally expected from Windows. One possible explanation is that Microsoft intends for Windows to be a subscription service going forward, which would be the most obvious extension of “Windows as a Service”. On the other hand, they could be going for the per-device revenue option with Bing, Windows Store, and other initiatives being the long tail. If so, I am a bit confused about what constitutes a new device for systems that are regularly upgraded, like the ones our readers are typically interested in. All of that will eventually be made clear, but not yet.
A New Build for Windows 10
Late in the keynote, Microsoft announced the availability of new preview builds for Windows 10. This time, users of Windows Phone 8.1 will also be able to see the work in progress. PC “Insiders” will get access to their build “in the next week” and phones will get access “in February”. Ars Technica seems to believe that this is scheduled for Sunday, February 1st, which is a really weird time to release a build, but their source might be right.
We don't know exactly what will be in it, though. In my predictions, I guessed that a DirectX 12 SDK (or at least some demos) might be available in the next build. That has not been mentioned, which it probably would have been if it were true. I expect the next possibility (if we're not surprised in the next one-to-ten days when the build drops) is the Game Developers Conference (GDC 2015), which starts on March 2nd.
The New Web Browser: Project Spartan
My guess was that Spartan would be based on DirectX 12. Joe Belfiore said that it is using a new, standards-compliant rendering engine and basically nothing more. The event focused on specific features. The first is note taking, which basically turns the web browser into a telestrator that can also accept keyboard comment blocks. The second is a reading mode that alters content into a Microsoft Word-like column. The third is “reading lists”, which is basically a “read it later” feature that does offline caching. The fourth is Adobe PDF support, which works with the other features of Spartan such as note taking and reading lists.
Which Transitions Into Cortana
The fifth feature of Spartan is Cortana integration, which will provide auto-suggestions based on the information that the assistant software has. The example they provided was auto-suggesting the website for Belfiore's wife's flight. Surprisingly, when you attempt to control a Spartan, Cortana does not say “There's two of us in here now, remember?” You know, in an attempt to let you know she's a service that's integrated into the browser.
Otherwise, it's an interesting demo. I might even end up using it when it comes out, but these sorts of things do not really interest me too much. We have been at the point where, for my usage, the operating system is really not in the way anymore. It feels like there is very little friction between me and getting what I want done, done. Of course, people felt that way about rotary phones until touch-tone came out, and I keep an open mind to better methods. It's just hard to get me excited about voice-activated digital assistants.
As I stated before, DirectX 12 was mentioned but a release date was not confirmed. What they did mention was a bit of relative performance. DirectX 12 supposedly uses about half of the power consumption of DirectX 11, which is particularly great for mobile applications. It can also handle scenes with many more objects. A Futuremark demo was displayed, with the DirectX 11 version alongside a DirectX 12 version. The models seem fairly simple, but the DirectX 12 version appears to be running at over 100 FPS while the DirectX 11 version outright fails.
Other gaming features were mentioned. First, Windows 10 will allow shadow recording the last 30 seconds of footage from any game. You might think that NVIDIA would be upset about that, and they might be, but that is significantly less time than ShadowPlay or other recording methods. Second, Xbox One will be able to stream gameplay to any PC in your house. I expect this is the opposite direction than what people hope for, rather wishing for high-quality PC footage to be easily streamed to TVs with a simple interface. It will probably serve a purpose for some use case, though.
Well that was a pretty long event, clocking in at almost two-and-a-half hours. The end had a surprise announcement of an augmented reality (not virtual reality) headset, called the “HoloLens”, which is developed by the Kinect team. I am deliberately not elaborating on it because I was not at the event and I have not tried it. I will say that the most interesting part about it, for me, is the Skype integration, because that probably hints at Microsoft's intentions with the product.
For the rest of us, it touched on a number of interesting features but, like the Enterprise event, did not really dive in. It would have been nice to get some technical details about DirectX 12, but that obviously does not cater to the intended audience. Unless an upcoming build soft-launches a DirectX 12 preview (or Spartan) so that we can do our own discovery, we will probably need to wait until GDC and/or BUILD to find out more.
Until then, you could watch the on-demand version at Microsoft's website.
Introduction, Specifications and Packaging
Today Samsung has lifted the review embargo on their new Portable SSD T1. This represents Samsung's first portable SSD, and aims to serve as another way to make their super speedy V-NAND available. We first saw the Samsung T1 at CES, and I've been evaluating the performance of this little drive for the past week:
We'll dive more into the details as this review progresses.
The T1 comes well packaged, with a small instruction manual and a flat style short USB 3.0 cable. The drive itself is very light - ours weighed in right at 1 ounce.
Introduction and Technical Specifications
Courtesy of ASUS
The Rampage V Extreme is ASUS' premier product for their ROG (Republic of Gamers) line of Intel X99-based motherboards. The board offers support for all Intel LGA2011-3 based processors paired with DDR4 memory operating in up to a quad channel configuration. Given the feature-packed nature and premium ROG board-branding, the board's $499.99 MSRP does not come at that much of a surprise.
ASUS designed the Rampage V Extreme to handle anything an enthusiast could throw its way, integrating an 8-phase digital power system into its Extreme Engine Digi+ IV to power the board. Extreme Engine Digi+ IV combines ASUS' custom-designed Digi+ EPU chipset, IR (International Rectifier) PowIRstage MOSFETs, MicroFine Alloy chokes, and 10K Black Metallic capacitors for unrivaled power delivery capabilities. ASUS also bundles their OC Panel device for on-the-fly overclocking and board monitoring, as well as their SupremeFX 2014 audio solution for flawless audio.
Introduction, Specs, and First Impressions
In our review of the original LIVA mini-PC we found it to be an interesting product, but it was difficult to identify a specific use-case for it; a common problem with the mini-PC market. Could the tiny Windows-capable machine be a real desktop replacement? That first LIVA just wasn't there yet. The Intel Bay Trail-M SoC was outmatched when playing 1080p Flash video content and system performance was a little sluggish overall in Windows 8.1, which wasn't aided by the limitation of 2GB RAM. (Performance was better overall with Ubuntu.) The price made it tempting but it was too underpowered as one's only PC - though a capable machine for many tasks.
Fast forward to today, when the updated version has arrived on my desk. The updated LIVA has a cool new name - the “X” - and the mini computer's case has more style than before (very important!). Perhaps more importantly, the X boasts upgraded internals as well. Could this new LIVA be the one to replace a desktop for productivity and multimedia? Is this the moment we see the mini-PC come into its own? There’s only one way to find out. But first, I have to take it out of the box.
Chipset: Intel® Bay Trail-M/Bay Trail-I SOC
Memory: DDR3L 2GB/4GB
Expansion Slot: 1 x mSATA for SSD
Storage: eMMC 64GB/32GB
Audio: HD Audio Subsystem by Realtek ALC283
LAN: Realtek RTL8111G Gigabit Fast Ethernet Controller
USB: 1 x USB3.0 Port, 2 x USB2.0 Ports
Video Output: 1 x HDMI Port, 1 x VGA Port
Wireless: WiFi 802.11 b/g/n & Bluetooth 4.0
PCB Size: 115 x 75 mm
Dimension: 135 x 83 x 40 mm
VESA Support: 75mm / 100mm
Adapter Input: AC 100-240V, Output: DC 12V / 3A
OS Support: Linux-based OS, Windows 7 (via mSATA SSD), Windows 8/8.1
Thanks to ECS for providing the LIVA X for review!
Packaging and Contents
The LIVA X arrives in a smaller box than its predecessor, and one with a satin finish because it's extra fancy.
Introduction and Features
Corsair’s new Carbide Series 330R Titanium Edition case is an update to their popular 330R quiet mid-tower enclosure. The new 330R Titanium Edition features both cosmetic and functional changes with the addition of a Titanium-look brushed aluminum front panel and three-speed fan control switch. In addition, the 330R incorporates excellent sound absorption material for quiet operation, numerous cooling options, and support for multiple extended-length VGA cards. The 330R enclosure features a full-length, hinged front door and comes with one 140mm intake fan in the front and one 120mm exhaust fan on the back, with five optional fan mounting locations along with support for liquid cooling radiators. There are currently 18 different models in the Carbide Series ranging from $49.99 up to $149.99 USD.
Foundation for a quiet PC
Here is what Corsair has to say about their Carbide Series 330R Titanium Edition enclosure: “The Carbide Series 330R Titanium Edition starts with the award-winning original 330R, and adds a brushed aluminum front panel with a three-speed fan controller. It’s designed for systems that will go into media rooms, bedrooms, dorm rooms, or any place where both silence and performance are essential. Sound damped doors and panels and clever intake fan design are combined with generous expansion room and builder-friendly features to allow you to build a silent PC that can pack a lot of power for gaming and high definition media streaming.”
Introduction and Technical Specifications
Courtesy of ECS
The ECS Z97-Machine motherboard is one of the boards in ECS' L337 product line, built around the Intel Z97 Express chipset. ECS rethought their board design with the Z97-Machine, creating a stripped-down, enthusiast-friendly product that does not compromise in any of the design areas important to the expected performance of the board. At an MSRP of $139.99, ECS hits a price point that, in light of the offered features and performance, many other manufacturers of Intel Z97-based boards have failed to reach.
Courtesy of ECS
The ECS Z97-Machine motherboard offers an interesting cost-to-performance proposition, cutting back on unnecessary features to keep the overall cost down while not sacrificing the quality of the core components. ECS designed the board with a 6-phase digital power delivery system, using high-efficiency chokes (ICY CHOKES), MOSFETs rated at up to 90% efficiency, and Nichicon-sourced aluminum capacitors for optimal board performance under any operating conditions. The Z97-Machine board offers the following in-built features: four SATA 3 ports; an M.2 (NGFF) 10 Gb/s port; an Intel I218-V GigE NIC; two PCI-Express Gen3 x16 slots; 3 PCI-Express x1 slots; 2-digit diagnostic LED display; on-board power and reset buttons; voltage measurement points; Realtek audio solution with ESS Sabre32 DAC; integrated VGA, DVI, and HDMI video port support; and USB 2.0 and 3.0 port support.
NVIDIA's Tegra X1
NVIDIA seems to like being on a one-year cycle with their latest Tegra products. Many years ago we were introduced to the Tegra 2, and the year after that the Tegra 3, and the year after that the Tegra 4. Well, NVIDIA did spice up their naming scheme to get away from the numbers (not to mention the potential stigma of how many of those products actually made an impact in the industry). Last year's entry was the Tegra K1, based on the Kepler graphics technology. These products were interesting due to the use of the very latest, cutting-edge graphics technology in a mobile/low-power format. The 64-bit Tegra K1 variant used two “Denver” cores that were actually designed by NVIDIA.
While technically interesting, the Tegra K1 series has made about the same impact as the previous versions. The Nexus 9 was the biggest win for NVIDIA with these parts, and we have heard of a smattering of automotive companies using the Tegra K1 in their applications. NVIDIA uses the Tegra K1 in their latest Shield tablet, but they do not typically release data on the number of products sold. The Tegra K1 looks to be the most successful product since the original Tegra 2, but the question of how well it actually sold looms over the entire brand.
So why the history lesson? Well, we have to see where NVIDIA has been to get a good idea of where they are heading next. Today, NVIDIA is introducing the latest Tegra product, and it is going in a slightly different direction than what many had expected.
The reference board with 4 GB of LPDDR4.
Introduction and Features
be quiet! claims to be Germany’s number one brand in PC power supplies, and the company is continuing to expand sales into North American markets. In this review we will be taking a detailed look at the new be quiet! Straight Power 10 800W power supply with Cable Management. There are four power supplies in the Straight Power 10 CM Series: 500W, 600W, 700W, and 800W models.
be quiet! designed the Straight Power 10 CM Series to provide high efficiency with minimal noise for systems that demand whisper-quiet operation without compromising on power quality. In addition to the Straight Power 10 Series, be quiet! offers a full range of power supplies in ATX, SFX, and TFX form factors.
(Courtesy of be quiet!)
All of the new Straight Power 10 Cable Management Series power supplies are semi-modular (all cables are modular except for the fixed 24-pin ATX cable). Along with 80 Plus Gold certified efficiency, the Straight Power 10 800W modular power supply has been designed for quiet operation: be quiet!’s latest SilentWings 3 135mm fan starts out very slow and remains slow and quiet from low through mid power levels, keeping the unit virtually silent under typical loads.
be quiet! Straight Power 10 800W CM PSU Key Features:
• 800W continuous DC output (ATX12V v2.4, EPS 2.92 compliant)
• Virtually inaudible SilentWings3 135mm cooling fan
• 80 PLUS Gold certified efficiency (up to 93%)
• Premium 105°C rated parts enhance stability and reliability
• Powerful GPU support with four PCI-E connectors
• User-friendly cable management reduces clutter and improves airflow
• NVIDIA SLI Ready and AMD CrossFire X certified
• ErP 2014 ready and meets Energy Star 6.0 guidelines
• Zero load design supports Intel’s Deep Power Down C6 & C7 modes
• Fully Intel Haswell compatible
• Active Power Factor correction (0.99) with Universal AC input
• German product conception, design and quality control
• Safety Protections : OCP, OVP, UVP, SCP, OTP, and OPP
• 5-Year warranty
• MSRP for the Straight Power 10 800W CM PSU: $169.00 USD
Big Power, Small Size
Though the mindset that a small PC is a slow PC is fading, there are still quite a few readers out there who believe the size of your components indicates how well they perform. That couldn't be further from the truth, and this week we decided to build a small, but not tiny, PC to showcase that small can be beautiful too!
Below you will find a complete list of the parts and components used in our build. But let me say right off the bat, to head off as much vitriol in the comments as possible: there are quite a few ways you could build this system to get a lower price, higher performance, a quieter design, etc. Our selections were based on a balance of price and performance, with a nod towards expansion in a few cases.
Take a look:
|MicroATX Gaming Build| |
|---|---|
|Processor|Intel Core i7-4790K - $334|
| |Corsair Hydro Series H80i - $87|
|Motherboard|Gigabyte Z97MX-Gaming 5 - $127|
|Memory|G.Skill Ripjaws X 8GB DDR3-2133 - $88|
|Graphics Card|EVGA GeForce GTX 970 FTW - $399|
|Storage|Samsung 250GB 850 EVO - $139|
| |Western Digital 2TB Green - $79|
|Case|Corsair Carbide Series Air 240 - $89|
|Power Supply|Seasonic Platinum 860 watt PSU - $174|
|OS|Windows 8.1 x64 - $92|
|Total Price|$1602 - Amazon Full Cart|
The starting point for this system is the Intel Core i7-4790K, the top-end Haswell processor for the Z97 chipset. The Core i7-4790K is a Devil's Canyon part, created by Intel to appease enthusiasts looking for an overclockable, high-clocked quad-core. This CPU will lag behind only the likes of the Haswell-E LGA2011 processors, but at just $340 or so it is significantly less expensive. Cooling the 4790K is Corsair's Hydro Series H80i, a double-thickness self-contained water cooler.
For the motherboard I selected the Gigabyte Z97MX-Gaming 5, a MicroATX motherboard that combines performance and features in a mATX form factor, perfect for our build. This board includes support for SLI and CrossFire, has audio OP-AMP support, USB ports dedicated for DACs, M.2 storage support, Killer networking and more.
NVIDIA's G-Sync technology, and the monitors that integrate it, continue to be one of the hottest discussion topics surrounding PC technology and PC gaming. We at PC Perspective have dived into the world of variable refresh rate displays in great detail, discussing the technological reasons for its existence, talking with co-creator Tom Petersen in studio, doing the first triple-panel Surround G-Sync testing, and reviewing several different G-Sync monitors available on the market. We were even the first to find the reason behind the reported flickering at 0 FPS on G-Sync monitors.
A lot has happened in the world of displays in the year or more since NVIDIA first announced G-Sync technology, including a proliferation of low cost 4K panels as well as discussion of FreeSync, AMD's standards-based alternative to G-Sync. We are still waiting for our first hands-on time (other than a static demo) with monitors supporting FreeSync / Adaptive-Sync, and it is quite likely that will occur at CES this January. If it doesn't, AMD is going to have some serious explaining to do...
But today we are looking at the new Acer XB270H, a 1920x1080 27-in monitor with G-Sync support and a 144 Hz refresh rate, a unique combination. In fact, after a quick search of Newegg.com and Amazon.com, there is no other 27-in 144 Hz 1080p monitor on the market that we are aware of. But does this monitor offer the same kind of experience as the ASUS ROG Swift PG278Q or even the Acer XB280HK 4K G-Sync panels?
Drobo is frequently referred to as ‘the Apple of external storage products’. They earned this nickname because their products go for the simplest possible out-of-the-box experience. Despite that simplicity, the BeyondRAID concept these units employ remains extremely robust and highly resistant to data loss in even the most extreme cases of drive failure. I reviewed the DroboPro 8-bay unit over 5 years ago and was so impressed by it that I continue to use one to this day (and it has never lost data, despite occasional hard drive failures).
In the 5 years since our review of the DroboPro, Drobo (then known as Data Robotics) has also had a bit of an Apple story. Their original CEO started the company but was ousted by the board in late 2009. He then started Connected Data in 2011, which quickly grew to the point where it merged with Drobo in 2013. This was not just a merger of companies, it was a merger of their respective products. The original Transporter was only a single-drive unit, while Drobo’s tech supercharged that personal cloud capability to scale all the way up to corporate environments.
Many would say that during the period when their original CEO was absent, Drobo’s products turned more towards profitability, perhaps too soon for the company, as the products released in that span were less than stellar. We actually got a few of those Drobos in for review, but their performance was so inconsistent that we spent more time trying to figure out what was causing the issues than completing a review we could stand behind. With their founder back in the CEO chair, Drobo's path turned back to its roots: making a good, fast, and low-cost product for their customers. This was what they wanted to accomplish back in 2009, but in many ways the available tech was not yet up to speed. USB 2.0 was the fastest widely available standard, aside from iSCSI over Gigabit Ethernet (which was pricey to implement and appeared in the DroboPro). Nowadays things are very different: USB 3.0 controllers are vastly more compatible and faster than they used to be, as are SATA controllers and ARM microcontrollers. These developments would ultimately enable Drobo to introduce what they wanted to in the first place:
This is the third generation 4-Bay Drobo. The 4-Bay model is what started it all for them, but it was a bit underpowered and limited to USB 2.0 speeds. The second gen unit launched in mid-2008, adding FireWire as a faster connection option, but it was still slower than most would have liked given its $500 price tag. This third generation unit promises to change all of that.
USB is once again the only connectivity option, but this time it’s USB 3.0. There have previously been other Drobos with this as an option (Drobo S, S gen 2, 5D, Mini), but many of those units saw compatibility issues with some USB 3.0 host controllers, and we experienced some of those frustrating incompatibilities first hand. Drobo aims to put that behind them with a revised chipset, and today we will put it all to the test.
Introduction, Specs, and First Impressions
BitFenix has been making enclosures for the PC market since 2010 (with the massive Colossus E-ATX case), and came to prominence a couple of years later with the introduction of the Prodigy enclosure. While the company has expanded to produce power supplies and peripherals they are still primarily a case manufacturer, as evidenced by the now 31 different models on their product page. Not content to iterate on their existing designs, BitFenix has consistently introduced new chassis ideas for different form-factors and needs.
We reviewed the Colossus Micro-ATX case back in March, and here again we’re looking at an enclosure built for the venerable micro-ATX form factor. Quite the opposite of the Colossus Micro-ATX's squat design, the Pandora is smooth and very slim.
In the world of computer cases there are many variations, but most are boxes with splashes of style and the occasional window. Companies like In Win sit at the opposite end of the spectrum, but a case designed with that kind of commitment to artistic intent often carries a considerable price tag, and In Win consistently prices itself out of the mainstream market. So what about the middle ground? Enter the BitFenix Pandora. It boasts eye-catching looks, a slim design that seems even more so given the curved panels, and even a color LCD screen that can be programmed with the image file of your choice!
The Pandora features a programmable color LCD display, to which I affixed this incredible logo
I don’t want to descend into meaningless superlatives, but the Pandora is a striking design. When it was shown at Computex earlier in 2014 it was listed as a mini-ITX enclosure, and while it definitely supports mini-ITX motherboards, it is the final product’s micro-ATX support that we focus on in this review. And while it would have been large as a mini-ITX enclosure, the Pandora is fairly small for a micro-ATX case, most notably due to that slim profile. This comes at a price, as there won’t be as much room for storage with such a narrow width (and those looking for any optical drive support must look elsewhere). And speaking of price, while the "core" version of the case starts at around $110, this version with the programmable display is currently selling for just under $160. Steep, but not outrageous either.
Meet the M320
Logitech is a brand synonymous with mice, joysticks, and other peripherals, having provided handy ways to interact with your computer for over 20 years. Anyone who has used a computer for any amount of time knows Logitech and has used a variety of their products. Their peripheral lineup has come a long way from its beginnings, with washable keyboards, webcams, and mice with over two dozen programmable buttons.
In this case we are looking at the M320 Wireless Mouse, with three buttons and a scroll wheel, a rubberized grip shaped for the right hand, and an offset optical sensor with 1000 dpi resolution.
The Logitech M320 comes in a user-friendly clamshell package with a cut-out flap on the back, which is actually effective in opening the packaging without the need for a utility knife or a couple of stitches on your hand. Perhaps even more impressive is the fact that it ships with a battery included; not the rechargeable kind, but certainly a nice touch for those of us who remember receiving toys that were unusable until someone made a trip to the store to pick up the required mix of AAA's, D's, or 9V's. The documentation claims the battery will last for two years, and while there was obviously no way to put that to the test, the automatic sleep mode and physical power switch should ensure that your battery life will not be inconveniently short.