Introduction and Background
We first got a peek at USB 3.1 at CES 2015. MSI had a cool demo showing some throughput figures, including read and write speeds as high as 690 MB/s, well over the ~450 MB/s we see on USB 3.0 options shipping today.
We were of course eager to play around with this for ourselves, and MSI was happy to oblige, sending along a box of goodies:
Stuff we will be testing today (Samsung T1 was not part of the MSI demo).
For those unaware, USB 3.1 (also known as SuperSpeed+), while only a 0.1 increment in numbering, incorporates a doubling of raw throughput and some dramatic improvements to the software overhead of the interface.
Don't confuse the USB 3.1 standard with the new USB Type-C connector - they are independent of each other.
Yes, you’re all going to have to buy *more* cables in the future.
Type-C connectors will enable simpler cable designs and thinner connections, but USB 3.1 will exist in both Type-A/B and Type-C form going forward. Our benchmarking today will utilize Type-A.
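As context for those throughput numbers, the interface ceilings fall straight out of the spec-sheet math. Here is a quick sketch; the line rates and encoding schemes are from the published USB 3.0 and USB 3.1 (Gen 2) specs, and real devices land below these ceilings due to protocol overhead, which is why ~450 and ~690 MB/s are typical measured results:

```python
# Back-of-the-envelope ceilings for USB 3.0 vs USB 3.1 (Gen 2).
# USB 3.0: 5 Gbps line rate with 8b/10b encoding (80% efficient).
# USB 3.1: 10 Gbps line rate with 128b/132b encoding (~97% efficient).

def usable_mb_per_s(line_rate_gbps, encoding_efficiency):
    """Line rate minus encoding overhead, in MB/s (1 MB = 1e6 bytes)."""
    return line_rate_gbps * 1e9 * encoding_efficiency / 8 / 1e6

usb30 = usable_mb_per_s(5, 8 / 10)      # -> 500 MB/s ceiling
usb31 = usable_mb_per_s(10, 128 / 132)  # -> ~1212 MB/s ceiling
print(f"USB 3.0 ceiling: {usb30:.0f} MB/s, USB 3.1 ceiling: {usb31:.0f} MB/s")
```

The encoding change matters almost as much as the doubled line rate: 8b/10b burns 20% of the link, while 128b/132b burns about 3%.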
New Features and Specifications
It is increasingly obvious that in the high end smartphone and tablet market, much like we saw occur over the last several years in the PC space, consumers are becoming more concerned with features and experiences than just raw specifications. There is still plenty to drool over when looking at and talking about 4K screens in the palm of your hand, octa-core processors and mobile SoC GPUs measuring performance in hundreds of GFLOPS, but at the end of the day the vast majority of consumers want something that does something to “wow” them.
As a result, device manufacturers and SoC vendors are shifting priorities for performance and features, and for how those are presented both to the public and to the media. Take this week’s Qualcomm event in San Diego, where a team of VPs, PR personnel and engineers walked me through the new Snapdragon 810 processor. Rather than showing slide after slide of comparative performance numbers against the competition, I was shown room after room of demos: Wi-Fi, LTE, 4K capture and playback, gaming capability, thermals, antenna modifications, etc. The goal is to showcase the experience of the entire platform – something that Qualcomm has been providing for longer than just about anyone in this business, while educating consumers on the need for balance too.
As a 15-year veteran of the hardware space, my first reaction here couldn’t have been scripted any more precisely: a company that doesn’t show performance numbers has something to hide. But I was given time with a reference platform featuring the Snapdragon 810 processor in a tablet form-factor, and the results show impressive increases over the 801 and 805 processors from the previous family. Rumors of the chip's heat issues seem overblown, but that part will be hard to prove for sure until we get retail hardware in our hands to confirm.
Today’s story will outline the primary feature changes of the Snapdragon 810 SoC, though there was so much detail presented at the event with such a short window of time for writing that I definitely won’t be able to get to it all. I will follow up the gory specification details with performance results compared to a wide array of other tablets and smartphones to provide some context to where 810 stands in the market.
Introduction and Features
Last month we took a look at SilverStone’s small form-factor power supply the SFX-600, which delivered 600 watts from a compact SFX enclosure. Today we are looking at SilverStone’s new Strider Gold ST1500-GS, which is a 1,500 watt ATX form-factor power supply. The Strider Gold 1500W PSU is fully modular and built for high efficiency operation. What makes the ST1500-GS unique is the relatively short enclosure, which is only 180mm (7.1”) deep!
SilverStone ST1500-GS ATX Power Supply
There are currently five different models available in the Strider Gold S Series, which include the ST55F-G, ST65F-G, ST75-GS, ST85F-GS, and ST1500-GS. All of the Strider Gold S Series PSUs are designed to be fully modular, 80 Plus Gold certified, and small in size. While the typical 1500W power supply enclosure measures 220mm (8.7”) deep, the Strider Gold ST1500-GS is housed in a 180mm chassis.
(Courtesy of SilverStone)
SilverStone Strider Gold S Series ST1500-GS PSU Key Features:
• 1,500 watts DC power output (1600W peak power)
• High efficiency with 80 Plus Gold certification
• 100% Modular cables
• 24/7 Continuous power output with 40°C operating temperature
• Strict ±3% voltage regulation and low AC ripple & noise
• Dedicated single +12V rail (125A)
• Quiet 135mm ball bearing fan
• Eight PCI-E 8 / 6+2-pin connectors support multiple high-end graphics adapters
• Conforms to ATX12V and EPS standards
• Universal AC input (100-240V) with Active PFC
• Dimensions: 150mm (W) x 86mm (H) x 180mm (L)
• $319.99 USD
Introduction, Specifications and Packaging
Plextor launched their M6e PCIe SSD in mid-2014. It was the first native PCIe SSD available at consumer retail. While previous solutions such as the OCZ RevoDrive bridged SATA SSD controllers to PCIe through a RAID or VCA device, the M6e went with a Marvell controller that could speak directly to the host system over a PCIe 2.0 x2 link. Since M.2 was not widely available at launch time, Plextor also made the M6e available with a half-height PCIe interposer, making for a painless upgrade for those on older non-M.2 motherboards (which at that time was the vast majority).
With the M6e out for only a few months (and in multiple versions), I was surprised to see Plextor launch an additional version of it at CES this past January. Announced alongside the upcoming M7e, the M6e Black Edition is essentially a pimped out version of the original M6e PCIe:
We left CES with a sample of the M6e Black, but had to divert our attention to a few other pressing issues shortly after. With all of that behind us, it's time to get back to cranking out the storage goodness, so let's get to it!
A baker's dozen of GTX 960
Back on the launch day of the GeForce GTX 960, we hosted NVIDIA's Tom Petersen for a live stream. During the event, NVIDIA and its partners provided ten GTX 960 cards for our live viewers to win, which we handed out over the course of about an hour and a half. An interesting idea was proposed during the event: what would happen if we tried to overclock all of the product NVIDIA had brought along to see what the distribution of results looked like? After notifying all the winners of their prizes and asking for permission from each, we started the arduous process of testing and overclocking a total of 13 different GTX 960 cards (10 prizes plus our 3 retail units already in the office).
Hopefully we will be able to provide a solid base of knowledge for buyers of the GTX 960 that we don't normally have the opportunity to offer: what range of overclocking can you expect, and what is the average or median result? I think you will find the data interesting.
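As a sketch of the summary statistics we're aiming for, here is how the range, average, and median fall out of a batch of peak stable core clocks. The numbers below are invented purely for illustration, not our results:

```python
# Hypothetical peak stable core clocks (MHz) for a batch of 13 cards.
# These values are made up to illustrate the summary, not measured data.
import statistics

peak_clocks = [1478, 1490, 1502, 1515, 1463, 1521, 1489, 1497,
               1508, 1470, 1484, 1512, 1495]  # one entry per card

print("range:  ", min(peak_clocks), "-", max(peak_clocks), "MHz")
print("average:", round(statistics.mean(peak_clocks)), "MHz")
print("median: ", statistics.median(peak_clocks), "MHz")
```

With 13 samples the median is simply the 7th value after sorting, which makes it robust against a single unusually good or bad chip skewing the picture.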
The 13 Contenders
Our collection of thirteen GTX 960 cards includes a handful each from ASUS, EVGA and MSI. The ASUS cards are all STRIX models, the EVGA cards are of the SSC variety, and the MSI cards include a single Gaming model and three 100ME cards. (The only difference between the Gaming and 100ME MSI cards is the color of the cooler.)
To be fair to the prize winners, I actually assigned each of them a specific graphics card before opening them up and testing them. I didn't want to be accused of favoritism by giving the best overclockers to the best readers!
SFF PCs get an upgrade
Ultra compact computers, otherwise known as small form factor PCs, are a rapidly growing market as consumers realize that, for nearly all purposes other than gaming and video editing, Ultrabook-class hardware is "fast enough". I know that some of our readers will debate that point, and we welcome the discussion, but as CPU architectures continue to improve in both performance and efficiency, more performance can be packed into smaller spaces. The Gigabyte BRIX platform is the exact result you would expect from that combination.
Previously, we have seen several other Gigabyte BRIX devices including our first desktop interaction with Iris Pro graphics, the BRIX Pro. Unfortunately though, that unit was plagued by noise issues - the small fan spun pretty fast to cool a 65 watt processor. For a small computer that would likely sit on top of your desk, that's a significant drawback.
Intel Ivy Bridge NUC, Gigabyte BRIX S Broadwell, Gigabyte BRIX Pro Haswell
This time around, Gigabyte is using the new Broadwell-U architecture in the Core i7-5500U and its significantly lower, 15 watt TDP. That does come with some specification concessions though, including a dual-core CPU instead of a quad-core CPU and a peak Turbo clock rate that is 900 MHz lower. Comparing the Broadwell BRIX S to the more relevant previous generation based on Haswell, we get essentially the same clock speed, a similar TDP, but also an improved core architecture.
Today we are going to look at the new Gigabyte BRIX S featuring the Core i7-5500U and an NFC chip for some interesting interactions. The "S" designates that this model could support a full size 2.5-in hard drive in addition to the mSATA port.
ARM Releases Top Cortex Design to Partners
ARM has an interesting history of releasing products. The company was once in the shadowy background of the CPU world, but with the explosion of mobile devices and its relevance in that market, ARM has had to adjust how it approaches the public with its technologies. For years ARM has announced products and technology, only to see them ship one to two years down the line. It seems that with the increased competition in the marketplace from Apple, Intel, NVIDIA, and Qualcomm, ARM is now pushing to license out its new IP in a way that will enable its partners to achieve a faster time to market.
The big news this time is the introduction of the Cortex-A72. This is a brand new design based on the ARMv8-A instruction set. It is a 64 bit capable processor that is also backwards compatible with 32 bit applications programmed for ARMv7 based processors. ARM does not go into great detail about the product other than to say that it is significantly faster than the previous Cortex-A15 and Cortex-A57.
The previous Cortex-A15 processors were announced several years back and made their first introduction in late 2013/early 2014. These were still 32 bit processors and, while they had good performance for the time, they did not stack up well against the latest A8 SoCs from Apple. The A53 and A57 designs were also announced around two years ago. These are the first 64 bit designs from ARM and were meant to compete with the latest custom designs from Apple and Qualcomm’s upcoming 64 bit part. We are only now seeing these parts make it into production, and even Qualcomm has licensed the A53 and A57 designs to ensure a faster time to market for this latest batch of next-generation mobile devices.
We can look back over the past five years and see that ARM is moving forward in announcing their parts and then having their partners ship them within a much shorter timespan than we were used to seeing. ARM is hoping to accelerate the introduction of its new parts within the next year.
Battlefield 4 Results
At the end of my first Frame Rating evaluation of the GTX 970 after the discovery of the memory architecture issue, I proposed the idea that SLI testing would need to be done to come to a more concrete conclusion on the entire debate. It seems that our readers and the community at large agreed with us in this instance, repeatedly asking for those results in the comments of the story. After spending the better part of a full day running and re-running SLI results on a pair of GeForce GTX 970 and GTX 980 cards, we have the answers you're looking for.
Today's story is going to be short on details and long on data, so if you want the full back story on what is going on and why we are taking a specific look at the GTX 970 in this capacity, read here:
- Part 1: NVIDIA issues initial statement
- Part 2: Full GTX 970 memory architecture disclosed
- Part 3: Frame Rating: GTX 970 vs GTX 980
- Part 4: Frame Rating: GTX 970 SLI vs GTX 980 SLI (what you are reading now)
Okay, are we good now? Let's dive into the first set of results in Battlefield 4.
Just as I did with the first GTX 970 performance testing article, I tested Battlefield 4 at 3840x2160 (4K) and utilized the game's ability to linearly scale resolution to help me increase GPU memory allocation. In the game settings you can change that scaling option by a percentage: I went from 110% to 150% in 10% increments, increasing the load on the GPU with each step.
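To make the stepping concrete, here is a quick sketch of the pixel load at each resolution-scale step, assuming the percentage applies to each axis of the 3840x2160 base resolution (so 150% renders 1.5x the width and 1.5x the height):

```python
# Effective render resolution and relative pixel load at each scale step,
# assuming the scale percentage applies per-axis to a 3840x2160 base.
base_w, base_h = 3840, 2160

for pct in range(110, 151, 10):
    s = pct / 100
    w, h = round(base_w * s), round(base_h * s)
    print(f"{pct}%: {w}x{h} ({w * h / 1e6:.1f} MPix, {s * s:.2f}x base pixel load)")
```

Under that assumption, the 150% step pushes roughly 2.25x the pixels of plain 4K, which is exactly the kind of pressure needed to drive memory allocation past the 3.5GB mark.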
Memory allocation between the two SLI configurations was similar, but not as perfectly aligned with each other as we saw with our single GPU testing.
In a couple of cases, at 120% and 130% scaling, the GTX 970 cards in SLI are actually each using more memory than the GTX 980 cards. That difference is only ~100MB but that delta was not present at all in the single GPU testing.
It has been an abnormal week for us here at PC Perspective. Our typical review schedule has pretty much flown out the window, and the past seven days have been filled with learning, researching, retesting, and publishing. That might sound like the norm, but in these cases the process was initiated by tips from our readers. Last Saturday (24 Jan), a few things were brewing:
- Ryan was informed by NVIDIA that the memory layout of the GTX 970 was different than expected.
- The huge (now 168 page) overclock.net forum thread about the Samsung 840 EVO slowdown was once again gaining traction.
- Someone got G-Sync working on a laptop integrated display.
We had to do a bit of triage here of course, as we can only research and write so quickly. Ryan worked the GTX 970 piece as it was the hottest item. I began a few days of research and testing on the 840 EVO slowdown issue reappearing on some drives, and we kept tabs on that third thing, which at the time seemed really farfetched. With those first two items taken care of, Ryan shifted his efforts to GTX 970 SLI testing while I shifted my focus to finding out if there was any credence to this G-Sync laptop thing.
A few weeks ago, an ASUS Nordic Support rep inadvertently leaked an interim build of the NVIDIA driver. This was a mobile driver build (version 346.87) focused on their G751 line of laptops. One recipient of this driver link posted it to the ROG forum back on the 20th. A fellow by the name of Gamenab, owning the same laptop cited in that thread, presumably stumbled across this driver, tried it out, and was more than likely greeted by this popup after the installation completed:
Now I know what you’re thinking, and it’s probably the same thing anyone would think. How on earth is this possible? To cut a long story short, while the link to the 346.87 driver was removed shortly after being posted to that forum, we managed to get our hands on a copy of it, installed it on the ASUS G751 that we had in for review, and wouldn’t you know it we were greeted by the same popup!
Ok, so it’s a popup, could it be a bug? We checked NVIDIA control panel and the options were consistent with those of a G-Sync connected system. We fired up the pendulum demo and watched the screen carefully, passing the machine around the office to be inspected by all. We then fired up some graphics benchmarks that were well suited to show off the technology (Unigine Heaven, Metro: Last Light, etc.), and everything looked great – smooth, steady pans with no juddering or tearing to be seen. Ken Addison, our Video Editor and jack of all trades, researched the panel type and found that it was likely capable of a 100 Hz refresh. We quickly dug in, created a custom profile, hit apply, and our 75 Hz G-Sync laptop was instantly transformed into a 100 Hz G-Sync laptop!
Ryan's Note: I think it is important here to point out that we didn't just look at demos and benchmarks for this evaluation but actually looked at real-world gameplay situations. Playing through Metro: Last Light showed very smooth pans and rotation, Assassin's Creed played smoothly as well and flying through Unigine Heaven manually was a great experience. Crysis 3, Battlefield 4, etc. This was NOT just a couple of demos that we ran through - the variable refresh portion of this mobile G-Sync enabled panel was working and working very well.
At this point in our tinkering, we had no idea how or why this was working, but there was no doubt that we were getting a similar experience as we have seen with G-Sync panels. As I digested what was going on, I thought surely this can’t be as good as it seems to be… Let’s find out, shall we?
Well here we are again with this Samsung 840 EVO slow down issue cropping up here, there, and everywhere. The story for this one is so long and convoluted that I’m just going to kick this piece off with a walk through of what was happening with this particular SSD, and what was attempted so far to fix it:
The Samsung 840 EVO is a consumer-focused TLC SSD. Normally TLC SSDs suffer from reduced write speeds when compared to their MLC counterparts, as writing operations take longer for TLC than for MLC (SLC is even faster). Samsung introduced a novel way of speeding things up with their TurboWrite caching method, which adds a fast SLC buffer alongside the slower flash. This buffer is several GB in size, and helps the 840 EVO maintain fast write speeds in most typical usage scenarios, but the issue with the 840 EVO is not its write speed – the problem is read speed. Initial reviews did not catch this issue as it only impacted data that had been stagnant for a period of roughly 6-8 weeks. As files aged their read speeds were reduced, starting from the speedy (and expected) 500 MB/sec and ultimately reaching a worst case speed of 50-100 MB/sec:
There were other variables that impacted the end result, which further complicated the flurry of reports coming in from seemingly everywhere. The slow speeds turned out to be the result of the SSD controller working extra hard to apply error correction to the data coming in from flash that was (reportedly) miscalibrated at the factory. This miscalibration caused the EVO to incorrectly adapt to cell voltage drifts over time (an effect that occurs in all flash-based storage – TLC being the most sensitive). Ambient temperature could even impact the slower read speeds as the controller was working outside of its expected load envelope and thermally throttled itself when faced with bulk amounts of error correction.
An example of file read speed slowing relative to age, thanks to a tool developed by Techie007.
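A minimal sketch of the kind of measurement that tool automates is below: sequentially read each file, record its throughput, and relate that to the file's age. The file list is up to the user, and note that on a real run the OS page cache must be avoided (e.g. first read after a reboot) or speeds will be wildly inflated:

```python
# Sketch: measure sequential read throughput per file and relate it to
# file age, the pattern used to demonstrate the 840 EVO slowdown.
# Caveat: a file already in the OS page cache will read at RAM speed,
# so meaningful numbers require cold (uncached) reads.
import os
import time

def read_speed_mb_s(path, chunk=1 << 20):
    """Sequentially read a file in 1 MB chunks; return throughput in MB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk):
            pass
    return size / (time.perf_counter() - start) / 1e6

def age_days(path):
    """Days since the file was last written."""
    return (time.time() - os.path.getmtime(path)) / 86400

# Example usage over some list of large files:
# for p in file_list:
#     print(f"{age_days(p):7.1f} days old -> {read_speed_mb_s(p):6.0f} MB/s")
```

Plotting speed against age is what made the pattern obvious: fresh files at ~500 MB/sec, stagnant files sliding toward 50-100 MB/sec.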
Once the community reached sufficient critical mass to get Samsung’s attention, they issued a few statements and ultimately pushed out a combination of firmware and a tool to fix EVOs that were seeing this issue. The 840 EVO Performance Restoration Tool was released just under two months after the original thread on the Overclock.net forums was started. Despite a quick update a few weeks later, that was not a bad turnaround considering Intel took three months to correct a firmware issue on one of their own early SSDs. While the Intel patch restored full performance to their X25-M, the Samsung update does not appear to be faring so well now that users have logged a few additional months after applying their fix.
A Summary Thus Far
UPDATE 2/2/15: We have another story up that compares the GTX 980 and GTX 970 in SLI as well.
It has certainly been an interesting week for NVIDIA. It started with the release of the new GeForce GTX 960, a $199 graphics card that brought the latest iteration of Maxwell's architecture to a lower price point, competing with the Radeon R9 280 and R9 285 products. But then the proverbial stuff hit the fan with a memory issue on the GeForce GTX 970, the best selling graphics card of the second half of 2014. NVIDIA responded to the online community on Saturday morning, but that was quickly followed up with a more detailed exposé on the GTX 970 memory hierarchy, which included a couple of important revisions to the specifications of the GTX 970 as well.
At the heart of all this technical debate is a performance question: does the GTX 970 suffer from lower performance because of the 3.5GB/0.5GB memory partitioning configuration? Many forum members and PC enthusiasts have been debating this for weeks, with many coming away with an emphatic yes.
The newly discovered memory system of the GeForce GTX 970
Yesterday I spent the majority of my day trying to figure out a way to validate or invalidate these types of performance claims. As it turns out, finding specific game scenarios that will consistently hit targeted memory usage levels isn't as easy as it might first sound, and simple things like start-up order and the order in which settings are changed can vary it as well. Using Battlefield 4 and Call of Duty: Advanced Warfare though, I think I have presented a couple of examples that demonstrate the issue at hand.
Performance testing is a complicated story. Lots of users have attempted to measure performance on their own setup, looking for combinations of game settings that sit below the 3.5GB threshold and those that cross above it, into the slower 500MB portion. The issue for many of these tests is that they lack access to both a GTX 970 and a GTX 980 to really compare performance degradation between cards. That's the real comparison to make - the GTX 980 does not separate its 4GB into different memory pools. If it has performance drops in the same way as the GTX 970 then we can wager the memory architecture of the GTX 970 is not to blame. If the two cards perform differently enough, beyond the expected performance delta between two cards running at different clock speeds and with different CUDA core counts, then we have to question the decisions that NVIDIA made.
There has also been concern over the frame rate consistency of the GTX 970. Our readers are already aware of how deceptive an average frame rate alone can be, and why looking at frame times and frame time consistency is so much more important to guaranteeing a good user experience. Our Frame Rating method of GPU testing has been in place since early 2013 and it tests exactly that - looking for consistent frame times that result in a smooth animation and improved gaming experience.
Users at reddit.com have been doing a lot of subjective testing
We will be applying Frame Rating to our testing today of the GTX 970 and its memory issues - does the division of memory pools introduce additional stutter into game play? Let's take a look at a couple of examples.
Introduction and Technical Specifications
Courtesy of Primochill
The Wet Bench open-air test bench is Primochill's premier case offering. This acrylic-based enclosure features an innovative design allowing for easy access to the motherboard and PCIe cards without the hassle of removing case panels and mounting screws associated with a typical case motherboard change out. With a starting MSRP of $139.95, the Wet Bench is priced competitively in light of the configurability and features offered with the case.
Courtesy of Primochill
Courtesy of Primochill
The Wet Bench is unique in its design - Primochill built it to support custom water cooling solutions from the ground up. The base kit supports mounting the water cooling kit's radiator to the back plate, up to a 360mm size (supporting 3x120mm fans). Primochill also offers an optional backplate with support for up to a 480mm radiator (supporting up to 4x120mm fans).
A few secrets about GTX 970
UPDATE 1/28/15 @ 10:25am ET: NVIDIA has posted in its official GeForce.com forums that they are working on a driver update to help alleviate memory performance issues in the GTX 970 and that they will "help out" those users looking to get a refund or exchange.
Yes, that last 0.5GB of memory on your GeForce GTX 970 does run slower than the first 3.5GB. More interesting than that fact is the reason why it does, and why the result is better than you might have otherwise expected. Last night we got a chance to talk with NVIDIA’s Senior VP of GPU Engineering, Jonah Alben on this specific concern and got a detailed explanation to why gamers are seeing what they are seeing along with new disclosures on the architecture of the GM204 version of Maxwell.
NVIDIA's Jonah Alben, SVP of GPU Engineering
For those looking for a little background, you should read over my story from this weekend that looks at NVIDIA's first response to the claims that the GeForce GTX 970 cards currently selling were only properly utilizing 3.5GB of the 4GB frame buffer. While it definitely helped answer some questions, it raised plenty more, which is why we requested a talk with Alben, even on a Sunday.
Let’s start with a new diagram drawn by Alben specifically for this discussion.
GTX 970 Memory System
Believe it or not, every issue discussed in any forum about the GTX 970 memory issue is going to be explained by this diagram. Along the top you will see 13 enabled SMMs, each with 128 CUDA cores for a total of 1664 as expected. (Three grayed out SMMs represent those disabled from a full GM204 / GTX 980.) The most important part here is the memory system though, connected to the SMMs through a crossbar interface. That interface has 8 total ports to connect to collections of L2 cache and memory controllers, all of which are utilized in a GTX 980. With a GTX 970 though, only 7 of those ports are enabled, taking one of the combination L2 cache / ROP units along with it. However, the 32-bit memory controller segment remains.
You should take two things away from that simple description. First, despite initial reviews and information from NVIDIA, the GTX 970 actually has fewer ROPs and less L2 cache than the GTX 980. NVIDIA says this was an error in the reviewer’s guide and a misunderstanding between the engineering team and the technical PR team on how the architecture itself functioned. That means the GTX 970 has 56 ROPs and 1792 KB of L2 cache compared to 64 ROPs and 2048 KB of L2 cache for the GTX 980. Before people complain about the ROP count difference as a performance bottleneck, keep in mind that the 13 SMMs in the GTX 970 can only output 52 pixels/clock and the seven segments of 8 ROPs each (56 total) can handle 56 pixels/clock. The SMMs are the bottleneck, not the ROPs.
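That bottleneck argument is simple enough to check with arithmetic pulled straight from the paragraph above:

```python
# The GTX 970 pixel-rate bottleneck: 13 SMMs each outputting 4 pixels
# per clock versus 7 enabled ROP partitions of 8 ROPs each (one pixel
# per clock per ROP).
smm_count, pixels_per_smm = 13, 4
rop_partitions, rops_per_partition = 7, 8

smm_rate = smm_count * pixels_per_smm           # 52 pixels/clock from the SMMs
rop_rate = rop_partitions * rops_per_partition  # 56 pixels/clock of ROP capacity
print(f"SMM output: {smm_rate} px/clk, ROP capacity: {rop_rate} px/clk")
print("Bottleneck:", "SMMs" if smm_rate < rop_rate else "ROPs")
```

Since 52 is less than 56, the shader array runs out of pixels before the ROPs run out of capacity, which is why the reduced ROP count does not cost the GTX 970 fill rate in practice.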
A new GPU, a familiar problem
Editor's Note: Don't forget to join us today for a live streaming event featuring Ryan Shrout and NVIDIA's Tom Petersen to discuss the new GeForce GTX 960. It will be live at 1pm ET / 10am PT and will include ten (10!) GTX 960 prizes for participants! You can find it all at http://www.pcper.com/live
There are no secrets anymore. Calling today's release of the NVIDIA GeForce GTX 960 a surprise would be like calling another Avengers movie unexpected. If you didn't just assume it was coming, chances are the dozens of leaked slides and performance figures got your attention. So here it is, today's the day: NVIDIA finally upgrades the mainstream segment that had been fed by the GTX 760 for more than a year and a half. But does the brand new GTX 960 based on Maxwell move the needle?
But as you'll soon see, the GeForce GTX 960 is a bit of an odd duck in terms of new GPU releases. As we have seen several times in the last year or two with a stagnant process technology landscape, the new cards aren't going to be wildly better performing than the current cards from either NVIDIA or AMD. In fact, there are some interesting comparisons to make that may surprise fans of both parties.
The good news is that Maxwell and the GM206 GPU will price out starting at $199, including overclocked models at that level. But to understand what makes it different from the GM204 part, we first need to dive a bit into the GM206 GPU and how it matches up with NVIDIA's "small" GPU strategy of the past few years.
The GM206 GPU - Generational Complexity
First and foremost, the GTX 960 is based on the exact same Maxwell architecture as the GTX 970 and GTX 980. The power efficiency, the improved memory bus compression and new features all make their way into the smaller version of Maxwell selling for $199 as of today. If you missed the discussion on those new features, including MFAA, Dynamic Super Resolution, and VXGI, you should read that page of our original GTX 980 and GTX 970 story from last September for a bit of context; these are important aspects of Maxwell and the new GM206.
NVIDIA's GM206 is essentially half of the full GM204 GPU that you find on the GTX 980. That includes 1024 CUDA cores, 64 texture units and 32 ROPs for processing, a 128-bit memory bus and 2GB of graphics memory. This results in half of the memory bandwidth at 112 GB/s and half of the peak compute capability at 2.30 TFLOPS.
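Both headline figures follow directly from the halved configuration. A quick sanity check, assuming the GTX 960's published 7 Gbps effective memory data rate and ~1126 MHz base clock (neither stated in the paragraph above):

```python
# Sanity-checking the GM206 headline numbers from its configuration.
# Assumed card specs: 7 Gbps effective GDDR5 data rate, 1126 MHz base clock.
bus_width_bits = 128
gddr5_gbps = 7.0          # effective data rate per pin
cuda_cores = 1024
base_clock_ghz = 1.126

bandwidth = bus_width_bits / 8 * gddr5_gbps      # bytes/clock-lane -> GB/s
tflops = 2 * cuda_cores * base_clock_ghz / 1000  # 2 FLOPs per fused multiply-add
print(f"Bandwidth: {bandwidth:.0f} GB/s, peak compute: {tflops:.2f} TFLOPS")
```

That works out to 112 GB/s and roughly 2.3 TFLOPS, matching the quoted figures; boost clocks push the compute number a bit higher.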
Introducing Windows 10 (Again)
I did not exactly make too many unsafe predictions, but let's recap the Windows 10 Consumer announcement anyway. The briefing was a bit on the slow side, at least if you are used to E3 keynotes, but it contained a fair amount of useful information. Some of the things discussed are future-oriented, but some will arrive soon. So let's get right into it.
Price and Upgrade Options
Microsoft has not announced an official price for Windows 10, if the intent is to install it on a new PC. If you are attempting to upgrade a machine that currently runs Windows 7 or Windows 8.1, then that will be a free upgrade if done within the first year. Windows Phone 8.1 users are also eligible for a no-cost upgrade to Windows 10 if done in the first year.
To quote Terry Myerson of Microsoft: “Once a device is upgraded to Windows 10, we will be keeping it current for the supported lifetime of the device.” This is not elaborated on, but it seems like a strange statement given what we have traditionally expected from Windows. One possible explanation is that Microsoft intends for Windows to be a subscription service going forward, which would be the most obvious extension of “Windows as a Service”. On the other hand, they could be going for the per-device revenue option, with Bing, Windows Store, and other initiatives being the long tail. If so, I am a bit confused about what constitutes a new device for systems that are regularly upgraded, like those our readers are typically interested in. All of that will eventually be made clear, but not yet.
A New Build for Windows 10
Late in the keynote, Microsoft announced the availability of new preview builds for Windows 10. This time, users of Windows Phone 8.1 will also be able to see the work in progress. PC “Insiders” will get access to their build “in the next week” and phones will get access “in February”. Ars Technica seems to believe that this is scheduled for Sunday, February 1st, which is a really weird time to release a build, but their source might be right.
We don't know exactly what will be in it, though. In my predictions, I guessed that a DirectX 12 SDK might be available (or at least some demos) in the next build. That has not been mentioned, which it probably would have been if it were true. I expect the next possibility (if we're not surprised in the next one-to-ten days when the build drops) is the Game Developers Conference (GDC 2015), which starts on March 2nd.
The New Web Browser: Project Spartan
My guess was that Spartan would be based on DirectX 12. Joe Belfiore said that it is using a new, standards-compliant rendering engine and basically nothing more. The event focused on specific features. The first is note taking, which basically turns the web browser into a telestrator that can also accept keyboard comment blocks. The second is a reading mode that alters content into a Microsoft Word-like column. The third is “reading lists”, which is basically a “read it later” feature that does offline caching. The fourth is Adobe PDF support, which works with the other features of Spartan such as note taking and reading lists.
Which Transitions Into Cortana
The fifth feature of Spartan is Cortana integration, which will provide auto-suggestions based on the information that the assistant software has. The example they provided was auto-suggesting the website for Belfiore's wife's flight. Surprisingly, when you use Cortana to control Spartan, she does not say, “There's two of us in here now, remember?” You know, in an attempt to remind you that she's a service that's integrated into the browser.
Otherwise, it's an interesting demo. I might even end up using it when it comes out, but these sorts of things do not really interest me too much. We have long been at the point where, for my usage, the operating system really is not in the way anymore. It feels like there is very little friction between me and getting what I want done, done. Of course, people felt that way about rotary phones until touch-tone came out, so I keep an open mind to better methods. It's just hard to get me excited about voice-activated digital assistants.
As I stated before, DirectX 12 was mentioned but a release date was not confirmed. What they did mention was a bit of relative performance. DirectX 12 supposedly uses about half the power of DirectX 11, which is particularly great for mobile applications. It can also handle scenes with many more objects. A Futuremark demo was displayed, with the DirectX 11 version alongside a DirectX 12 version. The models seem fairly simple, but the DirectX 12 version appears to be running at over 100 FPS where the DirectX 11 version outright fails.
Other gaming features were mentioned. First, Windows 10 will allow shadow recording of the last 30 seconds of footage from any game. You might think that NVIDIA would be upset about that, and they might be, but that is significantly less time than ShadowPlay or other recording methods offer. Second, the Xbox One will be able to stream gameplay to any PC in your house. I expect this is the opposite direction from what people were hoping for, most of whom would rather have high-quality PC footage easily streamed to TVs with a simple interface. It will probably serve a purpose for some use case, though.
Well that was a pretty long event, clocking in at almost two-and-a-half hours. The end had a surprise announcement of an augmented reality (not virtual reality) headset, called the “HoloLens”, which is developed by the Kinect team. I am deliberately not elaborating on it because I was not at the event and I have not tried it. I will say that the most interesting part about it, for me, is the Skype integration, because that probably hints at Microsoft's intentions with the product.
For the rest of us, it touched on a number of interesting features but, like the Enterprise event, did not really dive in. It would have been nice to get some technical details about DirectX 12, but that obviously does not cater to the intended audience. Unless an upcoming build soft-launches a DirectX 12 preview (or Spartan) so that we can do our own discovery, we will probably need to wait until GDC and/or BUILD to find out more.
Until then, you could watch the on-demand version at Microsoft's website.
Introduction, Specifications and Packaging
Today Samsung has lifted the review embargo on their new Portable SSD T1. This is Samsung's first portable SSD, and it aims to serve as another way to make their super speedy V-NAND available. We first saw the Samsung T1 at CES, and I've been evaluating the performance of this little drive for the past week:
We'll dive more into the details as this review progresses.
The T1 comes well packaged, with a small instruction manual and a short, flat-style USB 3.0 cable. The drive itself is very light - ours weighed in right at 1 ounce.
Introduction and Technical Specifications
Courtesy of ASUS
The Rampage V Extreme is ASUS' premier product in their ROG (Republic of Gamers) line of Intel X99-based motherboards. The board supports all Intel LGA2011-3 processors paired with DDR4 memory operating in up to a quad channel configuration. Given the feature-packed nature and premium ROG branding, the board's $499.99 MSRP does not come as much of a surprise.
Courtesy of ASUS
Courtesy of ASUS
ASUS designed the Rampage V Extreme to handle anything an enthusiast could throw its way, integrating an 8-phase digital power system into its Extreme Engine Digi+ IV to power the board. Extreme Engine Digi+ IV combines ASUS' custom-designed Digi+ EPU chipset, IR (International Rectifier) PowIRstage MOSFETs, MicroFine Alloy chokes, and 10K Black Metallic capacitors for unrivaled power delivery capabilities. ASUS also bundles their OC Panel device for on-the-fly overclocking and board monitoring, as well as a SupremeFX 2014 audio solution for flawless audio.
Introduction, Specs, and First Impressions
In our review of the original LIVA mini-PC we found it to be an interesting product, but it was difficult to identify a specific use-case for it; a common problem with the mini-PC market. Could the tiny Windows-capable machine be a real desktop replacement? That first LIVA just wasn't there yet. The Intel Bay Trail-M SoC was outmatched when playing 1080p Flash video content and system performance was a little sluggish overall in Windows 8.1, which wasn't aided by the limitation of 2GB RAM. (Performance was better overall with Ubuntu.) The price made it tempting but it was too underpowered as one's only PC - though a capable machine for many tasks.
Fast forward to today, when the updated version has arrived on my desk. The updated LIVA has a cool new name - the “X” - and the mini computer's case has more style than before (very important!). Perhaps more importantly, the X boasts upgraded internals as well. Could this new LIVA be the one to replace a desktop for productivity and multimedia? Is this the moment we see the mini-PC come into its own? There’s only one way to find out. But first, I have to take it out of the box.
Chipset: Intel® Bay Trail-M/Bay Trail-I SOC
Memory: DDR3L 2GB/4GB
Expansion Slot: 1 x mSATA for SSD
Storage: eMMC 64GB/32GB
Audio: HD Audio Subsystem by Realtek ALC283
LAN: Realtek RTL8111G Gigabit Ethernet Controller
USB: 1 x USB3.0 Port, 2 x USB2.0 Ports
Video Output: 1 x HDMI Port, 1 x VGA Port
Wireless: WiFi 802.11 b/g/n & Bluetooth 4.0
PCB Size: 115 x 75 mm
Dimensions: 135 x 83 x 40 mm
VESA Support: 75mm / 100mm
Adapter Input: AC 100-240V, Output: DC 12V / 3A
OS Support: Linux-based OS, Windows 7 (via mSATA SSD), Windows 8/8.1
Thanks to ECS for providing the LIVA X for review!
Packaging and Contents
The LIVA X arrives in a smaller box than its predecessor, and one with a satin finish because it's extra fancy.
Introduction and Features
Corsair’s new Carbide Series 330R Titanium Edition case is an update to their popular 330R quiet mid-tower enclosure. The Titanium Edition brings both cosmetic and functional changes with the addition of a titanium-look brushed aluminum front panel and a three-speed fan control switch. As before, the 330R incorporates sound absorption material for quiet operation, numerous cooling options, and support for multiple, extended-length VGA cards. The enclosure features a full-length hinged front door and comes with one 140mm intake fan in the front and one 120mm exhaust fan in the back, with five optional fan mounting locations along with support for liquid cooling radiators. There are currently 18 different models in the Carbide Series, ranging from $49.99 up to $149.99 USD.
Foundation for a quiet PC
Here is what Corsair has to say about their Carbide Series 330R Titanium Edition enclosure: “The Carbide Series 330R Titanium Edition starts with the award-winning original 330R, and adds a brushed aluminum front panel with a three-speed fan controller. It’s designed for systems that will go into media rooms, bedrooms, dorm rooms, or any place where both silence and performance are essential. Sound damped doors and panels and clever intake fan design are combined with generous expansion room and builder-friendly features to allow you to build a silent PC that can pack a lot of power for gaming and high definition media streaming.”
Introduction and Technical Specifications
Courtesy of ECS
The ECS Z97-Machine motherboard is one of the boards in ECS' L337 product line, built around the Intel Z97 Express chipset. ECS rethought their board design with the Z97-Machine, creating a stripped-down, enthusiast-friendly product that does not compromise on any of the design areas important to the board's expected performance. At an MSRP of $139.99, ECS hits an attractive price point that many other manufacturers have failed to reach with an Intel Z97-based board, in light of the features and performance on offer.
Courtesy of ECS
The ECS Z97-Machine motherboard offers an interesting cost-to-performance proposition, cutting back on unnecessary features to keep the overall cost down while not sacrificing the quality of the core components. ECS designed the board with a 6-phase digital power delivery system, using high-efficiency chokes (ICY CHOKES), MOSFETs rated at up to 90% efficiency, and Nichicon-sourced aluminum capacitors for optimal board performance under any operating conditions. The Z97-Machine board offers the following built-in features: four SATA 3 ports; an M.2 (NGFF) 10 Gb/s port; an Intel I218-V GigE NIC; two PCI-Express Gen3 x16 slots; three PCI-Express x1 slots; a 2-digit diagnostic LED display; on-board power and reset buttons; voltage measurement points; a Realtek audio solution with ESS Sabre32 DAC; integrated VGA, DVI, and HDMI video outputs; and USB 2.0 and 3.0 ports.