Author:
Manufacturer: Intel

Intel Pushes Broadwell to the Next Unit of Computing

Intel continues to invest a significant amount of money into this small form factor product dubbed the Next Unit of Computing, or NUC. When it was initially released in December of 2012, the NUC was built as an evolutionary step of the desktop PC, part of a move for Intel to find new and unique form factors for its processors. With a 4" x 4" motherboard design the NUC is certainly a distinctive design, and several of Intel's partners have adopted it for products of their own, Gigabyte's BRIX line being the most prominent.

But Intel's development team continues to push the NUC platform forward, and today we are evaluating the most recent iteration. The Intel NUC5i5RYK is based on the latest 14nm Broadwell processor and offers improved CPU performance, a faster GPU, and lower power consumption. All of this is packed into a smaller package than any previous NUC on the market, and the result is both impressive and totally expected.

A Walk Around the NUC

To most people, the latest Intel NUC will look very similar to the previous models based on Ivy Bridge and Haswell. They'd be right, of course - the fundamental design is unchanged. But Intel continues to push forward in small ways, nipping and tucking here and there. The NUC is still just a box - an incredibly small one with a lot of hardware crammed into it, but a box nonetheless.

IMG_1619.jpg

While I can appreciate the details, including the black and silver colors and rounded edges, I think that Intel needs to find a way to add some more excitement to the NUC product line going forward. Admittedly, it is hard to innovate in that direction with a focus on size and compactness.

Continue reading our review of the Intel NUC NUC5i5RYK SFF!!

Author:
Manufacturer: ASUS

Technology Background

Just over a week ago, Allyn spent some time with the MSI X99A Gaming 9 ACK motherboard, a fact that might seem a little odd to our frequent readers. Why would our storage editor be focusing on a motherboard? USB 3.1, of course! When we visited MSI at CES in January they were the first company to show working USB 3.1 hardware, and we were able to duplicate their performance numbers in our own testing when MSI sent us similar hardware.

150213-160838.jpg

But ASUS is in this game as well, preparing its product lines with USB 3.1 support courtesy of the same ASMedia controller we looked at before. ASUS has a new revision of several motherboards planned with integrated on-board USB 3.1 but is also going to be releasing an add-in card with USB 3.1 support for existing systems.

Today we are going to test that add-in card to measure ASUS' implementation of USB 3.1 and see how it stacks up to what MSI had to offer and what improvements and changes you can expect from USB 3.0.

USB 3.1 Technology Background

Despite the minor point-release change in its name, USB 3.1 (also known as SuperSpeed+) brings substantial technological and speed improvements in the newest revision of USB. Allyn did a good job of summarizing the changes, which include a 10 Gbps link interface and a dramatic drop in encoding overhead that together enable peak theoretical performance improvements of 2.44x compared to USB 3.0.

120606_lecroy_4-.jpg

USB 3.1 is rated at 10 Gbps, twice that of USB 3.0. The little-reported-on nugget of info from the USB 3.1 specification relates to how it classifies raw vs. expected speeds. Taking USB 3.0 as an example, SuperSpeed can handle a raw 5 Gbps data rate, but after subtracting out the overhead (packet framing, flow control, etc.), you are left with ~450 MB/s of real throughput. SuperSpeed+ upgrades the bit encoding from 8b/10b (80% efficient) to 128b/132b (97% efficient) *in addition to* doubling the raw data rate. This means that even after accounting for overhead, SuperSpeed+'s best case throughput should work out to ~1.1 GB/s. That's not just a 2x speed improvement – it is actually 2.44x of USB 3.0 speed. SuperSpeed+ alright!
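
To make that math concrete, here is a quick back-of-the-envelope sketch in Python. The raw rates and encoding schemes come from the USB specifications as described above; the ~10% framing/flow-control factor is inferred from the ~450 MB/s real-world USB 3.0 figure, so treat the output as an estimate rather than a measurement.

def post_encoding_mbs(raw_gbps, payload_bits, encoded_bits):
    """Throughput ceiling in MB/s after bit-encoding overhead."""
    return raw_gbps * 1000 / 8 * payload_bits / encoded_bits

usb30 = post_encoding_mbs(5.0, 8, 10)      # 8b/10b: 80% efficient -> 500 MB/s
usb31 = post_encoding_mbs(10.0, 128, 132)  # 128b/132b: ~97% efficient -> ~1212 MB/s

framing = 450 / usb30  # ~0.9, inferred from the measured ~450 MB/s USB 3.0 figure

print(f"USB 3.0 expected: {usb30 * framing:.0f} MB/s")  # ~450 MB/s
print(f"USB 3.1 expected: {usb31 * framing:.0f} MB/s")  # ~1.1 GB/s
print(f"Improvement: {usb31 / usb30:.2f}x")             # ~2.4x, in line with the 2.44x quoted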

Continue reading our preview of USB 3.1 Performance on ASUS hardware!

Subject: General Tech
Manufacturer: ASUS

Introduction

The ASUS STRIX TACTIC PRO is a premium mechanical gaming keyboard featuring Cherry MX Brown switches and some serious style.

tactic_main.jpg

Keyboards are a very personal thing, and as the keyboard is one of the three primary interfaces with the system itself (along with the mouse and display), feel will help decide the experience. Without a doubt mechanical keyboards have become very popular with enthusiasts, but as more manufacturers have started offering them - and the market has begun to saturate - it becomes much more difficult to pick a starting point if you're new to the game. To further complicate a buying decision there are different types of key switches used in these keyboards, and each variety has its own properties and unique feel.

tactic_key_angle.jpg

And on the subject of key switches, this particular keyboard is built with the brown variety of Cherry MX switches, and ASUS offers the STRIX TACTIC PRO with Cherry MX Black, Blue, and Red switches as well. Our own Scott Michaud covered the topic of key switches in great detail last year, and that article is a great starting point for explaining the different types of switches available and how they differ.

Cherry MX Brown

The Cherry MX Brown switch in action

I'll go into the feel of the keyboard on the next page, but I'll quickly say that MX Brown switches have a good feel without being too "clicky", though they are certainly stiffer-feeling than a typical membrane keyboard. While it's impossible to really describe how the keyboard will feel to a particular user, we can certainly cover the features and performance of this keyboard to help with a purchasing decision in this crowded market. At $150 the STRIX TACTIC PRO carries a premium price, but as you'll see this is also a premium product.

Continue reading our review of the STRIX TACTIC PRO mechanical gaming keyboard!!

Author:
Subject: Mobile
Manufacturer: Dell

Specifications

Flagship. Premium. Best in class. These are the terms that Dell and Intel muttered to me during a conference call to discuss the new Dell Venue 8 7000 tablet. It’s a bullish claim and one that would likely have been received with a sideways eye roll or a shrug had I not been able to get a short amount of hands-on time with the device at CES in January. The idea that Dell would develop an Android tablet that bests what more established brands like Nexus and Samsung have created, AND that that same tablet would be powered by an Intel processor rather than a Qualcomm, NVIDIA or Samsung chip, would have seemed laughable last year. But after a solid three weeks with the Venue 8 7000 I am prepared to make the statement: this is my favorite tablet. Not my favorite Intel tablet, not my favorite Android tablet: just plain favorite.

The Venue 8 7000 combines style, design, technology, and visuals that are simply unmatched by anything else in the Android world, and it rivals anything that Apple has created to date. There are a couple of warts - the camera and gaming performance won’t drop your jaw - but for the majority of use cases the user experience is as exceptional as the looks.

IMG_1737.jpg

Maybe best of all, this tablet starts at just $399 and is available today.

Dell Venue 8 7000 Specifications

Let’s begin the review by looking at the raw specifications of the Dell Venue 8 7000. Even though hardware specifications don’t tell a complete story of any device, especially a tablet that is based so much on experience, it is important to get a good baseline expectation.

Dell Venue 8 7000 (Model 7840)

Processor: Intel Atom Z3580 Quad-Core 2.33 GHz
Graphics: PowerVR G6430
Memory: 2GB LPDDR3-1600
Screen: 8.4-in 2560x1600 OLED (359 ppi)
Storage: 16GB eMMC; MicroSD slot (up to 512GB)
Cameras: 8MP rear + dual 720p depth; 2MP front
Wireless: Intel 7260 802.11ac 1x1 dual band; Bluetooth 4.0
Connections: USB 2.0 (power and data); headphone jack
Battery: 21 Whr (5900 mAh)
Dimensions: 215.8mm x 124.4mm x 6mm (8.5" x 4.88" x 0.24"); 305g (10.76oz)
OS: Android 4.4.4
Price: $399 MSRP

At the center of the Venue 8 7000 is the Intel Atom Z3580 quad-core processor, with a peak clock rate of 2.33 GHz and a base clock rate of 500 MHz. The Z3580 is a 22nm processor based on the Moorefield platform and the Silvermont architecture. I first got information about Silvermont back in May of 2013, so it seems a bit dated in some regards, but the performance and power efficiency are still there to compete with the rival options from ARM. The Venue 8 7000 includes an LPDDR3-1600 controller and 2GB of memory; a decent amount, though we are seeing quite a few smartphones with more system memory, like the OnePlus One.

Continue reading our review of the Dell Venue 8 7000 Android Tablet!!

Manufacturer: Corsair

Introduction and Features

Our first Corsair power supply up for review in 2015 is the CS Series Modular 850W PSU, the CS850M. Corsair's CS Series Modular PSUs are designed for basic desktop use and light to moderate gaming, where low energy use, low noise, simple installation, and good value are important. The Modular CS Series now includes five models: the CS450M, CS550M, CS650M, CS750M, and the new CS850M. All of the power supplies in the CS Series feature modular cables, high efficiency (80 Plus Gold certified), and quiet operation. In addition, Corsair continues to offer a full line of high quality power supplies, memory components, cases, cooling components, SSDs, and accessories for the PC market.

2-CS850M-side.jpg

Here is what Corsair has to say about their CS Series Modular PSUs: “The CS-M Series is designed for basic and midrange PCs, but offers features and performance traditionally reserved for higher-end models. 80 Plus Gold efficiency and a thermally controlled fan ensure quiet operation and lower energy use, and the modular, detachable cable set makes installations and upgrades faster and better looking.”

 “80 Plus Gold efficiency reduces operating cost and excess heat. Since it generates less heat, the fan doesn’t need to work as hard, and you’ll enjoy near silent operation. The flat black modular cables with clearly-marked connectors make installation fast and straightforward, with good-looking results.”

3-CS850M-front.jpg

Corsair CS Series Modular PSU Key Features:

•    Five Models: 450W up to 850W
•    Compliant with the latest ATX12V v2.4 and EPS 2.92 standards
•    Backward compatible with ATX12V 2.2, 2.31 and ATX12V 2.01 systems
•    4th Generation Intel® Core™ processor ready (Haswell & Z87 motherboards)
•    80 Plus Gold certified for high efficiency (≥90% under real world loads)
•    Modular cables (only use the cables you need)
•    Low-profile, flat modular cables reduce air friction and maximize airflow
•    Active PFC with Universal AC input (100-240VAC)
•    Multi GPU ready
•    Safety: OVP, UVP, SCP, OPP, and OTP
•    Approvals: FCC, ICES, UL, CUL, TUV, CCC, CE, RCM, CB, EAC, KC, BSMI, ROHS, WEEE
•    3-Year warranty and lifetime access to Corsair’s tech support & customer service
•    MSRP: $139.99 USD

Please continue reading our Corsair CS850M power supply review!!!

Subject: Storage
Manufacturer: Crucial

Introduction, Specifications and Packaging

Introduction:

Micron's Crucial brand has been cranking out some great low cost SSDs for the past several years now. While their early drives pushed into the SATA 6Gb/sec interface before most of the competition, their performance was inconsistent and lagged behind some of the other more nimble solutions available at the time. This pattern was broken around the time of the M550 and MX100 launches. Those two drives were highly competitive in performance and even more so in pricing. Actually, the pricing is probably the bigger story - when they launched, one of our readers caught a 512GB MX100 on sale for $125 ($0.24/GB)! We are coming up on a year since the MX100, and at CES 2015 Micron launched a pair of new SSD models - the BX100 and MX200. Today we are going to look at the BX100 series:

150212-172437.jpg

Crucial aims to make the BX100 their lowest cost-per-GB SSD ever - even cheaper than the MX100. Since Micron makes the flash, the best way to drive costs down further is to use a lower cost controller. The Silicon Motion SM2246EN is cheaper to procure than the equivalent Marvell part, yet still performs rather well.

SM2246EN Block Diagram.jpg

The Silicon Motion SM2246EN SSD controller

This is a great controller, as we have seen in our prior reviews of the ADATA SP610, Corsair Neutron LX, and Angelbird SSD WRK. From the specs, we can see that Micron has somehow infused their variant with increased write speeds, even though it appears to use the same flash as the competing models listed above. We'll see how this plays out as the review progresses.

Read on for the full review!

Subject: Networking
Manufacturer: Thecus

Introduction: This Is Not a NAS

The new WSS NAS series from Thecus contains some very interesting devices, and particularly so at the entry-level price with the unit we’re looking at today. WSS is the abbreviation for Windows Storage Server (in this case it’s 2012 R2), and this provides a huge increase in functionality compared to a standard NAS, as you might imagine.

w2000_desk.jpg

Need a server? Just add a keyboard, mouse, and monitor

It’s really quite remarkable what Thecus is doing in partnership with Microsoft here in terms of value, as this entry 2-bay unit costs just $350. While this may seem high for a dual-bay NAS, we really aren’t talking about a NAS at all - which will be readily apparent to the user upon first powering it up. We are talking about a full-scale server here, replete with Windows Server 2012 R2 Essentials goodness. Of course a savvy user could easily deploy a small server in a home or office, and there are many advantages to managed solutions beyond simple NAS appliances. But the advantage of a NAS is just that: it is significantly less complex and more accessible for a consumer. The W2000 presents a very interesting option due to one particular aspect of its own accessibility: price. At $350 you are getting a very compact server with internal hardware much more akin to a standard desktop than you might imagine, and it ships with Microsoft's Windows Storage Server 2012 R2 Essentials installed.

What is “Storage” Server Essentials?

Ok, so I was a little confused as to what specifically differentiates the Storage version of the Server OS, beyond a simple licensing distinction. My research first brought me to this quote from Microsoft:

“Windows Storage Server 2012 R2 Essentials is based on Windows Server 2012 R2. In fact, when it comes to functionality, you get some key features that aren’t included in these first two editions.”

After looking through the available documentation it appears as though Storage Server Essentials is, essentially, just Server Essentials with the distinction of being licensed differently. Microsoft TechNet defines it further:

“A computer that runs Windows Storage Server is referred to as a storage appliance. Windows Storage Server is based on the Windows Server operating system, and it is specifically optimized for use with network-attached storage devices. Windows Storage Server offers you a platform to build storage appliances that are customized for your hardware.”

w2000_box_2.jpg

Continue reading our review of the Thecus W2000 Windows Storage Server NAS!!

Subject: Storage
Manufacturer: MSI

Introduction and Background

We first got a peek at USB 3.1 at CES 2015. MSI had a cool demo showing some throughput figures, including read and write speeds as high as 690 MB/s - well over the ~450 MB/s we see from the USB 3.0 options shipping today.

150212-152133.jpg

We were of course eager to play around with this for ourselves, and MSI was happy to oblige, sending along a box of goodies:

150211-164711.jpg

Stuff we will be testing today (Samsung T1 was not part of the MSI demo).

For those unaware, USB 3.1 (also known as SuperSpeed+), while only a 0.1 increment in numbering, incorporates a doubling of raw throughput and a dramatic reduction in the protocol overhead of the interface.

USB 3.1 speed.png

Don't be confused between the USB 3.1 standard and the new USB Type-C connector - they are unrelated and independent of each other.

usb3.1-3.jpg

Yes, you’re all going to have to buy *more* cables in the future.

Type-C connectors will enable simpler cable designs and thinner connections, but USB 3.1 will exist in both Type-A/B and Type-C forms going forward. Our benchmarking today will utilize Type-A.

Read on for some more detail and speed tests of this new specification.

Author:
Subject: Processors, Mobile
Manufacturer: Qualcomm

New Features and Specifications

Introduction

It is increasingly obvious that in the high end smartphone and tablet market, much like we saw occur over the last several years in the PC space, consumers are becoming more concerned with features and experiences than with raw specifications. There is still plenty to drool over when looking at and talking about 4K screens in the palm of your hand, octa-core processors, and mobile SoC GPUs measuring performance in hundreds of GFLOPS, but at the end of the day the vast majority of consumers want something that genuinely “wows” them.

As a result, device manufacturers and SoC vendors are shifting how performance and features are prioritized and presented, both to the public and to the media. Take this week’s Qualcomm event in San Diego, where a team of VPs, PR personnel, and engineers walked me through the new Snapdragon 810 processor. Rather than showing slide after slide of comparative performance numbers, I was shown room after room of demos: Wi-Fi, LTE, 4K capture and playback, gaming capability, thermals, antenna modifications, etc. The goal is to showcase the experience of the entire platform – something that Qualcomm has been providing for longer than just about anyone in this business – while educating consumers on the need for balance too.

hw1.jpg

As a 15-year veteran of the hardware space, my first reaction here couldn’t have been scripted any more precisely: a company that doesn’t show performance numbers has something to hide. But I was given time with a reference platform featuring the Snapdragon 810 processor in a tablet form factor, and the results show impressive increases over the 801 and 805 processors from the previous family. Rumors of the chip’s heat issues seem overblown, but that will be hard to prove for sure until we get retail hardware in our hands to confirm.

Today’s story will outline the primary feature changes of the Snapdragon 810 SoC, though there was so much detail presented at the event, with such a short window of time for writing, that I definitely won’t be able to get to it all. I will follow up the gory specification details with performance results compared to a wide array of other tablets and smartphones to provide some context as to where the 810 stands in the market.

hw4.jpg

Let’s dive in! Continue reading our preview of the new Qualcomm Snapdragon 810 SoC!!

Manufacturer: SilverStone

Introduction and Features

2-ST1500-Banner.jpg

Introduction

Last month we took a look at SilverStone’s small form-factor power supply, the SFX-600, which delivered 600 watts from a compact SFX enclosure. Today we are looking at SilverStone’s new Strider Gold ST1500-GS, a 1,500 watt ATX form-factor power supply. The Strider Gold 1500W PSU is fully modular and built for high efficiency operation. What makes the ST1500-GS unique is its relatively short enclosure, which is only 180mm (7.1”) deep!

3-ST1500-diag.jpg

SilverStone ST1500-GS ATX Power Supply

There are currently five different models available in the Strider Gold S Series, which include the ST55F-G, ST65F-G, ST75-GS, ST85F-GS, and ST1500-GS. All of the Strider Gold S Series PSUs are designed to be fully modular, 80 Plus Gold certified, and small in size. While the typical 1500W power supply enclosure measures 220mm (8.7”) deep, the Strider Gold ST1500-GS is housed in a 180mm chassis.

4-Size-comp.jpg

(Courtesy of SilverStone)

SilverStone Strider Gold S Series ST1500-GS PSU Key Features:
 
•    1,500 watts DC power output (1600W peak power)
•    High efficiency with 80 Plus Gold certification
•    100% Modular cables
•    24/7 continuous power output at 40°C operating temperature
•    Strict ±3% voltage regulation and low AC ripple & noise
•    Dedicated single +12V rail (125A)
•    Quiet 135mm ball bearing fan
•    Eight PCI-E 8-pin (6+2) connectors support multiple high-end graphics adapters
•    Conforms to ATX12V and EPS standards
•    Universal AC input (100-240V) with Active PFC
•    Dimensions: 150mm (W) x 86mm (H) x 180mm (L)
•    $319.99 USD

Please continue reading our review of the SilverStone ST1500-GS PSU !!!

Subject: Storage
Manufacturer: Plextor
Tagged: ssd, plextor, pcie, 256GB

Introduction, Specifications and Packaging

Introduction:

Plextor launched their M6e PCIe SSD in mid-2014. It was the first natively PCIe consumer SSD available at retail. While previous solutions such as the OCZ RevoDrive bridged SATA SSD controllers to PCIe through a RAID or VCA device, the M6e went with a Marvell controller that could speak directly to the host system over a PCIe 2.0 x2 link. Since M.2 was not widely available at launch time, Plextor also made the M6e available with a half-height PCIe interposer, making for a painless upgrade for those on older non-M.2 motherboards (at that time the vast majority).

With the M6e out for only a few months (and in multiple versions), I was surprised to see Plextor launch an additional version of it at CES this past January. Announced alongside the upcoming M7e, the M6e Black Edition is essentially a pimped out version of the original M6e PCIe:

DSC07414_resize.JPG

We left CES with a sample of the M6e Black, but had to divert our attention to a few other pressing issues shortly after. With all of that behind us, it's time to get back to cranking out the storage goodness, so let's get to it!

Read on for the full review!

Author:
Manufacturer: NVIDIA

A baker's dozen of GTX 960

Back on the launch day of the GeForce GTX 960, we hosted NVIDIA's Tom Petersen for a live stream. During the event, NVIDIA and its partners provided ten GTX 960 cards for our live viewers to win, which we handed out over the course of about an hour and a half. An interesting idea was proposed during the event: what would happen if we overclocked all of the cards NVIDIA had brought along, to see what the distribution of results looked like? After notifying all the winners of their prizes and asking each for permission, we started the arduous process of testing and overclocking a total of 13 different GTX 960 cards (the 10 prizes plus our 3 retail units already in the office).

Hopefully we will be able to provide a solid base of knowledge for buyers of the GTX 960 that we don't normally have the opportunity to offer: what range of overclocking can you expect, and what is the average or median result? I think you will find the data interesting.
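
As a preview of how we will summarize the data, here is a minimal sketch of the statistics we are after; the clock offsets below are placeholder values for illustration, not our measured results.

from statistics import mean, median

# Placeholder core clock offsets (MHz) for 13 hypothetical cards - NOT our results.
offsets = [150, 175, 200, 210, 225, 225, 240, 250, 250, 260, 275, 290, 310]

print(f"Range:  +{min(offsets)} to +{max(offsets)} MHz")
print(f"Mean:   +{mean(offsets):.0f} MHz")
print(f"Median: +{median(offsets):.0f} MHz")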

The 13 Contenders

Our collection of thirteen GTX 960 cards includes a handful each from ASUS, EVGA, and MSI. The ASUS cards are all STRIX models, the EVGA cards are of the SSC variety, and the MSI cards include a single Gaming model and three 100ME models. (The only difference between the Gaming and 100ME MSI cards is the color of the cooler.)

cards2.jpg

Jenga!

To be fair to the prize winners, I actually assigned each of them a specific graphics card before opening them up and testing them. I didn't want to be accused of favoritism by giving the best overclockers to the best readers!

Continue reading our overclocking testing of 13 GeForce GTX 960 cards!!

Author:
Manufacturer: Gigabyte

SFF PCs get an upgrade

Ultra compact computers, otherwise known as small form factor PCs, are a rapidly growing market as consumers realize that, for nearly all purposes other than gaming and video editing, Ultrabook-class hardware is "fast enough". I know that some of our readers will debate that point, and we welcome the discussion, but as CPU architectures continue to improve in both performance and efficiency, more performance can be combined into smaller spaces. The Gigabyte BRIX platform is exactly the result you would expect from that combination.

Previously, we have seen several other Gigabyte BRIX devices, including our first desktop experience with Iris Pro graphics, the BRIX Pro. Unfortunately, that unit was plagued by noise issues - the small fan spun pretty fast to cool a 65 watt processor. For a small computer that would likely sit on top of your desk, that's a significant drawback.

IMG_1511.JPG

Intel Ivy Bridge NUC, Gigabyte BRIX S Broadwell, Gigabyte BRIX Pro Haswell

This time around, Gigabyte is using the new Broadwell-U architecture in the Core i7-5500U and its significantly lower, 15 watt TDP. That does come with some specification concessions though, including a dual-core CPU instead of a quad-core CPU and a peak Turbo clock rate that is 900 MHz lower. Comparing the Broadwell BRIX S to the more relevant previous generation based on Haswell, we get essentially the same clock speed, a similar TDP, but also an improved core architecture.

Today we are going to look at the new Gigabyte BRIX S featuring the Core i7-5500U and an NFC chip for some interesting interactions. The "S" designation means this model can support a full size 2.5-in hard drive in addition to the mSATA port.

Let's dive in and take a look!

Author:
Subject: Processors
Manufacturer: ARM

ARM Releases Top Cortex Design to Partners

ARM has an interesting history of releasing products. The company was once in the shadowy background of the CPU world, but with the explosion of mobile devices and its relevance in that market, ARM has had to adjust how it approaches the public with its technologies. For years ARM has announced products and technology only to see them ship one to two years down the line. It seems that with the increased competition in the marketplace from Apple, Intel, NVIDIA, and Qualcomm, ARM is now pushing to license out its new IP in a way that will enable its partners to achieve a faster time to market.

arm_01.jpg

The big news this time is the introduction of the Cortex-A72. This is a brand new design based on the ARMv8-A instruction set. It is a 64-bit capable processor that is also backwards compatible with 32-bit applications written for ARMv7-based processors. ARM does not go into great detail about the product other than to say it is significantly faster than the previous Cortex-A15 and Cortex-A57.

The previous Cortex-A15 processors were announced several years back and made their first appearance in devices in late 2013/early 2014. These were still 32-bit processors, and while they had good performance for the time, they did not stack up well against the latest A8 SoC from Apple. The A53 and A57 designs were also announced around two years ago. These are the first 64-bit designs from ARM and were meant to compete with the latest custom designs from Apple and Qualcomm’s upcoming 64-bit part. We are only now seeing these parts make it into production, and even Qualcomm has licensed the A53 and A57 designs to ensure a faster time to market for this latest batch of next-generation mobile devices.

arm_02.jpg

We can look back over the past five years and see that ARM is moving toward announcing parts and then having its partners ship them within a much shorter timespan than we were used to seeing. ARM is hoping to further accelerate the introduction of its new parts within the next year.

Click here to continue reading about ARM's latest releases!

Author:
Manufacturer: NVIDIA

Battlefield 4 Results

At the end of my first Frame Rating evaluation of the GTX 970 after the discovery of the memory architecture issue, I proposed the idea that SLI testing would need to be done to come to a more concrete conclusion in the entire debate. It seems that our readers and the community at large agreed with us, repeatedly asking for those results in the comments of the story. After spending the better part of a full day running and re-running SLI tests on pairs of GeForce GTX 970 and GTX 980 cards, we have the answers you're looking for.

Today's story is going to be short on details and long on data, so if you want the full back story on what is going on and why we are taking a specific look at the GTX 970 in this capacity, read here:

Okay, are we good now? Let's dive into the first set of results in Battlefield 4.

Battlefield 4 Results

Just as I did in the first GTX 970 performance testing article, I tested Battlefield 4 at 3840x2160 (4K) and utilized the game's ability to linearly scale resolution to help me increase GPU memory allocation. In the game settings you can change that scaling option by percentage: I went from 110% to 150% in 10% increments, increasing the load on the GPU with each step.
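
For reference, the slider multiplies each axis of the render resolution, so the pixel (and memory) load grows with the square of the percentage. A quick sketch of the effective resolutions, assuming per-axis scaling:

base_w, base_h = 3840, 2160  # native 4K output

for pct in (110, 120, 130, 140, 150):
    s = pct / 100
    w, h = int(base_w * s), int(base_h * s)
    print(f"{pct}%: {w}x{h} (~{s * s:.2f}x the pixels of native 4K)")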

bf4settings.jpg

Memory allocation between the two SLI configurations was similar, but not as perfectly aligned with each other as we saw with our single GPU testing.

bf4mem.png

In a couple of cases, at 120% and 130% scaling, the GTX 970 cards in SLI are actually each using more memory than the GTX 980 cards. That difference is only ~100MB but that delta was not present at all in the single GPU testing.

Continue reading our look at Frame Rating comparisons between GTX 970 and GTX 980 cards in SLI!

Manufacturer: NVIDIA

Introduction

It has been an abnormal week for us here at PC Perspective. Our typical review schedule has pretty much flown out the window, and the past seven days have been filled with learning, researching, retesting, and publishing. That might sound like the norm, but in these cases the process was initiated by tips from our readers. Last Saturday (24 Jan), a few things were brewing:

We had to do a bit of triage here of course, as we can only research and write so quickly. Ryan worked the GTX 970 piece as it was the hottest item. I began a few days of research and testing on the 840 EVO slow down issue reappearing on some drives, and we kept tabs on that third thing, which at the time seemed really farfetched. With those first two items taken care of, Ryan shifted his efforts to GTX 970 SLI testing while I shifted my focus to finding out if there was any credence to this G-Sync laptop thing.

A few weeks ago, an ASUS Nordic Support rep inadvertently leaked an interim build of the NVIDIA driver. This was a mobile driver build (version 346.87) focused on their G751 line of laptops. One recipient of the driver link posted it to the ROG forum back on the 20th. A fellow by the name of Gamenab, owning the same laptop cited in that thread, presumably stumbled across this driver, tried it out, and was more than likely greeted by this popup after the installation completed:

gsync panel connected-.png

Now I know what you’re thinking, and it’s probably the same thing anyone would think: how on earth is this possible? To cut a long story short, while the link to the 346.87 driver was removed shortly after being posted to that forum, we managed to get our hands on a copy of it, installed it on the ASUS G751 we had in for review, and wouldn’t you know it, we were greeted by the same popup!

Ok, so it’s a popup - could it be a bug? We checked the NVIDIA control panel and the options were consistent with those of a G-Sync connected system. We fired up the pendulum demo and watched the screen carefully, passing the machine around the office to be inspected by all. We then fired up some graphics benchmarks that were well suited to showing off the technology (Unigine Heaven, Metro: Last Light, etc.), and everything looked great - smooth, steady pans with no juddering or tearing to be seen. Ken Addison, our Video Editor and jack of all trades, researched the panel type and found that it was likely capable of a 100 Hz refresh. We quickly dug in, created a custom profile, hit apply, and our 75 Hz G-Sync laptop was instantly transformed into a 100 Hz G-Sync laptop!

Ryan's Note: I think it is important to point out that we didn't just look at demos and benchmarks for this evaluation; we also looked at real-world gameplay situations. Playing through Metro: Last Light showed very smooth pans and rotation, Assassin's Creed played smoothly as well, and flying through Unigine Heaven manually was a great experience - Crysis 3, Battlefield 4, etc. This was NOT just a couple of demos that we ran through; the variable refresh portion of this mobile G-Sync enabled panel was working, and working very well.

custom hz--.png

At this point in our tinkering, we had no idea how or why this was working, but there was no doubt that we were getting a similar experience as we have seen with G-Sync panels. As I digested what was going on, I thought surely this can’t be as good as it seems to be… Let’s find out, shall we?

Continue reading our story on Mobile G-Sync and impressions of our early testing!!

Subject: Storage
Manufacturer: Samsung

Introduction

Well, here we are again, with the Samsung 840 EVO slow down issue cropping up here, there, and everywhere. The story behind this one is so long and convoluted that I’m just going to kick this piece off with a walk-through of what was happening with this particular SSD, and what has been attempted so far to fix it:

IMG_0007.JPG

The Samsung 840 EVO is a consumer-focused TLC SSD. Normally TLC SSDs suffer from reduced write speeds when compared to their MLC counterparts, as write operations take longer for TLC than for MLC (SLC is faster still). Samsung introduced a novel way of speeding things up with their TurboWrite caching method, which adds a fast SLC buffer in front of the slower flash. This buffer is several GB in size and helps the 840 EVO maintain fast write speeds in most typical usage scenarios. But the issue with the 840 EVO is not its write speed – the problem is read speed. Initial reviews did not catch this issue, as it only impacted data that had been stagnant for roughly 6-8 weeks. As files aged, their read speeds fell from the speedy (and expected) 500 MB/sec, ultimately reaching a worst case of 50-100 MB/sec:

840 EVO 512 test hdtach-2-.png
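
As an aside on TurboWrite: a toy model helps illustrate why a small SLC buffer masks TLC's slower writes for short bursts. The buffer size and throughput figures below are illustrative guesses, not Samsung's actual TurboWrite specifications.

SLC_BUF_GB = 3    # fast SLC buffer, "several GB in size" (illustrative)
SLC_MBS    = 500  # write speed while the buffer has room (illustrative)
TLC_MBS    = 270  # sustained speed once writes spill to TLC (illustrative)

def burst_write_seconds(total_gb):
    """Time to absorb a write burst: SLC soaks up the first few GB, TLC takes the rest."""
    slc_part = min(total_gb, SLC_BUF_GB)
    tlc_part = max(total_gb - SLC_BUF_GB, 0)
    return slc_part * 1024 / SLC_MBS + tlc_part * 1024 / TLC_MBS

for gb in (1, 3, 10):
    t = burst_write_seconds(gb)
    print(f"{gb:2d} GB burst: {t:5.1f} s (average {gb * 1024 / t:.0f} MB/s)")

Typical desktop write bursts stay inside the buffer, which is why the EVO benchmarks like an MLC drive; only in the 10 GB case does the average sink toward the TLC floor.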

There were other variables that impacted the end result, which further complicated the flurry of reports coming in from seemingly everywhere. The slow speeds turned out to be the result of the SSD controller working extra hard to apply error correction to the data coming in from flash that was (reportedly) miscalibrated at the factory. This miscalibration caused the EVO to incorrectly adapt to cell voltage drifts over time (an effect that occurs in all flash-based storage – TLC being the most sensitive). Ambient temperature could even impact the slower read speeds as the controller was working outside of its expected load envelope and thermally throttled itself when faced with bulk amounts of error correction.

900x900px-LL-4985de76_2014-09-1720.18.26TestresultsforC_0.png

An example of file read speed slowing relative to age, thanks to a tool developed by Techie007.

Once the community reached sufficient critical mass to get Samsung’s attention, they issued a few statements and ultimately pushed out a combination firmware update and tool to fix EVOs that were seeing this issue. The 840 EVO Performance Restoration Tool was released just under two months after the original thread on the Overclock.net forums was started. Even counting the quick update that followed a few weeks later, that was not a bad turnaround, considering Intel took three months to correct a firmware issue on one of their own early SSDs. But while the Intel patch restored full performance to their X25-M, the Samsung update does not appear to be faring so well now that users have logged a few additional months after applying the fix.

Continue reading our look at the continued problems with the Samsung 840 EVO SSD!

Author:
Manufacturer: NVIDIA

A Summary Thus Far

UPDATE 2/2/15: We have another story up that compares the GTX 980 and GTX 970 in SLI as well.

It has certainly been an interesting week for NVIDIA. It started with the release of the new GeForce GTX 960, a $199 graphics card that brought the latest iteration of the Maxwell architecture to a lower price point, competing with the Radeon R9 280 and R9 285 products. But then the proverbial stuff hit the fan with a memory issue on the GeForce GTX 970, the best selling graphics card of the second half of 2014. NVIDIA responded to the online community on Saturday morning, but that was quickly followed up with a more detailed exposé on the GTX 970 memory hierarchy, which included a couple of important revisions to the specifications of the GTX 970 as well.

At the heart of all this technical debate is a performance question: does the GTX 970 suffer from lower performance because of the 3.5GB/0.5GB memory partitioning configuration? Many forum members and PC enthusiasts have been debating this for weeks, with many coming away with an emphatic yes.

GM204_arch.jpg

The newly discovered memory system of the GeForce GTX 970

Yesterday I spent the majority of my day trying to figure out a way to validate or invalidate these types of performance claims. As it turns out, finding specific game scenarios that consistently hit targeted memory usage levels isn't as easy as it might first sound; simple things like application start-up order and the order in which settings are changed can vary memory allocation as well. Using Battlefield 4 and Call of Duty: Advanced Warfare though, I think I have presented a couple of examples that demonstrate the issue at hand.

Performance testing is a complicated story. Lots of users have attempted to measure performance on their own setups, looking for combinations of game settings that sit below the 3.5GB threshold and those that cross above it, into the slower 500MB portion. The issue with many of these tests is that they lack access to both a GTX 970 and a GTX 980 to really compare performance degradation between the cards. That's the real comparison to make - the GTX 980 does not separate its 4GB into different memory pools. If it drops performance in the same way as the GTX 970, then we can wager the memory architecture of the GTX 970 is not to blame. If the two cards perform differently enough, beyond the expected delta between two cards running at different clock speeds and with different CUDA core counts, then we have to question the decisions that NVIDIA made.
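
In code form, the comparison we are making looks something like this: normalize each card against its own baseline and compare the shapes of the two curves. The FPS values here are placeholders to show the method, not our test data.

def pct_drop(fps_by_step):
    """Percent FPS lost relative to the lightest (first) setting."""
    base = fps_by_step[0]
    return [round((base - f) / base * 100, 1) for f in fps_by_step]

# Hypothetical FPS at 110%..150% resolution scaling - placeholders only.
gtx980 = [60.0, 52.0, 45.0, 38.0, 31.0]
gtx970 = [52.0, 45.0, 38.5, 31.0, 24.0]

print("GTX 980 drop (%):", pct_drop(gtx980))  # the expected scaling losses
print("GTX 970 drop (%):", pct_drop(gtx970))  # a much steeper curve would implicate the memory split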

IMG_9792.JPG

There has also been concern over the frame rate consistency of the GTX 970. Our readers are already aware of how deceptive an average frame rate alone can be, and why looking at frame times and frame time consistency is so much more important to guaranteeing a good user experience. Our Frame Rating method of GPU testing has been in place since early 2013 and it tests exactly that - looking for consistent frame times that result in a smooth animation and improved gaming experience.
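
A tiny example of why averages deceive: the two hypothetical frame-time traces below deliver the same average frame rate, but one of them stutters badly, which is exactly what a percentile frame-time view exposes.

smooth  = [16.7] * 60                # frame times in ms: steady ~60 FPS
stutter = [12.0] * 50 + [40.2] * 10  # same average, with periodic spikes

for name, times in (("smooth", smooth), ("stutter", stutter)):
    avg_fps = 1000 / (sum(times) / len(times))
    p99 = sorted(times)[int(len(times) * 0.99) - 1]  # ~99th percentile frame time
    print(f"{name:8s} avg {avg_fps:5.1f} FPS, 99th percentile frame time {p99:.1f} ms")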

reddit.jpg

Users at reddit.com have been doing a lot of subjective testing

We will be applying Frame Rating to our testing today of the GTX 970 and its memory issues - does the division of memory pools introduce additional stutter into game play? Let's take a look at a couple of examples.

Continue reading our look at GTX 970 Performance Testing using Frame Rating!

Manufacturer: Primochill

Introduction and Technical Specifications

Introduction

02-primochill-wet-bench_0.PNG

Courtesy of Primochill

The Wet Bench open-air test bench is Primochill's premier case offering. This acrylic-based enclosure features an innovative design allowing easy access to the motherboard and PCIe cards without the hassle of removing the case panels and mounting screws associated with a motherboard change-out in a typical case. With a starting MSRP of $139.95, the Wet Bench is priced competitively in light of the configurability and features it offers.

03-360-rad-plate_0.PNG

Courtesy of Primochill

04-480-rad-plate_0.PNG

Courtesy of Primochill

The Wet Bench is unique in its design - Primochill built it to support custom water cooling solutions from the ground up. The base kit supports mounting the water cooling kit's radiator to the back plate, up to a 360mm size (supporting 3x120mm fans). Primochill also offers an optional backplate with support for up to a 480mm radiator (supporting up to 4x120mm fans).

Continue reading our review of the Primochill Wet Bench kit!

Author:
Manufacturer: NVIDIA

A few secrets about GTX 970

UPDATE 1/28/15 @ 10:25am ET: NVIDIA has posted in its official GeForce.com forums that they are working on a driver update to help alleviate memory performance issues in the GTX 970 and that they will "help out" those users looking to get a refund or exchange.

Yes, that last 0.5GB of memory on your GeForce GTX 970 does run slower than the first 3.5GB. More interesting than that fact is the reason why, and why the result is better than you might have otherwise expected. Last night we got a chance to talk with NVIDIA’s Senior VP of GPU Engineering, Jonah Alben, about this specific concern, and got a detailed explanation of why gamers are seeing what they are seeing, along with new disclosures on the architecture of the GM204 version of Maxwell.

alben.jpg

NVIDIA's Jonah Alben, SVP of GPU Engineering

For those looking for a little background, you should read over my story from this weekend that looks at NVIDIA's first response to the claims that the GeForce GTX 970 cards currently selling were only properly utilizing 3.5GB of their 4GB frame buffer. While it definitely helped answer some questions, it raised plenty more, which is why we requested a talk with Alben, even on a Sunday.

Let’s start with a new diagram drawn by Alben specifically for this discussion.

GM204_arch.jpg

GTX 970 Memory System

Believe it or not, every issue discussed in any forum about the GTX 970 memory issue is going to be explained by this diagram. Along the top you will see 13 enabled SMMs, each with 128 CUDA cores for the total of 1664 as expected. (Three grayed out SMMs represent those disabled from a full GM204 / GTX 980.) The most important part here is the memory system though, connected to the SMMs through a crossbar interface. That interface has 8 total ports to connect to collections of L2 cache and memory controllers, all of which are utilized in a GTX 980. With a GTX 970 though, only 7 of those ports are enabled, taking one of the combination L2 cache / ROP units along with it. However, the 32-bit memory controller segment remains.

You should take two things away from that simple description. First, despite initial reviews and information from NVIDIA, the GTX 970 actually has fewer ROPs and less L2 cache than the GTX 980. NVIDIA says this was an error in the reviewer’s guide and a misunderstanding between the engineering team and the technical PR team on how the architecture itself functioned. That means the GTX 970 has 56 ROPs and 1792 KB of L2 cache compared to 64 ROPs and 2048 KB of L2 cache for the GTX 980. Before people complain about the ROP count difference as a performance bottleneck, keep in mind that the 13 SMMs in the GTX 970 can only output 52 pixels/clock and the seven segments of 8 ROPs each (56 total) can handle 56 pixels/clock. The SMMs are the bottleneck, not the ROPs.
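
The arithmetic in that paragraph, worked through explicitly:

# Pixel throughput bottleneck on the GTX 970, per the figures above.
smm_count        = 13  # enabled SMMs
pixels_per_smm   = 4   # pixels/clock each SMM can output (52 / 13)
rop_segments     = 7   # enabled L2/ROP partitions
rops_per_segment = 8

smm_rate = smm_count * pixels_per_smm       # 52 pixels/clock
rop_rate = rop_segments * rops_per_segment  # 56 pixels/clock
print(f"SMMs: {smm_rate} px/clk vs ROPs: {rop_rate} px/clk")
print("Bottleneck:", "SMMs" if smm_rate < rop_rate else "ROPs")

# The same 7-of-8 partitioning explains the memory split:
fast_pool_gb = 4 * 7 / 8  # 3.5 GB sits directly behind an enabled L2 port
print(f"Fast pool: {fast_pool_gb} GB, slow pool: {4 - fast_pool_gb} GB")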

Continue reading our explanation and summary about the NVIDIA GTX 970 3.5GB Memory Issue!!