Subject: Editorial
Manufacturer: Various

Process Technology Overview

We have been very spoiled throughout the years.  We likely did not realize exactly how spoiled we were until it became obvious that the rate of process technology advances had hit a virtual brick wall.  Every 18 to 24 months a new, faster, more efficient process node was opened up to fabless semiconductor firms, and we were treated to a new generation of products that would blow our hair back.  Now we are at a virtual standstill when it comes to new process nodes from the pure-play foundries.

Few expected the 28 nm node to live nearly as long as it has.  Some of the first cracks in the façade actually came from Intel.  Their 22 nm Tri-Gate (FinFET) process took a little longer to get off the ground than expected.  We also noticed some interesting electrical characteristics from the products developed on that process.  Intel steered away from higher clockspeeds and focused on efficiency and architectural improvements rather than staying at generally acceptable TDPs and leapfrogging the competition on clockspeed alone.  Overclockers noticed that the newer parts did not reach the same clockspeed heights as previous products such as the 32 nm based Sandy Bridge processors.  Whether this decision was intentional on Intel's part is debatable, but my gut feeling here is that they responded to the technical limitations of their 22 nm process.  Yields and bins likely dictated the maximum clockspeeds attained on these new products.  So instead of vaulting over AMD’s products, they just slowly started walking away from them.

samsung-fab.jpg

Samsung is one of the first pure-play foundries to offer a working sub-20 nm FinFET product line. (Photo courtesy of ExtremeTech)

When 28 nm was released, the plans on the books were to transition to 20 nm products based on planar transistors, thereby bypassing the added expense of developing FinFETs.  It was widely expected that FinFETs would not be required to address the needs of the market.  Sadly, that did not turn out to be the case.  There are many other factors as to why 20 nm planar parts are not common, but the limitations of that particular process have made it a relatively niche node that is appropriate for smaller, low power ASICs (like the latest Apple SOCs).  The Apple A8 is rumored to be around 90 mm², a far cry from the traditional midrange GPU that runs from 250 mm² to more than 400 mm².

The essential difficulty of the 20 nm planar node appears to be a lack of power scaling to match the increased transistor density.  TSMC and others have successfully packed more transistors into every square millimeter as compared to 28 nm, but the electrical characteristics did not scale proportionally.  Yes, there are improvements per transistor, but when designers pack all of those transistors into a large design, TDP and voltage issues start to arise.  Driving all of those extra transistors takes more power, which in turn produces more heat.  The GPU guys probably looked at this and figured out that while they could achieve a higher transistor density and a wider design, they would have to downclock the entire GPU to hit reasonable TDP levels.  Add those concerns to the yields and bins of a new process, and the advantages of going to 20 nm end up slim to none at the end of the day.
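
To make the density-versus-power tradeoff concrete, here is a back-of-the-envelope sketch in Python. The scaling factors are illustrative assumptions on our part, not published foundry figures; the point is simply that when density grows faster than per-transistor power falls, an equal-area die blows past its old TDP unless clocks and voltages come down.

density_gain = 1.9            # assumed transistor density gain vs. 28 nm
power_per_transistor = 0.75   # assumed per-transistor power vs. 28 nm

# A GPU that keeps the same die area and fills it with transistors:
relative_power = density_gain * power_per_transistor
print(f"Equal-area die power vs. 28 nm: {relative_power:.2f}x")          # ~1.42x

# Holding TDP constant instead means cutting every transistor's power
# (roughly, its clock and voltage) back by the same ratio:
print(f"Per-transistor budget to hold TDP: {1 / relative_power:.2f}x")   # ~0.70x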

Click here to read the rest of the 28 nm GPU editorial!

Manufacturer: Miller Electric

Features and Specifications

If you are looking for the biggest, baddest power supply on the planet, then we have an exclusive review for you today.  But you better have a truck, a couple of strong friends, and very deep pockets!

2a-Miller-Power.jpg

Miller Electric has been manufacturing large, industrial-grade power supplies since 1929.  They are one of the few power supply manufacturers who actually design and build their own products, right here in the USA.  Miller’s The Power of Blue series of high-capacity power supplies includes models that go all the way up to an astounding 10,500 watts!

While we were not able to obtain a review sample of Miller’s flagship 10.5kW unit, we do have an exclusive review of the Miller XMT 300 PC, which can deliver up to 4,500 watts of pure DC power.   Talk about having some extra reserve capacity… wow!

2b-IMG_2796.jpg

The Miller XMT 300 PC is an external power supply that is about twice the size of a typical mid-tower case and is designed to sit on the floor.  It features a unique single, high-capacity +12V rail capable of delivering up to 375A (4,500W).  A power distribution module mounts inside the PC where the normal ATX power supply would go and breaks down the incoming +12V into the minor rails (+3.3V, +5V, etc.), along with providing a standard set of cables and connectors.  This allows one XMT 300 PC to power multiple PCs at the same time: up to twenty computers.

Miller XMT 300 PC Key Features:

•    Monstrous, single rail +12V output (up to 375A peak)
•    External main power unit sits on the floor
•    Can support multiple PCs at the same time
•    #6AWG copper cables for minimal voltage drop
•    Automatic fan speed control for optimal cooling and minimal noise
•    High efficiency operation (up to 87%)
•    Active Power Factor Correction  
•    1-Phase or 3-Phase line voltage
•    3-Year warranty  

Miller XMT 300 PC Specifications:

3-Specs-Table.gif

Please continue reading our review of the Miller XMT 300 PC power supply!!!

Subject: Storage
Manufacturer: Samsung

Introduction, Specifications and Packaging

Introduction:

Following the same pattern that Samsung led with the 840 Pro and 840 EVO, history has repeated itself with the 850 Pro and 850 EVO. With the 850 EVO launching late last year and being quite successful, it was only a matter of time before Samsung expanded past the 2.5" form factor for this popular SSD. Today is that day:

150330-182303_DxO.jpg

Today we will be looking at the mSATA and M.2 form factors. To clarify, the M.2 units still use a SATA controller and connection, and must therefore be installed in a system whose M.2 port is wired for SATA. As both products are SATA, the DRAM-caching RAPID mode included with Samsung's Magician value-added software is also available for these models. We won't be using RAPID for this review, but we did take a look at it in a prior article.

Given that the 850 EVO uses VNAND, a vastly different technology from the planar NAND used in the 840 EVO, we suspect it is not subject to the same flash cell drift related issues (hopefully to be corrected soon) seen in the 840 EVO. Only time will tell for sure on that front, but we have not seen any of those issues present in 850 EVO models since their launch.

Picture5.png

Cross-sectional view of Samsung's 32-layer VNAND. Photo by TechInsights.

Samsung sampled us the M.2 SATA in 120GB and 500GB, and the mSATA in 120GB and 1TB. Since both are SATA-based, these are only physical packaging differences. The die counts are the same as the 2.5" desktop counterparts. While the pair of 120GB models should be essentially identical, we'll throw both in with the results to validate the slight differences in stated specs below.

Continue reading our review of these new Samsung 850 EVOs!!

Subject: Displays
Manufacturer: LG

A monitor for those that like it long

It takes a lot to really impress someone that sits in front of dual 2560x1600 30-in IPS screens all day, but the LG 34UM95 did just that. With a 34-in diagonal 3440x1440 panel forming a 21:9 aspect ratio, built on LG IPS technology for flawless viewing angles, this monitor creates a work and gaming experience that is basically unmatched in today's market. Whether you need to open up a half-dozen Excel or Word documents, keep an eye on your Twitter feed while looking at a dozen browser windows, or run games at near Eyefinity/Surround levels without bezels, the LG 34UM95 is a perfect option.

Originally priced north of $1200, the 34UM95 and many in LG's 21:9 lineup have dropped in price considerably, giving them more avenues into users' homes. There are obvious gaming advantages to the 34-in display compared to a pair of 1920x1080 panels (no bezel, 20% more pixels) but if you have a pair of 2560x1440 screens you are going to be giving up a bit. Some games might not handle 21:9 resolutions well either, just as we continue to see Eyefinity/Surround unsupported occasionally.

Productivity users will immediately see an improvement, both those of us inundated with spreadsheets, web pages, and text documents as well as the more creative types with Adobe Premiere timelines. I know that Ken would definitely have approved of us keeping this monitor here at the office for his use.

Check out the video above for more thoughts on the LG 34UM95!

Manufacturer: Various

It's more than just a branding issue

As a part of my look at the first wave of AMD FreeSync monitors hitting the market, I wrote an analysis of how the competing technologies of FreeSync and G-Sync differ from one another. It was a complex topic that I tried to state as succinctly as possible given the time constraints and the fact that the article's subject was FreeSync specifically. I'm going to include a portion of that discussion here, to recap:

First, we need to look inside the VRR window, the zone in which the monitor and AMD claim that variable refresh should be working without tears and without stutter. On the LG 34UM67, for example, that range is 48-75 Hz, so frame rates between 48 FPS and 75 FPS should be smooth. Next we want to look above the window, at frame rates above the 75 Hz maximum refresh rate. Finally, and maybe most importantly, we need to look below the window, at frame rates under the minimum rated variable refresh target; in this example that is 48 FPS.

AMD FreeSync offers more flexibility for the gamer than G-Sync around this VRR window. Both above and below the variable refresh area, AMD allows gamers to continue to select a VSync enabled or disabled setting. That setting is handled just as it is today whenever your game frame rate extends outside the VRR window. So, for our 34UM67 example, if your game is capable of rendering at 85 FPS then you will either see tearing on your screen (if you have VSync disabled) or you will get a static frame rate of 75 FPS, matching the top refresh rate of the panel itself. If your game is rendering at 40 FPS, below the minimum of the VRR window, then you will again see either tearing (with VSync off) or the potential for stutter and hitching (with VSync on).

But what happens with this FreeSync monitor and a theoretical G-Sync monitor below the window? AMD’s implementation means that you get the option of disabling or enabling VSync.  For the 34UM67, as soon as your game frame rate drops under 48 FPS you will either see tearing on your screen or you will begin to see hints of stutter and judder as the typical (and previously mentioned) VSync concerns crop up again. At these lower frame rates (below the window) the artifacts will actually impact your gaming experience much more dramatically than at higher frame rates (above the window).

G-Sync treats this “below the window” scenario very differently. Rather than reverting to VSync on or off, the module in the G-Sync display is responsible for auto-refreshing the screen when the frame rate dips below the minimum refresh of the panel, which would otherwise be affected by flicker. So, in a 30-144 Hz G-Sync monitor, we have measured that when the frame rate drops to 29 FPS, the display is actually refreshing at 58 Hz, each frame being “drawn” one extra time to avoid pixel flicker while still maintaining tear-free and stutter-free animation. If the frame rate dips to 25 FPS, then the screen draws at 50 Hz. If the frame rate drops to something more extreme like 14 FPS, we actually see the module quadruple-drawing the frame, taking the refresh rate back up to 56 Hz. It’s a clever trick that preserves the VRR goals and prevents a degradation of the gaming experience. But this method requires a local frame buffer and logic on the display controller to work; hence the current implementation in a G-Sync module.
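
Curious how those multiples fall out? Here is a small Python sketch of the redraw logic described above. It is our own illustration, not NVIDIA's actual firmware: we assume the module targets an effective refresh near 50 Hz, a value that reproduces all three of our measurements, and redraws each frame however many times are needed to get there.

import math

TARGET_HZ = 50  # assumed flicker-avoidance target; NVIDIA's real threshold is not public

def gsync_redraws(fps):
    """Return (redraw_count, effective_refresh_hz) for a given frame rate."""
    redraws = max(1, math.ceil(TARGET_HZ / fps))
    return redraws, fps * redraws

for fps in (29, 25, 14):
    n, hz = gsync_redraws(fps)
    print(f"{fps} FPS -> each frame drawn {n}x -> panel refreshes at {hz} Hz")
# 29 FPS -> 2x -> 58 Hz; 25 FPS -> 2x -> 50 Hz; 14 FPS -> 4x -> 56 Hz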

As you can see, the topic is complicated. So Allyn and I (and an aging analog oscilloscope) decided to take it upon ourselves to try to understand and teach the implementation differences with the help of some science. The heart of this story is in the video below, though I have some visual aids embedded after it.

Still not clear on what this means for frame rates and refresh rates on current FreeSync and G-Sync monitors? Maybe this will help.

Continue reading our story dissecting NVIDIA G-Sync and AMD FreeSync!!

Manufacturer: Futuremark

Our first DX12 Performance Results

Late last week, Microsoft approached me to see if I would be interested in working with them and with Futuremark on the release of the new 3DMark API Overhead Feature Test. Of course I jumped at the chance, with DirectX 12 being one of the hottest discussion topics among gamers, PC enthusiasts and developers in recent history. Microsoft set us up with the latest iteration of 3DMark and the latest DX12-ready drivers from AMD, NVIDIA and Intel. From there, off we went.

First we need to discuss exactly what the 3DMark API Overhead Feature Test is (and also what it is not). The feature test will be a part of the next revision of 3DMark, which will likely ship around the time of the full Windows 10 release. Futuremark claims that it is the "world's first independent" test that allows you to compare the performance of three different APIs: DX12, DX11 and even Mantle.

It was almost one year ago that Microsoft officially unveiled the plans for DirectX 12: a move to a more efficient API that can better utilize the CPU and platform capabilities of future, and most importantly current, systems. Josh wrote up a solid editorial on what we believe DX12 means for the future of gaming, and in particular for PC gaming, that you should check out if you want more background on the direction DX12 has set.

3dmark-api-overhead-screenshot.jpg

One of the keys to DX12's improved efficiency is the ability for developers to get closer to the metal, a phrase indicating that game and engine coders can access more of the system's power (CPU and GPU) without having their hands held by the API itself. The most direct benefit of this, as we saw with AMD's Mantle implementation over the past couple of years, is an increase in the number of draw calls that a given hardware system can utilize in a game engine.
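
As a rough mental model (with invented numbers, not measured driver costs), consider how per-draw CPU overhead alone caps draw call throughput; the feature test works along these lines, ramping draw calls per frame until the frame rate falls below a threshold. The Python sketch below assumes a DX11-style path costs 20 microseconds of CPU time per call and a DX12/Mantle-style path costs 2; both figures are placeholders.

FRAME_BUDGET_S = 1.0 / 30   # 33.3 ms per frame at a 30 FPS target

def max_draw_calls(cpu_cost_per_call_us):
    """Draw calls per frame that fit the budget on CPU submission cost alone."""
    return int(FRAME_BUDGET_S / (cpu_cost_per_call_us * 1e-6))

for api, cost_us in (("DX11-style", 20.0), ("DX12/Mantle-style", 2.0)):
    per_frame = max_draw_calls(cost_us)
    print(f"{api}: {cost_us} us/call -> {per_frame:,} calls/frame, "
          f"{per_frame * 30:,} calls/s")
# An order-of-magnitude drop in per-call cost yields an order-of-magnitude
# higher draw call ceiling, which is exactly what this test measures.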

Continue reading our overview of the new 3DMark API Overhead Feature Test with early DX12 Performance Results!!

Subject: Storage
Manufacturer: OCZ Storage Solutions

Introduction, Specifications and Packaging

Introduction:

OCZ has been on a fairly steady release track since their acquisition by Toshiba. OCZ previously pared down their product lines, taking a minimalist approach; the other half of that cycle has since played out with releases like the OCZ AMD Radeon R7. Today we see another addition to OCZ's lineup, in the form of a newer Vector - the Vector 180 Series:

ocz_vector180_image1.jpg

Today we will run all three available capacities (240GB, 480GB, and 960GB) through our standard round of testing. I've thrown in an R7 as a point of comparison, as well as a handful of the competition.

Specifications:

OCZ Vector 180 slides - 7.jpg

Here are the specs from OCZ's slide presentation, included because they give a good comparison across OCZ's SATA product range.

Packaging:

DSC09886.JPG

Standard packaging here. 3.5" adapter bracket and Acronis 2013 cloning software product key included.

Continue reading our review of the new OCZ Vector 180 SSD!!

Subject: Mobile
Manufacturer: Dell

Specifications

The perfect laptop: it is every manufacturer’s goal. Obviously no one has gotten there yet (or we would have all stopped writing reviews of them). At CES this past January, we got our first glimpse of a new flagship Ultrabook from Dell: the XPS 13. It got immediate attention for some of its physical characteristics, like an ultra-thin bezel and a 13-in screen in the body of a typical 11-in laptop, all in a sleek, thin, and light design. It’s not a gaming machine, despite what you might remember from the XPS line, but the Intel Core-series Broadwell-U processor keeps performance speedy in standard computing tasks.

01.jpg

As a frequent traveler who tends to err on the side of thin and light designs, as opposed to high performance notebooks with discrete graphics, I find the Dell XPS 13 immediately compelling on a personal level as well. I have long been known as a fan of what Lenovo builds for this space, trusting my work machine requirements to the ThinkPad line for years and years. Dell’s new XPS 13 is a strong contender to take away that top spot for me and perhaps force me down the path of an upgrade of my own. So, you might consider this review my personal thesis on the viability of said change.

The Dell XPS 13 Specifications

First, make sure as you hunt around the web for information on the XPS 13 that you are focusing on the new 2015 model. Much like we see from Apple, Dell reuses model names and that can cause confusion unless you know what specifications to look for or exactly what sub-model you need. Trust me, the new XPS 13 is much better than anything that existed before.

Continue reading our review of the Dell XPS 13 Notebook!

Manufacturer: EVGA

Introduction and Features

Introduction

EVGA has just announced the arrival of two new GS power supplies in their popular SuperNOVA line, the 550GS and 650GS. Both power supplies are 80 Plus Gold certified and feature fully modular cables, high-quality Japanese brand capacitors, and a super quiet 120mm cooling fan (with the ability to operate in silent, fan-less mode at low power levels). The 550GS and 650GS are housed in a compact chassis (150mm deep) and are backed by a 5-year warranty. These new GS units are manufactured by Seasonic and will start selling for $89.99 (550GS) and $99.99 (650GS) this spring.

2-GS-Banner.jpg

EVGA was founded in 1999 with headquarters in Brea, California. They continue to specialize in producing NVIDIA based graphics adapters and Intel based motherboards while steadily expanding their PC power supply product line, which currently includes twenty-two models ranging from the high-end 1,600W SuperNOVA T2 to the budget minded EVGA 400W power supply.

3a-PSUs.jpg

In this review we will be taking a detailed look at both the EVGA SuperNOVA 550GS and 650GS power supplies. It’s nice to receive two slightly different units from the same product series, as it lets us look for consistency during testing.

Here is what EVGA has to say about the new SuperNOVA GS Gold PSUs: “The EVGA GS power supply lineup has arrived. These power supplies offer superior performance, at extremely low noise levels making these the perfect power supplies for a low noise environment. These power supplies also feature ECO mode meaning that the fan will run only when necessary, further reducing noise level and power consumption.

The EVGA SuperNOVA 550GS/650GS power supply raises the bar with 550W/650W of continuous power delivery and 90% (115VAC) / 92% (220VAC-240VAC) efficiency. A fully modular design with compact dimensions reduces case clutter and 100% Japanese capacitors ensures that only the absolute best components are used. This gives you the best stability, reliability, overclockability and unparalleled control. The EVGA SuperNOVA 550GS/650GS is the ultimate tool to eliminate all system bottlenecks and achieve unrivaled performance."


3b-Fron-cables.jpg

EVGA SuperNOVA 550 GS and 650 GS Gold PSU Key Features:

•    Fully modular cables to reduce clutter and improve airflow
•    80PLUS Gold certified, with up to 90% (115VAC) / 92% (240VAC) efficiency
•    Tight voltage regulation (2%), stable power with low AC ripple and noise
•    Highest quality Japanese brand capacitors ensure long-term reliability
•    Quiet 120mm Teflon Nano-Steel bearing cooling fan
•    ECO Intelligent Thermal Control allows silent, fan-less operation at low power
•    NVIDIA SLI & AMD Crossfire Ready
•    Compliance with ErP Lot 6 2013 Requirement
•    Active Power Factor correction (0.99) with Universal AC input
•    Complete Protections: OVP, UVP, OPP, and SCP
•    Compact chassis only 150mm (5.9”) deep
•    5-Year warranty

Please continue reading our review of the EVGA SuperNOVA 550/650 GS Gold PSUs!!!

Manufacturer: SilverStone

Introduction and First Impressions

The Fortress FT05 is the fifth iteration of SilverStone's Fortress series of enclosures and, like the latest Raven case, it leverages the complete removal of 5.25" bays to reduce its overall size. We've seen this before, as the FT03 completely removed optical drive support, but this enclosure is related far more closely to the current Raven enclosure than to any of its predecessors.

FT05_Angle_Front.jpg

Introduction: The Heart of a Raven

If you're familiar with SilverStone's product lineup you'll know about the Fortress and Raven enclosures, which both currently feature an unusual 90° motherboard orientation. This layout places I/O on the top of the case and helps expel warm air straight up. The Fortress was originally a more conventional design with a standard motherboard layout, but SilverStone switched this to mirror the Raven series with the second version, the FT02. However, just as the Raven series diverged from the original design language and layout of the RV01 with later versions, the Fortress series has undergone some radical changes since its introduction. With this fifth version of the Fortress, SilverStone has converged the two enclosure lines, and the FT05 is essentially a more businesslike version of the Raven RV05 - though the design's more conventional exterior also contains noise-dampening material, which helps to further differentiate the two enclosures.

Much as the current Raven owes much of its design to an earlier version, in that case the RV01, this new Fortress is a return to the design of the FT02. That earlier Fortress was a large (and quite expensive) case that combined great expandability with excellent cooling, taking the RV01's 90° layout and opening up the interior for an expansive, easy-to-manage interior. A considerable amount of the second gen's interior was devoted to storage, and the front of the case was dominated by 5.25" drive bays.

FT02.jpg

The second-generation Fortress FT02 interior

Continue reading our review of the SilverStone Fortress FT05 enclosure!!

Subject: Displays
Manufacturer: AMD

What is FreeSync?

FreeSync: What began as merely a term for AMD’s plans to counter NVIDIA’s launch of G-Sync (and a mocking play on NVIDIA’s trade name) has finally come to fruition, keeping the name - and the attitude. As we have discussed, AMD’s Mantle API was crucial to pushing the industry in the correct and necessary direction for lower level APIs, and NVIDIA’s G-Sync deserves the same credit for recognizing, and convincing the industry of, the necessity of a move to variable refresh display technology. Variable refresh displays can fundamentally change the way that PC gaming looks and feels when they are built correctly and implemented with care, and we have seen that time and time again with many different G-Sync enabled monitors at our offices. It might finally be time to make the same claims about FreeSync.

But what exactly is FreeSync? AMD has been discussing it since CES in early 2014, claiming that they would bypass the idea of a custom module that a monitor must use to support VRR, and instead go the route of open standards using a modification to DisplayPort 1.2a from VESA. FreeSync is based on Adaptive-Sync, an optional portion of the DP standard that enables a variable refresh rate by expanding the vBlank timings of a display, and that also provides a way of updating EDID (display ID information) to communicate these capabilities to the graphics card. FreeSync itself is simply the AMD brand for this implementation, combining the monitors with correctly implemented drivers and GPUs that support the variable refresh technology.

disp4.jpg

A set of three new FreeSync monitors from Acer, LG and BenQ.

Fundamentally, FreeSync works in a very similar fashion to G-Sync, utilizing the vBlank timings of a monitor to change how and when it updates the screen. The vBlank signal is what tells the monitor to begin drawing the next frame, representing the end of the current data set and marking the beginning of a new one. By varying the length of the vBlank interval, you can force the monitor to wait any amount of time necessary, allowing the GPU to end the vBlank instance exactly when a new frame is done drawing. The result is a variable refresh rate monitor, one that is in tune with the GPU render rate rather than opposed to it. Why is that important? I wrote in great detail about this previously, and it still applies in this case:
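
A simplified timeline makes the difference visible. In the Python sketch below (our own illustration, with hypothetical frame completion times), a fixed 60 Hz display can only show a finished frame at the next ~16.7 ms tick, while a variable refresh display shows it the instant rendering completes:

import math

REFRESH_MS = 1000 / 60            # one refresh tick every ~16.7 ms at 60 Hz
frame_done_ms = [12, 32, 63, 79]  # hypothetical frame completion times

for done in frame_done_ms:
    # Fixed refresh: wait for the next tick after the frame is ready.
    tick = math.ceil(done / REFRESH_MS) * REFRESH_MS
    print(f"frame ready at {done} ms -> fixed 60 Hz shows it at {tick:.1f} ms, "
          f"VRR shows it at {done} ms")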

The idea of G-Sync (and FreeSync) is pretty easy to understand, though the implementation method can get a bit more hairy. G-Sync (and FreeSync) introduces a variable refresh rate to a monitor, allowing the display to refresh at a wide range of rates rather than at fixed intervals. More importantly, rather than the monitor dictating what rate this refresh occurs at to the PC, the graphics card now tells the monitor when to refresh in a properly configured G-Sync (and FreeSync) setup. This allows a monitor to match the refresh rate of the screen to the draw rate of the game being played (frames per second), and that simple change drastically improves the gaming experience for several reasons.

slides01.jpg

Gamers today are likely to be very familiar with V-Sync, short for vertical sync, which is an option in your graphics card’s control panel and in your game options menu. When enabled, it forces the monitor to draw a new image on the screen at a fixed interval. In theory this works well, and the image is presented to the gamer without artifacts. The problem is that games played and rendered in real time rarely hold a very specific frame rate. With only a couple of exceptions, game frame rates will fluctuate based on the activity happening on the screen: a rush of enemies, a changed camera angle, an explosion or falling building. Instantaneous frame rates can vary drastically, from 30 to 60 to 90 FPS, yet VSync forces the image to be displayed only at set fractions of the monitor's refresh rate, which causes problems.
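
The "set fractions" problem is easy to put in numbers. With VSync on, a frame must be held for a whole number of refresh intervals, so a 60 Hz panel can only present at 60/n frames per second; a game rendering at a steady 45 FPS has no slot of its own and alternates between one- and two-interval holds, which is exactly the stutter described above. A two-line check:

refresh_hz = 60
print([refresh_hz / n for n in range(1, 5)])   # [60.0, 30.0, 20.0, 15.0]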

Continue reading our first impressions of the newly released AMD FreeSync technology!!

Manufacturer: NVIDIA

GM200 Specifications

With the release of the GeForce GTX 980 back in September of 2014, NVIDIA took the lead in performance with single-GPU graphics cards. The GTX 980 and GTX 970 were both impressive options. The GTX 970 offered better performance than the R9 290, as did the GTX 980 compared to the R9 290X; on top of that, both did so while running at lower power consumption and while including new features like DX12 feature level support, HDMI 2.0 and MFAA (multi-frame antialiasing). Because of those factors, the GTX 980 and GTX 970 were fantastic sellers, helping to push NVIDIA’s market share over 75% as of the 4th quarter of 2014.

IMG_1954.JPG

But in the back of our minds, and in the minds of many NVIDIA fans, we knew that the company had another GPU it was holding on to: the bigger, badder version of Maxwell. The only question was WHEN the company would release it and sell us a new flagship GeForce card. In most instances, this decision is based on the competitive landscape, such as when AMD might finally update its Radeon R9 290X Hawaii family of products with the rumored R9 390X. Perhaps NVIDIA is tired of waiting, or maybe the strategy is to launch before the Fiji GPUs make their debut. Either way, NVIDIA officially took the wraps off of the new GeForce GTX TITAN X at the Game Developers Conference two weeks ago.

At the session hosted by Epic Games’ Tim Sweeney, NVIDIA CEO Jen-Hsun Huang arrived as Tim lamented needing more GPU horsepower for their UE4 content. In his hands he had the first TITAN X GPU, and he talked about only a couple of specifications: the card would have 12GB of memory and it would be based on a GPU with 8 billion transistors.

Since that day, you have likely seen picture after picture, rumor after rumor, about specifications, pricing and performance. Wait no longer: the GeForce GTX TITAN X is here. With a $999 price tag and a GPU with 3072 CUDA cores, we clearly have a new king of the court.

Continue reading our review of the NVIDIA GeForce GTX Titan X 12GB Graphics Card!!

Manufacturer: Lian Li

Introduction and First Impressions

The Lian Li PC-Q33 is a mini-ITX enclosure with a cube-like appearance and a hinged construction that makes it easy to access the components within.

Q33_Main.jpg

Introduction

When a builder is contemplating a mini-ITX system, the primary driver is going to be size. It’s incredible that we've reached the point where we can have a powerful single-GPU system with minimal (if any) tradeoffs from the tiny mITX form-factor, but the components need to be housed in an appropriately small enclosure or the entire purpose is defeated. However, working within small enclosures is often more difficult, unless the enclosure has been specifically designed to account for this. Certainly no slouch in the design department, Lian Li is no stranger to small, lightweight mini-ITX designs like this. The NCASE M1 (a personal favorite) was manufactured by the company, after all, and in some ways the PC-Q33 is reminiscent of that design - in build quality and materials if nothing else. The Q33 features aluminum construction and is very light, and, while compact, the design of the enclosure allows for effortless component installation. The secret? A hinged design that allows the front of the enclosure to swing down, providing full access to the interior.

Q33_Fold_Angle.jpg

This approach to accessibility with a small enclosure is a welcome one, especially considering the price of the PC-Q33, which retails for $95 on Newegg and can be found for around $105 on Amazon as well. This is still a high cost for many considering a small build and enters the premium price range for an enclosure, but remember that the Q33 features aluminum construction, which typically carries a considerably higher cost than steel and plastic. Of course, if the case is frustrating to use or has poor thermals then the materials used are meaningless, so in this review we’ll look at the build process and thermal results with the Q33 to see if it’s a good value. My initial impression is that the price is actually low, but that’s coming from someone who looks at a lot of cases and develops a familiarity with the average retail prices in each category.

Continue reading our review of the Lian Li PC-Q33 SFF Chassis!!

Subject: Storage
Manufacturer: Patriot Memory

Introduction, Specs and Packaging

Introduction:

We're getting back into USB device roundup testing. To kick it off, Patriot passed along a couple of USB samples for review. First up is the Supersonic Phoenix 256GB:

150311-202020.jpg

Specs:

  • Read speed: Up to 260MB/s
  • Write speed: up to 170MB/s
  • Compact and lightweight
  • Stylish 3D design
  • USB Powered
  • SuperSpeed USB 3.0
  • Compatible with Windows 8, Windows 7, Windows Vista, Windows XP, Windows 2000, Windows ME, Linux 2.4 and later, Mac OS9, X and later

Next up is their Supersonic Rage 2:

150311-202124.jpg

  • Up to 400MB/s Read; Up to 300MB/s Write
  • Durable design extends the life of your drive
  • Rubber coated housing protects from drops, spills, daily abuse
  • Retractable design protects USB connector when drive not in use
  • LED Light Indicator
  • Compatible with Windows® 8, Windows® 8.1, Windows® 7, Windows Vista®, Windows XP®, Windows 2000®, Windows® ME, Linux 2.4 and later, Mac® OS9, X

Packaging:

The Phoenix comes well packaged with a necessary USB 3.0 cable:

150311-201956.jpg

The Rage 2 comes in very simple packaging:

150311-201338.jpg

Read on for the results!

Subject: Motherboards
Manufacturer: ASUS

Introduction and Technical Specifications

Introduction

02-board.jpg

Courtesy of ASUS

The X99-A is the base level board in ASUS' Intel X99 line of motherboard offerings. Don't let the term "base level offering" throw you off, though; ASUS put their best foot forward in designing this beauty. The board features full support for all Intel LGA2011-3 based processors paired with DDR4 memory operating in up to a quad channel configuration. Priced competitively at $274.99, the X99-A gives the more feature-packed (and vastly more expensive) boards a run for their money.

03-fly-apart.jpg

Courtesy of ASUS

04-profile.jpg

Courtesy of ASUS

Just because the X99-A motherboard is designed to be the "entry-level" model of ASUS' X99 product line does not mean that they skimped on its design or features. The X99-A features the enhanced OC Socket and an 8+4 phase digital power system similar to that featured on its more costly siblings, centered around the Extreme Engine Digi+ IV solution. Extreme Engine Digi+ IV combines ASUS' custom designed Digi+ EPU chipset, IR (International Rectifier) sourced MOSFETs, high-quality chokes, and 10k Black Metallic capacitors for unrivaled power delivery capabilities. The board is further augmented by the integration of ASUS' Crystal Sound 2 audio subsystem for superior audio reproduction.

Continue reading our review of the ASUS X99-A motherboard!

Subject: Mobile
Manufacturer: Samsung

Introduction and Specifications

Had you asked me just a few years ago if 6-inch phones would be not only a viable option but a dominant force in the mobile computing market, I would have likely rolled my eyes. At that time phones were small, tablets were big, and phablets were laughed at. Today, no one is laughing at the Galaxy Note 4, the latest iteration in Samsung’s created space of larger-than-you-probably-thought-you-wanted smartphones. Nearly all consumers are amazed by the size of the screen and the real estate this class of phone provides, but some are instantly put off by the way the phone feels in the hand; it can come off as foreign, cumbersome, and unusable.

07.jpg

In my time with the new Galaxy Note 4 – my first extended-use experience with a phone of this magnitude – I have come to see the many positive traits that a larger phone can offer. There are some trade-offs of course, including the pocket/purse viability debate. One thing beyond question is that a large phone means a big screen, one that can display a large amount of data, whether on a website or in a note-taking application. The extra screen real estate can instantly improve your productivity. To that end, Samsung also provides a multi-tasking framework that lets you run multiple programs in a side-by-side view, similar to what the original version of Windows 8 did. It might seem unnecessary for an Android device, but as soon as you find the situation where you need it, going back to a device without it can feel archaic.

A larger phone also means that there is more room for faster hardware, a larger camera sensor, and a bigger battery. Samsung even includes an active stylus called the S-Pen in the body of the device – something that few other modern tablets/phablets/phones feature.

Continue reading our review of the Samsung Galaxy Note 4 Smartphone!!

Subject: Mobile
Manufacturer: Lenovo

Introduction and Design

P9260212.jpg

Although the target market and design emphasis may be different, there is one thing consumer and business-grade laptops have in common: a drift away from processing power and toward portability and efficiency.  At the risk of repeating our introduction for the massive MSI GT72 gaming notebook we reviewed last month, it seems that battery life, temperature, and power consumption get all the attention these days.  And arguably, it makes sense for most people: it’s true that CPU performance gains have in years past greatly outstripped the improvements in battery life, and that likewise performance gains could be realized far more easily by upgrading storage device speed (such as by replacing conventional hard drives with solid-state drives) than by continuing to focus on raw CPU power and clock rates.  As a result, we’ve seen many mobile CPU speeds plateauing or even dropping in exchange for a reduction in power consumption, while simultaneously cases have slimmed and battery life has jumped appreciably across the board.

But what if you’re one of the minority who actually appreciates and needs raw computing power?  Fortunately, Lenovo’s ThinkPad W series still has you covered.  This $1,500 workstation is the business equivalent of the consumer-grade gaming notebook.  It’s one of the few designs where portability takes a backseat to raw power and ridiculous spec.  Users shopping for a ThinkPad workstation aren’t looking to go unplugged all day long on an airplane tray table.  They’re looking for power, reliability, and premium design, with function over form as a rule.  And that’s precisely what they’ll get.


specs.png

Beyond the fairly typical (and very powerful) Intel Core i7-4800MQ CPU, often found in gaming PCs and workstations, and just 8 GB of single-channel DDR3-1600 RAM, there is a 256 GB SSD and a unique feature to go along with the WQHD+ display panel: a built-in X-Rite Pantone color sensor, which can be used to calibrate the panel simply by closing the lid when prompted.  How well this functions is another topic entirely, but at the very least it’s a novel idea.

P9260199.jpg

Continue reading our full Lenovo ThinkPad W540 Review!!

Introduction and Features

Introduction

2-Banner-1.jpg

Earlier this year we took a look at SilverStone’s ST1500-GS power supply unit, which currently has the highest rated output in the Strider Gold S Series. Today we are looking at SilverStone’s second generation Strider Gold ST75F-GS V2.0, a 750 watt power supply that comes housed in a short chassis, only 140mm (5.5”) deep, for easy integration. It’s nice to get a different model from the same series in for review to see how the series performs overall. SilverStone claims the Strider Gold S Series are the world’s smallest fully modular ATX power supplies.

3-ST75F-GS.jpg

SilverStone SST-ST75F-GS V2.0 750W ATX Power Supply

There are currently five different models available in the Strider Gold S Series: the ST55F-G, ST65F-G, ST75F-GS, ST85F-GS, and ST1500-GS. All of the Strider Gold S Series PSUs are designed to be fully modular, 80 Plus Gold certified, and small in size. While the typical 750W power supply enclosure measures 160mm (6.3”) deep, the Strider Gold ST75F-GS is housed in a 140mm (5.5”) chassis.

4-Dimensions.jpg

(Courtesy of SilverStone)

SilverStone Strider Gold S Series ST75F-GS V2.0 PSU Key Features:
 
•    750 watts DC power output
•    Compact design with a depth of only 140mm for easy integration
•    High efficiency with 80 Plus Gold certification
•    100% Modular cables
•    24/7 continuous power output at 40°C operating temperature
•    Strict ±3% voltage regulation and low AC ripple & noise
•    Dedicated single +12V rail (62.5A)
•    Quiet 120mm cooling fan
•    Four PCI-E 8/6-pin connectors support multiple high-end graphics adapters
•    Conforms to ATX12V and EPS standards
•    Universal AC input (90-264V) with Active PFC
•    Dimensions: 150mm (W) x 86mm (H) x 140mm (L)
•    $134.99 USD

Please continue reading our review of the SilverStone ST75F-GS PSU !!!

Manufacturer: NVIDIA

Finally, a SHIELD Console

NVIDIA is filling out the family of the SHIELD brand today with the announcement of SHIELD, a set-top box powered by the Tegra X1 processor. SHIELD will run Android TV and act as a game playing, multimedia watching, GRID streaming device. Selling for $199 and available in May of this year, it gives us a lot to discuss.

Odd naming scheme aside, the SHIELD looks to be an impressive little device, sitting in your home theater or on your desk and bringing a ton of connectivity and performance to your TV. Running Android TV means the SHIELD will have access to the entire library of Google Play media, including music, movies and apps. SHIELD supports 4K video playback at 60 Hz through its HDMI 2.0 connection and fully supports H.265/HEVC decode thanks to the Tegra X1 processor.

10.jpg

Here is a full breakdown of the device's specifications.

NVIDIA SHIELD Specifications

Processor: NVIDIA® Tegra® X1 processor with 256-core Maxwell™ GPU and 3GB RAM
Video Features: 4K Ultra-HD Ready with 4K playback and capture up to 60 fps (VP9, H.265, H.264)
Audio: 7.1 and 5.1 surround sound pass-through over HDMI; high-resolution audio playback up to 24-bit/192kHz over HDMI and USB; high-resolution audio upsampling to 24-bit/192kHz over USB
Storage: 16 GB
Wireless: 802.11ac 2x2 MIMO 2.4 GHz and 5 GHz Wi-Fi; Bluetooth 4.1/BLE
Interfaces: Gigabit Ethernet; HDMI 2.0; two USB 3.0 (Type A); Micro-USB 2.0; microSD slot (supports 128GB cards); IR receiver (compatible with Logitech Harmony)
Gaming Features: NVIDIA GRID™ streaming service; NVIDIA GameStream™
SW Updates: SHIELD software upgrades directly from NVIDIA
Power: 40W power adapter
Weight and Size: 23oz / 654g; height 5.1in / 130mm; width 8.3in / 210mm; depth 1.0in / 25mm
OS: Android TV™, Google Cast™ Ready
Bundled Apps: PLEX
In the Box: NVIDIA SHIELD, SHIELD controller, HDMI cable (High Speed), USB cable (Micro-USB to USB), power adapter (includes plugs for North America, Europe, UK)
Requirements: TV with HDMI input, Internet access
Options: SHIELD controller, SHIELD remote, SHIELD stand

Obviously the most important feature is the Tegra X1 SoC, built on an 8-core 64-bit ARM processor and a 256 CUDA core Maxwell architecture GPU. This gives the SHIELD set-top more performance than basically any other mobile part on the market, and demos showing Doom 3 and Crysis 3 running natively on the hardware drive the point home. With integrated HEVC decode support, the console is the first Android TV device to offer support for 4K video content at 60 FPS.

Even though storage comes in at only 16GB, the inclusion of a microSD card slot enables expansion by as much as 128GB more for content and local games.

11.jpg

The first choice for networking will be the Gigabit Ethernet port, but the 2x2 dual-band 802.11ac wireless controller means that even those of us who don't have hardwired Internet going to our TV will be able to utilize all the performance and features of SHIELD.

Continue reading our preview of the NVIDIA SHIELD set-top box!!

Manufacturer: AMD

Liquid...get it?

As GDC progresses here in San Francisco, AMD took the wraps off of a new SDK for game developers to use to improve experiences with virtual reality (VR) headsets. Called LiquidVR, its goal is to provide a smooth, stutter-free VR experience that is universal across all headset hardware and to keep the wearer, be it a gamer or professional user, immersed.

01.jpg

AMD's CTO of Graphics, Raja Koduri, spoke with us about the three primary tenets of the LiquidVR initiative. The 'three Cs', as it is being called, are Comfort, Compatibility and Compelling Content. Ignoring the fact that we have four C's in that phrase, the premise is straightforward. Comfortable use of VR means there are little to no issues with nausea, and that can be addressed with ultra-low latency between motion (of your head) and photons (hitting your eyes). For compatibility, AMD would like to ensure that all VR headsets are treated equally and all provide the best experience; Oculus, HTC and others should operate in a simple, plug-and-play style. Finally, the content story is easy to grasp, with a focus on solid games and software to utilize VR, but AMD also wants to ensure that the rendering is scalable across different hardware and multiple GPUs.

03.jpg

To address these tenets AMD has built four technologies into LiquidVR: late data latching, asynchronous shaders, affinity multi-GPU, and direct-to-display.

02.jpg

The idea behind late data latching is to get the absolute most recent raw tracking data from the VR engine to the user's eyes. Rather than asking for the head position of a gamer at the beginning of a render job, LiquidVR allows the game to ask for it at the end of the rendering pipeline, which might seem counter-intuitive. Late latching means the user's head movement is tracked until the end of the frame render rather than just until the beginning, saving potentially 5-10ms of delay.
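
Here is a hedged sketch of the idea in Python. The function names and structure are ours, not the LiquidVR API; the point is only where the pose query sits relative to the heavy rendering work, and how much older the pose is by the time the frame is presented.

import time

def get_head_pose():
    """Stand-in for an HMD pose query; records the sample time in ms."""
    return {"t_ms": time.perf_counter() * 1000, "yaw": 0.0, "pitch": 0.0}

def heavy_render():
    """Stand-in for ~10 ms of pose-independent GPU work."""
    time.sleep(0.010)
    return "frame"

def early_latch():
    pose = get_head_pose()    # sampled before the render job begins
    frame = heavy_render()
    return frame, pose

def late_latch():
    frame = heavy_render()
    pose = get_head_pose()    # sampled at the end of the pipeline
    return frame, pose        # the final view transform would use this pose

for name, fn in (("early", early_latch), ("late", late_latch)):
    _, pose = fn()
    age = time.perf_counter() * 1000 - pose["t_ms"]
    print(f"{name} latch: pose is {age:.1f} ms old at present time")
# Early latching leaves a ~10 ms stale pose; late latching leaves ~0 ms.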

04.jpg

Continue reading our first impressions of the new AMD LiquidVR SDK for virtual reality!!