A monitor for those that like it long
It takes a lot to really impress someone who sits in front of dual 2560x1600 30-in IPS screens all day, but the LG 34UM95 did just that. With a 34-in diagonal, 3440x1440 panel in a 21:9 aspect ratio, built on LG IPS technology for flawless viewing angles, this monitor creates a work and gaming experience that is basically unmatched in today's market. Whether you need to open up a half-dozen Excel or Word documents, keep an eye on your Twitter feed while juggling a dozen browser windows, or run games at near Eyefinity/Surround widths without bezels, the LG 34UM95 is a perfect option.
Originally priced north of $1200, the 34UM95 and many others in LG's 21:9 lineup have dropped in price considerably, giving them more avenues into users' homes. There are obvious gaming advantages to the 34-in display compared to a pair of 1920x1080 panels (no bezel, 20% more pixels), but if you have a pair of 2560x1440 screens you are going to be giving up a bit of resolution. Some games might not handle 21:9 resolutions well either, just as we continue to see Eyefinity/Surround go unsupported occasionally.
Productivity users will immediately see an improvement, both those of us inundated with spreadsheets, web pages and text documents as well as the more creative types with Adobe Premiere timelines. I know that Ken would definitely have approved of us keeping this monitor here at the office for his use.
Check out the video above for more thoughts on the LG 34UM95!
It's more than just a branding issue
As a part of my look at the first wave of AMD FreeSync monitors hitting the market, I wrote an analysis of how the competing technologies of FreeSync and G-Sync differ from one another. It was a complex topic that I tried to state in as succinct a fashion as possible given the time constraints and that the article subject was on FreeSync specifically. I'm going to include a portion of that discussion here, to recap:
First, we need to look inside the VRR window, the zone in which the monitor and AMD claim that variable refresh should be working without tears and without stutter. On the LG 34UM67, for example, that range is 48-75 Hz, so frame rates between 48 FPS and 75 FPS should be smooth. Next we want to look above the window, at frame rates above the 75 Hz maximum refresh rate. Finally, and maybe most importantly, we need to look below the window, at frame rates under the minimum rated variable refresh target; in this example that is 48 FPS.
AMD FreeSync offers more flexibility for the gamer than G-Sync around this VRR window. Both above and below the variable refresh area, AMD allows gamers to select a VSync enabled or disabled setting, and that setting is handled exactly as it is today when your game's frame rate extends outside the VRR window. So, for our 34UM67 example, if your game renders at 85 FPS you will either see tearing on your screen (with VSync disabled) or you will get a static frame rate of 75 FPS, matching the top refresh rate of the panel itself. If your game renders at 40 FPS, below the minimum of the VRR window, then you will again see either tearing (with VSync off) or the potential for stutter and hitching (with VSync on).
But what happens with this FreeSync monitor and a theoretical G-Sync monitor below the window? AMD's implementation means that you get the option of disabling or enabling VSync. For the 34UM67, as soon as your game's frame rate drops under 48 FPS you will either see tearing on your screen or you will begin to see hints of stutter and judder as the typical (and previously mentioned) VSync concerns rear their heads again. At lower frame rates (below the window) these artifacts will actually impact your gaming experience much more dramatically than at higher frame rates (above the window).
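The FreeSync behavior described above can be condensed into a small decision function. This is an illustrative sketch of the logic, not AMD's actual driver code; the 48-75 Hz window comes from the LG 34UM67 example.

```python
def freesync_behavior(fps, vrr_min=48, vrr_max=75, vsync=False):
    """Simplified model of what a FreeSync panel shows at a given
    render rate. The 48-75 Hz defaults match the LG 34UM67 example;
    the logic is an illustration, not AMD's shipping driver code."""
    if vrr_min <= fps <= vrr_max:
        # Inside the VRR window: refresh tracks the render rate.
        return ("variable refresh", fps)
    if fps > vrr_max:
        # Above the window: capped at max refresh (VSync on) or tearing.
        return ("vsync cap", vrr_max) if vsync else ("tearing", vrr_max)
    # Below the window: the classic VSync trade-off returns.
    return ("stutter/judder", vrr_min) if vsync else ("tearing", vrr_min)

print(freesync_behavior(60))              # ('variable refresh', 60)
print(freesync_behavior(85))              # ('tearing', 75)
print(freesync_behavior(40, vsync=True))  # ('stutter/judder', 48)
```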
G-Sync treats this “below the window” scenario very differently. Rather than reverting to VSync on or off, the module in the G-Sync display is responsible for auto-refreshing the screen if the frame rate dips below the minimum refresh of the panel, which would otherwise be affected by flicker. So, on a 30-144 Hz G-Sync monitor, we have measured that when the frame rate actually gets to 29 FPS, the display refreshes at 58 Hz, each frame being “drawn” one extra time to avoid pixel flicker while still maintaining tear-free and stutter-free animation. If the frame rate dips to 25 FPS, the screen draws at 50 Hz. If the frame rate drops to something more extreme like 14 FPS, we actually see the module draw each frame four times, taking the refresh rate back up to 56 Hz. It’s a clever trick that preserves the goals of VRR and prevents a degradation of the gaming experience. But this method requires a local frame buffer and logic on the display controller to work; hence the current implementation in a G-Sync module.
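The frame-multiplication trick can be modeled in a few lines. Note that the 48 Hz "floor" below is an assumption chosen so the sketch reproduces our measured values (58 Hz at 29 FPS, 56 Hz at 14 FPS); NVIDIA has not published the module's exact heuristic.

```python
def gsync_redraw(fps, panel_min=30, panel_max=144, floor=48):
    """Sketch of the G-Sync module's low-frame-rate redraw behavior.
    Each frame is drawn enough times to keep the effective refresh
    comfortably above the panel minimum. The 48 Hz 'floor' is an
    assumption fitted to our measurements, not a documented value."""
    if fps >= panel_min:
        return fps, 1  # inside the panel's range: no redraw needed
    multiplier = 1
    # Increase the draw count until the effective refresh clears the floor.
    while fps * multiplier < floor and fps * (multiplier + 1) <= panel_max:
        multiplier += 1
    return fps * multiplier, multiplier

print(gsync_redraw(29))  # (58, 2)  -> matches the measured 58 Hz
print(gsync_redraw(25))  # (50, 2)
print(gsync_redraw(14))  # (56, 4)  -> frame drawn four times
```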
As you can see, the topic is complicated. So Allyn and I (and an aging analog oscilloscope) decided to take it upon ourselves to try and understand and teach the implementation differences with the help of some science. The video below is where the heart of this story is focused, though I have some visual aids embedded after it.
Still not clear on what this means for frame rates and refresh rates on current FreeSync and G-Sync monitors? Maybe this will help.
Our first DX12 Performance Results
Late last week, Microsoft approached me to see if I would be interested in working with them and with Futuremark on the release of the new 3DMark API Overhead Feature Test. Of course I jumped at the chance, with DirectX 12 being one of the hottest discussion topics among gamers, PC enthusiasts and developers in recent history. Microsoft set us up with the latest iteration of 3DMark and the latest DX12-ready drivers from AMD, NVIDIA and Intel. From there, off we went.
First we need to discuss exactly what the 3DMark API Overhead Feature Test is (and also what it is not). The feature test will be a part of the next revision of 3DMark, which will likely ship in time with the full Windows 10 release. Futuremark claims that it is the "world's first independent" test that allows you to compare the performance of three different APIs: DX12, DX11 and even Mantle.
It was almost one year ago that Microsoft officially unveiled the plans for DirectX 12: a move to a more efficient API that can better utilize the CPU and platform capabilities of future, and most importantly current, systems. Josh wrote up a solid editorial on what we believe DX12 means for the future of gaming, and in particular for PC gaming, that you should check out if you want more background on the direction DX12 has set.
One of the keys to DX12 becoming more efficient is the ability for developers to get closer to the metal, a phrase indicating that game and engine coders can access more of the system's power (CPU and GPU) without having their hands held by the API itself. The most direct benefit of this, as we saw with AMD's Mantle implementation over the past couple of years, is an increase in the number of draw calls that a given hardware system can sustain in a game engine.
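A toy model shows why per-call overhead matters so much to the draw call ceiling. The microsecond costs below are invented for illustration; they are not measurements from 3DMark or any shipping driver.

```python
# Toy model of why lower API overhead raises the draw call ceiling.
# The per-call CPU costs are illustrative numbers only.
def max_draw_calls(frame_budget_ms, cost_per_call_us):
    """How many draw calls fit in one frame's CPU budget."""
    return int(frame_budget_ms * 1000 / cost_per_call_us)

budget = 1000 / 60  # ~16.7 ms of CPU time per frame at 60 FPS
# Suppose a high-overhead API costs ~40 us of CPU time per draw call
# and a low-overhead API costs ~4 us (a 10x reduction, for argument).
print(max_draw_calls(budget, 40))  # ~416 calls per frame
print(max_draw_calls(budget, 4))   # ~4166 calls per frame
```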
Introduction, Specifications and Packaging
OCZ has been on a fairly steady release track since their acquisition by Toshiba. Having previously pared down their product lines, taking a minimalist approach, the other half of that cycle has taken place with releases like the OCZ AMD Radeon R7. Today we see another addition to OCZ's lineup, in the form of a newer Vector - the Vector 180 Series:
Today we will run all three available capacities (240GB, 480GB, and 960GB) through our standard round of testing. I've thrown in an R7 as a point of comparison, as well as a handful of the competition.
Here are the specs from OCZ's slide presentation, included here as it gives a good spec comparison across OCZ's SATA product range.
Standard packaging here. 3.5" adapter bracket and Acronis 2013 cloning software product key included.
The perfect laptop: it is every manufacturer’s goal. Obviously no one has gotten there yet (or we would have all stopped writing reviews of them). At CES this past January, we got our first glimpse of a new flagship Ultrabook from Dell: the XPS 13. It got immediate attention for some of its physical characteristics, like an ultra-thin bezel and a 13-in screen in the body of a typical 11-in laptop, all in a sleek, thin and light design. It’s not a gaming machine, despite what you might remember from the XPS line, but the Intel Core-series Broadwell-U processor keeps performance speedy in standard computing tasks.
As a frequent traveler who tends to err on the side of thin and light designs, as opposed to high performance notebooks with discrete graphics, I find the Dell XPS 13 immediately compelling on a personal level as well. I have long been known as a fan of what Lenovo builds for this space, trusting my work machine requirements to the ThinkPad line for years and years. Dell’s new XPS 13 is a strong contender to take away that top spot for me and perhaps force me down the path of an upgrade of my own. So, you might consider this review my personal thesis on the viability of said change.
The Dell XPS 13 Specifications
First, make sure as you hunt around the web for information on the XPS 13 that you are focusing on the new 2015 model. Much like we see from Apple, Dell reuses model names and that can cause confusion unless you know what specifications to look for or exactly what sub-model you need. Trust me, the new XPS 13 is much better than anything that existed before.
Introduction and Features
EVGA has just announced the arrival of two new GS power supplies in their popular SuperNOVA line, the 550GS and 650GS. Both power supplies are 80 Plus Gold certified and feature all modular cables, high-quality Japanese brand capacitors, and a super quiet 120mm cooling fan (with the ability to operate in silent, fan-less mode at low power levels). The 550GS and 650GS are housed in a compact chassis (150mm deep) and are backed by a 5-year warranty. These new GS units are manufactured by Seasonic and will start selling for $89.99 (550GS) and $99.99 (650GS) this spring.
EVGA was founded in 1999 with headquarters in Brea, California. They continue to specialize in producing NVIDIA based graphics adapters and Intel based motherboards and keep expanding their PC power supply product line, which currently includes twenty-two models ranging from the high-end 1,600W SuperNOVA T2 to the budget-minded EVGA 400W power supply.
In this review we will be taking a detailed look at both the EVGA SuperNOVA 550GS and 650GS power supplies. It’s nice when we receive two slightly different units in the same product series to look for consistency during testing.
Here is what EVGA has to say about the new SuperNOVA GS Gold PSUs: “The EVGA GS power supply lineup has arrived. These power supplies offer superior performance, at extremely low noise levels making these the perfect power supplies for a low noise environment. These power supplies also feature ECO mode meaning that the fan will run only when necessary, further reducing noise level and power consumption.
The EVGA SuperNOVA 550GS/650GS power supply raises the bar with 550W/650W of continuous power delivery and 90% (115VAC) / 92% (220VAC-240VAC) efficiency. A fully modular design with compact dimensions reduces case clutter and 100% Japanese capacitors ensures that only the absolute best components are used. This gives you the best stability, reliability, overclockability and unparalleled control. The EVGA SuperNOVA 550GS/650GS is the ultimate tool to eliminate all system bottlenecks and achieve unrivaled performance."
EVGA SuperNOVA 550 GS and 650 GS Gold PSU Key Features:
• Fully modular cables to reduce clutter and improve airflow
• 80PLUS Gold certified, with up to 90% (115VAC) / 92% (240VAC) efficiency
• Tight voltage regulation (2%), stable power with low AC ripple and noise
• Highest quality Japanese brand capacitors ensure long-term reliability
• Quiet 120mm Teflon Nano-Steel bearing cooling fan
• ECO Intelligent Thermal Control allows silent, fan-less operation at low power
• NVIDIA SLI & AMD Crossfire Ready
• Compliance with ErP Lot 6 2013 Requirement
• Active Power Factor correction (0.99) with Universal AC input
• Complete Protections: OVP, UVP, OPP, and SCP
• Compact chassis only 150mm (5.9”) deep
• 5-Year warranty
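As a quick aside on what that Gold rating means in practice: efficiency is DC output divided by AC input, so the wall draw for a given load is easy to estimate. A sketch using the quoted 90% (115VAC) figure:

```python
# What 80 Plus Gold efficiency means at the wall.
# 0.90 is the 90% (115VAC) efficiency figure quoted for the GS units.
def wall_draw_watts(dc_load_w, efficiency):
    """AC power drawn from the outlet for a given DC load."""
    return dc_load_w / efficiency

load = 550 * 0.5  # 275W, a 50% load on the 550GS
print(round(wall_draw_watts(load, 0.90), 1))         # ~305.6W from the wall
print(round(wall_draw_watts(load, 0.90) - load, 1))  # ~30.6W lost as heat
```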
Introduction and First Impressions
The Fortress FT05 is the fifth iteration of SilverStone's Fortress series of enclosures and, like the latest Raven case, it does away with 5.25" bays entirely to reduce its overall size. We've seen this before when the FT03 completely removed optical support, but this enclosure is related far more closely to the current Raven than to any of its predecessors.
Introduction: The Heart of a Raven
If you're familiar with SilverStone's product lineup, you'll know about the Fortress and Raven enclosures, which both currently feature an unusual 90° motherboard orientation. This layout places I/O on the top of the case and helps expel warm air straight up. The Fortress was originally a more conventional design with a standard motherboard layout, but SilverStone switched this to mirror the Raven series with the second version, the FT02. However, just as the Raven series diverged from the original design language and layout of the RV01 with later versions, the Fortress series has undergone some radical changes since its introduction. With this fifth version of the Fortress, SilverStone has converged the two enclosure lines, and the FT05 is essentially a more businesslike version of the Raven RV05 - though the design's more conventional exterior also contains noise-dampening material which helps to further differentiate the two enclosures.
Much as the current Raven owes much of its design to an earlier version, in that case the RV01, this new Fortress is a return to the design of the FT02. That earlier Fortress was a large (and quite expensive) case that combined great expandability with excellent cooling, taking the RV01's 90° layout and opening up the interior for an expansive, easy-to-manage interior. A considerable amount of the second gen's interior was devoted to storage, and the front of the case was dominated by 5.25" drive bays.
The second-generation Fortress FT02 interior
What is FreeSync?
FreeSync: What began as merely a term for AMD’s plans to counter NVIDIA’s launch of G-Sync (and a mocking play on NVIDIA’s trade name) has finally come to fruition, keeping the name - and the attitude. As we have discussed, AMD’s Mantle API was crucial to pushing the industry in the correct and necessary direction for lower level APIs, though NVIDIA’s G-Sync deserves the same credit for recognizing and imparting the necessity of a move to a variable refresh display technology. Variable refresh displays can fundamentally change the way that PC gaming looks and feels when they are built correctly and implemented with care, and we have seen that time and time again with many different G-Sync enabled monitors at our offices. It might finally be time to make the same claims about FreeSync.
But what exactly is FreeSync? AMD has been discussing it since CES in early 2014, claiming that they would bypass the idea of a custom module that needs to be used by a monitor to support VRR and instead go the route of open standards, using a modification to DisplayPort 1.2a from VESA. FreeSync is based on Adaptive-Sync, an optional portion of the DP standard that enables a variable refresh rate by expanding the vBlank timings of a display; it also provides a way of updating EDID (display ID information) to communicate these settings to the graphics card. FreeSync itself is simply the AMD brand for this implementation, combining monitors with correctly implemented drivers and GPUs that support the variable refresh technology.
A set of three new FreeSync monitors from Acer, LG and BenQ.
Fundamentally, FreeSync works in a very similar fashion to G-Sync, utilizing the idea of the vBlank timings of a monitor to change how and when it updates the screen. The vBlank signal is what tells the monitor to begin drawing the next frame, representing the end of the current data set and marking the beginning of a new one. By varying the length of time this vBlank signal is set to, you can force the monitor to wait any amount of time necessary, allowing the GPU to end the vBlank instance exactly when a new frame is done drawing. The result is a variable refresh rate monitor, one that is in tune with the GPU render rate, rather than opposed to it. Why is that important? I wrote in great detail about this previously, and it still applies in this case:
The idea of G-Sync (and FreeSync) is pretty easy to understand, though the implementation method can get a bit more hairy. G-Sync (and FreeSync) introduces a variable refresh rate to a monitor, allowing the display to refresh at a wide range of rates rather than at fixed intervals. More importantly, rather than the monitor dictating to the PC what rate this refresh occurs at, the graphics card now tells the monitor when to refresh in a properly configured G-Sync (and FreeSync) setup. This allows a monitor to match its refresh rate to the draw rate of the game being played (frames per second), and that simple change drastically improves the gaming experience for several reasons.
Gamers today are likely to be very familiar with V-Sync, short for vertical sync, which is an option in your graphics card’s control panel and in your game options menu. When enabled, it forces the monitor to draw a new image on the screen at a fixed interval. In theory, this would work well and the image is presented to the gamer without artifacts. The problem is that games that are played and rendered in real time rarely hold a very specific frame rate. With only a couple of exceptions, game frame rates will fluctuate based on the activity happening on the screen: a rush of enemies, a changed camera angle, an explosion or a falling building. Instantaneous frame rates can vary drastically, from 30, to 60, to 90 FPS, yet the image can only be displayed at set fractions of the monitor's fixed refresh rate, which causes problems.
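That quantization effect is easy to see with a little arithmetic: under VSync at 60 Hz, a frame's on-screen time rounds up to a whole number of 16.7 ms refresh intervals. A sketch (the render times are made-up examples):

```python
# Why a fixed 60 Hz refresh with VSync makes fluctuating frame rates
# feel uneven: each frame's on-screen time snaps up to a multiple of
# the refresh interval. Render times below are made-up examples.
import math

REFRESH_MS = 1000 / 60  # ~16.67 ms per refresh at 60 Hz

def displayed_time(render_ms):
    """A frame stays on screen for a whole number of refresh intervals."""
    return math.ceil(render_ms / REFRESH_MS) * REFRESH_MS

for render in (10.0, 17.0, 25.0, 33.0):
    # A frame that takes just 17 ms to render still occupies two full
    # refreshes (~33 ms on screen), which is where the judder comes from.
    print(f"rendered in {render:.1f} ms -> shown for {displayed_time(render):.1f} ms")
```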
With the release of the GeForce GTX 980 back in September of 2014, NVIDIA took the lead in performance with single GPU graphics cards. The GTX 980 and GTX 970 were both impressive options. The GTX 970 offered better performance than the R9 290 as did the GTX 980 compared to the R9 290X; on top of that, both did so while running at lower power consumption and while including new features like DX12 feature level support, HDMI 2.0 and MFAA (multi-frame antialiasing). Because of those factors, the GTX 980 and GTX 970 were fantastic sellers, helping to push NVIDIA’s market share over 75% as of the 4th quarter of 2014.
But in the backs of our minds, and in the minds of many NVIDIA fans, we knew that the company had another GPU it was holding on to: the bigger, badder version of Maxwell. The only question was going to be WHEN the company would release it and sell us a new flagship GeForce card. In most instances, this decision is based on the competitive landscape, such as when AMD might finally update its Radeon R9 290X Hawaii family of products with the rumored R9 390X. Perhaps NVIDIA is tired of waiting or maybe the strategy is to launch before Fiji GPUs make their debut. Either way, NVIDIA officially took the wraps off of the new GeForce GTX TITAN X at the Game Developers Conference two weeks ago.
At the session hosted by Epic Games’ Tim Sweeney, NVIDIA CEO Jen-Hsun Huang arrived just as Tim lamented needing more GPU horsepower for Epic's UE4 content. In his hands he had the first TITAN X, and he talked about only a couple of specifications: the card would have 12GB of memory and it would be based on a GPU with 8 billion transistors.
Since that day, you have likely seen picture after picture, rumor after rumor, about specifications, pricing and performance. Wait no longer: the GeForce GTX TITAN X is here. With a $999 price tag and a GPU with 3072 CUDA cores, we clearly have a new king of the court.
Introduction and First Impressions
The Lian Li PC-Q33 is a mini-ITX enclosure with a cube-like appearance and a hinged construction that makes it easy to access the components within.
When a builder is contemplating a mini-ITX system, the primary driver is going to be size. It’s incredible that we've reached the point where we can have a powerful single-GPU system with minimal (if any) tradeoffs in the tiny mITX form factor, but the components need to be housed in an appropriately small enclosure or the entire purpose is defeated. However, working within small enclosures is often more difficult, unless the enclosure has been specifically designed to account for this. Certainly no slouch in the design department, Lian Li is no stranger to small, lightweight mini-ITX designs like this. The NCASE M1 (a personal favorite) was manufactured by the company, after all, and in some ways the PC-Q33 is reminiscent of that design - in build quality and materials if nothing else. The Q33 features aluminum construction and is very light, and while compact, the design of the enclosure allows for effortless component installation. The secret? A hinged design that allows the front of the enclosure to swing down, providing full access to the interior.
This approach to accessibility with a small enclosure is a welcome one, especially considering the price of the PC-Q33, which retails for $95 on Newegg and can be found for around $105 on Amazon as well. This is still a high cost for many considering a small build and enters the premium price range for an enclosure, but remember the Q33 features aluminum construction, which typically carries a considerably higher cost than steel and plastic. Of course, if the case is frustrating to use or has poor thermals, then the materials used are meaningless, so in this review we’ll look at the build process and thermal results with the Q33 to see if it’s a good value. My initial impression is that the price is actually low, but that’s coming from someone who looks at a lot of cases and develops a familiarity with the average retail prices in each category.
Introduction, Specs and Packaging
We're getting back into USB device roundup testing. To kick it off, Patriot passed along a couple of USB samples for review. First up is the Supersonic Phoenix 256GB:
- Read speed: Up to 260MB/s
- Write speed: Up to 170MB/s
- Compact and lightweight
- Stylish 3D design
- USB Powered
- SuperSpeed USB 3.0
- Compatible with Windows 8, Windows 7, Windows Vista, Windows XP, Windows 2000, Windows ME, Linux 2.4 and later, Mac OS9, X and later
Next up is their Supersonic Rage 2:
- Up to 400MB/s Read; Up to 300MB/s Write
- Durable design extends the life of your drive
- Rubber coated housing protects from drops, spills, daily abuse
- Retractable design protects USB connector when drive not in use
- LED Light Indicator
- Compatible with Windows® 8, Windows® 8.1, Windows® 7, Windows Vista®, Windows XP®, Windows 2000®, Windows® ME, Linux 2.4 and later, Mac® OS9, X
The Phoenix comes well packaged with a necessary USB 3.0 cable:
The Rage 2 comes in very simple packaging:
Introduction and Technical Specifications
Courtesy of ASUS
The X99-A is the base level board in ASUS' Intel X99 line of motherboards. Don't let the term "base level offering" throw you off though; ASUS put their best foot forward in designing this beauty. The board features full support for all Intel LGA2011-3 based processors paired with DDR4 memory operating in up to a quad channel configuration. Priced competitively at $274.99, the X99-A gives the more feature-packed (and vastly more expensive) boards a run for their money.
Courtesy of ASUS
Courtesy of ASUS
Just because the X99-A motherboard is designed to be the "entry-level" model of ASUS' X99 product line does not mean that they skimped on its design or features. The X99-A features the enhanced OC Socket and an 8+4 phase digital power system similar to that featured on its more costly siblings, centered around the Extreme Engine Digi+ IV solution. Extreme Engine Digi+ IV combines ASUS' custom designed Digi+ EPU chipset, IR (International Rectifier) sourced MOSFETs, high-quality chokes, and 10k Black Metallic capacitors for unrivaled power delivery capabilities. The board is further augmented by the integration of ASUS' Crystal Sound 2 audio subsystem for superior audio reproduction.
Introduction and Specifications
Had you asked me just a few years ago if 6-inch phones would not only be a viable option, but a dominant force in the mobile computing market, I would have likely rolled my eyes. At that time phones were small, tablets were big, and phablets were laughed at. Today, no one is laughing at the Galaxy Note 4, the latest iteration in Samsung’s created space of larger-than-you-probably-thought-you-wanted smartphones. Nearly all consumers are amazed by the size of the screen and the real estate this class of phone provides, but some are instantly put off by the way the phone feels in the hand – it can come off as foreign, cumbersome, and unusable.
In my time with the new Galaxy Note 4 – my first extended-use experience with a phone of this magnitude – I have come to see the many positive traits that a larger phone can offer. There are some trade-offs of course, including the pocket/purse viability debate. One thing beyond question is that a large phone means a big screen, one that can display a large amount of data, whether on a website or in a note-taking application. The extra screen real estate can instantly improve your productivity. To that end, Samsung also provides a multi-tasking framework that lets you run multiple programs in a side-by-side view, similar to what the original version of Windows 8 did. It might seem unnecessary for an Android device, but as soon as you find a situation where you need it, going back to a device without it can feel archaic.
A larger phone also means that there is more room for faster hardware, a larger camera sensor, and a bigger battery. Samsung even includes an active stylus called the S-Pen in the body of the device – something that few other modern tablets/phablets/phones feature.
Introduction and Design
Although the target market and design emphasis may be different, there is one thing consumer and business-grade laptops have in common: a drift away from processing power and toward portability and efficiency. At the risk of repeating our introduction for the massive MSI GT72 gaming notebook we reviewed last month, it seems that battery life, temperature, and power consumption get all the attention these days. And arguably, it makes sense for most people: it’s true that CPU performance gains have in years past greatly outstripped the improvements in battery life, and that likewise performance gains could be realized far more easily by upgrading storage device speed (such as by replacing conventional hard drives with solid-state drives) than by continuing to focus on raw CPU power and clock rates. As a result, we’ve seen many mobile CPU speeds plateauing or even dropping in exchange for a reduction in power consumption, while simultaneously cases have slimmed and battery life has jumped appreciably across the board.
But what if you’re one of the minority who actually appreciates and needs raw computing power? Fortunately, Lenovo’s ThinkPad W series still has you covered. This $1,500 workstation is the business equivalent of the consumer-grade gaming notebook. It’s one of the few designs where portability takes a backseat to raw power and ridiculous spec. Users shopping for a ThinkPad workstation aren’t looking to go unplugged all day long on an airplane tray table. They’re looking for power, reliability, and premium design, with function over form as a rule. And that’s precisely what they’ll get.
Beyond the fairly typical (and very powerful) Intel Core i7-4800MQ CPU—often found in gaming PCs and workstations—and just 8 GB of DDR3-1600 RAM (single-channel) are a 256 GB SSD and a unique feature to go along with the WQHD+ display panel: a built-in X-Rite Pantone color sensor that can calibrate the panel simply by closing the lid when prompted. How well this functions is another topic entirely, but at the very least, it’s a novel idea.
Introduction and Features
Earlier this year we took a look at SilverStone’s ST1500-GS power supply unit, which currently has the highest rated output in the Strider Gold S Series. Today we are looking at SilverStone’s second generation Strider Gold ST75F-GS V2.0, which is a 750 watt power supply that comes housed in a short chassis; only 140mm (5.5”) deep for easy integration. It’s nice to get a different model from the same series in for review to see how the series overall performs. SilverStone claims the Strider Gold S Series are the world’s smallest, full-modular ATX power supplies.
SilverStone SST-ST75F-GS V2.0 750W ATX Power Supply
There are currently five models available in the Strider Gold S Series: the ST55F-G, ST65F-G, ST75F-GS, ST85F-GS, and ST1500-GS. All of the Strider Gold S Series PSUs are fully modular, 80 Plus Gold certified, and small in size. While the typical 750W power supply enclosure measures 160mm (6.3”) deep, the Strider Gold ST75F-GS is housed in a 140mm (5.5”) chassis.
(Courtesy of SilverStone)
SilverStone Strider Gold S Series ST75F-GS V2.0 PSU Key Features:
• 750 watts DC power output
• Compact design with a depth of only 140mm for easy integration
• High efficiency with 80 Plus Gold certification
• 100% Modular cables
• 24/7 continuous power output at 40°C operating temperature
• Strict ±3% voltage regulation and low AC ripple & noise
• Dedicated single +12V rail (62.5A)
• Quiet 120mm cooling fan
• Four PCI-E 8/2-pin connectors support multiple high-end graphics adapters
• Conforms to ATX12V and EPS standards
• Universal AC input (90-264V) with Active PFC
• Dimensions: 150mm (W) x 86mm (H) x 140mm (L)
• $134.99 USD
Finally, a SHIELD Console
NVIDIA is filling out the SHIELD brand family today with the announcement of SHIELD, a set-top box powered by the Tegra X1 processor. SHIELD will run Android TV and act as a game-playing, multimedia-watching, GRID-streaming device. Selling for $199 and available in May of this year, it gives us a lot to discuss.
Odd naming scheme aside, the SHIELD looks to be an impressive little device, sitting in your home theater or on your desk and bringing a ton of connectivity and performance to your TV. Running Android TV means the SHIELD will have access to the entire library of Google Play media, including music, movies and apps. SHIELD supports 4K video playback at 60 Hz thanks to an HDMI 2.0 connection and fully supports H.265/HEVC decode thanks to the Tegra X1 processor.
Here is a full breakdown of the device's specifications.
NVIDIA SHIELD Specifications
- Processor: NVIDIA® Tegra® X1 with 256-core Maxwell™ GPU and 3GB RAM
- Video Features: 4K Ultra-HD ready with 4K playback and capture up to 60 fps (VP9, H.265, H.264)
- Audio: 7.1 and 5.1 surround sound pass-through over HDMI; high-resolution audio playback up to 24-bit/192kHz over HDMI and USB; high-resolution audio upsampling to 24-bit/192kHz over USB
- Wireless and I/O: 802.11ac 2x2 MIMO 2.4 GHz and 5 GHz Wi-Fi; two USB 3.0 (Type A); MicroSD slot (supports 128GB cards); IR receiver (compatible with Logitech Harmony)
- Gaming Features: NVIDIA GRID™ streaming service
- SW Updates: SHIELD software upgrades directly from NVIDIA
- Power: 40W power adapter
- Weight and Size: 23oz / 654g; height 5.1in / 130mm; width 8.3in / 210mm; depth 1.0in / 25mm
- OS: Android TV™, Google Cast™ ready
- In the box: NVIDIA SHIELD, SHIELD controller, HDMI cable (High Speed), USB cable (Micro-USB to USB), power adapter (includes plugs for North America, Europe, UK)
- Requirements: TV with HDMI input, Internet access
- Options: SHIELD controller, SHIELD remote, SHIELD stand
Obviously the most important feature is the Tegra X1 SoC, built on an 8-core 64-bit ARM processor and a 256 CUDA core Maxwell architecture GPU. This gives the SHIELD set-top more performance than basically any other mobile part on the market, and demos showing Doom 3 and Crysis 3 running natively on the hardware drive the point home. With integrated HEVC decode support, the console is the first Android TV device to offer support for 4K video content at 60 FPS.
Even though storage comes in at only 16GB, the inclusion of a MicroSD card slot enables expansion by as much as 128GB more for content and local games.
The first choice for networking will be the Gigabit Ethernet port, but the 2x2 dual-band 802.11ac wireless controller means that even those of us that don't have hardwired Internet going to our TV will be able to utilize all the performance and features of SHIELD.
As GDC progresses here in San Francisco, AMD took the wraps off of a new SDK for game developers to use to improve experiences with virtual reality (VR) headsets. Called LiquidVR, its goal is to provide a smooth, stutter-free VR experience that is universal across all headset hardware and to keep the wearer, be it a gamer or professional user, immersed.
AMD's CTO of Graphics, Raja Koduri, spoke with us about the three primary tenets of the LiquidVR initiative. The "three Cs," as they are being called, are Comfort, Compatibility and Compelling Content. Ignoring the fact that there are four C's in that phrase, the premise is straightforward. Comfortable use of VR means little to no issue with nausea, and that can be addressed with ultra-low latency between motion (of your head) and photons (hitting your eyes). For compatibility, AMD wants to ensure that all VR headsets are treated equally and all provide the best experience; Oculus, HTC and others should operate in a simple, plug-and-play style. Finally, the content story is easy to grasp, with a focus on solid games and software that utilize VR, but AMD also wants to ensure that rendering is scalable across different hardware and multiple GPUs.
To address these tenets AMD has built four technologies into LiquidVR: late data latching, asynchronous shaders, affinity multi-GPU, and direct-to-display.
The idea behind late data latching is to get the absolute most recent raw data from the VR engine to the user's eyes. Rather than asking for the gamer's head position at the beginning of a render job, LiquidVR allows the game to ask for it at the end of the rendering pipeline, which might seem counter-intuitive. Late latching means the user's head movement is tracked until the end of the frame render rather than just until the beginning, saving potentially 5-10ms of delay.
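To make the benefit concrete, here is a minimal simulation sketch in Python. It is purely illustrative and not AMD's implementation: the head-motion function, frame time, and latch margin are all assumptions. It compares the pose error at display time when the pose is sampled at the start of the frame versus near the end.

```python
import math

def head_yaw(t):
    """Hypothetical head motion: yaw in degrees at time t (seconds),
    modeling a slow 0.5 Hz side-to-side head turn."""
    return 30.0 * math.sin(2.0 * math.pi * 0.5 * t)

FRAME_TIME = 0.011  # assume ~11 ms to render one frame (90 Hz target)

def error_early_latch(t_start):
    # Pose is sampled when the CPU begins building the frame...
    pose = head_yaw(t_start)
    t_display = t_start + FRAME_TIME
    return abs(head_yaw(t_display) - pose)  # stale motion at photon time

def error_late_latch(t_start, latch_margin=0.002):
    # ...versus latched ~2 ms before scan-out, at the end of the pipeline.
    t_display = t_start + FRAME_TIME
    pose = head_yaw(t_display - latch_margin)
    return abs(head_yaw(t_display) - pose)

early = error_early_latch(0.1)
late = error_late_latch(0.1)
print(f"early-latch error: {early:.3f} deg, late-latch error: {late:.3f} deg")
```

Under these assumed numbers the late-latched pose lags the displayed frame by only ~2 ms of head motion instead of ~11 ms, which is the whole point of deferring the sample.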
Who Should Care? Thankfully, Many People
The Khronos Group has made three announcements today: Vulkan (their competitor to DirectX 12), OpenCL 2.1, and SPIR-V. Because there is actually significant overlap, we will discuss them in a single post rather than splitting them up. Each has a role in the overall goal of accessing and utilizing graphics and compute devices.
Before we get into what everything is and does, let's give you a little tease to keep you reading. First, Khronos designs their technologies to be self-reliant. As such, while there will be some minimum hardware requirements, the OS pretty much just needs to have a driver model. Vulkan will not be limited to Windows 10 and similar operating systems. If a graphics vendor wants to go through the trouble, which is a gigantic if, Vulkan can be shimmed into Windows 8.x, Windows 7, possibly Windows Vista despite its quirks, and maybe even Windows XP. The words “and beyond” came up after Windows XP, but don't hold your breath for Windows ME or anything. Again, the further back in Windows versions you get, the larger the “if” becomes but at least the API will not have any “artificial limitations”.
Outside of Windows, the Khronos Group is the dominant API curator. Expect Vulkan on Linux, Mac, mobile operating systems, embedded operating systems, and probably a few toasters somewhere.
On that topic: there will not be a “Vulkan ES”. Vulkan is Vulkan, and it will run on desktop, mobile, VR, consoles that are open enough, and even cars and robotics. From a hardware side, the API requires a minimum of OpenGL ES 3.1 support. This is fairly high-end for mobile GPUs, but it is the first mobile spec to require compute shaders, which are an essential component of Vulkan. The presenter did not state a minimum hardware requirement for desktop GPUs, but he treated it like a non-issue. Graphics vendors will need to be the ones making the announcements in the end, though.
Introduction and First Impressions
The RV05 is the current iteration of SilverStone's Raven enclosure series, and a reinvention of their ATX enthusiast design with a revised layout that eliminates 5.25" drive bays for a smaller footprint.
Return to Form
The fifth edition of SilverStone's Raven is a return to form of sorts, as it owes more to the design of the original RV01 than to the three models that followed. The exterior again has an aggressive, angular look, with the entire enclosure sitting up slightly at the rear and tilted forward. Though the overall effect is, depending on taste, likely less visually exciting than the original, in its simplicity the design feels more refined and modern than the RV01. Some of the sharpest angles have been eliminated or softened, though the squat stance coupled with its smaller size gives the RV05 an energetic appearance - as if it's ready to strike. (OK, I know it's just a computer case, but still...)
The Raven series is important to the case market as a pioneer of the 90° motherboard layout for ATX systems, expanding on the design originally developed by Intel for the short-lived BTX form-factor. In the layout implemented in the Raven series, the motherboard is installed with the rear I/O panel facing up, which requires the graphics card to be installed vertically. This vertical orientation assists with heat removal by exploiting the tendency of warm air to rise, and when implemented in an enclosure like the RV05 it can create an excellent thermal environment for your components. The RV05 features large fans at the bottom of the case that push air upward and across the components on the motherboard, forcing warm air to exit through a well-ventilated top panel.
And the RV05 isn't just a working example of an interesting thermal profile, it's actually a really cool-looking enclosure with some premium features and a surprisingly low price for a product like this, at $129 on Amazon as this was written. In our review of the RV05 we'll be taking a close look at the case and build process, and of course we'll test the thermal performance with some CPU and GPU workloads to find out just how well this design performs.
SoFIA, Cherry Trail Make Debuts
Mobile World Congress is traditionally dominated by Samsung, Qualcomm, HTC, and others, yet Intel continues to make inroads into the mobile market. Though the company has admittedly lost a lot of money during this growing process, Intel pushes forward with today's announcement of a trio of new processor lines that keep the Atom brand. The Atom x3, the Atom x5, and the Atom x7 will be the company's answer in 2015 for a wide range of products, starting at the sub-$75 phone market and stretching up to ~$400 tablets and all-in-ones.
There are some significant differences in these Atom processors, more than the naming scheme might indicate.
Intel Atom x3 SoFIA Processor
For years now we have questioned Intel's capability to develop a processor that could fit inside the thermal envelope required for a smartphone while also offering performance comparable to Qualcomm, MediaTek, and others. It seemed that the x86 architecture was a weight around Intel's ankles rather than a float lifting it up. Intel's answer was the development of SoFIA: (S)mart (o)r (F)eature phone with (I)ntel (A)rchitecture. The project started about two years ago, with product announcements finally reaching us today. SoFIA parts are "designed for budget smartphones; SoFIA is set to give Qualcomm and MediaTek a run for their money in this rapidly growing part of the market."
The SoFIA processors are based on the same Silvermont architecture as the current generation of Atom processors, but they are tuned more for power efficiency. Originally planned as a dual-core-only option, Intel has actually built both dual-core and quad-core variants that will pair with varying modem options to create combinations that best fit target price points and markets. Intel has partnered with RockChip on these designs, even though the architecture is completely IA/x86-based. Production will be done on a 28nm process technology at an unnamed vendor, though you can expect that to mean TSMC. This gives RockChip access to the designs, helps accelerate development, and allows release into the key markets Intel is targeting.