A fury unlike any other...
Officially unveiled by AMD during E3 last week, the brand new Radeon R9 Fury X graphics card is finally ready for our review. Very few times has a product launch meant more to a company, and to its industry, than the Fury X does this summer. AMD has been lagging behind in the highest tiers of the graphics card market for a full generation, depending on the 2-year-old Hawaii GPU to hold its own against a continuous barrage of products from NVIDIA. The R9 290X, despite using more power, was able to keep up through the GTX 700-series days, but the release of NVIDIA's Maxwell architecture forced AMD to move the R9 200-series parts into the sub-$350 field, well below the selling prices of NVIDIA's top cards.
The AMD Fury X hopes to change that with a price tag of $650 and a host of new features and performance capabilities. It aims to once again put AMD's Radeon line in the same discussion with enthusiasts as the GeForce series.
The Fury X is built on the new AMD Fiji GPU, an evolutionary part based on AMD's GCN (Graphics Core Next) architecture. This design adds a lot of compute horsepower (4,096 stream processors), and it is also the first consumer product to integrate HBM (High Bandwidth Memory) support with a 4096-bit memory bus!
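As a rough sanity check on that 4096-bit figure, a sketch using the commonly cited HBM1 parameters (four stacks with 1024-bit interfaces each, clocked at 500 MHz with double data rate; these numbers are an assumption, not from AMD's spec sheet) shows how the wide bus translates into raw bandwidth:

```python
# Back-of-the-envelope HBM bandwidth estimate for Fiji.
# Assumed HBM1 figures: four stacks, 1024-bit interface per stack,
# 500 MHz clock with double data rate (two transfers per clock).
stacks = 4
bits_per_stack = 1024
clock_mhz = 500
transfers_per_clock = 2  # DDR

bus_width_bits = stacks * bits_per_stack               # 4096-bit bus
gbps_per_pin = clock_mhz * transfers_per_clock / 1000  # 1.0 Gbps per pin
bandwidth_gb_s = bus_width_bits * gbps_per_pin / 8     # bits -> bytes

print(f"{bus_width_bits}-bit bus, {bandwidth_gb_s:.0f} GB/s peak")
# -> 4096-bit bus, 512 GB/s peak
```

That 512 GB/s peak figure is well above what a 512-bit GDDR5 configuration like Hawaii's delivers, which is the whole point of the HBM exercise.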
Of course the question is: what does this mean for you, the gamer? Is it time to start making a place in your PC for the Fury X? Let's find out.
One hub to rule them all!
Inateck sent along a small group of connectivity devices for us to evaluate. One such item was their HB7003 7 port USB 3.0 hub:
This is a fairly standard powered USB hub with one exception - high speed charging. Thanks to an included 36W power adapter and support for Battery Charging Specification 1.2, the HB7003 can charge devices at up to 1.5 Amps at 5 Volts. This is not to be confused with 'Quick Charge', which uses a newer specification and different hardware.
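It is worth doing the math on that 36W adapter against seven ports. A quick sketch (the per-port and adapter figures are taken from the specs above; the conclusion about simultaneous load is my own arithmetic, not an Inateck claim):

```python
# Power budget math for the HB7003: per-port spec vs. adapter capacity.
# Spec: up to 1.5 A at 5 V per port (BC 1.2), 36 W adapter, 7 ports.
ports = 7
volts = 5.0
amps_per_port = 1.5
adapter_watts = 36.0

watts_per_port = volts * amps_per_port        # 7.5 W per port
full_load_watts = ports * watts_per_port      # 52.5 W if every port maxes out
ports_at_full_rate = int(adapter_watts // watts_per_port)  # 4 ports

print(f"{watts_per_port} W/port, {full_load_watts} W worst case, "
      f"adapter covers {ports_at_full_rate} ports at the full 1.5 A rate")
```

In other words, the adapter can sustain the full 1.5A rate on roughly four ports at once; charging seven power-hungry devices simultaneously would require the hub to throttle somewhere.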
- L/W/H: 6.06" x 1.97" x 0.83"
- Ports: 7
- Speed: USB 3.0 5Gbps (backwards compatible with USB 2.0 and 1.1)
- Windows Vista / OS X 10.8.4 and newer supported without drivers
Densely packed brown box. Exactly how such a product should be packaged.
Power adapter (~6 foot cord), ~4.5 foot USB 3.0 cord, instruction manual, and the hub itself.
Some quick charging tests revealed that the HB7003 had no issue exceeding 1.0 Amp charging rates, but fell slightly short of a full 1.5A charge rate due to the output voltage falling a little below the full 5V. Some voltage droop is common with this sort of device, but it did have some effect. In one example, an iPad Air drew 1.3A (13% short of a full 1.5A). Not a bad charging rate, all things considered, but if you are expecting a fast charge of something like an iPad, its dedicated 2.1A charger is obviously the better way to go.
Performance and Usability:
As you can see above, even though the port layout is on a horizontal plane, Inateck has spaced the ports enough that most devices should be able to sit side by side. Some wider devices may take up an extra port, but with seven to work with, the majority of users should have enough available ports even if one or two devices overlap an adjacent port. In the above configuration, we had no issue saturating the throughput to each connected device. I also stepped up to a Samsung USB T1 which also negotiated at the expected USB 3.0 speeds.
Pricing and Availability
- $34.99 (Amazon)
Inateck sells these direct from their Amazon store (link above).
- Clean design 7-port USB 3.0 hub.
- Port spacing sufficient for most devices without interference.
- 1.5A per port charging.
- Low cost.
- 'Wall wart' power adapter may block additional power strip outlets.
At just $35, the Inateck HB7003 is a good quality 7-port USB 3.0 hub. All ports can charge devices at up to 1.5A while connecting them to the host at data rates up to 5 Gbps. The only gripe I had was that the hub was a bit on the lightweight side and as a result it easily slid around on the desk when the attached cords were disturbed, but some travelers might see the light weight as a bonus. Overall this is a simple, no-frills USB 3.0 hub that gets the job done nicely.
Qualcomm’s GPU History
Despite its market dominance, Qualcomm may be one of the least known contenders in the battle for the mobile space. While players like Apple, Samsung, and even NVIDIA are often cited as the most exciting and most revolutionary, none come close to the sheer sales, breadth of technology, and market share that Qualcomm occupies. Brands like Krait and Snapdragon have helped push the company into the top 3 semiconductor companies in the world, following only Intel and Samsung.
In July 1985, seven industry veterans came together in the den of Dr. Irwin Jacobs’ San Diego home to discuss an idea. They wanted to build “Quality Communications” (thus the name Qualcomm) and outlined a plan that evolved into one of the telecommunications industry’s great start-up success stories.
Though Qualcomm sold its own handset business to Kyocera in 1999, many of today’s most popular mobile devices are powered by Qualcomm’s Snapdragon mobile chipsets with integrated CPU, GPU, DSP, multimedia CODECs, power management, baseband logic and more. In fact the typical “chipset” from Qualcomm encompasses up to 20 different chips of different functions besides just the main application processor. If you are an owner of a Galaxy Note 4, Motorola Droid Turbo, Nexus 6, or Samsung Galaxy S5, then you are most likely a user of one of Qualcomm’s Snapdragon chipsets.
Before 2006, the mobile GPU as we know it today was largely unnecessary. Feature phones and “dumb” phones were still the large majority of the market with smartphones and mobile tablets still in the early stages of development. At this point all the visual data being presented on the screen, whether on a small monochrome screen or with the color of a PDA, was being drawn through a software renderer running on traditional CPU cores.
But by 2007, the first fixed-function, OpenGL ES 1.0 class of GPUs started shipping in mobile devices. These dedicated graphics processors were originally focused on drawing and updating the user interface on smartphones and personal data devices. Eventually these graphics units were used for what would be considered the most basic gaming tasks.
The new Radeon R9 300-series
The new AMD Radeon R9 and R7 300-series of graphics cards are coming into the world with a rocky start. We have seen rumors and speculation about what GPUs are going to be included, what changes would be made and what prices these would be shipping at for what seems like months, and in truth it has been months. AMD's Radeon R9 290 and R9 290X based on the new Hawaii GPU launched nearly 2 years ago, while the rest of the 200-series lineup was mostly a transition of existing products in the HD 7000-family. The lone exception was the Radeon R9 285, a card based on a mysterious new GPU called Tonga that showed up late to the game to fill a gap in the performance and pricing window for AMD.
AMD's R9 300-series, and the R7 300-series in particular, follows a very similar path. The R9 390 and R9 390X are still based on the Hawaii architecture. Tahiti is finally retired and put out to pasture, though Tonga lives on as the Radeon R9 380. Below that you have the Radeon R7 370 and 360, the former based on the aging GCN 1.0 Curacao GPU and the latter based on Bonaire. On the surface it's easy to refer to these cards with the dreaded "R-word"...rebrands. And though that seems to be the case, there are some interesting performance changes, at least at the high end of this stack, that warrant discussion.
And of course, AMD partners like Sapphire are using this opportunity of familiarity with the GPU and its properties to release newer product stacks. In this case Sapphire is launching the new Nitro brand for a series of cards aimed at what it considers the most common type of gamer: one that is cost conscious and craves performance over everything else.
The result is a stack of GPUs with prices ranging from about $110 up to ~$400 that target the "gamer" group of GPU buyers without the added price tag that some other lines include. Obviously it seems a little crazy to be talking about a line of graphics cards that is built for gamers (aren't they all??) but the emphasis is to build a fast card that is cool and quiet without the additional cost of overly glamorous coolers, LEDs or dip switches.
Today I am taking a look at the new Sapphire Nitro R9 390 8GB card, but before we dive head first into that card and its performance, let's first go over the changes to the R9-level of AMD's product stack.
Fiji: A Big and Necessary Jump
Fiji has been one of the worst-kept secrets in a while. The chip has been talked about, written about, and rumored about seemingly for ages. It promises to take on NVIDIA at the high end with multiple design decisions aimed at giving it a tremendous leap in performance and efficiency compared to previous GCN architectures. NVIDIA released their Maxwell-based products last year and added to that this year with the Titan X and the GTX 980 Ti. These are the parts that Fiji is aimed to compete with.
The first product that Fiji will power is the R9 Fury X with integrated water cooling.
AMD has not been standing still, but their R&D budgets have been taking a hit as of late. The workforce has also been pared down to the bare minimum (or so I hope) while still being able to design, market, and sell products to the industry. This has affected their ability to produce as large a quantity of new chips as NVIDIA has in the past year. Cut-backs are likely not the entirety of the story, but they have certainly affected it.
The plan at AMD seems to be to focus on very important products and technologies, and then migrate those technologies to new products and lines when it makes the most sense. Last year we saw the introduction of “Tonga” which was the first major redesign after the release of the GCN 1.1 based Hawaii which powers the R9 290 and R9 390 series. Tonga delivered double the tessellation performance over Hawaii, it improved overall architecture efficiency, and allowed AMD to replace the older Tahiti and Pitcairn chips with an updated unit that featured xDMA and TrueAudio support. Tonga was a necessary building block that allowed AMD to produce a chip like Fiji.
You love the dock!
Today we'll take a quick look at the Inateck FD2002 USB 3.0 to dual SATA dock:
This is a UASP-capable dock that should provide near full SATA 6Gb/sec throughput to each of two connected SATA SSDs or HDDs. This particular dock has no RAID capability, but exchanges that for an offline cloning / duplication mode. While the FD2002 uses ASMedia silicon to perform these tasks, similar limitations are inherent in competing hardware from JMicron, which comes with a similar toggle of either RAID or cloning capability. Regardless, Inateck made the logical choice with the FD2002, as hot-swap docks are not the best choice for hardware RAID.
The pair of ASMedia chips moving data within the FD2002. The ASM1153E on the left couples the USB 3.0 link to the ASM1091R, which multiplexes to the pair of SATA ports and apparently adds cloning functionality.
Introduction and Technical Specifications
The measure of a true modder is not in how powerful he or she can make a system by throwing money at it, but in how well he or she can innovate to make components run better with what is on hand. Some make artistic statements with their truly awe-inspiring cases, while others take the Dremel and clamps to their beloved video cards in an attempt to eke out that last bit of performance. This article serves the latter of the two. Don't get me wrong, the card will look nice once we're done with it, but the point here is to re-use components on hand where possible to minimize the cost while maximizing the performance (and sound) benefits.
EVGA GTX 970 SC Graphics Card
Courtesy of EVGA
We started with an EVGA GTX 970 SC card with 4GB of RAM, bundled with the new revision of EVGA's ACX cooler, ACX 2.0. This card is well built with a slight factory overclock out of the box. The ACX 2.0 cooler is a redesigned version of the original cooler included with the card, offering better cooling potential; its fans do not spin up for active cooling until the GPU block temperature breaches 60°C.
Courtesy of EVGA
WATERCOOL HeatKiller GPU-X3 Core GPU Waterblock
Courtesy of WATERCOOL
For water cooling the EVGA GTX 970 SC GPU, we decided to use the WATERCOOL HeatKiller GPU-X3 Core water block. This block features a POM-based body with a copper core for superior heat transfer from the GPU to the liquid medium. The HeatKiller GPU-X3 Core block is a GPU-only cooler, meaning that the memory and integrated VRM circuitry will not be actively cooled by the block. The decision to use a GPU-only block rather than a full-cover block was twofold: availability and cost. I had a few of these on hand, making for an easy decision cost-wise.
Introduction and Specifications
The ASUS Zenfone 2 is a 5.5-inch smartphone with a premium look and the specs to match. But the real story here is that it sells unlocked for just $199 or $299, making it a tempting alternative to contract phones without the concessions often made with budget devices, at least on paper. Let's take a closer look to see how the new Zenfone 2 stacks up! (Note: a second sample unit was provided by Gearbest.com.)
When I first heard about the Zenfone 2 from ASUS I was eager to check it out given its mix of solid specs, nice appearance, and a startlingly low price. ASUS has created something that has the potential to transcend the disruptive nature of a phone like the Moto E, itself a $149 alternative to contract phones that we reviewed recently. With its premium specs to go along with very low unlocked pricing, the Zenfone 2 could be more than just a bargain device, and if it performs well it could make some serious waves in the smartphone industry.
The Zenfone 2 also features a 5.5-inch IPS LCD screen with 1920x1080 resolution (in line with an iPhone 6 Plus or the OnePlus One), and beyond the internal hardware ASUS has created a phone that looks every bit the part of a premium device that one would expect to cost hundreds more. In fact, without spoiling anything up front, I will say that the context of price won't be necessary to judge the merit of the Zenfone 2; it stands on its own as a smartphone, and not simply a budget phone.
The big question is going to be how the Zenfone 2 compares to existing phones, and with its quad-core Intel Atom SoC this is something of an unknown. Intel has been making a push to enter the U.S. smartphone market (with earlier products more widely available in Europe), and the Zenfone 2 marks an important milestone for both Intel and ASUS in this regard. The Z3580 SoC powering my review unit certainly sounds fast on paper with its 4 cores clocked up to 2.33 GHz, no less than 4 GB of RAM on board, and a solid 64 GB of onboard storage as well.
Introduction and Design
With the introduction of the ASUS G751JT-CH71, we’ve now got our first look at the newest ROG notebook design revision. The celebrated design language remains the same, and the machine’s lineage is immediately discernible. However, unlike the $2,000 G750JX-DB71 unit we reviewed a year and a half ago, this particular G751JT configuration is 25% less expensive at just $1,500. So first off, what’s changed on the inside?
(Editor's Note: This is NOT the recent G-Sync version of the ASUS G751 notebook that was announced at Computex. This is the previously released version, one that I am told will continue to sell for the foreseeable future and one that will come at a lower overall price than the G-Sync enabled model. Expect a review on the G-Sync derivative very soon!)
Quite a lot, as it turns out. For starters, we’ve moved all the way from the 700M series to the 900M series—a leap which clearly ought to pay off in spades in terms of GPU performance. The CPU and RAM remain virtually equivalent, while the battery has migrated from external to internal and enjoyed a 100 mAh bump in the process (from 5900 to 6000 mAh). So what’s with the lower price then? Well, apart from the age difference, it’s the storage: the G750JX featured both a 1 TB storage drive and a 256 GB SSD, while the G751JT-CH71 drops the SSD. That’s a small sacrifice in our book, especially when an SSD is so easily added thereafter. By the way, if you’d rather simply have ASUS handle that part of the equation for you, you can score a virtually equivalent configuration (chipset and design evolutions notwithstanding) in the G751JT-DH72 for $1750—still $250 less than the G750JX we reviewed.
Digging in a Little Deeper into the DiRT
Over the past few weeks I have had the chance to play the early access "DiRT Rally" title from Codemasters. This is a much more simulation based title that is currently PC only, which is a big switch for Codemasters and how they usually release their premier racing offerings. I was able to get a hold of Paul Coleman from Codemasters and set up a written interview with him. Paul's answers will be in italics.
Who are you, what do you do at Codemasters, and what do you do in your spare time away from the virtual wheel?
Hi my name is Paul Coleman and I am the Chief Games Designer on DiRT Rally. I’m responsible for making sure that the game is the most authentic representation of the sport it can be, I’m essentially representing the player in the studio. In my spare time I enjoy going on road trips with my family in our 1M Coupe. I’ve been co-driving in real world rally events for the last three years and I’ve used that experience to write and voice the co-driver calls in game.
If there is one area where DiRT has really excelled, it is keeping frame rates consistent throughout multiple environments. Many games, especially those using cutting edge rendering techniques, often have dramatic frame rate drops at times. How do you get around this while still creating a very impressive looking game?
The engine that DiRT Rally has been built on has been constantly iterated on over the years and we have always been looking at ways of improving the look of the game while maintaining decent performance. That together with the fact that we work closely with GPU manufacturers on each project ensures that we stay current. We also have very strict performance monitoring systems that have come from optimising games for console. These systems have proved very useful when building DiRT Rally even though the game is exclusively on PC.
How do you balance out different controller use cases? While many hard core racers use a wheel, I have seen very competitive racing from people using handheld controllers as well as keyboards. Do you handicap/help those particular implementations so as not to make it overly frustrating to those users? I ask due to the difference in degrees of precision that a gamepad has vs. a wheel that can rotate 900 degrees.
Again this comes back to the fact that we have traditionally developed for console where the primary input device is a handheld controller. This is an area that other sims don’t usually have to worry about but for us it was second nature. There are systems that we have that add a layer between the handheld controller or keyboard and the game which help those guys but the wheel is without a doubt the best way to experience DiRT Rally as it is a direct input.
Introduction and Technical Specifications
Courtesy of ASUS
The ASUS Z97-Pro Gamer motherboard is one of the newest motherboards to join their Intel Z97-based line of products. The board is squarely targeted at gaming enthusiasts, combining an appealing aesthetic with a compelling feature set into a must-have product. At an MSRP of $169.99, the Z97-Pro Gamer offers premium features to budget-conscious gamers.
Courtesy of ASUS
Courtesy of ASUS
Courtesy of ASUS
Even with its aggressive price-point, ASUS coupled the Z97-Pro Gamer motherboard with a top-rated power system dubbed Gamer's Guardian. The board includes an 8+2 phase digital power delivery system, 10k Japanese-sourced Black Metallic capacitors, DIGI+ VRMs, ESD (electro-static discharge) modules protecting the integrated ports, and DRAM overcurrent protection with integrated resettable fuses. For superior sound fidelity, ASUS integrated their updated SupremeFX audio system into the board's design, featuring illuminated board shielding, a 300 ohm headphone amplifier, and ELNA audio capacitors.
Introduction and Specifications
Seiki has spent the past few years making quite the entrance into the display market. Starting with LCD TVs, they seemingly came out of nowhere back in April of 2013 with a 50” 4K display that was available at a very competitive price at that time. Since then, we’ve seen a few more display releases out of Seiki, and they were becoming popular among home theater enthusiasts on a budget and gamers who wanted a bigger panel in front of them. Last June, Seiki announced a desktop line of 4K monitors. These would not just be repurposed televisions, but ground-up designs intended for desktop professionals and gamers alike. The most eagerly awaited part of this announcement was the promised 60 Hz support at 4K resolutions.
Just under a year later, we are happy to bring you a review of the first iteration of this new Seiki Pro lineup:
It's been a while since we reviewed Intel's SSD 750 PCIe NVMe fire-breathing SSD, and since that launch we more recently had some giveaways and contests. We got the prizes in to be sent out to the winners, but before that happened, we had this stack of hardware sitting here. It just kept staring down at me (literally - this is the view from my chair):
That stack of five Intel SSD 750s was burning itself into my periphery as I worked on an upcoming review of the new Seiki Pro 40” 4K display. A few feet in the other direction was our CPU testbed machine, an ASUS X99-Deluxe with a 40-lane Intel Core i7-5960X CPU installed. I just couldn't live with myself if we sent these prizes out without properly ‘testing’ them first, so then this happened:
This will not be a typical complete review, as this much hardware in parallel is not realistically comparable to even the craziest power user setup. It is more just a couple of hours of playing with an insane hardware configuration and exploring the various limits and bottlenecks we were sure to run into. We’ll do a few tests in some different configurations and let you know what we found out.
Digging into a specific market
A little while ago, I decided to think about processor design as a game. You are given a budget of complexity, which is determined by your process node, power, heat, die size, and so forth, and the objective is to lay out features in the way that suits your goal and workload best. While not the topic of today's post, GPUs are a great example of what I mean. They make the assumption that in a batch of work, nearby tasks are very similar, such as the math behind two neighboring pixels on the screen. This assumption allows GPU manufacturers to save complexity by chaining dozens of cores together into not-quite-independent work groups. The circuit fits the work better, and thus it lets more get done in the same complexity budget.
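The work-group idea above can be sketched in a few lines. The shading math here is purely hypothetical; the point is that every pixel in a group runs the same instruction stream in lockstep, so the control and scheduling complexity is paid once per group rather than once per pixel:

```python
# Toy illustration of the SIMD/lockstep idea behind GPU work groups.
# One instruction stream (the body of shade_group) is applied to a
# whole group of neighboring pixel values at once; the per-element
# work is identical, which is what makes the sharing worthwhile.
def shade_group(pixels, brightness):
    # The same operations run for every pixel in the group.
    return [min(255, int(p * brightness)) for p in pixels]

# A warp/wavefront-like group of neighboring pixel values (0-255).
group = [10, 12, 11, 13, 200, 198, 201, 199]
print(shade_group(group, 1.5))
# -> [15, 18, 16, 19, 255, 255, 255, 255]
```

When neighboring work items diverge (say, half the group takes a different branch), this sharing breaks down and efficiency drops, which is exactly why the "nearby tasks are similar" assumption matters to the complexity budget.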
Carrizo is aiming at a 63 million unit per year market segment.
This article is about Carrizo, though. This is AMD's sixth-generation APU, counting from Llano's release in June 2011. For this launch, Carrizo is targeting the 15W and 35W power envelopes for $400-$700 USD notebook devices. AMD needed to increase efficiency on the same 28nm process that we have seen in their product stack since Kabini and Temash were released in May of 2013. They tasked their engineers to optimize their APU's design for these constraints, which led to dense architectures and clever features on the same budget of complexity, rather than smaller transistors or a bigger die.
15W was their primary target, and they claim to have exceeded their own expectations.
Backing up for a second. Beep. Beep. Beep. Beep.
When I met with AMD last month, I brought up the Bulldozer architecture with many individuals. I suspected that it was quite a clever design that didn't reach its potential because of external factors. As I said at the start of this editorial, processor design is a game and, if you can save complexity by knowing your workload, you can do more with less.
Bulldozer looked like it wanted to take a shortcut by cutting elements that its designers believed would be redundant going forward. First and foremost, two cores share a single floating point unit (FPU). While you need some floating point capacity, upcoming workloads could use the GPU, which is right there on the same die, for a massive increase in performance. As such, the complexity that is dedicated to every second FPU can be cut and used for something else. You can see this trend throughout various elements of the architecture.
A substantial upgrade for Thunderbolt
Today at Computex, Intel took the wraps off of the latest iteration of Thunderbolt, a technology that I am guessing many of you thought was dead in the water. It turns out that's not the case, and the new set of features that Thunderbolt 3 offers may in fact push it over the crest and give it the momentum needed to become a usable and widespread standard.
First, Thunderbolt 3 starts with a new piece of silicon, code named Alpine Ridge. Not only does Alpine Ridge increase the available Thunderbolt bandwidth to 40 Gbps but it also adds a native USB 3.1 host controller on the chip itself. And, as mobile users will be glad to see, Intel is going to start utilizing the new USB Type-C (USB-C) connector as the standard port rather than mini DisplayPort.
This new connector type, which was already a favorite among PC Perspective staff because of its size and reversibility, will now be the way connectivity and speed increase this generation with Thunderbolt. This slide does a good job of summarizing the key takeaways from the TB3 announcement: 40 Gbps of bandwidth, support for two 4K 60 Hz displays, 100 watt (bi-directional) charging capability, 15 watts of device power, and support for four protocols: Thunderbolt, DisplayPort, USB, and PCI Express.
Protocol support is important and Thunderbolt 3 over USB-C will be able to connect directly to a DisplayPort monitor, to an external USB 3.1 storage drive, an old thumb drive or a new Thunderbolt 3 docking station. This is truly unrivaled flexibility from a single connector. The USB 3.1 controller is backward compatible as well: feel free to connect any USB device to it that you can adapt to the Type-C connection.
From a raw performance perspective Thunderbolt 3 offers a total of 40 Gbps of bi-directional bandwidth, twice that of Thunderbolt 2 and 4x what we get with USB 3.1. That offers users the ability to combine many different devices, multiple displays and network connections and have plenty of headroom.
With Thunderbolt 3 you get twice as much raw video bandwidth, two DP 1.2 streams, allowing you to run not just a single 4K display at 60 Hz but two of them, all over a single TB3 cable. If you want to connect a 5K display though, you will be limited to just one of them.
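Rough uncompressed-bandwidth math makes the 4K-versus-5K split clear. This sketch assumes 24-bit color at 60 Hz and ignores blanking overhead; the ~17.28 Gbps per-stream video payload figure for DP 1.2 (HBR2) is my assumption, not from Intel's announcement:

```python
# Approximate uncompressed video bandwidth (24-bit color, 60 Hz),
# ignoring blanking intervals. DP 1.2 (HBR2) carries roughly
# 17.28 Gbps of video payload per stream; TB3 carries two streams.
def gbps(width, height, hz=60, bpp=24):
    return width * height * hz * bpp / 1e9

dp12_stream_gbps = 17.28

uhd = gbps(3840, 2160)     # ~11.9 Gbps: fits in one DP 1.2 stream
five_k = gbps(5120, 2880)  # ~21.2 Gbps: exceeds one stream

print(f"4K60: {uhd:.1f} Gbps, 5K60: {five_k:.1f} Gbps, "
      f"per DP 1.2 stream: {dp12_stream_gbps} Gbps")
```

Each 4K60 display fits within a single DP 1.2 stream, so TB3's two streams can drive two of them; a 5K60 panel needs more than one stream's worth of bandwidth, which is why it consumes both and limits you to a single display.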
For mobile users, which I think is the area where Thunderbolt 3 will be the most effective, USB Power Delivery over the Type-C connector allows for charging capability up to 100 watts. This is in addition to the 15 watts of power that Thunderbolt provides to devices directly - think external storage, small hubs/docks, etc.
Introduction and First Impressions
Phanteks has expanded their Enthoo enclosure lineup with a new ATX version of the popular EVOLV case, and it offers a striking design and some unique features to help it stand out in the mid-tower market.
Phanteks first came to my attention with their large double tower cooler, the PH-TC14, which competes directly with the Noctua NH-D14 in the CPU air-cooling market. But like a lot of other cooling companies (Cooler Master, Corsair, etc.), Phanteks also offers a full lineup of enclosures. Of these the Enthoo EVOLV, which until today has only been available in micro-ATX and mini-ITX versions, has been well received and has an angular, minimalist look that I like quite a bit. Enter the EVOLV ATX.
With the larger size of the new EVOLV ATX there is not only room for a full-size motherboard, but much more room for components and cooling as well. The internal layout is very similar to the recently reviewed Fractal Design Define S enclosure, with no storage (5.25” or 3.5”) inside the front of the case, which gives the EVOLV ATX a totally open layout. The front is solid metal (though well vented), so we’ll see how this affects cooling, and it will be interesting to see how Phanteks has approached internal storage with this design as well. Let’s get started!
When NVIDIA launched the GeForce GTX Titan X card back in March of this year, I knew immediately that the GTX 980 Ti would be close behind. The Titan X was so different from the GTX 980 when it came to pricing and memory capacity (12GB, really??) that NVIDIA had set up the perfect gap in which to place the newly minted GTX 980 Ti. Today we get to take the wraps off of that new graphics card and I think you'll be impressed with what you find, especially when you compare its value to the Titan X.
Based on the same Maxwell architecture and GM200 GPU, with some minor changes to GPU core count, memory size, and boost speeds, the GTX 980 Ti finds itself in a unique spot in the GeForce lineup. Performance-wise it's basically identical in real-world game testing to the GTX Titan X, yet it is priced $350 less than that 12GB behemoth. Couple that with a modest $50 price drop on the GTX 980 cards and you have all the markers of an enthusiast graphics card that will sell as well as any we have seen in recent generations.
The devil is in all the other details, of course. AMD has its own plans for this summer, but the Radeon R9 290X is still sitting there at a measly $320, less than half the price of the GTX 980 Ti. NVIDIA seems to be pricing its own GPUs as if it isn't even concerned with what AMD and the Radeon brand are doing. That could be dangerous if it goes on too long, but for today, can the R9 290X put up enough of a fight with the aging Hawaii XT GPU to make its value case to gamers on the fence?
Will the GeForce GTX 980 Ti be the next high-end GPU to make a splash in the market, or will it make a thud at the bottom of the GPU gene pool? Let's dive into it, shall we?
Introduction and Features
Corsair continues to offer a huge selection of memory products, PC cases, SSDs, power supplies, coolers, gaming peripherals, and PC accessories! The 780T Full-Tower case is one of the new additions to Corsair’s Graphite Series of PC enclosures for 2015 and is available in either black or white. The 780T is a premium case loaded with features that enable quick, easy, and good-looking builds, along with plenty of room and numerous case cooling options. The 780T comes with three 140mm Corsair fans pre-installed, plus mounting locations for additional fans. The 780T also provides excellent support for liquid cooling, with mounting locations for two 360mm radiators. The full-tower enclosure can mount E-ATX and XL-ATX motherboards with room for multiple high-end graphics adapters up to 14” (355mm) in length. There are currently 16 different models in the Graphite Series ranging in price from $69.99 up to $189.99 USD.
- Graphite Series 780T Black ($179.99)
- Graphite Series 780T White ($189.99)
In this review we will be taking a detailed look at the Graphite Series 780T White Full-Tower case. Here is what Corsair has to say about their new 780T enclosure: “The stunning Graphite Series 780T Full-Tower PC case can satisfy the most hardcore gamer or overclocker with ample room for nine drives and nearly a dozen large cooling fans. Into water cooling? You’ll appreciate the generous space for dual 360mm radiators. And, you’ll get everything done faster: the 780T offers easy maintenance shortcuts like tool-free removal of side panels and hard drives. A three-speed fan control button and generous options for peripheral connections make the front-panel a true time saver.”
Graphite Series 780T Full-Tower Case Key Features:
• Large, Full-Tower PC case (available in black or white)
• Premium design with rounded corners and sleek, cohesive styling
• Latched side panels for easy tool-free access
• Large acrylic side window to show off internal components
• Dual 140mm LED intake fans and a 140mm exhaust fan included
• Locations for up to nine total case fans
• Supports 120mm, 240mm, and 360mm radiators for water-cooling
• Supports XL-ATX, E-ATX, ATX, MicroATX and Mini-ITX motherboards
• Six 3.5” / 2.5” tool-less HDD/SSD bays (can be removed if not needed)
• Three 2.5” tool-less SSD bays
• Three-speed fan control switch on top panel with LED gauge
• Two USB 3.0 and two USB 2.0 ports on top panel
• Two 5.25” front exposed drive bays
• Removable mesh dust filters (front, top, and bottom)
• Up to 355mm (14”) of space for long graphics cards
• Up to 200mm (7.8”) of space for CPU coolers
• Cable routing cutouts to keep cables out of the airflow path
The 780T White Full-Tower case features a beautiful white matte finish with black accents, and all internal surfaces are finished in black. The two 140mm intake fans behind the front grill incorporate white LEDs (the black version comes with red LEDs).
Announced last June at the 2014 Google I/O event, Android TV is a platform developed by Google, running Android 5.0 and higher, that aims to create an interactive experience for the TV. This platform can be built into a TV directly as well as into set-top style boxes, like the NVIDIA SHIELD we are looking at today. The idea is to bring the breadth of Android apps and content to the TV in a way that is both convenient and intuitive.
NVIDIA announced SHIELD back in March at GDC as the first product to use the company’s latest Tegra processor, the X1. This SoC combines an 8-core big.LITTLE ARM processor design with a 256-core implementation of the NVIDIA Maxwell GPU architecture, providing GPU performance previously unseen in an Android device. I have already spent some time with the NVIDIA SHIELD at various events and the promise was clearly there to make it a leading option for Android TV adoption, but obviously there were questions to be answered.
Today’s article will focus on my early impressions with the NVIDIA SHIELD, having used it both in the office and at home for a handful of days. As you’ll see during the discussion there are still some things to be ironed out, some functionality that needs to be added before SHIELD and Android TV can really be called a must-buy product. But I do think it will get there.
And though this review will focus on the NVIDIA SHIELD, it’s impossible not to tie the success of SHIELD to the success of Google’s Android TV. The dominant use case for SHIELD is as a media playback device, with the gaming functionality as a really cool side project for enthusiasts and gamers looking for another outlet. For SHIELD to succeed, Google needs to prove that Android TV can improve on other integrated smart TV platforms as well as other set-top box platforms like Boxee, Roku, and even the upcoming Apple TV refresh.
But first, let’s get an overview of the NVIDIA SHIELD device, pricing and specifications, before diving into my experiences with the platform as a whole.
Big Things, Small Packages
Sapphire isn’t a brand we have covered in a while, so it is nice to see a new and interesting product land on our doorstep. Sapphire was a relative unknown until around the release of the Radeon 9700 Pro. That was when ATI decided it no longer wanted to be so vertically integrated and allowed other companies to start buying its chips and making their own cards. This was done to provide a bit of stability for ATI pricing, as the company no longer had to worry about a volatile component market that could cause its margins to plummet. By selling just the chips to partners, ATI could more adequately control margins on its own product while allowing partners to make their own deals and component choices for the finished card.
ATI had very limited graphics card production of its own, so it often farmed out production to second sources. One of those sources ended up becoming Sapphire. When ATI finally allowed other partners to produce and brand their own ATI-based products, Sapphire already had a leg up on the competition, having long been a large-volume producer of ATI products. It soon controlled a good portion of the marketplace through its contacts, pricing, and close relationship with ATI.
Since that time, ATI has been bought up by AMD, and no ATI-branded cards are produced anymore. Going vertical when it comes to producing both chips and finished video cards was obviously a bad idea; we can look back at 3dfx's attempt at vertical integration and how that ended for the company. AMD still produces an initial reference version of its cards and coolers, but allows its partners to sell the “sticker” version and then develop their own designs. This model has worked very well for both NVIDIA and AMD, and it has allowed their partners to further differentiate their products from the competition.
Sapphire usually does a bang-up job packaging its graphics cards. Oh look, a mousepad!
Sapphire is not as big of a player as it used to be, but it is still one of AMD's primary partners. It would not surprise me in the least if Sapphire still produced the reference designs for AMD and then distributed those products to other partners. Sapphire is known for building very good quality cards, and its cooling solutions have been well received as well. The company does have some stiff competition from the likes of Asus, MSI, and others in this particular market. Unlike those companies, however, Sapphire does not make any NVIDIA-based boards. That has been both a blessing and a curse, depending on where the cycle between AMD and NVIDIA stands and who has dominance in any particular market segment.