Introduction and Technical Specifications
Courtesy of Noctua
Noctua is a well-known name in the enthusiast world for its high-end CPU coolers. The company's flagship model, the NH-D14, features a nickel-plated copper base with dual radiator towers actively cooled by low-noise 120mm and 140mm fans. The NH-D14 can be used with all current Intel and AMD CPU offerings. We put the cooler to the test against other similarly classed air- and water-based cooling systems to see just how well Noctua's design would hold up. The NH-D14 does not come cheap, with a retail price of $99.99, but its performance and utility should make up for that initial outlay.
Courtesy of Noctua
Courtesy of Noctua
The Noctua NH-D14 cooler is everything you would expect in a premium CPU cooler - nickel plating for corrosion resistance, twin-tower radiators for massive heat dissipation potential, and a copper/aluminum hybrid design for optimal heat transfer from the CPU. Noctua designed the NH-D14 with a total of six heat pipes laid out in a U shape; each pipe passes through the copper base plate and terminates in the radiator towers. The bottom of the copper base plate leaves the factory ground flat and polished to a mirror-like finish, ensuring optimal contact with the CPU surface.
Courtesy of Noctua
Noctua includes the following components with the base cooler: SecureFirm2™ multi-socket mounting kit, NF-P14 140mm fan, NF-P12 120mm fan, four fan mounting brackets, a dual-ended fan power cable, two single-fan low-power cables, a case badge, and NT-H1 thermal compound.
PC Component Selections
It's that time of year again, when those of us lucky enough to have the means get to share the best in technology with our friends and family. You are already the family IT manager, so why not help spread the holiday cheer by picking up some items for them, and hey...maybe for you too. :)
This year we are going to break the guide up into categories. We'll have a page dedicated to PC components, one for mobile devices like notebooks and tablets, and one for PC accessories. Then, after those specific categories, we'll have an open-ended collection of pages where each PC Perspective team member can throw in some wildcards.
Our Amazon code is: pcper04-20
Intel Core i7-4770K Haswell Processor
The Intel Core i7-4770K is likely the best deal in computing performance today, able to power just about any configuration of PC you can think of without breaking much of a sweat. You want to game? This part has you covered. You want to encode some video? The four cores and included HyperThreading support provide just about as much power as you could need. Yes, there are faster processors in the form of the Ivy Bridge-E and even 10+ core Xeon parts, but those are significantly more expensive. For a modest price of $299 you can get what is generally considered the "best" processor on the market.
Corsair Carbide Series Air 540 Case
Cases are generally considered a PC component driven largely by buyer preference, but there are still fundamentals that make for a good, solid case. The new Corsair Carbide Air 540 is unique in a lot of ways. Its square-ish shape allows the power supply, hard drives, and SSDs to be divided from the motherboard-attached components. Even though the case is a bit shorter than others on the market, there is plenty of working room inside thanks to Corsair's dual-chamber layout, and it even includes a pair of high-performance Corsair AF140L fans for intake and exhaust. The side panel window is HUGE, letting you show off your goods, and nice touches like rubber-grommeted cable routing cutouts and dust filters make this one of the best mid-range cases available.
Another retail card reveals the results
Since the release of the new AMD Radeon R9 290X and R9 290 graphics cards, we have been very curious about the latest implementation of AMD's PowerTune technology and its scaling of clock frequency as a result of the thermal levels of each graphics card. In the first article covering this topic, I addressed the questions from AMD's point of view - is this really a "configurable" GPU as AMD claims or are there issues that need to be addressed by the company?
The biggest problems I found were the highly variable clock speeds from game to game and from a "cold" GPU to a "hot" GPU. This affects the way many in the industry test and benchmark graphics cards, as running a game for just a couple of minutes can produce average reported frame rates much higher than what you see 10-20 minutes into gameplay. This was rarely something that had to be dealt with before (especially on AMD graphics cards), so it caught many off-guard.
Because of the new PowerTune technology, as I have discussed several times before, clock speeds are starting off quite high on the R9 290X (at or near the 1000 MHz quoted speed) and then slowly drifting down over time.
Another wrinkle occurred when Tom's Hardware reported that retail graphics cards they had seen were showing markedly lower performance than the reference samples sent to reviewers. As a result, AMD quickly released a new driver that attempted to address the problem by normalizing to fan speeds (RPM) rather than fan voltage (percentage). The result was consistent fan speeds on different cards and thus much closer performance.
With all that being said, I continued testing retail AMD Radeon R9 290X and R9 290 cards that were PURCHASED rather than sampled, to keep tabs on the situation.
Introduction, Specifications and Packaging
If you're into the laptop storage upgrade scene, you hear the same sort of arguments all the time. "Do I go with a HDD for large capacity and low cost/GB, but suffer the performance hit?" "I want an SSD, but can't afford the capacity I need!" The ideal for this scenario is to combine both: a small-capacity SSD for your operating system and apps alongside a larger HDD for bulk storage at a lower cost/GB. The catch is that most mobile platforms come with only a single 2.5" 9.5mm storage bay, and you just can't physically fit a full SSD and a full HDD into that space, can you? Well, today Western Digital has answered that challenge with the Black2 Dual Drive:
Yup, we're not kidding. This is a 120GB SSD *and* a 1TB HDD in a single package. Not a hybrid. Two drives, and it's nothing short of a work of art.
The 7 Year Console Refresh
The consoles are coming! The consoles are coming! Ok, that is not necessarily true. One is already here and the second essentially is too. This of course brings up the great debate between PCs and consoles. The past has been interesting when it comes to console gaming, as often the consoles would be around a year ahead of PCs in terms of gaming power and prowess. This is no longer the case with this generation of consoles. Cutting edge is now considered mainstream when it comes to processing and graphics. The real incentive to buy this generation of consoles is a lot harder to pin down as compared to years past.
The PS4 retails for $399 US and the upcoming Xbox One is $499. The PS4’s price includes a single controller, while the Xbox’s package includes not just a controller, but also the next generation Kinect device. These prices would be comparable to some low end PCs which include keyboard, mouse, and a monitor that could be purchased from large brick and mortar stores like Walmart and Best Buy. Happily for most of us, we can build our machines to our own specifications and budgets.
As a directive from on high (the boss), we were given the task of building our own low-end gaming and productivity machines at a price as close to that of the consoles and explaining which solution would be superior at the price points given. The goal was to get as close to $500 as possible and still have a machine that would be able to play most recent games at reasonable resolutions and quality levels.
Does downloading make a difference?
I posted a story earlier this week that looked at the performance of the new PS4 when used with three different 2.5-in storage options: the stock 500GB hard drive, a 1TB hybrid SSHD and a 240GB SSD. The results were fairly interesting (and got a good bit of attention), but some readers wanted more data. In particular, many asked how things might change if you went the full digital route and purchased games straight from Sony's PlayStation Network. I will also compare boot times for each of the tested storage devices.
You should definitely check out the previous article if you missed it. It not only goes through the performance comparison but also details how to change the hard drive on the PS4 from the physical procedure to the software steps necessary. The article also details the options we selected for our benchmarking.
- HGST 500GB 5400 RPM HDD - $50 - $0.10/GB
- Seagate 1TB Hybrid SSHD - $122 - $0.12/GB
- Corsair 240GB Force GS SSD - $189 - $0.78/GB
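As a quick sanity check, the cost-per-gigabyte figures above can be reproduced from the price and capacity alone (drive names shortened for brevity; note the SSD works out to roughly $0.79/GB when rounded, a hair above the quoted $0.78):

```python
# Recompute the quoted cost-per-gigabyte figures from price and capacity.
drives = {
    "HGST 500GB HDD": (50, 500),      # ($ price, GB capacity)
    "Seagate 1TB SSHD": (122, 1000),
    "Corsair 240GB SSD": (189, 240),
}

for name, (price, capacity) in drives.items():
    print(f"{name}: ${price / capacity:.2f}/GB")
```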
Today I purchased a copy of Assassin's Creed IV from the PSN store (you're welcome Ubisoft) and got to testing. The process was the same: start the game then load the first save spot. Again, each test was run three times and the averages were reported. The PS4 was restarted between each run.
The top section of results is the same as presented earlier - average load times for AC IV when the game is installed from the Blu-ray. The second set is new and includes average load times for AC IV after installation from the PlayStation Network; no disc was in the drive during testing.
Load time improvements
On Friday Sony released the PlayStation 4 onto the world. The first new console launch in 7 years, the PS4 has a lot to live up to, but our story today isn't going to attempt to weigh the value of the hardware or software ecosystem. Instead, after our PS4 teardown video from last week, we got quite a few requests for information on storage performance with the PS4 and what replacement hardware might offer gamers.
Hard Drive Replacement Process
Changing the hard drive in your PlayStation 4 is quite simple, a continuation of Sony's policy with the PS3.
The process starts with the semi-transparent panel on the top of the unit, to the left of the light bar. Obviously, make sure your PS4 is completely turned off and unplugged.
Simply slide it to the outside of the chassis and wiggle it up to release. There are no screws or anything to deal with yet.
Once inside you'll find a screw stamped with the PS4 shape logos; that is the screw you need to remove to pull out the hard drive cage.
Introduction and Features
Today Western Digital launched an important addition to their Personal Cloud Storage NAS family - the My Cloud EX4:
The My Cloud EX4 is Western Digital's answer to the increased demand for larger personal storage devices. When folks look for places to consolidate all of their bulk files, media, system backups, etc, they tend to extend past what is possible with a single hard drive. Here is Western Digital's projection on where personal storage is headed:
Where the My Cloud was a single drive solution, the My Cloud EX4 extends that capability to span up to four 3.5" drives. When it comes to devices that span across several drives, the number 4 is a bit of a sweet spot, as it enables several RAID configurations:
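To see why four bays hit that sweet spot, here is a minimal sketch of the usable capacity each common RAID level yields from four equal drives (the 4TB drive size is hypothetical, and RAID 1 is assumed here to be a single full mirror set):

```python
# Usable capacity for common RAID levels, assuming four identical drives.
def usable_tb(level: str, n: int = 4, size_tb: float = 4.0) -> float:
    return {
        "RAID 0": n * size_tb,        # striping: full capacity, no redundancy
        "RAID 1": size_tb,            # assumed full mirror set across all drives
        "RAID 5": (n - 1) * size_tb,  # one drive's worth of capacity lost to parity
        "RAID 10": n * size_tb / 2,   # striped mirrors (requires an even drive count)
    }[level]

for level in ("RAID 0", "RAID 1", "RAID 5", "RAID 10"):
    print(f"{level}: {usable_tb(level):.0f} TB usable")
```

With four bays you get every one of these options; a two-bay unit can only offer RAID 0 and RAID 1.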
Everything but online capacity expansion (where the user can swap drives one at a time to a larger capacity) is supported. While WD has stated that feature will be available in a future update, I find it a bit risky to intentionally and repeatedly fail an array by pulling drives and forcing rebuilds. It just makes more sense to back up the data and re-create a fresh array with the new, larger drives installed.
Ok, so we've got the groundwork down with a 4-bay NAS device. What remains to be seen is how Western Digital has implemented the feature set. There is a lot to get through here, so let's get to it.
NVIDIA Tegra Note Program
Clearly, NVIDIA’s Tegra line has not been as successful as the company had hoped and expected. The move for the discrete GPU giant into the highly competitive world of the tablet and phone SoCs has been slower than expected, and littered with roadblocks that were either unexpected or that NVIDIA thought would be much easier to overcome.
The truth is that this was always a long play for the company; success was never going to come overnight, and anyone who thought that likely was deluded. Part of it has to do with the development cycle of the ARM ecosystem. NVIDIA is used to a rather quick development, production, marketing and sales pattern thanks to its time in high-performance GPUs, but the SoC world is quite different. By the time a device based on a Tegra chip reaches the retail channel, it has gone through an OEM development cycle, an NVIDIA SoC development cycle and even an ARM Cortex CPU development cycle. The result is an extended time frame from initial product announcement to retail availability.
Partly due to this, and partly due to limited design wins in the mobile markets, NVIDIA has started to develop internally designed end-user devices that utilize its Tegra SoCs. This has the benefit of being much faster to market - while most SoC vendors develop reference platforms during the normal course of business, NVIDIA is essentially going to perfect and productize them.
More Details from Lisa Su
The executives at AMD like to break their own NDAs. Then again, they are the ones typically setting these NDA dates, so it isn't a big deal. It is no secret that Kaveri has been in the pipeline for some time. We knew a lot of the basic details of the product, but there were certainly things that were missing. Lisa Su went up onstage and shared a few new details with us.
Kaveri will be made up of 4 “Steamroller” cores, which are enhanced versions of the previous Bulldozer/Trinity/Vishera families of products. Nearly everything in the processor is doubled. It now has dual decode, more cache, larger TLBs, and a host of other smaller features that all add up to greater single thread performance and better multi-threaded handling and performance. Integer performance will be improved, and the FPU/MMX/SSE unit now features 2 x 128 bit FMAC units which can “fuse” and support AVX 256.
However, there was no mention of the fabled 6 core Kaveri. At this time, it is unlikely that particular product will be launched anytime soon.
Introduction and Design
With few exceptions, it’s generally been taken for granted that gaming notebooks are going to be hefty devices. Portability is rarely the focus, with weight and battery life alike usually sacrificed in the interest of sheer power. But the MSI GE40 2OC—the lightest 14-inch gaming notebook currently available—seeks to compromise while retaining the gaming prowess. Trending instead toward the form factor of a large Ultrabook, the GE40 is both stylish and manageable (and perhaps affordable at around $1,300)—but can its muscle withstand the reduction in casing real estate?
While it can’t hang with the best of the 15-inch and 17-inch crowd, in context with its 14-inch peers, the GE40’s spec sheet hardly reads like it’s been the subject of any sort of game-changing handicap:
One of the most popular CPUs for Haswell gaming notebooks has been the 2.4 GHz (3.4 GHz Turbo) i7-4700MQ. But the i7-4702MQ in the GE40 2OC is nearly as powerful (managing 2.2 GHz and 3.2 GHz in those same areas respectively), and it features a TDP that's 10 W lower at just 37 W. That's ideal for notebooks such as the GE40, which seek to provide a thinner case in conjunction with uncompromising performance. Meanwhile, the NVIDIA GTX 760M is no slouch, even if it isn't on the same level as the 770s and 780s that we've been seeing in some 15.6-inch and 17.3-inch gaming beasts.
Elsewhere, it’s business as usual, with 8 GB of RAM and a 120 GB SSD rounding out the major bullet points. Nearly everything here is on par with the best of rival 14-inch gaming models with the exception of the 900p screen resolution (which is bested by some notebooks, such as Dell’s Alienware 14 and its 1080p panel).
EVGA Brings Custom GTX 780 Ti Early
Reference cards for new graphics card releases are very important for a number of reasons. Most importantly, these are the cards presented to the media and reviewers that judge the value and performance of these cards out of the gate. These various articles are generally used by readers and enthusiasts to make purchasing decisions, and if first impressions are not good, it can spell trouble. Also, reference cards tend to be the first cards sold in the market (see the recent Radeon R9 290/290X launch) and early adopters get the same technology in their hands; again the impressions reference cards leave will live in forums for eternity.
All that being said, retail cards are where partners can differentiate and keep the various GPUs relevant for some time to come. EVGA is probably the most well-known NVIDIA partner and is clearly their biggest outlet for sales. The ACX cooler is one we saw popularized with the first GTX 700-series cards, and the company has quickly adapted it to the GTX 780 Ti, released by NVIDIA just last week.
I would normally have a full review for you as soon as we could but thanks to a couple of upcoming trips that will keep me away from the GPU test bed, that will take a little while longer. However, I thought a quick preview was in order to show off the specifications and performance of the EVGA GTX 780 Ti ACX.
As expected, the EVGA ACX design of the GTX 780 Ti is overclocked. While the reference card runs at a base clock of 875 MHz and a typical boost clock of 928 MHz, this retail model has a base clock of 1006 MHz and a boost clock of 1072 MHz. This means that all 2,880 CUDA cores are going to run somewhere around 15% faster on the EVGA ACX model than the reference GTX 780 Ti SKUs.
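Those clock deltas work out as follows; this is just a back-of-envelope check of the quoted figures, not an official EVGA number:

```python
# Percentage clock uplift of the EVGA ACX card over the reference GTX 780 Ti.
def uplift_pct(ref_mhz: float, oc_mhz: float) -> float:
    return (oc_mhz - ref_mhz) / ref_mhz * 100

print(f"base:  +{uplift_pct(875, 1006):.1f}%")   # roughly 15% on the base clock
print(f"boost: +{uplift_pct(928, 1072):.1f}%")   # roughly 15.5% on the boost clock
```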
We should note that though the cooler is custom built by EVGA, the PCB design of this GTX 780 Ti card remains the same as the reference models.
An issue of variance
AMD just sent along an email to the press with a new driver to use for Radeon R9 290X and Radeon R9 290 testing going forward. Here is the note:
We’ve identified that there’s variability in fan speeds across AMD R9 290 series boards. This variability in fan speed translates into variability of the cooling capacity of the fan-sink.
The flexibility of AMD PowerTune technology enables us to correct this variability in a driver update. This update will normalize the fan RPMs to the correct values.
The correct target RPM values are 2200RPM for the AMD Radeon R9 290X ‘Quiet mode’, and 2650RPM for the R9 290. You can verify these in GPU-Z.
If you’re working on stories relating to R9 290 series products, please use this driver as it will reduce any variability in fan speeds. This driver will be posted publicly tonight.
Great! This is good news! Except it also creates some questions.
When we first tested the R9 290X and the R9 290, we discussed the latest iteration of AMD's PowerTune technology. That feature attempts to keep clocks as high as possible under the constraints of temperature and power. I took issue with the high variability of clock speeds on our R9 290X sample, citing this graph:
I then did some digging into the variance and the claims that AMD was building a "configurable" GPU. In that article we found that there were significant performance deltas between "hot" and "cold" GPUs; we noticed that doing simple, quick benchmarks would produce certain results that were definitely not real-world in nature. At the default 40% fan speed, Crysis 3 showed 10% variance with the 290X at 2560x1440:
GK110 in all its glory
I bet you didn't realize that October and November were going to become the onslaught of graphics card releases they have been. I know I did not, and I tend to have a better background on these things than most of our readers. Starting with the release of the AMD Radeon R9 280X, 270X and R7 260X in the first week of October, it has pretty much been a non-stop battle between NVIDIA and AMD for the hearts, minds, and wallets of PC gamers.
Shortly after the Tahiti refresh came NVIDIA's move into display technology with G-Sync, a variable refresh rate feature that will work with upcoming monitors from ASUS and others as long as you have a GeForce Kepler GPU. The technology was damned impressive, but I am still waiting for NVIDIA to send over some panels for extended testing.
Later in October we were hit with the R9 290X, the Hawaii GPU that brought AMD back into the world of ultra-class single-GPU performance. It produced stellar benchmarks and undercut the prices (at the time, at least) of the GTX 780 and GTX TITAN. We tested it in both single and multi-GPU configurations and found that AMD had made some impressive progress in fixing its frame pacing issues, even with Eyefinity and 4K tiled displays.
NVIDIA dropped a driver release with ShadowPlay that allows gamers to record gameplay locally without a hit on performance. I posted a roundup of R9 280X cards which showed alternative coolers and performance ranges. We investigated the R9 290X Hawaii GPU and the claims that performance is variable and configurable based on fan speeds. Finally, the R9 290 (non-X model) was released this week to more fanfare than the 290X thanks to its nearly identical performance and $399 price tag.
And today, yet another release. NVIDIA's GeForce GTX 780 Ti takes the performance of the GK110 and fully unlocks it. The GTX TITAN uses one fewer SMX and the GTX 780 has three fewer SMX units so you can expect the GTX 780 Ti to, at the very least, become the fastest NVIDIA GPU available. But can it hold its lead over the R9 290X and validate its $699 price tag?
Introduction, Specifications and Packaging
It has been a while since OCZ introduced their Vector SSD, and it was in need of a refresh to bring its pricing more in-line with the competition, which had been equipping their products with physically smaller flash dies (therefore reducing cost). Today, OCZ launched a refresh to their Vector - now dubbed the Vector 150:
The OCZ strategy changed up a while back. They removed a lot of redundancy and confusing product lines, consolidating everything into a few simple solutions. Here's a snapshot of that strategy, showing the prior and newer iterations of three simple solutions:
The Vector 150 we look at today falls right into the middle here. I just love the 'ENTHUSIST' icon they went with:
Why is there a bourbon review on a PC-centric website?
We can’t live, eat, and breathe PC technology all the time. All of us have outside interests that may not intersect with the PC and mobile market. I think we would be pretty boring people if that were the case. Yes, our professional careers are centered in this area, but our personal lives do diverge from the PC world. You certainly can’t drink a GPU, though I’m sure somebody out there has tried.
The bottle is unique to Wyoming Whiskey. The bourbon has a warm, amber glow about it as well. Picture courtesy of Wyoming Whiskey
Many years ago I became a beer enthusiast. I loved to sample different concoctions, I would brew my own, and I settled on some personal favorites throughout the years. Living in Wyoming is not necessarily conducive to sampling many different styles and types of beers, and so I was in a bit of a rut. A few years back a friend of mine bought me a bottle of Tomatin 12 year single malt scotch, and I figured this would be an interesting avenue to move down since I had tapped out my selection of new and interesting beers (Wyoming has terrible beer distribution).
More of the same for a lot less cash
The week before Halloween, AMD unleashed a trick on the GPU world under the guise of the Radeon R9 290X, the fastest single-GPU graphics card we had tested to date. With a surprising price point of $549, it was able to outperform the GeForce GTX 780 (and GTX TITAN in most cases) while undercutting the competition's price by $100. Not too bad!
Today's release might be more surprising (and somewhat confusing). The AMD Radeon R9 290 4GB card is based on the same Hawaii GPU with a few fewer compute units (CUs) enabled and an even more aggressive price and performance placement. Seriously, has AMD lost its mind?
Can a card with a $399 price tag cut into the same performance levels as the JUST DROPPED price of $499 for the GeForce GTX 780?? And, if so, what sacrifices are being made by users that adopt it? Why do so many of our introduction sentences end in question marks?
The R9 290 GPU - Hawaii loses a small island
If you are new to the Hawaii GPU and you missed our first review of the Radeon R9 290X from last month, you should probably start back there. The architecture is very similar to that of the HD 7000-series Tahiti GPUs, with some modest changes to improve efficiency; the biggest jump is the raw primitive rate, doubled from 2/clock to 4/clock.
The R9 290 is based on Hawaii though it has four fewer compute units (CUs) than the R9 290X. When I asked AMD if that meant there was one fewer CU per Shader Engine or if they were all removed from a single Engine, they refused to really answer. Instead, several "I'm not allowed to comment on the specific configuration" lines were given. This seems pretty odd as NVIDIA has been upfront about the dual options for its derivative GPU models. Oh well.
When AMD released the Radeon R9 290X last month, I came away from the review very impressed with the performance and price point the new flagship graphics card was presented with. My review showed that the 290X was clearly faster than the NVIDIA GeForce GTX 780 and (at that time) was considerably less expensive as well - a win-win for AMD without a doubt.
But there were concerns over a couple of aspects of the card's design. First was the temperature and, specifically, how AMD was okay with this rather large piece of silicon hitting a sustained 95C. Another concern: AMD also included a switch at the top of the R9 290X to change fan profiles. This switch essentially creates two reference defaults and makes it impossible to set a single baseline of performance. The different modes only change the maximum fan speed the card is allowed to reach; still, performance changes because of this setting thanks to the newly revised (and updated) AMD PowerTune technology.
We also saw, in our initial review, a large variation in clock speeds both from one game to another as well as over time (after giving the card a chance to heat up). This led me to create the following graph showing average clock speeds 5-7 minutes into a gaming session with the card set to the default, "quiet" state. Each test is over a 60 second span.
Clearly there is variance here, which led us to more questions about AMD's stance. Remember when the Kepler GPUs launched: AMD was very clear that variance from card to card, silicon to silicon, was bad for the consumer, as it created random performance deltas between cards with otherwise identical specifications.
When it comes to the R9 290X, though, AMD claims both the GPU (and card itself) are a customizable graphics solution. The customization is based around the maximum fan speed which is a setting the user can adjust inside the Catalyst Control Center. This setting will allow you to lower the fan speed if you are a gamer desiring a quieter gaming configuration while still having great gaming performance. If you are comfortable with a louder fan, because headphones are magic, then you have the option to simply turn up the maximum fan speed and gain additional performance (a higher average clock rate) without any actual overclocking.
Introduction and Technical Specifications
Courtesy of GIGABYTE
The GIGABYTE Z87X-UD5H is the upper tier of their Z87 desktop board line. The board supports the latest generation of Intel LGA1150-based processors and is crammed full of all the bells and whistles you would expect on more costly boards. With an MSRP of $229.99, the board is aggressively priced to compete with the other upper tier Z87 desktop boards like the MSI Z87 MPower.
Courtesy of GIGABYTE
Besides an impressive 16-phase digital power delivery system dedicated to the processor, the Z87X-UD5H features GIGABYTE's Ultra Durable 5 Plus technology. Ultra Durable 5 Plus brings several high-end power components into the board's design: International Rectifier (IR) manufactured PowIRstage™ ICs and PWM controllers, Nippon Chemi-con manufactured Black Solid capacitors with a 10k hour operational rating at 105C, 15 micron gold plating on the CPU socket pins, and two 0.070mm copper layers embedded into the PCB for optimal heat dissipation. In addition to the Ultra Durable 5 Plus power features, GIGABYTE designed the board with the following features: 10 SATA 6Gb/s ports; two Intel GigE NICs; three PCI-Express x16 slots for up to dual-card NVIDIA SLI or AMD CrossFire support; two PCI-Express x1 slots; a PCI slot; onboard power, reset, and BIOS reset buttons; switch BIOS and Dual-BIOS switches; 2-digit diagnostic LED display; integrated voltage measurement points; and USB 2.0 and 3.0 port support.
Courtesy of GIGABYTE
ASUS R9 280X DirectCU II TOP
Earlier this month AMD took the wraps off of a revamped and restyled family of GPUs under the Radeon R9 and R7 brands. When I reviewed the R9 280X, essentially a lower-cost version of the Radeon HD 7970 GHz Edition, I came away impressed with the package AMD was able to put together. Though there was no new hardware to really discuss with the R9 280X, the price drop placed the cards in a very aggressive position adjacent to the NVIDIA GeForce line-up (including the GeForce GTX 770 and the GTX 760).
As a result, I fully expect the R9 280X to be a great selling GPU for those gamers with a mid-range budget of $300.
But another of the benefits of using an existing GPU architecture is the ability for board partners to very quickly release custom built versions of the R9 280X. Companies like ASUS, MSI, and Sapphire are able to have overclocked and custom-cooled alternatives to the 3GB $300 card, almost immediately, by simply adapting the HD 7970 PCB.
Today we are going to be reviewing a set of three different R9 280X cards: the ASUS DirectCU II, MSI Twin Frozr Gaming, and the Sapphire TOXIC.