Will it Strike Again?
We are now arguably in the fourth generation of Lightning products from MSI, and it was the third generation that really put the brand on the mainstream map. The R6970 and N580GTX (and the XE version) set new standards for enthusiast-grade graphics cards. Outstanding construction, unique PCB designs, high quality (and quantity) of components, and a good eye for overall price have all been hallmarks of these cards. They were honestly some of my favorite video cards of all time. Call me biased, but looking through other reviews, I think those writers felt much the same. MSI certainly hit a couple of home runs with its three Lightning offerings of 2011.
Time does not stand still. Resting on laurels is always the surest way to lose out to more aggressive competitors. It is now 2012 and AMD has already launched the latest generation of HD 7000 chips, with the top end being the HD 7970. This particular product was launched in late December, but cards were not available until January 9th of 2012. We are now at the end of March where we see a decent volume of products on the shelves, as well as some of the first of the non-reference designs hitting the streets. Currently Asus has its DirectCU II based 7970, but now we finally get to see the Lightning treatment.
MSI has not sat on its laurels, it seems. The company is taking an aggressive approach to the new Lightning series of cards, implementing quite a few unique features that have not been seen on any other product before. Now the question is: did they pull it off? Throwing more features at something does not necessarily equal success. The increased complexity of the design, combined with the other unknowns that new features bring, could make it a failure. Just look at the R5870 Lightning for proof. That particular card trod new ground, but did so in a way that did not adequately differentiate it from reference HD 5870 designs. So what is new and how does it run? Let us dig in!
The Kepler Architecture
Join us today at 12pm EST / 9am PST as PC Perspective hosts a Live Review of the new GeForce GTX 680 graphics card. We will discuss the new GPU technology and important features like GPU Boost, talk about performance compared to AMD's lineup, and we will also have NVIDIA's own Tom Petersen on hand to run some demos and answer questions from viewers. You can find it all at http://pcper.com/live!
NVIDIA fans have been eagerly waiting for the new Kepler architecture ever since CEO Jen-Hsun Huang first mentioned it in September 2010. In the interim, we have seen the birth of a complete lineup of AMD graphics cards based on its Southern Islands architecture including the Radeon HD 7970, HD 7950, HD 7800s and HD 7700s. To the gamer looking for an upgrade it would appear that NVIDIA had fallen behind; but the company is hoping that today's release of the GeForce GTX 680 will put them back in the driver's seat.
This new $499 graphics card will directly compete against the Radeon HD 7970, and it brings quite a few "firsts" to NVIDIA's lineup. This NVIDIA card is the first desktop 28nm GPU, the first to offer a clock speed over 1 GHz, the first to support triple-panel gaming on a single card, and the first to offer "boost" clocks that vary from game to game. Interested yet? Let's get to the good stuff.
The Kepler Architecture
In many ways, the new 28nm Kepler architecture is just an update to the Fermi design that was first introduced in the GF100 chip. NVIDIA's Jonah Alben summed things up pretty nicely for us in a discussion stating that "there are lots of tiny things changing (in Kepler) rather than a few large things which makes it difficult to tell a story."
GTX 680 Block Diagram
There are a few people in the gaming industry you simply must pay attention to when they speak. One of them is John Carmack, co-founder of id Software, creator of Doom, and a friend of the site. Another is Epic Games' Tim Sweeney, another pioneer in the field of computer graphics, who brought us the magic of Unreal before bringing the rest of the gaming industry the Unreal Engine.
At DICE 2012, a trade show where game developers demo their wares and learn from each other, Sweeney gave a talk on the future of computing hardware. (You can see the source of my information and slides here at Gamespot.) Many pundits, media, and even developers have floated the idea that the next console generation, which we know is coming, will be the last - that we will have reached the point in our computing capacity where gamers and designers are comfortable with the quality and realism provided. Forever.
Think about that for a moment; has anything ever appeared so obviously crazy? Yet, in a world where gaming has seemed to regress into the handheld spaces of iPhone and iPad, many would have you believe that it is indeed the case. Companies like NVIDIA and AMD that spend billions of dollars developing new high-powered graphics technologies would simply NOT do so anymore and instead focus only on low power. Actually...that is kind of happening with NVIDIA Tegra and AMD's move to APUs, but both claim that the development of leading graphics technology is what allows them to feed the low end - the sub-$100 graphics cards, SoCs for phones and tablets, and more.
Sweeney started the discussion by teaching everyone a little about human anatomy.
The human eye has been studied quite extensively, and the amount of information we know about it would likely surprise you. With 120 million monochrome receptors and 5 million color receptors, the eye and brain together can do things that even our most advanced cameras cannot.
Completing the Family
When we went to Austin, Texas to sit with AMD and learn about the Radeon HD 7900 series of cards for the first time, an interesting thing happened. While the official meeting was about the performance of the Radeon HD 7970 and HD 7950, when things started to settle several AMD employees couldn't help but discuss Cape Verde (7700-series) and Pitcairn (7800-series) GPUs. In particular, the HD 7800 cards were generating a lot of excitement internally as a spiritual follow up to the wildly successful HD 5800 and HD 5700 series of cards in terms of price and performance characteristics.
So while the Radeon HD 7970 and HD 7950 are being labeled as the world's fastest GPUs, and the Radeon HD 7700 is the fastest GPU for everyone, the HD 7800s are where many of our readers will look when upgrading their machines while staying within a budget.
Be sure to check out our video review posted here and then continue on to our full, written review for all the benchmarks and analysis!
AMD Gets the Direct CU Treatment
In the previous roundup I covered the DirectCU II models from Asus featuring NVIDIA based chips. These boards included the GTX 580, 570, and 560 products. All of these were DirectCU II based with all the updated features that are included as compared to the original DirectCU products. With the AMD parts Asus has split the top four products into two categories; DirectCU II and the original DirectCU. When we start looking at thermal properties and price points, we will see why Asus took this route.
AMD has had a strong couple of years with their graphics chips. While they were not able to take the single-GPU performance crown in the previous generation, their products were very capable and competitive across the board and at every price point. In fact, some of these cards have features at particular price points that make them very desirable in quite a few applications. In particular, the HD 6900 series cards carry 2 GB of memory where the competing NVIDIA cards at those price points feature 1 GB or 1.25 GB. In titles such as Skyrim with the HD texture DLC enabled, those smaller frame buffers start to limit performance at 1920x1080 and above due to the memory required by the higher-resolution textures.
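To put those frame buffer sizes in perspective, here is a rough back-of-the-envelope sketch (illustrative numbers only, not measurements from Skyrim) of how quickly uncompressed high-resolution textures eat into VRAM:

```python
def texture_bytes(width, height, bytes_per_texel=4, mipmaps=True):
    """Approximate VRAM footprint of one uncompressed texture.

    A full mipmap chain adds roughly one third on top of the base level.
    """
    base = width * height * bytes_per_texel
    return int(base * 4 / 3) if mipmaps else base

# Illustrative only: a scene streaming 200 uncompressed 2048x2048
# RGBA textures would need more VRAM than any card of this era has.
total_gb = 200 * texture_bytes(2048, 2048) / 1024**3
print(f"{total_gb:.2f} GB")  # ~4.17 GB before texture compression
```

In practice DXT-style compression cuts that by 4-8x, but the point stands: the jump from 1 GB to 2 GB buys real headroom once HD texture packs enter the picture.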
MSI's Alex Chang Speaks Up
MSI was founded in 1986 and started producing motherboards and video cards for the quickly growing PC market. Throughout the life of the company they have further diversified their offerings to include barebones systems, notebooks, networking/communication devices, and industrial products. While MSI has a nice base of products, they are still primarily a motherboard and video card company. In the past 10 years MSI has become one of the top brands in North America for video cards, and they have taken a very aggressive approach to design with these products.
I had the chance to send MSI quite a few questions concerning their video card business and how they develop their products.
What is your name, title, and how long have you worked at MSI?
My name is Bob, and I’m…. actually, I’m just Alex Chang. I’m the Associate Marketing Manager. I’ve been with the company for 2 years.
Typically how long does it take from the original reference design card release to when we can first expect to see a Twin Frozr III based card hit retail? How much longer does it take to create the “Lightning” based products?
Historically, we’ve seen the introduction of a non-reference thermal solution within 2-4 weeks of product launch. As an example, GTX580 was launched in November 2010, and by December there was already a reference PCB GTX580 w/ the Twin Frozr II cooler.
In the case of Lightning cards, the development timeframe is longer due to more R&D, validation, and procurement of components. With GTX580, the timeframe was around 6 months, but moving forward MSI is pulling in the launch timeframe of our flagship products.
Southern Islands Get Small
When AMD first started to talk to me about the upcoming Southern Islands GPUs they tried to warn me. Really, they did. "Be prepared for just an onslaught of card releases for 2012," I was told. In much the same strategy the company took with the HD 6000 series of cards, the new Radeon HD 7000 cards have been trickling out, part by part, to make sure the name "AMD" and the brand "Radeon" show up as often as possible in your news feeds and on my keyboard. In late December we wrote our review of the Radeon HD 7970 3GB flagship card and followed that up in January with a review of the Radeon HD 7950. In those briefings we were told in a general way about Cape Verde, the Radeon HD 7700 series, and Pitcairn, the Radeon HD 7800 series, but without details on performance, specifications, or release dates. We now have the answer for one more of these families: Cape Verde.
Cape Verde is the smallest of the Southern Islands dies and falls into the sub-$175 graphics market, depending on card vendors' pricing and overclocking settings. The real question we all wanted answered is what performance levels these new cards would offer and whether they could be the TRUE successor to the popular Radeon HD 5770. While the answer will take pages and pages of details to cement into place, I can say that although this is an impressive card, I wasn't as excited as I had wanted to be.
But I am getting ahead of myself... Check out our video review right here and then keep reading on for the full evaluation!
AMD Cape Verde - the smallest of the Southern Islands
GPU companies like to brag when they are on top - you'll see that as a recurring theme in our story today. One such case is the success of the Radeon HD 5770 mentioned above - it still sits on the throne as the most adopted DX11-capable GPU in the Steam Hardware Survey, one of our best sources of information on the general PC gamer.
While the inclusion of it, along with the Radeon HD 5870 and HD 5850, on this list was great for AMD a couple of years ago, the lack of a 6000-series card here shows us that users need another reason to upgrade: another card that is mass market enough (i.e. under $200) and offers performance advantages that really push gamers to spend that extra cheddar.
Bring in the Cape Verde GPU...
Four Displays for Under $70
Running multiple displays on your PC has become a trend everyone wants to jump on board with, thanks in large part to AMD's push of Eyefinity over the past few years. Gaming is a great application for multi-display configurations, but in truth game compatibility and game benefits haven't reached the level I had hoped they would by 2012. While gaming still has a way to go, the consumer applications for having more than a single monitor continue to expand and cement themselves in the minds of users.
Galaxy is the only NVIDIA partner really taking this market seriously, with an onslaught of cards branded MDT, for Multi-Display Technology. Using non-NVIDIA hardware in conjunction with NVIDIA GPUs, Galaxy has created some very unique products for consumers, like the recently reviewed GeForce GTX 570 MDT. Today we are going to be showing you the new Galaxy MDT GeForce GT 520, which brings support for a total of four simultaneous display outputs to a card with a reasonable cost of under $120.
The Galaxy MDT GeForce GT 520
Long-time readers of PC Perspective likely already know what to expect based on the GPU we are using here, but the Galaxy MDT model offers quite a few interesting changes.
The retail packaging clearly indicates the purpose of this card for users looking at running more than two displays. The GT 520 is not an incredibly powerful GPU when it comes to gaming but Galaxy isn't really pushing the card in that manner. Here are the general specs of the GPU for those that are interested:
- 48 CUDA cores
- 810 MHz core clock
- 1GB DDR3 memory
- 900 MHz memory clock
- 64-bit memory bus width
- 4 ROPs
- DirectX 11 support
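Those numbers also explain why this is no gaming card. As a quick sketch (assuming the usual two transfers per clock for DDR3 and decimal gigabytes), the peak memory bandwidth implied by that 64-bit bus works out to a very modest figure:

```python
def mem_bandwidth_gbs(bus_width_bits, clock_mhz, transfers_per_clock=2):
    """Peak memory bandwidth in GB/s from bus width and memory clock.

    DDR memory moves data twice per clock, hence the default of 2.
    """
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * clock_mhz * 1e6 * transfers_per_clock / 1e9

# The GT 520's 64-bit, 900 MHz DDR3 configuration:
print(mem_bandwidth_gbs(64, 900))  # 14.4 GB/s
```

Compare that to the roughly 192 GB/s of a GTX 580 and it is clear why Galaxy pitches this card at multi-monitor productivity rather than games.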
3 NV for DCII
The world of video cards has changed a great deal over the past few years. Where once we saw only "sticker versions" of cards mass-produced by a handful of manufacturers, we now see some really nice differentiation from the major players. While the first iterations of new cards are typically mass-produced by NVIDIA or AMD and then distributed to their partners for initial sales, these manufacturers are now more consistently getting their own unique versions to retail in record time. MSI was one of the first to put out its own unique designs, but now we are seeing Asus become much more aggressive with products of its own.
The DirectCU II line is Asus' response to the growing number of original designs from other manufacturers. The easiest way to categorize these designs is that they sit nicely between the very high-end, extreme products like the MSI Lightning series and the reference design boards with standard cooling. These are unique designs that integrate features and cooling solutions well above those of reference cards.
DirectCU II refers primarily to the cooling solutions on these boards. The copper heatpipes in the DirectCU II cooler are in direct contact with the GPU. These heatpipes are then routed through two separate aluminum fin arrays, each with its own fan, so each card has either a dual-slot or triple-slot cooling solution with two 80 mm fans that dynamically adjust to the temperature of the chip. The second part of the package is branded "Super Alloy Power," in which Asus has upgraded most of the electrical components on the board to higher specifications. Hi-C caps, proadlizers, polymer caps, and higher quality chokes round out the upgraded components, which should translate into more stable overclocked performance and a longer lifespan.
Tahiti Gets Clipped
It has been just over a month since we first got our hands on the AMD Southern Islands architecture in the form of the Radeon HD 7970 3GB graphics card. It was then a couple of long weeks as we waited for the consumer to get the chance to buy that same hardware though we had to admit that the $550+ price tags were scaring many away. Originally we were going to have both the Radeon HD 7970 and the Radeon HD 7950 in our hands before January 9th, but that didn't pan out and instead the little brother was held in waiting a bit longer.
Today we are reviewing that sibling, the Radeon HD 7950 3GB, which offers basically the same technology and feature set with a slightly diminished core and a matching, slightly diminished price. In truth, I don't think the estimated MSRP of $449 is going to capture many more hearts than the $549 price of the HD 7970 did, but AMD is hoping it can ride its performance advantage to as much profit as it can while waiting for NVIDIA to properly react.
Check out our video review right here and then continue on to our complete benchmarking analysis!
Southern Islands Gets Scaled Back a Bit
As I said above, the Radeon HD 7950 3GB is pretty similar to the HD 7970. It is based on the same 28nm, DirectX 11.1, PCI Express 3.0, 4.31 billion transistor GPU and includes the same massive 3GB frame buffer as its older brother. The Tahiti GPU is the first of its kind on all of those fronts, but here it has a few of its computational portions disabled.
If you haven't read up on the Southern Islands architecture, or Tahiti GPU based around it, you are missing quite a bit of important information on the current lineup of parts from AMD. I would very much encourage you to head over to our Radeon HD 7970 3GB Tahiti review and look over the first three pages as it provides a detailed breakdown of the new features and the pretty dramatic shift in design that Southern Islands introduced to the AMD GPU team.
Guess what? Overclocked.
The NVIDIA GTX 580, based on the GF110 Fermi architecture, is old but not forgotten. Released in November of 2010, it held the single-GPU performance crown for more than a year before being usurped by AMD and the Radeon HD 7970 just this month. Still, the GTX 580 is a solid high-end enthusiast graphics card with widespread availability and custom-designed, overclocked models from numerous vendors, making it a viable option.
Gigabyte sent us this overclocked and custom-cooled model quite a while ago, but we had simply fallen behind with other reviews until just after CES. In today's market the card has a bit of a different role to fill - it surely won't catch the new AMD Radeon HD 7970, but can it fight the good fight and keep NVIDIA's current lineup of GPUs competitive until Kepler finally shows itself?
The Gigabyte GTX 580 1.5GB Super Overclock Card
With the age of the GTX 580 designs, Gigabyte had plenty of time to perfect their PCB and cooler design. This model, the Super Overclock (GV-N580SO-15I), comes in well ahead of the standard reference speeds of the GTX 580 but sticks to the same 1.5 GB frame buffer.
The clock speed is set at 855 MHz core and 1025 MHz memory, compared to the 772 MHz core speed and 1002 MHz memory clock of the reference design. That is a very healthy core overclock of nearly 11% that should translate into almost that large a gap in gaming performance wherever the GPU is the real bottleneck.
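Spelling the arithmetic out (just a sanity check on the figures above):

```python
def oc_percent(stock_mhz, oc_mhz):
    """Clock increase expressed as a percentage of the stock speed."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

print(f"core:   {oc_percent(772, 855):.1f}%")   # 10.8%
print(f"memory: {oc_percent(1002, 1025):.1f}%") # 2.3%
```

The memory bump is token by comparison, which is typical for factory overclocks: GDDR5 headroom is far harder to guarantee across every card than core clock headroom.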
Retail-ready HD 7970
We first showed off the power of the new AMD Radeon HD 7970 3GB graphics card in our reference review posted on December 22nd. If you haven't read all about the new Southern Islands architecture and the Tahiti chip that powers the HD 7970 then you should already be clicking the link above to my review to get up to speed. Once you have done so, please return here to continue.
Welcome back, oh wise one. Now we are ready to proceed. By now you already know that the Radeon HD 7970 is the fastest GPU on the planet, besting the NVIDIA GTX 580 by a solid 20-30% in most cases. For our first retail card review we are going to be looking at the XFX Black Edition Double Dissipation that overclocks the GPU and memory clocks slightly and offers a new cooler that promises to be more efficient and quieter.
Let's put XFX to the test!
The XFX Radeon HD 7970 3GB Black Edition Double Dissipation
Because of the use of a completely custom cooler, the XFX HD 7970 Black Edition Double Dissipation looks completely different than the reference model we tested last month though the feature set remains identical. The silver and black motif works well here.
The First 28nm GPU Architecture
It is going to be an exciting 2012. Both AMD and NVIDIA are going to be bringing gamers entirely new GPU architectures, Intel has Ivy Bridge up its sleeve and the CPU side of AMD is looking forward to the introduction of the Piledriver lineup. Today though we end 2011 with the official introduction of the AMD Southern Islands GPU design, a completely new architecture from the ground up that engineers have been working on for more than three years.
This GPU will be the first on several fronts: the first 28nm part, the first cards with support for PCI Express 3.0 and the first to officially support DirectX 11.1 coming with Windows 8. Southern Islands is broken up into three different families starting with Tahiti at the high-end, Pitcairn for sweet spot gaming and Cape Verde for budget discrete options. The Radeon HD 7970 card that is launching today with availability in early January is going to be the top-end single GPU option, based on Tahiti.
Let's see what 4.31 billion transistors buys you in today's market. I have embedded a very short video review here as well for your perusal but of course, you should continue down a bit further for the entire, in-depth review of the Radeon HD 7970 GPU.
Southern Islands - Starting with Tahiti
Before we get into benchmark results we need to get a better understanding of this completely new GPU design that was first divulged in June at the AMD Fusion Developer Summit. At that time, our own lovely and talented Josh Walrath wrote up a great preview of the architecture that remains accurate and pertinent for today's release. We will include some of Josh's analysis here and interject with anything new that we have learned from AMD about the Southern Islands architecture.
When NVIDIA introduced the G80, they took a pretty radical approach to GPU design. Instead of going with previous VLIW architectures which would support operations such as Vec4+Scalar, they went with a completely scalar architecture. This allowed a combination of flexibility of operation types, ease of scheduling, and a high utilization of compute units. AMD has taken a somewhat similar, but still unique approach to their new architecture.
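To illustrate that utilization argument with a deliberately toy model (my own simplification, not how either scheduler actually works): a VLIW unit only stays busy when the compiler can pack enough independent components into each instruction bundle, so mixed shader math leaves lanes idle.

```python
def vliw_utilization(op_widths, lanes=5):
    """Fraction of VLIW lanes kept busy if each issued bundle can only
    hold the components of a single operation (a big simplification)."""
    used = sum(min(w, lanes) for w in op_widths)
    return used / (len(op_widths) * lanes)

# A shader mixing Vec4, Vec3, and scalar operations on a 5-wide unit:
print(vliw_utilization([4, 3, 1]))  # 8/15 -> roughly half the lanes idle
```

A scalar design sidesteps the packing problem entirely: each lane runs one operation for one thread, so utilization depends on having enough threads in flight rather than on how cleverly the compiler bundles the math.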
Galaxy Continues the MDT Push
One of the key selling points for the AMD Radeon series of graphics cards the last few generations has been Eyefinity - the ability to run more than two displays off of a single card while also allowing for 3+ display gaming configurations. NVIDIA-based solutions required a pair of GPUs running in SLI for this functionality, either standard SLI or the "SLI-on-a-card" solutions like the GTX 590.
However, another solution has appeared from Galaxy, an NVIDIA partner that has created a series of boards with the MDT moniker - Multi-Display Technology. Using a separate on-board chip the company has created GTX 560 Ti, GTX 570 and GTX 580 cards that can output to 4 or 5 monitors using only a single NVIDIA GPU, cutting down on costs while offering a feature that no other single-GPU solution could.
Today we are going to be reviewing the Galaxy GeForce GTX 570 MDT X4 card that promises 4 display outputs and a triple-panel seamless gaming surface option for users that want to explore gaming on more than a single monitor inside the NVIDIA ecosystem.
A Temporary Card with a Permanent Place in Our Heart
Today NVIDIA and its partners are announcing availability of a new graphics card that bridges the gap between the $230 GTX 560 Ti and the $330 GTX 570 currently on the market. The new card promises to offer performance right between those two units with a price to match but with a catch: it is a limited edition part with expected availability only through the next couple of months.
When we first heard rumors about this product back in October I posited that the company would be crazy to simply call this the GeForce GTX 560 Ti Special Edition. Well...I guess this makes me the jackass. This new ~$290 GPU will be officially called the "GeForce GTX 560 Ti with 448 Cores".
The GeForce GTX 560 Ti 448 Core Edition
The GeForce GTX 560 Ti with 448 cores is actually not a GTX 560 Ti at all and in fact is not even built on a GF114 GPU - instead we are looking at a GF110 GPU (the same used on the GeForce GTX 580 and GTX 570 graphics cards) with another SM disabled.
GeForce GTX 580 Diagram
The above diagram shows a full GF110 GPU sporting 512 CUDA cores and the full 16 SMs (streaming multiprocessors), along with all the bells and whistles that go with that $450 card. That includes a 384-bit memory bus and a 1.5 GB frame buffer, all of which adds up to what is still the top-performing single-GPU graphics card on the market today.
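The core counts in this generation fall straight out of Fermi's 32 CUDA cores per SM, so the 448-core name is really just shorthand for a GF110 with two SMs fused off:

```python
def cuda_cores(enabled_sms, cores_per_sm=32):
    """CUDA core count for a Fermi GF110 part given its enabled SMs."""
    return enabled_sms * cores_per_sm

print(cuda_cores(16))  # 512 -> GTX 580
print(cuda_cores(15))  # 480 -> GTX 570
print(cuda_cores(14))  # 448 -> GTX 560 Ti with 448 cores
```

Disabling SMs lets NVIDIA salvage dies that have a defect in one or two clusters, which is exactly why a limited-run part like this can exist at all.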
Introduction, Campaign Testing
As you might have noticed, we’re a bit excited about Battlefield 3 here at PC Perspective. It promised to pay attention to what PC gamers want, and shockingly, it has come through. Dedicated servers and huge multi-player matches are supported, and the browser based interface is excellent.
If we’re honest, a lot of our hearts have been stirred simply by the way the game looks. There aren’t many titles that really let a modern mid-range graphics card stretch its legs, even at 1080p resolution. Battlefield 3, however, can be demanding - and it looks beautiful. Even with the presets at medium, it’s one of the most attractive games ever.
But what does this mean for laptops? As the resident laptop reviewer at PC Perspective, I know that gaming remains a challenge. The advancements over the last few years have been spectacular, but even so, most of the laptops we test can't run Just Cause 2 at a playable framerate even with all detail set to low and a resolution of just 1366x768.
To find out if mobile gamers were given consideration by the developers of Battlefield 3, I installed the game on three different laptops. The results go to show both how far mobile gaming has come and how far it has to go.
EVGA Changes the Game Again
Dual-GPU graphics cards are becoming an interesting story. While both NVIDIA and AMD have offered their own reference dual-GPU designs for quite some time, it is the custom-built models from board vendors like ASUS and EVGA that really pique our interest because of their unique nature. Earlier this year EVGA released the GTX 460 2Win, the world's first (and only) graphics card with a pair of GTX 460 GPUs on board.
ASUS has released dual-GPU options as well, including the ARES dual Radeon HD 5870 last year and the MARS II dual GTX 580 just this past August, but both were prohibitively rare and expensive. The EVGA "2Win" series, as we can call it now that there are two of them, is still expensive but much more in line with the performance per dollar of the rest of the graphics card market. When the company approached us last week about the new GTX 560 Ti 2Win, we jumped at the chance to review it.
The EVGA GeForce GTX 560 Ti 2Win 2GB
The new GTX 560 Ti 2Win from EVGA follows directly in the footsteps of the GTX 460 model - we are essentially looking at a pair of GTX 560 Ti GPUs on a single PCB running in SLI multi-GPU mode. Clock speeds, memory capacity, performance - it should all be pretty much the same as if you were running a pair of GTX 560 Ti cards independently.
Just as with the GTX 460 2Win, EVGA is the very first company to offer such a product. NVIDIA didn't design a reference platform and pass it along to everyone like they did with the GTX 590 - this is all EVGA.
The Alienware M17x Giveth
Mobile graphics cards are really a different beast than their desktop variants. Despite having similar names and model numbers, the specifications vary greatly: the GTX 580M isn't equivalent to the GTX 580, and the HD 6990M isn't even a dual-GPU product. Also, getting the capability to do a direct head-to-head is almost always a tougher task thanks to the notebook market's penchant for single-vendor SKUs.
Over the past week or two, I was lucky enough to get my hands on a pair of Alienware M17x notebooks, one sporting the new AMD Radeon HD 6990M discrete graphics solution and the other with the NVIDIA GeForce GTX 580M.
AMD Radeon HD 6990M on the left; NVIDIA GeForce GTX 580M on the right
Also unlike the desktop market, the time from the announcement of a new mobile GPU to when you can actually BUY a system that includes it tends to be pretty long. Take the two GPUs we are looking at today: the HD 6990M launched in July and we are only just now seeing machines ship in volume; the GTX 580M launched back in June.
Well, problems be damned, we had the pair in our hands for a few short days and I decided to put them through the wringer in our GPU testing suite, adding Battlefield 3 for good measure as well. The goal was to determine which GPU was actually the "world's fastest," as both companies claimed theirs to be.
You don't have 3D Vision 2? Loser.
In conjunction with GeForce LAN 6, currently taking place on the USS Hornet in Alameda, NVIDIA is announcing an upgrade to its lineup of 3D Vision technologies. Originally released back in January of 2009, 3D Vision was one of the company's grander attempts to change the way PC gamers, well, game. Unfortunately for NVIDIA and the gaming community, running a 3D Vision setup required a new, much more expensive display as well as glasses that originally ran $199.
While many people, including myself, were enamored with 3D technology when we first got our hands on it, the novelty kind of wore off and I found myself quickly back on the standard panels for gaming. The reasons were difficult to discern at first but it definitely came down to some key points:
- Cost
- Panel resolution
- Panel size
- Image quality
The cost was obvious - having to pay nearly double for a 3D Vision capable display just didn't fly for most PC gamers, and the need to purchase $200 glasses on top of that made it even less likely you would plop down the credit card. Initial 3D Vision ready displays, besides being hard to find, were limited to a resolution of 1680x1050 and were only available in 22-in form factors. Obviously, if you were interested in 3D technology you were likely a discerning gamer, and running at lower resolutions would be less than ideal.
The new glasses - less nerdy?
Yes, 24-in and 1080p panels did come in 2010 but by then much of the hype surrounding 3D Vision had worn off. To top it all off, even if you did adopt a 3D Vision kit of your own you realized that the brightness of the display was basically halved when operating in 3D mode - with one shutter of your glasses covered at any given time, you only receive half the total output from the screen leaving the image quality kind of drab and washed out.
RAGE is not as dependent on your graphics hardware as it is on your CPU and storage system (which may be an industry first); the reason why will become clear when we talk about the texture pop-up issue on the next page.
The first id Software-designed game since the release of Doom 3 in August of 2004, RAGE has a lot riding on it. Not only is it the introduction of the idTech 5 game engine, it is also the culmination of more than four years of development and the first new IP from the developer since the creation of Quake. And ever since the first discussions and demonstrations of Carmack's MegaTexture technology, gamers have been expecting a lot as well.
Would this game be impressive enough on the visuals to warrant all the delays we have seen? Would it push today's GPUs in a way that few games are capable of? It looks like we have answers to both of those questions and you might be a bit disappointed.
First, let's get to the heart of the performance question: will your hardware play RAGE? Chances are, very much so. I ran through some tests of RAGE on a variety of hardware including the GeForce GTX 580, 560 Ti, 460 1GB and the Radeon HD 6970, HD 6950, HD 6870 and HD 5850. The test bed included an Intel Core i7-965 Nehalem CPU and 6GB of DDR3-1333 memory, with the game running off a 600GB VelociRaptor hard drive. Here are the results from our performance tests running at 1920x1080 resolution with 4x AA enabled in the game options: