A new SKU for a new battle
On launch day we hosted AMD's Evan Groenke for an in-studio live interview and discussion about the Radeon HD 7970 GHz Edition. You can check out the on-demand version of that event right here. Enjoy!
AMD has had a good run in the discrete graphics market for quite some time. With the Radeon HD 5000 series, the company was able to take a commanding mindshare (if not marketshare) lead from NVIDIA. While that diminished some with the HD 6000 series going up against NVIDIA's GTX 500 family, the release of the HD 7970 and HD 7950 just before the end of 2011 stepped it up again. AMD was the first to market with a 28nm GPU, the first to support DX11.1, and the first with a 3GB frame buffer, and the new products were simply much faster than what NVIDIA had at the time.
AMD enjoyed that top spot on the GPU front all the way until the NVIDIA GeForce GTX 680 launched in March. In a display of technology that most reviewers never thought possible, NVIDIA had a product that was faster, more power efficient, and matched or exceeded just about every feature of the AMD Radeon HD 7000 cards. Availability problems plagued NVIDIA for several months (and we are just now seeing the end of the shortage) and even caused us to do nearly-weekly "stock checks" to update readers. Prices on the HD 7900 cards have slowly crept down to find a place where they are relevant in the market, but AMD appears to not really want to take a back seat to NVIDIA again.
While visiting with AMD in Seattle for the Fusion Developer Summit a couple of weeks ago, we were briefed on a new secret: Tahiti 2 (Tahiti XT2 internally), an updated Radeon HD 7970 GPU that would soon be shipping with higher clock speeds and a new "boost" technology to combat the GTX 680. Even better, this card was going to carry a $499 price tag.
The GK107 GPU
What does $399 buy these days?
I think it is safe to say that MSI makes some very nice stuff when it comes to video cards. Their previous generation of HD 6000 and GTX 500 series cards was quite popular, and we reviewed more than a handful here. That generation of cards really cemented MSI's reputation as one of the top video card vendors in the industry in terms of quality, features, and cooling innovation. Now we are moving on to a new generation of cards from both AMD and NVIDIA, and the challenge of keeping up MSI's reputation has only increased.
The competition has become much more aggressive as of late. Asus has some unique solutions, and companies such as XFX have stepped up their designs to challenge the best of the industry. MSI has found themselves to be in a much more crowded space with upgraded cooler designs, robust feature sets, and pricing that reflects the larger selection of products that fit such niches. The question here is if MSI’s design methodology for non-reference cards is up to the challenge.
Previously I was able to review the R7970 Lightning from MSI, and it was an impressive card. I had some initial teething problems with that particular model, but a BIOS flash and some elbow grease got it working as advertised. Today I am looking at the R7950 Twin Frozr 3GD5/OC. This card features a reference PCB combined with a Twin Frozr III cooling solution. I was not entirely sure what to expect with this card, since the Lightning was such a challenge at first.
XFX Throws into the Midrange Ring
Who is this XFX? This is a brand that I have not dealt with in a long time. In fact, the last time I had an XFX card was some five years ago, in the form of the GeForce 8800 GTX XXX Edition. That was a pretty awesome card for the time, and it seemed to last forever in terms of performance and features in the new DX10 world of 2007/2008. It was a heavily overclocked card, and it would get really loud during gaming sessions. I can honestly say, though, that this particular card was trouble-free and well built.
XFX has not always had a great reputation though, and the company has gone through some very interesting twists and turns over the years. XFX is a subsidiary of Pine Technologies. Initially XFX dealt strictly with NVIDIA based products, but a few years back when the graphics market became really tight, NVIDIA dropped several manufacturers and focused their attention on the bigger partners. Among the victims of this tightening were BFG Technologies and XFX. Unlike BFG, XFX was able to negotiate successfully with AMD to transition their product lineup to Radeon products. Since then XFX has been very aggressive in pursuing unique designs based on these AMD products. While previous generation designs did not step far from the reference products, this latest generation is a big step forward for XFX.
When the Fermi architecture was first discussed in September of 2009 at the NVIDIA GPU Technology Conference it marked an interesting turn for the company. Not only was NVIDIA releasing details about a GPU that wasn’t going to be available to consumers for another six months, but also that NVIDIA was building GPUs not strictly for gaming anymore – HPC and GPGPU were a defining target of all the company’s resources going forward.
Kepler on the other hand seemed to go back in the other direction with a consumer graphics release in March of this year without discussion of the Tesla / Quadro side of the picture. While the company liked to tout that Kepler was built for gamers I think you’ll find that with the information NVIDIA released today, Kepler was still very much designed to be an HPC powerhouse. More than likely NVIDIA’s release schedules were altered by the very successful launch of AMD’s Tahiti graphics cards under the HD 7900 brand. As a result, gamers got access to GK104 before NVIDIA’s flagship professional conference and the announcement of GK110 – a 7.1 billion transistor GPU aimed squarely at parallel computing workloads.
With the Fermi design NVIDIA took a gamble and changed directions with its GPU design, betting that it could develop a microprocessor primarily intended for the professional markets while still appealing to the gaming markets that have sustained it for the majority of the company's existence. While the GTX 480 flagship consumer card, and to some degree the GTX 580, had overheating and efficiency drawbacks for gaming workloads compared to AMD GPUs, the GTX 680 based on Kepler GK104 improved on them greatly. NVIDIA has still designed Kepler for high-performance computing, though this time with a focus on power efficiency as well as performance, and we haven't seen the true king of this product line until today.
GK110 Die Shot
Built on the 28nm process technology from TSMC, GK110 is an absolutely MASSIVE chip comprising 7.1 billion transistors, and though NVIDIA hasn't given us a die size, it is likely coming close to the reticle limit of 550 square millimeters. NVIDIA is proud to call this chip the most 'architecturally complex' microprocessor ever built, and while impressive, that means there is potential for some issues when it comes to producing a chip of this size. This GPU will be able to offer more than 1 TFlop of double precision computing power at greater than 80% efficiency and 3x the performance per watt of Fermi designs.
NVIDIA puts its head in the clouds
Today at the 2012 NVIDIA GPU Technology Conference (GTC), NVIDIA took the wraps off a new cloud gaming technology that promises to reduce latency and improve the quality of streaming gaming using the power of NVIDIA GPUs. Dubbed GeForce GRID, NVIDIA is offering the technology to online services like Gaikai and OTOY.
The goal of GRID is to bring the promise of "console quality" gaming to every device a user has. The term "console quality" is kind of important here as NVIDIA is trying desperately to not upset all the PC gamers that purchase high-margin GeForce products. The goal of GRID is pretty simple though and should be seen as an evolution of the online streaming gaming that we have covered in the past–like OnLive. Being able to play high quality games on your TV, your computer, your tablet or even your phone without the need for high-performance and power hungry graphics processors through streaming services is what many believe the future of gaming is all about.
GRID starts with the Kepler GPU - what NVIDIA is now dubbing the first "cloud GPU" - that has the capability to virtualize graphics processing while being power efficient. The inclusion of a hardware fixed-function video encoder is important as well as it will aid in the process of compressing images that are delivered over the Internet by the streaming gaming service.
This diagram shows us how the Kepler GPU handles and accelerates the processing required for online gaming services. On the server side, the necessary process for an image to find its way to the user is more than just a simple render to a frame buffer. In current cloud gaming scenarios the frame buffer would have to be copied to the main system memory, compressed on the CPU and then sent via the network connection. With NVIDIA's GRID technology that capture and compression happens on the GPU memory and thus can be on its way to the gamer faster.
The result is an H.264 stream that is compressed quickly and efficiently, sent out over the network, and returned to the end user on whatever device they are using.
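The latency argument above can be sketched as a simple per-frame model. The stage timings below are illustrative placeholders (not NVIDIA-published figures); the point is structural: the GRID path drops the frame-buffer copy to system memory and replaces CPU encoding with the GPU's fixed-function encoder.

```python
# Hypothetical latency model contrasting the two cloud-gaming pipelines
# described above. All millisecond values are made-up placeholders
# chosen only to illustrate the structural difference.

def pipeline_latency_ms(stages):
    """Total per-frame latency is the sum of each serial stage's cost."""
    return sum(stages.values())

# Traditional path: render -> copy frame buffer to system RAM ->
# compress on the CPU -> hand off to the network.
traditional = {
    "render": 16.0,
    "copy_to_system_memory": 5.0,
    "cpu_h264_encode": 15.0,
    "network_send": 1.0,
}

# GeForce GRID path: capture and H.264 encode happen in GPU memory on
# the fixed-function encoder, so the copy stage disappears entirely.
grid = {
    "render": 16.0,
    "gpu_h264_encode": 5.0,
    "network_send": 1.0,
}

print(pipeline_latency_ms(traditional))  # 37.0
print(pipeline_latency_ms(grid))         # 22.0
```

Whatever the real numbers turn out to be, removing a serial copy stage shortens every single frame's trip to the player, which is why the encode location matters so much for streaming services.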
GK104 takes a step down
While the graphics power found in the new GeForce GTX 690, the GeForce GTX 680, and even the Radeon HD 7970 is incredibly impressive, if we are really honest with ourselves the real meat of the GPU market buys at price points much lower than $999. Today's not-so-well-kept-secret release of the GeForce GTX 670 attempts to bring the price of entry to the NVIDIA Kepler architecture down to a more attainable level while also resetting the performance-per-dollar metrics of the GPU world once again.
The GeForce GTX 670 is in fact a very close cousin to the GeForce GTX 680 with only a single SMX unit disabled and a more compelling $399 price tag.
The GTX 670 GPU - Nearly as fast as the GTX 680
The secret is out - GK104 finds its way onto a third graphics card in just two months - but in this iteration the hardware has been reduced slightly.
The GTX 670 block diagram we hacked together above is really just a GTX 680 diagram with a single SMX unit disabled. While the GTX 680 sported a total of 1536 CUDA cores broken up into eight 192-core SMX units, the new GTX 670 will include 1344 cores. This also drops the texture unit count to 112 (from 128 on the GTX 680), though the ROP count stays at 32 thanks to the continued use of a 256-bit memory interface.
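The unit counts above follow directly from the per-SMX resources, which is worth a quick sanity check. The 192 cores per SMX come straight from the text; the 16 texture units per SMX are implied by the 128-to-112 drop when one of eight SMX units is disabled.

```python
# Back-of-the-envelope check of GK104 unit counts as a function of
# active SMX units, using the per-SMX figures implied above.

CORES_PER_SMX = 192   # stated for Kepler's SMX
TEX_PER_SMX = 16      # implied: 128 / 8 SMX = 112 / 7 SMX = 16

def gk104_counts(active_smx):
    return {
        "cuda_cores": active_smx * CORES_PER_SMX,
        "texture_units": active_smx * TEX_PER_SMX,
        # ROPs hang off the 256-bit memory interface, not the SMX
        # array, so disabling an SMX leaves them at 32.
        "rops": 32,
    }

print(gk104_counts(8))  # GTX 680: 1536 cores, 128 texture units, 32 ROPs
print(gk104_counts(7))  # GTX 670: 1344 cores, 112 texture units, 32 ROPs
```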
GTX 690 Specifications
On Thursday, May 3rd at 10am PDT / 1pm EDT, stop by the PC Perspective Live page for an NVIDIA and PC Perspective hosted event surrounding the GeForce GTX 690 graphics card. Ryan Shrout and Tom Petersen will be on hand to talk about the technology and the performance characteristics, as well as answer questions from the community via the chat room, Twitter, etc. Be sure to catch it all at http://pcper.com/live
Okay, so it's not a surprise to you at all, or if it is, you haven't been paying attention. Today is the first on-sale date and review release for the new NVIDIA GeForce GTX 690 4GB dual-GPU Kepler graphics card that we first announced in late April. This is the dream card for any PC gamer out there, combining a pair of GTX 680 GK104 GPUs on a single PCB running in a single-card SLI configuration, and it is easily the fastest single card we have ever tested. It is also the most expensive reference card we have ever seen, with a hefty $999 price tag.
So how does it perform? How about efficiency and power consumption - does the GTX 690 suffer the same problems the GTX 590 did? Can AMD hope to compete with a dual-GPU HD 7990 card in the future? All that and more in our review!
Kepler Architecture Overview
For those of you that may have missed the boat on the GTX 680 launch, the first card to use NVIDIA's new Kepler GPU architecture, you should definitely head over and read my review and analysis of that before heading into the deep-dive on the GTX 690 here today.
Kepler is a 3.54 billion transistor GPU with 1536 CUDA cores / stream processors contained within, and even in a single-GPU configuration it is able to produce some impressive PC gaming performance results. The new SMX-based design has some modest differences from Fermi, the most dramatic of which is the removal of the "hot clock", the factor that ran the shaders at twice the clock speed of the rest of the GPU. Now the entire chip runs at one speed, higher than 1 GHz on the GTX 680.
Each SMX on Kepler now includes 192 CUDA cores as opposed to the 32 cores found in each SM on Fermi - a change that has increased efficiency and performance per watt quite dramatically.
As I said above, there are a lot more details on the changes in our GeForce GTX 680 review.
The GeForce GTX 690 Specifications
Many of the details surrounding the GTX 690 have already been revealed by NVIDIA's CEO Jen-Hsun Huang during a GeForce LAN event in China last week. The card is going to be fast, expensive and is built out of components and materials we haven't seen any graphics card utilize before.
Despite the high performance level of the card, the GTX 690 isn't much heavier or much longer than the reference GTX 680 card. We'll go over the details surrounding the materials, cooler, and output configuration on the next page, but let's take some time first to look over and debate the performance specifications.
When the NVIDIA GeForce GTX 680 launched in March we were incredibly impressed with the performance and technology that the GPU was able to offer while also being power efficient. Fast forward nearly a month and we are still having problems finding the card in stock - a HUGE negative towards the perception of the card and the company at this point.
Still, we are promised by NVIDIA and its partners that they will soon have more on shelves, so we continue to look at the performance configurations and prepare articles and reviews for you. Today we are taking a look at the Galaxy GeForce GTX 680 2GB card - their most basic model that is based on the reference design.
If you haven't done all the proper reading about the GeForce GTX 680 and the Kepler GPU, you should definitely check out my article from March that goes into a lot more detail on that subject before diving into our review of the Galaxy card.
The Card, In Pictures
The Galaxy GTX 680 is essentially identical to the reference design with the addition of some branding along the front and top of the card. The card is still a dual-slot design, still requires a pair of 6-pin power connections, and uses a fan that is very quiet compared to the competition from AMD.
Will it Strike Again?
We are now arguably in our 4th generation of Lightning products from MSI, and it was the 3rd generation that really put the brand on the mainstream map. The R6970 and N580GTX (and XE version) set new standards for enthusiast grade graphics cards. Outstanding construction, unique PCB design, high quality (and quantity) of components, and a good eye for overall price have all been hallmarks of these cards. These were honestly some of my favorite video cards of all time. Call me biased, but looking through other reviews I think those writers felt much the same. MSI certainly hit a couple of home runs with their three Lightning offerings of 2011.
Time does not stand still. Resting on laurels is always the surest way to lose out to more aggressive competitors. It is now 2012 and AMD has already launched the latest generation of HD 7000 chips, with the top end being the HD 7970. This particular product was launched in late December, but cards were not available until January 9th of 2012. We are now at the end of March where we see a decent volume of products on the shelves, as well as some of the first of the non-reference designs hitting the streets. Currently Asus has its DirectCU II based 7970, but now we finally get to see the Lightning treatment.
MSI has not sat upon their laurels, it seems. They are taking an aggressive approach to the new Lightning series of cards, implementing quite a few unique features that have not been seen on any other product before. Now the question is: did they pull it off? Throwing more features at something does not necessarily equal success. The increased complexity of a design, combined with the unknowns of the new features, could make it a failure. Just look at the R5870 Lightning for proof. That particular card trod new ground, but did so in a way that did not adequately differentiate it from reference HD 5870 designs. So what is new and how does it run? Let us dig in!
The Kepler Architecture
Join us today at 12pm EST / 9am CST as PC Perspective hosts a Live Review on the new GeForce GTX 680 graphics card. We will discuss the new GPU technology, important features like GPU Boost, talk about performance compared to AMD's lineup and we will also have NVIDIA's own Tom Petersen on hand to run some demos and answer questions from viewers. You can find it all at http://pcper.com/live!!
NVIDIA fans have been eagerly waiting for the new Kepler architecture ever since CEO Jen-Hsun Huang first mentioned it in September 2010. In the interim, we have seen the birth of a complete lineup of AMD graphics cards based on its Southern Islands architecture including the Radeon HD 7970, HD 7950, HD 7800s and HD 7700s. To the gamer looking for an upgrade it would appear that NVIDIA had fallen behind; but the company is hoping that today's release of the GeForce GTX 680 will put them back in the driver's seat.
This new $499 graphics card will directly compete against the Radeon HD 7970, and it brings quite a few "firsts" to NVIDIA's lineup. This NVIDIA card is the first desktop 28nm GPU, the first to offer a clock speed over 1 GHz, the first to support triple-panel gaming on a single card, and the first to offer "boost" clocks that vary from game to game. Interested yet? Let's get to the good stuff.
The Kepler Architecture
In many ways, the new 28nm Kepler architecture is just an update to the Fermi design that was first introduced in the GF100 chip. NVIDIA's Jonah Alben summed things up pretty nicely for us in a discussion stating that "there are lots of tiny things changing (in Kepler) rather than a few large things which makes it difficult to tell a story."
GTX 680 Block Diagram
There are a few people in the gaming industry that you simply must pay attention to when they speak. One of them is John Carmack, founder of id Software, creator of Doom, and a friend of the site. Another is Epic Games' Tim Sweeney, another pioneer in the field of computer graphics who brought us the magic of Unreal before bringing the rest of the gaming industry the Unreal Engine.
At DICE 2012, a trade show where game developers demo their wares and learn from each other, Sweeney gave a talk on the future of computing hardware. (You can see the source of my information and slides here at Gamespot.) Many pundits, media, and even developers have floated the idea that the next console generation, which we know is coming, will be the last: we will have reached the point in our computing capacity where gamers and designers are comfortable with the quality and realism provided. Forever.
Think about that a moment; has anything ever appeared so obviously crazy? Yet, in a world where gaming has seemed to regress into the handheld spaces of iPhone and iPad, many would have you believe that it is indeed the case. Companies like NVIDIA and AMD that spend billions of dollars developing new high-powered graphics technologies would simply NOT do so anymore and instead focus only on low power. Actually...that is kind of happening with NVIDIA Tegra and AMD's move to APUs, but both claim that the development of leading graphics technology is what allows them to feed the low end - the sub-$100 graphics cards, SoC for phones and tablets and more.
Sweeney started the discussion by teaching everyone a little about human anatomy.
The human eye has been studied quite extensively, and the amount of information we know about it would likely surprise you. With 120 million monochrome receptors and 5 million color receptors, the eye and brain are able to do what even our most advanced cameras cannot.
Completing the Family
When we went to Austin, Texas to sit with AMD and learn about the Radeon HD 7900 series of cards for the first time, an interesting thing happened. While the official meeting was about the performance of the Radeon HD 7970 and HD 7950, when things started to settle several AMD employees couldn't help but discuss Cape Verde (7700-series) and Pitcairn (7800-series) GPUs. In particular, the HD 7800 cards were generating a lot of excitement internally as a spiritual follow up to the wildly successful HD 5800 and HD 5700 series of cards in terms of price and performance characteristics.
So while the Radeon HD 7970 and HD 7950 are being labeled as the world's fastest GPUs, and the Radeon HD 7700 is the fastest GPU for everyone, the HD 7800s are where many of our readers will look when upgrading their machines while staying within a budget.
Be sure to check out our video review posted here and then continue on to our full, written review for all the benchmarks and analysis!!!
AMD Gets the Direct CU Treatment
In the previous roundup I covered the DirectCU II models from Asus featuring NVIDIA based chips. These boards included the GTX 580, 570, and 560 products. All of these were DirectCU II based with all the updated features that are included as compared to the original DirectCU products. With the AMD parts Asus has split the top four products into two categories; DirectCU II and the original DirectCU. When we start looking at thermal properties and price points, we will see why Asus took this route.
AMD has had a strong couple of years with their graphics chips. While they were not able to take the single-GPU performance crown in the previous generation, their products were very capable and competitive across the board and at every price point. In fact, there are some features at particular price points that make these cards very desirable in quite a few applications. Of particular note is the 2 GB of memory on the HD 6900 series cards, where the competition from NVIDIA at those price points features 1 GB or 1.25 GB. In titles such as Skyrim with the HD texture DLC enabled, the cards with smaller frame buffers start to limit performance at 1920x1080 and above due to the memory required by these higher resolution textures.
MSI's Alex Chang Speaks Up
MSI was founded in 1986 and started producing motherboards and video cards for the quickly growing PC market. Throughout the life of the company they have further diversified their offerings to include barebones systems, notebooks, networking/communication devices, and industrial products. While MSI has a nice base of products, they are still primarily a motherboard and video card company. In the past 10 years MSI has become one of the top brands in North America for video cards, and they have taken a very aggressive approach to design with these products.
I had the chance to send MSI quite a few questions concerning their video card business and how they develop their products.
What is your name, title, and how long have you worked at MSI?
My name is Bob, and I’m…. actually, I’m just Alex Chang. I’m the Associate Marketing Manager. I’ve been with the company for 2 years.
Typically how long does it take from the original reference design card release to when we can first expect to see a Twin Frozr III based card hit retail? How much longer does it take to create the “Lightning” based products?
Historically, we’ve seen the introduction of a non-reference thermal solution within 2-4 weeks of product launch. As an example, GTX580 was launched in November 2010, and by December there was already a reference PCB GTX580 w/ the Twin Frozr II cooler.
In the case of Lightning cards, the development timeframe is longer due to more R&D, validation, and procurement of components. With GTX580, the timeframe was around 6 months, but moving forward MSI is pulling in the launch timeframe of our flagship products.
Southern Islands Get Small
When AMD first started to talk to me about the upcoming Southern Islands GPUs they tried to warn me. Really they did. "Be prepared for just an onslaught of card releases for 2012," I was told. In much the same strategy the company took with the HD 6000 series of cards, the new Radeon HD 7000 cards have been trickling out, part by part, so as to make sure the name "AMD" and the brand "Radeon" are showing up as often as possible in your news feeds and on my keyboard. In late December we wrote our review of the Radeon HD 7970 3GB flagship card and then followed that up in January with a review of the Radeon HD 7950. In those briefings we were told in a general way about Cape Verde, the Radeon HD 7700 series, and Pitcairn, the Radeon HD 7800 series, but without the details of performance, specifications, or release dates. We have the answer for one more of these families now: Cape Verde.
Cape Verde is the smallest of the Southern Islands dies and falls into the sub-$175 graphics market depending on card vendors' pricing and overclocking settings. The real question we all wanted answered is what performance levels these new cards would offer and whether they could be the TRUE successor to the popular Radeon HD 5770. While the answer will take pages and pages of details to cement into place, I can say that, while it is an impressive card, I wasn't as excited as I had wanted to be.
But I am getting ahead of myself... Check out our video review right here and then keep reading on for the full evaluation!!
AMD Cape Verde - the smallest of the Southern Islands
GPU companies like to brag when they are on top - you'll see that as a recurring theme in our story today. One such case is the success of the Radeon HD 5770 mentioned above - it still sits on the throne as the most adopted DX11-capable GPU on the Steam Hardware Survey, one of our best sources of information on the general PC gamer.
While the inclusion of that card, as well as the Radeon HD 5870 and HD 5850, on this list was great for AMD a couple of years ago, the lack of a 6000-series card here shows us that users need another reason to upgrade: another card that is mass market enough (read: under $200) and offers performance advantages that really push gamers to spend that extra cheddar.
Bring in the Cape Verde GPU...
Four Displays for Under $70
Running multiple displays on your PC is becoming a trend that everyone is trying to jump on board with thanks in large part to the push of Eyefinity from AMD over the past few years. Gaming is a great application for multi-display configurations but in truth game compatibility and game benefits haven't reached the level I had hoped they would by 2012. But while gaming still has a way to go, the consumer applications for having more than a single monitor continue to expand and cement themselves in the minds of users.
Galaxy is the only NVIDIA partner that is really taking this market seriously with an onslaught of cards branded as MDT, Multiple Display Technology. Using non-NVIDIA hardware in conjunction with NVIDIA GPUs, Galaxy has created some very unique products for consumers like the recently reviewed GeForce GTX 570 MDT. Today we are going to be showing you the new Galaxy MDT GeForce GT 520 offering that brings support for a total of four simultaneous display outputs to a card with a reasonable cost of under $120.
The Galaxy MDT GeForce GT 520
Long time readers of PC Perspective already likely know what to expect based on the GPU we are using here but the Galaxy MDT model offers quite a few interesting changes.
The retail packaging clearly indicates the purpose of this card for users looking at running more than two displays. The GT 520 is not an incredibly powerful GPU when it comes to gaming but Galaxy isn't really pushing the card in that manner. Here are the general specs of the GPU for those that are interested:
- 48 CUDA cores
- 810 MHz core clock
- 1GB DDR3 memory
- 900 MHz memory clock
- 64-bit memory bus width
- 4 ROPs
- DirectX 11 support
3 NV for DCII
The world of video cards has changed considerably over the past few years. Where once we saw only "sticker versions" of cards mass produced by a handful of manufacturers, we are now seeing some really nice differentiation from the major players. While the first iterations of new cards are typically mass produced by NVIDIA or AMD and then distributed to their partners for initial sales, these manufacturers are now more consistently getting their own unique versions out to retail in record time. MSI was one of the first to put out unique designs of its own, but now we are seeing Asus become much more aggressive as well.
The DirectCU II line is Asus’ response to the growing number of original designs from other manufacturers. The easiest way to categorize these designs is that they straddle nicely the very high end and extreme products like the MSI Lightning series and those of the reference design boards with standard cooling. These are unique designs that integrate features and cooling solutions that are well above that of reference cards.
DirectCU II applies primarily to the cooling solutions on these boards. The copper heatpipes in the DirectCU II cooler are in direct contact with the GPU. These heatpipes are then routed through two separate aluminum fin arrays, each with its own fan. Each card therefore has either a dual-slot or triple-slot cooling solution with two 80 mm fans that dynamically adjust to the temperature of the chip. The second part of this is branded "Super Alloy Power," in which Asus has upgraded most of the electrical components on the board to higher specifications. Hi-C caps, proadlizers, polymer caps, and higher quality chokes round out the upgraded components, which should translate into more stable overclocked performance and a longer lifespan.
Tahiti Gets Clipped
It has been just over a month since we first got our hands on the AMD Southern Islands architecture in the form of the Radeon HD 7970 3GB graphics card. It was then a couple of long weeks of waiting for consumers to get the chance to buy that same hardware, though we had to admit that the $550+ price tags were scaring many away. Originally we were going to have both the Radeon HD 7970 and the Radeon HD 7950 in our hands before January 9th, but that didn't pan out, and instead the little brother was held in waiting a bit longer.
Today we are reviewing that sibling, the Radeon HD 7950 3GB GPU, which offers basically the same technology and feature set with a slightly diminished core and a matching, slightly diminished price. In truth I don't think that the estimated MSRP of $449 is going to capture many more hearts than the $549 price of the HD 7970 did, but AMD is hoping they can ride their performance advantage to as much profit as possible while they wait for NVIDIA to properly react.
Check out our video review right here and then continue on to our complete benchmarking analysis!!
Southern Islands Gets Scaled Back a Bit
As I said above, the Radeon HD 7950 3GB is pretty similar to the HD 7970. It is based on the same 28nm, DirectX 11.1, PCI Express 3.0, 4.31 billion transistor GPU and includes the same massive 3GB frame buffer as its older brother. The Tahiti GPU is the first of its kind in all of those facets, but here it has a few of the computational portions disabled.
If you haven't read up on the Southern Islands architecture, or the Tahiti GPU based on it, you are missing quite a bit of important information on the current lineup of parts from AMD. I would very much encourage you to head over to our Radeon HD 7970 3GB Tahiti review and look over the first three pages, as they provide a detailed breakdown of the new features and the pretty dramatic shift in design that Southern Islands introduced to the AMD GPU team.
Guess what? Overclocked.
The NVIDIA GTX 580 GPU, based on the GF110 Fermi architecture, is old but not forgotten. Released in November of 2010, it gave NVIDIA the single-GPU performance crown for more than a year before being usurped by AMD and the Radeon HD 7970 just this month. Still, the GTX 580 is a solid high-end enthusiast graphics card with widespread availability and custom-designed, overclocked models from numerous vendors, making it a viable option.
Gigabyte sent us this overclocked and custom-cooled model quite a while ago, but we had simply fallen behind with other reviews until just after CES. In today's market the card has a bit of a different role to fill - it surely won't be able to surpass the new AMD Radeon HD 7970, but can it fight the good fight and keep NVIDIA's current lineup of GPUs competitive until Kepler finally shows itself?
The Gigabyte GTX 580 1.5GB Super Overclock Card
With the age of the GTX 580 designs, Gigabyte had plenty of time to perfect their PCB and cooler design. This model, the Super Overclock (GV-N580SO-15I), comes in well ahead of the standard reference speeds of the GTX 580 but sticks to the same 1.5 GB frame buffer.
The clock speed is set at 855 MHz core and 1025 MHz memory, compared to the 772 MHz core and 1002 MHz memory clocks of the reference design. That is a very healthy core clock increase of nearly 11%, which should equate to almost as large a gap in gaming performance where the GPU is the real bottleneck.
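The delta is easy to verify from the numbers above: the core gain works out to just under 11%, while the memory bump is far more modest.

```python
# Quick check on the factory overclock quoted above: the percentage
# gain of Gigabyte's clocks over the GTX 580 reference clocks.

def overclock_percent(oc_mhz, reference_mhz):
    """Percentage increase of an overclocked speed over reference."""
    return (oc_mhz - reference_mhz) / reference_mhz * 100

core = overclock_percent(855, 772)      # core: 855 vs 772 MHz
memory = overclock_percent(1025, 1002)  # memory: 1025 vs 1002 MHz

print(round(core, 1))    # 10.8
print(round(memory, 1))  # 2.3
```

Since the core clock, not memory bandwidth, is the larger lever on this card, it is the ~11% core bump that should show up in GPU-bound benchmarks.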