Carmack Speaks

Last week we were in Dallas, Texas covering QuakeCon 2011 as well as hosting our very own PC Perspective Hardware Workshop.  While we had over 1100 attendees at the event and had a blast judging the case mod contest, one of the highlights of the event is always getting to sit down with John Carmack and pick his brain about topics of interest.  We got about 30 minutes of John's time over the weekend and pestered him with questions about the GPU hardware race, how Intel's integrated graphics (and AMD Fusion) fit into the future of PCs, the continuing debate about ray tracing, rasterization, voxels and infinite detail engines, key technologies for PC gamers like multi-display engines and a lot more!

One of our most-read articles of all time was our previous interview with Carmack, which focused much more on the ray tracing and rasterization debate.  If you have never read it, much of it is still very relevant today and it is worth reading over.

carmack201c.png

This year though John has come full circle on several things including ray tracing, GPGPU workloads and even the advantages that console hardware has over PC gaming hardware.

Continue reading to see the full video interview and our highlights from it!!

Author:
Manufacturer: MSI Computers

A Thing of Beauty

Tired of hearing about MSI’s latest video cards?  Me neither!  It seems we have been on a roll lately with the latest and greatest from MSI, but happily that will soon change.  In the meantime, we have another MSI card to go over.  This one is probably the most interesting of the group so far.  It is also very, very expensive for a single GPU product.  This card is obviously not for everyone, but there is still a market for such high-end parts.

msi_n580gtx_01.jpg

The N580GTX Lightning Xtreme Edition is the latest entry in the high-end, single GPU market.  This is an area where MSI has really been leading the way in terms of features and performance.  Their Lightning series, since the NVIDIA GTX 2x0 days, has been redefining that particular market with unique designs that offer tangible benefits over reference-based cards.  MSI initially released the N580GTX Lightning to high accolades, but with this card they have added a few significant features.

Continue reading our review of the MSI N580GTX Lightning Xtreme Edition!!

Author:
Manufacturer: General

How much will these Bitcoin mining configurations cost you in power?

Earlier this week we looked at Bitcoin mining performance across a large range of GPUs, but we had many requests for estimates of the cost of the power to drive them.  At the time we were much more interested in the performance of these configurations, but now that we have that information and have started to look at the potential profitability of doing something like this, looking at the actual real-world cost of running a mining machine 24 hours a day, 7 days a week became much more important. 

This led us to today's update, where we will talk about the average cost of power, and thus the average cost of running our 16 different configurations, in 50 different locations across the United States.  We got our data from the U.S. Energy Information Administration website, where they provide average retail electricity prices divided up by state and by region.  For our purposes today, we downloaded the latest XLS file (which has slightly more up-to-date information than the website as of this writing) and got to work with some simple math. 
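
The underlying math is simple enough to sketch.  Here is the kind of calculation involved, in miniature (the wattage and rate below are placeholder numbers for illustration, not our measured figures):

```python
def monthly_power_cost(system_watts, cents_per_kwh, hours=24 * 30):
    """Dollars to run a rig 24/7 for a 30-day month."""
    kwh = system_watts / 1000 * hours      # watts -> kilowatt-hours
    return kwh * cents_per_kwh / 100       # cents -> dollars

# Example: a hypothetical 350 watt mining rig at 11.5 cents/kWh
print(f"${monthly_power_cost(350, 11.5):.2f} per month")  # ~$28.98
```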

Here is how your state matches up:

kwh-1.jpg

kwh-2.jpg

The first graph shows the rates in alphabetical order by state, the second in order from the most expensive to the least.  First thing we noticed: if you live in Hawaii, I hope you REALLY love the weather.  And maybe it's time to look into that whole solar panel thing, huh?  Because Hawaii was SO FAR out beyond our other data points, we are leaving it out of our calculations and instead ask residents (and the curious) to basically double the numbers from one of our groupings.

Keep reading to get the full rundown on how power costs will affect your mining operations, and why it may not make sense to mine AT ALL with NVIDIA graphics cards! 

Author:
Manufacturer: General

What is a Bitcoin?

This article looking at Bitcoins and the performance of various GPUs in mining them was really a big team effort at PC Perspective.  Props go out to Tim Verry for doing the research on the process of mining and helping to explain what Bitcoins are all about.  Ken Addison did a great job churning through an allotment of graphics cards running our GUIMiner and getting the data you will see presented later.  Scott Michaud helped with some graphics and imagery, and I'm the monkey that just puts it all together at the end.

** Update 7/13/11 **  We recently wrote another piece on the cost of the power to run the Bitcoin mining operations used in this performance article.  Based on the individual price of electricity in all 50 US states, we found that the cost of the power to run some cards exceeds the value of the Bitcoin currency at today's exchange rates.  I would highly recommend you check out that story as well after giving this performance-based article a thorough reading.  ** End Update **

A new virtual currency called Bitcoin has been receiving a great deal of news fanfare, criticism and user adoption. The so-called cryptographic currency uses strong encryption to eliminate the need for trust when buying and selling goods over the Internet, in addition to a peer-to-peer distributed timestamp server that maintains a public record of every transaction to prevent double spending of the electronic currency. The aspects of Bitcoin that have caused the most criticism and the recent large rise in growth are, respectively, its inherent ability to anonymize the real life identities of users (though the transactions themselves are public) and the ability to make money by supporting the Bitcoin network in verifying pending transactions through a process called “mining”. Privacy, security, cutting out the middle man, making it easy for users to do small casual transactions without fees, and the ability to be rewarded for helping to secure the network by mining are all selling points (pun intended) of the currency.

When dealing with a more traditional and physical local currency, there is a need for both parties to trust the currency, but not much need to trust each other, as handing over cash is fairly straightforward. One does not need to trust the other person as much as if it were a check, which could bounce. Once it has changed hands, the buyer cannot go and spend that money elsewhere as it is physically gone. Transactions over the Internet, however, greatly reduce the convenience of that local currency, and due to the series of tubes’ inability to carry cash through the pipes, services like Paypal as well as credit cards and checks are likely to be used in its place. While these replacements are convenient, they are also much riskier than cash, as fraudulent charge-backs and disputes are likely to occur, leaving the seller in a bad position. Due to this risk, sellers have to factor a certain percentage of expected fraud into their prices, in addition to collecting as much personally identifiable information as possible. Bitcoin seeks to remedy these risks by bringing the convenience of a local currency to the virtual plane with irreversible transactions, a public record of all transactions, and the ability to trust strong cryptography instead of needing to trust people.

paymentpal.jpg

There are a number of security measures inherent in the Bitcoin protocol that assist with these security goals. Foremost, Bitcoin uses strong public and private key cryptography to secure coins to a user. Money is handled by a bitcoin wallet, which is a program such as the official Bitcoin client that creates public/private key pairs that allow you to send and receive money. You are further able to generate new receiving addresses whenever you want within the client. The wallet.dat file is the record of all your key pairs and thus your bitcoins; it contains 100 address/key pairs (though you are able to generate new ones beyond that). Then, to send money, one only needs to sign the bitcoins over to the recipient’s public key using their own private key. This creates a chain of transactions that are secured by these public and private key pairs from person to person. Unfortunately this cryptography alone is not able to prevent double spending: Person A could sign the bitcoin with his private key over to Person B, but could also do the same to Person C and so on. This issue is where the peer-to-peer and distributed computing aspects of the bitcoin protocol come into play. By using a peer-to-peer distributed timestamp server, the bitcoin protocol creates a public record of every transaction that prevents double spending of bitcoins. Once the bitcoin has been signed to a public key (receiving address) with the user’s private key, and the network confirms this transaction, the bitcoins can no longer be spent by Person A; the network has confirmed that the coin belongs to Person B now, and only they can spend it using their private key.
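
That confirmation work is what "mining" actually is: a brute-force hashing race.  A toy sketch gives the flavor of the loop (heavily simplified; real Bitcoin double-SHA256 hashes an 80-byte block header and compares against a full 256-bit network target, and the GPUs tested later run this loop hundreds of millions of times per second):

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Brute-force a nonce until the double-SHA256 digest starts with
    `difficulty` zero hex digits (a toy stand-in for Bitcoin's real
    target comparison)."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(block_data + str(nonce).encode()).digest()
        ).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

print(mine(b"example block header", 5))  # a few seconds on one CPU core
```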

Keep reading our article that details the theories behind Bitcoins as well as the performance of modern GPUs in mining them!  

Author:
Manufacturer: AMD

Architecture Details

Introduction

Just a couple of weeks ago we took the cover off of AMD's Llano processor for the first time in the form of the Sabine platform: Llano's mobile derivative.  In that article we wrote in great detail about the architecture and how it performed in the notebook market - it looked very good when compared to the Intel Sandy Bridge machines we had on hand.  Battery life is one of the most important aspects of evaluating a mobile configuration, with performance and features taking a back seat the majority of the time.  In the world of the desktop though, that isn't necessarily the case. 

Desktop computers, even those meant for a low-cost and mainstream market, don't find power consumption as crucial and instead focus almost exclusively on the features and performance of the platform.  There are areas where power and heat are more scrutinized, such as the home theater PC market and small form-factor machines, but in general you need to hit a home run with performance per dollar in this field.  Coming into this article we had some serious concerns about Llano and its ability to properly address this. 

How did our weeks with the latest AMD Fusion APU turn out?  There is a ton of information that needs to be addressed, including a look at the graphics performance in comparison to Sandy Bridge, how the quad-core "Stars" x86 CPU portion stands up to modern options, how the new memory controller affects graphics performance, Dual Graphics, power consumption and even a whole new overclocking methodology.  Keep reading and you'll get all the answers you are looking for.

Llano Architecture

We spent a LOT of time in our previous Llano piece discussing the technical details of the new Llano Fusion CPU/GPU architecture, and the fundamentals are essentially identical from the mobile part to the new desktop releases.  Because of that, much of the information here is going to be a repeat, with some minor changes in the form of power envelopes, etc.  

slide01.jpg

The platform diagram above gives us an overview of what components will make up a system built on the Llano Fusion APU design.  The APU itself is made up of 2 or 4 x86 CPU cores that come from the Stars family released with the Phenom / Phenom II processors.  AMD does introduce a new Turbo Core feature, which we will discuss later, that is somewhat analogous to what Intel has done with Turbo Boost on its processors.

Continue reading our AMD A-series Llano desktop review for all the benchmarks and information!

Author:
Manufacturer: AMD

Introducing the AMD FSA

At AMD’s Fusion 11 conference, we were treated to a nice overview of AMD’s next generation graphics architecture.  With the recent change in their lineup from the previous VLIW-5 setup (which powered their graphics chips from the Radeon HD 2900 through the latest “Barts” chip running the HD 6800 series) to the new VLIW-4 (HD 6900), many were not expecting much from AMD in terms of new and unique designs.  The upcoming “Southern Isles” parts were thought to be based on the current VLIW-4 architecture, featuring more performance and a few new features thanks to the die shrink to 28 nm.  It turns out that speculation was wrong.

amd_fsa01.jpg

In late Q4 of this year we should see the first iteration of this new architecture, which was detailed today by Eric Demers.  The overview covered some features that will not make it into this upcoming product, but eventually it will all be added in over the next three years or so.  Historically speaking, AMD has placed graphics first, with GPGPU/compute as the secondary functionality of their GPUs.  While we have had compute abilities since the X1800/X1900 series of products, AMD has not been as aggressive with compute as its primary competition.  From the G80 GPUs and beyond, NVIDIA has pushed compute harder and farther than AMD has.  With its mature CUDA development tools and the compute-heavy Fermi architecture, NVIDIA has been a driving force in this particular market.  Now that AMD has released two APU-based products (Llano and Brazos), they are starting to really push OpenCL, DirectCompute, and the recently announced C++ AMP.

Continue reading for all the details on AMD's Graphics Core Next!

Author:
Manufacturer: MSI Computers
Tagged:

HAWK Through the Ages

Have I mentioned that MSI is putting out some pretty nice video cards as of late?  Last year I was able to take a look at their R5770 HAWK, and this year so far I have taken a look at their HD 6950 OC Twin Frozr II and the superb R6970 Lightning.  All of these cards have impressed me to a great degree.  MSI has really excelled in their pursuit of designing unique cards which often stand out from the crowd without breaking the budget, or even being all that much more expensive than their reference-design brothers.

n560gtxtih_01.jpg

MSI now has two ranges of enthusiast cards that are not based on reference designs.  At the high end we have the Lightning products, which cater to the high performance/LN2 crowd.  These currently include the R6970 Lightning, the N580GTX Lightning, and the newly introduced N580GTX Lightning Xtreme (3 GB of frame buffer!).  The second tier of custom-designed enthusiast parts is the HAWK series.  These are often much more affordable than the Lightning cards, but they carry over a lot of the design philosophies of the higher end parts.  They are aimed at the sweet spot, but will carry a slightly higher price tag than the reference products.  This is not necessarily a bad thing, as for the extra money a user will get extra features and performance.

Author:
Manufacturer: Lucid

AMD and Virtual Vsync for Lucid Virtu

Lucid has grown from a small startup that we thought might have a chance to survive in the world of AMD and NVIDIA into a major player in the computing space.  Its latest and most successful software architecture was released into the wild with the Z68 chipset as Lucid Virtu - software that enables users to take advantage of both the performance of a discrete graphics card and the intriguing features of the integrated graphics of Intel's Sandy Bridge CPU. 

While at Computex 2011 in Taiwan we met with the President of Lucid, Offir Remez, who was excited to discuss a few key new additions to the Virtu suite in the new version, titled "Virtu Universal".  The big addition is support for AMD platforms, including current 890-based integrated graphics options as well as the upcoming AMD Llano (and more) APU CPU/GPU combinations.  It is hard to see a reason for Virtu on current AMD platforms like the 890 series, as the integrated graphics there offer no compelling features on that front, but with the pending release of Llano you can be sure that AMD is going to integrate some of its own interesting GPGPU features that will compete with the QuickSync technology of Sandy Bridge, among other things.  To see Lucid offer support for AMD this early is a good sign for day-of availability on the platform later this year.

lucid01.jpg

The second pillar of Lucid's announcement with Virtu Universal was the addition of support for the mobile space, directly competing with NVIDIA's and AMD's own hardware-specific switchable graphics solutions.  By far the most successful thus far has been NVIDIA's Optimus, which has filtered its way into basically all major OEMs and most of the major notebook releases that include both integrated and discrete graphics.  The benefit that Lucid offers is that it will work with BOTH Intel and AMD platforms, simplifying the product stack quite a bit.  

Read on for more information and some videos of Virtual Vsync in action!

Author:
Manufacturer: AMD
Tagged: v7900, v5900, firepro, amd

The FirePro Products get Cayman

Introduction

(Thanks to Steve Grever for providing insight on the product placement and attending the Professional Graphics Editor's Day in Austin, TX for us!)

On May 11, AMD invited a handful of technology journalists and hardware reviewers to its Northern Islands FirePro Tech Day to unveil a pair of new professional graphics cards – the V5900 and V7900. We were under an NDA on the new GPUs at that time, but now that the gag order has been lifted, we can finally give our readers an in-depth look at these mid-range and high-end graphics card offerings sporting features like Eyefinity, Geometry Boost and PowerTune technologies.

Slide03.jpg

Sandeep Gupte, AMD’s product management director, introduced the new graphics cards during the one-day event and stated they will “deliver productivity and performance to professionals regardless of where they are working.”  To that end, AMD is committed to providing graphics solutions beyond professional workstations, including mobile workstations, tablets and thin clients, to increase productivity and performance across various form factors and operating systems.

Slide04.jpg

Last year’s FirePro lineup helped AMD increase its unit share in the professional graphics market by six points.  That increase puts the company at about 16 percent overall, supported by sales with Tier 1 OEMs like HP and Dell.  This is a marked improvement over the single-digit shares AMD saw in this market back in 2007.

Hit that "Read More" link below for the full story!

Author:
Manufacturer: MSI Computers

MSI R6970 Lightning: High Speed, Low Drag

MSI has been on a tear as of late with their video card offerings.  The Twin Frozr II and III series have all received positive reviews, people seem to be buying their products, and the company has taken some interesting turns in how they handle overall design and differentiation in a very crowded graphics marketplace.  This did not happen overnight, and MSI has been a driving force in how the video card business has developed.

r6970_box1.jpg

Perhaps a company’s reputation is best summed up by what the competition has to say about them.  I remember well back in 1999 when Tyan was first considering going into the video card business.  Apparently they were going to release an NVIDIA TNT2 based card to the marketplace and attempt to work their way upwards with more offerings.  That particular project was nixed by management.  A few years later Tyan attempted the graphics business again, this time with some ATI Radeon 9000 series cards.  Their biggest seller was the 9200, but they also offered the Tachyon 9700 Pro.  In talking with Tyan about where they were, the marketing guy simply looked at me and said, “You know, if we had pursued graphics back in 1999 we might be in the same position that MSI is in now.”

Author:
Manufacturer: EVGA
Tagged: nvidia, gtx 460, gpu, evga, 2win

A Card Unlike Any Other

Introduction

In all honesty, there aren't many graphics cards that really get our attention these days.  There are GPUs that do - releases like the Radeon HD 6990 and the GeForce GTX 590 get our juices flowing to see what new performance and features they can offer.  But in terms of individual vendor-specific designs, there are very few that make us "perk up" much more than just seeing another reference card come across the test bed.  

The ASUS ARES dual-5870 card was probably the last one to do that - and for $1200 it had better!  EVGA is ready with another card that definitely made us interested, though, and for a much more reasonable price of $419 or so.

evga3.png

The EVGA GeForce GTX 460 2WIN is a custom-built card that combines a pair of GTX 460 1GB GPUs on a single PCB to create a level of performance and pricing that we found unmatched in the market today.  Even better, the feature set improves as well by utilizing the power of both GPUs in an SLI configuration.  

Read on and see why the GTX 460 2WIN might be my new favorite graphics card!

Author:
Manufacturer: AMD
Tagged: turks, radeon, htpc, amd, 6670, 6570

Introduction and the new Turks GPU

Introduction

It seems that the graphics card wars have really heated up recently.  With the release of the Radeon HD 6990 4GB and the GeForce GTX 590 3GB it might seem that EVERYONE is spending $600 on their next GPU purchase.  Obviously that isn't the case, and the world of the sub-$100 card, while way less sexy, is just as important.

slide01.jpg

This week AMD has announced a slew of new options to address this market, including the Radeon HD 6670, HD 6570 and even the HD 6450.  Topping out at $99, the Radeon HD 6670 offers good performance, strong HTPC features and low power consumption.  NVIDIA's competition is still reasonable though, as we see when comparing how the now price-dropped GeForce GTS 450 fits into the stack.

Author:
Manufacturer: AMD
Tagged:

Barts sees a lower price

The Radeon HD 6790 1GB is a new $150 graphics card based on the same Barts architecture found in the Radeon HD 6800-series of cards. Obviously there are fewer shader processors and lower clock speeds, though the 256-bit memory bus remains for better memory throughput. The goal is obvious: unseat the GTX 550 Ti (and the GTX 460) from NVIDIA as the best option for this price segment. Does it succeed?

Introduction

The hits just keep coming.  In a time of the year that is traditionally a bit slower (after CES but before Computex), the world of the graphics card continues to evolve and iterate at a blinding pace.  Since January we have seen the release of the GeForce GTX 560 Ti and the Radeon HD 6950 1GB cards, the Radeon HD 6990 4GB dual-GPU behemoth, the GeForce GTX 550 Ti budget card and even NVIDIA's new GeForce GTX 590 dual-GPU card.  It has been a busy time for our GPU testing labs and the result is a GPU market that is both stronger and more confusing than ever. 

Today AMD is releasing another competitor in the ~$150 graphics card market with the Radeon HD 6790 1GB, based on the Barts architecture we first tested with the HD 6800-series in October 2010.  When NVIDIA released the GTX 550 Ti into this same price field, AMD was dependent on the higher priced HD 6850 and the previous generation HD 5770 to hold its ground, both of which did so quite well.  The GTX 550 Ti was overpriced at launch and is only now starting to see the correction needed to make it a viable selection.

AMD has different plans though, and the Radeon HD 6790 will enter the $150 space as the performance and pricing leader if neither NVIDIA's GeForce GTX 550 Ti nor its GTX 460 768MB can make a stand.

Radeon HD 6790 Specs and Reference Design

The new Radeon HD 6790 will sit in the current product stack from AMD just below the HD 6850 and above the currently available HD 5770 offerings.  For those that might have already done their research: yes, there is indeed a Radeon HD 6770 being sold by AMD, but it is only offered to OEMs for now while end users are left without it. 

According to this roadmap from AMD, the HD 6790 will be the only 6700-series selection from AMD for the retail channel until at least Q3 of 2011. 

Author:
Manufacturer: NVIDIA
Tagged:

1024 CUDA cores on a card

The "Top Secret" GTX 590 turns out to be both better and worse than the Radeon HD 6990 4GB depending on some vary particular use cases. In realm of $700 graphics cards, this is definitely something you want to pay attention to. But I am getting ahead of myself; let's first dive into the design on the GTX 590 and see what's under the hood.The High-End Battle Commences

Just a couple of weeks ago AMD released the Radeon HD 6990 4GB card, the first high-end dual-GPU graphics card we have seen released in quite a while.  Before that, the Radeon HD 5970 had been sitting on the throne as the fastest single card for even longer - the GeForce GTX 295 was NVIDIA's last attempt at the crown.  Even before we got our hands on the HD 6990 though, we were told by various NVIDIA personnel to "wait and see what we have in store."  Well, we have done so, and today we are here to review and discuss NVIDIA's entry into the dual-GPU realm for 2011, the GeForce GTX 590 3GB.

The "Top Secret" GTX 590 turns out to be both better and worse than the Radeon HD 6990 4GB depending on some vary particular use cases.  In realm of $700 graphics cards, this is definitely something you want to pay attention to.  But I am getting ahead of myself; let's first dive into the design on the GTX 590 and see what's under the hood.

Note: If you would like to check out our video comparison between the GeForce GTX 590 and the Radeon HD 6990 before moving on, please do!

Author:
Manufacturer: MSI
Tagged:

From the HD 5770 HAWK to Now

MSI hits the scene with their own version of the AMD HD 6950. It features the Twin Frozr II cooling solution and a price point slightly above where most reference HD 6950s sit. When combined with MSI's Afterburner software, this card becomes an interesting tool in a gamer's arsenal. We find out how it runs when combined with the Catalyst 11.4 Preview Driver, which delivers some significant improvements in performance for the Cayman family of chips.

Last year I reviewed the MSI HD 5770 HAWK video card, and I came away impressed by the engineering that MSI brought to the table.  The card was quiet, it was efficient, it didn’t build up any significant levels of heat, and it was pretty affordable as compared to a bone-stock HD 5770 based on the reference design.  The board could also overclock.  It was a budget enthusiast board that wouldn’t empty the pocket, but still gave a lot of DX11 bang for the buck.

Then on the other hand we had the MSI HD 5870 Lightning.  This was a card that had a lot of promise.  It had a custom PCB design with high end power circuitry, quality components, and the Twin Frozr fan design.  All of this came to naught.  The board would not overclock any further than the reference HD 5870 that we had seen for some months before, and in fact it appeared to pull a little more power at the same speeds as a reference board.  This was almost the exact polar opposite of the HD 5770 HAWK.

The product I am looking at today is an interesting hybrid from MSI.  MSI has taken the stock HD 6950 reference PCB, populated it with slightly higher rated components (though not up to their “Military Class” standards), and put on the Twin Frozr II cooling solution.  This is more in line with the reference version of the HD 6950, but the addition of better cooling and advanced fan profiles gives it a boost above the reference card, without going into the rarefied air of producing another “Lightning” type of product.  This has allowed MSI to get a differentiated product out in fairly short order, and still give consumers something extra to potentially base their buying decision on.

Author:
Manufacturer: AMD
Tagged:

What all that extra power gets you

The Leftovers

Just a couple of weeks back AMD released the new Radeon HD 6990 4GB graphics card to the world and it was easily crowned the king of the GPU world.  With performance that beat out AMD's own Radeon HD 5970 and walked past the single-GPU GeForce GTX 580 1.5GB from NVIDIA, the HD 6990 offered the most performance in the smallest space you could buy - and for a hefty $699 MSRP.  (Note that they are selling for more than that as of today...)

One of the interesting features of the card was a unique hardware switch on the top of the card, used to toggle between the standard configuration (an 830 MHz clock rate within a 375 watt power rating) and a higher-voltage, higher-clock mode with the ability to breach the 375 watt limit set by the PCI Express standard. 

Along with the move from an 830 MHz core clock to an 880 MHz core clock (which by itself wouldn't really be notable), the HD 6990 moves from a stock voltage of 1.175v to a slightly higher 1.2v for additional overclocking headroom.  In conjunction with this, the PowerTune implementation (which uses hardware to limit maximum power consumption levels) gets tweaked to allow for more power consumption.  This is good news for overclockers again.

Here is my quote from the original HD 6990 story:

When you move that BIOS switch on the HD 6990 from the standard setting to the overclocked setting, you aren't just changing the clock speed of the GPU but you are also changing the default settings for PowerTune.  Instead of a target load power consumption of about 375 watts, the overclocked card will be able to target as high as 450 watts using some updated and improved circuitry on the board.  It is worth noting though that AMD is forced to make this 450 watt option an "overclocked" setting because it exceeds the power draw limit of the PCI Express slot and associated connectors and would cause a fit for vendors attempting to sell systems using the HD 6990 to consumers.  Enthusiasts that buy this card themselves though will have that option, and we are glad that AMD continues to support readers like ours by enabling this type of thing.

Unfortunately, because of some time constraints, we didn't get to play around with this overclocked setting originally, but today we rectify that situation. 

In our story today you will see a collection of benchmarks, all run at the 2560x1600 resolution that actually stresses the HD 6990, comparing the default 830/1200 speeds to the automatically overclocked settings of 880/1250 that result from flipping that overclocking switch.  Though I realize that not many users have 30-inch 2560x1600 displays, the higher pixel count should also be representative of performance scaling and changes on multi-display Eyefinity configurations. 
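
Before diving in, the clock bumps alone set a ceiling on what to expect (a quick back-of-the-envelope calculation, not a measured result):

```python
core_gain = 880 / 830 - 1    # ~6.0% higher core clock
mem_gain = 1250 / 1200 - 1   # ~4.2% higher memory clock
print(f"core: +{core_gain:.1%}, memory: +{mem_gain:.1%}")
# GPU-bound tests should top out around a 4-6% frame rate improvement
```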

After those tests, you will see our experiences with additional overclocking attempts through AMD's Overdrive software in the Catalyst Control Center.

Our testing configuration was the same as all of our recent GPU articles:

  • ASUS P6X58D Premium Motherboard
  • Intel Core i7-965 @ 3.33 GHz Processor
  • 3 x 2GB Corsair DDR3-1333 MHz Memory
  • Western Digital VelociRaptor 600GB HDD
  • Corsair Professional Series 1200w PSU

Author:
Manufacturer: MSI
Tagged:

Another Fermi debuts

Introduction

It is the inevitable march of technology - we see a new GPU released at the high-end of the price spectrum and some subset of it will find its way to the low-end.  It could be merely days apart, or it could be months, as we see here with the GTX 580 release coming way back in November of 2010.  The slow drizzle of cards in this series started with the 580 and 570, based on the same Fermi architecture as the GTX 400 cards (with some improvements in efficiency), continued with the GTX 560 Ti in January and with the GTX 550 Ti that we are seeing today.

But does this new low cost option from NVIDIA stack up well against competition from AMD or from their own previous designs?  Let's first find out the basic specifications of the GPU and dive into the benchmarks.

The GeForce GTX 550 Ti GPU

The GeForce GTX 550 Ti (built around the GPU dubbed GF116) continues the trend NVIDIA has perfected of taking large GPUs and shrinking them down to fit into different price segments, in this case the ~$150 mark.  While the GTX 580 is a beast of silicon with 512 shader cores and a 384-bit memory bus to keep it fed, the GTX 560 Ti was shrunk to 384 cores and a more manageable 256-bit memory bus.

The new GTX 550 Ti will feature half as many cores at 192, with a matching 192-bit memory bus.  You might remember that the GTS 450 (which rested at a similar price point) also came with 192 shader processors, but only a 128-bit memory interface for its 1GB of GDDR5 memory.  This time around, the added width and higher clock speeds give the GTX 550 Ti a much improved memory system.

In terms of pure memory bandwidth, the GTX 550 Ti provides 70% more than the previous generation, which is always good news for gamers on a budget; the memory runs at 1025 MHz in the reference designs compared to the 900 MHz of the GTS 450. 
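
That 70% figure falls straight out of the specifications (a quick sanity check, assuming GDDR5's usual 4x effective data rate per pin):

```python
def bandwidth_gbs(mem_clock_mhz, bus_width_bits):
    """Peak memory bandwidth: effective data rate times bus width."""
    return mem_clock_mhz * 4 * bus_width_bits / 8 / 1000  # GB/s

gts_450 = bandwidth_gbs(900, 128)    # ~57.6 GB/s
gtx_550 = bandwidth_gbs(1025, 192)   # ~98.4 GB/s
print(f"{gtx_550 / gts_450 - 1:.0%} more bandwidth")  # ~71%
```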

Speaking of those reference specifications, here they are.  At a 900 MHz core clock, the GTX 550 Ti will without a doubt be faster than the GTS 450 (which ran at 783 MHz) - but if there were any other outcome we would be completely perplexed.  The real question is how it will fare against other similarly priced components that exist today. 

The 116 watt power consumption of the GTX 550 Ti comes in at 10 watts higher than what the reference GTS 450 cards were rated at.

NVIDIA is confident that the performance edge the GTX 550 Ti offers will make it the new best option for gamers looking for a ~$150 graphics card.  Above you can see the performance improvements associated with the clock increases from the GTS 450 to the GTX 550 Ti as well as those associated with the 192-bit memory bus interface. 
 
Author:
Manufacturer: AMD
Tagged:

Antilles Architecture and Design

The AMD Radeon HD 6990 4GB card has been known to the media and even gamers since the first announcements at the Cayman launch last year, but finally today we are able to discuss the technology behind it and the gaming performance it will provide users willing to shell out the $700 it takes to acquire one. Stop in and see if your mortgage is worth this graphics card!

"Look at the size of that thing!"
-Wedge Antilles, in reference to the first Death Star

Graphics cards that are this well endowed don't come along very often; the last was the Radeon HD 5970 from AMD back in November of 2009.  In a world where power efficiency is touted as a key feature, it has become almost a stigma to have an add-in card in your system that might pull 350-400 watts of power.  Considering we were just writing about a complete AMD Fusion platform that used 34 watts IN TOTAL under load, it is an easy task to put killer gaming products like the HD 6990 in an unfair and unreasonable light. 

But we aren't those people.  Do most people need a $700, 400 watt graphics card?  Nope.  Do they want it though?  Yup.  And we are here to show it to you.

A new take on the dual-GPU design

Both AMD and NVIDIA have written this story before: take one of your top level GPUs and double them up on a single PCB or card design to plug into a single PCI Express slot and get maximum performance.  CrossFire (or SLI) in a single slot - lots to like about that. 

The current GPU lineup paints an interesting picture with the Fermi-based GTX 500 series from NVIDIA and the oddly segregated AMD HD 6800 and HD 6900 series of cards.  Cayman, the redesigned architecture AMD released as the HD 6970 and HD 6950, brings a lot of changes to the Evergreen design used in previous cards.  It has done fairly well in the market though it didn't improve the landscape for AMD discrete graphics as much as many had thought it would and NVIDIA's graphics chips have remained very relevant. 

With the rumors swirling about a new dual-GPU option from AMD there was some discussion on whether it would be an HD 6800 / Evergreen based design or an HD 6900 / Cayman contraption.  Let's just get that mystery out of the way:

With the VLIW4 microarchitecture, we are absolutely seeing a dual Cayman card, with a surprisingly high clock speed of 830 MHz out of the gate and lots of headroom for the overclocker in all of us.  There are 1536 stream processors per GPU for a total of 3072 and a raw computing power of more than 5 TeraFLOPs.  This is analogous to the HD 6970 GPU, which shares the 1536 shader count but runs at a clock rate of 880 MHz.

The memory runs a bit slower as well, at 5.0 Gbps (versus 5.5 Gbps on the HD 6970), but we are still getting a full 2GB per GPU for a grand-spanking-total of 4GB on this single card.  Load power on the board is rated at "<375 watts" and just barely makes the budget for PCI Express based solutions with the provided dual 8-pin power connectors. 
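
Those headline numbers check out with simple arithmetic (each VLIW4 stream processor can retire a multiply-add, counted as 2 FLOPs, per clock; the 256-bit per-GPU bus width is Cayman's and is not listed above):

```python
stream_processors = 3072      # 1536 per GPU x 2
core_clock_ghz = 0.830
tflops = stream_processors * 2 * core_clock_ghz / 1000
print(f"{tflops:.2f} TFLOPs peak")                # ~5.10 TFLOPs

mem_rate_gbps, bus_width_bits = 5.0, 256
print(f"{mem_rate_gbps * bus_width_bits / 8:.0f} GB/s per GPU")  # 160 GB/s
```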

You might remember that AMD introduced a dual-BIOS switch with the HD 6900 cards that would allow users to easily revert back to the original BIOS and settings should their overclocking attempts take a turn for the worse.  For this card though, they are taking a slightly different approach by having the switch pull duty as a direct overclocking option, pushing the clock frequency up from 830 MHz to 880 MHz.  That might not seem like a dramatic change (and it isn't), but more noticeable is the change in voltage on the GPUs (going from 1.12v to 1.175v) and what that does to the power consumption and PowerTune options on the card for further tweaking.  More on that below.
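
How much extra power should that switch pull?  The classic dynamic-power rule of thumb (power scales roughly linearly with frequency and with the square of voltage) gives a ballpark estimate, not a measurement:

```python
clock_scale = 880 / 830              # +6.0% core clock
voltage_scale = (1.175 / 1.12) ** 2  # voltage counts twice
print(f"~{clock_scale * voltage_scale - 1:.0%} more GPU power draw")  # ~17%
```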
 
Author:
Manufacturer: General
Tagged:

The eternal debate

Without a doubt, one of the most frequent questions we get here at PC Perspective, on the PC Perspective Podcast or even This Week in Computer Hardware, is "do I need to upgrade my graphics card yet?"  The problem is that this question has very different answers based on your use cases, what games you like to play or are planning on playing in the future, what other hardware is in your system, etc., and thus it can be a very complicated situation.

Author:
Manufacturer: NVIDIA
Tagged:

The Galaxy GeForce GTX 560 Ti GC

The GeForce GTX 560 Ti GPU was released to the world earlier this week and we have a handful of retail options in the lab to poke with the testing stick and see what turns up.  With cards from Galaxy, MSI and Palit, we examine how much performance gain you'll get from their stock overclocks as well as how high we can push the GPU with the improved coolers.