Author:
Manufacturer: Galaxy

Galaxy Continues the MDT Push

One of the key selling points for the AMD Radeon series of graphics cards over the last few generations has been Eyefinity - the ability to run more than two displays off of a single card while also allowing for 3+ display gaming configurations.  NVIDIA-based solutions have required a pair of GPUs running in SLI for this functionality, either standard SLI or "SLI-on-a-card" solutions like the GTX 590. 

However, another solution has appeared from Galaxy, an NVIDIA partner that has created a series of boards with the MDT moniker - Multi-Display Technology.  Using a separate on-board chip, the company has created GTX 560 Ti, GTX 570 and GTX 580 cards that can output to 4 or 5 monitors using only a single NVIDIA GPU, cutting down on costs while offering a feature that no other single-GPU solution can match.

12.jpg

Today we are going to be reviewing the Galaxy GeForce GTX 570 MDT X4 card that promises 4 display outputs and a triple-panel seamless gaming surface option for users that want to explore gaming on more than a single monitor inside the NVIDIA ecosystem.  

Continue reading our review of the Galaxy GeForce GTX 570 MDT X4 1.25GB Graphics Card!!

Author:
Manufacturer: NVIDIA

A Temporary Card with a Permanent Place in Our Heart

Today NVIDIA and its partners are announcing availability of a new graphics card that bridges the gap between the $230 GTX 560 Ti and the $330 GTX 570 currently on the market.  The new card promises to offer performance right between those two units with a price to match but with a catch: it is a limited edition part with expected availability only through the next couple of months.

When we first heard rumors about this product back in October I posited that the company would be crazy to simply call this the GeForce GTX 560 Ti Special Edition.  Well...I guess this makes me the jackass.  This new ~$290 GPU will be officially called the "GeForce GTX 560 Ti with 448 Cores". 

Seriously.

The GeForce GTX 560 Ti 448 Core Edition

The GeForce GTX 560 Ti with 448 cores is actually not a GTX 560 Ti at all and in fact is not even built on a GF114 GPU - instead we are looking at a GF110 GPU (the same used on the GeForce GTX 580 and GTX 570 graphics cards) with another SM disabled.  

block_580.jpg

GeForce GTX 580 Diagram

The above diagram shows a full GF110 GPU sporting 512 CUDA cores and the full 16 SMs (streaming multiprocessors) along with all the bells and whistles that go along with that $450 card.  This includes a 384-bit memory bus and a 1.5 GB frame buffer, all of which adds up to what is still the top performing single graphics card on the market today.  

Continue reading our review of the GeForce GTX 560 Ti 448 Core Graphics Card!!

Author:
Manufacturer: Electronic Arts

Introduction, Campaign Testing

Introduction

bf3iontro.jpg

As you might have noticed, we’re a bit excited about Battlefield 3 here at PC Perspective. It promised to pay attention to what PC gamers want, and shockingly, it has come through. Dedicated servers and huge multiplayer matches are supported, and the browser-based interface is excellent.

If we’re honest, a lot of our hearts have been stirred simply by the way the game looks. There aren’t many titles that really let a modern mid-range graphics card stretch its legs, even at 1080p resolution. Battlefield 3, however, can be demanding - and it looks beautiful. Even with the presets at medium, it’s one of the most attractive games ever.

But what does this mean for laptops? As the resident laptop reviewer at PC Perspective, I know that gaming remains a challenge. The advancements over the last few years have been spectacular, but even so, most of the laptops we test can’t run Just Cause 2 at a playable framerate even with all detail set to low and a resolution of just 1366x768. 

To find out if mobile gamers were given consideration by the developers of Battlefield 3, I installed the game on three different laptops. The results only go to show how far mobile gaming has come, and how far it still has to go.

Continue reading our article on mobile Battlefield 3 performance!!

Author:
Manufacturer: EVGA

EVGA Changes the Game Again

Introduction

Dual-GPU graphics cards are becoming an interesting story.  While both NVIDIA and AMD have offered their own reference dual-GPU designs for quite some time, it is the custom-built models from board vendors like ASUS and EVGA that really pique our interest because of their unique nature.  Earlier this year EVGA released the GTX 460 2Win card, the world's first (and only) graphics card with a pair of GTX 460 GPUs on-board.  

ASUS has released dual-GPU options as well, including the ARES dual Radeon HD 5870 last year and the MARS II dual GTX 580 just this past August, but they were both prohibitively rare and expensive.  The EVGA "2Win" series - which we can now call a series since there are two of them - is still expensive but much more in line with the performance per dollar of the rest of the graphics card market.  When the company approached us last week about the new GTX 560 Ti 2Win, we jumped at the chance to review it.

The EVGA GeForce GTX 560 Ti 2Win 2GB

The new GTX 560 Ti 2Win from EVGA follows directly in the footsteps of the GTX 460 model - we are essentially looking at a pair of GTX 560 Ti GPUs on a single PCB running in SLI multi-GPU mode.  Clock speeds, memory capacity, performance - it should all be pretty much the same as if you were running a pair of GTX 560 Ti cards independently.

01.jpg

Just as with the GTX 460 2Win, EVGA is the very first company to offer such a product.  NVIDIA didn't design a reference platform and pass it along to everyone like they did with the GTX 590 - this is all EVGA.

Continue reading our review of the EVGA GeForce GTX 560 Ti 2Win!!!

Author:
Manufacturer: Various

The Alienware M17x Giveth

Mobile graphics cards are really a different beast than the desktop variants.  Despite having similar names and model numbers, the specifications vary greatly: the GTX 580M isn't equivalent to the GTX 580, and the HD 6990M isn't even a dual-GPU product.  Also, getting the opportunity to do a direct head-to-head comparison is almost always a tough task thanks to the notebook market's penchant for single-vendor SKUs.  

Over the past week or two, I was lucky enough to get my hands on a pair of Alienware M17x notebooks, one sporting the new AMD Radeon HD 6990M discrete graphics solution and the other with the NVIDIA GeForce GTX 580M.  

01.jpg

AMD Radeon HD 6990M on the left; NVIDIA GeForce GTX 580M on the right

Also unlike the desktop market, the time from the announcement of a new mobile GPU product to when you can actually BUY a system including it tends to be pretty long.  Take the two GPUs we are looking at today for example: the HD 6990M launched in July and we are only just now seeing machines ship in volume; the GTX 580M launched back in June.

Well, problems be damned, we had the pair in our hands for a few short days and I decided to put them through the wringer in our GPU testing suite, adding Battlefield 3 in for good measure as well.  The goal was to determine which GPU was actually the "world's fastest", as both companies have claimed to be.

Continue reading our comparison of the GeForce GTX 580M and Radeon HD 6990M mobility GPUs!!

Author:
Manufacturer: NVIDIA

You don't have 3D Vision 2? Loser.

In conjunction with GeForce LAN 6, currently taking place on the USS Hornet in Alameda, NVIDIA is announcing an upgrade to its lineup of 3D Vision technologies.  Originally released back in January of 2009, 3D Vision was one of the company's grander attempts to change the way PC gamers, well, game.  Unfortunately for NVIDIA and the gaming community, running a 3D Vision setup required a new, much more expensive display as well as a pair of glasses that originally ran $199.

While many people, including myself, were enamored with 3D technology when we first got our hands on it, the novelty kind of wore off and I found myself quickly back on the standard panels for gaming.  The reasons were difficult to discern at first but it definitely came down to some key points:

  • Cost
  • Panel resolution
  • Panel size
  • Image quality

The cost was obvious - having to pay nearly double for a 3D Vision capable display just didn't fly for most PC gamers, and the need to purchase $200 glasses on top of that made it even less likely that you would plop down the credit card.  Initial 3D Vision ready displays, while also being hard to find, were limited to a resolution of 1680x1050 and were only available in 22-in form factors.  Obviously if you were interested in 3D technology you were likely a discerning gamer, and running at lower resolutions would be less than ideal.  

09.jpg

The new glasses - less nerdy?

Yes, 24-in and 1080p panels did arrive in 2010, but by then much of the hype surrounding 3D Vision had worn off.  To top it all off, even if you did adopt a 3D Vision kit of your own, you realized that the brightness of the display was basically halved when operating in 3D mode - with one shutter of your glasses closed at any given time, you only receive half the total light output from the screen, leaving the image quality kind of drab and washed out.  

Continue reading our preview of NVIDIA 3D Vision 2 technology!!

Author:
Manufacturer: id Software

RAGE Performs...well

RAGE is not as dependent on your graphics hardware as it is on your CPU and storage system (which may be an industry first); we will discover the reason why when talking about the texture pop-up issue on the next page.

Introduction

The first game designed by id Software since the release of Doom 3 in August of 2004, RAGE has a lot riding on it.  Not only is it the introduction of the idTech 5 game engine, it also culminates more than 4 years of development and is the first new IP from the developer since the creation of Quake.  And since the first discussions and demonstrations of Carmack's new MegaTexture technology, gamers have been expecting a lot as well. 

02-screen.jpg

Would this game be impressive enough on the visuals to warrant all the delays we have seen?  Would it push today's GPUs in a way that few games are capable of?  It looks like we have answers to both of those questions and you might be a bit disappointed.  

Performance Characteristics

First, let's get to the heart of the performance question: will your hardware play RAGE?  Chances are, very much so.  I ran through some tests of RAGE on a variety of hardware including the GeForce GTX 580, 560 Ti and 460 1GB as well as the Radeon HD 6970, HD 6950, HD 6870 and HD 5850.  The test bed included an Intel Core i7-965 Nehalem CPU and 6GB of DDR3-1333 memory, with the game running off of a 600GB VelociRaptor hard drive.  Here are the results from our performance tests running at 1920x1080 resolution with 4x AA enabled in the game options:

Continue reading our initial look at RAGE performance and image quality!!

Author:
Manufacturer: PC Perspective

The Basics

Introduction

If you have been visiting PC Perspective at all over the last week there is no doubt you have seen a lot of discussion about the currently running Battlefield 3 beta.  We posted an article looking at performance of several different GPUs in the game and then followed it up with a look at older cards like the GeForce 9800 GT.  We did a live stream of some PC Perspective staff playing BF3 with readers and fans, showed off and tested the locked Caspian Border map and even looked at multi-GPU scaling performance.  It was a lot of testing and a lot of time, but now that we have completed it, we are ready to summarize our findings in a piece that many have been clamoring for - a Battlefield 3 system build guide.

05.jpg

The purpose of this article is simple: gather our many hours of testing and research and present the results in a way that simply says "here is the hardware we recommend."  It is the exact same philosophy that makes our PC Perspective Hardware Leaderboard so successful, as it gives the reader all the information they need, all in one place.

Continue reading our guide for building a system for Battlefield 3!!

Author:
Manufacturer: EA

The Battlefield 3 Beta

Update 2 (9/30/11): We have some quick results from our time on the Caspian Border map as well if you are interested - check them out!

It was an exciting day at PC Perspective yesterday, with much of our time dedicated to finding, installing and playing the new Battlefield 3 public beta.  Released on the morning of the 26th to those of you who had pre-ordered BF3 on Origin or purchased Medal of Honor prior to July 26th, the beta arrived a couple of days early for that group and should give those of you with more FPS talent than me a leg up once the open beta starts on Thursday the 29th.

My purpose in playing Battlefield 3 yesterday was purely scientific, of course.  We wanted to test a handful of cards from both AMD and NVIDIA to see how the beta performed.  With all of the talk about needing to upgrade your system and the relatively high recommended system requirements, there is a lot of worry that just about anyone without a current generation GPU is going to need to shell out some cash.  

screen06.jpg

Is that a justified claim?  While we haven't yet had time to test EVERY card we have at the office, we did put some of the more recent high end, mid-range and lower cost GPUs to the test.  

Continue reading our initial performance results with the Battlefield 3 beta!!

Author:
Manufacturer: NVIDIA
Tagged: vsmp, tegra, nvidia, kal-el

Kal-El Tegra SoC to use 5 cores

Recent news from NVIDIA has unveiled some interesting new technical details about the upcoming Kal-El ARM-based Tegra SoC.  While we have known for some time that this chip would include a quad-core processor and would likely be the first ARM-based quad-core part on the market, NVIDIA's Matt Wuebbling spilled the beans on a new technology called "Variable SMP" (vSMP) and a fifth core on the die.

kalel1.png

An updated diagram shows the fifth "companion" core - Courtesy NVIDIA

This patented technology allows the upcoming Tegra processor to address a couple of key issues that affect smartphones and tablets: standby power consumption and manufacturing process variation.  Even though all five of the cores on Kal-El are based on the ARM Cortex A9 design, they will have very different power characteristics due to variations in the TSMC 40nm process technology used to build them.  As is typical of most foundries and process technologies, TSMC has both a "high performance" and a "low power" derivative of its 40nm technology, usually aimed at different projects.  The higher performing variation will run at faster clock speeds but also has more transistor leakage, thus increasing overall power consumption.  The low power option does just the opposite: it lowers the frequency ceiling while using less power at idle and under load.  

kalel2.png

CPU power and performance curves - Courtesy NVIDIA

NVIDIA's answer to this dilemma is to have both - a single A9 core built on the low power transistors and quad A9s built on the higher performing transistors.  The result is the diagram you saw at the top of this story: a quad-core SoC with a single ARM-based "companion" core.  NVIDIA is calling this strategy Variable Symmetric Multiprocessing, and using some integrated hardware tricks it is able to switch between operating on the lower power core OR on one to four of the higher power cores.  The low power process supports operating frequencies of only up to 500 MHz, while the high speed process transistors will be able to hit well above 1-1.2 GHz. 
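NVIDIA has not detailed the exact switching policy, but the basic decision can be sketched as a simple governor: run on the companion core while the workload fits under its roughly 500 MHz ceiling, and migrate to one to four of the fast cores when it doesn't.  The thresholds and function below are purely illustrative assumptions based on the figures above, not NVIDIA's implementation.

```python
# Illustrative sketch of a vSMP-style core selection policy - an assumption
# for explanation only, not NVIDIA's actual scheduler. The 500 MHz ceiling
# comes from the companion-core figure quoted above.

COMPANION_MAX_MHZ = 500
MAIN_CORE_COUNT = 4

def pick_cluster(required_mhz: float, runnable_threads: int) -> dict:
    """Choose between the low-power companion core and the main quad."""
    if required_mhz <= COMPANION_MAX_MHZ and runnable_threads <= 1:
        # Background/standby-class work stays on the low-leakage core.
        return {"cluster": "companion", "active_cores": 1}
    # Demanding work moves to the high-performance cores; only as many
    # cores are powered up as there are runnable threads (one to four).
    return {"cluster": "main",
            "active_cores": min(max(runnable_threads, 1), MAIN_CORE_COUNT)}

print(pick_cluster(300, 1))    # {'cluster': 'companion', 'active_cores': 1}
print(pick_cluster(1200, 3))   # {'cluster': 'main', 'active_cores': 3}
```

The key point the real hardware enforces, per NVIDIA, is that the companion core and the main cores never run at the same time - the system is always on one side of that decision or the other.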

Keep reading for more on the 5-core quad-core Kal-El SoC from NVIDIA!!

Author:
Manufacturer: ASUS

Is a GTX 590 just not enough for you?

A Legacy of Unique Engineering

ASUS has often been one of only a handful of companies that really pushes the limits of technology with their custom designed products including graphics cards, sound cards, notebooks, motherboards and more.  Just a little over a year ago I wrote a review of the ASUS ARES Dual Radeon HD 5870 graphics card - the first of its kind, and it was labeled the "Ultimate Graphics Card" at the time.  Life at the top of the mountain doesn't last long in the world of the GPU though, and time (and the GTX 590 and HD 6990) left the Greek god of war in the rearview mirror.

asus01.jpg

This time around we have a successor to the MARS - the NVIDIA version that combines two top-level GPUs on a single PCB.  The new ASUS MARS II we are reviewing today is a pair of binned GTX 580 GPUs paired together for full-time SLI and built with a limited edition run of 999 units.  In many ways the MARS II and the ARES share a lot of traits: custom designed cooling and PCB, a unique aesthetic design, limited edition status and significant physical weight as well.  Of course, the price tag is also pretty high and if you aren't comfortable reading about a $1300 graphics card you might as well turn around now...  For those that dare though, you can be sure that the MARS II will have you dreaming about PC gaming power for years to come!

Continue reading our review of the ASUS MARS II Dual GTX 580 3GB!!!

Author:
Manufacturer: ASUS

Republic of Gamers Means Business

Introduction

I have got to be honest with you - most of the time, getting me excited about graphics cards anymore is a chore.  Unless we are talking about a new architecture from NVIDIA or AMD, card vendors are hard pressed to get the same attention from me that they did a couple of years ago, when every card release was something to pay attention to.  Over the next week or so, though, it turns out that ASUS and Gigabyte have a few noteworthy items definitely worth some grade-A analysis and reviewing, starting with today's: the ASUS ROG Matrix GTX 580 beast.

asus01.jpg

The Republic of Gamers brand is reserved for the highest-end parts from the company, obviously targeted at three main segments.  First, the serious gamers and enthusiasts that demand top level performance, either because they can't stand to lose at gaming or because they just want nothing but the best for their own experiences.  Second, the professional overclockers that live on features and capabilities most of us could only dream of pushing and that need LN2 to get the job done.  Finally, the case modding groups that demand not only great performance but sexy designs that add to the aesthetics of the build as a whole and aren't boring.  The ROG brand does a very commendable job of hitting all three of these groups in general, and specifically with the new Matrix-series GTX 580.  

In the following pages we will document what makes this card different, how it performs, how it overclocks and why it might be the best GTX 580 card on the market today.

Continue reading our review of the ASUS ROG Matrix GTX 580 Platinum!!

Carmack Speaks

Last week we were in Dallas, Texas covering Quakecon 2011 as well as hosting our very own PC Perspective Hardware Workshop.  While we had over 1100 attendees at the event and had a blast judging the case mod contest, one of the highlights of the event is always getting to sit down with John Carmack and pick his brain about topics of interest.  We got about 30 minutes of John's time over the weekend and pestered him with questions about the GPU hardware race, how Intel's integrated graphics (and AMD Fusion) fit into the future of PCs, the continuing debate about ray tracing, rasterization, voxels and infinite detail engines, key technologies for PC gamers like multi-display engines and a lot more!

One of our most read articles of all time was our previous interview with Carmack that focused a lot more on the ray tracing and rasterization debate.  If you never read that, much of it is still very relevant today and is worth reading over. 

carmack201c.png

This year though John has come full circle on several things including ray tracing, GPGPU workloads and even the advantages that console hardware has over PC gaming hardware.

Continue reading to see the full video interview and our highlights from it!!

Author:
Manufacturer: MSI Computers

A Thing of Beauty

Tired of hearing about MSI’s latest video cards?  Me neither!  It seems we have been on a roll lately with the latest and greatest from MSI, but happily that will soon change.  In the meantime, we do have another MSI card to go over, and this one is probably the most interesting of the group so far.  It is also very, very expensive for a single GPU product.  This card is obviously not for everyone, but there is still a market for such high end parts.

msi_n580gtx_01.jpg

The N580GTX Lightning Xtreme Edition is the latest entry in the high end, single GPU market.  This is an area where MSI has really been leading the way in terms of features and performance.  Their Lightning series has been redefining that particular market since the NVIDIA GTX 2x0 days, with unique designs that offer tangible benefits over reference based cards.  MSI initially released the N580GTX Lightning to high accolades, but with this card they have added a few significant features.

Continue reading our review of the MSI N580GTX Lightning Xtreme Edition!!

Author:
Manufacturer: General

How much will these Bitcoin mining configurations cost you in power?

Earlier this week we looked at Bitcoin mining performance across a large range of GPUs, but we had many requests for estimates on the cost of the power to drive them.  At the time we were much more interested in the performance of these configurations, but now that we have that information and have started to look at the potential profitability of doing something like this, looking at the actual real-world cost of running a mining machine 24 hours a day, 7 days a week became much more important. 

This led us to today's update, where we will talk about the average cost of power, and thus the average cost of running our 16 different configurations, in 50 different locations across the United States.  We got our data from the U.S. Energy Information Administration website, where they provide average retail prices on electricity divided up by state and by region.  For today's article, we downloaded the latest XLS file (which has slightly more updated information than the website as of this writing) and went to work with some simple math. 

Here is how your state matches up:

kwh-1.jpg

kwh-2.jpg

The first graph shows the rates in alphabetical order by state, the second graph in order from the most expensive to the least.  First thing we noticed: if you live in Hawaii, I hope you REALLY love the weather.  And maybe it's time to look into that whole solar panel thing, huh?  Because Hawaii was SO FAR out beyond our other data points, we are going to be leaving it out of our calculations and instead are going to ask residents and those curious to just basically double one of our groupings.
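If you want to run the same numbers for your own setup, the underlying math is nothing exotic: watts drawn at the wall, hours of runtime, and your state's rate from the charts above.  Here is a minimal Python sketch of that calculation, using a placeholder wattage and rate rather than the measured figures from our configurations:

```python
# Back-of-the-envelope power cost estimate. The 450 W draw and $0.12/kWh
# rate below are placeholder examples - substitute your own card's wall
# draw and your state's rate from the charts above.

def power_cost(watts: float, price_per_kwh: float, hours: float) -> float:
    """Electricity cost: energy used in kWh multiplied by the retail rate."""
    return (watts / 1000.0) * hours * price_per_kwh

HOURS_PER_MONTH = 24 * 30

monthly = power_cost(watts=450, price_per_kwh=0.12, hours=HOURS_PER_MONTH)
print(f"Running 24/7 costs about ${monthly:.2f} per month")  # ~$38.88
```

That monthly figure is what gets weighed against the value of the coins a given card can mine, which is exactly the comparison we walk through on the following pages.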

Keep reading to get the full rundown on how power costs will affect your mining operations, and why it may not make sense to mine AT ALL with NVIDIA graphics cards! 

Author:
Manufacturer: General

What is a Bitcoin?

This article looking at Bitcoins and the performance of various GPUs in mining them was really a big team effort at PC Perspective.  Props go out to Tim Verry for doing the research on the process of mining and helping to explain what Bitcoins are all about.  Ken Addison did a great job going through an allotment of graphics cards running our GUIMiner setup and getting the data you will see presented later.  Scott Michaud helped with some graphics and imagery, and I'm the monkey that just puts it all together at the end.

** Update 7/13/11 **  We recently wrote another piece on the cost of the power to run the Bitcoin mining operations used in this performance article.  Based on the individual prices of electricity in all 50 states of the US, we found that the cost of the power to run some cards exceeded the value of the Bitcoin currency based on today's exchange rates.  I would highly recommend you check out that story as well after giving this performance-based article a thorough reading.  ** End Update **

A new virtual currency called Bitcoin has been receiving a great deal of news fanfare, criticism and user adoption. The so-called cryptographic currency uses strong encryption methods to eliminate the need for trust when buying and selling goods over the Internet, in addition to a peer-to-peer distributed timestamp server that maintains a public record of every transaction to prevent double spending of the electronic currency. The aspects of Bitcoin that have caused the most criticism and the recent large rise in growth are, respectively, its inherent ability to anonymize the real life identities of users (though the transactions themselves are public) and the ability to make money by supporting the Bitcoin network in verifying pending transactions through a process called “mining”. Privacy, security, cutting out the middle man and making it easy for users to do small casual transactions without fees, as well as the ability to be rewarded for helping to secure the network by mining, are all selling points (pun intended) of the currency.

When dealing with a more traditional and physical local currency, there is a need for both parties to trust the currency, but not much need to trust each other, as handing over cash is fairly straightforward. One does not need to trust the other person as much as if it were a check, which could bounce. Once cash has changed hands, the buyer cannot go and spend that money elsewhere as it is physically gone. Transactions over the Internet, however, greatly reduce the convenience of that local currency, and due to the series of tubes’ inability to carry cash through the pipes, services like Paypal as well as credit cards and checks are likely to be used in its place. While these replacements are convenient, they are also much riskier than cash, as fraudulent charge-backs and disputes are likely to occur, leaving the seller in a bad position. Due to this risk, sellers have to factor a certain percentage of expected fraud into their prices in addition to collecting as much personally identifiable information as possible. Bitcoin seeks to remedy these risks by bringing the convenience of a local currency to the virtual plane with irreversible transactions, a public record of all transactions, and the ability to trust strong cryptography instead of needing to trust people.

paymentpal.jpg

There are a number of security measures inherent in the Bitcoin protocol that assist with these security goals. Foremost, Bitcoin uses strong public and private key cryptography to secure coins to a user. Money is handled by a Bitcoin wallet, which is a program such as the official Bitcoin client that creates public/private key pairs that allow you to send and receive money. You are further able to generate new receiving addresses whenever you want within the client. The wallet.dat file is the record of all your key pairs, and thus your bitcoins, and contains 100 address/key pairs (though you are able to generate new ones beyond that). To send money, one only needs to sign the bitcoins with their private key over to the recipient’s public key. This creates a chain of transactions that is secured by these public and private key pairs from person to person. Unfortunately, this cryptography alone is not able to prevent double spending, meaning that Person A could sign the bitcoins over to Person B with his private key, but could also do the same to Person C and so on. This issue is where the peer-to-peer and distributed computing aspects of the Bitcoin protocol come into play. By using a peer-to-peer distributed timestamp server, the Bitcoin protocol creates a public record of every transaction that prevents double spending of bitcoins. Once the bitcoins have been signed to a public key (receiving address) with the user’s private key and the network has confirmed this transaction, they can no longer be spent by Person A; the network has confirmed that the coins now belong to Person B, and only Person B can spend them using their private key.
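The computational heavy lifting behind that public record is the proof-of-work hashing that miners (and the GPUs we benchmark later) grind through: repeatedly hashing a block header with different nonce values until the result falls below a difficulty target.  The sketch below is a rough illustration only - real block headers use a specific 80-byte layout and a compact difficulty encoding that we gloss over here.

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    # Bitcoin applies SHA-256 twice to the block header.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_base: bytes, target: int, max_nonce: int = 2**32):
    """Simplified proof-of-work loop: find a nonce whose hash is below target."""
    for nonce in range(max_nonce):
        candidate = header_base + struct.pack("<I", nonce)
        digest = double_sha256(candidate)
        # Simplified comparison; the real protocol defines exact byte order
        # and a compact "bits" encoding for the target.
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None, None

# Toy example with a very easy target so it finishes quickly on a CPU;
# real network difficulty is what makes GPUs attractive for this job.
nonce, digest_hex = mine(b"example header data", target=2**240)
print(nonce, digest_hex)
```

A GPU performs essentially this same loop, but runs millions of hash attempts per second in parallel, which is why the performance results on the following pages are measured in hashes per second.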

Keep reading our article that details the theories behind Bitcoins as well as the performance of modern GPUs in mining them!  

Author:
Manufacturer: AMD

Architecture Details

Introduction

Just a couple of weeks ago we took the cover off of AMD's Llano processor for the first time in the form of the Sabine platform: Llano's mobile derivative.  In that article we wrote in great detail about the architecture and how it performed on the stage of the notebook market - it looked very good when compared to the Intel Sandy Bridge machines we had on-hand.  Battery life is one of the most important aspects of evaluating a mobile configuration with performance and features taking a back seat the majority of the time.  In the world of the desktop though, that isn't necessarily the case. 

Desktop computers, even those meant for a low-cost and mainstream market, don't treat power consumption as crucial and instead focus almost exclusively on the features and performance of the platform.  There are areas where power and heat are more scrutinized, such as the home theater PC market and small form-factor machines, but in general you need to be sure to hit a home run with performance per dollar in this field.  Coming into this article we had some serious concerns about Llano and its ability to properly address this field specifically. 

How did our weeks with the latest AMD Fusion APU turn out?  There is a ton of information that needed to be addressed including a look at the graphics performance in comparison to Sandy Bridge, how the quad-core "Stars" x86 CPU portion stands up to modern options, how the new memory controller affects graphics performance, Dual Graphics, power consumption and even a whole new overclocking methodology.  Keep reading and you'll get all the answers you are looking for.

Llano Architecture

We spent a LOT of time in our previous Llano piece discussing the technical details of the new Llano Fusion CPU/GPU architecture and the fundamentals are essentially identical from the mobile part to the new desktop releases.  Because of that, much of the information here is going to be a repeat with some minor changes in the forms of power envelopes, etc.  

slide01.jpg

The platform diagram above gives us an overview of what components will make up a system built on the Llano Fusion APU design.  The APU itself is made up of 2 or 4 x86 CPU cores that come from the Stars family released with the Phenom / Phenom II processors.  They do introduce a new Turbo Core feature that we will discuss later, which is somewhat analogous to what Intel has done with Turbo Boost on its processors.

Continue reading our AMD A-series Llano desktop review for all the benchmarks and information!

Author:
Manufacturer: AMD

Introducing the AMD FSA

At AMD’s Fusion 11 conference, we were treated to a nice overview of AMD’s next generation graphics architecture.  With the recent change in their lineup going from the previous VLIW-5 setup (which powered their graphics chips from the Radeon HD 2900 through the latest “Barts” chip running the HD 6800 series) to the new VLIW-4 (HD 6900), many were not expecting much from AMD in terms of new and unique designs.  The upcoming “Southern Islands” parts were thought to be based on the current VLIW-4 architecture, featuring more performance and a few new features thanks to the die shrink to 28 nm.  It turns out that speculation is wrong.

amd_fsa01.jpg

In late Q4 of this year we should see the first iteration of this new architecture, which was detailed today by Eric Demers.  The overview covered some features that will not make it into this upcoming product, but eventually they will all be added in over the next three years or so.  Historically speaking, AMD has placed graphics first, with GPGPU/compute as the secondary functionality of their GPUs.  While we have had compute abilities since the X1800/X1900 series of products, AMD has not been as aggressive with compute as its primary competition.  From the G80 GPUs and beyond, NVIDIA has pushed compute harder and farther than AMD has.  With its mature CUDA development tools and the compute heavy Fermi architecture, NVIDIA has been a driving force in this particular market.  Now that AMD has released two APU based products (Llano and Brazos), they are starting to really push OpenCL, DirectCompute, and the recently announced C++ AMP.

Continue reading for all the details on AMD's Graphics Core Next!

Author:
Manufacturer: MSI Computers
Tagged:

HAWK Through the Ages

Have I mentioned that MSI is putting out some pretty nice video cards as of late?  Last year I was able to take a look at their R5770 HAWK, and this year so far I have looked at their HD 6950 OC Twin Frozr II and the superb R6970 Lightning.  All of these cards have impressed me to a great degree.  MSI has really excelled in their pursuit of designing unique cards which stand out from the crowd without breaking the budget, or even being all that much more expensive than their reference design brothers.

n560gtxtih_01.jpg

MSI now has two ranges of enthusiast cards that are not based on reference designs.  At the high end we have the Lightning products, which cater to the high performance/LN2 crowd.  These currently include the R6970 Lightning, the N580GTX Lightning, and the newly introduced N580GTX Lightning Xtreme Edition (3 GB of frame buffer!).  The second tier of custom designed enthusiast parts is the HAWK series.  These are often much more affordable than the Lightning cards, but they carry over a lot of the design philosophies of the higher end parts.  They are aimed at the sweet spot, though they will carry a slightly higher price tag than the reference products.  This is not necessarily a bad thing, as for the extra money a user gets extra features and performance.

Author:
Manufacturer: Lucid

AMD and Virtual Vsync for Lucid Virtu

Lucid has grown from a small startup that we thought might have a chance to survive in the world of AMD and NVIDIA to a major player in the computing space.  Its latest and most successful software architecture was released into the wild with the Z68 chipset as Lucid Virtu - software that enabled users to take advantage of both the performance of a discrete graphics card and the intriguing features of the integrated graphics of Intel's Sandy Bridge CPU. 

While at Computex 2011 in Taiwan we met with the President of Lucid, Offir Remez, who was excited to discuss a few key new additions to the Virtu suite with the new version titled "Virtu Universal".  The first new addition is support for AMD platforms, including current 890-based integrated graphics options as well as the upcoming AMD Llano (and more) APU CPU/GPU combinations.  It is hard to see a reason for Virtu on current AMD platforms like the 890 series, as there are no compelling features in the integrated graphics on that front, but with the pending release of Llano you can be sure that AMD is going to integrate some of its own interesting GPGPU features that will compete with the QuickSync technology of Sandy Bridge, among other things.  To see Lucid offer support for AMD this early is a good sign for day-of availability on the platform later this year.

lucid01.jpg

The second pillar of Lucid's announcement of Virtu Universal was the addition of support for the mobile space, directly competing with NVIDIA's and AMD's own hardware-specific switchable graphics solutions.  By far the most successful thus far has been NVIDIA's Optimus, which has filtered its way into basically all major OEMs and most of the major notebook releases that include both integrated and discrete graphics solutions.  The benefit that Lucid offers is that it will work with BOTH Intel and AMD platforms, simplifying the product stack quite a bit.  

Read on for more information and some videos of Virtual Vsync in action!