Author:
Manufacturer: MSI Computer

MSI's Alex Chang Speaks Up

MSI was founded in 1986 and started producing motherboards and video cards for the quickly growing PC market.  Throughout the life of the company they have further diversified their offerings to include barebones systems, notebooks, networking/communication devices, and industrial products.  While MSI has a nice base of products, they are still primarily a motherboard and video card company.  In the past 10 years MSI has become one of the top brands in North America for video cards, and they have taken a very aggressive approach to design with these products.

msi_logo_fx.jpg

I had the chance to send MSI quite a few questions concerning their video card business and how they develop their products.

What is your name, title, and how long have you worked at MSI?

My name is Bob, and I’m…. actually, I’m just Alex Chang. I’m the Associate Marketing Manager. I’ve been with the company for 2 years.

Typically how long does it take from the original reference design card release to when we can first expect to see a Twin Frozr III based card hit retail?  How much longer does it take to create the “Lightning” based products?

Historically, we’ve seen the introduction of a non-reference thermal solution within 2-4 weeks of product launch. As an example, GTX580 was launched in November 2010, and by December there was already a reference PCB GTX580 w/ the Twin Frozr II cooler.

In the case of Lightning cards, the development timeframe is longer due to more R&D, validation, and procurement of components. With GTX580, the timeframe was around 6 months, but moving forward MSI is pulling in the launch timeframe of our flagship products.

r5770_cont1.jpg

Continue reading our interview with MSI's Alex Chang!!

Author:
Manufacturer: AMD

Southern Islands Get Small

When AMD first started to talk to me about the upcoming Southern Islands GPUs they tried to warn me.  Really they did.  "Be prepared for just an onslaught of card releases for 2012," I was told.  Following much the same strategy the company took with the HD 6000 series of cards, the new Radeon HD 7000 cards have been trickling out, part by part, to make sure the name "AMD" and the brand "Radeon" are showing up as often as possible in your news feeds and on my keyboard.  In late December we wrote our review of the Radeon HD 7970 3GB flagship card and then followed that up in January with a review of the Radeon HD 7950.  In those briefings we were told in a general way about Cape Verde, the Radeon HD 7700 series, and Pitcairn, the Radeon HD 7800 series, but without the details of performance, specifications or release dates.  We have the answer for one more of these families now: Cape Verde.

slides01.jpg

Cape Verde is the smallest of the Southern Islands dies and falls into the sub-$175 graphics market depending on card vendors' pricing and overclocking settings.  The real question we all wanted answered is what performance levels these new cards offer and whether they could be the TRUE successor to the popular Radeon HD 5770.  While the answer will take pages and pages of details to cement into place, I can say that while this is an impressive card, I wasn't as excited as I had wanted to be.

But I am getting ahead of myself...  Check out our video review right here and then keep reading on for the full evaluation!!

AMD Cape Verde - the smallest of the Southern Islands

GPU companies like to brag when they are on top - you'll see that as a recurring theme in our story today.  One such case is the success of the Radeon HD 5770 mentioned above - it still sits on the throne as the most adopted DX11-capable GPU on the Steam Hardware Survey, one of our best sources of information on the general PC gamer.

slides02.jpg

While the inclusion of that card, as well as the Radeon HD 5870 and HD 5850, on this list was great for AMD a couple of years ago, the lack of a 6000-series card here shows us that users need another reason to upgrade: another card that is mass market enough (read: under $200) and offers performance advantages that really push gamers to spend that extra cheddar.

Bring in the Cape Verde GPU...

Continue reading our review of the Radeon HD 7770 1GB GHz Edition and HD 7750 Graphics cards!!

Author:
Manufacturer: Galaxy

Four Displays for Under $120

Running multiple displays on your PC is becoming a trend that everyone is trying to jump on board with thanks in large part to the push of Eyefinity from AMD over the past few years.  Gaming is a great application for multi-display configurations but in truth game compatibility and game benefits haven't reached the level I had hoped they would by 2012.  But while gaming still has a way to go, the consumer applications for having more than a single monitor continue to expand and cement themselves in the minds of users.

Galaxy is the only NVIDIA partner that is really taking this market seriously with an onslaught of cards branded as MDT, Multiple Display Technology.  Using non-NVIDIA hardware in conjunction with NVIDIA GPUs, Galaxy has created some very unique products for consumers like the recently reviewed GeForce GTX 570 MDT.  Today we are going to be showing you the new Galaxy MDT GeForce GT 520 offering that brings support for a total of four simultaneous display outputs to a card with a reasonable cost of under $120.

The Galaxy MDT GeForce GT 520

Long-time readers of PC Perspective likely already know what to expect based on the GPU we are using here, but the Galaxy MDT model offers quite a few interesting changes.

01.jpg

02.jpg

The retail packaging clearly indicates the purpose of this card for users looking at running more than two displays.  The GT 520 is not an incredibly powerful GPU when it comes to gaming but Galaxy isn't really pushing the card in that manner.  Here are the general specs of the GPU for those that are interested:

  • 48 CUDA cores
  • 810 MHz core clock
  • 1GB DDR3 memory
  • 900 MHz memory clock
  • 64-bit memory bus width
  • 4 ROPs
  • DirectX 11 support

Continue reading our review of the Galaxy MDT GeForce GT 520 graphics card!!

Author:
Manufacturer: Asus

3 NV for DCII

The world of video cards has changed a great deal over the past few years.  Where once we saw only “sticker versions” of cards mass produced by a handful of manufacturers, we are now seeing some really nice differentiation from the major board partners.  While the first iterations of new cards are typically mass produced by NVIDIA or AMD and then distributed to their partners for initial sales, those partners are now more consistently getting their own unique versions out to retail in record time.  MSI was one of the first to put out their own unique designs, but now we are seeing Asus becoming much more aggressive with products of their own.

adcII_01.jpg

The DirectCU II line is Asus’ response to the growing number of original designs from other manufacturers.  The easiest way to categorize these designs is that they sit nicely between the very high-end, extreme products like the MSI Lightning series and the reference design boards with standard cooling.  These are unique designs that integrate features and cooling solutions well above those of reference cards.

DirectCU II applies primarily to the cooling solutions on these boards.  The copper heatpipes in the DirectCU II cooler are in direct contact with the GPU.  These heatpipes are then routed through two separate aluminum fin arrays, each with its own fan.  Each card therefore has either a dual slot or triple slot cooling solution with two 80 mm fans that dynamically adjust to the temperature of the chip.  The second part of this is branded “Super Alloy Power”, in which Asus has upgraded most of the electrical components on the board to meet higher specifications.  Hi-C caps, proadlizers, polymer caps, and higher quality chokes round out the upgraded components, which should translate into more stable overclocked performance and a longer lifespan.

Read the entire article here.

Author:
Manufacturer: AMD

Tahiti Gets Clipped

It has been just over a month since we first got our hands on the AMD Southern Islands architecture in the form of the Radeon HD 7970 3GB graphics card.  It was then a couple of long weeks as we waited for the consumer to get the chance to buy that same hardware though we had to admit that the $550+ price tags were scaring many away. Originally we were going to have both the Radeon HD 7970 and the Radeon HD 7950 in our hands before January 9th, but that didn't pan out and instead the little brother was held in waiting a bit longer.

Today we are reviewing that sibling, the Radeon HD 7950 3GB GPU that offers basically the same technology and feature set with a slightly diminished core and a matching, slightly diminished price.  In truth I don't think that the estimated MSRP of $449 is going to capture that many more hearts than the $549 price of the HD 7970 did, but AMD is hoping that they can ride their performance advantage to as much profit as possible while they wait for NVIDIA to properly react.

Check out our video review right here and then continue on to our complete benchmarking analysis!!

Southern Islands Gets Scaled Back a Bit

As I said above, the Radeon HD 7950 3GB is pretty similar to the HD 7970.  It is based on the same 28nm, DirectX 11.1, PCI Express 3.0, 4.31 billion transistor GPU and includes the same massive 3GB frame buffer as its older brother.  The Tahiti GPU is the first of its kind on all of those fronts, but on the HD 7950 a few of the computational portions are disabled.

If you haven't read up on the Southern Islands architecture, or the Tahiti GPU based on it, you are missing quite a bit of important information on the current lineup of parts from AMD.  I would very much encourage you to head over to our Radeon HD 7970 3GB Tahiti review and look over the first three pages, as they provide a detailed breakdown of the new features and the pretty dramatic shift in design that Southern Islands introduced to the AMD GPU team.

block-7950.jpg

Continue reading our full review of the Radeon HD 7950 3GB graphics card!!

Author:
Manufacturer: Gigabyte

Guess what? Overclocked.

The NVIDIA GTX 580 GPU, based on the GF110 Fermi architecture, is old but it isn't forgotten.  Released in November of 2010, the GTX 580 held the single GPU performance crown for more than a year before it was usurped by AMD and the Radeon HD 7970 just this month.  Still, the GTX 580 is a solid high-end enthusiast graphics card with widespread availability and custom designed, overclocked models from numerous vendors, making it a viable option.

Gigabyte sent us this overclocked and custom cooled model quite a while ago but we had simply fallen behind with other reviews until just after CES.  In today's market the card has a bit of a different role to fill - it surely won't be able to surpass the new AMD Radeon HD 7970, but can it fight the good fight and keep NVIDIA's current lineup of GPUs competitive until Kepler finally shows itself?

The Gigabyte GTX 580 1.5GB Super Overclock Card

Given the age of the GTX 580 design, Gigabyte has had plenty of time to perfect its PCB and cooler.  This model, the Super Overclock (GV-N580SO-15I), comes in well ahead of the standard reference speeds of the GTX 580 but sticks to the same 1.5 GB frame buffer.

01.jpg

The clock speed is set at 855 MHz core and 1025 MHz memory, compared to the 772 MHz core speed and 1002 MHz memory clock of the reference design.  That is a very healthy core clock increase of nearly 11% that should equate to almost that big of a gap in gaming performance where the GPU is the real bottleneck.
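
For those who like to check the math, here is a quick back-of-the-envelope calculation in Python using the clock speeds quoted above - just a sketch to show where the percentages come from, not anything out of Gigabyte's documentation:

    # A quick sketch of the overclock math (numbers taken from the review text above).
    reference_core_mhz = 772   # stock GTX 580 core clock
    gigabyte_core_mhz = 855    # Super Overclock core clock
    reference_mem_mhz = 1002   # stock memory clock
    gigabyte_mem_mhz = 1025    # Super Overclock memory clock

    core_gain = (gigabyte_core_mhz - reference_core_mhz) / reference_core_mhz
    mem_gain = (gigabyte_mem_mhz - reference_mem_mhz) / reference_mem_mhz

    print(f"Core clock gain:   {core_gain:.1%}")   # prints ~10.8%
    print(f"Memory clock gain: {mem_gain:.1%}")    # prints ~2.3%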

Continue reading our review of the Gigabyte GTX 580 1.5GB Super Overclock graphics card!!

Author:
Manufacturer: XFX

Retail-ready HD 7970

We first showed off the power of the new AMD Radeon HD 7970 3GB graphics card in our reference review posted on December 22nd.  If you haven't read all about the new Southern Islands architecture and the Tahiti chip that powers the HD 7970 then you should already be clicking the link above to my review to get up to speed. Once you have done so, please return here to continue.

...

Welcome back, oh wise one.  Now we are ready to proceed.  By now you already know that the Radeon HD 7970 is the fastest GPU on the planet, besting the NVIDIA GTX 580 by a solid 20-30% in most cases.  For our first retail card review we are going to be looking at the XFX Black Edition Double Dissipation, which bumps up the GPU and memory clocks slightly and offers a new cooler that promises to be more efficient and quieter.

Let's put XFX to the test!

The XFX Radeon HD 7970 3GB Black Edition Double Dissipation

01.jpg

Because of its completely custom cooler, the XFX HD 7970 Black Edition Double Dissipation looks very different from the reference model we tested last month, though the feature set remains identical.  The silver and black motif works well here.

Continue reading our review of the XFX Radeon HD 7970 3GB Black Edition Double Dissipation!!

Author:
Manufacturer: AMD

The First 28nm GPU Architecture

It is going to be an exciting 2012. Both AMD and NVIDIA are going to be bringing gamers entirely new GPU architectures, Intel has Ivy Bridge up its sleeve and the CPU side of AMD is looking forward to the introduction of the Piledriver lineup. Today though we end 2011 with the official introduction of the AMD Southern Islands GPU design, a completely new architecture from the ground up that engineers have been working on for more than three years.

This GPU will be the first on several fronts: the first 28nm part, the first cards with support for PCI Express 3.0 and the first to officially support DirectX 11.1 coming with Windows 8. Southern Islands is broken up into three different families starting with Tahiti at the high-end, Pitcairn for sweet spot gaming and Cape Verde for budget discrete options. The Radeon HD 7970 card that is launching today with availability in early January is going to be the top-end single GPU option, based on Tahiti.

Let's see what 4.31 billion transistors buys you in today's market.  I have embedded a very short video review here as well for your perusal but of course, you should continue down a bit further for the entire, in-depth review of the Radeon HD 7970 GPU.

Southern Islands - Starting with Tahiti

Before we get into benchmark results we need to get a better understanding of this completely new GPU design that was first divulged in June at the AMD Fusion Developer Summit. At that time, our own lovely and talented Josh Walrath wrote up a great preview of the architecture that remains accurate and pertinent for today's release. We will include some of Josh's analysis here and interject with anything new that we have learned from AMD about the Southern Islands architecture.

When NVIDIA introduced the G80, they took a pretty radical approach to GPU design. Instead of going with previous VLIW architectures which would support operations such as Vec4+Scalar, they went with a completely scalar architecture. This allowed a combination of flexibility of operation types, ease of scheduling, and a high utilization of compute units. AMD has taken a somewhat similar, but still unique approach to their new architecture.
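
To make that utilization argument a little more concrete, here is a toy Python sketch of the difference - purely illustrative logic of my own, not AMD's compiler or any real scheduler. A VLIW machine has to find independent operations within a single thread to fill the slots of each bundle, while a scalar design issues one operation per thread per cycle and relies on having many threads in flight instead.

    # Toy illustration of VLIW slot packing (not AMD's actual compiler or scheduler).
    # Each "instruction" lists the index of the instruction it depends on, or None.
    shader = [None, 0, 1, 2, None, 4]   # a mostly serial dependency chain

    def pack_vliw(instrs, width=5):
        """Greedily pack instructions into VLIW bundles of `width` slots.
        An instruction can only share a bundle with others if its dependency
        was issued in an earlier bundle."""
        bundles, current, issued = [], [], set()
        for i, dep in enumerate(instrs):
            if len(current) < width and (dep is None or dep in issued):
                current.append(i)
            else:
                bundles.append(current)
                issued.update(current)
                current = [i]
        bundles.append(current)
        return bundles

    width = 5
    bundles = pack_vliw(shader, width)
    used, total = len(shader), len(bundles) * width
    print(f"{len(bundles)} bundles, {used}/{total} slots filled ({used/total:.0%})")
    # With little instruction-level parallelism most slots sit empty; a scalar
    # architecture sidesteps this by issuing one op per thread and leaning on
    # thread count, rather than packing, to keep the compute units busy.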

slide21.jpg

Continue reading our review of the AMD Radeon HD 7970 3GB graphics card and Southern Islands architecture!!

Author:
Manufacturer: Galaxy

Galaxy Continues the MDT Push

One of the key selling points for the AMD Radeon series of graphics cards over the last few generations has been Eyefinity - the ability to run more than two displays off of a single card while also allowing for 3+ display gaming configurations.  NVIDIA-based solutions have required a pair of GPUs running in SLI for this functionality, either standard SLI or "SLI-on-a-card" solutions like the GTX 590.

However, another solution has appeared from Galaxy, an NVIDIA partner that has created a series of boards with the MDT moniker - Multi-Display Technology.  Using a separate on-board chip the company has created GTX 560 Ti, GTX 570 and GTX 580 cards that can output to 4 or 5 monitors using only a single NVIDIA GPU, cutting down on costs while offering a feature that no other single-GPU solution could.

12.jpg

Today we are going to be reviewing the Galaxy GeForce GTX 570 MDT X4 card that promises 4 display outputs and a triple-panel seamless gaming surface option for users that want to explore gaming on more than a single monitor inside the NVIDIA ecosystem.  

Continue reading our review of the Galaxy GeForce GTX 570 MDT X4 1.25GB Graphics Card!!

Author:
Manufacturer: NVIDIA

A Temporary Card with a Permanent Place in Our Heart

Today NVIDIA and its partners are announcing availability of a new graphics card that bridges the gap between the $230 GTX 560 Ti and the $330 GTX 570 currently on the market.  The new card promises to offer performance right between those two units with a price to match but with a catch: it is a limited edition part with expected availability only through the next couple of months.

When we first heard rumors about this product back in October I posited that the company would be crazy to simply call this the GeForce GTX 560 Ti Special Edition.  Well...I guess this makes me the jackass.  This new ~$290 GPU will be officially called the "GeForce GTX 560 Ti with 448 Cores". 

Seriously.

The GeForce GTX 560 Ti 448 Core Edition

The GeForce GTX 560 Ti with 448 cores is actually not a GTX 560 Ti at all and in fact is not even built on a GF114 GPU - instead we are looking at a GF110 GPU (the same used on the GeForce GTX 580 and GTX 570 graphics cards) with another SM disabled.  

block_580.jpg

GeForce GTX 580 Diagram

The above diagram shows a full GF110 GPU sporting 512 CUDA cores and the full 16 SMs (streaming multiprocessors) along with all the bells and whistles that go along with that $450 card.  This includes a 384-bit memory bus and a 1.5 GB frame buffer, all of which adds up to the GTX 580 still being the top performing single-GPU graphics card on the market today.
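
The core counts line up neatly with the SM math, as a quick Python sketch shows - my own arithmetic from the figures quoted above, not an official NVIDIA spec table:

    # CUDA core counts implied by disabling SMs on GF110 (sketch, not an NVIDIA spec sheet).
    full_cores = 512                               # full GF110: GeForce GTX 580
    full_sms = 16                                  # streaming multiprocessors in a full GF110
    cores_per_sm = full_cores // full_sms          # 32 CUDA cores per SM

    gtx570_cores = (full_sms - 1) * cores_per_sm   # one SM disabled  -> 480 cores
    new_card_cores = (full_sms - 2) * cores_per_sm # two SMs disabled -> 448 cores

    print(cores_per_sm, gtx570_cores, new_card_cores)   # 32 480 448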

Continue reading our review of the GeForce GTX 560 Ti 448 Core Graphics Card!!

Author:
Manufacturer: Electronic Arts

Introduction, Campaign Testing

Introduction

bf3iontro.jpg

As you might have noticed, we’re a bit excited about Battlefield 3 here at PC Perspective. It promised to pay attention to what PC gamers want, and shockingly, it has come through. Dedicated servers and huge multi-player matches are supported, and the browser based interface is excellent.

If we’re honest, a lot of our hearts have been stirred simply by the way the game looks. There aren’t many titles that really let a modern mid-range graphics card stretch its legs, even at 1080p resolution. Battlefield 3, however, can be demanding - and it looks beautiful. Even with the presets at medium, it’s one of the most attractive games ever.

But what does this mean for laptops? As the resident laptop reviewer at PC Perspective, I know that gaming remains a challenge. The advancements over the last few years have been spectacular, but even so, most of the laptops we test can’t run Just Cause 2 at a playable framerate even with all detail set to low and a resolution of just 1366x768.

To find out if mobile gamers were given consideration by the developers of Battlefield 3, I installed the game on three different laptops. The results only go to show how far mobile gaming has come, and how far it has to go.

Continue reading our article on mobile Battlefield 3 performance!!

Author:
Manufacturer: EVGA

EVGA Changes the Game Again

Introduction

Dual-GPU graphics cards are becoming an interesting story.  While both NVIDIA and AMD have introduced their own reference dual-GPU designs for quite some time, it is the custom built models from board vendors like ASUS and EVGA that really pique our interest because of their unique nature.  Earlier this year EVGA released the GTX 460 2Win, the world's first (and only) graphics card with a pair of GTX 460 GPUs on board.

ASUS has released dual-GPU options as well, including the ARES dual Radeon HD 5870 last year and the MARS II dual GTX 580 just this past August, but they were both prohibitively rare and expensive.  The EVGA "2Win" series, which we can now call a series since there are two of them, is still expensive but much more in line with the performance per dollar of the rest of the graphics card market.  When the company approached us last week about the new GTX 560 Ti 2Win, we jumped at the chance to review it.

The EVGA GeForce GTX 560 Ti 2Win 2GB

The new GTX 560 Ti 2Win from EVGA follows directly in the footsteps of the GTX 460 model - we are essentially looking at a pair of GTX 560 Ti GPUs on a single PCB running in SLI multi-GPU mode.  Clock speeds, memory capacity, performance - it should all be pretty much the same as if you were running a pair of GTX 560 Ti cards independently.

01.jpg

Just as with the GTX 460 2Win, EVGA is the very first company to offer such a product.  NVIDIA didn't design a reference platform and pass it along to everyone like they did with the GTX 590 - this is all EVGA.

Continue reading our review of the EVGA GeForce GTX 560 Ti 2Win!!!

Author:
Manufacturer: Various

The Alienware M17x Giveth

Mobile graphics cards are really a different beast than their desktop variants.  Despite having similar names and model numbers, the specifications vary greatly: the GTX 580M isn't equivalent to the GTX 580 and the HD 6990M isn't even a dual-GPU product.  Getting the chance to do a direct head-to-head is almost always a tougher task as well, thanks to the notebook market's penchant for single-vendor SKUs.

Over the past week or two, I was lucky enough to get my hands on a pair of Alienware M17x notebooks, one sporting the new AMD Radeon HD 6990M discrete graphics solution and the other with the NVIDIA GeForce GTX 580M.  

01.jpg

AMD Radeon HD 6990M on the left; NVIDIA GeForce GTX 580M on the right

Also unlike the desktop market, the time from the announcement of a new mobile GPU product to when you can actually BUY a system including it tends to be pretty long.  Take the two GPUs we are looking at today for example: the HD 6990M launched in July and we are only just now seeing machines ship in volume, while the GTX 580M was announced back in June.

Well, problems be damned, we had the pair in our hands for a few short days and I decided to put them through the wringer in our GPU testing suite, with Battlefield 3 added in for good measure as well.  The goal was to determine which GPU was actually the "world's fastest", as both companies have claimed to be.

Continue reading our comparison of the GeForce GTX 580M and Radeon HD 6990M mobility GPUs!!

Author:
Manufacturer: NVIDIA

You don't have 3D Vision 2? Loser.

In conjunction with GeForce LAN 6, currently taking place on the USS Hornet in Alameda, NVIDIA is announcing an upgrade to its lineup of 3D Vision technologies.  Originally released back in January of 2009, 3D Vision was one of the company's grander attempts to change the way PC gamers, well, game.  Unfortunately for NVIDIA and the gaming community, running a 3D Vision setup required a new, much more expensive display as well as glasses that originally ran $199.

While many people, including myself, were enamored with 3D technology when we first got our hands on it, the novelty kind of wore off and I found myself quickly back on the standard panels for gaming.  The reasons were difficult to discern at first but it definitely came down to some key points:

  • Cost
  • Panel resolution
  • Panel size
  • Image quality

The cost was obvious - having to pay nearly double for a 3D Vision capable display just didn't fly for most PC gamers, and the need to purchase $200 glasses on top of that made it even less likely that you would plop down the credit card.  Initial 3D Vision ready displays, while also being hard to find, were limited to a resolution of 1680x1050 and were only available in 22-in form factors.  Obviously if you were interested in 3D technology you were likely a discerning gamer, and running at lower resolutions would be less than ideal.

09.jpg

The new glasses - less nerdy?

Yes, 24-in and 1080p panels did come in 2010, but by then much of the hype surrounding 3D Vision had worn off.  To top it all off, even if you did adopt a 3D Vision kit of your own, you realized that the brightness of the display was basically halved when operating in 3D mode - with one shutter of your glasses closed at any given time, each eye only receives half the total output from the screen, leaving the image quality kind of drab and washed out.

Continue reading our preview of NVIDIA 3D Vision 2 technology!!

Author:
Manufacturer: id Software

RAGE Performs...well

RAGE is not as dependent on your graphics hardware as it is on your CPU and storage system (which may be an industry first); the reason for this is something we will discover when talking about the texture pop-up issue on the next page.

Introduction

The first id Software designed game since the release of Doom 3 in August of 2004, RAGE has a lot riding on it.  Not only is this the introduction of the idTech 5 game engine, it is also the culmination of more than 4 years of development and the first new IP from the developer since the creation of Quake.  And since the first discussions and demonstrations of Carmack's new MegaTexture technology, gamers have been expecting a lot as well.

02-screen.jpg

Would this game be impressive enough on the visuals to warrant all the delays we have seen?  Would it push today's GPUs in a way that few games are capable of?  It looks like we have answers to both of those questions and you might be a bit disappointed.  

Performance Characteristics

First, let's get to the heart of the performance question: will your hardware play RAGE?  Chances are, very much so.  I ran through some tests of RAGE on a variety of hardware including the GeForce GTX 580, 560 Ti, 460 1GB and the Radeon HD 6970, HD 6950, HD 6870 and HD 5850.  The test bed included an Intel Core i7-965 Nehalem CPU and 6GB of DDR3-1333 memory, with the game running off of a 600GB VelociRaptor hard drive.  Here are the results from our performance tests running at 1920x1080 resolution with 4x AA enabled in the game options:

Continue reading our initial look at RAGE performance and image quality!!

Author:
Manufacturer: PC Perspective

The Basics

Introduction

If you have been visiting PC Perspective at all over the last week there is no doubt you have seen a lot of discussion about the currently running Battlefield 3 beta.  We posted an article looking at performance of several different GPUs in the game and then followed it up with a look at older cards like the GeForce 9800 GT.  We did a live stream of some PC Perspective staff playing BF3 with readers and fans, showed off and tested the locked Caspian Border map and even looked at multi-GPU scaling performance.  It was a lot of testing and a lot of time, but now that we have completed it, we are ready to summarize our findings in a piece that many have been clamoring for - a Battlefield 3 system build guide.

05.jpg

The purpose of this article is simple: gather our many hours of testing and research and present the results in a way that simply says "here is the hardware we recommend."  It is the exact same philosophy that makes our PC Perspective Hardware Leaderboard so successful, as it gives the reader all the information they need, all in one place.

Continue reading our guide for building a system for Battlefield 3!!

Author:
Manufacturer: EA

The Battlefield 3 Beta

Update 2 (9/30/11): We have some quick results from our time on the Caspian Border map as well if you are interested - check them out!

It was an exciting day at PC Perspective yesterday with much of our time dedicated to finding, installing and playing the new Battlefield 3 public beta.  Released on the morning of the 26th to those of you who had pre-ordered BF3 on Origin or purchased Medal of Honor prior to July 26th, getting the beta a couple of days early should give those of you with more FPS talent than me a leg up once the open beta starts on Thursday the 29th.

My purpose in playing Battlefield 3 yesterday was purely scientific, of course.  We wanted to test a handful of cards from both AMD and NVIDIA to see how the beta performed.  With all of the talk about needing to upgrade your system and the relatively high recommended system requirements, there is a lot of worry that just about anyone without a current generation GPU is going to need to shell out some cash.  

screen06.jpg

Is that a justified claim?  While we didn't have time yet to test EVERY card we have at the office we did put some of the more recent high end, mid-range and lower cost GPUs to the test.  

Continue reading our initial performance results with the Battlefield 3 beta!!

Author:
Manufacturer: NVIDIA
Tagged: vsmp, tegra, nvidia, kal-el

Kal-El Tegra SoC to use 5 cores

Recent news from NVIDIA has unveiled some interesting new technical details about the upcoming Kal-El ARM-based Tegra SoC.  While we have known for some time that this chip would include a quad-core processor and would likely be the first ARM-based quad-core part on the market, NVIDIA's Matt Wuebbling spilled the beans on a new technology called "Variable SMP" (vSMP) and a fifth core on the die.

kalel1.png

An updated diagram shows the fifth "companion" core - Courtesy NVIDIA

This patented technology allows the upcoming Tegra processor to address a couple of key issues that affect smartphones and tablets: standby power consumption and manufacturing process variation.  Even though all five of the cores on Kal-El are going to be based on the ARM Cortex A9 design, they will have very different power characteristics due to variations in the TSMC 40nm process technology that builds them.  Typical of most foundries and process technologies, TSMC has both a "high performance" and a "low power" derivative of its 40nm technology, usually aimed at different projects.  The higher performing variation will run at faster clock speeds but will also have more transistor leakage, thus increasing overall power consumption.  The low power option does just the opposite: it lowers the frequency ceiling while using less power both at idle and under load.

kalel2.png

CPU power and performance curves - Courtesy NVIDIA

NVIDIA's answer to this dilemma is to have both - a single A9 core built on the low power transistors and quad A9s built on the higher performing transistors.  The result is the diagram you saw at the top of this story: a quad-core SoC with a single ARM-based "companion."  NVIDIA is calling this strategy Variable Symmetric Multiprocessing, and using some integrated hardware tricks it is able to switch between operating on the lower power core OR one to four of the higher power cores.  The low power process will support operating frequencies up to only 500 MHz while the high speed process transistors will be able to hit well above 1-1.2 GHz. 
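
To illustrate the idea, here is a tiny, hypothetical governor sketch in Python - my own simplification of the switching behavior described above, not NVIDIA's actual firmware or driver logic:

    # Hypothetical vSMP-style core selection (illustrative only, not NVIDIA's implementation).
    COMPANION_MAX_MHZ = 500   # the low-power companion core tops out around 500 MHz

    def pick_core_config(required_mhz, runnable_threads):
        """Return (cluster, core count) for the current workload."""
        if required_mhz <= COMPANION_MAX_MHZ and runnable_threads <= 1:
            # Light loads (standby, music playback, background sync) stay on the
            # single low-leakage companion core.
            return ("companion", 1)
        # Heavier loads migrate to the high-performance cluster, enabling only as
        # many of the four fast cores as there are runnable threads.
        return ("performance", min(runnable_threads, 4))

    print(pick_core_config(300, 1))    # ('companion', 1)
    print(pick_core_config(1200, 2))   # ('performance', 2)
    print(pick_core_config(1500, 6))   # ('performance', 4)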

Keep reading for more on the 5-core quad-core Kal-El SoC from NVIDIA!!

Author:
Manufacturer: ASUS

Is a GTX 590 just not enough for you?

A Legacy of Unique Engineering

ASUS has often been one of only a handful of companies that really pushes the limits of technology with their custom designed products, including graphics cards, sound cards, notebooks, motherboards and more.  Just a little over a year ago I wrote a review of the ASUS ARES Dual Radeon HD 5870 graphics card - the first of its kind, and it was labeled the "Ultimate Graphics Card" at the time.  Life at the top of the mountain doesn't last that long in the world of the GPU though, and time (and the GTX 590 and HD 6990) left the Greek god of war in the rearview mirror.

asus01.jpg

This time around we have a successor to the MARS - the NVIDIA version that combines two top-level GPUs on a single PCB.  The new ASUS MARS II we are reviewing today pairs two binned GTX 580 GPUs in full-time SLI and is built as a limited edition run of 999 units.  In many ways the MARS II and the ARES share a lot of traits: custom designed cooling and PCB, a unique aesthetic design, limited edition status and significant physical weight as well.  Of course, the price tag is also pretty high and if you aren't comfortable reading about a $1300 graphics card you might as well turn around now...  For those that dare though, you can be sure that the MARS II will have you dreaming about PC gaming power for years to come!

Continue reading our review of the ASUS MARS II Dual GTX 580 3GB!!!

Author:
Manufacturer: ASUS

Republic of Gamers Means Business

Introduction

I have got to be honest with you - most of the time, getting me excited about graphics cards anymore is a chore.  Unless we are talking about a new architecture from NVIDIA or AMD, card vendors are hard pressed to get the same attention from me that they did a couple of years ago, when every card release was something to pay attention to.  Over the next week or so, though, it turns out that ASUS and Gigabyte have a few noteworthy items definitely worth some grade-A analysis and reviewing, starting with today's: the ASUS ROG Matrix GTX 580 beast.

asus01.jpg

The Republic of Gamers brand is reserved for the highest-end parts from the company, which are targeted at three main segments.  First are the serious gamers and enthusiasts that demand top level performance, either because they can't stand to lose at gaming or because they just want nothing but the best for their own experiences.  Second are the professional overclockers that live on features and capabilities most of us could only dream of pushing, and that need LN2 to get the job done.  Finally, there are the case modders that demand not only great performance but also sexy designs that add to the aesthetics of the build as a whole and aren't boring.  The ROG brand does a very commendable job of hitting all three of these groups in general, and specifically with the new Matrix-series GTX 580.

In the following pages we will document what makes this card different, how it performs, how it overclocks and why it might be the best GTX 580 card on the market today.

Continue reading our review of the ASUS ROG Matrix GTX 580 Platinum!!