Author:
Manufacturer: NVIDIA

NVIDIA puts its head in the clouds

Today at the 2012 NVIDIA GPU Technology Conference (GTC), NVIDIA took the wraps off a new cloud gaming technology that promises to reduce latency and improve the quality of streamed gaming using the power of NVIDIA GPUs.  Dubbed GeForce GRID, the technology is being offered to online services like Gaikai and OTOY.  

01.jpg

The goal of GRID is to bring the promise of "console quality" gaming to every device a user has.  The term "console quality" is important here, as NVIDIA is trying desperately not to upset all the PC gamers that purchase high-margin GeForce products.  The goal of GRID is pretty simple though and should be seen as an evolution of the online streaming gaming services that we have covered in the past, like OnLive.  Many believe the future of gaming lies in streaming services that let you play high quality games on your TV, your computer, your tablet or even your phone without the need for a high-performance, power-hungry graphics processor. 

02.jpg

GRID starts with the Kepler GPU - what NVIDIA is now dubbing the first "cloud GPU" - which has the capability to virtualize graphics processing while remaining power efficient.  The inclusion of a hardware fixed-function video encoder is important as well, as it handles the compression of rendered frames before the streaming gaming service delivers them over the Internet. 

 

03.jpg

This diagram shows us how the Kepler GPU handles and accelerates the processing required for online gaming services.  On the server side, the necessary process for an image to find its way to the user is more than just a simple render to a frame buffer.  In current cloud gaming scenarios the frame buffer would have to be copied to main system memory, compressed on the CPU and then sent via the network connection.  With NVIDIA's GRID technology that capture and compression happen in GPU memory, so the frame can be on its way to the gamer faster.

The result is an H.264 stream that is compressed quickly and efficiently, sent out over the network and delivered to the end user on whatever device they are using. 
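
To make the difference in data flow concrete, here is a rough Python sketch of the two paths described above. None of these function names correspond to a real NVIDIA or streaming-service API; they are placeholders that only show where the copy and the compression happen in each case.

```python
def copy_to_system_memory(gpu_frame: bytes) -> bytes:
    # Placeholder for the PCIe readback of the frame buffer into system RAM.
    return bytes(gpu_frame)

def cpu_h264_encode(frame: bytes) -> bytes:
    # Placeholder for a software H.264 encode running on the CPU.
    return frame[: len(frame) // 10]  # pretend 10:1 compression

def gpu_h264_encode(gpu_frame: bytes) -> bytes:
    # Placeholder for the fixed-function encoder reading straight from GPU memory.
    return gpu_frame[: len(gpu_frame) // 10]

def classic_cloud_path(gpu_frame: bytes) -> bytes:
    # Render -> copy to system RAM -> CPU encode -> network
    return cpu_h264_encode(copy_to_system_memory(gpu_frame))

def grid_style_path(gpu_frame: bytes) -> bytes:
    # Render -> on-GPU capture and encode -> network (one hop fewer)
    return gpu_h264_encode(gpu_frame)

frame = bytes(1920 * 1080 * 4)  # one uncompressed 1080p RGBA frame
print(len(classic_cloud_path(frame)), len(grid_style_path(frame)))
```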

Continue reading our editorial on the new NVIDIA GeForce GRID cloud gaming technology!!

Author:
Manufacturer: NVIDIA

GK104 takes a step down

While the graphics power found in the new GeForce GTX 690, the GeForce GTX 680 and even the Radeon HD 7970 is incredibly impressive, if we are really honest with ourselves the real meat of the GPU market shops at prices well below $999.  Today's not-so-well-kept-secret release of the GeForce GTX 670 attempts to bring the price of entry to the NVIDIA Kepler architecture down to a more attainable level while also resetting the performance per dollar metrics of the GPU world once again.

02.JPG

The GeForce GTX 670 is in fact a very close cousin to the GeForce GTX 680 with only a single SMX unit disabled and a more compelling $399 price tag.

The GTX 670 GPU - Nearly as fast as the GTX 680

The secret is out - GK104 finds its way onto a third graphics card in just two months - but in this iteration the hardware has been reduced slightly. 

blockdiagram.jpg

The GTX 670 block diagram we hacked together above is really just a GTX 680 diagram with a single SMX unit disabled.  While the GTX 680 sported a total of 1536 CUDA cores broken up into eight 192-core SMX units, the new GTX 670 will include 1344 cores.  This will also drop the texture unit count to 112 (from 128 on the GTX 680), though the ROP count stays at 32 thanks to the continued use of a 256-bit memory interface.
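
As a quick sanity check of those numbers, here is a minimal Python sketch; the 16 texture units per SMX figure simply comes from dividing the GTX 680's 128 texture units across its eight SMX units.

```python
CORES_PER_SMX = 192
TEX_UNITS_PER_SMX = 16  # 128 texture units / 8 SMX on the GTX 680

for name, smx in (("GTX 680", 8), ("GTX 670", 7)):
    print(f"{name}: {smx * CORES_PER_SMX} CUDA cores, {smx * TEX_UNITS_PER_SMX} texture units")

# The 32 ROPs are tied to the 256-bit memory interface, not the SMX count,
# which is why they are unchanged on the GTX 670.
```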

Continue reading our review of the NVIDIA GeForce GTX 670 2GB graphics card!!

Author:
Manufacturer: NVIDIA

GTX 690 Specifications

On Thursday May the 3rd at 10am PDT / 1pm EDT, stop by the PC Perspective Live page for an NVIDIA and PC Perspective hosted event surrounding the GeForce GTX 690 graphics card. Ryan Shrout and Tom Petersen will be on hand to talk about the technology and performance characteristics, as well as answer questions from the community via the chat room, Twitter, etc. Be sure to catch it all at http://pcper.com/live

Okay, so it's not a surprise to you at all, or if it is, you haven't been paying attention.  Today is the first on-sale date and review release for the new NVIDIA GeForce GTX 690 4GB dual-GPU Kepler graphics card that we first announced in late April.  This is the dream card for any PC gamer out there, combining a pair of GTX 680 GK104 GPUs on a single PCB and running them in a single-card SLI configuration, and it is easily the fastest single card we have ever tested.  It is also the most expensive reference card we have ever seen with a hefty $999 price tag. 

16.jpg

So how does it perform?  How about efficiency and power consumption - does the GTX 690 suffer the same problems the GTX 590 did?  Can AMD hope to compete with a dual-GPU HD 7990 card in the future?  All that and more in our review!

Kepler Architecture Overview

For those of you that may have missed the boat on the GTX 680 launch, the first card to use NVIDIA's new Kepler GPU architecture, you should definitely head over and read my review and analysis of that before heading into the deep-dive on the GTX 690 here today.  

dieshot.jpg

Kepler is a 3.54 billion transistor GPU with 1536 CUDA cores / stream processors contained within, and even in a single GPU configuration it is able to produce some impressive PC gaming performance results.  The new SMX-based design has some modest differences from Fermi, the most dramatic of which is the removal of the "hot clock" - the scheme that ran the shaders at twice the clock speed of the rest of the GPU.  Now, the entire chip runs at one speed, higher than 1 GHz on the GTX 680.  

Each SMX on Kepler now includes 192 CUDA cores as opposed to the 32 cores found in each SM on Fermi - a change that has increased efficiency and performance per watt quite dramatically.  
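
A back-of-the-envelope comparison shows why the wider, single-clock SMX design still comes out far ahead despite losing the hot clock. The 1006 MHz GTX 680 clock used below is an assumption (the text only says "higher than 1 GHz"); the GTX 580 figure follows from its 772 MHz core clock and 2x shader clock.

```python
def fp32_gflops(cores: int, shader_clock_mhz: float) -> float:
    # 2 floating point operations per core per clock (multiply-add)
    return cores * shader_clock_mhz * 2 / 1000.0

gtx580 = fp32_gflops(cores=512, shader_clock_mhz=2 * 772)   # Fermi, hot clock
gtx680 = fp32_gflops(cores=1536, shader_clock_mhz=1006)     # Kepler, single clock

print(f"GTX 580: ~{gtx580:.0f} GFLOPS, GTX 680: ~{gtx680:.0f} GFLOPS")
```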

As I said above, there are a lot more details on the changes in our GeForce GTX 680 review.

The GeForce GTX 690 Specifications

Many of the details surrounding the GTX 690 have already been revealed by NVIDIA's CEO Jen-Hsun Huang during a GeForce LAN event in China last week.  The card is going to be fast, expensive and is built out of components and materials we haven't seen any graphics card utilize before.

nvidia06.jpg

Despite the high performance level of the card, the GTX 690 isn't much heavier and isn't much longer than the reference GTX 680 card.  We'll go over the details surrounding the materials, cooler and output configuration on the next page, but let's take some time just to look at and debate the performance specifications.

Continue reading our review of the NVIDIA GeForce GTX 690 dual-Kepler graphics card!!

Author:
Manufacturer: Galaxy

Retail Ready

When the NVIDIA GeForce GTX 680 launched in March we were incredibly impressed with the performance and technology that the GPU was able to offer while also being power efficient.  Fast forward nearly a month and we are still having problems finding the card in stock - a HUGE negative towards the perception of the card and the company at this point.

Still, we are promised by NVIDIA and its partners that they will soon have more on shelves, so we continue to look at the performance configurations and prepare articles and reviews for you.  Today we are taking a look at the Galaxy GeForce GTX 680 2GB card - their most basic model that is based on the reference design.

If you haven't done all the proper reading about the GeForce GTX 680 and the Kepler GPU, you should definitely check out my article from March that goes into a lot more detail on that subject before diving into our review of the Galaxy card.

The Card, In Pictures

01.jpg

The Galaxy GTX 680 is essentially identical to the reference design with the addition of some branding along the front and top of the card.  The card is still a dual-slot design, still requires a pair of 6-pin power connections and uses a very quiet fan in relation to the competition from AMD.

Continue reading our review of the Galaxy GeForce GTX 680 2GB Graphics Card!!

Author:
Manufacturer: MSI

Will it Strike Again?

It can now be claimed that we are arguably in our 4th generation of Lightning products from MSI. It can also be claimed that the 3rd generation of products really put that brand on the mainstream map. The R6970 and N580GTX (and XE version) set new standards for enthusiast grade graphics cards. Outstanding construction, unique PCB design, high quality (and quantity) of components, and a good eye for overall price have all been hallmarks of these cards. These were honestly some of my favorite video cards of all time. Call me biased, but looking through other reviews I think those writers felt much the same. MSI certainly hit a couple of home runs with their three Lightning offerings of 2011.

r7970_01.jpg

Time does not stand still.  Resting on laurels is always the surest way to lose out to more aggressive competitors.  It is now 2012 and AMD has already launched the latest generation of HD 7000 chips, with the top end being the HD 7970.  This particular product was launched in late December, but cards were not available until January 9th of 2012.  We are now at the end of March where we see a decent volume of products on the shelves, as well as some of the first of the non-reference designs hitting the streets.  Currently Asus has its DirectCU II based 7970, but now we finally get to see the Lightning treatment.

MSI has not sat upon their laurels, it seems.  They are taking an aggressive approach to the new Lightning series of cards, and they implement quite a few unique features that have not been seen on any other product before.  Now the question is: did they pull it off?  Throwing more features at something does not necessarily equal success.  The increase in complexity of a design combined with other unknowns with the new features could make it a failure.  Just look at the R5870 Lightning for proof.  That particular card trod new ground, but did so in a way that did not adequately differentiate itself from reference HD 5870 designs.  So what is new and how does it run?  Let us dig in!

Continue reading our review of the MSI Radeon HD 7970 3GB Lightning Graphics Card!!

Author:
Manufacturer: NVIDIA

The Kepler Architecture

Join us today at 12pm EST / 9am PST as PC Perspective hosts a Live Review on the new GeForce GTX 680 graphics card.  We will discuss the new GPU technology and important features like GPU Boost, talk about performance compared to AMD's lineup, and we will also have NVIDIA's own Tom Petersen on hand to run some demos and answer questions from viewers.  You can find it all at http://pcper.com/live!!

NVIDIA fans have been eagerly waiting for the new Kepler architecture ever since CEO Jen-Hsun Huang first mentioned it in September 2010. In the interim, we have seen the birth of a complete lineup of AMD graphics cards based on its Southern Islands architecture including the Radeon HD 7970, HD 7950, HD 7800s and HD 7700s.  To the gamer looking for an upgrade it would appear that NVIDIA had fallen behind; but the company is hoping that today's release of the GeForce GTX 680 will put them back in the driver's seat.

This new $499 graphics card will directly compete against the Radeon HD 7970, and it brings quite a few "firsts" to NVIDIA's lineup.  It is NVIDIA's first desktop 28nm GPU, the company's first to offer a clock speed over 1 GHz, the first to support triple-panel gaming on a single card, and the first to offer "boost" clocks that vary from game to game.  Interested yet?  Let's get to the good stuff.

The Kepler Architecture

In many ways, the new 28nm Kepler architecture is just an update to the Fermi design that was first introduced in the GF100 chip.  NVIDIA's Jonah Alben summed things up pretty nicely for us in a discussion stating that "there are lots of tiny things changing (in Kepler) rather than a few large things which makes it difficult to tell a story." 

arch01.png

GTX 680 Block Diagram

Continue reading our review of the new NVIDIA GeForce GTX 680 2GB Graphics Card!!

Author:
Manufacturer: Epic Games

The Truth

There are few people in the gaming industry that you simply must pay attention to when they speak.  One of them is John Carmack, co-founder of id Software, creator of Doom and a friend of the site.  Another is Epic Games' Tim Sweeney, another pioneer in the field of computer graphics who brought us the magic of Unreal before bringing the rest of the gaming industry the Unreal Engine. 

At DICE 2012, a trade show where game developers demo their wares and learn from each other, Sweeney gave a talk on the future of computing hardware.  (You can see the source of my information and slides here at Gamespot.) Many pundits, media and even developers have brought up the idea that the next console generation that we know is coming will be the last - that we will have reached the point in our computing capacity where gamers and designers will be comfortable with the quality and realism provided.  Forever. 

tim-sweeney.jpg

Think about that a moment; has anything ever appeared so obviously crazy?  Yet, in a world where gaming has seemed to regress into the handheld spaces of iPhone and iPad, many would have you believe that it is indeed the case.  Companies like NVIDIA and AMD that spend billions of dollars developing new high-powered graphics technologies would simply NOT do so anymore and instead focus only on low power.  Actually...that is kind of happening with NVIDIA Tegra and AMD's move to APUs, but both claim that the development of leading graphics technology is what allows them to feed the low end - the sub-$100 graphics cards, SoC for phones and tablets and more.

Sweeney started the discussion by teaching everyone a little about human anatomy. 

01.jpg

The human eye has been studied quite extensively, and the amount of information we know about it would likely surprise you.  With 120 million monochrome receptors and 5 million color receptors, the eye and brain are able to do what even our most advanced cameras are unable to.

Continue reading our story on the computing needs for visual computing!!

Author:
Manufacturer: AMD

Completing the Family

When we went to Austin, Texas to sit with AMD and learn about the Radeon HD 7900 series of cards for the first time, an interesting thing happened.  While the official meeting was about the performance of the Radeon HD 7970 and HD 7950, when things started to settle several AMD employees couldn't help but discuss Cape Verde (7700-series) and Pitcairn (7800-series) GPUs.  In particular, the HD 7800 cards were generating a lot of excitement internally as a spiritual follow up to the wildly successful HD 5800 and HD 5700 series of cards in terms of price and performance characteristics. 

slide01.jpg

So while the Radeon HD 7970 and HD 7950 are being labeled as the world's fastest GPUs, and the Radeon HD 7700 is the fastest GPU for everyone, the HD 7800s are where many of our readers will look when upgrading their machines while staying within a budget.  

Be sure to check out our video review posted here and then continue on to our full, written review for all the benchmarks and analysis!!!

Continue reading our review of the Radeon HD 7870 and HD 7850 Graphics Cards!!

Author:
Manufacturer: Asus

AMD Gets the Direct CU Treatment

In the previous roundup I covered the DirectCU II models from Asus featuring NVIDIA based chips.  These boards included the GTX 580, 570, and 560 products.  All of these were DirectCU II based with all the updated features that are included as compared to the original DirectCU products.  With the AMD parts, Asus has split the top four products into two categories: DirectCU II and the original DirectCU.  When we start looking at thermal properties and price points, we will see why Asus took this route.

asus_4amd_01.jpg

AMD has had a strong couple of years with their graphics chips.  While they were not able to take the single GPU performance crown in this previous generation, their products were very capable and competitive across the board and at every price point.  In fact, there are some features that these cards have at particular price points that make them very desirable in quite a few applications.  One example is the 2 GB of memory on the HD 6900 series cards, where the competition from NVIDIA at those price points features 1 GB or 1.25 GB.  In titles such as Skyrim with the HD texture DLC enabled, those lower-memory cards start to limit performance at 1920x1080 and above due to the memory requirements of the higher resolution textures.

Read the entire article here.

Author:
Manufacturer: MSI Computer

MSI's Alex Chang Speaks Up

MSI was founded in 1986 and started producing motherboards and video cards for the quickly growing PC market.  Throughout the life of the company they have further diversified their offerings to include barebones systems, notebooks, networking/communication devices, and industrial products.  While MSI has a nice base of products, they are still primarily a motherboard and video card company.  In the past 10 years MSI has become one of the top brands in North America for video cards, and they have taken a very aggressive approach to design with these products.

msi_logo_fx.jpg

I had the chance to send MSI quite a few questions concerning their video card business and how they develop their products.

What is your name, title, and how long have you worked at MSI?

My name is Bob, and I’m…. actually, I’m just Alex Chang. I’m the Associate Marketing Manager. I’ve been with the company for 2 years.

Typically how long does it take from the original reference design card release to when we can first expect to see a Twin Frozr III based card hit retail?  How much longer does it take to create the “Lightning” based products?

Historically, we’ve seen the introduction of a non-reference thermal solution within 2-4 weeks of product launch. As an example, GTX580 was launched in November 2010, and by December there was already a reference PCB GTX580 w/ the Twin Frozr II cooler.

In the case of Lightning cards, the development timeframe is longer due to more R&D, validation, and procurement of components. With GTX580, the timeframe was around 6 months, but moving forward MSI is pulling in the launch timeframe of our flagship products.

r5770_cont1.jpg

Continue reading our interview with MSI's Alex Chang!!

Author:
Manufacturer: AMD

Southern Islands Get Small

When AMD first started to talk to me about the upcoming Southern Islands GPUs they tried to warn me.  Really they did.  "Be prepared for just an onslaught of card releases for 2012," I was told.  In much the same strategy the company took with the HD 6000 series of cards, the new Radeon HD 7000 cards have been trickling out, part by part, so as to make sure the name "AMD" and the brand "Radeon" are showing up as often as possible in your news feeds and on my keyboard.  In late December we wrote our review of the Radeon HD 7970 3GB flagship card and then followed that up in January with a review of the Radeon HD 7950.  In those briefings we were told in a general way about Cape Verde, the Radeon HD 7700 series, and Pitcairn, the Radeon HD 7800 series, but without the details of performance, specifications or release dates.  We have the answer for one more of these families now: Cape Verde.

slides01.jpg

Cape Verde is the smallest of the Southern Islands dies and falls into the sub-$175 graphics market depending on card vendors' pricing and overclocking settings.  The real question we all wanted answered is what performance levels these new cards would offer and whether they could be the TRUE successor to the popular Radeon HD 5770.  While the answer will take pages and pages of details to cement into place, I can say that, while it is an impressive card, I wasn't as excited as I had wanted to be.

But I am getting ahead of myself...  Check out our video review right here and then keep reading on for the full evaluation!!

AMD Cape Verde - the smallest of the Southern Islands

GPU companies like to brag when they are on top - you'll see that as a recurring theme in our story today.  One such case is the success of the Radeon HD 5770 that we mentioned above - it still sits on the throne as the most adopted DX11-capable GPU on the Steam Hardware Survey, one of our best sources of information on the general PC gamer.  

slides02.jpg

While the inclusion of that card, as well as the Radeon HD 5870 and HD 5850, on this list was great for AMD a couple of years ago, the lack of a 6000-series card here shows us that users need another reason to upgrade: another card that is mass market enough (i.e. under $200) and offers performance advantages that really push gamers to spend that extra cheddar.

Bring in the Cape Verde GPU...

Continue reading our review of the Radeon HD 7770 1GB GHz Edition and HD 7750 Graphics cards!!

Author:
Manufacturer: Galaxy

Four Displays for Under $70

Running multiple displays on your PC is becoming a trend that everyone is trying to jump on board with thanks in large part to the push of Eyefinity from AMD over the past few years.  Gaming is a great application for multi-display configurations but in truth game compatibility and game benefits haven't reached the level I had hoped they would by 2012.  But while gaming still has a way to go, the consumer applications for having more than a single monitor continue to expand and cement themselves in the minds of users.

Galaxy is the only NVIDIA partner that is really taking this market seriously with an onslaught of cards branded as MDT, Multiple Display Technology.  Using non-NVIDIA hardware in conjunction with NVIDIA GPUs, Galaxy has created some very unique products for consumers like the recently reviewed GeForce GTX 570 MDT.  Today we are going to be showing you the new Galaxy MDT GeForce GT 520 offering that brings support for a total of four simultaneous display outputs to a card with a reasonable cost of under $120.

The Galaxy MDT GeForce GT 520

Long time readers of PC Perspective already likely know what to expect based on the GPU we are using here but the Galaxy MDT model offers quite a few interesting changes.

01.jpg

02.jpg

The retail packaging clearly indicates the purpose of this card for users looking at running more than two displays.  The GT 520 is not an incredibly powerful GPU when it comes to gaming but Galaxy isn't really pushing the card in that manner.  Here are the general specs of the GPU for those that are interested:

  • 48 CUDA cores
  • 810 MHz core clock
  • 1GB DDR3 memory
  • 900 MHz memory clock
  • 64-bit memory bus width
  • 4 ROPs
  • DirectX 11 support
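
Working from the specs above, here is a quick bandwidth estimate in Python, assuming the 900 MHz figure is the DDR3 command clock (i.e. 1800 MT/s effective data rate).

```python
bus_width_bits = 64
effective_rate_mtps = 900 * 2  # DDR3: two transfers per clock

bandwidth_gb_s = effective_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9
print(f"Peak memory bandwidth: ~{bandwidth_gb_s:.1f} GB/s")  # ~14.4 GB/s
```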

Continue reading our review of the Galaxy MDT GeForce GT 520 graphics card!!

Author:
Manufacturer: Asus

3 NV for DCII

The world of video cards is a much changed place over the past few years.  Where once we saw only “sticker versions” of cards mass produced by a handful of manufacturers, we are now seeing some really nice differentiation from the major manufacturers.  While the first iterations of these new cards are typically mass produced by NVIDIA or AMD and then distributed to their partners for initial sales, these manufacturers are now more consistently getting their own unique versions out to retail in record time.  MSI was one of the first to put out their own unique designs, but now we are seeing Asus becoming much more aggressive with products of their own.

adcII_01.jpg

The DirectCU II line is Asus’ response to the growing number of original designs from other manufacturers.  The easiest way to categorize these designs is that they sit nicely between the very high end, extreme products like the MSI Lightning series and the reference design boards with standard cooling.  These are unique designs that integrate features and cooling solutions that are well above those of reference cards.

DirectCU II applies primarily to the cooling solutions on these boards.  The copper heatpipes in the DirectCU II cooler are in direct contact with the GPU.  These heatpipes are then distributed through two separate aluminum fin arrays, each with their own fan.  So each card has either a dual slot or triple slot cooling solution with two 80 mm fans that dynamically adjust to the temperature of the chip.  The second part of this is branded “Super Alloy Power”, in which Asus has upgraded most of the electrical components on the board to meet higher specifications.  Hi-C caps, proadlizers, polymer caps, and higher quality chokes round out the upgraded components, which should translate into more stable overclocked performance and a longer lifespan.

Read the entire article here.

Author:
Manufacturer: AMD

Tahiti Gets Clipped

It has been just over a month since we first got our hands on the AMD Southern Islands architecture in the form of the Radeon HD 7970 3GB graphics card.  It was then a couple of long weeks as we waited for the consumer to get the chance to buy that same hardware though we had to admit that the $550+ price tags were scaring many away. Originally we were going to have both the Radeon HD 7970 and the Radeon HD 7950 in our hands before January 9th, but that didn't pan out and instead the little brother was held in waiting a bit longer.

Today we are reviewing that sibling, the Radeon HD 7950 3GB GPU that offers basically the same technology and feature set with a slightly diminished core and a matching, slightly diminished price.  In truth I don't think that the estimated MSRP of $449 is going to capture that many more hearts than the $549 price of the HD 7970 did, but AMD is hoping that they can ride their performance advantage to as much profit as they can while they wait for NVIDIA to properly react.  

Check out our video review right here and then continue on to our complete benchmarking analysis!!

Southern Islands Gets Scaled Back a Bit

As I said above, the Radeon HD 7950 3GB is pretty similar to the HD 7970.  It is based on the same 28nm, DirectX 11.1, PCI Express 3.0, 4.31 billion transistor GPU and includes the same massive 3GB frame buffer as its older brother.  The Tahiti GPU is the first of its kind on all of those fronts, but in this card a few of the computational portions are disabled.
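
For reference, the specific numbers behind that statement (the compute unit counts are not given in the excerpt above, so treat them as assumed context rather than figures from this review): Tahiti carries 32 compute units of 64 stream processors each, and the HD 7950 ships with 28 of them enabled.

```python
SP_PER_CU = 64  # stream processors per GCN compute unit

print("HD 7970:", 32 * SP_PER_CU, "stream processors")  # 2048
print("HD 7950:", 28 * SP_PER_CU, "stream processors")  # 1792
```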

If you haven't read up on the Southern Islands architecture, or the Tahiti GPU based on it, you are missing quite a bit of important information on the current lineup of parts from AMD.  I would very much encourage you to head over to our Radeon HD 7970 3GB Tahiti review and look over the first three pages, as it provides a detailed breakdown of the new features and the pretty dramatic shift in design that Southern Islands introduced to the AMD GPU team.  

block-7950.jpg

Continue reading our full review of the Radeon HD 7950 3GB graphics card!!

Author:
Manufacturer: Gigabyte

Guess what? Overclocked.

The NVIDIA GTX 580 GPU, based on the GF110 Fermi architecture, is old but it isn't forgotten.  Released in November of 2010, the GTX 580 held the single GPU performance crown for NVIDIA for more than a year before it was usurped by AMD and the Radeon HD 7970 just this month.  Still, the GTX 580 is a solid high-end enthusiast graphics card with widespread availability and custom designed, overclocked models from numerous vendors, making it a viable option.

Gigabyte sent us this overclocked and custom cooled model quite a while ago, but we had simply fallen behind with other reviews until just after CES.  In today's market the card has a bit of a different role to fill - it surely won't be able to overtake the new AMD Radeon HD 7970, but can it fight the good fight and keep NVIDIA's current lineup of GPUs competitive until Kepler finally shows itself?

The Gigabyte GTX 580 1.5GB Super Overclock Card

With the age of the GTX 580 designs, Gigabyte had plenty of time to perfect their PCB and cooler design.  This model, the Super Overclock (GV-N580SO-15I), comes in well ahead of the standard reference speeds of the GTX 580 but sticks to the same 1.5 GB frame buffer.

01.jpg

The clock speed is set at 855 MHz core and 1025 MHz memory, compared to the 772 MHz core and 1002 MHz memory clocks of the reference design.  That is a very healthy core overclock of nearly 11% that should equate to nearly that big of a gap in gaming performance where the GPU is the real bottleneck.  
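
Expressed against the reference clocks quoted above, the overclocks work out to roughly the following:

```python
core_oc = (855 - 772) / 772 * 100    # ~10.8% core overclock
mem_oc = (1025 - 1002) / 1002 * 100  # ~2.3% memory overclock

print(f"Core: +{core_oc:.1f}%, Memory: +{mem_oc:.1f}%")
```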

Continue reading our review of the Gigabyte GTX 580 1.5GB Super Overclock graphics card!!

Author:
Manufacturer: XFX

Retail-ready HD 7970

We first showed off the power of the new AMD Radeon HD 7970 3GB graphics card in our reference review posted on December 22nd.  If you haven't read all about the new Southern Islands architecture and the Tahiti chip that powers the HD 7970 then you should already be clicking the link above to my review to get up to speed. Once you have done so, please return here to continue.

...

Welcome back, oh wise one.  Now we are ready to proceed.  By now you already know that the Radeon HD 7970 is the fastest GPU on the planet, besting the NVIDIA GTX 580 by a solid 20-30% in most cases.  For our first retail card review we are going to be looking at the XFX Black Edition Double Dissipation, which raises the GPU and memory clocks slightly and offers a new cooler that promises to be more efficient and quieter.  

Let's put XFX to the test!

The XFX Radeon HD 7970 3GB Black Edition Double Dissipation

01.jpg

Because of the use of a completely custom cooler, the XFX HD 7970 Black Edition Double Dissipation looks completely different than the reference model we tested last month though the feature set remains identical.  The silver and black motif works well here.

Continue reading our review of the XFX Radeon HD 7970 3GB Black Edition Double Dissipation!!

Author:
Manufacturer: AMD

The First 28nm GPU Architecture

It is going to be an exciting 2012. Both AMD and NVIDIA are going to be bringing gamers entirely new GPU architectures, Intel has Ivy Bridge up its sleeve and the CPU side of AMD is looking forward to the introduction of the Piledriver lineup. Today though we end 2011 with the official introduction of the AMD Southern Islands GPU design, a completely new architecture from the ground up that engineers have been working on for more than three years.

This GPU will be a first on several fronts: the first 28nm part, the first with support for PCI Express 3.0 and the first to officially support the DirectX 11.1 API coming with Windows 8. Southern Islands is broken up into three different families starting with Tahiti at the high-end, Pitcairn for sweet spot gaming and Cape Verde for budget discrete options. The Radeon HD 7970 card that is launching today with availability in early January is going to be the top-end single GPU option, based on Tahiti.

Let's see what 4.31 billion transistors buys you in today's market.  I have embedded a very short video review here as well for your perusal but of course, you should continue down a bit further for the entire, in-depth review of the Radeon HD 7970 GPU.

Southern Islands - Starting with Tahiti

Before we get into benchmark results we need to get a better understanding of this completely new GPU design that was first divulged in June at the AMD Fusion Developer Summit. At that time, our own lovely and talented Josh Walrath wrote up a great preview of the architecture that remains accurate and pertinent for today's release. We will include some of Josh's analysis here and interject with anything new that we have learned from AMD about the Southern Islands architecture.

When NVIDIA introduced the G80, they took a pretty radical approach to GPU design. Instead of going with previous VLIW architectures which would support operations such as Vec4+Scalar, they went with a completely scalar architecture. This allowed a combination of flexibility of operation types, ease of scheduling, and a high utilization of compute units. AMD has taken a somewhat similar, but still unique approach to their new architecture.
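
A simplified way to see the utilization argument: a VLIW design only fills its issue slots when the compiler can find enough independent operations to pack into each bundle, while a scalar design issues one operation per lane and stays busy by scheduling across many threads. The deliberately crude Python model below uses a five-slot bundle to mirror the Vec4+Scalar arrangement; it is an illustration, not a description of either vendor's actual scheduler.

```python
import math

def vliw_slot_utilization(independent_ops: int, slots: int = 5) -> float:
    """Fraction of issue slots filled when only `independent_ops` operations
    can be packed together (Vec4+Scalar -> 5 slots per bundle)."""
    bundles = math.ceil(independent_ops / slots)
    return independent_ops / (bundles * slots)

# Shader code that exposes little instruction-level parallelism leaves most
# VLIW slots empty, which is the problem a purely scalar design avoids.
for ilp in (1, 2, 4, 5, 7):
    print(f"{ilp} independent ops -> {vliw_slot_utilization(ilp):.0%} of VLIW slots used")
```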

slide21.jpg

Continue reading our review of the AMD Radeon HD 7970 3GB graphics card and Southern Islands architecture!!

Author:
Manufacturer: Galaxy

Galaxy Continues the MDT Push

One of the key selling points for the AMD Radeon series of graphics cards the last few generations has been Eyefinity - the ability to run more than two displays off of a single card while also allowing for 3+ display gaming configurations.  NVIDIA-based solutions required a pair of GPUs running in SLI for this functionality, either standard SLI or the "SLI-on-a-card" solutions like the GTX 590. 

However, another solution has appeared from Galaxy, an NVIDIA partner that has created a series of boards with the MDT moniker - Multi-Display Technology.  Using a separate on-board chip the company has created GTX 560 Ti, GTX 570 and GTX 580 cards that can output to 4 or 5 monitors using only a single NVIDIA GPU, cutting down on costs while offering a feature that no other single-GPU solution could.

12.jpg

Today we are going to be reviewing the Galaxy GeForce GTX 570 MDT X4 card that promises 4 display outputs and a triple-panel seamless gaming surface option for users that want to explore gaming on more than a single monitor inside the NVIDIA ecosystem.  

Continue reading our review of the Galaxy GeForce GTX 570 MDT X4 1.25GB Graphics Card!!

Author:
Manufacturer: NVIDIA

A Temporary Card with a Permanent Place in Our Heart

Today NVIDIA and its partners are announcing availability of a new graphics card that bridges the gap between the $230 GTX 560 Ti and the $330 GTX 570 currently on the market.  The new card promises to offer performance right between those two units with a price to match but with a catch: it is a limited edition part with expected availability only through the next couple of months.

When we first heard rumors about this product back in October I posited that the company would be crazy to simply call this the GeForce GTX 560 Ti Special Edition.  Well...I guess this makes me the jackass.  This new ~$290 GPU will be officially called the "GeForce GTX 560 Ti with 448 Cores". 

Seriously.

The GeForce GTX 560 Ti 448 Core Edition

The GeForce GTX 560 Ti with 448 cores is actually not a GTX 560 Ti at all and in fact is not even built on a GF114 GPU - instead we are looking at a GF110 GPU (the same used on the GeForce GTX 580 and GTX 570 graphics cards) with another SM disabled.  

block_580.jpg

GeForce GTX 580 Diagram

The above diagram shows a full GF110 GPU sporting 512 CUDA cores and the full 16 SMs (streaming multiprocessors) along with all the bells and whistles that go along with that $450 card.  This includes a 384-bit memory bus and a 1.5 GB frame buffer, all adding up to what is still the top performing single graphics card on the market today.  
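
Since GF110's 512 cores are spread evenly across those 16 SMs, the core counts of the cards in this family fall straight out of the number of active SMs. A minimal sketch (the GTX 570's 480-core, 15-SM configuration is included for context):

```python
CORES_PER_SM = 512 // 16  # 32 CUDA cores per GF110 SM

for name, sms in (("GTX 580", 16), ("GTX 570", 15), ("GTX 560 Ti 448", 14)):
    print(f"{name}: {sms} SMs -> {sms * CORES_PER_SM} CUDA cores")
```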

Continue reading our review of the GeForce GTX 560 Ti 448 Core Graphics Card!!

Author:
Manufacturer: Electronic Arts

Introduction, Campaign Testing

Introduction

bf3iontro.jpg

As you might have noticed, we’re a bit excited about Battlefield 3 here at PC Perspective. It promised to pay attention to what PC gamers want, and shockingly, it has come through. Dedicated servers and huge multi-player matches are supported, and the browser based interface is excellent.

If we’re honest, a lot of our hearts have been stirred simply by the way the game looks. There aren’t many titles that really let a modern mid-range graphics card stretch its legs, even at 1080p resolution. Battlefield 3, however, can be demanding - and it looks beautiful. Even with the presets at medium, it’s one of the most attractive games ever.

But what does this mean for laptops? As the resident laptop reviewer at PC Perspective, I know that gaming remains a challenge. The advancements over the last few years have been spectacular, but even so, most of the laptops we test can’t run Just Cause 2 at a playable framerate even with all detail set to low and a resolution of just 1366x768. 

To find out if mobile gamers were given consideration by the developers of Battlefield 3, I installed the game on three different laptops. The results go to show both how far mobile gaming has come and how far it has to go.

Continue reading our article on mobile Battlefield 3 performance!!