Author:
Manufacturer: NVIDIA

GK106 Completes the Circle

The releases of the various Kepler-based graphics cards have been interesting to watch from the outside.  Though NVIDIA certainly spiced things up with the release of the GeForce GTX 680 2GB card back in March, and then with the dual-GPU GTX 690 4GB graphics card, for quite some time NVIDIA was content to leave the sub-$400 markets to AMD's Radeon HD 7000 cards - and, of course, to NVIDIA's own GTX 500-series.

But gamers and enthusiasts are fickle beings - knowing that the GTX 660 was always JUST around the corner, many of you were simply not willing to buy into the GTX 560s floating around Newegg and other online retailers.  AMD benefited greatly from this lack of competition and only recently has NVIDIA started to bring their latest generation of cards to the price points MOST gamers are truly interested in. 

Today we are going to take a look at the brand new GeForce GTX 660, a graphics card with 2GB of frame buffer that will have a starting MSRP of $229.  Coming in $70 under the GTX 660 Ti card released just last month, does the more vanilla GTX 660 have what it takes to repeat the success of the GTX 460?

The GK106 GPU and GeForce GTX 660 2GB

NVIDIA's GK104 GPU is used in the GeForce GTX 690, GTX 680, GTX 670 and even the GTX 660 Ti.  We saw the much smaller GK107 GPU with the GT 640 card, a release I was not impressed with at all.  With the GTX 660 Ti starting at $299 and the GT 640 at $120, there was a WIDE gap in NVIDIA's 600-series lineup that the GTX 660 addresses with an entirely new GPU, the GK106.

First, let's take a quick look at the reference card from NVIDIA for the GeForce GTX 660 2GB - it doesn't differ much from the reference cards for the GTX 660 Ti and even the GTX 670.

01.jpg

The GeForce GTX 660 uses the same half-length PCB that we saw for the first time with the GTX 670, and this will allow retail partners a lot of flexibility with their card designs.

Continue reading our review of the GeForce GTX 660 graphics card!

Author:
Manufacturer: Various

Multiple Contenders - EVGA SC

One of the most anticipated graphics card releases of the year occurred this month in the form of the GeForce GTX 660 Ti from NVIDIA, and as you would expect we were there on day one with an in-depth review of the card at reference speeds.

IMG_7336.JPG

The GeForce GTX 660 Ti is based on GK104, and what you might find interesting is that its specifications are nearly identical to those of the GTX 670.  Both utilize 7 SMX units for a total of 1344 stream processors – or CUDA cores – and both run at a reference clock speed of 915 MHz base and 980 MHz Boost.  Both include 112 texture units, though the GeForce GTX 660 Ti does see a drop in ROP count from 32 to 24. Also, L2 cache drops from 512KB to 384KB along with a memory bus width drop from 256-bit to 192-bit.

We already spent quite a lot of time talking about the GTX 660 Ti compared to the other NVIDIA and AMD GPUs in the market in our review (linked above) as well as on our most recent episode of the PC Perspective Podcast.  Today's story is all about the retail cards we received from various vendors including EVGA, Galaxy, MSI and Zotac. We are going to show you each card's design, the higher clocked settings that were implemented, performance differences between them and finally the overclocking comparisons of all four.  

Continue reading our roundup of four NVIDIA GeForce GTX 660 Ti cards!!

Author:
Manufacturer: NVIDIA

Another GK104 Option for $299

If you missed our live stream with PC Perspective's Ryan Shrout and NVIDIA's Tom Petersen discussing the new GeForce GTX 660 Ti you can find the replay at this link!!

While NVIDIA doesn't like us to use the codenames anymore, very few GPUs are as flexible and as stout as the GK104.  Originally released with the GTX 680, then with the dual-GPU beast known as the GTX 690 and THEN with the more modestly priced GTX 670, this single chip has caused AMD quite a few headaches.  It appears things will only get worse for AMD with the release of the new GeForce GTX 660 Ti today, once again powered by GK104 and the Kepler architecture, this time at the $299 price point.

01.jpg

While many PC gamers lament the lack of games that really push hardware today, NVIDIA has been promoting the GTX 660 Ti as the upgrade option of choice for gamers on a 2-4 year cycle.  Back in 2008 the GTX 260 was the mid-range enthusiast option, while in 2010 it was the Fermi-based GTX 470.  NVIDIA claims GTX 260 users will see more than a 3x performance increase with the 660 Ti, all while generating those pixels more efficiently.

blockdiagram.jpg

I mentioned that the GeForce GTX 660 Ti was based on GK104, and what you might find interesting is that its specifications are nearly identical to those of the GTX 670.  Both utilize 7 SMXs for a total of 1344 stream processors or CUDA cores and both run at a reference clock speed of 915 MHz base and 980 MHz Boost.  Both include 112 texture units, though the GeForce GTX 660 Ti does see a drop in ROP count from 32 to 24 and L2 cache drops from 512KB to 384KB.  Why?
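
Without spoiling the full explanation, here is a plausible way to see why those cuts travel together - our inference from the published totals, not an official NVIDIA breakdown.  On GK104 the ROPs and L2 cache slices appear to be tied to the 64-bit memory controller partitions, so disabling one partition takes its ROPs and cache slice with it along with a quarter of the bus width.  A quick sketch:

```python
# Sketch: deriving GTX 660 Ti back-end specs from GK104's memory partitions.
# Assumption (inferred from published totals): each 64-bit partition carries
# 8 ROPs and 128 KB of L2 cache.
ROPS_PER_PARTITION = 8
L2_KB_PER_PARTITION = 128
BUS_BITS_PER_PARTITION = 64

def back_end_specs(partitions: int) -> dict:
    """Return the back-end configuration for a given active partition count."""
    return {
        "bus_width_bits": partitions * BUS_BITS_PER_PARTITION,
        "rops": partitions * ROPS_PER_PARTITION,
        "l2_cache_kb": partitions * L2_KB_PER_PARTITION,
    }

print(back_end_specs(4))  # GTX 670/680: 256-bit bus, 32 ROPs, 512KB L2
print(back_end_specs(3))  # GTX 660 Ti:  192-bit bus, 24 ROPs, 384KB L2
```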

Continue reading our review of the new NVIDIA GeForce GTX 660 Ti 2GB Graphics Card!!

Author:
Manufacturer: MSI

Spicing up the GTX 670

The Power Edition graphics card series from MSI is a relatively new addition to its lineup. The Power Edition often mimics the design of the higher-end Lightning series, but at a far lower price (and with a somewhat smaller feature set). This allows MSI to split the difference between the reference class boards and the high end Lightning GPUs.

msi_gtx670_pe_01.jpg

Doing this gives users a greater variety of products to choose from and lets them better tailor their purchases to their needs and financial means. Not everyone wants to pay $600 for a GTX 680 Lightning, but what if someone was able to get similar cooling, quality, and overclocking potential for a much lower price?  This is what MSI has done with one of its latest Power Edition cards.

The GTX 670 Power Edition

The NVIDIA GTX 670 cards have received accolades throughout the review press. The card is a great combination of performance, power consumption, heat production, and price. It certainly caused AMD a great amount of alarm, and the company hurriedly cut prices on the HD 7900 series of cards in response. The GTX 670 is a slightly cut-down version of the full GTX 680, and it runs very close to the clock speed of its bigger brother. In fact, other than texture and stream unit count, the cards are nearly identical.

Continue reading the entire review!

Author:
Manufacturer: Galaxy

Overclocked and 4GB Strong

Even though the Kepler GK104 GPU has now matured in the market, there is still a ton of life left in this not-so-small chip, and Galaxy sent us a new graphics card to demonstrate just that.  The Galaxy GeForce GTX 670 GC 4GB card that we are reviewing today takes the GTX 670 GPU (originally released and reviewed on May 10th) and juices it up on two different fronts: clock rates and memory capacity.

IMG_7221.JPG

The Galaxy GTX 670 GC 4GB graphics card is based on GK104 as mentioned above and meets most of the same specifications as the reference GTX 670.  That includes 1344 CUDA cores or stream processors, 112 texture units and 32 ROP units along with a 256-bit GDDR5 memory bus.

The GC in the name indicates that the Galaxy GTX 670 GC 4GB is overclocked as well - this card runs at a 1006 MHz base clock, 1085 MHz Boost clock and 1500 MHz memory clock.  Compared to the defaults of 915 MHz, 980 MHz and 1500 MHz (respectively), this Galaxy model gets roughly a 10% increase in GPU clock speed, though we'll see how much of that translates into gaming performance as we go through our review.
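
The 10% figure is just the ratio of the new clocks to the defaults; a quick sanity check of the math:

```python
# Quick check of the Galaxy GTX 670 GC factory overclock percentages.
clocks = {
    "base":   (915, 1006),   # (reference MHz, Galaxy GC MHz)
    "boost":  (980, 1085),
    "memory": (1500, 1500),
}
for name, (ref, gc) in clocks.items():
    pct = (gc / ref - 1) * 100
    print(f"{name:>6}: {ref} -> {gc} MHz ({pct:+.1f}%)")
# base:   915 -> 1006 MHz (+9.9%)
# boost:  980 -> 1085 MHz (+10.7%)
# memory: 1500 -> 1500 MHz (+0.0%)
```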

As the title of the review indicates, the Galaxy GTX 670 GC also includes 4GB of frame buffer, twice as much as the reference cards.  The goal is obviously to attract gamers with high resolution screens (2560x1600 or 2560x1440) as well as users interested in triple panel NVIDIA Surround gaming.  We test both of those resolutions in our game collection on the following pages to see just how that works out.

IMG_7224.JPG

Continue reading our review of the Galaxy GeForce GTX 670 GC 4GB graphics card!

Author:
Manufacturer: AMD

7950 gets a quick refresh

Back in June, AMD released (or at least announced) an update to the Radeon HD 7970 3GB card called the GHz Edition. Besides the higher clock speeds, the card was the first AMD offering to include PowerTune with Boost–a dynamic clock scaling capability that allowed the GPU to increase clock speeds when power and temperature allowed. 

While similar in concept to the GPU Boost that NVIDIA introduced with the GTX 680 Kepler launch, AMD's Boost is completely predictable and repeatable. Every HD 7970 GHz Edition performs exactly the same regardless of the system or environment it runs in.

boost01.jpg

Here is some commentary that I had on the technology back in June that remains unchanged:

AMD's PowerTune with Boost technology differs from NVIDIA's GPU Boost in a couple of important ways. First, true to its original premise, AMD can guarantee exactly how all Radeon HD 7970 3GB GHz Edition graphics cards will operate, and at what speeds, in any given environment. There should be no variability between the card that I get and the card that you can buy online. Using digital temperature estimation in conjunction with voltage control, the PowerTune implementation of boost is completely deterministic.

As the above diagram illustrates, the "new" part of PowerTune with the GHz Edition is the ability to vary the voltage of the GPU in real-time to address a wider range of qualified clock speeds. On the previous HD 7970s the voltage was locked to a single static value in performance mode, meaning that it would not increase or decrease under load. As AMD stated to us in a conversation just prior to launch, "by having multiple voltages that can be invoked, we can be at a more optimal clock/voltage combination more of the time, and deliver higher average performance."
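
To illustrate the idea, here is a simplified, hypothetical model of deterministic boost - our own sketch, not AMD's actual algorithm, and the state table and power numbers are made up. The key property is that the chosen clock depends only on a digital activity estimate, never on a physical sensor, so every card lands on the same state for the same workload:

```python
# Simplified, hypothetical model of deterministic boost (not AMD's real code).
# States are (clock_mhz, voltage_v) pairs, lowest to highest; numbers are
# illustrative only.
STATES = [(1000, 1.150), (1025, 1.175), (1050, 1.200)]
POWER_LIMIT_W = 250.0

def estimated_power(clock_mhz: float, voltage_v: float, activity: float) -> float:
    """Toy dynamic-power model: P ~ activity * C * V^2 * f (constant folded in)."""
    return activity * 0.18 * (voltage_v ** 2) * clock_mhz

def pick_state(activity: float) -> tuple:
    """Pick the highest qualified clock whose estimated power fits the limit.

    Deterministic: the choice depends only on the workload's digital activity
    estimate, not on a temperature or power sensor, so all cards agree."""
    best = STATES[0]
    for clock, volts in STATES:
        if estimated_power(clock, volts, activity) <= POWER_LIMIT_W:
            best = (clock, volts)
    return best

print(pick_state(activity=0.90))  # light load  -> top boost state (1050 MHz)
print(pick_state(activity=1.05))  # heavy load  -> base state (1000 MHz)
```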

The problem I have with AMD's boost technology is that they are obviously implementing it as a reaction to NVIDIA's technology. That isn't necessarily a bad thing, but the tech feels a little premature because of it. We were provided no tools prior to launch to actually monitor the exact clock speed of the GPU in real-time. The ability to monitor these very small changes in clock speed is paramount to our ability to verify the company's claims, and without it we will have questions about the validity of results. GPU-Z and other applications we usually use to monitor clock speeds (including AMD's driver) only report 1050 MHz as the clock speed–no real-time dynamic changes are being reported.

(As a side note, AMD has promised to showcase their internal tool to show real-time clock speed changes in our Live Review at http://pcper.com/live on Friday the 22nd, 11am PDT / 2pm EDT.) [It has since been archived for your viewing pleasure.]

A point worth making here: AMD still has not released that tool to show us internal steps of clock speeds, and instead told me today that they were waiting for an updated API to allow other software (including their own CCC) to be able to report the precise results.

01.jpg

Today AMD is letting us preview the new HD 7950 3GB card that will be shipping soon with updated clock speeds and Boost support. The new base clock speed of the HD 7950 will be 850 MHz, compared to the 800 MHz of the original reference HD 7950, and the GPU will be able to boost as high as 925 MHz. That should give the new 7950s a solid performance gain over the original, with a clock speed increase of more than 15% when Boost is active.

Continue reading our coverage of the 7950 with Boost, and get your hands on the new firmware!

Author:
Manufacturer: MSI

The HAWK Returns

The $300 to $400 range of video cards has become quite crowded as of late.  Think back to March, when AMD introduced their HD 7800 series of cards; later that month we saw NVIDIA release their GTX 680 card.  Even though NVIDIA held the price/performance crown, AMD continued to offer their products at prices many considered grossly inflated given the competition.  Part of this was justified because NVIDIA simply could not meet demand for their latest cards, which were often unavailable for purchase at MSRP.  Eventually AMD started cutting prices, but this led to another issue: the HD 7950 was approaching the price of the HD 7870 GHz Edition.  The difference in price between these products was around $20, but the 7950 was around 20% faster than the 7870.  This made the HD 7870 (and the slightly higher priced overclocked models) a very unattractive option for buyers.

r7870h_01.jpg

It seems as though AMD and their partners have finally rectified this situation, and just in time.  With NVIDIA finally able to adequately provide stock for both the GTX 680 and GTX 670, the prices on the upper-midrange cards have taken a nice drop to where we feel they should be.  We are now starting to see some very interesting products based on the HD 7850 and HD 7870 cards, one of which we are looking at today.

The MSI R7870 HAWK

The R7870 Hawk utilizes the AMD HD 7870 GPU.  This chip has a reference speed of 1 GHz, but with the Hawk it is increased to a full 1100 MHz.  The GPU has all 20 compute units enabled, for a total of 1280 stream processors.  It pairs a 256-bit memory bus with 2GB of GDDR5 memory at 1200 MHz, which gives a total bandwidth of 153.6 GB/sec.  I am somewhat disappointed that MSI did not give the memory speed a boost, but at least the user can enable that for themselves through the Afterburner software.
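
That bandwidth figure falls out of the standard GDDR5 math: a 1200 MHz command clock moves four bits per pin per clock, across a 256-bit bus.  A quick sketch of the calculation:

```python
# Memory bandwidth for the MSI R7870 HAWK (reference HD 7870 memory config).
memory_clock_mhz = 1200   # GDDR5 command clock
gddr5_data_rate = 4       # GDDR5 moves 4 bits per pin per command clock
bus_width_bits = 256

effective_gbps_per_pin = memory_clock_mhz * gddr5_data_rate / 1000   # 4.8 Gbps
bandwidth_gb_s = effective_gbps_per_pin * bus_width_bits / 8         # bytes/sec

print(f"{effective_gbps_per_pin} Gbps effective, {bandwidth_gb_s} GB/sec")
# -> 4.8 Gbps effective, 153.6 GB/sec
```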

Continue reading our review of the MSI R7870 HAWK Graphics card!!

Author:
Manufacturer: AMD

A new SKU for a new battle

On launch day we hosted AMD's Evan Groenke for an in-studio live interview and discussion about the Radeon HD 7970 GHz Edition.  For the on-demand version of that event, check it out right here.  Enjoy!

AMD has had a good run in the discrete graphics market for quite some time. With the Radeon HD 5000 series, the company was able to take a commanding mindshare (if not marketshare) lead from NVIDIA. While that diminished some with the HD 6000 series going up against NVIDIA's GTX 500 family, the release of the HD 7970 and HD 7950 just before the end of 2011 stepped it up again. AMD was the first to market with a 28nm GPU, the first to support DX11.1, the first with a 3GB frame buffer and the new products were simply much faster than what NVIDIA had at the time. 

AMD enjoyed that position at the top of the GPU market all the way until the NVIDIA GeForce GTX 680 launched in March. In a display of technology that most reviewers never thought possible, NVIDIA had a product that was faster, more power efficient and matched or exceeded just about every feature of the AMD Radeon HD 7000 cards. Availability problems plagued NVIDIA for several months (and we are just now seeing the end of the shortage) and even caused us to do nearly-weekly "stock checks" to update readers. Prices on the HD 7900 cards have slowly crept down to find a place where they are relevant in the market, but AMD appears to not really want to take a back seat to NVIDIA again.

amd01.jpg

While visiting with AMD in Seattle for the Fusion Developer Summit a couple of weeks ago, we were briefed on a new secret: Tahiti 2, known internally as Tahiti XT2 - an updated Radeon HD 7970 GPU that would be shipping soon with higher clock speeds and a new "boost" technology to combat the GTX 680. Even better, this card was going to have a $499 price tag.

Continue reading our review of the AMD Radeon HD 7970 3GB GHz Edition graphics card!!

Author:
Manufacturer: Galaxy

The GK107 GPU

With the release of the Kepler architecture in March of this year, NVIDIA has once again seemed to take back the hearts of PC gamers with a GPU that is both powerful and power efficient. The GK104 has seen product implementations in the GTX 680, the dual-GPU GTX 690 and most recently the GTX 670. With the cheapest of those carrying a $399 price tag, though, there is still a very large portion of the graphics card market that Kepler hasn’t touched but that is being addressed firmly by AMD’s Radeon 7000 series.
 
01.jpg
 
Today we are going to be taking a look at NVIDIA’s latest offering, the sub-$100 card known as the GeForce GT 640 based on a completely new chip, the GK107. Specifically, we have the Galaxy GeForce GT 640 GC factory overclocked card.
 
NVIDIA’s GK107
 
Compared to the GK104 part, GK107 is a much smaller chip, and the GT 640 implementation of it contains two SMX units and 384 CUDA cores. That is a significant drop off compared to the GTX 680 (1536 cores) and the GTX 670 (1344 cores), but it should really come as no surprise to those of you who have followed NVIDIA's GPU families of the past.  The chip has 32 texture units and 16 ROPs.
 
Author:
Manufacturer: MSI

What does $399 buy these days?

I think it is safe to say that MSI makes some pretty nice stuff when it comes to video cards. Their previous generation HD 6000 and GTX 500 series cards were quite popular, and we reviewed more than a handful here. That generation of cards really seemed to cement MSI’s reputation as one of the top video card vendors in the industry in terms of quality, features, and cooling innovation. Now we are moving on to a new generation of cards from both AMD and NVIDIA, and the challenges of keeping up MSI’s reputation seem to have increased.

r7850tf3_01.jpg

The competition has become much more aggressive as of late. Asus has some unique solutions, and companies such as XFX have stepped up their designs to challenge the best of the industry. MSI finds itself in a much more crowded space with upgraded cooler designs, robust feature sets, and pricing that reflects the larger selection of products that fit such niches. The question here is whether MSI’s design methodology for non-reference cards is up to the challenge.

Previously I was able to review the R7970 Lightning from MSI, and it was an impressive card. I had some initial teething problems with that particular model, but a BIOS flash later and some elbow grease allowed it to work as advertised. Today I am looking at the R7950 TwinFrozr3GD5/OC. This card looks to feature a reference PCB combined with a Twin Frozr III cooling solution. I was not entirely sure what to expect with this card, since the Lightning was such a challenge at first.

Click to read the entire article.

Author:
Manufacturer: XFX

XFX Throws into the Midrange Ring

Who is this XFX? This is a brand that I have not dealt with in a long time. In fact, the last time I had an XFX card was some five years ago, and it was in the form of the GeForce 8800 GTX XXX Edition. This was a pretty awesome card for the time, and it seemed to last forever in terms of performance and features in the new DX 10 world of 2007/2008. It was a heavily overclocked card, and it would get really loud during gaming sessions. I can honestly say, though, that this particular card was trouble-free and well built.

xfx_r7800_01.jpg

XFX has not always had a great reputation though, and the company has gone through some very interesting twists and turns over the years. XFX is a subsidiary of Pine Technologies. Initially XFX dealt strictly with NVIDIA-based products, but a few years back, when the graphics market became really tight, NVIDIA dropped several manufacturers and focused its attention on the bigger partners. Among the victims of this tightening were BFG Technologies and XFX. Unlike BFG, XFX was able to negotiate successfully with AMD to transition its product lineup to Radeon products. Since then XFX has been very aggressive in pursuing unique designs based on these AMD products. While previous generation designs did not step far from the reference products, this latest generation is a big step forward for XFX.

Click to continue reading the entire review.

Author:
Manufacturer: NVIDIA

GK110 Specifications

When the Fermi architecture was first discussed in September of 2009 at the NVIDIA GPU Technology Conference, it marked an interesting turn for the company. Not only was NVIDIA releasing details about a GPU that wasn’t going to be available to consumers for another six months, it was also making clear that it was building GPUs not strictly for gaming anymore – HPC and GPGPU were a defining target of all the company’s resources going forward.

Kepler, by contrast, seemed to move back in the other direction, with a consumer graphics release in March of this year and no discussion of the Tesla / Quadro side of the picture. While the company liked to tout that Kepler was built for gamers, I think you’ll find from the information NVIDIA released today that Kepler was still very much designed to be an HPC powerhouse. More than likely NVIDIA’s release schedule was altered by the very successful launch of AMD’s Tahiti graphics cards under the HD 7900 brand. As a result, gamers got access to GK104 before NVIDIA’s flagship professional conference and the announcement of GK110 – a 7.1 billion transistor GPU aimed squarely at parallel computing workloads.

Kepler GK110

With the Fermi design, NVIDIA took a gamble and changed directions, betting that it could develop a microprocessor primarily intended for the professional markets while still appealing to the gaming markets that have sustained it for the majority of the company’s existence. While the GTX 480 flagship consumer card, and the GTX 580 to some degree, had overheating and efficiency drawbacks for gaming workloads compared to AMD GPUs, the GTX 680 based on Kepler GK104 improved on them greatly. NVIDIA has still designed Kepler for high-performance computing, though with a focus this time on power efficiency as well as raw performance - and we haven’t seen the true king of this product line until today.

dieshot.jpg

GK110 Die Shot

Built on the 28nm process technology from TSMC, GK110 is an absolutely MASSIVE chip made up of 7.1 billion transistors, and though NVIDIA hasn’t given us a die size, it is likely coming close to the reticle limit of around 550 square millimeters. NVIDIA is proud to call this chip the most ‘architecturally complex’ microprocessor ever built, and while impressive, that means there is potential for some issues when it comes to producing a chip of this size. The GPU will be able to offer more than 1 TFlop of double precision computing power with greater than 80% efficiency and 3x the performance per watt of Fermi designs.
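
For context on the 1 TFlop double precision claim, peak FP64 throughput is just FP64 units x 2 FLOPS (one fused multiply-add per cycle) x clock speed. Using the 15 SMX / 64 FP64-units-per-SMX configuration NVIDIA described - and a purely placeholder clock on our part, since shipping speeds were not announced - the math works out like this:

```python
# Peak FP64 throughput sketch for GK110 (15 SMX x 64 FP64 units each, per
# NVIDIA's architecture disclosure). Clock speeds below are placeholder
# assumptions only; final product clocks were not announced.
smx_count = 15
fp64_units_per_smx = 64
flops_per_unit_per_clock = 2            # one FMA counts as 2 FLOPS

def dp_tflops(clock_mhz: float) -> float:
    units = smx_count * fp64_units_per_smx          # 960 FP64 units
    return units * flops_per_unit_per_clock * clock_mhz * 1e6 / 1e12

print(dp_tflops(600))   # ~1.15 TFlops - even a modest clock clears 1 TFlop
print(dp_tflops(750))   # ~1.44 TFlops
```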

02.jpg

Continue reading our overview of the newly announced NVIDIA Kepler GK110 GPU!

Author:
Manufacturer: NVIDIA

NVIDIA puts its head in the clouds

Today at the 2012 NVIDIA GPU Technology Conference (GTC), NVIDIA took the wraps off a new cloud gaming technology that promises to reduce latency and improve the quality of streamed gaming using the power of NVIDIA GPUs.  Dubbed GeForce GRID, the technology is being offered to online services like Gaikai and OTOY.

01.jpg

The goal of GRID is to bring the promise of "console quality" gaming to every device a user has.  The term "console quality" is kind of important here, as NVIDIA is trying desperately not to upset all the PC gamers that purchase high-margin GeForce products.  The goal of GRID is pretty simple though, and it should be seen as an evolution of the online streaming gaming that we have covered in the past–like OnLive.  Being able to play high quality games on your TV, your computer, your tablet or even your phone, without the need for high-performance and power-hungry graphics processors, is what many believe the future of gaming is all about.

02.jpg

GRID starts with the Kepler GPU - what NVIDIA is now dubbing the first "cloud GPU" - which has the capability to virtualize graphics processing while remaining power efficient.  The inclusion of a hardware fixed-function video encoder is important as well, as it will aid in the process of compressing images that are delivered over the Internet by the streaming gaming service.

03.jpg

This diagram shows us how the Kepler GPU handles and accelerates the processing required for online gaming services.  On the server side, the necessary process for an image to find its way to the user is more than just a simple render to a frame buffer.  In current cloud gaming scenarios the frame buffer has to be copied to main system memory, compressed on the CPU and then sent via the network connection.  With NVIDIA's GRID technology that capture and compression happens in GPU memory, and the frame can thus be on its way to the gamer faster.

The result is an H.264 stream that is compressed quickly and efficiently, sent out over the network, and decoded for the end user on whatever device they are using.
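
To make the latency advantage concrete, here is a back-of-the-envelope sketch of the two capture paths; every number in it is an illustrative assumption on our part, not a measurement from NVIDIA or anyone else:

```python
# Back-of-the-envelope latency sketch for the two capture paths above.
# All throughput/encode numbers are illustrative assumptions, not measurements.
frame_bytes = 1920 * 1080 * 4                 # one 1080p RGBA frame, ~8.3 MB

pcie_copy_gb_s = 6.0                          # assumed GPU -> system RAM copy rate
cpu_encode_ms = 10.0                          # assumed software H.264 encode time
gpu_encode_ms = 4.0                           # assumed fixed-function encode time

# Classic path: copy the frame across PCIe, then compress on the CPU.
classic_ms = frame_bytes / (pcie_copy_gb_s * 1e9) * 1e3 + cpu_encode_ms
# GRID path: the hardware encoder reads the frame directly in GPU memory.
grid_ms = gpu_encode_ms

print(f"classic path: {classic_ms:.1f} ms per frame")   # ~11.4 ms
print(f"GRID path:    {grid_ms:.1f} ms per frame")      # ~4.0 ms
```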

Continue reading our editorial on the new NVIDIA GeForce GRID cloud gaming technology!!

Author:
Manufacturer: NVIDIA

GK104 takes a step down

While the graphics power found in the new GeForce GTX 690, the GeForce GTX 680 and even the Radeon HD 7970 is incredibly impressive, if we are really honest with ourselves, the real meat of the GPU market buys in at price points much lower than $999.  Today's not-so-well-kept-secret release of the GeForce GTX 670 attempts to bring the price of entry for the NVIDIA Kepler architecture down to a more attainable level while also resetting the performance per dollar metrics of the GPU world once again.

02.JPG

The GeForce GTX 670 is in fact a very close cousin to the GeForce GTX 680 with only a single SMX unit disabled and a more compelling $399 price tag.

The GTX 670 GPU - Nearly as fast as the GTX 680

The secret is out - GK104 finds its way onto a third graphics card in just two months - but in this iteration the hardware has been reduced slightly. 

blockdiagram.jpg

The GTX 670 block diagram we hacked together above is really just a GTX 680 diagram with a single SMX unit disabled.  While the GTX 680 sported a total of 1536 CUDA cores broken up into eight 192-core SMX units, the new GTX 670 will include 1344 cores.  This also drops the texture units to 112 (from 128 on the GTX 680), though the ROP count stays at 32 thanks to the continued use of a 256-bit memory interface.
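
Since the shader core on GK104 scales directly with SMX count (192 CUDA cores and 16 texture units per SMX), the GTX 670's headline numbers fall straight out of the arithmetic, with the ROP count fixed by the unchanged memory interface:

```python
# GK104 shader-core scaling: per-SMX resources on Kepler.
CORES_PER_SMX = 192
TEX_UNITS_PER_SMX = 16

def gk104_specs(smx: int, rops: int = 32) -> dict:
    """Derive core/texture counts from active SMX units. ROPs follow the
    memory interface, which is unchanged between GTX 680 and GTX 670."""
    return {"cuda_cores": smx * CORES_PER_SMX,
            "texture_units": smx * TEX_UNITS_PER_SMX,
            "rops": rops}

print(gk104_specs(8))  # GTX 680: 1536 cores, 128 texture units, 32 ROPs
print(gk104_specs(7))  # GTX 670: 1344 cores, 112 texture units, 32 ROPs
```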

Continue reading our review of the NVIDIA GeForce GTX 670 2GB graphics card!!

Author:
Manufacturer: NVIDIA

GTX 690 Specifications

On Thursday May the 3rd at 10am PDT / 1pm EDT, stop by the PC Perspective Live page for an NVIDIA and PC Perspective hosted event surrounding the GeForce GTX 690 graphics card. Ryan Shrout and Tom Petersen will be on hand to talk about the technology, the performance characteristics as well as answer questions from the community from the chat room, twitter, etc. Be sure to catch it all at http://pcper.com/live

Okay, so it's not a surprise to you at all - or if it is, you haven't been paying attention.  Today is the first on-sale date and review release for the new NVIDIA GeForce GTX 690 4GB dual-GPU Kepler graphics card that we first announced in late April.  This is the dream card for any PC gamer out there, combining a pair of GTX 680 GK104 GPUs on a single PCB running in a single-card SLI configuration, and it is easily the fastest single card we have ever tested.  It is also the most expensive reference card we have ever seen, with a hefty $999 price tag.

16.jpg

So how does it perform?  How about efficiency and power consumption - does the GTX 690 suffer the same problems the GTX 590 did?  Can AMD hope to compete with a dual-GPU HD 7990 card in the future?  All that and more in our review!

Kepler Architecture Overview

For those of you that may have missed the boat on the GTX 680 launch, the first card to use NVIDIA's new Kepler GPU architecture, you should definitely head over and read my review and analysis of that before heading into the deep-dive on the GTX 690 here today.  

dieshot.jpg

Kepler is a 3.54 billion transistor GPU with 1536 CUDA cores / stream processors contained within, and even in a single GPU configuration it is able to produce some impressive PC gaming performance results.  The new SMX-based design has some modest differences from Fermi, the most dramatic of which is the removal of the "hot clock" - the multiplier that ran the shaders at twice the clock speed of the rest of the GPU.  Now the entire chip runs at one speed, higher than 1 GHz on the GTX 680.

Each SMX on Kepler now includes 192 CUDA cores as opposed to the 32 cores found in each SM on Fermi - a change that has increased efficiency and performance per watt quite dramatically.  
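
A simple way to see the trade-off: raw shader throughput is cores x shader clock x 2 (one fused multiply-add per core per clock). Kepler gives up the doubled shader clock but more than makes up for it in core count. A quick sketch using the published GTX 580 and GTX 680 reference clocks:

```python
# Raw single-precision shader throughput: cores x shader clock x 2 (FMA).
def sp_gflops(cores: int, shader_clock_mhz: int) -> float:
    return cores * shader_clock_mhz * 2 / 1e3

# Fermi GTX 580: 512 cores on a 2x "hot clock" (772 MHz base -> 1544 MHz).
print(sp_gflops(512, 1544))    # ~1581 GFLOPS
# Kepler GTX 680: 1536 cores, whole chip at one clock (1006 MHz base).
print(sp_gflops(1536, 1006))   # ~3090 GFLOPS
```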

As I said above, there are a lot more details on the changes in our GeForce GTX 680 review.

The GeForce GTX 690 Specifications

Many of the details surrounding the GTX 690 have already been revealed by NVIDIA's CEO Jen-Hsun Huang during a GeForce LAN event in China last week.  The card is going to be fast, expensive and is built out of components and materials we haven't seen any graphics card utilize before.

nvidia06.jpg

Despite the high performance level of the card, the GTX 690 isn't much heavier and isn't much longer than the reference GTX 680 card.  We'll go over the details surrounding the materials, cooler and output configuration on the next page, but let's take some time just to look at and debate the performance specifications.

Continue reading our review of the NVIDIA GeForce GTX 690 dual-Kepler graphics card!!

Author:
Manufacturer: Galaxy

Retail Ready

When the NVIDIA GeForce GTX 680 launched in March we were incredibly impressed with the performance and technology that the GPU was able to offer while also being power efficient.  Fast forward nearly a month and we are still having problems finding the card in stock - a HUGE negative towards the perception of the card and the company at this point.

Still, we are promised by NVIDIA and its partners that they will soon have more on shelves, so we continue to look at the performance configurations and prepare articles and reviews for you.  Today we are taking a look at the Galaxy GeForce GTX 680 2GB card - their most basic model that is based on the reference design.

If you haven't done all the proper reading about the GeForce GTX 680 and the Kepler GPU, you should definitely check out my article from March that goes into a lot more detail on that subject before diving into our review of the Galaxy card.

The Card, In Pictures

01.jpg

The Galaxy GTX 680 is essentially identical to the reference design with the addition of some branding along the front and top of the card.  The card is still a dual-slot design, still requires a pair of 6-pin power connections and uses a very quiet fan in relation to the competition from AMD.

Continue reading our review of the Galaxy GeForce GTX 680 2GB Graphics Card!!

Author:
Manufacturer: MSI

Will it Strike Again?

We are now arguably in our 4th generation of Lightning products from MSI, and it can certainly be claimed that the 3rd generation of products really put that brand on the mainstream map. The R6970 and N580GTX (and XE version) set new standards for enthusiast grade graphics cards. Outstanding construction, unique PCB design, high quality (and quantity) of components, and a good eye for overall price have all been hallmarks of these cards. These were honestly some of my favorite video cards of all time. Call me biased, but looking through other reviews, I think those writers felt much the same. MSI certainly hit some home runs with their three Lightning offerings of 2011.

r7970_01.jpg

Time does not stand still.  Resting on laurels is always the surest way to lose out to more aggressive competitors.  It is now 2012 and AMD has already launched the latest generation of HD 7000 chips, with the top end being the HD 7970.  That particular product was launched in late December, but cards were not available until January 9th of 2012.  We are now at the end of March, where we see a decent volume of products on the shelves as well as some of the first non-reference designs hitting the streets.  Currently Asus has its DirectCU II based 7970, but now we finally get to see the Lightning treatment.

MSI has not sat upon their laurels, it seems.  They are taking an aggressive approach to the new Lightning series of cards, implementing quite a few unique features that have not been seen on any other product before.  Now the question is: did they pull it off?  Throwing more features at something does not necessarily equal success.  The increased complexity of a design, combined with the other unknowns new features bring, could make it a failure.  Just look at the R5870 Lightning for proof.  That particular card trod new ground, but did so in a way that did not adequately differentiate itself from reference HD 5870 designs.  So what is new and how does it run?  Let us dig in!

Continue reading our review of the MSI Radeon HD 7970 3GB Lightning Graphics Card!!

Author:
Manufacturer: NVIDIA

The Kepler Architecture

Join us today at 12pm EST / 9am PST as PC Perspective hosts a Live Review on the new GeForce GTX 680 graphics card.  We will discuss the new GPU technology and important features like GPU Boost, talk about performance compared to AMD's lineup, and we will also have NVIDIA's own Tom Petersen on hand to run some demos and answer questions from viewers.  You can find it all at http://pcper.com/live!!

NVIDIA fans have been eagerly waiting for the new Kepler architecture ever since CEO Jen-Hsun Huang first mentioned it in September 2010. In the interim, we have seen the birth of a complete lineup of AMD graphics cards based on its Southern Islands architecture including the Radeon HD 7970, HD 7950, HD 7800s and HD 7700s.  To the gamer looking for an upgrade it would appear that NVIDIA had fallen behind; but the company is hoping that today's release of the GeForce GTX 680 will put them back in the driver's seat.

This new $499 graphics card will directly compete against the Radeon HD 7970, and it brings quite a few "firsts" to NVIDIA's lineup.  It is NVIDIA's first desktop 28nm GPU, the company's first to offer a clock speed over 1 GHz, its first to support triple-panel gaming on a single card, and the first to offer "boost" clocks that vary from game to game.  Interested yet?  Let's get to the good stuff.

The Kepler Architecture

In many ways, the new 28nm Kepler architecture is just an update to the Fermi design that was first introduced in the GF100 chip.  NVIDIA's Jonah Alben summed things up pretty nicely for us in a discussion stating that "there are lots of tiny things changing (in Kepler) rather than a few large things which makes it difficult to tell a story." 

arch01.png

GTX 680 Block Diagram

Continue reading our review of the new NVIDIA GeForce GTX 680 2GB Graphics Card!!

Author:
Manufacturer: Epic Games

The Truth

There are few people in the gaming industry that you simply must pay attention to when they speak.  One of them is John Carmack, co-founder of id Software, creator of Doom and a friend of the site.  Another is Epic Games' Tim Sweeney, a fellow pioneer in the field of computer graphics who brought us the magic of Unreal before bringing the rest of the gaming industry the Unreal Engine.

At DICE 2012, a trade show for game developers to demo their wares and learn from each other, Sweeney gave a talk on the future of computing hardware.  (You can see the source of my information and slides here at Gamespot.) Many pundits, media and even developers have brought up the idea that the next console generation, which we know is coming, will be the last - that we will have reached the point in our computing capacity where gamers and designers will be comfortable with the quality and realism provided.  Forever.

tim-sweeney.jpg

Think about that a moment; has anything ever appeared so obviously crazy?  Yet, in a world where gaming has seemed to regress into the handheld spaces of iPhone and iPad, many would have you believe that it is indeed the case.  Companies like NVIDIA and AMD that spend billions of dollars developing new high-powered graphics technologies would simply NOT do so anymore and instead focus only on low power.  Actually...that is kind of happening with NVIDIA Tegra and AMD's move to APUs, but both claim that the development of leading graphics technology is what allows them to feed the low end - the sub-$100 graphics cards, SoC for phones and tablets and more.

Sweeney started the discussion by teaching everyone a little about human anatomy. 

01.jpg

The human eye has been studied quite extensively, and the amount of information we know about it would likely surprise you.  With 120 million monochrome receptors and 5 million color receptors, the eye and brain are able to do what even our most advanced cameras are unable to.

Continue reading our story on the computing needs for visual computing!!

Author:
Manufacturer: AMD

Completing the Family

When we went to Austin, Texas to sit with AMD and learn about the Radeon HD 7900 series of cards for the first time, an interesting thing happened.  While the official meeting was about the performance of the Radeon HD 7970 and HD 7950, when things started to settle several AMD employees couldn't help but discuss the Cape Verde (7700-series) and Pitcairn (7800-series) GPUs.  In particular, the HD 7800 cards were generating a lot of excitement internally as a spiritual follow-up to the wildly successful HD 5800 and HD 5700 series of cards in terms of price and performance characteristics.

slide01.jpg

So while the Radeon HD 7970 and HD 7950 are being labeled as the world's fastest GPUs, and the Radeon HD 7700 is the fastest GPU for everyone, the HD 7800s are where many of our readers will look when upgrading their machines while staying within a budget.  

Be sure to check out our video review posted here and then continue on to our full, written review for all the benchmarks and analysis!!!

Continue reading our review of the Radeon HD 7870 and HD 7850 Graphics Cards!!