Author:
Manufacturer: NVIDIA

Getting even more life from GK104

Have you guys heard about this new GPU from NVIDIA?  It’s called GK104, and it turns out the damn thing has found its way into yet another graphics card this year – the new GeForce GTX 760.  Yup, you read that right: what NVIDIA is calling the last update to the GeForce lineup through Fall 2013 is based on the same GK104 design that we have previously discussed in reviews of the GTX 680, GTX 670, GTX 660 Ti, GTX 690 and, more recently, the GTX 770.  This isn’t a bad thing though!  GK104 has done a fantastic job in every field and market segment that NVIDIA has tossed it into, with solid performance and even better performance per watt than the competition.  It does mean, however, that talking up the architecture is kind of mind numbing at this point…

block.jpg

If you are curious about the Kepler graphics architecture and GK104 in particular, I’m not going to stop you from going back and reading over my initial review of the GTX 680 from January of 2012.  The new GTX 760 takes the same GPU, adds a new and improved version of GPU Boost (the same version we saw in the GTX 770) and lowers the specifications a bit to enable NVIDIA to hit a new price point.  The GTX 760 will be replacing the GTX 660 Ti – that card will be falling into the ether but the GTX 660 will remain, as will everything below it including the GTX 650 Ti Boost, 650 Ti and plain old 650.  The GTX 670 went the way of the dodo with the release of the GTX 770.

01.jpg

Even though the GTX 690 isn't on this list, NVIDIA says it isn't EOL

As for the GeForce GTX 760 itself, it will ship with 1152 CUDA cores running at a base clock of 980 MHz and a typical boost clock of 1033 MHz.  The memory speed remains at 6.0 GHz on a 256-bit memory bus and you can expect to find both 2GB and 4GB frame buffer options from retail partners upon launch.  The 1152 CUDA cores are spread across 6 SMX units, which means you’ll see some parts with 3 GPCs and others with 4 – NVIDIA claims any performance delta between them will be negligible.

Continue reading our review of the NVIDIA GeForce GTX 760 2GB Graphics Card!!

Manufacturer: Adobe

OpenCL Support in a Meaningful Way

Adobe has had OpenCL support since last year. You would never have benefited from its inclusion unless you ran one of two AMD mobility chips under Mac OS X Lion, but it was there. Creative Cloud, predictably, furthers this trend with additional GPGPU support for applications like Photoshop and Premiere Pro.

This leads to some interesting points:

  • How OpenCL is changing the landscape between Intel and AMD
  • What GPU support is curiously absent from Adobe CC for one reason or another
  • Which GPUs are supported despite not... existing, officially.

adobe-cs-products.jpg

This should be very big news for our readers who do production work, whether professionally or as a hobby. If not, how about a little information about certain GPUs that are designed to compete with the GeForce 700-series?

Read on for our thoughts, after the break.

Author:
Manufacturer: NVIDIA

Kepler-based Mobile GPUs

Late last month, just before the tech world blew up from the mess that is Computex, NVIDIA announced a new line of mobility discrete graphics parts under the GTX 700M series label.  At the time we simply posted some news and specifications about the new products but left performance evaluation for a later time.  Today we have that for the highest end offering, the GeForce GTX 780M. 

As seems to be the case with most mobility GPU releases, the GTX 700M series is not really a new GPU and offers only cursory feature improvements.  Based completely on the Kepler line of parts, the GTX 700M series will range from 1536 CUDA cores on the GTX 780M to 768 cores on the GTX 760M.

slide2.jpg

The flagship GTX 780M is essentially a desktop GTX 680 card in a mobile form factor with lower clock speeds.  With 1536 CUDA cores running at 823 MHz (boosting to higher speeds depending on the notebook configuration) and a 256-bit memory controller running at 5 GHz, the GTX 780M will likely be the fastest mobile GPU you can buy.  (And we’ll be testing that in the coming pages.)

The GTX 760M, 765M and 770M offer a range of performance that scales down to 768 cores at 657 MHz.  NVIDIA claims we’ll see the GTX 760M in systems as small as 14-inch and below, weighing around 2 kg, from vendors like MSI and Acer.  For Ultrabooks and thinner machines you’ll have to step down to smaller, less power-hungry GPUs like the GT 750M and 740M, but even then we expect NVIDIA to have much faster gaming performance than the Haswell-based processor graphics.

Continue reading our performance review of the new NVIDIA GeForce GTX 780M mobility GPU!!

Author:
Manufacturer: NVIDIA

A necessary gesture

NVIDIA views the gaming landscape as a constantly shifting medium that starts with the PC, but the company also sees mobile gaming, cloud gaming and even console gaming as part of the overall ecosystem.  All of it is tied together by an investment in content – the game developers and publishers that make the games we play on PCs, tablets, phones and consoles.

nv14.jpg

The slide above shows NVIDIA's targeting for each segment – except for consoles, obviously.  NVIDIA GRID will address the cloud gaming infrastructure, GeForce and the GeForce Experience will continue with PC systems, and NVIDIA SHIELD and the Tegra SoC will get the focus for the mobile and tablet spaces.  I find it interesting that NVIDIA has specifically called out Steam under the PC – maybe a hint of the future for the upcoming Steam Box?

The primary point of focus for today’s press meeting was the commitment that NVIDIA has to the gaming world and to developers.  AMD has been talking up its 4-point attack on gaming, which really starts with its dominance in the console market.  But NVIDIA has been the leader in the PC world for many years and doesn’t see that changing.

nv02.jpg

With several global testing facilities, the most impressive of which exists in Russia, NVIDIA tests more games, more hardware and more settings combinations than you can possibly imagine.  They tune drivers and find optimal playing settings for more than 100 games that are now wrapped up into the GeForce Experience software.  They write tools for developers to find software bottlenecks and test for game streaming latency (with the upcoming SHIELD). They invest more in those areas than any other hardware vendor.

nv03.jpg

This is a list of technologies that NVIDIA claims they invented or developed – an impressive list that includes things like programmable shaders, GPU compute, Boost technology and more. 

nv04.jpg

Many of these turned out to be very important in the development and advancement of gaming – not just for PCs but for ALL gaming.

Continue reading our editorial on NVIDIA's stance on its future in PC gaming!!

Author:
Manufacturer: NVIDIA

GK104 gets cheaper and faster

A week ago today we posted our review of the GeForce GTX 780, NVIDIA's attempt to split the difference between the GTX 680 and the GTX Titan graphics cards in terms of performance and pricing.  Today NVIDIA launches the GeForce GTX 770 that, even though it has a fancy new name, is a card and a GPU that you are very familiar with.

arch01.png

The NVIDIA GK104 GPU Diagram

Based on GK104, the same GPU that powers the GTX 680 (released in March 2012), the GTX 670 and the GTX 690 (though in a pair), the new GeForce GTX 770 has very few changes from the previous models that are really worth noting.  NVIDIA has updated the GPU Boost technology to 2.0 (more granular, better controls in software) but the real changes come in the clock speeds.

specs2.png

The GTX 770 is still built around 4 GPCs and 8 SMXs for a grand total of 1536 CUDA cores, 128 texture units and 32 ROPs.  The clock speeds have increased from 1006 MHz base clock and 1058 MHz Boost up to 1046 MHz base and 1085 MHz Boost.  That is a pretty minor speed bump in reality, an increase of just 4% or so over the previous clock speeds. 

NVIDIA did bump up the GDDR5 memory speed considerably though, going from 6.0 Gbps to 7.0 Gbps, or 1750 MHz.  The memory bus width remains 256-bits wide but the total memory bandwidth has jumped up to 224.3 GB/s.
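That bandwidth figure falls straight out of the data rate and bus width; a quick back-of-envelope check (my own arithmetic, with illustrative variable names):

```python
# Peak GDDR5 bandwidth = effective data rate per pin * bus width in bytes.
data_rate_gbps = 7.0     # effective GDDR5 transfer rate per pin
bus_width_bits = 256     # GTX 770 memory bus width

bandwidth_gbs = data_rate_gbps * bus_width_bits / 8
print(bandwidth_gbs)  # 224.0
```

The nominal math gives 224.0 GB/s; NVIDIA's 224.3 GB/s figure comes from the exact memory clock being slightly above 7.0 Gbps.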

Maybe the best change for PC gamers is the new starting MSRP for the GeForce GTX 770 at $399 - a full $50-60 less than the GTX 680 was selling for as of yesterday.  If you happened to pick up a GTX 680 recently, you are going to want to look into your return options, as this will surely annoy the crap out of you.

If you want more information on the architecture design of the GK104 GPU, check out our initial article on the chip's release from last year.  Otherwise, with those few specification changes out of the way, let's move on to some interesting information.

The NVIDIA GeForce GTX 770 2GB Reference Card

Tired of this design yet?  If so, you'll want to look into some of the non-reference options I'll show you on the next page from other vendors, but I for one am still taken with the design of these cards.  You will find a handful of vendors offering up re-branded GTX 770 options at the outset of release but most will have their own SKUs to showcase.

IMG_9918.JPG

Continue reading our review of the NVIDIA GeForce GTX 770 graphics card!!

Author:
Manufacturer: NVIDIA

GK110 Gets a Lower Price Point

If you want to ask us some questions about the GTX 780 or our review, join us for a LIVE STREAM at 2pm EDT / 11am PDT on our LIVE page.

When NVIDIA released the GeForce GTX Titan in February there was a kind of collective gasp across the enthusiast base.  Half of that intake of air was from people amazed at the performance they were seeing from a single GPU graphics card powered by the GK110 chip.  The other half was from people aghast at the $1000 price point that NVIDIA launched it at.  The GTX Titan was the fastest single GPU card in the world, without any debate, but with it came a cost we hadn't seen in some time.  Even with the debate between it, the GTX 690 and the HD 7990, the Titan was likely my favorite GPU, cost concerns aside.

IMG_9863.JPG

Today we see the extension of the GK110, by cutting it back some, and releasing a new card.  The GeForce GTX 780 3GB is based on the same chip as the GTX Titan but with additional SMX units disabled, a lower CUDA core count and less memory.  But as you'll soon see, the performance delta between it and the GTX 680 and Radeon HD 7970 GHz is pretty impressive.  The $650 price tag though - maybe not.

We held a live stream the day this review launched at http://pcper.com/live.  You can see the replay that goes over our benchmark results and thoughts on the GTX 780 below.

 

The GeForce GTX 780 - A Cut Down GK110

As I mentioned above, the GTX 780 is a pared-down GK110 GPU and for more information on that particular architecture change, you should really take a look at my original GTX Titan launch article from February.  There is a lot more that is different on this part compared to GK104 than simple shader counts, but for gamers most of the focus will rest there. 

The chip itself is a 7.1 billion transistor beast, though a card with the GTX 780 label actually utilizes fewer of those resources.  Below you will find a couple of block diagrams that represent the reduced functionality of the GTX 780 versus the GTX Titan:

block1.png

Continue reading our review of the NVIDIA GeForce GTX 780 3GB GK110 Graphics Card!!

Author:
Manufacturer: Intel

The Intel HD Graphics are joined by Iris

Intel gets a bad rap on the graphics front.  Much of it is warranted, but a lot of it comes down to poor marketing of the technologies and features the company implements and improves on.  When AMD or NVIDIA update a driver, fix a bug or bring a new gaming feature to the table, they make sure that every PC hardware website knows about it and thus that as many PC gamers as possible know about it.  The same cannot be said for Intel - they are much more understated when it comes to tooting their own horn.  Maybe that's because they are afraid of being called out on some aspects, or because they have a little bit of performance envy compared to the discrete options on the market.

Today might be the start of something new from the company though - a bigger focus on the graphics technology in Intel processors.  More than a month before the official public unveiling of the Haswell processors, Intel is opening up about SOME of the changes coming to the Haswell-based graphics products.

We first learned about the changes to Intel's Haswell graphics architecture way back in September of 2012 at the Intel Developer Forum.  It was revealed then that the GT3 design would essentially double theoretical output over the currently existing GT2 design found in Ivy Bridge.  GT2 will continue to exist (though slightly updated) on Haswell and only some versions of Haswell will actually see updates to the higher-performing GT3 options.  

01.jpg

In 2009 Intel announced a drive to increase graphics performance from generation to generation at an exceptional rate.  Not long after, they released the Sandy Bridge CPU and with it the most significant performance increase in processor graphics ever.  Ivy Bridge followed with a nice increase in graphics capability, though not nearly as dramatic as the SNB jump.  Now, according to this graphic, the graphics capability of Haswell will be as much as 75x better than the chipset-based graphics of 2006.  The real question is which variants of Haswell will have that performance level...

02.jpg

I should note right away that even though we are showing you general performance data on graphics, we still don't have all the details on what SKUs will have what features on the mobile and desktop lineups.  Intel appears to be trying to give us as much information as possible without really giving us any information. 

Read more on Haswell's new graphics core here.

Author:
Manufacturer: Various

Our 4K Testing Methods

You may have recently seen a story and video on PC Perspective about a new TV that made its way into the office.  Of particular interest is the fact that the SEIKI SE50UY04 50-in TV is a 4K television; it has a native resolution of 3840x2160.  For those that are unfamiliar with the new upcoming TV and display standards, 3840x2160 is exactly four times the resolution of current 1080p TVs and displays.  Oh, and this TV only cost us $1300.

seiki5.jpg

In that short preview we validated that both NVIDIA and AMD current generation graphics cards support output to this TV at 3840x2160 using an HDMI cable.  You might be surprised to find that HDMI 1.4 can support 4K resolutions, but only at 30 Hz - half the refresh rate of most TVs and monitors (60 Hz 4K TVs most likely won't be available until 2014).  That doesn't mean we are limited to 30 FPS of performance though, far from it.  As you'll see in our testing on the coming pages we were able to push out much higher frame rates using some very high end graphics solutions.
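To put those numbers in perspective, here is my own quick arithmetic on what the panel actually asks of the interface (not a vendor figure):

```python
# Pixels per frame and pixels per second: 4K30 vs the familiar 1080p60.
uhd_pixels = 3840 * 2160   # one 4K frame
fhd_pixels = 1920 * 1080   # one 1080p frame

resolution_ratio = uhd_pixels / fhd_pixels             # pixels per frame
pixel_rate_ratio = (uhd_pixels * 30) / (fhd_pixels * 60)  # pixels per second

print(resolution_ratio)   # 4.0 - four times the pixels per frame
print(pixel_rate_ratio)   # 2.0 - twice the pixel throughput despite 30 Hz
```

So even capped at 30 Hz, the TV still demands double the pixel throughput of a 1080p monitor at 60 Hz.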

I should point out that I am not a TV reviewer and I don't claim to be one, so I'll leave the technical merits of the monitor itself to others.  Instead I will only report on my experiences with it while using Windows and playing games - it's pretty freaking awesome.  The only downside I have found in my time with the TV as a gaming monitor thus far is with the 30 Hz refresh rate and Vsync disabled situations.  Because you are seeing fewer screen refreshes over the same amount of time than you would with a 60 Hz panel, all else being equal, you are getting twice as many "frames" of the game pushed to the monitor each refresh cycle.  This means that the horizontal tearing associated with disabling Vsync will likely be more apparent than it would otherwise be.

4ksizes.png

Image from Digital Trends

I would likely recommend enabling Vsync for a tear-free experience on this TV once you are happy with performance levels, but obviously for our testing we wanted to keep it off to gauge performance of these graphics cards.

Continue reading our results from testing 4K 3840x2160 gaming on high end graphics cards!!

Author:
Manufacturer: AMD

The card we have been expecting

Despite all the issues brought up by our new graphics performance testing methodology, Frame Rating, there is little debate in the industry that AMD is making noise once again in the graphics field - from the elaborate marketing and game bundles included with all Radeon HD 7000 series cards over the last year to the hiring of Roy Taylor, a VP of sales who is also the company's most vocal supporter.

slide1_0.jpg

Along with the marketing goes plenty of technology and important design wins.  With the dominance of the APU on the console side (Wii U, PlayStation 4 and the next Xbox), AMD is making sure that developers' familiarity with its GPU architecture there pays dividends on the PC side as well.  Developers will be focusing on AMD's graphics hardware for 5-10 years with this console generation, and that could result in improved performance and feature support for Radeon graphics for PC gamers.

Today's release of the Radeon HD 7990 6GB Malta dual-GPU graphics card shows a renewed focus on high-end graphics markets since the release of the Radeon HD 7970 in January of 2012.  And while you may have seen something for sale previously with the HD 7990 name attached, those were custom designs built by partners, not by AMD. 

slide2_0.jpg

Both ASUS and PowerColor currently have high-end dual-Tahiti cards for sale.  The PowerColor HD 7990 Devil 13 used the brand directly but ASUS' ARES II kept away from the name and focused on its own high-end card brands instead. 

The "real" Radeon HD 7990 card was first teased at GDC in March and takes a much less dramatic approach to its design without being less impressive technically.  The card includes a pair of Tahiti, HD 7970-class GPUs on a single PCB with 6GB of total memory.  The raw specifications are listed here:

slide6_0.jpg

Considering there are two HD 7970 GPUs on the HD 7990, the doubling of the major specs shouldn't be surprising, though it is a little deceiving.  There are 8.6 billion transistors, yes, but still only 4.3 billion on each GPU.  Yes, there are 4096 stream processors, but only 2048 on each GPU, requiring software GPU scaling to increase performance.  The same goes for texture fill rate, compute performance, memory bandwidth, etc.  The same could be said for all dual-GPU graphics cards though.

Continue reading our review of the AMD Radeon HD 7990 6GB Graphics Card!!

Author:
Manufacturer: Various

A very early look at the future of Catalyst

Today is a very interesting day for AMD.  It marks both the release of the reference design of the Radeon HD 7990 graphics card, a dual-GPU Tahiti behemoth, and the first sample of a change to the CrossFire technology that will improve animation performance across the board.  Both stories are incredibly interesting and as it turns out both feed off of each other in a very important way: the HD 7990 depends on CrossFire and CrossFire depends on this driver. 

If you already read our review (or any review that is using the FCAT / frame capture system) of the Radeon HD 7990, you likely came away somewhat unimpressed.  The combination of two AMD Tahiti GPUs on a single PCB with 6GB of frame buffer SHOULD have been an incredibly exciting release for us and would likely have become the single fastest graphics card on the planet.  That didn't happen though, and our results clearly state why that is the case: AMD CrossFire technology has some serious issues with animation smoothness, runt frames and giving users what they are promised.

Our first results using our Frame Rating performance analysis method were shown during the release of the NVIDIA GeForce GTX Titan card in February.  Since then we have been in constant talks with the folks at AMD to figure out what was wrong, how they could fix it, and what it would mean to gamers to implement frame metering technology.  We followed that story up with several more that showed the current state of performance on the GPU market using Frame Rating that painted CrossFire in a very negative light.  Even though we were accused by some outlets of being biased or that AMD wasn't doing anything incorrectly, we stuck by our results and as it turns out, so does AMD. 

Today's preview of a very early prototype driver shows that the company is serious about fixing the problems we discovered. 

If you are just catching up on the story, you really need some background information.  The best place to start is our article published in late March that goes into detail about how game engines work, how our completely new testing methods work and the problems with AMD CrossFire technology very specifically.  From that piece:

It will become painfully apparent as we dive through the benchmark results on the following pages, but I feel that addressing the issues that CrossFire and Eyefinity are creating up front will make the results easier to understand.  As we showed you for the first time in Frame Rating Part 3, AMD CrossFire configurations have a tendency to produce a lot of runt frames, in many cases in a nearly perfect alternating pattern.  Not only does this mean that frame time variance will be high, but it also tells me that the performance gained by adding a second GPU is completely useless in this case.  Obviously the story would become then, “In Battlefield 3, does it even make sense to use a CrossFire configuration?”  My answer based on the below graph would be no.

runt.jpg

An example of a runt frame in a CrossFire configuration

NVIDIA's solution for getting around this potential problem with SLI was to integrate frame metering, a technology that balances frame presentation to the user and to the game engine in a way that enabled smoother, more consistent frame times and thus smoother animations on the screen.  For GeForce cards, frame metering began as a software solution but was actually integrated as a hardware function on the Fermi design, taking some load off of the driver.

Continue reading our article on the new prototype driver from AMD to address frame pacing issues in CrossFire!!

Author:
Manufacturer: PC Perspective

Not a simple answer

After publishing the Frame Rating Part 3 story, I started to see quite a bit of feedback from readers and other enthusiasts with many requests for information about Vsync and how it might affect the results we are seeing here.  Vertical Sync is the fix for screen tearing, a common artifact seen in gaming (and other mediums) when the frame rendering rate doesn’t match the display’s refresh rate.  Enabling Vsync will force the rendering engine to only display and switch frames in the buffer to match the vertical refresh rate of the monitor or a divisor of it.  So a 60 Hz monitor could only display frames at 16ms (60 FPS), 33ms (30 FPS), 50ms (20 FPS), and so on.
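The divisor rule above can be sketched in a few lines (my own arithmetic, with the exact fractional frame times rather than the rounded 16/33 ms figures):

```python
# With Vsync on, a 60 Hz display can only show frames for a whole number
# of refresh intervals, so frame time = refresh interval * integer divisor.
refresh_hz = 60
divisors = (1, 2, 3)

frame_times_ms = [1000.0 / refresh_hz * d for d in divisors]
fps_values = [refresh_hz / d for d in divisors]

for t, fps in zip(frame_times_ms, fps_values):
    print(f"{t:.1f} ms -> {fps:.0f} FPS")
# 16.7 ms -> 60 FPS
# 33.3 ms -> 30 FPS
# 50.0 ms -> 20 FPS
```

Anything in between those steps is impossible on the display, which is exactly why a render rate of, say, 45 FPS gets quantized into a mix of 60 FPS and 30 FPS frames.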

Many early readers hypothesized that simply enabling Vsync would fix the stutter and runt issues that Frame Rating was bringing to light.  In fact, AMD was a proponent of this fix, as many conversations we have had with the GPU giant trailed into the direction of Vsync as answer to their multi-GPU issues. 

In our continuing research on graphics performance, part of our Frame Rating story line, I recently spent many hours playing games on different hardware configurations and different levels of Vertical Sync.  After this time testing, I am comfortable in saying that I do not think that simply enabling Vsync on platforms that exhibit a large number of runt frames fixes the issue.  It may prevent runts, but it does not actually produce a completely smooth animation. 

To be 100% clear - the issues with Vsync and animation smoothness are not limited to AMD graphics cards or even multi-GPU configurations.  The situations we are demonstrating here present themselves equally on AMD and NVIDIA platforms and with single or dual card configurations, as long as all other parameters are met.  Our goal today is only to compare a typical Vsync situation from either vendor to a reference result at 60 FPS and at 30 FPS; not to compare AMD against NVIDIA!!

Crysis3_1920x1080_PLOT_1.png

In our initial research with Frame Rating, I presented this graph on the page discussing Vsync.  At the time, I left this note with the image:

The single card and SLI configurations with Vsync disabled look just like they did on previous pages, but the graph for GTX 680 SLI with Vsync on is very different.  Frame times are only switching back and forth between 16 ms and 33 ms - 60 and 30 instantaneous FPS - due to the restrictions of Vsync.  What might not be obvious at first is that the constant shifting back and forth between these two rates (two refresh cycles with one frame, then one refresh cycle with one frame) can actually cause more stuttering and animation inconsistencies than would otherwise appear.

Even though I had tested this out and could literally SEE that animation inconsistency, I didn't yet have a way to demonstrate it to our readers - but today I think we do.
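One way to see why the alternating 16/33 ms pattern hurts animation smoothness is a toy calculation (entirely my own illustration; the speed value is arbitrary): an object moving at a constant speed ends up taking uneven steps on screen even though the average frame rate looks healthy.

```python
# An object moves at a constant speed; each frame it advances by
# speed * frame_time. Alternating frame times produce alternating step
# sizes - visible judder - while uniform frame times give smooth motion.
SPEED_PX_PER_S = 600.0  # arbitrary example speed

def step_sizes(frame_times_ms):
    """On-screen distance moved during each frame, in pixels."""
    return [SPEED_PX_PER_S * t / 1000.0 for t in frame_times_ms]

alternating = [16.7, 33.3] * 4   # the pattern the Vsync graph shows
uniform = [25.0] * 8             # same total time, evenly spaced

print(step_sizes(alternating))   # ~10 px, ~20 px, ~10 px... judder
print(step_sizes(uniform))       # steady 15 px steps
```

Both sequences cover the same distance over the same 200 ms, so an average-FPS counter reports them as identical; only the step-to-step variation reveals the stutter.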

The plan for today's article is going to be simple.  I am going to present a set of three videos to you that show side by side runs from different configuration options and tell you what I think we are seeing in each result.  Then on another page, I'm going to show you three more videos and see if you can pinpoint the problems on your own.

Continue reading our article on the effects of Vsync on gaming animation smoothness!!

Author:
Manufacturer: PC Perspective

What to look for and our Test Setup

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

 

Today marks the conclusion of our first complete round up of Frame Rating results, the culmination of testing that was started 18 months ago.  Hopefully you have caught our other articles on the subject at hand, and you really will need to read up on the Frame Rating Dissected story above to truly understand the testing methods and results shown in this article.  Use the links above to find the previous articles!

To round out our Frame Rating testing in this iteration, we are looking at more cards further down the product stack in two different sets.  The first comparison will look at the AMD Radeon HD 7870 GHz Edition and the NVIDIA GeForce GTX 660 graphics cards in both single and dual-card configurations.  Just like we saw with our HD 7970 vs GTX 680 and our HD 7950 vs GTX 660 Ti testing, evaluating how the GPUs compare in our new and improved testing methodology in single GPU configurations is just as important as testing in SLI and CrossFire.  The GTX 660 ($199 at Newegg.com) and the HD 7870 ($229 at Newegg.com) are the closest matches in terms of pricing, though both cards have some interesting game bundle options as well.

7870.jpg

AMD's Radeon HD 7870 GHz Edition

Our second set of results will only be looking at single GPU performance numbers for lower cost graphics cards: the AMD Radeon HD 7850 and Radeon HD 7790, and from NVIDIA the GeForce GTX 650 Ti and GTX 650 Ti BOOST.  We didn't include multi-GPU results on these cards simply due to time constraints internally and because we are eager to move on to further Frame Rating testing and input testing.

gtx660.jpg

NVIDIA's GeForce GTX 660


If you are just joining this article series today, you have missed a lot!  If nothing else you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with.  In short, we are moving away from using FRAPS for average frame rates. We are using a secondary hardware capture system to record each frame of game play as the monitor would receive it. That recorded video is then analyzed to measure real world performance.

Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine) its results can vary dramatically from what is presented to the end user on their display.  Frame Rating solves that problem by recording video through a dual-link DVI capture card that emulates a monitor to the testing system and by simply applying a unique overlay color on each produced frame from the game, we can gather a new kind of information that tells a very unique story.
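Conceptually, the post-capture analysis reduces to counting how many scanlines each overlay color occupies within a refresh; a frame that only fills a sliver of the screen is a "runt". Here is a hedged sketch of that idea - the 21-line threshold and the data layout are my own illustrative choices, not the actual Frame Rating tooling:

```python
# After capture, tag each scanline with the overlay color of the frame it
# came from, group consecutive lines by color, and flag tiny runs as runts.
from itertools import groupby

RUNT_THRESHOLD_LINES = 21  # hypothetical cutoff for "too small to matter"

def find_runts(scanline_colors):
    """Group consecutive scanlines by overlay color; return tiny frames."""
    runs = [(color, len(list(group))) for color, group in groupby(scanline_colors)]
    return [(color, count) for color, count in runs if count < RUNT_THRESHOLD_LINES]

# One captured 1080-line refresh: frame A, a 10-line runt B, then frame C.
capture = ["A"] * 600 + ["B"] * 10 + ["C"] * 470
print(find_runts(capture))  # [('B', 10)]
```

A runt like frame B still increments a FRAPS-style frame counter, yet contributes almost nothing the gamer can actually see - which is exactly the gap between the two measurement points.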

card1.jpg

The capture card that makes all of this work possible.

I don't want to spend too much time on this part of the story here as I already wrote a solid 16,000 words on the topic in our first article and I think you'll really find the results fascinating.  So, please check out my first article on the topic if you have any questions before diving into these results today!

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX 660 2GB
                AMD Radeon HD 7870 2GB
                NVIDIA GeForce GTX 650 Ti 1GB
                NVIDIA GeForce GTX 650 Ti BOOST 2GB
                AMD Radeon HD 7850 2GB
                AMD Radeon HD 7790 1GB
Graphics Drivers: AMD 13.2 beta 7
                  NVIDIA 314.07 beta
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

On to the results! 

Continue reading our review of the GTX 660 and HD 7870 using Frame Rating!!

What to Look For, Test Setup

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

We are back again with another edition of our continued reveal of data from the capture-based Frame Rating GPU performance methods.  In this third segment we are moving on down the product stack to the NVIDIA GeForce GTX 660 Ti and the AMD Radeon HD 7950 - both cards that fall into a similar price range.

gtx660ti.JPG

I have gotten many questions about why we are using the cards in each comparison and the answer is pretty straightforward: pricing.  In our first article we looked at the Radeon HD 7970 GHz Edition and the GeForce GTX 680, while in the second we compared the Radeon HD 7990 (HD 7970s in CrossFire), the GeForce GTX 690 and the GeForce GTX Titan.  This time around we have the GeForce GTX 660 Ti ($289 on Newegg.com) and the Radeon HD 7950 ($299 on Newegg.com), but we did not include the GeForce GTX 670 because it sits much higher at $359 or so.  I know some of you are going to be disappointed that it isn't in here, but I promise we'll see it again in a future piece!


If you are just joining this article series today, you have missed a lot!  If nothing else you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with.  In short, we are moving away from using FRAPS for average frame rates or even frame times.  Instead, we use a secondary hardware capture system to record every frame of game play as it would be displayed to the gamer, then post-process that recording to measure real world performance.

Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine) its results can vary dramatically from what is presented to the end user on their display.  Frame Rating solves that problem by recording video through a dual-link DVI capture card that emulates a monitor to the testing system and by simply applying a unique overlay color on each produced frame from the game, we can gather a new kind of information that tells a very unique story.

card1.jpg

The capture card that makes all of this work possible.

I don't want to spend too much time on this part of the story here as I already wrote a solid 16,000 words on the topic in our first article and I think you'll really find the results fascinating.  So, please check out my first article on the topic if you have any questions before diving into these results today!

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX 660 Ti 2GB; AMD Radeon HD 7950 3GB
Graphics Drivers: AMD 13.2 beta 7; NVIDIA 314.07 beta
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

 

On to the results! 

Continue reading our review of the GTX 660 Ti and HD 7950 using Frame Rating!!

Manufacturer: NVIDIA

NVIDIA releases the GeForce GT 700M family

NVIDIA revolutionized gaming on the desktop with the release of its 600-series Kepler-based graphics cards in March 2012. With the release of the GeForce GT 700M series, Kepler enters the mobile arena to power laptops, ultrabooks, and all-in-one systems.

Today, NVIDIA introduces four new members of its mobile line: the GeForce GT 750M, the GeForce GT 740M, the GeForce GT 735M, and the GeForce GT 720M. These four new mobile graphics processors join the previously-released members of the GeForce GT 700M series: the GeForce GT 730M and the GeForce GT 710M. With the exception of the Fermi-based GeForce GT 720M, all of the newly-released mobile cores are based on NVIDIA's 28nm Kepler architecture.

Notebooks based on the GeForce GT 700M series will offer built-in support for the following new technologies:

Automatic Battery Savings through NVIDIA Optimus Technology

02-optimus-tech-slide.PNG

Automatic Game Configuration through the GeForce Experience

03-gf-exp.PNG

Automatic Performance Optimization through NVIDIA GPU Boost 2.0

03-gpu-boost-20.PNG

Continue reading our release coverage of the NVIDIA GTX 700M series!

Summary Thus Far

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

Welcome to the second in our initial series of articles focusing on Frame Rating, our new graphics and GPU performance testing methodology that drastically changes how the community looks at single and multi-GPU performance.  In this article we are going to focus on a different set of graphics cards: the highest performing single-card options on the market, including the GeForce GTX 690 4GB dual-GK104 card, the GeForce GTX Titan 6GB GK110-based monster, as well as the Radeon HD 7990, though in an emulated form.  The HD 7990 was only recently officially announced by AMD at this year's Game Developers Conference, but the specifications of that hardware should closely match what we have here on the testbed today - a pair of retail Radeon HD 7970s in CrossFire. 

titancard.JPG

Will the GTX Titan look as good in Frame Rating as it did upon its release?

If you are just joining this article series today, you have missed a lot!  If nothing else you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with.  In short, we are moving away from using FRAPS for average frame rates or even frame times; instead we are using a secondary hardware capture system to record every frame of our game play as it would be displayed to the gamer, then running post-process analysis on that recorded file to measure real-world performance.

Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine), its results can vary dramatically from what is presented to the end user on their display.  Frame Rating solves that problem by recording video through a dual-link DVI capture card that emulates a monitor to the testing system.  By applying a unique overlay color to each frame the game produces, we can gather a new kind of information that tells a much more complete story.

card1.jpg

The capture card that makes all of this work possible.

I don't want to spend too much time on this part of the story here as I already wrote a solid 16,000 words on the topic in our first article and I think you'll really find the results fascinating.  So, please check out my first article on the topic if you have any questions before diving into these results today!

 

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX TITAN 6GB; NVIDIA GeForce GTX 690 4GB; AMD Radeon HD 7970 CrossFire 3GB
Graphics Drivers: AMD 13.2 beta 7; NVIDIA 314.07 beta (GTX 690); NVIDIA 314.09 beta (GTX TITAN)
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

 

On to the results! 

Continue reading our review of the GTX Titan, GTX 690 and HD 7990 using Frame Rating!!

How Games Work

 

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

 

Introduction

The process of testing games and graphics has been evolving even longer than I have been a part of the industry: 14+ years at this point. That transformation in benchmarking has been accelerating for the last 12 months. Typical benchmarks test some hardware against some software and look at the average frame rate that can be achieved. While access to frame times has been around for nearly the full life of FRAPS, it took an article from Scott Wasson at the Tech Report to really get the ball rolling and investigate how each frame contributes to the actual user experience. I immediately began researching how to test the actual performance perceived by the user, including the "microstutter" reported by many in PC gaming, and pondered how we might be able to measure it even more accurately.

The result of that research is being fully unveiled today in what we are calling Frame Rating – a completely new way of measuring and validating gaming performance.

The release of this story for me is like the final stop on a journey that has lasted nearly a complete calendar year.  I began to release bits and pieces of this methodology starting on January 3rd with a video and short article that described our capture hardware and the benefits that directly capturing the output from a graphics card would bring to GPU evaluation.  After returning from CES later in January, I posted another short video and article that showcased some of the captured video and stepped through a recorded file frame by frame to show readers how capture could help us detect and measure stutter and frame time variance. 

card4.jpg

Finally, during the launch of the NVIDIA GeForce GTX Titan graphics card, I released the first results from our Frame Rating system and discussed how certain card combinations, in this case CrossFire against SLI, could drastically differ in perceived frame rates and performance while giving very similar average frame rates.  This article got a lot more attention than the previous entries and that was expected – this method doesn’t attempt to dismiss other testing options but it is going to be pretty disruptive.  I think the remainder of this article will prove that. 

Today we are finally giving you all the details on Frame Rating; how we do it, what we learned and how you should interpret the results that we are providing.  I warn you up front though that this is not an easy discussion and while I am doing my best to explain things completely, there are going to be more questions going forward and I want to see them all!  There is still much to do regarding graphics performance testing, even after Frame Rating becomes more common. We feel that the continued dialogue with readers, game developers and hardware designers is necessary to get it right.

Below is our full video that features the Frame Rating process, some example results and some discussion on what it all means going forward.  I encourage everyone to watch it but you will definitely need the written portion here to fully understand this transition in testing methods.  Subscribe to our YouTube channel if you haven't already!

Continue reading our analysis of the new Frame Rating performance testing methodology!!

Author:
Manufacturer: NVIDIA

The GTX 650 Ti Gets Boost and More Memory

In mid-October NVIDIA released the GeForce GTX 650 Ti based on GK106, the same GPU that powers the GTX 660 though with fewer enabled CUDA cores and GPC units.  At the time we were pretty impressed with the 650 Ti:

The GTX 650 Ti has more in common with the GTX 660 than it does the GTX 650, both being based on the GK106 GPU, but is missing some of the unique features that NVIDIA has touted of the 600-series cards like GPU Boost and SLI.

Today's release of the GeForce GTX 650 Ti BOOST actually addresses both of those missing features by moving even closer to the specification sheet found on the GTX 660 cards. 

Our video review of the GTX 650 Ti BOOST and Radeon HD 7790.

block1.jpg

Option 1: Two GPCs with Four SMXs

Just like we saw with the original GTX 650 Ti, there are two different configurations of the GTX 650 Ti BOOST; both have the same primary specifications but will differ in which SMX is disabled on the full GK106 ASIC.  The new card will still have 768 CUDA cores, but clock speeds increase from the original's 925 MHz to a 980 MHz base and a 1033 MHz typical Boost clock.  Texture unit count remains the same at 64.

Continue reading our review of the NVIDIA GeForce GTX 650 Ti BOOST graphics card!!

Author:
Manufacturer: AMD

A New GPU with the Same DNA

When we talked with AMD recently about its leaked roadmap that insinuated we would not see any new GPUs in 2013, they were adamant that other options would be made available to gamers but were coy about saying when and to what degree.  As it turns out, today marks the release of the Radeon HD 7790, a completely new piece of silicon under the Sea Islands designation that uses the same GCN (Graphics Core Next) architecture as the HD 7000-series / Southern Islands GPUs, with a handful of tweaks and advantages ranging from improved clock boosting with PowerTune to faster default memory clocks.

slide02.png

To be clear, the Radeon HD 7790 is a completely new ASIC, not a rebranding of a currently available part, though the differences mostly come down to power routing and a reorganization of the GCN design found in the Cape Verde and Pitcairn parts.  The code name for this particular GPU is Bonaire and it is one of several upcoming updates to the HD 7000 cards. 

Bonaire is built on the same 28nm TSMC process technology that all Southern Islands parts are built on and consists of 2.08 billion transistors in a 160 mm2 die.  Compared to the HD 7800 (Pitcairn) GPU at 212 mm2 and HD 7700 (Cape Verde) at 120 mm2, the chip for the HD 7790 falls right in between.  And while the die images above are likely not completely accurate, it definitely appears that AMD's engineers have reorganized the internals.

slide03.png

Bonaire is built with 14 CUs (compute units) for a total stream processor count of 896, which places it closer to the performance level of the HD 7850 (1024 SPs) than the HD 7770 (640 SPs).  The new Sea Islands GPU includes the same dual tessellation engines as the higher end HD 7000s, along with a 128-bit memory bus that runs at 6.0 Gbps out of the gate on the 1GB frame buffer.  The memory controller is completely reworked in Bonaire and allows for a total memory bandwidth of 96 GB/s, compared to the 72 GB/s of the HD 7770, while peak theoretical compute performance comes in at 1.79 TFLOPS.
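Those bandwidth numbers are easy to sanity check: peak memory bandwidth is simply the bus width in bytes multiplied by the effective per-pin data rate. A minimal sketch of that arithmetic follows; note that the 4.5 Gbps data rate used for the HD 7770 is inferred from its quoted 72 GB/s rather than stated above.

```python
def peak_bandwidth(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width in bytes times the
    effective (DDR) per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

hd7790 = peak_bandwidth(128, 6.0)  # Bonaire: 96.0 GB/s
hd7770 = peak_bandwidth(128, 4.5)  # Cape Verde: 72.0 GB/s (4.5 Gbps inferred)
print(hd7790, hd7770)  # 96.0 72.0
```

The same formula explains why a narrow 128-bit bus at 6.0 Gbps keeps pace with older, wider designs running slower memory.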

The GPU clock rate is set at 1.0 GHz, but there is more on that later.

Continue reading our review of the Sapphire AMD Radeon HD 7790 1GB Bonaire GPU!!

Author:
Manufacturer: PC Perspective

In case you missed it...

UPDATE: We have now published full details on our Frame Rating capture and analysis system as well as an entire host of benchmark results.  Please check it out!!

In one of the last pages of our recent NVIDIA GeForce GTX TITAN graphics card review we included an update to our Frame Rating graphics performance metric that explains the testing method in more detail and shows results for the first time.  Because it was buried so far into the article, I thought it was worth posting this information here as a separate article to solicit feedback from readers and help guide the discussion forward without getting lost in the TITAN shuffle.  If you already read that page of our TITAN review, nothing new is included below. 

I am still planning a full article based on these results sooner rather than later; for now, please leave me your thoughts, comments, ideas and criticisms in the comments below!


Why are you not testing CrossFire??

If you haven't been following our sequence of stories that investigates a completely new testing methodology we are calling "frame rating", then you are really missing out.  (Part 1 is here, part 2 is here.)  The basic premise of Frame Rating is that the performance metrics that the industry is gathering using FRAPS are inaccurate in many cases and do not properly reflect the real-world gaming experience the user has.

Because of that, we are working on another method that uses high-end dual-link DVI capture equipment to directly record the raw output from the graphics card with an overlay technology that allows us to measure frame rates as they are presented on the screen, not as they are presented to the FRAPS software sub-system.  With these tools we can measure average frame rates, frame times and stutter, all in a way that reflects exactly what the viewer sees from the game.

We aren't ready to show our full sets of results yet (soon!), but the problem is that AMD's CrossFire technology shows severe performance degradation when viewed under the Frame Rating microscope that does not show up nearly as dramatically under FRAPS.  As such, I decided that it was simply irresponsible of me to present data to readers that I would then immediately refute on the final pages of this review (Editor: referencing the GTX TITAN article linked above.) - it would be a waste of time for the reader, and people who skip straight to the performance graphs wouldn't know our theory on why the results displayed were invalid.

Many other sites will use FRAPS, will use CrossFire, and there is nothing wrong with that at all.  They are simply presenting data that they believe to be true based on the tools at their disposal.  More data is always better. 

Here are those results and our discussion.  I decided to use the most popular game out today, Battlefield 3, and please keep in mind this is NOT the worst case scenario for AMD CrossFire in any way.  I tested the Radeon HD 7970 GHz Edition in single and CrossFire configurations as well as the GeForce GTX 680 in single and SLI configurations.  To gather results I used two processes:

  1. Run FRAPS while running through a repeatable section and record frame rates and frame times for 60 seconds
  2. Run our Frame Rating capture system with a special overlay that allows us to measure frame rates and frame times with post processing.

Here is an example of what the overlay looks like in Battlefield 3.

fr_sli_1.jpg

Frame Rating capture on GeForce GTX 680s in SLI - Click to Enlarge

The column on the left is the visual result of an overlay that is applied to each and every frame of the game early in the rendering process.  A solid color is added at the PRESENT call (more details to come later) for each individual frame.  When you are playing a game, multiple frames can make it onto the screen during any single refresh cycle of a 60 Hz monitor, and because of that you get a succession of colors down the left hand side.

By measuring the pixel height of those colored columns, and knowing the order in which they should appear beforehand, we can gather the same data that FRAPS does but our results are seen AFTER any driver optimizations and DX changes the game might make.
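To make that extraction step concrete, here is a minimal, hypothetical Python sketch (not our actual analysis tool): given the overlay color of every scanline in one captured 60 Hz refresh, each contiguous run of a single color corresponds to one game frame, and its height as a fraction of the screen converts directly into on-screen display time.

```python
from itertools import groupby

REFRESH_INTERVAL_MS = 1000.0 / 60.0  # one captured refresh spans ~16.67 ms

def frame_times_ms(scanline_colors):
    """Convert one captured refresh's scanline overlay colors (top to
    bottom) into per-game-frame on-screen times in milliseconds.
    Each contiguous run of one color is one game frame."""
    total = len(scanline_colors)
    runs = [(color, sum(1 for _ in group))
            for color, group in groupby(scanline_colors)]
    return [(height / total) * REFRESH_INTERVAL_MS for _, height in runs]

# Example: three game frames visible in one 1080-line refresh,
# 540, 360 and 180 scanlines tall respectively.
colors = ["red"] * 540 + ["lime"] * 360 + ["blue"] * 180
print(frame_times_ms(colors))  # roughly [8.33, 5.56, 2.78]
```

The real pipeline also has to stitch color runs across successive captured refreshes and flag dropped or "runt" frames, but the core measurement is this simple proportion.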

fr_cf_1.jpg

Frame Rating capture on Radeon HD 7970 CrossFire - Click to Enlarge

Here you see a very similar screenshot running on CrossFire.  Notice the thin silver band between the maroon and purple?  That is a complete frame according to FRAPS and most reviews.  Not to us - we think that rendered frame is almost useless. 

Continue reading our 3rd part in a series of Frame Rating and to see our first performance results!!

Author:
Manufacturer: NVIDIA

TITAN is back for more!

Our NVIDIA GeForce GTX TITAN Coverage Schedule:

If you are reading this today, chances are you were here on Tuesday when we first launched our NVIDIA GeForce GTX TITAN features and preview story (accessible from the link above) and were hoping to find benchmarks then.  You didn't, but you will now.  I am here to show you that the TITAN is indeed the single fastest GPU on the market and MAY be the best graphics card (single or dual GPU) on the market depending on your usage model.  Some will argue, some will disagree, but we have an interesting argument to make about this $999 gaming beast.

A brief history of time...er, TITAN

In our previous article we talked all about TITAN's GK110-based GPU, the form factor, card design, GPU Boost 2.0 features and much more, and I would strongly encourage you all to read it before going forward.  If you just want the highlights, I am going to copy and paste some of the most important details below.

IMG_9502.JPG

From a pure specifications standpoint the GeForce GTX TITAN based on GK110 is a powerhouse.  While the full GPU sports a total of 15 SMX units, TITAN will have 14 of them enabled for a total of 2688 shaders and 224 texture units.  Clock speeds on TITAN are a bit lower than on GK104 with a base clock rate of 836 MHz and a Boost Clock of 876 MHz.  As we will show you later in this article though the GPU Boost technology has been updated and changed quite a bit from what we first saw with the GTX 680.
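As a rough sanity check on those numbers, a Kepler GPU's peak single-precision throughput can be estimated as CUDA cores × 2 FLOPs per clock (one fused multiply-add) × clock speed. A quick sketch using the figures above lands right around NVIDIA's quoted ~4.5 TFLOPS for TITAN:

```python
def peak_sp_gflops(cuda_cores, clock_mhz):
    """Peak single-precision throughput in GFLOPS, counting one
    fused multiply-add (2 FLOPs) per core per clock."""
    return cuda_cores * 2 * clock_mhz / 1000.0

base  = peak_sp_gflops(2688, 836)  # ~4494 GFLOPS at the base clock
boost = peak_sp_gflops(2688, 876)  # ~4709 GFLOPS at the Boost clock
print(base, boost)
```

Real-world throughput will sit below these theoretical peaks, and with GPU Boost 2.0 the actual clock varies with thermal headroom, so treat these as upper bounds rather than expected performance.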

The bump in the memory bus width is also key; feeding that many CUDA cores definitely required a boost from 256-bit to 384-bit, a 50% increase.  Even better, the memory bus is still running at 6.0 GHz, resulting in total memory bandwidth of 288.4 GB/s.

blockdiagram2.jpg

Speaking of memory - this card will ship with 6GB on-board.  Yes, 6 GeeBees!!  That is twice as much as AMD's Radeon HD 7970 and three times as much as NVIDIA's own GeForce GTX 680 card.  This is without a doubt a nod to the super-computing capabilities of the GPU and the GPGPU functionality that NVIDIA is enabling with the double precision aspects of GK110.

Continue reading our full review of the NVIDIA GeForce GTX TITAN graphics card with benchmarks and an update on our Frame Rating process!!