Author:
Manufacturer: Intel

The Intel HD Graphics are joined by Iris

Intel gets a bad rap on the graphics front.  Much of it is warranted, but a lot of it really comes down to poor marketing of the technologies and features they implement and improve on.  When AMD or NVIDIA update a driver, fix a bug, or bring a new gaming feature to the table, they make sure that every single PC hardware website knows about it and, thus, that as many PC gamers as possible know about it.  The same cannot be said about Intel though - they are much more understated when it comes to tooting their own horn.  Maybe that's because they are afraid of being called out on some aspects, or maybe they have a little bit of performance envy compared to the discrete options on the market. 

Today might be the start of something new from the company though - a bigger focus on the graphics technology in Intel processors.  More than a month before the official public unveiling of the Haswell processors, Intel is opening up about SOME of the changes coming to Haswell-based graphics products. 

We first learned about the changes to Intel's Haswell graphics architecture way back in September of 2012 at the Intel Developer Forum.  It was revealed then that the GT3 design would essentially double theoretical throughput over the existing GT2 design found in Ivy Bridge.  GT2 will continue to exist (though slightly updated) on Haswell, and only some versions of Haswell will actually get the higher-performing GT3 option.  

01.jpg

In 2009 Intel announced a drive to dramatically increase graphics performance from generation to generation.  Not long after, they released the Sandy Bridge CPU with the most significant performance increase in processor graphics ever.  Ivy Bridge followed with a nice increase in graphics capability, but not nearly as dramatic a jump as SNB.  Now, according to this graphic, the graphics capability of Haswell will be as much as 75x better than the chipset-based graphics from 2006.  The real question is which variants of Haswell will have that performance level...

02.jpg

I should note right away that even though we are showing you general performance data on graphics, we still don't have all the details on what SKUs will have what features on the mobile and desktop lineups.  Intel appears to be trying to give us as much information as possible without really giving us any information. 

Read more on Haswell's new graphics core here.

Author:
Manufacturer: Various

Our 4K Testing Methods

You may have recently seen a story and video on PC Perspective about a new TV that made its way into the office.  Of particular interest is the fact that the SEIKI SE50UY04 50-in TV is a 4K television; it has a native resolution of 3840x2160.  For those unfamiliar with the upcoming TV and display standards, 3840x2160 is exactly four times the resolution of current 1080p TVs and displays.  Oh, and this TV only cost us $1300.

seiki5.jpg

In that short preview we validated that both NVIDIA and AMD current generation graphics cards support output to this TV at 3840x2160 using an HDMI cable.  You might be surprised to find that HDMI 1.4 can support 4K resolutions, but it can only do so at 30 Hz - half the refresh rate of most TVs and monitors (4K TVs capable of 60 Hz most likely won't be available until 2014).  That doesn't mean we are limited to 30 FPS of performance though, far from it.  As you'll see in our testing on the coming pages, we were able to push out much higher frame rates using some very high end graphics solutions.

I should point out that I am not a TV reviewer and I don't claim to be one, so I'll leave the technical merits of the display itself to others.  Instead I will only report on my experiences with it while using Windows and playing games - it's pretty freaking awesome.  The only downside I have found in my time with the TV as a gaming monitor thus far involves the 30 Hz refresh rate with Vsync disabled.  Because you are seeing fewer screen refreshes over the same amount of time than you would with a 60 Hz panel, all else being equal, you are getting twice as many "frames" of the game pushed to the monitor on each refresh cycle.  This means that the horizontal tearing you see when Vsync is disabled will likely be more apparent than it would be otherwise. 

4ksizes.png

Image from Digital Trends

I would likely recommend enabling Vsync for a tear-free experience on this TV once you are happy with performance levels, but obviously for our testing we wanted to keep it off to gauge performance of these graphics cards.

Continue reading our results from testing 4K 3840x2160 gaming on high end graphics cards!!

Author:
Manufacturer: AMD

The card we have been expecting

Despite all the issues that were brought up with our new graphics performance testing methodology we are calling Frame Rating, there is little debate in the industry that AMD is making noise once again in the graphics field.  The evidence runs from the elaborate marketing and game bundles attached to Radeon HD 7000 series cards over the last year to the hiring of Roy Taylor, a VP of sales who has also become the company's most vocal supporter. 

slide1_0.jpg

Along with the marketing comes plenty of technology and important design wins.  With the dominance of its APUs on the console side (Wii U, PlayStation 4 and the next Xbox), AMD is making sure that developers' familiarity with its GPU architecture there pays dividends on the PC side as well.  Developers will be focusing on AMD's graphics hardware for the 5-10 year life of this console generation, and that could result in improved performance and feature support for Radeon graphics for PC gamers. 

Today's release of the Radeon HD 7990 6GB "Malta" dual-GPU graphics card shows a renewed focus on the high-end graphics market for the first time since the release of the Radeon HD 7970 in January of 2012.  And while you may have seen something for sale previously with the HD 7990 name attached, those were custom designs built by partners, not by AMD. 

slide2_0.jpg

Both ASUS and PowerColor currently have high-end dual-Tahiti cards for sale.  The PowerColor HD 7990 Devil 13 used the brand directly but ASUS' ARES II kept away from the name and focused on its own high-end card brands instead. 

The "real" Radeon HD 7990 card was first teased at GDC in March and takes a much less dramatic approach to its design without being less impressive technically.  The card includes a pair of Tahiti, HD 7970-class GPUs on a single PCB with 6GB of total memory.  The raw specifications are listed here:

slide6_0.jpg

Considering there are two HD 7970-class GPUs on the HD 7990, the doubling of the major specs shouldn't be surprising, though it is a little deceiving.  There are 8.6 billion transistors, yes, but still only 4.3 billion on each GPU.  There are 4096 stream processors, but only 2048 per GPU, and software-based multi-GPU scaling is required to turn the second set into additional performance.  The same goes for texture fill rate, compute performance, memory bandwidth, and so on - but the same could be said for any dual-GPU graphics card.

Continue reading our review of the AMD Radeon HD 7990 6GB Graphics Card!!

Author:
Manufacturer: Various

A very early look at the future of Catalyst

Today is a very interesting day for AMD.  It marks both the release of the reference design of the Radeon HD 7990 graphics card, a dual-GPU Tahiti behemoth, and the first sample of a change to the CrossFire technology that will improve animation performance across the board.  Both stories are incredibly interesting and as it turns out both feed off of each other in a very important way: the HD 7990 depends on CrossFire and CrossFire depends on this driver. 

If you already read our review (or any review using the FCAT / frame capture system) of the Radeon HD 7990, you likely came away somewhat unimpressed.  The combination of two AMD Tahiti GPUs on a single PCB with a 6GB frame buffer SHOULD have made for an incredibly exciting release and would likely have produced the single fastest graphics card on the planet.  That didn't happen though, and our results clearly state why: AMD CrossFire technology has some serious issues with animation smoothness, runt frames, and giving users what they are promised. 

Our first results using our Frame Rating performance analysis method were shown during the release of the NVIDIA GeForce GTX Titan card in February.  Since then we have been in constant talks with the folks at AMD to figure out what was wrong, how they could fix it, and what it would mean to gamers to implement frame metering technology.  We followed that story up with several more that showed the current state of performance on the GPU market using Frame Rating, articles that painted CrossFire in a very negative light.  Even though some outlets accused us of being biased or insisted that AMD wasn't doing anything incorrectly, we stuck by our results - and, as it turns out, so does AMD. 

Today's preview of a very early prototype driver shows that the company is serious about fixing the problems we discovered. 

If you are just catching up on the story, you really need some background information.  The best place to start is our article published in late March that goes into detail about how game engines work, how our completely new testing methods work and the problems with AMD CrossFire technology very specifically.  From that piece:

It will become painfully apparent as we dive through the benchmark results on the following pages, but I feel that addressing the issues that CrossFire and Eyefinity are creating up front will make the results easier to understand.  As we showed you for the first time in Frame Rating Part 3, AMD CrossFire configurations have a tendency to produce a lot of runt frames, and in many cases in a nearly perfect alternating pattern.  Not only does this mean that frame time variance will be high, but it also tells me that the performance gained by adding a second GPU is essentially useless in this case.  Obviously the question then becomes, “In Battlefield 3, does it even make sense to use a CrossFire configuration?”  My answer, based on the graph below, would be no.

runt.jpg

An example of a runt frame in a CrossFire configuration

NVIDIA's solution for getting around this potential problem with SLI was to integrate frame metering, a technology that balances frame presentation to the user and to the game engine in a way that enabled smoother, more consistent frame times and thus smoother animations on the screen.  For GeForce cards, frame metering began as a software solution but was actually integrated as a hardware function on the Fermi design, taking some load off of the driver.

Continue reading our article on the new prototype driver from AMD to address frame pacing issues in CrossFire!!

Author:
Manufacturer: PC Perspective

Not a simple answer

After publishing the Frame Rating Part 3 story, I started to see quite a bit of feedback from readers and other enthusiasts, with many requests for information about Vsync and how it might affect the results we are seeing here.  Vertical Sync is the fix for screen tearing, a common artifact seen in gaming (and other mediums) when the frame rendering rate doesn’t match the display’s refresh rate.  Enabling Vsync forces the rendering engine to display and switch buffered frames only in step with the vertical refresh rate of the monitor, or a divisor of it.  So a 60 Hz monitor can only display new frames every 16ms (60 FPS), 33ms (30 FPS), 50ms (20 FPS), and so on.
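The arithmetic behind those steps is simple enough to show in a tiny sketch of our own (not tied to any particular game or driver): with Vsync on a 60 Hz display, every frame's effective time on screen rounds up to the next refresh boundary.

```python
import math

REFRESH_HZ = 60
REFRESH_INTERVAL_MS = 1000.0 / REFRESH_HZ  # ~16.7 ms per refresh cycle

def vsync_frame_time(render_time_ms):
    """Effective display interval for a frame under Vsync on a 60 Hz monitor.

    A new frame can only be shown on a refresh boundary, so its effective
    frame time rounds up to the next whole multiple of the refresh interval.
    """
    cycles = max(1, math.ceil(render_time_ms / REFRESH_INTERVAL_MS))
    return cycles * REFRESH_INTERVAL_MS

# Frames rendered in 10, 20 and 40 ms end up displayed for roughly
# 16.7 ms (60 FPS), 33.3 ms (30 FPS) and 50 ms (20 FPS) respectively.
for t in (10, 20, 40):
    eff = vsync_frame_time(t)
    print(f"render {t} ms -> displayed for {eff:.1f} ms ({1000 / eff:.0f} FPS)")
```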

Many early readers hypothesized that simply enabling Vsync would fix the stutter and runt issues that Frame Rating was bringing to light.  In fact, AMD was a proponent of this fix, as many conversations we have had with the GPU giant trailed into the direction of Vsync as the answer to their multi-GPU issues. 

In our continuing research on graphics performance, part of our Frame Rating story line, I recently spent many hours playing games on different hardware configurations and different levels of Vertical Sync.  After this time testing, I am comfortable in saying that I do not think that simply enabling Vsync on platforms that exhibit a large number of runt frames fixes the issue.  It may prevent runts, but it does not actually produce a completely smooth animation. 

To be 100% clear - the issues with Vsync and animation smoothness are not limited to AMD graphics cards or even multi-GPU configurations.  The situations we are demonstrating here present themselves equally on AMD and NVIDIA platforms and with single or dual card configurations, as long as all other parameters are met.  Our goal today is only to compare a typical Vsync situation from either vendor to a reference result at 60 FPS and at 30 FPS; not to compare AMD against NVIDIA!!

Crysis3_1920x1080_PLOT_1.png

In our initial research with Frame Rating, I presented this graph on the page discussing Vsync.  At the time, I left this note with the image:

The single card and SLI configurations with Vsync disabled look just like they did on previous pages, but the graph for GTX 680 SLI with Vsync on is very different.  Frame times switch back and forth between 16 ms and 33 ms (60 and 30 instantaneous FPS) due to the restrictions of Vsync.  What might not be obvious at first is that the constant shifting back and forth between these two rates (two refresh cycles for one frame, then one refresh cycle for the next) can actually cause more stuttering and animation inconsistency than would otherwise appear.

Even though I had tested this out and could literally SEE that animation inconsistency, I didn't have a way to demonstrate it to our readers - but today I think we do.

The plan for today's article is going to be simple.  I am going to present a set of three videos to you that show side by side runs from different configuration options and tell you what I think we are seeing in each result.  Then on another page, I'm going to show you three more videos and see if you can pinpoint the problems on your own.

Continue reading our article on the effects of Vsync on gaming animation smoothness!!

Author:
Manufacturer: PC Perspective

What to look for and our Test Setup

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

 

Today marks the conclusion of our first complete roundup of Frame Rating results, the culmination of testing that started 18 months ago.  Hopefully you have caught our other articles on the subject, and you really will need to read the Frame Rating Dissected story above to truly understand the testing methods and results shown in this article.  Use the links above to find the previous articles!

To round out our Frame Rating testing in this iteration, we are looking at more cards further down the product stack, in two different sets.  The first comparison will look at the AMD Radeon HD 7870 GHz Edition and the NVIDIA GeForce GTX 660 graphics cards in both single and dual-card configurations.  Just like we saw with our HD 7970 vs GTX 680 and our HD 7950 vs GTX 660 Ti testing, evaluating how the GPUs compare in our new and improved testing methodology in single GPU configurations is just as important as testing in SLI and CrossFire.  The GTX 660 ($199 at Newegg.com) and the HD 7870 ($229 at Newegg.com) are the closest matches in terms of pricing, though both cards have some interesting game bundle options as well.

7870.jpg

AMD's Radeon HD 7870 GHz Edition

Our second set of results will only look at single GPU performance numbers for lower cost graphics cards: the AMD Radeon HD 7850 and Radeon HD 7790, and from NVIDIA the GeForce GTX 650 Ti and GTX 650 Ti BOOST.  We didn't include multi-GPU results on these cards simply due to time constraints internally and because we are eager to move on to further Frame Rating testing and input testing. 

gtx660.jpg

NVIDIA's GeForce GTX 660


If you are just joining this article series today, you have missed a lot!  If nothing else you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with.  In short, we are moving away from using FRAPS for average frame rates.  We are using a secondary hardware capture system to record each frame of game play as the monitor would receive it.  That recorded video is then analyzed to measure real world performance.

Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine), its results can vary dramatically from what is presented to the end user on their display.  Frame Rating solves that problem by recording video through a dual-link DVI capture card that emulates a monitor to the test system.  By applying a unique overlay color to each frame produced by the game, we can gather a new kind of information that tells a very different story.

card1.jpg

The capture card that makes all of this work possible.

I don't want to spend too much time on this part of the story here as I already wrote a solid 16,000 words on the topic in our first article and I think you'll really find the results fascinating.  So, please check out my first article on the topic if you have any questions before diving into these results today!

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX 660 2GB, AMD Radeon HD 7870 2GB, NVIDIA GeForce GTX 650 Ti 1GB, NVIDIA GeForce GTX 650 Ti BOOST 2GB, AMD Radeon HD 7850 2GB, AMD Radeon HD 7790 1GB
Graphics Drivers: AMD 13.2 beta 7; NVIDIA 314.07 beta
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

On to the results! 

Continue reading our review of the GTX 660 and HD 7870 using Frame Rating!!

What to Look For, Test Setup

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

We are back again with another edition of our continued reveal of data from the capture-based Frame Rating GPU performance methods.  In this third segment we are moving on down the product stack to the NVIDIA GeForce GTX 660 Ti and the AMD Radeon HD 7950 - both cards that fall into a similar price range.

gtx660ti.JPG

I have gotten many questions about why we chose the cards in each comparison, and the answer is pretty straightforward: pricing.  In our first article we looked at the Radeon HD 7970 GHz Edition and the GeForce GTX 680, while in the second we compared the Radeon HD 7990 (HD 7970s in CrossFire), the GeForce GTX 690 and the GeForce GTX Titan.  This time around we have the GeForce GTX 660 Ti ($289 on Newegg.com) and the Radeon HD 7950 ($299 on Newegg.com), but we did not include the GeForce GTX 670 because it sits much higher at $359 or so.  I know some of you are going to be disappointed that it isn't in here, but I promise we'll see it again in a future piece!


If you are just joining this article series today, you have missed a lot!  If nothing else you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with.  In short, we are moving away from using FRAPS for average frame rates or even frame times.  Instead we are using a secondary hardware capture system to record all the frames of our game play as they would be displayed to the gamer, then doing post-process analysis on that recorded file to measure real world performance.

Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine), its results can vary dramatically from what is presented to the end user on their display.  Frame Rating solves that problem by recording video through a dual-link DVI capture card that emulates a monitor to the test system.  By applying a unique overlay color to each frame produced by the game, we can gather a new kind of information that tells a very different story.

card1.jpg

The capture card that makes all of this work possible.

I don't want to spend too much time on this part of the story here as I already wrote a solid 16,000 words on the topic in our first article and I think you'll really find the results fascinating.  So, please check out my first article on the topic if you have any questions before diving into these results today!

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX 660 Ti 2GB, AMD Radeon HD 7950 3GB
Graphics Drivers: AMD 13.2 beta 7; NVIDIA 314.07 beta
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

 

On to the results! 

Continue reading our review of the GTX 660 Ti and HD 7950 using Frame Rating!!

Manufacturer: NVIDIA

NVIDIA releases the GeForce GT 700M family

NVIDIA revolutionized gaming on the desktop with the release of its 600-series Kepler-based graphics cards in March 2012. Now, with the GeForce GT 700M series, Kepler enters the mobile arena to power laptops, ultrabooks, and all-in-one systems.

Today, NVIDIA introduces four new members to its mobile line: the GeForce GT 750M, the GeForce GT 740M, the GeForce GT 735M, and the GeForce GT 720M. These four new mobile graphics processors join the previously-released members of the GeForce GT 700M series: the GeForce GT 730M and the GeForce GT 710M. With the exception of the Fermi-based GeForce GT 720M, all of the newly-released mobile cores are based on NVIDIA's 28nm Kepler architecture.

Notebooks based on the GeForce GT 700M series will offer built-in support for the following new technologies:

Automatic Battery Savings through NVIDIA Optimus Technology

02-optimus-tech-slide.PNG

Automatic Game Configuration through the GeForce Experience

03-gf-exp.PNG

Automatic Performance Optimization through NVIDIA GPU Boost 2.0

03-gpu-boost-20.PNG

Continue reading our release coverage of the NVIDIA GeForce GT 700M series!

Summary Thus Far

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

Welcome to the second in our initial series of articles focusing on Frame Rating, our new graphics and GPU performance testing methodology that drastically changes how the community looks at single and multi-GPU performance.  In this article we are going to focus on a different set of graphics cards: the highest-performing single-card options on the market, including the GeForce GTX 690 4GB dual-GK104 card, the GeForce GTX Titan 6GB GK110-based monster, and the Radeon HD 7990, though in an emulated form.  The HD 7990 was only recently officially announced by AMD at this year's Game Developers Conference, but the specifications of that hardware are going to closely match what we have here on the testbed today - a pair of retail Radeon HD 7970s in CrossFire. 

titancard.JPG

Will the GTX Titan look as good in Frame Rating as it did upon its release?

If you are just joining this article series today, you have missed a lot!  If nothing else you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with.  In short, we are moving away from using FRAPS for average frame rates or even frame times.  Instead we are using a secondary hardware capture system to record all the frames of our game play as they would be displayed to the gamer, then doing post-process analysis on that recorded file to measure real world performance.

Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine), its results can vary dramatically from what is presented to the end user on their display.  Frame Rating solves that problem by recording video through a dual-link DVI capture card that emulates a monitor to the test system.  By applying a unique overlay color to each frame produced by the game, we can gather a new kind of information that tells a very different story.

card1.jpg

The capture card that makes all of this work possible.

I don't want to spend too much time on this part of the story here as I already wrote a solid 16,000 words on the topic in our first article and I think you'll really find the results fascinating.  So, please check out my first article on the topic if you have any questions before diving into these results today!

 

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX TITAN 6GB, NVIDIA GeForce GTX 690 4GB, AMD Radeon HD 7970 CrossFire 3GB
Graphics Drivers: AMD 13.2 beta 7; NVIDIA 314.07 beta (GTX 690); NVIDIA 314.09 beta (GTX TITAN)
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

 

On to the results! 

Continue reading our review of the GTX Titan, GTX 690 and HD 7990 using Frame Rating!!

How Games Work

 

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

 

Introduction

The process of testing games and graphics has been evolving even longer than I have been a part of the industry: 14+ years at this point. That transformation in benchmarking has been accelerating over the last 12 months. Typical benchmarks test some hardware against some software and look at the average frame rate that can be achieved. While access to frame time data has been around for nearly the full life of FRAPS, it took an article from Scott Wasson at the Tech Report to really get the ball rolling and investigate how each frame contributes to the actual user experience. I immediately began research into testing the actual performance perceived by the user, including the "microstutter" reported by many in PC gaming, and pondered how we might be able to test for these criteria even more accurately.

The result of that research is being fully unveiled today in what we are calling Frame Rating – a completely new way of measuring and validating gaming performance.

The release of this story for me is like the final stop on a journey that has lasted nearly a complete calendar year.  I began to release bits and pieces of this methodology starting on January 3rd with a video and short article that described our capture hardware and the benefits that directly capturing the output from a graphics card would bring to GPU evaluation.  After returning from CES later in January, I posted another short video and article that showcased some of the captured video and stepped through a recorded file frame by frame to show readers how capture could help us detect and measure stutter and frame time variance. 

card4.jpg

Finally, during the launch of the NVIDIA GeForce GTX Titan graphics card, I released the first results from our Frame Rating system and discussed how certain card combinations, in this case CrossFire against SLI, could drastically differ in perceived frame rates and performance while giving very similar average frame rates.  This article got a lot more attention than the previous entries and that was expected – this method doesn’t attempt to dismiss other testing options but it is going to be pretty disruptive.  I think the remainder of this article will prove that. 

Today we are finally giving you all the details on Frame Rating; how we do it, what we learned and how you should interpret the results that we are providing.  I warn you up front though that this is not an easy discussion and while I am doing my best to explain things completely, there are going to be more questions going forward and I want to see them all!  There is still much to do regarding graphics performance testing, even after Frame Rating becomes more common. We feel that the continued dialogue with readers, game developers and hardware designers is necessary to get it right.

Below is our full video that features the Frame Rating process, some example results and some discussion on what it all means going forward.  I encourage everyone to watch it, but you will definitely need the written portion here to fully understand this transition in testing methods.  Subscribe to our YouTube channel if you haven't already!

Continue reading our analysis of the new Frame Rating performance testing methodology!!

Author:
Manufacturer: NVIDIA

The GTX 650 Ti Gets Boost and More Memory

In mid-October NVIDIA released the GeForce GTX 650 Ti based on GK106, the same GPU that powers the GTX 660 though with fewer enabled CUDA cores and GPC units.  At the time we were pretty impressed with the 650 Ti:

The GTX 650 Ti has more in common with the GTX 660 than it does the GTX 650, both being based on the GK106 GPU, but is missing some of the unique features that NVIDIA has touted for the 600-series cards like GPU Boost and SLI.

Today's release of the GeForce GTX 650 Ti BOOST actually addresses both of those missing features by moving even closer to the specification sheet found on the GTX 660 cards. 

Our video review of the GTX 650 Ti BOOST and Radeon HD 7790.

block1.jpg

Option 1: Two GPCs with Four SMXs

Just like we saw with the original GTX 650 Ti, there are two different configurations of the GTX 650 Ti BOOST; both have the same primary specifications but differ in which SMX is disabled on the full GK106 ASIC.  The newer card still has 768 CUDA cores, but clock speeds increase from the original's 925 MHz to a 980 MHz base clock and a 1033 MHz typical Boost clock.  The texture unit count remains the same at 64.

Continue reading our review of the NVIDIA GeForce GTX 650 Ti BOOST graphics card!!

Author:
Manufacturer: AMD

A New GPU with the Same DNA

When we talked with AMD recently about its leaked roadmap, which insinuated that we would not see any new GPUs in 2013, they were adamant that other options would be made available to gamers but were coy about saying when and to what degree.  As it turns out, today marks the release of the Radeon HD 7790, a completely new piece of silicon under the Sea Islands designation that uses the same GCN (Graphics Core Next) architecture as the HD 7000-series / Southern Islands GPUs, with a handful of tweaks and advantages ranging from improved clock boosting with PowerTune to faster default memory clocks.

slide02.png

To be clear, the Radeon HD 7790 is a completely new ASIC, not a rebranding of a currently available part, though the differences mostly come down to power routing and a reorganization of the GCN layout found in the Cape Verde and Pitcairn designs.  The code name for this particular GPU is Bonaire, and it is one of several upcoming updates to the HD 7000 cards. 

Bonaire is built on the same 28nm TSMC process technology as all Southern Islands parts and consists of 2.08 billion transistors in a 160 mm² die.  Compared to the HD 7800 (Pitcairn) GPU at 212 mm² and the HD 7700 (Cape Verde) at 120 mm², the chip for the HD 7790 falls right in between.  And while the die images above are likely not completely accurate, it definitely appears that AMD's engineers have reorganized the internals.

slide03.png

Bonaire is built with 14 CUs (compute units) for a total stream processor count of 896, which places it closer to the performance level of the HD 7850 (1024 SPs) than to the HD 7770 (640 SPs).  The new Sea Islands GPU includes the same dual tessellation engines as the higher end HD 7000s and a 128-bit memory bus that runs at 6.0 Gbps out of the gate on the 1GB frame buffer.  The memory controller has been completely reworked in Bonaire and allows for a total memory bandwidth of 96 GB/s, compared to the 72 GB/s of the HD 7770, while peak theoretical compute performance comes in at 1.79 TFLOPS.

The GPU clock rate is set at 1.0 GHz, but there is more on that later.
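For anyone who wants to sanity-check those numbers, here is a quick back-of-the-envelope sketch using the figures quoted above; the formulas are the standard ones for GDDR5 bandwidth and single-precision throughput, not anything AMD-specific.

```python
# Bonaire (HD 7790) spec math using the figures quoted above.
bus_width_bits = 128     # memory bus width
mem_rate_gbps  = 6.0     # effective GDDR5 data rate per pin (Gbps)
stream_procs   = 896     # 14 CUs x 64 stream processors
core_clock_ghz = 1.0     # GPU clock

# Memory bandwidth: bus width in bytes times the per-pin data rate.
bandwidth_gbs = (bus_width_bits / 8) * mem_rate_gbps
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")   # -> 96 GB/s

# Peak single-precision compute: 2 FLOPs (fused multiply-add) per SP per clock.
tflops = stream_procs * 2 * core_clock_ghz / 1000
print(f"Peak compute: {tflops:.2f} TFLOPS")             # -> 1.79 TFLOPS
```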

Continue reading our review of the Sapphire AMD Radeon HD 7790 1GB Bonaire GPU!!

Author:
Manufacturer: PC Perspective

In case you missed it...

UPDATE: We have now published full details on our Frame Rating capture and analysis system as well as an entire host of benchmark results.  Please check it out!!

In one of the last pages of our recent NVIDIA GeForce GTX TITAN graphics card review, we included an update on our Frame Rating graphics performance metric that details the testing method in more depth and shows results for the first time.  Because it was buried so far into the article, I thought it was worth posting this information here as a separate article to solicit feedback from readers and help guide the discussion forward without getting lost in the TITAN shuffle.  If you already read that page of our TITAN review, nothing new is included below. 

I am still planning a full article based on these results sooner rather than later; for now, please leave me your thoughts, comments, ideas and criticisms in the comments below!


Why are you not testing CrossFire??

If you haven't been following our sequence of stories that investigates a completely new testing methodology we are calling "frame rating", then you are really missing out.  (Part 1 is here, part 2 is here.)  The basic premise of Frame Rating is that the performance metrics that the industry is gathering using FRAPS are inaccurate in many cases and do not properly reflect the real-world gaming experience the user has.

Because of that, we are working on another method that uses high-end dual-link DVI capture equipment to directly record the raw output from the graphics card with an overlay technology that allows us to measure frame rates as they are presented on the screen, not as they are presented to the FRAPS software sub-system.  With these tools we can measure average frame rates, frame times and stutter, all in a way that reflects exactly what the viewer sees from the game.

We aren't ready to show our full sets of results yet (soon!), but the problem is that AMD's CrossFire technology shows severe performance degradation when viewed under the Frame Rating microscope - degradation that does not show up nearly as dramatically under FRAPS.  As such, I decided that it was simply irresponsible of me to present data to readers that I would then immediately refute on the final pages of this review (Editor: referencing the GTX TITAN article linked above.) - it would be a waste of time for the reader, and people who skip straight to the performance graphs wouldn't know our theory on why the results displayed were invalid.

Many other sites will use FRAPS, will use CrossFire, and there is nothing wrong with that at all.  They are simply presenting data that they believe to be true based on the tools at their disposal.  More data is always better. 

Here are those results and our discussion.  I decided to use the most popular game out today, Battlefield 3, and please keep in mind this is NOT the worst case scenario for AMD CrossFire in any way.  I tested the Radeon HD 7970 GHz Edition in single and CrossFire configurations as well as the GeForce GTX 680 in single and SLI configurations.  To gather results I used two processes:

  1. Run FRAPS while running through a repeatable section and record frame rates and frame times for 60 seconds
  2. Run our Frame Rating capture system with a special overlay that allows us to measure frame rates and frame times with post processing.

Here is an example of what the overlay looks like in Battlefield 3.

fr_sli_1.jpg

Frame Rating capture on GeForce GTX 680s in SLI - Click to Enlarge

The column on the left is an overlay that is applied to each and every frame of the game early in the rendering process.  A solid color is added at the PRESENT call (more details to come later) for each individual frame.  When you are playing a game, multiple frames can make it to the screen during any single 60 Hz refresh cycle of your monitor, and because of that you get a succession of colors down the left hand side.

By measuring the pixel height of those colored columns, and knowing the order in which they should appear beforehand, we can gather the same data that FRAPS does but our results are seen AFTER any driver optimizations and DX changes the game might make.
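As a rough illustration of that analysis step, the sketch below (our own simplification, not the actual extraction tool) walks one captured refresh and counts how many scanlines of the overlay column belong to each color in an assumed palette; each run of scanlines corresponds to one game frame's share of that refresh.

```python
import numpy as np

# Hypothetical ordered palette of overlay colors applied to successive game
# frames; the real overlay cycles through many more colors than this.
OVERLAY_COLORS = np.array([
    (255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0),
])

def overlay_runs(captured_frame, column_x=0):
    """Return (palette_index, scanline_count) runs down the overlay column.

    captured_frame: HxWx3 uint8 array holding one 60 Hz refresh from the
    capture card.  Each run of scanlines sharing a color marks the portion
    of the refresh occupied by one rendered game frame.
    """
    column = captured_frame[:, column_x, :].astype(int)
    # Match every scanline to its nearest palette entry.
    dists = np.abs(column[:, None, :] - OVERLAY_COLORS[None, :, :]).sum(axis=2)
    nearest = dists.argmin(axis=1)

    runs = []
    for idx in nearest:
        if runs and runs[-1][0] == idx:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([int(idx), 1])
    return [(i, n) for i, n in runs]
```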

fr_cf_1.jpg

Frame Rating capture on Radeon HD 7970 CrossFire - Click to Enlarge

Here you see a very similar screenshot running on CrossFire.  Notice the thin silver band between the maroon and purple?  That is a complete frame according to FRAPS and most reviews.  Not to us - we think that rendered frame is almost useless. 
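Building on the column measurement above, here is a hedged sketch of how a sliver like that would be flagged: convert each frame's scanline count into the time it actually occupied on screen and mark anything below a cutoff as a runt.  The 1080-line capture height and the 20-line threshold are illustrative assumptions, not published values.

```python
REFRESH_MS = 1000.0 / 60       # one 60 Hz refresh cycle
CAPTURE_LINES = 1080           # vertical resolution of the capture (assumed)
RUNT_THRESHOLD_LINES = 20      # illustrative cutoff for calling a frame a runt

def classify_frames(scanline_counts):
    """Turn per-frame scanline counts into on-screen time and runt flags."""
    frames = []
    for lines in scanline_counts:
        on_screen_ms = (lines / CAPTURE_LINES) * REFRESH_MS
        frames.append({
            "lines": lines,
            "on_screen_ms": round(on_screen_ms, 2),
            "runt": lines < RUNT_THRESHOLD_LINES,
        })
    return frames

# A thin 8-line sliver squeezed between two full-height frames counts toward
# the FRAPS frame rate but contributes almost nothing to the animation.
print(classify_frames([610, 8, 620]))
```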

Continue reading the third part in our Frame Rating series to see our first performance results!!

Author:
Manufacturer: NVIDIA

TITAN is back for more!

Our NVIDIA GeForce GTX TITAN Coverage Schedule:

If you are reading this today, chances are you were here on Tuesday when we first launched our NVIDIA GeForce GTX TITAN features and preview story (accessible from the link above) and were hoping to find benchmarks then.  You didn't, but you will now.  I am here to show you that the TITAN is indeed the single fastest GPU on the market and MAY be the best graphics card (single or dual GPU) on the market depending on your usage model.  Some will argue, some will disagree, but we have an interesting argument to make about this $999 gaming beast.

A brief history of time...er, TITAN

In our previous article we talked all about TITAN's GK110 GPU, the form factor, card design, GPU Boost 2.0 features and much more, and I would strongly encourage you to read it before going forward.  If you just want the cliff notes, I am going to copy and paste some of the most important details below.

IMG_9502.JPG

From a pure specifications standpoint the GeForce GTX TITAN based on GK110 is a powerhouse.  While the full GPU sports a total of 15 SMX units, TITAN will have 14 of them enabled for a total of 2688 shaders and 224 texture units.  Clock speeds on TITAN are a bit lower than on GK104 with a base clock rate of 836 MHz and a Boost Clock of 876 MHz.  As we will show you later in this article though the GPU Boost technology has been updated and changed quite a bit from what we first saw with the GTX 680.

The bump in the memory bus width is also key; feeding that many CUDA cores definitely required the move from 256-bit to 384-bit, a 50% increase.  Even better, the memory is still running at 6.0 GHz, resulting in total memory bandwidth of 288.4 GB/s.

blockdiagram2.jpg

Speaking of memory - this card will ship with 6GB on-board.  Yes, 6 GeeBees!!  That is twice as much as AMD's Radeon HD 7970 and three times as much as NVIDIA's own GeForce GTX 680 card.  This is without a doubt a nod to the super-computing capabilities of the GPU and the GPGPU functionality that NVIDIA is enabling with the double precision aspects of GK110.

Continue reading our full review of the NVIDIA GeForce GTX TITAN graphics card with benchmarks and an update on our Frame Rating process!!

Author:
Manufacturer: NVIDIA

GK110 Makes Its Way to Gamers

Our NVIDIA GeForce GTX TITAN Coverage Schedule:

Back in May of 2012, NVIDIA released information on GK110, a new GPU that the company was targeting at HPC (high performance computing) and GPGPU markets that are eager for more processing power.  Almost immediately the questions began about when we might see the GK110 part make its way to consumers and gamers, in addition to finding a home in supercomputers like Cray's Titan system, capable of 17.59 petaflops. 

 

Video Loading...

Watch this same video on our YouTube channel

02.jpg

Nine months later we finally have an answer - the GeForce GTX TITAN is a consumer graphics card built around the GK110 GPU.  Comprising 2,688 CUDA cores and 7.1 billion transistors with a die size of 551 mm², the GTX TITAN is a big step forward (both in performance and physical size).

specs3.jpg

From a pure specifications standpoint the GeForce GTX TITAN based on GK110 is a powerhouse.  While the full GPU sports a total of 15 SMX units, TITAN will have 14 of them enabled for a total of 2688 shaders and 224 texture units.  Clock speeds on TITAN are a bit lower than on GK104 with a base clock rate of 836 MHz and a Boost Clock of 876 MHz.  As we will show you later in this article though the GPU Boost technology has been updated and changed quite a bit from what we first saw with the GTX 680.

The bump in the memory bus width is also key; feeding that many CUDA cores definitely required the move from 256-bit to 384-bit, a 50% increase.  Even better, the memory is still running at 6.0 GHz, resulting in total memory bandwidth of 288.4 GB/s.

Continue reading our preview of the brand new NVIDIA GeForce GTX TITAN graphics card!!

Author:
Manufacturer: Futuremark

The Ice Storm Test

Love it or hate it, 3DMark has a unique place in the world of PC gaming and enthusiasts.  Since 3DMark99 was released...in 1998...with a target on DirectX 6, Futuremark has been developing benchmarks on a regular basis, in time with major API changes and major hardware changes.  The most recent release, 3DMark 11, has been out since late 2010 and has been a regular part of many graphics card reviews on PC Perspective. 

Today Futuremark is not only releasing a new version of the benchmark but is also taking a fundamentally different approach to performance testing and platforms.  The new 3DMark, just called "3DMark", will target not only high-end gaming PCs but also integrated graphics platforms and even tablets and smartphones. 

We interviewed the President of Futuremark, Oliver Baltuch, over the weekend and asked him about this new direction for 3DMark, how mobile devices are going to affect benchmarks going forward, the new results patterns, stuttering and more.  Check out the video below!

Video Loading...

Make no bones about it, this is a synthetic benchmark, and if you have had issues with that in the past because it is not a "real world" gaming test, you will continue to have those complaints.  Personally I find the information that 3DMark provides to be quite useful, though it definitely shouldn't be depended on as the ONLY graphics performance metric. 

Continue reading our story on the new 3DMark benchmark and the first performance results!!

Author:
Manufacturer: PC Perspective

Another update

In our previous article and video, I introduced you to our upcoming testing methodology for evaluating graphics cards based not only on frame rates but also on frame smoothness and the efficiency of those frame rates.  I showed off some of the new hardware we are using for this process and detailed how direct capture of graphics card output allows us to find interesting frame and animation anomalies using some Photoshop still frames.

d31.jpg

Today we are taking that a step further and looking at a couple of captured videos that demonstrate a "stutter" and walking you through, frame by frame, how we can detect, visualize and even start to measure them.

dis1.jpg

This video takes a couple of examples of stutter in games, DiRT 3 and Dishonored to be exact, and shows what they look like in real time, at 25% speed and then finally in a much more detailed frame-by-frame analysis.

 

Video Loading...

 

Obviously this is just a couple of instances of what a stutter looks like, and there are often less apparent in-game stutters that are even harder to see in video playback.  Not to worry - this capture method is capable of catching those issues as well, and we plan on diving into that "micro" level shortly.

We aren't going to start talking about whose card and what driver is being used yet and I know that there are still a lot of questions to be answered on this topic.  You will be hearing more quite soon from us and I thank you all for your comments, critiques and support.

Let me know below what you thought of this video and any questions that you might have. 

Author:
Manufacturer: PC Perspective

A change is coming in 2013

If the new year brings us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs.  A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system.  The lone result was given as a time, in seconds, which was then converted to an average frame rate using the known number of frames in the recording.

More recently we saw a transition to frame rates over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective. This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer image of how GPUs (and CPUs and systems, for that matter) perform in games.

And even though the idea of frame times has been around just as long, not many people were interested in getting into that level of detail until this past year.  A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and can range from 5ms to 50ms depending on performance.  For reference, 120 FPS equates to an average frame time of 8.3ms, 60 FPS to 16.6ms and 30 FPS to 33.3ms.  But rather than averaging those out over each second of time, what if you looked at each frame individually?
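To see why per-frame data matters, here is a tiny sketch of our own making: two hypothetical runs with the same average frame rate but very different frame-to-frame behavior.

```python
# Two hypothetical one-second runs with nearly identical average FPS.
smooth   = [16.7] * 60             # a steady ~60 FPS
stuttery = [8.3, 25.0] * 30        # alternating fast and slow frames

def summarize(frame_times_ms):
    """Return (average FPS, worst single frame time in ms) for a run."""
    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
    worst_ms = max(frame_times_ms)
    return avg_fps, worst_ms

# Both runs report ~60 FPS on average, but the stuttery one spends half its
# frames at 25 ms (40 FPS instantaneous) - exactly what an average hides.
for name, run in (("smooth", smooth), ("stuttery", stuttery)):
    avg_fps, worst_ms = summarize(run)
    print(f"{name:9s} avg {avg_fps:.0f} FPS, worst frame {worst_ms:.1f} ms "
          f"({1000 / worst_ms:.0f} FPS instantaneous)")
```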

Video Loading...

Scott over at Tech Report started doing that this past year and found some interesting results.  I encourage all of our readers to follow up on what he has been doing as I think you'll find it incredibly educational and interesting. 

Through emails and tweets, many PC Perspective readers have been asking for our take on it, why we weren't testing graphics cards in the same fashion yet, etc.  I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results.  I am still not ready to share the bulk of our information yet, but I am ready to start the discussion, and I hope our community finds it compelling and offers some feedback.

card.jpg

At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz.  Essentially this card will act as a monitor to our GPU test bed and allow us to capture the actual display output that reaches the gamer's eyes.  This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metrics.

Using that recorded footage, which sometimes reaches 400 MB/s of consistent writes at high resolutions, we can then analyze the frames one by one with the help of some additional software.  There are a lot of details that I am glossing over, including the need for perfectly synced frame rates and absolutely zero dropped frames in the recording and analysis, but trust me when I say we have been spending a lot of time on this. 
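For a sense of where those write rates come from, here is a back-of-the-envelope sketch; the pixel formats are our assumptions (the capture card's actual recording format isn't specified here), so treat the output as ballpark only.

```python
def raw_capture_rate_mb_s(width, height, refresh_hz, bytes_per_pixel):
    """Uncompressed data rate for recording every refresh of the display."""
    return width * height * bytes_per_pixel * refresh_hz / (1024 ** 2)

# 2560x1600 at 60 Hz lands in the hundreds of MB/s even before any overhead:
# roughly 470 MB/s at 16 bits per pixel (e.g. YUV 4:2:2) and just over
# 700 MB/s at full 24-bit RGB.
for bpp in (2, 3):
    rate = raw_capture_rate_mb_s(2560, 1600, 60, bpp)
    print(f"{bpp} bytes/pixel -> {rate:.0f} MB/s")
```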

Continue reading our editorial on Frame Rating: A New Graphics Performance Metric.

Author:
Manufacturer: ASUS

A ton of technology in here

In the world of graphics cards there are a lot of also-rans, cards that were released but didn't really leave a mark on the industry.  Reference cards are a dime a dozen (not really, though $0.10 HD 7970s sounds like a great thing to me), and when the only thing vendors can compete on is price it is very hard to make a compelling argument for one card over another.  The ASUS Matrix Platinum HD 7970 that we are looking at today in no way suffers from these problems - it is a custom design with unique features that really give it the ability to stand out from the crowded GPU shelf.

As you should expect by now from the ASUS ROG brand, the Matrix HD 7970 not only carries slightly overclocked GPU and memory speeds but also some unique features like VGA Hotwire, TweakIt buttons and more!

ASUS ROG Matrix Design

Before we dive into performance and our experiences overclocking the HD 7970 with the ASUS Matrix Platinum, we wanted to go over some of the design highlights that make this graphics card unique.  Available in both Matrix and Matrix Platinum (hand-picked chips) revisions, this triple-slot design includes a custom-built PCB with 20-phase power and quite a bit more. 

explodefull.jpg

This "exploded" view of the Matrix HD 7970 shows a high-level view of these features with details to follow below.  Some of the features are really aimed at the extreme overclockers that like to get their hands into some LN2 but there is still a lot to offer users that just want to try their hand at getting additional performance through air overclocking. 

ASUS' custom DirectCU II cooler is at work on the ASUS Matrix HD 7970, using all-copper heatpipes to help lower temperatures by 20% compared to the reference HD 7970 while also running quieter thanks to the larger 100mm fans.  These fans can be independently controlled and include the ASUS dust-proof fan technology we have seen previously.

Continue reading our review of the ASUS Matrix HD 7970 3GB Graphics Card!!

Author:
Manufacturer: Nintendo

We go inside the Wii U

Last night, after the midnight release of the new Nintendo Wii U gaming console, we did what any self-respecting hardware fan would do: we tore it apart.  That's right - live on our PC Perspective Live! page, we opened up a pair of Wii U consoles, playing a couple of games on the Deluxe unit while we took a tri-wing screwdriver to the second.  Inside we found some interesting hardware (and a lot more screws), and at the conclusion of the 5+ hour marathon we had a reassembled system with only a handful of leftover screws!

If you missed the show last night we have archived the entire video on our YouTube channel (embedded below) as well as the photos we took during the event in their full resolution glory.  There isn't much to discuss about the teardown other than what we said in the video but I am going to leave a few comments after each set of four images.

OH!  And if you missed the live event and want to be a part of another one, we are going to be holding a Hitman: Absolution game stream on our Live Page, sponsored by AMD, with giveaways including Radeon graphics cards and LOTS of game keys!  Stop by and see us at http://pcper.com/live on Tuesday the 20th at 8pm ET.

 

During the stream we promised photos of everything we did while taking it apart, so here you go!  Click to get the full size image!

 

Getting inside the Wii U was surprisingly easy as the white squares over the screws were simply stickers and we didn't have to worry about any clips breaking, etc.  The inside is dominated by the optical drive provided by Panasonic.

Continue reading to see ALL the images from our Nintendo Wii U Teardown!!