
Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing


 

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

 

Introduction

The process of testing games and graphics has been evolving for even longer than I have been a part of the industry: 14+ years at this point. That transformation in benchmarking has accelerated over the last 12 months. Typical benchmarks run some hardware against some software and look at the average frame rate that can be achieved. While access to frame times has been around for nearly the full life of FRAPS, it took an article from Scott Wasson at the Tech Report to really get the ball moving and investigate how each frame contributes to the actual user experience. I immediately began researching how to test the performance actually perceived by the user, including the "microstutter" reported by many in PC gaming, and pondered how we might measure this criterion even more accurately.

The result of that research is being fully unveiled today in what we are calling Frame Rating – a completely new way of measuring and validating gaming performance.

The release of this story for me is like the final stop on a journey that has lasted nearly a complete calendar year.  I began to release bits and pieces of this methodology starting on January 3rd with a video and short article that described our capture hardware and the benefits that directly capturing the output from a graphics card would bring to GPU evaluation.  After returning from CES later in January, I posted another short video and article that showcased some of the captured video and stepping through a recorded file frame by frame to show readers how capture could help us detect and measure stutter and frame time variance. 


Finally, during the launch of the NVIDIA GeForce GTX Titan graphics card, I released the first results from our Frame Rating system and discussed how certain card combinations, in this case CrossFire against SLI, could drastically differ in perceived frame rates and performance while giving very similar average frame rates.  This article got a lot more attention than the previous entries and that was expected – this method doesn’t attempt to dismiss other testing options but it is going to be pretty disruptive.  I think the remainder of this article will prove that. 

Today we are finally giving you all the details on Frame Rating; how we do it, what we learned and how you should interpret the results that we are providing.  I warn you up front though that this is not an easy discussion and while I am doing my best to explain things completely, there are going to be more questions going forward and I want to see them all!  There is still much to do regarding graphics performance testing, even after Frame Rating becomes more common. We feel that the continued dialogue with readers, game developers and hardware designers is necessary to get it right.

Below is our full video that features the Frame Rating process, some example results and some discussion of what it all means going forward.  I encourage everyone to watch it, but you will definitely need the written portion here to fully understand this transition in testing methods.  Subscribe to our YouTube channel if you haven't already!


How Games Work

Before we dive into why I feel that our new Frame Rating testing method is the best for the gamer, there is some necessary background information that you must understand.  While we all play games on a near daily basis, most of us don’t fully grasp the complexity and detail that goes into producing an image that makes its way from code a developer writes to the pixels on your monitor.  The below diagram attempts to simplify the entire process from the game engine to the display.

[Diagram: the rendering pipeline from game engine to display]

In the image above we have defined a few important time-based variables that will help us explain graphics anomalies.  The first is t_game, and it refers to the internal time that the game engine uses to keep track of its simulations and interactions.  This is where processes like physics simulation, user interface, artificial intelligence and more are handled.  Different game engines keep time in different ways, but they usually fall into two categories: fixed or variable time steps.  In a fixed time step method the game engine advances its internal simulation on a regular, fixed iteration.  This is more predictable, but it also forces other systems (drivers, GPU rendering) to sync up to it, which may cause some issues.  The variable time step method allows a lot more flexibility, but it can be more complicated for developers to maintain a fluid-feeling simulation because they simply do not know when the next simulation update will take place.
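As a rough illustration of the two approaches, here is a minimal sketch in Python (the loop structure and names like run_fixed are my own invention, not any particular engine's API): a fixed-step engine always advances t_game by the same amount, while a variable-step engine advances it by however long the previous frame actually took.

```python
def run_fixed(steps, dt=1 / 60):
    # Fixed time step: the simulation always advances by the same dt,
    # so behavior is predictable, but everything downstream must keep pace.
    t_game = 0.0
    for _ in range(steps):
        t_game += dt
    return t_game


def run_variable(frame_durations):
    # Variable time step: the simulation advances by however long each
    # frame actually took, so dt is unknown until the frame completes.
    t_game = 0.0
    for dt in frame_durations:
        t_game += dt
    return t_game
```

Either way the engine accumulates a t_game clock; the difference is whether the increment is known in advance or measured after the fact.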

Following that is t_present, the point at which the game engine and graphics card communicate to say that they are ready to pass information for the next frame to be rendered and displayed.  What is important about this time location is that this is where FRAPS gets its time stamps and data, and also where the overlay we use for our Frame Rating method is inserted.  What you should notice right away, though, is that quite a bit more work occurs AFTER the t_present command is sent and before the user actually sees any result.  This in particular is where the capture method's advantages stem from.

After the game engine hands the frame off through DirectX, the graphics driver gets its hands on it for the first time.  The driver maps the DX calls to its own specific hardware and then starts the rendering process on the GPU.  Once the frame is rendered, we will define another variable, t_render, which is reported when the image is ready to be sent to a display.  Finally, t_display will be defined as the time at which data from the frame is on the display, whether that be a complete frame (with Vsync enabled, for example) or a partial frame.

You can likely already see where the differences between FRAPS data measurement and Frame Rating measurement start to develop.  Using capture hardware and analysis tools that we’ll detail later, we are recording the output from the graphics card directly as if it were a monitor.  We are essentially measuring frames at the t_display level rather than the t_present point, giving us data that has been filtered through the entire game engine, DirectX, driver, rendering and display process. 
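To make the distinction concrete, here is a small Python sketch (the timestamp values are hypothetical, invented purely for illustration): both FRAPS and Frame Rating reduce to taking differences between successive per-frame timestamps; they simply sample those timestamps at different points in the pipeline.

```python
def frame_times(timestamps_ms):
    # Interval between each pair of successive frame timestamps.
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]


# Hypothetical numbers: the t_present stamps (what FRAPS sees) can look
# perfectly even while the t_display stamps (what the capture card sees)
# reveal jitter added later in the driver/render/display stages.
t_present = [0, 16, 32, 48, 64]
t_display = [8, 25, 40, 61, 72]

print(frame_times(t_present))   # [16, 16, 16, 16]
print(frame_times(t_display))   # [17, 15, 21, 11]
```

The same sequence of frames can therefore grade as perfectly smooth at t_present and noticeably uneven at t_display.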

It is also useful to discuss stutter and how it relates to these time definitions.  Despite some readers' opinions, stutter in game play is related to the smoothness of animation and is not hard-wired to low frame rate.  If you have a steady frame rate of 25 FPS you can still have an enjoyable experience (as evidenced by the 24 FPS movies we all watch at the theater).  Instead, we should view stutter as a level of variance between t_game and t_display; if the total display time runs steadily at 50ms (20 FPS) you won't have stuttering in animation (in most cases), but if total display time shifts between 20ms and 50ms you definitely will.
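One simple way to express that idea numerically (my own sketch, not the metric used in the article's graphs) is to look at the spread of the frame times rather than their average: a perfectly steady 50ms cadence has zero spread, while a cadence shifting between 20ms and 50ms has a large one even though its average frame rate looks better.

```python
from statistics import mean, pstdev

steady = [50, 50, 50, 50, 50, 50]     # constant 50 ms (20 FPS): no stutter
shifting = [20, 50, 20, 50, 20, 50]   # cadence shifting between 20 and 50 ms

# Population standard deviation of frame times as a crude stutter score:
# zero for the steady cadence, large for the shifting one, even though the
# shifting sequence actually has the higher average frame rate.
print(mean(steady), pstdev(steady))      # 50 ms average, 0.0 spread
print(mean(shifting), pstdev(shifting))  # 35 ms average, 15.0 spread
```

This is why a frame-time graph carries information that an average-FPS number simply cannot.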

In our second video on Frame Rating, we looked at a capture from Dishonored in a frame-by-frame analysis and saw a rather large stutter.  I think a better term for that is a hitch: a single, large frame time spike that isn't indicative of a microstutter smoothness problem.  Using the above variables, a hitch would be a single large instance of t_game – t_display.  As you'll see in our results analysis pages (including games like Skyrim), this happens fairly often, even in games that are otherwise smooth.

Our 2nd video on Frame Rating from a couple months ago...

There is also a completely different discussion on the advantages and differences of capturing data from FRAPS versus capturing data with our Frame Rating methodology.  I believe that the advantages of hardware capture outweigh the concerns currently, but in reality the data that FRAPS generates isn’t that important, it just happens to be the closest data point to another metric we would love to know more about: game time. 

Game time is the internal game clock that the software engine uses to keep track of the physical world.  This clock could be based on a fixed time span for each tick, or it could be variable, or timed off another source (like the OS or GPU driver).  An Intel GPU engineer, Andrew Lauritzen, recently made a great post over on the Beyond3D.com forums about game time, back pressure on the game pipeline and much more.  Here is a short portion of that post, but I would encourage everyone to read it in its entirety:

1) Smooth motion is achieved by having a consistent throughput of frames all the way from the game to the display.

2) Games measure the throughput of the pipeline via timing the back-pressure on the submission queue. The number they use to update their simulations is effectively what FRAPS measures as well.

3) A spike anywhere in the pipeline will cause the game to adjust the simulation time, which is pretty much guaranteed to produce jittery output. This is true even if frame delivery to the display (i.e. rendering pipeline output) remains buffered and consistent. i.e. it is never okay to see spikey output in frame latency graphs.

4) The converse is actually not true: seeing smooth FRAPS numbers does not guarantee you will see smooth display, as the pipeline could be producing output to the display at jittery intervals even if the input is consistent. This is far less likely though since GPUs typically do relatively simple, predictable work.

Clearly the best case for evaluating overall gaming performance would be to have access to the internal game time and measure it against the output from Frame Rating, the actual frames on your display.  Differences there could be analyzed to find exact bottlenecks in the pipeline from game code to display.  The problem is that no game engine developer currently allows access to that information, and the number of different engines in use today makes it difficult for even the likes of NVIDIA and AMD to gather the data reliably.  There is opportunity for change here if an API were to exist (in DirectX, for example) that gave all game engines reliable time iterations that we would then have access to.

 

NVIDIA's Involvement

You may notice that there is a lot of "my" and "our" in this story even as similar results from other websites are being released today.  While we have done more than a year's worth of testing and developed our own tools to help expedite this time-consuming process, some of the code base and applications were developed with NVIDIA and thus were distributed to other editors recently.

NVIDIA was responsible for developing the color overlay that sits between the game and DirectX (essentially the same location in the pipeline as FRAPS) as well as the software extractor that reads the captured video file and generates raw information about the lengths of those bars in an XLS file.  Obviously, NVIDIA has a lot to gain from this particular testing methodology: its SLI technology looks much better than AMD's CrossFire when viewed in this light, highlighting the advantages that SLI's hardware frame metering brings to the table.
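As an illustration of what the extractor has to do (a simplified sketch of the general idea, not NVIDIA's actual code): because the overlay tints each rendered frame's scanlines with the next color in a repeating sequence, run-length encoding the color column of one captured 60 Hz frame recovers how many scanlines each rendered frame occupied on screen, which is exactly how runt frames become visible.

```python
def bar_lengths(scanline_colors):
    # Run-length encode the per-scanline overlay colors of one captured
    # frame: each (color, length) pair is one rendered frame's share of
    # the display, measured in scanlines.
    runs = []
    for color in scanline_colors:
        if runs and runs[-1][0] == color:
            runs[-1] = (color, runs[-1][1] + 1)
        else:
            runs.append((color, 1))
    return runs


# Hypothetical 1080-line capture: frame 'B' is a runt, occupying only
# 4 scanlines before the next frame replaces it.
column = ["A"] * 500 + ["B"] * 4 + ["C"] * 576
print(bar_lengths(column))   # [('A', 500), ('B', 4), ('C', 576)]
```

A frame whose run length falls below some scanline threshold contributes almost nothing to the animation, which is the rationale for filtering runts out of the observed frame rate.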

The next question from our readers should then be: can the programs used for this purpose be trusted?  After having access to the source code and applications for more than 12 months, I can only say that I have parsed through it all innumerable times and I have found nothing disingenuous in what NVIDIA has done.  Even better, we are going to be sharing all of our code, from the Perl-based parsing scripts (which generate the data in the graphs you'll see later from the source XLS file) to a couple of examples of the output XLS files.

Not only do we NEED to have these tools vetted by other editors, we also depend on the community to keep us on our toes.  When we originally talked with NVIDIA about this project, the mindset from the beginning was to get the ball rolling and let the open source community and enthusiast gamers look at every aspect of the performance measurement.  That is still the goal, with only one minor exception: NVIDIA doesn't want the source code of the overlay to leak out because of some potential patent/liability concerns.  Instead, we are hoping to have ANOTHER application built to act as the overlay; it may be something that Beepa and the FRAPS team can help us with.

March 29, 2013 | 11:28 AM - Posted by Steve (not verified)

Ryan: Is it possible to test AMD's CF with older drivers to see if this problem has been around for a long time or if it is a more recent problem with AMD's continuous driver upgrades to improve speeds?

March 28, 2013 | 07:00 AM - Posted by Filip Svensson (not verified)

First, a very interesting article with lots of information but..

there are some pretty big holes in the conclusions in this article.

First, the conclusion is that you observe smoother graphics if you have lots of GPU frames inside each display frame. This is just not true, and if you accept this, the following conclusion will also not be true: that runt and dropped frames always affect the perceived frame rate.
I will give you an example that proves this:

Say that the graphics card is able to produce two frames (letters) for each display frame (numbers). So if the the output will look like this to the display:
1A,2C,3E,4G etc.
then you would have optimal smoothness (as long as the output from the game engine is constant), even though in this case you drop 1 GPU frame each display frame. If instead it would be like the following:
1A,2D,3E,4H
then you could perhaps notice some unevenness even though you still have the same number of drops. I doubt it but it could be possible. Someone should do a video and see if this behaviour could be detected by the human eye :)

If you instead have drop frames when the gpu frame is spread across a multiple of display frames, then you would potentially have a serious issue with stuttering. But that is not what you are measuring here. Ex. 1A,2A,3C,4C

One conclusion to this is that your Observed FPS is totally wrong, both from what I write above and because you are not limiting this figure to the refresh rate of your monitor. Capping the graphs to 60 frames would make it somewhat better. Alternately, give it some other headline, for example (and here comes the flame bait):
NVIDIA-sponsored measuring technique to make their technology look good

March 29, 2013 | 10:41 AM - Posted by Ryan Shrout

I don't follow your letters/numbers analogy at all but I can assure you we are confident in our results and experiences here.

March 28, 2013 | 07:10 AM - Posted by Martin Trautvetter (not verified)

Very insightful article full of interesting data points, thanks for all the work that went into this!

I wonder if you plan to expand this testing down to AMD's APU and Crossfired APUs, as well as Intel iGPUs in the future, I know it's not as flashy as the 'big guns', but that's where a huge chunk of the market is going and I'm curious if there are skeletons to find in that closet, too.

March 29, 2013 | 10:41 AM - Posted by Ryan Shrout

I do plan on using this method whenever possible going forward.  Laptops are a bit more of a pain since we'd have to use external displays, but we are going to experiment.

March 30, 2013 | 06:25 AM - Posted by Martin Trautvetter

Cool, can't wait for your findings!

_
btw: Twitch is SO much better when you guys are in the same room, really enjoyed this week's episode!


March 28, 2013 | 07:58 AM - Posted by steen (not verified)

Nice work Ryan. The key is the capture card: input metering from e.g. FRAPS to output at the monitor. What you're missing is that games seem to use single frame timing to determine simulation engine steps. No smoothing to account for any overheads - at all.

This whole AFR caper is just a sham, though. NUMA-esque multi gpu designs are the only way to do it. Simple 3dfx SLI was better at distributing load, but in the days of DX11+, load balancing is tricky. V-Sync with triple buffering is also an option, but input lag is a problem any way you slice it.

I do have concerns over Nvidia's overlay layer & software, though. They do kick an own goal with the GTX 680 being slower than a 7970, but that's been known for a while now. They're banking on SLI & Titan. Your comments also spruik Nvidia, rather than just give facts.

March 29, 2013 | 10:42 AM - Posted by Ryan Shrout

Thanks, appreciate it!

I have another version of the overlay from a third party we are testing out as well.

March 30, 2013 | 10:03 AM - Posted by Luciano (not verified)

They're not the same code nor are available through the same ways.
But they are the same methods and pursue the same results.
SLI and Crossfire are not the same thing...
But...

March 28, 2013 | 08:08 AM - Posted by steen (not verified)

P.S. Did you get a visit from Tom Petersen, too? ;)

March 30, 2013 | 10:22 AM - Posted by John Doe (not verified)

He gets a visit from me everyday.

March 28, 2013 | 08:37 AM - Posted by steen (not verified)

P.P.S. (Sorry) Haven't you fixed the sampling rate of the capture card at 60Hz?

March 28, 2013 | 08:59 AM - Posted by ThorAxe

Thank you very much for testing SLI and crossfire. It confirms my suspicion about my Crossfire and SLI configurations.

To give you some background I have run 8800GTX SLI, 4870x2 + 4870 Trifire, 6870 Crossfire and GTX 680 SLI.

The 4870x2 + 4870 appeared to my eyes to be okay, however 6870 Crossfire never seemed to be quite right while the GTX 680 has always appeared smooth to me. I don't recall any issues with 8800GTX SLI but that was a while ago.

March 28, 2013 | 09:28 AM - Posted by Luciano (not verified)

Error in the article:
"Smooth Vsync", "Adaptive VSync", etc, are not exclusive to nVidia.
They are available for everyone and you can use them through console commands.
The names differ due to manufacturers marketing.
But they are available since at least 2005 (rFactor game).

Various names: "double vsync", "vsync", "dynamic vsync", "vsync with double or triple buffering", "vsync with 1~5 frame queue", etc.

If the game lacks the option in a menu, you can use console commands or ini files.

RadeonPro is the most famous "ini profile" creator for that use.

March 29, 2013 | 10:43 AM - Posted by Ryan Shrout

These are definitely not the same things...

March 30, 2013 | 10:06 AM - Posted by Luciano (not verified)

They're not the same code nor are available through the same ways.
But they are the same methods and pursue the same results.
SLI and Crossfire are not the same thing...
But...


March 28, 2013 | 09:59 AM - Posted by Luciano (not verified)

You have created a minimum quality level where the basic requirement is "more than X scan lines displayed" because of "its contribution for the animation observed".
"Animation" is measured in full frames in sequence.
Any corruption in alternating frames is animation corruption.
Thus you would have to filter out half of the SLI performance too.

SLI is UNDOUBTEDLY superior, as the data shows.
But "animation" is corrupted by ANY tearing or stutter.
Simracers always use framecap and vsync with triple buffer for that matter.

March 28, 2013 | 12:00 PM - Posted by onehourleft (not verified)

How would the framerating change on Windows 8 vs. Windows 7? Linus appears to have found FPS improvements in some games in Windows 8. http://youtu.be/YHnsfIJtZ2o . I'm wondering if runt or dropped frames are increasing or there are actual improvements in user experience.

March 29, 2013 | 10:44 AM - Posted by Ryan Shrout

We started this process on Windows 7 before moving to Windows 8 and nothing changed.

March 28, 2013 | 01:30 PM - Posted by gamerk2 (not verified)

I've speculated since 2008 that SLI/CF introduced unacceptable latency into the system, based on all the threads titled "Why do I get 90 FPS and my game is laggy?" in various forums. I'm glad someone is FINALLY really looking into this aspect of the actual rendering chain.

March 28, 2013 | 06:25 PM - Posted by Anonymous (not verified)

Hi;

Can you please test other SLI render methodologies such as split frame rendering (SFR).

I know that SFR is not officially supported by nvidia anymore but you can always force it using Nvidia Inspector as some of us sometimes do.

It would be great if you could try other render methods with AMD as well. (such as scissor or supertile methods as far as I know they can be forced using radeon pro tool)

Best Regards

March 28, 2013 | 07:59 PM - Posted by Foosh (not verified)

I play with Radeon Pro's Dynamic Vsync Control which eliminates stuttering without introducing any noticeable input lag. Vsync off will run your video cards at 100% full time generating a lot of heat and decreasing their life with minimal benefit. If you're playing for twitch response you're running at 120Hz double buffered, your latency will be 16ms max which isn't bad considering good human response is 226ms. If you're playing for maximum visual quality then screen tearing is unacceptable. Statements like "Crossfire does nothing" just creates unnecessary drama.

March 29, 2013 | 10:45 AM - Posted by Ryan Shrout

I consider it entirely necessary to make sure people see what is going on.

Input latency is our next thing to try and address though, and it's possible that CrossFire, even with its runt frames, is improving that more.

March 29, 2013 | 08:06 PM - Posted by steen (not verified)

I bet that's exactly what you'll find. AMD will have reduced input lag at the expense of these "runt" frames, whereas Nvidia's metering will show huge input lag. AMD were just outmanoeuvred by Nvidia subverting your (& others') investigations on frame latency. I can see AMD introducing a latency/metering control for Xfire in future drivers. Will Nvidia do the same, I wonder? As I said, a pox on AFR. SFR is an alternative with Nvidia via hack, but has its own issues.


March 30, 2013 | 12:50 AM - Posted by bystander (not verified)

Given that AFR has every other frame rendered by a different card, the actual time between moving the mouse and it being displayed on the screen would not improve with crossfire/SLI over a single card.

However, how often a move initiates a frame does improve, but if those extra updates are almost at the same exact time as the single cards updates, it won't give you any benefit, so spacing will likely help.

March 30, 2013 | 01:57 AM - Posted by bystander (not verified)

Hopefully when you guys test latency, you realize that there is a polling component to consider.

If you have evenly spaced out times when you initiate a frame, your input is more evenly received. While simply taking an input when each GPU is ready may reduce latency, two in a row at almost the same exact time results in redundant frames and input.

However, if those frames are evenly distributed and received, more useful mouse inputs are gathered and utilized. The benefit of this may outweigh pure latency readings.

The difference may be between receiving input at a max of 33ms intervals, and having up to 66ms intervals with near-0 intervals at other points.

March 30, 2013 | 05:43 AM - Posted by Ryan Shrout

Interesting, hadn't considered the pros/cons of smoother or erratic input polling.
