
Frame Rating: A New Graphics Performance Metric

Author: Ryan Shrout
Manufacturer: PC Perspective

A change is coming in 2013

If the new year brings us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs. A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system. The lone result was a time, in seconds, which was then converted to an average frame rate since the total number of frames recorded was known in advance.

More recently we saw a transition to frame rates over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective. This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer image of how GPUs (and CPUs and systems, for that matter) perform in games.

And even though the idea of frame times has been around just as long, not many people were interested in getting into that level of detail until this past year. A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and can range from 5ms to 50ms depending on performance. For reference, 120 FPS equates to an average of 8.3ms per frame, 60 FPS to 16.7ms, and 30 FPS to 33.3ms. But rather than averaging those out over each second of time, what if you looked at each frame individually?
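If you want to play with the numbers yourself, here is a minimal sketch in Python of the relationship between frame rate and frame time; the code and values below are purely illustrative and not part of our tool chain:

def fps_to_frame_time_ms(fps: float) -> float:
    """Average frame time in milliseconds for a given frame rate."""
    return 1000.0 / fps

print(fps_to_frame_time_ms(120))  # ~8.3 ms
print(fps_to_frame_time_ms(60))   # ~16.7 ms
print(fps_to_frame_time_ms(30))   # ~33.3 ms

# Going the other way: an average frame rate computed from a list of
# individual frame times over a window of game play.
def frame_times_to_avg_fps(frame_times_ms: list[float]) -> float:
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)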


Scott over at Tech Report started doing that this past year and found some interesting results.  I encourage all of our readers to follow up on what he has been doing as I think you'll find it incredibly educational and interesting. 

Through emails and tweets many PC Perspective readers have been asking for our take on it, why we weren't testing graphics cards in the same fashion yet, etc. I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results. I am still not ready to share the glut of our information yet, but I am ready to start the discussion, and I hope our community finds it compelling and offers some feedback.

[Image: the dual-link DVI capture card at the heart of our testing]

At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz.  Essentially this card will act as a monitor to our GPU test bed and allow us to capture the actual display output that reaches the gamer's eyes.  This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metrics.

Using that recorded footage, sometimes reaching 400 MB/s of sustained writes at high resolutions, we can then analyze the frames one by one, though only with the help of some additional software. There are a lot of details I am glossing over, including the need for perfectly synced frame rates, having absolutely zero dropped frames in the recording, analyzing, etc., but trust me when I say we have been spending a lot of time on this.


The result is a set of multi-GB files (60 seconds of game play produces a 16GB file) that contain each frame exactly as it was presented to the gamer's monitor.
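For a sense of scale, here is some back-of-envelope math on capture data rates; the resolution and pixel format in this sketch are assumptions chosen for illustration, not our exact capture settings:

def capture_rate_mb_per_s(width: int, height: int, fps: int,
                          bytes_per_pixel: float) -> float:
    """Sustained write rate needed to record uncompressed frames."""
    return width * height * bytes_per_pixel * fps / 1e6

def capture_size_gb(rate_mb_per_s: float, seconds: float) -> float:
    """Approximate file size for a capture of the given length."""
    return rate_mb_per_s * seconds / 1000.0

# Example: 1920x1080 at 60 Hz with a 16-bit-per-pixel (YUV 4:2:2) format.
rate = capture_rate_mb_per_s(1920, 1080, 60, 2.0)
print(f"{rate:.0f} MB/s, {capture_size_gb(rate, 60):.1f} GB per minute")
# Roughly 250 MB/s and ~15 GB per minute; 2560x1600 captures are larger still.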

[Image: a captured frame from Unigine]

[Image: the same frame marked up in Photoshop to highlight the three rendered frames it contains]

The second image here was made in Photoshop to show you the three different "frames" of Unigine that make up the single frame the gamer would actually see. With a 60 Hz display, this equates to three frames being shown within one ~16ms refresh, though each frame takes up a variable amount of real estate on the screen.

We are still finalizing what we can do with all of this data, but it definitely allows us to find some unique cases:

[Image: a captured frame from Sleeping Dogs]

This shot from Sleeping Dogs shows three frames in a single 16ms refresh sent to the monitor, but notice how little screen space the green frame takes up - is measuring this frame even useful for gamers? Should it be dropped from the performance metrics altogether?
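To give a sense of how a tiny sliver like that could be flagged automatically, here is a rough sketch; tagging each rendered frame with a marker color along the left edge is a hypothetical scheme used purely for illustration, not a description of our actual pipeline, and the runt threshold is arbitrary:

from collections import Counter

RUNT_THRESHOLD_ROWS = 20  # arbitrary cutoff for this example

def rows_per_frame(left_edge_colors):
    """Count how many scanlines each tagged frame occupies in one refresh."""
    return Counter(left_edge_colors)

def classify(counts):
    """Split the frames in a refresh into a total count and flagged runts."""
    return {
        "frames_in_refresh": len(counts),
        "runts": [color for color, rows in counts.items()
                  if rows < RUNT_THRESHOLD_ROWS],
    }

# A 1600-row refresh split unevenly between three rendered frames:
edge = [(255, 0, 0)] * 900 + [(0, 255, 0)] * 10 + [(0, 0, 255)] * 690
print(classify(rows_per_frame(edge)))
# Three frames in the refresh, with the green one flagged as a runt (10 rows).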

I put together the short video below with a bit more on my thoughts on this topic, but I can assure you that January and February are going to bring some major changes to the way graphics cards are tested. Please, we want your thoughts! This is an important dialogue not only for us but for all PC gamers going forward. Scott has done some great work "Inside the Second" and we hope that we can offer additional information and clarification on this problem. Rest assured we have not been sitting on our hands here!

[Embedded video]

Is this kind of testing methodology going to be useful?  Is it overkill?  Leave us your thoughts below!

January 3, 2013 | 04:10 PM - Posted by Nebulis01

Ryan, this looks like a great new metric to see how v-sync/tearing and dropped frames affect the smoothness and playability of games. Keep up the good work, can't wait to see more.

January 3, 2013 | 04:12 PM - Posted by Zanthis

Very cool, having metrics other than FPS will be awesome.

January 3, 2013 | 04:20 PM - Posted by arbiter

From how it is explained in the video, it will give you a true picture of the performance of a card. If what was said is right about " http://www.pcper.com/image/view/19762?return=node%2F56191 " - that image being one 16ms refresh seen as 3 frames, yet one of the frames being only about 10 pixels wide - we can see what drivers do to increase FPS in a game.

January 3, 2013 | 04:44 PM - Posted by Ryan Shrout

Correct - one thing this method will definitely find quickly is cheating drivers. Dropped frames that are NEVER shown are another point too.

March 28, 2013 | 04:12 PM - Posted by Anonymous (not verified)

And thus AMD's runt frames will be shown, their cheating exposed, and so this must not be allowed to go any further.

First, after AMD called, harassed, and bribed everyone to keep a lid on it, until they can "get their cheating under control" and iron out the stuttering they never gave a crud about before to make up for the cheating runt frames... we are MONTHS later into it and "nothing".

Silence bought again for the failing cheating bankrupt AMD. Perhaps they should get a Public Award not the Nobel Tech Prize - oh why not they deserve it. Also many promotions and honorifics.

April 5, 2013 | 11:38 AM - Posted by anubis44 (not verified)

Yes, because the angelic, honest, beyond reproach nVidia NEVER does anything wrong. It doesn't stiff customers with dead graphics chips in laptops (Bumpgate: http://hardforum.com/showthread.php?t=1550848)

and they've never cheated in benchmarks: http://www.pcworld.com/article/111012/article.html

or

http://www.geek.com/games/futuremark-confirms-nvidia-is-cheating-in-benc...

and neither has Intel: http://techreport.com/review/17732/intel-graphics-drivers-employ-questio...

As far as I'm concerned, Intel is a mafia-style company, and nVidia are a pack of cheaters, and neither of them is ever getting any more money out of me. So go ahead and bend over for both of them if you want to, but I'm buying AMD CPUs and GPUs unless and until something very basic changes.

January 3, 2013 | 05:00 PM - Posted by Anonymous (not verified)

Great, great work!

This means a lot for the PC gamers.

January 3, 2013 | 05:39 PM - Posted by S@m (not verified)

Thanks Ryan! This is awesome news! It was about time more heavyweight PC hardware review sites, besides the Tech Report, got behind this new metric. The amount of doubt and disinformation (not to mention cheating on the part of some GPU manufacturers) that has been created over this issue in the last few years, in part by the lack of proper benchmarking, has become too big to ignore now, and it definitely calls for new metrics to help consumers make better decisions before shelling out big piles of cash for new GPUs. The fact that PC Perspective is finally adopting this metric in 2013 will certainly help to push the rest of the elite PC enthusiast media to consider this move too, which will do us all a big service!

January 3, 2013 | 06:01 PM - Posted by wujj123456

Excellent. Now we have Fraps, high-speed cameras and professional frame capturing gear. I bet they will yield somewhat different results. It will be interesting to analyze the discrepancies between them, and reveal more about how current graphics drivers behave.
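As a rough sketch of how a Fraps-style log could be lined up against the capture-based data, here is a minimal parser; it assumes a two-column CSV of frame number and cumulative timestamp in milliseconds, so adjust the parsing to whatever your tool actually emits, and the file name is hypothetical:

import csv

def load_frame_times(path):
    """Return per-frame times (ms) from a log of cumulative timestamps."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        stamps = [float(row[1]) for row in reader if row]
    return [b - a for a, b in zip(stamps, stamps[1:])]

times = load_frame_times("frametimes.csv")  # hypothetical log file
print(f"avg FPS: {1000 * len(times) / sum(times):.1f}")
print(f"worst frame: {max(times):.1f} ms")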

January 3, 2013 | 06:06 PM - Posted by Ryan Shrout

Agreed.

I think we'll all see slightly different results, but I think they will all be valid.  It's all in how you analyze it!

March 28, 2013 | 04:17 PM - Posted by Anonymous (not verified)

What ?

" It's all in how you analyze it! "

So you are saying you can get whatever answer you want for either company ? Maybe you're saying it's too much data and won't matter, or that you can make up issues that don't matter - not really certain.
Maybe you were just excited and spouting something not too bright.

I think you might want to retract that statement. There are correct ways to do things, and ways that are deceitful.
Hopefully you do it honestly, instead of in a biased fashion, caving in for AMD.

January 3, 2013 | 06:32 PM - Posted by Nilbog

I have to say that this is truly a unique take on testing this.
Kudos sir!

I am interested in ALL of the info that we will get from this. Hopefully this will force everybody to care more about the gamer's actual experience instead of just who can pump the most FPS, without caring if it is actually displaying smoothly.
I am also excited to understand more about tearing. I am curious to see if this will push the driver developers to try to reduce it, actually eliminate it or just try to hide it better.

I think Skyrim is definitely going to show just how bad this can get if frame times are ignored. The TR video is exactly how Skyrim looks to me. Even while running at a constant 60 FPS it is far from smooth on my system.

About the tearing and counting the frames:
I think that you should include these extra frames no matter how small. You wasted clock cycles rendering that tiny useless frame, correct?
By now, aren't there ways to prevent this? (without limiting FPS)
Or is this an example of drivers trying to pump as many frames as possible for good numbers?
Either way, if my GPU and CPU are working on a frame that I never see, or only see about 10 pixels of, it should be counted in the benchmark - and looked down upon.

Maybe I'm just a noob or confused, but by now I would expect my system to only be rendering frames that I will actually see and that are actually useful (I think the consoles do that). I realize that because we all have different parts this becomes very difficult to achieve, but it feels like we can at least try to do a better job at it.

January 3, 2013 | 07:39 PM - Posted by Ryan Shrout

Good points Nil!  I would say yes this is how GPU drivers can pump frames to get higher AVG FPS without really benefiting the consumer at all.  

January 3, 2013 | 07:19 PM - Posted by razor512

Looks really good. It would help with identifying issues such as a game running at 40 FPS but feeling like 25-50 due to not all frames each second happening at proper intervals.

PS, for the sleeping dogs image, that green frame makes all the difference and the game just wouldn't be the same without it :)

I was wondering one thing about issues such as that tiny green frame: what if you keep the frame rate cap the same but run the display at a higher refresh rate, e.g. 120Hz vs 60Hz, to find out if it is the refresh rate that is causing part of the frame to be dropped (since the refresh-rate stuff happens after the GPU has rendered)? Is the GPU rendering the full frame while only that tiny sliver is displayed, making the frame count still valid even if only a row of pixels was shown?

January 3, 2013 | 07:40 PM - Posted by Ryan Shrout

Frame count would be completely valid, true.  But does the user actually get benefit from that being rendered and displayed?  I say no.
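As a rough illustration of razor512's point above about a game "running at 40 FPS but feeling like 25-50", here is a sketch comparing two made-up runs with identical average frame rates but very different frame-to-frame behavior; the numbers are invented for the example:

def avg_fps(frame_times_ms):
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def time_beyond_ms(frame_times_ms, budget_ms=16.7):
    """Total time spent past a per-frame budget of one 60 Hz refresh."""
    return sum(t - budget_ms for t in frame_times_ms if t > budget_ms)

steady = [25.0] * 40        # an even 40 FPS
lumpy = [10.0, 40.0] * 20   # also 40 FPS on average, but every other frame
                            # takes as long as a 25 FPS frame would

print(avg_fps(steady), avg_fps(lumpy))                # both 40.0
print(time_beyond_ms(steady), time_beyond_ms(lumpy))  # the lumpy run spends
                                                      # far more time past budget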

January 5, 2013 | 03:56 PM - Posted by Matthew (not verified)

I think that something to keep in mind is that video rendering is tightly coupled with other subsystems in a game, such as input processing, audio processing, networking, and game logic. The easiest way to ensure that one subsystem doesn't delay the entire system is to cut it off prematurely (drop frames). Even with multi-core systems, some things need to be linear to a degree, such as game logic having to come before video/audio rendering.

January 6, 2013 | 02:05 AM - Posted by Anonymous (not verified)

different subsystems usually tick at different hertz.
the ai for example, doesn't usually need to run at full rendering speed.
the physics simulation, if it's sophisticated enough, could tick faster only for those objects that move faster than a threshold, therefore eating more cpu. also, some simulations (and even other algos) run directly on the gpu.
so, i'm not saying the proposed method is wrong, but keep in mind that it's not measuring only the graphics, because it's affected by whatever is happening in the game.
feel free to correct me if i'm wrong :)

January 3, 2013 | 07:39 PM - Posted by Anonymous (not verified)

maybe video games need to start rendering in a rolling shutter style method, allowing each line to have nearly 0 end to end latency. This, as well as not using full vsync, has display artifacts, like vertical lines appearing diagonal with rapid horizontal motion, yet allows for the lowest latency. If we really want to improve latency, however, someone needs to make some actual low latency displays. End to end latency on a display, assuming output at display resolution, could be driven down to sub 1ms. For some reason, the display industry does not seem to care about this.

January 3, 2013 | 07:41 PM - Posted by Ryan Shrout

True, though there aren't a whole lot of uses for high frame rate content.  Games are pretty much it.

January 3, 2013 | 08:03 PM - Posted by Anonymous (not verified)

or head mounted camera and augmented reality systems.

January 3, 2013 | 08:03 PM - Posted by ThorAxe

Ryan, this is what I have been hoping for and exactly what I meant when I disagreed on the choice of GPU pick of the Year in the podcast (when bundle value is not taken into consideration).

January 3, 2013 | 08:17 PM - Posted by at0mhard (not verified)

Wow you guys took TechReport's idea and then upgraded it and blew right by them.

I haven't been thinking of average FPS as the GPU be-all and end-all for a few years now. Looks like I was right, huh?

What is needed now is thoughtful methodology and figuring out the best way to convey gathered data.

January 3, 2013 | 08:34 PM - Posted by arbiter

Yeah, for years all that was looked at was min, max and avg FPS when doing reviews. This will show which card truly does a better job at fps, and which drivers cheat to make the fps look better than they really are. Micro stuttering is starting to become a real problem with cards as powerful as they are now.

January 3, 2013 | 08:50 PM - Posted by Ryan Shrout

Yes, and to get as close to a "perfect" experience as possible, these new testing methods are required.

January 3, 2013 | 08:49 PM - Posted by Ryan Shrout

I don't think we have "blown right by them" quite yet though I do hope our method works out for the best. We have spent a LOT of time and money on developing it thus far...

And you are correct! Gathering data is easy. Thoughtful analysis and finding the best way to convey the HUGE amount of data is the hard part.

January 3, 2013 | 09:57 PM - Posted by ThorAxe

TR have done a great job getting the ball rolling so kudos to them.

What I am really looking forward to are the SLI and Crossfire results. Buying two high end cards is a big investment so getting the pair that offers the best performance has always been my primary objective. Heat and power consumption are a distant second.

January 4, 2013 | 08:07 AM - Posted by Dragorth (not verified)

Are you planning on giving a frame number from each game?

Example: 60 seconds of one playthrough would ideally have 60*60 frames, or 3,600 frames. This would count each of those images that show three frames as three, and could give a percentage of frames missed. Or how much faster the percentage is.

Also, I imagine each game is different, so are we going to see games shamed that exacerbate this problem? And how do Crossfire/SLI and Lucid affect the frames?

All very interesting work. Thanks for all the hard work, guys.
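As a rough sketch of the bookkeeping described above, here is how per-refresh frame counts could be rolled up into those kinds of percentages; the sample data is made up purely for illustration:

def frame_accounting(frames_per_refresh):
    """Summarize how many rendered frames landed in each display refresh."""
    refreshes = len(frames_per_refresh)
    rendered = sum(frames_per_refresh)
    no_new_frame = sum(1 for n in frames_per_refresh if n == 0)
    return {
        "refreshes": refreshes,
        "rendered_frames": rendered,
        "frames_per_refresh_avg": rendered / refreshes,
        "refreshes_with_no_new_frame_pct": 100.0 * no_new_frame / refreshes,
    }

# 3,600 refreshes (60 seconds at 60 Hz): mostly one frame each,
# some refreshes containing three frames, some containing none.
sample = [1] * 3000 + [3] * 300 + [0] * 300
print(frame_accounting(sample))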

January 5, 2013 | 10:40 AM - Posted by ThorAxe

My mistake about TR getting the ball rolling. I was not aware that you guys have actually been working on this for over a year and a half, just not publicly.

I can't tell you how awesome I think your method is. We will finally be able to see through all the smoke and mirrors such as partial frames being counted towards averages.

I haven't been this excited about GPU reviews since the Voodoo 2.

January 3, 2013 | 08:51 PM - Posted by Wolvenmoon (not verified)

This is really cool and all, but as I progress in a computer science degree I'm starting to care a bit more about raw power than I do gaming performance. Next to frame latency tests, I hope GPGPU and raw compute tests are included.

I get the feeling this is going to alter the course of GPU development (unless performance PCs die as a viable market, which I somewhat doubt). I don't want AMD/Nvidia ditching strong compute performance to minimize frame latencies if they see the opportunity to sacrifice one for the other. Both feet need to be held to the fire. GPGPU performance is the difference between static environments and smashing everything.

January 4, 2013 | 12:03 AM - Posted by Ryan Shrout

I don't really think these topics are related.  We are basically going to be forcing GPU vendors to improve graphics "smoothness" just like we ask them to improve AA, etc.

There will still be plenty of work done on the compute side.
