Frame Rating: A New Graphics Performance Metric

A change is coming in 2013

If the new year brings us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs.  A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system.  The lone result was a time, in seconds, which was then converted to an average frame rate since the total number of recorded frames was known in advance.

More recently we saw a transition to frame rates over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective. This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer image of how GPUs (and CPUs and systems, for that matter) performed in games.

And even though the idea of frame times has been around just as long, not many people were interested in that level of detail until this past year.  A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and can range from 5ms to 50ms depending on performance.  For reference, 120 FPS equates to an average of 8.3ms, 60 FPS to 16.6ms and 30 FPS to 33.3ms.  But rather than averaging those out over each second of time, what if you looked at each frame individually?
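To put numbers to that relationship: the average frame time is just the reciprocal of the frame rate. A minimal sketch in Python, purely illustrative and not part of any testing tool:

    # Average frame time in milliseconds for a given frame rate.
    def fps_to_frame_time_ms(fps: float) -> float:
        return 1000.0 / fps

    for fps in (120, 60, 30):
        print(f"{fps} FPS -> {fps_to_frame_time_ms(fps):.1f} ms per frame")
    # Prints roughly 8.3, 16.7 and 33.3 ms respectively.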

Scott over at Tech Report started doing that this past year and found some interesting results.  I encourage all of our readers to follow up on what he has been doing as I think you'll find it incredibly educational and interesting. 

Through emails and tweets, many PC Perspective readers have been asking for our take on it: why weren't we testing graphics cards in the same fashion yet?  I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results.  I am still not ready to share the bulk of our information, but I am ready to start the discussion, and I hope our community finds it compelling and offers some feedback.

At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz.  Essentially this card will act as a monitor to our GPU test bed and allow us to capture the actual display output that reaches the gamer's eyes.  This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metrics.

Using that recorded footage, sometimes captured at a consistent 400 MB/s of writes at high resolutions, we can then analyze the frames one by one with the help of some additional software.  There are a lot of details I am glossing over, including the need for perfectly synced frame rates and absolutely zero dropped frames during recording and analysis, but trust me when I say we have been spending a lot of time on this.

The results are multi-GB files (60 seconds of gameplay produces a 16GB file) containing every frame exactly as it was presented to the gamer's monitor.
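As a back-of-the-envelope check on why the files get so big (an illustration only; the actual capture pixel format isn't specified, and the 400 MB/s and 16GB-per-minute figures quoted above imply something more compact than the uncompressed worst case below):

    # Raw, uncompressed capture bandwidth estimate (hypothetical worst case).
    def capture_rate_mb_per_s(width, height, refresh_hz, bytes_per_pixel=3):
        return width * height * bytes_per_pixel * refresh_hz / 1e6

    rate = capture_rate_mb_per_s(2560, 1600, 60)     # 24-bit RGB at 60 Hz
    print(f"raw rate:   {rate:.0f} MB/s")            # ~737 MB/s
    print(f"60 s clip:  {rate * 60 / 1000:.1f} GB")  # ~44 GB uncompressed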

The second image here was made in Photoshop to show you the three different "frames" of Unigine that make up the single frame that the gamer would actually see.  With a 60 Hz display, this equates to three frames being shown in 16ms, though each frame has a variable amount of real estate on the screen. 

We are still finalizing what we can do with all of this data, but it definitely allows us to find some unique cases:

This shot from Sleeping Dogs shows three frames in a single 16ms refresh sent to the monitor, but notice how little screen space the green frame takes up - is measuring this frame even useful for gamers?  Should it be dropped from the performance metrics altogether?
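The analysis software isn't detailed in this article, but the core idea, classifying each rendered frame by how many scanlines of a refresh it actually occupies, can be sketched in a few lines of Python. The threshold and the scanline counts below are hypothetical, not PC Perspective's actual tooling:

    # Classify the frames that share one captured refresh as useful or "runts".
    def classify_frames(scanlines_per_frame, runt_threshold=20):
        useful, runts = [], []
        for idx, lines in enumerate(scanlines_per_frame):
            (runts if lines <= runt_threshold else useful).append(idx)
        return useful, runts

    # Example: three frames share one 2560x1600 refresh; the middle one is 12 lines tall.
    useful, runts = classify_frames([900, 12, 688])
    print("useful frames:", useful, "runts:", runts)   # useful: [0, 2]  runts: [1]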

I put together this short video below with a bit more of my thoughts on this topic, but I can assure you that January and February are going to bring some major changes to the way graphics cards are tested.  Please, we want your thoughts!  This is an important dialogue not only for us but for all PC gamers going forward.  Scott has done some great work with "Inside the Second," and we hope that we can offer additional information and clarification on this problem.  Rest assured we have not been sitting on our hands here!

Is this kind of testing methodology going to be useful?  Is it overkill?  Leave us your thoughts below!

January 3, 2013 | 04:10 PM - Posted by Nebulis01

Ryan, this looks like a great new metric to see how v-sync/tearing and dropped frames affect the smoothness and playability of games. Keep up the good work, can't wait to see more.

January 3, 2013 | 04:12 PM - Posted by Zanthis

Very cool, having metrics other than FPS will be awesome.

January 3, 2013 | 04:20 PM - Posted by arbiter

From how it is explained in the video, it will give you a true picture of the performance of a card. If what was said is right about that image - 16ms being counted as 3 frames, yet one of the frames is only like 10 pixels wide - we can see what drivers do to increase FPS in a game.

January 3, 2013 | 04:44 PM - Posted by Ryan Shrout

Correct - one thing this method will definitely find quickly is cheating drivers.  Dropped frames that are NEVER shown are another thing it will catch.

March 28, 2013 | 04:12 PM - Posted by Anonymous (not verified)

And thus AMD's runt frames will be shown, their cheating exposed, and so this must not be allowed to go any further.

First, after AMD called, harassed, and bribed everyone to keep a lid on it until they could "get their cheating under control" and iron out the stuttering they never gave a crud about before, to make up for the cheating runt frames... we are MONTHS into it and "nothing".

Silence bought again for the failing cheating bankrupt AMD. Perhaps they should get a Public Award not the Nobel Tech Prize - oh why not they deserve it. Also many promotions and honorifics.

April 5, 2013 | 11:38 AM - Posted by anubis44 (not verified)

Yes, because the angelic, honest, beyond-reproach nVidia NEVER does anything wrong. It doesn't stiff customers with dead graphics chips in laptops (Bumpgate), and they've never cheated in benchmarks, and neither has Intel.

As far as I'm concerned, Intel is a mafia-style company, and nVidia are a pack of cheaters, and neither of them is ever getting any more money out of me. So go ahead and bend over for both of them if you want to, but I'm buying AMD CPUs and GPUs unless and until something very basic changes.

January 3, 2013 | 05:00 PM - Posted by Anonymous (not verified)

Great, great work!

This means a lot for the PC gamers.

January 3, 2013 | 05:39 PM - Posted by S@m (not verified)

Thanks Ryan! This is awesome news! It was about time more heavyweight PC hardware review sites, besides the Tech Report, got behind this new metric. The amount of doubt and disinformation (not to mention cheating on the part of some GPU manufacturers) that has been created around this issue in the last few years, in part by the lack of proper benchmarking, has become too big to ignore, and it definitely calls for new metrics to help consumers make better decisions before shelling out big piles of cash for new GPUs. The fact that PC Perspective is finally adopting this metric in 2013 will certainly help push the rest of the elite PC enthusiast media to consider this move too, which will do us all a big service!

January 3, 2013 | 06:01 PM - Posted by wujj123456

Excellent. Now we have Fraps, high-speed cameras and professional frame capturing gear. I bet they will yield some different results. It will be interesting to analyze the discrepancies between them and reveal more about how current graphics drivers behave.

January 3, 2013 | 06:06 PM - Posted by Ryan Shrout


I think we'll all see slightly different results, but I think they will all be valid.  It's all in how you analyze it!

March 28, 2013 | 04:17 PM - Posted by Anonymous (not verified)

What ?

" It's all in how you analyze it! "

So you are saying you can get whatever answer you want for either company ? Maybe you're saying it's too much data and won't matter, or that you can make up issues that don't matter - not really certain.
Maybe you were just excited and spouting something not too bright.

I think you might want to retract that statement. There are correct ways to do things, and ways that are deceitful.
Hopefully you do it honestly, instead of in a biased fashion, caving in for AMD.

January 3, 2013 | 06:32 PM - Posted by Nilbog

I have to say that this is truly a unique take on testing this.
Kudos sir!

I am interested in ALL of the info that we will get from this. Hopefully this will force everybody to care more about the gamer's actual experience instead of just who can pump the most FPS, without caring if it is actually displaying smoothly.
I am also excited to understand more about tearing. I am curious to see if this will push the driver developers to try to reduce it, actually eliminate it or just try to hide it better.

I think Skyrim is definitely going to show just how bad this can get if frametimes are ignored. The TR video is exactly how Skyrim looks to me. Even while running at constant 60 FPS it is far from smooth playing on my system.

About the tearing and counting the frames.
I think that you should include these extra frames no matter how small. You wasted clock cycles rendering that tiny useless frame correct?
By now, aren't there ways to prevent this? (without limiting FPS)
Or is this an example of drivers trying to pump as many frames as possible for good numbers?
Either way, if my GPU and CPU are working on a frame that i never see, or only see like 10 pixels of, it should be counted in the benchmark, and looked down upon.

Maybe im just a noob or confused but by now i would expect my system to only be rendering frames that i will actually see and that are actually useful(i think the consoles do that). I realize that because we all have different parts this becomes very difficult to achieve, but it feels like we can at least try and do a better job at it.

January 3, 2013 | 07:39 PM - Posted by Ryan Shrout

Good points Nil!  I would say yes this is how GPU drivers can pump frames to get higher AVG FPS without really benefiting the consumer at all.  

January 3, 2013 | 07:19 PM - Posted by razor512

Looks really good. It would help with identifying issues such as a game running at 40FPS but feeling like 25-50 because not all frames in a second happen at proper intervals.

PS, for the sleeping dogs image, that green frame makes all the difference and the game just wouldn't be the same without it :)

I was wondering one thing about issues such as that tiny green frame: if you keep the frame rate cap the same but run the display at a higher refresh rate, e.g. 120Hz vs 60Hz, could you find out whether it is the refresh rate that is causing part of the frame to be dropped (since the refresh stuff happens after the GPU has rendered)? Is the GPU rendering the full frame while only that tiny sliver is displayed, making the frame count still valid even if only a row of pixels was displayed?

January 3, 2013 | 07:40 PM - Posted by Ryan Shrout

Frame count would be completely valid, true.  But does the user actually get benefit from that being rendered and displayed?  I say no.

January 5, 2013 | 03:56 PM - Posted by Matthew (not verified)

I think something to keep in mind is that video rendering is tightly coupled with other subsystems in a game, such as input processing, audio processing, networking, and game logic. The easiest way to ensure that one subsystem doesn't delay the entire system is to cut it off prematurely (drop frames). Even on multi-core systems, some things need to be linear to a degree; for example, game logic must come before video/audio rendering.

January 6, 2013 | 02:05 AM - Posted by Anonymous (not verified)

different subsystems usually tick at different hertz.
the ai for example, doesn't usually need to run at full rendering speed.
the physics simulation, if it's sophisticated enough, could tick faster only for those objects that move faster than a threshold, therefore eating more cpu. also, some simulations (and even other algos) run directly on the gpu.
so, i'm not saying the proposed method is wrong, but keep in mind that it's not measuring only the graphics, because it's affected by whatever is happening in the game.
feel free to correct me if i'm wrong :)

January 3, 2013 | 07:39 PM - Posted by Anonymous (not verified)

Maybe video games need to start rendering in a rolling-shutter-style method, allowing each line to have nearly 0 end-to-end latency. This, as well as not using full vsync, has display artifacts, like vertical lines appearing diagonal with rapid horizontal motion, yet allows for the lowest latency. If we really want to improve latency, however, someone needs to make some actual low latency displays. End-to-end latency on a display, assuming output at display resolution, could be driven down to sub-1ms. For some reason, the display industry does not seem to care about this.

January 3, 2013 | 07:41 PM - Posted by Ryan Shrout

True, though there aren't a whole lot of uses for high frame rate content.  Games are pretty much it.

January 3, 2013 | 08:03 PM - Posted by Anonymous (not verified)

or head mounted camera and augmented reality systems.

January 3, 2013 | 08:03 PM - Posted by ThorAxe

Ryan, this is what I have been hoping for and exactly what I meant when I disagreed on the choice of GPU pick of the Year in the podcast (when bundle value is not taken into consideration).

January 3, 2013 | 08:17 PM - Posted by at0mhard (not verified)

Wow you guys took TechReport's idea and then upgraded it and blew right by them.

I haven't been thinking about average FPS as the GPU be-all-and-end-all for a few years now. Looks like I was right, huh?

What is needed now is thoughtful methodology and figuring out the best way to convey gathered data.

January 3, 2013 | 08:34 PM - Posted by arbiter

Yea, for years all that was looked at was min, max and avg FPS when doing reviews. This will show which card truly does a better job, and which drivers cheat to make the fps look better than they really are. Micro stuttering is starting to become a real problem with cards as powerful as they are now.

January 3, 2013 | 08:50 PM - Posted by Ryan Shrout

Yes, and to get as close to a "perfect" experience as possible, these new testing methods are required.

January 3, 2013 | 08:49 PM - Posted by Ryan Shrout

I don't think we have "blown right by them" quite yet though I do hope our method works out for the best. We have spent a LOT of time and money on developing it thus far...

And you are correct! Gathering data is easy.  Thoughtful analysis and the best way to convey the HUGE amounts of data is the hard part.

January 3, 2013 | 09:57 PM - Posted by ThorAxe

TR have done a great job getting the ball rolling so kudos to them.

What I am really looking forward to are the SLI and Crossfire results. Buying two high end cards is a big investment so getting the pair that offers the best performance has always been my primary objective. Heat and power consumption are a distant second.

January 4, 2013 | 08:07 AM - Posted by Dragorth (not verified)

Are you planning on giving a frame number from each game?

Example: 60 seconds of one playthrough would ideally have 60*60 frames, or 3600 frames. This would count each of those images that show three frames as three, and could give a percentage of frames missed, or what percentage faster it is.

Also, I imagine each game is different, so are we going to see games shamed that exacerbate this problem? And how do Crossfire/SLI and Lucid affect the frames?

All very interesting work. Thanks for all the hard work, guys.

January 5, 2013 | 10:40 AM - Posted by ThorAxe

My mistake about TR getting the ball rolling. I was not aware that you guys have actually been working on this for over a year and a half, just not publicly.

I can't tell you how awesome I think your method is. We will finally be able to see through all the smoke and mirrors such as partial frames being counted towards averages.

I haven't been this excited about GPU reviews since the Voodoo 2.

January 3, 2013 | 08:51 PM - Posted by Wolvenmoon (not verified)

This is really cool and all, but as I progress in a computer science degree I'm starting to care a bit more about raw power than I do gaming performance. Next to frame latency tests, I hope GPGPU and raw compute tests are included.

I get the feeling this is going to alter the course of GPU development (unless performance PCs die as a viable market, which I somewhat doubt). I don't want AMD/Nvidia ditching strong compute performance to minimize frame latencies if they see the opportunity to sacrifice one for the other. Both feet need to be held to the fire. GPGPU performance is the difference between static environments and smashing everything.

January 4, 2013 | 12:03 AM - Posted by Ryan Shrout

I don't really think these topics are related.  We are basically going to be forcing GPU vendors to improve graphics "smoothness" just like we ask them to improve AA, etc.

There will still be plenty of work done on the compute side.

January 3, 2013 | 11:58 PM - Posted by brisa117

Finally! This looks to be the best real-world analysis I've ever seen from a reviewer! What would be interesting would be to somehow compare what you're capturing and what monitor X is displaying with monitor reviews! No idea how that would work though! Best of luck, you guys rock!

January 4, 2013 | 12:02 AM - Posted by Ryan Shrout

I wouldn't expect to see differences there as the capture card is actually working as our "monitor".

January 4, 2013 | 12:38 AM - Posted by Anonymous (not verified)

This is where you will see the DEFINITIVE superiority of Nvidia products. At least you better, or Tom is going to have some explaining to do.

If we cross paths at CES I will personally shake your hands and kiss your feet.

January 4, 2013 | 02:22 AM - Posted by Anonymous (not verified)

I think it will appear that nvidia is missing frames to smooth out gameplay whilst AMD brute-forces all of them.

I also think people saying AMD has better picture quality will finally be proven right against nvidia's washed-out/colorless frames.

At the end of the day, there is only so much you can do with a gimped GPU vs a beefed-up one.

April 8, 2013 | 06:14 PM - Posted by Anonymous (not verified)

The gimped GPU being AMD and the good one being Nvidia, correct ?

January 4, 2013 | 03:27 AM - Posted by nummakayne (not verified)

This is phenomenal stuff - a total game-changer. Kudos to you, and your shoutout to Scott Wasson. I have always had great respect for you and PC Perspective, especially since that time so many years ago (I think it was in 2004?) when I e-mailed you in response to a motherboard review and you actually took time out to respond to it. You, Mr. Shrout, are awesome and far too kind.

I wouldn't necessarily say I'm particularly knowledgeable at this kind of stuff but I think I understand enough to know that this is definitely the right way to truly paint an honest picture of GPU performance and the end-user experience.

It reminds me of the time when Crysis came out and people were divided into two camps: one argued that any shooter running at less than 60fps is not 'smooth', while reviewers and many others insisted Crysis was 'smooth enough' even at a 25fps average with dips into the high teens.

Would this kind of testing reveal any superiority in the way CryEngine handled frame output? Again, maybe I'm not making sense but this new wave of 'high average frame rate doesn't always relate to a smooth user experience' testing reminded me of that time.

January 4, 2013 | 03:00 PM - Posted by Ryan Shrout

It very well could but I don't know if we'd go back that far since this testing is so time consuming.

January 4, 2013 | 03:53 AM - Posted by dragosmp (not verified)

A question about this part:
"At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz."

You haven't mentioned the frequency of the capture - is it 60Hz? It would seem so since you talk about 16ms frame times. If this is true, then you lack the means to measure frames faster than 16ms, which do exist (of course they do, we have over-60FPS games). Also, to follow up on that idea, the capture card may lack the means to measure anything but multiples of 16ms frame times. Is that the case, or if I missed something please explain.

Nice start to 2013

January 4, 2013 | 02:33 PM - Posted by arbiter

It might be a limit of the card, but 60Hz is pretty standard for most monitors and people that play games. 120FPS capture could be possible, but unless you dumb down the graphics options or spend a lot on an SLI setup, it's just a smaller market. You could see how inflated the numbers are, like the image above where one 16ms instance was counted as 3 frames. 60Hz is just a good point to work from since the majority of people use it and don't need an SLI setup to get it in most games.

January 4, 2013 | 03:03 PM - Posted by Ryan Shrout

Keep in mind in this testing there are two "frame rates".  The first is the rate at which the game engine is being rendered.  The second is the frame rate with which our hardware is capturing on the other side. 

The second only matters in that it must be 60 Hz (or match the refresh rate of the display) EXACTLY.  We need that to be perfect so that we know we are getting each and every frame from the graphics card as it would be displayed on a 60 Hz monitor.  We can still measure frame rates much higher than 60 Hz with Vsync disabled, as we showed you in that Photoshopped screenshot, by measuring the "rendered frames".
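To make that concrete (a hypothetical sketch, not a description of PC Perspective's actual tooling): with Vsync off, each captured 60 Hz refresh can contain several frame slices separated by tear lines, and summing the slice counts over a second recovers a rendered frame rate well above 60.

    # Hypothetical slice counts for 60 captured refreshes (one second at 60 Hz).
    slices_per_refresh = [3, 2, 3, 2, 2, 3] + [2] * 54
    rendered_fps = sum(slices_per_refresh)   # frames the GPU actually produced
    print(rendered_fps, "rendered frames recovered from one second of 60 Hz capture")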

January 4, 2013 | 05:00 AM - Posted by Supreeth (not verified)

Hi Ryan,

Good job with the new performance testing metric..
My question is about things like the stuttering and latency observed in these graphics card tests: will already-existing cards be able to remove such effects, which consumers observe while gaming? If so, how?

Once again nice job!!

January 4, 2013 | 03:03 PM - Posted by Ryan Shrout

Drivers drivers drivers.

January 4, 2013 | 08:47 PM - Posted by raffriff (not verified)

driver developers! driver developers! driver developers! *throws chair*

January 4, 2013 | 08:51 PM - Posted by raffriff (not verified)

Sorry for the multi post. I blame NoScript.

January 4, 2013 | 06:52 AM - Posted by Dave (not verified)

I'm really glad to see more review sites looking past FPS as the end-all be-all metric for performance.

It's the reason I read sites like Tech Report (who sorta pioneered the frame time metric) and [H]ardOCP who, while reporting FPS min, max, avg, etc., concentrate their conclusions on the "experience" rather than the raw numbers.

This is the most important "metric" - the experience. I think frame times and your expansion on that provide an objective way of measuring the "experience."

As others have alluded to, I'd rather have a game play smoothly at 45 FPS with very little latency and low frame times (giving a smooth experience) than a game that averages higher FPS but produces more latency and more frame time spikes, resulting in a less smooth experience.

Good work on this Ryan, and I hope to see more soon.

January 4, 2013 | 07:08 AM - Posted by Torvik (not verified)

Certainly Ryan I think you need to show any frames that are dropped and whether or not they did contain any meaningful data. Hardware that is consistently dropping meaningful data isn't working correctly in my opinion. If these were network cards constantly dropping data packets you'd swap it out straight away. Any dropped frames would be interesting to see especially if gpu vendors are using driver optimisations to specifically drop frames to increase perceived average frame rates. It's not hard to imagine GPU vendors have long gotten wise to press review techniques and given rise to the "review press kits" which test their hardware in the best light. Gamers don't want to see the best light we want to see the reality however horrific that might be. Only then can you give good independent advice and I think that's what you should be doing.

So the more data the better, though it needs to be presented in a meaningful way to the average user. Perhaps start with an article, or part of an article, that explains what all these terms are and how they would affect your gaming experience.

Also interested to see what triple buffering and the render-ahead frame limits in DX11 are doing here in terms of frame latency. How switching on AA or AF affects frame latency is probably important too, allowing a user to fully understand the consequences of switching graphics options on or off beyond an FPS number.

Ultimately, as a gamer, I am looking for a fluid, artifact-free gaming experience that would be like watching a movie. That's every gamer's goal, I think.

I think it's also important to set a benchmark by testing current generation hardware to get a baseline. Also being clear what driver versions and settings are being used as well.

I think moving in this direction will drive a new development in graphics hardware/software and will really probably be the biggest breakthrough for some time.

Well done.

January 4, 2013 | 03:06 PM - Posted by Ryan Shrout


What I meant by the question of "removing" small frames from our ratings is that we are basically trying to decide whether they should count toward the real "performance frame rate."

For example, if over a 1 second time period there are 80 frames displayed but 30 of them are only shown for 20 scanlines or less, then essentially your "smooth" frame rate is more like 50.
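That example is easy to express in code; a minimal sketch, with the 20-scanline cut-off taken from the comment above and the per-frame sizes made up for illustration:

    # Count only frames that occupy more than `runt_threshold` scanlines.
    def smooth_fps(frame_scanlines, runt_threshold=20):
        return sum(1 for lines in frame_scanlines if lines > runt_threshold)

    one_second = [600] * 50 + [15] * 30   # 50 substantial frames + 30 runts (hypothetical)
    print(smooth_fps(one_second), "of", len(one_second), "frames counted")   # 50 of 80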

January 5, 2013 | 10:14 AM - Posted by Torvik (not verified)

Yes, I think you probably should be presenting a "smooth" frame rate of 50 in the case you described, but you have to clearly state the cut-off you are using and at what point you consider a frame to not contain enough data to justify being counted - is that 5% of the screen, or more, or less?

Reading through all of this, though, I think you have trashed the idea of a single frame rate metric as a valid performance measurement.

Perhaps a better way to look at this is to judge all cards by a set acceptable value of smooth-motion framerate - let's say, for argument's sake, 60fps, the standard vsync monitor rate - then present a percentage of frame delivery outside of or inside this baseline.

e.g. at 60 fps one new frame is drawn every 16ms, so report the percentage of frames delivered slower than 16ms, effectively creating a fluidity/smoothness rating.

It's about consistency now, not about min and max, though there has to be a baseline.

The other way to present it might be to calculate the avg frame delivery time and again present the percentage of frames falling outside the card's average frame delivery time?
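Both of the metrics proposed in this comment are straightforward to prototype; a rough sketch with made-up frame time data (the 20% margin in the second metric is an arbitrary choice, not something from the article):

    # Hypothetical frame time trace in milliseconds.
    frame_times_ms = [14.2, 15.8, 16.1, 33.5, 15.0, 16.4, 41.0, 15.2, 16.0, 15.5]

    # Metric 1: percentage of frames delivered slower than the 16.7 ms (60 Hz) budget.
    budget_ms = 1000.0 / 60
    pct_over_budget = 100.0 * sum(t > budget_ms for t in frame_times_ms) / len(frame_times_ms)

    # Metric 2: percentage of frames more than 20% away from the card's own average delivery time.
    avg = sum(frame_times_ms) / len(frame_times_ms)
    pct_off_average = 100.0 * sum(abs(t - avg) > 0.2 * avg for t in frame_times_ms) / len(frame_times_ms)

    print(f"{pct_over_budget:.0f}% of frames slower than {budget_ms:.1f} ms")
    print(f"{pct_off_average:.0f}% of frames more than 20% away from the {avg:.1f} ms average")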

January 4, 2013 | 10:58 AM - Posted by gamerk2 (not verified)

What I think would be helpful would be to determine, on average, how many UNIQUE frames are actually drawn to the display over a second, rather than how many frames are created. This is VERY important for those of us who use Vsync, since frames are either all or nothing.

January 4, 2013 | 09:24 PM - Posted by Ryan Shrout

We should be able to determine that as well.

January 8, 2013 | 10:16 AM - Posted by gamerk2 (not verified)

Good to hear. In my mind, I don't care how many frames the GPU puts out, I care about how many unique frames make it to my screen. That, at the end of the day, is what really matters in my mind.

In any case, I'm glad FPS is finally being put to the wayside. This method especially will have a LOT of impact when looking at SLI/CF configs (including the proof that microstutter=latency, which I've suspected for some time now).

January 4, 2013 | 11:31 AM - Posted by Brokenstorm (not verified)

Would it be possible use this to test how a 60Hz monitor compares to a 120Hz one as far as dropped/partially displayed frames go?

I assume that as long as the output doesn't exceed the max data rate of the capture card (650MB/s) it shouldn't be a problem. Obviously this would not be possible at 1440p/1600p, but perhaps it could work at 1080p/1200p.

January 4, 2013 | 03:07 PM - Posted by Ryan Shrout

That is the bottleneck - capture hardware.  We haven't tried it on a 120 Hz monitor yet but we plan to right after CES.

January 4, 2013 | 02:01 PM - Posted by Mike D (not verified)

These results will be very useful in crossfire/sli testing involving micro-stuttering to give the end user an idea of actual scaling. Keep up the good work Ryan & PCPer!

January 4, 2013 | 02:17 PM - Posted by YTech2 (not verified)

This is a great topic!

This may be something that GPU-Software engineers have been ignoring ("users won't notice").
This reminds me of concept testing for stereoscopic images, animation, and model rendering to provide fast rendering at best quality on highest resolutions.

Thank you for sharing. I understand about staying quiet. Keep up the good work; I am looking forward to hearing more about your findings through your podcast!

January 4, 2013 | 03:07 PM - Posted by Ryan Shrout

Thanks for listening!

January 4, 2013 | 03:48 PM - Posted by Fulgurant

I registered just to say that you're doing great work. Thanks for articulating (and attempting to measure) what many of us have wrestled with for years.

And good luck to those Bengals on Saturday. :)

January 4, 2013 | 04:32 PM - Posted by Ryan Shrout

Who Dey!

January 4, 2013 | 05:10 PM - Posted by xbeaTX (not verified)

My most sincere congratulations on your idea... I think it will show very interesting results that are closer to the real gaming experience.
It's time to put a stop to reviews that look like subliminal advertising.

January 4, 2013 | 06:22 PM - Posted by David (not verified)

This is awesome!

Scott and the guys over at Tech Report have made me a believer and this promises to offer a nice twist on latency-based testing. Very curious to see how the actual benchmark testing evolves out of this!

January 5, 2013 | 10:24 AM - Posted by I don't have a name. (not verified)

Looks awesome Ryan. This sounds like a unique and innovative way of doing things, and should provide a great second perspective to what Scott at TR has been doing. :)

January 5, 2013 | 12:28 PM - Posted by shellbunner (not verified)

Always loved your reviews and have been a long time fan of this site since the AMD days. I'm REALLY looking forward to your input on this topic as it has exploded the last month. It will definitely be a great resource especially for the next generation of GPU's that will be here in a few months.
Thanks for the hard work Ryan.

January 5, 2013 | 04:21 PM - Posted by The Sorcerer (not verified)

I am more curious about that tool. Will you be making it available to the public? At least putting up some screenshots of it would really help!

January 5, 2013 | 05:51 PM - Posted by mtcn77

I cannot wait to read your review.
I just wonder if the display splitter might incur some nasty "input lag", just as that epic 30" Dell display might in real time. I have noticed some anomalous input lag discrepancies in reviews as screens get bigger than 23".

January 5, 2013 | 09:15 PM - Posted by Anonymous (not verified)

No point in testing when PCPer doesn't use the latest Nvidia drivers and constantly handicaps Nvidia hardware by running outdated drivers.

January 5, 2013 | 10:36 PM - Posted by Anonymous Coward (not verified)

Dear Ryan

I hope by the sheer volume of comments on a post that's not a giveaway, you can see the intense interest in this sort of testing. We're sick and f***ing tired of being lied to with FPS, and having both AMD and Nvidia optimize their cards for the metrics they know will end up in reviews. Smoothness is a big deal, and is the #1 thing people want to know. I look forward to pcper having some friendly competition with the techreport in advancing the state of the art. Because we're all so sick of FPS charts that lie to our faces. I remember playing Battlefield 3 at 100+ fps and having it feel like garbage (tri-fire 6970s for those curious)

January 6, 2013 | 01:22 AM - Posted by Anonymous (not verified)

Micro stutter is especially a problem with CF!

Tho I also had tons of problems in the beginning, and was on the verge of selling my second 5870, when I read on the WSGF about a guy that got a 3rd card and it reduced the problem for him to within tolerable limits.

I then got a 3rd 2GB Matrix, thinking that if it did not fix my problem I would just send it back and sell my second card, but yes, for me the problem also reduced significantly. Six months ago I got a cheap 4th 5870 Matrix to bridge me over until there are 8970s or 780s with more than 4GB of memory (as I want to use a 6000x1920 setup).

And the 4th reduced the micro stutters even a little more; the general theory was that every card got more time to prepare for the next frame.

Now I hear that the driver for GCN 7xx0 cards is badly optimized to prevent micro stutters, compared to my VLIW 5870 cards (so I am actually glad I waited an extra generation ;-).

Now I am wondering, if you have your testbed ready for prime time, whether you are going to do some single, CF/SLI, Tri-CF/SLI and Quad-CF/SLI testing?

And I think it actually would be interesting to see 5870 vs 6970 vs 7970 vs 480 vs 580 vs 680, up to quad setups, to see if different generations act differently.

January 8, 2013 | 12:07 AM - Posted by Anonymous (not verified)

How will this affect low-end cards that might have trouble splitting the signal to both computers? Would the splitter actually handle the bulk of the issue?

January 10, 2013 | 01:52 PM - Posted by arbiter

They are using an in-line DVI splitter to split the signal, so the low-end cards don't even see that it's being split.

January 8, 2013 | 06:07 AM - Posted by kukreknecmi (not verified)

How will the capture card and capture system actually determine / distinguish each frame? Is there a signal on the DVI link saying that a frame is drawn and this is a new frame, so the capture card/system will be aware it is a new frame and can timestamp it? From Fraps' point of view, I understand that since it is on the same system, it can near-ideally identify and timestamp the frame generation times and write them to a csv. Trying to identify the frames from outside the system, I'm just confused about how it will work. Ofc I dunno if there is a frame-drawn kind of signal/feedback on the DVI link.

January 10, 2013 | 01:54 PM - Posted by arbiter

The different frames being drawn can be seen by the tearing effects in the recorded video. Look at the screenshots above - you can see a slight tearing of the image.

January 10, 2013 | 09:47 PM - Posted by kukreknecmi (not verified)

Do you really have an idea how it works, or are you just assuming? How can it detect frames if I stand still? I guess there's almost no tearing then. Does the capturing system have an idea which frame is being drawn on the gaming system so it can timestamp it? It is some different version of recording with a camera, I got that part, and it can detect tearing/cheating etc., I got that too. I just don't get how it will detect the real frame generation time and make stats of it like Fraps does. Without the frame time data it can't accurately measure the frame generation / latency time on the gaming system.

January 8, 2013 | 11:46 PM - Posted by bootdiscerror

16GB a minute is an insane amount of data to sort through. I look forward to the future, and what this will bring. No more 'padding the specs'.

Great stuff!

January 9, 2013 | 07:08 AM - Posted by BigMack70 (not verified)

Yes please! This stuff is awesome, guys... I've been hoping for a while now that sites like yours and others would take the idea TechReport had of looking at frametimes and develop on it yourselves.

Can't wait to see what you guys come up with as a final method of evaluation.

January 9, 2013 | 01:12 PM - Posted by markt (not verified)

Ryan, reading all the stuff I've read, I understand why you have kept this project hush-hush... some people get big-time butt-hurt.

January 10, 2013 | 02:03 PM - Posted by arbiter

Yea, well it's gonna reveal which side is cheating the fps numbers. Look at Tech Report, where they credited the idea from. Just because you have higher fps doesn't mean more fluid gameplay, and frame latency can show which is smoother overall. This will put the proverbial instant replay on the cards to see what each card is doing to get those numbers, like in the photo where one 16ms image has 3 frames counted when only 2 frames were drawn at best.

January 14, 2013 | 03:34 AM - Posted by Fairplay (not verified)

If you want to see some fuzzy numbers, have a look at the frame latencies of the GTX 560/570.
They have huge latency spikes compared with the AMD 6-series cards, and yet we never heard a word from nvidia fans.
Suddenly now, when nvidia is well behind in frame rates per dollar... "OMG it's a HUGE problem"!!!!


January 17, 2013 | 05:57 AM - Posted by uartin (not verified)

Why does everybody keep saying that techreport started a new way to benchmark when microstutter has been talked about on the pcgameshardware and computerbase review sites for ages (we are talking 2008, maybe even earlier)?

I can also remember interminable discussions on ancient threads of the nvidia forums where people kept posting their Unreal Tournament Fraps frame time dumps, showing how slow frames would break the smoothness of gameplay in multi-GPU configurations...

January 19, 2013 | 04:46 PM - Posted by NitroX infinity (not verified)

Would you share some info on that capturing card (company/model) ?

And how do you measure the frametimes/dropped frames, etc? With the custom software you mentioned in the video? If so, will you be releasing that software or keeping it to yourself? I've got a lot of OLD 3d cards that I'd like to do the frame rating test with and fraps won't do since it only works with DirectX/OpenGL.

September 20, 2013 | 02:03 AM - Posted by ezjohny

This is a very good method you have going here, it shows AMD and Nvidia someone out there is paying attention to video games and that someone cares! Good job guys keep up the good work!
Question: I do not like it when you buy a video game and your character goes through a solid object. Who is responsible for this - is it the game developers? "Hope someone could get on them for this"!
