PCPer Live! Frame Rating and FCAT - Your Questions Answered!

Subject: Editorial, Graphics Cards | May 8, 2013 - 11:37 PM |
Tagged: video, nvidia, live, frame rating, fcat

Update: Did you miss the live stream?  Watch the on-demand replay below and learn all about the Frame Rating system, FCAT, input latency, and more!

I know, based solely on the amount of traffic and forum discussion, that our readers have really adopted and accepted our Frame Rating graphics testing methodology.  Based on direct capture of GPU output via an external system and a high-end capture card, our new methods have helped users see GPU performance in a more "real-world" light than previous benchmarks would allow.

I also know that there are lots of questions about the process, the technology, and the results we have shown.  To address these questions and to gather new ideas from the community, we are hosting a PC Perspective Live Stream on Thursday afternoon.

Joining me will be NVIDIA's Tom Petersen, a favorite of the community, to talk about NVIDIA's stance on FCAT and Frame Rating, as well as the science of animation and input.


The primary goal of this live stream is education - not bashing one particular product line or talking up another.  And part of that education is your ability to interact with us live, ask questions, and give feedback.  During the stream we'll be monitoring the chat room embedded on http://pcper.com/live and I'll be watching my Twitter feed for questions from the audience.  The easiest way to get your question addressed, though, is to leave a comment or inquiry below this post.  It doesn't require registration, and it allows us to consider the questions beforehand, giving them a better chance of being answered during the stream.

Frame Rating and FCAT Live Stream

11am PT / 2pm ET - May 9th

PC Perspective Live! Page

So, stop by at 2pm ET on Thursday, May 9th to discuss the future of graphics performance and benchmarking!

 

May 7, 2013 | 11:11 AM - Posted by Steven (not verified)

Does your new office studio allow for a live audience?

May 7, 2013 | 11:50 AM - Posted by Adam T (not verified)

Two questions for Ryan and Tom.

#1: I want to get the visually smoothest performance possible out of my SLI'ed Titans. What settings should I set in the NVIDIA Control Panel and the GeForce Experience program?

#2: Are frame-rate caps (at, say, 120 fps) a good alternative for avoiding the problems with v-sync? Can you set caps with any of the publicly available NVIDIA tools for games that don't offer them as an option?

May 7, 2013 | 03:05 PM - Posted by Virty (not verified)

Hi!

I'd like you guys to explore how frame rating benchmarks can be somewhat misleading when considering real gaming scenarios. One can, for example, contrast the illustration in this article with this one: http://www.tomshardware.com/reviews/radeon-hd-7990-devil13-7970-x2,3329-...
http://www.radeonpro.info/features/dynamic-frame-rate-control/

Can DFC effectively deliver a much smoother experience? If so, isn't the AMD Crossfire issue heavily exacerbated?

Thanks! :)

May 7, 2013 | 04:11 PM - Posted by Jarrett (not verified)

1) Nvidia has clearly been looking at and managing frame times for years now. The question is, why didn't they raise the issue to the public, only developing or releasing FCAT now that benchmarking websites have realized the importance of frame times?

2) There seems to be a certain amount of subjectivity regarding which is considered smoother: a locked low framerate with no frametime deviation, or a substantially higher yet more variable framerate. Given this, isn't it deceptive to imply that lower frametime deviation inherently means 'smoother' animation? Perhaps the issue isn't deception, but simply ambiguity in the meaning of 'smoothness'. Some subjective research is needed to determine how much jitter is perceivable, and a weighting is needed to determine the relative importance of framerate and jitter.

3) Looking at a frametime percentile graph, which corresponds to smoother animation: a plot that's horizontal for the first 90% then curves drastically upwards near the end; or a plot that has a slow linear increase in frametime?

4) Analysis of frametimes with vsync on has focused on 60Hz displays. Do things change significantly with higher refresh-rate displays, like 120Hz?

5) How do frametimes and jitter affect the quality of experience on the Oculus Rift? I know a lot of PC gamers are eager to get into virtual reality and wonder which metric most affects the subjective experience, particularly with regard to motion sickness.

Just as a note, I'd like to see frametiming of integrated graphics.
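
To make the percentile question concrete, here is a minimal sketch (Python, with made-up frametime numbers; the nearest-rank method and sample values are illustrative assumptions, not real benchmark data) of how a frametime percentile curve distinguishes the two shapes described in question 3:

```python
# Sketch: building a frametime percentile curve from two hypothetical runs.
# The frametime values below are illustrative, not real benchmark data.

def percentile_curve(frametimes_ms, points=(50, 90, 95, 99)):
    """Return {percentile: frametime} for a list of per-frame times (ms)."""
    ordered = sorted(frametimes_ms)
    n = len(ordered)
    curve = {}
    for p in points:
        # nearest-rank percentile: the frametime that p% of frames beat
        rank = min(n - 1, int(p / 100 * n))
        curve[p] = ordered[rank]
    return curve

# Run A: flat at 16.7 ms for 90% of frames, then large spikes (stutter)
run_a = [16.7] * 90 + [40.0] * 10
# Run B: slow linear drift from 16 ms up to ~26 ms (no spikes)
run_b = [16.0 + 0.1 * i for i in range(100)]

print(percentile_curve(run_a))  # 99th percentile jumps to 40 ms
print(percentile_curve(run_b))  # 99th percentile stays near 26 ms
```

Run A has the better median but the worse tail; which one "feels" smoother is exactly the subjective question raised above.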

May 8, 2013 | 12:24 AM - Posted by Nilbog

Glad to see Tom outside of a product launch. He is a lot of fun.

1) I am curious to know when NVIDIA discovered how important frametimes are, and how long it took them to implement a solution.

2) As others have asked, why hasn't NVIDIA moved to educate users on the importance of frame latency?

All BS aside, why hasn't NVIDIA pushed this as some kind of open standard to improve the overall gaming experience, regardless of vendor? This is exactly the kind of thing we need to share in order to advance the industry as a whole.

May 8, 2013 | 05:59 AM - Posted by dreamer77dd

Timing from the game engine?
Game engines hold back and push the timing of the simulation, refilling the queue.
So the cause of the results does not always fall on the shoulders of the GPU, but on the game engine?
Can game engine developers change FCAT results?

How much does the driver for these cards affect the overall FCAT results? Is the game engine's internal timing affected by the drivers installed on the machine, thereby affecting the results of your testing?

How accurate are these results, or our understanding of them, given that we cannot see the entire rendering pipeline?

With games that need a server to run, like WoW and other MMOs, how does this affect testing and results?
Are you really getting a true result, or does latency also come from waiting on the server, ping, the game engine, and the GPU?
I would expect a lot of variation in the results.

Is multi-GPU really that much faster?
With AFR, each GPU takes a turn rendering a frame; I am thinking it goes back and forth, giving each GPU a turn to do one frame while the other retrieves data for the next.
But is this faster? Fewer jitters?
And with that, I believe it has been said that more memory on a GPU helps to smooth out game animation?

NVIDIA's frame metering technology for SLI delays the delivery of frames so that what is shown is smooth.
Is the lag between user input and the visual response a big deal when smoothing out animation?
Going forward, how will accepting such latency affect the future, for example the Oculus Rift?
Does AMD need frame metering to stay competitive, or is there a better solution for them?

Is it better to lock a game at 60 fps?

NVIDIA has an option, I believe, that filters out runt frames and dropped frames. Do you recommend this? What is your opinion of the FCAT filters and NVIDIA's tools?
Do these tools favor NVIDIA's own products, or is it just that NVIDIA has a head start on implementing this in their hardware compared to AMD?

Intel and Microsoft also provide tools, which makes me question software from these companies. Will Futuremark be getting involved in this benchmarking?

An example of the steps before we see a result on the screen:
user input, Windows kernel, driver, GPU, then display.
To get a complete understanding of where the delays or latency happen, you would need to test every step of the rendering pipeline.
Are there going to be tools for such testing, and are they coming anytime soon?
I guess FCAT is enough of a jump for now.
Do GPU companies know from start to finish what is going on in the rendering pipeline between the user and their hardware? They do talk with the developers who make the software so they can write drivers, etc.?
They must have some insight or testing that we are not aware of, don't they?

What is the difference between FCAT and Fraps?
Is AMD more interested in FCAT results or Fraps results?
What does it mean when FCAT and Fraps results do not agree with each other?
What I have read is that FCAT shows more frame-to-frame fluctuation, and that Fraps does not capture the frame-rate differences as well in multi-GPU configurations.
I think FCAT measures closer to the end result, while Fraps measures at the beginning of the pipeline.
How does fixing runts, jitter, and overall smoothness in games relate to other applications, like scientific computing and supercomputers?
Multi-GPU supercomputers whose GPUs don't step on each other's toes when working through a problem would use less electricity to get results.
I would imagine less wasted GPU work and smoother latency would help GPU-accelerated programs big and small, not just video games but everything related, right?
Let's say breaking a code or protein folding.

Sorry, my thoughts went all over the place, but I hope they can be understood and are not too confusing.

May 8, 2013 | 06:55 AM - Posted by fausto412 (not verified)

Will the driver improvements AMD is working on mean that the video cards will not work as hard, since they'll be sort of pacing themselves? Will this reduce noise and temperatures? Will the driver improvements trickle down to older CrossFire generations?

I have a 6950 and a 6990 and I swore I didn't see a difference before even though fps was higher with the 6990.

Are 3-GPU configurations affected by frametime issues as much? And do 3-GPU configurations present an easier or tougher challenge for managing frame times?

How come NVIDIA didn't bring up frametimes before, and how long have they been managing animation quality?

May 8, 2013 | 08:04 AM - Posted by Randomoneh

Are the colorful bars automatically translated to and saved as frame times in ms, or do you have to do it manually by studying each frame separately?

May 13, 2013 | 04:44 AM - Posted by NVIDIA TAP (not verified)

The bars are extracted from the captured AVI using a simple DSP program written by NVIDIA. Those raw dumps can be further processed using the FCAT scripts to automatically extract meaningful information about performance.
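
The core idea can be illustrated with a rough sketch (this is not NVIDIA's actual extractor; the color names, capture rate, and per-capture-frame granularity are simplifying assumptions - real FCAT works at scanline resolution within each captured frame):

```python
# Rough sketch of the FCAT idea: each rendered frame gets a colored bar
# overlaid on one edge of the image; the capture card records the display
# output, and counting how long each color persists in the capture gives
# the on-screen time of every rendered frame.

CAPTURE_FPS = 60  # capture card recording rate (assumed for this sketch)

def frame_times_from_bars(bar_colors):
    """Given the bar color seen in each captured frame, return how long
    each rendered frame was actually on screen, in milliseconds."""
    times = []
    i = 0
    while i < len(bar_colors):
        # scan forward while the same bar color (same rendered frame) persists
        j = i
        while j < len(bar_colors) and bar_colors[j] == bar_colors[i]:
            j += 1
        times.append((j - i) * 1000.0 / CAPTURE_FPS)
        i = j
    return times

# One frame shown for 2 captures, then 1, then 3:
print(frame_times_from_bars(["red", "red", "lime", "blue", "blue", "blue"]))
```

A runt frame would show up here as a color that persists for only a tiny fraction of a capture, which is why the real tool measures scanlines rather than whole captured frames.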

May 9, 2013 | 01:56 AM - Posted by Joe (not verified)

Ryan was great on TWIT.
He corrected those nimrods like Laporte/Stevens who think PC/console gaming is dead.

May 13, 2013 | 02:40 AM - Posted by ThorAxe

Yes; it was great to hear him on TWIT. Leo seems like a good guy, but he and other panel members appear clueless about the reality of mobile gaming.

Mobile Gaming Revenue is roughly 4 or 5% of the global video game market. PC Gaming alone earned roughly 700% more revenue. What many pundits forget is that mobile games only cost around $5 a copy.

May 9, 2013 | 01:58 AM - Posted by Joe (not verified)

Ryan is great...so well spoken and backs things up with facts/science.

May 9, 2013 | 10:47 AM - Posted by SPBHM

does every Kepler GPU with SLI support hardware frame rate metering like the 690?

How different is the Kepler GPU with framerate metering compared to a Fermi GPU SLI, like the 590?

can you quantify the additional input lag compared to a single GPU? Is it like 10, 20, 50, or 100 ms?

--

Do you have any data about slower CPUs vs SLI, and single GPUs in terms of frame times?

How many parts of different frames will you normally see in a single refresh at, say, 80 FPS on a 60Hz display, or does it vary?
would a 60Hz screen running a game at 240 FPS present newer information than a 60Hz screen with a game at 120 FPS (always without vsync)?

how much frame latency variation would be enough to cause an unpleasant experience? Like a swing from 10 ms to 15 ms between successive frames?

is some sort of frame metering adopted for single GPUs, to protect smoothness from other bottlenecks like the CPU or I/O in general?

---

will nvidia add an option to disable framerate metering on their drivers control panel?

does it have something to do with the "Max Frames to Render ahead"?

what about a dynamic framerate target (with multiple targets, let's say if enough performance lock at 100, if not lock at 60 or something) would it work?

thanks

May 10, 2013 | 10:36 PM - Posted by nvtweakman (not verified)

SPBHM said:
"does it have something to do with the "Max Frames to Render ahead"?"

Yes. V-sync and GPU-limitations "back pressure the entire render pipeline" (quoting Tom there). You can think of maximum pre-rendered frames as a "buffer" or "queue" of frames building up in the pipeline. The more frames allowed to build up, the higher the input lag.
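
That "queue" effect can be put into a back-of-the-envelope model (the numbers and the fully GPU-bound assumption are illustrative, not measurements):

```python
# Toy model of how the pre-rendered frame queue adds input lag when
# GPU-bound: input sampled for a new frame must wait behind every frame
# already sitting in the queue before its own frame renders and displays.

def input_lag_ms(gpu_frame_time_ms, max_prerendered_frames):
    """Rough worst-case latency from input sample to display, assuming
    the CPU keeps the queue full and the GPU is the bottleneck."""
    # frames queued ahead of ours, plus our own frame's render time
    return (max_prerendered_frames + 1) * gpu_frame_time_ms

for queue_depth in (1, 3):
    lag = input_lag_ms(16.7, queue_depth)
    print(f"max pre-rendered frames = {queue_depth}: ~{lag:.1f} ms")
```

Under this model, dropping the setting from 3 to 1 at ~60 fps cuts the queue's contribution from roughly four frame times to two, which matches the intuition that a deeper queue means higher input lag.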

May 9, 2013 | 03:18 PM - Posted by Thedarklord

Question: Is NVIDIA making any headway on improving their Overclocking tools (such as Boost) as well as their Adaptive VSync tools?

If so what are the goals NVIDIA has and what headway have they made?

May 12, 2013 | 05:38 AM - Posted by ThorAxe

Thanks for a fantastic explanation of input latency. I can now finally explain to my console friends why I feel like I am playing in slow motion on my PS3 on the rare occasions that I game on it.

May 13, 2013 | 07:43 AM - Posted by n2k (not verified)

This was very interesting to watch! I love the new metrics. I'm wondering why all the technical v-sync information is so hard to come by?

I still use CRTs (I have two Compaq P1220s), and I'm wondering why, when enabling V-Sync, it doesn't switch between 50 fps (half my refresh rate) and 100 fps when it dips below 100 fps in games, running 100Hz at 1600x1200; this is different than LCD displays. I suppose that this is because I'm using VGA (analog) as opposed to DVI (digital)?

I would like to know the technical explanation for why this is, and what the difference is between V-Sync on CRTs and LCDs.

May 13, 2013 | 05:02 PM - Posted by nvtweakman (not verified)

n2k said:
"I'm wondering when enabling V-Sync it doesn't switch between 50 fps (half my refresh rate) and 100 fps . . . this is different than LCD displays."

It's really no different, because this is hardly dependent on the display technology itself (e.g. CRT/VGA, LCD/DVI, etc.). Here's what really happens:

The video card renders a new frame into a "back buffer" while an already completed frame is being sent to the display (from the "front buffer"). If the game only makes use of one back buffer and FPS falls below refresh rate with v-sync enabled, it must now wait every other display refresh cycle to send a new frame to the display (which divides FPS in half). Triple buffering (i.e. multiple back buffers) is one solution to that problem.

In other words V-sync and FPS behave pretty much the same between CRT and LCD monitors. LCD monitors generally create more latency though, for several reasons (buffering frames internally, etc.).
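
That halving behavior can be sketched with a toy model (illustrative only; real drivers, triple buffering, and game engines complicate this considerably):

```python
# Toy model of v-sync with a single back buffer: a frame that misses a
# refresh deadline must wait for the next one, so render times even
# slightly over the refresh interval halve the displayed frame rate.

import math

def vsync_fps(render_ms, refresh_hz=100.0):
    """Displayed FPS with double buffering: each frame occupies the next
    whole number of refresh intervals at or above its render time."""
    interval_ms = 1000.0 / refresh_hz
    intervals_used = math.ceil(render_ms / interval_ms)
    return refresh_hz / intervals_used

print(vsync_fps(9.5))   # renders inside one 10 ms interval -> 100.0 fps
print(vsync_fps(10.5))  # just misses -> waits an extra refresh -> 50.0 fps
```

Note that nothing in the model depends on the display being a CRT or an LCD; only the refresh rate and the render time matter, which is the point made above.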
