
The NVIDIA GeForce GTX 1080 Ti Review

Author: Ryan Shrout
Manufacturer: NVIDIA

Testing Suite and Methodology Update

If you have followed our graphics testing at PC Perspective, you'll know about a drastic shift we made in 2012 to support a technology we called Frame Rating. Frame Rating uses direct capture of the tested system's output into uncompressed video files, along with FCAT-style scripts that analyze the video to produce statistics including frame rates, frame times, frame time variance, and game smoothness.

Readers and listeners might have also heard about the issues surrounding the move to DirectX 12 and UWP (Universal Windows Platform) and how they affected our testing methods. Our benchmarking process depends on a secondary application running in the background on the tested PC that draws colored overlays along the left-hand side of the screen in a repeating pattern to help us measure performance after the fact. The overlay we had been using supported DirectX 9, 10, and 11, but didn't work with DX12 or UWP games.
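
To make that analysis step concrete, here is a minimal sketch, in Python rather than the actual capture tooling, of how an overlay like this can be read back out of the recorded video: each run of identically colored scanlines along the left edge of a captured frame corresponds to one rendered frame. The band width and the two-entry palette are illustrative assumptions; the real overlay cycles through a longer repeating sequence of colors.

```python
# A sketch (ours, not the FCAT tooling) of reading the overlay band
# from one captured video frame.
import numpy as np

# Illustrative palette; the real overlay repeats a longer color cycle.
PALETTE = [(255, 255, 255), (0, 255, 0)]

def overlay_runs(frame: np.ndarray, band_width: int = 10):
    """Return [color_index, scanline_count] runs from top to bottom.
    `frame` is an HxWx3 RGB array for one captured video frame."""
    band = frame[:, :band_width].mean(axis=1)   # average the band: Hx3
    runs = []
    for row in band:
        # Match each scanline to the nearest palette color.
        idx = int(np.argmin([np.abs(row - c).sum() for c in PALETTE]))
        if runs and runs[-1][0] == idx:
            runs[-1][1] += 1                    # same frame continues
        else:
            runs.append([idx, 1])               # a new frame begins
    return runs
```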

We worked with NVIDIA to fix that incompatibility, and we now have an overlay that behaves exactly the same way as before but lets us properly measure performance and smoothness in DX12 and UWP games. This is a big step toward maintaining the detailed analytics of game performance that enable us to push both game developers and hardware vendors to perfect their products and create the best possible gaming experiences for consumers.

As a result, our testing suite has been upgraded with a new collection of games and tests. Included in this review are the following:

  • 3DMark Fire Strike Extreme and Ultra
  • Unigine Heaven 4.0
  • Dirt Rally (DX11)
  • Fallout 4 (DX11)
  • Gears of War Ultimate Edition (DX12/UWP)
  • Grand Theft Auto V (DX11)
  • Hitman (DX12)
  • Rise of the Tomb Raider (DX12)
  • The Witcher 3 (DX11)

We have included racing, third-person, and first-person games, DX11, DX12, and UWP titles, and some synthetic tests, going for a mix that I think encapsulates the gaming market of today and the future as well as possible. Hopefully we can finally end the bickering in the comments about not using DX12 titles in our GPU reviews! (Ha, right.)

Our GPU testbed remains unchanged, including an 8-core Haswell-E processor and plenty of memory and storage.

PC Perspective GPU Testbed

Processor:     Intel Core i7-5960X Haswell-E
Motherboard:   ASUS Rampage V Extreme X99
Memory:        G.Skill Ripjaws 16GB DDR4-3200
Storage:       OCZ Agility 4 256GB (OS), ADATA SP610 500GB (games)
Power Supply:  Corsair AX1500i 1500 watt
OS:            Windows 10 x64
Drivers:       AMD 17.2.1, NVIDIA 378.78


GPU-Z needs updating for the GeForce GTX 1080 Ti

Picking out graphics cards for our comparison was easy enough: we are only looking at the best of the best. And the best of the best hasn't changed in some time...

To be blunt, the new GTX 1080 Ti doesn't have much competition. The Pascal-based Titan X should be nearly identical in performance, though it's priced $600+ higher, if you can even find it. The GTX 1080 should be on sale for about $200 less than the 1080 Ti starting this week. And the highest-performing AMD graphics cards (the Fury X and Fury) are just...not good options.

For those of you who have never read about our Frame Rating capture-based performance analysis system, the following section is for you. If you have, feel free to jump straight into the benchmark action!

 

Frame Rating: Our Testing Process

If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we use a secondary PC to capture the output from the tested graphics card directly, then post-process the resulting video to determine frame rates, frame times, frame variance, and much more.

This amount of data can be pretty confusing if you attempt to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options. So please read up on the full discussion of our Frame Rating methods before moving forward!

While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we generate with additional code of our own.

The PCPER FRAPS File

Previous example data

While the graphs above are produced by the default version of the scripts from NVIDIA, I have modified and added to them in a few ways to produce additional data for our readers. The first file shows a subset of the data from the RUN file (detailed below): the average frame rate over time as defined by FRAPS, though we combine all of the GPUs we are comparing into a single graph. This basically emulates the data we have been showing you for the past several years.
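
As a rough illustration of what this first graph plots, the sketch below buckets per-frame times into one-second windows and counts every frame, runts and all, the way a FRAPS-style average would. The function name and data layout are our own assumptions, not the scripts' actual code.

```python
# Sketch: FRAPS-style average FPS over time from per-frame times (ms).
def fps_over_time(frame_times_ms):
    fps, elapsed, count = [], 0.0, 0
    for t in frame_times_ms:
        elapsed += t
        count += 1
        while elapsed >= 1000.0:    # a full second has passed
            fps.append(count)       # frames seen during that second
            elapsed -= 1000.0
            count = 0
    return fps
```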

 

The PCPER Observed FPS File

Previous example data

This graph takes a different subset of data points and plots them similarly to the FRAPS file above, but this time we are looking at the “observed” average frame rates, shown previously as the blue bars in the RUN file. This takes out the dropped and runt frames, giving you the performance metric that actually matters: how many frames are being shown to the gamer to improve the animation sequences.

As you'll see in our full results on the coming pages, a big difference between the FRAPS FPS graph and the Observed FPS graph indicates a case where the gamer is likely not getting the full benefit of the hardware investment in their PC.

 

The PLOT File

Previous example data

The primary file generated from the extracted data is a plot of calculated frame times, including runts. The numbers here represent the amount of time each frame appears on the screen for the user; a “thinner” line across the time span represents consistent frame times, which should produce the smoothest animation for the gamer. A “wider” line, or one with a lot of peaks and valleys, indicates much more variance and is likely caused by a lot of runts being displayed.
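
The arithmetic behind this plot can be sketched simply, assuming 1080-line capture at 60 Hz: a rendered frame's on-screen time is the fraction of the refresh interval its scanlines occupy, and a frame that persists across refreshes just accumulates more scanlines. The constants and names here are our assumptions.

```python
# Sketch: per-frame scanline counts -> on-screen frame times (ms).
CAPTURE_LINES = 1080          # scanlines in one captured 1080p frame
REFRESH_MS = 1000.0 / 60.0    # one 60 Hz refresh interval

def frame_times_ms(scanline_counts):
    # 1080 scanlines -> 16.7 ms; 2160 -> 33.3 ms; 21 -> ~0.3 ms (a runt)
    return [REFRESH_MS * n / CAPTURE_LINES for n in scanline_counts]
```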

 

The RUN File

While the two graphs above show combined results for a set of cards being compared, the RUN file shows the results from a single card for that particular test. It is in this graph that you can see interesting data about runts, drops, average frame rate, and the actual frame rate of your gaming experience.

Previous example data

For tests that show no runts or drops, the data is pretty clean. This is the familiar frames-per-second-over-time graph that has become the standard for performance evaluation of graphics cards.

Previous example data

A test that does have runts and drops will look much different. The black bar labeled FRAPS indicates the average frame rate over time that traditional testing would show if you counted the drops and runts in the equation, as the FRAPS FPS measurement does. Any area in red is a dropped frame: the wider the red area, the more colored bars from our overlay were missing in the captured video file, indicating the gamer never saw those frames in any form.

The wide yellow area represents runts, the thin bands of color in our captured video that we have determined do not add to the animation of the image on the screen. The larger the area of yellow, the more often those runts appear.

Finally, the blue line is the measured FPS over each second after removing the runts and drops. We are calling this metric the “observed frame rate,” as it measures the actual speed of the animation that the gamer experiences.
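
Put into code form, the RUN file's categories might look like the hedged sketch below. The 20-scanline runt threshold is a commonly cited FCAT default; representing drops as zero-scanline entries, and the names themselves, are our own illustration.

```python
# Sketch: label each rendered frame the way the RUN file colors it,
# then count only fully shown frames for the blue observed-FPS line.
RUNT_THRESHOLD = 20   # scanlines; a commonly cited FCAT default

def classify_frames(scanline_counts):
    labels = []
    for n in scanline_counts:
        if n == 0:
            labels.append("drop")    # red: frame never reached the screen
        elif n < RUNT_THRESHOLD:
            labels.append("runt")    # yellow: too small to aid animation
        else:
            labels.append("shown")   # counts toward observed FPS
    return labels

def fraps_vs_observed(scanline_counts):
    labels = classify_frames(scanline_counts)
    return len(labels), labels.count("shown")   # all frames vs. shown only
```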

 

The PERcentile File

Previous example data

Scott introduced the idea of frame time percentiles months ago, but now that we have different data, using direct capture as opposed to FRAPS, the results might be even more telling. In this case, FCAT shows percentiles not by frame time but by instantaneous FPS. This tells you the minimum frame rate that will appear on the screen for any given percentage of time during our benchmark run. The 50th percentile should be very close to the average frame rate of the benchmark, but as we creep closer to 100% we see how the frame rate is affected.

The closer this line is to perfectly flat, the better, as that would mean we are running at a constant frame rate the entire time. A steep decline on the right-hand side tells us that frame times are varying more and more frequently, which might indicate potential stutter in the animation.
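
A sketch of how such a curve can be built from frame times, following the description above (the conversion to instantaneous FPS matches the text; the names and sort direction are our assumptions):

```python
# Sketch: percentile-of-instantaneous-FPS curve. Sorted descending, the
# value at the Xth percentile is the frame rate sustained for X% of the run.
import numpy as np

def fps_percentiles(frame_times_ms):
    t = np.asarray(frame_times_ms, dtype=float)
    t = t[t > 0]                           # guard against dropped frames
    inst_fps = np.sort(1000.0 / t)[::-1]   # fastest frames first
    pct = np.linspace(0.0, 100.0, len(inst_fps))
    return pct, inst_fps                   # 50th pct ~ average frame rate
```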

 

The PCPER Frame Time Variance File

Of all the data we are presenting, this is probably the one that needs the most discussion. In an attempt to create a new metric for gaming and graphics performance, I wanted to find a way to define stutter based on the data sets we had collected. As I mentioned earlier, we can define a single stutter as a variance level between t_game and t_display. This variance can be introduced in t_game, in t_display, or at both levels. Since we can currently only reliably test the t_display rate, how can we create a definition of stutter that makes sense and that can be applied across multiple games and platforms?

We define a single frame variance as the difference between the current frame time and the previous frame time, a measure of how consistent the two frames presented to the gamer are. However, as I found in my testing, plotting this frame variance is nearly a perfect match to the data presented by the minimum FPS (PER) file created by FCAT. To be more specific, stutter is only perceived when there is a break from the previous animation frame rates.

Our current running theory for a stutter evaluation is this: find the current frame time variance by comparing the current frame time to the running average of the frame times of the previous 20 frames. Then, by sorting these variances and plotting them in percentile form, we get an interesting look at potential stutter. Comparing the frame times to a running average, rather than just to the previous frame, should prevent false positives from legitimate performance peaks or valleys found when moving from a highly compute-intensive scene to a lighter one.
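
Expressed as code, that running-average comparison might look like this minimal sketch; the 20-frame window comes from the text above, while the names and output format are our assumptions:

```python
# Sketch: frame time deviation vs. the running average of the previous
# 20 frames, sorted into percentile form (the "ISU" view described above).
import numpy as np

WINDOW = 20

def variance_percentiles(frame_times_ms):
    t = np.asarray(frame_times_ms, dtype=float)
    dev = np.sort([abs(t[i] - t[i - WINDOW:i].mean())  # vs. 20-frame average
                   for i in range(WINDOW, len(t))])
    pct = np.linspace(0.0, 100.0, len(dev))
    return pct, dev   # flatter and closer to 0 ms means less stutter
```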

Previous example data

While we are still trying to figure out if this is the best way to visualize stutter in a game, we have seen enough evidence in our gameplay testing, and in comparing the above graphic to other data generated through our Frame Rating system, to be reasonably confident in our assertions. So much so, in fact, that I am going to call this data the PCPER ISU, which beer fans will appreciate as the acronym for International Stutter Units.

To compare these results, you want to see a line that is as close to the 0ms mark as possible, indicating very little frame variance when compared to a running average of previous frames. There will be some inevitable incline as we reach the 90+ percentile, but that is expected with any gameplay sequence that varies from scene to scene. What we do not want to see is a sharper upward line, which would indicate higher frame variance (ISU) and could be an indication that the game suffers from microstuttering and hitching problems.


Comments

March 9, 2017 | 09:11 AM - Posted by Anonymous (not verified)

Small remark on page 11 (Detailed Power Consumption Testing):

"Last Light testing at 4K with the Titan X running at a +150 MHz offset" (should say 1080 Ti instead of Titan X).

Other than that - thanks for the review.

March 10, 2017 | 02:39 AM - Posted by djotter

Saw that too, copy pasta!

March 9, 2017 | 09:20 AM - Posted by khanmein

Interesting power draw from the 6-pin. Any slowdown once we hit the peak of the 11GB VRAM?

Weird that there's no DOOM, For Honor, Sniper Elite 4, etc. benchmark?

March 9, 2017 | 02:36 PM - Posted by Anonymous (not verified)

Doom isn't much of a stress test for anything; I think that's been demonstrated very well by many testing outlets. For Honor and Sniper Elite 4 are pretty damn new, so it's hard to castigate them for not using them.

March 10, 2017 | 04:38 AM - Posted by Anonymous (not verified)

Actually, running with Vulkan is good for testing CPU OC stability.

March 9, 2017 | 09:26 AM - Posted by Anonymous (not verified)

2% faster than the GTX 1080 in Grand Theft Auto V and Gears of War at 1440p.

Woooooww!

March 9, 2017 | 11:53 AM - Posted by slizzo

Those two titles are heavily CPU dependent.

March 9, 2017 | 02:51 PM - Posted by Anonymous (not verified)

Gears of War isn't

March 9, 2017 | 04:54 PM - Posted by Anonymous (not verified)

You know it's not GOW4 right?

March 9, 2017 | 06:35 PM - Posted by Anonymous (not verified)

Unreal Engine isn't CPU dependent

March 9, 2017 | 09:30 AM - Posted by Anonymous (not verified)

IB4 Ryzen testing requests.

March 9, 2017 | 09:34 AM - Posted by Geoff Peterson (not verified)

Thanks for the 1440p 980 Ti benchmark comparisons for those of us thinking of upgrading from that GPU. Looks like a pretty compelling upgrade.

My one question is about the reference PCB layout. Is it identical to the Titan X Pascal? I like to watercool my GPUs to get the most out of them.

March 9, 2017 | 09:59 AM - Posted by Jann5s

Yes, it is the same PCB; see the pictures here.

March 9, 2017 | 09:35 AM - Posted by Anonymous (not verified)

250 amps? I'd like to see that power supply!

March 9, 2017 | 10:51 AM - Posted by Anonymous (not verified)

At 3V, 5V, 12V... to name a few! :)

March 9, 2017 | 12:19 PM - Posted by Anonymousmouse (not verified)

Hopefully, it is at 1V or below:) It was mentioned in the launch live stream too, there must be a mix up somewhere.

March 9, 2017 | 11:18 AM - Posted by Anonymous (not verified)

"the NVIDIA GeForce GTX 1080 Ti is going to be selling for $699 starting today from both NVIDIA and it's partners"

But.. but.. it's not selling today anywhere..

March 9, 2017 | 11:54 AM - Posted by slizzo

Yeah, looks like NDA date was today, actual launch is tomorrow.

March 9, 2017 | 01:01 PM - Posted by Anonymous (not verified)

Any word on the 980ti step up to 1080ti?

March 9, 2017 | 01:04 PM - Posted by Anonymous (not verified)

A bigger chip off the GP102 block, so more performance through more resources. It will be interesting to see what AMD's Vega will offer in performance, with all of the features that are new to the Vega micro-arch, relative to the Ti's larger amounts of the same with slightly higher clocks. That $699 price point will not be too difficult for AMD to beat, and Vega will have that NCU, HBCC, and other all-new features to take into account. So it's the Pascal refresh versus Vega's new features for AMD's latest flagship offering when Vega is released in Q2 2017.

AMD also has its own more-of-the-same in the mainstream with the RX 500 series Polaris refresh offerings in April, but then Vega will follow. So hopefully by the summer Vega will be here.

I hope that games are being tuned for Vega in advance, unlike some of the games that were/are still not tuned for Ryzen and its extra cores at affordable prices.

March 9, 2017 | 01:54 PM - Posted by Anonymous (not verified)

You realize that by the time Vega is out (May), we will be looking at preliminary Volta information, right?

AMD missed this generation entirely on the high-end.

March 9, 2017 | 03:19 PM - Posted by Anonymous (not verified)

Good, more Volta news to force AMD to get its Navi to market on time! But with Zen/Ryzen and Zen/Naples making some mad revenues for AMD, there will be no more excuses for AMD to be late with any flagship SKUs! So the competition moves from the two Ps in the mainstream market to the two Vs in the flagship market. AMD does not have a flagship P SKU, but some dual RX 480 price deals will be had when the RX 500 series P refresh gets here for AMD's mainstream P SKUs, along with that V flagship from AMD shortly after.

That Ps and Vs competition will lead to some N and whatever-comes-after competition, as always.

March 10, 2017 | 09:04 AM - Posted by Anonymous (not verified)

Vega is coming to desktop in the next few months. Desktop Volta is now 1H next year. So at minimum it will be *at least* six months behind Vega, and possibly as much as a year.

Also, AMD will be launching Navi next year. So it won't be Vega that Volta finds itself up against.

Hate to break it to you, but it's AMD who is going to leapfrog 'Paxwell' with Vega this year, and unless Volta is something really special, Navi will keep them ahead.

I'm not really sure how you managed to get things *completely* the wrong way round, but trust me, you have.

March 9, 2017 | 01:59 PM - Posted by Geforcepat (not verified)

March 2017: AMD gives out "Vega" t-shirts; NVIDIA releases a new enthusiast card and drops the price on another. #gofigure

March 9, 2017 | 02:42 PM - Posted by Anonymous (not verified)

You have an error in the summary: "Whether or not they can deliver on those promises has yet to be proven." should say 'deliver on any promises has never been proven'.

Still very upset about Ryzen.

PC Master Race, let's see those Breath of the Wild & Horizon Zero Dawn benches. LOL, reviews 2 weeks before launch, suckas

September 8, 2017 | 12:52 PM - Posted by AnonymousaNUS (not verified)

You have a knack for scrutiny. Prepositions aren't sentunzez. Phreiz is 4 needing, dog cat barf, pooped em' XD

www.whatinthehellisthislinkandwhyisitsolongandwhydidyouactuallyspendthet...

March 9, 2017 | 02:56 PM - Posted by Dark_wizzie

Seems there is a typo in OC page:

'NVIDIA is slightly aggressive with target clock speeds and voltages ath the rated power targets, and that results in the variance that you see here.'

March 9, 2017 | 03:28 PM - Posted by Anonymous (not verified)

I think you should do an article on just how much of a lie TDP figures can be.

250W part? Might spike to 20% higher than that rating.

I think this is something that has real consequences. For supercomputers and other systems that have to last and be run at full load with minimal amounts of failures, this is unacceptable because it shortens the bathtub curve somewhat unpredictably.

Consumer GPUs with boost features are made to squeeze every last bit of performance out of them, rather than to last long and be reliable. Hopefully the board makers compensate, and hopefully their EDA tools, electromigration simulations, and emulation when designing the chips treat the TDP as a complete lie as well.

Power virus games with shitty ScaleformUI menus could very well have a GPU like this sitting above its TDP long enough to cause some electromigration. It's really no wonder that larger GPU VRMs are a common failure.

March 9, 2017 | 04:28 PM - Posted by superj (not verified)

I think the key to TDP is that it is THERMAL Design Power, not Total Input Power. As long as the short-term average power stays below the rated TDP, substantial brief spikes above the TDP are OK, since the thermal cooling capability is buffered by the thermal mass and dissipation capacity of the cooler.

March 9, 2017 | 04:25 PM - Posted by Angelica (not verified)

Why no Doom test?

March 9, 2017 | 04:45 PM - Posted by Jaye1998

awesome

March 9, 2017 | 04:52 PM - Posted by Anonymous Nvidia User (not verified)

108% faster than the Furys in AMD-sponsored Hitman at 4K. Wow!!!!!

37% better than the 1080 in GTA5 at 4K.

26% faster than the 1080 in Gears of War at 4K.

Performance at 1440p may be the result of both the Ti and the 1080 hitting engine limits, not the Ti only being 2% faster.

This card is designed for 4K; isn't this what AMD users are always bragging about?

Anyways I'm looking for a great 4k card and this looks like a winner.

Good review PCPer.

March 9, 2017 | 04:52 PM - Posted by retiredmtngryl07

great review.

March 9, 2017 | 05:03 PM - Posted by Stinky3toes (not verified)

WHY, in your testing results, do you not show any results for multiple monitors? With all the connectors on the GPU, why don't you do any testing using them?

March 9, 2017 | 05:27 PM - Posted by Jeremy Hellstrom

FCAT incompatibility, unfortunately. We are working on it with vendors, so hopefully we will be able to do this soon, as we do want to provide that info in reviews.

March 9, 2017 | 05:07 PM - Posted by Ugiboy

I am very envious, looks amazing

March 9, 2017 | 05:17 PM - Posted by Michael Bertrand Pratte (not verified)

Seems to be the best card ATM! For quality and price, gtx 1080 ti FTW!

March 9, 2017 | 07:46 PM - Posted by ChuckyDiBi (not verified)

That power draw is much smarter in how it's distributed than the mess that was the stock RX 480 at launch. Still, I'd wait for a dual 8-pin version from the OEMs!

March 10, 2017 | 02:13 AM - Posted by serpico (not verified)

I'm curious how you measure power through PCI-E. Measuring voltage is easy enough; do you use a precision resistor and op-amp to measure the current?

March 12, 2017 | 06:44 PM - Posted by crackshot91 (not verified)

Hey Ryan, just wondering if you could share the exact settings you used for the Unigine Heaven bench? It says 1440p Ultra, but what about tessellation (I imagine that's maxed out, of course) or, more importantly, antialiasing? Only wondering because compared to your 980 score, mine is a few fps lower with 8xAA, though my card is heavily OC'd and scores significantly higher in 3DMark. Just trying to get as perfect a comparison as possible :)

March 12, 2017 | 10:42 PM - Posted by Brandito (not verified)

Any word on whether that DisplayPort adapter is active or passive?

March 14, 2017 | 08:02 AM - Posted by Ninjawithagun

Get a new monitor that has DisplayPort 1.2 or higher. Problem solved. Why would you waste money buying an adapter for an older, inferior monitor?

March 16, 2017 | 11:41 AM - Posted by An extremely concerned reader (not verified)

I have 4 displays, I'd like to use all of them and am wondering whether I'd need to buy another adapter or use the one supplied with the card. What kind of pleb only runs one display? Do you even know how to PC Master Race?

March 14, 2017 | 08:01 AM - Posted by Ninjawithagun

Meh, my Titan X performs a bit better, as it will hit 2100 MHz on the GPU. The only real advantage here is the price point. For me, it's a 'no thanks'. I'll wait for Volta to be released this fall ;-)
