
The NVIDIA GeForce GTX TITAN Z Review

Author: Ryan Shrout
Manufacturer: NVIDIA

Test Setup

Testing Configuration

To power the Titan Z and R9 295X2 we are using our 6-core Sandy Bridge-E platform. We are still waiting for Haswell-E to arrive to mix things up!

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX TITAN Z 12GB, NVIDIA GeForce GTX 780 Ti 3GB, AMD Radeon R9 295X2 8GB
Graphics Drivers: AMD 14.6 Beta; NVIDIA 337.91 (GTX Titan Z); NVIDIA 337.88 (GTX 780 Ti)
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64


What you should be watching for

  1. GTX TITAN Z vs Radeon R9 295X2 - The two biggest, baddest graphics cards from AMD and NVIDIA go head to head in this battle. We are testing 2560x1440 and 3840x2160 (4K) to really see which can handle the most intense gaming scenarios.
     
  2. GTX TITAN Z vs GTX 780 Ti SLI - For NVIDIA fans, this battle might be the most interesting. Can a pair of GTX 780 Ti cards (for $1300) outperform the GTX Titan Z ($3000)?


Frame Rating: Our Testing Process

If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online.  Rather than using FRAPS to measure frame rates or frame times, we use a secondary PC to capture the output from the tested graphics card directly and then post-process the resulting video to determine frame rates, frame times, frame variance and much more. 

This amount of data can be pretty confusing if you attempt to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options.  So please, read up on the full discussion about our Frame Rating methods before moving forward!!

While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we generate with additional code of our own. 

If you don't need the example graphs and explanations below, you can jump straight to the benchmark results now!!

 

The PCPER FRAPS File

While the graphs above are produced by the default version of the scripts from NVIDIA, I have modified and added to them in a few ways to produce additional data for our readers.  The first file shows a subset of the data from the RUN file above: the average frame rate over time as defined by FRAPS, though we combine all of the GPUs we are comparing into a single graph.  This basically emulates the data we have been showing you for the past several years.
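To make that concrete, here is a minimal Python sketch of how a per-second FPS series can be derived from a list of frame times; the function name and millisecond units are assumptions for illustration, not our actual scripts.

```python
# Illustrative sketch only, not the actual FCAT/PCPer scripts: derive a
# per-second average FPS series from a list of frame times in milliseconds,
# in the spirit of the FRAPS-style FPS-over-time graph.

def fps_over_time(frame_times_ms):
    """Return one average FPS value (frames rendered) per elapsed second."""
    per_second_fps = []
    elapsed_ms = 0.0
    frames_this_second = 0
    for ft in frame_times_ms:
        elapsed_ms += ft
        frames_this_second += 1
        if elapsed_ms >= 1000.0:          # one second of the run has passed
            per_second_fps.append(frames_this_second)
            elapsed_ms -= 1000.0
            frames_this_second = 0
    return per_second_fps

# Example: 120 frames at roughly 16.7 ms each -> about 60 FPS for two seconds
print(fps_over_time([16.7] * 120))        # [60, 60]
```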

 

The PCPER Observed FPS File

This graph takes a different subset of data points and plots them similarly to the FRAPS file above, but this time we are looking at the “observed” average frame rates, shown previously as the blue bars in the RUN file above.  This takes out the dropped frames and runts, giving you the performance metric that actually matters – how many frames are being shown to the gamer to improve the animation sequences. 

As you’ll see in our full results on the coming pages, a big difference between the FRAPS FPS graph and the Observed FPS indicates cases where the gamer is likely not getting the full benefit of the hardware investment in their PC.
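As a rough illustration of the difference between the two metrics, the sketch below computes an "observed" per-second FPS series once each captured frame has been classified; the record layout and the runt cutoff used here are assumptions for the example, not the exact FCAT/extractor logic.

```python
# Illustrative sketch: per-second "observed" FPS, counting only frames that
# contributed meaningfully to the on-screen animation. Dropped frames
# (0 scanlines in the capture) and runts (fewer scanlines than an assumed
# cutoff) are excluded. Field names and the cutoff are assumptions.

RUNT_SCANLINE_CUTOFF = 21   # assumed threshold below which a frame is a runt

def observed_fps(frames):
    """frames: list of dicts with 'time_ms' (frame time in milliseconds)
    and 'scanlines' (how many scanlines of that frame reached the display)."""
    per_second_fps = []
    elapsed_ms = 0.0
    shown = 0
    for f in frames:
        elapsed_ms += f["time_ms"]
        if f["scanlines"] >= RUNT_SCANLINE_CUTOFF:   # not a drop, not a runt
            shown += 1
        if elapsed_ms >= 1000.0:
            per_second_fps.append(shown)
            elapsed_ms -= 1000.0
            shown = 0
    return per_second_fps
```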

 

The PLOT File

The primary file that is generated from the extracted data is a plot of calculated frame times, including runts.  The numbers here represent the amount of time that each frame appears on the screen for the user; a “thinner” line across the time span represents frame times that are consistent and thus should produce the smoothest animation for the gamer.  A “wider” line, or one with a lot of peaks and valleys, indicates much more variance and is likely caused by a lot of runts being displayed.
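For readers who want to reproduce something similar at home, a bare-bones version of such a frame time plot could look like the following; matplotlib and the synthetic data are assumptions for the example.

```python
# Illustrative sketch: plot frame times against elapsed time, in the spirit
# of the PLOT file. matplotlib is assumed to be installed; the data is synthetic.
import matplotlib.pyplot as plt

frame_times_ms = [16.7, 17.1, 16.5, 33.0, 4.0, 16.8, 16.6]   # example values, runt included

elapsed_s = []
t = 0.0
for ft in frame_times_ms:
    t += ft / 1000.0
    elapsed_s.append(t)

plt.plot(elapsed_s, frame_times_ms, linewidth=0.5)
plt.xlabel("Elapsed time (s)")
plt.ylabel("Frame time (ms)")
plt.title("Calculated frame times over the run")
plt.show()
```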

 

The RUN File

While the two graphs above show combined results for a set of cards being compared, the RUN file shows the results from a single card for that particular test.  It is in this graph that you can see interesting data about runts, drops, average frame rate and the actual frame rate of your gaming experience. 

For tests that show no runts or drops, the data is pretty clean.  This is the standard frame rate per second over a span of time graph that has become the standard for performance evaluation on graphics cards.

A test that does have runts and drops will look much different.  The black bar labeled FRAPS indicates the average frame rate over time that traditional testing would show if you counted the drops and runts in the equation – as the FRAPS FPS measurement does.  Any area in red is a dropped frame – the wider the red area you see, the more colored bars from our overlay were missing in the captured video file, indicating the gamer never saw those frames in any form.

The wide yellow area is the representation of runts, the thin bands of color in our captured video, which we have determined do not add to the animation of the image on the screen.  The larger the area of yellow, the more often those runts are appearing.

Finally, the blue line is the measured FPS over each second after removing the runts and drops.  We are going to be calling this metric the “observed frame rate” as it measures the actual speed of the animation that the gamer experiences.

 

The PERcentile File

Scott introduced the idea of frame time percentiles months ago, but now that we have some different data using direct capture as opposed to FRAPS, the results might be even more telling.  In this case, FCAT is showing percentiles not by frame time but instead by instantaneous FPS.  This tells you the minimum frame rate that will appear on the screen for any given percentage of time during our benchmark run.  The 50th percentile should be very close to the average total frame rate of the benchmark, but as we creep closer to 100% we see how the frame rate is affected. 

The closer this line is to being perfectly flat the better as that would mean we are running at a constant frame rate the entire time.  A steep decline on the right hand side tells us that frame times are varying more and more frequently and might indicate potential stutter in the animation.
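The sketch below shows one way such a percentile curve could be built from frame times: convert each frame time to an instantaneous FPS value, sort from best to worst, and read off the value at each percentile. It is an illustrative approximation, not the FCAT code.

```python
# Illustrative sketch: instantaneous-FPS percentiles in the spirit of the
# PER file. Each frame time becomes an instantaneous FPS value; sorting
# from best to worst and indexing at p% gives the frame rate achieved or
# exceeded p% of the time.

def fps_percentiles(frame_times_ms, points=(50, 90, 95, 99)):
    inst_fps = sorted((1000.0 / ft for ft in frame_times_ms), reverse=True)
    curve = {}
    for p in points:
        idx = min(int(len(inst_fps) * p / 100.0), len(inst_fps) - 1)
        curve[p] = inst_fps[idx]
    return curve

# Example: a mostly smooth run with a few 40 ms frames; the average stays
# near 60 FPS but the worst few percent of the run falls to 25 FPS.
times = [16.7] * 95 + [40.0] * 5
print(fps_percentiles(times))
```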

 

The PCPER Frame Time Variance File

Of all the data we are presenting, this is probably the one that needs the most discussion.  In an attempt to create a new metric for gaming and graphics performance, I wanted to try to find a way to define stutter based on the data sets we had collected.  As I mentioned earlier, we can define a single stutter as a variance level between t_game and t_display. This variance can be introduced in t_game, t_display, or on both levels.  Since we can currently only reliably test the t_display rate, how can we create a definition of stutter that makes sense and that can be applied across multiple games and platforms?

We define a single frame variance as the difference between the current frame time and the previous frame time – how consistent the timing is between the two frames presented to the gamer.  However, as I found in my testing, plotting the value of this frame variance is nearly a perfect match to the data presented by the minimum FPS (PER) file created by FCAT.  To be more specific, stutter is only perceived when there is a break from the previous animation frame rates. 

Our current running theory for a stutter evaluation is this: find the current frame time variance by comparing the current frame time to the running average of the frame times of the previous 20 frames.  Then, by sorting these variances and plotting them in percentile form, we get an interesting look at potential stutter.  Comparing the frame times to a running average rather than just to the previous frame should prevent false positives from legitimate performance peaks or valleys found when moving from a highly compute intensive scene to a lower one.
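A short Python sketch of that comparison, using the 20-frame window described above, might look like this; apart from the window size, the details are illustrative assumptions rather than our production code.

```python
# Illustrative sketch: compare each frame time to the running average of the
# previous 20 frame times, then sort the resulting variances so they can be
# read off as a percentile curve (the ISU-style view described in the text).

def frame_time_variances(frame_times_ms, window=20):
    """Return the per-frame variances (ms), sorted for percentile plotting."""
    variances = []
    for i in range(window, len(frame_times_ms)):
        running_avg = sum(frame_times_ms[i - window:i]) / window
        variances.append(abs(frame_times_ms[i] - running_avg))
    return sorted(variances)

def variance_at_percentile(sorted_variances, p):
    idx = min(int(len(sorted_variances) * p / 100.0), len(sorted_variances) - 1)
    return sorted_variances[idx]

# Example: a single 50 ms hitch in an otherwise steady 16.7 ms stream shows
# up only at the very top of the percentile curve.
times = [16.7] * 100 + [50.0] + [16.7] * 100
v = frame_time_variances(times)
print(variance_at_percentile(v, 50), variance_at_percentile(v, 100))   # ~0.0 and ~33.3
```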

While we are still trying to figure out if this is the best way to visualize stutter in a game, we have seen enough evidence in our game play testing, and by comparing the above graphic to other data generated through our Frame Rating system, to be reasonably confident in our assertions.  So much so, in fact, that I am going to call this data the PCPER ISU, an acronym (International Stutter Units) that beer fans will appreciate.

To compare these results you want to see a line that is as close to the 0ms mark as possible, indicating very little frame time variance when compared to a running average of previous frames.  There will be some inevitable incline as we reach the 90+ percentile, but that is expected with any game play sequence that varies from scene to scene.  What we do not want to see is a sharper climb upward, which would indicate higher frame variance (ISU) and could be a sign that the game suffers from microstuttering and hitching problems.

June 10, 2014 | 03:36 PM - Posted by mAxius

great article i see nvidia trolls incoming :D

June 16, 2014 | 01:33 PM - Posted by Anonymous (not verified)

No man, I am a dedicated nvidia buyer and even I can see that this card is a huge rip off. At the price of this card you could install two Titan Blacks in your PC and get higher performance while saving money.

June 10, 2014 | 04:03 PM - Posted by LtMatt

I was there when this was benched!!

I can't believe how much faster the 295x2 is vs not only the TitanZ but even 780TI SLI. Good job AMD. Now drop the price on the 295x2 a bit so normal people can afford them. :P

Have to say the TitanZ looks amazing with the shroud off. Lovely heatsink design.

June 11, 2014 | 05:28 PM - Posted by Lance (not verified)

I would love a price drop but they never will until Nvidia do something. If you had a product that was better than your only competitor's and you had it at half the price, you would keep it there knowing it will sell as long as you could. AMD are still a company, not as price gougey as NVIDIA, but they still need to turn a profit :)

June 10, 2014 | 04:11 PM - Posted by XnnUser (not verified)

This card is nonsense given NVIDIA's commitment to PC gaming.

If this video card is for computing, why not put it under the Quadro brand? Is that something hard to do?

I hope that PCPer publishes some benchmarks against Quadro cards.

June 10, 2014 | 04:26 PM - Posted by Scott Michaud

Because Quadro implies a lot more than just high compute performance, at least from the driver side of things. Things like quad-buffered stereoscopic 3D in OpenGL professional applications, strict driver certification, and so forth. This is much closer to the Tesla line, except that Tesla implies no video outputs. The GeForce line is a way to get it out to the masses without making it a full-fledged Quadro part.

I can see why NVIDIA put it under their gaming brand, but that doesn't mean I think it is a good purchase, especially for gamers, when an equivalent amount of Titan Blacks (or 780 Tis if full speed, 64-bit computation isn't required) is so much cheaper. It's a pretty big commitment for the design and build quality of the Titan Z.

June 10, 2014 | 04:44 PM - Posted by Thedarklord

Though Quadro does imply the higher compute and a high level of support for professional development apps, doesn't that kind of reinforce the fact that the entire Titan lineup has never been "Gamer Friendly"?

Again, I think the Titan lineup should be branded Quadro or Tesla, as those are the target audiences. I mean, NVIDIA may sell a fair amount of these Titan cards, but it is somewhat of a slap in the face to call these "Gamer Cards" as NVIDIA does.

June 10, 2014 | 07:12 PM - Posted by Anonymous (not verified)

Can't be called Quadro without the driver support and certification that cost big bucks to develop and maintain.
The Quadro brand is out of the question; these GPUs are mostly for number crunchers that will use them round the clock and will pay the higher cost. Due to the Titan Z's lower power usage, the Z still costs less than getting a Quadro and not using the expensive drivers, and stock traders and scientific users don't need drivers certified for professional graphics use. Ten or more of these things running in a stock trading boiler room, crunching away on stock derivatives and millisecond stock trading, will save enough in power to pay the extra initial cost, and it beats having to buy the even higher priced Quadros. They can be used for gaming with gaming drivers, and they do use less power; it just takes a single gamer longer to recoup the extra cost in power savings, compared to a scientific user that may have 100 or more of these running 24/7 in a cluster.

What Nvidia sells to gamers, even their standard gaming GPUs, is mostly paid for by the business and scientific customers who buy Nvidia's GPUs for GPGPU accelerators and HPC/supercomputing, and who pay the higher prices that actually fund the R&D that winds up in the consumer gaming SKUs. The same goes for the enthusiast CPUs that Intel sells. If gamers had to pay all of the R&D costs that the professional and business users cover, the actual cost of development, without any other markets subsidizing the R&D, then most gamers would not be able to afford most mid range GPUs by today's standards.

Uncle Sam spends uber buttloads of cash funding supercomputers, and Nvidia's R&D budget is funded indirectly, and sometimes directly, by these gigabucks projects, whose technology R&D filters down to the consumer divisions and into the consumer products. It has always been thus for Nvidia, Intel, AMD, IBM... Gamers may let the marketing convince them that it is all about gaming, but the reality is that technology filters down to the consumer market, even more so for high tech than other products.

Slap in the face? Let gamers pay the real costs of R&D, instead of government, scientific, and business users, then see what a real slap is. The drug companies will buy Titan Zs by the truckload, of course they'll get a bulk discount, for their protein folding clusters. The Titan Z will come down in price when the next generation of GPUs hits the market.

June 12, 2014 | 04:27 AM - Posted by Wendigo (not verified)

Quadro is a line of products for 3D design; that is their origin, not the GPGPU market.

For that, the Tesla cards exist.

And then there are the Geforce cards, nvidia's 3D gaming and "all for one" solution.

The Titan brand is a hybrid between the Geforce cards and the Tesla cards; the Titan cards don't have the driver optimizations that accelerate special functions for professional 3D applications.

So the stated reason for not including the Titan in the Quadro line is nonsense.

It's more a Tesla card, but not completely: its driver is a Geforce one without the optimizations, and its GPGPU capabilities are almost the same as the Tesla cards', BUT this card disables some advanced features related to running in networks of Tesla cards (it is more a solution for a workstation, not for building supercomputers or clusters of GPGPU systems).

It's madness to buy one of these only for gaming, but people are sometimes very crazy.

June 12, 2014 | 10:29 AM - Posted by Anonymous (not verified)

Quadro is the professional drivers, and not so much a hardware difference: maybe a slightly different BIOS, and some clock tweaking to minimize bit errors in the rendering, but not much difference in the hardware. That, and the millions of extra man (or woman) hours that go into getting the Quadro brand's graphics drivers certified to work as flawlessly as possible with the major third party graphics software applications. The Titan Z is not too different from the GPUs utilized in the Quadro line, maybe with less clock tweaking and BIOS quality control. The Quadros probably have better power distribution/filtering circuitry and higher end capacitors for more error free operation, and extra error correction for the GDDR5 memory. The Titan Z is engineered for a different market than just the gaming market, but if some gamers have the cash, why not try and sell some there also; those scientific users and stock traders will certainly spend the bucks and recoup the higher initial costs in power bill savings.

With Quadro/FirePro/WhateverPro it's the graphics driver and graphics driver maintenance guarantee that you are buying. It's like a service contract for software; it costs money to pay the software engineers, millions of dollars are at stake, and people's jobs (graphics professionals) are made or broken on deadlines, serious deadlines. The show must go on, trade show or otherwise, and it needs the graphics.

June 10, 2014 | 11:52 PM - Posted by renz (not verified)

Finally, someone that understands that Titan is nvidia's cheap version of Tesla instead of Quadro. I've seen many people compare Titan with Quadro, saying Titan (well, the original one) is a good card for 3D artists without the need to spend a couple of thousand dollars on a Quadro, and in the end the same people did not understand why nvidia would want to undercut their own Quadro lineup with Titan. The truth is, it doesn't matter how good Titan's price/performance is compared to Quadro; for people/companies that need the certified driver for their pro application, Titan was never a choice. Titan is for starting developers interested in developing their application using CUDA. Once their software is fully developed they will substitute Titan with Tesla and have full pro support from nvidia.

June 11, 2014 | 11:40 AM - Posted by Anonymous (not verified)

No, the Titan Z is not just for developers (CUDA or otherwise), and it's not for pro 3D graphics artists that do not want to starve and need the certified graphics drivers; some of those big ad graphics take hours to run on multiple Quadros, and a few dropped pixels will ruin a render and show up big time on a large tradeshow graphic or ad. The Quadro brand is not about the hardware, it is about the professional graphics drivers, which cost as much as the hardware to develop and maintain. Titan Z (note the "Z", as that is the indicator of the different branding) can be used for gaming, but it's out there more for the scientific users and stock traders, whose jobs and livelihoods do not depend on spotless rendering. You cannot Photoshop or Gimp an artifact out of a fancy AA and AO, ray traced, 20 hour render with reflections inside of reflections; you have to rerun the render, so the Quadros and FirePros are worth every red cent for those costly drivers, that and ECC memory. Titan Z is for number crunching and lower power consumption in 24/7 use; it can be used for gaming with installed gaming drivers, but it is for a different market segment than pro graphics or gaming alone. They are not undercutting Quadro; Quadro is the pro drivers! Titan Z is for scientists and stock traders and such. Gaming is an afterthought for the Titan Z, if you can afford the bucks up front; if you are running 10 or more of these Titan Z cards, the power savings will pay for the extra cost over their usable lifetime at 24/7 use.

Pro graphics users work on deadlines, just like news reporters; miss too many deadlines because of a bad render and you'll find another way to make a living PDQ!

June 12, 2014 | 01:01 PM - Posted by Brunostako (not verified)

I don't know much of professional GPUs, but the pro cards also have ECC memory, don't they?

June 10, 2014 | 04:39 PM - Posted by Thedarklord

I think the entire Titan lineup should have been branded Quadro or Tesla and be done with it, since they really are intended for CUDA development/professional development.

NVIDIA could/should? release a GTX 790, with two GTX 780 Tis (GK110/3GB each/etc), at a $1500 price point, that is really targeted at gamers.

June 10, 2014 | 04:51 PM - Posted by Silver Sparrow

With all the caveats to the Titan Z I can't help but be impressed with the engineering of this card. The single-fan cooling config and quiet performance with two GPU cores make me as impressed as I was seeing all those memory lanes on a single PCB with the 295x2.

I suppose if you're spending this amount of money on a PC what's an extra £150 or so for a fc waterblock.

June 10, 2014 | 04:58 PM - Posted by Anonymous (not verified)

Incomplete review, Ryan.

You left out temperature comparisons and cold vs hot benchmark runs. You emphasized them in the last few video card reviews. Curious why you left it out this time.

June 10, 2014 | 07:00 PM - Posted by Ryan Shrout

The R9 295X2 does not suffer from the hot/cold issue at all. This was addressed in our launch review of that product.

June 10, 2014 | 07:57 PM - Posted by Anonymous (not verified)

In the video, comparing the 780 Ti SLI & Titan Z, you mention:
"When the GPUs are heavily loaded the clocks have to come down some".

Is throttling an issue with prolonged loading?

Also in the video you point out that heat is one of the 295x2's flaws.

You didn't provide any data for either temperature comparisons or clock behavior.

June 10, 2014 | 09:13 PM - Posted by Ryan Shrout

I have some data here on the Titan Z. I'm going to try playing with some overclocking options on the card and will make a post tomorrow.

June 11, 2014 | 02:12 AM - Posted by Anonymous (not verified)

It'd be interesting to know how much cooler the Titan Z operates than the 295x2, given its flaw. Most sites report the 295x2 under 70C in FurMark, so the Titan Z must run really cool.

Can't wait to see those numbers.

June 11, 2014 | 04:32 PM - Posted by Anonymous (not verified)

Temperatures at load non overclocked:
295X2 = 65*C
Titan Z = 82*C

June 11, 2014 | 04:58 PM - Posted by Anonymous (not verified)

What the .... ?

Ryan said the 295x2 flaw was its temperature. How can the Titan Z be running hotter?

I hope he clarifies this with data, because it's not good if the 295x2 runs cooler than the Titan Z and he is calling it a flaw.

August 11, 2014 | 01:43 PM - Posted by AMDBumLover (not verified)

Heat output and temperatures can vary, I guess. However, it is unfortunate that even when AMD has a superior product they still won't sell as many as nvidia does at 2x the price. Incompetence or just bad luck?

June 10, 2014 | 05:12 PM - Posted by Airbrushkid (not verified)

Wouldn't 2 Titan Blacks be faster than the 780 Tis, and maybe the Titan Z, in gaming?

June 11, 2014 | 12:20 PM - Posted by Airbrushkid (not verified)

Thanks for no help. I found out that 2 Titan Blacks in SLI are faster than 1 Titan Z.

June 10, 2014 | 06:52 PM - Posted by IndraEMC (not verified)

For a $3000 card (costing twice as much as the R9 295X2), it loses to the R9 295X2 in most benchmarks? I can say the GTX TITAN-Z is a failure as a gaming card, but as a workstation card? I hope PC Perspective will try to benchmark this against $3000 Quadro cards in 3D applications.

June 10, 2014 | 06:52 PM - Posted by Anonymous (not verified)

This article lacks the overclocking capability of the Titan Z, but hey, still good to finally see some benchmarks. Nvidia handled the defeat of the Titan Z poorly; they could have just supplied samples instead of this idiotic reaction they had.
I hope they bring a 790 out, as Ryan said, without double P, but cut the price so that AMD drops too and we have some affordable bi-gpu this season.

June 10, 2014 | 08:37 PM - Posted by abundantcores

I don't understand Nvidia's mentality with this GPU.

It's a nice card but, compared with the 295X2, it's not as good.
It also suffers from stuttering, whereas AMD's GPU does not; how the tables have turned.

So AMD's solution is just better, and yet Nvidia want to charge twice as much for this failed solution?

Do they think we are all dumb?

June 10, 2014 | 11:12 PM - Posted by johnnythegeek

I would compare this Titan Z to 2x 780 Ti and 1 Tesla K40. Do you know the peak double precision and peak single precision floating point throughput in TFLOPS on the Z? Is the GDDR5 capable of ECC? If this thing beats out a Tesla K40, then $3000 is much cheaper than a $4000 K40 that is not capable of gaming, or a K6000 that can only game as well as a 780.

June 11, 2014 | 12:27 AM - Posted by RE4PRR (not verified)

It's a shame you didn't do any 780ti SLI overclocking to compare.

Boost both to at least 1 GHz clock speeds (making them equal to the 295x2) and they will easily surpass the 295x2 at all resolutions, unless VRAM really becomes a limiting factor.
