
NVIDIA GeForce GTX TITAN Preview - GK110, GPU Boost 2.0, Overclocking and GPGPU

Manufacturer: NVIDIA

GK110 Makes Its Way to Gamers


Back in May of 2012, NVIDIA released information on GK110, a new GPU the company was targeting at the HPC (high performance computing) and GPGPU markets, which are eager for more processing power.  Almost immediately, the questions began about when we might see the GK110 part make its way to consumers and gamers in addition to finding a home in supercomputers like Cray's Titan system, capable of 17.59 petaflops.

 


Nine months later we finally have an answer - the GeForce GTX TITAN is a consumer graphics card built around the GK110 GPU.  With 2,688 CUDA cores, 7.1 billion transistors and a die size of 551 mm^2, the GTX TITAN is a big step forward (both in performance and physical size).


From a pure specifications standpoint, the GeForce GTX TITAN based on GK110 is a powerhouse.  While the full GPU sports a total of 15 SMX units, TITAN will have 14 of them enabled for a total of 2,688 shaders and 224 texture units.  Clock speeds on TITAN are a bit lower than on GK104, with a base clock rate of 836 MHz and a Boost Clock of 876 MHz.  As we will show you later in this article, though, the GPU Boost technology has been updated and changed quite a bit from what we first saw with the GTX 680.
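
Those totals fall straight out of NVIDIA's published Kepler SMX layout: each SMX carries 192 single precision CUDA cores and 16 texture units.  A quick back-of-the-envelope check (our own sketch, not an NVIDIA-provided calculation):

    # Per-SMX resources from NVIDIA's published Kepler SMX layout
    CORES_PER_SMX = 192       # single precision CUDA cores per SMX
    TEX_UNITS_PER_SMX = 16    # texture units per SMX

    titan_smx = 14            # TITAN ships with 14 of GK110's 15 SMX units enabled

    print(titan_smx * CORES_PER_SMX)      # 2688 CUDA cores
    print(titan_smx * TEX_UNITS_PER_SMX)  # 224 texture units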

The bump in the memory bus width is also key; feeding that many CUDA cores definitely required a boost from 256-bit to 384-bit, a 50% increase.  Even better, the memory is still running at an effective 6.0 GHz, resulting in total memory bandwidth of 288.4 GB/s.
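
For the curious, that bandwidth figure is just bus width times data rate.  A quick sanity check - note that the precise 6,008 MT/s effective rate below is our inference from NVIDIA's rounded "6.0 GHz" figure:

    # Memory bandwidth = (bus width in bytes) x (effective data rate)
    bus_width_bits = 384
    effective_rate_mtps = 6008   # mega-transfers per second (assumed; marketed as 6.0 GHz)

    bytes_per_transfer = bus_width_bits // 8                         # 48 bytes per transfer
    bandwidth_gbs = bytes_per_transfer * effective_rate_mtps / 1000
    print(f"{bandwidth_gbs:.1f} GB/s")                               # 288.4 GB/s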

Continue reading our preview of the brand new NVIDIA GeForce GTX TITAN graphics card!!


Speaking of memory - this card will ship with 6GB on-board.  Yes, 6 GeeBees!!  That is twice as much as AMD's Radeon HD 7970 and three times as much as NVIDIA's own GeForce GTX 680 card.  This is without a doubt a nod to the super-computing capabilities of the GPU and the GPGPU functionality that NVIDIA is enabling with the double precision aspects of GK110.


These are the very same single precision CUDA cores that we know from the GK104 part, so you can guess at performance (based on clocks and core counts) - and you'll have to do just that for a couple more days still.  (NVIDIA is asking us to hold off on our benchmark results until Thursday, so be sure to check back then!)
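
If you want to make that guess yourself, peak single precision throughput on Kepler is two FLOPS (one fused multiply-add) per core per clock.  A rough sketch comparing base-clock numbers - theoretical peaks only, not the benchmarks we'll publish Thursday:

    # Peak single precision GFLOPS = 2 FLOPS/core/clock x cores x clock
    def peak_gflops(cores, clock_mhz):
        return 2 * cores * clock_mhz / 1000

    print(peak_gflops(2688, 836))    # GTX TITAN at base clock: ~4494 GFLOPS
    print(peak_gflops(1536, 1006))   # GTX 680 at base clock:   ~3090 GFLOPS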

While the GeForce GTX 680 (and the family of GK104/106/107 GPUs) was built for single precision computing, GK110 was truly built with both single and double precision computing in mind.  That is why its die size and transistor count are so much higher than GK104's: the double precision units that give TITAN its capability in GPGPU workloads are absent from the GK104 part.  And while most games today don't take advantage of double precision workloads, we cannot discount the potential of future GPGPU applications and what GK110 would offer over GK104.
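
If you've never run into the difference in practice, here is a minimal CPU-side illustration (plain Python with NumPy, nothing GPU-specific): naively accumulating 0.1 a hundred thousand times drifts visibly in single precision, while double precision stays essentially exact - the same trade-off GPGPU codes weigh when choosing FP32 or FP64.

    import numpy as np

    N = 100_000
    acc32 = np.float32(0.0)   # single precision accumulator
    acc64 = 0.0               # Python floats are IEEE double precision
    step = np.float32(0.1)

    for _ in range(N):
        acc32 += step
        acc64 += 0.1

    print(acc32)   # visibly off from the exact 10000 in single precision
    print(acc64)   # matches 10000 to many more digits in double precision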

I asked our very own Josh Walrath for some feedback on the GK110 GPU release we are seeing today, in particular how it relates to process technology.  Here was his response:

The GK110 is based on TSMC's 28 nm HKMG process.  This is the same process used for the other Kepler based products that are out today.  The 28 nm process first saw the light of day in graphics with the AMD Radeon HD 7970 back in December 2011.  NVIDIA followed some months later with the GK104 based GTX 680.  28 nm has been a boon to the graphics market, but it seems that we are at a bit of a standstill at the moment.  There have been few basic improvements to the process since its introduction in terms of power and switching speed, though obviously yields have improved dramatically over that time.  This has left AMD and NVIDIA in a bit of a bind.  With no easy updates due to process improvements, both companies are stuck with thermal and power limits that have remained essentially unchanged for well over a year.

The GK110 is a very large chip at 7.1 billion transistors.  It is likely approaching the maximum reticle limit for lithography, and it would make little sense to try to make a larger chip.  So it seems that GK110 will be the flagship single GPU for quite some time.  Happily for NVIDIA, they have made some interesting design decisions about the double precision units, keeping them from affecting TDPs when running single precision applications and games (which are wholly single precision).  There is a slight chance for these products to move to a 28 nm HKMG/FD-SOI process, which would improve both leakage and transistor switching properties.  Apparently the switch would be relatively painless, but FD-SOI has not gone into full production at either TSMC or GLOBALFOUNDRIES.


We still have quite a bit more to share with you on the next pages though including details on GPU Boost 2.0, new overclocking and overVOLTING options for enthusiasts and even a new feature to enable display overclocking!

February 19, 2013 | 06:57 AM - Posted by Kyle (not verified)

Yeah, but will it run Crysis?

February 19, 2013 | 07:01 AM - Posted by Ryan Shrout

We'll find out on Thursday.  :)

February 20, 2013 | 12:49 PM - Posted by Anonymous (not verified)

no

February 22, 2013 | 04:07 AM - Posted by Anonymous (not verified)

Actually, the better question is: will it blend?

February 19, 2013 | 07:03 AM - Posted by I don't have a name. (not verified)

Crysis 3, more like. :) Can't wait to see the performance results!

February 19, 2013 | 07:05 AM - Posted by I don't have a name. (not verified)

Might the GTX Titan review be the first to use your new frame-rating performance metric? Can't wait! :)

February 19, 2013 | 07:06 AM - Posted by Ryan Shrout

I can tell you right now that it won't be the full version of our Frame Rating system, but we'll have some new inclusions in it for sure.

February 19, 2013 | 07:21 AM - Posted by stephenC (not verified)

I am really looking forward to seeing how this card will perform with crysis 3

February 19, 2013 | 07:40 AM - Posted by Anonymous (not verified)

I think I'll get one. I have no need for it, but it looks cool. So, from someone who knows very little about computers or gaming, I take it this would be "overkill" to have my buddy build me a new system around this?

February 19, 2013 | 07:43 AM - Posted by Prodeous (not verified)

Hope to see some GPGPU testing and comparison to the 680. I can assist with Blender3D, as it supports GPGPU rendering on CUDA cards, if you guys want.. Need to find out if I need to start putting coins away for another purchase ;)

February 19, 2013 | 08:31 AM - Posted by Ryan Shrout

Point me in the right direction!

February 19, 2013 | 07:16 PM - Posted by Riley (not verified)

+1 for blender render times! Go to blender.org to download blender (looks like it is down as I post this, probably for v2.66 release).

blendswap.com could be used to find stuff to render. That way the files would be publicly available to anyone who wanted to compare your times to their own. Just make sure the files use the cycles rendering engine (not blender internal).

Instructions at http://www.blenderguru.com/4-easy-ways-to-speed-up-cycles/ can be followed to swap between CPU and GPU rendering.

February 20, 2013 | 02:22 PM - Posted by Gadgety

I'm looking at replacing three GTX 580 3GB cards, mainly for rendering. With 6GB of memory and the massive number of CUDA cores I'm really looking forward to Blender Cycles speeds, but also Octane and V-Ray RT. This must be one of the usage scenarios for the Titan, rather than pure gaming.

February 21, 2013 | 02:49 PM - Posted by Prodeous (not verified)

Download blender from:
http://www.blender.org/download/get-blender/

Windows 64bit install is the simplest.

The most common test for GPGPU rendering is to use the link below. "The unofficial Blender Cycles benchmark"
http://blenderartists.org/forum/showthread.php?239480-2-61-Cycles-render...

And then use the information provided by Riley:

http://www.blenderguru.com/4-easy-ways-to-speed-up-cycles/
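
For anyone scripting this rather than clicking through the UI, the same CPU/GPU switch can be made from Blender's Python console. A minimal sketch assuming a 2.6x-era build - these property names vary between Blender versions:

    # Run inside Blender's Python console (bpy is Blender's bundled API)
    import bpy

    bpy.context.user_preferences.system.compute_device_type = 'CUDA'  # expose CUDA devices

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'   # the benchmark files use the Cycles engine
    scene.cycles.device = 'GPU'      # set to 'CPU' for the comparison run

    bpy.ops.render.render(write_still=True)  # render the open .blend and time it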

February 19, 2013 | 08:12 AM - Posted by Jumangi (not verified)

*shrug* I guess it's fun for the reviewer to get something new in the office, but this might as well still be a myth, as 99.99% of users will never even think about getting something this expensive.

Give us something new that regular people can actually afford...

February 19, 2013 | 08:18 AM - Posted by Rick (not verified)

i agree with above. one of these or two 670? i hope we see that comparison in your review.

February 19, 2013 | 08:32 AM - Posted by Ryan Shrout

670s are too cheap - you'll see two GTX 680s though.

February 19, 2013 | 10:12 AM - Posted by mAxius

ah the nvidiots will gladly put out $1000.00 for a graphics card that is nothing more than a 680 on roid rage... i know im not touching this card unless i win the lotto and get hit in the head with a brick making me mentally incapacitated.

February 19, 2013 | 10:38 AM - Posted by Oskars Apša (not verified)

If PcPer'll have the time and blessing of Nvidias highers and Geekbox Gods, make a review with a system comprised of these cards in sli and dual Intel Xeons and/or AMD Opterons.

February 19, 2013 | 10:52 AM - Posted by Anonymous (not verified)

Sweet mother of GOD!

I want! I can't wait to see the benches. They oughta make the Titan version the norm for future generations to bring down the x80 series lol.

February 19, 2013 | 11:23 AM - Posted by Anonymous (not verified)

It would be nice if someone would test this card with Blender 3D to see how it rates for rendering against the Quadro GPUs that cost way more! Does anyone know of websites that benchmark open source software using GPUs? Most of the benchmarking sites are using Maya and other high priced software, and little open source free software, for their tests.

February 19, 2013 | 11:30 AM - Posted by PapaDragon

ALL that POWER from one card, shame about the price. This is my Adobe Premiere Pro and After Effects dream card, plus gaming! Too bad for the price.

Anyhow, a lot of great features, can't wait for the review. Great informative video Ryan!

February 19, 2013 | 12:23 PM - Posted by David (not verified)

I wish NVIDIA had been a little more aggressive on the price, but I still couldn't have afforded it. Just upgraded both my and my wife's systems to 7970 GHz Editions because the bundle deal was too good to pass up.

That said, for all the people who say cards like this are overkill, my current Skyrim install would give this card a run for its money even at 1080p60.

February 19, 2013 | 12:32 PM - Posted by Anonymous (not verified)

Can I mount a 4-way Titan SLI?

February 19, 2013 | 03:44 PM - Posted by Josh Walrath

I don't know, that sounds uncomfortable.

February 19, 2013 | 02:53 PM - Posted by ezjohny

AMD, here is your competition. What do you have in mind?
"NEXT GENERATION" gameplay, AMD!
This graphics card is going to pack a punch, people!!!

February 19, 2013 | 02:59 PM - Posted by Evan (not verified)

What is this? Lucky i got a ps3 last week! it looks better and pre ordered crysis 3 and it has blu ray so its better graphics!

February 19, 2013 | 04:32 PM - Posted by Scott Michaud

Lol... the sad part is there are some people who actually believe that.

February 19, 2013 | 05:20 PM - Posted by Anonymous (not verified)

The sad part is people who go from SD to HD and see no difference.

February 19, 2013 | 03:18 PM - Posted by papajohn

i will take out a loan to buy this card! :)
