NVIDIA Launches Flagship Quadro K6000 Graphics Card For Visual Computing Professionals

Subject: Graphics Cards | July 23, 2013 - 09:00 AM
Tagged: workstation, simulation, quadro k6000, quadro, nvidia, k6000, gk110

Today, NVIDIA announced its flagship Quadro graphics card, the K6000. Back in March of this year, NVIDIA launched a new line of Quadro graphics cards for workstations. Those cards replaced their Fermi-based predecessors with new models based on NVIDIA's GK104 "Kepler" GPUs. Notably missing from that lineup was the Quadro K6000, the successor to the Quadro 6000.


Contrary to previous rumors, the Quadro K6000 will be based on the full GK110 chip. In fact, it will be the fastest single-GPU graphics card that NVIDIA has to offer.

The Quadro K6000 features a full GK110 GPU, 12GB of GDDR5 memory on a 384-bit bus, and a 225W TDP. The full GK110-based GPU has 2,880 CUDA cores, 256 TMUs, and 48 ROPs. Unfortunately, NVIDIA has not yet revealed clockspeeds for the GPU or memory.


Thanks to the GPU having no SMX units disabled, the NVIDIA Quadro K6000 is rated for approximately 1.4 TFLOPS of peak double precision floating point performance and 5.2 TFLOPS of peak single precision floating point performance.
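NVIDIA has not published clock speeds, but the standard peak-FLOPS formula (CUDA cores × 2 FMA operations per clock × clock speed) lets us back out the core clock implied by the rated figures. The sketch below is an estimate based on that formula, not an official specification:

```python
def implied_core_clock_mhz(peak_tflops: float, cuda_cores: int) -> float:
    """Back out the core clock implied by a peak single precision rating.

    Peak SP FLOPS = cores * 2 (one fused multiply-add per clock) * clock.
    """
    return peak_tflops * 1e12 / (cuda_cores * 2) / 1e6

# Quadro K6000: 5.2 TFLOPS SP across 2,880 cores implies a core clock
# of roughly 900 MHz
print(round(implied_core_clock_mhz(5.2, 2880)))  # → 903
```

Applying the same formula to the GTX TITAN's 4.5 TFLOPS over 2,688 cores gives roughly 837 MHz, in line with a Kepler-class boost clock, so the estimate is at least plausible.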

The chart below illustrates the differences between the new flagship Quadro K6000, with its full GK110 GPU, and the highest-tier Tesla and consumer graphics cards, which each have at least one SMX unit disabled.

NVIDIA GK110-Based Graphics Cards

                      Quadro K6000    Tesla K20X     GTX TITAN
CUDA cores            2,880           2,688          2,688
TMUs                  256             224            224
ROPs                  48              48             48
Memory                12GB            6GB            6GB
Memory Bus            384-bit         384-bit        384-bit
Memory Bandwidth      288 GB/s        250 GB/s       288 GB/s
Single Precision FP   5.2 TFLOPS      3.95 TFLOPS    4.5 TFLOPS
Double Precision FP   ~1.4 TFLOPS     1.31 TFLOPS    1.31 TFLOPS

The NVIDIA GTX TITAN gaming graphics card has 2,688 CUDA cores, 224 TMUs, and 48 ROPs, and is rated for peak double and single precision performance of 1.31 TFLOPS and 4.5 TFLOPS respectively. The Tesla K20X compute accelerator shares that 2,688-core, 224-TMU, 48-ROP configuration but runs lower GPU and memory clocks. Because of those lower clock speeds, the K20X is rated for double and single precision floating point performance of 1.31 TFLOPS and 3.95 TFLOPS, and for memory bandwidth of 250 GB/s versus the 288 GB/s of the TITAN and K6000.
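Those bandwidth figures follow directly from the bus width and the effective per-pin GDDR5 data rate (bus width in bytes × transfer rate). As a rough sanity check, assuming the usual formula:

```python
def bandwidth_gb_s(bus_width_bits: int, effective_gbps: float) -> float:
    """Memory bandwidth = bus width in bytes * effective per-pin data rate."""
    return bus_width_bits / 8 * effective_gbps

# A 384-bit bus at 6 Gbps effective (1.5 GHz quad-pumped GDDR5) -> 288 GB/s,
# matching the K6000 and TITAN ratings
print(bandwidth_gb_s(384, 6.0))   # → 288.0

# Working backward, the K20X's 250 GB/s implies roughly 5.2 Gbps effective
print(round(250 / (384 / 8), 1))  # → 5.2
```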


NVIDIA® Quadro® K6000 GPU

In all, the new K6000 is an impressive card for professional users, and the GK110 chip should perform well in workstation environments where GK104 was previously the only option. NVIDIA claims the K6000 delivers up to three times the performance of its Quadro 6000 (non-K) predecessor. It is also the first Quadro card with 12GB of GDDR5 memory, which should lend itself well to high resolutions and to artists working with highly detailed models and simulations.


Specifically, NVIDIA is aiming this graphics card at the visual computing market, which includes 3D design, visual effects, 3D animation, and simulation work. The company provided several examples in the press release, including using the GK110-based card to render nearly complete photorealistic vehicle models in RTT Deltagen that can be manipulated in real time during design reviews.


The Quadro K6000 allows 3D animators and VFX artists working with models and movie scenes in real time to use larger, fully populated virtual sets with realistic lighting and scene detail. Simulation work also takes advantage of the card's beefy double precision horsepower: NVIDIA claims up to three times faster run times in Terraspark's InsightEarth software, which oil companies use to determine the best places to drill. Users can run simulations covering wider areas in less time than with previous generation Quadro cards.


Pixar's Vice President of Software R&D Guido Quaroni had the following to say regarding the K6000:

"The Kepler features are key to our next generation of real-time lighting and geometry handling. The added memory and other features allow our artists to see much more of the final scene in a real-time, interactive form, which allows many more artistic iterations."

The K6000 is the final piece of the traditional NVIDIA Quadro lineup and is likely to be well received by workstation users who need the increased double precision performance that GK110 offers over the existing GK104 chips. Specific pricing and availability are still unknown, but the K6000 will be available from workstation providers, system integrators, and authorized distribution partners beginning this fall.

Source: NVIDIA
July 23, 2013 | 02:42 PM - Posted by YTech

That's pretty neat!

- Will there be any improvements for gamers?
- What about stereoscopic systems? Are all ports able to sync images correctly together (no ghosting, etc.)? Specifically for a standard passive setup of two projection systems for two overlaid images (2 DisplayPorts); more would be neat for great immersion.
- Which CPU/APU shows more improvement when combined with this video card?

I guess the cost will be double the TITAN, sadly :(

July 23, 2013 | 03:51 PM - Posted by duttyfoot (not verified)

The quadro cards always cost more than the geforce cards and these cards can play games but that isn't the target market. I used to game on my quadro 750xgl back in the day :)

July 23, 2013 | 04:04 PM - Posted by Scott Michaud

Double is probably... low. The Tesla K20X, which has less performance than the Quadro K6000, is almost four times as much.

These are professional SKUs.

July 23, 2013 | 06:26 PM - Posted by Tim Verry

Yup, I think last time I estimated the cost around $3,000 back when it was just a rumor and one that didn't indicate it was a full GK110 die w/ everything enabled so who knows it could cost more than that! It's aimed at business customers with expense accounts :).

July 23, 2013 | 08:46 PM - Posted by Anonymous (not verified)

On amazon the k5000 is listed for 2500 but being sold for 1800 so you're right on the $3000 price tag.

July 24, 2013 | 12:14 PM - Posted by YTech

I found this info to help support/answer one of my questions:
http://www.nvidia.ca/object/3d-vision-pro-requirements.html

Note that the minimum requirements are quite low and were probably written for WinXP-era systems.

And since it doesn't have the new Quadro card on the list, I believe this info is outdated and needs to be revised.

Ryan S. > This info can also help you understand certain results from some of your work you're working on.

Cheers!

August 2, 2013 | 11:57 AM - Posted by Anonymous (not verified)

Double? lol buddy this will be around $4,000, also this is not a gaming card.

July 24, 2013 | 02:13 AM - Posted by Crow (not verified)

Nothing special, AMD already has a faster solution. Yawn

Nice move by nVidia to cripple their desktop gaming GPU's; a cheapo Radeon 7970 beats GTX Titan by a margin of 4-5 times.

July 24, 2013 | 03:29 AM - Posted by Tim Verry

Yes, GCN (Graphics Core Next) is a decent GPGPU architecture. Yes, the AMD FirePro S10000 is the newest model and is faster, but it is also a dual-GPU card*.

This new Quadro K6000 should actually be compared to the AMD FirePro S9000 which is the highest-end single GPU workstation card. The S9000 is a 225W part (Quadro K6000 is also 225W) with rated single and double precision floating point of 3.23 TFLOPS and 0.806 TFLOPS respectively. Big Kepler wins this round, at least on paper and just going by peak single/double precision floating point.

 

*The dual GPU FirePro S10000 is rated for 5.91 TFLOPS single precision and 1.48 TFLOPS double precision floating point.

OTOH, you could compare them based on pricing, and in that case it seems like you could fairly compare the NV K6000 and AMD S10000 and they both will (likely) have $3000+ MSRPs.

July 24, 2013 | 04:53 AM - Posted by Crow (not verified)

It is kinda ironic how nVidia has the majority of the market share even though its actually inferior in most cases.

nVidia's desktop gaming GPU's were always hotter, more power-hungry, and prone to overheating and death until nVidia decided to cripple its compute units to be competitive with AMD's in terms of heat and power consumption, and still they are hotter. Do you know why AMD has an advantage over nVidia?

AMD(formerly ATI) developed "High Density" for their GPU's, it allowed them to compress up to 30% thus allowing a lot more transistors on the same nm process and die size.

If AMD were to cripple their compute units, they would be miles ahead in terms of heat and power consumption. nVidia is only winning by brute force; if AMD were to make a GPU chip of the same die size, they would brutally trash the GTX Titan.

So AMD's GPU's are basically like 22nm in a way.

AMD's Excavator will be made on 20nm process with "High Density" implemented so it should rival Skylake.

July 24, 2013 | 05:14 PM - Posted by BlackDove (not verified)

It's interesting that you say that, considering AMD's GPU's and CPU's tend to use more power than Nvidia or Intel's.

They also tend to have higher theoretical peak floating point performance, and more cores in the case of CPU's, yet their benchmark results show that they can't make use of their theoretical performance, in reality(8 core AMD chips tend to perform worse than Intel's 4 core, while consuming more power).

You really didn't say anything factual, and the benchmarks beg to differ with you.

AMD doesn't really have an advantage over Nvidia. They have the console contracts and better integrated GPU's, and that's about it.

July 25, 2013 | 09:54 AM - Posted by Anonymous (not verified)

Sorry to burst your bubble, but the AMD you knew 10 years ago is dead. The engineering team that designed the Athlon, hell, even their management team, is gone almost to the man.

These days AMD is like Gil from the Simpsons
http://www.youtube.com/watch?v=Qq0c2o6GhfU

Unfortunately all you're going to see out of AMD going forward is more of the same: cheaply designed hardware with very lousy software and driver support, sold at a price that appeals to people whose budget doesn't leave a choice.

Seriously, AMD's motto should be "Beggars can't be choosers" because "The Future is Fusion" sure as hell didn't get them anywhere.

August 2, 2013 | 11:58 AM - Posted by Anonymous (not verified)

7970 beats a titan by 4-5x huh? Thanks for demonstrating your lack of knowledge when it comes to GPUs

July 27, 2013 | 12:33 PM - Posted by Trey Long (not verified)

Nvidia crushes AMD in real world pro performance and that's why they have 80% of the pro graphics market and have for years.

July 28, 2013 | 10:28 AM - Posted by Anonymous (not verified)

From what I've heard, AMD's architecture is poorly designed. Take an Intel CPU at stock frequencies and you'll get consistent benchmark scaling as the CPU clocks upward. Take an AMD CPU at the same clocks and you'll notice that as clocks increase, benchmark performance drops off. So with two CPUs at the same OC frequencies, AMD's architecture shows it's not as efficient as Intel's as clocks increase; the hardware can perform well, but the architecture is sloppy and falls off the charts in real-world applications.

January 27, 2014 | 01:49 PM - Posted by kameothegreenfox (not verified)

A magazine did a test recently comparing the seven top graphics cards for gaming - four NVIDIA and three AMD. When it came to the top two cards, the magazine recommended the AMD over the NVIDIA. Why? The two cards had little to no difference between them; so little, the only way for the NVIDIA to beat the AMD was to overclock it.
Also, only one of the NVIDIA cards was cheaper than AMD's highest-costing card.
I don't know much about graphics cards on the whole, but NVIDIA is not a persuasive choice when I compare them with others.
